ADVISOR DETAILS

RECENT BLOG POSTS

Finding your new Intel SSD for PCIe (think NVMe, not SCSI)

Sometimes we see customers on Linux wondering where their new NVMe-capable SSD is on the filesystem. It’s not in the standard place, ‘/dev/sd*’, like all those SCSI devices of the past 20+ years. So where is it? For all of you new to the latest shipping Intel SSDs for PCIe: they run on the NVMe storage controller protocol, not the SCSI protocol. That’s actually a big deal, because it means efficiency and a protocol appropriate for “non-volatile memories” (NVM). Our newest P3700 and related drives use the same industry-standard, open source NVMe kernel driver. This driver drives I/O to the device and is part of the block driver subsystem of the Linux kernel.


So maybe it is time to refresh on some less familiar, not-often-used Linux administrative commands to see a bit more. The simple part is to look in “/dev/nvme*”. The devices are numbered, and the actual block device has an “n1” on the end, to support NVMe namespaces. So if you have one PCIe card or front-loading 2.5″ drive, you’ll have /dev/nvme0n1 as a block device to format, partition, and use, as the sketch below shows.
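Here is a minimal, hedged sketch of putting that block device to work. The device name /dev/nvme0n1, the GPT label, and the ext4 filesystem are assumptions to adjust for your system, and note that these commands destroy any existing data on the drive:

# Partition the NVMe block device (destroys existing data!)
parted -s /dev/nvme0n1 mklabel gpt
parted -s /dev/nvme0n1 mkpart primary 0% 100%

# Format the resulting partition (note the p1 suffix) and mount it
mkfs.ext4 /dev/nvme0n1p1
mkdir -p /mnt/nvme
mount /dev/nvme0n1p1 /mnt/nvme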


These important Data Center Linux distributions:

Red Hat 6.5/7.0

SUSE 11 SP2

Ubuntu 14.04 LTS


…all have in-box NVMe storage drivers, so you should be set if you are at these levels or newer.


Below are some basic Linux instructions and snapshots to give you a bit more depth. The output below is from a Red Hat/CentOS 6.5 distribution.


#1

Are the drives in my system? Scan the PCI and block devices:

[root@fm21vorc10 ~]$ lspci | grep 0953

04:00.0 Non-Volatile memory controller: Intel Corporation Device 0953 (rev 01)

05:00.0 Non-Volatile memory controller: Intel Corporation Device 0953 (rev 01)

48:00.0 Non-Volatile memory controller: Intel Corporation Device 0953 (rev 01)

49:00.0 Non-Volatile memory controller: Intel Corporation Device 0953 (rev 01)

 

[root@fm21vorc07 ~]# lsblk

NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

sda           8:0    0   372G  0 disk
├─sda1        8:1    0    10G  0 part /boot
├─sda2        8:2    0   128G  0 part [SWAP]
└─sda3        8:3    0   234G  0 part /
nvme0n1     259:0    0 372.6G  0 disk
└─nvme0n1p1 259:1    0 372.6G  0 part

#2

Is the nvme driver built into my kernel:

[root@fm21vorc10 ~]$ modinfo nvme

filename: /lib/modules/3.15.0-rc4/kernel/drivers/block/nvme.ko

version:        0.9

license:        GPL

author:        Matthew Wilcox <willy@linux.intel.com>

srcversion:    4563536D4432693E6630AE3

alias: pci:v*d*sv*sd*bc01sc08i02*

depends:

intree:        Y

vermagic:      3.15.0-rc4 SMP mod_unload modversions

parm: io_timeout:timeout in seconds for I/O (byte)

parm: nvme_major:int

parm: use_threaded_interrupts:int

 

#3

Is my driver actually loaded into the kernel:

[root@fm21vorc10 ~]$ lsmod | grep nvm

nvme 54197  0

 

#4

Are my nvme block devices present:

[root@fm21vorc10 ~]$ ll /dev/nvme*n1

brw-rw---- 1 root disk 259, 0 Oct  8 21:05 /dev/nvme0n1
brw-rw---- 1 root disk 259, 1 Sep 25 17:08 /dev/nvme1n1
brw-rw---- 1 root disk 259, 2 Sep 25 17:08 /dev/nvme2n1
brw-rw---- 1 root disk 259, 3 Sep 25 17:08 /dev/nvme3n1

 

#5

Run a quick test to see if you have a GB/s class SSD to have fun with.

[root@fm21vorc07 ~]# hdparm -tT --direct /dev/nvme0n1

 

/dev/nvme0n1:

Timing O_DIRECT cached reads:  3736 MB in  2.00 seconds = 1869.12 MB/sec

Timing O_DIRECT disk reads: 5542 MB in  3.00 seconds = 1847.30 MB/sec


Remember to consolidate and create parallelism as much as possible in your workloads. These drives will amaze you; the sketch below is one quick way to see the effect of parallelism.
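hdparm issues a single read stream; to see what parallel I/O does, a quick fio run is one option. This is a hedged sketch rather than a tuned benchmark — the device name, block size, queue depth, and job count are assumptions to adjust for your setup (it performs reads only, so it is non-destructive):

# 4 jobs x queue depth 32 of 4K random reads against the raw device
fio --name=parallel-read --filename=/dev/nvme0n1 --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
    --runtime=30 --time_based --group_reporting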


Have fun!


Read more >

How IT killed the auto insurance market

Automobiles are becoming smart. And the more IT is built into vehicles, the more car insurance companies will need to worry.


Recently, reports and studies of “driverless vehicles” have sparked public interest while encouraging the development and integration of smart technology in vehicles.  Today, we have cars and trucks that are not only able to drive themselves, but they can now talk to one another.

Hello Megatron!

While this technology becomes more and more prevalent in the public market, self-driving cars will multiply, and many of the everyday driving risks will disappear as a result. Let’s imagine for a second… Speed limits will no longer be broken. Traffic jams will no longer occur. Road rage will not exist. Drowsy drivers can take naps as their vehicles carry them safely to their destinations. Having lunch in the car, once limited to a cheeseburger in one hand and a soda between the legs, can now consist of a good bowl of soup with the use of a spoon – clearly a two-handed operation.

Want to use your cell phone for dialing or texting? Go ahead. Applying makeup? No problem. Teenage drivers? A-OK.

 

With an actual driver no longer being required, age restrictions for licenses will not be necessary.  In fact, licenses themselves will no longer be necessary.  In essence, the car becomes a device much like a smartphone or tablet. 

The best news of all: no more auto insurance needed.  With the elimination of human error, bodily injuries and accidents – what will we need to be covered for? Simply put, auto insurance companies will no longer be in business.

What does that mean for us?  No more commercials featuring Geckos, Flo or Cavemen.


Well, that’s just even better news. One can only dream, right?

 

Doc

 

Read more >

Bringing Electronic Checklists to Healthcare

Doctors and surgeons are some of the brightest individuals in the world. However, no one is immune to mistakes and simple oversights. Unintentional errors occur in any industry; what makes healthcare different is that a single misstep could cost a life. 

 

In The Checklist Manifesto, Dr. Atul Gawande cites a fellow surgeon’s story of a seemingly routine stab wound. The patient was at a costume party when he got into an altercation that led to the stabbing. As the team prepared to treat the wound, the patient’s vitals began dropping rapidly. The surgeon and his team were unaware that the weapon was a bayonet that had gone more than a foot through the man, piercing his aorta.

 

After the team regained control of the situation, the man recovered within a few days. The case presented complications that no one could have predicted without full knowledge of the situation. Gawande states, “everyone involved got almost every step right […] except no one remembered to ask the patient or the medical technicians what the weapon was” (Gawande 3). There are many independent variables to account for; a standard checklist for incoming stab-wound patients could ensure that episodes like this are avoided and that other red flags are caught.

 

Miscommunication between clinicians and patients annually accounts for roughly 800,000 deaths in the US, more than heart disease and more than cancer.  The healthcare industry spends roughly $8 billion on extended care as a result of clinical error every year. As accountable care continues to make progress, the healthcare industry is moving more towards evidence based medicine and best practices. This is certainly the case for care providers, but also for patients as well. 

 

Implementing checklists in all aspects of healthcare can eliminate simple mistakes and common oversights by medical professionals and empower patients to become more educated and informed. Studies in the Journal of the American Medical Association (JAMA) as well as the New England Journal of Medicine (NEJM) have concluded that implementing checklists in various facets of care can reduce errors by up to half. Certain implementations of checklists in intensive care units for infection mitigation eliminated infections entirely.

 

Compelling evidence of the need for checklisting can be found in the preparation process for a colonoscopy. Colonoscopy preparation is a rigorous process that requires patients to watch their diet and the clock for two days before the procedure. It is not uncommon for a colonoscopy to fail due to inadequate patient preparation. Before the procedure, the patient must follow an arsenal of instructions regarding food, liquid, and medication. A detailed checklist that guides each patient through the process would practically eliminate errors and failures due to inadequate patient preparation.

 

From the patient’s perspective, checklisting everything from pre-surgery preparation to a routine checkup should be a priority.   At the end of the day, the patient has the most at stake and should be entitled to a clear, user-friendly system to understand every last detail of any procedure or treatment.

 

A couple of companies are making waves in the area of patient safety checklists, most notably of which are BluMenlo and Parallax.

 

BluMenlo is a mobile patient safety firm founded in 2012. Its desktop, tablet, and mobile solution drives utilization of checklists for patient handoffs, infection mitigation, and Radiation Oncology Machine QA. Although initial focus is in the areas mentioned, BluMenlo is expanding into standardizing best practices hospital and ACO-wide.

 

Parallax specializes in operating room patient safety. Its CHaRM offering incorporates a Heads Up Display to leverage checklists in the Operating Room. The software learns a surgeon’s habits and techniques to accurately predict how long an operation may take as well as predict possible errors.

 

Electronic checklists will certainly take hold as health systems, ACOs and accountable care networks continue to focus on increased patient safety, improved provider communications and best practices for reducing costs across their organizations. We will even see these best practices expedited if we begin to inquire with our care providers as informed and engaged patients.

 

What questions about checklists do you have?

 

As a healthcare executive and strategist, Justin Barnes is an industry and technology advisor who also serves as an Entrepreneur-in-Residence at Georgia Tech’s Advanced Technology Development Center. In addition, Mr. Barnes is Chairman Emeritus of the HIMSS EHR Association as well as Co-Chairman of the Accountable Care Community of Practice.

Read more >

What Is Business Intelligence?

Early in my career, I was encouraged to always ask questions, even the most obvious and simple ones. This included questions about well-known topics that were assumed to be understood by everyone. With that in mind, let’s answer the question, “What is business intelligence (BI)?”

 

As you read this post, you probably fall into one of these three categories:

  1. You know exactly what BI is because you eat, sleep, and breathe it every day. BI is in your business DNA.
  2. The term means nothing more than the name of an exotic tech cocktail that might have pierced your ears, figuratively speaking of course.
  3. You’re somewhere in between the two extremes. You’ve been exposed to the term, but haven’t had a chance yet to fully digest it or appreciate it.

 

Do you have something to learn about BI? Let’s roll up our sleeves and get to work.

 

To begin with, BI looked very different when I started my career in the early ’90s. You couldn’t look it up on a mobile device smaller than a floppy disk. Moreover, you couldn’t Google it, Bing it, or Yahoo it. Today, the keywords “business” and “intelligence” together return more than 250 million results on Google, few of which will be relevant to you, and you won’t have time to go through them anyway. Nevertheless, the ease and speed at which you can query large volumes of recorded data to reach faster, better-informed conclusions puts the question at hand in perspective.

 

Scratching the surface

 

Beginners to BI should start their research with the definition. Wikipedia’s definition of BI is a good place to start, and from it you get the sense that BI includes tangibles such as hardware and software as well as intangibles such as people, culture, processes, and best practices. Continuing on the Wikipedia page, you can find out about the origins of the term. In 1958, Hans Peter Luhn, an IBM researcher, defined the term as “the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal.” By the ‘90s, the term had become more widespread. At CIO.com, BI is defined as “an umbrella term that refers to a variety of software applications used to analyze an organization’s raw data.”

 

Digging deeper

 

Next, you can dig a little deeper by performing what I call a rapid-research exercise to glance at the websites of BI companies that develop the technology. In this way, your searches can transition from text-based and definition-centric explanations to visually rich and appealing presentations, including graphs and charts. This is where BI dashboards take center stage. Not surprisingly, the emphasis on mobile that showcases tablets and smart phones becomes apparent by pictures of BI artifacts shown on mobile devices. Additional references pop up for Big Data and Cloud. Both are hot technology terms that have gained popularity in the last few years. As you research and connect the dots, you can start to build your own definition of BI. This will be influenced by your own unique background, your experiences with technology (with or without BI), and possibly, your personal perceptions layered with your biases of BI. However, in the end, your definition may still fall short.

 

Hitting the core

 

Ultimately, BI is about decision making. In its simplest and purest form, I define BI as the framework that enables organizations of all sizes to make faster, better-informed business decisions.


I don’t claim that this particular definition of BI is better or more comprehensive than others. But it does provide a direct and concise answer with less emphasis on technology and more focus on business, people, and decision making.

 

When it comes to defining BI or technology in general, we need to put the focus on business and people more often. In this context, business decisions should be complemented by technology that promotes actionable insight, and not the other way around. BI is not a miracle pill.

 

BI alone does not solve business problems or cure corporate infections. Instead, BI is the enabler that, if designed, implemented, and executed effectively, can help organizations drive growth and profitability.

 

What is your definition of BI?

 

Connect with me on Twitter (@KaanTurnali) and LinkedIn.

 

This story originally appeared on The Decision Factor.

Read more >

Next Generation of CIOs Drive a New Style of Business

“The worst place to be as a CIO is to convince yourself you have control, when in fact you don’t,” says Intel CIO, Kim Stevenson in this interview on ComputerWeekly.com.  Stevenson hates the term Shadow IT – she views this as the enterprise at large becoming more educated about technology just like we continue to do in our personal lives.  Stevenson’s outlook symbolizes a new style of interaction with business stakeholders that is vital for competitive enterprises of the future.  It is no longer just about how CIOs help their stakeholders achieve their business objectives – it is also about the manner in which they present solutions in business terms. CIOs of tomorrow must drive a New Style of Business today across the enterprise.  Let us see what we can learn from the next generation of CIOs like Intel’s Stevenson.

 


Twentieth Century Fox Executive VP and CIO John Herbert introduced the term Journey Management at HP Discover. By realizing business gains for his stakeholders through clearly defined metrics, Herbert is delivering enterprise IT at the pace of business. In Herbert’s words, enterprise IT at Fox is a “service broker” today instead of an order-taker. This enables the business functions that matter most to his stakeholders.

 

In this CIO.com interview, HP Enterprise Services CIO Steve Bandrowczak calls out a powerful but rarely mentioned quality for the new style of CIO: humility. The humble CIO emphasizes his people’s importance more than his own. It is the same mindset that drove leaders like Gandhi, Lincoln, and Mother Teresa to make a difference, and it is what makes big data matter in the global enterprise.

 

This mindset drives a spirit of co-opetition rather than competition with other stakeholders.  No wonder Stevenson suggests that CIOs who have worked in a control style of IT service must relinquish control in situations where IT cannot add any value.

 

Stevenson also shares an example of presenting an IT solution in business terms. Rather than letting business peers know that you have a team of data scientists who can work magic, she suggests: “How about if you say, ‘We can create a $10m return on investment in six months?’” This approach was applied to the Reseller SMART project. Her team used advanced analytics to provide insights about which customers were most likely to buy. The project delivered $20M in one year.

 

These are powerful messages from CIOs who integrate the business of IT every day. What is interesting is that they are still operating under the fundamental premise of Enterprise IT, enabling the business units to achieve their business objectives.  There is nothing intrinsically new about this premise. But, they are doing this with a different style of thinking and interaction that characterizes a New Style of Leadership to drive a New Style of Business.

 

How about you?  What other characteristics would you suggest to drive this new style of business?

 

Team up with HP Technology Expert, E.G.Nadhan

 

Connect with Nadhan on: Twitter, Facebook, Linkedin and Journey Blog




Read more >

Latin America Jumps into the Parallel Universe Computing Challenge

Mike Bernhardt is the Community Evangelist for Intel’s Technical Computing Group

 

At our inaugural Parallel Universe Computing Challenge (PUCC) at SC13, we had no representatives from Latin America. That’s changed for the 2014 PUCC with the proposed participation of a team representing supercomputing interests in Brazil, Colombia, Costa Rica, Mexico, and Venezuela.

Several of the team members are from the Universidad Industrial de Santander (UIS) in Bucaramanga, Colombia. UIS, a research university, is the home of the Super Computing and Scientific Computing lab that also provides HPC training for Latin American and Caribbean countries—which is why they were able to garner additional team members from universities in other countries.

The lab’s research is focused on such science and applied science areas as bioinformatics and computational chemistry, materials and corrosion, condensed matter physics, astronomy and astrophysics; and on computer science areas including visualization and cloud computing, modeling and simulation, scheduling and optimization, concurrency and parallelism, and energy-aware advanced computing.

 

We talked with team captain Gilberto Díaz, Infrastructure chief of the supercomputer center at UIS, about the team he was assembling.

Q: Why did the team from Latin America decide to participate in the PUCC?
A: We would like to promote and develop more widespread awareness and use of HPC in our region. In addition to the excitement of participating in the 2014 event, our participation will help us prepare students in master’s and PhD programs to better understand the importance of code modernization, as well as prepare them to compete in future competitions.

Q: How will your team prepare for the Intel PUCC?
A: All of us work in HPC and participate in scientific projects where we have the opportunity to develop our skills.

Q: What are the most prevalent high performance computing applications in which your team members are involved?
A: We are developers; therefore, we are more familiar with programming models (MPI, CUDA, OpenMP) than with specific applications.

Q: SC14 is using the theme “HPC Matters” for the conference. Can you explain why “HPC Matters” to you?
A: HPC is a fundamental tool for facing challenging problems whose solutions will represent significant advances for humanity: new drug development for disease treatment, high-tech components for cars and planes, weather simulations to understand how we are affecting the world’s climate, and so on.

Q: What is the significance of your team name (“SC3”)?
A: Super Computing and Scientific Computing in Spanish is Super Computación y Calculo Cientifico, which is the name of the lab at the Universidad Industrial de Santander.

Q: Who are your team members?
A: We have six people in addition to myself so far:

  • Robinson Rivas, Professor at Universidad Central de Venezuela (UCV) and director of the supercomputer center of UCV in Caracas
  • Carlos Barrios, Professor at Universidad Industrial de Santander (UIS) and director of the supercomputer center of UIS
  • Pedro Velho, Professor at Universidad Federal de Rio Grande del Sur in Porto Alegre, Brazil
  • Alvaro de la Ossa, Professor at Universidad de Costa Rica in San Jose, Costa Rica
  • Jesus Verduzco, Professor at Instituto Politécnico de Colima in Colima, Mexico
  • Monica Hernandez, System Engineer and student in Master program at UIS

 

Learn more about the PUCC at SC14.

 

(Left to Right) Pedro Velho, Carlos Barrios, Robinson Rivas, Gilberto Díaz


Jesus Verduzco

Read more >

Part 3 – Transforming the Workplace: Driving Innovation with Technology

This is part 3 of my blog series about transforming the workplace. Be sure to start with part 1 and part 2, and look for future posts in the series.


Imagine how your day might look in the workplace of the future. Your computer knows your face (it’s how you log in); it responds to your gestures; and it knows your voice. You connect, dock, and charge your personal computing device by simply sitting there, without the need for any wires. Even better, your computer becomes the assistant you never had. That 11 a.m. client meeting on your calendar? There’s an accident blocking the fastest route, so you’ll need to leave 20 minutes earlier. You didn’t know this, but your PC figured it out and told you by drawing contextual insights from your schedule. And this is just the tip of the iceberg.

 

Between this future-state vision and where we are today lies a transformational journey. And it’s never easy. In my last blog, I discussed how the nature and style of work is changing to support the need to innovate with velocity. To achieve true transformation, companies must overcome many barriers to change, from the cultural and environmental to the technological. Here I want to take a closer look at some of the technological leaps that will make the transformation possible, both in terms of where we are now and where we’re going.

 

Supporting natural, immersive collaboration

We all know that social, mobile, analytics, and cloud (SMAC) has changed things. Because today’s workforce is distributed across sites, cities, and even countries, collaboration can be a real challenge—a scenario exacerbated with the advent of agile practices working across company boundaries.

 

Take a typical brainstorming session, for example. Using a whiteboard to sketch out ideas is key, but it has limitations for workers attending by phone. Someone either has to explain what’s on the whiteboard, copy the work into meeting notes, or take a photo of the whiteboard and e-mail it. Not to mention that the picture, possibly of your company’s “next great idea,” uploads to your favorite public cloud provider. And while videoconferencing would seem a likely alternative here, video quality can be lackluster at best.

 

Intel is taking an innovative approach to solve these challenges. Advanced collaboration technologies will let workers connect in an intuitive, natural way—whether it’s a global team, a small group, or a simple one-on-one session. Unified communications with HD audio and video (complete with live background masking) is already changing videoconferencing with a more lifelike experience. And workers can interact in real time using a shared, multitouch interactive whiteboard that spans devices, from tablets to projection screens and everything in between. The whiteboard is visible and accessible to all attendees in real time. And that digital business assistant? One day it could even use natural language voice recognition to automatically transcribe meeting notes and track actions!

 


 

Boosting personal productivity

When it comes to productivity, the devil is in the details. And often those details translate into lost time, whether it’s a dead laptop battery or a password issue. Let’s say you forget your password and you can’t log in without IT assistance. It’s a drag on your time (and theirs), but it’s also interrupting workflow. Sharing work can also take longer than it should. We’ve all been there, in the conference room, stuck without the right adapter for the projector (“the thing that connects to the thing”). And if you can’t project, there’s not an easy way to share work.

 

Intel is making great strides to free workers from these burdens of computing by supporting existing workflows for maximum productivity.

  • A workplace without wires: built-in wireless display now allows workers to connect automatically
  • “You are your password”: biometric login (like the face recognition described above) frees workers from forgotten passwords and IT resets
  • And getting back to that assistant … it will know you. Instead of having to tell your device everything, the reverse will be true. We foresee a day when your PC will know where you are, what you like, and what you need (like leaving early for that meeting). By anticipating your needs with proactive, contextual recommendations and powerful voice recognition, it will be able to streamline your day. And built-in theft protection will automatically measure proximity and motion to assess risk levels if you’re on the go.

 

Implementing facilities innovation

While we are “getting by” in today’s workspaces, they typically don’t meet the needs of a distributed workforce and can pose problems even for those working on site. It’s often a challenge to find a free conference room or, if one is available, the room itself is hard to find. I touched on videoconferencing earlier, but this is a place where the technology makes or breaks the deal. From poor quality audio and video to the wrong adapter, it all hampers workflow.

 

Intel is working to enable an integrated facilities experience through location-based services and embedded building intelligence. Location-based service capabilities on today’s PCs can help you find the resources you need based on current location, from people to conference rooms and printers. And like your PC will one day “know you,” so will the room, meaning it will automatically prepare for your meeting—connecting participants via video and distributing meeting notes. Immersive, high-quality audio and video will guarantee a natural, easy experience. And future installments of touch, gesture, and natural voice control will become more context aware, taking collaboration and productivity to the next level.

 

Moving forward

This perspective on the role of technology in driving workplace transformation can be seen in action by watching the Intel video, “The Near Future of Work.” Additionally, I’m currently working on a paper that will expand on Intel’s vision of workplace transformation, and I’ll let you know when it’s available.

However, while technology is a huge piece of the puzzle, there is so much more to it. True workplace transformation requires the right partnerships and culture change to be effective. For the next blog in this series, I’ll be taking a look at how to approach a strategy for workplace transformation and share key learnings from Intel’s own internal workplace program.

Meanwhile, please join the conversation and share your thoughts. And be sure to click over to the Intel® IT Center to find resources on the latest IT topics.

 

Until the next time …


Jim Henrys, Principal Strategist

Read more of my blogs here.

Read more >

Patient Care 2020: More Technology on the Way

 

The year 2020 seems far off, but is closer than you think. With the increasing use of technology in healthcare, and with patient empowerment growing each year with the advent of mobile devices, what will a clinician’s workday look like five years from now?


In the above video, we turn toward the future to show you how enabling technologies that exist today will transform the way clinicians treat their patients in 2020. Learn how wearable devices, sensors, rich digital collaboration, social media, and personalized medicine through genomics will be part of a clinician’s daily workflow as we enter the next decade.

 

Watch the short video and let us know what questions you have about the future of healthcare technology and where you think it’s headed.

Read more >

Keys To Building Your Own SaaS Security Playbook

As enterprise applications and data continue to move toward software as a service (SaaS), the need to evolve security controls and strategies has become increasingly apparent. New approaches are now required for accessing and storing data and applications. An evolving enterprise IT landscape calls for an evolving security strategy to keep pace with it.


In a recent podcast, information security analyst Jim Brennan detailed how Intel’s development of a “SaaS Security Playbook” has given risk managers a foundation for running the same “plays.” By creating a guide for security stakeholders, your organization can ensure consistency in security strategy and responses.

 

The Right Security Framework

 

By adopting the Open Data Center Alliance (ODCA) security framework and security assurance levels of bronze, silver, gold, and platinum, businesses can identify and focus their limited security resources on the most sensitive parts of the business. The ODCA security framework also offers recommendations on the type of security assurances your business should require from providers at each tier. Additionally, it details requirements for access control, encryption, data masking, and more.

 

Know Thyself: Application Inventory & Insight

 

According to Brennan, one of the first steps toward creating a SaaS security playbook is to take stock of which services have been migrated to the cloud, and which are still hosted in-house. During this inventory process, your team should create documentation for all SaaS providers, tenants, and enterprise controls. By conducting a thorough inventory of existing services and their security controls, your team can take a holistic and informed approach to implementing appropriate security measures for the kinds of data and applications that are being hosted in the cloud.

 

Choosing The Right Partners

 

A huge part of a successful security strategy is to keep outside providers accountable. Since the ecosystem is still evolving, many SaaS products are still maturing. It’s important to carefully vet and scrutinize new providers before aligning with them. Security is an ongoing process — your security team should continually audit all SaaS providers and reassess risks associated with them.

 

Brennan anticipates a lot of consolidation in the SaaS space over the next five to 10 years, which is why he recommends signing short-term contracts with your providers. If your roadmaps no longer align, your IT organization should be able to quickly move from one provider to another.

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

Read more >

The Prickly Love Affair Between Users and Software

September has proven to be a big month for Apple. Blockbuster announcements were made to introduce the iPhone 6, the iPhone 6 Plus, Apple Pay, and the Apple Watch.  Along with these major events came the debut of the iOS 8.0.1 update.



Then came the failure of iOS 8.0.1.

 

The software update was plagued by furious customer complaints within minutes of its debut. Less than an hour after launch, Apple retracted the update with promises of mending the bugs that were causing slower download speeds, dropped calls, keyboard malfunctions, and overall sluggish performance. Thereafter, Apple had to coach its grumpy users through restoring their devices to the previous iOS.

 

The iOS 8 misstep raises the question: Are we ready to be governed by software that guides our daily lives?

 

Software is proliferating through homes, enterprises, and virtually everything in between. It’s becoming part of our routine wherever we go, and when it works, it has the capacity to greatly enhance our quality of life. When it doesn’t work, things go awry almost immediately. For the enterprise, the ramifications of faulty software can resemble Apple’s recent debacle. Consumerization is not to be taken lightly; it’s changing how we exist as a species. It’s changing what we require to function.

 

Raj Rao, VP and global head of software quality practice for NTT Data, recently wrote an article for Wired in which he states, “Today many of us don’t really know how many software components are in our devices, what their names are, what their versions are, or who makes them and what their investment and commitment to quality is. We don’t know how often software changes in our devices, or what the change means.”

 

The general lack of knowledge on what software is used within a particular device — specifically how and why — inevitably leads to ineptitude for troubleshooting problems when they arise. While a constant evolution in software is necessary for innovation, one can expect continual troubleshooting for the new technology.

 

For enterprise software users, Rao had three tips for keeping everybody satisfied. First, users should be encouraged to stick with programs they regularly use and understand. Second, large OS ecosystems should adhere to very strict control standards in order to ensure quality. And third, global software development practices need to become a priority if we want to guarantee a quality user experience.

 

The bond between humans and software is constantly intensifying. Now is the time to ensure the high quality of your own software systems. Do you have an iOS 8.0.1 situation waiting to happen?

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

Read more >

The Data Stack – September 2014 Intel® Chip Chat Podcast Round-up

September is always a busy month at Intel, and this year was no exception. Intel® Chip Chat hit the road with live episodes from the Intel Xeon processor E5 v3 launch. A plethora of partners and Intel reps discussed their products/platforms and what problems they’re using the Xeon processor to tackle. We were also live from the showcase of the Intel Developer Forum and will be archiving those episodes in the next few months, starting with an episode on software-defined storage. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!

 

  • Data Center Telemetry – Intel® Chip Chat episode 331: Iddo Kadim, a marketing director in the Data Center Group at Intel, stops by to talk about data center telemetry – information you can read from the infrastructure (like thermal data and security states) to help manage workloads more efficiently. In the future, the orchestration layer will work with telemetry data to manage workloads automatically for a more flexible and efficient data center. For more information, visit www.intel.com/txt and www.intel.com/inteldcm.
  • The Intel® IoT Analytics Kit for Intelligent Data Analysis and Response – Intel® Chip Chat ep 332: Vin Sharma (@ciphr), the Director of Planning and Marketing for Hadoop at Intel chats about collecting and extracting value from data. The Intel® Galileo Development Kit’s hardware and software components allow users to build an end-to-end solution while the Intel® Internet of Things Analytics Kit provides a cloud-based data processing platform. For more information, visit www.intel.com/galileo.
  • The Intel® Xeon® Processor E5-2600 v3 Launch – Intel® Chip Chat episode 333: Dylan Larson, the Director of Server Platform Marketing at Intel, kicks off our podcasts from the launch of the Intel® Xeon® processor E5 v3. This new generation of processors is the heart of the software-defined data center and offers versatile and energy-efficient performance while providing a foundation for security. Also launching are complementary storage and networking elements for a complete integration of capabilities. For more information, visit www.intel.com/xeon.
  • Optimizing for HPC with SGI’s ICE X Platform: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 334: Bill Mannel, the General Manager with the Compute and Storage Product Division at SGI, stops by to talk about SGI’s ICE* X platform featuring the recently-launched Intel® Xeon® processor E5-2600 v3. The ICE X blade is specifically optimized to provide higher levels of performance, scalability, and flexibility for HPC customers. For more information, visit www.sgi.com/products/servers.
  • Increased App Performance with Dell PowerEdge: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 335: Brian Payne, Executive Director of PowerEdge Product Management at Dell, chats about the Dell PowerEdge* 13G server line featuring the recently-launched Intel® Xeon® processor E5 v3. Flash server integration into the PowerEdge 13G is delivering immense increases in application and database performance to help customers meet workload requirements and adapt to new scale-out infrastructure models. For more information, visit www.dell.com.
  • Next-Gen Ethernet Controllers for SDI: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 336: Brian Johnson, Solutions Architect for Ethernet Products at Intel, discusses the release of the Intel® Ethernet Controller XL710. With the ability to achieve 40 Gbps speeds, the XL710 is architected for the next generation of SDI and virtualized cloud environments, as well as network functions virtualization in the telco industry. For more information, visit www.intel.com/go/ethernet.
  • The Reliable and High Performing Oracle Sun Server: Intel Xeon E5 v3 Launch – Chip Chat ep 337: Subban Raghunathan, the Director of Product Management of x86 Servers at Oracle, stops by to discuss the Intel® Xeon® processor E5 v3 launch and how Oracle’s optimized hardware and software in the Sun* Server product line has enabled massive performance gains. Deeper integration of flash technology drives increased reliability, performance, and solutions scalability and in-memory database technology delivers real-time caching of application data, which is a game changer for the enterprise. For more information, visit http://www.oracle.com/us/products/servers/overview/index.html.
  • Supermicro Platforms for Increased Perf/Watt: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 338: Charles Liang, Founder, President, CEO, and Chairman of the Board, and Don Clegg, VP of Marketing and Business for Supermicro, discuss how the company has launched more than 50 platform designs optimized for the Intel® Xeon® processor E5 v3. Supermicro provides solutions for data center, cloud computing, enterprise IT, Hadoop/big data, HPC and embedded systems worldwide and focuses on delivering increased performance per watt, performance per square foot, and performance per dollar. For more information, visit www.supermicro.com.
  • The New Flexible Lenovo ThinkServer Portfolio: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 339: Justin Bandholz, a Portfolio Manager at Lenovo, stops by to announce the launch of a portfolio of products based on the Intel® Xeon® processor E5-2600 v3, including a premier 2-socket 1 and 2U rack servers, the ThinkServer* RD550 and ThinkServer RD650, as well as a 2-socket ThinkServer TD350 tower server. New fabric and storage technologies in the product portfolio are providing breakthroughs in flexibility for configuration of systems to suit customer workload needs. For more information, visit http://www.lenovo.com/servers.
  • Improving Network Security and Efficiency: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 340: Jeni Panhorst, Senior Product Line Manager at Intel, stops by to talk about the launch of the Intel® Communications Chipset 8900 series with Intel® QuickAssist Technology, which delivers cryptography and compression acceleration that benefits a number of applications. Use cases for the new chipset include securing back-end network ciphers to improve efficiency of equipment while delivering real-time cryptographic performance requirements, as well as network optimization – compressing data in the flow of traffic across a WAN. For more information, visit www.intel.com.
  • System Innovation with Colfax: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 341: Gautam Shah, the CEO of Colfax International, chats about how the Intel® Xeon® processor E5 v3 is a complete solution stack upgrade, including processor, networking and storage components, which allows customers to tackle problems they haven’t previously been able to solve cost-effectively (or at all). Colfax is delivering solutions with increased DDR4 memory, 12 Gb/s SAS, integrated SSDs, and networking solutions, which offer a great leap in system innovation. For more information, visit www.colfaxinternational.com or email sales@colfaxinternational.com with any questions.
  • Increased Data Center Security, Efficiency and Reliability with IBM – Intel® Chip Chat episode 342: Brian Connors, the VP of Global Product Development and Lab Services at IBM, stops by to talk about the launch of the company’s new M5 line of towers, racks and NeXtScale systems based on the Intel® Xeon® processor E5 v3. The systems have been designed for increased security (Trusted Platform Assurance and Enterprise Data Protection), efficiency and reliability and offer dramatic performance improvements over previous generations. For more information, visit www.ibm.com.
  • Innovations in VM Management with Hitachi: The Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 343: Roberto Basilio, the VP of Storage Product Management at Hitachi Data Systems, discusses the launch of the Intel® Xeon® processor E5 v3 and, in particular, how virtual machine control structure (VMCS) shadowing is innovating virtual machine management in the cloud. Shadowing improves the performance of Nested Virtualization and reduces latency and improves energy efficiency. For more information, visit http://www.hds.com/products/hitachi-unified-compute-platform/.
  • Re-architecting the Data Center with HP ProLiant Gen 9: Intel Xeon E5 v3 – Intel® Chip Chat ep 344: Peter Evans, a VP & Marketing Executive in HP’s Server Division, chats about the ProLiant* Generation 9 platform refresh, the foundation of which is the Intel® Xeon® processor E5 v3. The ProLiant Gen9 platform is driving advancements in performance, time to service, and optimization for addressing the explosion of data and devices in the new data center. For more information, visit www.hp.com/go/compute.
  • Software Defined Storage for Hyper-Convergence – Intel® Chip Chat episode 345: In this archive of a livecast from the Intel Developer Forum, Yoram Novick (Founder and CEO) and Carolyn Crandell (VP of Marketing) from Maxta discuss hyper-convergence and enabling SDI via the company’s software defined storage solutions. The recently announced MaxDeploy reference architecture, built on Intel® Server Boards, provides customers the ability to purchase a whole box (hardware and software) for a more simple and cost-effective solution than legacy infrastructure. For more information, visit www.maxta.com.
  • Modernizing Code for Dramatic Performance Improvements – Intel® Chip Chat episode 346: Mike Bernhardt, the Community Evangelist for HPC and Technical Computing at Intel, stops by to talk about the importance of code modernization as we move into multi- and many-core systems in the HPC field. Markets as diverse as oil and gas, financial services, and health and life sciences can see a dramatic performance improvement in their code through parallelization. Mike also discusses last year’s Parallel Universe Computing Challenge and its return at SC14 in November – $26,000 towards a charitable organization is on the line for the winning team. For more information about the PUCC, visit intel.ly/SC14 and for more on Intel and HPC, visit www.intel.com/hpc.

Read more >

How does business recover from a large-scale cyber security disaster?

Corporations need to get three things right in cyberspace: protect their valuable information, ensure that business operations continue during disturbances, and maintain their reputation as trustworthy. These goals support one another and enable successful use of the digital world. Yet due to its dynamic nature, there is no absolute security in cyberspace. What should you do when something goes wrong? The best way to survive a blast is to prepare for it in advance.

 

Cyber security requires transformed security thinking. Security should not be seen as an end state achieved through a one-time investment in technology, but as an ongoing process that must adapt to changes in the environment. Effective security production is agile and innovative. It aligns cyber security with the overall business process so that the former supports the latter. When maintaining cyber security is treated as one of the corporation’s core managerial functions, its importance is raised to the correct level. It is not only IT managers and officers who need to understand cyberspace and realize how it relates to their areas of responsibility.

 

The cyber security point of view can be integrated into the business process, for example, by constructing and executing a specific cyber strategy for the corporation. This should start with enablement and consider the opportunities the corporation wishes to take advantage of in the digital world. It should also recognize threats in cyberspace and designate how these are counteracted. The strategy process should be led by the highest managerial level yet be responsive to ideas and feedback from both the operational and technical levels of execution. Thus the entire organization will be committed to the strategy and feel ownership of it. Moreover, the strategy will be realistic, neither reaching for unachievable goals nor relying on processes that are technically impossible to build.

 

It is common practice for corporations to do business continuity planning. However, operations in the digital world are not always included in this, regardless of the acknowledged dependency on cyberspace that characterizes modern business. There seems to be a strong belief in bits: that they won’t let us down. The importance of a plan B is often neglected, and the ability to operate without functioning cyberspace is lost. What should be in the plan B, an essential building block of cyber strategy, are the guidelines for partners, managers, and employees in case of a security breach or a large cyber security incident. What to do; whom to inform; how to address the issue in public?

 

The plan B should include enhanced intrusion detection, adequate responses to security incidents, and a communication strategy. Whom do you inform, at what level of detail, and at which stage of the recovery process? Too little communication may give the impression that the corporation is trying to hide something or isn’t up to date with its responsibilities. Too much communication too early in the mitigation and restoration process may lead to panic or exaggerated loss estimates. In both cases the corporation’s reputation suffers. Openness and correct timing are the key words here.

 

A resilient corporation is able to continue its business operations even when the digital world does not function the way it is supposed to. Digital services may be scaled down without customer experience suffering from it too much. Effective detection of both breaches and associated losses and fast restoration of services do not only serve the corporation’s immediate business goals but also enable projecting good cyber security. Admitting that there are problems but simultaneously demonstrating that necessary security measures are being taken is essential throughout the recovery period. So is honest communication to stakeholders at the right level of details.

 

Without adequate strategy work and its execution, the trust felt toward the corporation and its digital operations is easily lost. Without trust it is difficult to find partners for cyber-dependent business operations, and customers turn away from the corporation’s digital offerings. Trust is the most valuable asset in cyberspace.

 

Planning in advance and building a resilient business entity safeguard corporations from digital disasters. If such a thing has already happened, it is important to speak up, demonstrate that lessons have been learned, and show what is being done differently from now on. The corporation must listen to those who have suffered and carry out its responsibilities. Only this way can market trust be restored.

 

- Jarno

 

Find Jarno on LinkedIn

Start a conversation with Jarno on Twitter

Read previous content from Jarno

Read more >

Breaking Down Battery Life

Many consumer devices have become almost exclusively portable. As we rely more and more on our tablets, laptops, 2-in-1s, and smartphones, we expect more and more out of our devices’ batteries. The good news is, we’re getting there. As our devices evolve, so do the batteries that power them. However, efficient batteries are only one component of a device’s battery life. Displays, processors, radios, and peripherals all play a key role in determining how long your phone or tablet will stay powered.


Processing Power

Surprisingly, the most powerful processors can also be the most power-friendly. By quickly completing computationally intensive jobs, full-power processors like the Intel Core™ i5 processor can return to a lower power state faster than many so-called “power-efficient” processors. While it may seem counterintuitive at first glance, laptops and mobile devices armed with these full-powered processors can have battery lives that exceed those of smaller devices. Additionally, chip makers like Intel work closely with operating system developers like Google and Microsoft in order to optimize processors to work seamlessly and efficiently.


Display

One of the biggest power draws on your device is your display. Bright LCD screens require quite a bit of power when fully lighted. As screens evolve to contain more and more pixels, battery manufacturers have tried to keep up. The growing demand for crisp high-definition displays makes it even more crucial for companies to find new avenues for power efficiency.

 

Radios

Almost all consumer electronic devices being produced today have the capacity to connect to an array of networks. LTE, Wi-Fi, NFC, GPS — all of these acronyms pertain to some form of radio in your mobile phone or tablet, and ultimately mean varying levels of battery drain. As the methods of wireless data transfer have evolved, the amount of power required for these data transfers has changed. For example, trying to download a large file using a device equipped with older wireless technology may actually drain your battery faster than downloading the same file using a faster wireless technology. Faster downloads mean your device can stay at rest more often, which equals longer battery life.

 

Storage

It’s becoming more and more common for new devices to come equipped with solid-state drives (SSD) rather than hard-disk drives (HDD). By the nature of the technology, HDDs can use up to 3x the power of SSDs, and have significantly slower data transfer rates.

 

These represent just a few things you should evaluate before purchasing your next laptop, tablet, 2-in-1, or smartphone. For more information on what goes into evaluating a device’s battery life, check out this white paper. To join the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter

Read more >

Episode Recap – Transform IT with Guest Ray Noonan, CEO, Cogent

How did you like what Ray Noonan, CEO of Cogent, had to say about collaboration and the need to focus on business value?

 

Did it challenge you?

 

It probably should have. If I can summarize what Ray shared with us, it would be that we need to:

 

Break down the walls that separate us, and always put business value above the needs of IT.


I’m quite sure that some of what he said sent shivers down the spines of IT people everywhere. But Ray wasn’t focused on “IT” – only on what IT can do to deliver value to the organization.

 

He believes that IT is too important to be segregated in a separate function, so he integrated it into the business units directly. He believes that we should all be technologists, and so we need to trust our people with technology decisions. He believes that the sense of “ownership” – to the degree that it inhibits sharing and collaboration – must be eliminated so that our teams can work together rapidly and fluidly. And he believes that the only thing that matters is the value generated for the business – so if an IT process or policy is somehow disrupting the delivery of value, it should be changed.

 

If you keep your “IT hat” on, these ideas can seem scary and downright heretical. But if you think like a CEO, they make a lot more sense.

 

And that was Ray’s big challenge to all of us.

 

To break down our “ownership walls”.

To focus, instead, on how we create value for the organization.

To understand and embrace that value.

And then to deliver and protect it.

 

The question for you is how you’re going to start doing that. How will you begin?

 

Share with us the first step that you’re going to take to begin breaking down your own “ownership walls” and to focus on value.  I believe that your ability to understand how value is created for your business and how you, personally, contribute to that value, is perhaps one of the most critical first steps in your own personal transformation to becoming a true digital leader.

 

So decide what you will do to begin this process and start now. There’s no time to wait!

 

If you missed Episode 2, you can watch it on-demand here: http://intel.ly/1rrfyg1

 

Also, make sure you tune in on October 14th when I’ll be talking to Patty Hatter, Sr. VP Operations & CIO at McAfee about “Life at the Intersection of IT and Business.” You can register for a calendar reminder here.


You can join the Transform IT conversation anytime using the Twitter hashtags #TransformIT and #ITChat.

Read more >

Upgrade to an NVMe-Capable Linux Kernel

Here in the Intel NVM and SSD group (NSG) we build and test Linux systems a lot, and we’ve been working to mature the NVMe driver stack on all kinds of operating systems. The Linux kernel is the innovation platform today, and it has come a long way with NVMe stability. We always had a high-level kernel build document, but never in a blog (bad Intel, we are changing those ways). We also wanted to refresh it a bit, as NVMe on Linux is now well into maturity. Kernel 3.10 is when integration really happened, and the important data center Linux OS vendors fully support the driver. In case you are on a 2.6 kernel and want to move up to a newer kernel, here are the steps to build a kernel for your testing platform and try out one of Intel’s Data Center SSDs for PCIe and NVMe. This assumes you want the latest and greatest for testing and are not interested in an older or vendor-supported kernel. By the way, on those “6.5 distributions” you won’t be able to get a supported 3.x kernel; that’s one reason I wrote this blog. But it will run and allow you to test with something newer. You may have your own reasons, I am sure. As far as production goes, you will probably want to work together with your OS vendor.

 

I run a 3.16.3 kernel on some of the popular 6.5 distros; you can too.

 

1.    NVM Express background

NVM Express (NVMe) is an optimized interface for PCI Express SSDs. The NVM Express specification defines an optimized register interface, command set, and feature set for PCI Express (PCIe)-based solid-state drives (SSDs). Please refer to www.nvmexpress.org for background on NVMe.

The NVM Express Linux driver development follows the typical open-source process used by kernel.org. The development mailing list is linux-nvme@lists.infradead.org.

The Linux NVMe driver landed in kernel 3.10 and is integrated into all kernels from 3.10 onward.

 

2.    Development tools required (possible pre-requisites)

In order to clone, compile, and build a new kernel/driver, the following packages are needed:

  1. ncurses
  2. build tools
  3. git (optional; you could use wget instead to download the Linux kernel source)

You must be root to install these packages.

Ubuntu based

apt-get install git-core build-essential libncurses5-dev  

RHEL based

yum install git-core ncurses ncurses-devel

yum groupinstall “Development Tools”

SLES based        

zypper install ncurses-devel git-core

zypper install --type pattern Basis-Devel
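Whichever distro you’re on, it’s worth confirming the toolchain actually landed before kicking off a long build. These version checks are just a sanity check, not part of the build:

gcc --version    # compiler, pulled in by build-essential / “Development Tools” / Basis-Devel

make --version   # the make utility that drives the kernel build

git --version    # only needed if you clone rather than wget the source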

 

3.    Build new Linux kernel with NVMe driver

Pick a starting distribution. From the driver’s perspective it doesn’t matter which distribution you use, since you are going to put a new kernel on top of it, so use whatever you are most comfortable with and/or whatever has the required tools.

Get the kernel and driver:

  1. Download a “snapshot” of the kernel from kernel.org, or clone it with git if you prefer (here’s an example using wget):

            wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.16.3.tar.xz

            tar -xvf linux-3.16.3.tar.xz
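Note that you’ll need to cd into the extracted tree before any of the make steps below. One optional shortcut at this point, and my suggestion rather than a required step: seed the new kernel’s configuration from the distro kernel you’re already running, then let the build pick defaults for any options that are new in 3.16. This assumes your distro installs its kernel configs under /boot, which the common ones do:

cd linux-3.16.3

cp /boot/config-$(uname -r) .config   # start from the running kernel's configuration

make olddefconfig                     # accept defaults for options new to this kernel

With that done, the menuconfig step below becomes mostly a verification pass.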

 

    2.      Build and install

Run menuconfig (which uses ncurses):

make menuconfig

Confirm the NVMe driver under Block devices is set to <M>:

Device Drivers -> Block devices -> NVM Express block device

This creates a .config file in the same directory.
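You can also double-check the resulting .config directly; in 3.x kernels the NVMe block driver is controlled by the CONFIG_BLK_DEV_NVME symbol:

grep BLK_DEV_NVME .config   # expect CONFIG_BLK_DEV_NVME=m (or =y if built in)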

Then, run these make commands as root (set the -j value to roughly half your core count to improve make time):

make -j10

make modules_install -j10

make install -j10

 

Depending on the distribution you use, you may have to run update-initramfs and update-grub, but this is typically unnecessary. Once the install is successful, reboot the system to load the new kernel and drivers. Usually the new kernel becomes the default boot entry, which is the top line of menu.lst. After booting, verify with “uname -a” that the running kernel is what you expect, and use “dmesg | grep -i error” to find and resolve any kernel loading issues.
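If you do turn out to need that initramfs/grub step, the Debian/Ubuntu flavor looks roughly like this, with the version string taken from this post’s 3.16.3 example (treat it as a sketch; RHEL-style distros use dracut and their own grub tooling instead):

update-initramfs -c -k 3.16.3   # create an initramfs for the newly installed kernel

update-grub                     # regenerate the boot menu so the new kernel is listed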

 

4.  NVMe Driver basic tests and tools

There are some basic open-source NVMe test programs you can use for checking NVMe devices:

http://git.infradead.org/users/kbusch/nvme-user.git

Git’ing the source code:

git clone git://git.infradead.org/users/kbusch/nvme-user.git

Building the test programs

Add or modify the Makefile with the proper library and header paths, then compile the programs:

make

 

For example, check the NVMe device controller “identify” and namespace “identify” data:

sudo ./nvme_id_ctrl /dev/nvme0n1

sudo ./nvme_id_ns /dev/nvme0n1

 

The Intel SSD Data Center Tool 2.0 also supports NVMe.

 

Here are more commands you’ll find useful.

Zero out and condition a drive sequentially for performance testing:

dd if=/dev/zero of=/dev/nvme0n1 bs=2048k count=400000 oflag=direct

Quick-test a drive: is it reading at over 2 GB a second?

hdparm -tT --direct /dev/nvme0n1
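hdparm is only a rough gauge, since it doesn’t control queue depth. If you want a more deliberate sequential read test, a tool like fio can keep the device properly busy. This invocation is my suggestion rather than anything official, and it assumes fio and its libaio engine are installed:

fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=128k --direct=1 --ioengine=libaio --iodepth=32 --runtime=30 --time_based

Reads are non-destructive, but be careful never to point a write test at a drive holding data you care about.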

 

Again, enjoy these gigabyte-per-second class SSDs, with their low-microsecond latency free of legacy storage controller overhead!



Health IT Does Not Transform Healthcare; Healthcare Cannot Transform Without Health IT

Below is a guest post from Steven E. Waldren, MD MS.

 

I was listening to the Intel Health videocast[1] featuring Eric Dishman, Dr. Bill Crounse, Dr. Andy Litt, and Dr. Graham Hughes. There was an introductory line that rang true: “EHR does not transform healthcare.” This statement prompted me to write this post.

 

The healthcare industry and policy makers have frequently seen health information technology (health IT) as a relatively easy fix for the quality and cost issues plaguing the U.S. health system. The thinking goes: if we adopt health IT and make it interoperable, we will drastically improve quality and lower cost. And research does provide evidence that health IT can do both.

 

I believe, however, that this interpretation of the research misses a very important dependent variable: the sociotechnical system within which the health IT is deployed. For the uninitiated, Wikipedia provides a good description of a sociotechnical system.[2] In essence, it is the system of people, workflow, information, and technology in a complex work environment. Healthcare is definitely a complex adaptive environment[3]. To put a finer point on this, if you deploy health IT in an environment in which the people, workflow, and information are aligned to improve quality and lower cost, then you are likely to see those results. On the other hand, if you implement the technology in an environment in which the people, workflow, and information are not aligned, you will likely not see improvement in either area.

 

Another reason it is important to look at health IT as a sociotechnical system is that it couples provider needs and capabilities to the health IT functions required. I think, as an industry, we have not done this well. We too quickly jump into the technology, be it a patient portal, a registry, or e-prescribing, instead of focusing on the capability the IT is designed to enable; for example, patient collaboration, population management, or medication management, respectively.

 

Generally, the current crop of health IT has been focused on automating the business of healthcare, not on automating care delivery. The focus has been on generating and submitting billing, and on generating documentation to justify billing. Supporting chronic disease management, prevention, or wellness promotion takes a side seat, if not a backseat. As the healthcare industry transitions to value-based payment, the focus has begun to change. As a healthcare system, we should focus on the capabilities that providers and hospitals need to support effective and efficient care delivery. From those capabilities, we can define the roles, workflows, data, and technology needed to support practices and hospitals in achieving them. By loosely coupling our efforts to those capabilities, rather than simply adopting a standard, acquiring a piece of technology, or sending a message, we gain a metric for determining whether we are successful.

 

If we do not focus on the people, workflow, data, and technology, but instead only focus on adopting health IT, we will struggle to achieve the “Triple Aim™,” to see any return on investment, or to improve the satisfaction of providers and patients. At this time, a real opportunity exists to further our understanding of the optimization of sociotechnical systems in healthcare and to create resources to deploy those learnings into the healthcare system. The opportunity requires us to expand our focus to the people, workflow, information, AND technology.

 

What questions do you have about healthcare IT?

 

Steven E. Waldren, MD MS, is the director of the Alliance for eHealth Innovation at the American Academy of Family Physicians.

 


[1] https://t.co/J7jISyg2NI

[2] http://en.wikipedia.org/wiki/Sociotechnical_system

[3] http://ti.gatech.edu/docs/Rouse%20NAEBridge2008%20HealthcareComplexity.pdf


Will the Invincible Buckeyes Team from OSU and OSC Prove to be Invincible?

Mike Bernhardt is the Community Evangelist for Intel’s Technical Computing Group

 

Karen Tomko, Scientific Applications Group Manager at the Ohio Supercomputer Center (OSC), has assembled a team of fellow Buckeyes to compete in the Intel Parallel Universe Computing Challenge (PUCC) at SC14 in November.

 

We asked Karen a few questions about her team, called the Invincible Buckeyes (IB), and their proposed participation in the PUCC.

 

The 2014 Invincible Buckeyes (IB) team includes (from l to r) Khaled Hamidouche, a post-doctoral researcher at The Ohio State University (OSU); Raghunath Raja, Ph.D. student (CS) at OSU; team captain Karen Tomko; and Akshay Venkatesh, Ph.D. student (CS) at OSU. Not pictured is Hari Subramoni, a senior research associate at OSU.

 

Q: What was the most exciting thing about last year’s PUCC?

A: Taking a piece of code from sequential to running in parallel on the Xeon Phi in 15 minutes, in a very close performance battle against the Illinois team, was a lot of fun.

 

Q: How will your team prepare for this year’s challenge?

A: We’ll do our homework for the trivia, brush up on the parallel constructs, look at some Fortran codes, and make sure we have at least one vi user on the team.

 

Q: What would you suggest to other teams who are considering participation?

A: First I’d say, if you are considering it, then sign up. It’s a fun break from the many obligations and talks at SC. When you’re in a match, don’t overthink; the time goes very quickly. Also, watch out for the ‘Invincible Buckeyes’!

 

Q: SC14 is using the theme “HPC Matters” for the conference. Can you explain why “HPC Matters” to you?

A: HPC systems allow scientists and engineers to tackle grand challenge problems in their respective domains and make significant contributions to their fields. HPC has enabled innumerable discoveries in the fields of astrophysics, earthquake analysis, weather prediction, nanoscience modeling, multi-scale and multi-physics modeling, biological computations, and computational fluid dynamics, to name a few. Being able to contribute directly or indirectly to these discoveries through the research we do matters a lot to our team.

Read more >

IT Accelerating Business Innovation Through Product Design

For the Product Development IT team within Intel IT that I am a part of, the recent mandates have been clear. We’ve been tasked with accelerating the development of Intel’s key System on Chip (SoC) platforms. We’ve been asked to be a key enabler of Intel’s growing software and services business. And we’ve been recognized as a model for employee engagement and cross-functional collaboration.

 

Much of this is new.

 

We’ve always provided the technology resources that facilitate the creation of world-class products and services. But the measures of success have changed. Availability and uptime are no longer enough. Today, it’s all about acceleration and transformation.

 

Accelerating at the Speed of Business

 

In many ways, we have become a gas pedal for Intel product development. We are helping our engineers design and deliver products to market faster than ever before. We are bringing globally distributed teams closer together with better communication and collaboration capabilities. And we are introducing new techniques and tools that are transforming the very nature of product design.

 

Dan McKeon, Vice President of Intel IT and General Manager of Silicon, Software and Services Group at Intel, recently wrote about the ways we are accelerating and transforming product design in the Intel IT Business Review.

 

The IT Product Development team, under Dan’s leadership, has enthusiastically embraced this new role. It allows us to be both a high-value partner and a consultant for the design teams we support at Intel. We now have a much better understanding of their goals, their pain points, and their critical paths to success—down to each job and workload. And we’ve aligned our efforts and priorities accordingly.

 

The results have been clear. We’ve successfully shaved weeks and months off of high-priority design cycles. And we continue to align with development teams to further accelerate and transform their design and delivery processes. Our goal in 2014 is to accelerate the Intel SoC design group’s development schedule by 12 weeks or more. We are sharing our best practices as we go, so please keep in touch.

 

To get the latest from Dan’s team on IT product development for faster time to market, download the Intel IT Business Review mobile app: http://itbusinessreview.intel.com/

Follow the conversation on Twitter using the hashtag #IntelIT.
