Recent Blog Posts

The Prickly Love Affair Between Users and Software

September has proven to be a big month for Apple. Blockbuster announcements introduced the iPhone 6, the iPhone 6 Plus, Apple Pay, and the Apple Watch. Along with these major events came the debut of the iOS 8.0.1 update.



Then came the failure of iOS 8.0.1.

 

The software update drew furious customer complaints within minutes of its debut. Less than an hour after launch, Apple retracted the update, promising to mend the bugs that were causing slower download speeds, dropped calls, keyboard malfunctions, and overall sluggish performance. Thereafter, Apple had to coach its grumpy users through restoring their devices to the previous iOS.

 

The iOS 8 misstep raises the question: Are we ready to be governed by software that guides our daily lives?

 

Software is proliferating in homes, enterprises, and virtually everything in between. It’s becoming part of our routine wherever we go, and when it works, it has the capacity to greatly enhance our quality of life. When it doesn’t work, things go awry almost immediately. For the enterprise, the ramifications of incapable software can resemble Apple’s recent debacle. Consumerization is not to be taken lightly — it’s changing how we exist as a species. It’s changing what we require to function.

 

Raj Rao, VP and global head of software quality practice for NTT Data, recently wrote an article for Wired in which he states, “Today many of us don’t really know how many software components are in our devices, what their names are, what their versions are, or who makes them and what their investment and commitment to quality is. We don’t know how often software changes in our devices, or what the change means.”

 

The general lack of knowledge about what software is used within a particular device — specifically how and why — inevitably hampers troubleshooting when problems arise. While constant evolution in software is necessary for innovation, one should also expect continual troubleshooting of the new technology.

 

For enterprise software users, Rao had three tips for keeping everybody satisfied. First, users should be encouraged to stick with programs they regularly use and understand. Second, large OS ecosystems should adhere to very strict control standards in order to ensure quality. And third, global software development practices need to become a priority if we want to guarantee a quality user experience.

 

The bond between humans and software is constantly intensifying. Now is the time to ensure the high quality of your own software systems. Do you have an iOS 8.0.1 situation waiting to happen?

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

Read more >

The Data Stack – September 2014 Intel® Chip Chat Podcast Round-up

September is always a busy month at Intel, and this year was no exception. Intel® Chip Chat hit the road with live episodes from the Intel Xeon processor E5 v3 launch. A plethora of partners and Intel reps discussed their products/platforms and what problems they’re using the Xeon processor to tackle. We were also live from the showcase of the Intel Developer Forum and will be archiving those episodes in the next few months, starting with an episode on software-defined storage. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!

 

  • Data Center Telemetry – Intel® Chip Chat episode 331: Iddo Kadim, a marketing director in the Data Center Group at Intel, stops by to talk about data center telemetry – information you can read from the infrastructure (like thermal data and security states) to help manage workloads more efficiently. In the future, the orchestration layer will work with telemetry data to manage workloads automatically for a more flexible and efficient data center. For more information, visit www.intel.com/txt and www.intel.com/inteldcm.
  • The Intel® IoT Analytics Kit for Intelligent Data Analysis and Response – Intel® Chip Chat ep 332: Vin Sharma (@ciphr), the Director of Planning and Marketing for Hadoop at Intel, chats about collecting and extracting value from data. The Intel® Galileo Development Kit’s hardware and software components allow users to build an end-to-end solution, while the Intel® Internet of Things Analytics Kit provides a cloud-based data processing platform. For more information, visit www.intel.com/galileo.
  • The Intel® Xeon® Processor E5-2600 v3 Launch – Intel® Chip Chat episode 333: Dylan Larson, the Director of Server Platform Marketing at Intel, kicks off our podcasts from the launch of the Intel® Xeon® processor E5 v3. This new generation of processors is the heart of the software-defined data center and offers versatile and energy-efficient performance while providing a foundation for security. Also launching are complementary storage and networking elements for a complete integration of capabilities. For more information, visit www.intel.com/xeon.
  • Optimizing for HPC with SGI’s ICE X Platform: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 334: Bill Mannel, the General Manager with the Compute and Storage Product Division at SGI, stops by to talk about SGI’s ICE* X platform featuring the recently-launched Intel® Xeon® processor E5-2600 v3. The ICE X blade is specifically optimized to provide higher levels of performance, scalability, and flexibility for HPC customers. For more information, visit www.sgi.com/products/servers.
  • Increased App Performance with Dell PowerEdge: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 335: Brian Payne, Executive Director of PowerEdge Product Management at Dell, chats about the Dell PowerEdge* 13G server line featuring the recently-launched Intel® Xeon® processor E5 v3. Flash server integration into the PowerEdge 13G is delivering immense increases in application and database performance to help customers meet workload requirements and adapt to new scale-out infrastructure models. For more information, visit www.dell.com.
  • Next-Gen Ethernet Controllers for SDI: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 336: Brian Johnson, Solutions Architect for Ethernet Products at Intel, discusses the release of the Intel® Ethernet Controller XL710. With the ability to achieve 40 Gbps speeds, the XL710 is architected for the next generation of SDI and virtualized cloud environments, as well as network functions virtualization in the telco industry. For more information, visit www.intel.com/go/ethernet.
  • The Reliable and High Performing Oracle Sun Server: Intel Xeon E5 v3 Launch – Chip Chat ep 337: Subban Raghunathan, the Director of Product Management of x86 Servers at Oracle, stops by to discuss the Intel® Xeon® processor E5 v3 launch and how Oracle’s optimized hardware and software in the Sun* Server product line has enabled massive performance gains. Deeper integration of flash technology drives increased reliability, performance, and solutions scalability and in-memory database technology delivers real-time caching of application data, which is a game changer for the enterprise. For more information, visit http://www.oracle.com/us/products/servers/overview/index.html.
  • Supermicro Platforms for Increased Perf/Watt: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 338: Charles Liang, Founder, President, CEO, and Chairman of the Board, and Don Clegg, VP of Marketing and Business for Supermicro, discuss how the company has launched more than 50 platform designs optimized for the Intel® Xeon® processor E5 v3. Supermicro provides solutions for data center, cloud computing, enterprise IT, Hadoop/big data, HPC and embedded systems worldwide and focuses on delivering increased performance per watt, performance per square foot, and performance per dollar. For more information, visit www.supermicro.com.
  • The New Flexible Lenovo ThinkServer Portfolio: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 339: Justin Bandholz, a Portfolio Manager at Lenovo, stops by to announce the launch of a portfolio of products based on the Intel® Xeon® processor E5-2600 v3, including premier 2-socket 1U and 2U rack servers, the ThinkServer* RD550 and ThinkServer RD650, as well as a 2-socket ThinkServer TD350 tower server. New fabric and storage technologies in the product portfolio are providing breakthroughs in flexibility for configuration of systems to suit customer workload needs. For more information, visit http://www.lenovo.com/servers.
  • Improving Network Security and Efficiency: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 340: Jeni Panhorst, Senior Product Line Manager at Intel, stops by to talk about the launch of the Intel® Communications Chipset 8900 series with Intel® QuickAssist Technology, which delivers cryptography and compression acceleration that benefits a number of applications. Use cases for the new chipset include securing back-end network ciphers to improve efficiency of equipment while delivering real-time cryptographic performance requirements, as well as network optimization – compressing data in the flow of traffic across a WAN. For more information, visit www.intel.com.
  • System Innovation with Colfax: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 341: Gautam Shah, the CEO of Colfax International, chats about how the Intel® Xeon® processor E5 v3 launch is a complete solution stack upgrade, including processor, networking, and storage components, which allows customers to tackle problems they haven’t previously been able to solve cost-effectively (or at all). Colfax is delivering solutions with increased DDR4 memory, 12Gb/s SAS, integrated SSDs, and networking solutions, which offer a great leap in system innovation. For more information, visit www.colfaxinternational.com or email sales@colfaxinternational.com with any questions.
  • Increased Data Center Security, Efficiency and Reliability with IBM – Intel® Chip Chat episode 342: Brian Connors, the VP of Global Product Development and Lab Services at IBM, stops by to talk about the launch of the company’s new M5 line of towers, racks and NeXtScale systems based on the Intel® Xeon® processor E5 v3. The systems have been designed for increased security (Trusted Platform Assurance and Enterprise Data Protection), efficiency and reliability and offer dramatic performance improvements over previous generations. For more information, visit www.ibm.com.
  • Innovations in VM Management with Hitachi: The Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 343: Roberto Basilio, the VP of Storage Product Management at Hitachi Data Systems, discusses the launch of the Intel® Xeon® processor E5 v3 and, in particular, how virtual machine control structure (VMCS) shadowing is innovating virtual machine management in the cloud. Shadowing improves the performance of Nested Virtualization and reduces latency and improves energy efficiency. For more information, visit http://www.hds.com/products/hitachi-unified-compute-platform/.
  • Re-architecting the Data Center with HP ProLiant Gen 9: Intel Xeon E5 v3 – Intel® Chip Chat ep 344: Peter Evans, a VP & Marketing Executive in HP’s Server Division, chats about the ProLiant* Generation 9 platform refresh, the foundation of which is the Intel® Xeon® processor E5 v3. The ProLiant Gen9 platform is driving advancements in performance, time to service, and optimization for addressing the explosion of data and devices in the new data center. For more information, visit www.hp.com/go/compute.
  • Software Defined Storage for Hyper-Convergence – Intel® Chip Chat episode 345: In this archive of a livecast from the Intel Developer Forum, Yoram Novick (Founder and CEO) and Carolyn Crandell (VP of Marketing) from Maxta discuss hyper-convergence and enabling SDI via the company’s software defined storage solutions. The recently announced MaxDeploy reference architecture, built on Intel® Server Boards, provides customers the ability to purchase a whole box (hardware and software) for a more simple and cost-effective solution than legacy infrastructure. For more information, visit www.maxta.com.
  • Modernizing Code for Dramatic Performance Improvements – Intel® Chip Chat episode 346: Mike Bernhardt, the Community Evangelist for HPC and Technical Computing at Intel, stops by to talk about the importance of code modernization as we move into multi- and many-core systems in the HPC field. Markets as diverse as oil and gas, financial services, and health and life sciences can see a dramatic performance improvement in their code through parallelization. Mike also discusses last year’s Parallel Universe Computing Challenge and its return at SC14 in November – $26,000 towards a charitable organization is on the line for the winning team. For more information about the PUCC, visit intel.ly/SC14 and for more on Intel and HPC, visit www.intel.com/hpc.

Read more >

Accelerating the Adoption of Web Technologies in the Automotive Industry

The mass market for self-driving vehicles hasn’t yet arrived. But as automakers continue to integrate in-vehicle infotainment (IVI), and race down the path toward autonomous driving, there is no doubt that automotive cockpits are becoming increasingly defined by software. Data … Read more >

The post Accelerating the Adoption of Web Technologies in the Automotive Industry appeared first on IoT@Intel.

Read more >

How does business recover from a large-scale cyber security disaster?

Corporations need to get three things right in cyberspace: protect their valuable information, ensure that business operations continue during disturbances, and maintain their reputation as trustworthy. These goals support one another and enable successful utilization of the digital world. Yet due to its dynamic nature, there is no absolute security in cyberspace. What to do when something goes wrong? The best way to survive a blast is to prepare for it in advance.

 

Cyber security requires transformed security thinking. Security should not be seen as an end state achieved once through a tailored investment in technology, but as an ongoing process that needs to adapt to changes in the environment. Effective security production is agile and innovative. It aligns cyber security with the overall business process so that the former supports the latter. When maintaining cyber security is seen as one of the corporation’s core managerial functions, its importance is raised to the correct level. It is not only IT managers and officers who need to understand cyberspace and how it relates to their areas of responsibility.

 

A cyber security point of view can be integrated into business processes by, for example, constructing and executing a specific cyber strategy for the corporation. This should start with enablement and consider the opportunities that the corporation wishes to take advantage of in the digital world. It should also recognize threats in cyberspace and designate how these are counteracted. The strategy process should be led by the highest managerial level, yet be responsive to ideas and feedback from both the operational and technical levels of execution. Thus the entire organization will be committed to the strategy and feel ownership of it. Moreover, the strategy will be realistic, neither attempting to reach unachievable goals nor relying on processes whose construction is technically impossible.

 

It is common practice for corporations to do business continuity planning. However, operations in the digital world are not always included in it – regardless of the acknowledged dependency on cyberspace that characterizes modern business. There seems to be a strong belief in bits: that they won’t let us down. The importance of a plan B is often neglected, and the ability to operate without a functioning cyberspace is lost. Plan B – an essential building block of a cyber strategy – should contain guidelines for partners, managers, and employees in case of a security breach or a large cyber security incident. What to do; whom to inform; how to address the issue in public?

 

Plan B should include enhanced intrusion detection, adequate responses to security incidents, and a communication strategy. Whom to inform, at what level of detail, and at which stage of the recovery process? Too little communication may give the impression that the corporation is trying to hide something or isn’t up to date with its responsibilities. Too much communication at too early a stage of the mitigation and restoration process may lead to panic or exaggerated loss estimates. In both cases the corporation’s reputation suffers. Openness and correct timing are the key words here.

 

A resilient corporation is able to continue its business operations even when the digital world does not function the way it is supposed to. Digital services may be scaled down without the customer experience suffering too much. Effective detection of breaches and associated losses, along with fast restoration of services, not only serves the corporation’s immediate business goals but also projects good cyber security. Admitting that there are problems while simultaneously demonstrating that the necessary security measures are being taken is essential throughout the recovery period. So is honest communication to stakeholders at the right level of detail.

 

Without adequate strategy work and its execution, the trust felt towards the corporation and its digital operations is easily lost. Without trust it is difficult to find partners for cyber-dependent business operations, and customers turn away from the corporation’s digital offerings. Trust is the most valuable asset in cyberspace.

 

Planning in advance and building a resilient business entity safeguard corporations against digital disasters. If such a disaster has already happened, it is important to speak up, demonstrate that lessons have been learned, and show what is being done differently from now on. The corporation must listen to those who have suffered and carry out its responsibilities. Only this way can market trust be restored.

 

- Jarno

 

Find Jarno on LinkedIn

Start a conversation with Jarno on Twitter

Read previous content from Jarno

Read more >

Breaking Down Battery Life

Many consumer devices have become almost exclusively portable. As we rely more and more on our tablets, laptops, 2-in-1s, and smartphones, we expect more and more out of our devices’ batteries. The good news is, we’re getting there. As our devices evolve, so do the batteries that power them. However, efficient batteries are only one component of a device’s battery life. Displays, processors, radios, and peripherals all play a key role in determining how long your phone or tablet will stay powered.


Processing Power

Surprisingly, the most powerful processors can also be the most power-friendly. By quickly completing computationally intensive jobs, full-power processors like the Intel® Core™ i5 processor can return to a lower power state faster than many so-called “power-efficient” processors. While it may seem counterintuitive at first glance, laptops and mobile devices armed with these full-powered processors can have battery lives that exceed those of smaller devices. Additionally, chip makers like Intel work closely with operating system developers like Google and Microsoft to optimize processors to work seamlessly and efficiently.
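To see why this “race to idle” effect wins, consider a back-of-the-envelope comparison (the numbers here are illustrative, not measured): energy equals power multiplied by time.

Fast processor: 15 W x 2 s active + 0.5 W x 58 s idle = 59 J per minute

Slower processor: 5 W x 20 s active + 0.5 W x 40 s idle = 120 J per minute

Even though the faster chip draws three times the active power, it finishes the job sooner and idles for most of the minute, consuming roughly half the total energy.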


Display

One of the biggest power draws on your device is its display. Bright LCD screens require quite a bit of power when fully lit. As screens evolve to contain more and more pixels, battery manufacturers have tried to keep up. The growing demand for crisp high-definition displays makes it even more crucial for companies to find new avenues for power efficiency.

 

Radios

Almost all consumer electronic devices being produced today have the capacity to connect to an array of networks. LTE, Wi-Fi, NFC, GPS — all of these acronyms pertain to some form of radio in your mobile phone or tablet, and ultimately mean varying levels of battery drain. As the methods of wireless data transfer have evolved, the amount of power required for these data transfers has changed. For example, trying to download a large file using a device equipped with older wireless technology may actually drain your battery faster than downloading the same file using a faster wireless technology. Faster downloads mean your device can stay at rest more often, which equals longer battery life.

 

Storage

It’s becoming more and more common for new devices to come equipped with solid-state drives (SSDs) rather than hard-disk drives (HDDs). By the nature of the technology, HDDs can use up to 3x the power of SSDs, and they have significantly slower data transfer rates.

 

These represent just a few things you should evaluate before purchasing your next laptop, tablet, 2-in-1, or smartphone. For more information on what goes into evaluating a device’s battery life, check out this white paper. To join the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

Read more >

Dishing up Some SMAC Talk

I have been a huge proponent of social media and social networking for the past few years. It’s been interesting to see how social networking, once reserved for friends and family, has made its way into the enterprise workplace. Individuals are now more mobile and have a range of choices for what device(s) they utilize for any given task. There is more data than ever before, and a desire to turn those bits of information into insights and actions. And the cloud has created new opportunities to deliver applications, services, and value.

 

The combination of these transformative trends is known as SMAC: social, mobile, analytics, and cloud. And it’s the result of the increasing consumerization of IT, with users demanding the devices and capabilities they enjoy at home.

 

Intel IT has embraced the SMAC model with fervor. It’s a great way to give Intel employees the information and services they want, no matter where they are or what device they are using. And it helps IT continually improve the speed and efficiency of resource and service delivery.

 

In the Intel IT Business Review, Intel Vice President and General Manager of IT David Aires explains our SMAC model and how he and his team are moving to the leading edge of change.

 

http://itbusinessreview.intel.com/leading-it/110-moving-to-the-leading-edge-of-the-change-wave

 

Here are a few examples of the progress made by David and his team:

 

  • Intel IT distributed nearly 14,000 touch-enabled Ultrabooks to our workforce in 2013 to give users a lighter, more mobile computing platform than PCs and laptops.

 

  • Intel IT implemented a BYOD program two years ago, and a majority of the 45,000 mobile devices at Intel are now employee-owned.

 

  • The increase in mobile devices has upped the demand for mobile apps. The team developed 57 enterprise mobile apps in 2013 alone and has delivered 123 mobile apps to the Intel workforce since 2011.


  • To increase IT agility and efficiency, they have virtualized more than 80 percent of Intel’s infrastructure and are delivering more services through IT’s internal cloud.


 

These changes aren’t just good for our employees. They are also good for business. By adopting and promoting SMAC, this Intel IT team is boosting productivity, keeping costs down, and staying in front of industry trends.

 

To learn more about how this team is delivering operational excellence, increasing employee productivity, reducing costs, and deploying new technologies that raise expectations of IT, download the Intel IT Business Review mobile app. http://itbusinessreview.intel.com/

 


 

 


 

And perhaps we can engage in some friendly “SMAC talk.”  Follow me on Twitter: @davidlaires #IntelIT

 

David Aires

General Manager of Operations

Intel Information Technology

Read more >

Episode Recap – Transform IT with Guest Ray Noonan, CEO, Cogent

How did you like what Ray Noonan, CEO of Cogent, had to say about collaboration and the need to focus on business value?

 

Did it challenge you?

 

It probably should have. If I can summarize what Ray shared with us, it would be that we need to:

 

Break down the walls that keep us apart, and always put business value above the needs of IT.


I’m quite sure that some of what he said sent shivers down the spines of IT people everywhere. But Ray wasn’t focused on “IT” – only on what IT can do to deliver value to the organization.

 

He believes that IT is too important to be segregated in a separate function, and so he integrated it into the business units directly. He believes that we should all be technologists, and so we need to trust our people with technology decisions. He believes that the sense of “ownership” – to the degree that it inhibits sharing and collaboration – must be eliminated so that our teams can work together rapidly and fluidly. And he believes that the only thing that matters is the value generated for the business – so if an IT process or policy is somehow disrupting the delivery of value, then it should be changed.

 

If you keep your “IT hat” on, these ideas can seem scary and downright heretical. But if you think like a CEO, they make a lot more sense.

 

And that was Ray’s big challenge to all of us.

 

To break down our “ownership walls”.

To focus, instead, on how we create value for the organization.

To understand and embrace that value.

And then to deliver and protect it.

 

The question for you is how you’re going to start doing that. How will you begin?

 

Share with us the first step that you’re going to take to begin breaking down your own “ownership walls” and to focus on value.  I believe that your ability to understand how value is created for your business and how you, personally, contribute to that value, is perhaps one of the most critical first steps in your own personal transformation to becoming a true digital leader.

 

So decide what you will do to begin this process and start now. There’s no time to wait!

 

If you missed Episode 2, you can watch it on-demand here: http://intel.ly/1rrfyg1

 

Also, make sure you tune in on October 14th when I’ll be talking to Patty Hatter, Sr. VP Operations & CIO at McAfee, about “Life at the Intersection of IT and Business.” You can register for a calendar reminder here.


You can join the Transform IT conversation anytime using the Twitter hashtags #TransformIT and #ITChat.

Read more >

Upgrade to an NVMe-Capable Linux Kernel

Here in the Intel NVM and SSD group (NSG) we build and test Linux systems a lot, and we’ve been working to mature the NVMe driver stack on all kinds of operating systems. The Linux kernel is the innovation platform today, and it has come a long way with NVMe stability. We have always had a high-level kernel build document, but never in a blog (bad Intel; we are changing those ways). We also wanted to refresh it a bit, as NVMe support in Linux is now well along. Kernel 3.10 is when integration really happened, and the important data center Linux OS vendors are fully supporting the driver. In case you are on a 2.6 kernel and want to move up to a newer one, here are the steps to build a kernel for your testing platform and try out one of Intel’s Data Center SSDs for PCIe and NVMe. This assumes you want the latest and greatest for testing and are not interested in an older, vendor-supported kernel. By the way, on those “6.5 distributions” you won’t be able to get a supported 3.x kernel; that’s one reason I wrote this blog. But it will run and allow you to test with something newer. You may have your own reasons, I am sure. As far as production goes, you will probably want to work together with your OS vendor.

 

I run a 3.16.3 kernel on some of the popular 6.5 distros; you can too.

 

1.    NVM Express background

NVM Express (NVMe) is an optimized interface for PCI Express SSDs. The NVM Express specification defines an optimized register interface, command set, and feature set for PCI Express (PCIe)-based solid-state drives (SSDs). Please refer to www.nvmexpress.org for background on NVMe.

The NVM Express Linux driver development utilizes the typical open-source process used by kernel.org. The development mailing list is linux-nvme@lists.infradead.org.

The Linux NVMe driver was integrated in kernel 3.10 and is included in all kernels above 3.10.
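Before building anything, it is worth checking what you already have. A quick sketch (output varies by distribution):

uname -r

modinfo nvme

If modinfo prints module details, your kernel already ships the NVMe driver; on a 2.6-era kernel it will typically report that the module could not be found, which is your cue to build a newer kernel.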

 

2.    Development tools required (possible prerequisites)

In order to clone, compile, and build the new kernel and driver, the following packages are needed:

  1. ncurses
  2. build tools
  3. git (optional; you could use wget instead to download the Linux package)

You must be root to install these packages.

Ubuntu based

apt-get install git-core build-essential libncurses5-dev

RHEL based

yum install git-core ncurses ncurses-devel
yum groupinstall "Development Tools"

SLES based

zypper install ncurses-devel git-core
zypper install --type pattern Basis-Devel

 

3.    Build new Linux kernel with NVMe driver

Pick a starting distribution. From the driver’s perspective it doesn’t matter which distribution you use, since you are going to put a new kernel on top of it, so use whatever you are most comfortable with and/or whatever has the tools required.

Get the kernel and driver:

  1. Download a “snapshot” from the top of the tree with wget (here’s an example):

            wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.16.3.tar.xz

            tar -xvf linux-3.16.3.tar.xz
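Alternatively, if you prefer git (listed as an optional prerequisite above), you can clone just the tag you need. A sketch, assuming the kernel.org stable repository layout:

            git clone --depth 1 --branch v3.16.3 git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git

            cd linux-stable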

 

    2.      Build and install

Run menuconfig (which uses ncurses):

make menuconfig

Confirm the NVMe driver under Block devices is set to <M>:

Device Drivers -> Block devices -> NVM Express block device

This creates a .config file in the same directory.
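If you want to double-check without reopening menuconfig, you can grep the generated file (the symbol name below assumes a 3.x kernel tree):

grep BLK_DEV_NVME .config

You should see CONFIG_BLK_DEV_NVME=m, meaning the driver will be built as a module.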

Then, run as root these make commands (set the -j flag to about half your core count to improve make time):

make -j10

make modules_install -j10

make install -j10

 

Depending on the distribution you use, you may have to run update-initramfs and update-grub, but this is typically unnecessary. Once the install is successful, reboot the system to load the new kernel and drivers. Usually the new kernel becomes the default boot entry, which is the top line of menu.lst. Verify with “uname -a” after booting that the running kernel is what you expect. Use “dmesg | grep -i error” and resolve any kernel loading issues.
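Once the new kernel is up and an NVMe drive is installed, a quick sanity check is worthwhile (device names below assume a single controller with one namespace):

ls /dev/nvme*

lsblk

You should see a controller device (/dev/nvme0) plus one block device per namespace (/dev/nvme0n1, and so on).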

 

4.  NVMe Driver basic tests and tools

          There are some basic open source NVMe test programs you can use for checking NVMe devices:

          http://git.infradead.org/users/kbusch/nvme-user.git

          Getting the source code:

git clone git://git.infradead.org/users/kbusch/nvme-user.git

Building the test programs:

Add or modify the Makefile with the proper library and header paths, then compile the programs:

make

 

For example, check the NVMe device controller “identify” and “namespace” data:

sudo ./nvme_id_ctrl /dev/nvme0n1

sudo ./nvme_id_ns /dev/nvme0n1

 

Intel SSD Data Center Tool 2.0 supports NVMe

 

Here are more commands you’ll find useful.

Zero out and condition a drive sequentially for performance testing:

dd if=/dev/zero of=/dev/nvme0n1 bs=2048k count=400000 oflag=direct

Quick-test a drive: is it reading at over 2GB a second?

hdparm -tT --direct /dev/nvme0n1
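For something more rigorous than hdparm, fio is a common choice for exercising NVMe drives; here is a minimal random-read sketch, assuming fio is installed (careful: this reads the raw device directly):

fio --name=randread --filename=/dev/nvme0n1 --direct=1 --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --runtime=30 --time_based

Increase iodepth or add numjobs to see what the drive can do at higher queue depths.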

 

Again, enjoy these gigabyte-per-second class SSDs with low-microsecond, controller-free performance!


Read more >

Quick Q & A: Safer Highways, Smarter Cities, and Fewer Traffic Jams

It’s fascinating to think about how innovations in transportation, including the trend toward Internet of Things implementations, can enhance the quality of life for people across the globe. Indeed, technology can help us address significant challenges around really important aspects … Read more >

The post Quick Q & A: Safer Highways, Smarter Cities, and Fewer Traffic Jams appeared first on IoT@Intel.

Read more >

Health IT Does Not Transform Healthcare; Healthcare Cannot Transform Without Health IT

Below is a guest post from Steven E. Waldren, MD MS.

 

I was listening to the Intel Health videocast[1] of Eric Dishman, Dr. Bill Crounse, Dr. Andy Litt, and Dr. Graham Hughes. There was an introductory line that rang true, “EHR does not transform healthcare.” This statement prompted me to write this post.

 

The healthcare industry and policy makers have frequently seen health information technology (health IT) as a relatively easy fix to the quality and cost issues plaguing the U.S. health system. If we adopt health IT and make it interoperable, we will drastically improve quality and lower cost. Research provides evidence that health IT can do both.

 

I believe, however, that interpretation of this research misses a very important dependent variable; that variable is the sociotechnical system within which the health IT is deployed. For the uninitiated, Wikipedia provides a good description of a sociotechnical system.[2] In essence, it is the system of people, workflow, information, and technology in a complex work environment. Healthcare is definitely a complex adaptive environment.[3] To put a finer point on this, if you deploy health IT in an environment in which the people, workflow, and information are aligned to improve quality and lower cost, then you are likely to see those results. On the other hand, if you implement the technology in an environment in which the people, workflow, and information are not aligned, you will likely not see improvement in either area.

 

Another reason it is important to look at health IT as a sociotechnical system is to couple the provider needs and capabilities to the health IT functions needed. I think, as an industry, we have not done this well. We too quickly jump into the technology, be it patient portal, registry, or e-prescribing, instead of focusing on the capability the IT is designed to enable, for example, patient collaboration, population management, or medication management, respectively.

 

Generally, the current crop of health IT has been focused on automating the business of healthcare, not on automating care delivery. The focus has been on generating and submitting billing, and generating documentation to justify billing. Supporting chronic disease management, prevention, or wellness promotion takes a side seat, if not a backseat. As the healthcare industry transitions to value-based payment, the focus has begun to change. As a healthcare system, we should focus on the capabilities that providers and hospitals need to support effective and efficient care delivery. From those capabilities, we can define the roles, workflows, data, and technology needed to support practices and hospitals in achieving them. By loosely coupling to the capabilities, rather than simply adopting a standard, acquiring a piece of technology, or sending a message, we have a metric to determine whether we are successful.

 

If we do not focus on the people, workflow, data, and technology, but instead only focus on adopting health IT, we will struggle to achieve the “Triple Aim™,” to see any return on investment, or to improve the satisfaction of providers and patients. At this time, a real opportunity exists to further our understanding of the optimization of sociotechnical systems in healthcare and to create resources to deploy those learnings into the healthcare system. The opportunity requires us to expand our focus to the people, workflow, information, AND technology.

 

What questions do you have about healthcare IT?

 

Steven E. Waldren, MD MS, is the director, Alliance for eHealth Innovation at the American Academy of Family Physicians

 


[1] https://t.co/J7jISyg2NI

[2] http://en.wikipedia.org/wiki/Sociotechnical_system

[3] http://ti.gatech.edu/docs/Rouse%20NAEBridge2008%20HealthcareComplexity.pdf

Read more >