Recent Blog Posts

Intel® Technologies Help Saudi Arabia Meet Rising Demand for Healthcare Services


The King Faisal Specialist Hospital and Research Centre (KFSHRC) is the pinnacle of the healthcare system in the Kingdom of Saudi Arabia. With facilities in Riyadh and Jeddah, plus a children’s cancer center, KFSHRC provides care for the most seriously ill patients from anywhere in Saudi Arabia. Along with delivering advanced treatments, the center conducts leading-edge research and helps train the next generation of physicians, nurses and other clinicians.


Like many nations, Saudi Arabia faces increasing demand for healthcare services as its population grows, people live longer, and lifestyle diseases are on the rise.1

KFSHRC recognized that modernizing its technology infrastructure would be essential to meeting future challenges in a reliable, cost-effective way. So, the center is migrating much of its data center infrastructure to servers and storage systems based on the Intel® Xeon® processor E5 family. KFSHRC is also adopting 2 in 1 devices with the Intel® Core™ i5 vPro™ processor for clinicians on the go.

KFSHRC leaders say their technology strategy is enhancing caregivers’ productivity and delivering an optimized data center that supports innovative healthcare IT solutions. For example, they have:

  • Reduced capital costs as well as ongoing costs for licensing, support, and maintenance
  • Reduced floor space requirements by 50 percent
  • Reduced cabling inside the data center by 70 percent
  • Improved system availability by 90 percent
  • Enhanced agility by enabling them to deploy new capabilities rapidly


KFSHRC’s technology strategy delivers important benefits for patients and healthcare providers alike. These include:

  • More coordinated, patient-centered care. Powerful healthcare IT solutions and mobile computing can empower treatment teams to provide more coordinated care before, during, and after the patient’s hospital stay.
  • Higher patient satisfaction and engagement. Patients and their families can experience shorter wait times, greater convenience, and fewer redundant procedures. Tools such as portals can help engage patients in managing their own health.
  • Improved productivity, hiring, and retention. Healthcare IT solutions can help medical professionals work more productively and reduce stress. Healthcare IT modernization also helps KFSHRC attract and retain doctors, nurses, and other clinicians.
  • Readiness for Healthcare 2020. KFSHRC is positioning itself to take advantage of technology-enabled advances that are transforming medicine.

Read the case study to learn more about KFSHRC’s technology strategy and its use of Intel technologies.

For more info on the latest in health and life sciences, subscribe to our newsletter.

Follow us on Twitter: @IntelHealth, @IntelITCenter


1 For example, see data from the United Nations World Population Prospects, summarized in Demographic Profile of Saudi Arabia and the news release Saudi Health Interview Survey finds high rates of chronic diseases in the Kingdom of Saudi Arabia, based on a study conducted by the Saudi Ministry of Health and the Institute for Health Metrics and Evaluation (IHME) at the University of Washington, available at


Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to

Intel does not control or audit the design or implementation of third party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported and confirm whether the referenced benchmark data are accurate and reflect performance of systems available for purchase.

Read more >

Code Modernization: Intel HPC Developers Conference Nov 14-15

Intel will be holding our annual HPC Developer Conference in Austin, Texas in conjunction with SC’15 (Saturday afternoon November 14 and all day Sunday November 15). Registration is open NOW. The agenda is available (35 high quality talks plus keynotes … Read more >

The post Code Modernization: Intel HPC Developers Conference Nov 14-15 appeared first on Intel Software and Services.

Read more >

It Takes a Village to Accelerate Big Data Analytics Adoption

Today’s enterprises are adept at capturing and storing huge amounts of data. While that alone is an accomplishment, it’s the easy part of a much harder problem. The real challenge is to turn those mountains of data into meaningful insights that enhance competitiveness.


Business analysts at enterprises today are hampered by the lack of domain-specific and customizable applications based on big data analytics. App developers are slowed by the complexity of the big data infrastructure and the lack of the analytic functions they depend on. And data scientists, in turn, are challenged by the time it takes to build handcrafted models. Those models are re-coded for production and rarely reused, slowing their deployment and updating.


This is a problem not for any one company to solve but for the entire community to address, and the new Trusted Analytics Platform (TAP) was built to help do just that. This open source platform, developed by Intel and our ecosystem partners, enables data scientists and app developers to deploy predictive models faster on a shared big data analytics platform.


The platform provides an end-to-end solution with a data layer optimized for performance and security, an analytics layer with a built-in data science toolkit, and an application layer that includes a managed runtime environment for cloud-native apps. Collectively, the capabilities of the Trusted Analytics Platform will make it much easier for developers and data scientists to collaborate by providing a shared, flexible environment for advanced analytics in public and private clouds.


And this brings us to a new collaborative effort with cloud service providers that will leverage the Trusted Analytics Platform to accelerate the adoption of big data analytics. Intel has formed a strategic collaboration with OVH to accelerate the adoption of big data analytics solutions with TAP on OVH’s infrastructure. OVH, based in France, is one of the largest and fastest growing cloud service providers in Europe, and it is the first service provider we have announced a collaboration with.


Through this partnership, announced at the OVH Summit in Paris this week, the two companies will work together to optimize infrastructure for the best performance per workload and security while leveraging new Intel technologies to grow and differentiate services.


At a broader level, partnerships like these reflect Intel’s long-running commitment to bring new cloud technologies to market and help our customers streamline and speed their path to deployment. We are deeply committed to working with partners like OVH to accelerate the adoption of big data technologies and further Intel’s mission of enabling every organization to unlock intelligence from big data.


To further fuel big data solutions, OVH and Intel are collaborating on a “big data challenge” to pilot the development of solutions using the new Trusted Analytics Platform. Here’s how the challenge works:


  • Submit your big data analytics challenge by Oct 24th, 2015.
  • OVH and Intel will select up to three innovative big data use cases and help the winners develop and deploy their solutions with the Trusted Analytics Platform on managed hosting.
  • The selected innovators will receive free hosting for the pilot period and be featured in global big data conferences next year.


Or if you’re a data scientist or a developer who wants to capitalize on the TAP platform, visit the Trusted Analytics Platform site to get the tools you need to accelerate the creation of cloud-native applications driven by big data analytics.

Read more >

Meet the Intel Intelligent Storage Acceleration Library

By Leah Schoeb, Intel


As data centers manage their growing volumes of data while maintaining SLAs, storage acceleration and optimization become all the more important. To help enterprises keep pace with data growth, storage developers and OEMs need technologies that enable them to accelerate the performance and throughput of data while making the optimal use of available storage capacity.


These goals are at the heart of the Intel Intelligent Storage Acceleration Library (Intel ISA-L), a set of building blocks designed to help storage developers and OEMs maximize performance, throughput, security, and resilience, while minimizing capacity usage, in their storage solutions. The acceleration comes from highly optimized assembly code, built with deep insight into the Intel® Architecture processors.


Intel® ISA-L is an algorithmic library that enables users to get more performance from Intel CPUs while reducing their investment in developing their own optimizations. The library uses dynamic linking so the same code runs optimally across Intel’s line of processors, from Atom to Xeon; the same technique ensures forward and backward compatibility, making the library well suited to both software-defined storage and OEM or “known hardware” usage. Ultimately, the library helps end-user customers accelerate service deployment, improve interoperability, and reduce TCO by supporting storage solutions that make data centers more efficient.
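The dispatch idea can be sketched as follows. This is a hypothetical Python analogue, with the feature probe and both "implementations" mocked; ISA-L itself selects optimized assembly paths via CPUID checks:

```python
# Conceptual sketch of one-time implementation dispatch: bind the fastest
# available routine at startup based on detected CPU features, so callers
# never need to know which path was chosen. Everything here is mocked.

def checksum_portable(data: bytes) -> int:
    """Portable fallback path."""
    return sum(data) & 0xFFFF

def checksum_simd(data: bytes) -> int:
    """Stand-in for an AVX-optimized path (same result, mocked here)."""
    return sum(data) & 0xFFFF

def detect_features() -> set:
    """Mocked CPU-feature probe; real code queries CPUID."""
    return {"sse4_2", "avx2"}

# One-time binding: all callers use `checksum`, unaware of the selection.
checksum = checksum_simd if "avx2" in detect_features() else checksum_portable
```

Because the binding happens once at load time, the per-call cost of choosing an implementation is zero, which is essentially what dynamic dispatch buys the library.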


This downloadable library is composed of optimized algorithms in five main areas: data protection, data integrity, compression, cryptography, and hashing. For instance, Intel® ISA-L delivers up to 7x bandwidth improvement for hash functions compared to OpenSSL algorithms. In addition, it delivers up to 4x bandwidth improvement on compression compared to the zlib compression library, and it lets users get to market faster and with fewer resources than they would need if they had to develop (and maintain!) their own optimizations.


One way Intel® ISA-L can help accelerate storage performance cost-effectively is by accelerating data deduplication algorithms using chunking and hashing functions. If you develop storage solutions, you know how data deduplication improves capacity optimization by reducing the need to store duplicated data. During the deduplication process, a hashing function can be used to generate a fingerprint for each data chunk. Once each chunk has a fingerprint, incoming data can be compared against a stored database of existing fingerprints; when a match is found, the duplicate data does not need to be written to disk again.
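A minimal sketch of that fingerprint-based flow, assuming fixed-size chunks and SHA-256 (real deduplication engines typically use content-defined chunking and much faster optimized hashes such as ISA-L’s):

```python
# Fingerprint-based deduplication sketch: split data into fixed-size
# chunks, hash each chunk, and write only chunks whose fingerprint is
# new. CHUNK_SIZE and the dict-backed store are illustrative choices.
import hashlib

CHUNK_SIZE = 4096

def deduplicate(data: bytes, store: dict) -> int:
    """Store only chunks with unseen fingerprints; return bytes written."""
    written = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in store:    # compare against known fingerprints
            store[fingerprint] = chunk  # new data: actually write it
            written += len(chunk)
    return written

store = {}
first = deduplicate(b"A" * 8192, store)   # two identical chunks arrive
second = deduplicate(b"A" * 4096, store)  # an already-seen chunk arrives
```

Here the second write costs no capacity at all: its chunk fingerprint is already in the store, so only metadata would be updated.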


Data deduplication algorithms can be very CPU-intensive and leave little processor headroom for other tasks; Intel® ISA-L removes this barrier. The combination of Intel® processors and Intel® ISA-L provides the tools to help accelerate everything from small office NAS appliances up to enterprise storage systems.


The Intel® ISA-L toolkit is free to download, and parts of it are available as open source software. The open source version contains data protection, data integrity, and compression algorithms, while the full licensed version also includes cryptographic and hashing functions. In both cases, the code is provided free of charge.


Our investments in Intel® ISA-L reflect our commitment to helping our industry partners bring new, faster, and more efficient storage solutions to market. This is the same goal that underlies the new Storage Performance Development Kit (SPDK), launched this week at the Storage Developer Conference (SDC) in Santa Clara. This open source initiative, spearheaded by Intel, leverages an end-to-end user-level storage reference architecture, spanning from NIC to disk, to achieve performance that is both highly scalable and highly efficient.


For a deeper dive, visit the Intel Intelligent Storage Acceleration Library site. Or for a high-level overview, check out this quick Intel ISA-L video presentation from my colleague Nathan Marushak.

Read more >

RISC Replacement for Epic* and Caché*? One Health System’s Leaders Explain Why They’ve Made the Move

Located in Northwest Louisiana—one of the poorest areas of the United States—University Health System strives to deliver top-quality care to some of the sickest of the sick and the neediest of the needy. The two-hospital system is affiliated with Louisiana State University School of Medicine and maintains a Level One Trauma Center.


Healthcare budgets in Louisiana are tight and getting tighter. So when UHS’s technology leaders saw an opportunity to get their Epic* infrastructure on a more sustainable footing, they did their due diligence and then made the move. Today, their Epic* EMR and InterSystems Caché database run on the Intel® Xeon® processor E7 and E5 families powered by Red Hat Enterprise Linux* and VMware*.


In this short video, UHS CIO Marcus Hobgood (below, left) and executive IT director Gregory Blanchard (below, right) tell what they were after—and what they’ve achieved. (Hint: Zero downtime, 40 percent faster responsiveness, 50 percent lower acquisition costs—and very happy clinicians.)



Watch the video and share your thoughts or questions in the comments. Have you moved your EMR database to an Intel® platform? Are your results similar to Marcus and Greg’s? Any insights to share based on your transition?


Join and participate in the Intel Health and Life Sciences Community.


Learn more about Intel® Health & Life Sciences.


Stay in touch: @IntelHealth, @hankinjoan

Read more >

Intel DC P3608 SSD: Intel’s new x8 NVMe SSD takes on billions of SQL Server 2014 data rows

One thing most DBAs know is that column-based indexes beat row-based ones for compression efficiency and require much less I/O on disk, enabling much faster analytics queries against very large databases in the terabyte class. These indexes are extremely efficient. But does all this pair well with better hardware?


The answer is yes. Better hardware always matters, just as better engineering wins in automotive for safety, efficiency, and fun to drive.
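As a toy illustration of why column-ordered storage compresses better than row-ordered storage (generic Python with zlib, not SQL Server’s actual columnstore format; the table and its columns are invented for the example):

```python
# Values within one column are similar, so a general-purpose compressor
# finds long runs in a column-major layout; in a row-major layout those
# runs are broken up by the other fields of each row.
import zlib

# Hypothetical table: (id, status, region, year) for 1000 rows.
rows = [((i * 37) % 1000, "ACTIVE", "US-WEST", 2015) for i in range(1000)]

# Row-major layout: the fields of each row are interleaved.
row_major = "".join(f"{a}|{b}|{c}|{d};" for a, b, c, d in rows).encode()

# Column-major layout: each column is stored contiguously.
col_major = (
    "".join(str(r[0]) for r in rows)
    + "".join(r[1] for r in rows)
    + "".join(r[2] for r in rows)
    + "".join(str(r[3]) for r in rows)
).encode()

row_size = len(zlib.compress(row_major))
col_size = len(zlib.compress(col_major))
# The column-major layout compresses to a markedly smaller size.
```

The repetitive status, region, and year columns collapse to almost nothing when stored contiguously, which is the intuition behind the compression efficiency claimed above.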


The same is true with NVMe technology, the standards-based PCIe solid-state drive technology. NVMe-based SSDs are the only kind of PCIe-based SSD that Intel provides, and we did a lot to invent the standard. Is it fun to run very large TPC-H-like queries against this type of drive? Well, let me show you.


Here is some data we put together showing the maximum throughput of our new x8 P3608 against our best x4 card, the P3700. To put this into perspective, I also share the SATA versus PCIe run time for all 22 queries in the TPC-H specification as implemented in HammerDB.


At the bottom of the blog is a link to the full data sheet from our testing.


PCIe x8 and 4 TB of usable capacity from Intel is now here: on September 23, 2015, we released the new P3608. So how many terabytes of SQL Server data warehouse do you want to deploy with this technology? With four x8 cards and 16 TB of raw capacity, you could support over 40 TB of compressed SQL Server data using the Fast Track Data Warehouse architectures, thanks to the excellent compression available with this architecture.
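The arithmetic behind that sizing can be sketched as follows. The 2.5:1 compression ratio is an assumed figure, chosen only so the math lands near the "over 40 TB" claim; actual Fast Track compression ratios depend on the data:

```python
# Back-of-the-envelope capacity sizing for a four-card P3608 deployment.
cards = 4
usable_tb_per_card = 4                       # P3608 usable capacity per card
raw_tb = cards * usable_tb_per_card          # 16 TB of raw flash

assumed_compression_ratio = 2.5              # assumed columnstore ratio
effective_tb = raw_tb * assumed_compression_ratio   # ~40 TB of user data
```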


Here’s the data comparing our new x8 and x4 Intel PCIe drives. To give you some perspective on how much faster PCIe is than SATA, I am also including a graph of the entire suite of queries on PCIe (P3700) versus SATA (S3710).


Here we compare the P3608 to the P3700 for maximum throughput.




Here we compare the P3608 versus the P3700 for query time on the most IO intensive queries.



Finally, to give you some perspective, here is what a SATA drive can do with this kind of SQL Server database. This graph covers all 22 queries, not just the I/O-intensive ones above, and shows the total time to run all queries within HammerDB.


Lower is better.



You can see all the data here.


Read more >

This is not Innovation for the Future, this is Innovation for the Now – IDC Pan-European Healthcare Executive Summit 2015

I’m excited to be leading a workshop on ‘Accelerating Innovation in Healthcare’ at IDC’s Pan-European Healthcare Executive Summit in Dublin this week. The theme of integrated care and collaboration across the entire healthcare ecosystem is underpinned by innovation, whether that be innovation in hardware such as mobile devices or innovation in thinking around perceptions by providers of what is possible.


Rapid Growth of IoT in Healthcare

I’m particularly interested in how the Internet of Things, robotics and natural language interfaces can change the way healthcare providers deliver high quality care. You may wish to read my earlier blog for a great example of how the Internet of Things is having meaningful impact today, with MimoCare helping the elderly live a more independent life through the use of sensor technology. It is estimated that the Internet of Things in healthcare could be worth $117bn by 2020, so given that we’re still in the relatively early stages of IoT implementation in the sector, you get some idea of how rapid the adoption of these new technologies is likely to be. Healthcare providers need to be open to collaborating with innovators in this space and, encouragingly, there has been a lot of positive conversation about just that here in Dublin. The result of embracing IoT in healthcare? Lower costs, better patient outcomes and a real move towards prevention rather than cure.


Innovation for the Now

Other technologies discussed at the event included the Intel® RealSense™ Camera, which has the potential to be used across a range of scenarios. Bringing 3D depth-sensing technology to healthcare offers some exciting potential uses, from tracking the 22 joints of a hand to assist in post-operative treatment after hand surgery, to using emotion detection to assess the facial expressions of patients recovering from a stroke. This is not innovation for the future, this is innovation for the now. We’ve worked with GPC in the area of wound care management and I think the impact of RealSense™ is summarised succinctly by GPC Medical Director, Dr. Ian Wiles, who said: “[This is] not 3D for the sake of 3D, but better care using 3D”.


NLP brings Streamlined Workflows and Lower Costs

When I look at disruptive technologies in healthcare I’m seeing lots of discussion around Natural Language Processing (NLP). NLP has the potential to transform Electronic Medical Records (EMRs) by extracting structured information from unstructured text. Imagine taking historical medical data in the form of freestyle notes and being able to pull that data together into a more structured format to monitor performance and critique clinical decisions. The benefits of NLP to providers are obvious, streamlining workflows, better decision-making and lower costs, all of which benefits the patient too. This will of course require all players in the healthcare ecosystem to be more flexible when it comes to exchanging data. It’s still early stages for NLP but I will share some of the work Intel is undertaking in this area in a future blog. If you’d like to be kept up-to-date on this topic and others across the healthcare and life sciences spectrum please do leave your details here.
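The extraction idea can be sketched very simply. This is a hypothetical illustration with invented field names and patterns; production clinical NLP uses trained language models, not regexes:

```python
# Pull structured fields out of a free-text clinical note using simple
# patterns, turning unstructured prose into queryable records.
import re

NOTE = "Pt is a 67 y/o male. BP 142/91. Dx: type 2 diabetes. Rx: metformin 500mg."

PATTERNS = {
    "age": r"(\d+)\s*y/o",
    "blood_pressure": r"BP\s*(\d+/\d+)",
    "diagnosis": r"Dx:\s*([^.]+)",
    "medication": r"Rx:\s*([^.]+)",
}

def extract(note: str) -> dict:
    """Return whichever structured fields the patterns can find."""
    out = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, note)
        if match:
            out[field] = match.group(1).strip()
    return out

record = extract(NOTE)
```

Once notes are reduced to records like this, monitoring performance and critiquing clinical decisions becomes a database query rather than a reading exercise.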


Read more >

New Storage Performance Development Kit Goes Live

By Nathan Marushak, Director of Software Engineering for the DCG Storage Group at Intel



New non-volatile memory (NVM) technologies are transforming the storage landscape—and removing the historical I/O bottleneck caused by storage. With the performance of next-generation technologies like 3D XPoint, latency will now be measured not in milliseconds but nanoseconds.


While this is a huge step forward, fast media alone doesn’t get us to blazingly fast application performance. The rise of next-generation storage simply shifts the I/O bottlenecks from storage media to the network and software.


At Intel, we seek to address these bottlenecks, so developers can take full advantage of the potential performance of new NVM technologies. That’s a key goal driving the new Storage Performance Development Kit (SPDK), announced today at the Storage Developer Conference (SDC) in Santa Clara.


This new open source initiative, spearheaded by Intel, applies the high-performance packet processing framework of the open source Data Plane Development Kit (DPDK) to a storage environment. SPDK offers a way to do things in Linux user space that would typically require context switching into the kernel. By enabling developers to work in user space, it improves performance and makes development simpler for storage developers.


As part of the SPDK launch, Intel is releasing an open source user space NVMe driver. Why is this important? For developers building their storage applications in user space, this driver enables them to capitalize on the full potential of NVMe devices. Andy Warfield, CTO of Coho Data, says, “The SPDK user space NVMe driver removes a distracting and time consuming barrier to harnessing the incredible performance of fast nonvolatile memory. It allows our engineering team to focus their efforts on what matters: building advanced functionality. This translates directly into more meaningful product development and a faster time to market.”


A lot of storage innovation is occurring in user space. This includes efforts like containers, Ceph, Swift, Hadoop and proprietary applications designed to scale storage applications out or up. For applications like these, the SPDK NVMe polled-mode driver architecture delivers efficient performance and allows a single processor core to handle millions of IOPS. Removing or circumventing system bottlenecks is key to realizing the performance promised by next-generation NVM technologies. For this reason, companies like Kingsoft Cloud have joined the SPDK community, and their experiences and feedback have already influenced the course of the project. Kingsoft Cloud is a leading cloud service provider, provisioning online cloud storage and hosting services to end users in China. “We will continue to evaluate SPDK techniques and expect to leverage SPDK for improving Kingsoft Cloud’s scale-out storage service,” said Mr. Zhang, chief storage architect at Kingsoft Cloud.
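The polled-mode pattern can be sketched conceptually as follows. This is a Python mock-up of the idea, not SPDK’s actual C API; the queue pair and device are simulated:

```python
# Polled-mode I/O sketch: submit commands, then repeatedly poll a
# completion queue from user space instead of sleeping on an interrupt.
# A dedicated core spinning on this loop avoids interrupt and context-
# switch overhead, which is how a single core can reap huge IOPS counts.
from collections import deque

class MockNvmeQueuePair:
    """Stand-in for a hardware submission/completion queue pair."""

    def __init__(self):
        self._inflight = deque()

    def submit_read(self, lba, callback):
        # Pretend the device accepted the command and will complete it.
        self._inflight.append((lba, callback))

    def process_completions(self, max_completions=32):
        """Reap finished commands without blocking; return how many."""
        done = 0
        while self._inflight and done < max_completions:
            lba, callback = self._inflight.popleft()
            callback(lba)   # invoke the completion callback
            done += 1
        return done

completed = []
qpair = MockNvmeQueuePair()
for lba in range(4):
    qpair.submit_read(lba, completed.append)

# The polling loop a dedicated core would spin on:
while qpair.process_completions():
    pass
```

The key design point is that `process_completions` never blocks: the application decides when and how often to reap, which keeps the whole I/O path in user space.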


With this announcement, storage developers can join an open ecosystem that is leveraging shared technology. This adds up to faster time to market while positioning organizations to capitalize on future NVM technology and storage advancements.


Intel’s work on SPDK won’t stop with today’s announcement. We expect to work closely with the broad ecosystem to enrich the features and functionality of SPDK in the months ahead. For example, we are already at work on additional modules that will join the NVMe driver in the SPDK open source community.


Ready to join the development effort? For easy access by all, the SPDK project resides on GitHub, and related assets are available via 01.org, the online presence for the Intel Open Source Technology Center (OTC). There you can also learn about other Intel storage software resources and about next-generation non-volatile memory technologies, including the new 3D XPoint technology.

Read more >