Recent Blog Posts

Part II: 5 Significant Healthcare Public Policy Trends for 2015

In my last blog post, we looked at the first two significant policy issues that will shape the future of health IT this year and beyond: EHR meaningful use and interoperability. Today, we focus on alternative payments, telehealth care delivery models, and (briefly) ICD-10.


Alternative Payment and Care Delivery Models

A newly proposed CMS Shared Savings Program rule focuses on more ACO flexibility, greater performance-based risk and reward, and the use of innovative care coordination and telehealth tools. While I am still holding out for passage of bipartisan, bicameral SGR/FFS reform legislation, there has been real progress out of the Department of Health and Human Services (HHS), which has proposed phasing in alternative payment models that pair outcomes- and quality-based payments with a smaller fee-for-service reimbursement. Basically, paying providers for value, not volume.


Through this January announcement:


  • HHS has set a goal of tying 30 percent of traditional, or fee-for-service, Medicare payments to quality or value through alternative payment models, such as ACOs, PCMHs or bundled payment arrangements, by the end of 2016, and tying 50 percent of payments to these models by the end of 2018


  • HHS also set a goal of tying 85 percent of all traditional Medicare payments to quality or value by 2016 and 90 percent by 2018 through programs such as the Hospital Value-Based Purchasing and Hospital Readmissions Reduction programs


Note: In 2011, Medicare made almost no payments to providers through alternative payment models, but today such payments represent approximately 20 percent of Medicare payments. The goals announced in January represent a 50 percent increase by 2016.


  • To put this in perspective: in 2014, Medicare fee-for-service payments were $362 billion, so a significant amount of payment will be shifting quickly into alternative payment models. Nor will this trend be tied to Medicare alone; all insurers, including Medicaid, will be moving briskly in this direction
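A rough back-of-the-envelope sketch gives a sense of the dollars in motion; it uses only the $362 billion FFS figure and the HHS percentage targets cited above:

```python
# Rough scale of the HHS targets, using the $362B Medicare FFS figure above.
FFS_2014_BILLIONS = 362

def dollars_in_alternative_models(ffs_billions, target_share):
    """Dollars (in billions) that a given target share would represent."""
    return ffs_billions * target_share

by_2016 = dollars_in_alternative_models(FFS_2014_BILLIONS, 0.30)  # 30% goal
by_2018 = dollars_in_alternative_models(FFS_2014_BILLIONS, 0.50)  # 50% goal
print(f"~${by_2016:.0f}B by end of 2016, ~${by_2018:.0f}B by end of 2018")
```

In other words, on the order of $100 billion or more in annual payments moving under alternative models within two years.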


HHS has adopted a framework that categorizes health care payment according to how providers receive payment to provide care:


  • Category 1—fee-for-service with no link of payment to quality
  • Category 2—fee-for-service with a link of payment to quality
  • Category 3—alternative payment models built on fee-for-service architecture
  • Category 4—population-based payment
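As a hedged illustration, the four-category framework can be expressed as a simple lookup. The model-to-category assignments below are my own examples, not an official CMS mapping:

```python
# HHS payment framework (Categories 1-4); example assignments are illustrative.
PAYMENT_CATEGORIES = {
    1: "fee-for-service, no link to quality",
    2: "fee-for-service, linked to quality",
    3: "alternative payment models on FFS architecture",
    4: "population-based payment",
}

# Hypothetical examples of where common payment models might land.
EXAMPLE_MODELS = {
    "traditional FFS claim": 1,
    "Hospital Value-Based Purchasing": 2,
    "ACO shared savings": 3,
    "bundled payment": 3,
    "capitated/population payment": 4,
}

def describe(model):
    return PAYMENT_CATEGORIES[EXAMPLE_MODELS[model]]

print(describe("ACO shared savings"))
```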


Medicare telehealth expansion includes use of health IT for chronic care

Medicare has expanded its covered telehealth services to include wellness visits (HCPCS code G0438) as well as several behavioral health visits. Beginning in January 2015, Medicare will reimburse physicians $40-$42 per patient per month for chronic care management services for patients with more than one chronic condition.

  • Physicians must use EHR systems that meet the 2011 or 2014 Edition certification criteria for meaningful use and deliver a defined scope of service
  • Chronic care management is expected to be provided by clinical staff directed by a physician or other qualified health professional. The expected level of service is 20 minutes per patient per month
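To make the economics concrete, here is a hedged sketch. The panel size is hypothetical; the $40-$42 rate and 20-minute service level come from the rule described above:

```python
# Hypothetical practice: estimate chronic care management (CCM) revenue.
RATE_LOW, RATE_HIGH = 40, 42      # $/patient/month, per the 2015 rule
MINUTES_PER_PATIENT = 20          # expected service level per month

def monthly_revenue(eligible_patients, rate):
    return eligible_patients * rate

def monthly_staff_hours(eligible_patients):
    return eligible_patients * MINUTES_PER_PATIENT / 60

patients = 250  # hypothetical panel with 2+ chronic conditions
low, high = (monthly_revenue(patients, r) for r in (RATE_LOW, RATE_HIGH))
print(f"${low}-${high}/month, ~{monthly_staff_hours(patients):.0f} staff hours")
```

For a hypothetical 250-patient panel, that is roughly $10,000-$10,500 per month against about 83 clinical staff hours of coordination work.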



Oh, and let’s not forget about our decade-long transition to ICD-10 on October 1, 2015.


So as you can see, and are probably well aware, 2015 has already started off with seismic shifts in public policy aimed at stabilizing the growth rate of our annual healthcare costs. I don't believe anyone can kid themselves into thinking that we will ever reduce our nation's healthcare expenses. What many of us are passionately working toward is a smarter, sustainable healthcare system: one that at least reduces the rate at which our costs are increasing, delivers intrinsic value, and turns the patient into an informed and accountable consumer. We can all dream, can't we?


What questions do you have?


As a healthcare innovation executive and strategist, Justin is a corporate, board and policy advisor who also serves as an Entrepreneur-in-Residence with the Georgia Institute of Technology’s Advanced Technology Development Center (ATDC). In addition, Mr. Barnes is Chairman Emeritus of the HIMSS EHR Association as well as Co-Chairman of the Accountable Care Community of Practice. Barnes has appeared in more than 1,000 journals, magazines and broadcast media outlets relating to national leadership of healthcare and health IT. Barnes also recently launched the weekly radio show, “This Just In.”

Read more >

Exploring Media Server Success with NFV at Mobile World Congress 2015

By Erin Jiang, Media Processing Segment Manager



As I was flying home from Mobile World Congress last week, I reflected on what a great show it was for our virtual media processing and serving technology, and on how far this technology has come in its evolution.

It wasn’t too long ago that video processing required a dedicated appliance. But at MWC, we teamed up with Huawei to demonstrate real-time NFV video conferencing with 4K video processing on a high-density, power-efficient server powered by the Intel® Core™ i7 processor, utilizing Intel’s integrated hardware acceleration with Intel® Iris™ Pro graphics and Intel® Media Server Studio.

The demonstration was part of our collaboration with Huawei to deliver next-generation NFV video and audio processing solutions for the company’s cloud-based media platform.


Based on Intel media server solutions and Intel graphics virtualization technology, Huawei’s server solution delivers high-density video encoding, composition, and decoding on OpenStack-managed virtual machines.

This joint project with Huawei was definitely a highlight, but we had some other noteworthy mobile video service demonstrations in the Intel booth from partners Elemental Technologies, Artesyn Embedded Technologies and Vantrix.


Elemental demonstrated its next-generation software-defined video platform, including its Elemental Live and Elemental Delta products.  Elemental and Intel are collaborating to enable mobile network operators to monetize network investment in new ways with personalized advertising that is customizable by geography, node, device, or even by subscriber.


Artesyn and Vantrix combined their technologies to show virtualized video transcoding for over-the-top services and live/linear video content. This technology packs a very high stream density in a compact form factor and provides both mobile video delivery as well as speech transcoding.


The transcoding server is built on Artesyn’s SharpStreamer™ edge platforms, which use Intel’s integrated hardware-accelerated graphics for cloud-based media processing in cloud RAN and virtual RAN network functions.

With three partners using Intel technology to revolutionize video services delivery, I would say that MWC was a very good show that demonstrated just how far we’ve come and the promise of a great future.

Read more >

Transforming China

China has a rich history of innovation, and is the birthplace of many inventions that have shaped the world. Papermaking can be traced back to the year 105 CE, and the arrival of woodblock printing in 868 CE created the world’s first print culture. Together with the compass and gunpowder, these make up the four great inventions that were celebrated in the Beijing Olympics Opening Ceremony.


I recently had an opportunity to visit China to discover how today’s technologies are shaping major businesses. In the cloud, retail and financial services sectors especially, there was a strong desire to accelerate the pace of innovation, and to find new ways to reach customers. The size and scope of transformation is astounding, much of it led by the rise of younger, well-educated business leaders, who are shaking up workplaces and business models. The rapid change is supported by the financial “big bang” in Shanghai, which is driving the financial sector towards a leadership position on the global stage.

Engaging with Customers



A significant challenge in retail comes from the returns that result when customers buy clothes that are the wrong size. One of China’s biggest retailers has identified the opportunity to use 3D cameras in computing devices to help customers to choose clothes that are the right size, potentially from their homes as the cameras become integrated in more and more consumer devices. 3D cameras can be used to measure photographic subjects, enabling new applications that can offer tailored advice to shoppers seeking the perfect fit. Customer confidence and satisfaction are likely to improve significantly when customers can be sure of ordering the right size, which creates fantastic growth opportunities for businesses that demonstrate leadership in this area.


Reaching Out to Customers


One fascinating area of business at the moment is the rapid rise of the shared economy and the disruption it is creating for traditional business. In this model, having access to something often trumps the need to own it. Consider examples such as AirBnB* accommodation services, or Uber* and Didi Dache* transportation services. This is causing many mainstream enterprises to put a spotlight on how to transform and create a digital platform for the future. I spoke with a large insurance provider, and it was clear they knew they needed to transform but were unsure how to approach doing so. If you think about the shared economy examples above, you could ask: what’s the insurance model for them? It’s surely different from a traditional business. Is this an opportunity for first movers in the insurance industry to develop new offerings? If so, how would one deploy value-added services to customers in the mobile manner they now demand, and do so rapidly enough to gain market traction? Part of the challenge is that many IT assets, systems and information sources are locked up in legacy architecture that has historically been difficult to access. A major advancement is the sophistication and ease of use of modern API management technologies, which let organizations rapidly reach multiple sources of information and develop and mobilize new services in a fraction of the time it used to take.


The creation of the applications is sometimes complicated by the variety of mobile platforms in use. Many organizations worldwide are using HTML5 to create cross-platform applications, and one of the Chinese retailers I spoke to said they’ll be looking at tools that accelerate HTML5 development, so they can target more devices without increasing their development costs.


IoT for Energy


The Internet of Things (IoT) is a key driver of business transformation, enabling businesses to have unprecedented insight into their infrastructure. One of the energy companies I met with is looking to build intelligence into everything from the drill head to the refinery, and even into their network of petrol stations. One supplier was quoting $15,000 for an attachment to make a drill intelligent. When you consider they literally have hundreds of thousands of assets to re-instrument, this clearly doesn’t scale, and it doesn’t represent the most cost-effective solution available today, when standardized gateway solutions can add intelligence and connectivity at a fraction of the cost. This improving cost vector, along with growing maturity in this new area, will allow companies to make massive (and cost-effective) improvements to the way they deploy intelligence across their business.
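Some hedged arithmetic shows why the quoted price doesn't scale. The $15,000 figure comes from the supplier quote above; the asset count and gateway price are illustrative assumptions of my own:

```python
# Why $15,000 per asset doesn't scale: compare against a standardized gateway.
QUOTED_PER_ASSET = 15_000          # supplier quote cited in the post
assets = 200_000                   # illustrative: "hundreds of thousands"
gateway_per_asset = 500            # hypothetical standardized-gateway cost

quoted_total = QUOTED_PER_ASSET * assets
gateway_total = gateway_per_asset * assets
print(f"quoted: ${quoted_total / 1e9:.1f}B vs gateway: ${gateway_total / 1e6:.0f}M")
```

Under these assumptions the quoted approach is a multi-billion-dollar program, while the gateway approach is two orders of magnitude cheaper.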


Mobility in Healthcare


Healthcare is a sector that is being transformed by technology worldwide, and China is no exception. I had an insight into how one leading hospital is using technology to improve communications between doctors (with tablets), nurses (with smartphones) and patients (with interactive kiosks); and how workflows between the hospital and the pharmacy are being integrated. They’ve been able to cut the time taken to provide a prescription by half, and 55% of patients now use the mobile application or interactive kiosks to make appointments and check in. That is increasing efficiency and cutting waiting times, delivering real benefits to patients and improving the level of care they receive.


Coming Together for the Cloud


The 8th Annual Internet Protocol Data Casting (IPDC) Forum brought together the top cloud computing companies, ranging from small and highly specialized providers to companies offering hyperscale solutions. It was a great opportunity to meet with peers and explore ways we can collaborate to drive innovation and help more businesses deploy the cloud successfully. There was strong interest in the SMAC paradigm (social, mobile, analytics and cloud) which is the new platform for building a digital business. In China as much as anywhere, it was clear that this paradigm will be transforming the workplace of tomorrow profoundly.


I’m looking forward to seeing how China uses all these technologies to drive its continued rapid growth, building on its reputation for creativity and invention.


-Andrew Moore


*Other names and brands may be claimed as the property of others.

Read more >

Re-Architecting the Data Center: Intel and Industry Innovation at Open Compute Summit

Since the Open Compute Project’s (OCP) inception in 2011, Intel has been a key contributor. As a founding member of OCP, Intel strives to continue increasing the number of open solutions based on OCP specifications available on the market. That mission was front and center today at the annual OCP Summit in San Jose, where we talked about a number of OCP-based products available from Intel and our ecosystem partners.


One of the highlights today at the OCP Summit was the introduction of the Intel Xeon processor D product family, announced by Intel on March 9th. It was an exciting moment to share more details with the OCP Summit audience about the first Intel Xeon-based product manufactured on 14nm. Intel has leveraged our extensive data center experience and our leading 14nm process technology to create a highly integrated system-on-chip (SoC) that combines Intel Xeon compute cores, Intel networking, and I/O on a single processor.


During my keynote I had the pleasure of welcoming Jason Taylor, Facebook’s VP of Infrastructure, to talk about how Intel and Facebook are collaborating to create solutions and share them with the OCP. During our conversation, Jason talked about an Intel Xeon processor D-based system called Yosemite that Facebook will adopt and contribute to OCP, and the power and performance benefits delivered by this new SoC. For Intel, the most important part of launching a new product is helping our customers to be successful, and it is very rewarding to see a key partner such as Facebook be an early adopter of our latest product and join us at the OCP Summit to share with the audience how the Intel Xeon processor D product family will benefit their data center.


In our demo booth at the OCP Summit we are demonstrating the first implementation of Intel Rack Scale Architecture (Intel RSA) based on OCP-compliant hardware. Intel RSA is a logical architecture that enables the disaggregation and pooling of compute, storage, and networking resources, allowing our customers to deliver higher performance while lowering the TCO of their data center systems. Already, there is a growing RSA ecosystem focused on the development of OCP hardware, with RSA evaluations under way in the data centers of leading cloud service providers.

Intel is also contributing a wide range of new networking and storage products and technologies. At our booth you can check out the first live demo of a 100-gigabit Ethernet switch for Intel RSA, as well as a new 40GbE adapter that supports the OCP 2.0 design specification.


As these examples should show, Intel is deeply committed to the OCP and its mission to enable the design and delivery of open, highly efficient hardware for scalable computing. To date, Intel has made several contributions to the OCP in the form of servers, racks, storage, and networking components aligned to OCP specifications and has worked with our partners on the development of 40 OCP systems.


You can expect us to continue to innovate and work together with our ecosystem partners to share specifications and best practices with OCP that deliver highly efficient solutions to the industry.


Intel and the Intel logo are trademarks of Intel Corporation in the United States and other countries. * Other names and brands may be claimed as the property of others.

Read more >

3D Imaging Creates Closer Relationships Between Clinician and Patient


Our latest video showcases how 3D imaging technology is used with mobile devices to help radiologists deliver improved patient care in hospitals.


We have all seen radiologists using a light box to analyse x-rays. Nowadays the light box has been replaced by powerful graphics workstations, which are usually stationary, making results difficult to move around a hospital. Intel technology, with powerful built-in rendering and networking capabilities, has made these images mobile – truly mobile.


More powerful processing technology is enabling the production of 3D images that let healthcare professionals view fractures from every angle on a mobile device in today’s hospital. Being able to zoom in on and rotate the injury on a tablet is no longer a futuristic vision. Intel technology has brought 3D visualisation to the patient’s bedside.


When I speak to doctors, they tell me they get a real feel for how bone fragments are positioned and consequently can provide the best possible treatment for realignment. And the patient benefits too, as treatment plans can be discussed in detail with a highly visual explanation of their injury on a tablet device. Clinicians have known for decades that the most effective patient outcomes are achieved when the patient is educated about their medical injury and buys in to the treatment plan.


We’re helping to bring images to life for doctors and patients here at Intel – check out our healthcare device selector tool to find the best solution for you and your patients.


Keep in touch with the latest healthcare IT news by following us on Twitter via @intelhealth and join our healthcare professionals’ community to receive a monthly email with best practice solutions to help you deliver better patient care.

Read more >

Part I: 5 Significant Healthcare Public Policy Trends for 2015

As we head toward HIMSS in Chicago next month, it’s a good time to take a look at the significant policy issues that will shape the future of health IT. While we will see tweaks to important legislation and regulation, the major public policy impacts that I envision for 2015 and even 2016 will revolve around EHR meaningful use, interoperability and most importantly in my book and strategy, alternative payment and care delivery models. Yes, ICD-10 is in there too but literally for how many years can we talk about that?


In this two-part blog series, I’ll look at the five issues that I see as priorities. Today’s topics: EHR and interoperability.


EHR Meaningful Use
EHR meaningful use will almost certainly grab the biggest headlines throughout the year, as we just saw with the popular CMS announcement delaying the Medicare EHR meaningful use attestation deadline for the 2014 reporting year; eligible professionals now have until March 20, 2015.


There is also a new EHR meaningful use rule expected this spring, intended to be responsive to provider concerns about software implementation and information exchange readiness, and to reflect developments in the industry and progress toward program goals since the program began in 2011.


Here are a few highlights:


  • Shorten the EHR reporting period in 2015 to 90 days to accommodate these changes
  • Realign hospital EHR reporting periods to the calendar year to allow eligible hospitals more time to incorporate 2014 Edition software into their workflows and to better align with other CMS quality programs
  • Modify other aspects of the program to match long-term goals, reduce complexity, and lessen providers’ reporting burdens


Interoperability and Data Exchange

The Office of the National Coordinator for Health Information Technology (ONC) released its shared interoperability Roadmap on January 30.


The ONC sees health IT as an important contributor to improving health outcomes, improving health care quality and lowering health care costs. They further state that health IT should facilitate the secure, efficient and effective sharing and use of electronic health information when and where it is needed.


Here are a few highlights:


  • ONC suggests that the community must expand its focus beyond institutional care delivery and health care providers, to a broad view of person-centered health
  • Healthcare is being transformed to deliver care and services in a person-centered manner and is increasingly provided through community and home-based services that are less costly and more convenient for individuals and caregivers
  • The Roadmap Identifies Four Critical Near-Term Actions for Enabling Interoperability
    • Establish a coordinated governance framework and process for nationwide health IT interoperability
    • Improve technical standards and implementation guidance for sharing and using a common clinical data set
    • Enhance incentives for sharing electronic health information according to common technical standards, starting with a common clinical data set
    • Clarify privacy and security requirements that enable interoperability


A personal favorite inside the Roadmap is the call for alignment of private payer efforts with CMS policies and programs, including incentives for health information exchange and e-clinical quality measures that will enable the three- and six-year goals in the Roadmap. This is a key component that will garner a lot of broad stakeholder support including the critical support of caregivers and IT professionals who struggle to participate in quality and incentive programs due to their lack of coordination and ability to report on measures.


The ONC did create a terrific infographic that details this journey as well. Public comments on the ONC Interoperability Roadmap are open until April 3, 2015.


What questions about EHR or interoperability do you have?


Watch for the second part of this blog series to be posted soon.        



Read more >

Challenging the Perceived Lack of Incentives to Improve Cybersecurity

Is it a financially sound business decision for the industry to not invest in more cybersecurity? Recent news articles, congressional reports, and industry discussions have been cropping up around the community regarding the lack of incentives for companies to increase investments in cybersecurity. With so many stories of breaches and attacks, affecting tens of millions of customers, could it really be true?


It is an interesting discussion, but I don’t buy into the viewpoint that, based on data from recent breaches, the overall calculated security impact to companies is not significant enough to justify investments in better security.


First, a couple of notes about the approach of these analytical comparisons. The costs are being compared to the gross sales of the impacted companies. I believe a more relevant approach is to compare the impacts against net profits rather than gross sales, as these are overhead costs that take away from the bottom line. In the end, it can make a lot of difference to management whether an attack consumes a big chunk of your profit or, worse, pushes you from the green into the red side of the ledger. Secondly, most cost calculations don’t reflect the insurance premiums, which will be going up, that companies pay annually to be covered against breach-related losses. Very few consider the downstream effects on other vendors and business partners who are impacted and assume some of the loss. Finally, there is no good way of determining the long-term detrimental effects on customer goodwill. Every customer has a breaking point, especially where there is significant competition and alternative choices vying for patronage. Let me stop here, as these criticisms are not the real point. I believe the important aspects of this discussion are being missed altogether.
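A hedged numeric sketch shows why the denominator matters; the company figures below are entirely hypothetical:

```python
# Breach impact looks small against revenue but large against profit.
revenue = 10_000_000_000      # hypothetical: $10B gross sales
profit_margin = 0.08          # hypothetical: 8% net margin
breach_cost = 200_000_000     # hypothetical: $200M total breach impact

net_profit = revenue * profit_margin
share_of_sales = breach_cost / revenue
share_of_profit = breach_cost / net_profit
print(f"{share_of_sales:.0%} of sales, but {share_of_profit:.0%} of profit")
```

The same breach that rounds to noise against gross sales consumes a quarter of the year's profit under these assumptions.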


What disturbs me greatly is the lack of strategic vision. The relevance of this topic is not determined by looking back at last year’s breaches and comparing them to that year’s annual budget, sales, or profit. That is far too shortsighted. It is like evaluating the value of investing in braking technology when the automobile first emerged. Early cars didn’t have anything we would consider a practical way of stopping well. The cost of developing braking systems probably seemed extreme when manufacturers looked back on the previous year and saw the low cost of accidents, the few vehicles on the road, and customers who were still happy just to own one. But as cars got faster and more people took to the roads, the situation changed fast. Developing reliable brakes became important. The electronic ecosystem is moving much faster.

Instead of looking back, we must have the vision to see where the trend and acceleration of evolving events will take us. As the world quickly becomes more reliant on and integrated with technology, attacks will have a correspondingly greater effect. What is needed is an analysis that shows, over time, the trajectory of attack frequency, direct losses, recovery costs, and secondary impacts such as the erosion of customer goodwill and additional regulatory hurdles. Then extrapolate this against how technology will expand in size, permeate our lives, and push an increase in the value of data and services. This will drive ever-greater potential impacts.


The current inconvenience experienced by customers may transition to frustration and over time, to true dissatisfaction.  Someone having to swap out an old credit card with a replacement in their wallet is no big deal.  But what if their car won’t start, their credit is cratered when they are trying to buy groceries, a prescription is filled with the wrong medication, their smartphone stops working, or their retirement account is emptied?  Everyone has a threshold where purchasing decisions and brand loyalty will falter. 


Not too far in the future the types of impact may drastically change.  As attacks shift from relatively simple denial-of-service and data breach attacks, which are fairly straightforward to recover from, to more complex attacks which tamper with the integrity of internal transactions, we will see costs and impacts skyrocket!  It is coming (I would say we are already crossing this boundary with the Carbanak malware stealing a billion dollars from banks).


We must consider how losses will be viewed as we approach the technology cliff where future cybersecurity issues put people’s lives in harm’s way. What happens when cars (driverless or not) are hijacked and under the control of hackers, industrial safety systems are compromised leading to disaster, or defense systems are taken over by unfriendly groups? For each of these, we are already seeing either real incidents or research showing proof-of-concept success.


In the end, the axiom holds true: Security is only relevant when it fails. Cybersecurity investment is future facing. Calculating the cost incentives of past failures is not nearly as interesting as mapping the trajectory of where they will be in three or five years. A lack of investment now places these organizations on a path of great losses in the future. The real conversation should not be about the lack of current incentives, but the analysis of what the future incentives will become.




Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts


Read more >

Extending Intelligence to the Edge

By Nidhi Chappell, Product Line Manager for the Intel® Xeon® Processor D family, Intel



Cloud and telecom service providers are caught in a constant battle to speed new service delivery, handle rapid growth in numbers of users, and contain IT costs. To achieve these critical goals, service providers need to look for opportunities to optimize infrastructure for density and cost, both in the data center and at the network edge.


That’s the idea behind the new Intel® Xeon® processor D family, the first system-on-a-chip (SoC) in the Intel Xeon processor portfolio. Designed for dense, small form factors, this new SoC puts the performance and advanced intelligence of Intel Xeon processors into dense, low-power networking, storage, and microserver infrastructure.


This is an ideal SoC for cloud service providers operating hyperscale data centers who want to use microservers to process lightweight workloads, such as dynamic web serving, memory caching, web hosting, and warm storage. They can now pack more compute density into their data centers. Better still, with support for up to 128 GB of memory, the SoC allows service providers to meet the needs of more users per server.


The Intel Xeon processor D family is also a great choice for telecommunications service providers who want to replace fixed function, proprietary, network edge appliances with open-standards Intel® Architecture. They can now get the benefits of Moore’s law, which has lowered the cost of compute by 60x over the past 10 years alone. They also get the benefit of standard Intel Architecture that can run common software across Intel product lines, generation after generation.
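As a rough sanity check on that figure, a 60x improvement over 10 years implies a doubling roughly every 1.7 years:

```python
import math

# 60x cost-of-compute reduction over 10 years -> implied doubling period.
improvement, years = 60, 10
doublings = math.log2(improvement)          # ~5.9 doublings
years_per_doubling = years / doublings
print(f"~{years_per_doubling:.1f} years per doubling")
```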


Intel Xeon Intelligence in a Low-power SoC


When it comes to performance, don’t let the small size fool you. We’re talking about “big core” performance and intelligence in a microserver form factor. This new SoC delivers up to 3.4 times the performance per processor and up to 1.7 times the performance per watt of the Intel® Atom™ processor C2750 SoC for dynamic web serving workloads. (1, 2) Here’s a good analogy to help translate the performance density of Xeon D: in a typical server rack, you could pack up to 150 Xeon D processors and simultaneously host the entire population of L.A. and Chicago combined — all 6.6 million people! (3)
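Those claims can be cross-checked against the configuration footnotes below (43843 simultaneous users per system, and the users-per-watt figures); this is just arithmetic on the numbers already in the post:

```python
# Sanity-check the rack-density and perf/W claims using the footnote figures.
users_per_system = 43843      # simultaneous users per Xeon D system (footnote 1)
systems_per_rack = 150        # "up to 150" processors in a typical rack

rack_users = users_per_system * systems_per_rack
print(f"{rack_users / 1e6:.2f} million simultaneous users per rack")

perf_w_xeon_d = 487.15        # users/W, microserver-chassis estimate (footnote 1)
perf_w_atom = 280.3           # users/W, Atom C2750 baseline (footnote 1)
print(f"perf/W ratio: {perf_w_xeon_d / perf_w_atom:.2f}x")
```

The rack figure comes out to about 6.58 million users, matching the "6.6 million people" analogy; the microserver-chassis estimate gives roughly 1.74x users per watt, in line with the "up to 1.7 times" claim.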


Despite its dense design, the Intel Xeon processor D family delivers all the intelligence you’ve come to expect in Intel Xeon processors. The SoC includes server-class Reliability, Availability, and Serviceability (RAS), hardware-enhanced security and compliance features, and Platform Storage Extensions that offer new intelligence for dense, low-power storage solutions.


Looking ahead, you can expect more good news about the Intel Xeon processor D family in the coming months. Intel plans to release more Xeon D processor versions in the second half of 2015, to build on the 4- and 8-core SoCs that are available today. These processors will be available in thermal design points of near 20 watts to 45 watts, making them ideal for networking and Internet-of-things usages.


All of this gives cloud and telecom service providers a lot to look forward to. They will soon have a broader range of options to address the pressing need for low-power, high-density infrastructure—from the data center to the network edge.


For a closer look at the Intel Xeon processor D family, visit





Intel, the Intel logo, Xeon, Intel Atom, and Intel Core are trademarks of Intel Corporation in the United States and other countries.

* Other names and brands may be claimed as the property of others.




1. Source: Intel performance estimates based on internal testing of Dynamic Web Serving (Performance and Performance per Watt)

New Configuration: Intel® Xeon Processor D-based reference platform with one pre-production Xeon Processor D (8C, 1.9GHz, 45W), Turbo Boost enabled, Hyper-Threading enabled, 64GB memory (4x16GB DDR4-2133 RDIMM ECC), 2x 10GBase-T X552, 3x S3700 SATA SSD, Fedora* 20 (3.17.8-200.fc20.x86_64), Nginx* 1.4.4, Php-fpm* 15.4.14, memcached* 1.4.14. Simultaneous users=43843, maximum un-optimized CRB wall power=114W, Perf/W=384.5 users/W. Note: The Intel CRB (customer reference board) platform is not power optimized; expect production platforms to consume less power. Other implementations based on a microserver chassis: power=90W (estimated), Perf/W=487.15 users/W

Base Configuration: Supermicro SuperServer* 5018A-TN4 with one Intel Atom Processor C2750 (8C, 2.4GHz, 20W), Turbo Boost enabled, 32GB memory (4x8GB DDR3-1600 SO-DIMM ECC), 1x 10GBase-T X520, 2x S3700 SATA SSD, Ubuntu* 14.10 (3.16.0-23-generic), Nginx* 1.4.4, Php-fpm* 15.4.14, memcached* 1.4.14. Simultaneous users=12896, maximum wall power=46W, Perf/W=280.3 users/W
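The headline 3.4x and 1.7x figures can be re-derived from the two configurations above; a minimal sketch in Python (the 90 W figure is Intel's estimate for a power-optimized microserver chassis, since the CRB itself is not power optimized):

```python
# Re-deriving the footnote-1 ratios from the published configurations.
xeon_d_users = 43843      # simultaneous users, Xeon D reference platform
atom_users = 12896        # simultaneous users, Atom C2750 baseline
xeon_d_watts_est = 90     # W, estimated power-optimized microserver chassis
atom_watts = 46           # W, measured wall power for the baseline

# Per-processor performance ratio
perf_ratio = xeon_d_users / atom_users

# Performance-per-watt ratio (users/W vs. users/W)
perf_per_watt_ratio = (xeon_d_users / xeon_d_watts_est) / (atom_users / atom_watts)

print(f"performance per processor: {perf_ratio:.1f}x")        # 3.4x
print(f"performance per watt:      {perf_per_watt_ratio:.1f}x")  # 1.7x
```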

2. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Intel does not control or audit the design or implementation of third party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported and confirm whether the referenced benchmark data are accurate and reflect performance of systems available for purchase. For more information go to

3. 150 servers × 43,843 simultaneous users per server, at approximately 90 W per server, in a 15 kW rack
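The rack-density analogy in the body text follows directly from the numbers in footnote 3; a quick check (the 15 kW rack budget and 90 W per server are taken from that footnote):

```python
servers_per_rack = 150
users_per_server = 43843   # from the Xeon D dynamic web serving test above
watts_per_server = 90      # estimated, power-optimized microserver chassis
rack_power_budget_w = 15000  # 15 kW rack

total_users = servers_per_rack * users_per_server
total_watts = servers_per_rack * watts_per_server

print(total_users)   # 6576450, i.e. roughly 6.6 million people
print(total_watts)   # 13500 W, within the 15 kW rack budget
```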

Read more >

The Seed of Innovation: Intel Xeon Processor D Product Family

All throughout the world you will find that some of the most innovative and impactful things have incredibly small beginnings. For example, the giant sequoia, which grows to be the largest tree species in the world, starts as a tiny seed no larger than a grain of rice. All of the complex genetic makeup and intelligence needed to grow a massive living organism more than 250 feet tall is contained in a tiny footprint that can fit on the tip of your finger. A small seed, like the giant sequoia's, can provide the foundation for something that grows to massive heights and impacts the entire planet.

In the spirit of the giant sequoia seed, the Intel Xeon Processor D-1500 Product Family brings great intelligence in an extremely small footprint, enabling you to harness the incredible performance of Intel Xeon processors within a small, low power package that will help drive the transformation of the massive digital services economy and data center innovation.

Intel believes that everyone contains great potential, even though we as individuals represent only a small footprint on the face of the planet. To celebrate the opportunity we all have to start with something small and impact great things, Intel will provide the funds to plant one of the world's largest tree species. Each time someone Tweets #XeonDTreePromo, Intel will donate $1.00 to the Penny Pines Reforestation Program in the Sequoia National Forest to fund the planting of a giant sequoia tree, so that we can all harness the great intelligence and innovation contained in the tiny footprint of a sequoia seed and help make the world a better place together.





Promotion Details:

Intel will donate $1.00 to The Penny Pines Reforestation Program for every instance that the phrase #XeonDTreePromo is Tweeted in an English language post, starting at 9:00 a.m. PST March 9, 2015 and ending 9:00 p.m. PST March 14, 2015, with the total possible donation limited to no more than $10,000. The funds donated to The Penny Pines Reforestation Program run through the Sequoia National Forest may be used to plant Sequoia seedlings, for general reforestation, or drawn upon as improvement projects are determined by resource managers within the program.

Read more >

Intel Xeon Processor E7 v2 Family, SAP HANA & VMware: Virtually, a Winning Combo

We've worked with SAP* for more than a decade to deliver better performance for SAP applications running on Intel® architecture. And the results just keep getting better. The latest Intel® Xeon® processor E7 v2 family can help IT get even more insights from SAP HANA*, faster. When you add VMware vSphere* to the mix, you'll see a huge boost in efficiency without adding more servers.

Why virtualize? Data centers running mission-critical apps are pushing toward more virtualization because it can help reduce costs and labor, simplify management, and save energy and space. In response to this push, Intel, SAP, and VMware have collaborated to make a robust solution for data center virtualization with SAP HANA.

What does this mean for IT managers? Your data center can grow with more scalable memory. You can have the peace of mind your data is protected with greater reliability. And, you’ll see big gains in efficiency, even when virtualized.

Grow with scalable memory

The Intel Xeon processor E7 v2 family offers 3x the memory capacity of the previous generation. This not only dramatically boosts SAP HANA performance, but it also gives you plenty of headroom as your data grows. The Intel Xeon processor E7 v2 family also supports up to 6 terabytes of memory in four-socket servers using 64 GB dual in-line memory modules (DIMMs).
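As a rough sanity check on the 6-terabyte figure, the capacity works out if each socket exposes 24 DIMM slots (an assumption about the four-socket E7 v2 platform maximum, not stated in this post):

```python
sockets = 4
dimm_slots_per_socket = 24   # assumed platform maximum per socket
dimm_size_gb = 64            # 64 GB DIMMs, as cited in the text

total_gb = sockets * dimm_slots_per_socket * dimm_size_gb
print(total_gb, "GB =", total_gb // 1024, "TB")   # 6144 GB = 6 TB
```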


Relax, your mission-critical data is protected

We designed the Intel Xeon processor E7 v2 family to support improved reliability, availability, and serviceability (RAS) features, which means solid performance all day, every day with 99.999% uptime[4]. Intel® Run Sure Technology adds even more system uptime and increases data integrity for business-critical workloads. Whether you run SAP HANA on physical machines, virtualized, or in a private cloud, your data and mission-critical apps are in good hands.


Be amazed at the gains in efficiency

When Intel, SAP HANA, and VMware join forces in a virtualized environment, efficiencies abound. Data processing can be twice as fast with a PCI Express* (PCIe) solid-state drive. You can get up to 4x more I/O bandwidth, which equates to 4x more capacity for data circulation.5,6 Hyper-Threading doubles the number of execution threads, increasing overall performance for complex workloads.


In summary, the Intel Xeon processor E7 v2 family unleashes the full range of SAP HANA capabilities with the simplicity and flexibility of virtualization on vSphere. Read more about this solution in our recent white paper, “Go Big or Go Home with Raw Processing Power for Virtualized SAP HANA.”

Follow #TechTim on Twitter and his growing #analytics @TimIntel community.

Read more >