Latin America Jumps into the Parallel Universe Computing Challenge

Mike Bernhardt is the Community Evangelist for Intel’s Technical Computing Group


At our inaugural Parallel Universe Computing Challenge (PUCC) at SC13, we had no representatives from Latin America. That’s changed for the 2014 PUCC with the proposed participation of a team representing supercomputing interests in Brazil, Colombia, Costa Rica, Mexico, and Venezuela.

Several of the team members are from the Universidad Industrial de Santander (UIS) in Bucaramanga, Colombia. UIS, a research university, is the home of the Super Computing and Scientific Computing lab that also provides HPC training for Latin American and Caribbean countries—which is why they were able to garner additional team members from universities in other countries.

The lab’s research is focused on such science and applied science areas as bioinformatics and computational chemistry, materials and corrosion, condensed matter physics, astronomy and astrophysics; and on computer science areas including visualization and cloud computing, modeling and simulation, scheduling and optimization, concurrency and parallelism, and energy-aware advanced computing.


We talked with team captain Gilberto Díaz, infrastructure chief of the supercomputer center at UIS, about the team he was assembling.

Q: Why did the team from Latin America decide to participate in the PUCC?
A: We would like to promote and develop more widespread awareness and use of HPC in our region. In addition to the excitement of participating in the 2014 event, our participation will help us prepare students in master’s and PhD programs to better understand the importance of code modernization, as well as prepare them to compete in future competitions.

Q: How will your team prepare for the Intel PUCC?
A: All of us work in HPC and participate in scientific projects where we have the opportunity to develop our skills.

Q: What are the most prevalent high performance computing applications in which your team members are involved?
A: We are developers; therefore, we are more familiar with programming models (MPI, CUDA, OpenMP) than with specific applications.

Q: SC14 is using the theme “HPC Matters” for the conference. Can you explain why “HPC Matters” to you?
A: HPC is a fundamental tool for tackling challenging problems whose solutions will represent significant advances for humanity: for example, new drug development for disease treatment, high-tech components for cars and planes, and weather simulations to understand how we are affecting the world’s climate.

Q: What is the significance of your team name (“SC3”)?
A: Super Computing and Scientific Computing in Spanish is Supercomputación y Cálculo Científico, which is the name of the lab at the Universidad Industrial de Santander.

Q: Who are your team members?
A: We have six people in addition to myself so far:

  • Robinson Rivas, Professor at Universidad Central de Venezuela (UCV) and director of the supercomputer center of UCV in Caracas
  • Carlos Barrios, Professor at Universidad Industrial de Santander (UIS) and director of the supercomputer center of UIS
  • Pedro Velho, Professor at Universidade Federal do Rio Grande do Sul in Porto Alegre, Brazil
  • Alvaro de la Ossa, Professor at Universidad de Costa Rica in San Jose, Costa Rica
  • Jesus Verduzco, Professor at Instituto Politécnico de Colima in Colima, Mexico
  • Monica Hernandez, System Engineer and student in Master program at UIS


Learn more about the PUCC at SC14.


(Left to Right) Pedro Velho, Carlos Barrios, Robinson Rivas, Gilberto Díaz

Jesus Verduzco


Part 3 – Transforming the Workplace: Driving Innovation with Technology

This is part 3 of my blog series about transforming the workplace. Be sure to start with part 1 and part 2, and look for future posts in the series.

Imagine how your day might look in the workplace of the future. Your computer knows your face (it’s how you log in); it responds to your gestures; and it knows your voice. You connect, dock, and charge your personal computing device by simply sitting there, without the need for any wires. Even better, your computer becomes the assistant you never had. That 11 a.m. client meeting on your calendar? There’s an accident blocking the fastest route, so you’ll need to leave 20 minutes earlier. You didn’t know this, but your PC figured it out and told you by drawing contextual insights from your schedule. And this is just the tip of the iceberg.


Between this future-state vision and where we are today lies a transformational journey. And it’s never easy. In my last blog, I discussed how the nature and style of work is changing to support the need to innovate with velocity. To achieve true transformation, companies must overcome many barriers to change, from the cultural and environmental to the technological. Here I want to take a closer look at some of the technological leaps that will make the transformation possible, both in terms of where we are now and where we’re going.


Supporting natural, immersive collaboration

We all know that social, mobile, analytics, and cloud (SMAC) technologies have changed things. Because today’s workforce is distributed across sites, cities, and even countries, collaboration can be a real challenge, one exacerbated by the advent of agile practices that span company boundaries.


Take a typical brainstorming session, for example. Using a whiteboard to sketch out ideas is key, but it has limitations for workers attending by phone. Someone either has to explain what’s on the whiteboard, copy the work into meeting notes, or take a photo of the whiteboard and e-mail it. Not to mention that the picture, possibly of your company’s “next great idea,” uploads to your favorite public cloud provider. And while videoconferencing would seem a likely alternative here, video quality can be lackluster at best.


Intel is taking an innovative approach to solve these challenges. Advanced collaboration technologies will let workers connect in an intuitive, natural way—whether it’s a global team, a small group, or a simple one-on-one session. Unified communications with HD audio and video (complete with live background masking) is already changing videoconferencing with a more lifelike experience. And workers can interact in real time using a shared, multitouch interactive whiteboard that spans devices, from tablets to projection screens and everything in between. The whiteboard is visible and accessible to all attendees in real time. And that digital business assistant? One day it could even use natural language voice recognition to automatically transcribe meeting notes and track actions!




Boosting personal productivity

When it comes to productivity, the devil is in the details. And often those details translate into lost time, whether it’s a dead laptop battery or a password issue. Let’s say you forget your password and you can’t log in without IT assistance. It’s a drag on your time (and theirs), but it’s also interrupting workflow. Sharing work can also take longer than it should. We’ve all been there, in the conference room, stuck without the right adapter for the projector (“the thing that connects to the thing”). And if you can’t project, there’s not an easy way to share work.


Intel is making great strides to free workers from these burdens of computing by supporting existing workflows for maximum productivity.

  • A workplace without wires: built-in wireless display now allows workers to connect automatically.
  • “You are your password”: biometric logins, such as facial recognition, replace the passwords you forget.
  • And getting back to that assistant … it will know you. Instead of having to tell your device everything, the reverse will be true. We foresee a day when your PC will know where you are, what you like, and what you need (like leaving early for that meeting). By anticipating your needs with proactive, contextual recommendations and powerful voice recognition, it will be able to streamline your day. And built-in theft protection will automatically measure proximity and motion to assess risk levels if you’re on the go.


Implementing facilities innovation

While we are “getting by” in today’s workspaces, they typically don’t meet the needs of a distributed workforce and can pose problems even for those working on site. It’s often a challenge to find a free conference room or, if one is available, the room itself is hard to find. I touched on videoconferencing earlier, but this is a place where the technology makes or breaks the deal. From poor quality audio and video to the wrong adapter, it all hampers workflow.


Intel is working to enable an integrated facilities experience through location-based services and embedded building intelligence. Location-based service capabilities on today’s PCs can help you find the resources you need based on current location, from people to conference rooms and printers. And like your PC will one day “know you,” so will the room, meaning it will automatically prepare for your meeting—connecting participants via video and distributing meeting notes. Immersive, high-quality audio and video will guarantee a natural, easy experience. And future generations of touch, gesture, and natural voice control will become more context aware, taking collaboration and productivity to the next level.


Moving forward

This perspective on the role of technology in driving workplace transformation can be seen in action by watching the Intel video, “The Near Future of Work.” Additionally, I’m currently working on a paper that will expand on Intel’s vision of workplace transformation, and I’ll let you know when it’s available.

However, while technology is a huge piece of the puzzle, there is so much more to it. True workplace transformation requires the right partnerships and culture change to be effective. For the next blog in this series, I’ll be taking a look at how to approach a strategy for workplace transformation and share key learnings from Intel’s own internal workplace program.

Meanwhile, please join the conversation and share your thoughts. And be sure to click over to the Intel® IT Center to find resources on the latest IT topics.


Until the next time …

Jim Henrys, Principal Strategist

Read more of my blogs here.


Patient Care 2020: More Technology on the Way


The year 2020 seems far off, but it is closer than you think. With the increasing use of technology in healthcare, and with patient empowerment growing each year thanks to mobile devices, what will a clinician’s workday look like five years from now?

In the above video, we turn toward the future to show you how enabling technologies that exist today will transform the way clinicians treat their patients in 2020. Learn how wearable devices, sensors, rich digital collaboration, social media, and personalized medicine through genomics will be part of a clinician’s daily workflow as we enter the next decade.


Watch the short video and let us know what questions you have about the future of healthcare technology and where you think it’s headed.


Keys To Building Your Own SaaS Security Playbook

As enterprise applications and data continue to move toward software as a service (SaaS), the need to evolve security controls and strategies has become increasingly apparent. New approaches are now required for how data and applications are accessed and stored. An evolving enterprise IT landscape calls for an evolving security strategy to keep pace with it.


In a recent podcast, information security analyst Jim Brennan detailed how Intel’s development of a “SaaS Security Playbook” has given risk managers a foundation for running the same “plays.” By creating a guide for security stakeholders, your organization can ensure consistency in security strategy and responses.


The Right Security Framework


By adopting the Open Data Center Alliance (ODCA) security framework and security assurance levels of bronze, silver, gold, and platinum, businesses can identify and focus their limited security resources on the most sensitive parts of the business. The ODCA security framework also offers recommendations on the type of security assurances your business should require from providers at each tier. Additionally, it details requirements for access control, encryption, data masking, and more.


Know Thyself: Application Inventory & Insight


According to Brennan, one of the first steps toward creating a SaaS security playbook is to take stock of which services have been migrated to the cloud, and which are still hosted in-house. During this inventory process, your team should create documentation for all SaaS providers, tenants, and enterprise controls. By conducting a thorough inventory of existing services and their security controls, your team can take a holistic and informed approach to implementing appropriate security measures for the kinds of data and applications that are being hosted in the cloud.
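To make the inventory step concrete, here is a minimal sketch of what such documentation might look like in code, with services checked against an assumed minimum control set per ODCA assurance level. All service names, fields, and control lists are illustrative inventions, not Intel’s actual playbook or the ODCA’s normative requirements.

```python
# Assumed minimum controls per ODCA assurance level (illustrative only).
REQUIRED = {
    "bronze":   set(),
    "silver":   {"sso"},
    "gold":     {"sso", "encryption-at-rest"},
    "platinum": {"sso", "encryption-at-rest", "data-masking"},
}

# A toy SaaS inventory: each record documents the service, where it is
# hosted, its target assurance level, and the controls in place today.
inventory = [
    {"service": "crm-suite", "hosted": "cloud",    "assurance": "gold",
     "controls": ["sso", "encryption-at-rest"]},
    {"service": "payroll",   "hosted": "in-house", "assurance": "platinum",
     "controls": ["sso", "encryption-at-rest", "data-masking"]},
    {"service": "wiki",      "hosted": "cloud",    "assurance": "silver",
     "controls": []},
]

def control_gaps(inventory):
    """Return, per service, any controls its assurance tier assumes but lacks."""
    gaps = {}
    for item in inventory:
        missing = REQUIRED[item["assurance"]] - set(item["controls"])
        if missing:
            gaps[item["service"]] = sorted(missing)
    return gaps

print(control_gaps(inventory))  # {'wiki': ['sso']}
```

Even a simple gap report like this gives the security team a repeatable “play”: the same check can be rerun every time a service is added or a provider changes its controls.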


Choosing The Right Partners


A huge part of a successful security strategy is holding outside providers accountable. Since the ecosystem is still evolving, many SaaS products are still maturing, so it’s important to carefully vet and scrutinize new providers before aligning with them. Security is an ongoing process: your security team should continually audit all SaaS providers and reassess the risks associated with them.


Brennan anticipates a lot of consolidation in the SaaS space over the next five to 10 years, which is why he recommends signing short-term contracts with your providers. If your roadmaps no longer align, your IT organization should be able to quickly move from one provider to another.

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.


The Prickly Love Affair Between Users and Software

September has proven to be a big month for Apple. Blockbuster announcements were made to introduce the iPhone 6, the iPhone 6 Plus, Apple Pay, and the Apple Watch.  Along with these major events came the debut of the iOS 8.0.1 update.


Then came the failure of iOS 8.0.1.


The software update was plagued by furious customer complaints within minutes of its debut. Less than an hour after launch, Apple retracted the update with promises of mending the bugs that were causing slower download speeds, dropped calls, keyboard malfunctions, and overall sluggish performance. Thereafter, Apple had to coach its grumpy users through restoring their devices to the previous iOS.


The iOS 8 misstep raises the question: Are we ready to be governed by software that guides our daily lives?


Software is proliferating in homes, enterprises, and virtually everything in between. It’s becoming a part of our routine wherever we go, and when it works, it has the capacity to greatly enhance our quality of life. When it doesn’t work, things go awry almost immediately. For the enterprise, the ramifications of inadequate software can resemble Apple’s recent debacle. Consumerization is not to be taken lightly: it’s changing how we exist as a species. It’s changing what we require to function.


Raj Rao, VP and global head of software quality practice for NTT Data, recently wrote an article for Wired in which he states, “Today many of us don’t really know how many software components are in our devices, what their names are, what their versions are, or who makes them and what their investment and commitment to quality is. We don’t know how often software changes in our devices, or what the change means.”


The general lack of knowledge about what software is used within a particular device, and specifically how and why, inevitably hampers troubleshooting when problems arise. While constant evolution in software is necessary for innovation, it brings continual troubleshooting of new technology along with it.


For enterprise software users, Rao had three tips for keeping everybody satisfied. First, users should be encouraged to stick with programs they regularly use and understand. Second, large OS ecosystems should adhere to very strict control standards in order to ensure quality. And third, global software development practices need to become a priority if we want to guarantee a quality user experience.


The bond between humans and software is constantly intensifying. Now is the time to ensure the high quality of your own software systems. Do you have an iOS 8.0.1 situation waiting to happen?


To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.


The Data Stack – September 2014 Intel® Chip Chat Podcast Round-up

September is always a busy month at Intel, and this year was no exception. Intel® Chip Chat hit the road with live episodes from the Intel Xeon processor E5 v3 launch. A plethora of partners and Intel reps discussed their products/platforms and what problems they’re using the Xeon processor to tackle. We were also live from the showcase of the Intel Developer Forum and will be archiving those episodes in the next few months, starting with an episode on software-defined storage. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!


  • Data Center Telemetry – Intel® Chip Chat episode 331: Iddo Kadim, a marketing director in the Data Center Group at Intel, stops by to talk about data center telemetry – information you can read from the infrastructure (like thermal data and security states) to help manage workloads more efficiently. In the future, the orchestration layer will work with telemetry data to manage workloads automatically for a more flexible and efficient data center. For more information, visit and
  • The Intel® IoT Analytics Kit for Intelligent Data Analysis and Response – Intel® Chip Chat ep 332: Vin Sharma (@ciphr), the Director of Planning and Marketing for Hadoop at Intel, chats about collecting and extracting value from data. The Intel® Galileo Development Kit’s hardware and software components allow users to build an end-to-end solution while the Intel® Internet of Things Analytics Kit provides a cloud-based data processing platform. For more information, visit
  • The Intel® Xeon® Processor E5-2600 v3 Launch – Intel® Chip Chat episode 333: Dylan Larson, the Director of Server Platform Marketing at Intel, kicks off our podcasts from the launch of the Intel® Xeon® processor E5 v3. This new generation of processors is the heart of the software-defined data center and offers versatile and energy-efficient performance while providing a foundation for security. Also launching are complementary storage and networking elements for a complete integration of capabilities. For more information, visit
  • Optimizing for HPC with SGI’s ICE X Platform: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 334: Bill Mannel, the General Manager with the Compute and Storage Product Division at SGI, stops by to talk about SGI’s ICE* X platform featuring the recently-launched Intel® Xeon® processor E5-2600 v3. The ICE X blade is specifically optimized to provide higher levels of performance, scalability, and flexibility for HPC customers. For more information, visit
  • Increased App Performance with Dell PowerEdge: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 335: Brian Payne, Executive Director of PowerEdge Product Management at Dell, chats about the Dell PowerEdge* 13G server line featuring the recently-launched Intel® Xeon® processor E5 v3. Flash server integration into the PowerEdge 13G is delivering immense increases in application and database performance to help customers meet workload requirements and adapt to new scale-out infrastructure models. For more information, visit
  • Next-Gen Ethernet Controllers for SDI: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 336: Brian Johnson, Solutions Architect for Ethernet Products at Intel, discusses the release of the Intel® Ethernet Controller XL710. With the ability to achieve 40 Gbps speeds, the XL710 is architected for the next generation of SDI and virtualized cloud environments, as well as network functions virtualization in the telco industry. For more information, visit
  • The Reliable and High Performing Oracle Sun Server: Intel Xeon E5 v3 Launch – Chip Chat ep 337: Subban Raghunathan, the Director of Product Management of x86 Servers at Oracle, stops by to discuss the Intel® Xeon® processor E5 v3 launch and how Oracle’s optimized hardware and software in the Sun* Server product line has enabled massive performance gains. Deeper integration of flash technology drives increased reliability, performance, and solutions scalability and in-memory database technology delivers real-time caching of application data, which is a game changer for the enterprise. For more information, visit
  • Supermicro Platforms for Increased Perf/Watt: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 338: Charles Liang, Founder, President, CEO, and Chairman of the Board, and Don Clegg, VP of Marketing and Business for Supermicro, discuss how the company has launched more than 50 platform designs optimized for the Intel® Xeon® processor E5 v3. Supermicro provides solutions for data center, cloud computing, enterprise IT, Hadoop/big data, HPC, and embedded systems worldwide and focuses on delivering increased performance per watt, performance per square foot, and performance per dollar. For more information, visit
  • The New Flexible Lenovo ThinkServer Portfolio: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 339: Justin Bandholz, a Portfolio Manager at Lenovo, stops by to announce the launch of a portfolio of products based on the Intel® Xeon® processor E5-2600 v3, including premier 2-socket 1U and 2U rack servers, the ThinkServer* RD550 and ThinkServer RD650, as well as a 2-socket ThinkServer TD350 tower server. New fabric and storage technologies in the product portfolio are providing breakthroughs in flexibility for configuring systems to suit customer workload needs. For more information, visit
  • Improving Network Security and Efficiency: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 340: Jeni Panhorst, Senior Product Line Manager at Intel, stops by to talk about the launch of the Intel® Communications Chipset 8900 series with Intel® QuickAssist Technology, which delivers cryptography and compression acceleration that benefits a number of applications. Use cases for the new chipset include securing back-end network ciphers to improve efficiency of equipment while delivering real-time cryptographic performance requirements, as well as network optimization – compressing data in the flow of traffic across a WAN. For more information, visit
  • System Innovation with Colfax: Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 341: Gautam Shah, the CEO of Colfax International, chats about how the Intel® Xeon® processor E5 v3 is a complete solution stack upgrade, including processor, networking, and storage components, which allows customers to tackle problems they haven’t previously been able to solve cost-effectively (or at all). Colfax is delivering solutions with increased DDR4 memory, 12 Gb/s SAS, integrated SSDs, and networking solutions, which offer a great leap in system innovation. For more information, visit or email with any questions.
  • Increased Data Center Security, Efficiency and Reliability with IBM – Intel® Chip Chat episode 342: Brian Connors, the VP of Global Product Development and Lab Services at IBM, stops by to talk about the launch of the company’s new M5 line of towers, racks and NeXtScale systems based on the Intel® Xeon® processor E5 v3. The systems have been designed for increased security (Trusted Platform Assurance and Enterprise Data Protection), efficiency and reliability and offer dramatic performance improvements over previous generations. For more information, visit
  • Innovations in VM Management with Hitachi: The Intel Xeon E5 v3 Launch – Intel® Chip Chat ep 343: Roberto Basilio, the VP of Storage Product Management at Hitachi Data Systems, discusses the launch of the Intel® Xeon® processor E5 v3 and, in particular, how virtual machine control structure (VMCS) shadowing is innovating virtual machine management in the cloud. Shadowing improves the performance of nested virtualization, reduces latency, and improves energy efficiency. For more information, visit
  • Re-architecting the Data Center with HP ProLiant Gen 9: Intel Xeon E5 v3 – Intel® Chip Chat ep 344: Peter Evans, a VP & Marketing Executive in HP’s Server Division, chats about the ProLiant* Generation 9 platform refresh, the foundation of which is the Intel® Xeon® processor E5 v3. The ProLiant Gen9 platform is driving advancements in performance, time to service, and optimization for addressing the explosion of data and devices in the new data center. For more information, visit
  • Software Defined Storage for Hyper-Convergence – Intel® Chip Chat episode 345: In this archive of a livecast from the Intel Developer Forum, Yoram Novick (Founder and CEO) and Carolyn Crandell (VP of Marketing) from Maxta discuss hyper-convergence and enabling SDI via the company’s software defined storage solutions. The recently announced MaxDeploy reference architecture, built on Intel® Server Boards, provides customers the ability to purchase a whole box (hardware and software) for a simpler and more cost-effective solution than legacy infrastructure. For more information, visit
  • Modernizing Code for Dramatic Performance Improvements – Intel® Chip Chat episode 346: Mike Bernhardt, the Community Evangelist for HPC and Technical Computing at Intel, stops by to talk about the importance of code modernization as we move into multi- and many-core systems in the HPC field. Markets as diverse as oil and gas, financial services, and health and life sciences can see a dramatic performance improvement in their code through parallelization. Mike also discusses last year’s Parallel Universe Computing Challenge and its return at SC14 in November – $26,000 towards a charitable organization is on the line for the winning team. For more information about the PUCC, visit and for more on Intel and HPC, visit
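Several of these episodes touch on code modernization through parallelization. As a generic illustration (not drawn from any of the podcasts), the classic candidate is a loop whose iterations are independent of one another: the serial and parallel versions below compute identical results, but the parallel one can spread the work across cores. The `simulate` function is a made-up stand-in for an expensive scientific kernel.

```python
# Serial vs. parallel execution of an embarrassingly parallel loop.
from concurrent.futures import ProcessPoolExecutor
import math

def simulate(seed):
    # Stand-in for an expensive, independent unit of scientific work.
    return sum(math.sin(i * seed) for i in range(10_000))

def run_serial(seeds):
    return [simulate(s) for s in seeds]

def run_parallel(seeds, workers=4):
    # Independent iterations map cleanly onto a pool of worker processes;
    # pool.map preserves input order, so results match the serial run.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate, seeds))

if __name__ == "__main__":
    seeds = [0.1, 0.2, 0.3, 0.4]
    assert run_serial(seeds) == run_parallel(seeds)
```

The same restructuring idea applies whether the parallel runtime is a process pool, OpenMP threads, or MPI ranks: identify the independent work units first, then hand them to the parallel machinery.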


How does business recover from a large-scale cyber security disaster?

Corporations need to get three things right in cyberspace: protect their valuable information, ensure that business operations continue during disturbances, and maintain their reputation as trustworthy. These goals support one another and enable successful use of the digital world. Yet due to its dynamic nature, there is no absolute security in cyberspace. What should you do when something goes wrong? The best way to survive a blast is to prepare for it in advance.


Cyber security requires transformed security thinking. Security should not be seen as an end-state achieved once through a tailored investment in technology, but as an ongoing process that must adapt to changes in the environment. Effective security work is agile and innovative. It aligns cyber security with the overall business process so that the former supports the latter. When maintaining cyber security is treated as one of the corporation’s core managerial functions, its importance is raised to the correct level. IT managers and officers are not the only ones who need to understand cyberspace and how it relates to their areas of responsibility.


One way to integrate the cyber security point of view into business processes is to construct and execute a specific cyber strategy for the corporation. This should start with enablement and consider the opportunities the corporation wishes to take advantage of in the digital world. It should also recognize threats in cyberspace and designate how they are counteracted. The strategy process should be led by the highest managerial level yet remain responsive to ideas and feedback from both the operational and technical levels of execution. That way the entire organization will be committed to the strategy and feel ownership of it. Moreover, the strategy will be realistic, neither chasing unachievable goals nor relying on processes that are technically impossible to build.


It is common practice for corporations to do business continuity planning. However, operations in the digital world are not always included in it, despite the acknowledged dependency on cyberspace that characterizes modern business. There seems to be a strong belief in bits: that they won’t let us down. The importance of a plan B is often neglected, and the ability to operate without a functioning cyberspace is lost. Plan B, an essential building block of the cyber strategy, should contain guidelines for partners, managers, and employees in case of a security breach or a large cyber security incident: what to do, whom to inform, and how to address the issue in public.


The plan B should include enhanced intrusion detection, adequate responses to security incidents, and a communication strategy. Whom should you inform, at what level of detail, and at which stage of the recovery process? Too little communication may give the impression that the corporation is trying to hide something or isn’t up to date with its responsibilities. Too much communication at too early a stage of the mitigation and restoration process may lead to panic or exaggerated loss estimates. In both cases the corporation’s reputation suffers. Openness and correct timing are the key words here.


A resilient corporation is able to continue its business operations even when the digital world does not function the way it is supposed to. Digital services may be scaled down without the customer experience suffering too much. Effective detection of both breaches and the associated losses, together with fast restoration of services, not only serves the corporation’s immediate business goals but also projects good cyber security. Admitting that there are problems while demonstrating that the necessary security measures are being taken is essential throughout the recovery period. So is honest communication to stakeholders at the right level of detail.


Without adequate strategy work and its execution, the trust felt toward the corporation and its digital operations is easily lost. Without trust it is difficult to find partners for cyber-dependent business operations, and customers turn away from the corporation’s digital offerings. Trust is the most valuable asset in cyberspace.


Planning in advance and building a resilient business entity safeguard corporations from digital disasters. If such a disaster has already happened, it is important to speak up, demonstrate that lessons have been learned, and show what is being done differently from now on. The corporation must listen to those who have suffered and carry out its responsibilities. Only this way can market trust be restored.


- Jarno


Find Jarno on LinkedIn

Start a conversation with Jarno on Twitter

Read previous content from Jarno


Breaking Down Battery Life

Many consumer devices have become almost exclusively portable. As we rely more and more on our tablets, laptops, 2-in-1s, and smartphones, we expect more and more out of our devices’ batteries. The good news is, we’re getting there. As our devices evolve, so do the batteries that power them. However, efficient batteries are only one component of a device’s battery life. Displays, processors, radios, and peripherals all play a key role in determining how long your phone or tablet will stay powered.


Processing Power

Surprisingly, the most powerful processors can also be the most power-friendly. By quickly completing computationally intensive jobs, full-power processors like the Intel® Core™ i5 processor can return to a lower power state faster than many so-called “power-efficient” processors. While it may seem counterintuitive at first glance, laptops and mobile devices armed with these full-powered processors can have battery lives that exceed those of smaller devices. Additionally, chip makers like Intel work closely with operating system developers like Google and Microsoft to optimize processors to work seamlessly and efficiently.
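A back-of-the-envelope "race to idle" calculation shows why this can hold. The wattage figures below are purely illustrative, not measurements of any real processor: total energy is active power times active time plus idle power times idle time over the same window.

```python
# Illustrative "race to idle" arithmetic: energy (joules) = watts x seconds.

def task_energy(active_watts, active_seconds, idle_watts, window_seconds):
    """Energy to finish a task, then idle out the rest of a fixed time window."""
    idle_seconds = window_seconds - active_seconds
    return active_watts * active_seconds + idle_watts * idle_seconds

# A fast chip burns more power but finishes sooner...
fast = task_energy(active_watts=15, active_seconds=2, idle_watts=1, window_seconds=10)
# ...while a lower-power chip sips power but stays active much longer.
slow = task_energy(active_watts=5, active_seconds=8, idle_watts=1, window_seconds=10)

print(fast, slow)  # 38 vs 42: over the whole window, the faster chip wins
```

Of course, the comparison flips if the fast chip's active power is high enough, which is why real battery life depends on the workload mix and not on peak wattage alone.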


Displays

One of the biggest power draws on your device is its display. Bright LCD screens require quite a bit of power when fully lit. As screens evolve to contain more and more pixels, battery manufacturers have tried to keep up. The growing demand for crisp high-definition displays makes it even more crucial for companies to find new avenues for power efficiency.



Radios

Almost all consumer electronic devices produced today can connect to an array of networks. LTE, Wi-Fi, NFC, GPS — all of these acronyms refer to some form of radio in your mobile phone or tablet, and ultimately mean varying levels of battery drain. As the methods of wireless data transfer have evolved, the amount of power required for these transfers has changed. For example, downloading a large file on a device equipped with older wireless technology may actually drain your battery faster than downloading the same file over a faster wireless technology. Faster downloads mean your device can stay at rest more often, which equals longer battery life.



Storage

It’s becoming more and more common for new devices to come equipped with solid-state drives (SSDs) rather than hard-disk drives (HDDs). By the nature of the technology, HDDs can use up to 3x the power of SSDs, and they have significantly slower data transfer rates.


These represent just a few things you should evaluate before purchasing your next laptop, tablet, 2-in-1, or smartphone. For more information on what goes into evaluating a device’s battery life, check out this white paper. To join the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.


Episode Recap – Transform IT with Guest Ray Noonan, CEO, Cogent

How did you like what Ray Noonan, CEO of Cogent, had to say about collaboration and the need to focus on business value?


Did it challenge you?


It probably should have. If I can summarize what Ray shared with us, it would be that we need to:


Break down the walls that separate us and keep us apart, and always put business value above the needs of IT.

I’m quite sure that some of what he said sent shivers down the spines of IT people everywhere. But Ray wasn’t focused on “IT” – only on what IT can do to deliver value to the organization.


He believes that IT is too important to be segregated in a separate function and so he integrated it into the business units directly. He believes that we should all be technologists and so that we need to trust our people with technology decisions. He believes that the sense of “ownership” – to the degree that it inhibits sharing and collaboration – must be eliminated so that our teams can work together rapidly and fluidly. And he believes that the only thing that matters is the value that is generated for the business – so if an IT process or policy is somehow disrupting the delivery of value, then it should be changed.


If you keep your “IT hat” on, these ideas can seem scary and downright heretical. But if you think like a CEO, they make a lot more sense.


And that was Ray’s big challenge to all of us.


To break down our “ownership walls”.

To focus, instead, on how we create value for the organization.

To understand and embrace that value.

And then to deliver and protect it.


The question for you is how you’re going to start doing that. How will you begin?


Share with us the first step that you’re going to take to begin breaking down your own “ownership walls” and to focus on value. I believe that your ability to understand how value is created for your business, and how you personally contribute to that value, is one of the most critical first steps in your own personal transformation into a true digital leader.


So decide what you will do to begin this process and start now. There’s no time to wait!


If you missed Episode 2, you can watch it on-demand here:


Also, make sure you tune in on October 14th when I’ll be talking to Patty Hatter, Sr. VP Operations & CIO at McAfee about “Life at the Intersection of IT and Business.” You can register for a calendar reminder here.

You can join the Transform IT conversation anytime using the Twitter hashtags #TransformIT and #ITChat.


Upgrade to an NVMe-Capable Linux Kernel

Here in the Intel NVM and SSD group (NSG) we build and test Linux systems a lot, and we’ve been working to mature the NVMe driver stack on all kinds of operating systems. The Linux kernel is the innovation platform today, and it has come a long way with NVMe stability. We have always had a high-level kernel build document, but never in a blog (bad Intel, we are changing those ways), and we wanted to refresh it a bit now that NVMe support in Linux is well along. Kernel 3.10 is when integration really happened, and the important data center Linux OS vendors are fully supporting the driver. If you are on a 2.6 kernel and want to move up to a newer kernel, here are the steps to build a kernel for your testing platform and try out one of Intel’s Data Center SSDs for PCIe and NVMe. This assumes you want the latest and greatest for testing and are not interested in an older, vendor-supported kernel. By the way, on those “6.5” distributions you won’t be able to get a supported 3.x kernel (that’s one reason I wrote this blog), but a custom kernel will run and allow you to test with something newer. You may have your own reasons, I am sure. As far as production goes, you will probably want to work together with your OS vendor.


I run a 3.16.3 kernel on some of the popular 6.5 distros; you can too.


1.    NVM Express background

NVM Express (NVMe) is an optimized interface for PCI Express SSDs. The NVM Express specification defines an optimized register interface, command set, and feature set for PCI Express (PCIe)-based solid-state drives (SSDs). Please refer to the NVM Express specification for background on NVMe.

The Linux NVMe driver is developed using the typical open-source kernel process, with patches reviewed on the project’s development mailing list.

The Linux NVMe driver was integrated in kernel 3.10 and is included in all later kernels.


2.    Development tools required (possible prerequisites)

In order to clone, compile, and build the new kernel and driver, the following packages are needed:

  1. ncurses
  2. build tools
  3. git (optional; you could use wget instead to download the kernel source)

You must be root to install these packages  

Ubuntu based

apt-get install git-core build-essential libncurses5-dev  

RHEL based

yum install git-core ncurses ncurses-devel
yum groupinstall "Development Tools"

SLES based        

zypper install ncurses-devel git-core
zypper install --type pattern Basis-Devel


3.    Build new Linux kernel with NVMe driver

Pick a starting distribution. From the driver’s perspective it doesn’t matter which distribution you use, since you are going to put a new kernel on top of it, so use whatever you are most comfortable with and/or has the required tools. Then get the kernel and driver source.

  1. Clone the kernel source with git, or download a “snapshot” tarball from the top commit and extract it (here’s an example):


tar -xvf linux-3.16.3.tar.xz


  2. Build and install

Run menuconfig (which uses ncurses):

make menuconfig

Confirm the NVMe driver under Block Devices is set to <M>:

Device Drivers-> Block Devices -> NVM Express block device

This creates a .config file in the same directory.
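If you want to confirm the setting without reopening menuconfig, you can grep the .config directly. This is a minimal sketch; it assumes the kernel’s usual CONFIG_BLK_DEV_NVME symbol for the NVMe block driver and a .config in the current directory:

```shell
#!/bin/sh
# check_nvme_config: report whether the NVMe block driver is enabled as a
# module (<M>) in the given kernel .config file.
check_nvme_config() {
    if grep -q '^CONFIG_BLK_DEV_NVME=m' "$1" 2>/dev/null; then
        echo "NVMe driver enabled as module"
    else
        echo "NVMe driver not enabled; re-run make menuconfig"
    fi
}

# Check the .config that menuconfig just wrote:
check_nvme_config .config
```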

Then, run these make commands as root (set the -j flag to about half your core count to improve make time):

make -j10

make modules_install -j10

make install -j10
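Rather than hard-coding -j10, the “half your cores” rule of thumb can be computed. A small sketch, assuming GNU coreutils’ nproc is available:

```shell
#!/bin/sh
# half_cores: print a make job count equal to half the available cores,
# with a floor of 1 so single-core machines still get a valid -j value.
half_cores() {
    jobs=$(( $(nproc) / 2 ))
    [ "$jobs" -lt 1 ] && jobs=1
    echo "$jobs"
}

# From the kernel source tree you would then run, e.g.:
#   make -j"$(half_cores)"
half_cores
```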


Depending on the distribution you use, you may have to run update-initramfs and update-grub, but this is typically unnecessary. Once the install is successful, reboot the system to load the new kernel and drivers. The new kernel usually becomes the default boot entry, which is the top line of menu.lst. After booting, verify with “uname -a” that the running kernel is what you expect, and use “dmesg | grep -i error” to find and resolve any kernel loading issues.


4.  NVMe Driver basic tests and tools

There are some basic open-source NVMe test programs you can use for checking NVMe devices:

Git’ing the source code:

git clone git://

Making the test programs

Add or modify the Makefile with the proper lib or header paths and compile these programs.



For example, check the NVMe device controller “identify”, “namespace”, etc.:

>>sudo ./nvme_id_ctrl /dev/nvme0n1

>>sudo ./nvme_id_ns /dev/nvme0n1


The Intel SSD Data Center Tool 2.0 also supports NVMe.


Here are more commands you’ll find useful.

Zero out and condition a drive sequentially for performance testing:

dd if=/dev/zero of=/dev/nvme0n1 bs=2048k count=400000 oflag=direct

Quick-test a drive: is it reading at over 2 GB a second?

hdparm -tT --direct /dev/nvme0n1
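You can also sanity-check the hdparm numbers with a dd-based timing loop. The sketch below uses /dev/zero and /dev/null as safe stand-ins so it can run anywhere; on a real system you would read from the drive instead (e.g. if=/dev/nvme0n1 of=/dev/null):

```shell
#!/bin/sh
# measure_mb_s: copy a fixed amount of data with dd, time it with a
# 1-second-resolution clock, and print an approximate throughput in MB/s.
measure_mb_s() {
    count=128                           # 128 blocks x 2 MB = 256 MB total
    start=$(date +%s)
    dd if=/dev/zero of=/dev/null bs=2048k count=$count 2>/dev/null
    end=$(date +%s)
    elapsed=$(( end - start ))
    [ "$elapsed" -lt 1 ] && elapsed=1   # avoid divide-by-zero on fast copies
    echo $(( count * 2 / elapsed ))     # blocks x 2 MB / seconds
}

echo "$(measure_mb_s) MB/s"
```

This is only a coarse estimate; for serious benchmarking use a proper tool and direct I/O as above.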


Again, enjoy these gigabyte-per-second-class SSDs with their low-microsecond latency!


Health IT Does Not Transform Healthcare; Healthcare Cannot Transform Without Health IT

Below is a guest post from Steven E. Waldren, MD MS.


I was listening to the Intel Health videocast[1] of Eric Dishman, Dr. Bill Crounse, Dr. Andy Litt, and Dr. Graham Hughes. There was an introductory line that rang true, “EHR does not transform healthcare.” This statement prompted me to write this post.


The healthcare industry and policy makers have frequently seen health information technology (health IT) as a relatively easy fix to the quality and cost issues plaguing the U.S. health system. If we adopt health IT and make it interoperable, we will drastically improve quality and lower cost. Research provides evidence that health IT can do both.


I believe, however, that this interpretation of the research misses a very important dependent variable: the sociotechnical system within which the health IT is deployed. For the uninitiated, Wikipedia provides a good description of a sociotechnical system.[2] In essence, it is the system of people, workflow, information, and technology in a complex work environment. Healthcare is definitely a complex adaptive environment.[3] To put a finer point on this, if you deploy health IT in an environment in which the people, workflow, and information are aligned to improve quality and lower cost, then you are likely to see those results. On the other hand, if you implement the technology in an environment in which the people, workflow, and information are not aligned, you will likely see improvement in neither area.


Another reason it is important to look at health IT as a sociotechnical system is to couple the provider needs and capabilities to the health IT functions needed. I think, as an industry, we have not done this well. We too quickly jump into the technology, be it patient portal, registry, or e-prescribing, instead of focusing on the capability the IT is designed to enable, for example, patient collaboration, population management, or medication management, respectively.


Generally, the current crop of health IT has been focused on automating the business of healthcare, not on automating care delivery. The focus has been on generating and submitting billing, and on generating documentation to justify billing. Supporting chronic disease management, prevention, or wellness promotion takes a backseat. As the healthcare industry transitions to value-based payment, the focus has begun to change. As a healthcare system, we should focus on the capabilities that providers and hospitals need to support effective and efficient care delivery. From those capabilities, we can define the roles, workflows, data, and technology needed to support practices and hospitals in achieving them. By loosely coupling our efforts to these capabilities, rather than simply adopting a standard, acquiring a piece of technology, or sending a message, we gain a metric for determining whether we are successful.


If we do not focus on the people, workflow, data, and technology, but instead only focus on adopting health IT, we will struggle to achieve the “Triple Aim™,” to see any return on investment, or to improve the satisfaction of providers and patients. At this time, a real opportunity exists to further our understanding of the optimization of sociotechnical systems in healthcare and to create resources to deploy those learnings into the healthcare system. The opportunity requires us to expand our focus to the people, workflow, information, AND technology.


What questions do you have about healthcare IT?


Steven E. Waldren, MD MS, is the director, Alliance for eHealth Innovation at the American Academy of Family Physicians






Will the Invincible Buckeyes Team from OSU and OSC Prove to be Invincible?

Mike Bernhardt is the Community Evangelist for Intel’s Technical Computing Group


Karen Tomko, Scientific Applications Group Manager at the Ohio Supercomputer Center (OSC), has assembled a team of fellow Buckeyes to take on the Intel Parallel Universe Computing Challenge (PUCC) at SC14 in November.


We asked Karen a few questions about her team, called the Invincible Buckeyes (IB), and their proposed participation in the PUCC.


The 2014 Invincible Buckeyes (IB) team includes (from l to r) Khaled Hamidouche, a post-doctoral researcher at The Ohio State University (OSU); Raghunath Raja, Ph.D. student (CS) at OSU; team captain Karen Tomko; and Akshay Venkatesh, Ph.D. student (CS) at OSU. Not pictured is Hari Subramoni, a senior research associate at OSU.


Q: What was the most exciting thing about last year’s PUCC?

A: Taking a piece of code from sequential to running in parallel on the Xeon Phi in 15 minutes, in a very close performance battle against the Illinois team, was a lot of fun.


Q: How will your team prepare for this year’s challenge?

A: We’ll do our homework for the trivia, brush up on the parallel constructs, look at some Fortran codes, and make sure we have at least one vi user on the team.


Q: What would you suggest to other teams who are considering participation?

A: First I’d say: if you are considering it, then sign up. It’s a fun break from the many obligations and talks at SC. When you’re in a match, don’t overthink; the time goes very quickly. Also, watch out for the ‘Invincible Buckeyes’!


Q: SC14 is using the theme “HPC Matters” for the conference. Can you explain why “HPC Matters” to you?

A: HPC systems allow scientists and engineers to tackle grand-challenge problems in their respective domains and make significant contributions to their fields. HPC has enabled innumerable discoveries in astrophysics, earthquake analysis, weather prediction, nanoscience modeling, multi-scale and multi-physics modeling, biological computation, and computational fluid dynamics, to name a few. Being able to contribute directly or indirectly to these discoveries through the research we do matters a lot to our team.


IT Accelerating Business Innovation Through Product Design

For the Product Development IT team within Intel IT that I am a part of, the recent mandates have been clear. We’ve been tasked with accelerating the development of Intel’s key System on Chip (SoC) platforms. We’ve been asked to be a key enabler of Intel’s growing software and services business. And we’ve been recognized as a model for employee engagement and cross-functional collaboration.


Much of this is new.


We’ve always provided the technology resources that facilitate the creation of world-class products and services. But the measures of success have changed. Availability and uptime are no longer enough. Today, it’s all about acceleration and transformation.


Accelerating at the Speed of Business


In many ways, we have become a gas pedal for Intel product development. We are helping our engineers design and deliver products to market faster than ever before. We are bringing globally distributed teams closer together with better communication and collaboration capabilities. And we are introducing new techniques and tools that are transforming the very nature of product design.


Dan McKeon, Vice President of Intel IT and General Manager of Silicon, Software and Services Group at Intel, recently wrote about the ways we are accelerating and transforming product design in the Intel IT Business Review.


The IT Product Development team, under Dan’s leadership, has enthusiastically embraced this new role. It allows us to be both a high-value partner and a consultant for the design teams we support at Intel. We now have a much better understanding of their goals, their pain points, and their critical paths to success—down to each job and workload. And we’ve aligned our efforts and priorities accordingly.


The results have been clear. We’ve successfully shaved weeks and months off of high-priority design cycles. And we continue to align with development teams to further accelerate and transform their design and delivery processes. Our goal in 2014 is to accelerate the Intel SoC design group’s development schedule by 12 weeks or more. We are sharing our best practices as we go, so please keep in touch.


To get the latest from Dan’s team on IT product development for faster time to market, download the Intel IT Business Review mobile app.

Follow the conversation on Twitter: hashtag #IntelIT


High Performance Computing in Today’s Personalized Medicine Environment


The goal of personalized medicine is to shift from a population-based treatment approach (i.e. all people with the same type of cancer are treated in the same way) to an approach where the care pathway with the best possible prognosis is selected based on attributes specific to a patient, including their genomic profile.


After a patient’s genome is sequenced, it is reconstructed from the read information, compared against a reference genome, and the variants are mapped; this determines what’s different about the patient as an individual or how their tumor genome differs from their normal DNA.  This process is often called downstream analytics (because it is downstream from the sequencing process).
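As a toy illustration of that comparison step, the sketch below diffs a sample sequence against a reference base-by-base and reports mismatch positions. This is nothing like a production variant caller, which works on billions of aligned reads, but it shows the idea of mapping variants against a reference; the sequences are made up:

```shell
#!/bin/sh
# find_variants: compare a sample sequence against a reference string
# base-by-base and print each position where they differ (a toy "SNV call").
# Assumes equal-length sequences.
find_variants() {
    ref=$1
    sample=$2
    i=1
    while [ -n "$ref" ]; do
        r=${ref%"${ref#?}"}          # first character of ref
        s=${sample%"${sample#?}"}    # first character of sample
        [ "$r" != "$s" ] && echo "pos $i: $r -> $s"
        ref=${ref#?}                 # drop first character
        sample=${sample#?}
        i=$((i + 1))
    done
}

find_variants "GATTACA" "GACTACA"    # hypothetical 7-base sequences
```

Here the two sequences differ only at position 3, so a single variant is reported.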


Although the cost of sequencing has come down dramatically over the years (faster than Moore’s law in fact), the cost of delivering personalized medicine in a clinical setting “to the masses” is still quite high. While not all barriers are technical in nature, Intel is working closely with the industry to remove some of the key technical barriers in an effort to accelerate this vision:


  • Software Optimization/Performance: While the industry is doing genomics analytics on x86 architecture, much of the software has not been optimized to take advantage of parallelization and instruction enhancements inherent with this platform
  • Storing Large Data Repositories: As you might imagine, genomic data is large, and with each new generation of sequencers, the amount of data captured increases significantly.  Intel is working with the industry to apply the Lustre (highly redundant/highly scalable) file system in this domain
  • Moving Vast Repositories of Data: Although (relatively) new technologies like Hadoop help the situation by “moving compute to the data”, sometimes you can’t get around the need to move a large amount of data from point A to point B. As it turns out, FTP isn’t the optimal way to move data when you are talking terabytes


I’ll leave you with this final thought: Genomics is not just for research organizations. It is accelerating quickly into the provider environment. Cancer research and treatment are leading the way in this area, and in a more generalized setting, more than 3,000 genomic tests are already approved for clinical use. Today, this represents a great opportunity for healthcare providers to differentiate themselves from their competition… but in the not-too-distant future, providers who don’t have this capability will be left behind.


Have you started integrating genomics into your organization? Feel free to share your observations and experiences below.


Chris Gough is a lead solutions architect in the Intel Health & Life Sciences Group and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@CGoughPDX)

Check out his previous posts


Can the Coding Illini Return to the Parallel Universe Computing Challenge Finals Again?

Mike Bernhardt is the Community Evangelist for Intel’s Technical Computing Group.


When the Gaussian Elimination Squad from Germany indicated their interest in defending their 2013 Intel Parallel Universe Computing Challenge (PUCC) championship title, little did they know the first team to respond would be the one they faced in last year’s finals, the Coding Illini.

Last year’s Coding Illini team defeated Team Ohio in their first round and K2I18 (Rice University) in round two to advance into the finals. According to team captain Mike Showerman, this year’s team hopes to go all the way to the championship, “Coding Illini will be even fiercer this year and will take every opportunity to bring the title home.”

The Coding Illini will represent the National Center for Supercomputing Applications (NCSA) and the University of Illinois at Urbana–Champaign.

Similar to the inaugural PUCC held at SC13, the 2014 challenge will include an entertaining supercomputing trivia round followed by a parallel computing code challenge live and on stage in the Intel booth at SC14 in New Orleans. Teams from around the globe are expected to take part in the challenge again this year and may submit a PUCC interest form to express their desire to participate.


The 2013 Coding Illini included (left to right): Omar Padron (NCSA research programmer and a member of the Science and Engineering Applications Support team), Mike Showerman (team captain), Xiang Ni (computer science PhD student), Nikhil Jain (computer science PhD student), and Andriy Kot (a post-doctoral research associate at NCSA).


The SDI Data center of the future is here… Now let’s distribute it more evenly

The science-fiction writer William Gibson once observed, “The future is already here — it’s just not very evenly distributed.” The same could be said of today’s data centers.


On one hand, we have amazing new data centers being built by cloud service providers and the powerhouses of search, ecommerce and social media. These hyperscale data center operators are poised to deploy new services in minutes and quickly scale up to handle enormous compute demands. They are living in the future.


And then on the other hand, we have enterprises that are living with the data center architectures of an earlier era, a time when every application required its own dedicated stack of manually provisioned resources. These traditional enterprise data centers were built with a focus on stability rather than agility, scalability and efficiency—the things that drive cloud data centers.


Today, the weaknesses of legacy approaches are a growing source of pain for enterprises. While cloud providers enjoy the benefits that come with pooled and shared resources, traditional enterprises wrestle with siloed architectures that are resistant to change.


But there’s good news on the horizon. Today, advances in data center technologies and the rise of more standardized cloud services are allowing enterprise IT organizations to move toward a more agile future based on software-defined infrastructure (SDI) and hybrid clouds.


With SDI and the hybrid cloud approach, enterprise IT can now be managed independently of where the physical hardware resides. This fundamental transformation of the data center will enable enterprises to achieve the on-demand agility and operational efficiencies that have long belonged to large cloud service providers.


At Intel, we are working actively to deliver the technologies that will allow data centers to move seamlessly into the era of SDI and hybrid clouds. Here’s one example: The new Intel® Xeon® Processor E5 v3 family exposes a wide range of information on hardware attributes—such as security, power, thermals, trust and utilization—to the orchestration layer. With access to this information, the orchestration engine can make informed decisions on the best placement for workloads within a software-defined or cloud environment.


And here’s another of many potential examples: The new Intel Xeon processors incorporate a Cache QoS Monitoring feature. This innovation helps system administrators gain the utilization insights they need to ward off resource-contention issues in cloud environments. Specifically, Cache QoS Monitoring identifies “noisy neighbors,” or virtual machines that consume a large amount of the shared resources within a system and cause the performance of other VMs to suffer.


And that’s just the start. If space allowed, we could walk through a long list of examples of Intel technologies that are helping enterprise IT organizations move toward software-defined data centers and take advantage of hybrid cloud approaches.


This transformation, of course, takes more than new technologies. Bringing SDI and hybrid clouds to the enterprise requires extensive collaboration among technology vendors, cloud service providers and enterprises. With that thought in mind, Intel is working to enable a broad set of ecosystem players, both commercial and open source, to make the SDI vision real.


One of the key mechanisms for bringing this vast ecosystem together is the Open Data Center Alliance (ODCA), which is working to shape the future of cloud computing around open, interoperable standards. With more than 300 member companies spanning multiple continents and industries, the ODCA is uniquely positioned to drive the shift to SDI and seamless, secure cloud computing. There is no equivalent organization on the planet that can offer the value and engagement opportunity of ODCA.


Intel has been a part of the ODCA from the beginning. As an ODCA technology advisor, we gathered valuable inputs from the ecosystem regarding challenges, usage models and value propositions. And now we are pleased to move from an advisory role to that of a board member. In this new role, we will continue to work actively to advance the ODCA vision.


Our work with the ecosystem doesn’t stop there. Among other efforts, we’re collaborating on the development of Redfish, a specification for data center and systems management that delivers comprehensive functionality, scalability and security. The Redfish effort is focused on driving interoperability across multiple server environments and simplifying management, to allow administrators to speak one language and be more productive.


Efforts like this push us ever closer to next-generation data centers — and a future that is more evenly distributed.



For more follow me @PoulinPDX on Twitter.


Part 1: The Changing Life of Modern Pharmaceutical Sales Professionals

Below is the second in a series of guest blogs from Dr. Peter J. Shaw, chief medical officer at QPharma Inc. Watch for additional posts from Dr. Shaw in the coming months.


With all the recent advances in tablet technology, the way pharmaceutical sales professionals interact with health care providers (HCPs), and in particular doctors, has changed. Most pharmaceutical companies are now providing their sales teams with touch screen tablets as their main platform for information delivery. The day of paper sales aids, clinical reprints and marketing materials is rapidly fading. The fact is that doctors have less time to see sales professionals during their working day and there are increasing restrictions on access to doctors by many institutions. Therefore, the pharmaceutical industry is having to be more and more inventive and flexible in the way that it approaches doctors and conveys the information needed to keep up-to-date on pharmaceutical, biotech and medical device advances.


  • How has this impacted the life of the sales professional?
  • How have pharmaceutical companies adapted to the changes?
  • To what extent has the use of mobile devices been adopted?
  • What impact has this had on the quality of the interaction with HCPs?
  • What are alternatives to the face-to-face doctor visit?
  • How have doctors received the new way of detailing using mobile technology?
  • What do doctors like/dislike about being detailed with a mobile device?
  • What does the future look like?
  • Are there any disadvantages to relying solely on mobile technology?


To answer some of these questions, and hopefully to generate a lively discussion on the future of mobile technology in the pharmaceutical sales world, I would like to share some facts and figures from recent research we conducted on the proficiency of sales reps using mobile devices in their interactions with HCPs, and the impact this has had on clinical and prescribing behaviors.


  • In tracking the use of mobile devices for the last three years, it is clear that there is variable use of mobile devices by sales professionals.
  • Where sales reps have only the mobile device, they use it in only 7 to 35 percent of interactions with HCPs.
  • The use of mobile devices increases with the duration of the interaction with HCPs, in that the device is used in almost all calls lasting over 15-20 minutes.
  • Many reps do not use mobile devices in calls under 5 minutes. Often this is due to the non-interactive nature of the content, or the awkwardness of navigating through required multiple screens before arriving at information relevant to that particular HCP.
  • We have data to show that where the mobile device is very interactive and the sales rep is able to use it to open every call, the call will be on average 5-7 minutes longer with the doctor than if it is not used.
  • In cases where doctors will take virtual sales calls, these calls are greatly enhanced if there is a two-way visual component. Any device used in virtual sales calls must have two-way video capability, as the HCP will expect to see something to back up the verbal content of the sales call.
  • Most doctors feel that the use of mobile technology in face-to-face calls enhances the interaction with sales reps provided it is used as a means to visually back up the verbal communication in an efficient and direct manner.
  • Screen size is the main complaint we hear from HCPs. Most say that where the rep is presenting to more than one HCP the screen needs to be bigger than the 10” that is on most of the devices currently used by reps.


The mobile device is clearly here to stay. HCPs use them in their day-to-day clinical practice and now accept that sales professionals will also use them. When the mobile device is expected to be the sole means of information delivery, more work needs to go into designing the content and making it possible for the sales professional to navigate to the information that is relevant to that particular HCP. All aspects of the sales call need to be on the one device: information delivery, signature capture and validation for sample requests, and the ability to email clinical reprints immediately to the HCP are just the start.


In part 2, we will look at how sales reps are using mobile devices effectively and the lessons to be learned from three years of data tracking the use of these devices and the increasing acceptance of virtual sales calls.


What questions do you have?


Dr. Peter J. Shaw is chief medical officer at QPharma Inc. He has 25 years of experience in clinical medicine in a variety of specialties, 20 years’ experience in product launches and pharmaceutical sales training and assessment, and 10 years’ experience in post-graduate education.
