The Behavioral Shift Driving Change in the World of Retail

Ready or Not, Cross-Channel Shopping Is Here to Stay


Of all the marketplace transitions that have swept through the developed world’s retail industry over the last five to seven years, the most important is the behavioral shift to cross-channel shopping.


The story is told in these three data points:1


  1. More than 60 percent of U.S. shoppers (and an even higher share in the U.K.) regularly begin their shopping journey online.
  2. Online ratings and reviews have the greatest impact on shoppers’ purchasing decisions, outranking friends and family, and carrying four to five times more weight than store associates.
  3. Nearly 90 percent of all retail revenue is still transacted in the store.


Retail today is face-to-face with a shopper who’s squarely at the intersection of e-commerce, an ever-present smartphone, and an always-on connection to the Internet.


Few retailers are blind to the big behavioral shift. Most brands are responding with strategic omni-channel investments that seek to erase legacy channel lines between customer databases, inventories, vendor lists, and promotions.



Channel-centric organizations are being trimmed, scrubbed, or reshaped. There’s even a willingness — at least among some far-sighted brands — to deal head-on with the thorny challenge of revenue recognition.


All good. All necessary.



Redefining the Retail Space


But, as far as I can tell, only a handful of leaders are asking the deeper question: what, exactly, is the new definition of the store?


What is the definition of the store when the front door to the brand is increasingly online?


What is the definition of the store when shoppers know more than the associates, and when the answer to the question of how and why becomes — at the point of purchase — more important than what and how much?


What is the definition of the store beyond digital? Or of a mash-up of the virtual and physical?


What is the definition — not of brick-and-mortar and shelves and aisles and four-ways and displays — but of differentiating value delivery?


This is a topic we’re now exploring through whiteboard sessions and analyst and advisor discussions. We’re hard at work reviewing the crucial capabilities that will drive the 2018 cross-brand architecture.


Stay tuned. I’ll be sharing my hypotheses (and findings) as I forge ahead.



Jon Stine
Global Director, Retail Sales

Intel Corporation


This is the second installment of the Tech in Retail series.



1 National Retail Federation. “2015 National Retail Federation Data.” 06 January 2015.


Meet Your New Business Partner – the IT Department

Over the last several years, Intel IT has been implementing the Information Technology Infrastructure Library (ITIL) framework to transform our service delivery and enable us to align more effectively with the strategies and priorities of each of Intel’s lines of business (LOBs). In doing so, we can focus on high-priority activities that have the potential to transform Intel’s entire business and boost the relevance of IT. As the Chief of Staff for Product Development IT and the Director of Business Solutions Integration for Intel IT, I’m looking forward to meeting with others who have found the same value in this practice or are considering starting that journey.



Intel IT at the Forefront of Business Relationship Management


From the top down, Intel IT fully understands the importance of business relationship management. In the last 18 months, we have transitioned from an organization loosely coupled to the business to one directly aligned with the business, literally sitting at the table to help make key business decisions.


To survive today, organizations must be adept at making effective use of information technology (IT) to support business operations and administration. Only a few, however, truly innovate business products, services, processes, and business models, even though today’s technology landscape offers a host of innovation enablers.

—Vaughan Merlyn, co-founder of the Business Relationship Management Institute

In 2013, Intel’s CIO, Kim Stevenson, personally asked each LOB to include an IT general manager (GM) on their staff. This suggestion was met favorably by the LOBs, who saw tremendous value in connecting more formally and more closely with IT.


Intel IT has adopted a user-centered approach to delivering IT services that enables us to optimize our IT solutions, improve employee productivity, and increase business velocity. Our user-centered approach involves proactively engaging and partnering with Intel employees and business groups to learn about their needs for information, technology, and services, as well as desired experience. ITIL has been integral in placing the customer at the center, and our new Business Solutions Integration (BSI) service aligns with our user-centered IT strategy. It integrates business relationship management and business demand management, presenting the LOBs with a “One IT” view. Each LOB has a dedicated IT LOB GM, along with other dedicated IT staff that form that LOB’s core IT team: a business relationship manager, a principal engineer, and a finance controller.


The day I’m representing Intel’s LOB more than my day job, I’ve arrived. 

    —Intel IT Staff Member

With a single point of contact for IT, the LOBs can more easily request services. But more important, IT is attuned to each LOB’s strategies, priorities, and pain points. We’ve slashed the time it takes us to say “yes” or “no” to a business request from an average of 36 hours to 8 hours, and our level of support has improved dramatically, according to annual Partnership Excellence surveys.




Run, Grow, Transform


IT used to be thought of as the organization that kept the lights on and the business running, building tools when necessary. But here at Intel, while Intel IT does indeed keep the business running, our best value lies in proactively collaborating with our customers. Therefore, instead of focusing exclusively on “Run” activities (such as providing network connectivity), we also actively pursue “Grow” and “Transform” activities.


In the “Grow” category, for example, we conduct proofs of concept (PoCs) and enterprise early adoption tests for emerging technologies. Even more valuable are our “Transform” activities, where we are directly involved in co-creating marketable products with our product groups and providing Intel with competitive advantage.

Our BSI service incorporates these higher-value activities through its integration with the IT2Intel program. I’ll explore each of these activities in more detail in future blogs. But briefly, our IT2Intel program enables us to accelerate Intel’s growth in enterprise markets by leveraging Intel IT’s expertise in partnership with Intel product groups.




Shifting with the Business



Our close alignment with Intel’s lines of business (LOBs) helps us shift our priorities to meet the growing demand from the Internet of Things Group (IoTG).

As an example of how our direct involvement with Intel’s LOBs shapes our work, consider the following graphic that shows the distribution of business requests from the various LOBs. In 2013, Intel’s Internet of Things Group (IoTG), represented by the dark blue block at the top of the left-hand graph, had very few requests for IT. But in 2014, the number of IoTG business requests grew significantly. Because we have a seat at the table, we were able to evolve with the business and meet the demands of this burgeoning sector of Intel’s market.


Through our close communication with the IoTG and early PoCs, we’ve deployed infrastructure based on the Intel® IoT Platform. We are leveraging that experience to help the group deliver solutions to Intel customers. This is just one example of how, through our BSI service, IT stays relevant and valuable to the entire enterprise.

I encourage you to connect with me on the IT Peer Network and on Twitter @azmikephillips to share your thoughts and experiences relating to IT business relationship management and how it can metamorphose the role of IT from transactional to transformational.


Where in the World Is… My Mobile-Device Design Information?

OEMs and other customers use Intel’s system-on-a-chip (SoC) products in their mobile devices. Intel makes a variety of SoCs, and any one SoC includes many components, with processor, memory controller, graphics, and sound integrated on a single chip. Each of these components comes with its own documentation, and there’s even more documentation that describes how to integrate these components with other custom components designed by the OEM. Pretty soon, you have tens of thousands of pages of documentation.


But each Intel customer needs only a fraction of the total available documentation — a piece here and a piece there. They don’t want to read a 20,000-page document to find the three paragraphs they need.


Intel IT recently partnered with the Intel product group that helps Intel customers with mobile device design, to improve the delivery of content to customers.

Enter Stage Right: Topic-Based Content

Which would you rather use: a 500-page cookbook with general headings like “stove-top cooking” and “oven recipes,” or one with tabs for breakfast, lunch, and dinner, and cross-references and indexes that help you find casseroles, breads, stir-fries, and crockpot recipes, as well as recipes that use a particular ingredient such as sour cream or eggs? Clearly, the latter would be easier to use because you can quickly find the recipes (topics) that interest you.


Darwin Information Typing Architecture, known as DITA (pronounced dit-uh), is an XML-based publishing standard defined and maintained by the OASIS DITA Technical Committee. DITA can help structure, develop, manage, and publish content, making it easier to find relevant information.


Four basic concepts underlie the DITA framework:

  • Topics. A topic is the basic content unit of DITA, defined as a unit of information that can be understood in isolation and used in multiple contexts. Topics address a single subject and are short and standardized to include defined elements, such as name, title, information type, and expected results.
  • DITA maps. DITA maps identify the products a topic is associated with and the target audience. All these things help determine which topics are included in search results. DITA maps also include navigational information, such as tables of contents.
  • Output formats. DITA-based content can be delivered in various formats, such as web, email, mobile, or print. For ease of use, the content’s final design and layout—its presentation—varies to accommodate the unique characteristics of each output format.
  • Dynamic content. Customers can select and combine different topics to create their own custom documents, which is sort of like being able to replace one piece of a DNA map to create a brand new animal.

(If DITA intrigues you, consider attending the 2015 Content Management Strategies/DITA North America conference in Chicago, April 20–22.)
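To make the “topic” concept concrete, here is a rough sketch of what a single topic might look like when built programmatically. This is a hand-made illustration, not Intel documentation: the element names (concept, title, conbody, p) come from the standard DITA concept module, but the id and text content are invented.

```python
import xml.etree.ElementTree as ET

# Build a minimal DITA concept topic in memory. A topic addresses a
# single subject and carries standardized elements: an id, a title,
# and a typed body (conbody for concept topics).
topic = ET.Element("concept", id="soc_memory_controller")
ET.SubElement(topic, "title").text = "Configuring the Memory Controller"
body = ET.SubElement(topic, "conbody")
ET.SubElement(body, "p").text = (
    "Each SoC integrates a memory controller that must be configured "
    "before custom OEM components can access DRAM."
)

xml = ET.tostring(topic, encoding="unicode")
print(xml)
```

Because each topic stands alone like this, it can be referenced from any number of DITA maps and published to any output format.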

Intel’s Mobile Design Center Leverages DITA to Improve Our Customer’s User Experience

We designed a solution that eliminates the need for the previous long-form documentation. Instead, the solution enables SoC customers to assemble relevant content based on topics of interest. To achieve this, the Client Computing Group changed its documentation structure to topic-based content so that customers can quickly find highly specific information, enabling faster time to market for their mobile solutions and reducing the amount of time Intel engineers must spend helping customers find the information they need. The content is tagged with metadata so that customers can search on specific topics and bundle those topics into custom binders that they can reference or print as needed.
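The tag-search-bundle workflow just described can be sketched in a few lines of Python. The topic titles and metadata fields below are invented for illustration; a real implementation would query the portal’s content repository rather than an in-memory list.

```python
# Hypothetical topic store: each topic carries metadata tags that drive
# search; a "binder" is simply the subset of topics a customer selects.
topics = [
    {"title": "Memory Controller Init",    "product": "SoC-A", "area": "memory"},
    {"title": "Audio Codec Integration",   "product": "SoC-A", "area": "sound"},
    {"title": "GPU Power States",          "product": "SoC-B", "area": "graphics"},
]

def search(topics, **tags):
    """Return topics whose metadata matches every requested tag."""
    return [t for t in topics
            if all(t.get(k) == v for k, v in tags.items())]

# A customer designing around SoC-A bundles only the topics they need,
# instead of paging through a 20,000-page document.
binder = search(topics, product="SoC-A")
print([t["title"] for t in binder])
```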


The Intel Mobile Design Center portal is described in detail in our paper, “Optimizing Mobile-Device Design with Targeted Content.” The portal’s ease of use contributed significantly to overall customer satisfaction with the solution. According to a survey we conducted, customer satisfaction scores increased from 69 percent before implementation to 80 percent after.

Based on what the mobile communications group created in the Mobile Design Center, other groups are taking notice and creating their own design centers. For example, the Service Provider Division has committed to creating its own design center and is delivering all of its content in DITA to provide an even more interactive design for its customers.

Getting from Here to There

Converting existing FrameMaker and Word documents to DITA was not an easy undertaking. For the mobile communications group, some content wasn’t converted due to lack of time, although the group has committed to using DITA for all new content. This group performed the conversion manually, at a rate of about 5 to 10 pages per hour. The entire conversion project took months.


For the second group we worked with, who converted their entire documentation set, the conversion was accomplished using several methods. For large FrameMaker docs, they used a third-party product to partially automate the conversion process. While the resulting DITA docs still needed manual touch-up, the automated conversion was a time-saver. For smaller FrameMaker documents, topics were created manually. For Word docs, topics were manually cut and pasted.


So, was the effort worth it? Both groups agree that indeed it was. First, conversion to DITA revealed that there was a lot of duplication between documents. When in the DITA format, revisions to a topic only take place in that topic — there is no need to search for every document that contains that topic. Not only does this reduce the time it takes to make revisions, but it also improves the quality of our documentation. In the past, without DITA, some documentation might be out-of-date because a topic was revised in one place but not in another.


“By converting to DITA we reduced the amount of content, allowing for reuse. This also reduced the amount of work for the authors,” said one team member. “DITA gives you a better feel of the makeup of your content,” said another.


Other team members touted improved revisions and version control and the ability to tag content by more than just document name.

What’s Next for DITA at Intel?

Because the solution we created is scalable, we anticipate that additional product and business groups across Intel will begin to take advantage of topic-based content to improve customer experience and Intel’s efficiency.


I’d love to hear how other enterprises are putting DITA to work for their customers, increasing customer satisfaction, encouraging dynamic content creation, and accelerating the pace of business. Feel free to share your comments and join the conversation at the IT Peer Network.


Connected Data and Advanced Analytics Lead to Significant Revenue Uplift at Intel

Business analytics and data insights empower today’s business leaders to make faster decisions. A recent data consolidation and analytics project delivered a $264 million revenue uplift for Intel in 2014, as highlighted in our recently published Annual Business Review. This $264 million represents only a portion of the $351 million in value generated by Intel IT through the use of big data, business intelligence, and analytics tools. Access to connected data in an efficient and timely manner has enabled stakeholders to analyze market trends and make faster, better business decisions.





The Right Data at the Right Time

Intel’s business processes use a significant amount of historical data to reach decisions. But isolated datasets are not very useful because they provide only a glimpse of a much larger picture. Recognizing the power of connected data, Intel IT engaged in an 18-month data cleansing and consolidation effort, connecting more than 200 GB of historical data from various disparate and vertical systems using common measures and dimensions.

The complexity of this project was daunting. There were many spreadsheets and applications, and the same data often had inconsistent identifiers across datasets. Our efforts resulted in replacing more than 4,000 spreadsheets with a single database solution that included over 1,000 data measures and 12 dimensions, as well as tracking information for about 4 million production and engineering samples provided to customers.
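The core mechanical problem here, reconciling inconsistent identifiers so that rows from different sources land in a single record, can be sketched as follows. The identifiers, alias map, and measures are hypothetical; the actual project operated on thousands of spreadsheets, not two small lists.

```python
# Two hypothetical spreadsheet extracts that refer to the same product
# under different identifiers; an alias map supplies the common dimension.
ALIASES = {"CPU-1234": "P1234", "p1234": "P1234", "P1234": "P1234"}

samples_sheet = [{"id": "CPU-1234", "samples_shipped": 40}]
orders_sheet  = [{"id": "p1234",    "units_ordered": 1000}]

def consolidate(*sheets):
    """Merge rows from many sheets into one record per canonical id."""
    merged = {}
    for sheet in sheets:
        for row in sheet:
            canon = ALIASES[row["id"]]          # normalize the identifier
            record = merged.setdefault(canon, {"id": canon})
            record.update({k: v for k, v in row.items() if k != "id"})
    return merged

db = consolidate(samples_sheet, orders_sheet)
print(db["P1234"])
```

Once every source agrees on canonical identifiers and dimensions, analytics and visualization layers can be built on top of the single consolidated store.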

Even connected data, however, is not inherently valuable unless it is conveyed in terms of trends and patterns that guide effective decision making. On top of our now-connected data, we added advanced analytics and data visualization capabilities that enable Intel’s decision makers to convert data into meaningful insights. About 9,000 application users who serve Intel and external customers have access to this data, along with 15,000 reporting users.


As part of the project, we automated our data management processes, so that we can now integrate new datasets in just a few hours, instead of in several months.



Boosting Sales with Reseller Market Insights

Another significant chunk of the previously mentioned $351 million — $76 million — was generated by a sales and marketing analytics engine that provides valuable information to Intel sales teams, helping them strategically focus their sales efforts to deliver greater revenue. The engine’s recommendations identify which customers sales reps should contact and what they should talk to them about. This data significantly shortened the sales cycle and enabled sales reps to reach customers who were previously off the radar. The fact that this recommendation engine garnered Intel a 2014 CIO 100 award illustrates how important CIOs consider technology in today’s business environment.



What’s Next for Data Visualization at Intel

Going forward, we intend to promote collaborative analytics among Intel’s decision makers. For example, Intel IT has developed an Info Wall that harnesses the power of data visualization. This solution is built on Intel® architecture and is Intel’s first interactive video wall, with a viewing area measuring 5 feet high and 15 feet wide. While it’s too early to state any specific results, this unique implementation will enable new possibilities for business intelligence and data visualization. Currently, the Info Wall and data focus on sales and marketing; we plan to soon expand the application of the Info Wall to other areas of Intel’s business.


In an age when organizations such as Intel are rich in data, finding value in this data lies in the ability to analyze it and efficiently derive actionable business intelligence. Intel IT will continue to invest in tools that can transform data into insights to help solve high-value business problems.



Top 3 Healthcare Security Takeaways from HIMSS 2015

Security was a major area of focus at HIMSS 2015 in Chicago. From my observations, here are a few of the key takeaways from the many meetings, sessions, exhibits, and discussions in which I participated:


Top-of-Mind: Breaches are top-of-mind, especially cybercrime breaches such as those recently reported by Anthem and Premera. No healthcare organization wants to be the next headline and incur the staggering business impact that follows. Regulatory compliance is still important, but in most cases it is not currently the top concern.


Go Beyond: Regulatory compliance is necessary but not enough to sufficiently mitigate the risk of breaches. To have a fighting chance at avoiding most breaches, and at minimizing the impact of those that do occur, healthcare organizations must go well beyond the minimum required for regulatory compliance.


Multiple Breaches: Cybercrime breaches are just one kind of breach. There are several others, for example:

  • Breaches from loss or theft of mobile devices which, although often less impactful (because they typically involve a subset rather than all patient records), occur far more frequently than the cybercrime breaches that have recently hit the news headlines.


  • Insider breach risks are greatly underappreciated, and saying they are not sufficiently mitigated would be a major understatement. This kind of breach involves a healthcare worker accidentally exposing sensitive patient information to unauthorized access, as when patient data is emailed in the clear, put unencrypted on a USB stick, posted to an insecure cloud, or sent via an unsecured file-transfer app.


  • Healthcare workers are increasingly empowered with mobile devices (personal, BYOD, and corporate), apps, social media, wearables, the Internet of Things, and more. These tools enable amazing new benefits in improving patient care, but they also bring major new risks. Well-intentioned healthcare workers, under time and cost pressure, have ever more latitude to do wonderful things for patient care, but also more opportunities to stumble into accidents that can lead to breaches. Annual “scroll to the bottom and click accept” security awareness training is often ineffective, and certainly insufficient.


  • To improve the effectiveness of security awareness training, healthcare organizations need to engage healthcare workers on an ongoing basis. Practical strategies I heard discussed at this year’s HIMSS include gamified spear-phishing solutions that help organizations simulate spear-phishing emails and train healthcare workers to recognize and avoid them. Weekly or biweekly emails can also help workers understand recent healthcare security events, such as breaches at peer organizations (a “keeping it real” strategy): how they occurred, why they matter to healthcare workers, patients, and the organization, and how everyone can help.


  • Ultimately, any organization seeking to achieve a reasonable security posture and sufficient breach-risk mitigation must first instill a culture of “security is everyone’s job.”


What questions do you have? What other security takeaways did you get from HIMSS?


The Promise – and Demands – of Precision Medicine

The idea of precision medicine is simple: When it comes to medical treatment, one size does not necessarily fit all, so it’s important to consider each individual’s inherent variability when determining the most appropriate treatment. This approach makes sense, but until recently it has been very difficult to achieve in practice, primarily due to lack of data and insufficient technology. However, in a recent article in the New England Journal of Medicine, Dr. Francis Collins and Dr. Harold Varmus describe President Obama’s new Precision Medicine Initiative, saying they believe the time is right for precision medicine. The way has been paved, the authors say, by several factors:


  • The advent of important (and large) biological databases;
  • The rise of powerful methods of generating high-resolution molecular and clinical data from each patient; and
  • The availability of information technology adequate to the task of collecting and analyzing huge amounts of data to gain the insight necessary to formulate effective treatments for each individual’s illness.


The near-term focus of the Precision Medicine Initiative is cancer, for a variety of good reasons. Cancer is a disease of the genome, and so genomics must play a large role in precision medicine. Cancer genomics will drive precision medicine by characterizing the genetic alterations present in patients’ tumor DNA, and researchers have already seen significant success with associating these genomic variations with specific cancers and their treatments. The key to taking full advantage of genomics in precision medicine will be the use of state-of-the-art computing technology and software tools to synthesize, for each patient, genomic sequence data with the huge amount of contextual data (annotation) about genes, diseases, and therapies available, to derive real meaning from the data and produce the best possible outcomes for patients.


Big data and its associated techniques and technologies will continue to play an important role in the genomics of cancer and other diseases, as the volume of sequence data continues to rise exponentially along with the relevant annotation. As researchers at pharmaceutical companies, hospitals and contract research organizations make the high information processing demands of precision medicine more and more a part of their workflows, including next generation sequencing workflows, the need for high performance computing scalability will continue to grow. The ubiquity of genomics big data will also mean that very powerful computing technology will have to be made usable by life sciences researchers, who traditionally haven’t been responsible for directly using it.


Fortunately, researchers requiring fast analytics will benefit from a number of advances in information technology happening at just the right time. The open-source Apache Spark™ project gives researchers an extremely powerful analytics framework right out of the box. Spark builds on Hadoop® to deliver faster time to value to virtually anyone with some basic knowledge of databases and some scripting skills. ADAM, another open-source project, from UC Berkeley’s AMPLab, provides a set of data formats, APIs and a genomics processing engine that help researchers take special advantage of Spark for increased throughput. For researchers wanting to take advantage of the representational and analytical power of graphs in a scalable environment, one of Spark’s key libraries is GraphX. Graphs make it easy to associate individual gene variants with gene annotation, pathways, diseases, drugs and almost any other information imaginable.
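GraphX itself runs on a Spark cluster, but the kind of variant-to-annotation association the paragraph describes can be illustrated with a plain in-memory graph. The variant, gene, pathway, and drug names below are well-known public examples chosen for illustration, not data from the initiative, and the traversal is a toy stand-in for GraphX’s distributed operators.

```python
from collections import defaultdict

# Toy property graph: edges connect a gene variant to annotation nodes
# (genes, pathways, drugs). In GraphX these would be vertex and edge RDDs.
edges = [
    ("variant:V600E", "gene:BRAF"),
    ("gene:BRAF", "pathway:MAPK"),
    ("gene:BRAF", "drug:vemurafenib"),
]

graph = defaultdict(set)
for src, dst in edges:
    graph[src].add(dst)
    graph[dst].add(src)  # undirected, for simple neighborhood queries

def neighbors(node, depth=1):
    """Collect everything reachable within `depth` hops of a node."""
    seen, frontier = {node}, {node}
    for _ in range(depth):
        frontier = {n for f in frontier for n in graph[f]} - seen
        seen |= frontier
    return seen - {node}

# Two hops from a variant: its gene, plus that gene's pathway and drug context.
print(sorted(neighbors("variant:V600E", depth=2)))
```

This is exactly the convenience graphs buy you: one traversal pulls a variant’s gene together with the pathway and therapy annotations attached to that gene.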


At the same time, Cray has combined high-performance analytics and supercomputing technologies into the Intel-based Cray® Urika-XA™ extreme analytics platform, an open, flexible, and cost-effective platform for running Spark. The Urika-XA system comes preintegrated with Cloudera Hadoop and Apache Spark and optimized for the architecture to save time and management burden. The platform uses fast interconnects and an innovative memory-storage hierarchy to provide a compact and powerful solution for the compute-heavy, memory-centric analytics perfect for Hadoop and Spark.


Collins and Varmus envision more than 1 million Americans volunteering to participate in the Precision Medicine Initiative. That’s an enormous amount of data to be collected, synthesized and analyzed into the deep insights and knowledge required to dramatically improve patient outcomes. But the clock is ticking, and it’s good to know that technologies like Apache Spark and Cray’s Urika-XA system are there to help.


What questions do you have?


Ted Slater is a life sciences solutions architect at Cray Inc.


Attackers Expand to Hack Hardware

Cyber attackers and researchers continually evolve, explore, and push the boundaries of finding vulnerabilities.  Hacking hardware is the next step on that journey.  It is important for computing device makers and the IoT industry to understand they are now under the microscope and attackers are a relentless and unforgiving crowd.  Application and operating systems have taken the brunt of attacks and scrutiny over the years, but that may change as the world embraces new devices to enable and enrich our lives.

Vulnerabilities exist everywhere in the world’s technology landscape, but they are not equal: it can take greatly varying levels of effort, timing, luck, and resources to take advantage of them. Attackers tend to follow the path of least resistance in pursuit of their nefarious goals. As security closes the easiest paths, attackers move on to the next available option. It is a chess game.

In the world of vulnerabilities there is a hierarchy, from easy to difficult to exploit and from trivial to severe in overall impact.  Technically, hacking data is easiest, followed by applications, operating systems, firmware, and finally hardware.  This is sometimes referred to as the ‘stack’ because it is how systems are architecturally layered. 
The first three areas are software. They are very portable and dynamic across systems, but subject to great scrutiny by most security controls. Trojans are a classic example, where data becomes modified with malicious payloads and can be easily distributed across networks. Such manipulations are relatively exposed and easy to detect at many different points. Applications can be maliciously written or infected to act in unintended ways, but pervasive anti-malware is designed to protect against such attacks and is constantly watchful. Vulnerabilities in operating systems provide a means to hide from most security, open up a bounty of potential targets, and offer a much greater depth of control. Knowing the risks, OS vendors constantly identify problems and send a regular stream of patches to shore up weaknesses, limiting the viability of continued exploitation. It is not until we get to firmware and hardware that most of the mature security controls drop away.


The firmware and hardware, residing beneath the software layers, tends to be more rigid and represents a significantly greater challenge to compromise and scale attacks.  However, success at the lower levels means bypassing most detection and remediation security controls which live above, in the software.  Hacking hardware is very rare and intricate, but not impossible.  The level of difficulty tends to be a major deterrent while the ample opportunities and ease which exist in the software layers is more than enough to keep hackers comfortable in staying with easier exploits in pursuit of their objectives. 

Some attackers are moving down the stack. They are the vanguard, blazing a path for others to follow. Their efforts, processes, and tools will be refined and reused by others. There are tradeoffs to attacks at any level. The easy vulnerabilities in data and applications yield far less benefit for attackers in the way of remaining undetected, persisting after actions are taken against them, and the overall level of control they can gain. Most security products, patches, and services have been created to detect, prevent, and evict software-based attacks. They are insufficient for dealing with hardware or firmware compromises. Due to the difficulty and lack of obvious success, most vulnerability research doesn’t explore much of the firmware and hardware space. This is changing. It is only natural: attackers will seek to maneuver where security is not pervasive.


As investments in offensive cyber capabilities from nations, organized crime syndicates, and elite hackers-for-hire continue to grow, new areas such as IoT hardware, firmware, and embedded OS vulnerabilities will be explored and exploited.

Researchers targeting hardware are breaking new ground which others will follow, eventually leading to broad research in hardware vulnerabilities across computing products which influence our daily lives.  This in turn will spur security to evolve in order to meet the new risks.  So the chess game will continue.  Hardware and firmware hacking is part of the natural evolution of cybersecurity and therefore a part of our future we must eventually deal with.


Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts




Retailers Must Learn to Love the Digital Native


We know from the research that today’s shopper hops back and forth between channels on their way to purchase. To wit:


Some two-thirds of U.S. shoppers regularly begin their decision journeys online.1


More than eight in ten rely most often on online ratings and reviews when making purchase decisions.2

And yet some 88 percent of revenue is transacted in a brick-and-mortar store.3


From online to in-store. Obvious. But what all of us often miss is that it’s not just a PC-to-store journey anymore.


We’re now in the world of the eight-screen-hopper shopper.

Shopping in a World of Screens


Today’s digital natives, whose numbers and purchasing power are growing, live by their screens. The smartphone, as we know, is the most beloved device — the ultimate personal device — and worldwide adoption is increasing rapidly. Most smartphone users would sacrifice their wallets before giving up their phones. Indeed, by 2017, it’s expected that one-third of the world’s population will be using smartphones.4


A recent Capgemini survey across 13 developed nations found that one in five digital shoppers polled — the true digitals — prefer using smartphones for every aspect of the shopping experience. They search, compare prices, order, pay, and track delivery with their phones.5


But the smartphone is only one of the points of influence. Retailers who pursue a smartphone-only strategy will address just a portion of what's needed for a cohesive mobile strategy.

Predicting the Desires of the Digital Native


It’s time for a retail screens strategy. One that connects with and keeps the shopper as they move from the smartphone screen to the PC screen to the tablet screen to the video screen to the smart vending screen to the digital billboard screen to the in-store display screen to the kiosk-terminal screen to the automobile screen.


What is today’s automobile but a smartphone with a steering wheel? The truly mobile device.


Given the eight-screen reality, today's leaders must focus on consistent, device-right and journey-right content delivery. They must influence the decision at the right time, through the right screen, with the right message.


It’s an area of emphasis for us in 2015.


Check back in this space for more in the coming weeks. This is the third installment of a series on Retail & Tech. Click here to read Moving from Maintenance to Growth in Retail Technology and The Behavioral Shift Driving Change in the World of Retail.


Jon Stine
Global Director, Retail Sales

Intel Corporation


1 Cisco, IBSG, January 2014.

2 Merkle-Intel Digital Shopper Behavior Survey, 2014.

3 National Retail Federation. “2015 National Retail Federation Data.” 06 January 2015.

4 eMarketer, 2014.

5 CapGemini, “Digital Shopper Relevancy,” 2014.


Change Your Desktops, Change Your Business

Do you ever think about how much your business could accomplish if your employees were even more productive? If you’re like most people, the answer is “yes.” Then again, it’s quite likely your employees are already working hard and working smart, putting in quality time day in and day out to help your business grow and prosper. How much more productive could they really be?


A lot more, actually. And not because your workers aren’t putting in the effort. In most businesses, employees are doing all they can to help their company succeed: arriving early, skipping lunch (at least sometimes), getting things done, making a difference. But in many cases, their computers simply can’t keep up.


In fact, a new study has found that desktops older than five years dramatically underperform compared to today’s newer PCs, which means that no matter how hard your employees try, if they’re using an outdated computer there’s a limit to how productive they can be. But there’s good news too. The same study shows that new desktop PCs powered by the latest Intel® processors, including all-in-ones (AiOs) and Mini PCs, outperform older machines by as much as 145 percent. As the study notes, “a faster, more responsive desktop means your employees can finish the same tasks in less time, ultimately providing them an opportunity to be more productive.”


Likewise, a study by J. Gold Associates on replacing enterprise PCs recommends companies move to a two-year refresh cycle for most corporate machines. The old model of waiting three to four years before an upgrade, when the cost of maintaining a PC outweighs the cost of replacing it, is no longer valid in the modern age of advanced chip technology, cloud-based systems, and new tech innovations. Research shows that significant ROI can be achieved by upgrading more often so companies can do more with the resources they currently have.


Faster System Performance = Increased Productivity


Working on a slow PC is painful. With a new, more responsive desktop computer, workers can complete the same tasks faster—up to 145 percent faster—according to the desktop upgrade study conducted by Principled Technologies. The report uses five of the most common industry benchmarks to compare performance between a new Intel®-based AiO and Mini PC and a legacy desktop. With newer PCs boasting faster processing speeds, the study found that employees can finish tasks in less than half the time compared with older computers.


Imagine this: A 10-minute task could be shaved down to a mere four minutes. Now multiply this improvement across an entire enterprise. An organization replacing 10,000 desktops could save 900,000 employee hours over four years! Think of the opportunity for overall productivity—based on the size of your desktop fleet—not to mention the peace of mind your employees will experience working on the latest, fastest machines. Happy employees mean a happy company, right?
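As a rough check on those numbers, here's a back-of-the-envelope sketch. The 250-workdays-per-year figure is an assumption of ours, not from the study, and the task counts are purely illustrative:

```python
# Back-of-the-envelope check on the study's claims (illustrative only).
# "Up to 145 percent faster" means the new PC finishes the same task in
# 1 / (1 + 1.45) of the time.
speedup = 1 + 1.45                       # 2.45x throughput
old_task_min = 10.0
new_task_min = old_task_min / speedup    # ~4 minutes, as in the example above
saved_per_task = old_task_min - new_task_min

# How many such tasks per desktop per workday would it take to reach the
# quoted 900,000 hours saved across 10,000 desktops over four years?
desktops, hours_saved, workdays = 10_000, 900_000, 4 * 250  # assume 250 workdays/yr
tasks_per_day = hours_saved * 60 / (desktops * workdays * saved_per_task)
print(f"{new_task_min:.1f} min per task; ~{tasks_per_day:.1f} task/day/desktop")
```

In other words, roughly one 10-minute task per employee per day is enough to account for the headline figure.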


Stay tuned to learn more about the study and discover how your business can do more with a PC refresh. In Part 2, we’ll cover how new desktop computers can help your business save money by paying less for power and reducing IT costs across the organization. You can also download the full study online.

In the meantime, join the conversation using #IntelDesktop, and get ready to rediscover the desktop.

This is the third and most recent installment of the Desktop World Tech Innovation Series.

To view more posts within the series click here: Desktop World Series


5 Questions for Dr. Peter White, GenomeNext

Dr. Peter White is the developer and inventor of the “Churchill” platform, and serves as GenomeNext’s principal genomic scientist and technical advisor.


Dr. White is a principal investigator in the Center for Microbial Pathogenesis at The Research Institute at Nationwide Children's Hospital and an Assistant Professor of Pediatrics at The Ohio State University. He is also Director of Molecular Bioinformatics, serving on the research computing executive governance committee, and Director of the Biomedical Genomics Core, a nationally recognized microarray and next-gen sequencing facility that helps numerous investigators design, perform, and analyze genomics research. His research program focuses on molecular bioinformatics and high performance computing solutions for "big data," including discovery of disease-associated human genetic variation and understanding the molecular mechanisms of transcriptional regulation in both eukaryotes and prokaryotes.


We recently caught up with Dr. White to talk about population scale genomics and the 1000 Genomes Project.


Intel: What is population scale genomics?


White: Population scale genomics refers to the large-scale comparison of sequenced DNA datasets of a large population sample. While there is no minimum, it generally refers to the comparison of sequenced DNA samples from hundreds, even thousands, of individuals with a disease or from a sampling of populations around the world to learn about genetic diversity within specific populations.


The human genome is composed of approximately 3 billion DNA base pairs (nucleotides). The first human genome sequence was completed in 2006, the result of an international effort that took a total of 15 years. Today, with advances in DNA sequencing technology, it is possible to sequence as many as 50 genomes per day, making it possible to study genomics on a population scale.
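To put those throughput figures side by side, a simple illustrative calculation (our arithmetic, not from the interview):

```python
# At the modern throughput quoted above, sequencing an entire population
# cohort the size of the 1000 Genomes Project takes weeks, not years.
genomes = 2_504          # individuals in the 1000 Genomes Project
per_day = 50             # genomes sequenced per day with current technology
days = genomes / per_day
print(f"~{days:.0f} days to sequence {genomes} genomes at {per_day}/day")
```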


Intel: Why does population scale genomics matter?


White: Population scale genomics will enable researchers to understand the genetic origins of disease. Only by studying the genomes of thousands of individuals will we gain insight into the role of genetics in diseases such as cancer, obesity, and heart disease. The larger the sample size that can be analyzed accurately, the better researchers can understand the role that genetics plays in a given disease, and from that we will be able to better treat and prevent disease.


Intel: What was the first population scale genomic analysis?


White: The 1000 Genomes Project is an international research project that, through the efforts of a consortium of over 400 scientists and bioinformaticians, set out to establish a detailed catalogue of human genetic variation. This multi-million dollar project was started in 2008, and sequencing of 2,504 individuals was completed in April 2013. The data analysis was completed 18 months later, with the release of the final population variant frequencies in September 2014. The project resulted in the discovery of millions of new genetic variants and successfully produced the first global map of human genetic diversity.


Intel: Can analysis of future large population scale genomics studies be automated?


White: Yes. The team at GenomeNext and Nationwide Children's Hospital was challenged to analyze a complete population dataset compiled by the 1000 Genomes Consortium in one week as part of the Intel Heads In the Clouds Challenge on Amazon Web Services (AWS).  The 1000 Genomes Project is the largest publicly available dataset of genomic sequences, sampled from 2,504 individuals from 26 populations around the world.


All 5,008 samples (2,504 whole genome sequences and 2,504 high-depth exome sequences) were analyzed on GenomeNext's Platform, leveraging its proprietary genomic sequence analysis technology (recently published in Genome Biology) operating on the AWS Cloud powered by Intel processors. The entire automated analysis was completed in one week, with as many as 1,000 genome samples completed per day, generating close to 100TB of processed result files. The team found a high degree of correlation with the original analysis performed by the 1000 Genomes Consortium, with additional variants potentially discovered during the analysis performed using GenomeNext's Platform.
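The scale of that run is easier to appreciate with a little arithmetic on the figures quoted above (our own illustrative calculation):

```python
# Average throughput and per-sample output implied by the figures above.
samples = 5_008          # 2,504 whole genomes + 2,504 high-depth exomes
days = 7                 # total wall-clock time for the automated analysis
result_tb = 100          # approximate processed result files generated

avg_samples_per_day = samples / days        # ~715 on average (peaks at 1,000)
gb_per_sample = result_tb * 1000 / samples  # ~20 GB of results per sample
print(f"~{avg_samples_per_day:.0f} samples/day, ~{gb_per_sample:.0f} GB/sample")
```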


Intel: What does GenomeNext’s population scale accomplishment mean?


White: GenomeNext believes this is the fastest, most accurate, and most reproducible analysis of a dataset of this magnitude. This work will enable researchers and clinicians using population scale genomic data to distinguish common genetic variation, as discovered in this analysis, from rare pathogenic disease-causing variants. As population scale genomic studies become routine, GenomeNext provides a solution through which the enormous data burden of such studies can be managed, analysis can be automated, and results can be shared with scientists globally through the cloud. Access to a growing and diverse repository of DNA sequence data, including the ability to integrate and analyze that data, is critical to accelerating the promise of precision medicine.


Our ultimate goals are to provide a global genomics platform, automate the bioinformatics workflow from sequencer to annotated results, provide a secure and regulatory compliant platform, dramatically reduce the analysis time and cost, and remove the barriers of population scale genomics.


Experience Intel Integrated Video at NAB 2015

Today, 70 percent of US consumer Internet traffic is video, and it's growing every day as over-the-top (OTT) providers deliver TV and movies to consumers, and broadcasters and enterprises stream live events. Cloud computing is changing the landscape for video production as well. Much of the work that used to require dedicated workstations is being moved to servers in data centers and offered remotely by cloud service providers and private cloud solutions. As a result, the landscape for content creation and delivery is undergoing significant changes. The National Association of Broadcasters (NAB) show in Las Vegas highlights these trends. And Intel will be there, highlighting how we help broadcasters, distributors, and video producers step up to the challenges.


Intel processors have always been used for video processing, but today's video workloads place new demands on processing hardware. The first new demand is for greater processing performance. As video data volume explodes and encoding schemes become more complex, processing power becomes more critical. The second demand is for increased data center density. As video processing moves to servers in data centers, service cost is driven by space and power. And the third demand is for openness. Developers want language- and platform-independent APIs like OpenCL* to access CPU and GPU graphics functions. The Intel® Xeon® processor E3 platform with integrated Intel® Iris™ Pro Graphics and Intel® Quick Sync Video transcoding acceleration provides the performance and open development environment required to drive innovation and create the optimized video delivery systems needed by today's content distributors. And it does so with unparalleled density and power efficiency.


The NAB 2015 show provides an opportunity for attendees to see how these technologies come together in new, more powerful industry solutions  to deliver video content across the content lifecycle—acquire, create, manage, distribute, and experience.


We’ve teamed with some of our key partners at NAB 2015 to create the StudioXperience showcase, which demonstrates a complete end-to-end video workflow across the content lifecycle. Waskul TV will generate real-time 4K video and pipe it into a live production facility featuring Xeon E3 processors in an HP Moonshot* server and Envivio Muse* Live. The workflow is divided between on-air HD production for live streaming and 4K post-production for editorial and on-demand delivery. The cloud-based content management and distribution workflow is provided by Intel-powered technologies from technology partners, creating a solution that streams our content to the audience via Waskul TV.


Other booths at the show let attendees drill down into some of the specific workflows and the technologies that enable them. For example, “Creative Thinking 800 Miles Away—It’s Possible” lets attendees experience low latency, remote access for interactive creation and editing of video content in the cloud. You’ll see how Intel technology lets you innovate and experiment with modeling, animation, and rendering effects—anywhere, anytime. And because the volume of live video content generated by broadcasters, service providers, and enterprises continues to explode, we need faster and more efficient ways of encoding it for streaming over the Internet. So Haivision’s “Powerful Wide-Scale Video Distribution” demo will show how their Intel-based KulaByte* encoders and transcoders can stream secure, low latency HD video at extremely low bitrates over any network, including low cost, readily available, public Internet connections.


To learn more about how content owners, service providers, and enterprises are using Intel Xeon processor E3 based platforms with integrated HD Graphics and Intel Quick Sync video to tame the demand for video, check out the interview I did on Intel Chip Chat recently. And even if you’re not attending NAB 2015, you can still see it in action. I’ll be giving a presentation Tuesday, April 14 at 9:00 a.m. Pacific time. We’ll stream it over the very systems I’ve described, and you can watch it on Waskul.TV. Tune in.





© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.


NVMe SSD: An Introduction to the Standards for PCIe SSDs



NVMe is a term for faster storage designed for non-volatile memory (NVM) technologies. But what does it mean, and how do we break through the alphabet soup?


NVMe relates to NVM Express, an industry standard for using Non-Volatile Memory (e.g., NAND memory) in a Solid State Drive (SSD). NVMe standardizes the interface from the storage driver to the SSD, including the command set and features (e.g., power management). The standard enables native OS drivers in Windows*, Linux*, and VMware* to provide a seamless user experience. The standard was defined from the ground up for NVM, so it is capable of much higher IOPs and lower latency than legacy storage standards (SATA, SAS) that were designed for hard drives. NVMe first shipped in servers in the second half of 2014, and has just launched in client systems in April 2015. The starting point for the standards specification, and for resources on the NVMe standardization effort, is the NVM Express organization's website.


So now you know the difference between NVM (the media inside an SSD) and NVMe (the standard enabling high performance SSDs). The majority of SSDs in the market today support the legacy storage standards, SATA and SAS. To take full advantage of today's SSDs, you need NVMe. SATA is bottlenecked at 600 MB/s and ~100K IOPs maximum. With NAND media, NVMe today delivers almost 3 GB/s of read bandwidth and ~500K IOPs. And NVM memory companies are innovating with different types of NVM, as was recently seen in the announcement of 3D NAND, a whole new class of NVM. This means that any new platform needs to design in support for NVM Express.
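Putting the interface ceilings quoted above side by side (approximate figures from this post; real-world numbers vary by drive and workload):

```python
# Approximate interface limits quoted in the post; actual drives vary.
sata = {"read_mb_s": 600,  "iops": 100_000}   # legacy SATA ceiling
nvme = {"read_mb_s": 3000, "iops": 500_000}   # ~3 GB/s with NAND media

bw_gain = nvme["read_mb_s"] / sata["read_mb_s"]  # 5x read bandwidth
iops_gain = nvme["iops"] / sata["iops"]          # 5x IOPs
print(f"NVMe vs SATA: {bw_gain:.0f}x bandwidth, {iops_gain:.0f}x IOPs")
```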


You sometimes see drives, including the Intel P3700, described with intermingled acronyms: as either a PCIe or NVMe drive, or another mouthful, NVMe PCIe Solid State Drive. Why is that? Partly it is the newness of the terms, and partly a way of referencing the class of drives that came before NVMe SSDs, which were called PCIe SSDs. The Intel P3700 and all future Intel NVMe drives can be attached directly to the CPU through PCI Express (without an intervening host bus adapter). PCI Express is the hardware interface; NVMe standardizes the software, commands, and features for SSDs on top of the PCIe hardware bus. All future Intel drives on PCIe will be NVMe.  We expect the entire industry to move away from proprietary implementations and embrace the NVMe standard for PCIe SSDs.


The switch to NVMe is coming from SSD vendors. Two Data Center SSD market leaders already provide production NVMe drives, and there is dedication within the industry to offering many more NVMe SSDs. That helps you, the end user, because NVMe standards in your server platform guarantee the interoperability, choice, and performance you desire.


To help cement this topic a bit more, here are two excellent NVMe video blogs:

How do NVMe SSDs shape the future of the Data Center?

Unlocking SSD performance with NVM Express Technology


Seniors Embrace Social Networking Using Intel-Powered Tablets


For care home residents, keeping in touch with family can be difficult. It’s not always possible for relatives to visit as often as they’d like, and family members might live far away.

Avery care homes in Lincolnshire, UK, is bringing its residents together with their families using Intel technology-based tablets and the Finerday social network. Finerday is designed to be easily used by seniors, but also engaging for children. Besides supporting private messaging, email, photo display, and sharing, it enables video messaging using just two buttons.


“The nice thing about the tablets,” says Helen Brown, regional recreation and leisure manager for Avery care homes, “is that the residents feel they are in control. Some residents were quite scared by the Internet and by computers, but they’ve found the tablet approachable. Many of them are comfortable using the on-screen keyboard and find the Windows tablet’s keyboard easier to use than others they’ve tried.”

Video Messaging Resonates with Residents


The tablets' portability means residents can use them in their own rooms, which is especially helpful for those with hearing impairments, who can use them somewhere quiet.


The video messages are a popular feature with both residents and their family members. “Residents often get emotional using Finerday,” says Brown. “One of our residents has two loved ones who live abroad. They love being able to talk to each other. The family members say that although they can talk to the carers to hear how their mum is doing, they feel so happy when they see her on the screen in front of them, and can see she is happy.”

Top-Performing Tablets Use Intel Technology


Though Avery Lodge has used Finerday on a number of devices, “The Dell tablet with the Intel Atom processor is definitely the preferred tablet of all the ones we’ve tried,” says Brown. “The experience on Windows 8 is absolutely fantastic. Rather than the wait we had on previous tablets, the Intel technology-based tablets provide the information straight away. There are no pauses or gaps, which can be confusing for residents with dementia who don’t understand why we’ve stopped, or that we’re waiting for a page to load.”


The project has been so successful that Brown and her colleague Kerry Angeloni won an award for Best Care Initiative at the 2014 Nursing and Residential Care awards for introducing Finerday.

Read our case study to find out more about how seniors are using tablets at Avery care homes.

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.


Jane Williams

Online Sales Development Manager

Intel Corporation


Evaluating Security Aspects of M&A Investment Deals

M&A activities can introduce a number of unexpected security risks to an organization and affect the overall value of an investment.  Acquiring or divesting intellectual property, people, or technology environments can expose corporate assets, bypass important security controls, and create situations of liability and regulatory non-compliance.  Additionally, unknown security incidents at an acquisition may require significant clean-up investment and dramatically reduce the value of acquired IP, thus undermining the value of the prospective deal.

When acquiring another company, the security problems you may inherit can be a mystery.  Are their systems riddled with malware?  Are employees careless in their security practices?  Has the IP already been stolen?  Is the network vulnerable to outsiders?  Connecting an acquired company's assets, networks, processes, and people to a parent company can put the organization in jeopardy and quickly undermine an established security posture.

Experts believe examination of a company’s IT security posture should be part of the due diligence process prior to investment or mergers and acquisition activity.

It is important to evaluate the technical and behavioral aspects with consistent and comprehensive rigor, so proper risk management and deal value decisions can be made.  Analysis results become a primer for the institution of any controls deemed necessary as the project progresses.

For a few years, I had the pleasure of leading the security program for Intel's mergers, acquisitions, divestitures, site closures, and co-location projects.  I developed a training presentation to prepare new security champions and to educate deal partners on risk areas.


I found M&A security work to be truly fascinating and challenging.  Typically, there are political, business, technical, and behavioral challenges to overcome.  In the end, proper diligence in managing the security of M&A projects is important to the determination of proper deal value and lays the framework for establishing necessary controls to protect the acquiring organization.



Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts



Extending Open Source SDN and NFV to the Enterprise

By Christian Buerger, Technologist, SDN/NFV Marketing, Intel



This week I am attending the Intel Developer Forum (IDF) in Shenzhen, China, to promote Intel's software defined networking (SDN) and network functions virtualization (NFV) software solutions. During this year's IDF, Intel has made several announcements, and our CEO Brian Krzanich has showcased Intel's innovation leadership across a wide range of technologies with our local partners in China. On the heels of Krzanich's announcements, Intel Software & Services Group Senior VP Doug Fisher extended the message, stressing the importance of open source collaboration in driving industry innovation and transformation, citing OpenStack and Hadoop as prime examples.


I participated in the signing event and press briefing for a ground-breaking announcement between Intel and Huawei's enterprise division to jointly define a next-generation Network as a Service (NaaS) SDN software solution. Under the umbrella of Intel's Open Network Platform (ONP) server reference platform, Intel and Huawei intend to jointly develop an SDN reference architecture stack. This stack is based on integrating Intel architecture-optimized open source ingredients from projects such as Cloud OS/OpenStack, OpenDaylight (ODL), the Data Plane Development Kit (DPDK), and Open vSwitch (OVS) with virtual network appliances such as a virtual services router and virtual firewall. We are also deepening existing collaboration in various open source projects, including ODL (Service Function Chaining and performance testing), OVS (SR-IOV-based performance enhancements), and DPDK.


In addition to the broad range of open source SDN/NFV collaboration areas this agreement promotes, what makes it so exciting to me personally is the focus on the enterprise sector. Specifically, together with Huawei we are planning to develop reference solutions that target specific enterprise vertical markets such as education, financial services, and government. Together, we are extending our investments into SDN and NFV open source projects to not only accelerate advanced NaaS solutions for early adopters in the telco and cloud service provider space, but also to create broad opportunities to drive massive SDN adoption in the enterprise in 2015. As Swift Liu, President of Huawei’s Switch and Enterprise Communication Products, succinctly put it, Intel and Huawei “are marching from software-hardware collaboration to the entirely new software-defined era in the enterprise.”





© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.


An HPC Breakthrough with Argonne National Laboratory, Intel, and Cray

At a press event on April 9, representatives from the U.S. Department of Energy announced they had awarded Intel contracts for two supercomputers totaling just over $200 million as part of the CORAL program. Theta, an early production system, will be delivered in 2016 and will scale to 8.5 petaFLOPS across more than 2,500 nodes, while the 180 petaFLOPS, greater-than-50,000-node system called Aurora will be delivered in 2018. This represents a strong collaboration among Argonne National Laboratory, prime contractor Intel, and subcontractor Cray on a highly scalable and integrated system that will accelerate scientific and engineering breakthroughs.



Rendering of Aurora


Dave Patterson (President of Intel Federal LLC and VP of the Data Center Group) led the Intel team on the ground in Chicago; he was joined on stage by Peter Littlewood (Director of Argonne National Laboratory), Lynn Orr (Undersecretary for Science and Energy, U.S. Department of Energy), and Barry Bolding (Vice President of Marketing and Business Development for Cray). Also joining the press conference were Dan Lipinski (U.S. Representative, Illinois District 3), Bill Foster (U.S. Representative, Illinois District 11), and Randy Hultgren (U.S. Representative, Illinois District 14).


Dave Patterson at the Aurora Announcement (Photo Courtesy of Argonne National Laboratory)


This cavalcade of company representatives disclosed details on the Aurora 180 petaFLOPS/50,000 node/13 megawatt system. It utilizes much of the Intel product portfolio via Intel's HPC scalable system framework, including future Intel Xeon Phi processors (codenamed Knights Hill), second-generation Intel Omni-Path Fabric, and a new memory hierarchy composed of Intel Lustre, burst buffer storage, and persistent memory through high-bandwidth on-package memory. The system will be built using Cray's next generation Shasta platform.
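The disclosed figures imply some striking per-node and per-watt numbers. This is simple arithmetic on the announced peak specs; sustained application performance will of course differ:

```python
# Per-node and per-watt arithmetic from the disclosed Aurora specs.
peak_pflops = 180        # announced peak performance
nodes = 50_000           # announced node count
power_mw = 13            # announced power envelope

tflops_per_node = peak_pflops * 1_000 / nodes            # ~3.6 TFLOPS/node
gflops_per_watt = peak_pflops * 1e6 / (power_mw * 1e6)   # ~13.8 GFLOPS/W
print(f"~{tflops_per_node:.1f} TFLOPS/node, ~{gflops_per_watt:.1f} GFLOPS/W")
```

Notably, Theta's announced figures (8.5 petaFLOPS over 2,500 nodes) imply a similar per-node performance, consistent with its role as an early production system on the path to Aurora.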


Peter Littlewood kicked off the press conference by welcoming everyone and discussing Argonne National Laboratory, the Midwest's largest federally funded R&D center, fostering discoveries in energy, transportation, protecting the nation, and more. He handed off to Lynn Orr, who announced the $200 million contract and the Aurora and Theta supercomputers. He discussed some of Aurora's architectural details, the need for the U.S. to dedicate funds to build supercomputers that reach the next exascale echelon, and how that will fuel scientific discovery, a theme echoed by many of the speakers to come.


Dave Patterson took the stage to give background on Intel Federal, a wholly owned subsidiary of Intel Corporation. In this instance, Intel Federal conducted the contract negotiations for CORAL. Dave touched on the robust collaboration with Argonne and Cray needed to bring Aurora on line in 2018, as well as introducing Intel’s HPC scalable system framework – a flexible blueprint for developing high performance, balanced, power-efficient and reliable systems capable of supporting both compute- and data-intensive workloads.


Next up, Barry Bolding from Cray talked about the platform system underpinning Aurora – the next generation Shasta platform. He mentioned that when deployed, Aurora has the potential to be one of the largest/most productive supercomputers in the world.


And finally, Dan Lipinski, Bill Foster, and Randy Hultgren, all representing Illinois (Argonne's home base) in the U.S. House of Representatives, each gave a few short remarks. They echoed Lynn Orr's earlier point that the United States needs to stay committed to building cutting-edge supercomputers to remain competitive in a global environment and tackle the next wave of scientific discoveries. As Representative Hultgren put it succinctly: "[The U.S.] needs big machines that can handle big jobs."



Dan Lipinski (Photo Courtesy of Argonne National Laboratory)



Bill Foster (Photo Courtesy of Argonne National Laboratory)


Randy Hultgren (Photo Courtesy of Argonne National Laboratory)


After the press conference, Mark Seager (Intel Fellow, CTO of the Tech Computing Ecosystem) commented: "We are defining the next era of supercomputing." Al Gara (Intel Fellow, Chief Architect of Exascale Systems) took it a step further: "Intel is not only driving the architecture of the system, but also the new technologies that have emerged (or will be needed) to enable that architecture. We have the expertise to drive silicon, memory, fabric and other technologies forward and bring them together in an advanced system."



The Intel and Cray teams prepping for the Aurora announcement


Aurora’s disruptive technologies are designed to work together to deliver breakthroughs in performance, energy efficiency, overall system throughput and latency, and cost to power. This signals the convergence of traditional supercomputing and the world of big data and analytics that will drive impact for not only the HPC industry, but also more traditional enterprises.


Argonne scientists – who have a deep understanding of how to create software applications that maximize available computing resources – will use Aurora to accelerate discoveries surrounding:

  • Materials science: Design of new classes of materials that will lead to more powerful, efficient and durable batteries and solar panels.
  • Biological science: Understanding the capabilities and vulnerabilities of new organisms, which can lead to improved biofuels and more effective disease control.
  • Transportation efficiency: Collaborating with industry to improve transportation systems by designing enhanced aerodynamic features and enabling production of better, more efficient and quieter engines.
  • Renewable energy: Wind turbine design and placement to greatly improve efficiency and reduce noise.
  • Alternative programming models: Partitioned Global Address Space (PGAS) as a basis for Coarray Fortran and other unified address space programming models.


The Argonne Training Program on Extreme-Scale Computing will be a key program for training the next generation of code developers, readying them to drive science from day one when Aurora is made available to research institutions around the world.


For more information on the announcement, you can head to our new Aurora webpage or dig deeper into Intel’s HPC scalable system framework.






© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.


Desiderata for Enterprise Health Analytics in the 21st Century

With apologies and acknowledgments to Dr. James Cimino, whose landmark paper on controlled medical terminologies still sets a challenging bar for vocabulary developers, standards organizations and vendors, I humbly propose a set of new desiderata for analytic systems in health care. These desiderata are, by definition, a list of highly desirable attributes that organizations should consider as a whole as they lay out their health analytics strategy – rather than adopting a piecemeal approach.


The problem with today’s business intelligence infrastructure is that it was never conceived of as a true enterprise analytics platform, and it definitely wasn’t architected for the big data needs of today or tomorrow. Many – in fact, probably most – health care delivery organizations have allowed their analytic infrastructure to evolve in what a charitable person might describe as controlled anarchy. A steady demand for executive dashboards led to IT investment in home-grown, centralized, monolithic, relational database-centric enterprise data warehouses (EDWs), with one or more online analytical processing-type systems (such as Crystal Reports, Cognos or BusinessObjects) grafted on top to create the end-user-facing reports.


Over time, departmental reporting systems have continued to grow up like weeds; data integration and data quality have become a mini-village that can never keep up with end-user demands. Something has to change. Here are the desiderata to consider as you develop your analytic strategy:


Define your analytic core platform and standardize. As organizations mature, they begin to standardize on the suite of enterprise applications they will use. This helps to control processes and reduces the complexity and ambiguity associated with having multiple systems of record. As with other enterprise applications such as electronic health record (EHR), you need to define those processes that require high levels of centralized control and those that can be configured locally. For EHR it’s important to have a single architecture for enterprise orders management, rules, results reporting and documentation engines, with support for local adaptability. Similarly with enterprise analytics, it’s important to have a single architecture for data integration, data quality, data storage, enterprise dashboards and report generation – as well as forecasting, predictive modelling, machine learning and optimization.


Wrap your EDW with Hadoop. We’re entering an era where it’s easier to store everything than decide which data to throw away. Hadoop is an example of a technology that anticipates and enables this new era of data abundance. Use it as a staging area and ensure that your data quality and data transformation strategy incorporates and leverages Hadoop as a highly cost-effective storage and massively scalable query environment.
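The “store everything, curate later” pattern behind this desideratum can be sketched in a few lines. This is plain Python for illustration only – in production the staging tier would be HDFS or Hive, and the function and field names here are invented:

```python
# Illustrative sketch (not an actual Hadoop API) of the staging pattern:
# land every record untouched, then pull quality-checked records onward.

def stage(raw_records, staging_area):
    """Land every record verbatim -- storage is cheap, deletion is final."""
    staging_area.extend(raw_records)

def curate(staging_area, is_valid, transform):
    """Move only records that pass quality checks toward the warehouse."""
    return [transform(r) for r in staging_area if is_valid(r)]

staging = []
stage([{"id": 1, "hr": 72}, {"id": 2, "hr": None}, {"id": 3, "hr": 64}], staging)
clean = curate(staging,
               lambda r: r["hr"] is not None,          # quality rule
               lambda r: {**r, "source": "staged"})    # transformation
```

The point of the design is that the rejected record (`id` 2) is still sitting in the staging area, available if a future use case can repair or use it.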


Assume mobile and web as primary interaction. Although a small number of folks enjoy being glued to their computer, most don’t. Plan for this by making sure that your enterprise analytic tools are web-based and can be used from anywhere on any device that supports a web browser.


Develop purpose-specific analytic marts. You don’t need all the data all the time. Pick the data you need for specific use cases and pull it into optimized analytic marts. Refresh the marts automatically based on rules, and apply any remaining transformation, cleansing and data augmentation routines on the way inbound to the mart.
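A purpose-specific mart refresh of the kind described above might look like the following sketch. The schema, filter, and cleansing rules are invented examples; a real refresh would run on a schedule against the EDW:

```python
# Hypothetical rule-driven mart refresh: pull only the columns and rows a
# specific use case needs, cleansing data on the way into the mart.

def refresh_mart(source_rows, columns, row_filter, cleanse):
    """Build a narrow, purpose-specific mart from a wide source table."""
    mart = []
    for row in source_rows:
        if row_filter(row):
            mart.append(cleanse({c: row[c] for c in columns}))
    return mart

source = [
    {"patient": "A", "unit": "ICU", "los_days": 4, "cost": 9100.0},
    {"patient": "B", "unit": "ER",  "los_days": 1, "cost": 1200.0},
    {"patient": "C", "unit": "ICU", "los_days": 7, "cost": 15500.0},
]

# Use case: ICU length-of-stay analytics -- cost columns are left behind.
icu_mart = refresh_mart(source, ["patient", "los_days"],
                        lambda r: r["unit"] == "ICU",
                        lambda r: {**r, "los_days": int(r["los_days"])})
```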


Leverage cloud for storage and Analytics as a Service (AaaS). Cloud-based analytic platforms will become more and more pervasive due to the price/performance advantage. There’s a reason that other industries are flocking to cloud-based enterprise storage and computing capacity, and the same dynamics hold true in health care. If your strategy doesn’t include a cloud-based component, you’re going to pay too much and be forced to innovate at a very slow pace.


Adopt emerging standards for data integration. Analytic insights are moving away from purely retrospective dashboards and moving to real-time notification and alerting. Getting data to your analytic engine in a timely fashion becomes essential; therefore, look to emerging standards like FHIR, SPARQL and SMART as ways to provide two-way integration of your analytic engine with workflow-based applications.
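One reason FHIR lowers the integration burden is that resources are plain JSON with a declared `resourceType`, so an analytic engine can route and flatten them with ordinary tooling. A minimal illustration, using an abbreviated Patient resource (a real integration would GET it from `{base}/Patient/{id}` over REST):

```python
import json

# An abbreviated FHIR-style Patient resource, serialized as it might arrive
# from a FHIR server's REST endpoint.
payload = json.dumps({
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Chalmers", "given": ["Peter"]}],
    "birthDate": "1974-12-25",
})

patient = json.loads(payload)

# The declared resourceType lets a pipeline dispatch without bespoke parsers.
display_name = f'{patient["name"][0]["given"][0]} {patient["name"][0]["family"]}'
```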


Establish a knowledge management architecture. Over time, your enterprise analytic architecture will become full of rules, reports, simulations and predictive models. These all need to be curated in a managed fashion to allow you to inventory and track the lifecycle of your knowledge assets. Ideally, you should be able to include other knowledge assets (such as order sets, rules and documentation templates), as well as your analytic assets.
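The lifecycle tracking described above can be sketched as a tiny asset registry. The fields and states here are invented for illustration; a real knowledge management layer would add ownership, audit trails, and dependencies:

```python
# Illustrative sketch of a minimal knowledge-asset record: every rule,
# report, or model carries a version and a lifecycle state.

from dataclasses import dataclass

@dataclass
class KnowledgeAsset:
    name: str
    kind: str              # e.g. "report", "predictive_model", "order_set"
    version: int = 1
    state: str = "draft"   # lifecycle: draft -> published -> retired

    def publish(self):
        self.state = "published"

    def revise(self):
        """Start a new draft revision of a published asset."""
        self.version += 1
        self.state = "draft"

asset = KnowledgeAsset("30-day readmission model", "predictive_model")
asset.publish()
asset.revise()   # model retraining kicks off version 2
```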


Support decentralization and democratization. Although you’ll want to control certain aspects of enterprise analytics through some form of Center of Excellence, it will be important to give regional and point-of-service teams controlled access to innovate at the periphery without having to file change requests with a centralized team. Centralized models can never scale to meet demand, and local teams need guardrails within which to operate. Make sure this is defined and managed tightly.

Create a social layer. Analytics aren’t static reports any more. The expectation from your users is that they can interact, comment and share the insights that they develop and that are provided to them. Folks expect a two-way communication with report and predictive model creators and they don’t want to wait to schedule a meeting to discuss it. Overlay a portal layer that encourages and anticipates a community of learning.


Make it easily actionable. If analytics are just static or drill-down reports, or static risk scores, users will start to ignore them. Analytic insights should be thought of as decision support, and the well-learned rules from EHRs apply to analytics too: provide insights in the context of the user’s workflow, make it easy to understand what is being communicated, and make it easily actionable – allow users to take recommended actions rather than guessing what to do next.
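As a concrete illustration of delivering a recommended action instead of a bare number, consider mapping a risk score to a next step. The thresholds and actions below are invented for illustration, not clinical guidance:

```python
# Hypothetical sketch: turn a raw 0-1 readmission risk score into a
# concrete recommended action, so the insight arrives as decision support.

def recommend(readmission_risk):
    """Map a readmission risk score to a suggested next step (illustrative)."""
    if readmission_risk >= 0.7:
        return "Schedule follow-up call within 48 hours"
    if readmission_risk >= 0.4:
        return "Flag for care-coordinator review"
    return "Standard discharge instructions"
```

In a workflow-integrated system, the returned action would surface as a one-click task rather than a report row.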


Thanks for reading, and please let me know what you think. Do these desiderata resonate with you? Are we missing anything essential? Or is this a reasonable baseline for organizations to get started?

Dr. Graham Hughes is the Chief Medical Officer at SAS and an industry expert in SAS Institute’s Healthcare & Life Sciences Center for Health Analytics and Insights (CHAI). A version of this post was originally published last August on A Shot in the Arm, the SAS Health and Life Sciences Blog.


How to Meet the Needs of Mobile Patients through Interoperability

Even when patient health information is effectively shared within a healthcare network, provider organizations still struggle to share patient data across organizational boundaries to meet the healthcare needs of increasingly mobile patient populations.


For instance, consider the healthcare needs of retirees escaping the deep freeze of a Midwestern winter for the warmer climate of Florida. Without full access to unified and comprehensive patient data, healthcare providers new to the patient run the risk of everything from ordering expensive, unnecessary tests to prescribing the wrong medications. In these situations, at minimum, the patient’s quality of care is suboptimal. And in the worst-case scenarios, a lack of interoperability across networks can lead to devastating patient outcomes.


System and process


To ensure better patient outcomes, healthcare organizations require system- and process-level interoperability that enables real-time sharing of patient data – informing provider decision-making, decreasing expenses for payer organizations and, ultimately, enhancing patient-centered care across network and geographic boundaries. Effective interoperability means everyone wins.


Information support


To keep efficiency, profitability and patient-centered care all moving in the right direction, healthcare organizations need complete visibility into all critical reporting metrics across hospitals, programs and regions. In answer to that need, Intel has partnered with MarkLogic and Tableau to develop an example of the business intelligence dashboard of the future. This interactive dashboard runs on Intel hardware and MarkLogic software, while Tableau’s visually rich, shareable display features critical analytics that tell the inside story behind each patient’s data. This technology empowers clinicians new to the patient with a more holistic view of that patient’s health.


Combined strength


By combining MarkLogic’s Enterprise NoSQL technology with Intel’s full range of products, Tableau is able to break down information silos, integrate heterogeneous data, and provide real-time access to critical information – a centralized support application for everything from clinical informatics and fraud prevention to medical research and publishing. Tableau, powered by MarkLogic and Intel, delivers clear advantages to payers, providers and patients.


To see Intel, MarkLogic, and Tableau in action, please stop by and visit with us at HIMSS in the Mobile Health Knowledge Zone, Booth 8368. We’ll guide you through an immersive demonstration that illustrates how the ability to integrate disparate data sources leads to better outcomes for all stakeholders in the healthcare ecosystem, with benefits including:

  • Patient empowerment
  • Provider-patient communication
  • Payer insight into individuals or population health
  • Product development (drug and device manufacturers)


What questions do you have about interoperability?


Noland Joiner is Chief Technology Officer, Healthcare, at MarkLogic.
