Recent Blog Posts

Extending Open Source SDN and NFV to the Enterprise

By Christian Buerger, Technologist, SDN/NFV Marketing, Intel

This week I am attending the Intel Developer Forum (IDF) in Shenzhen, China, to promote Intel’s software defined networking (SDN) and network functions virtualization (NFV) software solutions. During this year’s IDF, Intel has made several announcements and our CEO Brian Krzanich has showcased Intel’s innovation leadership across a wide range of technologies with our local partners in China. On the heels of Krzanich’s announcements, Intel Software & Services Group Senior VP Doug Fisher extended Krzanich’s message to stress the importance of open source collaboration in driving industry innovation and transformation, citing OpenStack and Hadoop as prime examples.

 

I participated in the signing event and press briefing for a ground-breaking announcement between Intel and Huawei’s enterprise division to jointly define a next-generation Network as a Service (NaaS) SDN software solution. Under the umbrella of Intel’s Open Network Platform (ONP) server reference platform, Intel and Huawei intend to jointly develop an SDN reference architecture stack. This stack is based on integrating Intel architecture-optimized open source ingredients from projects such as Cloud OS/OpenStack, OpenDaylight (ODL), Data Plane Development Kit (DPDK), and Open vSwitch (OVS) with virtual network appliances such as a virtual services router and virtual firewall. We are also deepening existing collaboration initiatives in various open source projects such as ODL (on Service Function Chaining and performance testing), OVS (SR-IOV-based performance enhancements), and DPDK.

 

In addition to the broad range of open source SDN/NFV collaboration areas this agreement promotes, what makes it so exciting to me personally is the focus on the enterprise sector. Specifically, together with Huawei we are planning to develop reference solutions that target specific enterprise vertical markets such as education, financial services, and government. Together, we are extending our investments into SDN and NFV open source projects to not only accelerate advanced NaaS solutions for early adopters in the telco and cloud service provider space, but also to create broad opportunities to drive massive SDN adoption in the enterprise in 2015. As Swift Liu, President of Huawei’s Switch and Enterprise Communication Products, succinctly put it, Intel and Huawei “are marching from software-hardware collaboration to the entirely new software-defined era in the enterprise.”

© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

Read more >

An HPC Breakthrough with Argonne National Laboratory, Intel, and Cray

At a press event on April 9, representatives from the U.S. Department of Energy announced that the DOE has awarded Intel contracts for two supercomputers totaling just over $200 million as part of its CORAL program. Theta, an early production system, will be delivered in 2016 and will scale to 8.5 petaFLOPS and more than 2,500 nodes, while the 180-petaFLOPS, more-than-50,000-node system called Aurora will be delivered in 2018. This represents a strong collaboration among Argonne National Laboratory, prime contractor Intel, and subcontractor Cray on a highly scalable and integrated system that will accelerate scientific and engineering breakthroughs.

 


Rendering of Aurora

 

Dave Patterson (President of Intel Federal LLC and VP of the Data Center Group) led the Intel team on the ground in Chicago; he was joined on stage by Peter Littlewood (Director of Argonne National Laboratory), Lynn Orr (Under Secretary for Science and Energy, U.S. Department of Energy), and Barry Bolding (Vice President of Marketing and Business Development for Cray). Also joining the press conference were Dan Lipinski (U.S. Representative, Illinois District 3), Bill Foster (U.S. Representative, Illinois District 11), and Randy Hultgren (U.S. Representative, Illinois District 14).

 

Dave Patterson at the Aurora Announcement (Photo Courtesy of Argonne National Laboratory)

 

This cavalcade of company representatives disclosed details on the 180-petaFLOPS, 50,000-node, 13-megawatt Aurora system. It utilizes much of the Intel product portfolio via Intel’s HPC scalable system framework, including future Intel Xeon Phi processors (codenamed Knights Hill), second-generation Intel Omni-Path Fabric, and a new memory hierarchy composed of Intel Lustre, burst buffer storage, and persistent memory through high-bandwidth on-package memory. The system will be built using Cray’s next-generation Shasta platform.
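
For a rough sense of scale, those headline figures work out to about 3.6 teraFLOPS per node and roughly 14 gigaFLOPS per watt. The short Python sketch below simply restates that back-of-envelope arithmetic from the publicly quoted numbers; it is not data from the announcement itself.

```python
# Back-of-envelope arithmetic from the publicly quoted Aurora figures:
# 180 petaFLOPS peak, roughly 50,000 nodes, 13 megawatts.
PEAK_FLOPS = 180e15      # 180 petaFLOPS
NODES = 50_000           # "greater than 50,000 nodes"
POWER_WATTS = 13e6       # 13 megawatts

per_node_tflops = PEAK_FLOPS / NODES / 1e12
gflops_per_watt = PEAK_FLOPS / POWER_WATTS / 1e9

print(f"~{per_node_tflops:.1f} TFLOPS per node")    # ~3.6 TFLOPS per node
print(f"~{gflops_per_watt:.1f} GFLOPS per watt")    # ~13.8 GFLOPS per watt
```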

 

Peter Littlewood kicked off the press conference by welcoming everyone and discussing Argonne National Laboratory – the Midwest’s largest federally funded R&D center, fostering discoveries in energy, transportation, protecting the nation and more. He handed off to Lynn Orr, who announced the $200 million contract and the Aurora and Theta supercomputers. Orr discussed some of the architectural details of Aurora and talked about the need for the U.S. to dedicate funds to build supercomputers that reach the next exascale echelon, and how that will fuel scientific discovery – a theme echoed by many of the speakers to come.

 

Dave Patterson took the stage to give background on Intel Federal, a wholly owned subsidiary of Intel Corporation. In this instance, Intel Federal conducted the contract negotiations for CORAL. Dave touched on the robust collaboration with Argonne and Cray needed to bring Aurora online in 2018, and he introduced Intel’s HPC scalable system framework – a flexible blueprint for developing high-performance, balanced, power-efficient and reliable systems capable of supporting both compute- and data-intensive workloads.

 

Next up, Barry Bolding from Cray talked about the platform underpinning Aurora – the next generation Shasta platform. He mentioned that, when deployed, Aurora has the potential to be one of the largest and most productive supercomputers in the world.

 

And finally, Dan Lipinski, Bill Foster and Randy Hultgren, all representing Illinois (Argonne’s home base) in the U.S. House of Representatives, each gave a few short remarks. They echoed Lynn Orr’s earlier point that the United States needs to stay committed to building cutting-edge supercomputers to remain competitive in a global environment and tackle the next wave of scientific discoveries. Representative Hultgren put it very succinctly: “[The U.S.] needs big machines that can handle big jobs.”

 


Dan Lipinski (Photo Courtesy of Argonne National Laboratory)

 


Bill Foster (Photo Courtesy of Argonne National Laboratory)



Randy Hultgren (Photo Courtesy of Argonne National Laboratory)

 

After the press conference, Mark Seager (Intel Fellow, CTO of the Tech Computing Ecosystem) contributed: “We are defining the next era of supercomputing.” Al Gara (Intel Fellow, Chief Architect of Exascale Systems) took it a step further: “Intel is not only driving the architecture of the system, but also the new technologies that have emerged (or will be needed) to enable that architecture. We have the expertise to drive silicon, memory, fabric and other technologies forward and bring them together in an advanced system.”

 


The Intel and Cray teams prepping for the Aurora announcement

 

Aurora’s disruptive technologies are designed to work together to deliver breakthroughs in performance, energy efficiency, overall system throughput and latency, and cost to power. This signals the convergence of traditional supercomputing and the world of big data and analytics that will drive impact for not only the HPC industry, but also more traditional enterprises.

 

Argonne scientists – who have a deep understanding of how to create software applications that maximize available computing resources – will use Aurora to accelerate discoveries surrounding:

  • Materials science: Design of new classes of materials that will lead to more powerful, efficient and durable batteries and solar panels.
  • Biological science: Gaining the ability to understand the capabilities and vulnerabilities of new organisms that can result in improved biofuels and more effective disease control.
  • Transportation efficiency: Collaborating with industry to improve transportation systems by designing enhanced aerodynamic features, as well as enabling production of better, more efficient and quieter engines.
  • Renewable energy: Wind turbine design and placement to greatly improve efficiency and reduce noise.
  • Alternative programming models: Partitioned Global Address Space (PGAS) as a basis for Coarray Fortran and other unified address space programming models.

 

The Argonne Training Program on Extreme-Scale Computing will be a key program for training the next generation of code developers, ensuring they are ready to drive science from day one when Aurora is made available to research institutions around the world.

 

For more information on the announcement, you can head to our new Aurora webpage or dig deeper into Intel’s HPC scalable system framework.

© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

Read more >

Desiderata for Enterprise Health Analytics in the 21st Century

With apologies and acknowledgments to Dr. James Cimino, whose landmark paper on controlled medical terminologies still sets a challenging bar for vocabulary developers, standards organizations and vendors, I humbly propose a set of new desiderata for analytic systems in health care. These desiderata are, by definition, a list of highly desirable attributes that organizations should consider as a whole as they lay out their health analytics strategy – rather than adopting a piecemeal approach.

 

The problem with today’s business intelligence infrastructure is that it was never conceived of as a true enterprise analytics platform, and definitely wasn’t architected for the big data needs of today or tomorrow. Many, in fact probably most, health care delivery organizations have allowed their analytic infrastructure to evolve in what a charitable person might describe as controlled anarchy. There has always been some level of demand for executive dashboards, which led to IT investment in home-grown, centralized, monolithic and relational database-centric enterprise data warehouses (EDWs) with one or more online analytical processing-type systems (such as Crystal Reports, Cognos or BusinessObjects) grafted on top to create the end-user-facing reports.

 

Over time, departmental reporting systems have continued to grow up like weeds; data integration and data quality have become a mini-village that can never keep up with end-user demands. Something has to change. Here are the desiderata that you should consider as you develop your analytic strategy:

 

Define your analytic core platform and standardize. As organizations mature, they begin to standardize on the suite of enterprise applications they will use. This helps to control processes and reduces the complexity and ambiguity associated with having multiple systems of record. As with other enterprise applications such as the electronic health record (EHR), you need to define those processes that require high levels of centralized control and those that can be configured locally. For the EHR, it’s important to have a single architecture for enterprise orders management, rules, results reporting and documentation engines, with support for local adaptability. Similarly, with enterprise analytics it’s important to have a single architecture for data integration, data quality, data storage, enterprise dashboards and report generation – as well as forecasting, predictive modelling, machine learning and optimization.

 

Wrap your EDW with Hadoop. We’re entering an era where it’s easier to store everything than decide which data to throw away. Hadoop is an example of a technology that anticipates and enables this new era of data abundance. Use it as a staging area and ensure that your data quality and data transformation strategy incorporates and leverages Hadoop as a highly cost-effective storage and massively scalable query environment.
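
As a concrete illustration of the staging-area pattern, here is a minimal sketch that lands raw source extracts in HDFS before any cleansing or transformation happens. The directory layout, source-system name, and use of the standard hdfs dfs command line are illustrative assumptions, not a prescribed toolchain.

```python
# Minimal sketch: land raw extracts in HDFS as an immutable staging area,
# then let downstream data-quality and transformation jobs read from it.
# Paths and naming conventions here are illustrative assumptions.
import subprocess
from datetime import date

def stage_extract(local_file: str, source_system: str) -> str:
    """Copy a raw extract into a date-partitioned HDFS staging directory."""
    target_dir = f"/staging/{source_system}/dt={date.today():%Y-%m-%d}"
    subprocess.run(["hdfs", "dfs", "-mkdir", "-p", target_dir], check=True)
    subprocess.run(["hdfs", "dfs", "-put", "-f", local_file, target_dir], check=True)
    return target_dir

# Example: keep everything now; decide later what the EDW actually needs.
staged = stage_extract("adt_feed_2015-04-10.csv", source_system="ehr_adt")
print("Raw extract staged at", staged)
```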

 

Assume mobile and web as primary interaction. Although a small number of folks enjoy being glued to their computer, most don’t. Plan for this by making sure that your enterprise analytic tools are web-based and can be used from anywhere on any device that supports a web browser.

 

Develop purpose-specific analytic marts. You don’t need all the data all the time. Pick the data you need for specific use cases and pull it into optimized analytic marts. Refresh the marts automatically based on rules, and apply any remaining transformation, cleansing and data augmentation routines on the way inbound to the mart.
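
To make the idea tangible, here is a toy sketch of a rules-driven mart refresh that selects only the columns and rows one use case needs and applies a final cleansing pass on the way in. The file names, columns, and pandas-based approach are hypothetical, chosen purely for illustration.

```python
# Illustrative sketch: refresh a purpose-specific analytic mart by selecting
# only the data a use case needs and cleansing it on the way inbound.
# Source/target names, columns, and rules are hypothetical.
import pandas as pd

REFRESH_RULES = {
    "columns": ["patient_id", "admit_date", "readmit_30d", "primary_dx"],
    "filter": lambda df: df[df["admit_date"] >= "2015-01-01"],
}

def refresh_readmissions_mart(source_csv: str, mart_path: str) -> int:
    raw = pd.read_csv(source_csv)
    subset = REFRESH_RULES["filter"](raw)[REFRESH_RULES["columns"]]
    # Remaining transformation/cleansing happens on the way into the mart.
    subset = subset.dropna(subset=["patient_id"]).drop_duplicates()
    subset.to_csv(mart_path, index=False)
    return len(subset)

rows = refresh_readmissions_mart("edw_encounters.csv", "marts/readmissions.csv")
print(f"Mart refreshed with {rows} rows")
```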

 

Leverage cloud for storage and Analytics as a Service (AaaS). Cloud-based analytic platforms will become more and more pervasive due to the price/performance advantage. There’s a reason that other industries are flocking to cloud-based enterprise storage and computing capacity, and the same dynamics hold true in health care. If your strategy doesn’t include a cloud-based component, you’re going to pay too much and be forced to innovate at a very slow pace.

 

Adopt emerging standards for data integration. Analytic insights are moving away from purely retrospective dashboards and moving to real-time notification and alerting. Getting data to your analytic engine in a timely fashion becomes essential; therefore, look to emerging standards like FHIR, SPARQL and SMART as ways to provide two-way integration of your analytic engine with workflow-based applications.
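
To make the FHIR suggestion concrete, the minimal sketch below pulls a patient’s recent laboratory observations from a FHIR REST endpoint so an analytic engine can react in near real time. The server URL and patient ID are placeholders, and authentication and error handling are omitted for brevity.

```python
# Minimal sketch of FHIR-based integration: read a patient's recent lab
# observations from a FHIR REST endpoint. Base URL and patient ID are
# hypothetical placeholders.
import requests

FHIR_BASE = "https://fhir.example.org/baseDstu2"  # placeholder server
PATIENT_ID = "12345"                              # placeholder patient

resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": PATIENT_ID, "category": "laboratory", "_count": 10},
    headers={"Accept": "application/json+fhir"},
)
resp.raise_for_status()
bundle = resp.json()

# Walk the returned Bundle and print each observation's code and value.
for entry in bundle.get("entry", []):
    obs = entry["resource"]
    code = obs.get("code", {}).get("text", "unknown")
    value = obs.get("valueQuantity", {}).get("value")
    print(code, value)
```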

 

Establish a knowledge management architecture. Over time, your enterprise analytic architecture will become full of rules, reports, simulations and predictive models. These all need to be curated in a managed fashion to allow you to inventory and track the lifecycle of your knowledge assets. Ideally, you should be able to include other knowledge assets (such as order sets, rules and documentation templates), as well as your analytic assets.

 

Support decentralization and democratization. Although you’ll want to control certain aspects of enterprise analytics through some form of Center of Excellence, it will be important to give regional and point-of-service teams controlled access so they can innovate at the periphery without having to submit change requests to a centralized team. Centralized models can never scale to meet demand, and local teams need to be given some guardrails within which to operate. Make sure to have this defined and managed tightly.


Create a social layer. Analytics aren’t static reports anymore. The expectation from your users is that they can interact with, comment on and share the insights that they develop and that are provided to them. Folks expect two-way communication with report and predictive model creators, and they don’t want to wait to schedule a meeting to discuss it. Overlay a portal layer that encourages and anticipates a community of learning.

 

Make it easily actionable. If analytics are just static or drill-down reports, or static risk scores, users will start to ignore them. Analytic insights should be thought of as decision support, and the well-learned rules from EHRs apply to analytics too. Provide the insights in the context of the user’s workflow, make it easy to understand what is being communicated, and make it easily actionable – allow users to take recommended actions rather than trying to guess what they might need to do next.

 

Thanks for reading, and please let me know what you think. Do these desiderata resonate with you? Are we missing anything essential? Or is this a reasonable baseline for organizations to get started?

Dr. Graham Hughes is the Chief Medical Officer at SAS and an industry expert in SAS Institute’s Healthcare & Life Sciences Center for Health Analytics and Insights (CHAI). A version of this post was originally published last August on A Shot in the Arm, the SAS Health and Life Sciences Blog.

Read more >

How to Meet the Needs of Mobile Patients through Interoperability

Even when patient health information is effectively shared within a healthcare network, provider organizations still struggle to share patient data across organizational boundaries to meet the healthcare needs of increasingly mobile patient populations.

 

For instance, consider the healthcare needs of retirees escaping the deep freeze of a Midwestern winter for the warmer climate of Florida. Without full access to unified and comprehensive patient data, healthcare providers new to the patient run the risk of everything from ordering expensive, unnecessary tests to prescribing the wrong medications. In these situations, at minimum, the patient’s quality of care is suboptimal. And in the worst-case scenarios, a lack of interoperability across networks can lead to devastating patient outcomes.

 

System and process

 

To ensure better patient outcomes, healthcare organizations require a level of system and process interoperability that enables real-time sharing of patient data, leading to informed provider decision-making, decreased expenses for payer organizations, and ultimately enhanced patient-centered care across network and geographic boundaries. Effective interoperability means everyone wins.

 

Information support

 

To keep efficiency, profitability and patient-centered care all moving in the right direction, healthcare organizations need complete visibility into all critical reporting metrics across hospitals, programs and regions. In answer to that need, Intel has partnered with MarkLogic and Tableau to develop an example of the business intelligence dashboard of the future. This interactive dashboard runs on Intel hardware and MarkLogic software. Tableau’s visually rich and shareable display features critical, insightful analytics that tell the inside story behind each patient’s data. This technology empowers clinicians who are new to the patient with a more holistic view of the patient’s health.

 

Combined strength

 

By combining MarkLogic’s Enterprise NoSQL technology with Intel’s full range of products, Tableau is able to break down information silos, integrate heterogeneous data, and give access to critical information in real time – providing centralized support for everything from clinical informatics and fraud prevention to medical research and publishing. Tableau powered by MarkLogic and Intel delivers clear advantages to payers, providers and patients.

 

To see Intel, MarkLogic, and Tableau in action, please stop by and visit with us at HIMSS in the Mobile Health Knowledge Zone, Booth 8368. We’ll guide you through an immersive demonstration that illustrates how the ability to integrate disparate data sources will lead you to better outcomes for all stakeholders in the healthcare ecosystem, including:

  • Patient empowerment
  • Provider-patient communication
  • Payer insight into individuals or population health
  • Product development (drug and device manufacturers)

 

What questions do you have about interoperability?

 

Noland Joiner is Chief Technology Officer, Healthcare, at MarkLogic.

Read more >

Technology-enabled business transformation

By Mike Simons | Computerworld UK

 

Intel’s CeBIT presence highlighted everything from workplace transformation to wearables via the reengineered data centre. Here is a brief overview of how the processor giant is innovating in these vital areas.

 

Workplace transformation

 

Wireless technologies caused a buzz at CeBIT this year, with WiGig (multi-gigabit speed wireless communications) and Intel’s Pro WiDi technology laying the foundation for Workplace Transformation. (Workplace Transformation refers to changes in the workplace that will lead to higher employee productivity through things like wire-free working, mobility and collaboration.)

 

WiGig will enable workers to share large volumes of data quickly, videoconference in high definition, and stream TV-quality media, among other things. Ten times faster than Wi-Fi, WiGig could become fast enough to let users transfer the contents of a 25GB Blu-ray disc in less than a minute.
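
That claim is easy to sanity-check: 25GB is 200 gigabits, so even at a conservative multi-gigabit link rate the transfer finishes well under a minute. The quick calculation below assumes a nominal 4.6 Gbps effective throughput purely for illustration.

```python
# Sanity check of the "25GB Blu-ray in under a minute" claim.
# The effective link rate is an assumption for illustration; WiGig (802.11ad)
# is specified at multi-gigabit speeds.
disc_gigabytes = 25
disc_gigabits = disc_gigabytes * 8          # 200 gigabits

assumed_rate_gbps = 4.6                     # hypothetical effective throughput
transfer_seconds = disc_gigabits / assumed_rate_gbps

print(f"{transfer_seconds:.0f} seconds")    # ~43 seconds, i.e. under a minute
```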

 

Intel Pro Wireless Display (Pro WiDi), on the other hand, lets you securely share content from tablets, PCs, and Ultrabook devices on your conference room displays without wires. It’s another piece in the jigsaw of wireless working, with Intel promoting both its ease of use and administration and its security features.

 

Also on display on Intel’s CeBIT stand, and promising to transform the workplace, were a number of new device form factor innovations from vendors including HP, Fujitsu and Dell.

 

These included 2-in-1 devices: notebooks that can be turned into tablets with a twist – or removal – of a lid, or a detachable keyboard; super-thin-and-light tablets; phablets (smartphone tablets); more powerful laptops; and all-in-ones: high-definition touch-screen computers with everything included.

 

As well as these new devices designed for workplace transformation, Intel demonstrated technologies aimed at helping businesses to create value from IoT – connected smart objects; analytics that make use of complex big data; and cloud technologies that will transform the enterprise.

 

Also on its stand, Intel showcased a development project with Germany’s E.ON SE, part of the energy giant E.ON, using ‘smart grid’ processors and technologies. These help power generators and users monitor usage, so utilities firms can adjust their supply to meet consumption, cutting costs by saving energy.

 

Talking about IoT, Intel’s VP and GM EMEA, Christian Morales, said: “[Intel] were in embedded applications about 35 to 40 years ago. We were the first company to introduce embedded application microcontrollers. The big change is that those applications are becoming connected to the cloud and the Internet.”

 

Data centre transformation

 

As well as workplace transformation, data centre transformation was a key theme at CeBIT. Intel showcased its Xeon server processors and its ability to build manageable cloud services on top of this core product.

 

For example, Intel has designed its Xeon E5-2600 v3 processors to be used with data centres built on a software defined infrastructure (SDI) model, in which data centre resources such as compute, storage and networking can be managed and manipulated through software to get the most efficient usage.

 

So, compared to a typical four-year-old server, server platforms based on the new Xeon E5-2600 v3 processors offer up to 8.7 times the performance, up to three times the virtual machine density, and three times the energy efficiency of the older systems, according to Intel.

 

Intel’s strategy with the latest Xeon processors is to encourage enterprises to use them to power their hybrid cloud infrastructures, and utilise the accompanying management tools and technologies. Among these are intelligent workload placement through automated provisioning; thin provisioning of storage; and tiered storage orchestration.

 

Alongside its Xeon processor developments, Intel is also making advances in rack-scale architecture using Silicon Photonics. This is a new approach to making optical devices out of silicon, using photons of light to move huge amounts of data at very high speeds – up to 100Gbps. This happens over a thin optical fibre at extremely low power, rather than using electrical signals over a copper cable.

 

Wearables

 

Other notable innovations that Intel showcased centred on wearable computers. For example, Intel showed the ProGlove on its stand. This sensor-based ‘smart glove’ can boost productivity in manufacturing jobs by enabling manual workers to work faster and, through scanning and sensing, to collect data that can be analysed for production management purposes. The ProGlove team won third place in the Intel Make It Wearable Challenge in November 2014.

 

Intel recently ran the contest to encourage entrepreneurs, universities and schoolchildren to design wearable computers that could be used for practical purposes, based on Intel’s Edison technology. Among the entries were a wearable camera drone, and sensor-equipped items for pregnant mothers and parents of newborn babies.

 

Another exciting development on display was Intel RealSense 3D, based on 3D camera technology. This features the first integrated camera that sees more like humans do, with the system able to understand and respond to natural movement in three dimensions. Consequently, users can interact with the device using natural movements. In addition, 3D scans can be manipulated and altered, shared, or printed with a 3D printer.

 

The system works by using a conventional camera, an infrared camera, and an infrared laser projector. Together, the three lenses allow the device to infer depth by detecting infrared light that has bounced back from objects in front of it. This visual data, taken in combination with Intel RealSense motion-tracking software, creates a touch-free interface that responds to hand, arm, and head motions as well as facial expressions. Consequently, 3D technology also has the potential to be used for security purposes, such as additional biometric input for face recognition.

 

On show was HP’s Sprout computer, which uses the 3D technology. Although it’s targeted at consumers, employees are likely to find uses for it, with the vendor talking about parts manufacturing when Sprout is linked to a 3D printer. Dell also demoed its 3D-enabled Venue 8 7000 series tablet, based on the Intel Atom Z3500 processor. This super-thin device fits in a jacket pocket, and will allow enterprises to seek new uses for 3D technology by making it mobile and comparatively cheap.

 

With its involvement in the workplace, the data centre and end-user computing, Intel used CeBIT to showcase the breadth of its innovation and its deep and broad reach inside enterprise computing.

 

This article was originally published on: http://www.computerworlduk.com/sponsored-article/it-business/3606336/technology-enabled-business-transformation/

Read more >

March 2015 Intel® Chip Chat Podcast Round-up

In March, we started off covering the future of next generation Non-Volatile Memory technologies and the Open Compute Project Summit, as well as the recent launch of the Intel® Xeon® Processor D-1500 Product Family. Throughout the second half of March we archived Mobile World Congress podcasts recorded live in Barcelona. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!

 

Intel® Chip Chat:

  • The Future of High Performance Storage with NVM Express – Intel® Chip Chat episode 370: Intel Senior Principal Engineer Amber Huffman stops by to talk about the performance benefits enabled when NVM Express is combined with the Intel® Solid-State Drive Data Center Family for PCIe. She also describes the future of NVMe over fabrics and the coming availability of NVMe on the client side within desktops, laptops, 2-in-1s, and tablets. To learn more visit: http://www.nvmexpress.org/
  • The Intel® Xeon® Processor D-1500 Product Family – Intel® Chip Chat episode 371: John Nguyen, a Senior Product Manager at Supermicro, discusses the Intel® Xeon® Processor D-1500 Product Family launch and how Supermicro is integrating this new solution into their products today. He illustrates how the ability to utilize the small footprint and low power capabilities of the Intel Xeon Processor D-1500 Product Family is facilitating the production of small department servers for enterprise, as well as enabling small businesses to take advantage of the Intel Xeon Processor Family performance. To learn more visit: www.supermicro.com/products/embedded/
  • Innovating the Cloud w/ Intel® Xeon® Processor D-1500 Product Family – Intel® Chip Chat episode 372: Nidhi Chappell, Entry Server and SoC Product Marketing Manager at Intel, stops by to announce the launch of the Intel® Xeon® Processor D-1500 Product Family. She illustrates how this is the first Xeon processor in a SoC form factor and outlines how the low power consumption, small form factor, and incredible performance of this solution will greatly benefit the network edge and further enable innovation in the telecommunications industry and the data center in general. To learn more visit: www.intel.com/xeond
  • Making the Open Compute Vision a Reality – Intel® Chip Chat episode 373: Raejeanne Skillern, General Manager of the Cloud Service Provider Organization within the Data Center Group at Intel, explains Intel’s involvement in the Open Compute Project and the technologies Intel will be highlighting at the 2015 Open Compute Summit in San Jose, California. She discusses the launch of the new Intel® Xeon® Processor D-1500 Product Family, as well as how Intel will be demoing Rack Scale Architecture and other solutions at the Summit that are aligned with OCP specifications.
  • The Current State of Mobile and IoT Security – Intel® Chip Chat episode 374: In this archive of a livecast from Mobile World Congress in Barcelona, Gary Davis (twitter.com/garyjdavis), Chief Consumer Security Evangelist at Intel Security, stops by to talk about the current state of security within the mobile and internet of things industries. He emphasizes how vulnerable many wearable devices and smart phones can be to cybercriminal attacks and discusses easy ways to help ensure that your personal information can be protected on your devices. To learn more visit: www.intelsecurity.com or home.mcafee.com
  • Enabling Next Gen Data Center Infrastructure – Intel® Chip Chat episode 375: In this archive of a livecast from Mobile World Congress, Howard Wu, Head of Product Line for Cloud Hardware and Infrastructure at Ericsson, chats about the newly announced collaboration between Intel and Ericsson to launch a next generation data center infrastructure. He discusses how this collaboration, which is in part enabled by Intel® Rack Scale Architecture, is driving the optimization and scaling of cloud resources across private, public, and enterprise cloud domains for improved operational agility and efficiency. To learn more visit: www.ericsson.com/cloud
  • Rapidly Growing NFV Deployment – Intel® Chip Chat episode 376: In this archive of a livecast from Mobile World Congress, John Healy, Intel’s GM of the Software Defined Networking Division, stops by to talk about the current state of Network Functions Virtualization adoption within the telecommunications industry. He outlines how Intel is driving the momentum of NFV deployment through initiatives like Intel Network Builders and how embracing the open source community with projects such as OPNFV is accelerating vendors’ ability to offer solutions targeted toward function virtualization.

 

Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

Read more >

5 Solutions Showing Off Intel IoT Gateway and Ecosystem Collaboration

Internet of Things solutions are all about connections that transform business and change lives — from predictive maintenance and operational efficiencies to personalized healthcare and beyond. When you consider that more than 85 percent of today’s legacy systems are unconnected, … Read more >

The post 5 Solutions Showing Off Intel IoT Gateway and Ecosystem Collaboration appeared first on IoT@Intel.

Read more >

Accepting the CORAL Challenge—Where Collaboration and Innovation Meet

By Dave Patterson, President, Intel Federal LLC and Vice President, Data Center Group, Intel

The U.S. Department of Energy’s (DOE) CORAL program (Collaboration of Oak Ridge, Argonne and Lawrence Livermore National Laboratories) is impressive for a number of advanced technical reasons. But the recent award announcement to Intel has shone a spotlight on another topic I am very excited about: Intel Federal LLC.

 

Intel Federal is a subsidiary that enables Intel to contract directly and efficiently with the U.S. Government. Today we work with DOE across a range of programs that address some of the grand scientific and technology challenges that must be solved to achieve extreme scale computing. One such program is Intel’s role as a prime contractor in the Argonne Leadership Computing Facility (ALCF) CORAL program award.

 

Intel Federal is a collaboration center. We’re involved in strategic efforts that need to be orchestrated in direct relationship with the end users. This involves the engagement of diverse sets of expertise from Intel and our partners, ranging from providers of hardware to system software, fabric, memory, storage and tools. The new supercomputer being built for ALCF, Aurora, is a wonderful example of how we bring together talent from all parts of Intel in collaboration with our partners to realize unprecedented technical breakthroughs.

 

Intel’s approach to working with the government is unique – I’ve spent time in the traditional government contracting space, and this is anything but. Our work today is focused on understanding how Intel can best bring value through leadership and technology innovation to programs like CORAL.

 

But what I’m most proud of in helping bring Aurora to life is what this architectural direction, built on Intel’s HPC scalable system framework, represents in terms of close collaboration on innovation and technology. Working across many different groups at Intel, we’ve built excellent relationships with the team at Argonne to gather the competencies we need to support this monumental effort.

 

Breakthroughs in leading technology are built into Intel’s DNA. We’re delighted to be part of CORAL, a great program with far-reaching impact for science and discovery. It stretches us, redefines collaboration, and pushes us to take our game to the next level.  In the process, it will transform the HPC landscape in ways that we can’t even imagine – yet.

 

Stay tuned to CORAL, www.intel.com/hpc

© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

Read more >

Challenge for Health IT: Move Beyond Meaningful Use

As we all know, healthcare is a well-regulated and process-driven industry. The current timeline for new research and techniques to be adopted by half of all physicians in the United States is around 17 years. While many of these regulations and policies are created with the best of intentions, they are often designed around criteria that don’t have the patient in mind, but instead serve our billing needs, reimbursements, and organizational efficiency. Rarely do we see them designed around the experience of, and interactions with, the patient.

 

The challenge for technology at the moment, especially for the physician, is how to move beyond the meaningful use criteria that the federal government has adopted.

 

Outdated record rules

 

We are currently working with medical record rules and criteria that are 20 years old, and trying to adapt and apply them to our electronic records. The medical record has become a repository of wasted words and phrases that have little meaning for the physician/patient interaction. Because of the meaningful use criteria and the structure of medical records, when I wade through a record it is very difficult to find relevant information.

 

As a person involved in quality review, what I find more and more in electronic records is that it’s very easy to potentiate mistakes and errors. One part of the whole system that I find untenable is having the physician, who is one of the most costly members of the team, take time to act, ostensibly, as a clerk or scribe and fill out the required records.

 

Disrupts visits

 

The problem we can identify with all of this, at least in the office visit portion, is that it disrupts the visit with the patient. It forces the conversation to adhere to the clerical tasks necessary to complete the meaningful use criteria. And to me, there’s nothing more oppressive in this interaction than doing this clerical work, especially when it’s done electronically, and it’s getting worse.

 

So if we look at this situation from the perspective of people (both the patient and the physician), and at how we can use electronic tools, we could rapidly be liberated from the oppression of regulatory interactions. It would be so easy, right now, to capture a patient’s activities and health to create a historical archive. This could be created in some template using video and audio technologies, and language dictation software, that could give the physician much more content about what is going on.

 

I say this after visiting the Center for Innovation team at the Mayo Clinic Scottsdale location, where they are conducting a wearables experiment in which the provider wears Google Glass during an office visit with a patient.

 

The experiment had a scribe in another room observing and recording the interaction through the Glass feed, both video and audio, to capture the visit and create the medical record. As I looked through the note that was put together, it was a good note. It met the requirements for the bureaucrats, but it missed the richness of the visit that I observed, and it missed what the patient needed. It missed the steps and instructions that the physician covered with the patient. There is no place to record this in the current setup.

 

Easy review access

 

Just think if that interaction were available, through a HIPAA-compliant portal, for the patient and provider to access. When the patient goes home and a few days later asks, “What did my doctor cover during my visit?” they would be able to watch and hear the conversation right there. They might have brochures and literature that were given to them, but imagine if they had access to that video and audio to replay and watch again.

 

It seems to me that we have the technology at hand to make this a viable reality.

 

The biggest challenge here is to convince certain parties, like the Federal Government and Medicare, that there is a better way to do this, and that these are more meaningful ways. Recalling who the decision makers are that designed these processes and regulations, we must work to change the design criteria from that of a compliance perspective, to one where the needs of the patient come first.

 

That’s where I think we have the great opportunities and great challenges to turn this around. If we think for a minute, and decide to do away with all these useless meaningful use criteria, and instead say, “Let’s go back and think about how we can make the experience better for the patient,” and leverage technologies to do just that, we would be much better off.

 

What questions do you have?

 

Dr. Douglas Wood is a practicing cardiologist and the Medical Director for the Mayo Clinic’s Center for Innovation.

Read more >

An Intel partnership that’s a win for you too

By Charlie Wuischpard, VP & GM High Performance Computing at Intel

 

Every now and then in business it all really comes together: a valuable program, a great partner, and an outcome that promises to go far beyond just business success. That’s what I see in our newly announced partnership with the Supercomputing Center of the Chinese Academy of Sciences. We’re collaborating to create an Intel Parallel Computing Center (Intel® PCC) in the People’s Republic of China. We expect our partnership with the Chinese Academy of Sciences to pay off in many ways.

 

Through working together to modernize LAMMPS, the world’s most broadly adopted molecular dynamics application, Intel and the Chinese Academy of Sciences will help researchers and scientists understand everything from physics and semiconductor design to biology, pharmaceuticals, DNA analysis and genetics, and ultimately aid in identifying cures for diseases.

 

The establishment of the Intel® PCC with the Chinese Academy of Sciences is an important step. The relationship grows from our ongoing commitment to cultivate our presence in China and to find and engage Chinese businesses and institutions that will collaborate to bring their talents and capabilities to the rest of the world. Their Supercomputing Center has been focused on operating and maintaining supercomputers and exploiting and supporting massively parallel computing since 1996. Their work in high performance computing, scientific computing, computational mathematics, and scientific visualization has earned national and international acclaim. And it has resulted in important advances in the simulation of large-scale systems in fields like computational chemistry and computational material science.

 

We understand that solving the biggest challenges for society, industry, and science requires a dramatic increase in computing efficiency. Many organizations leverage high performance computing to solve these challenges, but they seldom realize they are using only a small fraction of the compute capability their systems provide. Taking advantage of the full potential of current and future hardware (i.e., cores, threads, caches, and SIMD capability) requires what we call “modernization.” We know building supercomputing centers is an investment; ensuring that software fully exploits modern hardware helps maximize the impact of that investment. Customers will realize the greatest long-term benefit when they pursue modernization in an open, portable and scalable manner.
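
As a simplified analogy for what modernization means in practice, the toy comparison below contrasts an element-by-element scalar loop with a vectorized whole-array operation that lets the underlying library exploit SIMD units and cache-friendly memory access. Real LAMMPS modernization targets C++ and Fortran with threading and vector intrinsics; this Python/NumPy sketch only illustrates the principle.

```python
# Toy analogy for code modernization: the same pairwise computation written as
# a scalar Python loop versus a vectorized NumPy expression that lets the
# underlying library exploit SIMD units and contiguous memory access.
import time
import numpy as np

n = 1_000_000
x = np.random.rand(n)
y = np.random.rand(n)

# "Legacy" style: one element at a time.
t0 = time.time()
scalar_result = [x[i] * y[i] + 0.5 for i in range(n)]
t_scalar = time.time() - t0

# "Modernized" style: whole-array operation.
t0 = time.time()
vector_result = x * y + 0.5
t_vector = time.time() - t0

assert np.allclose(scalar_result, vector_result)
print(f"scalar loop: {t_scalar:.3f}s, vectorized: {t_vector:.3f}s")
```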

 

The goals of the Intel® PCC effort go beyond just creating software that takes advantage of hardware, all the way to delivering value to researchers and other users around the world. Much of our effort is training and equipping students, scientists, and researchers to write modern code that will ultimately accelerate discovery.

 

We look forward to our partnership with the Chinese Academy of Sciences and the great results to come from this new Intel® Parallel Computing Center. You can find additional information regarding this effort by visiting our Intel® PCC website.

Read more >

Data Center: The Future is Software Defined

It is a very exciting time for the information and communication technology (ICT) industry as it continues the massive transformation to the digital services, or “on demand,” economy. Earlier today I had the pleasure of sharing Intel’s perspective and vision of the data center market at IDF15 in Shenzhen, and I can think of no place better than China to exemplify how the digital services economy is impacting people’s everyday lives. In 2015 ICT spending in China will exceed $465 billion, comprising 43% of global ICT spending growth. ICT is increasingly the means to fulfil business, public sector and consumer needs, and the rate at which new services are being launched and existing services are growing is tremendous. The result is three significant areas of growth for data center infrastructure: continued build-out of cloud computing, HPC and big data.

 

Cloud computing provides on-demand, self-serve attributes that enable application developers to deliver new services to the markets in record time.  Software Defined Infrastructure, or SDI, optimizes this rapid creation and delivery of business services, reliably, with a programmable infrastructure.  Intel has been making great strides with our partners towards the adoption of SDI.  Today I was pleased to be joined by Huawei, who shared their efforts to enable the network transformation, and Alibaba, who announced their recent success in powering on Intel’s Rack Scale Architecture (RSA) in their Hangzhou lab.

 

Just as we know the future of the data center is software defined, the future of High Performance Computing is software optimized. IDC predicts that the penalties for neglecting the HPC software stack will grow more severe, making modern, parallel, optimized code essential for continued growth. To this end, today we announced that the first Intel® Parallel Computing Center in China has been established in Beijing to drive the next generation of high performance computing in the country.  Our success is also dependent on strong partnerships, so I was happy to have Lenovo onstage to share details on their new Enterprise Innovation Center focused on enabling our joint success in China.

 

As the next technology disruptor, big data has the ability to transform all industries. For healthcare, through the use of big data analytics, precision medicine becomes a possibility, providing tremendous opportunities to advance the treatment of life-threatening diseases like cancer. By applying all the latest cloud, HPC and big data analytics technology and products, and working collectively as an industry, we can enable the sequencing of a whole genome, the identification of the fundamental genes that cause a cancer, and the means to block them through the creation of a personalized treatment, all in one day by 2020.

 

Through our partnership with China’s technology leaders we will collectively enable the digital services economy and deliver the next decade of discovery, solving the biggest challenges in society, industry and the sciences.

Read more >