Recent Blog Posts

Today We Have True Clinical-Grade Devices

When I worked for the UK National Health Service, I encouraged doctors and nurses to use mobile devices. But that was 15 years ago, when the available devices offered only about two hours of battery life and weighed a ton. In other words, my IT colleagues and I were overly optimistic that the mobile devices of the time could support clinicians’ needs.

So it’s great to be able to stand in front of health professionals today and genuinely say that we now have a number of clinical-grade devices available. They come in all shapes and sizes. Many can be sanitized and can survive drops without damage, and battery life often lasts the length of a clinician’s shift. The work Intel has done over the last few years to improve device power usage and efficiency has helped drive these advancements in clinical-grade devices.

It is very clear that each role in health has different needs. And as you can see from the following real-world examples, today’s clinical-grade devices are up to the task, whatever the role.

Wit-Gele Kruis nurses are using Windows 8 Dell Venue 11 Pro tablets to help them provide better care to elderly patients at home. The Belgian home-nursing organization selected the tablets based on feedback from the nurses who would be using them. “We opted for Dell mainly because of better battery life compared to the old devices,” says Marie-Jeanne Vandormael, Quality Manager, Inspection Service, at Wit-Gele Kruis, Limburg. “The new Dell tablets last at least two days without needing a charge. Our old devices lasted just four hours. Also, the Dell tablets are lightweight and sit nicely in the hand, and they have a built-in electronic ID smartcard reader, which we use daily to confirm our visits.”

 

In northern California, Dr. Brian Keeffe, a cardiologist at Marin General Hospital, loves that he can use the Microsoft Surface Pro 3 as either a tablet or a desktop computer, depending on where he is and the task at hand (watch the video below).

When he’s with patients, Dr. Keeffe uses it as a tablet. “With my Surface, I am able to investigate all of the clinical data available to me while sitting face-to-face with my patients and maintaining eye contact,” says Dr. Keeffe.

And when he wants to use his Surface Pro 3 as a desktop computer, Dr. Keeffe pops it into the Surface docking station so he can connect to multiple monitors, keyboards, mice, and other peripherals. “In this setup, I can do all of my charting, voice recognition, and administrative work during the day on the Surface,” explains Dr. Keeffe.

These are just two examples of the wide range of devices on the market today that meet the needs of different roles in health. So if you’re an IT professional recommending mobile devices to your clinicians, you can, unlike me 15 years ago, look them in the eye and tell them you have a number of great clinical-grade options to show them.

Gareth Hall is Director, Mobility and Devices, WW Health at Microsoft

Read more >


OPNFV Day at OpenStack Summit

By Tony Dempsey


I’m here attending the OpenStack Summit in Vancouver, BC, and wanted to find out more about OPNFV, a cross-industry initiative to develop a reference platform that operators can use for their NFV deployments. Intel is a leading contributor to OPNFV, and I was keen to learn more, so I attended a special event held as part of the conference.

Heather Kirksey (OPNFV Director) kicked off today’s event by describing what OPNFV is all about, including the history of why OPNFV was formed and an overview of the areas it focuses on. OPNFV is a carrier-grade, integrated open source platform intended to accelerate the introduction of new NFV products and services. The initiative grew out of the ETSI NFV ISG, and its initial focus is on the NFVI layer.

OPNFV’s first release will be called Arno (release names are themed on rivers) and will include OpenStack, OpenDaylight, and Open vSwitch. No date for the release is available just yet, but it is thought to be soon. Notably, Arno is expected to be used in lab environments initially rather than in commercial deployments. High availability (HA) will be part of the first release (the control and deployment sides are supported). The plan is to make OpenStack itself Telco-grade rather than to maintain a separate Telco-grade version of OpenStack. AT&T gave an example of how they plan to use the initial Arno release: they will bring it into their lab, add additional elements to it, and test for performance and security. They see this release very much as a means to uncover gaps in open source projects, identify fixes, and upstream those fixes. OPNFV is committed to working with the upstream communities to maintain a good relationship. Down the road it may be possible for service providers to deploy OPNFV releases, but for now Arno is a development tool.

An overview of OPNFV’s continuous integration (CI) activities was given, along with a demo. The aim of the CI activity is to give fast feedback to developers in order to improve the rate at which software is developed. Chris Price (TSC Chair) spoke about requirements for the projects and working with upstream communities. According to Chris, OPNFV’s focus is to work with the open source projects to define the issues, understand which open source community can likely solve each problem, work with that community to find a solution, and then upstream that solution. Mark Shuttleworth (founder of Canonical) gave an auto-scaling demo showing a live vIMS core (from Metaswitch) with CSCF auto-scaling running on top of Arno.

I will be on the lookout for more OPNFV news throughout the Summit to share. In the meantime, check out Intel Network Builders for more information on Intel’s support of OPNFV and solutions delivery from the networking ecosystem.

Read more >

Intel’s Role in Building Diversity at the OpenStack Summit

By Suzi Jewett, Diversity & Inclusion Manager, Data Center Group, Intel

I have the fantastic job of driving diversity and inclusion strategy for the Data Center Group at Intel. For me it is the perfect opportunity to align my skills, passions, and business imperatives in a full-time role. I have always had the skills and passion, but it was not until recently that the business imperative grew within the company to the point that we needed a full-time person in this role, and in many similar roles throughout Intel. As a female mechanical engineer I have always known I am one of the few, and at times that was awkward, but even I didn’t appreciate the business impact of not having diverse teams.


Over the last two to three years, the evidence on the bottom-line business results of having diverse people on teams and in leadership positions has become clear, and it is overwhelming: we can no longer be okay with flat or dwindling representation of diverse people on our teams. We also know that employees have more passion for their work, and can bring their whole selves to work, when the environment is inclusive. Therefore, we will not achieve the business imperatives we need to unless we embrace diverse backgrounds, experiences, and thoughts in our culture and in our every decision.

 

Within the Data Center Group, one area we recognize as well below where it needs to be is female participation in open source technologies. So I decided that we should host a networking event for women at the OpenStack Summit this year and really start making our mark in increasing the number of women in the field.

Today I had my first opportunity to interact with people working in OpenStack at the Women of OpenStack event. We had a beautiful cruise around the Vancouver Harbor and then chatted the night away at Black + Blue Steakhouse. About 125 women attended, along with a handful of male allies (yeah!). The event was put on by the OpenStack Foundation and sponsored by Intel and IBM. The excitement and non-stop conversation were energizing to be a part of, and it was obvious that the women loved having some kindred spirits to talk tech and talk life with. I was able to learn more about how OpenStack works, why it’s important, and the passion of everyone in the room to work together to make it better. I learned that many of the companies design features together, meeting weekly and assigning ownership to divvy up the work needed to deliver a feature to the code. Being new to open source software, I was amazed that this is even possible, and excited to see the opportunities for real diversity in our teams, because collaborative design can bring in a vast range of perspectives and create a better end product.

A month or so ago I was asked to help create a video, used today, to highlight the work Intel is doing in OpenStack and the importance to Intel and the industry of having women as contributors. The video was shown tonight, along with a great video from IBM, and got lots of applause and support throughout the venue as different Intel women appeared to talk about their experiences. Our Intel ‘stars’ were a hit, and it was great to have them recognized for their technical contributions to the code and their leadership efforts for Women of OpenStack. What’s even more exciting is that this video (LINK HERE) will play at a keynote this week for all 5,000 attendees, highlighting what Intel is doing to foster inclusiveness and diversity in OpenStack!

 

Read more >

Accelerating Business Intelligence and Insights

By Mike Pearce, Ph.D., Intel Developer Evangelist for the IDZ Server Community

On May 5, 2015, Intel Corporation announced the release of its highly anticipated Intel® Xeon® processor E7 v3 family. A key focus of the new processor family is accelerating business insight and optimizing business operations—in healthcare, financial, enterprise data center, and telecommunications environments—through real-time analytics. The new Xeon processor is a game-changer for organizations seeking better decision-making, improved operational efficiency, and a competitive edge.

The Intel Xeon processor E7 v3 family’s performance, memory capacity, and advanced reliability now make mainstream adoption of real-time analytics possible. The rise of the digital service economy and the recognized potential of “big data” open new opportunities for organizations to process, analyze, and extract real-time insights. The Intel Xeon processor E7 v3 family tames the large volumes of data accumulated by cloud-based services, social media networks, and intelligent sensors, and, aided by optimized software solutions, enables data analytics insights.

A key enhancement in the new processor family is its increased memory capacity – the industry’s largest per socket [1] – enabling entire datasets to be analyzed directly in high-performance, low-latency memory rather than in traditional disk-based storage. For software solutions running on and/or optimized for the new Xeon processor family, this means businesses can now obtain real-time analytics to accelerate decision-making—such as analyzing and reacting to complex global sales data in minutes, not hours. Retailers can personalize a customer’s shopping experience based on real-time activity, capitalizing on opportunities to up-sell and cross-sell. Healthcare organizations can instantly monitor clinical data from electronic health records and other medical systems to improve treatment plans and patient outcomes.
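
To make the idea concrete, here is a minimal, self-contained Python sketch of why keeping a dataset in memory speeds up repeated analytics. SQLite merely stands in for an in-memory analytics platform such as DB2 BLU or HANA; the schema, row count, and query are illustrative assumptions, not the configurations behind Intel’s published numbers.

```python
# Minimal sketch: the same SQL aggregation against disk-backed vs.
# in-memory storage. SQLite stands in for an in-memory analytics
# platform; the schema and row count are illustrative assumptions.
import os
import random
import sqlite3
import tempfile
import time

def populate(conn: sqlite3.Connection, rows: int = 200_000) -> None:
    """Load a synthetic sales table into the target database."""
    conn.execute("CREATE TABLE sales (region INTEGER, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?)",
        ((random.randrange(50), random.random()) for _ in range(rows)),
    )
    conn.commit()

def time_query(conn: sqlite3.Connection) -> float:
    """Time one aggregation over the whole dataset."""
    start = time.perf_counter()
    conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall()
    return time.perf_counter() - start

disk_path = os.path.join(tempfile.mkdtemp(), "sales.db")
for label, target in (("disk", disk_path), ("memory", ":memory:")):
    conn = sqlite3.connect(target)
    populate(conn)
    print(f"{label:>6}: {time_query(conn):.4f} s")
    conn.close()
```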

 

By automatically analyzing very large amounts of data streaming in from various sources (e.g., utility monitors, global weather readings, and transportation systems data, among others), organizations can deliver real-time, business-critical services that optimize operations and unlock new business opportunities. With the latest Xeon processors, businesses can expect improved performance from their applications and greater ROI from their software investments.

Real-Time Analytics: Intelligence Begins with Intel

 

Today, organizations like IBM, SAS, and Software AG are placing increased emphasis on business intelligence (BI) strategies. The ability to extract insights from data is something customers now expect from their software in order to maintain a competitive edge. Below are just a few examples of how these firms use the new Intel Xeon processor E7 v3 family to meet and exceed customer expectations.

Intel and IBM have collaborated closely on a hardware/software big data analytics combination that can accommodate workloads of any size. IBM DB2* with BLU Acceleration is a next-generation database technology and a game-changer for in-memory computing. When run on servers with Intel’s latest processors, IBM DB2 with BLU Acceleration optimizes CPU cache and system memory to deliver breakthrough performance for speed-of-thought analytics. Notably, the same workload can be processed 246 times faster [3] on the latest processor than on IBM DB2 10.1 running on the previous-generation Intel Xeon processor E7-4870.

By running IBM DB2 with BLU Acceleration on servers powered by the new generation of Intel processors, users can quickly and easily transform a torrent of data into valuable, contextualized business insights. Complex queries that once took hours or days to yield insights can now be answered as fast as the data is gathered. See how to capture and capitalize on business intelligence with Intel and IBM.

From a performance perspective, Apama* streaming analytics has proven equally impressive. Apama (a division of Software AG) is a complex event processing engine that watches streams of incoming data, then filters, analyzes, and takes automated action on that fast-moving big data. Benchmarking tests have shown huge performance gains with the newest Intel Xeon processors: test results show 59 percent higher throughput [4] with Apama running on a server powered by the Intel Xeon processor E7 v3 family compared to the previous-generation processor.
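
For readers new to the pattern, the following short Python sketch shows the shape of what a complex event processing engine does: watch a stream, maintain state, detect a condition, and trigger an automated action. The synthetic trade feed and the deviation band are illustrative assumptions; this is not the Apama API.

```python
# Minimal sketch of the complex-event-processing pattern: filter a
# stream, detect a suspicious condition, and act on it automatically.
# The synthetic trades and deviation band are illustrative assumptions.
import random
from typing import Iterator

def trade_stream(n: int = 10_000) -> Iterator[dict]:
    """Synthetic stand-in for a live market data feed."""
    for i in range(n):
        yield {"id": i, "symbol": "ACME", "price": random.gauss(100, 3)}

def detect_outliers(trades: Iterator[dict], band: float = 9.0) -> Iterator[dict]:
    """Flag trades whose price deviates sharply from the running mean."""
    mean, count = 0.0, 0
    for t in trades:
        count += 1
        mean += (t["price"] - mean) / count  # incremental running mean
        if count > 100 and abs(t["price"] - mean) > band:
            yield t  # a real engine would trigger an automated action here

for suspect in detect_outliers(trade_stream()):
    print(f"ALERT: trade {suspect['id']} at {suspect['price']:.2f} looks faulty")
```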

 

Drawing on this level of processing power, the Apama platform can tap the value hidden in streaming data to uncover critical events and trends in real time. Users can take real-time action on customer behaviors, instantly identify unusual behavior or possible fraud, and rapidly detect faulty market trades, among other real-world applications. For more information, watch the video on Driving Big Data Insight from Software AG. This infographic shows Apama performance gains achieved when running its software on the newest Intel Xeon processors.

 

SAS applications provide a unified and scalable platform for predictive modeling, data mining, text analytics, forecasting, and other advanced analytics and business intelligence solutions. Running SAS applications on the latest Xeon processors provides an advanced platform that can help increase performance and headroom while dramatically reducing infrastructure cost and complexity. It also helps make analytics more approachable for end customers. This video illustrates how the combination of SAS and Intel® technologies delivers the performance and scale to enable self-service tools for analytics, with optimized support for new, transformative applications. Further, by combining SAS* Analytics 9.4 with the Intel Xeon processor E7 v3 family and the Intel® Solid-State Drive Data Center Family for PCIe*, customers can experience throughput gains of up to 72 percent. [5]

The new Intel Xeon processor E7 v3 family’s ability to drive new levels of application performance also extends to healthcare. To accelerate the data-driven healthcare workloads of Epic* EMR and deliver reliable, affordable performance and scalability for other healthcare applications, the company needed a robust, high-throughput foundation for data-intensive computing. Epic’s engineers benchmark-tested a new generation of key technologies, including a high-performance data platform from InterSystems*, new virtualization tools from VMware*, and the Intel Xeon processor E7 v3 family. The result was a 60 percent increase in database scalability [6][7], a level of performance that can keep pace with rising data access demands in the healthcare enterprise while creating a more reliable, cost-effective, and agile data center. With this kind of performance improvement, healthcare organizations can deliver increasingly sophisticated analytics and turn clinical data into actionable insight to improve treatment plans and, ultimately, patient outcomes.

These are only a handful of the optimized software solutions that, when powered by the latest generation of Intel processors, enable tremendous business benefits and competitive advantage. With greatly improved performance, memory capacity, and scalability, plus heightened security, increased data center efficiency, and the critical reliability to handle any workload across a range of industries, the Intel Xeon processor E7 v3 family can help your data center bring your business’s best ideas to life. To learn more, visit our software solutions page and take a look at our Enabled Applications Marketing Guide.

[1] The Intel Xeon processor E7 v3 family provides the largest memory footprint of 1.5 TB per socket, compared to up to 1 TB per socket delivered by alternative architectures, based on published specs.

[2] Up to 6x business processing application performance improvement claim based on SAP* OLTP internal in-memory workload measuring transactions per minute (tpm) on SuSE* Linux* Enterprise Server 11 SP3. Configurations: 1) Baseline 1.0: 4S Intel® Xeon® processor E7-4890 v2, 512 GB memory, SAP HANA* 1 SPS08. 2) Up to 6x more tpm: 4S Intel® Xeon® processor E7-8890 v3, 512 GB memory, SAP HANA* 1 SPS09, which includes 1.8x improvement from general software tuning, 1.5x generational scaling, and an additional boost of 2.2x for enabling Intel TSX.

[3] Software and workloads used in the performance test may have been optimized for performance only on Intel® microprocessors. Previous-generation baseline configuration: SuSE Linux Enterprise Server 11 SP3 x86-64, IBM DB2* 10.1 + 4-socket Intel® Xeon® processor E7-4870 using an IBM Gen3 XIV FC SAN solution, completing the queries in about 3.58 hours. New-generation configuration: Red Hat* Enterprise Linux* 6.5, IBM DB2 10.5 with BLU Acceleration + 4-socket Intel® Xeon® processor E7-8890 v3 using tables in-memory (1 TB total), completing the same queries in about 52.3 seconds. For more complete information visit http://www.intel.com/performance/datacenter

[4] One server was powered by a four-socket Intel® Xeon® processor E7-8890 v3 and another server by a four-socket Intel Xeon processor E7-4890 v2. Each server was configured with 512 GB DDR4 DRAM, Red Hat Enterprise Linux 6.5*, and Apama 5.2*. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

[5] Up to 1.72x generational claim based on SAS* Mixed Analytics workload measuring sessions per hour using SAS* Business Analytics 9.4 M2 on Red Hat* Enterprise Linux* 7. Configurations: 1) Baseline: 4S Intel® Xeon® processor E7-4890 v2, 512 GB DDR3-1066 memory, 16x 800 GB Intel® Solid-State Drive Data Center S3700, scoring 0.11 sessions/hour. 2) Up to 1.72x more sessions per hour: 4S Intel® Xeon® processor E7-8890 v3, 512 GB DDR4-1600 memory, 4x 2.0 TB Intel® Solid-State Drive Data Center P3700 + 8x 800 GB Intel® Solid-State Drive Data Center S3700, scoring 0.19 sessions/hour.

[6] Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to www.intel.com/performance

[7] Intel does not control or audit the design or implementation of third-party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported and confirm whether the referenced benchmark data are accurate and reflect performance of systems available for purchase.

Read more >

Introduction to Intel Ethernet Flow Director

By David Fair, Unified Networking Marketing Manager, Intel Networking Division

 

Certainly one of the miracles of technology is that Ethernet continues to be a fast-growing technology 40 years after its initial definition. That was May 23, 1973, when Bob Metcalfe wrote his memo to his Xerox PARC managers proposing “Ethernet.” To put things in perspective, 1973 was the year a signed ceasefire ended U.S. involvement in the Vietnam War. The U.S. Supreme Court issued its Roe v. Wade decision. Pink Floyd released “Dark Side of the Moon.”

In New York City, Motorola made the first handheld mobile phone call (and, no, it would not fit in your pocket).  1973 was four years before the first Apple II computer became available, and eight years before the launch of the first IBM PC. In 1973, all consumer music was analog: vinyl LPs and tape.  It would be nine more years before consumer digital audio arrived in the form of the compact disc—which, ironically, has long since been eclipsed by Ethernet packets as the primary way digital audio gets to consumers.

 

[Photo: Motorola’s first handheld mobile phone]

 

The key reason for Ethernet’s longevity, IMHO, is its uncanny, Darwinian ability to evolve in response to ever-changing technology landscapes. A tome could be written about the many technological challenges to Ethernet and its evolutionary responses, but I want to focus here on just one of them: the emergence of multi-core processors in the first decade of this century.

The problem Bob Metcalfe was trying to solve was how to get packets of data from computer to computer and, of course, to Xerox laser printers. Multi-core challenges that paradigm because Ethernet’s job, as Bob defined it, is done when data gets to a computer’s processor, yet that is before the data reaches the correct core in that processor waiting to consume it.

Intel developed a technology to help address that problem: Intel® Ethernet Flow Director. We have implemented it in all of Intel’s most current 10GbE and 40GbE controllers. What Intel® Ethernet Flow Director does, in a nutshell, is establish an affinity between a flow of Ethernet traffic and the specific core in a processor waiting to consume that traffic.
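
As a rough illustration of what that affinity means in practice, here is a minimal Python sketch for a Linux host. The ethtool commands in the comment use the standard Linux ntuple filter interface through which Flow Director rules are programmed on Intel NICs, but the device name, port, queue number, and the assumed one-to-one queue-to-core mapping are all illustrative assumptions.

```python
# Minimal sketch: keep the consuming process on the core that receives
# its flow. Assumes a Linux host with an Intel 10GbE/40GbE NIC whose
# driver accepts ntuple (Flow Director) rules, configured separately,
# for example (as root; device name and queue are assumptions):
#
#   ethtool -K eth0 ntuple on
#   ethtool -N eth0 flow-type tcp4 dst-port 11211 action 6
#
# The rule steers TCP port 11211 (memcached-style) traffic to RX queue 6;
# a one-to-one queue-to-core mapping is assumed for simplicity.
import os

RX_QUEUE_CORE = 6  # core assumed to service RX queue 6

def pin_to_core(core: int) -> None:
    """Pin the current process to one core so packets are consumed on
    the same (cache-warm) core Flow Director delivers them to."""
    os.sched_setaffinity(0, {core})  # pid 0 means the calling process

if __name__ == "__main__":
    pin_to_core(RX_QUEUE_CORE)
    print("worker now restricted to cores:", os.sched_getaffinity(0))
    # ...the memcached-style request loop would run here...
```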

 

I encourage you to watch a two-and-a-half-minute video explanation of how Intel® Ethernet Flow Director works. If that, as I hope, whets your appetite to learn more about this Intel technology, we also have a white paper that delves into deeper detail, with an illustration of what Intel® Ethernet Flow Director does for a “network stress test” application like Memcached. I hope you find both the video and the white paper enjoyable and illuminating.

Intel, the Intel logo, and Intel Ethernet Flow Director are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others.

Read more >

Intel and Citrix Take 3D Visualization to Remote Engineers

In today’s world, engineering teams can be located just about anywhere, and the engineers themselves can work from just about any location, including home offices. This geographic dispersion creates a dilemma for corporations that need to arm engineers with tools that make them more productive while simultaneously protecting valuable intellectual property—and to do it all in an affordable manner.

Those goals are at the heart of hosted workstations that leverage new combinations of technologies from Intel and Citrix*. These solutions, unveiled this week at the Citrix Synergy 2015 show in Orlando, allow engineers to work with demanding 3D graphics applications from virtually anywhere in the world, with all data and applications hosted in a secure data center. Remote users can work from the same data set, with no need for high-volume data transfers, while enjoying the benefits of fast, clear graphics running on a dense, cost-effective infrastructure.

 

These solutions are in the spotlight at Citrix Synergy. Event participants had the opportunity to see demos of remote workstations capitalizing on the capabilities of the Intel® Xeon® processor E3-1200 product family and Citrix XenApp*, XenServer*, XenDesktop*, and HDX 3D Pro* software.

 

Show participants also had a chance to see demos of graphics passthrough with Intel® GVT-d in Citrix XenServer* 6.5, running Autodesk* Inventor*, SOLIDWORKS*, and Autodesk Revit* software. Other highlights included a technology preview of Intel GVT-g with Citrix HDX 3D Pro running Autodesk AutoCAD*, Adobe* Photoshop*, and Google* Earth.

 

Intel GVT-d and Intel GVT-g are two of the variants of Intel® Graphics Virtualization Technology. Intel GVT-d allows direct assignment of an entire GPU’s capabilities to a single user—it passes all of the native driver capabilities through the hypervisor. Intel GVT-g allows multiple concurrent users to share the resources of a single GPU.

 

The new remote workstation solutions showcased at Citrix Synergy build on a long, collaborative relationship between engineers at Intel and Citrix. Our teams have worked together for many years to help our mutual customers deliver a seamless mobile and remote workspace experience to a distributed workforce. Users and enterprises both benefit from the secure and cost-effective delivery of desktops, apps, and data from the data center to the latest Intel Architecture-based endpoints.

 

For a closer look at the Intel Xeon processor E3-1200 product family and hosted workstation infrastructure, visit intel.com/workstation.

 

 

Intel, the Intel logo, Intel inside, and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Citrix, the Citrix logo, XenDesktop, XenApp, XenServer, and HDX are trademarks of Citrix Systems, Inc. and/or one of its subsidiaries, and may be registered in the U.S. and other countries. * Other names and brands may be claimed as the property of others.

Read more >

Digitization of the Utility Industry Part II: The Impact of Metcalfe’s Law

In my previous post, The Digitization of the Utility Industry Part I, I mentioned Metcalfe’s Law…

“Metcalfe’s law states that the value of a telecommunications network is proportional to the square of the number of connected users of the system (n²). First formulated in this form by George Gilder in 1993, and attributed to Robert Metcalfe.” (Source: Wikipedia)

So what, you ask, has this to do with the utility industry?

I’d propose the following. What’s become known as the ‘Smart Grid’ is simply a use case for IoT. IoT is all about securely connecting more and more devices to a network, collecting the data from those devices, and analyzing that data, either locally or centrally, to create insight. Today we have more and more devices being added to the grid, be they smart meters, renewable energy generation devices (think solar panels and wind turbines), electric vehicle charging stations, or simply more automation of the existing transmission and distribution grid. It’s all about more and more devices getting connected to the grid.

In addition, there are deployments of smart thermostats, building energy management systems, electric vehicles, and so on. Now you may say these devices are not connected ‘physically’ to the electric grid. This is true. And while the business models for what’s connected via private versus public networks are still evolving, there will always be valid reasons to have extremely secure, low-latency private networks for certain parts of the grid. However, the data from all these devices will be combined in the ‘cloud’ to uncover all sorts of insights that lead to new services and business models. This is what is already going on, for example, via Opower, Google/Nest, C3, Alstom, AWS/Splunk, British Gas Hive, and many others.

Now each of the use cases called out above has its own current value proposition and return-on-investment criteria. For Metcalfe’s Law to hold for the smart grid, we would have to see exponential value created as more and more devices are added. This is already happening. Utilities in the US are using data from smart meters to respond faster to the effects of earthquakes, adding huge economic value. Energy savings accrue as people gain insight into how their homes can better consume energy, and utilities can use this to plan future load profiles in the grid, maximizing investment. All are examples of Metcalfe’s Law beginning to kick in. And it will only accelerate as new ways of combining big data from more and more devices come on stream.
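
To see why the growth curve matters, here is a toy Python sketch of Metcalfe’s Law applied to connected grid devices. The device counts and the per-link value constant are purely illustrative assumptions, not utility data.

```python
# Toy illustration of Metcalfe's Law for a growing smart grid: network
# value grows with the square of the number of connected devices, since
# n devices form n*(n-1)/2 possible pairings. The per-link value and
# device counts below are illustrative assumptions.

def metcalfe_value(n_devices: int, value_per_link: float = 0.01) -> float:
    """Relative network value, proportional to n^2 for large n."""
    return value_per_link * n_devices * (n_devices - 1) / 2

for n in (1_000, 10_000, 100_000, 1_000_000):  # meters, sensors, EVs...
    print(f"{n:>9,} devices -> relative value {metcalfe_value(n):>18,.0f}")
```

Note that each tenfold increase in devices yields roughly a hundredfold increase in modeled value, which is the disproportionate payoff the smart grid argument relies on.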

 

So were it not for Moore’s Law, Metcalfe’s Law, and human innovation, the concept of the Smart Grid would never have come about.


Agree or disagree?  Let me know your thoughts. 


– Kev


Let’s continue the conversation on Twitter: @Kevin_ODonovan

Read more >