Recent Blog Posts

3 Reasons Analytics Matter to Physicians

As physicians, we’re taught to practice evidence-based medicine, where the evidence comes primarily from trade journals that document double-blind, randomized controlled trials. Or perhaps we turn to society meetings, problem-based learning discussions (PBLD), or peer group discussion forums. We are dedicated to finding ways to improve patient outcomes and experience, yet we miss huge opportunities every day.


We are lost in a sea of data, left to debate continuous process improvement with ‘gut feelings’ and opinions. We do the ‘best we can’ because we lack the ability to glean meaningful perspective from our daily actions. As an anesthesiologist, I know there’s a wonderful opportunity for analytics to make a difference in our surgical patients’ experience, and I can only imagine there are similar opportunities in other specialties.


Here are three undeniable reasons analytics should matter to every physician:


Secure Compensation

Quality compliance is here to stay, and it’s only becoming more onerous. In 2015, the CMS-mandated Physician Quality Reporting System (PQRS) finally transitioned from bonus payments to 2 percent penalties. It also raised the reporting requirements from 3 metrics to 9 metrics across 3 domains, including 2 outcome measures.


Unfortunately, in the absence of the right technology, compliance is too often considered just another costly burden. We’re left either relying on unresponsive third-party vendors to update our software or hiring additional staff to ‘count beans’. More frustratingly, we rarely see these efforts translate into meaningful change for the patients we serve. We arrive at the erroneous conclusion that these efforts only increase costs while offering no tangible benefits.


What if our technology were flexible enough to keep up with changing regulations while also making us faster and more intelligent at our jobs? How would this change our perception of regulatory requirements? Thankfully, such solutions exist, and with our input they can and should be adopted.


Gain Control

It’s too easy for providers to limit themselves to the “practice of medicine” – diagnosing and treating patients – and disengage from the management of our individual practices. We do ourselves a disservice because, as physicians, we have a significant advantage when it comes to interpreting the ever-increasing government regulations and applying them to our patients’ needs. There is often latitude in this interpretation, which ultimately gives rise to incorrect assumptions and unnecessary work. When we assume the responsibility for setting the definitions, we gain control over the metrics and consequently influence their interpretations.


By engaging in our analytics, we’re equipped to speak more convincingly with administration, we gain independence from poor implementations, and we gain freedom from added inefficiencies. We lose the all-too-common “victim perspective”, and we return to a position of influence in how and why we practice the way we do. Through analytics, we are better positioned to improve our patients’ experiences, and that can be incredibly gratifying.


Transform Your Industry

This ability to leverage real-time analytics has already transformed other industries. In retail, the best companies deliver exceptional service because their sales representatives know exactly who we are, what we’ve purchased, how we’ve paid, when we’ve paid, etc. Because they know our individual preferences at the point of sale, they deliver first-class customer service. Consider the example of Target, which used predictive analytics to identify which customers were pregnant simply by analyzing their transactional data, allowing it to intelligently advertise to a compelling market segment.


Imagine leveraging this same capability within the realm of surgical services. What if we could deliver individualized patient education at the time it’s needed? For example, a text message the evening before surgery reading, “It’s now time to stop eating.” Or an automated message when the patient arrives at the surgical facility, stating, “Here’s a map to the registration desk.” There are plenty of opportunities to leverage mobility and connectivity to deliver personalized care throughout the surgical experience. Further, by analyzing the data generated during the course of that surgical experience, what if we could predict who was likely to be dissatisfied before they even complained? Could we automatically alert guest relations for a service recovery before the patient is discharged? There’s no doubt: of course we can! We just need appropriate management of the surrounding data.



Through analytics we have the ability to secure our compensation, gain more control of our practices, and transform our industry by improving outcomes, improving the patient experience, and reducing costs.


When we’re equipped with analytical capabilities that are real-time, interactive, individualized, and mobile, we’ve implemented a framework with truly transformative power. We’ve enabled a dramatic reduction in the turnaround time for continuous process improvement. As regulatory requirements continue to increase in complexity, we have the opportunity to either work smarter using more intelligent tools or else surrender to an unfriendly future. Fellow practitioners, I much prefer the former.


What questions do you have? What’s your view of analytics?

Read more >

Smart Cities: At the Confluence of Energy, Environment and Internet of Things

The Smart Cities Council defines a Smart City as one that uses information and communications technology (ICT) to enhance its livability, workability and sustainability. In simplest terms, there are three parts to that job: collecting, communicating and “crunching.” First, a … Read more >

The post Smart Cities: At the Confluence of Energy, Environment and Internet of Things appeared first on Grid Insights by Intel.


Meeting of the Minds: A Discussion about Data Science

May 5th, 2015 was an exciting day for Big Data analytics. Intel hosted an event focused on data analytics, announcing the next generation of the Intel® Xeon® Processor E7 family and sharing an update on Cloudera one year after investing in the company.



At the event, I had the pleasure of hosting a panel discussion among three very interesting data science experts:


  • David Edwards, VP and Engineering Fellow at Cerner, a healthcare IT and electronic medical records company, has overseen the development of a Cloudera-based Big Data analytics system for patient medical data that has enabled the creation of a number of highly effective predictive models that have already saved the lives of hundreds of patients.


  • Don Fraynd, CEO of TeacherMatch, an analytics company that has developed models that correlate a broad variety of school teacher attributes with actual student performance measures to increase the effectiveness of the teacher hiring process. These models are used to identify the most promising candidates for each teaching position, given the individual circumstances of the teaching opportunity.


  • Andreas Weigend, Director of the Social Data Lab, professor at Stanford and UC Berkeley, and past Chief Scientist at Amazon, has been a leader in data science since before data science was a “thing.” His insights into measuring customer behavior and predicting how customers make decisions have changed the way we experience the Internet.


My guests have all distinguished themselves by creating analytics solutions that provide actionable insights into individual human behavior in the areas of education, healthcare and retail.  Over the course of the discussion a major theme that emerged was that data analytics must empower individuals to take action in real time.


David described how Cerner’s algorithms analyze a variety of patient monitoring data in the hospital to identify patients who are going into septic shock, a life-threatening toxic reaction to infection. “If you don’t close that loop and provide that immediate feedback in real time, it’s very difficult to change the outcome.”


Don explained how TeacherMatch is “using hot data, dashboards, and performance management practices in our schools to effect decisions in real time…What are the precursors to a student failing a course? What are the precursors to a student having a major trauma event?”


Andreas advanced the concept of a dashboard one step further and postulated that a solution analogous to a navigation system is what’s needed, because it can improve the quality of the data over time. “Instead of building complicated models, build incentives so that people share with you…I call this a data refinery…that takes data of the people, data by the people and makes it data to be useful for the people.”


Clearly, impactful analytics are as much about timeliness and responsivity as they are about data volume and variety, and they drive actions, not just insights.


In his final comments, David articulated one of my own goals for data science: “To make Big Data boring and uninteresting.” In other words, our goal is to make it commonplace for companies to utilize all of their data, both structured and unstructured, to provide better customer experiences, superior student performance or improved patient outcomes. As a data scientist, I can think of no better outcome for the work I do every day.


Thanks to our panelists and the audience for making this an engaging and informative event.

Read more >

#TechConnect Apr. 8 Chat Recap: “Digital Security and Surveillance Systems for Small Business”

Thanks to all who joined the Tech Connect Chat on Wednesday, April 8 at 1 p.m. EDT/ 10 a.m. PDT. Intel’s DSS expert David Panziera with Vince Ricco from Axis* and Bill Rhodes from Buffalo*  led the discussion on “Digital Security and Surveillance … Read more >

The post #TechConnect Apr. 8 Chat Recap: “Digital Security and Surveillance Systems for Small Business” appeared first on Technology Provider.


Training Trivia continues – Join us each week on Intel® Technology Provider social channels

Engage in Training Trivia each week for course-related questions posted on Wednesdays on Intel® Technology Provider’s social channels. Correct answers will be entered into a weekly drawing for a $20 Starbucks gift card as well as a chance to win the … Read more >

The post Training Trivia continues – Join us each week on Intel® Technology Provider social channels appeared first on Technology Provider.


Central Management Simplifies the Transition to Hardware-Based Encryption

I am always happy when technology makes my job as a client security engineer easier.


Intel’s recent deployment of hardware-based encryption using the Intel® Solid-State Drive (Intel® SSD) Professional Family (currently consisting of the Intel® SSD Pro 1500 Series and the Intel® SSD Pro 2500 Series), combined with McAfee® Drive Encryption 7.1 encryption software, has done exactly that. For some organizations, the deployment of Opal-compliant drives might disrupt encryption management policies and procedures — but not at Intel, thanks to the level of integration between McAfee Drive Encryption and McAfee® ePolicy Orchestrator (McAfee ePO).


Intel IT has used ePO for several years to manage other McAfee security solutions, such as virus protection and firewalls. Now, as we transition to Opal drives, ePO’s integration with encryption management means that end users don’t have to learn a new user interface or process when they change from software-based to hardware-based encryption. They just enter their encryption password and they’re in — the same as before when using software-based encryption.


Mixed Environment? Not a Problem

We are transitioning to the new drives using our standard refresh cycle. Therefore, our computing environment still contains a fair number of older Intel SSDs that must use software-based encryption. But for IT staff, there’s no difference between provisioning one of the Opal-compliant drives and a non-Opal-compliant drive. McAfee Drive Encryption provides a hybrid agent that can detect whether software- or hardware-based encryption can be used, based on the configuration of the drive and rules defined by the IT administrator. The same policy is used, regardless of the drive manufacturer or whether the drive needs hardware-based or software-based encryption. The technician just tags the computer for encryption, and that’s it. Decryption, when necessary, is just as easy.


When McAfee releases a new version of Drive Encryption, or when a new version of the Opal standard is released (the Intel SSD Pro 2500 Series, in initial phases of deployment at Intel, are Opal 2.0-compliant), the policies won’t change, and the update will be transparent. We can just push the new version to the client PCs — employees don’t have to visit service centers, and IT technicians don’t need to make desk-side visits with USB sticks. The system tree organization of ePO’s policies enables us to set different policies for different categories of systems, such as IT-managed client PCs and servers and Microsoft Active Directory Exchange servers.


The transition to Opal-compliant drives is also transparent to the rest of the IT department: there is no change in the system imaging process; the same image and process are used whether the drive is an older SSD or a new Intel SSD Pro 1500 Series. The recovery process is also identical regardless of whether the drive is hardware- or software-encrypted. It is all performed from the same console, using the same process. Intel Help Desk technicians do not need to learn a new method of recovery when a new drive is introduced.


Bird’s Eye View of Encryption Across the Enterprise

McAfee ePO enables us to easily determine the encryption status of all PCs in the environment. The ePO query interface is easy to use (you don’t even have to know SQL, although it is available for advanced users). The interface comes with most common reports already built-in (see the figure for examples) and allows for easy customization. Some reports take less than 30 seconds to generate; some take a little longer (a few minutes).


Using ePO, we can obtain a bird’s-eye view of encryption across the enterprise. The ePO dashboard is customizable. For example, we can view the entire encryption state of the environment, what Drive Encryption version and agent version are being used, and if there are any incompatible solutions that are preventing encryption from being enforced. We can even drill down to a particular PC to see what is causing an incompatibility.



Sample McAfee® ePolicy Orchestrator Dashboard (from left to right): encryption status, McAfee® Drive Encryption versions installed, encryption provider. These graphs are for illustrative purposes only and do not reflect Intel’s current computing environment.

Encryption can be removed in one of the following ways:

  • The IT admin applies the decrypt policy. This method requires communication between the client PC and server.
  • The IT Service Center uses a recovery image with an identification XML file exported from the server, or the user’s password, to decrypt the drive.


Decrypting in this manner guarantees that the encryption status reported in ePO is in fact the status of the drive.


The information displays in near real-time, making it helpful if a PC is lost or stolen. Using ePO, we can find the state of the drive. If it was encrypted, we know the data is safe. But if not, we can find out what sort of data was on the PC, and act accordingly. ePO lets IT admins customize the time interval for communication between a specific PC and ePO.


Customizable Agent

Although the McAfee agent reports a significant amount of information by default, the product developers realized that they probably couldn’t think of everything. So, they built in four client registry values that provide even more maneuverability. For example, we needed a way to differentiate between tablets and standard laptops, because we needed to assign a policy based on the availability of touch capabilities during preboot. So, during the build, we set one of the four registry values to indicate whether the PC has a fixed keyboard. The McAfee agent reports this property to ePO, which in turn, based on the value, assigns a compatible policy.


Single Pane of Glass

Before integrating Drive Encryption, ePO, and the Opal-compliant Intel® SSD Professional Family, some IT support activities, such as helping users who forgot their encryption password, were time-consuming and inefficient. Recovery keys were stored in one location, while other necessary information was stored elsewhere. Now, one console handles it all. If a user calls in, the IT technician has everything necessary, all in one place — a one-stop shop for everything encryption.


We have found the combination of McAfee Drive Encryption 7.1 software and Opal-compliant Intel SSDs featuring hardware-based encryption to provide a more robust solution than would be possible with either technology alone. I’d be interested to hear how other IT organizations are faring as the industry as a whole adopts Opal-compliant drives. Feel free to share your comments and join the conversation at the IT Peer Network.

Read more >

SSD Endurance. What does it mean to you?

I continuously think about the endurance aspect of our products, how SSD users understand it, and how they put it to good use. Sadly, endurance is often underestimated, and sometimes overestimated. I see customers buying high-endurance products for the sake of protection without understanding the real requirements of their applications. Those late-night thoughts have now made their way into this blog post.


How do you define SSD endurance?


By definition, endurance is the total amount of data that can be written to the SSD. Endurance can be measured in two different ways:

  • The first is TBW (terabytes written), which follows directly from the definition: the total amount of data written over the drive’s life span. It’s estimated individually for every SSD SKU, even within a product line.
  • The second is DWPD (drive writes per day). This is only a multiplier, the same for all SKUs in a product line. Saying DWPD = 10 (a high-endurance drive) means TBW = DWPD * CAPACITY * 365 (days) * 5 (years of warranty). That looks like simple math, but that’s not all of it: it introduces another dimension, time. I’ll explain this later.
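That DWPD-to-TBW arithmetic can be sketched in a few lines. The capacity and DWPD figures below are made up for illustration; they are not the spec of any real SKU.

```python
# Sketch of the TBW = DWPD * capacity * 365 * warranty relationship
# described above. Figures are illustrative, not a drive spec.

def tbw_from_dwpd(dwpd, capacity_tb, warranty_years=5):
    """Total terabytes written implied by a DWPD rating."""
    return dwpd * capacity_tb * 365 * warranty_years

# A hypothetical 400 GB (0.4 TB) high-endurance drive rated at 10 DWPD:
print(tbw_from_dwpd(10, 0.4))   # 7300.0 TBW over the 5-year warranty
```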


Three main factors affect endurance:


  • NAND quality. It’s measured in the number of program/erase cycles; better NAND has a higher count. High Endurance Technology NAND is used in the x3700 Series product families, so the NAND in the S3700 and S3500 Series, for example, is physically different. Please take a moment to read the “Validating High Endurance on the Intel® Solid-State Drive” white paper.
  • Workload. Different workload patterns, such as large-block versus small-block random writes, can change endurance by up to 5x. For data center SSDs we use the JESD-219 workload set (a mix of small random I/O and big blocks), which represents the worst case for the customer. In practice this means that in most usage cases, customers will see better endurance in their own environments.


Real-life example:

A customer says he uses the drive as a scratch/temp partition and thinks he needs the highest-endurance SSD. Do you agree that the scratch use case (even with small-block access) is the worst I/O scenario? Not at all! First, it’s a 50/50 read/write mix: everything written will be read afterward. However, the R/W ratio is not nearly as significant a factor in a workload as random vs. sequential access. In this case, scratch files are typically saved in a small portion of the drive and, without threading, the writes are sequential. Even small files are “big” to an SSD.


  • Spare area capacity. A bigger spare area allows the SSD to decrease its Write Amplification Factor (WAF). WAF is the ratio of the amount of data written to NAND to the amount of data the host writes to the SSD. It approaches 1 if the SSD controller doesn’t use compression, but it can never be exactly 1 because of NAND structure: we read data in sectors, write in pages (multiple sectors), and erase in blocks (a number of pages). That’s a hardware limitation of the technology, but clever engineering controls it in firmware and makes the WAF of Intel SSDs among the lowest in the industry.
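As a tiny illustration of that definition (the byte counts are invented):

```python
# Sketch of the Write Amplification Factor (WAF) definition above:
# WAF = data written to NAND / data written by the host.

def waf(nand_bytes_written, host_bytes_written):
    """Ratio of physical NAND writes to logical host writes."""
    return nand_bytes_written / host_bytes_written

# Example: the host wrote 100 GB, but page/block write granularity
# forced the controller to write 130 GB to NAND.
print(waf(130, 100))   # 1.3
```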


Firmware means a lot, doesn’t it?


Of course, on top of these three main influencers we add firmware tricks, optimizations, and reporting. Two similar SSDs from different vendors are never the same if they run different firmware. Let’s look at the features of our firmware:

  • SMART reporting, common across the industry, lets you see the current status of the drive: errors, endurance (TBW to date), and remaining lifetime. That’s what every vendor offers and what every user needs for daily monitoring.
  • Endurance Analyzer, a very special firmware feature of Intel DC SSDs, forecasts the expected lifetime based on the user workload. It works simply: you reset a specific SMART attribute timer, run your workload for a few hours (better, a few days), and then read another SMART value that reports the estimated lifetime in days/months/years for exactly that SSD under exactly your workload. That’s an amazing advantage of our products.


How to run Endurance Analyzer?


It’s definitely not rocket science; let me point to this document as the reference. A few hints will help you get through the process more easily. Endurance Analyzer is supported on both Intel Data Center SSD product families: SATA and PCIe NVMe SSDs such as the P3700/P3600/P3500. For the SATA case, you need to make sure you can reach the drive with SMART commands. That can be a limitation for some specific RAID/HBA configurations where the vendor doesn’t support pass-through mode for AHCI commands. In such cases, a separate system with SATA ports routed from the PCH (or another supported configuration) should be used. Next, you need the right software tool, one capable of resetting the required timer. There are some open-source tools, but I advise using the Intel SSD Data Center Tool, which is cross-platform, supports every Intel DC SSD, and can do a lot more than basic management tools. Here are the steps:


1. Reset SMART attributes using the reset option. This also saves a file containing the base SMART data; that file is used in step 4 when the life expectancy is calculated.

                    isdct.exe set -intelssd # enduranceanalyzer=reset

2. Remove the SSD and install it in the test system.

3. Apply a minimum 60-minute workload to the SSD.

4. Reinstall the SSD in the original system. Compute endurance using the show command.

                    isdct.exe show -a -intelssd #

5. Read the Endurance Analyzer value, which represents the drive’s life expectancy in years.


Another real-life example:

A big travel reservation agency complained about the endurance of Intel SSDs behind a RAID array, saying it wasn’t enough for their workloads; according to I/O traces taken at the OS level, the drive needed higher endurance to support all the writes. My immediate proposal was to confirm this with Endurance Analyzer, which shows what actually happens at the SSD device level, taking the OS and the RAID controller out of the picture. After running the test for a week (a work week plus a weekend), we got 42 years of expected lifetime for that week’s workload. The customer would have been right only if the peak workload were projected across the whole week, which is not the case for that environment.


Wrapping up…

Now you understand the three important factors that affect endurance. We can change two of them: the workload profile and the amount of over-provisioning. But don’t confuse yourself: you can’t turn a standard- or mid-endurance drive (P3600/S3610, P3500/S35x0) into a High Endurance Technology SSD (such as the P3700 or S37x0). They use different NAND with a different maximum number of erase/program cycles. Luckily, you can use Endurance Analyzer to make an optimal choice of the exact product and the over-provisioning requirements.

To finish, I have one more customer story…


Final real-life example:

I want to return to my initial definition of endurance and the two ways to measure it, TBW and DWPD. Look how tricky it is…

Customer A over-provisioned the drive by 30%. He was absolutely happy with the write performance improvement on 4K block writes; he tested it with his real application and confirmed the stunning result. Then he decided to use Endurance Analyzer to estimate the endurance improvement in days, running the procedure as a test over a few days. He was surprised by the result. Endurance in TBW had increased significantly, but performance had increased too, so with 30% over-provisioning his workload was now writing more data per day, and he could no longer meet the 5-year life span. The only way to avoid this was to set a limit on write performance.
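The time dimension at play in this story can be sketched as a fixed TBW budget divided by the daily write volume. The TBW rating and write rates below are invented for illustration:

```python
# Sketch of the time dimension behind this story: the same TBW budget
# yields fewer years of life when the daily write volume rises.
# Numbers are illustrative only.

def lifetime_years(tbw_rating_tb, writes_per_day_tb):
    """Years until the TBW budget is exhausted at a steady write rate."""
    return tbw_rating_tb / (writes_per_day_tb * 365)

# Before over-provisioning: a 7300 TBW budget, 3 TB written per day.
print(lifetime_years(7300, 3))   # ~6.7 years: meets a 5-year span
# Over-provisioning raises throughput; daily writes double to 6 TB.
print(lifetime_years(7300, 6))   # ~3.3 years: misses the 5-year span
```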


Andrey Kudryavtsev,

SSD Solution Architect

Intel Corp.

Read more >

Observations From HPC User Forum Norfolk, Virginia & Bio IT World, Boston, 2015

By Aruna Kumar, HPC Solutions Architect Life Science, Intel



15,000 to 20,000 variants per exome (33 million bases) vs. 3 million single nucleotide polymorphisms per genome. HPC is clearly a welcome solution to the computational and storage challenges of genomics at the crossroads of clinical deployment.


At the High Performance Computing User Forum held in Norfolk in mid-April, it was clear that the face of HPC is changing. The main theme was bioinformatics, a relative newcomer to the HPC user base. Bioinformatics, including high-throughput sequencing, has introduced computing to entire fields that have not used it in the past. Just as in the social sciences, these fields share a thirst for large amounts of data that is still largely a search for incidental findings, while simultaneously seeking architectural optimizations, algorithmic optimizations, and usage-based abstractions. This is a unique challenge for HPC, and one that is challenging HPC systems solutions.


What does this mean for the care of our health?


Health outcomes are increasingly tied to the real-time use of vast amounts of both structured and unstructured data. Whereas clinical diagnostics such as blood work for renal failure, diabetes, or anemia are characterized by depth of testing, sequencing of the genome or a targeted exome is characterized by breadth.


As aptly stated by Dr. Leslie G. Biesecker and Dr. Douglas R. Green in a 2014 New England Journal of Medicine paper, “The interrogation of variation in about 20,000 genes simultaneously can be a powerful and effective diagnostics method.”


However, it is amply clear from the work presented by Dr. Barbara Brandom, Director of the Global Rare Diseases Patient Registry Data Repository (GRDR) at NIH, that the common data elements that need to be curated to improve therapeutic development and quality of life for many people with rare diseases are a relatively complex blend of structured and unstructured data.


The GRDR Common Data Elements table includes contact information, socio-demographic information, diagnosis, family history, birth and reproductive history, anthropometric information, patient-reported outcomes, medications/devices/health services, clinical research and biospecimen data, and communication preferences.


Now for some sizing of the data and compute needs, to scale the problem appropriately from a clinical perspective. Current sequencing samples at 30x on Illumina HiSeqX systems. That is 46 thousand files generated in a three-day sequencing run, adding up to 1.3 terabytes (TB) of data. This data is converted into the variant calls referred to by Dr. Green earlier in the article, and the analysis up to the point of generating variant call files accumulates an additional 0.5 TB of data per human genome. For clinicians and physicians to identify stratified subpopulation segments with specific variants, it is often necessary to sequence complex targeted regions at much higher sampling rates, with longer read lengths, than current 30x sampling provides. This will undoubtedly exacerbate an already significant challenge.
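A back-of-the-envelope sizing sketch using the per-genome figures above; the 100-genomes-per-month throughput is an assumption for illustration only:

```python
# Back-of-the-envelope storage sizing from the figures above: ~1.3 TB
# of raw data per 30x whole-genome run, plus ~0.5 TB accumulated while
# generating variant calls. The monthly throughput is an assumption.

RAW_TB_PER_GENOME = 1.3       # raw sequencing output per genome
ANALYSIS_TB_PER_GENOME = 0.5  # alignment/variant data per genome

def storage_tb(genomes):
    """Total storage for a given number of sequenced genomes."""
    return genomes * (RAW_TB_PER_GENOME + ANALYSIS_TB_PER_GENOME)

# A hypothetical clinical pipeline: 100 genomes/month, kept for a year.
print(round(storage_tb(100 * 12)))   # 2160 TB, i.e. roughly 2 PB
```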


So how does Intel’s solutions fit in?


Intel Genomics Solutions, together with the Intel Cluster Ready program, provide much-needed sizing guidance, enabling clinicians and their IT data centers to deliver personalized medicine in the most efficient manner and to scale with growing needs.


The compute need, broadly, is to handle the volume of genomic data in real time to generate alignment mapping files. These files contain the entire sequence, quality, and position information resulting from the largely single-threaded process of converting FASTQ files into alignment mappings. Alignment mapping files are generated as text and converted to a more compressed binary format known as BAM (binary alignment map). The difference between a reference genome and an aligned sample file (BAM) is what a variant call file contains. Variants come in many forms, though the most common is the presence or absence of a single base, or nucleotide, at a corresponding position; this is known as a single nucleotide polymorphism (SNP). The process of research and diagnostics involves the generation and visualization of BAM files, SNPs, and entire VCF files.
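As a small illustration of the VCF format mentioned above, here is a minimal sketch that parses one fabricated VCF data line and checks whether it describes a SNP (the sample values are invented):

```python
# Minimal sketch of reading one data line of a VCF (variant call format)
# file: tab-separated fields CHROM, POS, ID, REF, ALT, QUAL, FILTER,
# INFO. The sample line below is fabricated for illustration.

line = "chr1\t10177\trs367896724\tA\tAC\t100\tPASS\tAF=0.425"

def parse_vcf_line(line):
    chrom, pos, vid, ref, alt, qual, filt, info = line.split("\t")[:8]
    return {
        "chrom": chrom,
        "pos": int(pos),
        "id": vid,
        "ref": ref,
        "alt": alt,
        # Single-base REF and ALT would be a SNP; here REF "A" vs
        # ALT "AC" is an insertion, not a SNP.
        "is_snp": len(ref) == 1 and len(alt) == 1,
    }

print(parse_vcf_line(line)["is_snp"])   # False
```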


Given the lack of penetrance of incidental findings across large numbers of diseases, the final step to impacting patient outcomes, managing unstructured data and metadata, requires parallel file systems such as Lustre and object storage technologies that can scale out to support personalized medicine use cases.


More details on how Intel Genomics Solutions aid this scale-out to directly impact personalized medicine in a clinical environment will come in a future blog!



For more resources you can find out Intel’s role in Health and Life Sciences here and learn more about Intel in HPC at or learn more about Intel’s boards and systems products at

Read more >

April 2015 Intel® Chip Chat Podcast Round-up

In April, we continued to share Mobile World Congress podcasts recorded live in Barcelona, as well as the announcement from the U.S. Department of Energy that it had selected Intel to be part of its CORAL program, in collaboration with Cray and Argonne National Laboratory, to create two new revolutionary supercomputers. We also covered interesting topics like Intel Security’s True Key™ technology and Intel’s software defined infrastructure (SDI) maturity model. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!


Intel Chip Chat:

In this archive of a livecast from Mobile World Congress, John Healy, Intel’s GM of the Software Defined Networking Division, stops by to talk about the current state of Network Functions Virtualization (NFV) adoption within the telecommunications industry. He outlines how Intel is driving the momentum of NFV deployment through initiatives like Intel Network Builders and how embracing the open source community with projects such as Open Platform for NFV (OPNFV) is accelerating the ability for vendors to now offer many solutions that are targeted towards function virtualization.

Paul Messina, the Director of Science at Argonne Leadership Computing Facility at Argonne National Laboratory, stops by to talk about the leading edge scientific research taking place at the Argonne National Laboratory. He announces how the Aurora system, in collaboration with Intel and Cray, will enable new waves of scientific discovery in areas like wind turbine simulation, weather prediction, and aeronautical design. Aurora will employ an integrated system design that will drive high performance computing possibilities to new heights.

Barry Bolding, VP of Marketing and Business Development at Cray, announces that Intel, Cray, and the Department of Energy are collaborating on the delivery and installation of one of the biggest supercomputers in the world. He discusses how Cray is working to help its customers tackle their most challenging supercomputing, data analytics, storage, and data management problems. The 180 PetaFLOPS Aurora system will help solve some of the most complex challenges that the Department of Energy faces today, from material science and fluid dynamics to modeling more efficient solar cells and reactors.

Ed Goldman, CTO of the Enterprise Datacenter Group at Intel, discusses the maturity model that Intel is using to help enterprises lay the groundwork for moving their data centers to SDI. Ed explains that the SDI Enterprise Maturity Model defines five stages in the progression from traditional hard-wired architecture to SDI, and describes how Intel is building intelligence into its hardware platforms, including security, workload acceleration, and intelligent resource orchestration, to help data centers become ready for SDI.

In this archive of a livecast from Mobile World Congress, Mark Hocking, Vice President & General Manager of Safe Identity at Intel Security, stops by to talk about the new True Key product by Intel Security, which addresses a universal pain point for computing users around the globe: passwords. Mark discusses how True Key is changing the way people log in to websites and applications by using personal biometrics and password storage, so that you can automatically and securely sign in to your digital life without having to struggle with numerous passwords. To learn more, visit

In this archive of a livecast from Mobile World Congress, Sandra Rivera, VP and General Manager of the Network Platforms Group at Intel, chats about new innovative service capabilities that are solving business challenges in the telecommunications industry. She outlines how NFV has transformed the industry and highlights the work that Intel is doing to enable the telecommunications ecosystem through the Intel Network Builders program, which now has over 125 community members. For more information, visit


Intel, the Intel logo, and True Key are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

Read more >

Nurses Week: Telehealth Set to Increase Tenfold and Help Nurses Provide Even Better Care

As we celebrate Nurses Week across the world, I wanted to highlight the impact of telehealth on the changing nature of nursing and healthcare more generally.


But before I do that, we must recognise that for all of the technological advancements, the priority for nurses is to provide the best patient care. And telehealth is helping nurses to do just that. With ageing populations across most developed nations putting additional stress on healthcare systems, there is an increasing need to free up costly hospital beds and nurses’ time by monitoring and managing patients remotely.


From blood pressure monitors to fall sensors, telehealth is enabling nurses to provide better care by helping them work more efficiently and stay better informed about a patient’s condition. Recent research (see infographic below) suggests that telehealth will increase tenfold from 2013 to 2018, and with advances around the ‘Internet of Things’ bringing enhanced levels of connected care, I see nurses being able to do a great job even better in the future.




Read more >

A New Era of Human-Computer Interaction: Celebrating Developer Innovation

As we reflect on the 50th anniversary of Moore’s Law, we stand at the threshold of a personal computing revolution. New user interface solutions such as head-mounted displays and Intel® RealSense™ technology are transforming human-computer interaction. There are plenty of … Read more >

The post A New Era of Human-Computer Interaction: Celebrating Developer Innovation appeared first on Intel Software and Services.

Read more >

The NFV Business Process Revolution

Network functions virtualization (NFV) is generally viewed as a revolutionary concept because of the positive economic impact it is having on service provider networks.  NFV takes cost out of the network by replacing proprietary hardware and software with industry standard servers running open standards-based solutions.  By deploying network applications and services built on these servers, service providers can achieve service agility for their customers.


NFV also ushers in another significant business process revolution: the deep involvement of service providers in the definition and development of NFV standards and solutions. I was reminded of that as I looked over the list of NFV demos that Intel is showing at the NFV World Congress in San Jose, May 5-7.


In the pre-NFV days, key service providers worked closely with equipment vendors to build the telecom network. Strict adherence to industry standard specs was paramount, but innovation sometimes got stifled. The approach was well suited to a more closed technology environment.


Now, service providers are getting much more involved at the ground level, with significant contributions to the solutions and the underlying technology. One case in point is Telefonica, which spent a year developing software interfaces for NFV management and orchestration (MANO) and recently released the result as an open source project called OpenMANO. The code strengthens the connection between the virtual infrastructure manager (VIM) and the NFV orchestrator.


Other service providers will be showcasing their contributions in the Intel booth at NFV World Congress. Here’s a list of all of the demos that we’ll feature at the show:


End-to-End NFV Implementation: This demo will use MANO technology to help carriers guarantee performance during the on-boarding of new virtual network functions. The exhibit will demonstrate a simplified approach to ingesting the SDN/VNFD and performing intelligent VNF placement through an intelligent orchestration engine. Participating partners: Telefonica, Intel, Cyan, Red Hat and Brocade.


Carrier Grade Service Function Chain: China Telecom will showcase a centralized software-defined networking (SDN) controller with an integrated service chaining scheduler that supports dynamic service chaining for data centers, IP edge networks and mobile networks in a flexible and scalable fashion. China Telecom’s enhancements to the open source Data Plane Development Kit (DPDK) improve the performance of VM-to-VM communication. Participating partners: China Telecom, Intel.


Multi-vendor NFV for Real-Time OSS/BSS: This live demonstration shows how NFV concepts can be applied to OSS/BSS functions. With support for deep packet inspection (DPI), policy, charging and analytics, and OSS/BSS, all in an NFV implementation, this demonstration will deliver increased system agility, elasticity, and greater service availability. Participating partners: Vodafone, Red Hat, Intel, Openet, Procera Networks, Amartus and Cobham Wireless.


Nanocell: The nanocell is a next-generation small wireless base station running on an Intel-powered blade server. Developed by the China Mobile Research Institute, the nanocell supports GSM / TD-SCDMA / TD-LTE standards and WLAN (WiFi) network connections. In most applications, the nanocell will have a range of between 100m and 500m, making it ideal for deployment in enterprise, home and high-capacity hotspot locations. Participating partners: China Mobile Research Institute, Intel.


Service Function Chaining: In addition to these carrier demos, Intel and Cisco will reprise one of the most popular demos from the recent Mobile World Congress: the first network service header (NSH)-based service function chaining demo. The demo presents a chance to see Intel’s new 100GbE technology and Cisco’s OpenDaylight implementation, which provides advanced user controls.
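For readers new to the concept, service function chaining simply means steering traffic through an ordered sequence of network functions. The following is a minimal illustrative sketch, not the Intel/Cisco NSH implementation; the function names and packet fields are hypothetical, chosen only to show the idea of a packet flowing through a chain of virtual network functions:

```python
# Toy model of a service function chain: each "VNF" is a function that
# takes a packet (here, a dict) and returns a transformed packet, or
# None to drop it. All names and fields are illustrative.

def firewall(packet):
    # Drop packets destined for a blocked port; pass everything else.
    if packet["dst_port"] in {23, 445}:
        return None
    return packet

def nat(packet):
    # Rewrite the private source address to a (hypothetical) public one.
    packet["src_ip"] = "203.0.113.10"
    return packet

def dpi(packet):
    # Tag the packet with a coarse application classification.
    packet["app"] = "web" if packet["dst_port"] in (80, 443) else "other"
    return packet

def run_chain(packet, chain):
    """Apply each service function in order; stop if one drops the packet."""
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:
            return None
    return packet

chain = [firewall, nat, dpi]
result = run_chain({"src_ip": "10.0.0.5", "dst_port": 443}, chain)
# result: {'src_ip': '203.0.113.10', 'dst_port': 443, 'app': 'web'}
```

In a real NSH-based deployment the chain is not hard-coded like this: a service path identifier carried in the packet header tells each hop which function comes next, so chains can be reconfigured dynamically by the SDN controller.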


If you are planning to attend the NFV World Congress, stop by the Intel booth and take some time with these demonstrations. I look forward to seeing you there.

Read more >