Recent Blog Posts

An Introduction to Dual-Port NVMe SSDs

March 31st, 2016. It’s the last day of Q1, and it’s full of surprises. If you missed the announcement, take a minute to read the press release. I’m proud today: we at NSG (Non-Volatile Memory Solutions Group) have just released new products based on technologies that are very new to the industry. First among them is the first Intel 3D NAND based NVMe SSD for the Data Center. My peer Vivek Sarathy covers its performance and SATA-like pricing in his blog.

That’s not all. Another new SSD family has been announced: the Intel® SSD DC D3700 / D3600 Series. These are very special SSDs that address High Availability (HA) designs with dual-port PCI Express* connectivity. This architecture provides critical redundancy and failover, protecting against any single path failure.

 

diagram.png

 

In practice, this means the SSD can be connected to two hosts at a time, shown as Storage Controllers in the diagram. They can be connected directly to a host CPU or via a PCIe switch topology if a higher SSD count is required. If you’re familiar with SAS-based Enterprise Storage HA designs, this looks very similar, but it is implemented over the PCIe bus.

Dual-port NVMe extensions were added to the original specification in the NVMe 1.1 revision a few years ago. Since then, a few vendors have announced products and solutions based on the technology. Adoption is ramping up now, but the ecosystem is new and very focused on specific problems. These problems are common to Enterprise Storage (Scale-Up Storage) and some other areas such as HPC storage. By the way, please take a look at another peer’s blog, Allen Scheer’s “What Kind of Storage Buyer Are You?”.

 

Dual-port NVMe is another way to build HA topologies. It also means that systems designed for single-port NVMe SSDs need re-architecture. The product is available in a single form factor, 2.5” U.2, sharing the same connector as before. It still has 4 lanes of PCIe Gen3 as in the original design, but for dual-port designs they are split into two pairs: 2 x PCIe Gen3 x2. To support the new connectivity, the system must have a new backplane with PCIe lanes properly routed to the two hosts, with or without PCIe switches.
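The lane split above is easy to reason about with back-of-the-envelope numbers. The sketch below (my own illustration, not from the product datasheet) uses the standard PCIe Gen3 figures of 8 GT/s per lane with 128b/130b encoding to compare a single x4 link against two independent x2 links:

```python
# Back-of-the-envelope PCIe Gen3 bandwidth for the dual-port lane split.
# PCIe Gen3 signals at 8 GT/s per lane with 128b/130b encoding,
# giving roughly 0.985 GB/s of usable bandwidth per lane.
GT_PER_S = 8
ENCODING = 128 / 130
GB_PER_LANE = GT_PER_S * ENCODING / 8  # bits -> bytes

def link_bandwidth_gbps(lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for a PCIe Gen3 link."""
    return lanes * GB_PER_LANE

single_port = link_bandwidth_gbps(4)    # one x4 link to one host
dual_port = 2 * link_bandwidth_gbps(2)  # two independent x2 links

print(f"single-port x4: {single_port:.2f} GB/s total")
print(f"dual-port 2 x x2: {dual_port:.2f} GB/s total, "
      f"{link_bandwidth_gbps(2):.2f} GB/s per host")
```

The aggregate bandwidth of the drive is unchanged; what changes is that each host sees an x2 link, which is the price of having two fully independent paths.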

 

There is another advantage of the D3700 / D3600 Series over current single-port SSDs. These drives are based on the NVMe 1.2 specification, which introduces new features for all NVMe SSDs.


features.png


One of those features is multiple namespace support. You can draw an analogy with SCSI LUNs: a single SSD can be partitioned into multiple hardware partitions, where a namespace can be assigned to both hosts or dedicated to a single host. This isolates the partition from the other host until a critical failure occurs on the assigned host.
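To make the assignment and failover behavior concrete, here is a deliberately simplified toy model of namespace-to-host mapping. It is illustration only; real namespace management happens through NVMe admin commands, and the host names here are invented:

```python
# Toy model of namespace-to-host assignment on a dual-port SSD.
# Illustration only: real namespaces are managed via NVMe admin
# commands, not application code like this.

class DualPortSSD:
    def __init__(self):
        self.namespaces = {}  # nsid -> set of hosts allowed to attach

    def create_namespace(self, nsid, hosts):
        """A namespace can be dedicated to one host or shared by both."""
        self.namespaces[nsid] = set(hosts)

    def can_access(self, nsid, host):
        return host in self.namespaces[nsid]

    def failover(self, nsid, failed_host, standby_host):
        """On a critical host failure, hand the namespace to the standby."""
        self.namespaces[nsid].discard(failed_host)
        self.namespaces[nsid].add(standby_host)

ssd = DualPortSSD()
ssd.create_namespace(1, ["hostA"])           # dedicated to host A
ssd.create_namespace(2, ["hostA", "hostB"])  # shared by both hosts

assert not ssd.can_access(1, "hostB")  # isolated until failover
ssd.failover(1, "hostA", "hostB")
assert ssd.can_access(1, "hostB")      # standby host takes over
```

The point of the model is the last three lines: host B is fenced off from namespace 1 during normal operation and only gains access when host A fails.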


Looks complicated? Yes, these are complex design changes, but they pay for themselves right away through performance improvements. It also means that to make the product successful, Intel partners with hardware and software vendors to enable support for the new drives. I’m very happy to see storage innovators such as XIO and E8 Storage working with Intel to demonstrate the benefits in the enterprise storage proof points that follow. More work is underway with Quanta, Wistron, AIC, and other storage partners.

 

xio.png

xio2.png

Intel SSD DC D3700 vs. SAS SSD performance comparison. Source: XIO. Configuration: external host running Windows Server 2008. External host specifications: HP DL360 G7 with dual Intel E5-2620 and 25GB RAM. Storage array system using E5-2699 v3 with 40x Intel DC D3700 10 DWPD 800GB, and storage array system using E5-2699 v3 with 40x SAS 10 DWPD 400GB. Test: 8K transfers with an 80/20 read/write workload at QD 1, 2, and 4, accessing 1 volume on the shared storage array. Measurements taken with IOMeter.

 

 

e8.png

E8 Storage high availability. Source: E8. Configuration: 4 hosts connected to an E8 PoC storage system with 2 E5-2650 v3 CPUs and 24 Intel DC D3700 800GB drives. Performance measured with 8 FIO threads per host, QD=32 per thread, 4K 100% random read.

 

I can’t wait to share more with you. See you at IDF16 in the Dual-Port NVMe class.

Read more >

Smart Infrastructure: Is the next IoT revolution on the right trajectory?

In early to mid-2015, the Indian government announced plans to turn 100 Indian cities into “smart cities.” The idea is to leverage cloud technology, IoT/M2M, and big data in order to rethink waste management, traffic, electricity, and other city infrastructures. Smart city initiatives have been proposed or launched all over the world in the past few years. Cities across the spectrum like Singapore, Helsinki, Nairobi, and New York are all in the midst of it.

 

Smart City blog image.jpg

But there’s a huge obstacle that these cities are encountering in their first attempts at becoming smart cities: a lack of concerted planning, communication, and collaboration among the many players.

 

The Way to Smart Cities is Connected Infrastructure

 

At this point, the smart infrastructure movement is still disparate. The companies building these smart solutions see their products as autonomous but, for the cities trying to integrate these systems, nothing could be further from the truth.

 

A truly smart city will communicate seamlessly. Different technologies and products made by different companies have to speak the same language and play by the same rules. But at this early stage in the IoT movement, most companies making these technologies are waiting for standards to make them more integrated and collaborative.

 

That’s what we’re doing at Intel: finding ways to unite more of the players in the space. We’re stepping back with a product-agnostic approach and watching the market with an eye on best solutions and products. Based on what we find and what customers need, we’re making connections between players in the ecosystem. We’re looking for and influencing designs and standards that are future-proof, sustainable, and scalable.

 

The Factors at Play

 

Here’s an example: A city decides they want to install smart traffic or weather cameras on the streets. There are several ways the city can go about pulling the data from the camera. They could do it right away, at the camera itself: the camera catches the activity and it has the intelligence built-in to process the data and send an alert to the right authority. Or, it could be set up as a “dumb” camera: the camera simply captures the images and then sends them all the way back to the data center in a centralized location, where the information is processed and alerts are sent out.

 

For a city trying to find the right products, there are dozens of factors to consider before making a decision. What’s the internet infrastructure like in their city? How expensive is it to send the data back and forth? How expensive is it to use hardware that can process the data at the site? And that’s on top of the challenges we discussed above, about how the different pieces of technology within the product itself speak to one another and the software that’s used to analyze the data.
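Those cost questions can be framed as a simple trade-off between backhaul spend and on-site hardware spend. The sketch below is a rough illustration; every number in it is a hypothetical placeholder, and a real evaluation would use the city's actual bandwidth tariffs and hardware quotes:

```python
# Rough cost sketch for the edge-vs-centralized camera choice.
# All figures are hypothetical placeholders for illustration.

def monthly_cost(cameras, gb_per_camera, price_per_gb, edge_hw_monthly):
    """Return (dumb-camera backhaul cost, smart-camera edge cost)."""
    backhaul = cameras * gb_per_camera * price_per_gb  # ship raw video out
    edge = cameras * edge_hw_monthly  # process on-site, minimal backhaul
    return backhaul, edge

backhaul, edge = monthly_cost(
    cameras=500,
    gb_per_camera=300,     # raw video sent back to the data center
    price_per_gb=0.05,     # assumed transfer price
    edge_hw_monthly=10.0,  # amortized smart-camera hardware premium
)
print(f"centralized: ${backhaul:,.0f}/mo, edge: ${edge:,.0f}/mo")
```

With these made-up inputs the edge option wins, but flip the bandwidth price or the hardware premium and the answer flips too, which is exactly why each city needs its own evaluation.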

 

In order to make good choices and create truly smart infrastructure, we need to evaluate what each city’s needs truly are and what solutions best fit these needs. These cities need some help from advisors that are truly agnostic.

 

Because of the many different products Intel makes, the company has been working closely with Original Design Manufacturers (ODMs) in nearly every area of tech for the last 50 years. That has given us a deeper understanding of these brands, their products, their technology roadmaps, and the ways in which smart infrastructure can be successfully implemented. Put simply, for the gastronomically inclined: the company offers the best ingredients to transform your recipes.

 

In future posts, we’ll be delving deeper into the challenges smart cities and enterprises are facing as they implement more IoT solutions.

 

Kavitha Mohammad is the Director of Sales for Intel IoT and SmartCities, in the Asia Pacific Japan region. Follow her on Twitter and LinkedIn.

Read more >

New Cancer Institutions Join OHSU and Intel in the Collaborative Cancer Cloud

Precision medicine is gaining traction worldwide. Countries like China, the UK and Saudi Arabia are all committing to enabling precision medicine to improve the health of their people. In the US, I have been honored to learn from, and serve on, the NIH advisory group for the President’s Precision Medicine Initiative (PMI). Recently, Intel made corporate commitments to help accelerate the PMI effort.  We’ve launched an industry challenge called “All in One Day” to make an individual’s precision treatment possible, easy, and affordable within 24 hours from genome sequence to customized care plan.

 

As I and my team travel around the world to drive this initiative, we are hearing a common refrain around the need for robust and secure ways to share data so we can accelerate the scientific breakthroughs and insights for precision medicine.  It is increasingly clear that secure data sharing—at a scale far beyond what today’s efforts have achieved so far—is a fundamental barrier we must overcome to scale precision medicine for all. Vice President Biden’s “cancer moonshot” effort, for example, is focusing on this crucial data sharing challenge.

 

To that end, we announced our work with OHSU on the Collaborative Cancer Cloud in August. Earlier today, Intel and OHSU were pleased to announce the expansion of the Collaborative Cancer Cloud to include Dana-Farber Cancer Institute and Ontario Institute for Cancer Research. I am excited to welcome them as fellow pioneers in collaborating on this personalized medicine platform.

Cancer research, and the institutions doing that research, benefit greatly when the size of the datasets is maximized. By participating in the Collaborative Cancer Cloud, institutions increase their chances of making new discoveries and finding potentially life-saving insights through collaborative analytics across the patient datasets they have collectively assembled.

 

The Collaborative Cancer Cloud is unique because it uses a federated approach: the institutions don’t need to upload their data to a centralized location in order to share it or run analytics on larger datasets. This overcomes many of the concerns around collaborating on sensitive datasets while still providing access to unprecedented volumes of data. It allows secure, aggregated computation across distributed sites without loss of local control of the data, ensuring that an institution maintains proper custody of its datasets, protects patient privacy, and retains any institutional intellectual property that may result.
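The core of the federated idea can be sketched in a few lines: each institution computes a local summary, and only those summaries, never the raw records, leave the site. This is my own minimal illustration, not the Collaborative Cancer Cloud's actual protocol, and the field names and data are invented:

```python
# Minimal sketch of federated analytics: raw records stay on-site,
# only aggregate summaries are shared with the coordinator.
# Field names and values are invented for illustration.

def local_summary(records):
    """Computed inside each institution; raw records never leave."""
    values = [r["tumor_size"] for r in records]
    return {"n": len(values), "sum": sum(values)}

def federated_mean(summaries):
    """Runs at the coordinator, which sees only the aggregates."""
    n = sum(s["n"] for s in summaries)
    total = sum(s["sum"] for s in summaries)
    return total / n

site_a = [{"tumor_size": 2.0}, {"tumor_size": 3.0}]
site_b = [{"tumor_size": 4.0}]
mean = federated_mean([local_summary(site_a), local_summary(site_b)])
print(mean)  # mean across both sites, computed without pooling records
```

The coordinator gets the same answer it would from a pooled dataset, yet no patient-level record crosses an institutional boundary; a production system would add authentication, encryption, and far richer statistics on top of this skeleton.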

 

As more institutions join precision medicine platforms like the Collaborative Cancer Cloud, they will break trail on many important elements of collaborating in a federated environment. The Collaborative Cancer Cloud is designed to let researchers determine how and when their data will be used. For example, while the Collaborative Cancer Cloud provides a standard set of tools, it is the institutions who determine which tools they will use and which tools can be used on their data. This type of personalized medicine platform is designed to evolve and adapt to meet the needs of the institutions using it, rather than forcing institutions to conform to the tools.

 

With the announcement today of OICR and DFCI helping Intel and OHSU to prove out and scale out these tools, it feels like the All in One Day is one step closer. But we have many miles to go to drive the kind of security, the kind of scale, the kind of collaborative data sharing that will be needed to accelerate the research, and thus the clinical options, for not only people with cancer but a wide range of diseases. We look forward to bringing on more collaborators, more data, and more tools-makers in the near future.

 

Learn more about Intel Life Sciences at www.intel.com/healthcare/lifesciences

Read more >

Can Data Analytics Help Save the Next Baby P?

If you were living in England in 2007, you probably remember the tragic death of Baby P.


Little Peter Connelly, age 17 months, died that year after sustaining more than 50 injuries over eight months at the hands of his caregivers. Despite numerous encounters with the healthcare and social care systems, Peter fell through the cracks. He died before anyone recognized the pattern of his injuries and intervened successfully to save him.

 

Peter’s death epitomizes the question that plagues case workers and clinicians around the world: How can I prevent the next Baby P? With heavy caseloads and many organizations involved, how can conscientious clinicians and caseworkers – whether they work with children, the frail elderly, victims of domestic violence, or other vulnerable individuals – assess each client’s life and health, identify clients who are at greatest risk, and get the right resources to them at the right time?

 

To accomplish this, clinicians and case workers need a comprehensive picture of the client’s encounters with diverse agencies. Oftentimes, valuable information is housed in clinical case notes and incompatible record-keeping silos, leaving care providers with only a partial view of the client’s health situation.

 

Now, there’s technology that can help. The North East London National Health Service (NHS) Foundation Trust (NELFT) recently worked with Intel and Santana Big Data Analytics Ltd. (Santana BDA) on a proof-of-concept project demonstrating a practical, affordable tool for extracting relevant information from large volumes of clinical case notes.

 

The Santana solution uses sophisticated big data analytics techniques to search through text-based clinical notes from diverse sources, such as those made by GPs, psychiatrists, community nurses, school nurses, and others. As it searches, it extracts crucial information and then presents it in a quick, easy-to-review format to authorized care professionals. Using these results, care professionals may be better able to:

  • Get value from written notes that are too voluminous for practical, timely review by humans
  • Gain a more complete understanding of the patient’s health and circumstances
  • Identify risks and prioritize caseloads to help ensure critical needs are met
  • Respond proactively rather than reactively
  • Make better use of consultation time and conduct more focused, relevant dialogue with patients
  • Improve resource utilization through earlier intervention and potentially avoiding hospital admission
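To give a flavor of what scanning free-text notes for risk indicators looks like, here is a deliberately naive keyword-matching sketch. The Santana solution uses far more sophisticated big data analytics than this, and the terms and notes below are invented for the example:

```python
# Deliberately naive illustration of flagging risk indicators in
# free-text case notes. A real system (like Santana's) uses much
# more sophisticated analytics; these terms and notes are invented.

RISK_TERMS = {"bruising", "missed appointment", "unexplained injury"}

def flag_notes(notes):
    """Return (note, matched terms) for each note containing a risk term."""
    flagged = []
    for note in notes:
        text = note.lower()
        hits = {t for t in RISK_TERMS if t in text}
        if hits:
            flagged.append((note, hits))
    return flagged

notes = [
    "Routine check-up, no concerns.",
    "GP noted unexplained injury on left arm; second missed appointment.",
]
for note, hits in flag_notes(notes):
    print(sorted(hits), "->", note)
```

Even this crude version shows the shape of the workflow: volumes of text no human could review in time are reduced to a short, prioritized list a case worker can act on.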


As a nurse and a former locality commissioner, I recognize just how important technology innovations like these are in helping us prevent another Baby P. I invite you to read this recent paper from Intel, NELFT, and Santana BDA, which outlines our collective work to reduce risk and improve care.

 

Read more >

Chris Hoofnagle’s Federal Trade Commission Law and Policy Offers Exceptional Insights into the FTC

By John Kincaide, Privacy and Security Policy Attorney at Intel. The Federal Trade Commission (FTC) is a US agency whose strategic goals include protecting consumers from fraud, deception, and unfair business practices, and maintaining competition by focusing on anticompetitive mergers … Read more >

The post Chris Hoofnagle’s Federal Trade Commission Law and Policy Offers Exceptional Insights into the FTC appeared first on Policy@Intel.

Read more >

Workplace Transformation Part 2: March Madness or Wire Madness-Which is the Bigger Productivity Killer?

March Madness is in full swing and my final four are still alive. Like many, I’m enthusiastic about cheering on my alma mater (outside of working hours, naturally.) It’s estimated that more than 50 million office workers will participate in … Read more >

The post Workplace Transformation Part 2: March Madness or Wire Madness-Which is the Bigger Productivity Killer? appeared first on Technology Provider.

Read more >

We Have Seen the Future: Findings from the Pacific Northwest Smart Grid Demonstration Project

The recently-concluded Pacific Northwest Smart Grid Demonstration Project was “…a grand experiment, delivering useful results that will help shape future smart grid activity in our nation,” according to Dr. Ron Melton of Battelle Pacific Northwest Laboratories, the project’s director. But … Read more >

The post We Have Seen the Future: Findings from the Pacific Northwest Smart Grid Demonstration Project appeared first on Grid Insights by Intel.

Read more >

Intel Teams up with Open Labs, Lenovo and Linkin Park to Drive High-Performance Music Creation

Last year, I had a musical experience of a lifetime. I was sitting in the famed Red Bull studios in Santa Monica, talking with members of the Grammy Award-winning band Linkin Park. Star-struck, yes. But my focus wasn’t on their … Read more >

The post Intel Teams up with Open Labs, Lenovo and Linkin Park to Drive High-Performance Music Creation appeared first on Technology@Intel.

Read more >