Recent Blog Posts

Bringing the “Wow” of 4K Display to Desktop Computing

4K display technology is nearly indescribable. But I’ll try.

 

With 8.3 million pixels on the screen—four times as many as mainstream Full HD 1080p displays—every video of your kids playing football, every holiday photograph, and every heart-pounding move you make in a game is brought to life. And desktops powered by the latest Intel® processors provide an optimal combination of screen size and close-up interaction to unleash the true magic of the technology.

 

  • When you create, edit, or view photography using a desktop with a 4K display, the gap between real and digital narrows—colors pop, lines are super smooth, and every detail is crisp.
  • The little dimples on the football as it spirals through the air, the glistening beads of water dripping from the leaves during a morning hike—4K delights with the sharpest video ever, whether you’re a professional or a master of the home studio.
  • The newest generation of desktop games delivers mind-blowing packages of creative wizardry, original storytelling, compelling characters, and huge-budget production quality. Now quadruple that.
  • With sharper text and graphics, 4K helps reduce eye strain, making reading, writing, and working more pleasant in many desktop apps, including the web.


Intel has been working with key collaborators, including Samsung, to bring 4K technology to mainstream desktop computing. A new white paper, Stunning 4K Display Technology: A Must Have for Desktop Computing, tells the story of this initiative, with details, insights, a view of what’s to come, and an interview—excerpted below—with Platform Innovation Manager Roland Wooster, Desktop Client Platforms, CCG, who has been a driving force in this area.

 


Why are you focusing on desktop computers?

 

TV manufacturers have been delivering 4K for a few years now, and there’s already been a lot of progress in the displays of smaller devices like smartphones and tablets. But there was a big gap right where enthusiast users need it most—on desktop computers. People who are serious gamers or content creators—those who produce and edit photos and videos—want to work and play on more powerful systems with larger displays, which logically means a desktop computer. Plus, we know that the distance between the user and a 24” 4K display is the sweet spot for an immersive experience. This is exactly where we need to be right now.

 

Beyond the screen, what’s required for a true 4K experience?

 

Processor speed is certainly key. For the best experience with photo editing, you need an Intel® Core™ i5 processor at the very least, and for video editing and gaming you absolutely need to be running an Intel® Core™ i7 processor along with a discrete video card. And of course, to get the most out of 4K displays, you need software that supports the technology. We also worked with a number of software developers, like Microsoft, Adobe, and DivX, to help prepare the applications that will deliver 4K content on these new systems. Having those applications ready when monitor prices drop is essential to making sure that mainstream users will be able to enjoy the technology.


Read the entire interview and learn more about 4K technology: Stunning 4K Display Technology: A Must Have for Desktop Computing. And join the conversation using #IntelDesktop.


This is the fourth installment of the Desktop World Tech Innovation Series.

 

To view more posts within the series, click here: Desktop World Series

Read more >

Intel Internet of Things Sets Industrial Developers on a Path Forward at Hannover Messe 2015

In today’s reality, the critical infrastructure of successful smart factories relies more on bits and bytes than nuts and bolts. To help accelerate the development of more intelligent factories, Intel IoT guided developers toward a more accessible and robust path … Read more >

The post Intel Internet of Things Sets Industrial Developers on a Path Forward at Hannover Messe 2015 appeared first on IoT@Intel.

Read more >

What the Future Holds for Antenna Pass-Thru and Utility Workers

It’s true. Antenna pass-thru for in-vehicle computing devices is officially passé. Even with a stellar installer and an external antenna (think rabbit ears for the vehicle), you’re limiting your mobile capabilities. Applying 1990s technology yields little in-vehicle benefit now that … Read more >

The post What the Future Holds for Antenna Pass-Thru and Utility Workers appeared first on Grid Insights by Intel.

Read more >

Data Scalability with InterSystems Caché® 2015.1 and Intel® Xeon® Processors

Can your data platform keep up with rising demands?

 

That’s an important question given the way changes in the healthcare ecosystem are causing the volume of concurrent database requests to soar. As your healthcare enterprise grows and you have more users needing access to vital data, how can you scale your health record databases in an efficient and cost-effective way?

 

InterSystems recently introduced a major new release of Caché and worked with performance engineers from Epic to put it to the test. The test engineers found that InterSystems Caché 2015.1 with Enterprise Cache Protocol (ECP) technology on the Intel® Xeon® processor E7 v2 family achieved nearly 22 million global references per second (GREFs) while maintaining excellent response times.

 

That’s more than triple the load levels they achieved using Caché 2013.1 on the Intel® Xeon® processor E5 family. And it’s excellent news for Epic users who want a robust, affordable solution for scalable, data-intensive computing.

 

Intel, InterSystems, and Epic have created a short whitepaper describing these tests. I hope you’ll check it out. It provides a nice look at a small slice of the work Epic does to ensure reliable, productive experiences for the users of its electronic medical records software. Scalability tests such as these are just one part of Epic’s comprehensive approach to sizing systems, which includes a whole range of additional factors.

 

These test results also show that InterSystems’ work to take advantage of modern multi-core architectures is paying off with significant advances in ultra-high-performance database technology. Gartner identifies InterSystems as a Leader in its Magic Quadrant for Operational Database Management Systems,[1] and Caché 2015.1 should only solidify its position as a leader in SQL/NoSQL data platform computing.


Intel’s roadmap shows that the next generation of the Intel Xeon processor E7 family is just around the corner. I’ll be very interested to see what further performance and scalability improvements the new platform can provide for Epic and Caché. Stay tuned!

 

Read the whitepaper.

 

Join & participate in the Intel Health and Life Sciences Community: https://communities.intel.com/community/itpeernetwork/healthcare

 

Follow us on Twitter: @InterSystems, @IntelHealth, @IntelITCenter

 

Learn more about the technologies

 

Peter Decoulos is a Strategic Relationship Manager at Intel Corporation

 


[1] InterSystems Recognized As a Leader in Gartner Magic Quadrant for Operational DBMS, October 16, 2014. http://www.intersystems.com/our-products/cache/intersystems-recognized-leader-gartner-magic-quadrant-operational-dbms/

Read more >

5G Is a Huge Opportunity for TEMs

By Caroline Chan, Wireless access segment manager, Network Platform Group, Intel



No matter where you fit in the wireless food chain, expect the transition to 5G to be exhilarating. The demand for new devices and mobile infrastructure will be incredible, making the coming years a very busy time for telecommunications equipment manufacturers (TEMs). First deployments in 2020 are a realistic objective, according to a panel of industry leaders hosted by Frost & Sullivan.1

 

5G Vision

 

Although the final requirements haven’t been ironed out yet, major industry players already have high aspirations for 5G. These include major performance improvements such as an order-of-magnitude reduction in latency (both air and end-to-end) and a more than tenfold increase in peak data rate. There will also be provisions for critical service assurance for connected cars and very low-rate services for the billions of Internet of Things (IoT) devices coming online.

 

Along these lines, the Next Generation Mobile Networks (NGMN) Alliance recently released a 5G white paper proposing requirements around system performance, user experience, devices, business models, management and operation, and enhanced services.2

 

5G Technology at MWC 2015

 

To no one’s surprise, 5G was a key theme at this year’s Mobile World Congress. “Huawei, Ericsson, and Nokia Networks demonstrated technology that forms the basis of their 5G road maps; and some leading operators, such as Deutsche Telekom, also spoke about how developments, including network functions virtualization (NFV) and software defined networks (SDN), are making 5G possible,” wrote Monica Alleven, editor of FierceWirelessTech.3

 

Sky-High Forecasts

 

Over time, 5G infrastructure is expected to serve around ten thousand times more devices than are currently connected to mobile networks, with IoT devices and cars accounting for a large part of the growth. This trend will ultimately generate a tremendous amount of business for TEMs, as reflected in Intel’s 5G vision below:

 


 

Radio access network (RAN) capacity expands by 1,000 times to increase mobility and coverage for subscribers, IoT devices, and cars. This includes more radio towers, small cells, and remote radio heads (RRHs) supporting Cloud-RAN (C-RAN) deployments.

 

Mobile core adds 100 times more capacity to meet the growing traffic demand. This is primarily evolved packet core (EPC) equipment, which today is represented by various LTE network elements:

 

  • Serving Gateway (Serving GW)
  • PDN Gateway (PDN GW)
  • Mobility Management Entity (MME)
  • Policy and Charging Rules Function (PCRF) Server
  • Home Subscriber Server (HSS)

 

Backhaul capacity is expected to increase tenfold. Backhaul is the infrastructure (routers, switches, fiber, and microwave links) that connects a cell site to the mobile core.

 

Virtualized Infrastructure

 

The momentum behind virtualized equipment will grow stronger with 5G, as SDN and NFV advancements continue and spread to the RAN, customer-premises equipment (CPE), and other devices. Look for new services based on big data to influence the way networks are being constructed and monetized.

 

5G is looking like a wonderful opportunity for TEMs, perhaps even better than the first four generations of mobile networks. Read more about Intel’s 5G vision at http://iq.intel.com/will-5g-bring-new-dimension-wireless-world.

 

1 Source: Jessy Cavazos, Frost & Sullivan, “5 insights about 5G that may surprise you,” March 17, 2015, www.evaluationengineering.com/2015/03/17/5-insights-about-5g-that-may-surprise-you.

2 Source: Next Generation Mobile Networks (NGMN) Alliance, “NGMN 5G White Paper,” February 17, 2015, https://www.ngmn.org/fileadmin/ngmn/content/images/news/ngmnnews/NGMN5GWhitePaperV10.pdf.

3 Source: Monica Alleven, FierceWirelessTech, “MWC 2015: NGMN Alliance, Huawei, Ericsson, Nokia talk 5G and more,” March 9, 2015, www.fiercewireless.com/tech/story/mwc-2015-ngmn-alliance-huawei-ericsson-nokia-talk-5g-and-more/2015-03-09.

 

 

© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

Read more >

HPC User Forum Norfolk, Virginia & Bio IT World, Boston, 2015 – Observations from the desk of Aruna B. Kumar


27 April 2015

By Aruna Kumar, HPC Solutions Architect Life Science, Intel


15,000 to 20,000 variants per exome (33 million bases) vs. 3 million single nucleotide polymorphisms per genome. HPC is a clearly welcome solution to the computational and storage challenges of genomics at the crossroads of clinical deployment.


At the High Performance Computing User Forum held in Norfolk in mid-April, it was clear that the face of HPC is changing. The main theme was bioinformatics, a relative newcomer to the HPC user base. Bioinformatics, including high-throughput sequencing, has introduced computing to entire fields that have not used it in the past. As in the social sciences, these fields share a thirst for large amounts of data that is still largely a search for incidental findings, while simultaneously seeking architectural and algorithmic optimizations and usage-based abstractions. This is a unique challenge for HPC, and one that is stretching HPC systems solutions.


What does this mean for the care of our health?


Health outcomes are increasingly tied to real-time use of vast amounts of both structured and unstructured data. Sequencing of the genome or a targeted exome is distinguished by its breadth: whereas clinical diagnostics such as blood work for renal failure, diabetes, or anaemia are characterized by depth of testing, genomics is characterized by breadth of testing.


As aptly stated by Dr. Leslie G. Biesecker and Dr. Douglas R. Green in a 2014 New England Journal of Medicine paper, “The interrogation of variation in about 20,000 genes simultaneously can be a powerful and effective diagnostics method.”


However, it is amply clear from the work presented by Dr. Barbara Brandom, Director of the Global Rare Diseases Patient Registry Data Repository (GRDR) at NIH, that the common data elements that need to be curated to improve therapeutic development and quality of life for many people with rare diseases are a relatively complex blend of structured and unstructured data.


The GRDR Common Data Elements table includes contact information, socio-demographic information, diagnosis, family history, birth and reproductive history, anthropometric information, patient-reported outcomes, medications/devices/health services, clinical research and biospecimens, and communication preferences.


Now to some sizing of the data and compute needs, to scale the problem appropriately from a clinical perspective. Current sequencing from Illumina HiSeq X systems samples at 30x coverage. That is roughly 46 thousand files generated in a three-day sequencing run, adding up to 1.3 terabytes (TB) of data. This data is converted to the variant calls referred to by Dr. Green earlier in the article, and the analysis up to the point of generating variant call files accumulates an additional 0.5 TB of data per human genome. For clinicians and physicians to identify stratified subpopulation segments with specific variants, it is often necessary to sequence complex targeted regions at much higher sampling rates and with longer read lengths than current 30x sampling provides. This will undoubtedly exacerbate an already significant challenge.
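
To make those figures concrete, here is a minimal back-of-the-envelope sketch in Python. The per-genome numbers (1.3 TB of raw sequencing output plus roughly 0.5 TB of analysis data) come straight from the paragraph above; the yearly genome count is a purely hypothetical input.

    # Rough storage estimate for a whole-genome sequencing program.
    # Per-genome figures come from the text above (30x Illumina HiSeq X runs);
    # the yearly genome count is a hypothetical example input.
    RAW_TB_PER_GENOME = 1.3        # FASTQ output of a three-day 30x sequencing run
    ANALYSIS_TB_PER_GENOME = 0.5   # additional alignment/variant data per genome

    def storage_needed_tb(genomes_per_year: int, years: int = 1) -> float:
        """Total raw + analysis storage, in terabytes, before compression or tiering."""
        per_genome = RAW_TB_PER_GENOME + ANALYSIS_TB_PER_GENOME
        return per_genome * genomes_per_year * years

    if __name__ == "__main__":
        # Example: a clinic sequencing 2,000 genomes per year for 3 years.
        print(f"{storage_needed_tb(2_000, years=3):,.0f} TB")   # ~10,800 TB, about 10.8 PB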


So how do Intel’s solutions fit in?


Intel Genomics Solutions, together with the Intel Cluster Ready program, provide much-needed sizing guidance to help clinicians and their IT data centers deliver personalized medicine efficiently and scale with growing needs.


Broadly, the compute need is to handle the volume of genomics data in near real time to generate alignment mapping files. These files contain the full sequence information along with quality and position data, and result from a largely single-threaded process of converting FASTQ files into alignments. The alignment files are generated as text and then converted to a compressed binary format known as BAM (binary alignment map). The differences between the reference genome and the aligned sample file (BAM) are what a variant call file (VCF) contains. Variants come in many forms, although the most common is a difference of a single base, or nucleotide, at a corresponding position, known as a single nucleotide polymorphism (SNP). The process of research and diagnostics involves generating and visualizing BAM files, SNPs, and entire VCF files.
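
For readers who want to see the shape of that workflow, here is a minimal sketch of a conventional FASTQ-to-VCF pipeline driven from Python. The specific tools (bwa, samtools, bcftools), thread counts, and file names are illustrative assumptions, not a prescription from Intel Genomics Solutions.

    # Illustrative secondary-analysis pipeline: FASTQ -> sorted BAM -> VCF.
    # Tool choices, flags, and file paths are assumptions for this sketch only.
    import subprocess

    REF = "reference/GRCh38.fa"                          # hypothetical reference genome
    R1, R2 = "sample_R1.fastq.gz", "sample_R2.fastq.gz"  # hypothetical paired-end reads

    def run(cmd: str) -> None:
        print(f"+ {cmd}")
        subprocess.run(cmd, shell=True, check=True)

    # 1. Align reads, then sort the alignments into a compressed BAM file and index it.
    run(f"bwa mem -t 16 {REF} {R1} {R2} | samtools sort -@ 8 -o sample.bam -")
    run("samtools index sample.bam")

    # 2. Call variants: the resulting VCF records only where the sample differs
    #    from the reference, e.g. single nucleotide polymorphisms (SNPs).
    run(f"bcftools mpileup -f {REF} sample.bam | bcftools call -mv -Oz -o sample.vcf.gz")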


Given the low penetrance of incidental findings across a large number of diseases, the final step to impacting patient outcomes, bringing in unstructured data and metadata, requires parallel file systems such as Lustre and object storage technologies that can scale out and support personalized medicine use cases.

More details on how Intel Genomics Solutions aid this scale-out to directly impact personalized medicine in a clinical environment will follow in a future blog!

 

Follow the Intel HPC conversation and information on Twitter! @cjanes85

Find out Intel’s role in Health and Life Sciences here.

Learn more about Intel in HPC at intel.com/go/hpc

Learn more about Intel’s boards and systems products at http://www.intelserveredge.com/

Intel Health & Life Sciences

The Data Stack

IT Peer Network

Read more >

Enabling the Data Center of the Future

When I attended the International Data Corporation (IDC) Directions 2015 conference in Boston last month, one theme kept coming up: data center transformation. Presenters and conference-goers alike were talking about moving to cloud-based data centers that enable flexibility, scalability, and fast time to deployment.

 

The popularity of the topic didn’t surprise me at all. Right now, enterprises of all sizes—and in all industries—are re-envisioning their data centers for fast, agile, and efficient delivery of services, which is what “cloud” is all about.

 

I had the opportunity to speak on a panel at HPC on Wall Street several weeks ago on the topic of “Cloud and the New Trading Landscape” to outline Intel’s vision for this evolution of the datacenter.

 

The Cloud: Leading the Shift to the Digital Bank

 

As I mentioned in another blog post, the cloud is fast becoming an enabler for digital transformation in financial services. That’s because cloud-based technologies give banks and other financial institutions a way to rapidly deploy new services and new ways to interact with customers.

 

However, cloud is not a “pure” technology and one size doesn’t fit all. Each workload needs to be considered for performance and security. The primary adoption barriers for cloud are concerns around security and data governance, performance, and a lack of in-house expertise and skills to support the migration.

 

Intel is investing in technology to enable this new cloud-based data center paradigm, which fosters innovation and allows financial services organizations to improve operational efficiency, enhance customer engagement, and support the growing requirements for compliance and risk management.

 

Software-Defined Infrastructure

 


At Intel, the strategy for re-envisioning the data center is software-defined infrastructure (SDI), and it provides a foundation for pervasive analytics and insight, allowing organizations to extract value from data.

 

The underpinning of SDI is workload-optimized silicon, which is applying Moore’s Law to the datacenter. The modern financial services datacenter must support many diverse workloads. Keeping up with the evolving needs of financial services requires data centers that are flexible and responsive and not bound by legacy approaches to how compute, storage and networks are designed. Intel is enabling dynamic resource pooling by working with industry leaders to bring new standards-based approaches to market to make infrastructure more responsive to user needs. This enables servers, networking, and storage to move from fixed functions to flexible, agile solutions that are virtualized and software defined. These pooled resources can be automatically provisioned to improve utilization, quickly deliver new services, and reduce costs.

 

Intelligent resource orchestration is required to manage and provision the data center of the future. Intel is working with software providers including VMware, Microsoft, and the OpenStack community on solutions that allow users to manage and optimize workloads for performance and security. The data center of the future will have intelligent resource orchestration that monitors the telemetry of the system, makes decisions based on this data to comply with established policies, automatically acts to optimize performance, and learns through machine learning for continuous improvement.

 

This journey to a software-defined infrastructure will lead to pervasive analytics and insights that will give financial services end users the ability to unlock their data. A flexible, scalable software-defined infrastructure is key to harnessing and extracting value from the ever-increasing data across an enterprise.

 

The new paradigm of cloud (whether public, private, or hybrid) is a re-envisioning of the datacenter where systems will be workload-optimized, infrastructure will be software-defined, and analytics will be pervasive. Three closing thoughts on cloud: 1) cloud is not a pure technology (one size doesn’t fit all), 2) cloud enables innovation, and 3) cloud is inevitable.

 

Finally, let me end this blog by saying that I will be taking a break for a couple of months. Intel is a great company with the tremendous benefit of a sabbatical, and starting in early May I will be taking my second sabbatical since joining Intel.

 

I hope to return from my time away with some fresh insights.

 

To view more posts within the series, click here: Tech & Finance Series

Read more >

Creating Confidence in the Cloud

In every industry, we continue to see a transition to the cloud. It’s easy to see why: the cloud gives companies a way to deliver their services quickly and efficiently, in a very agile and cost-effective way.

 

Financial services is a good example — where the cloud is powering digital transformation. We’re seeing more and more financial enterprises moving their infrastructure, platforms, and software to the cloud to quickly deploy new services and new ways of interacting with customers.

 

But what about security? In financial services, where security breaches are a constant threat, organizations must focus on security and data protection above all other cloud requirements.

 

This is an area Intel is highly committed to, and we offer solutions and capabilities designed to help customers maintain data security, privacy, and governance, regardless of whether they’re utilizing public, private, or hybrid clouds.

 

Here’s a brief overview of specific Intel® solutions that help enhance security in cloud environments in three critical areas:

  • Enhancing data protection efficiency. Intel® AES-NI is a set of processor instructions that accelerate encryption based on the widely used Advanced Encryption Standard (AES) algorithm. These instructions enable fast and secure data encryption and decryption, removing the performance barrier to more extensive use of this vital data protection mechanism. With the performance penalty reduced, cloud providers are starting to embrace AES-NI to promote the use of encryption (see the short sketch after this list).
  • Enhancing data protection strength. Intel® Data Protection Technology with AES-NI and Secure Key provides a foundation for strong cryptography without sacrificing performance. These solutions can enable faster, higher-quality cryptographic keys and certificates than pseudo-random, software-based approaches, in a manner better suited to shared, virtual environments.
  • Protecting the systems used in the cloud or compute infrastructure. Intel® Trusted Execution Technology (Intel® TXT) is a set of hardware extensions to Intel® processors and chipsets with security capabilities such as measured launch and protected execution. Intel TXT provides a hardware-enforced, tamper-resistant mechanism to evaluate critical, low-level system firmware and OS/hypervisor components from power-on. With this, malicious or inadvertent code changes can be detected, helping assure the integrity of the underlying machine that your data resides on. And at the end of the day, if the platform can’t be proven secure, the data on it can’t really be considered secure.
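
As a concrete illustration of the first bullet, here is a minimal Python sketch of AES-256-GCM encryption using the widely available cryptography package. On AES-NI capable Intel® processors the underlying OpenSSL primitives typically use the hardware instructions automatically, so application code does not change; the key handling shown here is deliberately simplified and not a production pattern.

    # Minimal AES-256-GCM example (Python "cryptography" package).
    # On AES-NI capable processors the underlying library normally uses the
    # hardware instructions transparently; key management here is simplified.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # in practice, protect keys in a KMS/HSM
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                      # 96-bit nonce, unique per message
    plaintext = b"customer account record"
    associated_data = b"record-id:12345"        # authenticated but not encrypted

    ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)
    recovered = aesgcm.decrypt(nonce, ciphertext, associated_data)
    assert recovered == plaintext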

 

Financial services customers worldwide are using the solutions to provide added security at both the platform and data level in public, private, and hybrid cloud deployments.

 

Putting It into Practice with our Partners

 

At Intel®, we are actively engaged with our global partners to put these security-focused solutions into practice. One of the more high-profile examples is our work with IBM. IBM is using Intel TXT to deliver a secure, compliant, and trusted global cloud for SoftLayer, its managed hosting and cloud computing provider. When IBM SoftLayer customers order cloud services on the IBM website, Intel TXT creates an extra layer of trust and control at the platform level. We are also working with IBM to offer Intel TXT-enhanced secure processing solutions including VMware/Hytrust, SAP, and the IBM Cloud OpenStack Services.

 

In addition, Amazon Web Services (AWS), a major player in financial services, uses Intel AES-NI for additional protection on its Elastic Compute Cloud (EC2) web service instances. Using this technology, AWS can speed up encryption and avoid software-based vulnerabilities because the encryption and decryption instructions are executed so efficiently in hardware.

 

End-to-End Security

 

Intel security technologies are not only meant to help customers in the cloud. They are designed to work as end-to-end solutions that offer protection — from the client to the cloud. In my previous blog, for example, I talked about Intel® Identity Protection Technology (Intel® IPT), a hardware-based identity technology that embeds identity management directly into the customer’s device. Intel IPT can offer customers critical authentication capabilities that can be integrated as part of a comprehensive security solution.

 

It’s exciting to see how our technologies are helping financial services customers increase confidence that their cloud environments and devices are secure. In my next blog, I’ll talk about another important Intel® initiative: data center transformation. Intel® is helping customers transform their data centers through software-defined infrastructures, which are changing the way enterprises think about defining, building, and managing their data centers.

 

 

Mike Blalock

Global Sales Director

Financial Services Industry, Intel

 

This is the seventh installment of the Tech & Finance blog series.

 

To view more posts within the series, click here: Tech & Finance Series

Read more >

The Big Challenges We Face in Genomics Today: A European Perspective

Recently I’ve travelled to Oxford in the UK, Athens in Greece, and Antalya in Turkey for a series of roundtables on the subject of genomics. While there were different audiences across the three events, the themes discussed had a lot in common and I’d like to share some of these with you in this blog.

 

The event in Oxford, GenofutureUK15, was a roundtable hosted by the Life Sciences team here at Intel and brought academics from a range of European research institutions together to discuss the future of genomics. I’m happy to say that the future is looking very bright indeed, as we heard many examples of fantastic research currently being undertaken.

 

Speeding up Sequencing

What really resonated through all of the events, though, was that the technical challenges we’re facing in genomics are not insurmountable. On the contrary, we’re making great progress when it comes to decreasing the time taken to sequence genomes. As just one example, I’d highly recommend looking at this work from our partners at Dell – using Intel® Xeon® processors it has been possible to improve the efficiency and speed of paediatric cancer treatments.

 

In contrast to the technical aspects of genomics, the real challenges seem to be coming from what we call ‘bench to bedside’, i.e. how does the research translate to the patient? Mainstreaming issues around information governance, jurisdiction, intellectual property, data federation, and workflow were all identified as key areas currently challenging process and progress.

 

From Bench to Bedside

As somebody who spends a portion of my time each week working in a GP surgery, I want to be able to utilise some of the fantastic research outcomes to help deliver better healthcare to my patients. We need to move on from focusing on pockets of research and identify the low-hanging fruit to help us tackle chronic conditions, and we need to do this quickly.

 

Views were put forward on the implications of genomics’ transition from research to clinical use, and much of this centred on data storage and governance. There are clear privacy and security issues, but they are ones for which technology already has many of the solutions.

 

Training of frontline staff to be able to understand and make use of the advances in genomics was a big talking point. It was pleasing to hear that clinicians in Germany would like more time to work with researchers and that this was something being actively addressed. The UK and France are also making strides to ensure that this training becomes embedded in the education of future hospital staff.

 

Microbiomics

Finally, the burgeoning area of microbiomics came to the fore at the three events. You may have spotted quite a lot of coverage in the news around faecal microbiota transplantation to help treat Clostridium difficile. Microbiomics throws up another considerable challenge, as the collective genomes of the human microbiota contain some 8 million protein-coding genes, 360 times as many as in the human genome. That’s a ‘very’ Big Data challenge, but one we are looking forward to meeting head-on at Intel.

 

Leave your thoughts below on where you think the big challenges are around genomics. How is technology helping you to overcome the challenges you face in your research? And, looking to the future, what do you need to help you perform ground-breaking research?

 

Thanks to the participants, contributors, and organisers at Intel’s GenoFutureUK15 in Oxford, UK, the roundtable in Athens, Greece, and the HIMSS Turkey Educational Conference in Antalya, Turkey.

 

Read more >

The Johnny-Five Framework Gets A New Website, Add SparkFun Support

The Johnny-Five robotics framework has made a big leap forward, migrating its primary point of presence away from creator Rick Waldron’s personal GitHub account to a brand new website: Johnny-Five.io. The new website features enhanced documentation, sample code and links … Read more >

The post The Johnny-Five Framework Gets A New Website, Add SparkFun Support appeared first on Intel Software and Services.

Read more >

World’s first 32 Node All Flash Virtual SAN with NVMe


If you’ve wondered how many virtual machines (VMs) you can deploy in a single rack, or how you can scale VMs across an entire enterprise, then you may be interested to know what Intel and VMware are doing. While not all enterprises operate at the same scale, there’s no doubt that the technology around hyper-converged storage is changing. What you may not realize, though, is that servers and storage are being used in new ways that affect how solutions scale to the needs of medium and large enterprises.


Virtualization and the Storage Bottleneck

Many enterprises have turned to virtualized applications as a way to cost-effectively deploy services to end users; delivering email, managing databases, and performing analytics on big data sets are just some examples. Using virtualization software, such as that from VMware, enterprises can lower IT cost of ownership by enabling increased virtual machine scalability and optimizing platform utilization. But as with any technology or operational change, there are often implementation and scaling challenges. In the case of virtualized environments, storage bottlenecks can cause performance problems, resulting in poor scaling and inefficiencies.

 

All Flash Team Effort

The bottleneck challenge involves scaling the adoption of virtual machines and their infrastructure, all while providing good user performance. Such problems are faced not just by larger enterprise IT shops, but by small and medium businesses as well. Intel and VMware teamed up to deliver a robust, scalable All Flash Virtual SAN architecture.

 

Using a combination of the latest Intel® Xeon® processors, Intel® Solid State Drives (SSDs), and VMware Virtual SAN, SMB to large enterprise customers are now able to roll out an All Flash Virtual SAN solution that not only provides a scalable infrastructure, but also blazing performance and cost efficiency.

 

Technical Details – Learn More

The world’s first 32-node All Flash Virtual SAN using the latest NVMe technology will be displayed and discussed in depth during EMC World in Las Vegas, May 4-7. The all-flash Virtual SAN is built with 64 Intel® Xeon® E5-2699 v3 processors, with Intel® SSD DC P3700 Series NVMe cache flash drives fronting 128 of Intel’s follow-on 1.6 TB data center SSDs. Offering over 50 terabytes of cache and 200 terabytes of data storage, it produces an impressive 1.5 million IOPS. This design will surely impress the curious IT professional.
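
As a quick sanity check of those headline numbers, the short calculation below recomputes the per-node breakdown. The even split of data SSDs and cache across the 32 nodes is an inference from the totals quoted above, not a published bill of materials.

    # Back-of-the-envelope check of the 32-node all-flash Virtual SAN figures above.
    # The per-node layout is an assumption for illustration only.
    NODES = 32
    DATA_SSDS = 128                 # "follow-on" 1.6 TB data center SSDs
    DATA_SSD_TB = 1.6
    CACHE_TB_TOTAL = 50             # "over 50 terabytes of cache" (from the text)
    IOPS_TOTAL = 1_500_000          # "1.5 million IOPS" (from the text)

    data_tb = DATA_SSDS * DATA_SSD_TB
    print(f"Data tier:  {data_tb:.1f} TB total "
          f"({DATA_SSDS // NODES} x {DATA_SSD_TB} TB SSDs per node)")
    print(f"Cache tier: ~{CACHE_TB_TOTAL / NODES:.1f} TB of NVMe cache per node")
    print(f"Throughput: ~{IOPS_TOTAL // NODES:,} IOPS per node")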

 

Chuck Brown, Ken LeTourneau, and John Hubbard of Intel will join VMware experts to showcase this impressive 32-node All Flash Virtual SAN on Tuesday, May 5 and Wednesday, May 6, from 11:30 a.m. to 5:30 p.m. each day in the Solutions Expo, VMware Booth #331. Be sure to stop by, speak to the experts, and learn how to design enterprise-scale, all-flash Virtual SAN storage.

Read more >