Recent Blog Posts

Enabling the Data Center of the Future

When I attended the International Data Corporation (IDC) Directions 2015 conference in Boston last month, one theme kept coming up: data center transformation. Presenters and conference-goers alike were talking about moving to cloud-based data centers that enable flexibility, scalability, and fast time to deployment.


The popularity of the topic didn’t surprise me at all. Right now, enterprises of all sizes—and in all industries—are re-envisioning their data centers for fast, agile, and efficient delivery of services, which is what “cloud” is all about.


I had the opportunity to speak on a panel at HPC on Wall Street several weeks ago on the topic of “Cloud and the New Trading Landscape,” outlining Intel’s vision for this evolution of the data center.


The Cloud: Leading the Shift to the Digital Bank


As I mentioned in another blog post, the cloud is fast becoming an enabler for digital transformation in financial services. That’s because cloud-based technologies give banks and other financial institutions a way to rapidly deploy new services and new ways to interact with customers.


However, cloud is not a “pure” technology and one size doesn’t fit all. Each workload needs to be considered for performance and security. The primary adoption barriers for cloud are concerns around security and data governance, performance, and a lack of in-house expertise and skills to support the migration.


Intel is investing in technology to enable this new cloud-based data center paradigm, which fosters innovation and allows financial services organizations to improve operational efficiency, enhance customer engagement, and support growing compliance and risk management requirements.


Software-Defined Infrastructure



At Intel, the strategy for re-envisioning the data center is software-defined infrastructure (SDI), and it provides a foundation for pervasive analytics and insight, allowing organizations to extract value from data.


The underpinning of SDI is workload-optimized silicon, which applies Moore’s Law to the data center. The modern financial services data center must support many diverse workloads. Keeping up with the evolving needs of financial services requires data centers that are flexible and responsive, not bound by legacy approaches to how compute, storage, and networks are designed. Intel is enabling dynamic resource pooling by working with industry leaders to bring new standards-based approaches to market that make infrastructure more responsive to user needs. Servers, networking, and storage can then move from fixed functions to flexible, agile solutions that are virtualized and software-defined. These pooled resources can be automatically provisioned to improve utilization, quickly deliver new services, and reduce costs.


Intelligent resource orchestration is required to manage and provision the data center of the future. Intel is working with software providers including VMware, Microsoft, and the OpenStack community on solutions that allow users to manage and optimize workloads for performance and security. The data center of the future will have intelligent resource orchestration that monitors system telemetry, makes decisions based on that data to comply with established policies, automatically acts to optimize performance, and improves continuously through machine learning.
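The monitor, decide, and act loop described here can be sketched in a few lines of Python. Everything below (node names, telemetry fields, policy thresholds, action names) is a hypothetical illustration of the control-loop shape, not a real Intel, VMware, or OpenStack API:

```python
# A minimal sketch of a telemetry-driven orchestration loop: monitor,
# decide against policy, then act. All names, fields, and thresholds
# are illustrative, not drawn from any real orchestration product.

POLICY = {"cpu_util_max": 0.80, "latency_ms_max": 5.0}

def decide(telemetry):
    """Return the actions needed to bring one node back within policy."""
    actions = []
    if telemetry["cpu_util"] > POLICY["cpu_util_max"]:
        actions.append("migrate-vm")        # shed load to another node
    if telemetry["latency_ms"] > POLICY["latency_ms_max"]:
        actions.append("add-ssd-cache")     # provision a faster storage tier
    return actions

# One monitoring pass over a made-up resource pool.
pool = {
    "node-1": {"cpu_util": 0.91, "latency_ms": 2.1},
    "node-2": {"cpu_util": 0.45, "latency_ms": 7.3},
    "node-3": {"cpu_util": 0.50, "latency_ms": 1.0},
}
plan = {}
for node, telemetry in pool.items():
    actions = decide(telemetry)
    if actions:
        plan[node] = actions
print(plan)  # {'node-1': ['migrate-vm'], 'node-2': ['add-ssd-cache']}
```

The machine-learning piece would close the loop by tuning the policy thresholds based on the outcomes of past actions.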


This journey to a software-defined infrastructure will lead to pervasive analytics and insights that will give financial services end users the ability to unlock their data. A flexible, scalable software-defined infrastructure is key to harnessing and extracting value from the ever-increasing data across an enterprise.


The new paradigm of cloud (whether public, private, or hybrid) is a re-envisioning of the data center where systems will be workload-optimized, infrastructure will be software-defined, and analytics will be pervasive. Three closing thoughts on cloud: 1) cloud is not a pure technology (one size doesn’t fit all), 2) cloud enables innovation, and 3) cloud is inevitable.


Finally, let me end this blog by saying that I will be taking a break for a couple of months. Intel is a great company with the tremendous benefit of a sabbatical, and starting in early May I will be taking my second sabbatical since joining Intel.


I hope to return from my time away with some fresh insights.


To view more posts within the series, click here: Tech & Finance Series


Creating Confidence in the Cloud

In every industry, we continue to see a transition to the cloud. It’s easy to see why: the cloud gives companies a way to deliver their services quickly and efficiently, in a very agile and cost-effective way.


Financial services is a good example of an industry where the cloud is powering digital transformation. We’re seeing more and more financial enterprises moving their infrastructure, platforms, and software to the cloud to quickly deploy new services and new ways of interacting with customers.


But what about security? In financial services, where security breaches are a constant threat, organizations must focus on security and data protection above all other cloud requirements.


This is an area Intel is highly committed to, and we offer solutions and capabilities designed to help customers maintain data security, privacy, and governance, regardless of whether they’re utilizing public, private, or hybrid clouds.


Here’s a brief overview of specific Intel® solutions that help enhance security in cloud environments in three critical areas:

  • Enhancing data protection efficiency. Intel® AES-NI is a set of processor instructions that accelerate encryption based on the widely used Advanced Encryption Standard (AES) algorithm. These instructions enable fast and secure data encryption and decryption, removing the performance barrier to more extensive use of this vital data protection mechanism. With the performance penalty reduced, cloud providers are starting to embrace AES-NI to promote the use of encryption.
  • Enhancing data protection strength. Intel® Data Protection Technology with AES-NI and Secure Key provides a foundation for strong cryptography without sacrificing performance. These solutions can generate faster, higher-quality cryptographic keys and certificates than pseudo-random, software-based approaches, in a manner better suited to shared, virtual environments.
  • Protecting the systems used in the cloud or compute infrastructure. Intel® Trusted Execution Technology (Intel® TXT) is a set of hardware extensions to Intel® processors and chipsets with security capabilities such as measured launch and protected execution. Intel TXT provides a hardware-enforced, tamper-resistant mechanism to evaluate critical, low-level system firmware and OS/hypervisor components from power-on. With this, malicious or inadvertent code changes can be detected, helping assure the integrity of the underlying machine your data resides on. At the end of the day, if the platform can’t be proven secure, the data on it can’t really be considered secure either.
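As a rough illustration of the key-quality point behind Secure Key, the Python below contrasts a deterministic pseudo-random generator with the operating system’s entropy pool, which hardware sources such as RDRAND can help seed. This is an analogy for the concept, not code that exercises Intel hardware directly:

```python
import random
import secrets

# A deterministic PRNG (Mersenne Twister): the same seed reproduces the
# same "key" every time -- fine for simulations, dangerous for crypto.
rng = random.Random(42)
weak_key = rng.randbytes(32)
assert random.Random(42).randbytes(32) == weak_key  # fully reproducible

# secrets draws on the OS entropy pool (os.urandom), which hardware RNGs
# can help seed -- the right tool for generating cryptographic keys.
strong_key = secrets.token_bytes(32)
assert len(strong_key) == 32
```

An attacker who recovers (or guesses) the PRNG seed recovers every key derived from it, which is why hardware-assisted entropy matters in shared, virtualized environments.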


Financial services customers worldwide are using these solutions to provide added security at both the platform and data level in public, private, and hybrid cloud deployments.


Putting It into Practice with our Partners


At Intel, we are actively engaged with our global partners to put these security-focused solutions into practice. One of the more high-profile examples is our work with IBM. IBM is using Intel TXT to deliver a secure, compliant, and trusted global cloud for SoftLayer, its managed hosting and cloud computing provider. When IBM SoftLayer customers order cloud services on the IBM website, Intel TXT creates an extra layer of trust and control at the platform level. We are also working with IBM to offer Intel TXT-enhanced secure processing solutions including VMware/HyTrust, SAP, and the IBM Cloud OpenStack Services.


In addition, Amazon Web Services (AWS), a major player in financial services, uses Intel AES-NI for additional protection on its Elastic Compute Cloud (EC2) web service instances. Using this technology, AWS can speed up encryption and avoid software-based vulnerabilities because the solution’s encryption and decryption instructions are executed so efficiently in hardware.


End-to-End Security


Intel security technologies are not only meant to help customers in the cloud. They are designed to work as end-to-end solutions that offer protection — from the client to the cloud. In my previous blog, for example, I talked about Intel® Identity Protection Technology (Intel® IPT), a hardware-based identity technology that embeds identity management directly into the customer’s device. Intel IPT can offer customers critical authentication capabilities that can be integrated as part of a comprehensive security solution.


It’s exciting to see how our technologies are helping financial services customers increase confidence that their cloud environments and devices are secure. In my next blog, I’ll talk about another important Intel initiative: data center transformation. Intel is helping customers transform their data centers through software-defined infrastructure, which is changing the way enterprises think about defining, building, and managing their data centers.



Mike Blalock

Global Sales Director

Financial Services Industry, Intel


This is the seventh installment of the Tech & Finance blog series.


To view more posts within the series, click here: Tech & Finance Series


The Big Challenges We Face in Genomics Today: A European Perspective

Recently I’ve travelled to Oxford in the UK, Athens in Greece, and Antalya in Turkey for a series of roundtables on the subject of genomics. While the audiences differed across the three events, the themes discussed had a lot in common, and I’d like to share some of them with you in this blog.


The event in Oxford, GenoFutureUK15, was a roundtable hosted by the Life Sciences team here at Intel that brought together academics from a range of European research institutions to discuss the future of genomics. I’m happy to say that the future is looking very bright indeed, as we heard many examples of fantastic research currently being undertaken.


Speeding up Sequencing

What really resonated through all of the events, though, was that the technical challenges we’re facing in genomics are not insurmountable. On the contrary, we’re making great progress in reducing the time taken to sequence genomes. As just one example, I’d highly recommend looking at this example from our partners at Dell: using Intel® Xeon® processors, it has been possible to improve the efficiency and speed of paediatric cancer treatments.


In contrast to the technical aspects of genomics, the real challenges seem to be coming from what we call ‘bench to bedside’, i.e. how does the research translate to the patient? Mainstreaming issues around information governance, jurisdiction, intellectual property, data federation and workflow were all identified as key areas currently challenging both process and progress.


From Bench to Bedside

As somebody who spends a portion of my time each week working in a GP surgery, I want to be able to utilise some of the fantastic research outcomes to help deliver better healthcare to my patients. We need to move on from focusing on pockets of research and identify the low-hanging fruit to help us tackle chronic conditions, and we need to do this quickly.


Views were put forward around the implications of genomics transition from research to clinical use and much of this was around data storage and governance. There are clear privacy and security issues but ones for which technology already has many of the solutions.


Training of frontline staff to be able to understand and make use of the advances in genomics was a big talking point. It was pleasing to hear that clinicians in Germany would like more time to work with researchers and that this was something being actively addressed. The UK and France are also making strides to ensure that this training becomes embedded in the education of future hospital staff.



Finally, the burgeoning area of microbiomics came to the fore at all three events. You may have spotted quite a lot of coverage in the news around faecal microbiota transplantation to help treat Clostridium difficile. Microbiomics throws up another considerable challenge, as the collective genomes of the human microbiota contain some 8 million protein-coding genes, 360 times as many as the human genome. That’s a ‘very’ Big Data challenge, but one we are looking forward to meeting head-on at Intel.


Leave your thoughts below on where you think the big challenges are around genomics. How is technology helping you to overcome the challenges you face in your research? And what do you need looking to the future to help you perform ground-breaking research?


Thanks to the participants, contributors and organisers at Intel’s GenoFutureUK15 roundtable in Oxford, UK, the roundtable in Athens, Greece, and the HIMSS Turkey Educational Conference in Antalya, Turkey.



The Johnny-Five Framework Gets A New Website, Adds SparkFun Support

The Johnny-Five robotics framework has made a big leap forward, migrating its primary point of presence away from creator Rick Waldron’s personal GitHub account to a brand new website. The new website features enhanced documentation, sample code and links …

The post The Johnny-Five Framework Gets A New Website, Adds SparkFun Support appeared first on Intel Software and Services.


World’s first 32 Node All Flash Virtual SAN with NVMe


If you’ve wondered how many virtual machines (VMs) you can deploy in a single rack, or how you can scale VMs across an entire enterprise, then you may be interested in what Intel and VMware are doing. While not all enterprises operate at the same scale, there’s no doubt that the technology around hyper-converged storage is changing. What you may not realize, though, is that changes in the way servers and storage are used now shape how infrastructure scales to the needs of medium and large enterprises.

Virtualization and the Storage Bottleneck

Many enterprises have turned to virtualized applications as a way to cost-effectively deploy services to end users; delivering email, managing databases, and running analytics on big data sets are just some examples. Using virtualization software, such as that from VMware, enterprises can lower IT cost of ownership by enabling increased virtual machine scalability and optimizing platform utilization. But as with any technology or operational change, there are often implementation and scaling challenges. In the case of virtualized environments, storage bottlenecks can cause performance problems, resulting in poor scaling and inefficiencies.


All Flash Team Effort

The bottleneck challenge involves scaling the adoption of virtual machines and their underlying infrastructure, all while providing good user performance. Such problems are faced not just by large enterprise IT shops, but by small and medium businesses as well. Intel and VMware teamed up to deliver a robust, scalable All Flash Virtual SAN architecture.


Using a combination of the latest Intel® Xeon® processors, Intel® Solid State Drives (SSDs), and VMware Virtual SAN, SMB to large enterprise customers are now able to roll out an All Flash Virtual SAN solution that not only provides a scalable infrastructure, but also blazing performance and cost efficiency.


Technical Details – Learn More

The world’s first 32-node All Flash Virtual SAN using the latest NVMe technology will be displayed and discussed in depth during EMC World in Las Vegas, May 4-7. The All Flash Virtual SAN is built from 64 Intel® Xeon® E5-2699 v3 processors, each paired with an Intel® SSD DC P3700 Series NVMe cache flash drive, fronting 128 of Intel’s follow-on 1.6TB data center SSDs. Offering over 50 terabytes of cache and 200 terabytes of data storage, it produces an impressive 1.5 million IOPS. This design will surely impress the curious IT professional.
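Some quick arithmetic puts those headline totals in per-node terms. The per-node split and the per-drive cache size are my inferences from the stated totals, not specifications from the announcement:

```python
# Cluster totals as stated, plus inferred per-node breakdown.
nodes = 32
cpus = 64                    # Xeon E5-2699 v3 processors (dual-socket nodes)
cache_drives = 64            # one P3700 cache drive per processor
cache_drive_tb = 0.8         # inferred: 64 x 0.8 TB = 51.2 TB ("over 50 TB")
data_ssds = 128
data_ssd_tb = 1.6
total_iops = 1_500_000

sockets_per_node = cpus // nodes          # 2 sockets per node
data_ssds_per_node = data_ssds // nodes   # 4 data drives per node
cache_tb = cache_drives * cache_drive_tb  # ~51.2 TB of NVMe cache
raw_data_tb = data_ssds * data_ssd_tb     # ~204.8 TB raw data capacity
iops_per_node = total_iops // nodes       # 46,875 IOPS per node
```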


Chuck Brown, Ken LeTourneau, and John Hubbard of Intel will join VMware experts to showcase this impressive 32-node All Flash Virtual SAN on Tuesday, May 5 and Wednesday, May 6, from 11:30 a.m. to 5:30 p.m. each day in the Solutions Expo, VMware Booth #331. Be sure to stop by to speak with the experts and learn how to design enterprise-scale, all-flash Virtual SAN storage.


The new scale of compute demands a new scale of analytics

With the proliferation of popular software-as-a-service (SaaS) offerings, the scale of compute has changed dramatically. The boundaries of enterprise IT now extend far beyond the walls of the corporate data center.


You might even say those boundaries are disappearing altogether. Where we once had strictly on-premises IT, we now have a highly customized and complex IT ecosystem that blurs the lines between the data center and the outside world.


When your business units are taking advantage of cloud-based applications, you probably don’t know where your data is, what systems are running the workloads, or what sort of security is in place. You might not even have a view of the delivered application performance, or whether it meets your service-level requirements.


This lack of visibility, transparency, and control is at once unsustainable and unacceptable. And this is where IT analytics enters the picture—on a massive scale.


To make a successful transition to the cloud, in a manner that keeps up with the evolving threat landscape, enterprise IT organizations need to leverage sophisticated data analytics platforms that can scale to hundreds of billions of events per day. That’s not a typo—we are talking about moving from analyzing tens of millions of IT events each day to analyzing hundreds of billions of events in the new enterprise IT ecosystem.
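To make “hundreds of billions of events per day” concrete, here is the sustained per-second ingest rate implied by the low end of that range:

```python
# Translate a daily event volume into a sustained per-second rate.
events_per_day = 100_000_000_000   # low end of "hundreds of billions"
seconds_per_day = 24 * 60 * 60     # 86,400

events_per_second = events_per_day // seconds_per_day
print(f"{events_per_second:,} events ingested per second, sustained")
```

That is roughly 1.16 million events arriving every second, around the clock, before any analysis has even begun.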


This isn’t just a vision; this is an inevitable change for the IT organization. To maintain control of data, to meet compliance and performance requirements, and to work proactively to defend the enterprise against security threats, we will need to gain actionable insight from an unfathomable amount of data. We’re talking about data stemming from event logs, network devices, servers, security and performance monitoring tools, and countless other sources.


Take the case of security. To defend the enterprise, IT organizations will need to collect and sift through voluminous amounts of two types of contextual information:


  • “In the moment” information on devices, networks, operating systems, applications, and locations where information is being accessed. The key here is to provide near-real-time, actionable information to policy decision and enforcement points (think of credit card companies’ fraud services).
  • “After the fact” information from event logs, raw security-related events, and netflow and packet data, along with other indicators of compromise that can be correlated with other observable and collectable information.
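A toy sketch of the “after the fact” correlation step: heterogeneous events are grouped by a shared indicator of compromise, and an indicator corroborated by multiple independent sources is escalated. All event data below is fabricated for illustration:

```python
from collections import defaultdict

# Fabricated events from different collectors, keyed by an indicator of
# compromise (here, source IPs from the documentation address ranges).
events = [
    {"source": "auth-log", "indicator": "203.0.113.9",  "detail": "50 failed logins"},
    {"source": "netflow",  "indicator": "203.0.113.9",  "detail": "large outbound transfer"},
    {"source": "ids",      "indicator": "198.51.100.4", "detail": "port scan"},
    {"source": "ids",      "indicator": "203.0.113.9",  "detail": "known C2 signature"},
]

by_indicator = defaultdict(set)
for event in events:
    by_indicator[event["indicator"]].add(event["source"])

# Corroboration across independent sources is a stronger signal than any
# single alert, so escalate indicators seen by two or more collectors.
suspicious = {ioc for ioc, sources in by_indicator.items() if len(sources) >= 2}
print(suspicious)  # {'203.0.113.9'}
```

At hundreds of billions of events per day, the same grouping step runs on a distributed analytics platform rather than an in-memory dictionary, but the correlation logic is the same.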


As we enter this brave new world for IT, it’s clear that we will need an analytics platform that will allow us to store and process data at an unprecedented scale. We will also need new algorithms and new approaches that will allow us to glean near real-time and historical insights from a constant flood of data.


In an upcoming post, I will look at some of the requirements for this new-era analytics platform. For now, let’s just say we’re gonna need a bigger boat.





Intel and the Intel logo are trademarks of Intel Corporation in the United States and other countries. * Other names and brands may be claimed as the property of others.


Data Center Disruption: Grounded in Moore’s Law

Yesterday we celebrated the 50th anniversary of Moore’s Law, the foundational model of computing innovation. While the past half-century of industry innovation based on the advancement of Moore’s Law is astounding, what’s exciting today is that we’re at the beginning of the next generation of information and communication technology architecture, enabling the move to the digital services economy. Nowhere are the opportunities more acute than in the data center.

This evening, at the Code/Enterprise Series in San Francisco, I had the pleasure of sharing Intel’s perspective on the disruptive force the data center transformation will have on businesses and societies alike. Like no time before, the data center stands at the heart of technology innovation connecting billions of people and devices across the globe and delivering services to completely transform businesses, industries, and people’s lives.

To accelerate this vision Intel is delivering a roadmap of products that enable the creation of a software-defined data center – a data center where the application defines the system. One area I’m particularly excited about is our work with the health care community to fundamentally change the experience of a cancer patient. Here, technology is used to speed up and scale the creation and application of precision medicine. 

Our goal? By 2020, a patient can have her cancerous cells analysed through genome sequencing, compared to countless other sequences through a federated, trusted cloud, and a precision treatment created… all in one day.

We are also expanding our business focus into new areas where our technology can accelerate business transformation, a clear example being the network. Our recent announcements with Ericsson and Huawei highlight deep technical collaborations that will help the telco industry deliver new services to their end users with greater network utilization through virtualization and new business models through the cloud. At the heart of this industry transformation is open, industry standard solutions running on Intel architecture.

Transforming health care and re-architecting the network are just two examples of Intel harnessing the power of Moore’s Law to transform businesses, industries, and the lives of us all. 


Moore’s Law: Still the Driver for Computer Development and Innovation

Wow, I can remember my first day at Intel in 1987 (when I was a very young engineer), when I was assigned a cubicle on the same floor as Gordon Moore. I learned about Moore’s Law, which was celebrating 20+ years at the time. How amazing that it still holds true!


Who would have thought that it would still hold 50 years on! Intel and Atos have teamed up to deliver Workplace Transformation solutions based on Moore’s Law. Over three billion people worldwide are using computing devices today, and this figure is consistently rising.


Computers are playing an increasingly significant role in our lives, and every year they’re getting faster and more powerful, addressing consumer needs at all levels. But what is driving these developments and how can we ensure we keep up with end-user demands?


The answer is Moore’s Law, and my good friend John Minnick has posted an insightful blog on the topic. Happy 50th, Moore’s Law!



Blog excerpt from Atos Ascent, republished with permission from author John Minnick:


Tick Tock – Fuelling the customer experience through processing power

by John Minnick


Over three billion people worldwide are using computing devices today, and this figure is consistently rising. Computers are playing an increasingly significant role in our lives, and every year they’re getting faster and more powerful, addressing consumer needs at all levels. But what is driving these developments and how can we ensure we keep up with end-user demands?

Increasing processing power

Moore’s Law states that the density of transistors in an integrated circuit, or (micro)chip, doubles roughly every two years (figure 1). In essence, a transistor is a tiny device that switches and amplifies electric currents, so you can imagine what happens when you put billions of them together. This is why we have the incredible compute capability and processing speed we see in microprocessors today.
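The compounding implied by that two-year doubling is easy to underestimate. This sketch treats the cadence as perfectly regular, which the real history only approximates:

```python
def density_multiplier(years, doubling_period_years=2.0):
    """Transistor-density growth factor under an idealized Moore's Law."""
    return 2 ** (years / doubling_period_years)

# Two years -> 2x, a decade -> 32x, and the 50 years since 1965
# compound to roughly 33.5 million times the original density.
print(density_multiplier(2), density_multiplier(10), density_multiplier(50))
```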


Using the ‘Tick Tock’ mantra to improve the end-user experience

Intel provides the microprocessors, and we’re working with them to ensure the development process consistently improves the end-user experience. My role in this involves running a series of tests, ranging from pure performance-based testing of the central processing unit (CPU) to looking at input/output, 3D graphics, memory, image filters and more, leading to an overall score for each system. Using a testing methodology that combines many factors is important: what good is processing power alone if people don’t need it? Because of the transistor density available today, we are able to have both: hugely powerful devices that live up to the expectations of increasingly digital consumers, whether servers, desktop workstations, laptops, tablets, smartphones or Ultrabooks.
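An “overall score” built from many sub-tests is commonly computed as a geometric mean, which keeps any single sub-benchmark from dominating the composite. The category names and scores below are made up for illustration; this is a generic technique, not Atos’s actual scoring formula:

```python
import math

def composite_score(sub_scores):
    """Geometric mean of per-category benchmark scores."""
    values = list(sub_scores.values())
    return math.prod(values) ** (1.0 / len(values))

# Hypothetical per-category results for one system under test.
system = {"cpu": 1200.0, "io": 800.0, "3d_graphics": 950.0, "memory": 1100.0}
overall = composite_score(system)  # roughly 1000.8
```

Compared with an arithmetic mean, doubling one sub-score here raises the composite by only about 19%, which is why geometric means are the convention in benchmark suites.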


Depending on personal requirements, some individuals will require more capability and processing power than others. In computing, there are three distinct types of requirement: the executive or highly mobile worker, the mainstream user, and the workstation user. The executive CXO needs access, mobility and security from their device, mainly to communicate: sending e-mails, monitoring dashboard metrics and accessing back-end data centers from anywhere in the world. At the other end, the engineering or analyst workstation user is typically more stationary, using much more of the compute power for applications and workloads. With recent developments in compute capabilities such as Ultrabooks, the lines are becoming blurred, and with the ever-increasing demand to use computers in mobile environments we will continue to see new platforms and new applications tested as they enter the market.


Fig. 2 – The three types of user devices: scores on performance, power and user experience

To ensure the developments being made are in line with customer feedback, Intel typically releases one to two new processors per year based on its ‘Tick Tock’ mantra. Here, the ‘Tock’ brings in new features and capabilities with a new microarchitecture, and the ‘Tick’ then moves that design to a smaller manufacturing process, improving its performance and efficiency. This Tick Tock mantra is used because it allows a product to be brought to market quickly, without having to wait for all the new features to be 100% tuned at optimal performance levels. By releasing regular updates, through rigorous testing and listening to user feedback, we ensure that the updates are focused on addressing actual customer demand, at a time when it is required.

Forging alliances to bolster expertise

Working closely with Intel gives my team insight into how processors have evolved over the years, and offers us an opportunity to ensure the products are being updated in line with customer requirements. We make recommendations about when it’s appropriate to move to the next processor, how it affects compute functions or even the business.

In the global economy, companies benefit from forging alliances to bolster their expertise. When this process allows you to get closer to the client experience, that’s when you know you’ve hit a winner.

Originally posted on April 16, 2015 by author John Minnick:


To continue the conversation, let’s connect on Twitter:

Rhett Livengood, Director of Enterprise Sales Solution Development – Intel
