ADVISOR DETAILS

RECENT BLOG POSTS

Tablets Improve Engagement, Workflows

 

Mobility is expected to be a hot topic once again at HIMSS 2015 in Chicago. Tablets like the Surface and Windows-based versions of electronic health records (EHRs) from companies such as Allscripts are helping clinicians provide better care and be more efficient with their daily workflows.

 

The above video shows how the Surface and Allscripts’ Wand application are helping one cardiologist improve patient engagement while allowing more appointments throughout the day.  You can read more in this blog.

 

Watch the video and let us know what questions you have. How are you leveraging mobile technology in your facility?

Read more >

OpenStack® Kilo Release is Shaping Up to Be a Milestone for Enhanced Platform Awareness

By: Adrian Hoban

 

The performance needs of virtualized applications in the telecom network are distinctly different from those in the cloud or in the data center.  These NFV applications are implemented on a slice of a virtual server and yet need to match the performance that is delivered by a discrete appliance where the application is tightly tuned to the platform.

 

The Enhanced Platform Awareness initiative that I am a part of is a continuous program to enable fine-tuning of the platform for virtualized network functions. This is done by exposing the processor and platform capabilities through the management and orchestration layers. When a virtual network function is instantiated by an Enhanced Platform Awareness enabled orchestrator, the application requirements can be more efficiently matched with the platform capabilities.

 

Enhanced Platform Awareness is composed of several open source technologies that the orchestration layers can treat as “tuning knobs,” adjusted to meaningfully improve a range of packet-processing and application performance parameters.

 

These technologies have been developed and standardized through a two-year collaborative effort in the open source community.  We have worked with the ETSI NFV Performance Portability Working Group to refine these concepts.

 

At the same time, we have been working with developers to integrate the code into OpenStack®. Some of the features are available in the OpenStack Juno release, but I anticipate a more complete implementation will be a part of the Kilo release that is due in late April 2015.

 

How Enhanced Platform Awareness Helps NFV to Scale

In cloud environments, virtual application performance can often be increased by scaling out, for example by increasing the number of VMs the application can use. However, for virtualized telecom networks, applying a scale-out strategy to improve network performance may not achieve the desired results.

 

Scaling out an NFV workload does not by itself guarantee improvement in all of the important traffic characteristics (such as latency and jitter), and these are essential to the predictable service and application performance that network operators require. Using Enhanced Platform Awareness, we aim to address both performance and predictability requirements using technologies such as the following (a brief configuration sketch follows the list):

 

  • Single Root I/O Virtualization (SR-IOV): SR-IOV divides a PCIe physical function into multiple virtual functions (VFs), each with its own bandwidth allocation. When a virtual machine is assigned its own VF, it gains a high-performance, low-latency data path to the NIC.
  • Non-Uniform Memory Architecture (NUMA): With a NUMA design, the memory allocation process for an application prioritizes the highest-performing memory, which is local to a processor core. With Enhanced Platform Awareness, OpenStack® will be able to configure VMs to use CPU cores from the same processor socket and to choose the optimal socket based on the locality of the NIC device that provides the data connectivity for the VM.
  • CPU Pinning: In CPU pinning, a process or thread is given an affinity to one or more cores. In a 1:1 pinning configuration between virtual CPUs and physical CPUs, some predictability is introduced into the system by preventing the host and guest schedulers from moving workloads around, which enables other efficiencies such as improved cache hit rates.
  • Huge Page support: Provides page table entries of up to 1 GB to reduce I/O translation look-aside buffer (IOTLB) misses and improve networking performance, particularly for small packets.
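
For illustration, here is a minimal sketch (in Python) of how these tuning knobs can be requested through OpenStack Nova flavor extra specs. It simply builds an extra-spec dictionary and prints the equivalent nova CLI commands. The key names shown (hw:cpu_policy, hw:mem_page_size, hw:numa_nodes) are the ones targeted for the Juno/Kilo timeframe and may differ by release; the flavor name is made up for the example, and SR-IOV ports are typically requested separately through a Neutron port with vnic_type=direct rather than through the flavor.

```python
# Illustrative only: express EPA requirements as Nova flavor extra specs.
# Key names reflect Juno/Kilo-era proposals and may vary by release.
FLAVOR_NAME = "nfv.small"  # hypothetical flavor name

epa_extra_specs = {
    "hw:cpu_policy": "dedicated",   # pin vCPUs 1:1 to host cores
    "hw:mem_page_size": "1048576",  # back guest RAM with 1 GB huge pages (value in KiB)
    "hw:numa_nodes": "1",           # keep vCPUs and RAM on a single NUMA node
}

def flavor_key_commands(flavor, specs):
    """Return the equivalent 'nova flavor-key' CLI commands for these specs."""
    return ["nova flavor-key %s set %s=%s" % (flavor, key, value)
            for key, value in sorted(specs.items())]

if __name__ == "__main__":
    for command in flavor_key_commands(FLAVOR_NAME, epa_extra_specs):
        print(command)
```

An EPA-enabled scheduler can then combine these hints with the NUMA, huge page, and SR-IOV information it gathers from each compute node to place the VM appropriately.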

 

A more detailed explanation of these technologies and how they work together can be found in a recently posted paper that I co-authored, titled A Path to Line-Rate-Capable NFV Deployments with Intel® Architecture and the OpenStack® Juno Release.

 

 

Virtual BNG/BRAS Example

The whitepaper also has a detailed example of a simulation we conducted to demonstrate the impact of these technologies.

 

We created a VNF with the Intel® Data Plane Performance Demonstrator (DPPD) as a tool to benchmark platform performance under simulated traffic loads and to show the impact of adding Enhanced Platform Awareness technologies. The DPPD was developed to emulate many of the functions of a virtual broadband network gateway / broadband remote access server.

 

We used the Juno release of OpenStack® for the test, patched with huge page support. A number of manual steps were applied to simulate capabilities that should be available in the Kilo release, such as CPU pinning and I/O-aware NUMA scheduling.

 

The results shown in the figure below are the relative gains in data throughput, expressed as a percentage of 10 Gbps, achieved through the use of these EPA technologies. Latency and packet delay variation are important characteristics for BNGs. Another study of this sample BNG includes results related to these metrics: Network Function Virtualization: Quality of Service in Broadband Remote Access Servers with Linux* and Intel® Architecture.

 


Cumulative performance impact on Intel® Data Plane Performance Demonstrators (Intel® DPPD) from platform optimizations

 

 

The order in which the features were applied affects the incremental gains, so it is important to consider the results as a whole rather than to infer relative value from the individual increments. There are also a number of other procedures that you can read more about in the whitepaper.

 

Two years of hard work by the open source community have brought us to the verge of a very important and fundamental step forward for delivering carrier-class NFV performance. Be sure to check back here for more of my blogs on this topic, and you can also follow the progress of Kilo at the OpenStack Kilo Release Schedule website.

Read more >

Bring Your Own Device in EMEA – Part 2 – Finding the Balance

In my second blog focusing on Bring Your Own Device (BYOD) in EMEA I’ll be taking a look at the positives and negatives of introducing a BYOD culture into a healthcare organisation. All too often we hear of blanket bans on clinicians and administrators using their personal devices at work, but with the right security protocols in place and enhanced training there is a huge opportunity for BYOD to help solve many of the challenges facing healthcare.

 

Much of the negativity surrounding BYOD stems from the impact that data breaches in EMEA have on both patients (privacy) and healthcare organisations (business and financial). While I’d agree that the headline numbers outlined in my first blog are alarming, they do need to be considered in the context of the size of the wider national healthcare systems.

 

A great example I’ve seen of an organisation seeking to operate a more efficient health service through the implementation of BYOD is the Madrid Community Health Department in Spain. Intel and an independent security expert assessed several mobile operating systems with a view to supporting BYOD for physicians in hospitals within the organisation. I highly recommend you read more about how the Madrid Community Health Department is managing mobile with Microsoft Windows-based tablets.

 

 

The Upside of BYOD

There’s no doubt that BYOD is a fantastic enabler in modern healthcare systems. But why? We’ll look at some best practice tips in a later blog, but suffice it to say that much of the list below should be underpinned by a robust but flexible BYOD policy, an enhanced level of staff training, and a holistic, multi-layered approach to security.

 

1) Reduces Cost of IT

Perhaps the most obvious benefit to healthcare organisations is a reduction in the cost of purchasing IT equipment. Not only that, it’s likely that employees will take greater care of their own devices than they would of a corporate device, thus reducing wastage and replacement costs.

 

2) Upgrade and Update

Product refresh rates are likely to be more rapid for personal devices, enabling employees to take advantage of the latest technologies such as enhanced encryption and improved processing power. And with personal devices we also expect individuals to update software/apps more regularly, ensuring that the latest security updates are installed.

 

3) Knowledge & Understanding

Training employees on new devices or software can be costly and a significant drain on time, not to mention the difficulty of scheduling that time with busy clinicians and healthcare administrators. I believe that allowing employees to use the personal, everyday device they are already familiar with reduces the need for device-level training. There may still be a requirement for app-level training, but that depends very much on the intuitiveness of the apps and services being used.

 

4) More Mobile Workforce

The holy grail of a modern healthcare organisation – a truly mobile workforce. My points above all lead to clinicians and administrators being equipped with the latest mobile technology to be able to work anytime and anywhere to deliver a fantastic patient experience.

 

 

The Downside of BYOD

As I’ve mentioned previously, much of the comment around BYOD is negative and very much driven by headline news of medical records lost or stolen, the ensuing privacy ramifications and significant fines for healthcare organisations following a data breach.

 

It would be remiss of me to ignore the flip side of the BYOD story, but I would hasten to add that much of the risk associated with the list below can be mitigated with a multi-layered approach: one that combines multiple technical safeguards with administrative safeguards such as policy, training, audit and compliance, and with physical safeguards such as locks and the secure use, transport and storage of devices.


1)  Encourages a laissez-faire approach to security

We’ve all heard the phrase ‘familiarity breeds contempt’ and there’s a good argument to apply this to BYOD in healthcare. It’s all too easy for employees to use some of the same workarounds used in their personal life when it comes to handling sensitive health data on their personal device. The most obvious example is sharing via the multitude of wireless options available today.


2) Unauthorised sharing of information

Data held at rest on a personal device is at high risk of loss or theft and is consequently also at high risk of unauthorised access or breach. Consumers are increasingly adopting cloud services to store personal information, including photos and documents.

 

When a clinician or healthcare administrator is in a pressured working situation with their focus primarily on the care of the patient there is a temptation to use a workaround – the most obvious being the use of a familiar and personal cloud-based file sharing service to transmit data. In most cases this is a breach of BYOD and wider data protection policies, and increases risk to the confidentiality of sensitive healthcare data.


3) Loss of Devices

The loss of a personal mobile device can be distressing for the owner, but it’s likely that they’ll simply upgrade or purchase a new model. Loss of personal data is quickly forgotten, but loss of healthcare data on a personal device can have far-reaching and costly consequences, both for patients whose privacy is compromised and for the healthcare organisation that employs the healthcare worker. An effective BYOD policy should explicitly address the loss of devices used by healthcare employees and their responsibilities for securing those devices, using them responsibly, and reporting loss or theft promptly.


4) Integration / Compatibility

I speak regularly with healthcare organisations and I know that IT managers see BYOD as a mixed blessing. On the one hand the cost savings can be tremendous, but on the other they are often left having to integrate multiple devices and operating systems into the corporate IT environment. What I often see is a fragmented BYOD policy that excludes certain devices and operating systems, leaving some employees disgruntled and feeling left out. A side effect is that this can lead to sharing of devices, which can compromise audit and compliance controls and brings us back to point 2 above.

 

These are just some of the positives and negatives around implementing BYOD in a healthcare setting. I firmly sit on the positive side of the fence when it comes to BYOD, and here at Intel Security we have solutions to help you overcome the challenges in your organisation, such as Multi-Factor Authentication (MFA) and solid-state drives (SSDs) with built-in encryption, which complement the administrative and physical safeguards you use in your holistic approach to managing risk.

 

Don’t forget to check out the great example from the Madrid Community Health Department to see how our work is having a positive impact on healthcare in Spain. We’d love to hear your own views on BYOD so do leave us a comment below or if you have a question I’d be happy to answer it.

 

 

David Houlding, MSc, CISSP, CIPP is a Healthcare Privacy and Security lead at Intel and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts

Read more >

Ready, Set, Action. Enhanced Platform Awareness in OpenStack for Line Rate NFV

By: Frank Schapfel

 

One of the challenges in deploying Network Functions Virtualization (NFV) is creating the right software management for the virtualized network. There are differences between managing an IT Cloud and a Telco Cloud. IT Cloud providers take advantage of centralized and standardized servers in large-scale data centers; their architects aim to maximize server utilization (efficiency) and automate operations management. Telco Cloud application workloads are different: they have real-time constraints, government regulatory constraints, and network setup and teardown constraints. New tools are needed to build a Telco Cloud to these requirements.

 

OpenStack is the open source community that has been developing IT Cloud orchestration management software since 2010. The Telco service provider community of end users, telecom equipment manufacturers (TEMs), and software vendors has rallied around adapting OpenStack cloud orchestration for the Telco Cloud, and over the last few releases of OpenStack the industry has been shaping and delivering Telco Cloud ready solutions. For now, let’s focus on the real-time constraints. For the IT Cloud, the data center is viewed as a large pool of compute resources that needs to operate at maximum utilization, even to the point of over-subscription of server resources; waiting a few milliseconds is imperceptible to the end user. A network, on the other hand, is real-time sensitive and therefore cannot tolerate over-subscription of resources.

 

To adapt OpenStack to be more Telco Cloud friendly, Intel contributed the concept of “Enhanced Platform Awareness” to OpenStack. Enhanced Platform Awareness in OpenStack offers a fine-grained matching of virtualized network resources to server platform capabilities. Having a fine-grained view of the server platform allows the orchestrator to assign the Telco Cloud application workload to the best virtual resource. The orchestrator needs NUMA (Non-Uniform Memory Architecture) awareness so that it can understand how the server resources are partitioned and how CPUs, I/O devices, and memory are attached to sockets. For instance, when workloads need line-rate bandwidth, high-speed memory access is critical, and huge page access is supported in the latest Intel® Xeon® processor E5-2600 v3 family.
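
To make platform awareness concrete, here is a minimal, illustrative Python sketch (not part of OpenStack) that gathers the kind of host information an EPA-aware orchestrator reasons about on a Linux compute node: which CPUs belong to which NUMA node, how many free 1 GB huge pages each node has, and which NUMA node a given NIC is attached to. The sysfs paths are standard on recent Linux kernels, but their availability depends on the kernel and platform configuration, and the PCI address used is hypothetical.

```python
import glob
import os

SYS_NODES = "/sys/devices/system/node"

def read(path, default=""):
    # Return the stripped contents of a sysfs file, or a default if absent.
    try:
        with open(path) as fh:
            return fh.read().strip()
    except (IOError, OSError):
        return default

def numa_topology():
    # Map each NUMA node to its CPU list and its free 1 GB huge pages.
    topology = {}
    for node_dir in sorted(glob.glob(os.path.join(SYS_NODES, "node[0-9]*"))):
        node = os.path.basename(node_dir)
        topology[node] = {
            "cpus": read(os.path.join(node_dir, "cpulist")),
            "free_1g_hugepages": read(
                os.path.join(node_dir, "hugepages",
                             "hugepages-1048576kB", "free_hugepages"),
                default="0"),
        }
    return topology

def nic_numa_node(pci_address):
    # NUMA node a PCI NIC is attached to; -1 means locality was not reported.
    return read("/sys/bus/pci/devices/%s/numa_node" % pci_address, default="-1")

if __name__ == "__main__":
    for node, info in sorted(numa_topology().items()):
        print(node, info)
    # Hypothetical PCI address of a 10GbE port; substitute your own.
    print("NIC locality:", nic_numa_node("0000:04:00.0"))
```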

 

Now in action at the Oracle Industry Connect event in Washington, DC, Oracle and Intel are demonstrating this collaboration using Enhanced Platform Awareness in OpenStack. The Oracle Communications Network Service Orchestration solution uses OpenStack Enhanced Platform Awareness to achieve carrier-grade performance for the Telco Cloud: virtualized network functions are assigned resources based on their needs for huge page access and NUMA awareness, while other cloud workloads that are not network functions are not assigned specific server resources.

 

The good news: the Enhanced Platform Awareness contributions are already upstreamed in the OpenStack repository and will be in the OpenStack Kilo release later this year. At Oracle Industry Connect this week there are a keynote, panel discussions and demos to get even further “under the hood.” And if you want even more detail, there is a new Intel white paper: A Path to Line-Rate-Capable NFV Deployments with Intel® Architecture and the OpenStack® Juno Release.

 

Adapting OpenStack for Telco Cloud is happening now. And Enhanced Platform Awareness is finding its way into a real, carrier-grade orchestration solution.

Read more >

How can you afford to NOT use SSDs?

“Intel SSDs are too expensive!”

“The performance of an SSD won’t be noticed by my users.”

“Intel SSDs will wear out too fast!”

“I don’t have time to learn about deploying SSDs!”

 

I’ve heard statements like this for years, and do I ever have a story to share – the story of Intel’s adoption of Intel® Solid-State Drives (Intel® SSDs).

 

Before I tell you more, I would like to introduce myself.  I am currently a Client SSD Solutions Architect in Intel’s Non-Volatile Memory Solutions Group (the SSD group).   Prior to joining this group last year, I was in Information Technology (IT) at Intel for 26 years.  The last seven years in IT were spent in a client research and pathfinding role where I investigated new technologies and how they could be applied inside of Intel to improve employee productivity.

 

I can still remember the day in late 2007 when I first plugged an Intel SSD into my laptop.  I giggled.  A lot.  And that’s what sparked my passion for SSDs.  I completed many lab tests, research efforts and pilot deployments in my role, which led to the mainstream adoption of Intel SSDs within Intel.  That’s the short version.  More detail is documented in a series of white papers published through our IT@Intel Program.  If you’d like to read more about our SSD adoption journey, here are the papers:

 

 

I’ve answered many technical and business-related questions about SSDs over the years.  Questions, and assumptions, like the four at the top of this blog, and perhaps one hundred others.  But the question I’ve been asked more than any other is, “how can you afford to deploy SSDs when they cost so much compared to hard drives?”  I won’t go into the detail in this introductory blog, but I will give you a hint: point you to our Total Cost of Ownership estimator and ask, “how can you afford to NOT use SSDs?”

 

I plan to cover a variety of client SSD topics in future blogs.  I have a lot of info that I would like to share about the adoption of SSDs within Intel, and about the technology and products in general.  If you are interested in a specific topic, please make a suggestion and I will use your input to guide future blogs.

 

Thanks for your time!

 

Doug
intel.com/ssd

Read more >

Keeping it Simple in Healthcare Cybersecurity

“Any fool can make something complicated. It takes a genius to make it simple.” – Woody Guthrie, musician

 

The proliferation of electronic systems and devices in healthcare is a good example of the tendency of systems to increase in complexity over time, and that complexity has taken its toll on our ability to adequately secure data. In 2014, the number of people in California alone whose electronic protected health information (ePHI) was exposed by a breach increased 600 percent. The national cost of recovering from a breach averaged $5.4 million, not including the harm from loss of consumer trust.

 

With so much at risk, security is no longer just an IT issue; it is a significant business and operational concern. The growing complexity of healthcare IT demands a simpler approach that will enable organizations to address security realistically. As Harvard surgeon Atul Gawande explained in his book The Checklist Manifesto, a checklist can help people simplify the steps in a complex procedure, like the one Peter Pronovost used to reduce central line infections at Johns Hopkins. That simple, five-step checklist for central line insertion, which included enforcing and monitoring hand washing, helped prevent 43 infections and 8 ICU deaths, saving the hospital $2 million. Enforcing and monitoring hand washing significantly increased compliance with basic hygiene and was central to reducing infection rates.

 

Use checklists

If healthcare organizations used a checklist of basic security hygiene, similar to the one Gawande wrote about, many breaches of privacy could be avoided. But, just as hand washing (cheap and effective at preventing infection) is often neglected, healthcare organizations often neglect the bedrock of a good security posture: encryption, identity and access management platforms, risk analyses, and breach remediation and response plans.

 

While organizations understand that these activities are important, many lack operational follow-through. For example, less than 60 percent of providers have completed a risk assessment on their newest connected and integrated technologies, and only 30 percent are confident that their business associates can detect patient data loss or theft or perform a risk assessment. Barely 75 percent of providers use any form of encryption, despite the fact that it confers immunity from the requirement to report ePHI breaches. And according to Dell’s 2014 Global Technology Adoption Index, only one in four organizations surveyed actually has a plan in place for all types of security breaches. Many healthcare organizations are just as vulnerable as Community Health Systems was in early 2014, or insurer Anthem was at the beginning of 2015.

 

In the face of multiple incentives to encrypt data and manage authorizations and data access, why do so many organizations ignore these most basic of measures?

 

The answer is complexity. In a 2010 survey, IBM’s Institute for Business Value identified “the rapid escalation of complexity” as a top challenge for CEOs, and most of those polled did not feel adequately prepared to confront it. To better manage the chaos, healthcare CIOs can look to their own clinical departments for examples of significant quality improvements achieved by establishing a checklist of behaviors and making people accountable for sticking to the list. The Royal Australian College of General Practitioners (RACGP), for instance, has adopted a 12-point framework to help physician practices assess their security and comply with security best practices. These guidelines are tightly integrated into areas such as process development, risk analysis, governance, and building a culture of security.

 

Simplified playbook

Dell security experts have also written recently on the importance of a simplified playbook approach to security, focusing on four areas: (1) preventing, (2) detecting, (3) containing, and (4) eradicating breaches. By implementing a framework based on these four simple principles, healthcare organizations can address not only the technical and hardware components of security but also the “human element” responsible for many breaches, including human error and malicious insiders. Within each of these strategic areas, organizations can incorporate checklists of the core tactics that support them. Many of these activities take place before a breach ever occurs and should limit employee negligence. Thus, to prevent a breach, a checklist similar to the following should be implemented, depending on the organization’s unique needs (a brief illustrative sketch of the first item follows the list):

 

1. Automatically encrypt all protected data from point of creation, and as it moves, including movement into the cloud.

2. Implement an effective identity and access management solution. Include clear direction on access rights, password maintenance and management, remote access controls, and auditing and appropriate software configuration.

3. Regularly assess security risks, using a framework such as NIST, and include threat analysis, reporting schedule and data breach recording procedures.  Ensure risk remediation efforts have a high priority.

4. Ensure the education of staff on security “hand washing” behaviors, including password, internet and email usage practices.

5. Monitor to detect threats in real-time.
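
As a purely illustrative companion to item 1, the short Python sketch below shows application-level encryption of a record using the third-party cryptography library’s Fernet recipe. It is a toy example under stated assumptions, not a compliance solution: in a real deployment the key would come from a key management service, and protection at rest and in transit would rely on full-disk or database encryption and TLS.

```python
# Toy illustration of encrypting protected data at the point of creation.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# In practice the key would come from a key management service, not be
# generated and stored alongside the data like this.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"patient-id=12345; diagnosis=..."  # hypothetical ePHI payload
token = cipher.encrypt(record)               # ciphertext safe to store or move

# Only holders of the key can recover the plaintext.
assert cipher.decrypt(token) == record
```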

 

Similar checklists can also be created for the other three areas mentioned above. Healthcare organizations can simplify even further by vertically integrating security add-ons and centralizing and hardening security into the IT infrastructure. This includes embedding security in firewalls, servers and data centers; integrating secure messaging with next generation firewalls; and encrypting data automatically as it scales and moves into the cloud.

 

We can improve healthcare cybersecurity by focusing on a checklist of simple practices that have the greatest impact. And simplicity, as Leonardo da Vinci once stated, “is the ultimate sophistication.”

 

What questions about cybersecurity do you have?

 

Join Dell and Intel at HIMSS booth #955 on April 14 at 11 am CT for an interactive tweet-up discussing relevant topics in healthcare security. Register for this exclusive event here.


Frank Negro is Global Practice Leader, Strategy and Planning, Healthcare and Life Sciences Solutions at Dell Services

Read more >

Optimize Your Infrastructure with the Intel Xeon D

In case you missed it, we just celebrated the launch of the Intel Xeon processor D product family. And if you did miss it, I’m here to give you all the highlights of an exciting leap in enterprise infrastructure optimization, from the data center to the network edge.

 

The Xeon D family is Intel’s 3rd-generation 64-bit SoC and the first based on Intel Xeon processor technology. The Xeon D weaves the performance of Xeon processors into a dense, lower-power system-on-a-chip (SoC). It suits a wide variety of use cases, ranging from dynamic web serving and dedicated web hosting to warm storage and network routing.

 

Secure, Scalable Storage

 

The Xeon D’s low energy consumption and extremely high performance make it a cost-effective, scalable solution for organizations looking to take their data centers to the next level. By dramatically reducing heat and electricity usage, this product family offers an unrivaled low-powered solution for enterprise server environments.

 

Server systems powered by the new Intel Xeon D processors offer fault-tolerant, stable storage platforms that lend themselves well to the scalability and speed clients demand. Large enterprises looking for low-power, high-density server processors for their data stacks should keep an eye on the Xeon D family, as these processors offer solid performance per watt and unparalleled security baked right into the hardware.

 

Cloud Service Providers Take Note

 

1&1, Europe’s leading web hosting service, recently analyzed Intel’s new Xeon D processor family for different cloud workloads such as storage and dedicated hosting. The best-in-class service uses these new processors to offer both savings and stability to its customers. According to 1&1’s Hans Nijholt, the technology has a serious advantage for enterprise storage companies as well as for SMBs looking to pass savings on to their customers:

 

“The [Xeon D’s] energy consumption is extremely low and it gives us very high performance. Xeon D has a 4x improvement in memory and lets us get a much higher density in our data center, combined with the best price/performance ratio you can offer.”

 

If you’re looking to bypass existing physical limitations, sometimes it’s simply a matter of taking a step back, examining your environment, and understanding that you have options outside expansion. The Xeon D is ready to change your business — are you ready for the transformation?

 

We’ll be revealing more about the Xeon D at World Hosting Days; join us as we continue to unveil the exciting capabilities of our latest addition to the Xeon family!

 

If you’re interested in learning more about what I’ve discussed in this blog, tune in to the festivities and highlights from CeBit 2015.

 

To continue this conversation, connect with me on LinkedIn or use #ITCenter.

Read more >

Intel Was Doing Software-Defined Infrastructure Before It Was Cool

Following Intel’s lead – decoupling software from hardware and automating IT and business processes – can help IT departments do more with less.

 

When I think back to all the strategic decisions that Intel IT has made over the last two decades, I can think of one that set the stage for all the rest: our move in 1999 from RISC-based computing systems to industry-standard Intel® architecture and Linux for our silicon design workloads. That transition, which took place over a 5-year period, helped us more than double our performance while eliminating approximately $1.4 billion in IT costs.

 

While this may seem like old news, it really was the first step in developing a software-defined infrastructure (SDI) – before it was known as such – at Intel. We solidified our compute platform with the right mix of software on the best hardware to get our products out on time.

 

Today, SDI has become a data center buzzword and is considered one of the critical next steps for the IT industry as a whole.

[Figure: Intel IT storage capacity growth]


Why is SDI (compute, storage, and network) so important?

 

SDI is the only thing that is going to enable enterprise data centers to meet spending constraints, maximize infrastructure utilization, and keep up with demand that increases dramatically every year.

 

Here at Intel, compute demand is growing at around 30 percent year-over-year. And as you can see from the graphic, our storage demand is also growing at a phenomenal rate.

 

But our budget remains flat or has even decreased in some cases.

 

Somehow, we have to deliver ever-increasing services without increasing cost.


What’s the key?

 

Success lies in decoupling hardware and software.

 

As I mentioned, Intel decoupled hardware and software in our compute environment nearly 16 years ago, replacing costly proprietary solutions that tightly coupled hardware and software with industry-standard x86 servers and the open source Linux operating system. We deployed powerful, performance-optimized Intel® Xeon® processor-based servers for delivering throughput computing. We followed this by adding performance-centric higher-clock, higher-density Intel Xeon processor-based servers to accelerate silicon design TTM (time to market) while significantly reducing EDA  (Electronic Design Automation) application license cost — all of which resulted in software-defined compute capabilities that were powerful but affordable.

 

Technology has been continuously evolving, enabling us to bring a similar level of performance, availability, scalability, and functionality with open source, software-based solutions on x86-based hardware to our storage and network environments.

 

As we describe in a new white paper, Intel IT is continuously progressing and transforming Intel’s storage and network environments from proprietary fixed-function solutions to standard, agile, and cost-effective systems.

 

We are currently piloting software-defined storage and identifying quality gaps to improve the capability for end-to-end deployment for business critical use.

 

We transitioned our network from proprietary to commodity hardware resulting in more than a 50-percent reduction in cost. We are also working with the industry to adopt and certify an open-source-based network software solution that we anticipate will drive down per-port cost by an additional 50 percent. Our software-defined network deployment is limited to a narrow virtualized environment within our Office and Enterprise private cloud.


But that’s not enough…

 

Although decoupling hardware and software is a key aspect of building SDI, we must do more. Our SDI vision, which began many years ago, includes automated orchestration of the data center infrastructure resources. We have already automated resource management and federation at the global data center level. Our goal is total automation of IT and business processes, to support on-demand, self-service provisioning, monitoring, and management of the entire compute/network/storage infrastructure. Automation will ensure that when a workload demand occurs, it lands on the right-sized compute and storage so that the application can perform at the needed level of quality of service without wasting resources.


Lower cost, greater relevancy

 

Public clouds have achieved great economy of scale by adopting open-standard-based hardware, operating systems, and resource provisioning and orchestration software through which they can deliver cost-effective capabilities to the consumers of IT. If enterprise IT wants to stay relevant, we need to compete at a price point and agility similar to the public cloud. SDI lets IT compete while maintaining a focus on our clients’ business needs.

 

As Intel IT continues its journey toward end-to-end SDI, we will share our innovations and learnings with the rest of the IT industry — and we want to hear about yours, too! Together, we can not only stay relevant to our individual institutions, but also contribute to the maturity of the data center industry.

Read more >

Managing Mobile and BYOD: Madrid Community Health Department

The Bring Your Own Device (BYOD) movement is booming. Tech Pro Research’s latest survey shows that 74 percent of organizations globally are either already using or planning to allow employees to bring their own devices to work.

 

Allowing employees to bring their own devices into the office for business use has helped companies cut hardware and service costs, increase flexibility and achieve greater productivity, but there are also inherent security and data protection risks.


According to the same Tech Pro Research study, security concerns were the primary barrier to adoption of BYOD for a large majority (78 percent) of respondents, followed by IT support concerns (49 percent), lack of control over hardware (45 percent), and regulatory compliance issues (39 percent).

 

The cost of a data breach is often substantial. Data from the Ponemon Institute shows that in EMEA in 2014 the organisational cost of a breach was some £2.02m in UAE/Saudi Arabia, £2.21m in the United Kingdom and over £2.50m in Germany.

 

Of course these concerns and costs are understandable, but they needn’t be a showstopper.

 

Mobile risk analysis

Carrying out a thorough risk analysis of the impact of BYOD can help organizations better understand the associated security, management and compliance issues and help them chose the mobility solution that best aligns with their strategies.

 

Madrid Community Health Department, the agency in charge of providing public health services in Madrid, found that increasing numbers of physicians and other staff were trying to access the corporate network from their own tablets and smartphones.

 

Rather than try to resist this rising tide, it called in an independent security expert to collaborate with its IT and Legal teams to draw up a list of 18 security requirements that its mobility strategy needed to meet.

 

A full list of these requirements can be found here: [ENG]/[ESP].

 

It then assessed the ability of three different scenarios to assure compliance with these statements:

 

  • A tablet running a Windows 8.1 operating system (OS) managed by Mobile Device Management (MDM)
  • A tablet running an Android OS managed by MDM
  • A tablet running a Windows 8.1 OS managed as a normal PC

 

Managing Windows 8.1 tablets as a normal PC was shown to meet all 18 compliance statements, while managing Windows 8.1 and Android tablets with an MDM met only eight and 10 of the statements respectively.

 

Managing mobile as a PC

From this, the Madrid Community Health Department was able to conclude that tablets running the Windows 8.1 operating system offered greater flexibility, since they can be managed both with an MDM and as a normal PC.

 

However, adopting Windows 8.1 tablets and managing them as normal enterprise PCs can cover most of the defined risks, provided the tablet is issued to the employee by the Madrid Community Health Department in the same way as a normal PC.

 

For Madrid Community Health Department carrying out a full risk analysis showed that managing Windows 8.1 devices as a normal PC best aligns with its strategies.

If your organization is uncertain which management solution to choose, then a similar analysis could be the way to move you closer towards BYOD.

 

Read more >

Is cloud destined to be purely public?

 

51 per cent of workloads are now in the cloud; is it time to break through that ceiling?

 

 

At this point, we’re somewhat beyond discussions of the importance of cloud. It’s been around for some time, just about every person and company uses it in some form and, for the kicker, 2014 saw companies place more computing workloads in the cloud (51 per cent) – through either public cloud or colocation – than they process in house.

 

In just a few years we’ve moved from every server sitting in the same building as those accessing it, to a choice between private and public cloud, and on to the IT model du jour: hybrid cloud. Hybrid is fast becoming the model of choice, fusing the safety of an organisation’s private data centre with the flexibility of public cloud. However, in today’s fast-paced IT world, as one approach becomes mainstream the natural reaction is to ask, ‘what’s next?’ A plausible next step in this evolution is the end of the permanent, owned data centre, and even of long-term co-location, in favour of an infrastructure built entirely on the public cloud and SaaS applications. The question is: will businesses really go this far in their march into the cloud? Do we want it to go this far?

 

Public cloud, of course, is nothing new to the enterprise, and it’s not unheard of for a small business or start-up to operate solely from the public cloud and SaaS services. However, few if any large corporates have eschewed their own private data centres and co-location arrangements in favour of this pure public cloud approach.

 

For such an approach to become plausible in large organisations, CIOs need to be confident about putting even the most sensitive data into public clouds. This entails a series of mentality changes that are already taking place in the SMB space. The cloud-based Office 365, for instance, is Microsoft’s fastest-selling product ever. For large organisations, however, this is far from a trivial change, and CIOs are far from ready for it.

 

The data argument

 

Data protectionism is the case in point. Data has long been a highly protected resource for financial services and legal organisations both for their own competitive advantage and due to legal requirements designed to protect their clients’ information. Thanks to the arrival of big data analysis, we can also add marketers, retailers and even sports brands to that list, as all have found unique advantages in the ability to mine insights from huge amounts of data.

This is at once an opportunity and a problem. More data means more accurate and actionable insights, but that data needs storing and processing and, consequently, an ever-growing amount of server power and storage space. Today’s approach to this issue is the hybrid cloud: keep sensitive data primarily stored in a private data centre or co-located, and use public cloud as an overspill for processing or as object storage when requirements exceed the organisation’s existing capacity.

 

The amount of data created and recorded each day is ever growing. In a world where data growth is exponential,  the hybrid model will be put under pressure. Even organisations that keep only the most sensitive and mission critical data within their private data centres whilst moving all else to the cloud will quickly see data inflation. Consequently, they will be forced to buy ever greater numbers of servers and space to house their critical data at an ever growing cost, and without the flexibility of the public cloud.

 

In this light, a pure public cloud infrastructure starts to seem like a good idea – an infrastructure that can be instantly switched on and expanded as needed, at low cost. The idea of placing their most sensitive data in a public cloud, beyond their own direct control and security, however, will remain unpalatable to the majority of CIOs. Understandable when you consider research such as that released last year stating that only one in 100 cloud providers meets EU Data Protection requirements currently being examined in Brussels.

 

So, increasing dependence on the public cloud becomes a tug of war between a CIO’s data burden and their capacity for the perceived security risk of the cloud.

 

Cloud Creep

 

The process that may well tip the balance in this tug of war is cloud’s very own version of exposure therapy. CIOs are storing and processing more and more non critical data in the public cloud and, across their organisations, business units are independently buying in SaaS applications, giving them a taste of the ease of the cloud (from an end user point of view, at least). As this exposure grows, the public cloud and SaaS applications will increasingly prove their reliability and security whilst earning their place as invaluable tools in a business unit’s armoury. The result is a virtuous circle of growing trust of public cloud and SaaS services – greater trust means more data placed in the public cloud, which creates greater trust. Coupled with the ever falling cost of public cloud, eventually, surely, the perceived risks of the public cloud fall enough to make its advantages outweigh the disadvantages, even for the most sensitive of data?

 

Should it be done?

 

This all depends on a big ‘if’. Trust in the public cloud and SaaS applications will only grow if public cloud providers remain unhacked and SaaS data unleaked. This is a big ask in a world of weekly data breaches, but security is relative, and private data centre leaks are rapidly becoming more common, or at least better publicised, than those in the public cloud. Sony Pictures’ issues arose from a malevolent force within its network, not from its public cloud-based data. It will take many more attacks such as these to convince CIOs that losing direct control of their data security and putting all that trust in their cloud provider is the most sensible option. Those attacks seem likely to come, however, and in the meantime, barring a major outage or truly headline-making attack, cloud exposure is increasing confidence in the public cloud.

 

At the same time, public cloud providers need to work to build confidence, not just passively wait for the scales to tip. Selecting a cloud service is a business decision, and any CIO will apply the same diligence that they would to any other supplier choice. Providers that fail to meet the latest regulation, aren’t visibly planning for the future or fail to convince on data privacy concerns and legislation will damage confidence in the public cloud and actively hold it back, particularly within large enterprises. Those providers that do build their way to becoming a trusted partner will, however, flourish and compound the ever-growing positive effects of public cloud exposure.

 

As that happens, the prospect of a pure public cloud enterprise becomes more realistic. Every CIO and organisation is different, and will have a different tolerance for risk. This virtuous circle of cloud will tip organisations towards pure cloud approaches at different times, and every cloud hack or outage will set the model back different amounts in each organisation. It is, however, clear that, whether desirable right now or not, pure public cloud is rapidly approaching reality for some larger enterprises.

Read more >

The Bleeding Edge of Medicine

Computer Aided Engineering (CAE) has become pervasive in the design and manufacture of everything from jumbo jets to razor blades, transforming the product development process to produce more efficient, cost effective, safe and easy to use products. A central component of CAE is the ability to realistically simulate the physical behavior of a product in real world scenarios, which greatly facilitates understanding and innovation.


 

Application of this advanced technology to healthcare has profound implications for society, promising to transform the practice of medicine from observation driven to understanding driven. However, lack of definitive models, processes and standards has limited its application, and development has remained fragmented in research organizations around the world.

 

Heart simulation invaluable

In January 2014, Dassault Systèmes took the first step to change this and launched the “Living Heart Project,” a translational initiative to partner with cardiologists, researchers, and device manufacturers to develop a definitive, realistic simulation of the human heart. Through this accelerated approach, the first commercial model-centric, application-agnostic, multiphysics whole-heart simulation has been produced.

 

Since cardiovascular disease is the number one cause of morbidity and mortality across the globe, Dassault Systèmes saw the Living Heart Project as the best way to address the problem. Although there is a plethora of medical devices, drugs, and interventions, physicians face the problem of determining which device, drug, or intervention to use on which patient. Oftentimes, invasive procedures are needed to truly understand what is going on inside a patient.

 

CAE and the Living Heart Project will enable cardiologists to take an image (MRI, CT, etc.) of a patient’s heart and reconstruct it as a 3D model, creating a much more personalized form of healthcare. The doctor can see exactly what is happening in the patient’s heart and make a more informed decision about how to treat that patient most effectively.

 

What questions do you have about computer aided engineering?

 

Karl D’Souza is a senior user experience specialist at Dassault Systèmes Simulia Corp.

Read more >

Ethernet Shows Its Role as Fabric Technology for High-End Data Centers at OCP Summit

March has been a big month for demonstrating the role of Intel® Ethernet in the future of several key Intel initiatives that are changing the data center.

 

At the start of the month we were in Barcelona at Mobile World Congress, demonstrating the role of Ethernet as the key server interconnect technology for Intel’s Software Defined Infrastructure (SDI) initiative; you can read my blog post on that event.

 

And just this week, Intel was in San Jose at the Open Compute Project Summit, highlighting Ethernet’s role in Rack Scale Architecture (RSA), which is one of our initiatives for SDI.

 

RSA is a logical data center hardware architectural framework based on pooled and disaggregated computing, storage and networking resources from which software controllers can compose the ideal system for an application workload.

 

The use of virtualization in the data center is increasing server utilization levels and driving an insatiable need for more efficient data center networks. RSA’s disaggregated and pooled approach is an open, high-performance way to meet this need for data center efficiency.

 

In RSA, Ethernet plays a key role as the low-latency, high-bandwidth fabric connecting the disaggregated resources together and to other resources outside of the rack. The whole system depends on Ethernet providing a low-latency, high-throughput fabric that is also software controllable.

 

MWC was where we demonstrated Intel Ethernet’s software controllability through support for network virtualization overlays; and OCP Summit is where we demonstrated the raw speed of our Ethernet technology.

 

A little history is in order. RSA was first demonstrated at last year’s OCP Summit, and as part of that we revealed an integrated 10GbE switch module proof of concept that included a switch chip and multiple Ethernet controllers, removing the need for a NIC in the server.

 

This proof of concept showed how this architecture could disaggregate the network from the compute node.

 

At the 2015 show, we demonstrated a new design with our upcoming Red Rock Canyon technology, a single-chip solution that integrates multiple NICs into a switch chip. The chip delivered throughput of 50 Gbps between four Xeon nodes via PCIe, and multiple 100GbE connections between the server shelves, all with very low latency.

 

The features delivered by this innovative design provide performance optimized for RSA workloads. It’s safe to say that I have not seen a more efficient or higher-performance rack than this proof of concept; check out the video of the performance.

 

Red Rock Canyon is just one of the ways we’re continuing to innovate with Ethernet to make it the network of choice for high-end data centers.

Read more >

NVM Express* Technology Goes Viral – From Data Center to Client to Fabrics

Amber Huffman, Sr Principal Engineer, Storage Technologies Group at Intel


For Enterprise, everyone is talking about “Cloud,” “Big Data,” and “Software Defined X,” the latest IT buzzwords. For consumers, the excitement is around 4K gaming and 4K digital content creation. At the heart of all this is a LOT of data. A petabyte of data used to sound enormous – now the explosion in data is being described in exabytes (1K petabytes) and even zettabytes (1K exabytes). The challenge is how to get fast access to the specific information you need in this sea of information.

 

NVM Express* (NVMe*) was designed for enterprise and consumer implementations, specifically to address this challenge and the opportunities created by the massive amount of data that businesses and consumers generate and devour.

 

NVMe is the standard interface for PCI Express* (PCIe*) SSDs. Other interfaces like Serial ATA and SAS were defined for mechanical hard drives, and these legacy interfaces are slow from both a throughput and a latency standpoint. NVMe jettisons this legacy and is architected from the ground up for non-volatile memory, enabling it to deliver amazing performance and low latency. For example, NVMe delivers up to 6x the performance of state-of-the-art SATA SSDs1.

 

There are several exciting new developments in NVMe. In 2015, NVMe will be coming to client systems, delivering great performance at the low power levels required in 2-in-1s and tablets. The NVM Express Workgroup is also developing “NVMe over Fabrics,” which brings the benefits of NVMe across the data center and cloud over fabrics like Ethernet, Fibre Channel, InfiniBand*, and OmniPath* Architecture.

 

NVM Express is the interface that will serve data center and client needs for the next decade. For a closer look at the new developments in NVMe, look Under the Hood with this video. Check out more information at www.nvmexpress.org.
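
As a small illustration of how NVMe devices surface on a Linux host, the Python sketch below enumerates NVMe controllers through sysfs and prints their model, serial, and firmware strings. The attribute names are those exposed by recent Linux NVMe drivers and may vary with kernel version; on a system without NVMe drives the script simply prints nothing.

```python
import glob
import os

def list_nvme_controllers():
    """Enumerate NVMe controllers exposed by the Linux NVMe driver via sysfs."""
    controllers = []
    for ctrl_path in sorted(glob.glob("/sys/class/nvme/nvme*")):
        info = {"name": os.path.basename(ctrl_path)}
        for attr in ("model", "serial", "firmware_rev"):
            attr_file = os.path.join(ctrl_path, attr)
            if os.path.isfile(attr_file):
                with open(attr_file) as fh:
                    info[attr] = fh.read().strip()
        controllers.append(info)
    return controllers

if __name__ == "__main__":
    for controller in list_nvme_controllers():
        print(controller)
```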

 

 



1Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Configuration: Performance claims obtained from data sheet, Intel® SSD DC P3700 Series 2TB, Intel® SSD DC S3700 Series: Intel Core i7-3770K CPU @ 3.50GHz, 8GB of system memory, Windows* Server 2012, IOMeter. Random performance is collected with 4 workers each with 32 QD. Configuration for latency: Intel® S2600CP server, Intel® Xeon® E5-2690v2 x2, 64GB DDR3, Intel® SSD DC P3700 Series 400GB, LSI 9207-8i, Intel® SSD DC S3700.

 

© 2015 Intel Corporation

 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

Read more >

Transforming the Workplace for a New Generation of Workers

Workplace transformation is not a new concept. It’s a piece of our evolution. As new generations enter the workforce, they bring new expectations with them; what the workplace meant for one generation doesn’t necessarily fit with the next. Think about the way we work in 2015 versus the way we worked in, say, 2000.

 

In just 15 years, we’ve developed mobile technology that lets us communicate and work from just about anywhere. Robust mobile technologies like tablets and 2 in 1s enable remote workers to video conference and collaborate just as efficiently as they would in the office. As these technologies evolve, they change the way we think about how and where we work.

 


Working Better by Focusing on UX

 

Over the past decade, mobile technologies have probably had the most dramatic impact on how we work, but advances in infrastructure will pave the way for the next big shift. Wireless technologies have improved by leaps and bounds. Advances in wireless display (WiDi) and wireless gigabit (WiGig) technologies have created the very real possibility of a wire-free workplace. They drive evolution in a truly revolutionary way.

 

Consider the impact of something as simple as creating a “smart” conference room with a large presentation screen that automatically pairs with your 2 in 1 or other device, freeing you from adapters and cords. The meeting room could be connected to a central calendar and mark itself as “occupied” so employees always know which rooms are free and which ones are in use. Simple tweaks like this keep the focus on the content of meetings, not the distractions caused by peripheral frustrations.

 

The workstation is another transformation target. Wireless docking, auto-connectivity, and wireless charging will dramatically reduce clutter in the workplace. The powerful All-in-One PC with the Intel Core i5 processor will free employees from the tethers of their desktop towers. Simple changes like removing cords and freeing employees from their cubicles can have huge impacts for companies — and their bottom lines.

 

The Benefits of an Evolved Workplace

 

Creating the right workplace for employees is one of the most important things companies can do to give themselves an advantage. By investing in the right infrastructure and devices, businesses can maximize employee creativity and collaboration, enhance productivity, and attract and retain top talent. Evolving the workplace through technology can empower employees to do their best work with fewer distractions and frustrations caused by outdated technology.

 

If you’re interested in learning more about what I’ve discussed in this blog, tune in to the festivities and highlights from CeBit 2015.

 

To continue this conversation on Twitter, please use #ITCenter. And you can find me on LinkedIn here.

Read more >

The Behavioral Shift Driving Change in the World of Retail

Ready or Not, Cross-Channel Shopping Is Here to Stay

 

Of all the marketplace transitions that have swept through the developed world’s retail industry over the last five to seven years, the most important is the behavioral shift to cross-channel shopping.

 

The story is told in these three data points1:

 

  1. More than 60 percent of U.S. shoppers (and an even higher share in the U.K.) regularly begin their shopping journey online.
  2. Online ratings and reviews have the greatest impact on shopper purchasing decisions, ranking above friends and family and carrying four to five times the influence of store associates.
  3. Nearly 90 percent of all retail revenue is still generated in the store.

 

Retail today is face-to-face with a shopper who’s squarely at the intersection of e-commerce, an ever-present smartphone, and an always-on connection to the Internet.

 

Few retailers are blind to the big behavioral shift. Most brands are responding with strategic omni-channel investments that seek to erase legacy channel lines between customer databases, inventories, vendor lists, and promotions.

 


Channel-centric organizations are being trimmed, scrubbed, or reshaped. There’s even a willingness — at least among some far-sighted brands — to deal head-on with the thorny challenge of revenue recognition.

 

All good. All necessary.

 


Redefining the Retail Space

 

But, as far as I can tell, only a handful of leaders are asking the deeper question: what, exactly, is the new definition of the store?

 

What is the definition of the store when the front door to the brand is increasingly online?

 

What is the definition of the store when shoppers know more than the associates, and when the answer to the question of how and why becomes — at the point of purchase — more important than what and how much?

 

What is the definition of the store beyond digital? Or of a mash-up of the virtual and physical?

 

What is the definition — not of brick-and-mortar and shelves and aisles and four-ways and displays — but of differentiating value delivery?

 

This is a topic we’re now exploring through whiteboard sessions and analyst and advisor discussions. We’re hard at work reviewing the crucial capabilities that will drive the 2018 cross-brand architecture.

 

Stay tuned. I’ll be sharing my hypotheses (and findings) as I forge ahead.

 

 

Jon Stine
Global Director, Retail Sales

Intel Corporation

 

This is the second installment of a series on Retail & Tech. Click here to read Moving from Maintenance to Growth in Retail Technology.

 

1 National Retail Federation. “2015 National Retail Federation Data.” 06 January 2015.

Read more >

Can Technology Enable Viable Virtual Care?

 

I recently spoke to Mark Blatt, Intel’s Worldwide Medical Director, about whether virtual care can deliver outcomes that are equal to, or better than, face-to-face care. Across the world, ageing populations are stretching public health services to the limit. It’s impractical for everybody with a health problem to go to a hospital or clinic, taking up the valuable time of a limited number of doctors and nurses, time that could often be better used elsewhere.

 

That’s why we believe virtual care is a trend that will increase markedly in the future. It isn’t entirely new: my fellow medical professionals have long found the telephone a valuable diagnostic tool. And while the telephone remains an important part of virtual care (today it is used as often on the move as at a desk), on its own it can deliver only basic support.

 

So, what does the future hold for virtual care? Take a look at the video above to hear Mark’s thoughts and leave us your ideas too. I’d love to hear from you in the comments section below.

Read more >

Tightening up Intel SCS service account permissions for managing Intel AMT computer objects in Microsoft Active Directory

An enterprise customer wanted to enable Active Directory integration with Intel AMT on their large Intel vPro client estate. However, their security team wanted the permissions granted to the Intel SCS service account on the Organisational Unit (OU) where Intel AMT computer objects are stored (to support Kerberos) to be as restrictive as possible.

 

As defined in the Intel® Setup and Configuration Software User Guide, the permissions required for the SCS service account on the OU container are “Create Computer objects”, “Delete Computer objects” and “List contents” (the last of which appears to be granted by default), plus full control on descendant computer objects. Full control on descendant objects was not acceptable, so …

 

[Screenshots: OU permissions showing “Create Computer objects”, “Delete Computer objects” and “List contents”]

… to support AMT maintenance tasks such as updating the password of the AD object representing the Intel AMT device and ensuring the Kerberos clock remains synchronised, the following explicit permissions are required on all descendant computer objects within the OU.

[Screenshots: descendant computer object permissions showing “Change password” and “Write all properties”]

The customer’s security team were happier with these permissions, and the customer is now activating their Intel vPro systems to enable the powerful manageability and security capabilities that Intel Active Management Technology, available on Intel vPro technology platforms, provides.

Read more >

Emerging Technology Sectors Changing the IT-Business Landscape

Intel’s CIO Kim Stevenson is “…convinced that this is an exciting time as we enter a new era for Enterprise IT. Market leadership is increasingly being driven by technology in all industries, and a new economic narrative is being written that challenges business models that have been in place for decades.”

 

With enterprises pumping more funds into the industry than ever, Gartner projects that IT spending will reach $3.8 trillion this year. Gartner’s prediction indicates that while many of the traditional enterprise IT areas — data center systems, devices, enterprise software, IT services, and telecom services — will continue to see increased investment, newer areas are expected to grow much faster.


As the business invests more in IT — whether in these traditional areas or the new emerging ones — one thing is constant: the business is becoming more dependent on IT for both organizational efficiency and competitive value.

 

Let’s take a closer look at two of the emergent growth segments along with the challenges, opportunities, and value they create for this new era of business-IT relationships.

 

Security and the Internet of Things

 

Gartner projects an almost 30-fold increase in the number of installed IoT units (0.9 billion to 26 billion) between 2009 and 2020. The data collected from these devices is an essential component to future IT innovation; however, this technology comes with significant security and privacy risks that cannot be ignored. “Data is the lifeblood of IoT,” states Conner Forrest of ZDNet. “As such, your security implementation for IoT should center around protecting it.”

 

The potential for the IoT remains largely undefined and at risk, especially with 85 percent of devices still unconnected and security threats prevalent. The Intel IoT Platform was designed to address this business challenge. The Intel IoT Platform is an end-to-end reference model that creates a secure foundation for connecting devices and transferring data to the cloud. With this reference architecture platform, countless IoT solutions can be built and optimized with the advantages of scalable computing, security from device to cloud, and data management and analytics support.

 

The Enterprise Investing in Startups

 

2014 represented the biggest year in corporate venture group capital investment since 2000, and this trend is set to continue, according to a MoneyTree Report jointly conducted by PricewaterhouseCoopers LLP, the National Venture Capital Association, and Thomson Reuters.  What is interesting to me is the why. Organizations want and need a critical asset: creative talent.

 

As the term “innovation” runs rampant through the enterprise, CIOs know they must make changes in order to stay fresh and competitive. However, according to Kim Nash of CIO, 74 percent of CIOs find it hard to balance innovation and operational excellence, suggesting that a more powerful approach would be to acquire a startup to capture its talent, intelligence, and creative spirit.

 

While buying a startup is not in every organization’s wheelhouse, some businesses are providing venture capital to startups in order to tap into their sense of innovation. “By making such moves,” explains Nash, “non-IT companies gain access to brand new technology and entrepreneurial talent while stopping short of buying startups outright.”

Leadership Tips For IT Innovation

 

IT’s success in this new environment will not follow a pre-defined formula. In fact, it will rely on new skills and an evolving partnership between business and IT. For this reason, Intel partnered with The IT Transformation Institute to present the Transform IT Show. Transform IT is a web-based show featuring in-depth interviews with business executives, IT leaders, and industry experts to shed light on what the future holds for businesses and the IT organizations that power them. Most importantly, the show highlights advice for future leaders on how to survive and thrive in the coming era of IT.

 

I hope you enjoy our guests and can apply the insights you gain from the Transform IT Show. Join this critical conversation by connecting with me on Twitter at @chris_p_intel or by using #TransformIT.

Read more >

Hardware Hacking with Rowhammer

Rowhammer represents a special case of vulnerability exploitation: it accomplishes something very rare by attacking the hardware itself. It takes advantage of physics at the nano-scale in a very specific architectural structure present in some designs of computer memory, allowing attackers to change bits of data in sections of memory they should not have access to. That may sound trivial, but don’t underestimate how flipping bits at this level can result in tremendous risk. Doing so could grant complete control of a system and bypass many of the security controls that exist to compartmentalize traditional malicious practices. Rowhammer proves that memory hardware can be manipulated directly.
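
The underlying technique is repeated, rapid activation of DRAM rows: an attacker reads two “aggressor” addresses that map to different rows in the same memory bank, over and over, flushing them from the CPU cache each time so that every read reaches physical DRAM; after enough activations, charge can leak from cells in a neighbouring “victim” row and flip its bits. The C sketch below is only an illustration of that access pattern, with function and parameter names of my own choosing. It is not the Google Project Zero exploit code; it omits the physical-address mapping and victim-row checking a real experiment requires, and it will not flip bits on hardware with current mitigations.

    /* Illustrative sketch of the classic Rowhammer access pattern.
     * row_a and row_b are placeholders for addresses that map to two
     * different rows within the same DRAM bank. */
    #include <emmintrin.h>   /* _mm_clflush (SSE2) */
    #include <stdint.h>

    static void hammer(volatile uint8_t *row_a,
                       volatile uint8_t *row_b,
                       unsigned long iterations)
    {
        for (unsigned long i = 0; i < iterations; i++) {
            (void)*row_a;                      /* activate the first aggressor row  */
            (void)*row_b;                      /* activate the second aggressor row */
            _mm_clflush((const void *)row_a);  /* evict both lines from cache so    */
            _mm_clflush((const void *)row_b);  /* the next reads go out to DRAM     */
        }
    }

In the published research, a loop like this is run for hundreds of thousands of iterations and the surrounding rows are then scanned for unexpected bit flips.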

 

In the world of vulnerabilities there is a hierarchy, ranging from easy to difficult to exploit and from trivial to severe in overall impact. Generally, hacking data is easiest, followed by applications, operating systems, firmware, and finally hardware. This is sometimes referred to as the ‘stack’ because it reflects how systems are architecturally layered.

 

The first three areas are software: they are portable and dynamic across systems, but subject to heavy scrutiny by most security controls. Trojans are a good example of how data can be modified and easily distributed across networks; such manipulations are relatively exposed and easy to detect at many different points. Applications can be maliciously written or infected to act in unintended ways, but pervasive anti-malware is designed to protect against such attacks and is constantly watchful. Vulnerabilities in operating systems provide a means to hide from most security tools, open up a bounty of potential targets, and offer a much greater depth of control. Knowing the risks, OS vendors are constantly identifying problems and sending a regular stream of patches to shore up weaknesses, limiting the viability of continued exploitation. It is not until we get to firmware and hardware that most of the mature security controls drop away.

 

Firmware and hardware, residing beneath the software layers, tend to be more rigid and represent a significantly greater challenge to compromise and to attack at scale. However, success at these lower levels means bypassing most of the detection and remediation controls that live above, in software. Hacking hardware is very rare and intricate, but not impossible. The level of difficulty tends to be a major deterrent, and the ample opportunities and ease that exist in the software layers are usually more than enough to keep hackers working with easier exploits in pursuit of their objectives.

[Figure: Attackers move down the stack]

Attackers are moving down the stack. There are tradeoffs to attacks at any level. The easy vulnerabilities in data and applications yield far fewer benefits for attackers in terms of remaining undetected, persisting after actions are taken against them, and the overall level of control they can gain. Most security products, patches, and services work at this level and have been adapted to detect, prevent, and evict software-based attacks. Due to the difficulty and lack of obvious success, most vulnerability research doesn’t explore much in the firmware and hardware space. This is changing. It is only natural that attackers will seek to maneuver where security is not pervasive.

 

Rowhammer began as a theoretical vulnerability, one with potentially significant ramifications. To demonstrate its viability, the highly skilled Google Project Zero team developed two exploits that showed how an attacker could gain kernel privileges. The blog from Rob Graham, CEO of Errata Security, provides more information on the technical challenges and details.

 

Is Rowhammer an immediate threat? Probably not. Memory vendors have been aware of the issue for some time and have introduced new controls to undermine the current techniques. But it shows, at a practical level, how hardware can be manipulated by attackers and, at a theoretical level, how such manipulation could have severe consequences that are very difficult to protect against.

 

As investments in offensive cyber capabilities from nation states, organized crime syndicates, and elite hackers-for-hire continue to grow, new areas such as hardware vulnerabilities will be explored and exploited. Rowhammer is a game-changer in how it will influence the direction of vulnerability research. It is breaking new ground that others will follow, eventually leading to broad hardware vulnerability research across the computing products that influence our daily lives. Hardware and firmware hacking is part of the natural evolution of cybersecurity, and therefore a part of our future we must eventually deal with.

 

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

 

Read more >