Recent Blog Posts

January 2014 Intel® Chip Chat Podcast Round-up

In January, Chip Chat continued archiving OpenStack Summit podcasts. We’ve got episodes covering enterprise deployments for OpenStack and key concerns regarding security and trust, as well as software as a service and utilizing OpenStack to streamline compute, network and storage. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!


Intel® Chip Chat:

  • Commercial OpenStack for Enterprises – Intel® Chip Chat episode 362: In this archive of a livecast from the OpenStack Summit, Boris Renski, the co-founder and CMO of Mirantis, stops by to talk about the OpenStack ecosystem and the company’s Mirantis OpenStack distribution. Enterprises are now in the adoption phase for OpenStack, with one particular use case standing out for Boris – OpenStack as a data-center-wide web server. For more information, visit
  • OpenStack Maturity and Development – Intel® Chip Chat episode 363: In this archive of a livecast from the OpenStack Summit, Krish Raghuram, the Enterprise Marketing Manager in the Open Source Technology Center at Intel, stops by to talk about working with developers directly to get technologies quickly proven and tested, as well as Intel’s investment and work as an OpenStack Platinum member, the need for developing cloud-aware/stateless apps, and utilizing OpenStack to cut operational and capital expense costs. For more information, visit
  • OpenStack and SaaS Deployments – Intel® Chip Chat episode 364: In this archive of a livecast from the OpenStack Summit, Carmine Rimi, the Director of Cloud Engineering at Workday stops by to talk about the evolution of software as a service, as well as scalability and reliability of apps in a cloud environment. Workday deploys various finance and HR apps for enterprises, government and education and is moving its infrastructure onto OpenStack to deploy software-defined compute, networking, and storage. For more information, visit
  • OpenStack and Service Assurance for Enterprises – Intel® Chip Chat episode 365: In this archive of a livecast, Kamesh Pemmaraju, a Sr. Product Manager for OpenStack Solutions at Dell, stops by to talk about a few acute needs when deploying OpenStack for enterprises – security, trust, and SLAs – and how enterprises can make sure their workloads are running in a trusted environment via the company’s work with Red Hat and Intel® Service Assurance Administrator. He also discusses the OpenStack maturity roadmap, including the upgrade path, networking transitions, and ease of deployment. For more information, visit

Read more >

Data Privacy Day 2015: Reinterpreting Fair Information Practice Principles

By Paula J. Bruening, Senior Counsel, Global Privacy Policy, Intel Today is Data Privacy Day, an occasion when businesses, governments, regulators and advocates around the world recognize and highlight the importance of data protection and privacy. Intel helped bring Data … Read more >

The post Data Privacy Day 2015: Reinterpreting Fair Information Practice Principles appeared first on Policy@Intel.

Read more >

Executives Must Manage Cyber Risks Differently in 2015

The Sony breach should be a wake-up call for big enterprises worldwide.  Not only was it a massive loss of intellectual property, but it took the international stage with geopolitical extortion, and it even stepped beyond the boundaries of the cyber world to include threats of harm to employees and patrons.  It is definitely a dark set of events, and one that will likely be repeated in the future by a variety of different threat agents against other organizations.  This is not just a Sony problem or a media-industry problem.  This is a problem that faces every large company, industry, and government.


The effects can be blinding.  As recently reported, Sony’s critical systems won’t be back online until February.  Executives and board members must consider the fact that Sony has tremendous resources working to get the company back up and running, yet vital systems may be down for another month.  Attackers can be incredibly difficult to evict.  They dig in, disrupt attempts to deny them access, and leave hidden backdoors for later.  Repairing the damage can be time-consuming and meticulous work, even with proper backups and quality IT resources.  Services must be restored in a way that is more protected than before, without sacrificing performance or usability.


The lesson to all: your company’s operational availability, among other things, can be severely affected over a long period of time, even if you have substantial resources.  2015 may be a defining year for many organizations in protecting against and recovering from cyber-attacks.  Be ready. Manage your risks professionally.



Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts



Read more >

The Essence of Cloud: Beyond Applications, Deliver Services?

You can look at cloud in a variety of ways.

I typically recognize two fundamentally different ways of looking at cloud. One I call an IT-oriented cloud, the other, a user-oriented cloud.

An IT-oriented cloud is one where infrastructure is provisioned to facilitate the installation of an application as and when a new instance of that application is required. It consists of automating the provisioning of the appropriate number of virtual and/or physical machines with the right configurations, and connecting those through secured networking with appropriate storage capacity. The installation of the target application can also be automated. But ultimately you are talking about the installation of an application for x users. You are looking at the cloud as an easy way to provision infrastructure. In other words, you are using your cloud as IaaS.


Let me take an example that we all know. If you are part of IT and have to provision an Exchange server for 5,000 users, an IT-oriented cloud will do the job. It will provision the right amount of physical and virtual servers, set up the databases and the connections between the systems, and install the appropriate Exchange modules in the right place. Exchange is now available and you can start configuring users. In this case you request an application.


But what if you happen to be a manager in the business and have a new employee starting on Monday? You may want to make him feel at home in his new job by setting up his mailbox and sending him a welcome message even before he is really onboard. You provision one mailbox. In most cases there is no need to provide more hardware or install software; you just configure the mailbox of one user on an already provisioned environment. Obviously, if your request happens to be for the 5,001st mailbox, the environment may have to provision a second Exchange environment, but this is hidden from you. You request a service. This is a completely different way to look at cloud. From a user perspective, cloud is a SaaS service. When you request a new user, you do not care what that implies for the underlying environment; you are just interested in getting your seat.

Cloud Enabling Legacy Applications

Let’s now assume you did set up a private cloud environment. The first question is: which applications should you transfer to that cloud environment – legacy applications or new developments? And it’s a really good question.


If you decide on legacy applications, you may want to think about choosing applications that will truly benefit from cloud. There are two main reasons why an application might benefit from moving to cloud. The application may have varying usage patterns requiring quick ramp-up and ramp-down of capacity over time, or the application may have to be configured for many users. The cloud may not add much value for applications that have stable, consistent usage, although it may facilitate the quick delivery of the appropriate infrastructure and so make life easier for the IT department.


The first can be addressed with a cloud in which you can provision applications; the second requires the provisioning of services. Let’s review what characteristics the application needs to exhibit in each situation.

Application Provisioning

I suggested it makes sense to migrate an application with varying usage patterns to cloud. Why? We all have our frustrations when an application responds very slowly due to the number of parallel requests. Cloud can address this by initiating a second instance of the application when performance degrades. Using a load balancer, requests can be routed to either of the instances to ensure appropriate response times.


Now, what I just wrote carries an assumption: that multiple instances of the application can run together without affecting the actions performed by the application. If your application is a web server managing the display of multiple web pages, there is obviously no issue at all. But on the other hand, if your application is an order management system, things may be a little trickier. You will need to maintain one single database to ensure all users have access to all orders in the system. So, the first question is whether the bottleneck is your application or the database. In the latter case, creating two instances of the application won’t solve the problem. You will first have to work on the database, and maybe create a database cluster, to remove the bottleneck. Once that is done, if the problem remains, you may look at creating multiple instances of the application itself.
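The routing idea above can be sketched in a few lines. This is a minimal illustration, not a real load-balancer API: requests are distributed round-robin across however many application instances currently exist, while (as discussed) all instances would share one database so every user sees every order. The class and instance names are hypothetical.

```python
# Minimal round-robin load-balancing sketch. Each incoming request is
# handed to the next application instance in rotation; the shared
# database (not modeled here) is what keeps the instances consistent.
from itertools import cycle


class RoundRobinBalancer:
    def __init__(self, instances):
        # cycle() yields the instance list endlessly, in order.
        self._pool = cycle(instances)

    def route(self, request):
        """Pick the next instance in rotation for this request."""
        instance = next(self._pool)
        return f"{instance} handles {request}"


balancer = RoundRobinBalancer(["app-1", "app-2"])
print(balancer.route("order-101"))  # app-1 handles order-101
print(balancer.route("order-102"))  # app-2 handles order-102
print(balancer.route("order-103"))  # app-1 handles order-103
```

In practice the balancer would also run health checks and session affinity, but the core contract is the same: the client never needs to know how many instances are behind it.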


Now, realize that duplicating the application or some of its components in case of increased traffic may require that you have a flexible licensing scheme for the application, the middleware used, and potentially the database. Ideally you would like a pay-per-use model in which you only pay license fees when you actually use the software. Unfortunately, many ISVs have not yet developed that level of flexibility in their license schemes.


From an automation perspective, you will have to develop the scripts for provisioning an application instance. Ideally you will equip that application instance with a probe analyzing its responsiveness in real time. You will then develop the rules that determine when it makes sense to create a second instance. With that second instance comes the configuration of the load balancer.
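A scale-out rule of the kind described can be sketched as follows. Everything here is an assumption for illustration – the threshold, the instance ceiling, and the function names are hypothetical, not taken from any particular cloud platform; a real system would feed the rule from the responsiveness probe and trigger actual provisioning scripts.

```python
# Sketch of an autoscaling decision rule driven by a responsiveness probe.
# samples_ms is a list of recent response-time measurements (milliseconds)
# reported by the probe attached to the application instances.

RESPONSE_TIME_LIMIT_MS = 500  # assumed threshold for "performance degrades"
MAX_INSTANCES = 4             # assumed ceiling on instances


def should_scale_out(samples_ms, instance_count):
    """Fire when average response time exceeds the limit
    and we have not yet hit the instance ceiling."""
    if not samples_ms or instance_count >= MAX_INSTANCES:
        return False
    avg = sum(samples_ms) / len(samples_ms)
    return avg > RESPONSE_TIME_LIMIT_MS


def reconcile(samples_ms, instances):
    """Add one instance (which would also mean registering it with the
    load balancer) when the rule fires; otherwise leave the pool alone."""
    if should_scale_out(samples_ms, len(instances)):
        instances = instances + [f"app-{len(instances) + 1}"]
    return instances


instances = ["app-1"]
instances = reconcile([620, 710, 580], instances)  # degraded: scale out
print(instances)  # ['app-1', 'app-2']
```

The reverse rule – shutting instances down when load drops – would mirror this logic, which is the automated shut-down mentioned below.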


All this should be transparent to the end-user. It’s the IT department that manages the instances created, including the automated or manual shut-down of instances when they are no longer needed.

Service Provisioning

Service provisioning requires a much greater adaptation of the application. Indeed, you now expect to automatically perform a number of tasks typically performed manually by the service desk. So, the first point to check is whether a way exists to initiate the configuration transactions via APIs or any other means. Can the appropriate information be transferred to the application? Is it possible to get the actual status of the request back at completion, etc.?


Indeed, to set-up the service provisioning, you will have to create a number of workflows that automate the different processes required to configure a user, to give him access to the environment, etc.


When the business user requests the provisioning of a mailbox, for example, he will be asked to provide information. That information will then be automatically transferred to the application so the configuration can take place. In return, the application will provide the status of the transaction (succeeded or failed, and in the latter case preferably the reason for the failure), so the cloud platform can inform the user and retain the status and the necessary information to access the service once provisioned.
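That request/status round trip can be sketched as a small workflow. The `provision_mailbox` function below is a stand-in for the application's configuration API – it is not a real Exchange or vendor interface, and all names and the `example.com` domain are illustrative assumptions.

```python
# Sketch of a service-provisioning workflow: the cloud platform collects
# the request, hands it to the application via an API stub, and relays
# the returned status (including a failure reason) back to the requester.

def provision_mailbox(request):
    """Stand-in for the application's configuration API.
    Succeeds unless the mailbox already exists."""
    if request["user"] in provision_mailbox.existing:
        return {"status": "failed", "reason": "mailbox already exists"}
    provision_mailbox.existing.add(request["user"])
    return {"status": "succeeded", "address": request["user"] + "@example.com"}


provision_mailbox.existing = set()


def handle_request(user):
    """Workflow step: submit the request, then report back to the user."""
    result = provision_mailbox({"user": user})
    if result["status"] == "succeeded":
        return f"Mailbox ready: {result['address']}"
    return f"Provisioning failed: {result['reason']}"


print(handle_request("new.hire"))   # first request succeeds
print(handle_request("new.hire"))   # duplicate is rejected with a reason
```

The key design point is that the workflow never touches the application's internals; it only exchanges a request and a status, which is what makes the encapsulation described next possible.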


What is important here is that the “services” delivered by the application are accessible. Often companies create web services to interface between the cloud environment and these applications, shielding the users from changes made in the applications. This allows them, once the application is encapsulated, to transform it and make it more cloud friendly without the user being aware of the changes implemented. You may want to think about such an approach if you plan to transform or re-architect your application to make it more cloud friendly.  Obviously some applications may have both characteristics (variability in workload and user configurations), in which case both questions should be asked.

Should I start with cloud enabling legacy?

Having discussed the two key reasons why you would want to bring an existing application to the cloud, the question remains: should you start by taking an existing application and transforming it, or should you rather keep your legacy environment as is and surround it with new functionality specifically developed for the cloud? Frankly, there is no single answer to this question. It really depends on where your application is in its lifecycle. Obviously, if you plan to replace that application in the foreseeable future, you may not want to take the time and effort to adapt it. On the other hand, if this is a critical application you plan to keep for quite a while, you probably would. Make sure of one thing though: build any new functionality with cloud in mind.

Read more >

Urban Growth and Sustainability: Building Smart Cities with the Internet of Things


This is the first installment of a four-part series on Smart Cities with Dawn Olsen (#1 of 4).

Click here to read blog #2

Click here to read blog #3

Click here to read blog #4

Not long ago, the human race hit a significant milestone. In 2009, for the first time in our history, more of us lived in urban areas than rural. It’s estimated that 54% of today’s global population lives in cities, and this figure is expected to rocket up to 66% by 2050. With this increase in city inhabitants, we’re quickly heading towards the “Megacity” era.  Soon a city with a population of 10 million or more will seem typical.  As these burgeoning metropolises drive industrial and financial growth on a global scale, powerful new economies are emerging and developing around the world.

Despite being financial powerhouses, cities also generate their fair share of problems. For example, they consume two thirds of today’s available energy and other valuable resources, leaving only the remaining third for the millions who still live in smaller settlements and rural areas. As urban populations grow, it is vital to make sure that our cities are ready to deal with more people, more traffic, more pollutants and more energy use in a scalable and sustainable way. In short, we need our cities to be smarter.

This is the challenge that gets me out of bed in the morning. I’m excited to be part of Intel’s smart cities initiative, which is focused on putting the Internet of Things (IoT) to use in any way that will benefit urban societies.


The IoT blocks that build smart cities may include anything from technical components like sensors that measure air quality or temperature, to end-to-end city management solutions that control traffic flow based on analysis of citywide congestion data. The combinations in which these blocks can be applied are almost limitless, and we are exploring innovative new applications to improve quality of life, cost efficiencies and environmental impact.  For example, the work Intel is undertaking with the City of San Jose, California, uses IoT technology to build more sustainable infrastructure, and the project has been recognized by the White House as part of its Smart America initiative. 

In this blog series (this is the first of four posts), I’ll be sharing my thoughts on some of the key areas in which we’re driving the smart cities of the future, based on innovative trials and deployments already completed, or going on now. My blog posts will cover three main areas:

  • Smart security and the evolving challenge of safeguarding our increasingly connected cities
  • Technology driving innovation in traffic and transport management
  • Sustainable solutions to the problem of rising air pollution.

Check back soon for my next post (next Thursday – 1/15/2015), which will explore how Intel’s smart city initiatives can help enhance citizens’ safety and security. I’ll give you a clue: it’s not by just rolling out more CCTV cameras.

Let’s get smart.


To continue the conversation, let’s connect on Twitter @DawnOlsen

Dawn Olsen

Global Sales Director

Government Enterprise, Intel

Read more >

Improving Air and Water Quality in Smart Cities

This is the fourth and final installment of a four-part mini-blog series on Smart Cities with Dawn Olsen (#4 of 4).

Click here to read blog #1

Click here to read blog #2

Click here to read blog #3

In this blog series, we’ve been looking at the ways in which Intel’s smart cities initiatives are using the Internet of Things (IoT) to address the challenges faced by growing cities.

We first covered smart security, one of the most important areas of IoT technology for city authorities policing events and managing crowds. Then we looked at how Intel is implementing smart transport to alleviate congestion and improve traffic flow, which is especially important for emergency services routing. In this final post, we cover a topic that goes hand in hand with the issues of overcrowding and congestion in densely populated urban areas: the challenge of minimizing pollution and improving air and water quality in our cities.

There are many regions today where pollution is a well-documented problem. In China, for example, blankets of smog are a familiar sight over its metropolises. In New Zealand, water quality is a growing concern, with fresh water supplies susceptible to pollutants such as sediment and pathogens.  Furthermore, regulations have been introduced to limit the use of wood burners, which release polluting particles into the air. 

While identifying pollution problems is the easy part, taking a timely and informed action to improve air and water quality is the real challenge that local authorities face. 


Intel has invested in smart city initiatives to build end-to-end solutions that utilize a full range of IoT tools.  City authorities can now monitor pollution levels by analyzing sensor data and automating real-time responses to the changing environment – all from a single management system. 

The recent investment made in the London-based Intel Collaborative Research Institute for Sustainable Connected Cities is helping lay the technological foundations for smart cities. For example, connected solutions can deliver greater efficiencies in cities like London where old utilities infrastructure is difficult to maintain. Systems that account for local water demand, combined with up-to-date weather data, enable authorities to adapt their water systems accordingly.  As a result, this increases the lifetime of the infrastructure while minimizing the risk of leaks, flooding or contamination.  

In cities like Dublin, Ireland, Intel is working on several pilot programs to improve air quality.  These initiatives have the potential to connect the full spectrum of devices through the IoT.  Just imagine: when an individual with asthma is planning her morning jog, she can now use an app to find a route with the best air quality.  If pollution levels rise along the way, an alert can be triggered and a new route can be automatically suggested – all this from a handheld device!

In addition to the tools and technology that power the IoT, education will be crucial to the continued success of these programs. We at Intel are committed for the long term and know that even the best smart city solutions are only effective with the proper support before, during and after implementation. From training city authorities to make the most of their water management systems, to getting children involved on the ground – as in Christchurch, New Zealand, where school pupils conducted water quality tests on the Avon River – our focus is on providing adequate support.

As our cities grow, so too does our responsibility to deal with the mounting pressures of more people, traffic and pollution. That’s the challenge Intel is helping city leaders to address through smart cities initiatives.  Let’s continue to work together to build more efficient and well-connected management systems for the smart cities of the future. After all, protecting our environment, safety and overall well-being is just the smart thing to do. 

To continue the conversation, let’s connect on Twitter @DawnOlsen

Dawn Olsen

Global Sales Director

Government Enterprise, Intel


Read more >

What Does CIO Reporting Structure Mean for IT at Large?

A previous manager of mine used to say that structure follows strategy. So it seems logical to conclude that a business’s organizational structure contains significant insights about – and implications for – the role of IT within that company.


Gone are the more traditional expectations of IT as a cost center, and with them the expectation that the CIO would report directly to the CFO. With every new reporting structure that emerges, a new conversation about strategy and importance is started. For example, here are a few that I ran across on Twitter:


When the CIO reports to the CEO, IT has a chance at being a valuable part of the business.

— Scott W. Ambler (@scottwambler) September 18, 2014


#CIO reporting to the #CMO? It may be a hot trend but is the wrong strategic move!

— Jeffrey Fenter (@JeffreyFenter) March 9, 2014


If a CIO reports up into the CFO, the CFO must be willing to sacrifice finance risk to make systems risk the priority. Can that ever happen?

— Wes Miller (@getwired) September 26, 2014

With IT on its way to being seen as a driver, an enabler, and – most importantly – a partner of the business, it seems that the CIO’s natural evolution would be to report directly to the CEO. This relationship may solidify the business’s view of IT as a strategic differentiator – a segment of the business worthy of the CEO’s direct attention.


In a Gartner report released this past October, research showed that CIOs are already pulling up a prominent seat at the proverbial table, with 41% reporting directly to their CEO.


This made me wonder – who do the readers of the Intel IT Peer Network and followers of the Intel IT Center (LinkedIn, Twitter, Google+, Facebook) report to? So we created a poll to discover if this reporting trend extended to our community of IT leaders as well.


The results were interesting – the majority of our readers responded that their CIOs report directly to their CEO, while the traditional CIO/CFO model was cited as the second most common reporting structure.




In order to continue to understand the landscape of reporting structure, I’ve left the poll open for further votes – let me know who your CIO reports to, and I’ll check in again in a few months.


Connect with me in the comments below or on Twitter (@chris_p_intel) – I’d love to know how you view organizational structure and its impact on IT (or vice versa).


Does who the CIO reports to imply anything about the importance of the role or is it simply a meaningless line on an org chart?

Read more >

Shopper at the Center: Getting Ready for the Radical Transformation of Retail

Imagine entering a retail store that responds to your individual needs, creating a truly personalized shopping experience. With interactive shelves that offer products based on your previous purchasing patterns—saving you time and endless searching—or digital signage that surrounds you with … Read more >

The post Shopper at the Center: Getting Ready for the Radical Transformation of Retail appeared first on IoT@Intel.

Read more >