
RECENT BLOG POSTS

SDI: The Journey vs. the Destination

I just finished Words of Radiance, book two of The Stormlight Archive series by Brandon Sanderson (now I have to wait for book three). In this series, the main characters all adhere to an oath they must commit themselves to in order to be part of a special group. The oath goes like this:


“Life before Death.
Strength before Weakness.
Journey before Destination.”

 

One part of this oath, “Journey before Destination,” made me think about some of the challenges IT organizations face in today’s world. While those of us who work in the industry to provide IT solutions care a lot about the destination, the solutions are not always there for the journey!

 

Today, I talk to lots of customers about the concept of software-defined infrastructure (SDI). SDI really is the destination: a hybrid cloud in which workloads are controlled through an end-to-end orchestration layer that allows you (the customer) to institute and enforce policies for your application workloads.

What a great idea! When I ran IT infrastructure in the past, this is exactly where I wanted to be from an infrastructure perspective. To think I’d have the ability to manage and optimize my resources with fewer people needed to control and manage them, and that I could enforce and comply with the controls we had in place while utilizing all my resources at their optimal level. This is the dream of most IT infrastructure folks.

 

So where is SDI today, really?


Ultimately, we need to think about where we are on that journey toward SDI. Most organizations today could, to some degree, check off these steps, because the tools exist:

 

  • Virtualized resources with compute, storage, and networking (all in different levels of maturity): check
  • Created pools of resources with various products available to do this: check
  • Provided some level of telemetry (information, from the hardware to the software, on health and performance of the platform): check
  • Automated and orchestrated the use of these resources to ensure policies and workload management: check
  • Managed service levels through IT service management software: check

 

Sounds like it’s all in place, right? Well, kind of. The challenge is that we either need to accept a vertical solution wholesale, with one or two add-ons, or we need to assemble it ourselves. On both fronts some integration is necessary and, granted, many vertical solutions are not yet complete. That doesn’t mean there aren’t some good ones out there; rather, some glue is still necessary to make it all work.
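
To make the “glue” idea a bit more concrete, here is a minimal sketch of the kind of policy check an orchestration layer applies over pooled, virtualized resources. It is purely illustrative Python; the host names, telemetry numbers, and thresholds are invented for the example, and real SDI stacks do this at far greater scale with much richer telemetry.

    # Hypothetical illustration only: a few lines of "glue" that take telemetry
    # from pooled, virtualized hosts and enforce a simple placement policy.
    # Host names, telemetry values, and thresholds are invented.
    from dataclasses import dataclass

    @dataclass
    class HostTelemetry:
        name: str
        cpu_utilization: float   # 0.0 - 1.0
        memory_free_gb: float

    # Telemetry as it might arrive from the virtualization layer (made-up numbers).
    pool = [
        HostTelemetry("host-a", cpu_utilization=0.82, memory_free_gb=12),
        HostTelemetry("host-b", cpu_utilization=0.35, memory_free_gb=64),
        HostTelemetry("host-c", cpu_utilization=0.55, memory_free_gb=8),
    ]

    def place_workload(pool, memory_needed_gb, max_cpu=0.70):
        """Pick the least-loaded host that satisfies the policy, or report failure."""
        candidates = [h for h in pool
                      if h.cpu_utilization <= max_cpu and h.memory_free_gb >= memory_needed_gb]
        if not candidates:
            return None  # policy cannot be met; a real orchestrator would queue or alert
        return min(candidates, key=lambda h: h.cpu_utilization)

    target = place_workload(pool, memory_needed_gb=16)
    print(f"Place workload on: {target.name if target else 'no compliant host'}")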

 

The truth is that while the destination matters, it’s important to think about the journey in order to create a winning strategy for SDI. Innovation is necessary and plays a huge role in that strategy looking forward, and also in our efforts to “create the glue.” Yet IT budget allocations are still heavily weighted toward maintenance. In a 2013 Forrester survey of IT leaders at more than 3,700 companies, respondents estimated that an average of 72 percent of their spending goes to “keep-the-lights-on” functions supporting ongoing maintenance, while only 28 percent goes toward new projects.[i] This is still consistent with what I hear from many of the IT organizations I talk to.

 

Ultimately, the SDI journey is really where the rubber meets the road. And for most enterprises day to day, the journey is still underway.

 

So where is your organization on that journey?

 

Ed Goldman

@EdLGoldman


Be sure to visit the Intel® IT Center to get the latest resources and expert insights, and check out the planning guide to find out how you can optimize the data center to move toward SDI.



[i] Bartels, Andrew, Christopher Mines, Joanna Clark. Forrsights: IT Budgets and Priorities in 2013. Forrester (April 25, 2013). http://www.forrester.com/Forrsights+IT+Budgets+And+Priorities+In+2013/fulltext/-/E-RES83021?isTurnHighlighting=false&highlightTerm=Forrsights:%2520IT%2520Budgets%2520And%2520Priorities%2520In%25202013

Read more >

Rearchitecting the Data Center for the Internet of Things

In the future, kitchen appliances will talk to you. The refrigerator will let you know when you’re low on milk and whether you have all the ingredients for dinner. The cooktop will display a recipe from the Internet and text you when your pasta water has reached the boiling point. Even better, your appliances will talk to each other—and they won’t burn the toast. When the toast is just about finished, the eggs will start to fry. Such is the vision of what’s possible with the Internet of Things (IoT).

 

Smart kitchens, smart everything


Smart kitchens, and all objects that are connected by the IoT, work similarly. The object—a loaf of bread, a head of lettuce, or a bottle of ketchup—will be labeled with an RFID tag and read with an RFID reader, and the information transmitted to a computer. Or a connected device with a built-in sensor will transmit information via the Internet to a computer. Either way, all of the information sent by readers and sensors is collected, and some of it is processed by data centers and in the cloud.


For the IoT to reach its full potential, innovation must continue its forward momentum at the device level, the data center level, and the network level. As it does, there will undoubtedly be challenges for the companies developing the technologies around the IoT. In fact, Gartner has identified a number of issues that will have to be addressed as the technology and services around smart homes and other connected systems mature.

 

Pressures on the data center

 

The volume of data generated by mobile devices and the IoT will place heavy demands on data centers. Servers will be pushed to the limit, and network bandwidth will have to be scaled. Gartner says that by 2020, there could be 26 billion units (smart-sensor devices) worth potentially $300 billion in new incremental revenue. Much of the machine-to-machine data will be processed locally at the edge of the network, but some of that will be sent to the data center for aggregation and further analysis.
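
To picture that split between the edge and the data center, here is a minimal sketch of an edge node that summarizes raw machine-to-machine readings locally and forwards only a compact aggregate upstream. It is purely illustrative Python; the sensor values and the send_to_data_center stand-in are invented for the example.

    # Illustrative sketch only: an "edge" node summarizes raw sensor readings locally and
    # forwards only a compact aggregate to the data center. The sensor values and the
    # send_to_data_center stand-in are invented.
    import random
    import statistics

    def read_sensor_samples(n=1000):
        # Stand-in for machine-to-machine data arriving at the edge (simulated temperatures).
        return [20.0 + random.gauss(0, 1.5) for _ in range(n)]

    def aggregate(samples):
        # Reduce a thousand raw readings to a handful of numbers worth shipping upstream.
        return {
            "count": len(samples),
            "mean": round(statistics.mean(samples), 2),
            "max": round(max(samples), 2),
            "out_of_range": sum(1 for s in samples if s > 25.0),
        }

    def send_to_data_center(summary):
        # In a real deployment this would be an HTTPS or MQTT call; here we just print it.
        print("forwarding to data center:", summary)

    send_to_data_center(aggregate(read_sensor_samples()))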


The IoT will also interact with humans on their mobile devices. By the end of 2014, the number of mobile-connected devices will exceed the number of people on Earth, according to the forecast for mobile data traffic found in the Cisco Visual Networking Index. That same report predicts that while mobile data traffic reached a volume of 1.5 exabytes per month in 2013, by 2018 that amount will grow tenfold to surpass 15 exabytes per month. And some 96 percent of that data will be accounted for by smart devices by 2018. Both networks and storage are among IT’s biggest challenges now, and it’s not going to get any easier.

 

Intel addresses the opportunities and challenges of the IoT

For Intel, this is a chance to move innovation forward. For example:


  • Intel produces scalable microprocessors, such as the single-core, single-threaded Intel® Quark™ SoC X1000 for small-form-factor designs; the Intel Atom™ processor E3800 product family for low power and thermal efficiency; and the recently announced Intel Edison processor, designed specifically to enable the IoT. The processor boasts wireless capabilities and is the size of a postage stamp.
  • As bandwidth requirements increase, Intel continues to deliver network connectivity for the IoT with low-power, small-footprint cellular controllers and platforms that support a wide range of I/O interfaces that can connect to modules supporting cellular, Bluetooth*, ZigBee*, Wi-Fi, and other wireless technologies.
  • Together Intel and McAfee are helping to mitigate the security risks of IoT connectivity.
  • Cloud computing and big data analytics will be foundational technologies helping businesses connect to devices and sensors, process data, and then analyze it for insight. Intel infrastructure technology, such as Intel Xeon processors, Intel Ethernet solutions, and Intel Solid-State Drives, can provide the underpinnings for your next-generation data center.

 

Clearly, smart kitchens are only the tip of the iceberg. Intelligent refrigerators can make an impact on families, but smart homes, businesses, schools, public utilities, government buildings, and installations of all kinds—connected by the Internet of Things and facilitated by cloud computing—can literally change the world.

 

Is your organization ready? Join the #IoT conversation in the comments below. Be sure to check out Intel’s vision for data center architecture of the future from Intel futurist Steve Brown. And read the Intel IT Center planning guide to find out how to optimize your data center so you can deliver greater innovation and embrace the new opportunities that the IoT can bring to your business.

 

Dylan Larson

@idlarson

#ITCenter #IoT #DataCenter

Read more >

The Data Stack – October 2014 Intel® Chip Chat Podcast Round-up

In October we continued to archive livecast episodes from the Intel Developer Forum with episodes covering robotics and the progress towards artificial intelligence, software defined infrastructure and intelligent data centers, and emerging technologies in the cloud computing industry. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!

 

Intel® Chip Chat:

  • Meet Jimmy, the Robot Intel Employee – Intel® Chip Chat episode 347: In this archive of a livecast from IDF, Intel Futurist Brian David Johnson stops by with a special guest, Matt Trossen, the CEO of Trossen Robotics. We’re talking to them about the 21st Century Robot Project, which uses open source hardware and software and 3D printing to make customizable robots driven by apps and connected to other devices for an ecosystem of computation. Trossen Robotics makes the Intel® Edison powered Humanoid Exoskeleton and a 3D printer makes the robot skin, giving makers a platform to start from when innovating with robotics. You can order a robot development kit (and check out the book) at www.21stcenturyrobot.com and learn more at www.trossenrobotics.com.
  • Intelligent Infrastructure for the Digital Services Economy – Intel® Chip Chat episode 348: In this archive of a livecast from the Intel Developer Forum, Jonathan Donaldson (@jdonalds), the GM of Software Defined Infrastructure in the Cloud Platforms Group at Intel, stops by to talk about building intelligent infrastructure for enhanced platform and capabilities awareness, as well as dynamic workload placement and configuration. In a digital services economy, intelligent infrastructure is critical for development cycles and time to market. For more information, visit www.intel.com and search for software defined infrastructure.
  • Building Data Centers with Intelligence – Intel® Chip Chat episode 349: In this archive of a livecast from the Intel Developer Forum, we’ve got three great interviewees discussing the intelligent data center. Das Kamhout, a Cloud Orchestration Architect and Principal Engineer at Intel; Scott Carlson, an Information Security Architect for PayPal; and John Wilkes, a Principal Software Engineer at Google are on hand to talk about building, scaling, securing, and adding an intelligent software layer to modern data centers. For PayPal, a top priority is how to protect the money flow and retain customer trust. Google focuses on building smart systems that can scale massively and offer high reliability. For more information, visit: intel.ly/orchestration.
  • Next-gen Computing for Enterprises – Intel® Chip Chat episode 350: In this archive of a livecast from the Intel Developer Forum, Paul Miller (@PaulMiller), the founder of Cloud of Data, chats about various hot topics in enterprise computing including orchestration and telemetry for on demand, cost effective workload distribution, personalized medicine and data protection/privacy, and the evolution of public and private clouds and the emergence of containers. For more information, visit www.cloudofdata.com.

Read more >

Moving to the Cloud – Should the CIO Focus More on Systems of Engagements?

In a short video, Geoffrey Moore describes the evolution of focus from Systems of Record to Systems of Engagement. At the end, he highlights the fact that systems of engagement probably require a very different type of IT than systems of record.

 

Systems of record host the key processes and data elements of the enterprise. Most often, they were implemented prior to the year 2000 to ensure enterprise survival in the new millennium. Great effort and vast amounts of money went into implementing these systems and adapting both the enterprise and the software to one another. Since then, these systems have continued to run reliably and support enterprise operations.

 

But a couple of things happened.

 

Users, accustomed to having information at their fingertips through smartphones and other mobile devices, are now asking for access to the systems of record. New interaction mechanisms, such as social media, offer new sources of information that provide a better understanding of market needs, customer demand, and the overall environment in which the enterprise operates. The world is increasingly becoming digital, and the boundaries between business and IT are shrinking as nearly every business interaction these days involves the use of information technology in one way or another.

 

In parallel, time has been shrinking. What we expected to take several hours or days 10 or 15 years ago can now be done in a matter of minutes, thanks to rapid advancements in IT. Hence the new style of IT, as Meg Whitman calls it, is now required to respond to user needs. Cloud is definitely part of this transformation, as it provides enterprises with the responsiveness and agility required to address today’s ever-changing business environment.

 

As enterprises decide to move to the cloud, the question of where to start typically comes up. In a blog entry I published over a year ago, I spoke about five use cases companies could envisage to start their cloud journey. It ultimately depends on the decision to consume services from a cloud environment (be it private, managed, or public). So the question of which application should be moved to the cloud first is raised. Should we start with a system of record or a system of engagement?


Should we start with Systems of Record?

As stated by Geoffrey Moore, most Systems of Record have been in place for the last 10 to 15 years. They are transaction-based and focus on facts, dates, and commitments. They often represent a single source of the truth. Indeed, they typically have been built around a single database containing mostly structured information. Every time information is changed, the event is logged, so one can quickly find out who did what. Data is kept in these systems for extensive periods of time to ensure compliance, while access is regulated and contained. They are the core support systems for the operations of the enterprise.

 

A couple of companies focused on the development of such systems – the best known being SAP and Oracle for their financial and manufacturing systems. Other enterprises may have written their own applications and are left with a small team of knowledgeable resources to maintain them. These systems are considered business critical, as the company can no longer operate without them. They contain the “single version of the truth.” And I can speak from my own experience: even if you disagree with those numbers (for example, if deals have been mis-categorized), you will have great difficulty convincing higher levels of management that the data is incorrect.

 

Some enterprises may require increased flexibility in the use of such systems; they may want increased agility and responsiveness in case of a merger or divestiture. But are these the systems we should migrate to the cloud first?

 

What would be the benefit? Well, we could probably run them more cheaply, and we may be able to give our users the additional levels of responsiveness, agility, and flexibility they are looking for. But on the other hand, we would have to modernize an environment that runs well and supports the business of the enterprise on a daily basis. Or we would have to rebuild a brand-new system of record based on the latest version of the software. That may be the only option, but to me… that sounds risky.

 

I’ve seen a couple of companies do this, but it was mostly in the case of a merger, divestiture, consolidation of systems, or a move to a new data center or IT delivery mechanism. And in most of these cases, it turned out that capabilities available in the cloud – automation, flex-up/flex-down, and service – were not fully taken into consideration during the installation of the application.

 

Now, employees might want access to the systems of record through their mobile devices.  They might want a more friendly user interface.  They might want to combine functions that are separate from the original system. This is a whole different ballgame.

 

Using web services, we could encapsulate the system of record and give users what they want without disrupting the original environment. Over time, we could consider updating or transforming some of the functionality, shutting it down in the original package and replacing it with cloud-based functionality. This reduces risk and shields the end user from the actual package, making it easier to transform the system of record without overwhelming disruptions.
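
As a rough illustration of that encapsulation idea, here is a minimal sketch of a web-service facade written in Python with only the standard library. The legacy_lookup function, the field names, and the port are invented stand-ins; a real facade would call the record system’s own interfaces and preserve its security and audit logging.

    # Illustrative sketch only: a thin web-service facade over a legacy system of record.
    # legacy_lookup, the field names, and the port are invented stand-ins; a real facade
    # would call the record system's own interfaces and keep its security and audit logging.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def legacy_lookup(order_id):
        # Stand-in for a call into the existing system of record.
        return {"order_id": order_id, "status": "shipped", "amount": 125.00}

    class OrderFacade(BaseHTTPRequestHandler):
        def do_GET(self):
            # Expose a mobile-friendly endpoint such as /orders/42 without touching the backend.
            if self.path.startswith("/orders/"):
                body = json.dumps(legacy_lookup(self.path.rsplit("/", 1)[-1])).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), OrderFacade).serve_forever()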


What is different with Systems of Engagement?

Systems of engagement were developed some time after systems of record, so they involve newer technologies. In particular, many of them are built around SOA principles, making them better suited to take full advantage of cloud technology. Their objectives are interaction and collaboration: it’s all about sharing insights, ideas, and nuances. They are used within the frame of business opportunities and projects, making the relationships transient in nature while requiring the responsiveness and agility to be set up quickly. Access is ad hoc and, in many companies, may involve partner-to-customer interaction. Most often, the information is unstructured, which makes search more difficult.

 

Obviously, Systems of Engagement are important, but they do not maintain the critical information needed to run a company. They are important as a mechanism to share information, gain consensus, and make decisions. However, they do not maintain the single source of the truth. That makes them more suitable for experimentation. Their nature, their needs, and the technologies used to build them make them better candidates for migration to the cloud. So, I would suggest that this is where we should start. Of course, we don’t want our end users to be left in the cold if something happens during the migration. But even in the worst-case scenario, people can still use the telephone to exchange information if the system is down for some time.


The importance of data

Tony Byrne argues that Geoffrey Moore simplifies things by creating two clearly different categories. He points out that the issue is probably messier in real life. On the one hand, people are discussing important business decisions in collaboration systems – thereby creating records – while others may want to engage with their colleagues directly from the systems of record. Byrne explains it in simple terms: “Your colleagues are creating records while they engage, and seeking to engage while they manage formal documents and participate in structured processes. Ditto for your interactions with customers and other partners beyond your firewall.”

 

Now, we have been able to trigger functionality from within applications for quite some time.  That’s not the issue. And the use of web services described earlier makes this reasonably easy to implement.

 

The focus of Tony’s discussion is how data can be moved between the systems of record and the systems of engagement. Right from the start, you should think about your data sources and information management. Again, technology exists today to access data within and outside a cloud environment. What’s important is to figure out what data should be used when and where, while ensuring that it is properly managed along the way. If you access and change data in a system of record, do it in such a way that all the checking, security, and logging functionality is respected. But this should be nothing new: companies have been integrating external functionality within their systems of record for years.


Conclusion

When companies look at migrating to the cloud, the question of where to begin is often debated. In my mind, it’s important to show end users the benefits of the cloud early on. That leads me to lean toward starting with systems of engagement, either transforming existing ones or building new ones that will positively surprise users. This will earn their buy-in and give IT more “cloud” credibility to transform the remainder of the IT environment. The real question is: how far do you need to go? Because not everything has to be in the cloud. At the end of the day, you should only move what makes sense.

Read more >

Game Over! Gamification and the CIO

Congratulations to the winner of the CIO Superhero Game!


The first to complete the challenge and become “Super CIO” was Brad Ton of Reindeer Auto Relocation. He was presented with an AWESOME Trophy to display proudly in his office. To win, Mr. Ton had to defeat five evil henchmen and the arch enemy of CIOs everywhere, Complacent IT Guy.


So what is this craziness about CIO Superheroes? Just my dorky way of introducing another one of the challenges impacting the CIO today…Gamification.


Gamifi-what?


Gamification: the use of game thinking and game mechanics in non-game contexts to engage people (players) in solving problems.


According to Bunchball, one of the leading firms in applying gamification to business processes, gamification is made up of two major components. The first is game mechanics – things like points, leaderboards, challenges, and levels – the pieces that make game playing fun, engaging, and challenging. In other words, the elements that create competition. The second component is game dynamics – things like rewards, achievement, and a sense of competition.
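
Reduced to a toy, those two components might look like the following sketch (purely illustrative Python; the activities, point values, and badge thresholds are invented): game mechanics as point rules and a leaderboard, game dynamics as the badges earned along the way.

    # Illustrative sketch only: game mechanics (point rules, a leaderboard, levels) and
    # game dynamics (badges as rewards), reduced to a toy. Activities, point values,
    # and badge thresholds are invented.
    from collections import defaultdict

    POINTS = {"login": 5, "search": 2, "post_blog": 25}                 # mechanics: point rules
    BADGES = [(100, "Acolyte"), (500, "Contributor"), (2000, "Guru")]   # dynamics: rewards

    scores = defaultdict(int)

    def record_activity(user, activity):
        scores[user] += POINTS.get(activity, 0)

    def badge_for(points):
        earned = [name for threshold, name in BADGES if points >= threshold]
        return earned[-1] if earned else "Newcomer"

    for user, activity in [("jeff", "post_blog"), ("jeff", "login"), ("brad", "search")]:
        record_activity(user, activity)

    leaderboard = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    for rank, (user, pts) in enumerate(leaderboard, start=1):
        print(rank, user, pts, badge_for(pts))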


Games are everywhere. People love to compete and people love to play games. What does this have to do with the role of CIO?


Everything!


Want to improve customer engagement? Make it a game! Want employees to embrace a new process? Make it a game! Want to improve performance? Make it a game! Add game mechanics and game dynamics into the next app you are building, layer them on an existing application, and put them in the next process improvement initiative.


Even sites like the Intel IT Peer Network use game mechanics to increase engagement. You earn points for all kinds of activity, which might include logging in multiple times, using search to find content, or posting a blog. I find it interesting that while these points earn you badges and levels, they actually offer minimal intrinsic value. Nevertheless, I found myself disappointed during a recent systems upgrade to have my points reset to zero. Alas, I am an Acolyte again!


Now, back to the CIO Superhero Game.


Recently, I had the opportunity to interview our Superhero CIO winner. Here are just a few of his thoughts surrounding gamification.


What caught your attention enough to want to play the game?


“I really thought that Twitter was a very unique way to play a game.  It was not something I had ever done before.  I’m a frequent reader of all of your writings, so I knew I was in for a learning experience if anything else.  I’m a Twitter addict, so I felt comfortable diving into the CIO world even as someone not extremely knowledgeable on the topic.  Frankly, I’d rather have an hour long dentist appointment than read an instruction manual.  This was easily accessible – right at my fingertips and very self-explanatory.”


Games can engage, games can inspire, games can teach. What is the most important lesson you learned from playing the game?


“While I never truly understood the intricacies of being a CIO, I always appreciated the hard work and dedication it took to get to such a prestigious level.  After going through the CIO Superhero game, I can honestly say that I now genuinely respect it.  The passion behind the game was something I enjoyed.  It wasn’t a bland exercise built with little thought or substance.  I could feel that the game was designed to teach and help grow others into not only understanding new topics previously unknown – but to inspire them into being pro-active in sharing & creating their own ideas.   That is when you know you have something special.  More than the topics themselves, the passion behind what the game was meant to do is what was really able to draw me in.”   


You are not a CIO yourself; do you think gamification of a process would work in your business, and if so, can you give an example?


“Any tool that can supply a different approach to creating a better understanding of a current process is always worth the attempt.  I also think the concept of Gamification is able to provide a different perspective, which can spark new ways to think about old processes. Implementing gamification could highlight the variables within our industry that can, in turn, allow for a more personable approach.  Cost, scheduling, bookings…logistics are important, but the game tailored to our industry could be much more personal and deal directly with relationships of all parties involved in a relocation. Whereas, a typical goal would be to complete an on-time relocation with small out-of-pocket costs, the game’s primary objective would be to receive positive feedback from customers, clients, etc.   Yes, on-time and small cost could equate to this outcome, but not always.  “Defeat the evil henchmen” by coming up with a new idea to improve customer service, for instance.  By defining the game’s objectives from a relationship standpoint, you can spark new and creative ways of thinking.”


So, there you have it.


Gamification – just another element within the myriad of changes impacting the CIO today. It truly is a “game” changer that can increase adoption and engagement across a variety of businesses and processes.


This is a continuation of a series of posts titled “The CIO is Dead! Long Live the CIO!” looking at the confluence of changes impacting the CIO and IT leadership. #CIOisDead. Next up “Faster than a speeding bullet – The Speed of Change”.

Jeffrey Ton is the SVP of Corporate Connectivity and Chief Information Officer for Goodwill Industries of Central Indiana, providing vision and leadership in the continued development and implementation of the enterprise-wide information technology and marketing portfolios, including applications, information & data management, infrastructure, security and telecommunications.


Find him on LinkedIn.

Follow him on Twitter (@jtongici)

Add him to your circles on Google+

Check out his posts on Intel’s IT Peer Network

Read more from Jeff on Rivers of Thought

Read more >

Unleashing the Digital Services Economy

Today Intel delivered a keynote address to more than 1,000 attendees at the Open Compute Project European Summit in Paris. The keynote, delivered by Intel GM Billy Cox, covered Intel’s strategy to accelerate the digital services economy by delivering disruptive technology innovation founded on industry standards. The foundation of that strategy is expanded silicon innovation: augmenting Intel’s traditional Xeon, Xeon Phi, and Atom solutions with new standard SKUs and custom solutions based on specific workload requirements. Intel is also expanding its data center SoC product line with the planned introduction of a Xeon-based SoC in early 2015, which is sampling now and will be Intel’s third-generation 64-bit SoC solution.

 

To further highlight this disruptive innovation, Cox described how Intel is working closely with industry leaders Facebook and Microsoft on separate collaborative engineering efforts to deliver innovative, more efficient solutions for the data center. Cox detailed how Intel and Facebook engineers worked together on Facebook’s delivery of the new Honey Badger storage server for its photo storage tier, featuring the Intel® Atom™ processor C2000, a 64-bit system-on-chip. The high-capacity, high-density storage server offers up to 180 TB in a 2U form factor and is expected to be deployed in the first half of 2015. Cox also detailed how Microsoft has completed the second-generation Open Cloud Server (OCSv2) specification. Intel and Microsoft have jointly developed a board for OCSv2 that features a dual-processor design built on the Intel Xeon E5-2600 v3 series processor, enabling 28 cores of compute power per blade.

 

Collaboration with Open Compute reflects Intel’s decades-long history of working with industry organizations to accelerate computing innovation. As one of the five founding board members of the Open Compute Project, we are deeply committed to enabling broad industry innovation by openly sharing specifications and best practices for high-efficiency data center infrastructure. Intel is involved in many OCP working group initiatives spanning rack, compute, storage, network, C&I, and management, all strategically aligned with our vision of accelerating rack-scale optimization for cloud computing.

 

At the summit, Intel and industry partners are demonstrating production hardware based on our Open Compute specifications. We look forward to working with the community to help push data center innovation forward.

Read more >

Adopting & Enabling OpenStack in the Enterprise: A look at OpenStack Summit 2014

As I discuss the path to cloud with customers, one topic that is likely to come up is OpenStack. It’s easy to understand the inherent value of OpenStack as an open source orchestration solution, but this value is balanced by ever-present questions about OpenStack’s readiness for the complex environments found in telco and enterprise. Will OpenStack emerge as a leading presence in these environments, and in what timeframe? What have early adopters experienced with initial implementations and POCs? Are there pitfalls to avoid, and how can we use those learnings to drive the next wave of adoption?

 

This was most recently a theme at the Intel Developer Forum, where I caught up with Intel’s Jonathan Donaldson and Das Kamhout on Intel’s strategy for orchestration and its effort to apply key learnings from the world’s most sophisticated data centers to broad implementations. Intel is certainly not new to the OpenStack arena, however, having been involved in the community from its earliest days and, more recently, having delivered Service Assurance Administrator, a key tool that gives OpenStack environments better insight into underlying infrastructure attributes. Intel has even helped lead the charge on enterprise implementation by integrating OpenStack into its own internal cloud environment.

 

These lingering questions on broad enterprise and telco adoption make the upcoming OpenStack Summit a must-attend event for me this month. With an agenda loaded with talks from leading enterprise and telco experts at companies like BMW, Telefonica, and Workday on their experiences with OpenStack, I’m expecting to get much closer to the art of the possible in OpenStack deployment, as well as learn more about how OpenStack providers are progressing with enterprise-friendly offerings. If you’re attending the Summit, please be sure to check out Intel’s lineup of sessions and technology demonstrations, and connect with Intel executives on site to discuss our engagements in the OpenStack community and our work with partners and end customers to help drive broad use of OpenStack in enterprise and telco environments.

 

If you don’t have the Summit in your travel plans, never fear. Intel will help bring the conference to you! I’ll be hosting two days of livecast interviews from the floor of the Summit. We’ll also publish a daily recap of the event on the DataStack with video highlights, the best comments from the Twitterverse, and much more. Please send input on the topics you want to hear about from OpenStack so that our updates match the topics you care about. #OpenStack

Read more >

Going Green With Your Data Center Strategy

For an enterprise attempting to maximize energy efficiency, the data center has long been one of the greatest sticking points. A growing emphasis on cloud and mobile means growing data centers, and by nature, they demand a gargantuan level of energy in order to function. And according to a recent survey on global electricity usage, data centers are consuming more energy than ever before.

 

George Leopold, senior editor at EnterpriseTech, recently dissected Mark P. Mills’ study entitled “The Cloud Begins With Coal: Big Data, Big Networks, Big Infrastructure, And Big Power.” The important grain of salt surrounding the study is that its funding stemmed from the National Mining Association and the American Coalition for Clean Coal Electricity, but it contains some stark statistics that shouldn’t be dismissed lightly.

 

“The average data center in the U.S., for example, is now well past 12 years old — geriatric class tech by ICT standards. Unlike other industrial-classes of electric demand, newer data facilities see higher, not lower, power densities. A single refrigerator-sized rack of servers in a data center already requires more power than an entire home, with the average power per rack rising 40% in the past five years to over 5 kW, and the latest state-of-the-art systems hitting 26 kW per rack on track to doubling.”

 

More Power With Less Energy

 

As Leopold points out in his article, providers are developing solutions to circumvent growing demand while still cutting their carbon footprint. IT leaders can rethink energy usage by concentrating on air distribution and trying assorted cooling methods, ranging from containment cooling to hot huts (a method pioneered by Google). Thorium-based nuclear reactors are also gaining traction in China, but they don’t necessarily solve waste issues.

 

If the average data center in the U.S. is more than 12 years old, IT leaders need to start looking at the tech powering their data centers and rethink the demand on the horizon. Perhaps the best way to go about this is to think about the foundation of the data center at hand.

 

Analysis From the Ground Up

 

Intel IT has three primary areas of concern when choosing a new data center site: environmental conditions, fiber and communications infrastructure, and power infrastructure. These three criteria bear the greatest weight on the eventual success — or failure — of a data center. So when you think about your data center site in the context of the given criteria, ask yourself: Was the initial strategy wise? How does the threat proximity compare to the resource proximity? What does the surrounding infrastructure look like and how does that affect the data center? If you could go the greenfield route and build an entirely new site, what would you retain and what would you change?

 

Every data center manager in every enterprise has likely considered the almost counterintuitive concept that more power can come with less energy. But doing more with less has been the mantra since the beginning of IT. It’s a challenge inherent to the profession. Here at Intel, we’ll continue to provide invaluable resources to managers looking to get the most out of their data center.

 

To continue the conversation, please follow us at @IntelITCenter or use #ITCenter.

Read more >

5 Questions for Howard A. Zucker, MD, JD

Health IT is a hot topic in the Empire State. New York was the first state to host an open health data site and is now in the process of building the Statewide Health Information Network of New York (SHIN-NY), which will enable providers to access patient records from anywhere in the state.

 

To learn more, we caught up with Howard A. Zucker, MD, JD, who was 22 when he got his MD from George Washington University School of Medicine and became one of America’s youngest doctors. Today, Zucker is the Acting Commissioner of Health for New York State, a post he assumed in May 2014. Like his predecessor Nirav R. Shah, MD, MPH, Zucker is a technology enthusiast, who sees EHRs, mobile apps and telehealth as key components to improving our health care system. Here, he shares his thoughts.

 

What’s your vision for patient care in New York in the next five years?

 

Zucker: Patient care will be a more seamless experience for many reasons. Technology will allow for further connectivity. Patients will have access to their health information through patient portals. Providers will share information on the SHIN-NY. All of this will make patient care more fluid, so that no matter where you go – a hospital, your doctor’s office or the local pharmacy – providers will be able to know your health history and deliver better quality, more individualized care. And we will do this while safeguarding patient privacy.

 

I also see a larger proportion of patient care taking place in the home. Doctors will take advantage of technologies like Skype and telemedicine to deliver that care. This will happen as patients take more ownership of their health. Devices like Fitbit amass data about health and help people take steps to improve it. It’s a technology still in its infancy, but it’s going to play a major role in long-term care.

 

How will technology shape health care in New York and beyond?

 

Zucker: Technology in health and medicine is rapidly expanding – it’s already started. Genomics and proteomics will one day lead to customized medicine and treatments tailored to the individual. Mobile technology will provide patient data to change behaviors. Patients and doctors alike will use this type of technology. As a result, patients will truly begin to “own” their health.

 

Personally, I’d like to see greater use of technology for long-term care. Many people I know are dealing with aging parents and scrambling to figure out what to do. I think technology will enable more people to age in place in ways that have yet to unfold.

 

What hurdles do you see in New York and how can you get around those?

 

Zucker: Interoperability remains an ongoing concern. If computers can’t talk to each other, then this seamless experience will be extremely challenging.

 

We also need doctors to embrace and adopt EHRs. Many of them are still using paper records. But it’s challenging to set up an EHR when you have patients waiting to be seen and so many other clinical care obligations. Somehow, we need to find a way to make the adoption and implementation process less burdensome. Financial incentives alone won’t work.

 

How will mobility play into providing better patient care in New York?

 

Zucker: The human body is constantly giving us information, but only recently have we begun to figure out ways to receive that data using mobile technology. Once we’ve mastered this, we’re going to significantly improve patient care.

 

We already have technology that collects data from phones, and we have sensors that monitor heart rate, activity levels and sleep patterns. More advanced tools will track blood glucose levels, blood oxygen and stress levels.

 

How will New York use all this patient-generated health data?

 

Zucker: We have numerous plans for all this data, but the most important will be using it to better prevent, diagnose and treat disease. Someday soon, the data will help us find early biomarkers of disease, so that we can predict illness well in advance of the onset of symptoms. We will be able to use the data to make more informed decisions on patient care.

Read more >

Delivering on Choice for Hybrid Cloud Customers with Intel & EMC

Stu Goldstein is a Market Development Manager in the Communications and Storage Infrastructure Group at Intel

 

When purchasing new laptops for my sons as they went off to college, a big part of the brand decision revolved around support and my peace of mind. Sure enough, one of my sons blew his motherboard when he plugged into an outlet while spending a summer working in China, and the other trashed his display playing jump rope with his power cord, pulling his PC off the bed. In both cases I came away feeling good about the support I received from the brand that I trusted.

 

So, knowing a bit more about how I think, it should not be a big surprise that I see EMC’s Hybrid Cloud announcement today as important. Enterprises moving to converged, software-defined storage infrastructures should have choices. EMC is offering the enterprise the opportunity to evolve without abandoning a successfully engineered infrastructure, including the support that will inevitably be needed. Creating products that maximize existing investments while providing the necessary path to a secure hybrid cloud is proof of EMC’s commitment to choice. Providing agility moving forward without short-circuiting security and governance can be difficult; EMC’s announcement today recognizes the challenge. Offering a VMware edition is not surprising; neither is the good news about supporting a Microsoft edition. However, a commitment to “Fully Engineered OpenStack Solutions” is a big deal. Intel is a big contributor to open source, including OpenStack, so it is great to see this focus from EMC.

 

EMC has proven over the last several years that it can apply much of the underlying technology that Intel® Xeon® processors combined with Intel® Ethernet Converged Network Adapters have to offer. When Intel provided solutions that increased memory bandwidth by 60 percent and doubled I/O bandwidth generation over generation, EMC immediately asked, “What’s next?” Using these performance features coupled with Intel virtualization advances, the VMAX³ and VNX solutions prove EMC is capable of moving Any Data, Anytime, Anywhere while keeping VMs isolated to allow for secure shared tenancy. Now EMC is intent on proving it is serious about expanding the meaning of Anywhere. (The XtremIO scale-out products are a great example of Anytime, using Intel architecture advancements to maintain consistent 99th-percentile latency below 1 ms and deliver the steady performance customers need to take full advantage of this all-flash array.) EMC is in a unique position to offer customers of its enterprise products the ability to extend benefits derived from highly optimized deduplication, compression, flash, memory, I/O, and virtualization technology into the public cloud.

 

Getting back to support: it is a broad term that comes into laser focus when you need it. It has to come from a trusted source, no matter whether your storage scales up or out, or is open, sort of open, or proprietary. It costs something whether you rely on open source distros, OEMs, or smart people hired to build and support home-grown solutions. EMC’s Hybrid Cloud announcement is a recognition that adding IaaS needs backing that covers you inside and out, or, said another way, from the inside into the outside. I look forward to seeing what IT managers do with EMC’s choices and the innovation this initiative brings to the cloud.

Read more >

Meeting The Demands Of The Tech-Savvy Workforce

Are your employees’ devices optimized for effective collaboration? The tech-savvy workforce is presenting enormous opportunities in the enterprise. Employees are increasingly aware of new technologies and how they can integrate them into their work. One of the biggest changes in enterprise technology is the advent of collaboration tools to support the demands of this emerging workforce. Tech-savvy workers demand flexibility and are challenging enterprise IT leaders to adopt solutions that take full advantage of their technical abilities.

 

Collaboration & Flexibility

 

Many companies have identified the benefits of flexible and remote working policies, including an increase in productivity and morale, and a decrease in overhead. In order to empower employees with the tools needed to be successful in these progressive working conditions, it’s incumbent on IT leaders to build device and software strategies that support employees.


 

 

Two of the most popular collaboration software solutions are Microsoft Lync and Skype. Skype is a familiar, robust video conferencing platform that provides employees with valuable face-to-face interactions without having to book a conference room. The software also offers file sharing and group chat functionality to support communication and collaboration, and its adoption among consumers makes it an ideal platform to communicate with clients. Microsoft Lync is a powerful enterprise-level collaboration tool that facilitates multi-party video conferencing, PowerPoint slide sharing, real-time polling, call recording, file sharing, and more.

 

Not All Devices Are Created Equal

 

Both of these solutions offer flexible ways for employees to collaborate and communicate from anywhere. Although both Skype and Microsoft Lync are available as standalone apps on many platforms, some features may not be available on all devices. In a recent comparison, Principled Technologies found that popular tablets such as the iPad Air and Samsung Galaxy Note 10.1 were unable to use key collaboration features like group video chat or multiple file transfers. However, the Intel-powered Microsoft Surface Pro 3 offered users a full-featured experience with both apps. Additionally, the Surface Pro 3 boasted perceptibly higher video quality than the competition during both Skype and Microsoft Lync meetings. For IT leaders looking to support collaboration in the enterprise, the message is clear: don’t let hardware be a roadblock to employee success. Give them tools that work.

 

Click here to read the full Principled Technologies device comparison.

Read more >

Virtualization: The First Step to the Cloud

The underpinning for most high performing clouds is a virtualized infrastructure that pools resources for greater physical server consolidation and processor utilization. With the efficiencies associated with pooled resources, some organizations have considered their virtualized environment “cloud computing.” These organizations are selling themselves short. The full promise of cloud—efficiency, cost savings, and agility—can be realized only by automating and orchestrating how these pooled, virtualized resources are utilized.

 

Virtualization has been in data centers for several years as a successful IT strategy for consolidating servers by deploying more applications on fewer physical systems. The benefits include lower operational costs, reduced heat (from fewer servers), a smaller carbon footprint (less energy required for cooling), faster disaster recovery (virtual provisioning enables faster recovery), and more hardware flexibility.

 

Source: Why Build a Private Cloud? Virtualization vs. Cloud Computing. Intel (2014).

Cloud takes efficiency to the next level

A fully functioning cloud environment does much more. According to the National Institute of Standards and Technology (NIST), a fully functioning cloud has five essential characteristics:

 

  1. On-demand self-service. A consumer can unilaterally provision computing capabilities.
  2. Broad-network access. Capabilities are available over the network and accessed through standard mechanisms (e.g., mobile phones, tablets, laptops, and workstations).
  3. Resource pooling. The provider’s computing resources are pooled to serve multiple consumers.
  4. Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward, commensurate with demand.
  5. Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (for example, storage, processing, bandwidth, and active user accounts).

 

Different but complementary strategies

A Forbes* article describes how these highly complementary strategies work together. Virtualization abstracts compute resources—typically as virtual machines (VMs)—with associated storage and networking connectivity. The cloud determines how those virtualized resources are allocated, delivered, and presented. While virtualization is not required to create a cloud environment, it does enable rapid scaling of resources, which is why the majority of high-performing clouds are built upon virtualized infrastructures.
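
Here is that division of labor reduced to a toy sketch (purely illustrative Python; the host names and capacities are invented). The Host objects stand in for the virtualization layer’s pooled capacity, while the Cloud object adds the on-demand self-service and measured service that NIST calls out.

    # Illustrative sketch only: the division of labor described above, reduced to a toy.
    # "Host" stands in for the virtualization layer (raw capacity carved into VMs);
    # "Cloud" adds on-demand self-service and measured service on top of the pool.
    # All names and capacities are invented.
    class Host:
        def __init__(self, name, cores):
            self.name, self.free_cores = name, cores

    class Cloud:
        def __init__(self, hosts):
            self.hosts = hosts
            self.usage = {}          # measured service: cores provisioned per consumer

        def provision(self, consumer, cores):
            # On-demand self-service: grab capacity from the pool without a ticket.
            host = next((h for h in self.hosts if h.free_cores >= cores), None)
            if host is None:
                raise RuntimeError("pool exhausted")   # the elasticity limit of this tiny pool
            host.free_cores -= cores
            self.usage[consumer] = self.usage.get(consumer, 0) + cores
            return f"vm-on-{host.name}"

    cloud = Cloud([Host("host-a", 16), Host("host-b", 32)])
    print(cloud.provision("finance-app", 8))     # e.g. vm-on-host-a
    print(cloud.provision("analytics-job", 24))  # e.g. vm-on-host-b
    print(cloud.usage)                           # metering a private cloud could report against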

 

In other words, virtualization pools infrastructure resources and acts as a building block to enhance the agility and business potential of the cloud environment. It is the first step in building a long-term cloud computing strategy that could ultimately include integration with public cloud services—a hybrid deployment model—enabling even greater flexibility and scalability.

 

With a virtualized data center as its foundation, an on-premises, or private, cloud can make IT operations more efficient as well as increase business agility. IT can offer cloud services across the organization, serving as a broker with providers and avoiding some of the risks associated with shadow IT. Infrastructure as a service (IaaS) and the higher-level platform as a service (PaaS) delivery models are two of the services that can help businesses derive maximum value from the cloud.

 

Virtualization and cloud computing go hand in hand, with virtualization serving as a critical first step toward fully achieving the value of a private cloud investment and laying the groundwork for a more elastic hybrid model. Delivery of IaaS and PaaS creates exceptional flexibility and agility—offering enormous potential for the organization, with IT as a purveyor of possibility.

 

How did you leverage virtualization to evolve your cloud environment? Comment below to join the discussion.

 

Mike Moshier

Find Mike on LinkedIn.
See previous content from Mike.

#ITCenter #Virtualization #Cloud

Read more >

IBM and Intel: Partners in the Journey from Information to Insights

IBM’s long-time Information on Demand conference has changed its name, and its focus. Big Blue’s major fall conference is now called IBM Insight, and it will take over Mandalay Bay in Las Vegas from Oct. 26 to 30. The name change reflects a key shift in the tech industry: beyond managing vast amounts of information, technology’s role is increasingly to extract value and insights from a vast range of information sources. It’s our job to create actionable information that helps businesses succeed and gain an edge in a highly competitive marketplace.

 

Intel and IBM have worked together for over 20 years to help their customers achieve precisely that. Joint engineering built into IBM and Intel solutions, such as IBM DB2 with BLU Acceleration* optimized for Intel® Xeon® processors, delivers dramatic performance gains that can transform big data into vital business insights more quickly, all while lowering costs and power consumption.

 

The other word that describes IBM Insight is “big.” Not only is the focus big data, but the event itself is huge. With over 13,000 attendees and more than 700 sessions and keynotes, IBM Insight is the largest big data conference in the world. I’m looking forward to catching up with the latest perspectives and emerging technologies in the fast-evolving world of data analytics.

 

Be sure not to miss the following sessions, where you’ll discover the newest advances in data management and analytics from Intel and IBM (all events are in the Mandalay Bay South Convention Center).

 

  • IBM Big SQL: Accelerating SQL and Big Data Performance on Intel Architecture – Session 5191A (10:15-11:15am, Oct 27, Jasmine G). Jantz Tran, an Intel Software Performance Engineer, and IBM’s Simon Harris, provide an overview of IBM Big SQL* and describe the breakthrough performance it delivers when run on Intel Xeon servers and platform products.
  • Ideas for Implementing Big Data on Intel Architecture – Session 7188A (2-2:20pm, Oct. 27, Solution Expo Theater). In this session, Jim Fister, lead strategist and director of business development for Intel’s Data Center Group, will discuss the opportunity for data analytics, the case for driving analytics ahead of schedule, and options for implementing your solutions using Intel Architecture and IBM software.
  • TPC-DI: An Industry Standard Benchmark for Data Integration – 5193A (10-11am, Oct 28, Jasmine G). Along with IBM software engineers Ron Liu and Sam Wong, Jantz Tran returns to introduce TPC-DI, a new industry standard for measuring and comparing the performance of data integration (DI) or ETL systems. They will discuss early observations from running TPC-DI with IBM Infosphere Datastage* on the latest generation Intel Xeon systems, and provide best practice optimization recommendations for Datastage deployments.
  • Managing Internet of Things Data on the Edge and in the Cloud with Intel and IBM Informix* Solutions  – Session 6140A  (10-11am, Oct. 28, Banyan F). IBM’s Kevin Brown and Preston Walters, who leads Intel’s technical enablement and co-marketing of IBM software products on Intel technology, describe the challenges of Internet of Things and Internet of Everything requirements, and how data architecture and technologies from Intel and IBM are responding to these challenges in both edge devices and cloud infrastructure.
  • Intel and IBM Software: A Long History – Session 7189A (2-2:20pm, Oct. 28, Solution Expo Theater). In this session, Jim Fister will cover the history of Intel and IBM’s relationship, along with a discussion of performance enhancements for IBM OLTP and data analytics software using the latest Intel platforms.
  • Optimizing Mixed Workloads and High Availability with IBM DB2 10.5 on an Intel® Architecture – Session 5141A (3-4pm. Oct. 28, Banyan F). In this session, Kshitij Doshi, a principal engineer in Intel’s Software and Services Group, and Jessica Rockwood, an IBM senior manager for DB2 performance, provide an overview of the latest Intel® Xeon® E5-2600 V3 series processor architecture and its benefits for transaction processing workloads with IBM DB2 10.5 with BLU Acceleration.
  • Goodbye Smart; Hello Smarter: Enabling the Internet of Things for Connected Environments – Session 6402A (11:15am-12:15pm, Oct. 29, Banyan F). Intel’s Preston Walters, with Oliver Goh, CEO of Shaspa GmbH, discuss how Intel, Shaspa and IBM provide intelligent solutions for connected environments that enable local analytics and decision-making to improve business and consumer services. Attend this session to see a demo of the Internet of Things in action.
  • Using IBM Bluemix* and IBM SoftLayer* to Run IBM InfoSphere Information Server* on an Intel® Technology-Powered Cloud – Session 5198A (10-11am, Oct. 30. Jasmine E). In this session, Jantz Tran, with IBM’s Beate Porst and Sam Wong, explain how IBM InfoSphere Information Server* works in the cloud and provides data for scaling performance. They also discuss bare metal and virtualization options available with IBM SoftLayer.

 

At IBM Insight, Intel will be sharing a booth with our friends at Lenovo. Stop by to say hello and check out the latest Lenovo tablets, which rate highly in performance and security in the recent report Do More, Faster with IBM Cognos* Business Intelligence. Download the report to learn how tablets and servers based on Intel processors provide unparalleled improvements in speed and capabilities for IBM Cognos BI workloads.

 

Follow me at @TimIntel and watch for my Vine videos and man-on-the-street commentary and impressions from IBM Insight. Follow @IntelITCenter to join the dialogue with Intel IT experts, and follow @IntelSoftware to engage with Intel’s software community.

 

See you in Las Vegas!

Read more >

United Kingdom’s National Weather Service Selects Cray

Two Generations of Cray Supercomputers to be Powered by Intel® Xeon® Processors

 

Cray has announced that the Met Office, the United Kingdom’s national weather service, widely recognized as one of the world’s most accurate weather forecasting organizations, has selected Cray to provide multiple Cray XC* supercomputers and Cray Sonexion* storage systems.

 

The $128 million, multi-year contract will include Cray XC40 systems as well as next-generation Cray XC systems powered by current and future Intel Xeon processors.

 

The new Cray supercomputers at the Met Office will provide 16 times more supercomputing power than current systems, and will be used for operational weather prediction and climate research.

 

According to Cray’s news release, the U.K.’s Met Office takes in more than 10 million weather observations a day and feeds them into an advanced atmospheric model to produce 3,000 tailored forecasts and briefings daily, delivered to customers ranging from government and businesses to the general public, the armed forces, and other organizations.

 

According to Cray CEO Peter Ungaro, “The award is symbolic for Cray on a number of fronts – it demonstrates that our systems continue to be the supercomputers of choice for production weather centers across the globe, that our close relationship with Intel is providing customers with enhanced capabilities today and into the future and it reinforces the role that Cray plays in impacting society on a daily basis in a wide range of areas.”

Read more >

Final Teams Announced for the 2014 Intel® Parallel Universe Computing Challenge

Mike Bernhardt is the Community Evangelist for Intel’s Technical Computing Group

 

Intel announced today the competition schedule and the final selection of teams who will participate in the second annual Intel Parallel Universe Computing Challenge (PUCC) at SC14 in New Orleans, November 17-20. Each team will play for a charitable organization to whom Intel will donate $26,000 in recognition of the 26th anniversary of the Supercomputing conference.

 

 

Returning from last year will be the defending champions from Germany, The Gaussian Elimination Squad. The other finalist from last year, The Coding Illini, representing the National Center for Supercomputing Applications (NCSA) and the University of Illinois at Urbana–Champaign, will also make a return appearance.

Three other organizations, albeit with new team names, are also returning for this year’s competition. They are The Brilliant Dummies from South Korea’s Seoul National University, the Invincible Buckeyes from the Ohio Supercomputing Center, and the Linear Scalers from Argonne National Laboratory.

Three new teams will be competing in the 2014 competition, including SC3 (Super Computación y Calculo Cientifico) representing Latin America, Taiji representing China, and EXAMEN representing the EXA2CT project in Europe.

The 2014 PUCC will kick off on Monday evening, November 17 at 8 p.m. during the SC14 exhibition hall Opening Gala with The Gaussian Elimination Squad facing the Invincible Buckeyes. Additional matches will be held Tuesday through Thursday of the SC14 conference with the final match scheduled for Thursday afternoon at 1:30 p.m. I will once again be hosting the challenge along with my Intel partner James Reinders.

Coding and Trivia Challenge Combine in an Entertaining Stage Show

The PUCC features an eight-team, single-elimination tournament and is designed to raise awareness of the importance of parallelization in improving the performance of technical computing applications.

Each elimination match will consist of two rounds. The first round is a rapid-fire trivia challenge consisting of technical parallel computing questions interlaced with general HPC and SC Conference history trivia.

The second round is a parallel code optimization challenge in which participants examine a piece of code that has been deconstructed from its optimized, parallel version and apply any changes they believe will improve its overall performance. Points are awarded during both rounds, and the team with the most combined points moves on to the next match in the tournament. The audience watches the exercise on large screens while the hosts discuss the steps the teams are taking and engage the audience in trivia for the chance to win prizes.
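
By way of illustration, the serial-versus-parallel gap that the optimization round revolves around can be sketched in a few lines of Python (a hypothetical toy workload, not actual PUCC contest code):

    # Hypothetical workload, for illustration only -- not PUCC contest code.
    from multiprocessing import Pool

    def work(n):
        # Stand-in for a compute-heavy kernel.
        return sum(i * i for i in range(n))

    def run_serial(chunks):
        return [work(n) for n in chunks]

    def run_parallel(chunks, processes=4):
        # The same independent chunks, distributed across worker processes.
        with Pool(processes) as pool:
            return pool.map(work, chunks)

    if __name__ == "__main__":
        chunks = [2000000] * 8
        assert run_serial(chunks) == run_parallel(chunks)

Real contest rounds use tuned HPC codes and Intel tools rather than toy kernels, but the principle the teams are judged on is the same: find the independent work and keep every core busy.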

Team Captains Reveal Their Preparation Plans

 

Here’s what some of the team captains had to say on the upcoming challenge:

  • Mike Showerman, captain of the team who placed second last year, said he hopes this year’s team goes all the way to the championship. “The Coding Illini will be even fiercer this year and will take every opportunity to bring the title home.”
  • Examen Team Captain David Horak with IT4Innovations humorously suggested “We want to check whether we finally reached the state when bachelor students surpass us in HPC knowledge.”
  • Another team with a sense of humor is the group from Seoul National University, which returns as The Brilliant Dummies. Team Captain Wookeun Jung, a PhD student, says “Our team name is TBD, The Brilliant Dummies, because we are brilliant enough to solve the complicated HPC problems, and dummies that only can solve those HPC problems. Of course, whether we would be brilliant in the competition or not is TBD.”
  • When asked how the Linear Scalers would prepare, Kalyan “Kumar” Kumaran, manager of Performance Engineering and Data Analytics in the Argonne Leadership Computing Facility, said “Pre-competition stretching, and coffee. We will start memorizing sections from James Reinders’ books.”
  • Karen Tomko, Scientific Applications Group Manager at the Ohio Supercomputer Center and captain of the Invincible Buckeyes, offered “We’ll do our homework for the trivia, brush up on the parallel constructs, look at some Fortran codes, and make sure we have at least one vi user on the team.”
  • Gilberto Díaz, infrastructure chief of the supercomputer center at Universidad Industrial de Santander (UIS), assembled a Latin American team called SC3, for Super Computación y Calculo Cientifico. He asserted “We would like to promote and develop more widespread awareness and use of HPC in our region. In addition to the excitement of participating in the 2014 event, our participation will help us to prepare students of master and PhD programs to better understand the importance of code modernization as well as preparing them to compete in future competitions.” Gilberto has since passed the role of team captain to Carlos Barrios, a professor at UIS.
  • Georg Hager, team captain for last year’s champion Gaussian Elimination Squad and senior research scientist at Germany’s Erlangen Regional Computing Center, said “The PUCC is about showing knowledge and experience in the field of HPC. This is exactly what we are trying to build in the German institutions that were part of the team at SC13, and so we are eagerly waiting for our next chance to show that we have done well on that.”

Read more >

VMworld 2014 Takeaway: Software-Defined Security

By Scott Allen

 

One of the key topics that had everyone talking at VMworld 2014 in San Francisco was the Software-Defined Infrastructure, or SDI—an advance on the traditional data center that makes it easier and faster for businesses to scale network services to accommodate changing needs. The SDI extends the benefits of virtualization, which include increased uptime and automated provisioning, plus reduced server sprawl and lower energy costs, to the realm of networking and storage infrastructures.

 

This more fully virtualized environment is a stepping stone to the increased flexibility and cost savings of the hybrid cloud—but it also presents real challenges to traditional data center security solutions.

 

Today’s data center security technologies were designed for traditional, physical data centers, which makes moving to an SDI a chancy proposition for most businesses. Current security solutions are largely blind to what actually goes on in a virtualized data center, with its dynamic provisioning and virtual machines. Running traditional security solutions in a fully virtualized environment can leave gaps in protection and coverage, make security management inefficient and difficult, and create problems with compliance.

 

So I was encouraged by the number of security-related announcements at VMworld that point to advances in protection for servers deployed in physical, virtualized and cloud environments—and that address the security challenges associated with SDI.

 

Intel® Security, a newly formed group within Intel that focuses on security projects and technologies, announced the Intel® Security Controller, a software-defined approach to securing virtualized environments. The controller integrates the McAfee* Virtual Network Security Platform, an advanced intrusion prevention system (IPS) optimized for Intel® Xeon®-based servers, with VMware* NSX, the industry-leading technology for network virtualization. By providing an abstraction layer between the security and networking infrastructures, the combination lets users virtualize individual security services and synchronize policy and service injection within workflows. This, in essence, creates software-defined security: businesses can automate their existing security management applications so that security policies span physical and virtual network infrastructures, which yields cost-effective protection of virtualized workloads within an SDI along with simpler management and deployment.
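
To make the idea of an abstraction layer concrete, here is a conceptual Python sketch of one policy being applied across physical and virtual enforcement points. The class, policy, and segment names are hypothetical; this is not the Intel Security Controller or NSX API.

    # Conceptual sketch only -- not the Intel Security Controller or NSX API.
    from dataclasses import dataclass

    @dataclass
    class SecurityPolicy:
        name: str
        inspect_traffic: bool       # e.g., steer flows through an IPS
        quarantine_on_alert: bool

    class PhysicalSegment:
        def __init__(self, vlan_id):
            self.vlan_id = vlan_id
        def apply(self, policy):
            return f"VLAN {self.vlan_id}: enforcing '{policy.name}' via a physical IPS appliance"

    class VirtualSegment:
        def __init__(self, logical_switch):
            self.logical_switch = logical_switch
        def apply(self, policy):
            return f"{self.logical_switch}: injecting '{policy.name}' as a virtual IPS service"

    def apply_everywhere(policy, segments):
        # The abstraction layer: callers never care whether enforcement is
        # physical or virtual -- the same policy lands on every segment.
        for segment in segments:
            print(segment.apply(policy))

    if __name__ == "__main__":
        policy = SecurityPolicy("regulated-workloads", inspect_traffic=True, quarantine_on_alert=True)
        apply_everywhere(policy, [PhysicalSegment(110), VirtualSegment("nsx-logical-switch-web")])

The point of the sketch is the single call site: security management sees one policy, and the layer underneath decides how to realize it on each kind of infrastructure.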

 

Also at VMworld, McAfee (now part of Intel Security) announced major advancements to its Server Security Suites portfolio, offering comprehensive protections for hybrid data center deployments, including software-defined infrastructures. Because significant amounts of data are stored on servers, they are attractive targets for hackers, and providing your server environment with integrated, broad-based protection is essential. McAfee’s new Server Security Suites release incorporates a number of individual security technologies into a single, easy-to-manage solution that extends visibility into your underlying server infrastructure whether it is on-premises or off. It shields physical, virtual and cloud environments from stealthy attacks so businesses like yours can safely explore the flexibility and scalability of hybrid infrastructures.

 

VMware also announced a new program to help businesses and organizations meet compliance mandates for regulated workloads in cloud infrastructures. VMware’s Compliance Reference Architecture Frameworks provide a programmatic approach that maps VMware and Intel security products to regulatory compliance in cloud environments for industries with strict security or privacy mandates. The framework provides a reference architecture, regulation-specific guidance, and thought leadership—plus advice for software solutions that businesses require to attain continuous compliance. These frameworks will help take the guesswork out of meeting strict regulatory guidelines when using cloud-based infrastructures for restricted workloads.

 

The first available framework is the VMware* FedRAMP Compliance Reference Architecture Framework, which addresses the needs of organizations to enable and maintain a secure and compliant cloud environment for U.S. government agencies. Further compliance frameworks from VMware and Intel are in the works, including one for HIPAA.

 

VMware and Intel are building the foundations for software-defined security, making it easier—and safer—than ever for your business to achieve the benefits of virtualization and the hybrid cloud.

Read more >

Look Out X-Men! Here Come the Examen to Tackle the Intel Parallel Universe Computing Challenge

The latest team to announce participation in the Intel Parallel Universe Computing Challenge (PUCC) at SC14 is the Examen representing the EXA2CT project in Europe. The “Exa” in “Examen” is pronounced like exascale, of course!

 

The EXA2CT project, funded by the European Commission, comprises 10 partners, including IT4Innovations, Inria, and Università della Svizzera Italiana (USI), which are represented by Examen team members. The project aims to integrate the development of algorithms and programming models tuned to future exascale supercomputer architectures.

 

 

The Examen team includes David Horak (IT4Innovations, Czech Republic), Lubomir Riha (IT4Innovations, Czech Republic), Patrick Sanan (USI, Switzerland), Filip Stanek (IT4Innovations, Czech Republic), and Francois Rue (Inria, France).

The team plans to leverage those hours spent developing algorithms and programming models into a serious run at the PUCC. Here’s what Examen Team Captain David Horak with IT4Innovations in the Czech Republic had to say about his team:

 

Q. What is the role of your team members in the EXA2CT project?
A. The team members provide system-level library support for the MPI-3.0 and GASPI implementations of non-blocking communication. They are also involved in developing the ESPRESO and FLLOP libraries, which use communication-hiding and communication-avoiding techniques for exascale computers.
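
For readers unfamiliar with communication hiding, the pattern is to start a non-blocking transfer, do useful computation while the message is in flight, and only then wait for completion. Here is a minimal sketch using mpi4py (assumptions for illustration: mpi4py, NumPy, and an MPI runtime are installed; this is not EXA2CT, ESPRESO, or FLLOP code):

    # Run with at least two ranks, e.g.: mpiexec -n 2 python overlap.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    halo = np.full(1000, rank, dtype=np.float64)        # boundary data to exchange
    interior = np.arange(1_000_000, dtype=np.float64)   # local work

    if rank == 0:
        req = comm.Isend(halo, dest=1, tag=7)           # start the send, don't wait
    elif rank == 1:
        recv = np.empty_like(halo)
        req = comm.Irecv(recv, source=0, tag=7)         # start the receive, don't wait
    else:
        req = MPI.REQUEST_NULL                          # extra ranks sit this exchange out

    local_sum = interior.sum()                          # computation overlaps the transfer
    req.Wait()                                          # complete the communication
    print(f"rank {rank}: local sum = {local_sum}")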

 

Q. What do the Examen hope to accomplish by participating in the Intel Parallel Universe Computing Challenge?
A. We want to check whether we finally reached the state when bachelor students surpass us in HPC knowledge. ;-)

 

Q. What are the most prevalent high performance computing applications in which your team members are involved?
A. We are most involved with Intel Cluster Studio (Intel MPI), OpenMPI, GPI-2, PETSc, and FLLOP.

 

Q. How will your team prepare for the competition?
A. Drink a lot of coffee. ;-)

 

Q. SC14 is using the theme “HPC Matters” for this year’s conference. Can you explain why HPC matters to you?
A. HPC is fun and makes the world more interesting by enabling scientists to better explore it.

 

The Intel Parallel Universe Computing Challenge kicks off on Monday night at the SC14 opening gala. See the schedule and learn more about the contestants on the Intel SC14 Web page.

Read more >

Mobile Device Security Raises Risk for Hospitals

The bring-your-own-device-to-work trend is deeply entrenched in the healthcare industry, with roughly 89 percent of the nation’s healthcare workers now relying on their personal devices in the workplace. While this statistic, supplied by a 2013 Cisco partner network study, underscores the flexibility of mHealth devices in both improving patient care and increasing workflow efficiency, it also shines a light on a nagging, unrelenting reality: mobile device security remains a problem for hospitals.

 

A more recent IDG Connect survey concluded the same, as did a Forrester Research survey that was released earlier this month.

 

It’s not that hospitals are unaware of the issue; indeed, most HIT professionals are scrambling to secure every endpoint through which hospital staff access medical information. The challenge is keeping pace with a seemingly endless barrage of mHealth tools.

 

As a result:

 

  • 41 percent of healthcare employees’ personal devices are not password protected, and 53 percent of them are accessing unsecured WiFi networks with their smartphones, according to the Cisco partner survey.
  • Unsanctioned device and app use is partly responsible for healthcare being more affected by data leakage monitoring issues than other industries, according to the IDG Connect survey.
  • Lost or stolen devices have driven 39 percent of healthcare security incidents since 2005, according to Forrester analyst Chris Sherman, who recently told the Wall Street Journal that these incidents account for 78 percent of all reported breached records originating from healthcare.

 

Further complicating matters is the rise of wireless medical devices, which usher in security risks of their own that are even more pressing than data breaches.

 

So, where should healthcare CIOs focus their attention? Beyond better educating staff on safe computing practices, they need to know where the hospital’s data lives at all times, and restrict access based on job function. If an employee doesn’t need access, he doesn’t get it. Period.
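
As a hedged sketch of what “access follows job function” can look like in code, the default-deny pattern below uses hypothetical roles and actions; it is not tied to any particular EHR or hospital system.

    # Default-deny, role-based access: if a role is not granted an action,
    # the answer is no. Roles and actions are illustrative placeholders.
    ROLE_PERMISSIONS = {
        "attending_physician": {"read_chart", "write_orders"},
        "nurse": {"read_chart"},
        "billing_clerk": {"read_billing"},
    }

    def can_access(role, action):
        return action in ROLE_PERMISSIONS.get(role, set())

    assert can_access("nurse", "read_chart")
    assert not can_access("billing_clerk", "read_chart")   # no clinical need, no access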

 

Adopting stronger encryption practices also is critical. And, of course, they should virtualize desktops and applications to block the local storage of data.

 

What steps is your healthcare organization taking to shore up mobile device security? Do you have an encryption plan in place?

 

As a B2B journalist, John Farrell has covered healthcare IT since 1997 and is a sponsored correspondent for Intel Health & Life Sciences.

Read John’s other blog posts

Read more >

Workload Optimized Part A: Enabling video transcoding on the new Intel-powered HP Moonshot ProLiant server

TV binge watching is a favorite pastime of mine. Over an eight-week span between February and March of this year, I binge-watched five seasons of a TV series. I watched it on my Ultrabook, on a tablet at the gym, and even caught a couple of episodes on my smartphone at the airport. It got me thinking about how those episodes get to me, as well as about my viewing experience on different devices.

 

Let me use today’s HP Moonshot server announcement to talk about high-density servers. You may have seen that HP today announced the Moonshot ProLiant m710 cartridge. The m710, based on the Intel® Xeon® processor E3-1284L v3 with built-in Intel® Iris Pro Graphics P5200, is the first microserver platform to support Intel’s best media and graphics processing technology. The Intel® Xeon® processor E3-1284L v3 is also a great example of how Intel continues to deliver on its commitment to provide our customers with industry leading silicon customized for their specific needs and workloads.

 

Now back to video delivery. Why does Intel® Iris™ Pro Graphics matter for video delivery? The 4K video transition is upon us. Netflix already offers mainstream content like Breaking Bad in Ultra HD 4K. Devices with different screen sizes and resolutions are proliferating rapidly: the Samsung Galaxy S5 and iPhone 6 Plus smartphones have 1920×1080 Full HD resolution, while the Panasonic TOUGHPAD 4K boasts a 3840×2560 Ultra HD display. And the sheer volume of video traffic is growing. According to Cisco, streaming video will make up 79 percent of all consumer internet traffic by 2018, up from 66 percent in 2013.

 

At the same time, the need to support higher-quality and more advanced user experiences is increasing. Users have less tolerance for poor video quality and streaming delays. The types of applications that Sportvision pioneered with the yellow 10-yard marker on televised football games are only just beginning. Consumer depth cameras and 3D video cameras are just hitting the market.

 

For service providers to satisfy these video service demands, network- and cloud-based media transcoding capacity and performance must grow. Media transcoding is required to convert video for display on different devices, to reduce the bandwidth consumed on communication networks, and to implement advanced applications like the yellow line on the field. Traditionally, high-performance transcoding has required sophisticated hardware purpose-built for video applications. But since the 2013 introduction of the Intel® Xeon® processor E3-1200 v3 family with integrated graphics, application and system developers have been able to create very high performance video processing solutions using standard server technology.

 

These Intel Xeon processors support Intel® Quick Sync Video and applications developed with Intel® Media Server Studio 2015. This technology gives applications access to acceleration hardware within the Xeon CPU for the major media transcoding algorithms. Hardware acceleration can provide a dramatic improvement in processing throughput over software-only approaches, at a much lower cost than customized hardware solutions. The new HP Moonshot ProLiant m710 cartridge is the first server to incorporate both Intel® Quick Sync Video and Intel® Iris Pro Graphics, making it a great choice for media transcoding applications.
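
As a rough illustration of what hardware-assisted transcoding looks like from an application’s point of view, here is a small Python sketch that shells out to FFmpeg’s Quick Sync encoder. The assumptions are mine: an FFmpeg build with QSV support is installed, and the file names and bitrate are placeholders; this is not Intel Media Server Studio sample code.

    # Transcode a 4K master to a 1080p stream, handing the H.264 encode to the
    # on-die media engine via FFmpeg's h264_qsv encoder.
    import subprocess

    def transcode_qsv(source, target, bitrate="6M"):
        subprocess.run(
            ["ffmpeg", "-i", source,
             "-vf", "scale=1920:1080",      # downscale for smaller screens
             "-c:v", "h264_qsv",            # hardware-accelerated H.264 encode
             "-b:v", bitrate,
             "-c:a", "copy",                # leave the audio untouched
             target],
            check=True,
        )

    if __name__ == "__main__":
        transcode_qsv("master_4k.mp4", "mobile_1080p.mp4")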

As video and other media take over the internet, economical, fast, high-quality transcoding of content becomes critical to meeting user demands. Systems built with special-purpose hardware will struggle to keep up with those demands. A server solution like the HP Moonshot ProLiant m710, built on standard Intel architecture, offers the flexibility, performance, cost efficiency, and future-proofing the market needs.

 

In part B of my blog I’m going to turn the pen over to Frank Soqui. He’s going to switch gears and talk about another workload: remote workstation application delivery. Great processor graphics are not just for transcoding and delivering TV shows like Breaking Bad; they are also well suited to delivering business applications to devices remotely.

Read more >