Recent Blog Posts

SDI: The Journey vs. the Destination

I just finished Words of Radiance, book two of The Stormlight Archive series by Brandon Sanderson (now I have to wait for book three). In this series, the main characters all adhere to an oath, one to which they must commit themselves in order to be part of a special group. The oath goes like this:


“Life before Death.
Strength before Weakness.
Journey before Destination.”

 

One part of this oath, “Journey before Destination,” made me think about some of the challenges IT organizations face in today’s world. While those of us who work in the industry to provide IT solutions care a lot about the destination, the solutions are not always there for the journey!

 

Today, I talk to lots of customers about the concept of software-defined infrastructure (SDI). SDI really is the destination: the point where organizations reach a hybrid cloud and control workloads through an end-to-end orchestration layer that lets you (the customer) institute and enforce policies for your application workloads.
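
To make that policy idea concrete, here is a minimal sketch, in Python, of the kind of placement check an SDI orchestration layer might perform. Every name and number is illustrative, not any particular product's API:

    # Hypothetical sketch: an orchestration layer matching a workload's
    # policy and capacity requirements against available hosts.
    workload = {
        "name": "billing-app",
        "min_vcpus": 8,
        "min_memory_gb": 32,
        "data_residency": "EU",   # compliance policy: data must stay in the EU
    }

    hosts = [
        {"name": "host-a", "region": "EU", "free_vcpus": 16, "free_memory_gb": 64},
        {"name": "host-b", "region": "US", "free_vcpus": 32, "free_memory_gb": 128},
        {"name": "host-c", "region": "EU", "free_vcpus": 4,  "free_memory_gb": 16},
    ]

    def placement_candidates(workload, hosts):
        """Return hosts satisfying both the policy and the capacity constraints."""
        return [
            h for h in hosts
            if h["region"] == workload["data_residency"]        # enforce policy
            and h["free_vcpus"] >= workload["min_vcpus"]        # enforce capacity
            and h["free_memory_gb"] >= workload["min_memory_gb"]
        ]

    print([h["name"] for h in placement_candidates(workload, hosts)])  # ['host-a']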

What a great idea! When I ran IT infrastructure in the past, this is exactly where I wanted to be, from an infrastructure perspective. To think I’d have the ability to manage and optimize my resources with fewer people, enforce and comply with the controls we had in place, and still utilize all my resources at their optimal level. This is the dream of most IT infrastructure folks.

 

So where is SDI today, really?


Ultimately, we need to think about where we are on that journey toward SDI. Most organizations today could, to some degree, check off these steps, because the tools exist:

 

  • Virtualized resources with compute, storage, and networking (all in different levels of maturity): check
  • Created pools of resources with various products available to do this: check
  • Provided some level of telemetry (information, from the hardware to the software, on health and performance of the platform): check
  • Automated and orchestrated the use of these resources to ensure policies and workload management: check
  • Managed service levels through IT service management software: check

 

Sounds like it’s all in place, right? Well, kind of. The challenge is that we either need to accept a vertical solution wholesale, with one or two add-ons, or assemble it ourselves. On both fronts some integration is necessary, and granted, many vertical solutions are not yet complete. This doesn’t mean there aren’t some good ones out there; rather, some glue is necessary to make it all work.

 

The truth is that while the destination matters, it’s important to think about the journey to create a winning strategy for SDI. Innovation is necessary: it plays a huge role in that strategy going forward, and in our efforts to “create the glue.” Yet IT budget allocations are still heavily weighted toward maintenance. In a 2013 Forrester survey of IT leaders at more than 3,700 companies, respondents estimated that they spend an average of 72 percent of their budgets on “keep-the-lights-on” functions such as ongoing maintenance, while only 28 percent goes toward new projects.[i] This is still consistent with what I hear from many of the IT organizations I talk to.

 

Ultimately, the SDI journey is really where the rubber meets the road. And for most enterprises day to day, the journey is still underway.

 

So where is your organization on that journey?

 

Ed Goldman

@EdLGoldman


Be sure to visit the Intel® IT Center to get the latest resources and expert insights, and check out the planning guide to find out how you can optimize the data center to move toward SDI.



[i] Bartels, Andrew, Christopher Mines, Joanna Clark. Forrsights: IT Budgets and Priorities in 2013. Forrester (April 25, 2013). http://www.forrester.com/Forrsights+IT+Budgets+And+Priorities+In+2013/fulltext/-/E-RES83021?isTurnHighlighting=false&highlightTerm=Forrsights:%2520IT%2520Budgets%2520And%2520Priorities%2520In%25202013

Read more >

Rearchitecting the Data Center for the Internet of Things

In the future, kitchen appliances will talk to you. The refrigerator will let you know when you’re low on milk and whether you have all the ingredients for dinner. The cooktop will display a recipe from the Internet and text you when your pasta water has reached the boiling point. Even better, your appliances will talk to each other—and they won’t burn the toast. When the toast is just about finished, the eggs will start to fry. Such is the vision of what’s possible with the Internet of Things (IoT).

 

Smart kitchens, smart everything


Smart kitchens, and all objects that are connected by the IoT, work similarly. The object—a loaf of bread, a head of lettuce, or a bottle of ketchup—will be labeled with an RFID tag and read with an RFID reader, and the information transmitted to a computer. Or a connected device with a built-in sensor will transmit information via the Internet to a computer. Either way, all of the information sent by readers and sensors is collected, and some of it is processed by data centers and in the cloud.
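
To make that flow concrete, here is a minimal Python sketch, with a hypothetical schema and device names, of how an RFID read and a sensor measurement might be normalized into one kind of record before being forwarded for collection and processing:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Reading:
        source: str      # e.g., "rfid-reader-7" or "cooktop-sensor-2"
        kind: str        # "rfid" or "sensor"
        value: str       # a tag ID, or a measurement such as "100.0 C"
        timestamp: str   # when the reading was taken

    def from_rfid(reader_id: str, tag_id: str) -> Reading:
        return Reading(reader_id, "rfid", tag_id,
                       datetime.now(timezone.utc).isoformat())

    def from_sensor(sensor_id: str, measurement: str) -> Reading:
        return Reading(sensor_id, "sensor", measurement,
                       datetime.now(timezone.utc).isoformat())

    # The fridge logs a tagged milk carton; the cooktop reports boiling water.
    print(from_rfid("rfid-reader-7", "MILK-0042"))
    print(from_sensor("cooktop-sensor-2", "100.0 C"))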


For the IoT to reach its full potential, innovation must continue its forward momentum at the device level, the data center level, and the network level. As it does, there will undoubtedly be challenges for the companies developing the technologies around the IoT. In fact, Gartner has identified a number of issues that will have to be addressed as the technology and services around smart homes and other connected systems mature.

 

Pressures on the data center

 

The volume of data generated by mobile devices and the IoT will place heavy demands on data centers. Servers will be pushed to the limit, and network bandwidth will have to be scaled. Gartner says that by 2020, there could be 26 billion units (smart-sensor devices) worth potentially $300 billion in new incremental revenue. Much of the machine-to-machine data will be processed locally at the edge of the network, but some of that will be sent to the data center for aggregation and further analysis.
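
A minimal sketch of that edge-processing split, with purely illustrative numbers: handle raw readings locally and forward only a compact summary, plus any anomalies, to the data center:

    def summarize_at_edge(readings, threshold):
        """Process raw readings locally; ship only aggregates and anomalies."""
        anomalies = [r for r in readings if r > threshold]
        return {
            "count": len(readings),
            "mean": sum(readings) / len(readings),
            "anomalies": anomalies,   # only these travel to the data center
        }

    temps = [21.0, 21.4, 20.9, 35.2, 21.1]   # one sensor reading out of range
    print(summarize_at_edge(temps, threshold=30.0))
    # {'count': 5, 'mean': 23.92, 'anomalies': [35.2]}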


The IoT will also interact with humans on their mobile devices. By the end of 2014, the number of mobile-connected devices will exceed the number of people on Earth, according to the forecast for mobile data traffic found in the Cisco Visual Networking Index. That same report predicts that while mobile data traffic reached a volume of 1.5 exabytes per month in 2013, by 2018 that amount will grow tenfold to surpass 15 exabytes per month. And some 96 percent of that data will be accounted for by smart devices by 2018. Both networks and storage are among IT’s biggest challenges now, and it’s not going to get any easier.
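
A quick back-of-the-envelope check on those Cisco figures: growing tenfold in five years implies roughly a 58 percent compound annual growth rate:

    # Implied CAGR from 1.5 EB/month (2013) to 15 EB/month (2018).
    start_eb, end_eb, years = 1.5, 15.0, 5
    cagr = (end_eb / start_eb) ** (1 / years) - 1
    print(f"Implied annual growth: {cagr:.1%}")   # Implied annual growth: 58.5%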

 

Intel addresses the opportunities and challenges of the IoT

For Intel, this is a chance to move innovation forward. For example:


  • Intel produces scalable microprocessors, such as the single-core, single-threaded Intel® Quark™ SoC X1000 for small-form-factor designs; the Intel® Atom™ processor E3800 product family for low power and thermal efficiency; and the recently announced Intel® Edison processor, designed specifically to enable the IoT. The processor boasts wireless capabilities and is the size of a postage stamp.
  • As bandwidth requirements increase, Intel continues to deliver network connectivity for the IoT with low-power, small-footprint cellular controllers and platforms that support a wide range of I/O interfaces that can connect to modules supporting cellular, Bluetooth*, ZigBee*, Wi-Fi, and other wireless technologies.
  • Together Intel and McAfee are helping to mitigate the security risks of IoT connectivity.
  • Cloud computing and big data analytics will be foundational technologies helping businesses connect to devices and sensors, process data, and then analyze it for insight. Intel infrastructure technology, such as Intel Xeon processors, Intel Ethernet solutions, and Intel Solid-State Drives, can provide the underpinnings for your next-generation data center.

 

Clearly, smart kitchens are only the tip of the iceberg. Intelligent refrigerators can make an impact on families, but smart homes, businesses, schools, public utilities, government buildings, and installations of all kinds—connected by the Internet of Things and facilitated by cloud computing—can literally change the world.

 

Is your organization ready? Join the #IoT conversation in the comments below. Be sure to check out Intel’s vision for data center architecture of the future from Intel futurist Steve Brown. And read the Intel IT Center planning guide to find out how to optimize your data center so you can deliver greater innovation and embrace the new opportunities that the IoT can bring to your business.

 

Dylan Larson

@idlarson

#ITCenter #IoT #DataCenter

Read more >

The Data Stack – October 2014 Intel® Chip Chat Podcast Round-up

In October we continued to archive livecast episodes from the Intel Developer Forum, with episodes covering robotics and the progress toward artificial intelligence, software-defined infrastructure and intelligent data centers, and emerging technologies in the cloud computing industry. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!

 

Intel® Chip Chat:

  • Meet Jimmy, the Robot Intel Employee – Intel® Chip Chat episode 347: In this archive of a livecast from IDF, Intel Futurist Brian David Johnson stops by with a special guest, Matt Trossen, the CEO of Trossen Robotics. We’re talking to them about the 21st Century Robot Project, which uses open source hardware and software and 3D printing to make customizable robots driven by apps and connected to other devices for an ecosystem of computation. Trossen Robotics makes the Intel® Edison powered Humanoid Exoskeleton and a 3D printer makes the robot skin, giving makers a platform to start from when innovating with robotics. You can order a robot development kit (and check out the book) at www.21stcenturyrobot.com and learn more at www.trossenrobotics.com.
  • Intelligent Infrastructure for the Digital Services Economy – Intel® Chip Chat episode 348: In this archive of a livecast from the Intel Developer Forum, Johnathan Donaldson (@jdonalds), the GM of Software Defined Infrastructure in the Cloud Platforms Group at Intel, stops by to talk about building intelligent infrastructure for enhanced platform and capabilities awareness, as well as dynamic workload placement and configuration. In a digital services economy, intelligent infrastructure is critical for development cycles and time to market. For more information, visit www.intel.com and search for software defined infrastructure.
  • Building Data Centers with Intelligence – Intel® Chip Chat episode 349: In this archive of a livecast from the Intel Developer Forum, we’ve got three great interviewees discussing the intelligent data center. Das Kamhout, a Cloud Orchestration Architect and Principal Engineer at Intel; Scott Carlson, an Information Security Architect for PayPal; and John Wilkes, a Principal Software Engineer at Google are on hand to talk about building, scaling, securing, and adding an intelligent software layer to modern data centers. For PayPal, a top priority is how to protect the money flow and retain customer trust. Google focuses on building smart systems that can scale massively and offer high reliability. For more information, visit: intel.ly/orchestration.
  • Next-gen Computing for Enterprises – Intel® Chip Chat episode 350: In this archive of a livecast from the Intel Developer Forum, Paul Miller (@PaulMiller), the founder of Cloud of Data, chats about various hot topics in enterprise computing, including orchestration and telemetry for on-demand, cost-effective workload distribution; personalized medicine and data protection/privacy; and the evolution of public and private clouds and the emergence of containers. For more information, visit www.cloudofdata.com.

Read more >

Moving to the Cloud – Should the CIO Focus More on Systems of Engagement?

In a short video, Geoffrey Moore describes the evolution in focus from systems of record to systems of engagement. At the end, he highlights the fact that systems of engagement probably require a very different type of IT than systems of record.

 

Systems of record host the key processes and data elements of the enterprise. Most often they have been implemented prior to the year 2000 to ensure enterprise survival in the new millennium. Great efforts and vast amounts of money went into implementing these systems and adapting both the enterprise and the software to one another. Since then, these systems have continued to run reliably and support enterprise operations.

 

But a couple of things happened.

 

Users, accustomed to having information at their fingertips through smartphones and other mobile devices, are now asking for access to the systems of record. New interaction mechanisms, such as social media, offer new sources of information that provide a better understanding of market needs, customer demand, and the overall environment in which the enterprise operates. Indeed, the world is increasingly digital. The boundaries between business and IT are shrinking, as nearly every business interaction these days involves information technology in one way or another.

 

In parallel, time has been shrinking. What we expected, 10 or 15 years ago, to take several hours or days can now be done in a matter of minutes thanks to rapid advancements in IT. Hence the new style of IT, as Meg Whitman calls it, is now required to respond to user needs. Cloud is definitely part of this transformation, as it provides enterprises with the responsiveness and agility required to address today’s ever-changing business environment.

 

As enterprises decide to move to the cloud, the question of where to start typically comes up. In a blog entry I published over a year ago, I described five use cases companies could consider to start their cloud journey. Ultimately it comes down to the decision to consume services from a cloud environment (be it private, managed, or public), which raises the question: which application should be moved to the cloud first? Should we start with a system of record or a system of engagement?


Should we start with Systems of Record?

As Geoffrey Moore states, most systems of record have been in place for the last 10 to 15 years. They are transaction-based and focus on facts, dates, and commitments. They often represent a single source of truth; indeed, they have typically been built around a single database containing mostly structured information. Every time information is changed, the event is logged, so one can quickly find out who did what. Data is kept in these systems for extensive periods to ensure compliance, while access is regulated and contained. They are the core support systems for the operations of the enterprise.

 

A couple of companies focused on the development of such systems, the best known being SAP and Oracle with their financial and manufacturing systems. Other enterprises may have written their own applications, leaving a small team of knowledgeable people to maintain them. These systems are considered business critical because the company can no longer operate without them. They contain the “single version of the truth.” I can speak from my own experience: even if you disagree with the numbers (if deals have been mis-categorized, for example), you will have great difficulty convincing higher levels of management that the data is incorrect.

 

Some enterprises may require increased flexibility in the use of such systems; they may want increased agility and responsiveness in case of a merger or divestiture. But are these the systems we should migrate to the cloud first?

 

What would be the benefit? Well, we could probably run them cheaper, and we might be able to give our users the additional responsiveness, agility, and flexibility they are looking for. But on the other hand, we would have to modernize an environment that runs well and supports the business of the enterprise on a daily basis. Or we could rebuild a brand-new system of record based on the latest version of the software. That may be the only option, but to me it sounds risky.

 

I’ve seen a couple of companies do this, but mostly in the case of a merger, divestiture, consolidation of systems, or a move to a new data center or IT delivery mechanism. And in most of these cases, it turned out that capabilities available in the cloud, including automation, flex-up/flex-down, and service capabilities, were not fully taken into consideration when the application was installed.

 

Now, employees might want access to the systems of record through their mobile devices. They might want a friendlier user interface. They might want to combine functions that are separate in the original system. This is a whole different ballgame.

 

Using web services, we could encapsulate the system of record and give users what they want without disrupting the original environment. Over time, we could update or transform some of the functionality, shutting it down in the original package and replacing it with cloud-based functionality. This reduces risk and shields the end user from the actual package, making it easier to transform the system of record without overwhelming disruptions.
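
Here is a minimal sketch of that encapsulation pattern, using Flask purely for illustration; legacy_lookup() is a hypothetical stand-in for whatever interface the untouched system of record actually exposes:

    from flask import Flask, jsonify

    app = Flask(__name__)

    def legacy_lookup(order_id):
        # Placeholder for a call into the unmodified system of record.
        return {"order_id": order_id, "status": "shipped"}

    @app.route("/orders/<order_id>")
    def get_order(order_id):
        # Mobile apps and new UIs talk to this endpoint; the system of
        # record behind it is never touched directly.
        return jsonify(legacy_lookup(order_id))

    if __name__ == "__main__":
        app.run()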


What is different with Systems of Engagement?

Systems of engagement were developed some time after systems of record, so they involve newer technologies. In particular, many of them are built around SOA principles, making them more suitable to take full advantage of cloud technology. Their objectives are interaction and collaboration: sharing insights, ideas, and nuances. They are used within the frame of business opportunities and projects, making the relationships transient in nature while requiring the responsiveness and agility to be set up quickly. Access is ad hoc and, in many companies, may extend to partner and customer interactions. Most often the information is unstructured, which makes search more difficult.

 

Obviously, systems of engagement are important, but they do not maintain the critical information needed to run a company. They matter as a mechanism to share information, gain consensus, and make decisions; however, they do not maintain the single source of truth. That makes them more suitable for experimentation. Their nature, their needs, and the technologies used to build them make them better candidates for migration to the cloud. So, I would suggest that this is where we should start. Of course, we don’t want our end users to be left in the cold if something happens during the migration. But even in the worst-case scenario, telephones can still be used to exchange information if the system is down for some time.


The importance of data

Tony Byrne argues that Geoffrey Moore simplifies things by creating two clearly different categories. He points out that the issue is probably messier in real life. On the one hand, people are discussing important business decisions in collaboration systems, thereby creating records, while others may want to engage with their colleagues directly from the systems of record. Byrne explains it in simple terms: “your colleagues are creating records while they engage, and seeking to engage while they manage formal documents and participate in structured processes. Ditto for your interactions with customers and other partners beyond your firewall.”

 

Now, we have been able to trigger functionality from within applications for quite some time.  That’s not the issue. And the use of web services described earlier makes this reasonably easy to implement.

 

The focus of Tony’s discussion is how data can be moved between the systems of record and the systems of engagement. Right from the start, you should think about your data sources and information management. Again, technology exists today to access data within and outside a cloud environment. What’s important is to figure out what data should be used when and where, while ensuring that it is properly managed along the way. If you access and change data in a system of record, do it in such a way that all the checking, security, and logging functionality is respected. But this should be nothing new: companies have been integrating external functionality within their systems of record for years.
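
As a rough sketch of that principle (all names hypothetical), route every change through one function that preserves the record system's validation checks and writes an audit trail of who changed what:

    import logging

    logging.basicConfig(level=logging.INFO)
    audit = logging.getLogger("audit")

    RECORDS = {"INV-001": {"amount": 1200, "status": "open"}}

    def update_record(record_id, field, value, user):
        if record_id not in RECORDS:
            raise KeyError(f"unknown record {record_id}")
        if field == "amount" and value < 0:       # validation is preserved
            raise ValueError("amount cannot be negative")
        old = RECORDS[record_id][field]
        RECORDS[record_id][field] = value
        audit.info("user=%s record=%s field=%s old=%r new=%r",
                   user, record_id, field, old, value)   # who did what

    update_record("INV-001", "status", "paid", user="engagement-app")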


Conclusion

When companies look at migrating to the cloud, the question of where to begin is often debated. In my mind, it’s important to show end users the benefits of the cloud early on. That leads me to lean toward starting with systems of engagement, either transforming existing ones or building new ones that will positively surprise users. This will earn their buy-in and give IT more “cloud” to transform the remainder of the IT environment. The real question is: how far do you need to go? Because not everything has to be in the cloud. At the end of the day, you should only move what makes sense.

Read more >

Game Over! Gamification and the CIO

Congratulations to the winner of the CIO Superhero Game!


The first to complete the challenge and become “Super CIO” was Brad Ton of Reindeer Auto Relocation. He was presented with an AWESOME Trophy to display proudly in his office. To win, Mr. Ton had to defeat five evil henchmen and the arch enemy of CIOs everywhere, Complacent IT Guy.


So what is this craziness about CIO superheroes? Just my dorky way of introducing another one of the challenges impacting the CIO today… gamification.


Gamifi-what?


Gamification: the use of game thinking and game mechanics in non-game contexts to engage people (players) in solving problems.


According to Bunchball, one of the leading firms in applying gamification to business processes, gamification is made up of two major components. The first is game mechanics: things like points, leaderboards, challenges, and levels, the pieces that make game playing fun, engaging, and challenging. In other words, the elements that create competition. The second component is game dynamics: things like rewards, achievement, and a sense of competition.


Games are everywhere. People love to compete and people love to play games. What does this have to do with the role of CIO?


Everything!


Want to improve customer engagement? Make it a game! Want employees to embrace a new process? Make it a game! Want to improve performance? Make it a game! Add game mechanics and game dynamics into the next app you are building, layer them on an existing application, and put them in the next process improvement initiative.
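
As a minimal sketch of how those two components might be layered onto ordinary application events (point values and level names are purely illustrative):

    from collections import defaultdict

    POINTS = {"login": 5, "search": 2, "post_blog": 25}             # mechanics
    LEVELS = [(0, "Acolyte"), (50, "Contributor"), (200, "Expert")]  # dynamics

    scores = defaultdict(int)

    def record_event(user, event):
        scores[user] += POINTS.get(event, 0)

    def level(user):
        # Highest level whose threshold the user has reached.
        return max((t, name) for t, name in LEVELS if scores[user] >= t)[1]

    def leaderboard():
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    for e in ["login", "search", "post_blog", "login"]:
        record_event("brad", e)
    record_event("jeff", "login")
    print(leaderboard())   # [('brad', 37), ('jeff', 5)]
    print(level("brad"))   # Acolyte (50 points needed for the next level)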


Even sites like the Intel IT Peer Network use game mechanics to increase engagement. You earn points for all kinds of activity: logging in multiple times, using search to find content, or posting a blog. I find it interesting that while these points earn you badges and levels, they offer minimal intrinsic value. Nevertheless, I found myself disappointed during a recent systems upgrade to have my points reset to zero. Alas, I am an Acolyte again!


Now, back to the CIO Superhero Game.


Recently, I had the opportunity to interview our Superhero CIO winner. Here are just a few of his thoughts surrounding gamification.


What caught your attention enough to want to play the game?


“I really thought that Twitter was a very unique way to play a game.  It was not something I had ever done before.  I’m a frequent reader of all of your writings, so I knew I was in for a learning experience if anything else.  I’m a Twitter addict, so I felt comfortable diving into the CIO world even as someone not extremely knowledgeable on the topic.  Frankly, I’d rather have an hour long dentist appointment than read an instruction manual.  This was easily accessible – right at my fingertips and very self-explanatory.”


Games can engage, games can inspire, games can teach. What is the most important lesson you learned from playing the game?


“While I never truly understood the intricacies of being a CIO, I always appreciated the hard work and dedication it took to get to such a prestigious level.  After going through the CIO Superhero game, I can honestly say that I now genuinely respect it.  The passion behind the game was something I enjoyed.  It wasn’t a bland exercise built with little thought or substance.  I could feel that the game was designed to teach and help grow others into not only understanding new topics previously unknown – but to inspire them into being pro-active in sharing & creating their own ideas.   That is when you know you have something special.  More than the topics themselves, the passion behind what the game was meant to do is what was really able to draw me in.”   


You are not a CIO yourself. Do you think gamification of a process would work in your business, and if so, can you give an example?


“Any tool that can supply a different approach to creating a better understanding of a current process is always worth the attempt.  I also think the concept of Gamification is able to provide a different perspective, which can spark new ways to think about old processes. Implementing gamification could highlight the variables within our industry that can, in turn, allow for a more personable approach.  Cost, scheduling, bookings…logistics are important, but the game tailored to our industry could be much more personal and deal directly with relationships of all parties involved in a relocation. Whereas, a typical goal would be to complete an on-time relocation with small out-of-pocket costs, the game’s primary objective would be to receive positive feedback from customers, clients, etc.   Yes, on-time and small cost could equate to this outcome, but not always.  “Defeat the evil henchmen” by coming up with a new idea to improve customer service, for instance.  By defining the game’s objectives from a relationship standpoint, you can spark new and creative ways of thinking.”


So, there you have it.


Gamification – just another element within the myriad of changes impacting the CIO today. It truly is a “game” changer that can increase adoption and engagement across a variety of businesses and processes.


This is a continuation of a series of posts titled “The CIO is Dead! Long Live the CIO!” looking at the confluence of changes impacting the CIO and IT leadership. #CIOisDead. Next up: “Faster than a Speeding Bullet – The Speed of Change.”

Jeffrey Ton is the SVP of Corporate Connectivity and Chief Information Officer for Goodwill Industries of Central Indiana, providing vision and leadership in the continued development and implementation of the enterprise-wide information technology and marketing portfolios, including applications, information & data management, infrastructure, security and telecommunications.


Find him on LinkedIn.

Follow him on Twitter (@jtongici)

Add him to your circles on Google+

Check out his posts on Intel’s IT Peer Network

Read more from Jeff on Rivers of Thought

Read more >

Unleashing the Digital Services Economy

Today Intel delivered a keynote address to more than 1,000 attendees at the Open Compute Project European Summit in Paris. The keynote, delivered by Intel GM Billy Cox, covered Intel’s strategy to accelerate the digital services economy by delivering disruptive technology innovation founded on industry standards. The foundation of that strategy is an expansion of silicon innovation: augmenting the traditional Xeon, Xeon Phi, and Atom solutions with new standard SKUs and custom solutions based on specific workload requirements. Intel is also expanding its data center SoC product line with the planned introduction of a Xeon-based SoC in early 2015, which is sampling now. This will be Intel’s third-generation 64-bit SoC solution.

 

To further highlight this disruptive innovation, Cox described how Intel is working closely with industry leaders Facebook and Microsoft on separate collaborative engineering efforts to deliver innovative and more efficient solutions for the data center. Intel and Facebook engineers worked together on Facebook’s new Honey Badger storage server for its photo storage tier, featuring the Intel® Atom™ processor C2000, a 64-bit system-on-chip. The high-capacity, high-density storage server offers up to 180 TB in a 2U form factor and is expected to be deployed in 1H’15. Cox also described how Microsoft has completed the second-generation Open Cloud Server (OCSv2) specification. Intel and Microsoft have jointly developed a board for OCSv2 that features a dual-processor design built on the Intel Xeon processor E5-2600 v3 series, enabling 28 cores of compute power per blade.

 

Collaboration with Open Compute reflects Intel’s decades-long history of working with industry organizations to accelerate computing innovation. As one of the five founding board members of the Open Compute Project, we are deeply committed to enabling broad industry innovation by openly sharing specifications and best practices for high-efficiency data center infrastructure. Intel is involved in many OCP working group initiatives, spanning rack, compute, storage, network, C&I, and management, which are strategically aligned with our vision of accelerating rack-scale optimization for cloud computing.

 

At the summit, Intel and industry partners are demonstrating production hardware based on Open Compute specifications. We look forward to working with the community to help push data center innovation forward.

Read more >

Adopting & Enabling OpenStack in the Enterprise: A Look at OpenStack Summit 2014

As I discuss the path to cloud with customers, one topic that is likely to come up is OpenStack. It’s easy to understand the inherent value of OpenStack as an open source orchestration solution, but this value is balanced by ever-present questions about OpenStack’s readiness for the complex environments found in telco and enterprise. Will OpenStack emerge as a leading presence in these environments, and in what timeframe? What have lead adopters experienced with early implementations and POCs? Are there pitfalls to avoid, and how can we use these learnings to drive the next wave of adoption?

 

This was most recently a theme at the Intel Developer Forum, where I caught up with Intel’s Jonathan Donaldson and Das Kamhout on Intel’s strategy for orchestration and its effort to take key learnings from the world’s most sophisticated data centers and apply them to broad implementations. Intel is certainly not new to the OpenStack arena, however, having been involved in the community from its earliest days and, more recently, having delivered Service Assurance Administrator, a key tool that gives OpenStack environments better insight into underlying infrastructure attributes. Intel has even helped lead the charge of enterprise implementation by integrating OpenStack into its own internal cloud environment.
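
For anyone starting a POC of their own, a first smoke test can be as small as the sketch below. It uses the openstacksdk library (pip install openstacksdk) and assumes a cloud named "mycloud" is already configured in your clouds.yaml:

    import openstack

    # Connect using credentials from clouds.yaml and list running servers.
    conn = openstack.connect(cloud="mycloud")

    for server in conn.compute.servers():
        print(server.name, server.status)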

 

These lingering questions about broad enterprise and telco adoption make the upcoming OpenStack Summit a must-attend event for me this month. With an agenda loaded with talks from enterprise and telco experts at companies like BMW, Telefonica, and Workday on their experiences with OpenStack, I’m expecting to get much closer to the art of the possible in OpenStack deployment, and to learn how OpenStack providers are progressing with enterprise-friendly offerings.

If you’re attending the Summit, please check out Intel’s lineup of sessions and technology demonstrations, and connect with the Intel executives on site discussing our engagements in the OpenStack community and our work with partners and end customers to drive broad use of OpenStack in enterprise and telco environments.

If you don’t have the Summit in your travel plans, never fear: Intel will help bring the conference to you! I’ll be hosting two days of livecast interviews from the floor of the Summit, and we’ll also publish a daily recap of the event on the Data Stack with video highlights, the best comments from the Twitterverse, and much more. Please send input on the topics you want to hear about from OpenStack so that our updates match the topics you care about. #OpenStack

Read more >

Going Green With Your Data Center Strategy

For an enterprise attempting to maximize energy efficiency, the data center has long been one of the greatest sticking points. A growing emphasis on cloud and mobile means growing data centers, which by nature demand a gargantuan level of energy in order to function. And according to a recent survey on global electricity usage, data centers are consuming more energy than ever before.

 

George Leopold, senior editor at EnterpriseTech, recently dissected Mark P. Mills’ study entitled “The Cloud Begins With Coal: Big Data, Big Networks, Big Infrastructure, And Big Power.” The important grain of salt surrounding the study is that its funding stemmed from the National Mining Association and the American Coalition for Clean Coal Electricity, but it contains some stark statistics that shouldn’t be dismissed lightly.

 

“The average data center in the U.S., for example, is now well past 12 years old — geriatric class tech by ICT standards. Unlike other industrial-classes of electric demand, newer data facilities see higher, not lower, power densities. A single refrigerator-sized rack of servers in a data center already requires more power than an entire home, with the average power per rack rising 40% in the past five years to over 5 kW, and the latest state-of-the-art systems hitting 26 kW per rack on track to doubling.”
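
Two quick calculations from the numbers quoted above: a 40 percent rise over five years works out to roughly 7 percent per year, and a 26 kW state-of-the-art rack draws more than five times the ~5 kW average:

    avg_rack_kw, rise, years = 5.0, 0.40, 5
    annual_growth = (1 + rise) ** (1 / years) - 1
    print(f"Annual growth in power per rack: {annual_growth:.1%}")   # ~7.0%
    print(f"State of the art vs. average: {26 / avg_rack_kw:.1f}x")  # 5.2x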

 

More Power With Less Energy

 

As Leopold points out in his article, providers are developing solutions to circumvent growing demand while still cutting their carbon footprint. IT leaders can rethink energy usage by concentrating on air distribution and trying assorted cooling methods, ranging from containment cooling to hot huts (a method pioneered by Google). Thorium-based nuclear reactors are also gaining traction in China, but they don’t necessarily solve waste issues.

 

If the average data center in the U.S. is more than 12 years old, IT leaders need to look at the technology powering their data centers and rethink the demand on the horizon. Perhaps the best way to go about this is to think about the foundation of the data center at hand.

 

Analysis From the Ground Up

 

Intel IT has three primary areas of concern when choosing a new data center site: environmental conditions, fiber and communications infrastructure, and power infrastructure. These three criteria have the greatest bearing on the eventual success, or failure, of a data center. So when you think about your data center site against these criteria, ask yourself: Was the initial strategy wise? How does the threat proximity compare to the resource proximity? What does the surrounding infrastructure look like, and how does it affect the data center? If you could go the greenfield route and build an entirely new site, what would you retain and what would you change?

 

Every data center manager in every enterprise has likely considered the almost counterintuitive concept that more power can come with less energy. But doing more with less has been the mantra since the beginning of IT. It’s a challenge inherent to the profession. Here at Intel, we’ll continue to provide invaluable resources to managers looking to get the most out of their data center.

 

To continue the conversation, please follow us at @IntelITCenter or use #ITCenter.

Read more >