Recent Blog Posts

Top Tweets – 3D Avatars, Fast JavaScript and Twitter for Developers

I was just going through my Twitter analytics and thought I would share what topics my followers are the most interested in. While I …


Read more >

Where Are You on the Road to Software-Defined Infrastructure?

In enterprise IT and service provider environments these days, you’re likely to hear lots of discussion about software-defined infrastructure. In one way or another, everybody now seems to understand that IT is moving into the era of SDI.

 

There are good reasons for this transformation, of course. SDI architectures enable new levels of IT agility and efficiency. When everything is managed and orchestrated in software, IT resources, including compute, storage, and networking, can be provisioned on demand and automated to meet service-level agreements and the demands of a dynamic business.

 

For most organizations, the question isn’t, “Should we move to SDI?” It’s, “How do we get there?” In a previous post, I explored this topic in terms of a high road that uses prepackaged SDI solutions, a low road that relies on build-it-yourself strategies, and a middle road that blends the two approaches.

 

In this post, I will offer up a maturity-model framework for evaluating where you are in your journey to SDI. This maturity model has five stages in the progression from traditional hard-wired architecture to software-defined infrastructure. Let’s walk through these stages.

 

Standardized

 

At this stage of maturity, the IT organization has standardized and consolidated servers, storage systems, and networking devices. Standardization is an essential building block for all that follows. Most organizations are already here.

 

Virtualized

 

By now, most organizations have leveraged virtualization in their server environments. While enabling a high level of consolidation and greater utilization of physical resources, server virtualization also accelerates service deployment and facilitates workload optimization. The next step is to virtualize storage and networking resources to achieve similar gains.

 

Automated

 

At this stage, IT resources are pooled and provisioned in an automated manner. In a step toward a cloud-like model, automation tools enable the creation of self-service provisioning portals—for example, to allow a development and test team to provision its own infrastructure and to move closer to a frictionless IT organization.
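
To make the self-service provisioning idea concrete, here is a minimal sketch of what the automation behind such a portal might do. The resource pool, request fields, and capacities are hypothetical and not tied to any particular tool.

```python
# Hypothetical sketch: a dev/test team requests infrastructure from a shared pool,
# and the automation layer grants it immediately if capacity is available,
# with no manual ticket in the loop.
from dataclasses import dataclass

@dataclass
class Request:
    team: str
    vcpus: int
    memory_gb: int
    storage_gb: int

class ResourcePool:
    def __init__(self, vcpus: int, memory_gb: int, storage_gb: int):
        self.vcpus, self.memory_gb, self.storage_gb = vcpus, memory_gb, storage_gb

    def provision(self, req: Request) -> bool:
        """Grant the request if the pool can cover it; otherwise leave it for review."""
        if (req.vcpus <= self.vcpus and req.memory_gb <= self.memory_gb
                and req.storage_gb <= self.storage_gb):
            self.vcpus -= req.vcpus
            self.memory_gb -= req.memory_gb
            self.storage_gb -= req.storage_gb
            return True
        return False  # could queue the request or trigger capacity expansion

pool = ResourcePool(vcpus=512, memory_gb=4096, storage_gb=100_000)
print(pool.provision(Request(team="dev-test", vcpus=32, memory_gb=256, storage_gb=2_000)))  # True
```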

 

Orchestrated

 

At this higher stage of IT maturity, an orchestration engine optimizes the allocation of data center resources. It collects hardware platform telemetry and uses that information to place applications on the servers best suited to them: servers with features that accelerate the workload, in approved locations, and with the assigned levels of trust, for optimal performance. The orchestration engine acts as an IT watchdog that spots performance issues, takes remedial action, and then learns from those events to continue to meet or exceed the customer’s needs.
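
As a rough illustration of that placement logic (not any specific orchestration product), the sketch below scores candidate servers on telemetry. The field names, weights, and constraints are assumptions made up for the example.

```python
# Hypothetical sketch of telemetry-driven placement: score each eligible server
# and pick the best one. Field names and weights are illustrative only.

def score_server(server: dict, workload: dict):
    """Return a placement score (higher is better), or None if the server is ineligible."""
    # Hard constraints: approved location and required trust level.
    if server["location"] not in workload["approved_locations"]:
        return None
    if server["trust_level"] < workload["required_trust"]:
        return None
    # Soft factors from platform telemetry: spare capacity, accelerators, thermals.
    score = (1.0 - server["cpu_utilization"]) * 100
    if workload["wants_accelerator"] and server["has_accelerator"]:
        score += 50
    score -= server["inlet_temp_c"] * 0.5  # lightly penalize servers running hot
    return score

def place(workload: dict, servers: list) -> dict:
    """Choose the eligible server with the highest score."""
    scored = [(score_server(s, workload), s) for s in servers]
    scored = [(sc, s) for sc, s in scored if sc is not None]
    if not scored:
        raise RuntimeError("no eligible server for this workload")
    return max(scored, key=lambda pair: pair[0])[1]
```

In a real orchestration engine the weights would come from policy and the telemetry would be refreshed continuously, but the shape of the decision is the same.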

 

SLA Managed

 

At this ultimate stage—the stage of the real-time enterprise—an organization uses IT service management software to maintain targeted service levels for each application in a holistic manner. Resources are automatically assigned to applications to maintain SLA compliance without manual intervention. The SDI environment makes sure the application gets the infrastructure it needs for optimal performance and compliance with the policies that govern it.
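
One way to picture the control loop behind that behavior is the sketch below; the latency metric, the 200 ms target, and the headroom factor are invented for illustration, not taken from any product.

```python
# Hypothetical SLA reconcile step: compare an application's observed p99 latency
# with its SLA target and return a scaling decision. Numbers are illustrative.

def reconcile(observed_p99_ms: float, sla_p99_ms: float = 200.0,
              headroom: float = 0.7) -> str:
    if observed_p99_ms > sla_p99_ms:
        return "scale_up"    # add resources before the SLA is breached
    if observed_p99_ms < sla_p99_ms * headroom:
        return "scale_down"  # return unused capacity to the shared pool
    return "hold"            # within the target band: no action needed

print(reconcile(observed_p99_ms=240.0))  # -> scale_up
```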

 

In subsequent posts, I will take a closer look at the Automated, Orchestrated, and SLA Managed stages. For now, the key is to understand where your organization falls in the SDI maturity model and what challenges need to be solved in order to take this journey. This understanding lays the groundwork for the development of strategies that move your data center closer to SDI—and the data center of the future.

Read more >

Blurred Boundaries: Hidden Data Center Savings

Every disruptive technology in the data center forces IT teams to rethink the related practices and approaches. Virtualization, for example, led to new resource provisioning practices and service delivery models.

 

Cloud technologies and services are driving similar change. Data center managers have many choices for service delivery, and workloads can be more easily shifted between the available compute resources distributed across both private and public data centers.

 

Among the benefits of this agility, new approaches to lowering data center energy costs have many organizations considering cloud alternatives.

 

Shifting Workloads to Lower Energy Costs

 

Every data center service and resource has an associated power and cooling cost. Energy, therefore, should be a factor in capacity planning and service deployment decisions. But many companies do not leverage all of the energy-related data available to them, in part because it is challenging to make sense of the information generated by servers, power distribution units, airflow and cooling equipment, and other smart devices.

 

That’s why holistic energy management is essential to optimizing power usage across the data center. IT and facilities teams can rely on user-friendly consoles, such as graphical thermal and power maps of the data center, to gain a complete picture of the patterns that correlate workloads and activity levels with power consumption and dissipated heat. Specific services and workloads can also be profiled, and logged data builds a historical database for establishing and analyzing temperature patterns. Having one cohesive view of energy consumption also reduces the need to rely on less accurate theoretical models, manufacturer specifications, or manual measurements that are time consuming and quickly out of date.

 

A Case for Cloud Computing

 

This makes the case for cloud computing as a means to manage energy costs. Knowing how workload shifting will decrease the energy requirements for one site and increase them for another makes it possible to factor in the different utility rates and implement the most energy-efficient scheduling. Within a private cloud, workloads can be mapped to available resources at the location with the lowest energy rates at the time of the service request. Public cloud services can be considered, with the cost comparison taking into account the change to the in-house energy costs.
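
A back-of-the-envelope version of that scheduling decision might look like the sketch below; the site names, utility rates, and power draws are invented for illustration.

```python
# Hypothetical: choose the site where running a workload costs the least in energy,
# given each site's current utility rate and the power the workload would draw there.

sites = {
    "site_a": {"rate_usd_per_kwh": 0.12, "workload_draw_kw": 4.0},
    "site_b": {"rate_usd_per_kwh": 0.07, "workload_draw_kw": 4.5},  # cheaper power, less efficient cooling
    "site_c": {"rate_usd_per_kwh": 0.10, "workload_draw_kw": 3.8},
}

def hourly_energy_cost(site: dict) -> float:
    return site["rate_usd_per_kwh"] * site["workload_draw_kw"]

cheapest = min(sites, key=lambda name: hourly_energy_cost(sites[name]))
print(cheapest, round(hourly_energy_cost(sites[cheapest]), 3))  # site_b 0.315
```

The same comparison extends to public cloud options by placing their effective hourly price alongside the change in in-house energy cost.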

 

From a technology standpoint, any company can achieve this level of visibility and use it to take advantage of the cheapest energy rates across its data center sites. Almost every data center is tied to at least one other site for disaster recovery, and distributed data centers are common for a variety of reasons. Add to this scenario all of the domestic and offshore regions where Infrastructure-as-a-Service is booming, and businesses have the opportunity to tap into global compute resources that leverage lower-cost power, including regions where infrastructure providers can pass along cost savings from government subsidies.

 

Other Benefits of Fine-Grained Visibility

 

For the workloads that remain in the company’s data centers, increased visibility also arms data center managers with knowledge that can drive down the associated energy costs. Energy management solutions, especially those that include at-a-glance dashboards, make it easy to identify idle servers. Since these servers still draw approximately 60 percent of their maximum power requirements, identifying them can help adjust server provisioning and workload balancing to drive up utilization.
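
As a minimal sketch of how a dashboard might surface those idle servers: the 60 percent idle-power figure comes from the paragraph above, while the utilization threshold and sample data are assumptions.

```python
# Hypothetical idle-server report: servers with near-zero utilization still draw
# roughly 60% of their maximum power, making them candidates for consolidation.

servers = [
    {"name": "db-01",  "cpu_util": 0.72, "max_power_w": 450},
    {"name": "web-07", "cpu_util": 0.03, "max_power_w": 400},
    {"name": "etl-02", "cpu_util": 0.05, "max_power_w": 500},
]

IDLE_UTIL_THRESHOLD = 0.10   # assumed definition of "idle"
IDLE_POWER_FRACTION = 0.60   # idle draw cited in the post

for s in servers:
    if s["cpu_util"] < IDLE_UTIL_THRESHOLD:
        wasted_w = s["max_power_w"] * IDLE_POWER_FRACTION
        print(f"{s['name']}: ~{wasted_w:.0f} W drawn while nearly idle")
```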

 

Hot spots can also be identified. Knowing which servers or racks are consistently running hot can allow adjustments to the airflow handlers, cooling systems, or workloads to bring the temperature down before any equipment is damaged or services disrupted.

 

Visibility of the thermal patterns can be put to use for adjusting the ambient temperature in a data center. Every degree that temperature is raised equates to a significant reduction in cooling costs. Therefore, many data centers operate at higher ambient temperatures today, especially since modern data center equipment providers warrant equipment for operation at the higher temperatures.

 

Some of the same energy management solutions that boost visibility also provide a range of control features. Thresholds can be set to trigger notifications and corrective actions in the event of power spikes, and can even help identify the systems that would be at greatest risk when a spike occurs. Servers operating near their power and temperature limits can be proactively adjusted and configured with built-in protection such as power capping.

 

Power capping can also provide a foundation for priority-based energy allocations. The capability protects mission-critical services, and can also extend battery life during outages. Based on knowledge extracted from historical power data, capping can be implemented in tandem with dynamic adjustments to server performance. Lowering clock speeds can be an effective way to lower energy consumption, and can yield measurable energy savings while minimizing or eliminating any discernable degradation of service levels.
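
A simplified illustration of priority-based capping is sketched below; the server priorities, power figures, and budget are hypothetical.

```python
# Hypothetical: when the available power budget tightens (say, during an outage on
# battery), cap the least critical servers first and spare mission-critical systems.

def apply_caps(servers: list, power_budget_w: float) -> float:
    """servers: dicts with name, priority (1 = most critical), draw_w, min_cap_w."""
    total = sum(s["draw_w"] for s in servers)
    for s in sorted(servers, key=lambda s: s["priority"], reverse=True):
        if total <= power_budget_w:
            break
        s["cap_w"] = s["min_cap_w"]           # enforce the cap on this server
        total -= s["draw_w"] - s["min_cap_w"]
    return total

servers = [
    {"name": "erp-db",   "priority": 1, "draw_w": 400, "min_cap_w": 300},
    {"name": "batch-03", "priority": 3, "draw_w": 350, "min_cap_w": 200},
    {"name": "dev-11",   "priority": 4, "draw_w": 300, "min_cap_w": 150},
]
print(apply_caps(servers, power_budget_w=800))  # 750: dev-11 and batch-03 capped, erp-db untouched
```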

 

Documented use cases for real-time feedback and control features such as thresholds and power capping prove that fine-grained energy management can yield significant cost reductions. Typical savings of 15 to 20 percent of the utility budget have been measured in numerous data centers that have introduced energy and temperature monitoring and control.

 

Understand and Utilize Energy Profiles

 

As the next step in the journey that began with virtualization, cloud computing is delivering on its promises: more data center agility, centralized management that lowers operating expenses, and cost-effective support for fast-changing businesses.

 

With an intelligent energy management platform, the cloud also positions data center managers to more cost-effectively assign workloads to leverage lower utility rates in various locations. As energy prices remain at historically high levels, with no relief in sight, this provides a very compelling incentive for building out internal clouds or starting to move some services out to public clouds.

 

Every increase in data center agility, whether from earlier advances such as virtualization or the latest cloud innovations, emphasizes the need to understand and utilize energy profiles within the data center. Ignoring the energy component of the overall cost can hide a significant operating expense from the decision-making process.

Read more >

Increasing Scalability and Cost-Effectiveness for InterSystems Caché with the Intel® Xeon® Processor E7 v3 Family

Healthcare systems are coping with an unprecedented level of change. They’re managing a new regulatory environment, a more complex healthcare ecosystem, and an ever-increasing demand for services—all while facing intense cost pressures.

 

These trends are having a dramatic impact on EMR systems and healthcare databases, which have to maintain responsiveness even as they handle more concurrent users, more data, more diverse workflows, and a wider range of application functionality.

 

As Intel prepared to introduce the Intel® Xeon® processor E7 v3 family, we worked with engineers from Epic and InterSystems to ensure system configurations that would provide robust, reliable performance. InterSystems and VMware were also launching their next-generation solutions, so the test team ran a series of performance tests pairing the Intel Xeon processor E7-8890 v3 with InterSystems Caché 2015.1 and a beta version of VMware vSphere ESXi 6.0.


The results were impressive. “We saw the scalability of a single operational database server increase by 60 percent,” said Epic senior performance engineer Seth Hain. “With these gains, we expect our customers to scale further with a smaller data center footprint and lower total cost of ownership.” Those results were also more than triple the end-user database accesses per second (global references or GREFs) achieved using the Intel® Xeon® processor E7-4860 with Caché® 2011.1.

 


 

These results show that your healthcare organization can use the Intel Xeon processor E7 v3 family to implement larger-scale deployments with confidence on a single, scale-up platform.

 

In addition, if you exceed the vertical scalability of a single server, you can use InterSystems Caché’s Enterprise Cache Protocol (ECP) to scale horizontally. Here again, recent benchmarks show great scalability. A paper published earlier this year reported more than a threefold increase in GREFs for horizontal scalability compared to previous-generation technologies.

 

This combination of outstanding horizontal and vertical scalability, in the cost-effective environment of the Intel® platform, is exactly what’s needed to meet rising demands and create a more agile, adaptable, and affordable healthcare enterprise.


What will these scalability advances mean for your healthcare IT decision makers and data center planners? How will they empower your organization to deliver outstanding patient care and enhance efficiency? I hope you’ll read the whitepapers and share your thoughts. And please keep in mind: Epic uses many factors, along with benchmarking results, to provide practical sizing guidelines, so talk to your Epic system representative as you develop your scalability roadmap.

 

Read the whitepaper about vertical scalability with the Intel Xeon processor E7 v3.

 

Read the whitepaper about horizontal scalability with Intel Xeon processors.

 

Join and participate in the Intel Health and Life Sciences Community

 

Follow us on Twitter: @IntelHealth, @IntelITCenter, @InterSystems, @vmwareHIT

 

Steve Leibforth is a Strategic Relationship Manager at Intel Corporation

Read more >

Network World Article Highlights Advances in iWARP Specification

The industry continues to advance the iWARP specification for RDMA over Ethernet, first ratified by the Internet Engineering Task Force (IETF) in 2007.

 

This article in Network World, “iWARP Update Advances RDMA over Ethernet for Data Center and Cloud Networks,” which I co-authored with Wael Noureddine of Chelsio Communications, describes two new extensions that help developers of RDMA software by aligning iWARP more tightly with the RDMA technologies based on the InfiniBand network and transport, i.e., InfiniBand itself and RoCE. By bringing these technologies into alignment, we move closer to the Open Fabrics Alliance’s goal that application developers need not concern themselves with which underlying network technology is in use: RDMA will “just work” on all of them.

 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others.

Read more >

A Bucket of Wings: A Case Study of Better-Informed Decisions

In my blog Use Data To Support Arguments, Not Arguments To Support Data, I articulated how better-informed decisions are typically made and the role that business intelligence (BI) should play. Shortly after I wrote the blog, I experienced a real-life event that clearly illustrates three main phases of “data-entangled decisions.”

 

Since my family likes to take a day off from cooking on Fridays, we recently visited the deli of our favorite organic grocery store. At the take-out bar, I noticed an unusually long line of people under a large sign reading, “In-House Made Wing Buckets. All You Can Fill. On Sale for $4.99, Regular $9.99.” Well, I love wings and couldn’t resist the temptation to get a few.

 

The opportunity was to add wings (one of my favorite appetizers) to my dinner. But instead of using the special wings bucket, I chose the regular salad bar container, which was priced at $8.99 per pound regardless of the contents. I reasoned that the regular container was an easier-to-use option (shaped like a plate) and a cheaper option (since I was buying only a few wings). My assumptions about the best container to use led to a split-second decision—I “blinked” instead of “thinking twice.”

 

Interestingly, a nice employee saw me putting wings into the regular container and approached me. Wary of my reaction, he politely reminded me of the sale and pointed out that I might pay more with the regular container, because the wing bucket had a fixed cost (managed risk).

 

Although at first this sounded reasonable, when I asked whether my container would weigh enough to cost more, he took it to one of the scales behind the counter and found that it held less than half a pound. This entire ordeal took less than 30 seconds, and now I had the information I needed to make a better-informed decision.

 

This clinched it, because now two factors were in my favor. I knew that even a half pound at the regular $8.99-per-pound rate would cost less than the $4.99 fixed-price bucket. And I knew that the weight of the regular deli container would be deducted at the register, resulting in an even lower price. I ended up paying $4.02.
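
For what it’s worth, the arithmetic is easy to spell out; the net weight below is simply backed out of the $4.02 I paid, and everything else comes from the story.

```python
# Worked version of the wing-bucket comparison.
regular_rate = 8.99   # $/lb for the regular salad-bar container
bucket_price = 4.99   # sale price of the fixed-price wing bucket
paid = 4.02           # what the register actually charged

net_weight_lb = paid / regular_rate
print(f"net weight: {net_weight_lb:.2f} lb")         # about 0.45 lb of wings
print(f"half-pound cost: {0.5 * regular_rate:.2f}")  # 4.50, still under the 4.99 bucket
print(paid < bucket_price)                           # True: pay-by-weight won
```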

 

This everyday event provides a good story for demonstrating the three phases as they relate to better-informed business decisions and the role of BI, or data in general.

 

Phase 1: Reaction

When the business opportunity (wing purchase) presented itself, I made some assumptions with limited data and formed my preliminary conclusion. If it weren’t for the store employee, I would have proceeded to the cash register ignorant of all the data. Sometimes in business, we do precisely the same thing: we don’t validate our initial assumptions, or we make a decision based on our preliminary conclusions, or both.

 

Phase 2: Validation

By weighing the container, I obtained additional data and validated my assumptions quickly enough to still take advantage of the business opportunity, which is exactly what BI is supposed to do. With data, I was able to conclude with a high degree of confidence that I had chosen the right approach and mitigated the risk. This is also typical of how BI can shed more light on many business decisions.

 

Phase 3: Execution

I made my decision by taking into account reliable data to support my argument, not arguments to support data. I was able to do this because I (as the decision maker) had an interest in relying on data and the data I needed was available to me in an objective form (use of the scale). This allowed me to eliminate any false personal judgments (like my initial assumptions or the employee’s recommendation).

  • From the beginning, I could have disregarded the employee’s warning or simply not cared much about the final price. If that had been my attitude, then no data or BI tool would have made a difference in my final decision. And I might have been wrong.
  • On the other hand, if I had listened to the initial argument by that nice employee without backing it up with data, I would have been equally wrong. I would have made a bad decision based on what appeared to be a reasonable argument that was actually flawed.
  • When I insisted on asking the question that would validate the employee’s argument, I took a step that is the business equivalent of insisting on more data because we may not have enough to make a decision.
  • By resorting to an objective and reliable method (using the scale), I was able to remove personal judgments.

 

In 20/20 Hindsight

Now, I realize that business decisions are never this simple. Organizations’ risk is likely measured in the millions of dollars, not cents. And sometimes we don’t have the luxury of finding objective tools (such as the scale) in time to support our decision making. However, I believe that many business decisions mirror the same sequence.

 

Consider the implications if this were a business decision that sent $100 in the wrong direction. Now assume that these types of less-informed or uninformed decisions were made once a week throughout the year by 1,000 employees. The impact would be more than $5 million ($100 × 52 weeks × 1,000 employees).

 

Hence, the cost to our organization increases as:

  • The cost of the error rises
  • Errors are made more frequently
  • The number of employees making the error grows

 

Bottom Line

Better-informed decisions start and end with leadership that is keen to promote the culture of data-driven decision making. BI, if designed and implemented effectively, can be the framework that enables organizations of all sizes to drive growth and profitability.

 

What other obstacles do you face in making better-informed decisions?

 

Connect with me on Twitter (@KaanTurnali) and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

Read more >

Malicious links could jump the air gap with the Tone Chrome extension

The new Google Tone extension is simple and elegant.  On one machine, the browser can generate audio tones which browsers on other machines will listen to and then open a website.  Brilliant.  No need to be connected to the same network, spell out a long URL to your neighbor, or cut/paste a web address into a text message for everyone to join.  But it has some serious potential risks.


Imagine being on an audio bridge, in a coffee shop, or in a crowded space with bored people on their phones, tablets, or laptops. One compromised system may be able to propagate and infect others on different networks, effectively jumping the proverbial ‘air gap’. Malware could leverage the Tone extension to broadcast a series of audible instructions which, if the extension is enabled on targeted devices, would direct everyone to automatically open a malicious website, download malware, or be spammed with phishing messages.

 

Will such tones eventually be embedded in emails, documents, and texts?  A Tone icon takes less space than a URL.  It is convenient but obfuscates the destination, which may be a phishing site or dangerous location.  Tone could also be used to share files (an early usage for the Google team).  Therefore it could also share malware without the need for devices to be on the same networks.  This bypasses a number of standard security controls.  

 

On the less malicious side, but still annoying: what about walking by a billboard and having a tone open advertisements and marketing pages in your browser? The same could happen as you shop in a store, with tones pushing sales, products, and coupons. Will this open a new can of worms, with undesired marketing pushing into our lives?

 

That said, I must admit I like the technology. It has obviously useful functions, fills a need, and shows Google’s knack for making technology a facilitator of information sharing. But we do need controls to protect against unintended and undesired usages, as well as security to protect against equally impressive malicious innovations. My advice: use it with care. Enterprises should probably not enable it just yet, until the dust settles. I, for one, will be watching how creative attackers wield this functionality and how long it takes security companies to respond to this new type of threat.

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

Read more >