I spent a bit more than nine years of my Intel career working in Intel's Russian offices in Moscow and Nizhny Novgorod. When I joined the company more than 11 years ago, I remember that my cube was near the … Read more >
The underpinning for most high-performing clouds is a virtualized infrastructure that pools resources for greater physical server consolidation and processor utilization. With the efficiencies associated with pooled resources, some organizations have considered their virtualized environment “cloud computing.” These organizations are selling themselves short. The full promise of cloud—efficiency, cost savings, and agility—can be realized only by automating and orchestrating how these pooled, virtualized resources are used.
Virtualization has been in data centers for several years as a successful IT strategy for consolidating servers by deploying more applications on fewer physical systems. The benefits include lower operational costs, reduced heat (from fewer servers), a smaller carbon footprint (less energy required for cooling), faster disaster recovery (virtual provisioning enables faster recovery), and more hardware flexibility.
Source: Why Build a Private Cloud? Virtualization vs. Cloud Computing. Intel (2014).
Cloud takes efficiency to the next level
A fully functioning cloud environment does much more. According to the National Institute of Standards and Technology (NIST), a fully functioning cloud has five essential characteristics:
- On-demand self-service. A consumer can unilaterally provision computing capabilities.
- Broad network access. Capabilities are available over the network and accessed through standard mechanisms (e.g., mobile phones, tablets, laptops, and workstations).
- Resource pooling. The provider’s computing resources are pooled to serve multiple consumers.
- Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward, commensurate with demand.
- Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (for example, storage, processing, bandwidth, and active user accounts).
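The “measured service” characteristic is easy to picture in code. Below is a minimal, hypothetical sketch (not tied to any real cloud platform; all names are invented for illustration) of a metering layer that tracks per-consumer resource use at a coarse level of abstraction:

```python
from collections import defaultdict

class UsageMeter:
    """Toy metering service: records per-consumer resource usage
    at a coarse abstraction (e.g., CPU-hours, GB stored)."""
    def __init__(self):
        self.usage = defaultdict(float)

    def record(self, consumer, resource, amount):
        # accumulate usage so it can later be reported and billed
        self.usage[(consumer, resource)] += amount

    def report(self, consumer):
        # roll up everything recorded for one consumer
        return {res: amt for (c, res), amt in self.usage.items() if c == consumer}

meter = UsageMeter()
meter.record("team-a", "cpu_hours", 12.5)
meter.record("team-a", "storage_gb", 200)
meter.record("team-b", "cpu_hours", 3.0)
print(meter.report("team-a"))  # {'cpu_hours': 12.5, 'storage_gb': 200.0}
```

A real cloud stack meters at many more points, but the principle is the same: usage is recorded automatically, per consumer, so resource use can be monitored, controlled, and reported transparently.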
Different but complementary strategies
A Forbes* article describes how these highly complementary strategies work together. Virtualization abstracts compute resources—typically as virtual machines (VMs)—with associated storage and networking connectivity. The cloud determines how those virtualized resources are allocated, delivered, and presented. While virtualization is not required to create a cloud environment, it does enable rapid scaling of resources, which is why the majority of high-performing clouds are built upon virtualized infrastructures.
In other words, virtualization pools infrastructure resources and acts as a building block to enhance the agility and business potential of the cloud environment. It is the first step in building a long-term cloud computing strategy that could ultimately include integration with public cloud services—a hybrid deployment model—enabling even greater flexibility and scalability.
With a virtualized data center as its foundation, an on-premises, or private, cloud can make IT operations more efficient as well as increase business agility. IT can offer cloud services across the organization, serving as a broker with providers and avoiding some of the risks associated with shadow IT. Infrastructure as a service (IaaS) and the higher-level platform as a service (PaaS) delivery models are two of the services that can help businesses derive maximum value from the cloud.
Virtualization and cloud computing go hand in hand with virtualization as a critical first step toward fully achieving the value of a private cloud investment and laying the groundwork for a more elastic hybrid model. Delivery of IaaS and PaaS creates exceptional flexibility and agility—offering enormous potential for the organization with IT as a purveyor of possibility.
How did you leverage virtualization to evolve your cloud environment? Comment below to join the discussion.
#ITCenter #Virtualization #Cloud
This article is a continuation from my last article on the topic of using Intel XDK IoT Edition with Mashery APIs to make REST API calls using Node.js.
This article assumes you… Read more
In this article I will be demonstrating how to create a basic Node.js application using Intel Mashery’s JamBase API with Intel Edison and XDK. You will find accompanying GitHub source in the provided… Read more
IBM’s long-time Information on Demand conference has changed its name, and its focus. Big Blue’s major fall conference is now called IBM Insight, and it will take over Mandalay Bay in Las Vegas from Oct. 26 to 30. The name change reflects a key shift in the tech industry: Beyond managing vast amounts of information—increasingly, technology’s role is to extract value and insights from a vast range of information sources. It’s our job to create actionable information to help businesses succeed and gain a competitive advantage in a highly competitive marketplace.
Intel and IBM have worked together for over 20 years to help their customers achieve precisely that. Joint engineering built into IBM and Intel solutions, such as IBM DB2 with BLU Acceleration* optimized for Intel® Xeon® processors, delivers dramatic performance gains that can transform big data into vital business insights more quickly, all while lowering costs and power consumption.
The other word that describes IBM Insight is “big.” Not only is the focus big data, but the event itself is huge. With over 13,000 attendees and over 700 sessions and keynotes, IBM Insight is the largest big data conference in the world. I’m looking forward to catching up with the latest perspectives and emerging technologies in the fast-evolving world of data analytics.
Be sure not to miss the following sessions, where you’ll discover the newest advances in data management and analytics from Intel and IBM (all events are in the Mandalay Bay South Convention Center).
- IBM Big SQL: Accelerating SQL and Big Data Performance on Intel Architecture – Session 5191A (10:15-11:15am, Oct. 27, Jasmine G). Jantz Tran, an Intel Software Performance Engineer, and IBM’s Simon Harris provide an overview of IBM Big SQL* and describe the breakthrough performance it delivers when run on Intel Xeon servers and platform products.
- Ideas for Implementing Big Data on Intel Architecture – Session 7188A (2-2:20pm, Oct. 27, Solution Expo Theater). In this session, Jim Fister, lead strategist and director of business development for Intel’s Data Center Group, will discuss the opportunity for data analytics, the case for driving analytics ahead of schedule, and options for implementing your solutions using Intel Architecture and IBM software.
- TPC-DI: An Industry Standard Benchmark for Data Integration – Session 5193A (10-11am, Oct. 28, Jasmine G). Along with IBM software engineers Ron Liu and Sam Wong, Jantz Tran returns to introduce TPC-DI, a new industry standard for measuring and comparing the performance of data integration (DI) or ETL systems. They will discuss early observations from running TPC-DI with IBM Infosphere Datastage* on the latest generation Intel Xeon systems, and provide best practice optimization recommendations for Datastage deployments.
- Managing Internet of Things Data on the Edge and in the Cloud with Intel and IBM Informix* Solutions – Session 6140A (10-11am, Oct. 28, Banyan F). IBM’s Kevin Brown and Preston Walters, who leads Intel’s technical enablement and co-marketing of IBM software products on Intel technology, describe the challenges of Internet of Things and Internet of Everything requirements, and how data architecture and technologies from Intel and IBM are responding to these challenges in both edge devices and cloud infrastructure.
- Intel and IBM Software: A Long History – Session 7189A (2-2:20pm, Oct. 28, Solution Expo Theater). In this session, Jim Fister will cover the history of Intel and IBM’s relationship, along with a discussion of performance enhancements for IBM OLTP and data analytics software using the latest Intel platforms.
- Optimizing Mixed Workloads and High Availability with IBM DB2 10.5 on an Intel® Architecture – Session 5141A (3-4pm, Oct. 28, Banyan F). In this session, Kshitij Doshi, a principal engineer in Intel’s Software and Services Group, and Jessica Rockwood, an IBM senior manager for DB2 performance, provide an overview of the latest Intel® Xeon® E5-2600 V3 series processor architecture and its benefits for transaction processing workloads with IBM DB2 10.5 with BLU Acceleration.
- Goodbye Smart; Hello Smarter: Enabling the Internet of Things for Connected Environments – Session 6402A (11:15am-12:15pm, Oct. 29, Banyan F). Intel’s Preston Walters, with Oliver Goh, CEO of Shaspa GmbH, discuss how Intel, Shaspa and IBM provide intelligent solutions for connected environments that enable local analytics and decision-making to improve business and consumer services. Attend this session to see a demo of the Internet of Things in action.
- Using IBM Bluemix* and IBM SoftLayer* to Run IBM InfoSphere Information Server* on an Intel® Technology-Powered Cloud – Session 5198A (10-11am, Oct. 30, Jasmine E). In this session, Jantz Tran and IBM’s Beate Porst and Sam Wong explain how IBM InfoSphere Information Server* works in the cloud and present data on scaling performance. They also discuss bare metal and virtualization options available with IBM SoftLayer.
At IBM Insight, Intel will be sharing a booth with our friends at Lenovo. Stop by to say hello and check out the latest Lenovo tablets, which rate highly in performance and security in the recent report Do More, Faster with IBM Cognos* Business Intelligence. Download the report to learn how tablets and servers based on Intel processors provide unparalleled improvements to speed and capabilities for IBM Cognos BI workloads.
Follow me at @TimIntel and watch for my Vine videos and man-on-the-street commentary and impressions from IBM Insight. Follow @IntelITCenter to join the dialogue with Intel IT experts, and follow @IntelSoftware to engage with Intel’s software community.
See you in Las Vegas!
The holidays are a busy time with all the preparations to get ready. Intel-powered tablets can lend a hand in making your holidays such as Halloween memorable and fun without too much pain. If you’re behind on selecting your costume … Read more >
Two Generations of Cray Supercomputers to be Powered by Intel® Xeon® Processors
Cray has announced that the Met Office, the United Kingdom’s national weather service, widely recognized as one of the world’s most accurate weather forecasting organizations, has selected Cray to provide multiple Cray XC* supercomputers and Cray Sonexion* storage systems.
The $128 million contract, which spans multiple years, will include Cray XC40 systems as well as next-generation Cray XC systems with current and future Intel Xeon processors.
The new Cray supercomputers at the Met Office will provide 16 times more supercomputing power than current systems, and will be used for operational weather prediction and climate research.
According to Cray’s news release, the U.K.’s Met Office uses more than 10 million weather observations a day and an advanced atmospheric model to create 3,000 tailored forecasts and briefings each day, delivered to customers including government, businesses, the general public, the armed forces and other organizations.
According to Cray CEO, Peter Ungaro, “The award is symbolic for Cray on a number of fronts – it demonstrates that our systems continue to be the supercomputers of choice for production weather centers across the globe, that our close relationship with Intel is providing customers with enhanced capabilities today and into the future and it reinforces the role that Cray plays in impacting society on a daily basis in a wide range of areas.”
Mike Bernhardt is the Community Evangelist for Intel’s Technical Computing Group
Intel announced today the competition schedule and the final selection of teams that will participate in the second annual Intel Parallel Universe Computing Challenge (PUCC) at SC14 in New Orleans, November 17-20. Each team will play for a charitable organization to which Intel will donate $26,000 in recognition of the 26th anniversary of the Supercomputing conference.
Returning from last year will be the defending champions from Germany, The Gaussian Elimination Squad. The other finalist from last year, The Coding Illini, representing the National Center for Supercomputing Applications (NCSA) and the University of Illinois at Urbana–Champaign, will also make a return appearance.
Three other organizations, albeit with new team names, are also returning for this year’s competition. They are The Brilliant Dummies from South Korea’s Seoul National University, the Invincible Buckeyes from the Ohio Supercomputing Center, and the Linear Scalers from Argonne National Laboratory.
Three new teams will be competing in the 2014 competition, including SC3 (Super Computación y Calculo Cientifico) representing Latin America, Taiji representing China, and EXAMEN representing the EXA2CT project in Europe.
The 2014 PUCC will kick off on Monday evening, November 17 at 8 p.m. during the SC14 exhibition hall Opening Gala with The Gaussian Elimination Squad facing the Invincible Buckeyes. Additional matches will be held Tuesday through Thursday of the SC14 conference with the final match scheduled for Thursday afternoon at 1:30 p.m. I will once again be hosting the challenge along with my Intel partner James Reinders.
Coding and Trivia Challenge Combine in an Entertaining Stage Show
The PUCC features an 8-team single elimination tournament and is designed to raise awareness of the importance of parallelization for improving the performance of technical computing applications.
Each elimination match will consist of two rounds. The first round is a rapid-fire trivia challenge consisting of technical parallel computing questions interlaced with general HPC and SC Conference history trivia.
The second round is a parallel code optimization challenge where participants examine a piece of code that has been deconstructed from its optimized, parallel version and apply any code changes they believe will improve the overall performance. Points will be awarded during both rounds and the team with the most combined points will move on to the next match in the tournament. The audience will be watching the exercise on large screens while the hosts discuss the steps the teams are taking, and engage the audience in trivia for the chance to win prizes.
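To give a flavor of the kind of transformation the code round calls for (this is an invented illustration, not an actual competition problem), the classic move is taking a serial loop and spreading its iterations across cores:

```python
# Illustrative only: a serial reduction and a chunked parallel version.
from concurrent.futures import ProcessPoolExecutor
import math

def work(x):
    # stand-in for a per-iteration computation
    return math.sqrt(x) * math.sin(x)

def partial_sum(r):
    # sum of work() over one contiguous chunk of the iteration space
    return sum(work(x) for x in r)

def serial(n):
    return partial_sum(range(n))

def parallel(n, workers=4):
    # split [0, n) into one contiguous chunk per worker
    bounds = [(i * n // workers, (i + 1) * n // workers) for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, [range(a, b) for a, b in bounds]))
```

The parallel version must compute the same result as the serial one (up to floating-point summation order), which is the correctness constraint any such optimization has to preserve while chasing speedups.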
Team Captains Reveal Their Preparation Plans
Here’s what some of the team captains had to say on the upcoming challenge:
- Mike Showerman, captain of the team that placed second last year, said he hopes this year’s team goes all the way to the championship. “The Coding Illini will be even fiercer this year and will take every opportunity to bring the title home.”
- Examen Team Captain David Horak with IT4Innovations humorously suggested “We want to check whether we finally reached the state when bachelor students surpass us in HPC knowledge.”
- Another team with a sense of humor is the group from Seoul National University who returns as The Brilliant Dummies. Team Captain Wookeun Jung, a PhD student, says “Our team name is TBD, The Brilliant Dummies, because we are brilliant enough to solve the complicated HPC problems, and dummies that only can solve those HPC problems. Of course, whether we would be brilliant in the competition or not is TBD.”
- When asked how the Linear Scalers would prepare, Kalyan “Kumar” Kumaran, manager of Performance Engineering and Data Analytics in the Argonne Leadership Computing Facility, said “Pre-competition stretching, and coffee. We will start memorizing sections from James Reinders’ books.”
- Karen Tomko, Scientific Applications Group Manager at the Ohio Supercomputer Center and captain of the Invincible Buckeyes, offered “We’ll do our homework for the trivia, brush up on the parallel constructs, look at some Fortran codes, and make sure we have at least one vi user on the team.”
- Gilberto Díaz, infrastructure chief of the supercomputer center at Universidad Industrial de Santander (UIS), assembled a Latin American team called SC3, for Super Computación y Calculo Cientifico. He asserted “We would like to promote and develop more widespread awareness and use of HPC in our region. In addition to the excitement of participating in the 2014 event, our participation will help us to prepare students of master’s and PhD programs to better understand the importance of code modernization, as well as preparing them to compete in future competitions.” Gilberto has since passed responsibility for team captain to Carlos Barrios, a professor at UIS.
- Georg Hager, team captain for last year’s champion Gaussian Elimination Squad and senior research scientist at Germany’s Erlangen Regional Computing Center, said “The PUCC is about showing knowledge and experience in the field of HPC. This is exactly what we are trying to build in the German institutions that were part of the team at SC13, and so we are eagerly waiting for our next chance to show that we have done well on that.”
By Scott Allen
One of the key topics that had everyone talking at VMworld 2014 in San Francisco was the Software-Defined Infrastructure, or SDI—an advance on the traditional data center that makes it easier and faster for businesses to scale network services to accommodate changing needs. The SDI extends the benefits of virtualization, which include increased uptime and automated provisioning, plus reduced server sprawl and lower energy costs, to the realm of networking and storage infrastructures.
This more fully virtualized environment is a stepping stone to the increased flexibility and cost savings of the hybrid cloud—but it also presents real challenges to traditional data center security solutions.
Today’s data center security technologies are designed for existing data centers—which makes moving to an SDI a chancy proposition for most businesses. Current security solutions are largely blind to what actually goes on in a virtualized data center, with its dynamic provisioning and virtual machines. Running traditional security solutions in a fully virtualized environment can result in gaps in protection and coverage, make security management inefficient and difficult, and create problems with compliance.
So I was encouraged by the number of security-related announcements at VMworld that point to advances in protection for servers deployed in physical, virtualized and cloud environments—and that address the security challenges associated with SDI.
Intel® Security, a newly formed group within Intel that focuses on security projects and technologies, announced the Intel® Security Controller, a software-defined approach to securing virtualized environments. This security controller integrates the McAfee* Virtual Network Security Platform, an advanced intrusion protection system (IPS) optimized for Intel® Xeon®-based servers, into VMware* NSX, the industry-leading technology for network virtualization. This combination allows users to virtualize individual security services and synchronize policy and service injection within workflows by providing an abstraction layer between the security and networking infrastructures. This in essence creates software-defined security, allowing businesses to automate their existing security management applications to span security policies across physical and virtual network infrastructures. This leads to cost-effective security protection of virtualized workflows within an SDI and simplified management and deployment.
Also at VMworld, McAfee (now part of Intel Security) announced major advancements to its Server Security Suites portfolio, offering comprehensive protections for hybrid data center deployments, including software-defined infrastructures. Because significant amounts of data are stored on servers, they are attractive targets for hackers, and providing your server environment with integrated, broad-based protection is essential. McAfee’s new Server Security Suites release incorporates a number of individual security technologies into a single, easy-to-manage solution that extends visibility into your underlying server infrastructure whether it is on-premises or off. It shields physical, virtual and cloud environments from stealthy attacks so businesses like yours can safely explore the flexibility and scalability of hybrid infrastructures.
VMware also announced a new program to help businesses and organizations meet compliance mandates for regulated workloads in cloud infrastructures. VMware’s Compliance Reference Architecture Frameworks provide a programmatic approach that maps VMware and Intel security products to regulatory compliance in cloud environments for industries with strict security or privacy mandates. The framework provides a reference architecture, regulation-specific guidance, and thought leadership—plus advice for software solutions that businesses require to attain continuous compliance. These frameworks will help take the guesswork out of meeting strict regulatory guidelines when using cloud-based infrastructures for restricted workloads.
The first available framework is the VMware* FedRAMP Compliance Reference Architecture Framework, which addresses the needs of organizations to enable and maintain a secure and compliant cloud environment for U.S. government agencies. Further compliance frameworks from VMware and Intel are in the works, including one for HIPAA.
VMware and Intel are building the foundations for software-defined security, making it easier—and safer—than ever for your business to achieve the benefits of virtualization and the hybrid cloud.
This blog post has been republished from 01.org.
About a year ago, the Ozone platform abstraction layer started to take shape in Chromium*, and we were excited by how easily we could support Wayland… Read more
The latest team to announce participation in the Intel Parallel Universe Computing Challenge (PUCC) at SC14 is the Examen representing the EXA2CT project in Europe. The “Exa” in “Examen” is pronounced like exascale, of course!
The EXA2CT project, funded by the European Commission, comprises 10 partners—including IT4Innovations, Inria, and Università della Svizzera Italiana (USI), which are represented by Examen team members. The project aims to integrate the development of algorithms and programming models tuned to future exascale supercomputer architectures.
The Examen include David Horak (IT4Innovations, Czech Republic), Lubomir Riha (IT4Innovations, Czech Republic), Patrick Sanan (USI, Switzerland), Filip Stanek (IT4Innovations, Czech Republic), and Francois Rue (not pictured; Inria, France).
The team plans to leverage those hours spent developing algorithms and programming models into a serious run at the PUCC. Here’s what Examen Team Captain David Horak with IT4Innovations in the Czech Republic had to say about his team:
Q. What is the role of your team members in the EXA2CT project?
A. The team provides system level libraries support for MPI-3.0 implementation and GASPI implementation for non-blocking communication. They are also involved in the development of ESPRESO and FLLOP libraries with communication hiding and avoiding techniques for exascale computers.
Q. What do the Examen hope to accomplish by participating in the Intel Parallel Universe Computing Challenge?
A. We want to check whether we finally reached the state when bachelor students surpass us in HPC knowledge. ;-)
Q. What are the most prevalent high performance computing applications in which your team members are involved?
A. We are most involved with Intel Cluster Studio (Intel MPI), OpenMPI, GPI-2, PETSc and FLLOP.
Q. How will your team prepare for the competition?
A. Drink a lot of coffee. ;-)
Q. SC14 is using the theme “HPC Matters” for this year’s conference. Can you explain why HPC matters to you?
A. HPC is fun and makes the world more interesting by enabling scientists to better explore it.
The Intel Parallel Universe Computing Challenge kicks off on Monday night at the SC14 opening gala. See the schedule and learn more about the contestants on the Intel SC14 Web page.
One of the most fascinating—and challenging—aspects of using technology in the retail and financial services space is how to ensure the protection of personal data on open platforms. In the guest blog post below, Chris Lybeer, Vice President of Strategic … Read more >
The post Protecting Consumer Information: NCR and Intel Team Up for a New Approach appeared first on IoT@Intel.
My colleague Ignacio Alvarez, Research Scientist, Systems Prototyping & Infrastructure, Intel Labs, works closely with software and hardware engineers, user experience researchers, and designers to prototype concepts in the field of intelligent transportation. In his blog post below, Ignacio writes … Read more >
The post Intel Labs’ Orion Races In-Vehicle Infotainment Onto the IoT appeared first on IoT@Intel.
The bring-your-own-device-to-work trend is deeply entrenched in the healthcare industry, with roughly 89 percent of the nation’s healthcare workers now relying on their personal devices in the workplace. While this statistic—supplied by a 2013 Cisco partner network study—underscores the flexibility of mHealth devices in both improving patient care and increasing workflow efficiency, it also shines a light on a nagging, unrelenting reality: mobile device security remains a problem for hospitals.
A more recent IDG Connect survey concluded the same, as did a Forrester Research survey that was released earlier this month.
It’s not that hospitals are unaware of the issue; indeed, most HIT professionals are scrambling to secure every endpoint through which hospital staff access medical information. The challenge is keeping pace with a seemingly endless barrage of mHealth tools.
As a result:
- 41 percent of healthcare employees’ personal devices are not password protected, and 53 percent of them are accessing unsecured WiFi networks with their smartphones, according to the Cisco partner survey.
- Unsanctioned device and app use is partly responsible for healthcare being more affected by data leakage monitoring issues than other industries, according to the IDG Connect survey.
- Lost or stolen devices have driven 39 percent of healthcare security incidents since 2005, according to Forrester analyst Chris Sherman, who recently told the Wall Street Journal these incidents account for 78 percent of all reported breached records originating from healthcare.
Further complicating matters is the rise of wireless medical devices, which usher in their own security risks beyond data breaches.
So, where should healthcare CIOs focus their attention? Beyond better educating staff on safe computing practices, they need to know where the hospital’s data lives at all times, and restrict access based on job function. If an employee doesn’t need access, he doesn’t get it. Period.
Adopting stronger encryption practices also is critical. And, of course, they should virtualize desktops and applications to block the local storage of data.
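The “access based on job function” advice is essentially role-based access control. Here is a minimal sketch of the idea; the roles and data sets are hypothetical examples, not a real hospital schema:

```python
# Hypothetical role-to-permission mapping: an employee sees only the
# data their job function requires.
ROLE_PERMISSIONS = {
    "physician": {"patient_records", "lab_results"},
    "billing":   {"billing_records"},
    "reception": {"appointment_schedule"},
}

def can_access(role, dataset):
    # default-deny: unknown roles and unlisted data sets get nothing
    return dataset in ROLE_PERMISSIONS.get(role, set())

print(can_access("physician", "lab_results"))      # True
print(can_access("reception", "patient_records"))  # False
```

The key property is the default-deny stance: access must be explicitly granted for a job function, so a lost or compromised device inherits only the narrow permissions of its owner's role.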
What steps is your healthcare organization taking to shore up mobile device security? Do you have an encryption plan in place?
As a B2B journalist, John Farrell has covered healthcare IT since 1997 and is a sponsored correspondent for Intel Health & Life Sciences.
Read John’s other blog posts
Workload Optimized Part A: Enabling video transcoding on the new Intel-powered HP Moonshot ProLiant server
TV binge watching is a favorite pastime of mine. Over an eight-week span between February and March of this year, I binge-watched five seasons of a TV series. I watched it on my Ultrabook, on a tablet at the gym, and even a couple of episodes on my smartphone at the airport. It got me thinking about how the episodes get to me, as well as my viewing experience on different devices.
Let me use today’s HP Moonshot server announcement to talk about high-density servers. You may have seen that HP today announced the Moonshot ProLiant m710 cartridge. The m710, based on the Intel® Xeon® processor E3-1284L v3 with built-in Intel® Iris Pro Graphics P5200, is the first microserver platform to support Intel’s best media and graphics processing technology. The Intel® Xeon® processor E3-1284L v3 is also a great example of how Intel continues to deliver on its commitment to provide our customers with industry leading silicon customized for their specific needs and workloads.
Now back to video delivery. Why does Intel® Iris™ Pro Graphics matter for Video Delivery? The 4k Video transition is upon us. Netflix already offers mainstream content like Breaking Bad in Ultra HD 4k. Devices with different screen sizes and resolutions are proliferating rapidly. The Samsung Galaxy S5 and iPhone 6 Plus smartphones have 1920×1080 Full HD resolution while the Panasonic TOUGHPAD 4k boasts a 3840×2560 Ultra HD display. And, the sheer volume of video traffic is growing. According to Cisco, streaming video will make up 79% of all consumer internet traffic by 2018 – up from 66% in 2013.
At the same time, the need to support higher quality and more advanced user experiences is increasing. Users have less tolerance for poor video quality and streaming delays. The types of applications that Sportvision pioneered with the yellow 10-yard marker on televised football games are only just beginning. Consumer depth cameras and 3D video cameras are just hitting the market.
For service providers to satisfy these video service demands, network- and cloud-based media transcoding capacity and performance must grow. Media transcoding is required to convert video for display on different devices, to reduce the bandwidth consumed on communication networks, and to implement advanced applications like the yellow line on the field. Traditionally, high-performance transcoding has required sophisticated hardware purpose-built for video applications. But since the 2013 introduction of the Intel® Xeon® Processor E3-1200 v3 family with integrated graphics, application and system developers can create very high-performance video processing solutions using standard server technology.
These Intel Xeon processors support Intel® Quick Sync Video and applications developed with Intel® Media Server Studio 2015. This technology enables access to acceleration hardware within the Xeon CPU for the major media transcoding algorithms. This hardware acceleration can provide a dramatic improvement in processing throughput over software-only approaches, at a much lower cost than customized hardware solutions. The new HP Moonshot ProLiant m710 cartridge is the first server to incorporate both Intel® Quick Sync Video and Intel® Iris Pro Graphics, making it a great choice for media transcoding applications.
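As a rough illustration of how a transcoding pipeline might target this hardware, the snippet below builds an ffmpeg command line that uses Quick Sync (QSV) decode acceleration and the `h264_qsv` encoder. This is a sketch, not a tuned pipeline: it assumes an ffmpeg build with Intel Media SDK/QSV support, and the file names and bitrate are placeholders.

```python
# Sketch: constructing a Quick Sync-accelerated transcode command.
def qsv_transcode_cmd(src, dst, bitrate="4M"):
    return [
        "ffmpeg",
        "-hwaccel", "qsv",    # hardware-accelerated decode via Quick Sync
        "-i", src,
        "-c:v", "h264_qsv",   # Quick Sync H.264 encoder
        "-b:v", bitrate,      # target bitrate for the device profile
        dst,
    ]

# e.g., a lower-bitrate delivery rendition for a 1080p handset:
cmd = qsv_transcode_cmd("master_4k.mp4", "mobile_1080p.mp4")
# run with: subprocess.run(cmd, check=True)
```

In practice a service would generate one such command per target device profile (resolution, codec, bitrate), which is exactly the fan-out that makes hardware-assisted transcoding density so valuable.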
As video and other media take over the internet, economical, fast, and high-quality transcoding of content becomes critical to meeting user demands. Systems built with special-purpose hardware will struggle to keep up with these demands. A server solution like the HP Moonshot ProLiant m710, built on standard Intel architecture technology, offers the flexibility, performance, cost, and future-proofing the market needs.
In part B of my blog I’m going to turn the pen over to Frank Soqui. He’s going to switch gears and talk about another workload – remote workstation application delivery. Great processor graphics are not only great for transcoding and delivering TV shows like Breaking Bad, they’re also great at delivering business applications to devices remotely.
By Frank Soqui, General Manager, Technical Compute Cloud and Client, Data Center Group, Intel Corporation
It's clear that if a business wants to remain competitive in today's global business climate, it has to employ technologies that help its technical employees (engineers, researchers, analysts, scientists, etc.) collaborate at an accelerated pace. They need to be able to solve complex and interconnected problems in time to remain competitive or relevant within their industry. They need access to their primary tool, the workstation, anywhere and anytime.
With that in mind, we have been working with Citrix to optimize XenApp performance on Intel processor graphics solutions, such as the HP Moonshot ProLiant m710 cartridge based on the Intel® Xeon® processor E3-1284L v3 with built-in Intel® Iris Pro Graphics P5200. This solution makes it possible to extend the workstation experience to more users by delivering a rich, high-performance workstation experience to devices ranging from tablets and smartphones to Ultrabooks. These processor technologies help change the game by accelerating the pace of collaboration, and can be vital to delivering the robust user experience necessary to securely collaborate with partners and customers anywhere, at any time.
What matters most is the delivered experience. This solution, for the first time, can deliver workstation-class experience and graphics in a virtual environment. It is capable of delivering rich applications as a service (RaaS) to engineers or designers engaged in CAD; artists or animators doing content creation; or knowledge workers engaged in business logic, database applications, 2D graphics, audio/video, or asynchronous I/O applications. The solution is compelling in that it transforms a tablet, smartphone, or Ultrabook into a collaboration tool at any time, in any place, and with a compelling professional visual experience.
This is made possible because the Intel processor technology employs a zero-copy workflow that uses the same cache and memory as the CPU and its dedicated eDRAM. It also employs Intel® Quick Sync Video technology, which accelerates decoding and encoding for significantly faster conversion times while freeing the processor to complete other tasks, resulting in an enhanced overall user experience.
Beyond the hardware, Intel® Graphics Virtualization Technology (Intel® GVT) is a comprehensive portfolio of graphics virtualization technologies for media transcode acceleration, maximizing visual quality per channel of bandwidth, and 3D graphics offload. Intel® GVT addresses a variety of graphics usages and deployment models, including but not limited to remote workstations, VDI, transcode, media streaming, and cloud gaming. Intel GVT allows ISVs and developers to choose from three different techniques — direct pass-through (GVT-d), shared virtual graphics (GVT-s), and mediated pass-through (GVT-g) — to best suit their product and business model.
Today, an Intel® Xeon® processor E3-1284L v3 with built-in Intel® Iris Pro Graphics P5200 delivers a workstation-like experience to knowledge workers using tablets, smartphones, and Ultrabooks on the go, helping them securely collaborate anywhere, anytime with customers and suppliers.
For corporate IT, it provides a server hosted solution that is manageable and capable of delivering predictable service levels for users seeking remote access to rich applications.