Recent Blog Posts

Optimizing Media Delivery in the Cloud

For cloud, media, and communications service providers, video delivery is now an essential service offering—and a rather challenging proposition.


In a world with a proliferation of viewing devices—from TVs to laptops to smartphones—video delivery becomes much more complex. To successfully deliver high-quality content to end users, service providers must find ways to quickly and efficiently transcode video from one compressed format to another. To add another wrinkle, many service providers now want to move transcoding to the cloud, to capitalize on cloud economics.
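To make the multiscreen challenge concrete, here is a minimal sketch of preparing a rendition "ladder"—one output per device class—by building one transcode command per target. The bitrates, frame sizes, and file names are illustrative assumptions, and ffmpeg is assumed as the transcoder; none of these specifics come from the post itself.

```python
# Sketch: build one ffmpeg command per target rendition.
# The renditions below are illustrative assumptions, not vendor guidance.

RENDITIONS = [
    ("1080p", "1920x1080", "5000k"),  # TVs
    ("720p",  "1280x720",  "2500k"),  # laptops
    ("360p",  "640x360",   "800k"),   # smartphones
]

def transcode_commands(source):
    """Return an ffmpeg command line for each rendition of `source`."""
    base = source.rsplit(".", 1)[0]
    commands = []
    for name, size, bitrate in RENDITIONS:
        commands.append([
            "ffmpeg", "-i", source,
            "-s", size,          # output frame size
            "-b:v", bitrate,     # target video bitrate
            f"{base}_{name}.mp4",
        ])
    return commands
```

Each command in the list could then be handed to a job scheduler, which is exactly the kind of work that parallelizes well across many cloud instances.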


That’s the idea behind innovative Intel technology-based solutions showcased at the recent Streaming Media East conference in New York. Event participants had the opportunity to get a close-up look at the advantages of deploying virtualized transcoding workflows in private or public clouds, with the processing work handled by Intel® architecture.


I had the good fortune to join iStreamPlanet for a presentation that explained how cloud workflows can be used to ingest, transcode, protect, package, stream, and analyze media on-demand or live to multiscreen devices. We showed how these cloud-based services can help communications providers and large media companies simplify equipment design and reduce development costs, while gaining the easy scalability of a cloud-based solution.


iStreamPlanet offers cloud-based video-workflow products and services for live event and linear streaming channels. With its Aventus cloud- and software-based live video streaming solution, the company is breaking new ground in the business of live streaming. Organizations that are capitalizing on iStreamPlanet technology include companies like NBC Sports Group as well as other premium content owners, aggregators, and distributors.


In the Intel booth, Vantrix showcased a software-defined solution that enables service providers to spread the work of video transcoding across many systems, dramatically shortening processing time. With the company’s solution, transcoding workloads that might otherwise take up to an hour can potentially complete in just seconds.


While they meet different needs, solutions from iStreamPlanet and Vantrix share a common foundation: the Intel® Xeon® processor E3-1200 product family with integrated graphics processing capabilities. By making graphics a core part of the processor, Intel is able to deliver a dense, cost-effective solution that is ideal for video transcoding, cloud-based or otherwise.


The Intel Xeon processor E3-1200 product family supports Intel® Quick Sync Video technology. This groundbreaking technology enables hardware-accelerated transcoding to deliver better performance than transcoding on the CPU—all without sacrificing quality.
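As one hedged illustration of how a developer might reach that hardware path: ffmpeg exposes Quick Sync through its qsv decoders and encoders, so a hardware-accelerated transcode can be requested roughly as below, assuming an ffmpeg build with QSV support (a toolchain assumption on my part, not something the post specifies). The command is only constructed here, not executed.

```python
# Sketch: offloading a transcode to Intel Quick Sync Video via ffmpeg's
# h264_qsv encoder. Assumes an ffmpeg build with QSV support; the file
# names and bitrate are hypothetical.

def qsv_transcode_command(source, output, bitrate="4M"):
    """Build (but do not run) a hardware-accelerated transcode command."""
    return [
        "ffmpeg",
        "-hwaccel", "qsv",    # decode on the integrated GPU where possible
        "-i", source,
        "-c:v", "h264_qsv",   # encode with Quick Sync
        "-b:v", bitrate,
        output,
    ]
```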


Want to make this story even better? To get a transcoding solution up and running quickly, organizations can use the Intel® Media Server Studio, which provides development tools and libraries for developing, debugging, and deploying media solutions on Intel-based servers.


With offerings like Intel Media Server Studio and Intel Quick Sync Video Technology, Intel is enabling a broad ecosystem that is developing innovative solutions that deliver video faster, while capitalizing on the cost advantages of cloud economics.


For a closer look at the Intel Xeon processor E3-1200 product family with integrated graphics, visit


Part I: Data-Driven Science and the Coming Era of Petascale Genomics

Seventeen years. That’s how long it has taken us to move from the dawn of automated DNA sequencing to the data tsunami that defines next-generation sequencing (NGS) and genomic analysis in general today. I’m remembering, with some fondness, 1998: the year I’ll consider the one in which the life sciences got serious about automated DNA sequencing, and about sequencing the human genome in particular; the year the train left the station and genomics research went from the benchtop to prime mover of high-performance computing (HPC) architectures and never looked back.


1998 was the year Perkin Elmer formed PE Biosystems, an amalgam of Applied Biosystems, PerSeptive Biosystems, Tropix, and PE Informatics, among other acquisitions. That was the year PE decided they could sequence the human genome before the academics could – that is, by competing against their own customers – and that they would do it by brute-force application of automated sequencing technologies. That was the year Celera Genomics was born and Craig Venter became a household name. At least if you lived in a household where molecular biology was a common dinnertime subject.


Remember Zip Drives?

In 1998, PE partnered with Hitachi to produce the ABI “PRISM” 3700, and hundreds of these machines were sold worldwide, kick-starting the age of genomics. PE Biosystems’ revenues that year were nearly a billion dollars. The 3700 was such a revolutionary product that it purportedly could produce as much DNA data in a single day as the typical academic lab could produce in a whole year. And yet, from an IT perspective, the 3700 was quite primitive. The computational engine driving the instrument was a Mac Centris, later upgraded to a Quadra, then finally to a Dell running Windows NT. There was no provision for data collection other than local storage, which, if you wanted any portability, meant the then-ubiquitous Iomega Zip Drive. You remember those? Those little purplish-blue boxes that sat on top of your computer and gave you a whopping 100 megabytes of portable storage. The pictures on my phone would easily fill several Zip disks today.


Networking the 3700 was no mean feat either. We had networking in 1998, of course; gigabit Ethernet and most wireless networking technologies were still just ideas, but 100 megabit (100Base-TX) connections were common enough, and just about anyone in an academic research setting had at least a 10 megabit (10Base-T) connection available. The problem was the 3700 – specifically, the little Dell PC that was paired with the instrument and responsible for all the data collection and subsequent transfer of data to some computational facility (Beowulf-style Linux HPC clusters were just becoming commonplace in 1998 as well). As shipped from PE at that time, there was zero provision for networking, and zero provision for data management beyond the local hard drive and/or the Zip Drive.


It seems laughable today, but PE did not consider storage and networking – i.e., the collection and transmission of sequencing data – a strategic platform element. I guess it didn’t matter, since they were making a BILLION DOLLARS selling 3700s and all those reagents, even if a local hard drive and sneakernet were your only realistic data management options. Maybe they just didn’t have the proper expertise at that time. After all, PE was in the business of selling laboratory instruments, not computers, storage, or networking infrastructure.


Changing Times

How times have changed. NGS workflows today practically demand HPC-style computational and data management architectures. The capillary electrophoresis sequencing technology in the 3700 was long ago superseded by newer and more advanced sequencing technologies, dramatically increasing the data output of these instruments while simultaneously lowering costs. It is not uncommon today for DNA sequencing centers to output many terabytes of sequencing data every day from each machine, and there can be dozens of machines all running concurrently. To be a major NGS center today means also being adept at collecting, storing, transmitting, managing, and ultimately archiving petascale amounts of data. That’s seven orders of magnitude removed from the Zip Drive. If you are also in the business of genomics analysis, it means you need expertise in computational systems capable of handling data and data rates at these scales as well.
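The "seven orders of magnitude" figure checks out with a little arithmetic: a Zip disk held roughly 100 megabytes, while petascale archives are measured in units of 10^15 bytes.

```python
import math

ZIP_DISK_BYTES = 100 * 10**6   # one ~100 MB Zip disk
PETABYTE_BYTES = 10**15        # one petabyte

# Orders of magnitude between a Zip disk and a petabyte
orders = math.log10(PETABYTE_BYTES / ZIP_DISK_BYTES)

# Equivalently: how many Zip disks would hold one petabyte?
disks_per_petabyte = PETABYTE_BYTES // ZIP_DISK_BYTES

print(orders, disks_per_petabyte)  # 7.0, i.e. ten million disks
```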


Today, this means either massively scalable cloud-based genomics platforms or the more traditional and even higher-scale HPC architectures that dominate all large research computing centers worldwide. We are far, far beyond the days of any single Mac Quadra or Dell server. Maybe if PE had been paying closer attention to the IT side of the NGS equation, they would still be making billions of dollars today.


In Part II of this blog, I’ll look at what’s in store for the next 17 years in genomics. Watch for the post next week.



James Reaney is Senior Director, Research Markets for Silicon Graphics International (SGI).


Have You Got Your Blueprints Ready?

Ever feel more like a construction worker hauling bricks in your data centre, when you’d like to be a visionary architect? You’re probably not the only one. Let’s consider a typical data centre of today. It’s likely we’ll find compute, storage and networking resources, each sitting in its own silo, merrily doing its own thing. This is a hardware-defined model, where many appliances have a fixed function that can’t be changed. Despite virtualisation helping to make managing compute servers more efficient and improving flexibility, management of other resources, and of the data centre overall, is generally manual and slow. Like building a house, it requires time, cost and heavy lifting, and results in a fairly static edifice. If you want to add an extension at a later date, you’ll need to haul more bricks.


At Intel, we envision an architectural transformation for the data centre that will change all this. This is the move to software-defined infrastructure, where the private cloud is as elastic, scalable, automated and therefore efficient as your public cloud experience, which I described in my last blog post.


I’d say today’s data centre is at an inflection point, where compute servers are already making inroads to SDI (a virtual machine is essentially a software-defined server after all). Now we need to apply the same principle to storage and networking as well. When all of these resource pools are virtualised, we can manage them in the same automated and dynamic way that we do servers, creating resources that fit our business needs rather than letting the infrastructure define how we work. It’s as if you could move the walls around within your house, add new windows or remove a bathroom, whenever you liked, with great agility and without any additional costs, time or labour.


A Data Center for the Application Economy


So how does SDI work in practice? Let’s look at it from the top down, starting with the applications. Whether they’re your email program, customer-facing sales website, CRM or ERP system, applications are what drive your business. Indeed, the application economy is apparently now ‘bigger than Hollywood’, with the iOS App Store alone billing $10 billion on apps in 2014. They’re your most important point of contact with your customers, and possibly employees and partners too, and they have strict SLAs. These may include response times, security levels, availability, the location in which their data is held, elasticity to meet peaks and troughs in demand, or even the amount of power they use. Meeting these SLAs means allocating the right resources in the data centre, across compute, storage and networking.


This allocation is handled by the next layer down – the Orchestration layer. It’s here that you can automate management of your data centre resources and allocate them dynamically depending on application needs. These resource pools, which are the foundation layer of the data centre, can be used for compute, networking or storage as required, allocated on-demand in an automated manner. Fixed-function appliances are now implemented as virtual resource pools, meaning you can make them into whatever architectural feature you like.
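As a toy sketch of that orchestration idea (the class and function names here are my own illustration, not any Intel product API), an orchestrator can treat compute, storage and networking as virtualised pools and satisfy an application's SLA-driven demands against them on demand:

```python
# Illustrative sketch of SDI orchestration: resource pools plus an
# allocator that places an application's demands across them.
# Names and structure are hypothetical, not a real orchestration API.
from dataclasses import dataclass

@dataclass
class Pool:
    """A virtualised resource pool (compute, storage or networking)."""
    kind: str
    capacity: int   # abstract capacity units
    used: int = 0

    def allocate(self, amount):
        if self.used + amount > self.capacity:
            return False        # pool exhausted; orchestrator must look elsewhere
        self.used += amount
        return True

def place(app_demands, pools):
    """Allocate each resource the application demands, on demand."""
    for kind, amount in app_demands.items():
        pool = next(p for p in pools if p.kind == kind)
        if not pool.allocate(amount):
            return False
    return True
```

The point of the sketch is the shape of the system, not the code: applications express demands, and a software layer decides where they land, instead of fixed-function hardware dictating the answer.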


Big Changes, Bigger Savings


Oh, the possibilities! While this big change in data centre operations may be daunting, the benefits that SDI can bring in driving time, cost and labour out of your business make it worth the effort. Orchestration optimises infrastructure, reduces IT admin costs and frees your valuable team to focus on strategic projects, while software-defined storage and networking cut your infrastructure hardware costs. Intel estimates that this could result in a relative cost saving of up to 66 percent[1] per virtual machine instance for a data centre running full orchestration with SDI, versus one just starting virtualisation. With IDC predicting that data centre operational costs will more than double every eight years, procrastination will only result in more cost in the long run.


As with any ambitious building project though, it’s important to plan carefully. I’ll be continuing this blog series by examining the four key architectural aspects of the software-defined data centre, and explaining how Intel is addressing each of them to equip clients with the best tools. These areas are:


  • Orchestration
  • Transforming the network
  • Determining and building infrastructure attributes and composable architectures
  • Unleashing the potential of your SDI data centre

Check back soon for the next instalment, and do let me know your thoughts in the meantime. What sort of data centre renovations would you make given the freedom of time, cost and grunt work?


My first blog on data centers can be found here: Is Your Data Center Ready for the IoT Age?

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter



1 Source: Intel Finance, 2014. Individual IT mileage may vary depending on workload, IT maturity and other factors. SDI assumes future benefits. Projections are extrapolations from Intel IT data. Private cloud model based on actual datacenter operations. IT DC based on Intel finance estimation for typical enterprise costs. Hybrid cloud model based on forward looking future benefits and market cost trends. Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance or cost.


Intel’s Purchasing Power Drives EHR Interoperability

Brian DeVore, Director, Healthcare Strategy and Ecosystem, Intel Corporation; Prashant Shah, Health IT Architect, Intel Corporation; and Alice Borrelli, Director, Global Healthcare Policy, Intel Corporation. As Meaningful Use Stage 3 comments are being filed this week, we’d like to highlight another …

The post Intel’s Purchasing Power Drives EHR Interoperability appeared first on Policy@Intel.


Telehealth Proves It’s Good for What Ails Home Healthcare

Telehealth is often touted as a potential cure for much of what ails healthcare today. At Indiana’s Franciscan Visiting Nurse Service (FVNS), a division of Franciscan Alliance, the technology is proving that it really is all that. Since implementing a telehealth program in 2013, FVNS has seen noteworthy improvements in both readmission rates and efficiency.

I recently sat down with Fred Cantor, Manager of Telehealth and Patient Health Coaching at Franciscan, to talk about challenges and opportunities. A former paramedic, emergency room nurse and nursing supervisor, Fred transitioned to his current role in 2015. His interest in technology made involvement in the telehealth program a natural fit.

At any one time, Fred’s staff of three critical care-trained monitoring nurses, three installation technicians and one scheduler is providing care for approximately 1,000 patients. Many live in rural areas with no cell coverage – often up to 90 minutes away from FVNS headquarters in Indianapolis.

Patients who choose to participate in the telehealth program receive tablet computers that run Honeywell LifeStream Manager* remote patient monitoring software. In 30-40 minute training sessions, FVNS equipment installers teach patients to measure their own blood pressure, oxygen, weight and pulse rate. The data is automatically transmitted to LifeStream and, from there, flows seamlessly into Franciscan’s Allscripts™* electronic health record (EHR). Using individual diagnoses and data trends recorded during the first three days of program participation, staff set specific limits for each patient’s data. If transmitted data exceeds these pre-set limits, a monitoring nurse contacts the patient and performs a thorough assessment by phone. When further assistance is needed, the nurse may request a home visit by a field clinician or further orders from the patient’s doctor. These interventions can reduce the need for in-person visits requiring long-distance travel.
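The limit-setting and alerting workflow described above can be sketched as follows; this is a hypothetical simplification in Python, not the actual LifeStream or Allscripts logic, and the margin and metric names are my own assumptions.

```python
# Hypothetical sketch of per-patient vital-sign limit checking, in the
# spirit of the workflow described above; not the actual product logic.

def baseline_limits(first_days_readings, margin=0.15):
    """Derive per-metric (low, high) limits from the first days of data."""
    limits = {}
    for metric, values in first_days_readings.items():
        low, high = min(values), max(values)
        span = (high - low) or high * margin   # avoid a zero-width band
        limits[metric] = (low - margin * span, high + margin * span)
    return limits

def needs_follow_up(reading, limits):
    """Return the metrics in a new reading that exceed pre-set limits."""
    return [m for m, v in reading.items()
            if not (limits[m][0] <= v <= limits[m][1])]
```

A reading that trips a limit would be the trigger for the monitoring nurse's phone assessment; readings within limits require no intervention at all, which is where the travel savings come from.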

FVNS’ telehealth program also provides patient education via LifeStream. For example, a chronic heart failure (CHF) patient experiencing swelling in the lower extremities might receive content on diet changes that could be helpful.

Since the program was implemented, overall readmission rates have been well below national averages. In 2014, the CHF readmission rate was 4.4%, compared to a national average of 23%. The COPD rate was 5.47%, compared to a national average of 17.6%, and the CAD/CABG/AMI rate was 2.96%, compared to a national average of 18.3%.

Despite positive feedback, medical staff resistance remains the biggest hurdle to telehealth adoption.  Convincing providers and even some field staff that, with proper training, patients can collect reliable data has proven to be a challenge. The telehealth team is making a concerted effort to engage with patients and staff to encourage increased participation.

After evaluating what type of device would best meet the program’s needs, Franciscan decided on powerful, lightweight tablets. The touch screen devices with video capabilities are easily customizable and can facilitate continued program growth and improvement.

In the evolving FVNS telehealth program, Fred Cantor sees a significant growth opportunity. With knowledge gained from providing the service free to their own patients, FVNS could offer a private-pay package version of the program to hospital systems and accountable care organizations (ACOs).

Is telehealth a panacea? No. Should it be a central component of any plan to reduce readmission rates and improve workflow? Just ask the patients and healthcare professionals at Franciscan VNS.

