
RECENT BLOG POSTS

Delivering Full Stack Video Analytics with Viscovery and Quanta


Online video is a huge part of our connected world today. It’s a medium that we use daily to share, communicate, learn, and of course, be entertained – and there seems to be no limit to its growth. Facebook is a great example – it now gets an amazing 8 billion video views a day, more than double what it saw six months earlier. According to a recent Cisco report, video traffic will be 80% of all consumer Internet traffic in 2019, up from 64% in 2014, and mobile video will increase 11X in the next five years. In China alone, the online video market is expected to reach more than $17B by 2018, according to iResearch.

 

As a video and tech enthusiast, I find these developments hugely exciting, but this relentless deluge of video does present some very real challenges (and opportunities). The infrastructure challenges are obvious, given the need for increased storage and compute to process, transcode, and manipulate videos for end-user consumption. However, there is another, less straightforward, problem to overcome. How can viewers best navigate the flood of online video content? And how can content providers and advertisers efficiently and intelligently provide video content that is relevant (and useful) to consumers?

 

This is certainly a daunting task and something that we as humans are ill-equipped to handle. Frankly, it is no wonder that many companies are investigating the possibility of developing intelligent systems that leverage machine learning and deep neural networks (DNNs) to help automate these tasks.

 

With this in mind, Intel, Quanta, and Viscovery came together to build a full stack solution to this problem that leverages a deep learning based application from Viscovery, the power and scalability of Intel® Xeon® processors, and Quanta’s efficient platform designs. We created a turnkey solution specifically designed to solve the video content recognition problem. At Intel, we recognize that it is critical to take a holistic view when tackling these types of challenges and to enable solutions that include everything from the silicon and server hardware to the libraries and open source components, all the way to the end application. And of course, all of these ingredients must be optimized for cloud-scale deployments. Below is a high-level view of the solution stack:

 

In order to tackle these problems at scale, libraries like the Intel® Math Kernel Library and optimized open source components like Caffe* are tightly integrated into Viscovery’s deep learning-based video content recognition engine to take full advantage of the performance of Intel® processors. The result is a solution that runs seamlessly across Intel® Xeon® and Intel® Xeon Phi™ processor-based platforms, providing the capability to train DNNs quickly and deploy at scale at an efficient total cost of ownership. Below is an example of the types of content that the Viscovery application uses to train its DNNs. As you can see, they’ve moved significantly beyond simple image and object classification:

 

 

                                                                                                                                

Modality → Target

  • Facial → Human/Animal
  • Image → Brand/Logo
  • Text → OCR in the wild
  • Audio → Speech/Music
  • Motion → Action/Video2Text
  • Object → Brand/Model
  • Scene → Location/Event
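As a rough illustration of how a Caffe-based recognition engine of this kind might be driven from Python, here is a minimal sketch that samples frames from a video and classifies each one with a pre-trained model. It is a sketch only: the file names, blob names ('data', 'prob'), single-frame batch size, and omitted per-model preprocessing are placeholder assumptions, and Viscovery’s actual engine is proprietary and not shown here.

```python
# Minimal sketch: frame-level classification with a pre-trained Caffe model on CPU.
# File names, blob names and the 1-frame batch are placeholder assumptions; mean
# subtraction and channel-order handling are model-specific and omitted here.
import cv2                      # frame extraction from the video file
import numpy as np
import caffe                    # BVLC Caffe, ideally built against Intel MKL

caffe.set_mode_cpu()            # run inference on Intel Xeon / Xeon Phi CPUs

net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

def classify_frame(frame_bgr):
    """Resize a BGR frame to the network input and return (top-1 class, score)."""
    h, w = net.blobs['data'].data.shape[2:]
    net.blobs['data'].reshape(1, 3, h, w)                 # ensure a batch of one
    img = cv2.resize(frame_bgr, (w, h)).astype(np.float32)
    img = img.transpose(2, 0, 1)[np.newaxis, ...]         # HWC -> NCHW
    net.blobs['data'].data[...] = img
    probs = net.forward()['prob'][0]
    return int(probs.argmax()), float(probs.max())

cap = cv2.VideoCapture('sample_video.mp4')
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:     # sample roughly one frame per second at 30 fps
        label, score = classify_frame(frame)
        print(frame_idx, label, round(score, 3))
    frame_idx += 1
cap.release()
```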

 

Of course, the real proof of success is in the usage of this platform by end customers. Leaders in video content delivery such as LeEco, YouKu, 8sian, Alimama (part of Alibaba) and many others have already deployed solutions based on this stack.

 

If you’re at Computex this month, you can check out this video discovery service in action running on our Intel Xeon and Intel Xeon Phi processors at Quanta’s booth and during Intel’s keynote speech by Diane Bryant. And with any luck, as video content recognition capabilities continue to advance, you’ll never find yourself watching irrelevant or unwanted video content again.

Read more >

Visual Cloud & Remote Workstations in the Enterprise

From video streaming to remote workstations, more content is being delivered via cloud computing every day. Data centers everywhere are dealing with a flood of video traffic, and many enterprises are also dealing with the computing demands of complex design applications and massive data sets that are used by employees and contractors scattered around the world.

 


For design and content creation companies to remain competitive in today’s global business climate, they need to employ technologies that help technical employees and contractors collaborate to solve complex and interconnected design problems. Their designers, salespeople, customers, contractors and others involved in the design process need access to design information and tools anywhere and anytime, while the enterprise needs to safeguard its valuable intellectual property. The enterprise is therefore faced with finding ways to securely share data models and content across a widely distributed workforce without breaking the bank.

 

Enabling the Global Design Workforce

 

IT organizations at design firms face three collaboration challenges: securing access to complex and sensitive design models and content; quickly and easily providing access to a highly distributed workforce and ecosystem; and providing an excellent user experience to that workforce.

 

However, there’s a simple solution to these challenges. Cloud-hosted remote workstations allow engineers to use demanding 3D graphics applications from virtually anywhere in the world, with all data and applications hosted on servers based on Intel Xeon processors in a secure data center. Employees can safely collaborate with external contractors without sending designs from computer to computer, which protects enterprise intellectual property.

Remote users can also work from the same data set, with no need for high-volume data transfers. This allows the enterprise to deliver fast and clear graphics running on a dense, cost-effective infrastructure.

 

New Architectures and Ecosystems

 

To support the demands of remote workstations, new solutions and partnerships are absolutely necessary.

 

The Intel Xeon processor E3-1500 v5 product family offers hardware-enhanced integrated graphics capabilities that are optimized for remote application delivery workloads. These integrated graphics solutions cost-effectively accelerate video and enable secure, remote delivery of applications by combining the performance of Intel Xeon processors with integrated Iris Pro graphics.

 

Intel-powered remote workstation solutions allow technical professionals and content creators to have greater access to key applications on their computing device(s) while securely collaborating with colleagues. For IT, these solutions provide centralized management, more provisioning control, and easier patching and updating of applications.

 

The newly announced Intel Xeon processor E3-1500 v5 includes Intel Graphics Virtualization Technology (Intel GVT) to address multiple customer use cases. These include direct assignment of a given GPU’s capabilities to a single user; the ability to allow multiple local or remote virtual machines to share access to a GPU; and the ability to share a GPU’s resources through multiple concurrent users in a time-slice scenario.

 

Productivity and Progress: Central to the Enterprise

 

Organizations can increase the security of enterprise information by centrally hosting critical applications and data and avoiding delivering valuable visual content to contractors. The enterprise can also avoid provisioning powerful workstations to users who need infrequent access to graphic-intensive applications, such as salespeople who only occasionally need to provide design input.

 

Intel works with a partner ecosystem to enhance the delivery and minimize the complexity of high-performance remote workstations within the enterprise. For example, the enterprise can turn to VMware Horizon 7 to deliver virtual or hosted desktops or Citrix XenApp and XenDesktop to deliver secure virtual apps and desktops.

 

Adopting secure remote workstations allows the enterprise to deliver once out-of-reach workstation performance and visual content to designers, engineers, media creators, and other professionals. This enables major leaps in collaboration and productivity, further empowering each employee to drive progress for the enterprise.

Read more >

Safe and sound—digital security with home desktop PCs

In today’s digital world, consumers face a barrage of online phishing attacks, new forms of nasty malware, and the risk of virus-infected desktops like never before. Unfortunately, cyber criminals do not discriminate, and it’s very easy to fall victim to their scams.


But what if you could rest easy at night knowing all of your pictures, videos, and personal files are securely stored on a high-capacity, always-available desktop PC that stays safely in your home? [i],[ii] Here are a few ways that Intel Security is making this possible.

 

Built-In Protection for Stronger Security

 

At its core, Intel-based desktops build security in from the silicon up to help safeguard your files, online transactions, data, and identity on a device that can reside securely in your home. Desktop PCs that are running 6th gen Intel Core processors feature hardware-based technologies that protect against a wide range of malware attacks and exploits—and help keep your system and data free from hacking, viruses, and prying eyes.

 

As an added layer of support, the hardware-based security capabilities of Intel Identity Protection Technology can be found on more than 500 million PCs[iii] to support trusted device authentication. Now you can enjoy amazing computing experiences and more control over your personal content and information without worrying about the next Trojan horse.

  

Say Goodbye to Passwords

 

Creating one strong password that you can remember is hard enough, but doing it for every single online account is almost impossible—until now. Many people use the same password everywhere, so it doesn’t take a skilled hacker to break into an account, just a good guesser.

 

“More than 90 percent of passwords today are weak, predictable, and ultimately crackable,” says Dave Singh, product marketer, Intel Client Computing Group. “What we’re trying to do is help consumers develop good security habits when they’re browsing and shopping online, and password managers make this very convenient by decreasing frustration to provide a better user experience on their PCs.”

 

As one example, True Key comes preloaded on most Intel-based desktop PCs with McAfee LiveSafe software. Users can sync their data across Windows, Mac, Android, and iOS devices and import passwords from all browsers and competitors. Advanced multi-factor authentication (MFA) and biometric security make it easy to sign into any account. Choose at least two different factors (e.g., trusted device, face, email, master password, numeric pin, or fingerprint) and the app will verify your identity. For additional security, you can add more factors and make your profile even stronger. Basically, True Key can recognize you and sign you in—eliminating the need for passwords altogether.
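The “choose at least two factors” policy described above can be modeled in a few lines of Python. This is purely a toy illustration of k-of-n factor checking, not True Key’s actual implementation or API.

```python
# Toy model of an "at least two factors" policy, as described above.
# Purely illustrative -- this is not how True Key is implemented.
REQUIRED_FACTORS = 2

def authenticate(verified_factors):
    """verified_factors: names of factors that were independently verified,
    e.g. {"trusted_device", "face"} or {"master_password", "fingerprint"}."""
    return len(set(verified_factors)) >= REQUIRED_FACTORS

print(authenticate({"trusted_device"}))            # False: one factor is not enough
print(authenticate({"trusted_device", "face"}))    # True: two factors verified
print(authenticate({"face", "email", "pin"}))      # True: extra factors strengthen the profile
```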

 

“With so many different ways to log in and get to your personal content and information, password managers can really help increase productivity by saving time and headaches,” adds Singh.

 

Seamless Online Shopping

 

Imagine being able to walk up to your PC and have one central app manage your mobile wallet, healthcare account, or hotel membership profile. You can now book travel, buy and ship gifts, or upload photos to the cloud more conveniently, while being better protected against malware.

 

Some password managers can also store wallet items—credit cards, addresses, memberships—and make it easy to “tap and pay” at checkout for secure online payments and transactions. Intel technologies feature fast, end-to-end data encryption to keep your information safe without slowing you down, with built-in hardware authentication to provide seamless protection for online transactions.

 

“Your high-capacity, always-available desktop can stay safely at your home with all your locally stored files, but you can securely access the information from other devices, including your smartphone,” Singh says.

 

“Paired with new Windows 10 sign-in options like Windows Hello, desktop computing is truly becoming more personal and secure. It really shows how digital security is advancing to work better together for the best home computing experience.”

 

So the next time you log into your home desktop PC, you can do it with a smile. Download the Flash Card for more tips on how to safeguard your digital security. 

 


[i] Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.

[ii] Requires an Intel® Ready Mode Technology-enabled system or motherboard, a genuine Intel® processor, Windows* 7, Windows 8.1, or Windows 10 OS. Results dependent upon hardware, applications installed, Internet connectivity, setup and configuration.

[iii] True Key™ by Intel Security. Security White Paper 1.0. https://b.tkassets.com/shared/TrueKey-SecurityWhitePaper-v1.0-EN.pdf

Read more >

Rich Graphics for Virtualized Remote Applications — Powered by Citrix and Intel

By James Hsu, Director of Technical Marketing at Citrix

 

One of the great experiences in our industry is to see products from different vendors—hardware and software—come together to solve real customer problems. That’s what’s been happening with Citrix and Intel for the last two years as we worked together to apply Intel Graphics Virtualization Technology (Intel GVT) to the Citrix XenServer virtualization platform. The result of that effort is Citrix XenServer 7.0, which we are announcing at Citrix Synergy 2016 in Las Vegas. It’s the first commercial hypervisor product to leverage Intel GVT-g, Intel’s virtual graphics processing unit technology, which can power multiple VMs with one physical GPU. As well as announcing XenServer 7.0, Citrix is also announcing XenDesktop 7.9, offering industry-leading remote graphics delivery supported by Intel. Let me tell you what that does for users running graphics-intensive virtualized desktop applications, and then I’ll tell you how we used Intel GVT-g to do it.

 


 

Citrix XenApp and XenDesktop let you deliver virtualized desktops and applications hosted on a server to remote workstations. Many desktop applications—like computer-aided design and manufacturing apps and even accelerated Microsoft Office—require the high-performance graphics capabilities of a graphics processing unit (GPU). In XenDesktop 7.9, Citrix has also added support for Intel Iris Pro graphics in the HDX 3D Pro remote display protocol.

 

Earlier versions of XenServer enabled Intel GPU capabilities on virtualized desktops in a pass-through mode that allocated the GPU to a single workstation. Now, XenServer 7.0 expands our customers’ options by using Intel GVT-g to virtualize access to the Intel Iris Pro Graphics GPU integrated onto select Intel Xeon processor E3 family products, allowing it to be shared by as many as seven virtual workstations.

 

With Intel GVT-g, each virtual desktop machine has its own copy of Intel’s native graphics driver, and the hypervisor directly assigns the full GPU resource to each virtual machine on a time-sliced basis. During its time slice, each virtual machine gets a dedicated GPU, but the overall effect is that a number of virtual machines share a single GPU. It’s an ideal solution in applications where high-end graphics are required but shared access is sufficient to meet needs. Using the Intel Xeon processor E3 family, small single-socket servers can pack a big graphics punch. It’s an efficient, compact design that enables a new scale-out approach to virtual application delivery. And it’s a cost-effective alternative to high-end workstations and servers with add-on GPU cards.
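To make the time-slicing idea concrete, here is a toy round-robin model in Python. It is a conceptual sketch only, not XenServer, Xen, or Intel GVT-g code; the seven-VM figure comes from the paragraph above, and the slice length is an arbitrary assumption.

```python
# Conceptual round-robin model of GPU time-slicing, as described above.
# This illustrates the scheduling idea only -- it is not XenServer, Xen,
# or Intel GVT-g code.
from itertools import cycle

vms = [f"vm{i}" for i in range(1, 8)]   # up to seven virtual workstations per GPU
TIME_SLICE_MS = 16                      # arbitrary, illustrative slice length

def schedule(num_slices):
    """Hand the single physical GPU to each VM in turn, one slice at a time."""
    turn = cycle(vms)
    for _ in range(num_slices):
        owner = next(turn)
        # During its slice the VM's native graphics driver uses the GPU directly;
        # between slices the hypervisor saves and restores GPU state.
        yield owner, TIME_SLICE_MS

for owner, ms in schedule(14):
    print(f"{owner} owns the GPU for {ms} ms")
```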

 

The advantages go beyond just cost efficiency. Providing shared access by remote users to server-based data and applications enhances worker productivity and improves collaboration. It also tightens security and enables compliance, because critical intellectual property, financial data, and customer information stays in the data center rather than drifting out to individual workstations and mobile devices. And security is further enhanced, because Intel Xeon processors contain Intel Trusted Execution Technology (Intel TXT) to let you create trusted computing pools. Intel TXT attests to the integrity and trust of the platform, assures nothing has been tampered with, and verifies that the platform is running the authorized versions of firmware and software when booting up.

 

At Citrix, our goal is to provide our customers with the computing experience they need to innovate and be productive—on a range of platforms and usage models and in a way that enhances the security of their business. And we want to give them the flexibility to access the computing resources they need anywhere, any time, and from any device. Our collaboration with Intel has let us deliver on that promise, and it lets us provide even more options for platform choice and deployment configurations. It’s been a great experience for us, and now it will enable a great experience for our mutual customers.

Read more >

Advantages of Telehealth: Better Patient Care

 

The shift from fee-for-service to fee-for-performance is changing the conversation around patient care. Reducing readmissions is one benchmark for analyzing the quality of care, and more discussion is happening around bringing telehealth into the mix to improve this metric.

 

Traditionally, when patients leave the clinical setting, interaction between the care team and the patient decreases. With telehealth and remote patient monitoring, technology allows the provider team to remain in contact with the patient to follow up on regiments and make sure instructions are followed. The result can be a shift in outcomes for the better.

 

To learn more about telehealth, we sat down with Fadesola Adetosoye from Dell Healthcare Services, who says telehealth allows patients to overcome challenges, like transportation issues, to obtain better primary care and stay in touch with clinicians following discharge.

 

Watch the video above and let us know what questions you have about telehealth? Is your organization using a telehealth strategy?

Read more >

Unleash the Power: Knights Landing Developer Platforms are here!

Developers – your HPC Ninja Platform is here! HPC developers worldwide have begun to participate in the Developer Access Program (DAP) – a bootstrap effort for early access to code development and optimization on the next generation Intel Xeon Phi processor. A key part of the program is the Ninja Developer Platform.


 

Several supercomputing-class systems are currently powered by the Intel Xeon Phi processor (code-named Knights Landing, or KNL)—a powerful many-core, highly parallel processor. KNL delivers massive thread parallelism, data parallelism, and memory bandwidth with improved single-thread performance and Intel Xeon processor binary compatibility in a standard CPU form factor.

 

In anticipation of KNL’s general availability, we, along with our partners, are bringing to market a developer access program, which provides an ideal platform for code developers. Colfax, a valued Intel partner, is handling the program, which is already underway.

 

The Ninja Platform

 

Think of the Ninja Developer Platform as a stand-alone box that has a single bootable next-generation Intel Xeon Phi processor. Developers can start kicking the tires and getting a feel for the processor’s capabilities. They can begin developing the highly parallel codes needed to optimize existing and new applications.

 

As part of Intel’s Developer Access Program, the Ninja platform has everything you need in the way of hardware, software, tools, education and support. It comes fully configured with memory, local storage, CentOS 7.2, and a one-year license for Intel Parallel Studio XE tools and libraries. You can get to work immediately, whether you’re a developer experienced with previous generations of Intel Xeon Phi coprocessors or new to the Intel Xeon Phi processor family.

 

Colfax has pulled out all the stops in designing the education and support resources, including white papers, webinars, and how-to and optimization guides. Currently underway are a series of KNL webinars and hands-on workshops – see details at http://dap.xeonphi.com/#trg

 

Here is a quick look at the two platform options that are being offered by the Developer Access Program – both are customizable to meet your application needs.

 

             


Pedestal Platform

  • Developer Edition of Intel Xeon Phi Processor: 16GB MCDRAM, 6 channels of DDR4, AVX-512
  • MEMORY: 6x DIMM slots
  • EXPANSION: 2x PCIe 3.0 x16 (unavailable with KNL-F), 1x PCIe 3.0 x4 (in a x8 mechanical slot)
  • LAN: 2x Intel i350 Gigabit Ethernet
  • STORAGE: 8x SATA ports, 2x SATADOM support
  • POWER SUPPLY: 1x 750W 80 Plus Gold
  • CentOS 7.2
  • Intel Parallel Studio XE Professional Edition Named User 1-year license

Rack Platform

  • 2U 4x Hot-Swap Nodes
  • Developer Edition of Intel Xeon Phi Processor: 16GB MCDRAM, 6 channels of DDR4, AVX-512
  • MEMORY: 6x DIMM slots / Node
  • EXPANSION: Riser 1: 1x PCIe 3.0 x16, Riser 2: 1x PCIe Gen3 x20 (x16 or x4) / Node
  • LAN: 2x Intel i210 Gigabit Ethernet / Node
  • STORAGE: 12x 3.5″ Hot-Swap Drives
  • POWER SUPPLY: 2x 2130W Common Redundant 80 Plus Platinum
  • CentOS 7.2
  • Intel Parallel Studio XE Cluster Edition Named User 1-year license

 

Given the richness of the technology and the tools being offered along with the training and support resources, developers should find the process of transitioning to the latest Intel Xeon Phi processor greatly accelerated.

 

The Ninja Development Platform is particularly well suited to meet the needs of code developers in such disciplines as academia, engineering, physics, big data analytics, modeling and simulation, visualization and a wide variety of scientific applications.

 

The platform will cost ~$5,000 USD for the single-node pedestal server, with additional costs for customization. On the horizon is our effort to take this program global with Colfax and partners. Stay tuned for details in my next blog.

 

You can pre-order  the Ninja Developer Platform now at http://www.xeonphideveloper.com.

Read more >

Enabling Anywhere, Anytime Design Collaboration with Intel Graphics Virtualization Technology

Graphics virtualization and design collaboration took a step forward this week with the announcement of support for Intel Graphics Virtualization Technology-g (Intel® GVT-g) on the Citrix XenServer* platform.

 

Intel GVT-g, running on the current generation graphics-enabled Intel Xeon processor E3 family and on future generations of Intel Xeon® processors with integrated graphics capabilities, will enable up to seven Citrix users to share a single GPU without significant performance penalties. This new support for Intel GVT-g in the Citrix virtualization environment was unveiled this week at the Citrix Synergy conference in Las Vegas.

 

A little bit of background on the technology: With Intel GVT-g, a virtual GPU instance is maintained for each virtual machine, with a share of performance-critical resources directly assigned to each VM. Running a native graphics driver inside a VM, without hypervisor intervention in performance-critical paths, optimizes the end-user experience in terms of features, performance and sharing capabilities.

 

All of this means that multiple users who need to work with and share design files can now collaborate more easily on the XenServer integrated virtualization platform, while gaining the economies that come with sharing a single system and benefiting from the security of working from a trusted compute pool enabled by Intel Trusted Execution Technology (Intel® TXT).

 

Intel GVT-g is an ideal solution for users who need access to GPU resources to work with graphically oriented applications but don’t require a dedicated GPU system. These users might be anyone from sales reps and product managers to engineers and component designers. With Intel GVT-g on the Citrix virtualization platform, each user has access to separate OSs and apps while sharing a single processor – a cost-effective solution that increases platform flexibility.

 

Behind this story is a close collaboration among Intel, Citrix, and the Xen open source community to develop and refine a software-based approach to virtualization in an Intel GPU and XenServer environment. It took a lot of people working together to get us to this point.

 

And now we’ve arrived at our destination. With the combination of Intel GVT-g, Intel Xeon processor-based servers with Intel Iris Pro Graphics, and Citrix XenServer, anywhere, anytime design collaboration just got a lot easier.

For a closer look at Intel GVT-g, including a technical demo, visit our Intel Graphics Virtualization Technology site or visit our booth #870 at Citrix Synergy 2016.

Read more >

Making New Server-Virtualization Capabilities a Reality

One of the most rewarding aspects of my work at Intel is seeing the new capabilities built in to Intel silicon that are then brought to life on an ISV partner’s product. It is this synergy between Intel and partner technologies where I see the industry and customers really benefit.

 

Two of the newer examples of this kind of synergy are made possible with Citrix XenServer 7.0—Supervisor Mode Access Prevention (SMAP) and Page Modification Logging (PML). Both capabilities are built in to the Intel Xeon processor E5 v4 family, but can only benefit customers when a server-virtualization platform is engineered to use them. Citrix XenServer 7.0 is one of  the first server-virtualization platforms to do that with SMAP and PML.

 

Enhancing Security with Supervisor Mode Access Prevention (SMAP)

 

SMAP is not new in and of itself; Intel introduced SMAP for Linux on 3rd generation Xeon processors. What is new is its use in virtualization. Intel added SMAP code to the Xen hypervisor in the Xen Project, Citrix then worked with the code in Xen, and XenServer 7.0 makes SMAP a reality for server virtualization.

 


Figure 1:  SMAP prevents the hypervisor from accessing the guests’ memory space other than when needed for a specific function

 

SMAP helps prevent malware from diverting operating-system access to malware-controlled user data, which helps enhance security in virtualized server environments. SMAP is also a good example of the Intel and Citrix partnership, in which the two companies regularly collaborate to help make a seamless, secure mobile-workspace experience a reality.
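On a Linux host you can at least confirm that the underlying processor advertises SMAP before expecting the hypervisor to use it. The short sketch below simply looks for the 'smap' CPU flag; it is a generic check, not part of XenServer, and it says nothing about whether the hypervisor actually enables the feature.

```python
# Quick check for the SMAP CPU feature flag on a Linux host.
# This only reports what the processor advertises via /proc/cpuinfo;
# it does not tell you whether the hypervisor actually enables SMAP.
def cpu_has_flag(flag, cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return flag in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    print("SMAP advertised by CPU:", cpu_has_flag("smap"))
```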

 

Improving Performance with Page Modification Logging (PML)

 

PML improves performance during live migrations between virtual server hosts. As with SMAP, PML capabilities are built in to the Intel Xeon processor E5 v4 family, and XenServer 7.0 is one of the first server-virtualization platforms to actually enable PML in a virtualized server environment.

 


Figure 2:  With PML, CPU cycles previously used to track guest memory-page writes during live migration are available for guest use instead

 

Read More

 

I haven’t gone into detail on SMAP or PML or how they work. Instead, I invite you to read about them and how they add to the already strong XenServer virtualization platform and Intel Xeon processor E5 family in the Intel and Citrix solution brief, “New Capabilities with Citrix XenServer and the Intel Xeon Processor E5 v4 Family.” I also invite you to follow me and my growing #TechTim community on Twitter: @TimIntel.

Read more >

Intel Inside and Everywhere at Synergy16

By Steve Sieron, Senior Alliance Marketing Manager at Citrix

 

 

Intel will be highly visible next week at Synergy as a Platinum Sponsor. They’ll be featuring a number of new solutions that showcase the broad technical, product and marketing partnership with Citrix across networking, cloud, security and graphics virtualization. There will also be an array of innovative Intel-based endpoint devices running XenApp and XenDesktop across Windows 10, Linux and Chrome OS.

 

You won’t want to miss SYN121 on Wednesday, May 25 from 4:30-5:15pm PDT in Murano 3204 for “Mobilize your Design Workforce: Delivering Graphical Applications on Both Private and Public Clouds.” This informative panel, hosted by Jim Blakley, Intel GM of Visual Cloud Computing, will feature graphics industry experts, including Thomas Poppelgaard, Jason Dacanay from Gensler, Adam Jull from IMSCAD, and Citrix’s own “Mr. HDX,” Derek Thorslund.

 

Be sure to take advantage of Intel’s Ask the Experts Bar and daily tech talks, where you can network with a variety of industry experts. The tech talks will feature customers and industry experts along with Intel and Citrix product owners. Intel health care implementations will also be featured in customer presentations at the Citrix Booth Theatre from both LifeSpan and Allegro Pediatrics.

 

Visit these Interactive Demos and More in Intel Booth #870

 

Enhancing NetScaler Security and Performance with Intel Inside. Showcasing performance scaling and new security enhancements on the Intel® Xeon® processor-based NetScaler MPX and SDX product families.

 

Intel® Solid State Drives (SSD) Enable a Secure Client. New endpoint security, storage technologies and capabilities with Citrix core product solutions.

 

Scaling XenDesktop with Atlantis USX and Intel SSD. Featuring Atlantis USX as a storage layer with Intel SSDs for XenDesktop. Offering a robust performance architecture and high density with lower implementation costs and ongoing maintenance OPEX compared to traditional VDI solutions.

 

Intel® Graphics Virtualization on Citrix (Intel® GVT). Learn about the new Intel Xeon Processor E3 family with Intel® Iris™ Pro Graphics in the cloud and new graphics virtualization technologies and solutions powered by Citrix from leading OEM partners. Interact with ISV-certified rich and brilliant 3D apps on the Intel remote cloud and learn how integrated graphics offer a compelling alternative to add-in graphics cards. The technologies highlighted will include Intel GVT-d – direct deployment of Intel processor graphics running 3D apps and media as well as Intel GVT-g – shared deployment in a cloud-based environment, hosted remotely in a data center running Citrix on latest-gen Intel Xeon processor servers.

 

Intel Ecosystem Enables Citrix Across Synergy16

 

Of course, the broader Intel ecosystem will be on full display at Synergy, including the latest HP Moonshot m710 Series and Cisco M-Series offerings. These tools bring unmatched levels of price, performance and density in delivering graphics and rich apps to a wide range of professional users requiring access to apps with ever-increasing graphics capabilities. There will also be a broad array of Intel Xeon-based NetScalers running in the IBM SoftLayer Cloud and across booths and learning labs throughout the event. Explore exciting Intel-based storage solutions on Citrix with new offerings from partners such as Nutanix, Pure Storage and Atlantis. As always, Intel endpoints will be ubiquitous throughout Synergy and featured in many sponsor pavilions, including HPE, Google, Dell and Samsung.

 

Beyond being a technology leader and strategic partner, Intel will be supplying Intel Arduino boards for the Simply Serve program at Synergy, which promotes STEM programs for Title 1 middle school students. A big thanks to Intel on behalf of both Citrix and the Southern Nevada United Way!

 

Citrix is pleased to welcome Intel to Synergy 2016. We encourage all attendees to stop by Booth #870 to meet the Intel team, watch customer presentations at the Intel Theatre and interact with innovative technology demos. Don’t forget to pull up your Synergy Mobile App to mark your calendar for SYN121, the Industry Expert Graphics Panel on Wed May 25 at 4:30pm in Murano 3204.

Read more >

What Cybersecurity Data Should You Trust?


The Limitations of Security Data

We are constantly being bombarded by cybersecurity data, reports, and marketing collateral—and not all of this information should be treated equally. Security data inherently has limitations and biases, which affect its value, its relevance, and how it should be applied. It is important to understand which data is significant and how best to let it influence your decisions.

 

There is a tsunami of security metrics, reports, analyses, blogs, papers, and articles vying for attention. Sources range from reporters and researchers to professional security teams, consultants, dedicated marketing groups, and even security-operations people, all adding data, figures, and opinions to the cauldron. We are flooded with data and with all those who have opinions on it.

 

It was not always this way. Over a decade ago, it was an information desert, where even speculations were rare. Making decisions driven by data has always been a good practice. Years ago, many advocates worked hard to convince the industry to share information – even a drop is better than none. Most groups that were capturing metrics were too frightened or embarrassed to share. Data was kept secret by everyone, while decision makers were clamoring for security insights based upon industry numbers, which simply were not available.

 

What Was the Result?

In the past, fear, uncertainty, and doubt ruled. People began to dread the worst, and unscrupulous security marketing advocates took advantage, fanning the flames to sell products and snake oil. These were dark times, filled with outlandish claims of easily eradicating cyber threats with a particular software or appliance product. The market was riddled with magic boxes, silver-bullet software, and turnkey solutions promising to easily fix all security woes. I can remember countless salespeople asserting “we solve security” (at which point I stopped listening or kicked them out). The idea that you could flip a switch and all the complex problems of compute security would forever go away was what uninformed organizations wanted to hear, but it was simply unrealistic. Why customers chose to believe such nonsense, when neither the problem nor the effectiveness of potential solutions could be quantified, is beyond me, but many did. Trust in the security solutions industry was lost for a period of time.

 

Slowly, a trickle of informative sources began to produce reports and publish data. Such initiatives gained momentum with others joining in to share in limited amounts. It was a turning point. Armed with data and critical thinking, clarity and common sense began to take root. It was not perfect or quick, but the introduction of data from credible sources empowered security organizations to better understand the challenge and effective ways to maneuver against threats.

 

As the size of the market and competition grew, additional viewpoints joined the fray. Today, we are bombarded by all manner of cybersecurity information. Some sources are credible while others are not. There are several types of data being presented, ranging from speculation to hard research. Being well-informed is extremely valuable to decision makers. Now the problem is figuring out how to filter and organize the data so one is not misled.

 

As part of my role as a cybersecurity strategist, I both publish information to the community and consume vast amounts of industry data. To manage the burden and avoid the risks of believing less-than-trustworthy information, I have a quick guide to help structure the process. It is burned into my mind as a set of filters and rules, but I am committing it to paper in order to share. 

 

I categorize data into four buckets. These are: Speculation, Survey, Actuarial, and Research. Each has its pros and cons. The key to managing security data overload is to understand the limitations of each class, its respective value and its recommended usage.

 


For example, Survey data is the most unreliable, but it does have value in understanding the fears and perceptions of the respondent community. Research data is normally very accurate but notoriously narrow in scope, and it may be late to the game. One of my favorites is Actuarial data. I am a pragmatic guy; I want to know what is actually happening so I can draw my own conclusions. But there are limitations to Actuarial data as well. It tends to be very limited in size and scope, so you can’t look too far into it, and it is a reflection of the past, which may not align with the future.

I hear lots of different complaints and criticisms when it comes to the validity, scope, intent, and usage of data. I personally have my favorites and those which I refuse to even read. Security data is notoriously difficult. There are so many limitations and biases that it is far easier to point out issues than to see the diamond in the rough. But data can be valuable if it is filtered, corrected for bias, and its limitations are known. Don’t go in blind. Common sense must be applied. Have a consistent method and structure to avoid pitfalls and maximize the data available to help you manage and maintain an optimal level of security.

Below are a few examples, in my opinion, of credible cybersecurity data across the spectrum of different categories. Again, keep in mind the limitations of each group and don’t make the mistake of using the information improperly! Look to Speculation for the best opinions, Survey for the pulse of industry perceptions, Actuarial for real events, and Research for deep analysis:

 

Speculation:

 

Survey:

  • Threat Intelligence Sharing survey, McAfee Labs Threats Report, March 2016
  • 20% jump in cybercrime in the UK since 2014, with nearly two-thirds of businesses expressing no confidence in the ability of law enforcement to deal with it, per PwC
  • 25% of Americans believe they have experienced a data breach or cyber attack, per a Travelers survey
  • 43% of organizations surveyed indicated increases in cybersecurity will drive the most technology spending. Source: 2016 ESG IT spending intentions research report
  • 61% of CEOs believe cyber threats pose a danger to corporate growth, per a PwC survey

 

Actuarial:

  • 3 out of 5 Californians were victims of data breaches in 2015, according to the CA Attorney General in the 2016 California Data Breach Report
  • The top 10 healthcare breaches of 2015 affected almost 35% of the US population. Source: Office for Civil Rights
  • Data Breach Investigations Report (DBIR), annual report by Verizon
  • 2016 Annual Security Report by Cisco
  • 42 million new unique pieces of malware discovered in Q4 2015, bringing the total known samples to almost 500 million, per the McAfee Labs Threats Report (March 2016, Malware section)
  • Security Intelligence Report (SIR), bi-annual report by Microsoft

 

Research:

 

By the way, yes, this very blog would be considered Speculation.  Treat it as such. 

 

 

 

Interested in more?  Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights and what is going on in cybersecurity.

Read more >

5 Questions for Mark Caulfield, Chief Scientist, Genomics England

Mark Caulfield, FMedSci, is a chief scientist and board member at Genomics England, an organization which provides investment and leadership to increase genomic testing research and awareness. Caulfield is also the director of the William Harvey Research Institute and was elected to the Academy of Medical Sciences in 2008. His particular areas of research are Cardiovascular Genomics and Translational Cardiovascular Research and Pharmacology. We recently sat down with him to discuss genomic sequencing as well as insight into a current research project.

 

Intel: What is the most exciting project you’re working on right now?

 

Caulfield: The 100,000 Genomes Project is a healthcare transformation program that reads through the entire DNA code using whole genome sequencing. That’s 3.3 billion letters that make you the individual you are. It gives insight into what talents you have as well as what makes you susceptible to disease. My research is focused on infectious disease and rare inherited diseases such as cancer. Technology can bring answers that are usable in the health system now across our 13 centers.

 

When studying rare disease, the optimal unit is a mother, father and an affected offspring. The reason is that both parents allow the researcher to filter out rare variations that occur in the genetic code that are unrelated to the disease, focusing in on a precise group. This project will result in more specific diagnosis for patients, a better understanding of disease, biological insights which may pave the way for new therapies and a better understanding of the journey of patients with cancer, rare disease and infection.
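The trio logic Caulfield describes, using both parents to filter out inherited variation, can be illustrated with a simple set operation: variants seen in the affected child but in neither parent are the candidates of interest. The sketch below is a deliberately simplified illustration with made-up variant identifiers, not a production variant-calling pipeline.

```python
# Simplified illustration of trio-based filtering: keep variants present in the
# affected child but absent from both parents. Real pipelines work on VCF files
# and must handle genotype quality, inheritance models and phasing -- all omitted
# here, and the variant identifiers below are made up.
child  = {"chr1:12345:A>T", "chr7:555:G>C", "chr17:999:C>T"}
mother = {"chr1:12345:A>T"}
father = {"chr7:555:G>C"}

candidates = child - (mother | father)
print(candidates)   # {'chr17:999:C>T'} -- the variant unique to the affected child
```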

 

Intel: How does this project benefit patients?

 

Caulfield: By building a picture of the entirety of the genome, or as much as we can read today, which is about 97.6 percent of your genome, we have a more comprehensive picture and a far greater chance of deriving healthcare benefits for patients. Cancer is essentially a disease of a disordered genome. With genomic sequencing, we can gain insights into what drove the tumor to occur in the first place, what drives its relapse, what drives its spread and other outcomes. Most importantly, we can understand what drives response to therapy. We already have good examples of where cancer genotyping is making a real difference to therapy for patients.

 

Intel: What is the biggest hurdle?

 

Caulfield: Informed consent is essential to the future application of the 100,000 Genomes Project. It’s very hard to guarantee that you can absolutely secure data. I think it’s the responsibility of all medical professionals like myself in this age to be upfront about the risks to data access. Most patients understand these risks. We try to keep patient data as secure as is reasonably possible within present technological bounds.

 

Intel: What is crucial to the success of genomic sequencing?

 

Caulfield: We need big data partners and people who know how to analyze a large amount of data. We also need commercial partners that will allow us to get new medicines to patients as quickly as possible. That partnership, if articulated properly, is well received by people. Once we have this established, we can make strides in gaining and keeping public and patient trust, which is crucial to the success of genomic sequencing.

 

If you want public trust, you must fully inform patients about the plan. Ensure their medical professionals understand that plan and that patients are brought into the conversation. This allows the patients and the public to shape your work. Sometimes in medicine, we become a little remote from what the patient wants when in actuality, this is their money. It should be their program, not mine.

 

Intel: What goal should researchers focus on?

 

Caulfield: With this large amount of data comes the need to process it as quickly as possible in order to provide helpful results for both the patient and care team. Intel’s All in One Day initiative is an important goal because it accelerates the time from when a person actually enrolls in such a program to receiving a diagnostic answer.

 

The goal is to get the turnaround as fast as possible. For example, if a patient has cancer, that person may have an operation where the cancer is removed. The patient would then need to heal. If chemotherapy were needed, it would be important to start it as quickly as possible. We have to use the best technology we have available so we can shrink the time from involvement to answer.

Read more >

All In One Day by 2020 – A Progress Check

 

All In One Day by 2020 – the phrase encompasses our real ambition here at Intel to empower researchers to give clinicians the information they need to deliver a targeted treatment plan for patients in just one 24-hour period. I wanted to provide you with some insight into where we are today and what’s driving forward the journey to All In One Day by 2020.

 

Genomics Code Optimization

 

We have been working with industry-leading experts and with commercial and open source authors of key genomic codes for several years on code optimization, to ensure that genome processing runs as fast as possible on Intel®-based systems and clusters. The result is a significant improvement in the speed of key genomic programs, which will help get sequencing and processing down to minutes. For example:

 

  • Intel has sped up a key piece of the Haplotype Caller in GATK, the pairHMM kernel, to be 970x faster, for an overall 1.8x increase in pipeline performance;
  • The acceleration of file compression for genomics files, e.g. BAM and SAM files, by over 4x;
  • The acceleration of Python using Intel’s Math Kernel Library (MKL), producing a 15x speedup on a 16-core Haswell CPU (see the sketch after this list);
  • Finally, using the enhanced MKL in conjunction with its Data Analytics Acceleration Library (DAAL) has enabled DAAL to be 100x faster than R for k-means clustering and 35x faster than Weka on Apriori.
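As a rough way to see the kind of MKL-backed acceleration referred to in the list above, the sketch below times a dense matrix multiplication with NumPy; when NumPy is linked against Intel MKL (for example via the Intel Distribution for Python), the same script exercises MKL’s threaded BLAS. The matrix size is arbitrary, and the speedups quoted above come from Intel’s own benchmarks, not from this script.

```python
# Rough illustration of MKL-backed acceleration: time a dense matrix multiply
# with NumPy. The speedups quoted in the list above are Intel's benchmark
# figures; this script only shows how to check your own BLAS backend and timing.
import time
import numpy as np

n = 4000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                       # dispatched to whatever BLAS NumPy was built against
elapsed = time.perf_counter() - start

gflops = 2 * n ** 3 / elapsed / 1e9
print(f"{n}x{n} matmul: {elapsed:.2f} s ({gflops:.1f} GFLOP/s)")
np.show_config()                # reports the BLAS/LAPACK libraries NumPy is using
```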

 

You can find out more about Intel’s work in code optimization at our dedicated Optimized Genomics Code webpage.

 

Scalability for Success

 

As we see an explosion in the volume of available data, the ability to scale a high performance computing system becomes ever more critical to accelerating success. We have put forth the Intel® Scalable System Framework to guide the market on the optimal construction of an HPC solution that is multi-purpose, expandable and scalable.

 

Combining the Scalable System Framework with optimized life sciences codes has resulted in a new, more flexible, scalable, and performant architecture. This reduces the need for purpose-built systems and instead offers an architecture that can span a variety of diverse workloads while offering increased performance.

 

Another key element of an architecture is the balance between three key factors: compute, storage, and fabric. And today we see the fruits of our work coming to life, for example, in a brilliant collaboration between TGen, Dell and Intel which optimized TGen’s RNA-Seq pipeline from 7 days to under 4 hours. TGen is successfully operating FDA-approved clinical trials, balancing research with the clinical treatment of pediatric oncology patients.

 

The intersection of our code optimization efforts and our Scalable System Framework effort has also yielded two new products for genomics, one from Dell and another from Qiagen.

 

From a week to a day

 

It’s useful, I think, to see just how far we’ve come in the last four years as we look ahead to 2020. In 2012 it took a week to perform the informatics on a whole human genome in a cloud environment, going from the raw sequence data to an annotated result. Today, the time for the informatics has decreased to just one day for whole genomes.

 

With the Dell and Qiagen reference architectures that are based on optimized code and the Intel® Scalable System Framework, a throughput-based solution has been created. This means that when fully loaded these base systems will perform the informatics on ~50 whole genomes per day.

 

However, it is important to note that the genomes processed on these systems still take ~24 hours to run; they are simply being processed in a highly parallel manner. If you use a staggered start time of ~30 minutes between samples, a completed genome is produced approximately every 30 minutes. For the sequencing instrumentation, Illumina can process a 30x whole human genome in 27 hours using its “rapid-run mode”.

 

So, in 2016, we can sequence a whole genome and do the informatics processing in just over 2 days (51 hours consisting of 27 hours of sequencing + 24 hours of informatics time), that’s just ~1 day longer than our ambition of All In One Day by 2020.
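The arithmetic behind these figures is easy to restate; the short sketch below simply reproduces the staggered-throughput and end-to-end numbers given in the last two paragraphs.

```python
# Reproduce the back-of-the-envelope numbers from the paragraphs above.
SEQUENCING_HOURS = 27        # Illumina 30x whole genome, "rapid-run mode"
INFORMATICS_HOURS = 24       # per-genome informatics time on the reference systems
STAGGER_MINUTES = 30         # staggered start between samples

end_to_end_hours = SEQUENCING_HOURS + INFORMATICS_HOURS
genomes_per_day = 24 * 60 / STAGGER_MINUTES     # one finished genome every 30 minutes

print(f"End-to-end time per genome: {end_to_end_hours} hours")        # 51 hours
print(f"Steady-state throughput: {genomes_per_day:.0f} genomes/day")  # ~48, i.e. roughly 50 per day
```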

 

Three final points to keep in mind:

 

  1. There are steps in the All In One Day process that are outside of the sequencing and the informatics, such as the doctor’s visit, the sample preparation for sequencing, the genome interpretation and the dissemination of results to the patient. These steps will add additional time to the 51 hours above.
  2. The reference architectures are highly scalable, meaning a larger system can do more genomes per day: 4 times the nodes produce 4 times the throughput.
  3. There are enhancements still to be made. For example, streaming the output from the sequencer to the informatics cluster, so that the informatics can be started before the sequencing is finished, will further compress the total time towards our All In One Day goal.

 

I’m confident our ambitions will be realized.

 

Read more >

Can Zealous Security Cause Harm?


Good security is about balancing Risks, Costs, and Usability. Too much or too little of each can be unhealthy and lead to unintended consequences. We are entering an era where the risks of connected technology can exceed the inconveniences of interrupted online services or the release of sensitive data. Failures can create life-safety issues and major economic impacts. The modernization of healthcare, critical infrastructure, transportation, and defense industries is beginning to push the boundaries and directly impact people’s safety and prosperity. Lives will hang in the balance and it is up to the technology providers, users, and organizations to ensure the necessary balance of security is present.

 

We are all cognizant of the risks in situations where insufficient security opens the door to exposure and the compromise of systems.  Vulnerabilities allow threats to undermine the availability of systems, confidentiality of data, and integrity of transactions.  On the other end of the spectrum, too much security can also cause serious issues.

 

A recent incident described how a piece of medical equipment crashed during a heart procedure due to an overly aggressive anti-virus scan setting. The device, a Merge Hemo, is used to supervise heart catheterization procedures, while doctors insert a catheter inside blood vessels to diagnose various types of heart disease. The module is connected to a PC that runs software to record and display data. During a recent procedure, the application crashed when the security software began scanning for potential threats. The patient remained sedated while the system was rebooted, before the procedure could be completed. Although the patient was not harmed, the misconfiguration of the PC security software caused an interruption during an invasive medical procedure.

 

Security is not an absolute.  There is a direct correlation between the increasing integration of highly connected and empowered devices, and the risks of elevated attack frequency with a greater severity of impacts.  The outcome of this particular situation was fortunate, but we should recognize the emerging risks and prepare to adapt as technology rapidly advances.

 

Striking a balance is important.  It may not seem intuitive, but yes, too much security can be a problem as well.  Protection is not free.  Benefits come with a cost.  Security functions can create overhead to performance, reduce productivity, and ruin users’ experiences.  Additionally, security can increase the overall cost of products and services.  These and other factors can create ripples in complex systems and result in unintended consequences.  We all agree security must also be present, but the reality is, there must be an appropriate balance.  The key is to achieve an optimal level, by tuning the risk management, costs, and usability aspects for any given environment and usage.

 

 

 

Interested in more?  Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights and what is going on in cybersecurity.

Read more >

Telemedicine Trends in Latin America

Telemedicine is gaining increased attention worldwide as a solution for improving access to care, improving quality of care, and lowering costs.

 

Much of Latin America faces a major challenge that could in part be addressed with telemedicine:  a shortage of providers, and large populations living in rural areas where access to physicians—particularly specialists—is lacking.

 

In my multiple visits to Latin America over the past two years, it is clear that while most countries in the region have used telemedicine to varying extents for many years, scalability remains a major goal.

 

Governments across Latin America are generally strong advocates of telemedicine, and are investing in the networks and infrastructure that will support this technology.

 

Below I highlight ways in which countries throughout the region are using or intend to use telemedicine, and what trends we might observe in the years ahead.

 

Brazil

In Brazil, telemedicine today is used strictly for provider-to-provider consultation, as physicians are not legally allowed to consult with patients over videoconference.

 

Telemedicine has been largely driven by the need to provide care virtually between specialists in urban centers to patients in remote areas, due to a lack of specialists in the rural areas.

 

The Brazilian government has long supported the use of telemedicine to provide better access and treatment to remote areas. Since 2006, it has facilitated two public initiatives – the Brazilian National Telehealth Network Program (launched by the MOH) and the RUTE Telemedicine University Network (launched by the Ministry of Science, Technology, and Innovation) – both of which serve to deploy telemedicine across Brazil.

 

One of the first major initiatives started in 2006 in Parintins, a city of 100,000 located in the middle of the Amazon. With no roads to or from the city, the goal was to use telemedicine to enable communication between physicians in Parintins and specialists in Sao Paulo. Parintins partnered with private technology companies, including Intel, to build the necessary infrastructure (e.g., WiMAX network). This telemedicine program continues to operate today, and has informed other telemedicine efforts including Brazil’s national telehealth program, Telessaude (http://www.telessaudebrasil.org.br/).

 

Another major initiative in Brazil is to bring intensive care unit (ICU) care to rural areas. The Brazilian MOH initiated tele-ICU programs so that now many hospitals in different regions are connected to rural parts of the country. These tele-ICUs reduce the need to transport patients into a city for health conditions such as heart attacks, strokes, and sepsis. Physicians in urban areas are able to use PTZ cameras to visually inspect the patient, and collect and interpret vital signs in real-time. Cerner, in partnership with Brazilian companies Intensicare and IMFtec, has provided the technology and software for most of these virtual ICUs.

 

Mexico, Chile, Peru, and Argentina

In Mexico, the social security network provides healthcare to formal sector workers. The network is currently working with companies such as Lumed Health http://www.lumedhealth.com/ to expand telemedicine capabilities. In addition, telemedicine is being used between the U.S. and Mexico with health systems such as the Mayo Clinic and Massachusetts General conducting consultations with physicians in Mexico.

 

In Chile, the Ministry of Health has implemented a “Digital Health Strategy.” Its primary goal is also to address provider shortages and to improve access to care in rural areas. There are currently several telemedicine projects and POCs underway in Chile.  AccuHealth (https://www.accuhealth.cl/), for example, is a Chilean company that provides tele-monitoring services specifically to bring home care to patients who suffer from chronic conditions. The company plans to expand to Mexico and Colombia in the near term.

 

In Peru, the government is spearheading efforts to build a fiber optics network across the entire country (www.proinversion.gob.pe/RedDorsal/). This infrastructure will be used to better support telemedicine services.

In Argentina, the government has worked with the MOH and the Ministry of Federal Planning, Public Investment and Services to promote telemedicine. This collaboration has culminated in the CyberHealth Project, which focuses on installing fiber optics and upgrading hospitals to allow for videoconferencing. It aims to connect 325 healthcare institutions across the nation to enable remote consultations and sharing of expertise.

 

The Future of Telemedicine in Latin America

Telemedicine is being increasingly recognized as a solution to achieve more with less. In Latin America, it has great potential to address the fact that providers and health care resources are not distributed equally among the urban and rural populations.

 

The future of telemedicine in the region is promising. Governments are investing in and taking active roles in digitizing their health systems (e.g., implementation of electronic medical records, improving interoperability) along with building the infrastructure required to support telemedicine. The Pan American Health Organization (PAHO) has convened a meeting of the MOH leaders from several Latin American countries to discuss strategic plans for e-Health across the region. This collaboration, where protocols, guidelines, and best practices can be shared, will be increasingly important.

 

Intel Health & Life Sciences looks forward to continuing its partnerships with public and private entities across Latin America to continue these important efforts.

Read more >

Nurses Week 2016: Technology To Make Your Job Easier

International Nurses Day is a time to say Thank You Nurses. Thank you for your hard work, thank you for your compassion and thank you for the endless care you give to patients. It’s this unwavering focus on patient care that we must keep in mind when developing and implementing technology for nurses both in the hospital and community. The most valuable technology we can give to nurses is that which is almost invisible to – yet improves – their workflow, simplifies complex tasks and enables them to deliver even better care – in essence, technology must make the job of a nurse easier. I want to take today, International Nurses Day, to highlight a couple of technologies which have the potential to deliver on all of the above.

 

Nursing goes Digital

I know from experience that the best decisions are made when a nurse has the most accurate and up-to-date information on a patient’s condition. And when that accurate information can be gathered and accessed in an intuitive and more natural interaction using technology it’s a win-win for nurses and patients.

 

I’m excited by the potential offered by Intel’s RealSense 3D camera, which can be found in a range of devices such as 2-in-1s that nurses are already using to record vital signs and access EMRs. For example, imagine being able to accurately track all 22 joints of a hand to assist with post-operative treatment following hand surgery.

 

For community nurses, mobility is key. Holding the most up-to-date information when visiting patients in the home ensures mistakes are kept to a minimum and that all parties involved in the care of the patient, from community nurses to specialist clinicians, can make evidence-based decisions. 2-in-1 devices help nurses stay focused on the patient rather than on reams of paperwork, while also helping patients better understand their condition and improving buy-in to treatment plans. The real benefit lies in simplifying and speeding up those processes, which helps nurses deliver the best possible care.

 

Big Data for Nurses

When we think of Big Data it is all too easy to think only about genomics, but there are benefits that can clearly help nurses identify serious illness more quickly too. Take Cerner, for example, which has developed an algorithm that monitors vital information fed in real time from the EMR. The data is analysed continuously, and the algorithm identifies with a high degree of accuracy that a patient is either going to develop, or already has, sepsis.
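
Cerner’s actual model is proprietary, but the general idea behind rule-based early-warning checks can be sketched in a few lines. The example below is purely illustrative, not Cerner’s algorithm: it counts SIRS-like criteria against textbook thresholds and raises an alert when two or more are met; the vital-sign fields and thresholds are assumptions for the sake of the sketch.

```python
# Illustrative sketch only -- not Cerner's algorithm. It flags patients whose
# latest vital signs meet two or more SIRS-like criteria, a common starting
# point for rule-based sepsis screening. Thresholds are textbook values.

from dataclasses import dataclass

@dataclass
class Vitals:
    temperature_c: float      # body temperature in Celsius
    heart_rate: int           # beats per minute
    respiratory_rate: int     # breaths per minute
    wbc_count: float          # white blood cells, x1000 per microliter

def sirs_criteria_met(v: Vitals) -> int:
    """Count how many SIRS-like criteria the current vitals satisfy."""
    criteria = [
        v.temperature_c > 38.0 or v.temperature_c < 36.0,
        v.heart_rate > 90,
        v.respiratory_rate > 20,
        v.wbc_count > 12.0 or v.wbc_count < 4.0,
    ]
    return sum(criteria)

def sepsis_alert(v: Vitals) -> bool:
    """Raise an alert when two or more criteria are met."""
    return sirs_criteria_met(v) >= 2

# Example: the latest vitals streamed from the EMR for one patient
if sepsis_alert(Vitals(temperature_c=38.6, heart_rate=104, respiratory_rate=24, wbc_count=13.1)):
    print("Possible sepsis: notify the care team for assessment")
```

In a real deployment the interesting work is in the data plumbing and model validation rather than the rule itself, but the sketch shows why real-time EMR feeds matter: the check is only as timely as the vitals it sees.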

 

Clearly, given how quickly drugs must be administered, this Big Data solution is quite simply helping nurses save lives by identifying at-risk patients and getting them the treatment they so desperately need. Watch this video to find out more about how Intel and Cloudera allow Cerner to provide a technology platform which has helped save more than 2,700 lives.

 

Intelligent Care

The rise of the Internet of Things in the healthcare sector is seeing an increasing use of sensors to help simplify tasks for nurses. For example, if sensors can monitor not only a patient’s vital signs but also track movement, such as frequency of toilet use, it not only frees up a nurse’s time for other tasks but also begins to build an archive of data that can be used at both the patient and the population level.

 

In China, the Intel Edison-based uSleepCare intelligent bed is able to record a patient’s vital signs, such as rate and depth of breathing, heart rate, and heart-rate variability (HRV), without the need for nurse intervention. There are positive implications for patient safety too, as sensors can track movements and identify when patients might fall out of bed, alerting nurses to the need for attention.

And when I think of moving towards a model of distributed care, this type of intelligent medical device can help the sick and elderly be cared for in the home too. WiFi and, in the future, 5G technologies, combined with sensors, can help deliver the right patient information to the right nurse at the right time.
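
To make the idea concrete, here is a minimal sketch of how bedside sensor readings might be turned into nurse alerts. The field names, thresholds, and notify hook are illustrative assumptions, not the uSleepCare product’s API.

```python
# Minimal sketch of a bedside-sensor alert check. The field names, thresholds,
# and notify() hook are illustrative assumptions, not the uSleepCare API.

from typing import Callable, Dict

def check_reading(reading: Dict[str, float], notify: Callable[[str], None]) -> None:
    """Inspect one sensor reading and alert the nurse station if needed."""
    if reading["respiratory_rate"] < 8:
        notify("Low respiratory rate detected")
    if reading["heart_rate"] > 130:
        notify("Sustained tachycardia detected")
    if reading["seconds_near_bed_edge"] > 30:
        notify("Patient near edge of bed: possible fall risk")

# Example reading pushed from the bed over WiFi every few seconds
check_reading(
    {"respiratory_rate": 14.0, "heart_rate": 142.0, "seconds_near_bed_edge": 5.0},
    notify=lambda msg: print(f"ALERT to ward nurse: {msg}"),
)
```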

 

Investing in the Future

Having highlighted two examples of how technology can help nurses do an even better job for patients, I think it’s important to recognise that we must also support nurses in using new technology. Solutions must be intuitive and fit seamlessly into existing workflows, but I recognise that training is needed. And training on new technologies should happen right from the start of nursing school and be a fundamental part of ongoing professional development.

 

While International Nurses Day is, of course, a time to reflect and say Thank You Nurses, I’m also excited about the future.

 

Read more >

Key Lessons from the 2016 Verizon Data Breach Investigations Report

Verizon 2016 DBIR.jpg

The annual Data Breach Investigations Report (DBIR) is out, and it reinforces the value of well-established cybersecurity practices. The good folks at Verizon Enterprise have once again published one of the most respected annual reports in the security industry.

 

The report sets itself apart by intentionally avoiding unreliable ‘survey’ data and instead striving to communicate what is actually happening across the cybersecurity breach landscape. The perception of security typically differs greatly from reality, so this analysis provides some of the most relevant lessons for the field.

 

Report data is aggregated from real incidents that the company’s professional security services have responded to for external customers.  Additionally, a large number of security partners now also contribute data for the highly respected report.  Although this is not comprehensive across the industry, it does provide a unique and highly-valuable viewpoint, anchored in real incident response data.

 

Many of the findings support long-standing opinions on the greatest cybersecurity weaknesses and best practices. Which is to say, I found nothing too surprising, and the report reinforces the current direction of good security advice.

 

 

Key Report Findings

1. Human Weaknesses

30% of phishing messages were opened by their intended victim

12% of those targets took the next step to open the malicious attachment or web link

2. Ransomware Rises

39% of crime-ware incidents were ransomware

3. Money for Data

95% of data breaches were motivated by financial gain

4. Attackers Sprint, Defenders Crawl

In 93% of data breaches, systems were compromised in minutes

83% of victims took more than a week to detect breaches

5. Most of the Risk is from a Few Vulnerabilities

85% of successful exploit traffic was attributed to the top 10 CVE vulnerabilities. Although the figure is difficult to quantify and validate, it is clear that the top vulnerabilities should be prioritized.

 

 

Key Lessons to Apply

1. Train users. Users with permissions and trust are still the weakest link. Phishing continues to be a highly effective way for attackers to leverage poorly trained users into giving them access.

2. Protect financially-valuable data from confidentiality, integrity, and availability attacks.  Expect attacks and be prepared to respond and recover.

3. Speed up detection capabilities.  Defenders must keep pace with attackers.  When preventative controls fail, it is imperative to quickly detect the exploit and maneuver to minimize overall impact.

4. Patch top vulnerabilities in operating systems, applications, and firmware. Patch quickly or suffer. It is a race; treat it as such. Prioritize the work based upon severity ranking; serious vulnerabilities should not languish for months or years! A minimal prioritization sketch follows below.
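
As one way to act on that last point, the sketch below sorts an open-vulnerability backlog by CVSS score and by how long each finding has been open, so the riskiest, longest-standing items are patched first. The CVE identifiers and scores are placeholders, not figures from the DBIR.

```python
# Minimal sketch of severity-based patch triage: sort open vulnerabilities by
# CVSS score and age so the riskiest, longest-standing items are fixed first.
# The CVE entries below are placeholders, not findings from the DBIR.

from typing import List, NamedTuple

class Vulnerability(NamedTuple):
    cve_id: str
    cvss_score: float   # 0.0 - 10.0
    days_open: int

def prioritize(vulns: List[Vulnerability]) -> List[Vulnerability]:
    """Highest CVSS first; break ties by how long the finding has been open."""
    return sorted(vulns, key=lambda v: (v.cvss_score, v.days_open), reverse=True)

backlog = [
    Vulnerability("CVE-0000-0001", 9.8, 45),
    Vulnerability("CVE-0000-0002", 5.3, 200),
    Vulnerability("CVE-0000-0003", 9.8, 120),
]
for v in prioritize(backlog):
    print(v.cve_id, v.cvss_score, v.days_open)
```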

 

This is just a quick review.  The report contains much more information and insights.

I recommend reading the Executive Summary or the full DBIR Report.

 

 

 

Interested in more?  Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights and what is going on in cybersecurity.

Read more >

Tweet Chat Review: The Growth of Connected Care

Last week I had the honor of moderating the weekly #HITsm (Health IT social media) chat on Twitter. This regular discussion about health IT issues is a wonderful forum for addressing what steps need to be taken to move healthcare technology forward on a number of fronts.

 

The topic of my chat was The Growth of Connected Care, and focused on defining the terms, sharing trends and identifying successful characteristics of a connected care program. I enjoyed the banter and the great questions that came my way during the chat and learned quite a bit about what the climate is like for overcoming obstacles to adopting connected care.  You can see the transcript of the entire chat here.

 

To recap the conversation, below are the questions that were asked during the chat and my brief answers.

andychat.png

 

Connected care is a broad term – what does it mean?

Generally, connected care applies to leveraging technology to connect patients, providers, and caregivers. Increasingly, this is happening in real-time. Connected care extends care outside of the traditional hospital setting and moves healthcare from episodic events to more continuous care that is tailored specifically for the patient.

 

What market trends are driving connected care?

A few trends are driving connected care forward. First, new Internet of Things (IoT) technology, from devices to the data center, is making connected care possible for patients. Think about wearables and the massive amount of data that can be acquired to influence care; this is the cornerstone of connected care.

 

Second, payment reform is shifting payment models from fee-for-service to value-based. As payment models change, patient retention becomes increasingly important for clinicians. This is the consumerization of healthcare, where the patient takes charge of their own health and care is delivered on a regular, ongoing basis.

 

Finally, healthcare technology investments in digital platforms have opened the opportunity to create and consume new data streams in real-time.

 

What technologies are enabling connected care?

For starters, big data technologies, both software and hardware, are enabling us to work with the high volume, variety, and velocity of connected care data. Wearables and sensors are also evolving, and newer devices are delivering more value in improved form factors.

 

What are characteristics of a successful connected care program?

Successful connected care programs have clear clinical and business goals, know the problems that need to be solved, have measurable outcomes and clear value propositions, and feature scalable architecture for data ingestion, storage, analysis, and visualization.

 

Programs must be patient-centric and look holistically at both patient and care team touch points throughout the continuum of care. They also need a strategy for transforming data into actionable/comprehensible insights delivered at the right time, to the right person. This is often overlooked – insights for providers or patient instructions get lost in poor visualization. This is why the UI/UX aspect of connected care is so critical.

 

Where is connected care headed, and what are some things to watch for?

Expect larger connected care programs with employers, payers, and care providers to reach consumers and tie engagement to financial outcomes. It will be interesting to see how employees respond and how the employer/employee relationship is re-written to include health-related activities.

 

Population health programs will go through a three-step evolution of understanding, predicting, and then preventing (UPP). Step one is simply understanding what data is available and identifying and filling gaps. The second stage of program maturity involves using data to begin predicting outcomes for specific populations. This stage involves iterating through models to improve specificity, both for target outcomes and for population boundaries.
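
As a concrete, purely illustrative example of the predicting stage, the sketch below fits a simple logistic regression that estimates the probability of a target outcome (say, readmission) from a handful of population features. The features, synthetic data, and model choice are assumptions for illustration; a real program would iterate on all three.

```python
# Illustrative sketch of the "predicting" stage: fit a simple model that
# estimates the probability of a target outcome for members of a population.
# All data here is synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Features: age, number of chronic conditions, prior admissions
X = rng.normal(loc=[65, 2, 1], scale=[10, 1, 1], size=(500, 3))
# Synthetic outcome loosely correlated with the features
y = (X @ np.array([0.02, 0.5, 0.7]) + rng.normal(size=500) > 3.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

new_patients = np.array([[72, 3, 2], [55, 1, 0]])
print(model.predict_proba(new_patients)[:, 1])  # predicted outcome risk per patient
```

The point of the exercise is less the model than the iteration loop around it: each pass refines which outcomes are worth predicting and which population boundaries make the predictions actionable.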

 

The third stage is using the predictions to implement real programs that prevent target outcomes from occurring. This stage will partially rely on human-centered care delivery, but it will also push the boundaries of virtual medicine in response to access and delivery constraints that inevitably arise.

 

On the downside, large data breaches look inevitable in the future as more devices allow for more attack vectors. The big unknown is how this will impact the industry and consumers.

 

What are some of the short- and long-term obstacles to adoption of connected care programs?

The business models for connected care are still evolving. New payment and reimbursement pathways are needed to create growth. Sustainable, long-term patient engagement is a challenge. Hopefully, healthcare will continue to look to industries that have pioneered techniques for data-driven high-touch consumer engagement (consumer goods, SaaS internet companies, etc.) and apply those learnings to developing new strategies to engage patients. Finally, federal and state regulation must continue to evolve because connected care operates across traditional geographic boundaries and models of care delivery.

Read more >

Will Your Cloud be at Risk?

The Cloud is both compelling and alluring, offering benefits that entice many organizations into rapid adoption. The attraction of lower operational costs, new service offerings, and the adaptability to cater to varying demand makes it almost irresistible to rush in.

 

But caution should be taken.

 

Leveraging cloud technologies can offer tremendous opportunities, with the caveat of potentially introducing new security problems and business risks.

 

These risks can include vulnerability to cyber-attacks, jeopardizing the confidentiality of data, and potentially undermining the integrity of transactions. Care must be taken to understand these challenges in order to properly design the environment and establish sustainable management processes to maintain a strong security posture. Information assurance is required.

 

 

 

 

How can you mitigate risks in the Cloud?

1. Be informed by understanding both the benefits and risks of cloud adoption.

2. Know the threats and types of attacks that put your cloud data and services at risk.

3. Establish practices to cover the Top 10 assurance categories for cloud.

4. Build a quality plan by leveraging expert resources.

5. Establish accountability across the lifecycle.

6. Don’t be afraid to ask. Nobody gets it right alone!

 

I recently presented strategic recommendations for cloud adoption to a community of application and infrastructure developers. The first step of the journey into the Cloud resides with teams pursuing the benefits and those accountable for maintaining the environment. It is important to follow a path of practical steps for cloud adoption in order to manage the risks while accessing the plethora of benefits. To be successful, teams must understand the security challenges, leverage available expertise and establish a comprehensive plan across the service lifecycle.

 

 

Interested in more?  Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights and what is going on in cybersecurity.

Read more >

Forging an Open Path for SDI Stack Innovation

Intel was founded on a deep commitment to innovation, especially open, standards-driven innovation, which results in the kind of acceleration only seen when whole ecosystems come together to deliver solutions. Today’s investment in CoreOS is reflective of this commitment, as data centers face an inflection point with the delivery of software defined infrastructure (SDI). As we have at many times in our industry’s history, we are all piecing together many technology alternatives to form an open, standard path for SDI stack delivery. At Intel, we understand the value that OpenStack has brought to the delivery of IaaS, but we also see the additive value of the containerized architectures found in many of the largest cloud providers today. We view these two approaches as complementary, and their integration and adoption are critical to the broad proliferation of SDI.


This is why we announced a technology collaboration with CoreOS and Mirantis earlier this year to integrate OpenStack and Kubernetes, enabling OpenStack to run as containerized pods within a Kubernetes environment. Inherent in this collaboration is a strong commitment across all parties to contribute the results directly upstream so that both communities may benefit. The collaboration brings the broad workload support and vendor capabilities of OpenStack together with the application lifecycle management and automation of Kubernetes in a single solution, providing an efficient path to solving many of the issues gating OpenStack proliferation today: stack complexity and convoluted upgrade paths. Best of all, this work is being driven in a fully open source environment, reducing any risk of vendor lock-in.
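
To illustrate the basic pattern of running an OpenStack control-plane service under Kubernetes, here is a minimal sketch using the official Kubernetes Python client to declare a Keystone Deployment. The image name, namespace, labels, and replica count are assumptions for illustration only; this is not the actual Stackanetes configuration.

```python
# Minimal sketch: declare one OpenStack service (Keystone) as a Kubernetes
# Deployment via the official Python client. Image, namespace, and labels are
# illustrative placeholders, not the Stackanetes configuration.

from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

keystone = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="keystone", namespace="openstack"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "keystone"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "keystone"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="keystone",
                    image="example.org/openstack/keystone:latest",  # placeholder image
                    ports=[client.V1ContainerPort(container_port=5000)],
                )
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="openstack", body=keystone)
```

Treating each OpenStack service as a Deployment like this is what gives Kubernetes room to handle rolling upgrades and restarts, which is exactly where stack complexity and upgrade pain show up today.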

 

Because software development and innovation like this is a critical part of Intel’s Cloud for All initiative, we tasked our best SDI engineers to work together with CoreOS to deliver the first ever live demonstration of OpenStack running as a service within Kubernetes at the OpenStack Summit. To put this into perspective, our joint engineers were able to deliver a unified “Stackanetes” configuration approximately three weeks after our initial collaboration was announced. Three weeks is a short timeframe to deliver such a major demo, and it highlights the power of using the right tools together. To say that this captured the attention of the OpenStack community would be an understatement, and we expect to integrate this workflow into the Foundation’s priorities moving forward.

 

The next natural step in our advancement of the Kubernetes ecosystem was the investment in CoreOS that we announced today. CoreOS was founded on the principle of delivering GIFEE, or “Google Infrastructure for Everyone Else,” and its Tectonic solution integrates Kubernetes with the CoreOS Linux platform, making Tectonic an easy-to-consume hyperscale SDI stack. We’ve been working with CoreOS for more than a year on software efforts focused on optimizing Tectonic for underlying Intel architecture features. Our collaboration on Kubernetes reflects a common viewpoint on the evolution of SDI software to support a wide range of cloud workloads that are efficient, open, and highly scalable. We’re pleased with this latest chapter in our collaboration and look forward to delivering more of our vision in the months ahead.

Read more >

Nurses Week 2016: When Will Avatars Join Nurses Week Celebrations?

Nurses Week is a great opportunity to celebrate all of the fantastic work we do for patients. I often find myself pausing at this time of the year to appreciate just how different – and in most cases better – our working practices, processes and outcomes are compared to just 10 or so years ago. Technology has been a great enabler in improving the workflow of nurses today, but I wanted to share some thoughts on the future of nursing in this blog and how we might be welcoming avatars and the world of virtual reality to Nurses Week celebrations in the near future.

 

Better Training, Overcoming Global Shortage of Nurses

There are challenges ahead for the nursing community, driven by many of the same factors affecting the entire healthcare ecosystem, ranging from an increasingly ageing population to pressure on budgets. When I met with nurses from across Europe in Brussels earlier this year at Microsoft in Health’s Empowering Health event, two key themes really came to the fore:

  • First, there was a call for improved training for nurses to help them better understand and benefit from technologies such as 2 in 1 tablets and advanced Electronic Medical Record systems;
  • Second, there was a discussion around what technologies might help overcome the potential of a global shortage of nurses in the future. A 2015 World Health Organisation report stated that ‘a fundamental mismatch exists between supply and demand in both the global and national health labour markets, and this is likely to increase due to prevalent demographic, epidemiologic and macroeconomic trends.’


Looking ahead, I see a real opportunity to integrate avatars and virtual reality into the nursing environment, which will not only train students to be better nurses but also deliver better patient care with improved workflows at the bedside.

 

Virtual Reality To Deliver Safe, Effective Teaching

Training is a fundamental part of a nurse’s development, and that rings true for both those in nursing school and more experienced nurses learning new technologies and procedures. Virtual reality technology can play a major role in helping nurses to better deal with a range of scenarios and technologies.

 

For example, if I want to teach a nurse how to perform a specific procedure using virtual reality, I’m able to present the trainee with an avatar on a screen that could be any combination of gender, height, weight and medical condition. And whilst the procedure is being undertaken I’m then able to trigger a wide range of responses from the avatar patient to help the nurse learn how to deal with different scenarios – all in a safe and controlled manner that can be monitored and assessed for post-session feedback.

 

Similarly, if a nurse is required to understand how to use a new piece of technology to improve their workflow, such as working with an upgrade to an EMR system on a 2 in 1 tablet, virtual reality can help too by simulating these new systems. In a virtual setting nurses are not only able to familiarise themselves with new processes but can provide feedback on issues around workflow before they are launched into a live patient environment.

 

If I think of how my training was delivered at nursing school, there was plenty of ‘chalk and board’-style teaching and a lot of time spent in a classroom using limited resources such as manikins. Today, a virtually unlimited number of student nurses can learn remotely using virtual reality and avatar patients, reinforcing knowledge and improving workflows on a range of mobile devices. This is particularly useful for countries where educators are in short supply but nursing demand is high.


Avatars Ask ‘How Are You Feeling Today?’

A recurring question in my mind is how we can make better use of the fantastic expertise and knowledge of today’s nurses to continue to deliver great care to patients. In the face of a shortage of nurses, we should explore how avatars on a bedside screen or 2-in-1 device might be able to take away the burden of some of the more routine daily tasks, such as asking patients if they have any unusual symptoms.

 

Patient answers could be fed back into the EMR, which would trigger either further questions from the avatar or, in more serious cases, an alert for intervention by a human nurse.
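
A minimal sketch of that routing logic might look like the following; the keywords, categories, and escalation rules are illustrative assumptions rather than a clinical triage protocol.

```python
# Hedged sketch of how an avatar might route a patient's answer. Keywords and
# categories are illustrative assumptions, not a clinical triage protocol.

SERIOUS_SYMPTOMS = {"chest pain", "shortness of breath", "severe bleeding"}

def route_answer(answer: str) -> str:
    """Decide whether to ask a follow-up question or alert a human nurse."""
    text = answer.lower()
    if any(symptom in text for symptom in SERIOUS_SYMPTOMS):
        return "alert_nurse"      # write to the EMR and page the ward nurse
    if "yes" in text or "unusual" in text:
        return "ask_follow_up"    # avatar asks a more specific question
    return "log_only"             # record the answer in the EMR

print(route_answer("Yes, I have some chest pain this morning"))  # alert_nurse
```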

When I talk to my peers within healthcare, there are some obvious and real concerns about the lack of emotion conveyed by avatars. The rise of chat bots has made for interesting news recently, and I see this kind of artificial intelligence, combined with a great avatar experience, delivering something approaching human emotion, such as sympathy, for routine tasks. We should also recognise that an avatar can potentially speak an unlimited number of languages, helping all patients get a better understanding of their condition.

 

As nurses, I hope our community embraces discussion and ideas around the use of virtual reality and avatars. I’ve talked through just a couple of scenarios where I see improvements to training and care delivery, but I’d be interested to hear how you think these technologies could help you do your job better. And perhaps one day in the near future, avatars will be celebrating Nurses Week with us too.

 

Read more >