Recent Blog Posts

Big Data in Life Sciences: The Cost of Not Being Prepared

For years, the term “Big Data” has been thrown around the healthcare and life science research fields as if it were simply a fashionable thing to talk about. At some level, everyone knew the day was coming when the amount of data being generated would outpace our ability to process it unless major steps were taken immediately to stave off that eventuality. But many IT organizations chose to treat the warnings of impending overload the way much of the world came to view Y2K after the fact: as a false threat with no real issue to prepare for. That was five years ago, and the time for big data has come.

 

The pace at which life science-related data can be produced has increased at a rate that far exceeds Moore’s Law, and it has never been cheaper or easier for scientists and clinical researchers to acquire data in vast quantities. Many research computing environments have found themselves in the middle of a data storm: researchers and healthcare professionals need enormous amounts of storage, and they need to analyze that stored data quickly so that discoveries can be made and cures for disease become possible. Because their organizations were unprepared, researchers have found themselves stranded in a research computing desert with nowhere to go, while the weight of that data threatens to collapse onto them.

 

Storage and Compute

The net result of IT calling what it assumed was the scientists’ bluff is that many organizations are unprepared to provide the sheer amount of storage the research requires, and even when they can provide that storage, they don’t have enough compute power to work through the data so that it can be archived. The result is a backlog of stored data that compounds as more and more data pours into the infrastructure. To make matters worse, scientists are left with the option of moving the data elsewhere for processing and analysis. Sometimes well-funded laboratories purchase their own HPC equipment, sometimes cloud-based compute and storage is purchased, and sometimes researchers find a collaborator with access to an HPC system that can help chew through the backlog. Unfortunately, these workarounds create another barrier: how to move that much data from one point to another. Most organizations don’t have Internet connections much above 1 Gbps for the entire organization, while many of these datasets are tens of terabytes (TBs) in size and would take days to weeks to move over such a connection even at saturation (which would effectively shut down the Internet connection for the whole organization). So, being the resourceful folks they are, scientists then take to physically shipping hard drives to their collaborators, which has its own complex set of issues to contend with.
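To put rough numbers behind that claim, here is a minimal back-of-the-envelope sketch in Python; the dataset sizes and link-utilization figures are illustrative assumptions, not numbers from any particular organization.

```python
# Back-of-the-envelope estimate of how long it takes to move research data
# over a shared organizational Internet link. All inputs are illustrative.

def transfer_days(dataset_tb, link_gbps=1.0, utilization=1.0):
    """Days needed to move dataset_tb terabytes over a link_gbps link
    running at the given fraction of saturation."""
    bits_to_move = dataset_tb * 8e12                        # 1 TB = 8e12 bits
    seconds = bits_to_move / (link_gbps * 1e9 * utilization)
    return seconds / 86400                                  # seconds per day

for tb in (10, 50, 200):
    print(f"{tb:>4} TB at full 1 Gbps saturation: {transfer_days(tb):6.1f} days")
    print(f"{tb:>4} TB at a realistic 25% share:  {transfer_days(tb, utilization=0.25):6.1f} days")
```

Even the optimistic saturation case ties up the organization’s link for days; at a realistic share of a production link, the larger datasets take months, which is why hard drives end up in the mail.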

 

The issues arising from this lack of preparedness are so profound that many research- and healthcare-based organizations are finding it difficult to attract and hire the talent they need to actually accomplish their missions. New researchers, and those on the forefront of laboratory technologies, largely understand their computational requirements. If a hiring organization isn’t going to be able to provide for them, they look elsewhere.

 

Today and Tomorrow

As such, these organizations have finally started to make the proper investments in research computing infrastructure, and the problem is slowly starting to get better. But many of them are funding only what they have to today to get today’s jobs done. This approach is a bit like expanding a highway in a busy city to meet the current population’s needs rather than building it for 10 years from now; by the time the highway is completed, the population will have already exceeded its capacity, and nothing will have improved. Building this infrastructure the right way, for an unpredictable point in the future, is scary and quite expensive, but the alternative is the likely failure of the organization to meet its mission. Research computing is now a reality in life science and healthcare research, and not investing will only slow things down and cost organizations much more in the future.

 

So, if this situation describes your organization, encourage its leadership to invest now in technologies for the five-years-from-now timeframe. Ask them to think big and to think strategically, instead of putting tactical bandages on the problems at hand. If we can get most organizations to invest in the needed technologies, scientists will be able to stop worrying about where their data goes and get back to work, which will result in an overall improvement in our health-span as a society.

 

What questions do you have?

Read more >

10 Mobile BI Strategy Questions: Enterprise Mobility

Is your mobile business intelligence (BI) strategy aligned with your organization’s enterprise mobility strategy? If you’re not sure what this means, you’re in big trouble. In its simplest form, enterprise mobility can be considered a framework to maximize the use of mobile devices, wireless networks, and all other related services in order to drive growth and profitability. However, it goes beyond just the mobile devices or the software that runs on them to include people and processes.

 

It goes without saying that enterprise mobility should exist in some shape or form before we can talk about mobile BI strategy, even if the mobile BI engagement happens to be the first pilot planned as a mobility project. Therefore, an enterprise mobility roadmap serves both as a prerequisite for mobile BI execution and as the foundation on which it rests.

 

When the development of a successful mobile BI strategy is closely aligned with the enterprise mobility strategy, the company benefits from the resulting cost savings, improvement in the execution of the mobile strategy, and increased value.

 

Alignment with Enterprise Mobility Results in Cost Savings

 

Although mobile BI will inherit most of its rules for data and reports from the underlying BI framework, many of the components it relies on during execution will depend on the enterprise rules, or lack thereof. For example, the devices on which the mobile BI assets (reports) are consumed will be offered and supported as part of an enterprise mobility management system, including bring-your-own-device (BYOD) arrangements. Therefore, operating outside of these boundaries could not only be costly to the organization but could also raise legal and compliance concerns.

 

Whether the mobile BI solutions are built in-house or purchased, as with any other technology initiative it doesn’t make sense to reinvent the wheel. Existing contracts with software and hardware vendors could offer major cost savings. Moreover, fragmented approaches that deliver the same requirement or the same functionality separately for multiple groups are not a good use of scarce resources.

 

For example, forecast reports built for sales managers within the customer relationship management (CRM) system and forecast reports developed on the mobile BI platform may offer the same or similar functionality and content, resulting in confusion and duplicated effort.

 

Leveraging Enterprise Mobility Leads to Improved Execution

 

If you think about it, execution of the mobile BI strategy can be improved in every respect if an enterprise mobility framework exists that can be leveraged. The organization’s technology and support infrastructure (two topics I will discuss later in this series) are the obvious ones worth noting. Consider this: how can you guarantee effective delivery of BI content when you roll out to thousands of users without a robust mobile device support infrastructure?

 

If we arm our sales force with mobile devices around the same time we plan to deliver our first set of mobile BI assets, we can’t expect flawless execution and increased adoption. What if the users have difficulty setting up their devices and have nowhere to turn for immediate and effective support?

 

Enterprise Mobility Provides Increased Value for Mobile BI Solutions

 

By aligning our mobile BI strategy with our organization’s enterprise mobility framework, we not only increase our chances of success but, more importantly, gain the opportunity to provide value beyond pretty reports with colorful charts and tables. This increased value means we can deliver an end-to-end solution, even though not every component of it falls under the BI umbrella. Enterprise mobility components such as connectivity, device security, and device management contribute to a connected delivery system that mobile BI will share.

 

Bottom Line: Enterprise Mobility Plays an Important Role

 

Enterprise mobility will influence many of mobile BI’s success criteria. When we’re developing a mobile BI strategy, we need not only to stay in close alignment with the enterprise mobility strategy so we can take advantage of the synergies that exist, but also to consider the potential gaps we may have to address if the roadmap does not provide timely solutions.

 

How do you see enterprise mobility influencing your mobile BI execution?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter at @KaanTurnali and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

Read more >

Processor Syllabification; Separated by a Common Language #thankyounoahwebster

How do you pronounce processor, coprocessor, microprocessor or multiprocessing? Do you hyphenate these as proc-es-sor, co-proc-es-sor, mi-cro-proc-es-sor, and mul-ti-proc-ess-ing as I know to be correct? Or do you prefer pro-ces-sor, co-pro-ces-sor, mi-cro-pro-ces-sor, and mul-ti-pro-cess-ing? Your answer likely hinges on whether you speak … Read more >

The post Processor Syllabification; Separated by a Common Language #thankyounoahwebster appeared first on Intel Software and Services.

Read more >

Getting Personal: IoT Tech Helps Retailers Step Closer to Consumers

Consumers today expect 1:1 experiences from brands, and thanks to the Internet of Things (IoT), retailers are quickly moving in that direction. From rapid consumer data processing and real-time analytics, to IoT-connected RFID tags and predictive consumer profiling, retailers are … Read more >

The post Getting Personal: IoT Tech Helps Retailers Step Closer to Consumers appeared first on IoT@Intel.

Read more >

Intel France Collaborates with Teratec to Open Big Data Lab

By Valère Dussaux


Here at Intel in France, we recently announced a collaboration with the European-based Teratec consortium to help unlock new insights into sustainable cities, precision agriculture and personalized medicine. These three themes are closely interlinked because each of them requires significant high performance computing power and big data analysis.

 

Providing Technology and Knowledge

The Teratec campus, located south of Paris, comprises more than 80 organisations from the world of commerce and academia. It’s a fantastic opportunity for us at Intel to provide our expertise not only in the form of servers, networking solutions and big data analytics software, but also by utilising the skills and knowledge of our data scientists, who will work closely with other scientists on the vast science and technology park.

 

The big data lab will be our principal lab for Europe and will initially focus on proof-of-concept work, with our first project in the area of precision agriculture. As the techniques mature, we will bring those learnings into the personalized medicine arena, where one of our big focuses is the analysis of merged clinical and genomic data that are currently stored in silos, as we seek to advance the processing of unstructured data.

 

We will also focus on the analysis of merged clinical data and open data, such as weather, traffic and other publicly available datasets, to help healthcare organizations enhance resource allocation and to help health insurers and payers build sustainable healthcare systems.

 

Lab Makes Global Impact

You may be asking why Intel is opening a big data lab in France. Well, the work we will be undertaking at Teratec will benefit colleagues and partners not only in France and Europe, but globally too. The challenges we collectively face around an ageing population and the movement of people towards big cities present unique problems, with healthcare very much towards the top of that list. And France presents a great environment for innovation, especially in our three focus areas, as the government here is in the process of promulgating a set of laws that will really help build a data society.

 

I highly recommend taking the time to read about some of the healthcare concepts drawn up by students on the Intel-sponsored Innovation Design and Engineering Master programme, run jointly by Imperial College and the Royal College of Art (RCA), in our ‘Future Health, Future Cities’ series of blogs. For sustainable cities, the work done at Teratec will allow us to predict trends and help mitigate the risks associated with the expectation that more than two-thirds of the world’s population will be living in big cities by 2050.

 

So far, research into solutions has been constrained on both the technical and the knowledge fronts, but we look forward to overcoming these challenges with our partners at Teratec in the coming years. We know there are significant breakthroughs to be made as we push towards providing personalized medicine at the bedside. Only then can we truly say we are forging ahead to build a future for healthcare that matches the future demands of our cities.

 


We’d love to keep in touch with you about the latest insights and trends in Health IT, so please drop your details here to receive our quarterly newsletter.

Read more >

KNL and OmniPath Demonstrations; Proof of Gravitational Lensing Seems Cool to Me, What do I know?

Today, I saw a demonstration in the Intel booth at ISC15, in Frankfurt, of stunning work from the Centre for Theoretical Cosmology (CTC) running on amazing technologies from my company, Intel.  But, it’s the matter-of-fact treatment of GR proof that made me pause to … Read more >

The post KNL and OmniPath Demonstrations; Proof of Gravitational Lensing Seems Cool to Me, What do I know? appeared first on Intel Software and Services.

Read more >

New HP and Intel Alliance to Optimize HPC Workload Performance for Targeted Industries

HP and Intel are again joining forces to develop and deliver industry-specific solutions with targeted workload optimization and deep domain expertise to meet the unique needs of High Performance Computing (HPC) customers. These solutions will leverage Intel’s HPC scalable system framework and HP’s solution framework for HPC to take HPC mainstream.

 

HP systems innovation augments Intel’s chip capabilities with end-to-end systems integration, density optimization and energy efficiency built into each HP Apollo platform. HP’s solution framework for HPC optimizes workload performance for targeted vertical industries. HP offers clients Solutions Reference Architectures that deliver the ability to process, analyze and manage data while addressing complex requirements across a variety of industries, including Oil and Gas, Financial Services and Life Sciences. With HP HPC solutions, customers can address their need for HPC innovation with an infrastructure that delivers the right compute for the right workload at the right economics, every time.

 

In addition to combining Intel’s HPC scalable system framework with HP’s solution framework for HPC to develop HPC-optimized solutions, the HPC Alliance goes a step further by introducing a new Center of Excellence (CoE) specifically designed to spur customer innovation. The CoE combines deep vertical industry expertise and technological understanding with the appropriate tools, services and support, making it simple for customers to drive innovation with HPC. The service is open to all HPC customers, from academia to industry.

 

Today, in Grenoble, France, customers have access to HP and Intel engineers at the HP and Intel Solutions Center. Clients can conduct a proof of concept using the latest HP and Intel technologies. Furthermore, HP and Intel engineers stand ready to help customers modernize their codes to take advantage of new technologies, resulting in faster performance, improved efficiency, and ultimately better business outcomes.

 

 

HP and Intel will make the HPC Alliance announcement at ISC’15 in Frankfurt, Germany, July 12-16, 2015. To learn more, visit www.hp.com and search ‘high performance computing’.

Read more >

Intel Rolls Out Enhanced Lustre* File System

For High Performance Computing (HPC) users who leverage open-source Lustre* software, a good file system for big data is now getting even better. That’s a key takeaway from announcements Intel is making this week at ISC15 in Frankfurt, Germany.

 

Building on its substantial contributions to the Lustre community, Intel is rolling out new features that will make the file system more scalable, easier to use, and more accessible to enterprise customers. These features, incorporated in Intel® Enterprise Edition for Lustre* 2.3, include support for Multiple Metadata Targets in the Intel® Manager for Lustre* GUI.

 

The Multiple Metadata Target feature allows Lustre metadata to be distributed across servers. Intel Enterprise Edition for Lustre 2.3 supports remote directories, which allow each metadata target to serve a discrete sub-directory within the file system namespace. This lets the size of the Lustre namespace and metadata throughput scale with demand, and allows dedicated metadata servers to be assigned to projects, departments, or specific workloads.
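As a purely illustrative sketch of how an administrator might use the feature, the snippet below places project directories on specific metadata targets with the standard lfs mkdir -i command; the paths, project names, and MDT indexes are assumptions for the example, not part of the announcement.

```python
# Illustrative only: placing project sub-directories on specific Lustre
# metadata targets (MDTs) with `lfs mkdir -i`, so each MDT serves a
# discrete part of the namespace. Paths and MDT indexes are hypothetical.
import subprocess

PROJECT_DIRS = {
    "/mnt/lustre/genomics": 1,   # metadata for this tree lives on MDT0001
    "/mnt/lustre/imaging": 2,    # metadata for this tree lives on MDT0002
}

for path, mdt_index in PROJECT_DIRS.items():
    # `lfs mkdir -i <index>` asks Lustre to create the directory with its
    # metadata on the given MDT instead of the default MDT0000.
    subprocess.run(["lfs", "mkdir", "-i", str(mdt_index), path], check=True)
```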

 

This latest Enterprise Edition for Lustre release supports clients running Red Hat Enterprise Linux (RHEL) 7.1 as well as client nodes running SUSE Linux Enterprise 12.

 

The announcements don’t stop there. Looking ahead a bit, Intel is preparing to roll out new security, disaster recovery, and enhanced support features in Intel® Cloud Edition for Lustre 1.2, which will arrive later this year. Here’s a quick look at these coming enhancements:

 

  • Enhanced security—The new version of Cloud Edition adds network encryption using IPSec to provide enhanced security. This feature can be automatically configured to ensure that the communication of important data is always secure within the file system, and when combined with EBS encryption (released in version 1.1.1 of Cloud Edition) it provides a complete and robust end-to-end security solution for cloud-based I/O.
  • Disaster recovery—Existing support for EBS snapshots is being expanded to support the recovery of a complete file system. This feature enhances file system durability and increases the likelihood of recovering important data in the case of failure or data corruption.
  • Supportability enhancements—Cloud Edition supportability has been enhanced with the addition of client mounting tools, updates to instance and target naming, and added network testing tools. These changes provide a more robust framework for administrators to deploy, manage, and troubleshoot issues when running Cloud Edition.

 

Making adoption and use of Lustre easier for organizations is a key driver behind the Intel Manager for Lustre software. This management interface includes easy-to-use tools that provide a unified view of Lustre storage systems and simplify the installation, configuration, monitoring, and overall management of the software. Even better, the Intel Distribution includes an integrated adapter for Apache Hadoop*, which enables users to operate both Lustre and Apache Hadoop within a shared HPC infrastructure.

 

Enhancements to the Intel Distributions for Lustre software products are a reflection of Intel’s commitment to making HPC and big data solutions more accessible to both traditional HPC users and mainstream enterprises. This commitment to the HPC and big data space is also evident in Intel’s HPC scalable system framework. The framework, which leverages a collection of leading-edge technologies, enables balanced, power-efficient systems that can support both compute- and data-intensive workloads running on the latest Intel® Xeon processors and Intel® Xeon Phi™ coprocessors.

 

For a closer look at these topics, visit Intel Solutions for Lustre Software and Intel’s HPC Scalable System Framework.

 

 

 

Intel, the Intel logo, Xeon and Xeon Phi are trademarks of Intel Corporation in the United States and other countries.
* Other names and brands may be claimed as the property of others.

Read more >

How eHarmony Makes Matches in the Cloud with Big Data Analytics

For millions of years, humans searched for the right person to love based on emotion, intuition, and a good bit of pure luck. Today, it’s much easier to find your soul mate using the power of big data analytics.

 

Analyzing Successful Relationships

 

This scientific approach to matchmaking is clearly successful. On average, 438 people in the United States get married every day because of eHarmony. That’s the equivalent of nearly four percent of new marriages.  

Navigating an Ocean of Data

To keep up with its fast-growing demand, eHarmony needed to boost its analytics capabilities and upgrade its cloud environment to support a new software framework. It also needed a solution that was scalable to keep up with tomorrow’s needs.

Robust Private Cloud Environment

eHarmony built a new private cloud environment that lets it process affinity matching and conduct machine learning research to help refine the matching process. The cloud is built on Cloudera CDH software—an Apache Hadoop software distribution that enables scalable storage and distributed computing while providing a user interface and a range of enterprise capabilities. 

The infrastructure also includes servers equipped with the Intel® Xeon® processor E5 v2 and E5 v3 families. eHarmony chose the Intel Xeon processors because they had the performance it needed plus large-scale memory capacity to support the memory-intensive cloud environment. eHarmony’s software developers also use Intel® Threading Building Blocks (Intel® TBB) to help optimize new code.  

More and More Accurate Results

This powerful new cloud environment can help eHarmony accommodate more complex analyses and ultimately produce more personalized matches that improve the likelihood of relationship success. It can also analyze more data faster than before and deliver results within overnight processing windows.

Going forward, eHarmony is ready to handle the fast-growing volume and variety of user information it takes to match millions of users every day. 

You can take a look at the eHarmony solution here or read more about it here. To explore more technology success stories, visit www.intel.com/itcasestudies and follow us on Twitter.

 

Read more >

Security on the Frontlines of Healthcare

 

I recently had the privilege of interviewing Daniel Dura, CTO of Graphium Health, on the subject of security on the frontlines of healthcare, and a few key themes emerged that I want to highlight and elaborate on below.

 

Regulatory compliance is necessary but not sufficient for effective security and breach risk mitigation. To effectively secure healthcare organizations against breaches and other security risks, one needs to start by understanding the sensitive healthcare data at risk. Where is it at rest (inventory), how is it moving over the network (inventory), and how sensitive is it (classification)? These seem like simple questions, but in practice they are difficult to answer, especially with BYOD, apps, social media, consumer health, wearables, the Internet of Things, and more driving an increased variety, volume, and velocity (near real-time) of sensitive healthcare data into healthcare organizations.
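As a minimal sketch of what that inventory-and-classification question can look like in practice (the patterns, the file share path, and the sensitivity labels below are hypothetical examples, not anything Graphium Health uses), a first pass might simply walk a file share and flag files that contain identifier-like patterns:

```python
# Hypothetical first-pass scan: walk a file share and flag files containing
# patterns that commonly indicate sensitive healthcare data. Real programs
# would use dedicated data-discovery/DLP tooling; this only illustrates
# inventory (where data sits) plus classification (how sensitive it looks).
import os
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def classify(root):
    """Yield (path, matched_labels) for files under root that look sensitive."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as fh:
                    text = fh.read(1_000_000)   # sample roughly the first 1 MB
            except OSError:
                continue
            hits = [label for label, rx in PATTERNS.items() if rx.search(text)]
            if hits:
                yield path, hits

if __name__ == "__main__":
    for path, hits in classify("/srv/shared"):   # hypothetical file share
        print(f"{path}: possible {', '.join(hits)}")
```

The point is only that “where does sensitive data sit, and how sensitive is it?” can be made concrete and measurable; production discovery requires far more robust tooling.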

 

There are different types of breaches. Cybercrime-driven breaches have dominated the news recently, but many other breaches are caused by the loss or theft of mobile devices or media, by insider risks such as accidents or workarounds, by business associates or sub-contracted data processors, or by malicious insiders snooping records or committing fraud. Effective security requires avoiding distraction by the latest headlines, understanding the various types of breaches holistically, knowing which ones pose the greatest risks for your organization, and directing the limited budget and resources available for security where they will do the most good in mitigating the most likely and impactful risks.

 

Usability is key. Healthcare workers have many more information technology tools now than they did 10 years ago, and if usability is lacking in healthcare solutions or in security, it can directly drive workarounds, non-compliance with policy, and additional risks that can lead to breaches. The challenge is to provide security together with improved usability. Examples include software encryption with hardware acceleration, SSDs with built-in encryption, and multi-factor authentication that improves the usability of both the solution and its security.

 

Security is everyone’s job. Healthcare workers are increasingly targeted in spear-phishing attacks. Effectively mitigating this type of risk requires a cultural shift so that security is not just the security team’s responsibility but everyone’s. Security awareness training needs to be on the job, gamified, continuous, and meaningful.

 

I’m curious: what types of security concerns and risks are top of mind in your organization, what challenges are you seeing in addressing them, and how do you think they are best mitigated?

Read more >