Recent Blog Posts

Putting the Power of Mobile Technology in Patient Care

Healthcare providers today rely on an array of technologies to help manage key workflows, from maintaining electronic medical record (EMR) systems and performing clinical procedures to coordinating consultations and prescribing follow-up care. Yet in many cases, poor integration among technologies and outdated devices can waste time and hamper efforts to efficiently deliver high-quality care.


Refreshing older technologies with new solutions can improve caregiver collaboration, increase efficiency, and address security requirements. For example, caregivers need ways to securely share patient information among colleagues. They also need smooth, fast handoffs from one device or system to another—remembering multiple login passwords and transferring information among systems can be inefficient and time-consuming.


New Whitepaper: Workspace Transformation for Healthcare Providers


In today’s healthcare environment, enhancing communication and collaboration among clinical workers across the continuum of care is critical for producing optimal patient and financial outcomes.


Prescribing the Right Solution

Intel® mobile solutions are designed to help healthcare providers address workflow challenges simply and effectively, so they can refocus their time and resources on their patients. The latest generations of Intel® Core™ vPro™ and Core M vPro processors include technologies designed to easily integrate within the mobile healthcare environment, streamline workflows, and bolster security.


For instance, newer generations of devices can be equipped with WiGig technology, which can be combined with a number of commercially available wireless docking solutions. Those wireless docking stations enable healthcare providers to transition seamlessly between different clinical workstations without the hassle of connecting multiple wires. Clinical teams looking for improved ways to facilitate team-based care can also leverage the Intel® Unite™ platform, which provides a robust set of features to enable collaboration along with multiple layers of security through a simple and secure interface.


Further security measures in Intel® Identity Protection Technology (Intel® IPT) are built into the Intel Core vPro processor architecture. They provide role-based security that prevents unauthorized users from accessing healthcare systems, even if they have a stolen username and passcode.


Other solutions that depend on Intel® technologies include apps for capturing EMR data and using biometrics to access applications. All of these technologies are designed to improve clinical workflows and the patient experience.


The right mobile solutions and platform capabilities can simplify a wide range of communication and collaboration tasks. As a result, caregivers can stay focused on patients.

To learn more about how Intel® solutions are helping healthcare organizations achieve these goals, read Workspace Transformation for Healthcare Providers.


What questions do you have?

Read more >

Intel Highlights from Strata + Hadoop World 2016

This year’s Strata + Hadoop World conference, held March 28-31 in San Jose, marked an interesting event – the 10th anniversary of Apache Hadoop*. Though Hadoop’s birthday was celebrated with circus-like festivities on the show’s second evening, I did notice that this year’s conference centered less on Hadoop than in previous years. Instead, the conference focused more on the cluster-computing analytics framework Spark, Internet of Things (IoT) technologies, and the ongoing challenge of deriving analytical insights from big data.


Intel had a substantial presence at the show, with a number of keynote speakers, sponsored sessions, an announcement about cloud infrastructure provider Rackspace and the open source Trusted Analytics Platform (TAP) driven by Intel, and other events to showcase its latest advances in big data and Internet of Things (IoT) technologies.


Thursday morning’s keynotes kicked off with a bang when writer and entrepreneur Alistair Croll (@acroll) welcomed the audience with a story of his lost jeans. Apparently, Croll’s luggage was misplaced on the way to San Jose, which sent him on a fruitless shopping trip to the mall to buy a replacement pair.


This proved to be the perfect set-up for Bob Rogers (@scientistBob), chief data scientist for big data solutions at Intel, and his presentation “Advanced analytics and the mystery of the missing jeans.” Rogers discussed how Intel and Levi’s have been working together to address a major problem in retail: inventory accuracy in brick-and-mortar stores is only 65 percent. This means that 35 percent of the time, merchandise that is supposed to be in stock isn’t. Rogers provided an overview of an analytics retail solution built for Levi’s that links IoT data from RFID inventory tags, video cameras, and sensors and transmits it via Intel® IoT Gateways to cloud-based advanced analytics engines on TAP. Watch this video for a glimpse into how Intel’s IoT and advanced analytics technologies have helped Levi’s make smarter business decisions and better serve its customers.


Bridget Karlin, the managing director of Intel’s Internet of Things Group, joined Bob Rogers for another session on IoT and TAP called “Master the Internet of Things with Integrated Analytics.” Intel offers a complete IoT platform that begins with reference architectures and extends to products and technologies from Intel and its partners to create an open, secure and scalable approach to IoT solutions. Karlin and Rogers discussed how applying analytics platforms such as TAP to rich data streams from IoT networks has the capacity to deliver enormous value to many industries, including healthcare, energy and utilities, and retail.



Figure 1. Intel’s IoT Platform extends from edge to advanced analytics in the cloud.


These IoT systems are particularly powerful when they offer real-time analytics, not just from the cloud, but also from the network edge. Intel has worked closely with SAP to deliver an end-to-end IoT solution that can deliver actionable business insights on a near real-time basis. Watch this video featuring Karlin and Irfan Khan, CTO for Global Customer Operations at SAP, to learn more about the joint Intel-SAP IoT solution, and read the solution brief Business Intelligence at the Edge to find out how real-time data from the network edge helps protect remote workers and improves customer engagement and sales in retail applications.


During the same week as Strata + Hadoop World, Intel also announced the general availability of the new Intel® Xeon® processor E5-2600 v4 product family, which provides a strategic foundation for building modern, software-defined cloud infrastructures. The new processor family delivers improved performance for cloud workloads, with more than 20 percent more cores and cache than the prior generation, plus enhanced security and faster memory support. According to recent benchmarks, Intel Xeon processor E5-2600 v4 chips delivered up to 1.22x higher performance on procedural workloads (such as MapReduce workloads on Apache Hadoop clusters). The new chips also delivered up to 1.27x higher performance for BigBench queries, a benchmark that measures efficient processing of big data analytics.


Included in the release was news that Intel is expanding its popular Intel® Cloud Builders program—which brings together reference architectures, solution blueprints, and leading solution providers to help facilitate the delivery of modern computing infrastructures—to include software-defined infrastructure use cases. The Cloud Builders program is now joined by the Intel® Network Builders and Intel® Storage Builders programs, which aim to accelerate adoption of software-defined network and storage innovations.


My last stop at the conference was at the SAP kiosk, where I filmed a Periscope video of our friend Karen Sun introducing SAP HANA Vora*. Vora is an in-memory computing engine for Hadoop that runs on the Apache Spark* execution framework and helps sift massive volumes of unstructured data in hierarchies to simplify big data management in SAP HANA and Hadoop environments. Intel contributed engineering and enablement efforts for Spark, which SAP HANA Vora is based on, to maximize performance and security on Intel® architectures.


From lost jeans to Vora, Strata + Hadoop World was a busy, eventful show with engaging, provocative keynotes and events. The highlights are available for viewing on the Strata web site.


Follow me at @TimIntel and #TechTim to keep up with the latest with Intel and SAP.

Read more >

Cisco UCS Claims Nine World Record Benchmarks with the Intel Xeon processor E5-2600 v4 Family

By Girish Kulkarni, Senior Marketing Manager, Cisco


On March 31, 2016, Cisco announced support for the Intel Xeon processor E5-2600 v4 family on the Cisco Unified Computing System (Cisco UCS). On the same day, Cisco captured nine world records on industry benchmarks with Cisco UCS, highlighting how Cisco UCS can accelerate performance across the data center.


There is no better way to compare performance than industry-standard benchmarks, and with nine new world-record benchmark results Cisco has demonstrated the Cisco Unified Computing System’s outstanding performance and IT productivity across key data center workloads. Check out the performance brief for additional information on the nine new Cisco UCS world-record benchmarks; the detailed benchmark disclosure reports are available here. The nine world records announced today, summarized below, validate Cisco UCS performance leadership across a wide range of workloads:


  1. SPECfp®_rate_base2006 – Best 2-socket x86-architecture result
  2. SPECint®_rate_base2006 – Best 2-socket x86-architecture result
  3. SPECint®_base2006 – Best 2-socket x86-architecture result
  4. SPECjbb®2015-MultiJVM – Best 2-socket x86-architecture result for critical jOPS
  5. SPECfp®_base2006 – Best 2-socket x86-architecture result
  6. TPC Express Benchmark HS (TPCx-HS) – Best performance and price/performance at the 1-TB scale factor
  7. TPC Express Benchmark HS (TPCx-HS) – Best performance and price/performance at the 10-TB scale factor
  8. SAP Sales and Distribution (SD) – Best 2-processor, 44-core, 2-tier result
  9. SPEC OMP®2012 – Best 2-socket x86-architecture result


Cisco’s world-record-setting benchmark results also show dramatic performance improvements over the prior-generation Intel® Xeon® processor E5 family, and demonstrate the degree to which Cisco UCS servers with large memory configurations deliver the power of the new Intel Xeon processor E5 v4 family. Compared to Cisco’s previous-generation servers powered by the Intel Xeon processor E5 v3 family, Cisco’s new servers deliver dramatic improvements in raw CPU power as well as in business and parallelized application performance.


Cisco UCS with the Intel® Xeon® processor E5-2600 v4 family delivered up to 127 percent better performance than the prior-generation Intel® Xeon® processor E5 family, as shown in the graph below:



Performance Improvement Compared to Previous Generation Cisco UCS Servers (Percent)

It is interesting to note that although all vendors have access to the same Intel processors, Cisco UCS unleashes their potential to deliver high performance to applications through the power of unification and performance optimization. Cisco UCS integrates industry-standard x86-architecture blade and rack servers with networking and storage access into a unified system. Automated server and network configuration lets you quickly and easily deploy new applications, repurpose existing servers, and scale applications with configurations that are compliant with your IT standards.


Cisco’s continued performance leadership with Cisco UCS comes in part due to the power of the Intel Xeon processors that power Cisco UCS servers. Intel Xeon processor E5-2600 v4 family CPUs provide the best balance of performance, power efficiency, and features to meet the diverse needs of your data center applications and workloads. Built on 14-nanometer (nm) processor technology, these innovative processors offer up to 22 cores, large high-speed memory configurations, and accelerated I/O throughput, delivering significant performance improvements compared to previous-generation processors. The processors also offer increased memory bandwidth monitoring and cache allocation capabilities, optimum data center orchestration and virtualization features, and hardware-assisted security advancements, which work in conjunction with Cisco UCS servers to further enhance the value of IT infrastructure in your enterprise.


The architectural advantages of a single cohesive system optimized for virtualized environments coupled with the industry leading benchmark performance results makes the Cisco Unified Computing System an “infrastructure platform of choice” to provide industry-leading performance in your data center. For additional information on Cisco UCS and Cisco UCS solutions please visit Cisco Unified Computing & Servers web page.



  1. The floating-point throughput performance improvement of 19.3 percent compared the SPECfp_rate_base2006 score of the Cisco UCS C220 M4 Rack Server with the previous-generation Cisco UCS C220 M4 Rack Server, a result that was available on September 8, 2014.

  2. The integer throughput performance improvement of 27.5 percent compared the SPECint_rate_base score of the Cisco UCS C220 M4 Rack Server with the previous-generation Cisco UCS C220 M4 Rack Server, a result that was available on March 5, 2015.

  3. The integer processing performance improvement of 14 percent compared the SPECint_base2006 score of the Cisco UCS C220 M4 Rack Server with the previous-generation Cisco UCS C220 M4 Rack Server, a result that was available on March 5, 2015.

  4. The Java application performance improvement of 127.3 percent compared the SPECjbb2015- MultiJVM critical-jOPS score of the Cisco UCS C220 M4 Rack Server with the previous-generation Cisco UCS C220 M4 Rack Server, a result that was available on September 30, 2015.

  5. The floating-point performance improvement of 16.8 percent compared the SPECfp_base2006 score of the Cisco UCS C220 M4 Rack Server with the previous-generation Cisco UCS C240 M4 Rack Server, a result that was available in December 2014.

  6. The big data system performance improvement of 11.1 percent and the price/performance improvement of 29.6 percent at a scale factor of 1 TB compared the TPCx-HS results of Cisco Integrated Infrastructure for Big Data and Analytics using Cisco UCS C240 M4 Rack Servers with Huawei FusionInsight for Big Data, a result that was available on September 15, 2015.

  7. The big data system performance improvement of 3.9 percent and price/performance improvement of 13.3 percent at a scale factor of 10 TB compared the TPCx-HS results of the Cisco UCS Integrated Infrastructure for Big Data and Analytics using the Cisco UCS C240 M4 Rack Server with the previous generation of Cisco’s solution, a result that was available March 22, 2016.

  8. The SAP SD benchmark certification number was not available at press time; when certified, it can be found on the following web page:

  9. The parallel-processing performance improvement of 28.2 percent compared the SPECompG_peak2012 score of the Cisco UCS C220 M4 Rack Server with the previous-generation Cisco UCS C220 M4 Rack Server, a result that was available on September 8, 2014.

  10. The Transaction Processing Performance Council (TPC) is a nonprofit corporation founded to define transaction processing and database benchmarks and to disseminate objective and verifiable performance data to the industry. TPC membership includes major hardware and software companies. The performance results described in this document are derived from detailed benchmark results available as of March 31, 2016, at http:// asp.

  11. SPEC, SPECfp, SPECint, SPECjbb, and SPEComp are registered trademarks of Standard Performance Evaluation Corporation. The benchmark results used to establish world-record status are based on those available at as of September 8, 2014.

Read more >

Composing a Path to the Infrastructure of the Future with Tencent, Dell, and Intel

By Matt Langman, Director of Business Strategy for Memory and Intel Rack Scale Architecture


As hyperscale principles continue to influence datacenter designs, orchestrating and managing hardware becomes a challenge because scale and complexity break existing management approaches. It’s becoming impractical to use the Intelligent Platform Management Interface (IPMI) and other outdated standards to monitor, manage, provision and orchestrate thousands, or tens of thousands of devices.


Many organizations are trying to cope with this challenge. Nearly any data center vendor will tell you that they’re trying to step beyond the current limits, but one of the most distinctive and visionary approaches is Intel Rack Scale Architecture. Simply put, Intel Rack Scale Architecture promises a future of fully disaggregated hardware – whereby a customer can independently upgrade their compute, storage or networking capabilities – coupled with an open-source software layer that will let companies of any size stand up, manage and orchestrate infrastructure just like the big guys. With a common API, Intel Rack Scale Architecture lets customers manage all of their workloads without being locked into a specific orchestration layer.


At IDF Shenzhen this week, our Intel Rack Scale Architecture offering will take center stage with Dell and Tencent. Our joint technology demo will show how customers can manage a hyperscale platform with custom orchestration – without hardware lock-in – to meet their workload needs. With Intel’s Rack Scale Infrastructure and Tencent’s custom orchestration stack running on the forthcoming Dell DSS 9000, we will demonstrate support for IT asset inventory, configuration, management and provisioning for Tencent workloads across their distributed data center infrastructure.


“Tencent Cloud Computing, the cloud service business under Tencent, a leading provider of Internet value added services in China, is jointly working with Intel and Dell on an innovative solution based on the combination of Intel Rack Scale Architecture, Dell rack infrastructure and Tencent’s orchestration platform. This offers the ability to inventory and manage the underlying infrastructure with open, standards-based rack-level management, while also offering hardware disaggregation and infrastructure flexibility.”

– Tencent Cloud Computing Group


“When you’re one of the largest Internet companies in the world, it’s expected that you can accelerate service delivery, respond to changing workload needs and scale without interruption to meet customer needs. We’re proud to have collaborated with Intel and Tencent to showcase how this can be done. With the open and flexible Dell DSS 9000 running Intel Rack Scale Architecture and Tencent’s orchestration stack, Tencent Cloud Computing can provide better ROI at the infrastructure level while serving customer needs now and for years to come.”

– Stephen Rousset, Distinguished Engineer & Director of Architecture, Dell Extreme Scale Infrastructure


We hope you will drop by the Tencent booth at IDF Shenzhen to see a live demonstration and learn more. We certainly believe in Intel Rack Scale Architecture and we’re proud to have worked with Tencent and Dell to provide a glimpse into the future. Dell is a leading hardware provider that is building open, agile and efficient infrastructure via the DSS 9000 that matches our vision, and Tencent is on the forefront of technology trends and showing others how they are sidestepping the complexity of legacy approaches to better serve customer needs.

Read more >

Automated Manufacturing 101

As a Manufacturing IT Principal Engineer, I have helped Intel’s factory management evolve over the last two decades from manual processes toward the goal of being 100 percent automated. In our recent white paper, “Using Big Data in Manufacturing at Intel’s Smart Factories,” we describe the crucial aspects of Intel’s continuing automated manufacturing journey and the benefits automation—accelerated by the Internet of Things—has brought to Intel. These benefits include reduced cost, accelerated velocity, world-class quality, and improved safety.


The automation of manufacturing processes has now achieved global momentum, through stratagems such as Germany’s Industry 4.0 concept and China’s “Made in China 2025” initiative. Companies in Europe, China, and elsewhere around the world are endeavoring to automate their manufacturing processes.


But as I travel the world discussing Intel’s key learnings about automated manufacturing with customers and partners, I find that many companies are starting from square one. While they are interested in the advanced techniques Intel is using, such as automated material movement, IoT integration, real-time capabilities, virtualization, and interoperability with our suppliers, the manufacturing processes at these companies are still very much based on human intervention, pen-and-paper tracking, and manually operated production lines. Advanced automation is well and good – but how do they get started?


To help answer that question, in this blog I’ll cover the basics of factory automation – those critical components that must be in place as a foundation before more advanced practices can be achieved.  I hope this information will help you as you evaluate your own journey.


So, what are smart factories made of?


Manufacturing Execution System (MES)


The first ingredient in a smart factory is the MES. This is the heart of factory automation through which all factory transactions flow; it is the gatekeeper for equipment and material states. The MES, which can be developed internally but is more often purchased from a third-party supplier, is a transactional system for managing and tracking the state of equipment and work-in-process on a factory floor. As parts move physically on the factory floor, they move logically through the MES. An MES keeps track of information in real time, receiving up-to-the-minute data from various sources, which can include employees, machine monitors, equipment host controllers, and even robots. An MES can enable leaner operations and increase product quality through standardized workflows.
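To make the idea concrete, here is a toy sketch in Python (all names hypothetical; this is not Intel’s MES or any commercial product) of transactional work-in-process tracking: as a lot moves physically through operations, it moves logically through the tracker.

```python
from dataclasses import dataclass, field

@dataclass
class Lot:
    """Work-in-process: a carrier of material moving through the factory."""
    lot_id: str
    operation: str          # current process step
    state: str = "waiting"  # waiting | in_process | done
    history: list = field(default_factory=list)

class MES:
    """Toy transactional tracker: parts move logically here as they move physically."""
    def __init__(self, route):
        self.route = route  # ordered list of operations
        self.lots = {}

    def start_lot(self, lot_id):
        self.lots[lot_id] = Lot(lot_id, self.route[0])

    def track_in(self, lot_id):
        lot = self.lots[lot_id]
        lot.state = "in_process"
        lot.history.append(("track_in", lot.operation))

    def track_out(self, lot_id):
        lot = self.lots[lot_id]
        lot.history.append(("track_out", lot.operation))
        nxt = self.route.index(lot.operation) + 1
        if nxt < len(self.route):
            lot.operation, lot.state = self.route[nxt], "waiting"
        else:
            lot.state = "done"

mes = MES(route=["etch", "clean", "inspect"])
mes.start_lot("LOT-001")
for _ in range(3):
    mes.track_in("LOT-001")
    mes.track_out("LOT-001")
print(mes.lots["LOT-001"].state)  # done
```

A real MES adds, among much else, concurrency control and durable storage; the point here is only the core pattern of state transitions recorded per transaction.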


When evaluating an MES, a company should consider several factors.


For the MES itself, high performance is paramount. The MES architecture should be re-entrant and multithreaded to support a high level of parallelism. (If these terms are unfamiliar, keep reading, and I’ll explain what they mean.) This is because the MES must track and execute transactions for different parts of the factory at the same time. In today’s brutally fast-paced business environment, waiting for information to come through a queue can significantly erode a company’s competitive edge. Other important attributes of an MES include a fully redundant architecture, the availability of remote interfaces, easy access to APIs for customization, and user interfaces that support configuration and manual intervention in processes when necessary.


The MES database is usually determined by the MES vendor, so it is an important part of the MES evaluation process. Again, don’t skimp on performance – automation can put a heavy load on the database, and the database must be able to scale as automation increases. Similar to the MES, look for a database that is highly parallel. For example, some databases lock the entire table when an update is occurring, while others are more flexible and lock only the row that is being updated.
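The locking difference matters because row-level locks let transactions touching different rows proceed in parallel. A toy illustration, with plain Python threads standing in for database transactions (not any particular database’s implementation):

```python
import threading

class RowLockedTable:
    """Toy illustration of row-level locking: concurrent updates to
    different rows don't block each other, unlike one table-wide lock."""
    def __init__(self, rows):
        self.rows = dict(rows)
        self.locks = {key: threading.Lock() for key in self.rows}

    def update(self, key, fn):
        with self.locks[key]:  # lock only the row being updated
            self.rows[key] = fn(self.rows[key])

table = RowLockedTable({"toolA": 0, "toolB": 0})

def bump(key, n):
    # Each worker updates its own row; with a table-wide lock these
    # workers would serialize, here they run concurrently.
    for _ in range(n):
        table.update(key, lambda v: v + 1)

threads = [threading.Thread(target=bump, args=(k, 1000)) for k in table.rows]
for t in threads: t.start()
for t in threads: t.join()
print(table.rows)  # {'toolA': 1000, 'toolB': 1000}
```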


Another important MES and database consideration is support availability. It’s best to find out what others in the industry are using, and choose an MES and database that are well supported by the vendor(s) for the operating system in use. Going with an obscure choice, or a combination of MES, database, and operating system that is not popular in the industry could leave you on an island with the esteemed privilege of uncovering many of the bugs and issues that exist in the system without having a fast path to fixing them.




Middleware


Think of the middleware as the mail carrier for factory automation, one who also speaks every language on the planet. Middleware serves as an isolation and translation layer between the entire install base of automated capabilities. It enables you to reconfigure or swap out factory automation systems without having to make changes to any of the other automation components. It also supports transaction aggregation, which is useful in many equipment transaction situations, such as moving a carrier of production material into or out of a process operation.


When a factory transaction executes, the middleware abstracts this transaction and translates the single “action” into many separate actions. For example, it could start actions in the MES and scheduling system, set a flag in the quality control system, and publish data to a repository.
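A minimal publish/subscribe sketch of that fan-out (hypothetical action and system names; real middleware adds routing, durability, and error handling):

```python
class Middleware:
    """Toy translation layer: one factory 'action' fans out to every
    subscribed automation system (MES, scheduler, quality, repository)."""
    def __init__(self):
        self.handlers = {}

    def subscribe(self, action, handler):
        self.handlers.setdefault(action, []).append(handler)

    def publish(self, action, payload):
        # A single published action triggers many separate system actions.
        for handler in self.handlers.get(action, []):
            handler(payload)

log = []
mw = Middleware()
mw.subscribe("carrier_move_out", lambda p: log.append(f"MES: track out {p['lot']}"))
mw.subscribe("carrier_move_out", lambda p: log.append(f"scheduler: free {p['tool']}"))
mw.subscribe("carrier_move_out", lambda p: log.append(f"quality: flag {p['lot']} for SPC"))
mw.subscribe("carrier_move_out", lambda p: log.append(f"repository: store {p}"))

# One action on the floor becomes four actions across the automation stack.
mw.publish("carrier_move_out", {"lot": "LOT-001", "tool": "etch-04"})
print(len(log))  # 4
```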


Middleware usually handles every transaction and routes every message in the factory. Therefore, evaluation of the available middleware solutions with high performance and parallelism should be similar to the MES/database evaluation process.


PCs at Each Equipment Station


PCs are the ears, eyes, and mouth of factory automation. They communicate with the equipment, control that equipment, and send that information back to the factory automation systems like the MES. At Intel, we use PCs equipped with high-end processors, because performance is critical to keeping Intel’s factories running at peak capacity. High-performance PCs are necessary because the PC may communicate with a production tool many times per second, with hundreds of data variables in play – and data comes back at that same rate – providing real-time data that powers quality decisions. (Some factories that focus on assembly of products may not need this level of monitoring, but the semiconductor industry certainly does.)


In addition, the PC must communicate simultaneously with the MES and the equipment. Therefore, it must be capable of multiplexing (the simultaneous transmission of several messages along a single channel of communication). The PC also serves as the operator interface for controlling equipment (start, stop, change parameters).
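A toy sketch of that multiplexing, with a shared queue standing in for the communication channel (real station software speaks equipment protocols such as SECS/GEM rather than this simplification):

```python
import queue
import threading

# Toy station-PC loop: one thread polls the tool, another forwards
# readings to the MES, multiplexed over a shared queue.
readings = queue.Queue()

def poll_equipment(n):
    for i in range(n):      # stand-in for many reads per second
        readings.put({"temp_c": 20 + i})
    readings.put(None)      # sentinel: polling finished

sent_to_mes = []
def forward_to_mes():
    while True:
        msg = readings.get()
        if msg is None:
            break
        sent_to_mes.append(msg)

t1 = threading.Thread(target=poll_equipment, args=(5,))
t2 = threading.Thread(target=forward_to_mes)
t1.start(); t2.start(); t1.join(); t2.join()
print(len(sent_to_mes))  # 5
```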


Connections between the PC and the equipment are defined by the equipment in use. Many semiconductor factories use Ethernet connections with SEMI standard protocols, but connections could also use Wi-Fi. Other factories might use serial interfaces, Modbus, CAN bus, EtherCAT, or one of many other options.


Additional Factory Automation Components


Several backend components make up the rest of the automation system. These include, but are not limited to, the following:

  • Statistical process control system
  • Yield analysis system
  • Scheduling system

When a company is ready for more advanced automation, it may add automated material handling through robotic delivery (methods include overhead hoist, ground-based transport, and rail-mounted transport). A material control system is required to maintain the state of the automated vehicles and to provide instructions and synchronization.


Putting it All Together


If it seems like there’s a lot to factory automation, you’re right. That’s why creating a reference architecture for computer integrated manufacturing is important before beginning implementation. Because Intel has decades of experience in factory automation, we’re documenting our journey with the goal of helping fellow travelers create their own factory automation architecture.


In the next few months, I’ll be posting additional blogs talking about more advanced aspects of smart factories, the industrial Internet of Things, edge computing, and how Intel is putting technology to work in our manufacturing facilities. In the meantime, I’d love to hear your challenges and success stories – please leave a comment below and join the conversation.

Read more >

The Future of Mobile: 3 Takeaways From the 5G Summit

In mid-March, tech innovators from all over the world gathered for the 5G Summit in Taipei to discuss the future of mobile network technology. The conference was significant for the tech community, as 5G is poised to change not only mobile technology, but how we conduct business and connect with each other. It’s nearly impossible to overstate the impact 5G will have. That doesn’t mean the switch will be easy, though. Quite the contrary. Here are my three main takeaways from the summit, and what they mean for the future of your tech.

5G Summit Taipei image.jpg

1. Taiwan’s role in the future of 5G


Deciding to hold the conference in the capital of Taiwan was no coincidence. Taiwan is renowned in the tech sector for leading design and manufacturing. In fact, just a month before the 5G Summit, Ericsson announced a strategic partnership with Quanta Computer, a Taiwanese leader in cloud computing. Together, they’ll be scaling design and developing data center solutions.


The companies present at the conference were some of the most cutting edge creators in technology today. They listened as the world’s leading telecom service providers, including Vodafone, Verizon, Bell Canada, China Mobile, Orange, and Telecom Italia, discussed their requirements for specific 5G services and use cases. The presentations emphasized that the transition to 5G will bring communication and computing together in a way we’ve never seen before, and that Taiwanese companies are positioned to play a huge part in the rollout and success of 5G.


2. A game changer


The technology involved in 5G will require small cells that connect to billions of embedded devices, and many Taiwanese companies attended the event looking to get a head start in development of 5G hardware and software.


Earlier this year at Mobile World Congress, Intel announced plans to collaborate with several industry leaders in an effort to accelerate the path to 5G. In fact, the connection between the two events is strong: the discussion of how 5G services and requirements will differ by vertical market and use case started at Mobile World Congress and continued at the 5G Summit.


Next Generation Mobile Networks (NGMN) made a point in their presentation to emphasize that it will take a lot of work to identify unique 5G use cases and related KPIs worldwide. The low latency requirements in automotive, for example, could likely be significantly more stringent than in a typical consumer use case.


3. Transforming the network


Throughout the conference, one message was repeated over and over: 5G networks have to transform to allow easier deployment through software-defined networks (SDN) and network function virtualization (NFV) that can run on standard servers. This was especially interesting to me since Intel is actively engaging with service providers and the SDN/NFV ecosystem. We’ll be opening another NFV customer engagement center in Taipei in the second quarter of this year.


There was also a consensus among speakers and attendees that 5G and LTE will coexist. One presentation laid out chip package size and power consumption targets, giving the Taiwanese hardware companies something to consider. For these technologies to coexist, services and corresponding devices must be designed to aggregate bandwidth while maintaining reasonable power consumption. The most interesting message, voiced by several participants, cautioned against pre-standard solutions.


It was exciting to participate in the 5G Summit this year. I can’t wait to see how this technology transforms business and enables bigger leaps in technological innovation. Did you attend the conference? Please, share your takeaways.

  • Who do you think is going to be first to roll out standards-based 5G service?
  • Which verticals and use cases do you think will drive the fastest commercial adoption and where?
  • What leading companies in these verticals will benefit the most from 5G?


Given the healthy competition to be first among operators and countries, it will be interesting to see how this all plays out.


Tim Lauer is Intel’s Director of Sales for Cloud and Communication Service Providers in the APJ (Asia Pacific and Japan) region. Connect with him on Twitter and LinkedIn.

Read more >

Making The Case For Small With Mobile Analytics Design

In mobile analytics design, the “case for small” stems from the need to effectively manage performance and response time for mobile experiences. The concept has nothing to do with smaller screens or device sizes. Instead, it deals with the delivery of the content onto those screens.


One of the common denominators of all mobile user experiences deals with what I call the “patience factor.” Mobile users tend to be less patient about performance and response time than PC users, since they’re on the go with less time to spare.


On the other hand, the unmatched access and convenience of mobile makes people heavy users of the technology. Their expectations are largely shaped by their daily experiences with their mobile devices, which are all about ease of use and instant results.


The challenge for mobile design is that many traditional data and analytics platforms struggle to handle large volumes of data even on wired systems, let alone over wireless networks. Unless you’re taking advantage of the latest technology, such as in-memory computing, the case for small remains undeniable.


Let’s take a look at two key areas where the case for small makes sense with traditional mobile analytics platforms.


Mobile query size

If you’re going to load data from a traditional database, the size of the underlying mobile query will undoubtedly impact the performance.

  • Define your mobile query with the minimum number of data elements needed to satisfy the business requirement. It’s completely acceptable to experiment with large queries during development, but you must clean them up before going live.
  • Optimize your queries at the database level and take advantage of additional features that your analytics or business intelligence application may offer.
  • If it makes sense and provides relief, use several smaller data queries instead of one large query.
  • As an alternative to loading all data at once, consider loading only the data required for the initial analysis and fetching the rest on demand.
  • For certain requirements, think about cached data (storing values that have been computed in advance). Although this duplicates original values that are stored elsewhere, the performance gains may be invaluable—especially for certain audiences, such as senior executives.
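The query tips above can be sketched in code. The example below is a minimal Python sketch against an illustrative SQLite database (the `sales` table, its columns, and the one-hour framing are hypothetical, not from a real reporting backend): it selects only the needed columns, limits the initial load, and caches a value computed in advance instead of re-running a heavy aggregate.

```python
import sqlite3

# Illustrative in-memory database standing in for a reporting backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 120.0), ("West", 80.0), ("East", 40.0)])

# Minimum data elements: name the columns you need, never SELECT *.
# Initial analysis only: LIMIT caps what the first screen loads.
first_page = conn.execute(
    "SELECT region, amount FROM sales LIMIT 2").fetchall()

# Cached data: store a precomputed value. This duplicates data held
# elsewhere, but spares the mobile client a repeated heavy query.
cache = {}

def total_by_region(region):
    if region not in cache:
        cache[region] = conn.execute(
            "SELECT SUM(amount) FROM sales WHERE region = ?",
            (region,)).fetchone()[0]
    return cache[region]

print(total_by_region("East"))  # computed once against the database
print(total_by_region("East"))  # served from the cache
```

The same trade-off applies at any scale: the cache buys response time at the cost of storing duplicate values that must eventually be refreshed.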


Mobile analytics asset

Setting aside the most valuable mobile design constraint—screen real estate—avoiding a bottleneck depends on how the mobile analytics asset itself is designed. Consider:

  • Review the number of underlying calculations that are created inside the report. Are they all really necessary or simply duplicates or leftovers from draft copies that can be eliminated?
  • Can you leverage standard definitions/calculations? If one doesn’t exist, does it make sense to create one as part of your data model instead of inside the mobile asset?
  • How do you plan to deliver the assets: Push or Pull? (Read more on this topic.)
  • How are you configuring the mobile asset in terms of the data refresh? Is it automatic (loads the latest data on open) or manual?
  • Do you have offline capability, which eliminates the need for a 24/7 wireless connection and avoids refreshing data sets that don’t change frequently?
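The refresh and offline questions above can be sketched as a simple policy. The class below is a hypothetical illustration (the names `MobileAsset`, `open`, `refresh`, and the one-hour staleness window are my own, not a real product API): automatic mode reloads stale data when the asset is opened, manual mode refreshes only on request, and both serve the cached copy when no fetch is needed, which is what makes offline use possible.

```python
import time

class MobileAsset:
    """Sketch of an offline-capable refresh policy (illustrative names)."""

    def __init__(self, fetch, max_age_seconds=3600, auto_refresh=True):
        self.fetch = fetch                # callable that pulls fresh data
        self.max_age = max_age_seconds
        self.auto_refresh = auto_refresh  # automatic vs. manual refresh
        self._data = None
        self._loaded_at = 0.0

    def open(self):
        # Automatic mode: reload on open only when the cache is stale.
        stale = (time.time() - self._loaded_at) > self.max_age
        if self._data is None or (self.auto_refresh and stale):
            self.refresh()
        return self._data                 # otherwise serve the cached copy

    def refresh(self):
        # Manual mode calls this explicitly at a time of the user's choosing.
        self._data = self.fetch()
        self._loaded_at = time.time()

fetch_count = []
asset = MobileAsset(fetch=lambda: fetch_count.append(1) or "report-data")
asset.open()  # first open triggers a fetch
asset.open()  # cache is fresh: no second fetch, so this works offline
print(len(fetch_count))  # 1
```

For data sets that rarely change, a long `max_age_seconds` (or manual mode) keeps the asset usable without a constant wireless connection.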


Bottom line

Mobile analytics is about delivering actionable insight—not loading thousands of rows. Such overload won’t necessarily promote faster, better-informed decision making, and it’s sure to cause unnecessary headaches for everyone involved.


Stay tuned for my next blog in the Mobile Analytics Design series.


You may also like the Mobile BI Strategy series on IT Peer Network.

Connect with me on Twitter @KaanTurnali, LinkedIn and here on the IT Peer Network.

A version of this post was originally published on and also appeared on the SAP Analytics Blog.

Read more >

Retailers Reach for Robust Solutions Built for a Mobile World

It’s an exciting time for the retail industry. New mobile technologies are putting an amazing amount of insight at retailers’ fingertips—empowering sales assistants with timely, tailored knowledge that leads to richer customer engagement and more seamless shopping. New mobile devices … Read more >

The post Retailers Reach for Robust Solutions Built for a Mobile World appeared first on IoT@Intel.

Read more >

Bio-IT World Highlights Innovation for All in One Day Precision Medicine

Bio-IT World is a great occasion to take stock and see what’s on the horizon. In a plenary keynote session on April 5, I spoke about three areas where we’re making progress toward achieving All in One Day precision medicine.


All in One Day is both a vision and a challenge. The vision is that if you’re diagnosed with cancer or another genetically-influenced disease, your clinical team will sequence your DNA and provide you with a precision treatment plan based on your biomolecular profile—all within 24 hours. To do that, they’ll scour massive databases, examining the known available treatments to find the ones that are most effective for people who most closely line up with your unique biology, age, lifestyle, and other factors. So you receive the treatment that’s likely to be most successful with the fewest side effects. The upshot: less anxiety and uncertainty, less trial-and-error treatment, and the likelihood of better outcomes.


With enough of the right kinds of innovation and focus, Intel thinks the goal is achievable by 2020. We’re working hard to make the vision a reality, and to make it practical enough for community oncologists to use as part of their clinical workflows.


Tools for Making the Most of Genomics Data


What kinds of innovation am I talking about? One crucial area is the development of open source tools for analyzing and managing genomics data.


Genomic analysis and precision medicine are massive big data applications. Increasingly, the limiting factor isn’t sequencing a genome, but assembling, analyzing, comparing, studying and storing it along with clinical and other data. At Bio-IT World, Intel and the Broad Institute of MIT and Harvard announced that we are advancing fundamental capabilities so large genomic workflows can run at cloud scale, as well as co-developing new open source tools to simplify the execution of large genomic workflows such as the Broad’s Genome Analysis Toolkit (GATK).


The Broad Institute released Cromwell, an integrated workflow execution engine designed to give organizations greater control by launching genomic pipelines on private or public clouds in a portable and reproducible manner. Broad and Intel also announced GenomicsDB, a novel way to store vast amounts of patient variant data and to process it with unprecedented speed and scalability. Broad is teaming up with Intel, Cloudera, and four leading cloud service providers to enable cloud-based access to GATK software. (Read more about optimized open source solutions on Intel® platforms.)


Collaborative Networks to Accelerate Breakthroughs


Solving massive challenges calls for deep collaborations across diverse institutions. For precision medicine, these collaborations must balance open data sharing with institutional control and rigorous protection of patient privacy.


The Collaborative Cancer Cloud, established last year by Intel and Oregon Health & Science University (OHSU), provides a robust foundation for such collaborations by enabling medical institutions to securely share insights from their private patient genomic data. The Cancer Cloud’s unique, federated approach to data sharing allows for rapid advances while overcoming many concerns about sharing sensitive datasets. At Bio-IT World, we welcomed the Dana-Farber Cancer Institute and the Ontario Institute for Cancer Research as recent additions to the Cancer Cloud.


Platform Innovation for Diverse Genomics Workloads


As powerful as today’s supercomputers are, All in One Day will require significant increases in computational capacity, performance, and throughput. Intel is driving progress on multiple fronts to help institutions manage, analyze, share, and store the expanding world of bio data. We’ve created Intel® Scalable System Framework (Intel® SSF) as a next-generation approach to developing balanced, efficient, and reliable high-performance computing (HPC) systems. We recently launched the Intel® Xeon® processor E5-2600 v4 product family, the first processor within Intel® SSF. Together with Intel® Xeon Phi™ processors, Intel® Omni-Path Architecture, Intel® Enterprise Edition for Lustre* Solutions, revolutionary Intel® Optane™ memory/storage technology, and other critical elements of Intel® SSF, we’re dramatically advancing the capabilities needed for precision medicine.


What will All in One Day mean for your organization? What questions do you have? What do you need to do to get ready? Tell me in the comments.


Dig deeper:

Stay in touch:

  • @IntelHealth, @portlandketan

Read more >

Can NHS England’s Healthy New Towns programme present an opportunity to rethink how we live?

How long before we see a real and dramatic change in the way health and care services are delivered in England on a large scale? It’s a question you can be forgiven for asking – and for subsequently thinking that we’re still a long way from achieving – but NHS England’s recent announcements around the Healthy New Towns (HNT) programme had me thinking about how bricks and mortar could be the catalyst for change that health and care services need.


Healthcare at the Heart of New Developments

The HNT programme will facilitate joined-up thinking from clinicians, designers, and technology experts, who will essentially start with a blank slate as house-builders create new developments. From designing infrastructure that makes healthy activities such as walking and cycling safer (and thus more attractive) to sharing technology and information across a range of public services such as healthcare and social care, the programme aims to deliver better healthcare in a more efficient and economically sound way.


I think we’d all agree that a new approach to the provision of healthcare is needed in England and across the UK. Budgets are under pressure, we have an increasingly elderly population and chronic diseases such as diabetes and obesity are swallowing up huge resources. So what can new models of health & care services look like in a Healthy New Town and what advantages might it bring?


Utilizing Technology

NHS England’s Five Year Forward View clearly states that technology will play an important role in enabling change. Three key areas where I see technology bringing significant improvements for a Healthy New Town are:


  • Improved communication across the health and social care ecosystem – moving patient records to an electronic system ensures that patient information is always up-to-date and available anytime and anywhere, whether that be on a desktop computer on a hospital ward or on a 2 in 1 device in the hands of a community nurse. The data can be easily and securely shared too, amongst authorized parties such as social care teams, thus helping to deliver a seamless patient experience through primary, secondary and social care. Often, these electronic medical records are made up of unstructured case notes which may contain hidden value to clinicians. North East London NHS Foundation Trust and Santana Big Data Analytics are working together on a project to extract value from unstructured case notes using data analytics for the benefit of health and social care teams; read this whitepaper [PDF] for more insight on that project.

  • Making new homes more accessible and connected – there are some obvious and practical considerations around accessibility for those with mobility issues which should be easy to plan into a new-build property. I’m also keen to see how the concept of smart homes and the internet of things can be incorporated into new building developments and how such technologies could be used within new health & care models.

  • Accessing healthcare in new ways – millennials access many aspects of their daily lives through a connected mobile device, whether that be banking services, social media, or checking on a utility bill, and healthcare will be no different. With faster high-speed internet connections and 5G mobile network capabilities coming soon, I expect the ways in which future generations access healthcare to change too; a face-to-face consultation with a GP may no longer be the first option for patients.


Those are just three examples but there are certainly more and I’d love to hear how you see this Healthy New Towns programme playing out and the benefits it can bring (leave a comment @IntelHealth on Twitter or contact me via LinkedIn). We need to take a more holistic approach to health and care to make a real difference, so the design of this type of new community is a step in the right direction.


Read more >

Fostering the Internet of Things to Advance U.S. Leadership

By Marjorie Dickman, Global Director and Managing Counsel, IoT Policy Intel commends the Department of Commerce’s National Telecommunications and Information Administration (NTIA) for launching a request for comment today on “The Benefits, Challenges, and Potential Roles for the Government in Fostering … Read more >

The post Fostering the Internet of Things to Advance U.S. Leadership appeared first on Policy@Intel.

Read more >

Intel IoT Fuels Innovation at Digital Signage Expo 2016

From connecting at every touchpoint to creating lasting impressions, the Intel Internet of Things Group demonstrated engaging and secure digital signage experiences at this year’s Digital Signage Expo. Intel works with the industry to develop innovative and disruptive solutions from … Read more >

The post Intel IoT Fuels Innovation at Digital Signage Expo 2016 appeared first on IoT@Intel.

Read more >