Recent Blog Posts

Intel Highlights from Strata + Hadoop World 2016

This year’s Strata + Hadoop World conference, held March 28-31 in San Jose, marked a notable milestone – the 10th anniversary of Apache Hadoop*. Though Hadoop’s birthday was celebrated with circus-like festivities on the show’s second evening, I noticed that this year’s conference centered less on Hadoop than in previous years. Instead, it focused more on the cluster-computing analytics framework Spark, Internet of Things (IoT) technologies, and the ongoing challenge of deriving analytical insights from big data.

 

Intel had a substantial presence at the show, with a number of keynote speakers, sponsored sessions, an announcement about cloud infrastructure provider Rackspace and the Intel-driven open source Trusted Analytics Platform (TAP), and other events to showcase its latest advances in big data and IoT technologies.

 

Thursday morning’s keynotes started with a bang when writer and entrepreneur Alistair Croll (@acroll) welcomed the audience with a story of his lost jeans. Apparently, Croll’s luggage was misplaced on the way to San Jose, which sent him on a fruitless shopping trip to the mall to buy a replacement pair.

 

This proved to be the perfect set-up for Bob Rogers (@scientistBob), chief data scientist for big data solutions at Intel, and his presentation “Advanced analytics and the mystery of the missing jeans.” Rogers discussed how Intel and Levi’s have been working together to address a major problem in retail: inventory accuracy in brick-and-mortar stores is only 65 percent. This means that 35 percent of the time, merchandise that is supposed to be in stock isn’t. Rogers provided an overview of an analytics retail solution built for Levi’s that links IoT data from RFID inventory tags, video cameras, and sensors and transmits it via Intel® IoT Gateways to cloud-based advanced analytics engines on TAP. Watch this video for a glimpse into how Intel’s IoT and advanced analytics technologies have helped Levi’s make smarter business decisions and better serve its customers.

 

Bridget Karlin, the managing director of Intel’s Internet of Things Group, joined Bob Rogers for another session on IoT and TAP called “Master the Internet of Things with Integrated Analytics.” Intel offers a complete IoT platform that begins with reference architectures and extends to products and technologies from Intel and its partners to create an open, secure, and scalable approach to IoT solutions. Karlin and Rogers discussed how analytics platforms such as TAP, applied to rich data streams from IoT networks, have the capacity to deliver enormous value to many industries, including healthcare, energy and utilities, and retail.

 


Figure 1. Intel’s IoT Platform extends from edge to advanced analytics in the cloud.

 

These IoT systems are particularly powerful when they offer real-time analytics, not just from the cloud, but also from the network edge. Intel has worked closely with SAP to deliver an end-to-end IoT solution that can deliver actionable business insights on a near real-time basis. Watch this video featuring Karlin and Irfan Khan, CTO for Global Customer Operations at SAP, to learn more about the joint Intel-SAP IoT solution, and read the solution brief Business Intelligence at the Edge to find out how real-time data from the network edge helps protect remote workers and improves customer engagement and sales in retail applications.

 

During the same week as Strata + Hadoop World, Intel also announced the [general availability of the new Intel® Xeon® processor E5-2600 v4 product family](https://newsroom.intel.com/news-releases/intel-makes-move-to-the-cloud-faster-easier/), which provides a strategic foundation for building modern, software-defined cloud infrastructures. The new processor family delivers improved performance for cloud workloads, with more than 20 percent more cores and cache than the prior generation, plus enhanced security and faster memory support. According to recent benchmarks, Intel Xeon processor E5-2600 v4 chips delivered up to 1.22x higher performance on procedural workloads (such as MapReduce workloads on Apache Hadoop clusters). The new chips also delivered up to 1.27x higher performance for BigBench queries, a benchmark that measures efficient processing of big data analytics.

 

Included in the release was news that Intel is expanding its popular Intel® Cloud Builders program, which brings together reference architectures, solution blueprints, and leading solution providers to help facilitate the delivery of modern computing infrastructures, to include software-defined infrastructure use cases. The Cloud Builders program is now joined by the Intel® Network Builders and Intel® Storage Builders programs, which aim to accelerate adoption of software-defined network and storage innovations.

 

My last stop at the conference was at the SAP kiosk, where I filmed a Periscope video of our friend Karen Sun introducing SAP HANA Vora*. Vora is an in-memory computing engine for Hadoop that runs on the Apache Spark* execution framework and helps sift massive volumes of unstructured data in hierarchies to simplify big data management in SAP HANA and Hadoop environments. Intel contributed engineering and enablement efforts for Spark, on which SAP HANA Vora is based, to maximize performance and security on Intel® architectures.

 

From lost jeans to Vora, Strata + Hadoop World was a busy, eventful show with engaging, provocative keynotes and events. The highlights are available for viewing on the Strata web site.

 

Follow me at @TimIntel and #TechTim to keep up with the latest with Intel and SAP.

Read more >

Cisco UCS Claims Nine World Record Benchmarks with the Intel Xeon processor E5-2600 v4 Family

By Girish Kulkarni, Senior Marketing Manager, Cisco

 

On March 31, 2016, Cisco announced support for the Intel Xeon processor E5-2600 v4 family on the Cisco Unified Computing System (Cisco UCS). On the same day as the Intel announcement, Cisco captured nine world records on industry benchmarks with Cisco UCS, highlighting how Cisco UCS can accelerate performance across the data center.

 

There is no better way to compare performance than industry-standard benchmarks, and with nine new world-record benchmark results, Cisco has demonstrated the Cisco Unified Computing System’s outstanding performance and IT productivity across key data center workloads. Check out the Performance Brief for additional information on the nine new Cisco UCS world-record benchmarks. The detailed benchmark disclosure reports are available here. Cisco UCS’s performance leadership across a wide range of workloads is validated by the nine world records announced today, summarized below:

 

  1. SPECfp®_rate_base2006 – Best 2-socket x86-architecture result
  2. SPECint®_rate_base2006 – Best 2-socket x86-architecture result
  3. SPECint®_base2006 – Best 2-socket x86-architecture result
  4. SPECjbb®2015-MultiJVM – Best 2-socket x86-architecture result for critical jOPS
  5. SPECfp®_base2006 – Best 2-socket x86-architecture result
  6. TPC Express Benchmark HS (TPCx-HS) – Best performance and price/performance at the 1-TB scale factor
  7. TPC Express Benchmark HS (TPCx-HS) – Best performance and price/performance at the 10-TB scale factor
  8. SAP Sales and Distribution (SD) – Best 2-processor, 44-core, 2-tier result
  9. SPEC OMP®2012 – Best 2-socket x86-architecture result

 

Cisco’s world-record-setting benchmark results also demonstrate dramatic performance improvements over the prior-generation Intel® Xeon® processor E5 family. They show the degree to which Cisco UCS servers with large memory configurations deliver the power of the new Intel Xeon processor E5 v4 family: compared to Cisco’s previous-generation servers powered by the Intel Xeon processor E5 v3 family, the new servers deliver dramatic improvements in raw CPU power as well as business and parallelized application performance.

 

Cisco UCS with the Intel® Xeon® processor E5-2600 v4 family delivered up to 127 percent better performance than the prior-generation Intel® Xeon® processor E5 family, as shown in the graph below:

 


Performance Improvement Compared to Previous Generation Cisco UCS Servers (Percent)


It is interesting to note that although all vendors have access to the same Intel processors, Cisco UCS unleashes their potential to deliver high performance to applications through the power of unification and performance optimization. Cisco UCS integrates industry-standard x86-architecture blade and rack servers with networking and storage access into a unified system. Automated server and network configuration lets you quickly and easily deploy new applications, repurpose existing servers, and scale applications with configurations that comply with your IT standards.

 

Cisco’s continued performance leadership with Cisco UCS is due in part to the Intel Xeon processors that power Cisco UCS servers. Intel Xeon processor E5-2600 v4 family CPUs provide the best balance of performance, power efficiency, and features to meet the diverse needs of your data center applications and workloads. Built on 14-nanometer (nm) process technology, these innovative processors offer up to 22 cores, large high-speed memory configurations, and accelerated I/O throughput, delivering significant performance improvements compared to previous-generation processors. The processors also offer increased memory bandwidth monitoring and cache allocation capabilities, optimized data center orchestration and virtualization features, and hardware-assisted security advancements, which work in conjunction with Cisco UCS servers to further enhance the value of IT infrastructure in your enterprise.

 

The architectural advantages of a single cohesive system optimized for virtualized environments, coupled with these benchmark results, make the Cisco Unified Computing System an “infrastructure platform of choice” for delivering industry-leading performance in your data center. For additional information on Cisco UCS and Cisco UCS solutions, please visit the Cisco Unified Computing & Servers web page.






 

 

  1. The floating-point throughput performance improvement of 19.3 percent compared the SPECfp_rate_base2006 score of the Cisco UCS C220 M4 Rack Server with the previous-generation Cisco UCS C220 M4 Rack Server, a result that was available on September 8, 2014.

  2. The integer throughput performance improvement of 27.5 percent compared the SPECint_rate_base score of the Cisco UCS C220 M4 Rack Server with the previous-generation Cisco UCS C220 M4 Rack Server, a result that was available on March 5, 2015.

  3. The integer processing performance improvement of 14 percent compared the SPECint_base2006 score of the Cisco UCS C220 M4 Rack Server with the previous-generation Cisco UCS C220 M4 Rack Server, a result that was available on March 5, 2015.

  4. The Java application performance improvement of 127.3 percent compared the SPECjbb2015- MultiJVM critical-jOPS score of the Cisco UCS C220 M4 Rack Server with the previous-generation Cisco UCS C220 M4 Rack Server, a result that was available on September 30, 2015.

  5. The floating-point performance improvement of 16.8 percent compared the SPECfp_base2006 score of the Cisco UCS C220 M4 Rack Server with the previous-generation Cisco UCS C240 M4 Rack Server, a result that was available in December 2014.

  6. The big data system performance improvement of 11.1 percent and the price/performance improvement of 29.6 percent at a scale factor of 1 TB compared the TPCx-HS results of Cisco Integrated Infrastructure for Big Data and Analytics using Cisco UCS C240 M4 Rack Servers with Huawei FusionInsight for Big Data, a result that was available on September 15, 2015.

  7. The big data system performance improvement of 3.9 percent and price/performance improvement of 13.3 percent at a scale factor of 10 TB compared the TPCx-HS results of the Cisco UCS Integrated Infrastructure for Big Data and Analytics using the Cisco UCS C240 M4 Rack Server with the previous generation of Cisco’s solution, a result that was available March 22, 2016.

  8. The SAP SD benchmark certification number was not available at press time and when certified can be found on the following web page: http://global.sap.com/campaigns/benchmark/index.epx.

  9. The parallel-processing performance improvement of 28.2 percent compared the SPECompG_peak2012 score of the Cisco UCS C220 M4 Rack Server with the previous-generation Cisco UCS C220 M4 Rack Server, a result that was available on September 8, 2014.

  10. The Transaction Processing Performance Council (TPC) is a nonprofit corporation founded to define transaction processing and database benchmarks and to disseminate objective and verifiable performance data to the industry. TPC membership includes major hardware and software companies. The performance results described in this document are derived from detailed benchmark results available as of March 31, 2016, at http://www.tpc.org/tpcx-hs/results/tpcxhs_perf_results.asp.

  11. SPEC, SPECfp, SPECint, SPECjbb, and SPEComp are registered trademarks of Standard Performance Evaluation Corporation. The benchmark results used to establish world-record status are based on those available at http://www.spec.org as of September 8, 2014.

Read more >

Composing a Path to the Infrastructure of the Future with Tencent, Dell, and Intel

By Matt Langman, Director of Business Strategy for Memory and Intel Rack Scale Architecture

 

As hyperscale principles continue to influence data center designs, orchestrating and managing hardware becomes a challenge because scale and complexity break existing management approaches. It’s becoming impractical to use the Intelligent Platform Management Interface (IPMI) and other outdated standards to monitor, manage, provision, and orchestrate thousands or tens of thousands of devices.

 

Many organizations are trying to cope with this challenge. Nearly any data center vendor will tell you that they’re trying to step beyond the current limits, but one of the most distinctive and visionary approaches is Intel Rack Scale Architecture. Simply put, Intel Rack Scale Architecture promises a future of fully disaggregated hardware – whereby a customer can independently upgrade compute, storage, or networking capabilities – coupled with an open-source software layer that will let companies of any size stand up, manage, and orchestrate infrastructure just like the big guys. With a common API, Intel Rack Scale Architecture lets customers manage all of their workloads without being locked into a specific orchestration layer.

 

At IDF Shenzhen this week, our Intel Rack Scale Architecture offering will take center stage with Dell and Tencent. Our joint technology demo will show how customers can manage a hyperscale platform with custom orchestration – without hardware lock-in – to meet their workload needs. With Intel Rack Scale Architecture and Tencent’s custom orchestration stack running on the forthcoming Dell DSS 9000, we will demonstrate support for IT asset inventory, configuration, management, and provisioning for Tencent workloads across their distributed data center infrastructure.

 

“Tencent Cloud Computing, the cloud service business under Tencent, a leading provider of Internet value added services in China, is jointly working with Intel and Dell on an innovative solution based on the combination of Intel Rack Scale Architecture, Dell rack infrastructure and Tencent’s orchestration platform. This offers the ability to inventory and manage the underlying infrastructure with open, standards-based rack-level management, while also offering hardware disaggregation and infrastructure flexibility.”

– Tencent Cloud Computing Group

 

“When you’re one of the largest Internet companies in the world, it’s expected that you can accelerate service delivery, respond to changing workload needs and scale without interruption to meet customer needs. We’re proud to have collaborated with Intel and Tencent to showcase how this can be done. With the open and flexible Dell DSS 9000 running Intel Rack Scale Architecture and Tencent’s orchestration stack, Tencent Cloud Computing can provide better ROI at the infrastructure level while serving customer needs now and for years to come.”

– Stephen Rousset, Distinguished Engineer & Director of Architecture, Dell Extreme Scale Infrastructure

 

We hope you will drop by the Tencent booth at IDF Shenzhen to see a live demonstration and learn more. We certainly believe in Intel Rack Scale Architecture and we’re proud to have worked with Tencent and Dell to provide a glimpse into the future. Dell is a leading hardware provider that is building open, agile and efficient infrastructure via the DSS 9000 that matches our vision, and Tencent is on the forefront of technology trends and showing others how they are sidestepping the complexity of legacy approaches to better serve customer needs.

Read more >

Automated Manufacturing 101

As a Manufacturing IT Principal Engineer, I have helped Intel’s factory management evolve over the last two decades from manual processes toward the goal of being 100 percent automated. In our recent white paper, “Using Big Data in Manufacturing at Intel’s Smart Factories,” we describe the crucial aspects of Intel’s continuing automated manufacturing journey and the benefits automation—accelerated by the Internet of Things—has brought to Intel. These benefits include reduced cost, accelerated velocity, world-class quality, and improved safety.

 

The automation of manufacturing processes has now achieved global momentum, through initiatives such as Germany’s Industry 4.0 concept and China’s “Made in China 2025” program. Companies in Europe, China, and elsewhere around the world are endeavoring to automate their manufacturing processes.

 

But as I travel the world discussing Intel’s key learnings about automated manufacturing with customers and partners, I find that many companies are starting from square one. While they are interested in the advanced techniques Intel is using, such as automated material movement, IoT integration, real-time capabilities, virtualization, and interoperability with our suppliers, the manufacturing processes at these companies are still very much based on human intervention, pen-and-paper tracking, and manually operated production lines. Advanced automation is well and good – but how do they get started?

 

To help answer that question, in this blog I’ll cover the basics of factory automation – those critical components that must be in place as a foundation before more advanced practices can be achieved. I hope this information will help you as you evaluate your own journey.

 

So, what are smart factories made of?

 

Manufacturing Execution System (MES)

 

The first ingredient in a smart factory is the MES. This is the heart of factory automation, through which all factory transactions flow; it is the gatekeeper for equipment and material states. The MES, which can be developed internally but is more often purchased from a third-party supplier, is a transactional system for managing and tracking the state of equipment and work-in-process on a factory floor. As parts move physically on the factory floor, they move logically through the MES. An MES keeps track of information in real time, receiving up-to-the-minute data from various sources, which can include employees, machine monitors, equipment host controllers, and even robots. An MES can enable leaner operations and increase product quality through standardized workflows.

 

When evaluating an MES, a company should consider several factors.

 

For the MES itself, high performance is paramount. The MES architecture should be re-entrant and multithreaded to support a high level of parallelism. (If these terms are unfamiliar, keep reading, and I’ll explain what they mean.) This is because the MES must track and execute transactions for different parts of the factory at the same time. In today’s brutally fast-paced business environment, waiting for information to come through a queue can significantly erode a company’s competitive edge. Other important attributes of an MES include a fully redundant architecture, the availability of remote interfaces, easy access to APIs for customization, and user interfaces that support configuration and manual intervention in processes when necessary.
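
To make the parallelism requirement concrete, here is a deliberately simplified Python sketch of a transaction executor that serializes updates per tool while letting different tools proceed in parallel. The class, equipment names, and states are invented for illustration; a production MES is, of course, vastly more involved.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

class MiniMES:
    """Toy transaction executor: transactions for different tools run in
    parallel, while transactions against the same tool are serialized."""

    def __init__(self, workers: int = 8) -> None:
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.locks = defaultdict(Lock)   # one lock per piece of equipment
        self.state = {}                  # equipment_id -> current state

    def submit(self, equipment_id: str, new_state: str):
        lock = self.locks[equipment_id]  # created on the caller's thread
        return self.pool.submit(self._transact, lock, equipment_id, new_state)

    def _transact(self, lock: Lock, equipment_id: str, new_state: str) -> str:
        with lock:                       # lock one tool, not the whole factory
            self.state[equipment_id] = new_state
        return f"{equipment_id} -> {new_state}"

mes = MiniMES()
futures = [mes.submit(f"litho-{i % 4}", "PROCESSING") for i in range(16)]
for f in futures:
    print(f.result())
```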

 

The MES database is usually determined by the MES vendor, so it is an important part of the MES evaluation process. Again, don’t skimp on performance – automation can put a heavy load on the database, and the database must be able to scale as automation increases. Similar to the MES, look for a database that is highly parallel. For example, some databases lock the entire table when an update is occurring, while others are more flexible and lock only the row that is being updated.
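
To illustrate the row-versus-table locking point, here is a hedged sketch using PostgreSQL (which locks at row granularity) through the psycopg2 driver. The table, columns, and connection string are invented; the point is that SELECT ... FOR UPDATE holds only the one lot’s row, so transactions on other lots are not blocked.

```python
import psycopg2

def move_lot(conn, lot_id: str, next_step: str) -> None:
    with conn:  # commits on success, rolls back on error
        with conn.cursor() as cur:
            # Lock just this lot's row; updates to other lots run in parallel.
            cur.execute("SELECT step FROM lots WHERE lot_id = %s FOR UPDATE",
                        (lot_id,))
            cur.execute("UPDATE lots SET step = %s WHERE lot_id = %s",
                        (next_step, lot_id))

conn = psycopg2.connect("dbname=mes")  # illustrative connection string
move_lot(conn, "LOT-1234", "ETCH")
```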

 

Another important MES and database consideration is support availability. It’s best to find out what others in the industry are using, and to choose an MES and database that are well supported by the vendor(s) for the operating system in use. Going with an obscure choice – or a combination of MES, database, and operating system that is not popular in the industry – could leave you on an island, with the esteemed privilege of uncovering many of the system’s bugs and issues without a fast path to fixing them.

 

Middleware

 

Think of the middleware as the mail carrier for factory automation – one who also speaks every language on the planet. Middleware serves as an isolation and translation layer across the entire installed base of automated capabilities. It enables you to reconfigure or swap out factory automation systems without having to make changes to any of the other automation components. It also supports transaction aggregation, which is useful in many equipment transaction situations, such as moving a carrier of production material into or out of a process operation.

 

When a factory transaction executes, the middleware abstracts this transaction and translates the single “action” into many separate actions. For example, it could start actions in the MES and scheduling system, set a flag in the quality control system, and publish data to a repository.
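
That fan-out can be sketched as a tiny publish/subscribe dispatcher. This is a minimal illustration, not any particular middleware product; the action name, payload, and subscribing systems are invented.

```python
from typing import Callable, Dict, List

Handler = Callable[[dict], None]

class Middleware:
    """Translate one logical factory action into per-system actions."""

    def __init__(self) -> None:
        self.routes: Dict[str, List[Handler]] = {}

    def subscribe(self, action: str, handler: Handler) -> None:
        self.routes.setdefault(action, []).append(handler)

    def publish(self, action: str, payload: dict) -> None:
        for handler in self.routes.get(action, []):
            handler(payload)

bus = Middleware()
bus.subscribe("carrier_moved", lambda p: print("MES: track-in", p["carrier"]))
bus.subscribe("carrier_moved", lambda p: print("Scheduler: replan", p["carrier"]))
bus.subscribe("carrier_moved", lambda p: print("QC: set flag for", p["carrier"]))
bus.subscribe("carrier_moved", lambda p: print("Repository: store", p))

# One "action" at the equipment becomes four system-specific actions.
bus.publish("carrier_moved", {"carrier": "FOUP-42", "operation": "etch"})
```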

 

Middleware usually handles every transaction and routes every message in the factory. Therefore, evaluate the available middleware solutions for high performance and parallelism, much as you would in the MES/database evaluation process.

 

PCs at Each Equipment Station

 

PCs are the ears, eyes, and mouth of factory automation. They communicate with the equipment, control that equipment, and send that information back to factory automation systems like the MES. At Intel, we use PCs equipped with high-end processors, because performance is critical to keeping Intel’s factories running at peak capacity. High-performance PCs are necessary because the PC may communicate with a production tool many times per second, with hundreds of data variables in play – and data comes back at that same rate – providing the real-time information that powers quality decisions. (Some factories that focus on product assembly may not need this level of monitoring, but the semiconductor industry certainly does.)

 

In addition, the PC must communicate simultaneously with the MES and the equipment. Therefore, it must be capable of multiplexing (the simultaneous transmission of several messages along a single channel of communication). The PC also serves as the operator interface for controlling equipment (start, stop, change parameters).
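
For a rough feel for that multiplexing, here is an asyncio sketch in which one station PC concurrently polls the tool and forwards readings to the MES. The endpoints, message format, and rates are invented; a real station controller would speak the SEMI-standard protocols mentioned below rather than printing to a console.

```python
import asyncio

async def poll_equipment(queue: asyncio.Queue) -> None:
    for i in range(5):              # stand-in for an endless polling loop
        await asyncio.sleep(0.1)    # the tool replies many times per second
        await queue.put({"reading": i, "temp_c": 21.5 + i})

async def forward_to_mes(queue: asyncio.Queue) -> None:
    for _ in range(5):
        msg = await queue.get()
        print("MES <-", msg)        # a real PC would send this over the network

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    # Both conversations share one PC and run concurrently.
    await asyncio.gather(poll_equipment(queue), forward_to_mes(queue))

asyncio.run(main())
```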

 

Connections between the PC and the equipment are defined by the equipment in use. Many semiconductor factories use Ethernet connections with SEMI standard protocols, but connections could also use Wi-Fi. Other factories might use serial interfaces, Modbus, CAN bus, EtherCAT, or one of many other options.

 

Additional Factory Automation Components

 

Several back-end components make up the rest of the automation system. These include, but are not limited to, the following:

  • Statistical process control system
  • Yield analysis system
  • Scheduling system

When a company is ready for more advanced automation, they may add in automated material handling through robotic delivery (methods include overhead hoist, ground-based transport, and rail-mounted transport). A material control system is required to maintain the state of the automated vehicles and to provide instructions and synchronization.

 

Putting it All Together

 

If it seems like there’s a lot to factory automation, you’re right. That’s why creating a reference architecture for computer integrated manufacturing is important before beginning implementation. Because Intel has decades of experience in factory automation, we’re documenting our journey with the goal of helping fellow travelers create their own factory automation architecture.

 

In the next few months, I’ll be posting additional blogs talking about more advanced aspects of smart factories, the industrial Internet of Things, edge computing, and how Intel is putting technology to work in our manufacturing facilities. In the meantime, I’d love to hear your challenges and success stories – please leave a comment below and join the conversation.

Read more >

The Future of Mobile: 3 Takeaways From the 5G Summit

In mid-March, tech innovators from all over the world gathered for the 5G Summit in Taipei to discuss the future of mobile network technology. The conference was significant for the tech community because 5G is poised to change not only mobile technology, but how we conduct business and connect with each other. It’s nearly impossible to overstate the impact 5G will have. That doesn’t mean the switch will be easy, though. Quite the contrary. Here are my three main takeaways from the summit, and what they mean for the future of your tech.



1. Taiwan’s role in the future of 5G

 

Deciding to hold the conference in the capital of Taiwan was no coincidence. Taiwan is renowned in the tech sector for leading design and manufacturing. In fact, just a month before the 5G Summit, Ericsson announced a strategic partnership with Quanta Computer, a Taiwanese leader in cloud computing. Together, they’ll be scaling design and developing data center solutions.

 

The companies present at the conference were some of the most cutting-edge creators in technology today. They listened as the world’s leading telecom service providers, including Vodafone, Verizon, Bell Canada, China Mobile, Orange, and Telecom Italia, discussed their requirements for specific 5G services and use cases. The presentations emphasized that the transition to 5G will bring communication and computing together in a way we’ve never seen before, and that Taiwanese companies are positioned to play a huge part in the rollout and success of 5G.

 

2. A game changer

 

The technology involved in 5G will require small cells that connect to billions of embedded devices, and many Taiwanese companies attended the event looking to get a head start in development of 5G hardware and software.

 

Earlier this year at Mobile World Congress, Intel announced plans to collaborate with several industry leaders in an effort to accelerate the path to 5G. In fact, the connection between the two events is strong: the discussion of how 5G services and requirements will differ by vertical market and use case started at Mobile World Congress and continued at the 5G Summit.

 

Next Generation Mobile Networks (NGMN) emphasized in its presentation that it will take a lot of work to identify unique 5G use cases and related KPIs worldwide. The low-latency requirements in automotive, for example, will likely be significantly more stringent than in a typical consumer use case.

 

3. Transforming the network

 

Throughout the conference, one message was repeated over and over: 5G networks have to transform to allow easier deployment through software-defined networks (SDN) and network function virtualization (NFV) that can run on standard servers. This was especially interesting to me since Intel is actively engaging with service providers and the SDN/NFV ecosystem. We’ll be opening another NFV customer engagement center in Taipei in the second quarter of this year.

 

There was also a consensus among speakers and attendees that 5G and LTE will coexist. One presentation specified chip package size and power consumption, which gave the Taiwanese hardware companies something to consider. But in order for these technologies to coexist, services and corresponding devices must be designed to aggregate bandwidth while maintaining reasonable power consumption. The most interesting message, repeated by several participants, was a caution against adopting pre-standard solutions.

 

It was exciting to participate in the 5G Summit this year. I can’t wait to see how this technology transforms business and enables bigger leaps in technological innovation. Did you attend the conference? Please, share your takeaways.

  • Who do you think is going to be first to roll out standards-based 5G service?
  • Which verticals and use cases do you think will drive the fastest commercial adoption and where?
  • What leading companies in these verticals will benefit the most from 5G?

 

Given the healthy competition to be first among operators and countries, it will be interesting to see how this all plays out.

 

Tim Lauer is Intel’s Director of Sales for Cloud and Communication Service Providers in the APJ (Asia Pacific and Japan) region. Connect with him on Twitter and LinkedIn.

Read more >

Making The Case For Small With Mobile Analytics Design

In mobile analytics design, the “case for small” stems from the need to effectively manage performance and response time for mobile experiences. The concept has nothing to do with smaller screens or device sizes. Instead, it deals with the delivery of the content onto those screens.

 

One of the common denominators of all mobile user experiences is what I call the “patience factor.” Mobile users tend to be less patient about performance and response time than PC users, since they’re on the go with less time to spare.

 

On the other hand, the unmatched access and convenience of mobile make people heavy users of the technology, and their expectations are shaped by daily experiences with their mobile devices, which are all about ease of use and instant results.

 

The challenge for mobile design is that many traditional data and analytics platforms can’t handle large volumes of data on wired systems, let alone on wireless networks. Unless you’re taking advantage of the latest technology such as in-memory computing, the case for small remains undeniable.

 

Let’s take a look at two key areas where the case for small makes sense with traditional mobile analytics platforms.

 

Mobile query size


If you’re going to load data from a traditional database, the size of the underlying mobile query will undoubtedly impact the performance.

  • Use the minimum number of data elements to satisfy your business requirement to define your mobile query. It’s completely acceptable to experiment with large queries during development, but you must clean them up before going live.
  • Optimize your queries at the database level and take advantage of additional features that your analytics or business intelligence application may offer.
  • If it makes sense and provides relief, use several smaller data queries instead of one large query.
  • As an alternative to loading all data at once, consider loading only the data required for the initial analysis; the sketch after this list shows one way to page in the rest on demand.
  • For certain requirements, think about cached data (storing values that have been computed in advance). Although this duplicates original values that are stored elsewhere, the performance gains may be invaluable—especially for certain audiences, such as senior executives.
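
To make the last two bullets concrete, here is a hedged sketch (using Python’s built-in sqlite3 so it is self-contained; the table, columns, and page size are invented) of a small aggregate query for the opening screen, with detail rows paged in only on drill-down.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (order_id, customer, region, quarter, revenue)")
conn.execute("INSERT INTO sales VALUES (1, 'ACME', 'EMEA', 'Q1', 1200)")

def load_initial(quarter: str):
    # Small aggregate query: just enough for the opening dashboard screen.
    return conn.execute(
        "SELECT region, SUM(revenue) FROM sales WHERE quarter = ? GROUP BY region",
        (quarter,),
    ).fetchall()

def load_detail(quarter: str, region: str, page: int, size: int = 50):
    # Drill-down details are paged in later, a screenful at a time.
    return conn.execute(
        "SELECT order_id, customer, revenue FROM sales "
        "WHERE quarter = ? AND region = ? "
        "ORDER BY revenue DESC LIMIT ? OFFSET ?",
        (quarter, region, size, page * size),
    ).fetchall()

print(load_initial("Q1"))            # first screen: one small result set
print(load_detail("Q1", "EMEA", 0))  # details only when the user asks
```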

 

Mobile analytics asset


If you put aside the constraint of the most valuable mobile design property—real estate—avoiding a bottleneck depends on how the mobile analytics asset is designed. Consider:

  • Review the number of underlying calculations that are created inside the report. Are they all really necessary or simply duplicates or leftovers from draft copies that can be eliminated?
  • Can you leverage standard definitions/calculations? If one doesn’t exist, does it make sense to create one as part of your data model instead of inside the mobile asset?
  • How do you plan to deliver the assets: Push or Pull? (Read more on this topic.)
  • How are you configuring the mobile asset in terms of the data refresh? Is it automatic (loading the latest data on open) or manual? (A minimal refresh-policy sketch follows this list.)
  • Do you have the offline capability, which will eliminate the need for a 24/7 wireless connection and the need to refresh data sets that don’t change frequently?
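
One way to picture the refresh and offline questions: serve a precomputed value from a local cache and recompute it only when it goes stale. This is a minimal sketch; the TTL, cache key, and placeholder computation are invented, and a real mobile BI tool would persist the cache for offline use.

```python
import time

CACHE: dict = {}            # key -> (timestamp, value)
TTL_SECONDS = 15 * 60       # assume a 15-minute freshness window

def cached(key: str, compute):
    entry = CACHE.get(key)
    if entry and time.time() - entry[0] < TTL_SECONDS:
        return entry[1]     # fresh enough: no query, no wireless round trip
    value = compute()       # the expensive query or calculation
    CACHE[key] = (time.time(), value)
    return value

# Placeholder computation standing in for a real analytics query.
revenue = cached("exec-dashboard-revenue", lambda: 42_000_000)
print(revenue)
```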

 

Bottom line


Mobile analytics is about delivering actionable insight—not loading thousands of rows. Such overload won’t necessarily promote faster, better-informed decision making, and it’s sure to cause unnecessary headaches for everyone involved.

 

Stay tuned for my next blog in the Mobile Analytics Design series.

 

You may also like the Mobile BI Strategy series on IT Peer Network.

Connect with me on Twitter @KaanTurnali, LinkedIn and here on the IT Peer Network.

A version of this post was originally published on turnali.com and also appeared on the SAP Analytics Blog.

Read more >

Retailers Reach for Robust Solutions Built for a Mobile World

It’s an exciting time for the retail industry. New mobile technologies are putting an amazing amount of insight at retailers’ fingertips—empowering sales assistants with timely, tailored knowledge that leads to richer customer engagement and more seamless shopping. New mobile devices … Read more >

The post Retailers Reach for Robust Solutions Built for a Mobile World appeared first on IoT@Intel.

Read more >

Bio-IT World Highlights Innovation for All in One Day Precision Medicine

Bio-IT World is a great occasion to take stock and see what’s on the horizon. In a plenary keynote session on April 5, I spoke about three areas where we’re making progress toward achieving All in One Day precision medicine.

                                                           

All in One Day is both a vision and a challenge. The vision is that if you’re diagnosed with cancer or another genetically-influenced disease, your clinical team will sequence your DNA and provide you with a precision treatment plan based on your biomolecular profile—all within 24 hours. To do that, they’ll scour massive databases, examining the known available treatments to find the ones that are most effective for people who most closely line up with your unique biology, age, lifestyle, and other factors. So you receive the treatment that’s likely to be most successful with the fewest side effects. The upshot: less anxiety and uncertainty, less trial-and-error treatment, and the likelihood of better outcomes.

 

With enough of the right kinds of innovation and focus, Intel thinks the goal is achievable by 2020. We’re working hard to make the vision a reality, and to make it practical enough for community oncologists to use as part of their clinical workflows.

                                                 

Tools for Making the Most of Genomics Data

 

What kinds of innovation am I talking about? One crucial area is the development of open source tools for analyzing and managing genomics data.

 

Genomic analysis and precision medicine are massive big data applications. Increasingly, the limiting factor isn’t sequencing a genome, but assembling, analyzing, comparing, studying and storing it along with clinical and other data. At Bio-IT World, Intel and the Broad Institute of MIT and Harvard announced that we are advancing fundamental capabilities so large genomic workflows can run at cloud scale, as well as co-developing new open source tools to simplify the execution of large genomic workflows such as the Broad’s Genome Analysis Toolkit (GATK).

 

The Broad Institute released Cromwell, an integrated workflow execution engine designed to give organizations greater control by launching genomic pipelines on private or public clouds in a portable and reproducible manner. Broad and Intel also announced GenomicsDB, a novel way to store vast amounts of patient variant data and to process it with unprecedented speed and scalability. Broad is teaming up with Intel, Cloudera, and four leading cloud service providers to enable cloud-based access to GATK software. (Read more about optimized open source solutions on Intel® platforms.)

 

Collaborative Networks to Accelerate Breakthroughs

 

Solving massive challenges calls for deep collaborations across diverse institutions. For precision medicine, these collaborations must balance open data sharing with institutional control and rigorous protection of patient privacy.

 

The Collaborative Cancer Cloud, established last year by Intel and Oregon Health & Science University (OHSU), provides a robust foundation for such collaborations by enabling medical institutions to securely share insights from their private patient genomic data. The Cancer Cloud’s unique, federated approach to data sharing allows for rapid advances while overcoming many concerns about sharing sensitive datasets. At Bio-IT World, we welcomed the Dana-Farber Cancer Institute and the Ontario Institute for Cancer Research as recent additions to the Cancer Cloud.

 

Platform Innovation for Diverse Genomics Workloads

 

As powerful as today’s supercomputers are, All in One Day will require significant increases in computational capacity, performance, and throughput. Intel is driving progress on multiple fronts to help institutions manage, analyze, share, and store the expanding world of bio data. We’ve created Intel® Scalable System Framework (Intel® SSF) as a next-generation approach to developing balanced, efficient, and reliable high-performance computing (HPC) systems. We recently launched the Intel® Xeon® processor E5-2600 v4 product family, the first processor within Intel SSF. Together with Intel® Xeon Phi™ processors, Intel® Omni-Path Architecture, Intel® Enterprise Edition for Lustre* Solutions, revolutionary Intel® Optane™ memory/storage technology, and other critical elements of Intel SSF, we’re dramatically advancing the capabilities needed for precision medicine.

 

What will All in One Day mean for your organization? What questions do you have? What do you need to do to get ready? Tell me in the comments.

                                                           


Stay in touch:                         

  • @IntelHealth, @portlandketan

Read more >

Can NHS England’s Healthy New Towns programme present an opportunity to rethink how we live?

How long before we see a real and dramatic change in the way health and care services are delivered in England on a large scale? It’s a question you can be forgiven for asking – and for subsequently thinking that we’re still a long way from achieving – but the recent announcements by NHS England around the Healthy New Towns (HNT) programme had me thinking about how bricks and mortar could be the catalyst for change that health and care services need.

 

Healthcare at the Heart of New Developments

The HNT programme will facilitate joined-up thinking from clinicians, designers, and technology experts, who will essentially start with a blank slate as house-builders create new developments. From designing infrastructure that makes healthy activities such as walking and cycling safer (and thus more attractive) to sharing technology and information across a range of public services such as healthcare and social care, the programme aims to deliver better healthcare in a more efficient and economically sound way.

 

I think we’d all agree that a new approach to the provision of healthcare is needed in England and across the UK. Budgets are under pressure, we have an increasingly elderly population, and chronic diseases such as diabetes and obesity are swallowing up huge resources. So what could new models of health and care services look like in a Healthy New Town, and what advantages might they bring?

 

Utilizing Technology

NHS England’s Five Year Forward View clearly states that technology will play an important role in enabling change. Three key areas where I see technology bringing significant improvements for a Healthy New Town are:

 

  • Improved communication across the health and social care ecosystem – moving patient records to an electronic system ensures that patient information is always up-to-date and always available, anytime and anywhere, whether that be on a desktop computer on a hospital ward or on a 2-in-1 device in the hands of a community nurse. The data can be easily and securely shared, too, among authorized parties such as social care teams, thus helping to deliver a seamless patient experience through primary, secondary, and social care. Often, these electronic medical records are made up of unstructured case notes which may contain hidden value for clinicians. North East London NHS Foundation Trust and Santana Big Data Analytics are working together on a project to extract value from unstructured case notes using data analytics for the benefit of health and social care teams; read this whitepaper [PDF] for more insight on that project.


  • Making new homes more accessible and connected – there are some obvious and practical considerations around accessibility for those with mobility issues which should be easy to plan into a new-build property. I’m also keen to see how the concept of smart homes and the internet of things can be incorporated into new building developments and how such technologies could be used within new health & care models.


  • Accessing healthcare in new ways – millennials access many aspects of their daily lives through a connected mobile device, whether that be banking services, social media, or checking a utility bill, and healthcare will be no different. With faster high-speed internet connections and 5G mobile network capabilities coming soon, I expect the ways in which future generations access healthcare to change too; for example, a face-to-face consultation with a GP may no longer be the first option for patients.

 

Those are just three examples but there are certainly more and I’d love to hear how you see this Healthy New Towns programme playing out and the benefits it can bring (leave a comment @IntelHealth on Twitter or contact me via LinkedIn). We need to take a more holistic approach to health and care to make a real difference, so the design of this type of new community is a step in the right direction.

 

Read more >

Fostering the Internet of Things to Advance U.S. Leadership

By Marjorie Dickman, Global Director and Managing Counsel, IoT Policy Intel commends the Department of Commerce’s National Telecommunications and Information Administration (NTIA) for launching a request for comment today on “The Benefits, Challenges, and Potential Roles for the Government in Fostering … Read more >

The post Fostering the Internet of Things to Advance U.S. Leadership appeared first on Policy@Intel.

Read more >

Intel IoT Fuels Innovation at Digital Signage Expo 2016

From connecting at every touchpoint to creating lasting impressions, the Intel Internet of Things Group demonstrated engaging and secure digital signage experiences at this year’s Digital Signage Expo. Intel works with the industry to develop innovative and disruptive solutions from … Read more >

The post Intel IoT Fuels Innovation at Digital Signage Expo 2016 appeared first on IoT@Intel.

Read more >

An Introduction to Dual-Port NVMe SSDs

March 31, 2016 was the last day of Q1, and it was full of surprises. If you missed the announcement, take a minute to read the press release. I’m proud today: we at NSG (the Non-volatile Memory Solutions Group) have just released new products based on technologies that are very new to the industry. First among them is the first Intel 3D NAND-based NVMe SSD for the data center. My peer Vivek Sarathy is excited about its performance and SATA-like pricing in his blog.

That’s not all. We also announced another new SSD family, the Intel® SSD DC D3700 / D3600 Series. These are special SSDs that address high-availability designs with dual-port PCI Express*. This architecture provides critical redundancy and failover, protecting against any single-path failure.

 

Figure: A dual-port NVMe SSD connected to two storage controllers for high availability.

 

In practice, that means the SSD can be connected to two hosts at a time, shown as storage controllers in the diagram. The drives can be connected directly to a host CPU, or via a PCIe switch topology if a higher SSD count is required. If you’re familiar with enterprise storage HA designs based on SAS, this looks very similar, but it is implemented over the PCIe bus.

Dual-port extensions were added to the original NVMe specification with the 1.1 revision a few years ago. Since then, a few vendors have announced products and solutions based on the technology. The ecosystem is ramping up now, but it is new and focused on specific problems, which are common to enterprise (scale-up) storage and some other areas such as HPC storage. By the way, take a look at my peer Allen Scheer’s blog, “What Kind of Storage Buyer Are You?”

 

Dual-port NVMe opens another path to HA topologies, which also means that system designs built for single-port NVMe SSDs need re-architecting. The drive is available in a single form factor – 2.5” U.2 – sharing the same connector as before. It still has four lanes of PCIe Gen3, as in the original design, but for dual-port operation they are split into two pairs: 2 x PCIe Gen3 x2. To support the new connectivity, the system must have a new backplane with PCIe lanes properly routed to two hosts, with or without PCIe switches.

 

The D3700 / D3600 Series has another advantage over current single-port SSDs: these drives are based on the NVMe 1.2 specification, which introduces new features for all NVMe SSDs.


Figure: New features introduced with the NVMe 1.2 specification.


One of those is multiple namespace support. By analogy with SCSI LUNs, a single SSD can be divided into multiple hardware partitions, where each namespace can be shared between two hosts or dedicated to a single host. A dedicated namespace stays isolated from the other host unless a critical failure occurs on the assigned host.
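
As a rough illustration of namespace management, here is a hypothetical sketch that drives the Linux nvme-cli tool from Python. The device path, sizes, and controller IDs are placeholders, and the commands are destructive, so treat this as a sketch of the concept rather than a recipe for real hardware.

```python
import subprocess

def nvme(*args: str) -> str:
    """Run an nvme-cli command and return its stdout."""
    return subprocess.run(["nvme", *args],
                          capture_output=True, text=True, check=True).stdout

DEV = "/dev/nvme0"                # placeholder controller device
BLOCKS = str(400 * 10**9 // 512)  # ~400 GB expressed in 512-byte blocks

# Carve the drive into two namespaces (nsid 1 and 2 on an empty drive), then
# attach each namespace to a different controller/port so each host sees
# only its own partition.
for _ in range(2):
    nvme("create-ns", DEV, "--nsze", BLOCKS, "--ncap", BLOCKS, "--flbas", "0")
nvme("attach-ns", DEV, "--namespace-id", "1", "--controllers", "0x1")
nvme("attach-ns", DEV, "--namespace-id", "2", "--controllers", "0x2")
print(nvme("list-ns", DEV))       # enumerate the namespaces we created
```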


Does this look complicated? Yes, these are complex design changes, but they pay for themselves right away in performance. Making the product successful also means Intel partnering with hardware and software vendors to enable support for the new drives. I’m very happy to see storage innovators such as XIO and E8 Storage working with Intel to prove out the benefits in enterprise storage solutions, as shown below. More work is under way with Quanta, Wistron, AIC, and other storage partners.

 


Intel SSD DC D3700 vs. SAS SSD performance comparison. Source: XIO. Configuration: external host running Windows Server 2008. External host specifications: HP DL360 G7 with dual Intel E5-2620 and 25 GB RAM. Storage array system using E5-2699 v3 with 40x Intel DC D3700 10 DWPD 800 GB, and storage array system using E5-2699 v3 with 40x SAS 10 DWPD 400 GB. Test: 8K transfers with an 80/20 read/write workload at QD 1, 2, and 4, accessing one volume on the shared storage array. Measurements taken with IOMeter.

 

 


E8 Storage high availability. Source: E8. Configuration: four hosts connected to an E8 proof-of-concept storage system with two E5-2650 v3 CPUs and 24 Intel DC D3700 800 GB drives. Performance measured with 8 FIO threads per host, QD=32 per thread, 4K 100 percent random read.

 

I can’t wait to share more with you. See you at IDF16 at the dual-port NVMe class.

Read more >

Smart Infrastructure: Is the next IoT revolution on the right trajectory?

In early to mid-2015, the Indian government announced plans to turn 100 Indian cities into “smart cities.” The idea is to leverage cloud technology, IoT/M2M, and big data in order to rethink waste management, traffic, electricity, and other city infrastructures. Smart city initiatives have been proposed or launched all over the world in the past few years. Cities across the spectrum like Singapore, Helsinki, Nairobi, and New York are all in the midst of it.

 

But there’s a huge obstacle these cities are encountering in their first attempts to become smart cities: a lack of concerted planning, communication, and collaboration among the many players involved.

 

The Way to Smart Cities is Connected Infrastructure

 

At this point, the smart infrastructure movement is still disparate. The companies building these smart solutions see their products as autonomous but, for the cities trying to integrate these systems, nothing could be further from the truth.

 

A truly smart city will communicate seamlessly. Different technologies and products made by different companies have to speak the same language and play by the same rules. But at this early stage in the IoT movement, most companies making these technologies are waiting for standards to make them more integrated and collaborative.

 

That’s what we’re doing at Intel: finding ways to unite more of the players in the space. We’re stepping back with a product-agnostic approach and watching the market with an eye on best solutions and products. Based on what we find and what customers need, we’re making connections between players in the ecosystem. We’re looking for and influencing designs and standards that are future-proof, sustainable, and scalable.

 

The Factors at Play

 

Here’s an example: a city decides to install smart traffic or weather cameras on the streets. There are several ways the city can go about pulling the data from the cameras. They could process it right away, at the camera itself: the camera catches the activity and has the intelligence built in to process the data and send an alert to the right authority. Or it could be set up as a “dumb” camera: the camera simply captures the images and sends them all the way back to a data center in a centralized location, where the information is processed and alerts are sent out.

 

For a city trying to find the right products, there are dozens of factors to consider before making a decision. What’s the internet infrastructure like in their city? How expensive is it to send the data back and forth? How expensive is it to use hardware that can process the data at the site? And that’s on top of the challenges we discussed above, about how the different pieces of technology within the product itself speak to one another and the software that’s used to analyze the data.

 

In order to make good choices and create truly smart infrastructure, we need to evaluate what each city’s needs truly are and what solutions best fit these needs. These cities need some help from advisors that are truly agnostic.

 

Because of the many different products Intel makes, the company has been working closely with original design manufacturers (ODMs) in nearly every area of tech for the last 50 years. That has given us a deeper understanding of these brands, their products, their technology roadmaps, and the ways in which smart infrastructure can be successfully implemented. Put simply – for the gastronomically inclined – the company offers the best ingredients to transform your recipes.

 

In future posts, we’ll be delving deeper into the challenges smart cities and enterprises are facing as they implement more IoT solutions.

 

Kavitha Mohammad is the Director of Sales for Intel IoT and SmartCities, in the Asia Pacific Japan region. Follow her on Twitter and LinkedIn.

Read more >