Recent Blog Posts

TPA and the Promise of International Trade – A New Era of American Innovation and Economic Growth

By Lisa Malloy, director of Government Relations and Policy Communications With approval of Trade Promotion Authority (TPA), Congress has taken a significant step towards advancing American economic, social and technological interests at home and abroad. As an American high-tech manufacturer, … Read more >

The post TPA and the Promise of International Trade – A New Era of American Innovation and Economic Growth appeared first on Policy@Intel.

Read more >

The Innovation Act of 2015 – Promoting Innovation by Stopping Abusive Patent Litigation

By Steve Rodgers, senior vice president and general counsel for Intel Earlier this month, the House Judiciary Committee approved H.R. 9, the Innovation Act of 2015. This bipartisan bill aims to curb abusive patent litigation by non-practicing entities (NPEs) while still protecting … Read more >

The post The Innovation Act of 2015 – Promoting Innovation by Stopping Abusive Patent Litigation appeared first on Policy@Intel.

Read more >

10 Questions to Develop Your Mobile BI Strategy

In my post “Mobile BI” Doesn’t Mean “Mobile-Enabled Reports” I articulated the importance of developing a mobile BI strategy. If designed, implemented, and executed effectively, mobile BI will not only complement the existing business intelligence (BI) framework, but it will enable organizations to drive growth and profitability.


For my next ten posts, I want to chart a course that will highlight the key questions you need to ask before embarking on a mobile BI strategy. This is the critical first step in validating mobile BI readiness for any organization, whether it’s a Fortune 500 company, a small-to-medium enterprise, or a small team within a large enterprise. The size or the scope of the mobile BI engagement doesn’t negate the need for, or importance of, the pre-flight checklist.


Think about this for a moment. Would a flight crew skip the pre-flight planning because it expects only a small number of passengers on the flight? No, and we shouldn’t skip it either. We want to evaluate and identify any issues before the takeoff.


It doesn’t matter in what order you answer these questions. What matters is that you consider them all as you work to develop a comprehensive mobile BI strategy that will set you up for success.


1. Executive Sponsorship

Do we have an executive sponsor? It starts and ends with executive sponsorship. As with any engagement, this not only ensures alignment between your business and mobile strategies but also secures the required resources.


2. Security

How do we mitigate risks associated with all three layers of mobile BI security: device(s), mobile BI app, and data consumed on the app? Is there an existing corporate security policy or framework that can be leveraged?


3. Enterprise Mobility

Do we have either a formal enterprise mobility strategy that we need to align with or a road map that we can follow?


4. Technology Infrastructure

Can our current IT and BI infrastructure, which includes both hardware and software, support mobile BI? Are there any gaps that need to be addressed prior to going live?


5. Design

Do we have the know-how to apply mobile BI design best practices, whether it’s for dashboards or operational reports? Does the existing software support effective use of metadata and modeling to leverage the “develop once, use many times” design philosophy?


6. Talent Management

Do we have internal talent with the required skill set that includes not only technical expertise but also soft skills such as critical thinking?


7. Support Infrastructure

Do we have a sufficient support infrastructure in place to ensure that both business (content, analysis) and technical (access, installation) challenges are addressed in a timely manner? Do we have the right resources to develop effective documentation? Can we leverage existing IT and/or BI resources?


8. Communication

What will be our communication strategy in the pre- and post-go-live phases? How will we update the user community on a regular basis?


9. Business Processes

Are there any business processes that need to be updated, changed, or created to support the mobile BI strategy? Are these changes feasible and can we complete them prior to development to ensure proper testing and validation?


10. System Integration

Are there any requirements or opportunities for integration with other internal apps, business systems, or processes?


Many of these topics are not unique to mobile BI. Moreover, additional areas of interest such as project management or quality assurance (testing) are assumed to be part of the existing IT or BI framework. Although these initial questions may seem extensive at first, their primary purpose is to provide a checklist.


I subscribe to the notion that strategy planning for any engagement, not just IT projects, should invite questions that promote critical thinking. Only by encouraging questions can we make sure that we ask the right ones.


What key questions do you see as critical to the development of a comprehensive mobile BI strategy?


Stay tuned for my next blog in the Mobile BI Strategy series.


Connect with me on Twitter at @KaanTurnali and LinkedIn.


This story originally appeared on the SAP Analytics Blog.

Read more >

Taking Square Aim at Accelerating Cloud Adoption with Cisco, Dell and Red Hat

With the digital service economy scaling to $450B by 2020, companies are relying on their IT infrastructure to fuel business opportunity.  The role of the data center has never been more central to our economic vitality, and yet many enterprises continue to struggle to integrate the efficient and agile infrastructure required to drive the next generation of business growth.


At Intel, we are squarely focused on accelerating cloud adoption by working with the cloud software industry to deliver the capabilities required to fuel broad scale cloud deployment across a wide range of use cases and workloads.  We are ensuring that cloud software can take full advantage of Intel architecture platform capabilities to deliver the best performance, security, and reliability, while making it simpler to deploy and manage cloud solutions.


That’s why our latest collaboration with Red Hat to accelerate the adoption of OpenStack in the enterprise holds incredible promise.  We kicked off the OnRamp to OpenStack program in 2013, centered on educational workshops, early trials, and customer PoCs. Today, we are excited to augment this collaboration with a focus on accelerating OpenStack deployments, building on our long-standing history of technical collaboration to accelerate feature delivery and drive broad proliferation of OpenStack in the enterprise.


This starts by expanding our focus on integrating enterprise-class features such as high availability of OpenStack services and tenants, ease of deployment, and rolling upgrades.  What does this entail?  With high availability of OpenStack services, we are ensuring an “always on” state for cloud control services.  High availability of tenants focuses on a number of capabilities, including improving VM migration and VM recovery from host failures.  Ease of deployment will help IT shops get up and running faster and add capacity whenever required.  Once the cloud is up and running, rolling upgrades enable OpenStack upgrades without downtime.


We’re also excited to have industry leaders Cisco and Dell join the program to deliver a selection of proven solutions to the market.  With their participation, we expect to upstream much of the work we’ve collectively delivered to ensure that the entire open source community can leverage these contributions.  What does this mean to you? If you’re currently evaluating OpenStack and are seeking improvement in high availability features or predictable and understood upgrade paths, please reach out to us to find out more about what the collaboration members are delivering.  If you’re looking to evaluate OpenStack in your environment, the time is ripe to take action.  Take the time to learn more about Cisco, Dell and Red Hat plans for delivery of solutions based on the collaboration, and comment here if you have questions or feedback on the collaboration.

Read more >

On The Front Line: People Who Bring Moore’s Law to Life (Part 2)

Who they are and what they do: Meet some of the employees that bring Moore’s Law to life. Ruth Brain, senior principal engineer, Portland Technology Development, TMG (Oregon) My Moore’s Law role: Ruth is currently the 7-nanometer integration group leader for … Read more >

The post On The Front Line: People Who Bring Moore’s Law to Life (Part 2) appeared first on Jobs@Intel Blog.

Read more >

Top 4 Questions (and Answers) about the New Compute Stick


There has been a lot of excitement and discussion around the new compute stick—the latest form factor to join the desktop family. Lately, I’ve been getting a bunch of questions about it: What is it? What do I need to use it? What can I do with it? Where can I get one? The fact that I’m receiving so many questions tells me there’s a lot of interest out there—and for good reason.


I’m really excited about this newest innovation and how it allows people to bring computing to new devices and new areas. Here are answers to some of the most common questions I’ve been getting, but please don’t hesitate to reach out if you don’t find the information you’re looking for.



What Is a Compute Stick?


To put it simply, compute sticks are small, light devices that allow you to turn any HDMI display into a desktop computer. They’re barely bigger than a thumb drive, but they allow you to add computing functionality to any display with an HDMI input. And their size allows you to bring a compute stick with you wherever you go.

What Do I Need to Use a Compute Stick?


All you need is a display that has an HDMI input and a wireless keyboard and mouse. What’s got people so excited is the total freedom this offers. Just think: you can enjoy computer access anywhere you have those simple ingredients. Pretty cool.


What Can I Do with a Compute Stick?


You can do many of the same things you love to do on your computer. We’re talking searching the Web, sharing photos, doing email, and keeping up on social media. Additionally, you can stream content from a local network, or any Internet source, allowing you to access the content you want on the displays you want. With a compute stick you can also add simple digital signage capabilities to any HDMI display. Finally, its size and portability free you to bring your computing with you, whether you’re on a business trip or a vacation. Just plug it into the back of the TV in your hotel room and your display can be turned into an engaging, connected device.


Where Can I Get a Compute Stick?


There are several compute stick devices in the marketplace today, including the Intel Compute Stick, and we expect to see more from other manufacturers in the coming weeks and months.


If you have any other questions, please let me know in the comments below or on social media with #IntelDesktop, and I’ll address them in future posts.

Read more >

Intel Kicks off “Tech + Policy @ Intel” Series: 5 Key Takeaways from our Evening on Trade

By Lisa Malloy, director of Government Relations and Policy Communications for Intel Last week, Intel kicked off our “Tech + Policy @Intel” series in Washington, D.C. with an intimate conversation hosted in partnership with the Information Technology and Innovation Foundation … Read more >

The post Intel Kicks off “Tech + Policy @ Intel” Series: 5 Key Takeaways from our Evening on Trade appeared first on Policy@Intel.

Read more >

The Open Container Project: An Opportunity to Deliver True Container Interoperability

Today, Intel announced that it is one of the founding members of the Open Container Project (OCP), an effort focused on ensuring a foundation of interoperability across container environments. We were joined by industry leaders including Amazon Web Services, Apcera, Cisco, CoreOS, Docker, EMC, Fujitsu Limited, Goldman Sachs, Google, HP, Huawei, IBM, Joyent, the Linux Foundation, Mesosphere, Microsoft, Pivotal, Rancher Labs, Red Hat and VMware in the formation of this group, which will be established under the umbrella of the Linux Foundation.  This formation represents an enormous opportunity for the industry to “get interoperability right” at a critical point in the maturation of container use within cloud environments.




Why is this goal important?  We know the tax that limited interoperability imposes on workload portability, and the limits it places on enterprises extracting the full value of the hybrid cloud.  We also know how difficult true interoperability is to achieve when it is not established in the early phases of technology maturity.  This is why container interoperability is an important part of Intel’s broader strategy for open cloud software innovation and enterprise readiness, and why we are excited to be joining other industry leaders in OCP.


Intel brings decades of experience working on open, industry-standard efforts to our work with OCP, and we have reason to be bullish about OCP’s ability to deliver on its goals.  We have the right players assembled to lead this program forward and the right commitments from vendors to contribute code and runtime to the effort.  We’re looking forward to helping lead this organization to rapid delivery on its goals, and we plan to apply what we learn in OCP to our broader engagements in container collaboration.


Our broader goal is squarely focused on delivering containers that are fully optimized for Intel platforms and ready for enterprise environments, as well as accelerating easy-to-deploy, container-based solutions to the market.  You may have seen our earlier announcement of collaboration with CoreOS on optimization of their Tectonic cloud software environment with Intel architecture to ensure enterprise capabilities.  That announcement also features work with leading solutions providers such as SuperMicro and RedApt on delivery of ready-to-deploy solutions at Tectonic GA.  At DockerCon this week, we are highlighting our engineering work to optimize Docker containers for Intel Cloud Integrity Technology, extending workload attestation from VM-based workloads to containers.  These are two examples of our broader efforts to ready containers for the enterprise, and they highlight the importance of the work of OCP.


If you are engaged in the cloud software arena, I encourage you to consider participation in OCP.  If you’re an enterprise considering integrating containers into your environment, the news of OCP should give you confidence in the portability of future container-based workloads, and evaluation of container solutions should be part of your IT strategy.

Read more >

Climbing the Trusted Stack with Intel CIT 3.0

Enterprises have a love-hate relationship with cloud computing. They love the flexibility. They love the economics. They hate the fact they can’t guarantee the infrastructure and applications running their businesses and hosting their corporate data are completely trusted and haven’t been tampered with by cyber criminals for nefarious purposes.


Even if organizations have confidence in the systems deployed in their data centers, in hybrid cloud environments, on-premise systems may be instantly and automatically supplemented by capacity from a public provider. How do we know and control where application instances are running? Who attests to their trust? For cloud service providers, how do they demonstrate the platforms they provide are secure and can be verified for compliance purposes? And how do we manage and orchestrate OS, VM, and application integrity across private and public clouds in an OpenStack environment? At Intel, we’re developing a solution for hardware-assisted workload integrity and confidentiality that can answer those questions and create a platform for trusted cloud computing.


Intel® Xeon® processors offer a hardware-based solution using Intel Trusted Execution Technology (TXT) and Trusted Platform Module (TPM) technology to attest to the integrity and trust of the platform. That lets us assure nothing has been tampered with and that the platform is running the authorized versions of firmware and software. To access and manage this capability, we provide Intel® Cloud Integrity Technology (CIT) 3.0 software.


At the OpenStack Summit in May, we demonstrated how we use Intel CIT 3.0 to verify a chain of trust at boot time from the hardware to the workload in a Linux/Docker and Linux/KVM environment. That includes the hardware, firmware, BIOS, hypervisor, OS, and the Docker engine itself. When integrated with OpenStack, we assure that when an application is launched, it launches in a trusted environment right up through its VM. In addition, VM images can be encrypted to assure their confidentiality. Intel CIT 3.0 provides Enterprise Ownership and Control in clouds through encrypted VM storage and enterprise-managed keys.
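The chain-of-trust idea can be illustrated with a short sketch. This is a conceptual model only, not the Intel CIT API: like a TPM’s platform configuration registers, each boot component’s hash is folded (“extended”) into a running register, so the final value attests to every stage in order. The component names below are hypothetical.

```python
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    # TPM-style extend: new value = SHA-256(old register || SHA-256(component))
    measurement = hashlib.sha256(component).digest()
    return hashlib.sha256(register + measurement).digest()

# Hypothetical boot-chain components, measured in order.
boot_chain = [b"firmware", b"bios", b"hypervisor", b"docker-engine"]

register = bytes(32)  # measurement registers start zeroed
for component in boot_chain:
    register = extend(register, component)

# An attestation service compares the reported register against a
# known-good value recorded from a trusted reference platform.
golden = bytes(32)
for component in boot_chain:
    golden = extend(golden, component)

print(register == golden)  # True: platform attests as untampered

# Any modified component changes every subsequent register value.
tampered = bytes(32)
for component in [b"firmware", b"modified-bios", b"hypervisor", b"docker-engine"]:
    tampered = extend(tampered, component)

print(tampered == golden)  # False: tampering is detected
```

Because each extend depends on the previous register value, a change at any stage propagates forward, which is why the single final value can vouch for the whole boot sequence.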


At DockerCon in San Francisco, we have taken that one step further. We have extended the chain of trust up through the Docker container image and the application itself to assure trusted launch of a containerized application.


For enterprises that need trusted cloud computing, it means:


  • You can assure at boot time that the platform running the Docker daemon or hypervisor has not been tampered with and is running correct versions.

  • You can assure when a VM or container is launched that the container and VM images—including the containerized application—have not been tampered with and are correct versions.

  • You can achieve the above when deploying VMs and containers from the same OpenStack controller to enable trusted compute pools.


VMs and containers can be launched from a dashboard, which also displays their execution and trust status. But the real power of the solution will come as these capabilities are integrated into orchestration software that can launch trusted containers transparently on trusted compute pools. And we are continuing our work to address storage and networking workloads like storage controllers, software-defined networking (SDN) controllers, and virtual network functions.


The demonstration at DockerCon is a proof of concept we built using CIT 3.0. We’re currently integrating with a select set of cloud service providers and security vendor partners and will announce general availability after that is complete. CIT 3.0 protects virtualized and containerized workloads (Docker containers) running on OpenStack-managed Ubuntu, RHEL, and Fedora systems with KVM/Docker. It also protects non-virtualized (bare metal) environments. If you have one of those environments running on Xeon TXT-enabled servers with TPM activated by the OEM, we invite you to try it out under our beta program.


Integrity and confidentiality assurance is becoming a critical requirement in private, public, and hybrid cloud infrastructures, and cloud service providers must offer trusted clouds to their customers to provide them with the confidence to move sensitive workloads into the cloud. Intel Cloud Integrity Technology 3.0 is the only infrastructure integrity solution in the market that offers complete chain of trust, from the hardware to the application. We think enterprises will be loving cloud computing a lot more.

Read more >

Caesars Entertainment Bets on Big Data and Wins

Three best practices for successful big data projects


Many people have asked me why only 27% of respondents in a recent consulting report believed their Big Data projects were successful.


I don’t know the particulars of the projects in the report, but I can comment on the key attributes of successful Big Data projects that I’ve seen.


Let’s look at an example. Intel recently published a case study about an entirely new Big Data analytics engine that Caesars Entertainment built on top of Cloudera Hadoop and a cluster of Xeon E5 servers. This analytics engine was intended to support new marketing campaigns targeted at customers with interests beyond traditional gaming, including entertainment, dining and online social gaming. The results of this project have been spectacular, increasing Caesars’ return on marketing programs and dramatically reducing the time to respond to important customer events.


Three ways that Caesars Entertainment got it right:


1. Pick a good use case


Caesars chose to improve the segmentation and targeting of specific marketing offers.  This is a great use case because it is a specific, well-defined problem that the Caesars analytics team already understands well.  It has the additional benefit that new unstructured and semi-structured data sources were available that could not be included in the previous generation of analysis.


Rizwan Patel, IT director, commented, “When it comes to implementation, it is … essential to select use cases that solve real business problems. That way, you have the backing of the company to do what it takes to make sure the use case is successful.”


2. Prioritize what data you include in your analysis


“We have a cross-functional team…that meets quarterly to prioritize and select use cases for implementation.”


This applies to both data and analytics. There is a common misconception that a data lake is like an ocean: Every possible source of data should flow into it.  My recommendation is to think of a data lake as a single pool where you can easily access all the data that is relevant to your projects. It takes a lot of effort to import, clean and organize each data source. Start with data you already understand.  Then layer in one or two additional sources, such as web clickstream data or call center text, to enrich your analysis.


3. Measure your results


“The original segments were not generating enough return on customer offers.”


It’s hard to declare a project a success if it has no measurable outcome.  This is particularly important for Big Data projects because there is often an unrealistic expectation that valuable insights will magically bubble to the surface of the data lake.  When this doesn’t happen, the project may be judged a failure, even when it has delivered real improvements on a meaningful metric. Be sure to define key metrics in advance and measure them before and after the project.


Your organization’s best odds


Big Data changes the game for data-driven businesses by removing obstacles to analyzing large amounts of data, different types of unstructured and semi-structured data, and data that requires rapid turnaround on results.


Give your organization the best odds possible for a successful Big Data project by following Caesars Entertainment’s good example.

Read more >

Internet of Things in Healthcare Helps Shift Focus from Cure to Prevention with MimoCare


The Internet of Things (IoT) is one of those subjects that tends to attract a lot of future-gazing about what may be possible in five, 10 or even 20 years’ time, but we’re very fortunate in the healthcare sector to be able to show real examples where IoT is having a positive impact for both patient and provider today.

IoT across Healthcare

It’s estimated that IoT in healthcare could be worth some $117 billion by 2020, and while that number may seem incomprehensibly large, it is worth remembering that IoT touches on so many areas of healthcare, from sensors and devices for recording and analysis through to the secure cloud and networks needed to transmit and store voluminous data.

When the UK Government published their ‘The Internet of Things: making the most of the Second Digital Revolution’ report, healthcare was one of the most talked-about areas, with IoT making a significant impact in helping to ‘shift healthcare from cure to prevention, and give people greater control over decisions affecting their wellbeing.’

Meaningful Use Today

Here at Intel in the UK we’re working with a fantastic company in the Internet of Things space that is having a real and meaningful impact for patient and provider. MimoCare’s mission is ‘to support independent living for the elderly and vulnerable’ using pioneering sensory-powered systems. And with an ageing population across Europe and the associated rise in healthcare costs, MimoCare is already helping to ‘shift healthcare from cure to prevention’ today.

I think it’s important to highlight that MimoCare’s work focuses on measuring the patient’s environment, rather than the patient. For example, sensors can be placed to record frequency of bathroom visits and a sudden variation from the normal pattern may indicate a urinary infection or dehydration.


Medication Box


The phrase ‘changing lives’ is sometimes overused, but when you read feedback from an elderly patient benefiting from MimoCare’s work, I think you’d agree that it is more than appropriate. MimoCare talked me through a fantastic example of an 89-year-old man who is the primary carer for his 86-year-old wife and is benefiting greatly from IoT in healthcare. The elderly gentleman has a pacemaker fitted, so he is required to take warfarin, but with his primary focus on caring for his wife, there is a risk that he may miss taking his own medication.


Using MimoCare sensors on the patient’s pill box enables close family to be alerted by SMS if medication is missed. The advantage to the patient is that both the sensors in the home and, importantly, the alert triggers are unobtrusive, meaning that the patient remains free from anxiety. If medication is missed, a gentle reminder via a phone call from a family member is all that is needed to ensure the patient takes the medication. And for the healthcare provider, the cost of providing care for the patient is significantly reduced too.
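The alert logic described above can be sketched in a few lines. This is a hypothetical illustration, not MimoCare’s actual implementation: the pill-box sensor reports opening events, and if no event falls within a grace window around a scheduled dose time, an SMS to family would be triggered.

```python
from datetime import datetime, timedelta

def dose_missed(box_openings, scheduled, grace=timedelta(minutes=30)):
    # True if no pill-box opening was recorded within the grace
    # window around the scheduled dose time.
    start, end = scheduled - grace, scheduled + grace
    return not any(start <= t <= end for t in box_openings)

# Hypothetical sensor events: the box was opened once, at 08:05.
events = [datetime(2015, 6, 22, 8, 5)]

morning_dose = datetime(2015, 6, 22, 8, 0)
evening_dose = datetime(2015, 6, 22, 20, 0)

print(dose_missed(events, morning_dose))  # False: dose taken on time
print(dose_missed(events, evening_dose))  # True: trigger an SMS to family
```

The grace window is what keeps the system unobtrusive: a dose taken a few minutes late raises no alert, and only a genuinely missed window escalates to the family.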

The elderly male patient said, “I really like the medication box as it feels like something for me. It’s nice to know someone is keeping an eye out to help remind me to take my medication daily and on time.  In fact last time I visited the surgery they were able to reduce my warfarin and I’m sure that’s because I’m now taking it regularly.” Read more on how MimoCare is using sensors in the home to help the elderly stay independent and out of hospital.

Big Data, Big Possibilities

I’m really excited about the possibilities of building up an archive of patient behaviour in their own home that will enable cloud analytics to produce probability curves to predict usual and unusual behaviour. It’s a fantastic example of how the more data we have, the more accurate we can be in predicting unusual behaviour and triggering alerts to patients, family and carers. And that can only be a positive when it comes to helping elderly patients stay out of hospital (and thus significantly reducing the cost of hospital admissions).
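As a rough illustration of how such analytics might flag unusual behaviour (an assumed, simplified model, not MimoCare’s analytics): build a baseline from historical daily event counts, then flag a day whose count deviates from that baseline by more than a few standard deviations.

```python
from statistics import mean, stdev

def unusual(history, today_count, threshold=3.0):
    # Flag today's event count if it deviates from the historical
    # baseline by more than `threshold` standard deviations.
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today_count != mu
    return abs(today_count - mu) / sigma > threshold

# Hypothetical baseline: daily bathroom-visit counts from recent days.
baseline = [6, 7, 5, 6, 8, 6, 7, 6, 5, 7]

print(unusual(baseline, 6))   # False: within the normal pattern
print(unusual(baseline, 15))  # True: alert family and carers
```

The more days of history in the baseline, the tighter and more reliable the estimate of “normal” becomes, which is exactly the more-data-means-better-prediction point made above.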

Intel has played a pivotal role in porting both software and hardware to improve the performance of the IoT gateway, and, through Wind River Linux, has provided enhanced data and network security, including down-the-wire device management for software updates and configuration changes.

Sensing the Future

But where will the Internet of Things take healthcare in the next 5-10 years? What I can say is that sensors will become more cost-effective, smaller and more power-efficient, meaning that they can be attached to a multitude of locations around the home. Combining this sensor data with that recorded by future wearable technology will give clinicians a 360-degree view of a patient at home, which will truly enable the focus to shift from cure to prevention.

I asked MimoCare’s Gerry Hodgson for his thoughts on the future too and he told me, “IoT and big data analytics will revolutionise the way care and support services are integrated. Today we have silos of information which hold vital information for coordinating emergency services, designing care plans, scheduling transport and providing family and community support networks. The projected growth in the elderly population means that it is imperative we find new ways of connecting local communities, families and healthcare professionals and integrating services.”

“Our cascade 3-D big data analytics provides a secure and globally scalable ecosystem that will totally revolutionise the way services are coordinated.  End to end, IoT sensors stream valuable data to powerful server platforms such as Hadoop which today provides an insight into what would otherwise be unobtainable.”

“I’m very excited about the future where sensors and analytics change the way we coordinate and deliver services on a huge scale.”


Read more >