ADVISOR DETAILS

RECENT BLOG POSTS

Taking Square Aim at Accelerating Cloud Adoption with Cisco, Dell and Red Hat

With the digital service economy scaling to $450B by 2020, companies are relying on their IT infrastructure to fuel business opportunity.  The role of the data center has never been more central to our economic vitality, and yet many enterprises continue to struggle to integrate the efficient and agile infrastructure required to drive the next generation of business growth.

 

At Intel, we are squarely focused on accelerating cloud adoption by working with the cloud software industry to deliver the capabilities required to fuel broad scale cloud deployment across a wide range of use cases and workloads.  We are ensuring that cloud software can take full advantage of Intel architecture platform capabilities to deliver the best performance, security, and reliability, while making it simpler to deploy and manage cloud solutions.

 

That’s why our latest collaboration with Red Hat to accelerate the adoption of OpenStack in the enterprise holds incredible promise.  We kicked off the OnRamp to OpenStack program in 2013, centering it on educational workshops, early trials, and customer PoCs. Today, we are excited to augment this collaboration with a focus on accelerating OpenStack deployments, building on our long-standing history of technical collaboration to speed feature delivery and drive broad proliferation of OpenStack in the enterprise.

 

This starts by expanding our focus on integrating enterprise-class features such as high availability of OpenStack services and tenants, ease of deployment, and rolling upgrades.  What does this entail?  With high availability of OpenStack services, we are ensuring an “always on” state for cloud control services.  High availability of tenants focuses on a number of capabilities, including improved VM migration and VM recovery from host failures.  Ease of deployment will help IT shops get up and running faster and add capacity easily whenever it is required.  Once the cloud is up and running, rolling upgrades enable OpenStack upgrades without downtime.

 

We’re also excited to have industry leaders Cisco and Dell join the program to deliver a selection of proven solutions to the market.  With their participation, we expect to upstream much of the work we’ve collectively delivered so that the entire open source community can leverage these contributions.  What does this mean to you? If you’re currently evaluating OpenStack and are seeking improved high availability features or predictable, well-understood upgrade paths, please reach out to us to find out more about what the collaboration members are delivering.  If you’re looking to evaluate OpenStack in your environment, the time is ripe to take action.  Take the time to learn more about Cisco, Dell, and Red Hat’s plans for delivering solutions based on the collaboration, and comment here if you have questions or feedback.

Read more >

Top 4 Questions (and Answers) about the New Compute Stick


There has been a lot of excitement and discussion around the new compute stick—the latest form factor to join the desktop family. Lately, I’ve been getting a bunch of questions about it: What is it? What do I need to use it? What can I do with it? Where can I get one? The fact that I’m receiving so many questions tells me there’s a lot of interest out there—and for good reason.

 

I’m really excited about this newest innovation and how it allows people to bring computing to new devices and new areas. Here are answers to some of the most common questions I’ve been getting, but please don’t hesitate to reach out if you don’t find the information you’re looking for.

 


What Is a Compute Stick?

 

To put it simply, compute sticks are small, light devices that turn any display with an HDMI input into a desktop computer. They’re barely bigger than a thumb drive, yet they add full computing functionality to the display they’re plugged into. And their size lets you bring a compute stick with you wherever you go.

What Do I Need to Use a Compute Stick?

 

All you need is a display that has an HDMI input and a wireless keyboard and mouse. What’s got people so excited is that total freedom. Just think: you can enjoy computer access anywhere you have those simple ingredients. Pretty cool.

 

What Can I Do with a Compute Stick?

 

You can do many of the same things you love to do on your computer. We’re talking searching the Web, sharing photos, doing email, and keeping up on social media. Additionally, you can stream content from a local network, or any Internet source, allowing you to access the content you want on the displays you want. With a compute stick you can also add simple digital signage capabilities to any HDMI display. Finally, its size and portability free you to bring your computing with you, whether you’re on a business trip or a vacation. Just plug it into the back of the TV in your hotel room and your display can be turned into an engaging, connected device.

 

Where Can I Get a Compute Stick?

 

There are several compute stick devices in the marketplace today, including the Intel Compute Stick, and we expect to see more from other manufacturers in the coming weeks and months.

 

If you have any other questions, please let me know in the comments below or on social media with #IntelDesktop, and I’ll address them in future posts.

Read more >

The Open Container Project: An Opportunity to Deliver True Container Interoperability

Today, Intel announced that it is one of the founding members of the Open Container Project (OCP), an effort focused on ensuring a foundation of interoperability across container environments. We were joined by industry leaders including Amazon Web Services, Apcera, Cisco, CoreOS, Docker, EMC, Fujitsu Limited, Goldman Sachs, Google, HP, Huawei, IBM, Joyent, the Linux Foundation, Mesosphere, Microsoft, Pivotal, Rancher Labs, Red Hat, and VMware in forming this group, which will be established under the umbrella of the Linux Foundation.  This formation represents an enormous opportunity for the industry to “get interoperability right” at a critical point in the maturation of container use within cloud environments.

 

 

 

Why is this goal important?  We know the tax that limited interoperability imposes on workload portability, and how it limits enterprises’ ability to extract the full value of the hybrid cloud.  We also know how hard true interoperability is to achieve when it is not established in the early phases of technology maturity.  This is why container interoperability is an important part of Intel’s broader strategy for open cloud software innovation and enterprise readiness, and why we are excited to join other industry leaders in OCP.

 

Intel brings decades of experience in open, industry-standard efforts to our work with OCP, and we have reason to be bullish about OCP’s ability to deliver on its goals.  We have the right players assembled to lead this program forward and the right commitments from vendors to contribute code and a runtime to the effort.  We’re looking forward to helping lead this organization to rapid delivery on its goals, and we plan to apply what we learn in OCP to our broader engagements in container collaboration.

 

Our broader goal is squarely focused on delivering containers that are fully optimized for Intel platforms and ready for enterprise environments, and on accelerating easy-to-deploy, container-based solutions to market.  You may have seen our earlier announcement of a collaboration with CoreOS to optimize their Tectonic cloud software environment for Intel architecture to ensure enterprise capabilities.  That announcement also features work with leading solution providers such as SuperMicro and RedApt on delivering ready-to-deploy solutions at Tectonic GA.  At DockerCon this week, we are highlighting our engineering work to optimize Docker containers for Intel Cloud Integrity Technology, extending workload attestation from VM-based workloads to containers.  These are two examples of our broader efforts to ready containers for the enterprise, and they highlight the importance of the work of OCP.

 

If you are engaged in the cloud software arena, I encourage you to consider participating in OCP.  If you’re an enterprise considering integrating containers into your environment, the news of OCP should give you confidence in the portability of future container-based workloads, and evaluating container solutions should be part of your IT strategy.

Read more >

Climbing the Trusted Stack with Intel CIT 3.0

Enterprises have a love-hate relationship with cloud computing. They love the flexibility. They love the economics. They hate the fact they can’t guarantee the infrastructure and applications running their businesses and hosting their corporate data are completely trusted and haven’t been tampered with by cyber criminals for nefarious purposes.


Even if organizations have confidence in the systems deployed in their data centers, in hybrid cloud environments on-premises systems may be instantly and automatically supplemented by capacity from a public provider. How do we know and control where application instances are running? Who attests to their trust? How do cloud service providers demonstrate that the platforms they provide are secure and can be verified for compliance purposes? And how do we manage and orchestrate OS, VM, and application integrity across private and public clouds in an OpenStack environment? At Intel, we’re developing a solution for hardware-assisted workload integrity and confidentiality that can answer those questions and create a platform for trusted cloud computing.

 

Intel® Xeon® processors offer a hardware-based solution, using Intel® Trusted Execution Technology (Intel® TXT) and Trusted Platform Module (TPM) technology to attest to the integrity and trust of the platform. That lets us assure that nothing has been tampered with and that the platform is running the authorized versions of firmware and software. To access and manage this capability, we provide Intel® Cloud Integrity Technology (CIT) 3.0 software.

 

At the OpenStack Summit in May, we demonstrated how we use Intel CIT 3.0 to verify a chain of trust at boot time, from the hardware to the workload, in Linux/Docker and Linux/KVM environments. That chain includes the hardware, firmware, BIOS, hypervisor, OS, and the Docker engine itself. When integrated with OpenStack, we assure that when an application is launched, it is launched in a trusted environment right up through its VM. In addition, VM images can be encrypted to assure their confidentiality. Intel CIT 3.0 provides enterprise ownership and control in clouds through encrypted VM storage and enterprise-managed keys.
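To make the idea of a measured chain of trust more concrete, here is a rough, illustrative Python sketch of how each layer’s measurement can be folded into a running hash, in the spirit of a TPM PCR extend operation. This is not Intel CIT code; the component list and the known-good value are hypothetical.

```python
import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new register value = SHA-256(old register || measurement)."""
    return hashlib.sha256(register + measurement).digest()

# Hypothetical boot chain: each component is measured before it takes control.
boot_chain = [b"firmware-image", b"bios-image", b"hypervisor-image",
              b"os-kernel", b"docker-engine", b"container-image"]

register = b"\x00" * 32  # the register starts at a known value
for component in boot_chain:
    register = extend(register, hashlib.sha256(component).digest())

# An attestation service compares the final value against a known-good
# measurement recorded for this platform when it was provisioned.
known_good = register  # stand-in; in practice this comes from a whitelist
print("platform trusted:", register == known_good)
```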

 

At DockerCon in San Francisco, we have taken that one step further. We have extended the chain of trust up through the Docker container image and the application itself, to assure trusted launch of a containerized application.

 

For enterprises that need trusted cloud computing, it means:

 

  • You can assure at boot time that the platform running the Docker daemon or hypervisor has not been tampered with and is running correct versions.

  • You can assure when a VM or container is launched that the container and VM images—including the containerized application—have not been tampered with and are correct versions.

  • You can achieve the above when deploying VMs and containers from the same OpenStack controller to enable trusted compute pools.

 

VMs and containers can be launched from a dashboard, which also displays their execution and trust status. But the real power of the solution will come as these capabilities are integrated into orchestration software that can launch trusted containers transparently on trusted compute pools. And we are continuing our work to address storage and networking workloads such as storage controllers, software-defined networking (SDN) controllers, and virtual network functions.

 

The demonstration at DockerCon is a proof of concept we built using CIT 3.0. We’re currently integrating with a select set of cloud service providers and security vendor partners and will announce general availability after that work is complete. CIT 3.0 protects virtualized and containerized workloads (Docker containers) running on OpenStack-managed Ubuntu, RHEL, and Fedora systems with KVM/Docker. It also protects non-virtualized (bare-metal) environments. If you have one of those environments running on TXT-enabled Intel Xeon servers with the TPM activated by the OEM, we invite you to try it out under our beta program.

 

Integrity and confidentiality assurance is becoming a critical requirement in private, public, and hybrid cloud infrastructures, and cloud service providers must offer trusted clouds to give their customers the confidence to move sensitive workloads into the cloud. Intel Cloud Integrity Technology 3.0 is the only infrastructure integrity solution on the market that offers a complete chain of trust, from the hardware to the application. We think enterprises will be loving cloud computing a lot more.

Read more >

Caesars Entertainment Bets on Big Data and Wins

Three best practices for successful big data projects

 

Many people have asked me why only 27% of respondents in a recent consulting report believed their Big Data projects were successful.

 

I don’t know the particulars of the projects in the report, but I can comment on the key attributes of successful Big Data projects that I’ve seen.

 

Let’s look at an example. Intel recently published a case study about an entirely new Big Data analytics engine that Caesars Entertainment built on top of Cloudera Hadoop and a cluster of Xeon E5 servers. This analytics engine was intended to support new marketing campaigns targeted at customers with interests beyond traditional gaming, including entertainment, dining and online social gaming. The results of this project have been spectacular, increasing Caesars’ return on marketing programs and dramatically reducing the time to respond to important customer events.


Three ways that Caesars Entertainment got it right:

 

1. Pick a good use case

 

Caesars chose to improve the segmentation and targeting of specific marketing offers.  This is a great use case because it is a specific, well-defined problem that the Caesars analytics team already understands well.  It has the additional benefit that new unstructured and semi-structured data sources were available that could not be included in the previous generation of analysis.

 

Rizwan Patel, IT director, commented, “When it comes to implementation, it is … essential to select use cases that solve real business problems. That way, you have the backing of the company to do what it takes to make sure the use case is successful.”

 

2. Prioritize what data you include in your analysis

 

“We have a cross-functional team…that meets quarterly to prioritize and select use cases for implementation.”

 

This applies to both data and analytics. There is a common misconception that a data lake is like an ocean: Every possible source of data should flow into it.  My recommendation is to think of a data lake as a single pool where you can easily access all the data that is relevant to your projects. It takes a lot of effort to import, clean and organize each data source. Start with data you already understand.  Then layer in one or two additional sources, such as web clickstream data or call center text, to enrich your analysis.
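As a minimal illustration of layering in one additional source, the sketch below (Python with pandas) enriches a core customer table with aggregated web clickstream counts. The tables and column names are hypothetical and are not Caesars’ actual schema.

```python
import pandas as pd

# Core data the analytics team already understands (hypothetical).
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "segment": ["dining", "entertainment", "gaming"],
})

# One additional source: raw web clickstream events (hypothetical).
clicks = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "page": ["shows", "dining", "shows", "poker", "poker", "dining"],
})

# Aggregate the new source first, then join it to the data you already trust.
click_counts = (clicks.groupby("customer_id").size()
                .rename("click_count").reset_index())
enriched = customers.merge(click_counts, on="customer_id", how="left")
print(enriched)
```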

 

3. Measure your results

 

“The original segments were not generating enough return on customer offers.”

 

It’s hard to declare a project a success if it has no measurable outcome.  This is particularly important for Big Data projects because there is often an unrealistic expectation that valuable insights will magically bubble to the surface of the data lake.  When this doesn’t happen, the project may be judged a failure, even when it has delivered real improvements on a meaningful metric. Be sure to define key metrics in advance and measure them before and after the project.

 

Your organization’s best odds

 

Big Data changes the game for data-driven businesses by removing obstacles to analyzing large amounts of data, different types of unstructured and semi-structured data, and data that requires rapid turnaround on results.

 

Give your organization the best odds possible for a successful Big Data project by following Caesars Entertainment’s good example.

Read more >

Internet of Things in Healthcare Helps Shift Focus from Cure to Prevention with MimoCare

 

The Internet of Things (IoT) is one of those subjects that tends to invite a lot of future-gazing about what may be possible in five, 10 or even 20 years’ time, but we’re very fortunate in the healthcare sector to be able to show real examples where IoT is having a positive impact for both patient and provider today.


IoT across Healthcare

It’s estimated that IoT in healthcare could be worth some $117 billion by 2020, and while that number may seem incomprehensibly large, it is worth remembering that IoT touches so many areas of healthcare, from sensors and devices for recording and analysis through to the secure cloud and networks needed to transmit and store voluminous data.


When the UK Government published its ‘The Internet of Things: making the most of the Second Digital Revolution’ report, healthcare was one of the most talked-about areas, with IoT making a significant impact in helping to ‘shift healthcare from cure to prevention, and give people greater control over decisions affecting their wellbeing.’


Meaningful Use Today

Here at Intel in the UK we’re working with a fantastic company in the Internet of Things space that is having a real and meaningful impact for patient and provider. MimoCare’s mission is ‘to support independent living for the elderly and vulnerable’ using pioneering sensor-powered systems. And with an ageing population across Europe and the associated rise in healthcare costs, MimoCare is already helping to ‘shift healthcare from cure to prevention’ today.


I think it’s important to highlight that MimoCare’s work focuses on measuring the patient’s environment, rather than the patient. For example, sensors can be placed to record frequency of bathroom visits and a sudden variation from the normal pattern may indicate a urinary infection or dehydration.

 

Medication Box



The phrase ‘changing lives’ is sometimes overused, but when you read feedback from an elderly patient benefiting from MimoCare’s work, I think you’d agree that it is more than appropriate. MimoCare talked me through a fantastic example of an 89-year-old man who is the primary carer for his 86-year-old wife and is benefiting greatly from IoT in healthcare. The elderly gentleman has a pacemaker fitted, so he is required to take warfarin, but with his primary focus on caring for his wife there is a risk that he may miss taking his own medication.

 

Using MimoCare sensors on the patient’s pill box enables close family to be alerted by SMS if medication is missed. The advantage to the patient is that both the sensors in the home and, importantly, the alert triggers are unobtrusive, meaning that the patient remains free from anxiety. If medication is missed, a gentle reminder via a phone call from a family member is all that is needed to ensure the patient takes it. And for the healthcare provider, the cost of providing care for the patient is significantly reduced too.


The elderly male patient said, “I really like the medication box as it feels like something for me. It’s nice to know someone is keeping an eye out to help remind me to take my medication daily and on time.  In fact last time I visited the surgery they were able to reduce my warfarin and I’m sure that’s because I’m now taking it regularly.” Read more on how MimoCare is using sensors in the home to help the elderly stay independent and out of hospital.


Big Data, Big Possibilities

I’m really excited about the possibility of building up an archive of patient behaviour in the home that will enable cloud analytics to produce probability curves predicting usual and unusual behaviour. It’s a fantastic example of how the more data we have, the more accurately we can predict unusual behaviour and trigger alerts to patients, family and carers. And that can only be a positive when it comes to helping elderly patients stay out of hospital (and thus significantly reducing the cost of hospital admissions).
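As a rough sketch of how such a baseline-and-alert model can work, the Python snippet below flags a day whose sensor-event count deviates sharply from a patient’s own historical pattern. The data and threshold are hypothetical; this is illustrative only and is not MimoCare’s actual analytics.

```python
import statistics

# Hypothetical daily counts of bathroom-visit sensor events for one patient.
baseline_days = [6, 7, 5, 6, 8, 7, 6, 5, 7, 6, 6, 7, 8, 6]
today = 13

mean = statistics.mean(baseline_days)
spread = statistics.pstdev(baseline_days)

# Flag a day that deviates sharply from this patient's own normal pattern;
# such a change may indicate, for example, a urinary infection or dehydration.
z_score = (today - mean) / spread
if abs(z_score) > 2.5:
    print(f"ALERT: unusual activity ({today} events vs typical {mean:.1f}); notify family/carer")
else:
    print("Activity is within the patient's normal range")
```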


Intel has played a pivotal role in porting both software and hardware to improve the performance of the IoT gateway, and has also provided, through Wind River Linux, enhanced data and network security, including down-the-wire device management for software updates and configuration changes.


Sensing the Future

But where will the Internet of Things take healthcare in the next 5-10 years? What I can say is that sensors will become more cost-effective, smaller and more power-efficient, meaning they can be attached to a multitude of locations around the home. Combining this sensor data with that recorded by future wearable technology will give clinicians a 360-degree view of a patient at home, which will truly enable the focus to shift from cure to prevention.


I asked MimoCare’s Gerry Hodgson for his thoughts on the future too and he told me, “IoT and big data analytics will revolutionise the way care and support services are integrated. Today we have silos of information which hold vital information for coordinating emergency services, designing care plans, scheduling transport and providing family and community support networks. The projected growth in the elderly population means that it is imperative we find new ways of connecting local communities, families and healthcare professionals and integrating services.”


“Our cascade 3-D big data analytics provides a secure and globally scalable ecosystem that will totally revolutionise the way services are coordinated.  End to end, IoT sensors stream valuable data to powerful server platforms such as Hadoop which today provides an insight into what would otherwise be unobtainable.”

“I’m very excited about the future where sensors and analytics change the way we coordinate and deliver services on a huge scale.”

 

Read more >

THAT Project is a REAL Cluster! Amplify Your Value: Avoid CF Projects!



SNAFU, FUBAR, CF… if you have been in IT longer than five minutes, you have been involved in a project that has been described as a real cluster. (If you are confused, look it up in the Urban Dictionary.)


In fact, these words are used so often to describe IT projects that when the speaker said, “You know, there are two types of IT projects: AC and CF,” the room of IT executives exploded in laughter, many of them shaking their heads and shuddering as if having flashbacks of projects past, present and future!


“No, no, no, I’m not talking about THAT kind of CF project. It’s a grading scale. There are some projects that you do, that no matter how well you execute them, no matter if you hit the ball out of the park, the best grade you will ever receive is a ‘C’. No one is EVER going to walk into your office, shake your hand and say ‘Thank you for delivering my email today’. No one…EVER. On the other hand, if the project goes poorly, you will most certainly receive an ‘F’ (and you WILL be in a ‘cluster’ of a situation!). Conversely, there are some projects that when executed properly you will receive rave reviews and most certainly earn an ‘A’. When THOSE projects miss the mark a bit, you can earn a ‘B’ or a ‘C’.”


As the panel continued its discussion on stage, their words faded into the background as I thought about what I had just heard. He was RIGHT! I started thinking about all the projects on our plate. I would categorize most of them as ‘CF’ projects. The speaker had even called out the biggest project facing us at the moment…email! We were faced with another massive upgrade of our email system. It would take months and tens of thousands of dollars…and in the end? We would STILL be delivering email. It was one of those “epiphany moments”! Why on earth would we ever want to go down that road again, and again, and again? I’d like to say it was a Cecil B. DeMille moment and the heavens parted, the sun shone down, a rainbow formed, and a booming voice said, “Cloud!”, but honestly, it was more like a quiet whisper inside my head. “Cloud…move the CF work to the cloud”.

 

Before we even returned from the conference, we evaluated all of our projects and assigned them to one of three categories: maintain (CF projects), grow (BC projects) and innovate (AB projects). For every project in the maintain category, we began to seek out cloud-based solutions. The projects in the other two categories would take more time, but eventually we took a cloud-first approach to those projects as well.


Slowly at first, and then faster and faster we were able to divert more of our resources (human and financial) to growth and innovation. To me that is the true promise of cloud: moving the needle from 80% maintenance to 70%, 60%…and beyond. Sure, it can save some money, but what it really does is make you more agile and more elastic. You can quickly focus your talent on business initiatives that are truly game changers, not just another hardware or software upgrade.


So, this journey – our journey – started with email. We immediately changed course on the project and began to evaluate the two heavyweights in cloud-based email. A few short months later, we were “in the cloud”. We retired four servers and tons of storage. The benefits were immediate.


There are risks with this approach. The number one question I get asked about our journey is “what was the impact on your staff?”, very quickly followed by, “how did you prepare them?”. Admittedly, it is a significant change. When we moved our email to the cloud, our senior engineer resigned. He had always seen himself as an “Exchange guy” and couldn’t see himself in the new reality. As we continued to explore cloud options, I was very nervous others would follow suit. However, as we continued to discuss the rewards of moving in the direction of the cloud (less of the mundane, more exploration of new ideas and concepts; bright shiny objects notwithstanding, exploring new ideas ALWAYS motivates IT pros!), and as we started to SEE those rewards, the team got behind the vision and expanded it. In all honesty, they prepared themselves. Sure, I would share articles, thoughts, and ideas about the future, but they took it from there.


Together, we began to map out our journey, our journey into the unknown.


Next month: Amplify Your Value: Draw Your Own Maps


The series “Amplify Your Value” explores our five-year plan to move from an ad hoc, reactionary IT department to a value-add, revenue-generating partner. #AmplifyYourValue


We could not have made this journey without the support of several partners, including, but not limited to: Bluelock, Level 3 (TWTelecom), Lifeline Data Centers, Netfor, and CDW. (mentions of partner companies should be considered my personal endorsement based on our experience and on our projects and should NOT be considered an endorsement by my company or its affiliates).


Jeffrey Ton is the SVP of Corporate Connectivity and Chief Information Officer for Goodwill Industries of Central Indiana, providing vision and leadership in the continued development and implementation of the enterprise-wide information technology and marketing portfolios, including applications, information & data management, infrastructure, security and telecommunications.


Find him on LinkedIn.

Follow him on Twitter (@jtongici)

Add him to your circles on Google+

Check out more of his posts on Intel’s IT Peer Network

Read more from Jeff on Rivers of Thought

Read more >

The Growing Cybersecurity Perils of Connected Transportation

There exists a direct relationship between our reliance on technology and the potentially detrimental impacts of cybersecurity compromises.  The more integrated and dependent we become as a society on computing, the more relevant the cybersecurity risks become.  Transportation will be the next great test case we face, one which potentially puts our lives at risk.

 

Computing technology is a powerful tool, one we embrace more and more every day to build, experience, and evolve our world.  Communication, manufacturing, and transportation are just a few of the industries being revolutionized.  But as we connect and extend more control to devices that intersect life-safety roles, cybersecurity weaknesses can play a more visceral role in our lives.  Nowhere will this be more apparent than in the changes taking place in the transportation industry.  Our cars, planes, and trains are becoming smarter, more connected, and more autonomous.

 

For some time, computers have played a passive role, monitoring critical systems in order to optimize performance or report problems in our vehicles.  But as we evolve, we demand more.  Nowadays, computers take a more active role and are given direct access to vehicle navigation, steering, speed, and braking controls.  Take, for example, the simple auto-park features in newer cars.  It seems harmless, as it occurs at very slow speeds, but consider the access the on-board computers must have to successfully get the vehicle into that tight spot.  The vehicle’s sensors determine the path and identify obstacles, while the computers take control of the steering, acceleration, and braking.  Basically, the computers must have access to all the critical functions of the vehicle.  This has the potential for greater efficiency, safety, and usability.  But it can equally create disastrous situations.

 

Researchers are hacking the software, networks, and hardware systems in cars and gaining access to these systems.  What happens when malicious attackers do this to a vehicle?  They could cause a fatal accident.  Now think bigger.  What happens when malicious attackers do this to hundreds or thousands of vehicles simultaneously?  Disaster. 

 

Much of the current vulnerability research is limited to a single vehicle, with the attacker in close proximity.  Recently, a 14-year-old hacked a car with $15 worth of off-the-shelf equipment.  Vulnerability experts were also able to hack a Tesla Model S to make the doors open while the vehicle was being driven.

 

The researchers are typically inside the vehicle, connecting to a plane’s controls through the on-board entertainment system or accessing a car’s computers via the diagnostic port, for example.  Vulnerabilities also extend to flaws in software doing unexpected things and to external control networks such as traffic routing systems.  A crash of an Airbus transport plane has been attributed to a software bug that wiped critical engine control data, killing the four test crew members.  A recent report found hackers could ‘crash trains’ using a cyber attack.

 

But this is not where the research will stop.  Cars, trains, and planes are connecting to the Internet and to private remote networks in greater numbers.  What researchers are doing today with direct access to a vehicle may eventually be done remotely by a hacker halfway around the world.

 

Manufacturers, governments, and consumers must think hard about the potential consequences, as they may be life-threatening.  This is not a denial-of-service attack making a webpage unavailable.  These are people’s lives at stake, and the problem must be approached with the appropriate level of seriousness and forethought.  Consumer Reports recently called on its members to pressure Congress for more protections.

 

Privacy is also an issue.  An unflattering report from U.S. Senator Ed Markey, evaluated 16 manufacturers and found “a clear lack of appropriate security measures to protect drivers against hackers who may be able to take control of a vehicle or against those who may wish to collect and use personal driver information.”

 

Vehicles today have evolved to become a miniature electronic ecosystem, with nodes, networks, multiple processors, actuators, sensors, input interfaces, and displays.  Each vehicle must have the right architecture, controls, and resiliency to defend itself, just like a modern enterprise.  It is very challenging and largely unexplored territory.

 

The good news is that many vehicle companies are concerned.  They are exploring, investing, and working hard to understand the problem.  But we, as consumers and as the government bodies chartered to protect the public, must also be actively involved in the discussion.  We all have a stake in this matter, must set the right expectations, and must hold manufacturers accountable for delivering and operating safe products, both at the point of sale and across the vehicle’s lifetime.  The next few years will be critical in setting the stage, as manufacturers are pressured by heated competition to deliver new automated capabilities, seek to meet any emerging regulatory requirements, and struggle to make sure their products are safe from ever more sophisticated cybersecurity threats.  We all may be in for a bumpy ride.

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

 

Read more >

Tackling Information Overload in Industrial IoT Environments

This blog was originally posted on March 25, 2015 on blogs.intel.com/IoT – click to view

 

Feeling inundated by too much industrial IoT data? Well, you’re not alone. According to an Economist Intelligence Unit report, most manufacturers are experiencing information overload due to the increasing volume of data generated by automated processes.

Senior factory executives in the United States and Europe were interviewed, and of those, 86 percent reported major increases in shop floor data collection over the past two years, while only 14 percent said they had no problems managing the overabundance of data. Despite challenges, two-thirds said data insights have led to annual quality and efficiency savings of 10 percent or more.


Doing More with Industrial IoT Data

What’s key is turning data from the entire production flow into actionable information that can help produce tangible benefits, including:

  • Higher profitability – Increase yields, and reduce spares and energy usage by applying big data analytics to optimize your manufacturing flow.
  • Improved supply chain management – Use cloud-based applications with short message service (SMS) features to better coordinate and integrate workflow among key players.
  • Faster time to market – Customize production flow in real time to more quickly satisfy customer requests.
  • Better data handling – Automate the collection, aggregation, and analysis of massive amounts of factory data – upwards of 5 GB per machine per week – to improve decision making.

 

Internet of Things Data Automation

This can be done with solutions for the Internet of Things (IoT) that can help you get the most out of sensor data and the associated analytical modeling. They collect, filter, and analyze data in more efficient and constructive ways.

Still, a lot of useful information is locked away in production equipment. One IoT solution is to use an industrial PC with gateway functionality to tie IoT sensor networks together and put factory data into a more manageable form.
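To illustrate the idea, here is a minimal Python sketch of the kind of edge filtering and aggregation a gateway-class industrial PC might perform before forwarding data to the cloud. The sensor name, readings, and alarm threshold are hypothetical, and this is not the actual API of any Intel gateway product.

```python
import json
import statistics

# Hypothetical raw readings collected from one machine over one minute.
raw_readings = [{"sensor": "spindle_temp_c", "value": v}
                for v in (71.2, 71.5, 71.4, 95.3, 71.6, 71.3)]

ALARM_THRESHOLD_C = 90.0  # hypothetical alarm limit

values = [r["value"] for r in raw_readings]
summary = {
    "sensor": "spindle_temp_c",
    "count": len(values),
    "mean": round(statistics.mean(values), 2),
    "max": max(values),
    "alarms": [v for v in values if v > ALARM_THRESHOLD_C],
}

# Only this compact summary (plus any alarm values) is forwarded to the cloud,
# instead of every raw sample, which keeps transmission and storage costs down.
print(json.dumps(summary))
```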

Making this easier, Intel® Industrial Solutions System Consolidation Series software runs an assortment of industrial and IoT workloads on a single platform. It’s a production-ready virtualization software stack supporting real-time, embedded, and general-purpose operating systems, giving equipment suppliers a great deal of flexibility to include gateway, firewall, control, and other applications.

Intel IoT Industrial Gateway

 

With this solution, multiple factory functions can all run on a single system, decreasing operating expense, factory footprint, energy consumption, and integration and support effort.

The system consolidation software runs on a family of 4th generation Intel® Core™ vPro™ processors that feature Intel® Virtualization Technology (Intel® VT). This combination delivers the computing performance needed to simultaneously host multiple instances of Wind River VxWorks* RTOS and Wind River Linux* 7.0, as well as one instance of Microsoft Windows* 7. Also included is McAfee Embedded Control* – a “deploy-and-forget” security solution with a small footprint and low overhead. Learn more about Intel® Industrial Solutions System Consolidation Series software.

____

Stay up-to-date with Intel’s IoT developments—keep your eyes on this blog, our website, and on Facebook and Twitter.

Read more >

5 Solutions Showing Off Intel IoT Gateway and Ecosystem Collaboration

This blog was originally posted on April 9, 2015 on blogs.intel.com/IoT – click to view

Internet of Things solutions are all about connections that transform business and change lives — from predictive maintenance and operational efficiencies to personalized healthcare and beyond. When you consider that more than 85 percent of today’s legacy systems are unconnected, you’re reminded of perhaps the greatest IoT challenge: integrating the newest technology with existing infrastructure in order to take full advantage of cloud connectivity and IoT data management and analysis.

Intel® IoT Gateways are crucial to addressing this inherent complexity. By providing pre-integrated, pre-validated hardware and software building blocks, the gateways speed the work of connecting legacy and new systems and enable seamless, secure data flow between edge devices and the cloud. Plus, Intel gateways have the performance to provide critical analysis and intelligence at the edge, so that only useful data is sent to the cloud and both data transmission and storage costs are better managed.

Since no single person, company, or technology can enable the Internet of Things alone, gateway interoperability and ecosystem collaboration are also key elements to the growth and success of IoT.

In this post, I’ve gathered five examples of how we at Intel are collaborating with our robust ecosystem to deliver the critical interoperability that lets you choose your desired development environment and gets you to exciting IoT innovation, faster. Many include step-by-step guides for building your own IoT solution, so explore, enjoy, and be inspired.



Running Oracle Java* on Intel IoT Gateway Solutions


An Internet of Things solution serving machine-to-machine (M2M) and mobile environments integrates the Java runtime environment.


Today, the Java runtime environment is one of the most important application development components of an IoT gateway solution. Learn about the interoperability of the Intel IoT Gateways with Oracle Java*, a combination which delivers the analytics, security, and performance capabilities required to process large volumes of events in near real time.
Get the details »


Scaling Internet of Things Data Movement with Amazon Kinesis*
Collect, cache, and distribute high-throughput, low-latency machine data coming from Intel IoT Gateways for real-time processing.


In the past, collecting, storing, and analyzing high-throughput information required complex software and a lot of infrastructure that was expensive to buy, provision, and manage. Today, it’s easier for companies to set up high-capacity pipes that can collect and distribute data in real time, at any scale, using Intel IoT Gateways to send each event to Amazon Kinesis to enable real-time processing. Learn more about how this collaboration can help you turn raw data into actionable information in a connected world.
Get the details »
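As a rough sketch of the pattern described above, the snippet below forwards a single gateway event to an Amazon Kinesis stream using boto3’s put_record call. The stream name, region, and event fields are hypothetical, and AWS credentials are assumed to be configured in the environment.

```python
import json
import boto3

# Assumes AWS credentials are available via the usual environment/config chain.
kinesis = boto3.client("kinesis", region_name="us-west-2")

# A hypothetical event produced at the gateway.
event = {"gateway_id": "gw-42", "sensor": "vibration", "value": 0.031}

kinesis.put_record(
    StreamName="iot-gateway-events",         # hypothetical stream name
    Data=json.dumps(event).encode("utf-8"),  # payload bytes
    PartitionKey=event["gateway_id"],        # groups records from the same gateway
)
```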


Simplifying Application Development for the Internet of Things
The Solution Family development framework running on Intel IoT Gateways reduces the effort required to communicate with legacy devices over various protocols and APIs.


In any IoT solution, devices and sensors must communicate to the outside world using a wide variety of protocols and APIs. To reduce the time and effort required to write and test this code, the Solution Family team has created a tool-based IoT development platform that enables rapid, extensible, and secure IoT development in minutes while preserving existing investments and reducing development costs by 80 to 90 percent. Read the solution brief for details, including a step-by-step guide for creating a simple deployment with the Solution Family product running on an Intel IoT Gateway.
Get the details »


Connecting Sensor Networks and Devices to the Cloud in Just Minutes

Intel IoT Gateways serving machine-to-machine (M2M) and mobile environments interoperate with an IBM messaging appliance via a simple command line interface.

Two major IoT challenges are connecting legacy devices to the Internet and ensuring that all IoT and M2M communications are seamless and secure. Now, organizations can quickly connect devices of all types to cloud infrastructure using systems developed by Intel and IBM that work seamlessly out of the box. The solution combines an IBM messaging appliance and an Intel IoT Gateway that communicate via the MQTT* protocol, which is much smaller and faster than HTTP.  Discover the details and see step-by-step instructions that can get you started.
Get the details »
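For illustration, here is a minimal sketch of publishing a sensor reading over MQTT using the Eclipse Paho Python client (assuming its 1.x API). The broker hostname and topic are hypothetical stand-ins for the IBM messaging appliance endpoint.

```python
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "messaging.example.com"  # hypothetical appliance/broker address
TOPIC = "factory/line1/temperature"    # hypothetical topic

# Assumes the paho-mqtt 1.x client API.
client = mqtt.Client(client_id="intel-iot-gateway-01")
client.connect(BROKER_HOST, port=1883, keepalive=60)

reading = {"sensor": "temperature_c", "value": 22.7}
client.publish(TOPIC, json.dumps(reading), qos=1)  # qos=1: at-least-once delivery
client.disconnect()
```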


The IBM Internet of Things Foundation*: Connecting an Intel IoT Gateway to IBM Cloud Services*
Rapidly compose analytics applications, visualization dashboards, and mobile IoT apps.

The IBM Internet of Things Foundation is designed to help you derive value from IoT devices. On the Foundation’s associated developer site, you’ll find recipes that help you gain access to IBM’s cloud services. Craig Owen, a solutions architect with Intel, demonstrates in a short video how to connect an Intel IoT Gateway to the IBM IoT cloud with step-by-step instructions. Craig’s demo of the IoT Cloud Quickstart* connection is especially helpful.

Learn more about Intel IoT Gateways, join the conversation via Facebook and Twitter, and also make sure to check out the Intel® Internet of Things Solutions Alliance, a directory that lets you search for ready-to-go systems or foundational components to help you build your own innovative solutions.

Stay up-to-date with Intel’s IoT developments—keep your eyes on this blog, our website, and on Facebook and Twitter.

Read more >

Cisco and Intel’s Work on Ethernet Advances Software Defined Infrastructure

Intel has had a long-standing collaboration with Cisco to advance Ethernet technology, a relationship that is growing stronger as both companies work to evolve Ethernet to meet the needs of next-generation data centers.

 

One indication of this increasing teamwork was the recent announcement that Cisco has joined Intel Network Builders, our ecosystem of companies collaborating to build SDN and NFV solutions. We recently published a white paper to our Network Builders website about our joint work to make NFV part of a flexible, open and successful transformation for service provider customers.

 

I saw more fruit from this partnership at the Cisco Live conference in San Diego, where the two companies worked together on important networking-related technology demonstrations.

 

The first demo showcased new NBASE-T technology, which allows a 10GBASE-T connection to negotiate links at 5Gbps and 2.5Gbps over 100m of Cat 5e cable.  This capability is especially exciting for campus deployments and wireless access points.

 

Cisco and Intel have a long history of working together to ensure that our 10GBASE-T products work seamlessly together. And now, in the data center, we are seeing rapid growth in deployments of this technology. In fact, recent market projections from Crehan Research show that upgrades from 1GBASE-T to 10GBASE-T are driving the largest adoption phase ever for the 10GbE market.

 

Ethernet is the Best Interconnect for SDI

 

Ethernet is the best option to be the leading interconnect technology in next-generation software defined infrastructure (SDI)-based data centers. With 40GbE and 100GbE options, Ethernet has the throughput for the most demanding data centers as well as the low latency needed for the real-time applications.

 

The next step is to virtualize the network and network functions. To that end, we created a technology demo showing Intel® Ethernet controllers in a service chaining application using Network Service Headers (NSH).

 

Cisco initially developed NSH, and it is now working its way through the IETF standardization process. NSH creates a virtual packet-processing pipeline on top of a physical network and uses a chaining header to direct packets to particular VM-based services. This service chaining is an important element in next-generation network functions virtualization (NFV) services.

 

This technology demonstration – which drew big crowds earlier this year at the Mobile World Congress in Barcelona – pairs Cisco’s UCS platform with Intel’s Ethernet Converged Network Adapters XL710 and 100GbE Red Rock Canyon adapters.

 

Red Rock Canyon is our new multi-host controller that is under development. This device provides low-latency, high-bandwidth interfaces to Intel® Xeon processing resources. It also provides flexible Ethernet interfaces including 100GbE ports. Red Rock Canyon has advanced frame-processing features including tunneling and forwarding frames using network service chaining headers at full line rate up to 100Gbps.

 

We have two videos from Cisco Live, one of the NSH demo and one of the NBASE-T demo if you want to see these technologies in action.

 

The future for Ethernet in the evolving software-defined datacenter is bright and I look forward to continuing our work with Cisco to develop and promote the key technologies to meet the needs of this evolving market.

Read more >

Communication: The Biggest Barrier to Digital Transformation


Do your business partners look at you like you’re speaking a different language when you talk about IT? An organization’s dependence on technology for success means that the IT team needs to always be thinking like the business. It’s up to today’s IT leaders to communicate the business value of innovation and map technology solutions to the specific business challenges, opportunities, and requirements of their business partners. Without this translation, we might as well be speaking a different language … and unfortunately there is not an app for that, as much as we’d like there to be.

 

Non-technical skills represent some of the most critical development areas for today’s IT professionals, managers, and leaders regardless of industry. The risk? Extinction or worse — irrelevance.

 

So when John Palinkas from the IT Transformation Institute reached out to me last year to discuss a partnership between his company and the Intel IT Center, it quickly became clear to me what was needed. We developed a Web show aimed at giving IT and business leaders some tools, tips, and techniques to better communicate with each other. The Transform IT Web show offers a series of live interviews with CxOs and industry thought leaders from a range of companies and industries. The show aims to provide real-world, practical advice that will pave the way for business transformation through digital disruption.

 


Source: Harvey Nash 2015 CIO Survey

 

Below are a few highlights from three Transform IT episodes that touch on digital disruption through cloud technology.

 

Embracing Cloud Through Cultural Transformation

 

Lance Weaver, CTO of GE Cloud Architecture, shares GE’s journey to cloud migration and how he dealt with the resistance to change that came from this transformation. Realizing the biggest barriers included company culture, Lance shares techniques that helped his organization to shift their mindset, embrace change, and improve business communications to successfully tackle a massive project like a corporate cloud migration.

 

Watch the full episode: Leading Your Enterprise into a Fast Future

Reinvent Your IT Career by Trying Something New

Diversification reduces risk, but also improves reward. Aziz Safa, Intel VP and General Manager of Enterprise Applications and Application Strategy, spent much of his career outside of IT. His experience as both a business and an IT leader has provided him with a unique view on how to initiate change as a technologist. These alternative experiences helped Aziz approach challenges differently: “The thinking process that needs to happen is not to get stuck with the difficulties of the past, but what’s possible to do moving forward.”

 

Watch the full episode: How Technologist Can Thrive Using Architecture, Integration & Flexibility

 

Why Honesty May be the Great Career Differentiator

 

Rich Roseman, former CIO of 21st Century Fox, was a self-professed cloud naysayer. During the U.S. Open, he discovered that embracing this technology was imperative. Rich executed two simultaneous cloud projects — a full-scale enterprise conversion and a green field startup implementation. In both instances, he found that honesty was a critical success factor. Rich shares that having the courage and integrity to stand up and highlight what is not working is the only way to improve transparency, opening clear lines of communication within IT departments and the businesses we support.


Watch the full episode: Why Honesty may be the Great Career Differentiator

 

I have derived a lot of value from these shows and our guests, and I’m equally inspired by the growing movement of IT transformation advocates discussing these topics on our monthly Transform IT Google Hangouts.

 

Today’s business and IT landscape is shifting fast, and the tools and technologies of the past may not be sufficient for the future. The Transform IT series gives IT and business professionals access to thought leaders who share personal insights on how to bridge the communication divide between IT and business.

 


Source: Harvey Nash 2015 CIO Survey

 

I invite you to become a fan and explore the full series, stay up-to-date on upcoming shows, lend your personal insights, and join the social conversation using #TransformIT.

Read more >

Empowering Our Private Cloud Through API Exposure

Back in the early 2000s, Amazon may have been the progenitor of today’s API economy, when the company’s CEO issued a mandate that all teams must expose their data and functionality through service interfaces. While Intel IT’s executives have not issued such a directive, we make sure that every layer in our enterprise private cloud exposes and consumes web services through APIs. In fact, this is a key component of our overall hybrid cloud strategy, as described in the enterprise private cloud white paper we recently published. As the IT Principal Engineer for Intel’s cloud efforts, I’ve taken APIs’ important role to heart and have integrated this concept into our architecture.

 



Why Is This so Important?

 

First, as Amazon foresaw, you cannot scale fast enough unless you automate, and to automate you need APIs. Intel IT, acting as its own cloud provider, requires automation and self-service. If all you have is a GUI with no scripting capabilities, you cannot automate. For example, you cannot scale out and add more applications, and—just as importantly—scale back as business needs change. Therefore, we strive to provide an API and a command-line interface for every layer in our private cloud—including IaaS, PaaS, and DBaaS.

 

Automation, enabled by APIs, also helps IT keep our costs down and do more with less—the IT mantra of the century. Current industry data suggests that in a highly automated environment, a single system admin can manage 1,000 servers or more. While we haven’t reached that point yet, we have made significant strides in increasing automation and providing more services without increasing cost. More importantly, automation is critical for business agility supported by self-service capabilities. As the speed of business increases, users need tools and automation to “do it themselves” as opposed to waiting for specialized personnel to act on their behalf.

 

Finally, exposing APIs is a critical part of our move to a hybrid cloud model, where workloads can be balanced among clouds by using policies. Without consistent API exposure, such a hybrid cloud model would be impossible.

 

How Have We Implemented API Exposure at Intel?

 

We strongly encourage our application developers to create cloud-aware applications—even the cloud itself should incorporate cloud-aware principles. Part of being cloud-aware is implementing small, stateless components designed to scale out and using web services to interact among components. We are heavily promoting the use of RESTful APIs for web services, for several reasons:

 

  • The RESTful model is easy for developers to use, with a small, well-defined set of methods (GET, POST, PUT, and DELETE).
  • REST is based on HTTP, which is designed to scale and is tolerant of network latency—very important in the cloud.
  • Idempotent calls (GET, PUT, and DELETE) return the same result when repeated with the same data, which facilitates retry and error-handling scenarios (see the sketch after this list).
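Here is the sketch referenced above: a small Python example of the retry pattern that idempotency enables, using the requests library against a hypothetical cloud API endpoint.

```python
import time
import requests

# Hypothetical IaaS endpoint: PUT is idempotent, so it is safe to retry.
URL = "https://cloud.example.com/api/v1/instances/web-01"
DESIRED_STATE = {"flavor": "medium", "count": 4}

def put_with_retry(url, payload, attempts=3, backoff_s=2.0):
    """Retry an idempotent PUT; repeating it cannot create duplicate resources."""
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.put(url, json=payload, timeout=10)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == attempts:
                raise
            time.sleep(backoff_s * attempt)  # simple linear backoff

result = put_with_retry(URL, DESIRED_STATE)
print(result)
```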

 

We’re adamant about API exposure at every layer of our private cloud for one reason: automation supports a highly agile environment and is essential to the success of our hybrid cloud strategy. Please check out the paper and tell us what you are doing in this space. I would love to read about everyone’s ideas.

Cathy

Catherine Spence is an Enterprise Architect and PaaS Lead for the Intel IT Cloud program.

Connect with Cathy on LinkedIn
Read more posts by Cathy on the ITPN

Read more >

Is Cloud Destined to be Purely Public?

 

51 percent of workloads are now in the cloud. Time to break through that ceiling?

 

 

At this point, we’re somewhat beyond discussions of the importance of cloud. It’s been around for some time, just about every person and company uses it in some form and, for the kicker, 2014 saw companies place more computing workloads in the cloud (51 percent) — through either public cloud or colocation — than they process in house.

In just a few years we’ve moved from every server sitting in the same building as those accessing it, to a choice between private and public cloud, and now to the beginning of the IT model du jour: hybrid cloud. Hybrid is fast becoming the model of choice, fusing the safety of an organisation’s private data centre with the flexibility of the public cloud. However, in today’s fast-paced IT world, as one approach becomes mainstream the natural reaction is to ask, ‘what’s next?’ A plausible next step in this evolution is the end of the permanent, owned datacentre, and even of long-term co-location, in favour of an infrastructure built entirely on the public cloud and SaaS applications. The question is, will businesses really go this far in their march into the cloud? Do we want it to go this far?

 

Public cloud, of course, is nothing new to the enterprise, and it’s not unheard of for a small business or start-up to operate solely from the public cloud and SaaS services. However, there are few, if any, examples of large-scale corporates eschewing their own private datacentres and co-location arrangements for this pure public cloud approach.

 

For such an approach to become plausible in large organisations, CIOs need to be confident about putting even the most sensitive data into public clouds. This entails a series of mentality changes that are already taking place in the SMB space. The cloud-based Office 365, for instance, is Microsoft’s fastest-selling product ever. For large organisations, however, this is far from a trivial change, and CIOs are far from ready for it.

 

The Data Argument

 

Data protectionism is the case in point. Data has long been a highly protected resource for financial services and legal organisations both for their own competitive advantage and due to legal requirements designed to protect their clients’ information. Thanks to the arrival of big data analysis, we can also add marketers, retailers and even sports brands to that list, as all have found unique advantages in the ability to mine insights from huge amounts of data.

 

This is at once an opportunity and a problem. More data means more accurate and actionable insights, but that data needs storing and processing and, consequently, an ever-growing amount of server power and storage space. Today’s approach to this issue is the hybrid cloud: keep sensitive data primarily stored in a private data centre or co-located, and use public cloud as an overspill for processing, or as object storage, when requirements outgrow the organisation’s existing capacity.
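As a purely illustrative sketch of that overspill pattern (the capacity figures, data classifications and tier names below are invented, not a reference implementation), a placement decision might look like this:

```python
# Illustrative sketch only: routing datasets between a private data centre
# and public cloud object storage, following the "overspill" pattern
# described above. Storage back ends are stubbed; a real deployment would
# call its private storage API and a public object store instead.

PRIVATE_CAPACITY_GB = 10_000          # hypothetical remaining private capacity
SENSITIVE_CLASSES = {"regulated", "client-confidential"}

def choose_tier(dataset_name, classification, size_gb, private_used_gb):
    """Return which tier a dataset should land on."""
    if classification in SENSITIVE_CLASSES:
        return "private"                       # sensitive data stays in-house
    if private_used_gb + size_gb > PRIVATE_CAPACITY_GB:
        return "public-object-storage"         # overspill to public cloud
    return "private"

if __name__ == "__main__":
    used = 9_500
    for name, cls, size in [("trade-records", "regulated", 300),
                            ("web-clickstream", "public", 800),
                            ("campaign-assets", "public", 100)]:
        tier = choose_tier(name, cls, size, used)
        if tier == "private":
            used += size
        print(f"{name}: {tier}")
```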

 

The amount of data created and recorded each day is ever growing. In a world where data growth is exponential, the hybrid model will be put under pressure. Even organisations that keep only the most sensitive and mission-critical data within their private data centres whilst moving all else to the cloud will quickly see data inflation. Consequently, they will be forced to buy ever greater numbers of servers, and ever more space to house their critical data, at an ever-growing cost and without the flexibility of the public cloud.

 

In this light, a pure public cloud infrastructure starts to seem like a good idea – an infrastructure that can be instantly switched on and expanded as needed, at low cost. The idea of placing their most sensitive data in a public cloud, beyond their own direct control and security, however, will remain unpalatable to the majority of CIOs. That is understandable when you consider research such as that released last year stating that only one in 100 cloud providers meets the EU data protection requirements currently being examined in Brussels.

 

So, increasing dependence on the public cloud becomes a tug of war between a CIO’s data burden and their capacity for the perceived security risk of the cloud.

 

Cloud Creep

 

The process that may well tip the balance in this tug of war is cloud’s very own version of exposure therapy. CIOs are storing and processing more and more non-critical data in the public cloud and, across their organisations, business units are independently buying in SaaS applications, giving them a taste of the ease of the cloud (from an end-user point of view, at least). As this exposure grows, the public cloud and SaaS applications will increasingly prove their reliability and security whilst earning their place as invaluable tools in a business unit’s armoury. The result is a virtuous circle of growing trust in public cloud and SaaS services – greater trust means more data placed in the public cloud, which in turn creates greater trust. Coupled with the ever-falling cost of public cloud, surely the perceived risks of the public cloud will eventually fall enough for its advantages to outweigh the disadvantages, even for the most sensitive of data?

 

Should It Be Done?

This all depends on a big ‘if’. Trust in the public cloud and SaaS applications will only grow if public cloud providers remain unhacked and SaaS data unleaked. This is a big ask in a world of weekly data breaches, but security is relative, and private data centre leaks are rapidly becoming more common, or at least better publicised, than those in the public cloud. Sony Pictures’ issues arose from a malevolent force within its network, not its public cloud-based data. It will take many more attacks such as these to convince CIOs that handing direct control of their data security to a trusted cloud provider is the most sensible option. Those attacks seem likely to come, however, and in the meantime, barring a major outage or a truly headline-making attack, cloud exposure is increasing confidence in public cloud.

At the same time, public cloud providers need to work to build confidence, not just passively wait for the scales to tip. Selecting a cloud service is a business decision, and any CIO will apply the same diligence that they would to any other supplier choice. Providers that fail to meet the latest regulation, aren’t visibly planning for the future or fail to convince on data privacy concerns and legislation will damage confidence in the public cloud and actively hold it back, particularly within large enterprises. Those providers that do build their way to becoming a trusted partner will, however, flourish and compound the ever-growing positive effects of public cloud exposure.

 

As that happens, the prospect of a pure public cloud enterprise becomes more realistic. Every CIO and organisation is different, and will have a different tolerance for risk. This virtuous circle of cloud will tip organisations towards pure cloud approaches at different times, and every cloud hack or outage will set the model back different amounts in each organisation. It is, however, clear that, whether desirable right now or not, pure public cloud is rapidly approaching reality for some larger enterprises.

Read more >

It Looked Good On Paper

When it comes to technology and automation for data centers, IT is not a hard sell.  But sometimes it pays to step back and take a second look.  For example:  We worked with an enterprise that was well underway in their build-out of a new … Read more >

3 Strategies to Get Started with Mobile BI

In my post, “Mobile BI” Doesn’t Mean “Mobile-Enabled Reports,” I highlighted two main areas that affect how organizations can go about realizing the benefits of mobile BI: enterprise mobility and BI maturity.

 

Today I want to focus on the latter and outline high-level strategies that require different avenues of focus, time, and resources.

 

Before an organization can execute these high-level strategies, it must have the following:

 

  • An existing BI framework that can be leveraged
  • Current technology (hardware and software) used for BI that supports mobile capabilities
  • A support infrastructure to address technical challenges

 

If an organization meets these minimum prerequisites, then there’s a greater chance for success. Thus, the higher the level of BI maturity, the better the head start an organization gets on its mobile BI journey.

 

“Mobile-Only” Strategy

 

A “mobile-only” strategy reflects a strong commitment, or all-in approach, by the management team to mobile BI, or mobility in general. This may be due to a specific reason, such as the relevance of mobility in a particular industry or the opportunity to create a strategic advantage in a highly competitive market. Or a company may decide that mobility needs to be a vital part of their vision.

 

However, in order for this strategy to be successful, it requires a commitment that results in both championing the cause at the board or senior management level and making the necessary resources available for execution at the tactical level.

 

In reality, this approach doesn’t necessarily translate into creating a mobile version of every analysis or shutting down all PC-based channels for reporting and analytics. Instead, it reflects a strong emphasis on establishing scalable mobile consumption paths for analytics, and it signals a willingness to exploit a mobile-first mindset.

 


 

“Key Asset(s) First” Mobile Strategy

 

Organizations that aren’t ready or don’t have the resources for a mobile-only strategy may be forced to pursue a less ambitious approach. This would enable such organizations to supplement their existing BI portfolio with key analyses delivered in mobile BI, resulting in a smaller initial investment and reduced pressure to overhaul large stacks of assets, so to speak.

 

With the “key-assets-first” strategy, the spotlight is on finding key BI areas of focus that can both return the maximum value when delivered effectively on mobile platforms and directly support the execution of the business strategy in the short term. For example, the business strategy may include expansion into a new market, and mobile BI may deliver analytics that help sales teams sell more and provide management with insight into forecast and pipeline.

 

To me, this is the most flexible strategy because it doesn’t commit to an all-or-nothing approach. Most importantly, it differentiates between assets that are conducive to mobile-ready consumption and can produce the maximum impact, and those with only marginal returns on investment, which it ignores.

 

“Key Group(s) First” Mobile Strategy

 

A “key-group-first” strategy makes a considerable commitment to arm a particular group or groups in an organization with a complete set of capabilities that can be delivered in mobile BI. This hybrid strategy identifies the best candidate group(s) for mobile BI and delivers an end-to-end solution. At minimum, it may consider the existing BI framework, the BI culture (history in terms of successes and failures), the BI adoption across the enterprise, and the current BI asset portfolio to develop this more comprehensive approach.

 

For example, sales teams, which travel and spend a lot of time in the field, tend to benefit most from mobility. If they’re selected as the target group, the mobile BI strategy’s goal will be to provide them with a comprehensive package of sales-centric BI assets. Thus, existing capabilities in enterprise mobility for sales teams may complement not only the delivery of new mobile BI resources but also the greater adoption of mobile BI content.

 

Mobile BI Bottom Line

 

The fundamentals don’t change — a smart mobile BI strategy needs to contribute to growth or to profitability. In order to deliver the true business value of mobile BI, all three strategies must embrace the common objectives of an integrated mobile intelligence framework. They must leverage the technology’s strengths as well as minimize its weaknesses within a supported infrastructure.

 

The mobile intelligence framework can’t exist separately from, or independent of, the organization’s business or technology strategy.

 

What is your start-up strategy for mobile BI?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter at @KaanTurnali and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

Read more >

Part II: Future Health, Future Cities – Intel Physical Computing Module at IDE

by Chiara Garattini & Han Pham

 

In the first of our Future Health, Future Cities blog series, we posed questions around the future of health in the urban environment. Today we look at some of the projects undertaken by students on the physical computing module of the Innovation Design and Engineering Masters programme run jointly by Imperial College and the Royal College of Arts (RCA).

 

Health and Safety in the Workplace

The first group of projects relates to the important issue of health and safety in the workplace.


Figure 1. Circadian Glasses


Christina Petersen’s ‘circadian glasses’ considered the dangers of habitual strains and stressors at work, particularly for individuals in careers with prolonged evening hours or excessive time in light-poor conditions, which may have a cumulative effect on health over time. Although modern technologies allow for the convenience of working at will regardless of external environmental factors, what is the effect on the body’s natural systems? In particular, how does artificial lighting affect the circadian rhythm?

 

Her prototyped glasses use two LED screens that can adjust the type of light to help users better adjust their circadian rhythms and sleep patterns. The concept also suggests a potentially valuable intersection of personal wearable and personal energy usage (lighting) in the future workplace. Unlike sunglasses, the glasses are also a personal, portable source of light – an interesting concept in workplace sustainability, given the majority of energy expenditure is in heating/cooling systems and lighting.

 

While there is room to make the user context and motivation more plausible, the prototype literally helps shed light on meaningful, and specific, design interventions for vulnerable populations such as nurses or night shift workers for personal and workplace sustainability over time.

 


Figure 2. Smart Workplace Urinal


As we often see within our work, a city’s hubs for healthcare resources and information are often informally ubiquitous and present within the community before one reaches the hospital. Jon Rasche’s smart urinal was created to decrease the queue and waiting time at the doctor’s office even before you arrive, by enabling more personal, preventative care via lab testing at the workplace.

 

The ‘Smart Urinal’ created an integrated service with a urinal-based sensor, a display unit, QR codes, and a mobile application (Figure 2). The system also addressed concerns around patient privacy by intentionally preventing private patient information from entering the cloud. Instead, each of the possible results links to a QR code leading to a static web page with the urinalysis information.
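As a purely illustrative sketch of that privacy-preserving mapping (the result codes and URLs below are invented, and the open-source qrcode Python package stands in for whatever the student actually used), the idea might be implemented like this:

```python
# Illustrative sketch: generating one QR code per possible urinalysis
# outcome, each pointing to a static information page, so no personal
# result data ever leaves the device. Uses the open-source "qrcode"
# package; result codes and URLs are invented for illustration.
import qrcode

RESULT_PAGES = {
    "normal":       "https://example.org/urinalysis/normal",
    "high-glucose": "https://example.org/urinalysis/high-glucose",
    "high-protein": "https://example.org/urinalysis/high-protein",
}

for result, url in RESULT_PAGES.items():
    img = qrcode.make(url)              # static page, no patient identifiers
    img.save(f"qr_{result}.png")
    print(f"wrote qr_{result}.png -> {url}")
```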

 

While the system might be perceived as too public for comfort, it connects to the technological trend toward more personalised and accessible testing (Scanadu’s iPhone-ready urinalysis strip is a good example). It also raises the consideration of how to design for a connected ecosystem of responsibility, accountability and care – how can different environments influence, impact and support an individual’s wellbeing? How can personalised, connected care be anticipatory, preventative and immediate, yet private?

 

Pollutants Awareness

The dynamic life of a city often means it’s in a state of constant use and regeneration – but many of the resulting pollutants are invisible to the naked eye. How do we know when the microscopic accumulation of pollutants will be physically harmful? How can we make the invisible visible in a way that better engages us with our environment?

 


Figure 3. Air Pollution Disk


Maria Noh’s ‘Air Pollution Disc’ (Figure 3) considers how we can make information more physical, visible and intuitive by creating a mechanical, physical filter on our immediate environment, driven by local air quality data and using polarised lenses.

 

It’s a very simple mechanism with an elegant design that ties to some of our earlier cities research into perceptual bias around air quality, substituting physical feedback for numeric data (e.g., although pollutants may not always be visible, we equate pollution with visual cues). Noh suggested two use scenarios: one was to affix the device to a window of a home to understand pollution at potential destinations, such as the school; another was to influence driver behaviour by providing feedback on the relationship between driving style and pollution.
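To make the mechanism concrete, here is a minimal, hypothetical sketch of the data-to-disc mapping; the AQI scale, angle range and linear mapping are our own assumptions rather than the student’s implementation, and driving the actual motor is hardware-specific and omitted.

```python
# Illustrative sketch: mapping a local air-quality index (AQI) reading to
# a rotation angle for a polarising disc, so that worse air darkens the view.
# The AQI scale, angle range, and data source are assumptions; driving an
# actual servo or stepper is hardware-specific and omitted here.

AQI_MIN, AQI_MAX = 0, 500         # common AQI scale
ANGLE_MIN, ANGLE_MAX = 0.0, 90.0  # 0 deg = fully clear, 90 deg = fully crossed lenses

def aqi_to_angle(aqi):
    """Linearly map an AQI value onto the polariser rotation angle."""
    aqi = max(AQI_MIN, min(AQI_MAX, aqi))
    fraction = (aqi - AQI_MIN) / (AQI_MAX - AQI_MIN)
    return ANGLE_MIN + fraction * (ANGLE_MAX - ANGLE_MIN)

if __name__ == "__main__":
    for reading in (20, 120, 350):
        print(f"AQI {reading} -> rotate polariser to {aqi_to_angle(reading):.1f} degrees")
```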

 

While there are some future nuances and challenges to either case, the immediacy of the visualisation for both adults and children may make it interesting to see the Air Pollution Disc as a play-based, large-scale urban installation that physicalises the hidden environment of the city.

 


Figure 4. Ghost 7.0


The pollutants category also includes the prototype for ‘Ghost 7.0’ by student Andre McQueen, a smart clothing system that addresses how weather and air quality affect health. The idea tries to tackle breathing problems, e.g. due to allergies, associated with weather changes. The device (Figure 4), embedded in the running clothing, is designed to communicate with satellites to receive updates on weather conditions and signal warnings under certain circumstances.

 

When a significant meteorological change is signalled, the fabric would change colour and release negative ions (meant to help breathing under certain conditions). The student also investigated oxidisation to fight pollutants, but could not overcome the problem of it releasing small amounts of CO2.

 

What we found interesting in this project was the idea that a wearable device would do something to help against poor air quality, rather than just passively detecting the problem. Too many devices currently are focusing on the latter task, leaving the user wondering about the actionability of the information they receive.

 


Figure 5. Dumpster diving ‘smart glove’


The last selected project for this section, on dumpster diving, is by student Yuri Klebanov. Yuri built a system to make dumpster diving safer (by creating a ‘smart glove’ that reacts to chemicals) and more effective (by creating a real-time monitoring system that uploads snapshots of what is thrown away to a website for users to monitor).

 

While the latter idea is interesting, it presents several challenges (e.g. privacy around taking pictures of people throwing things away); what we liked most about the project was the ‘smart glove’ idea. The solution was to boil fabric gloves with cabbage, making them capable of changing colour when in contact with acids, liquids, fats and so on (Figure 5). This frugal technology solution made us reflect on how smart ‘smart’ really needs to be. Technology overkill is not always the best solution to a problem, and something simple is always preferable to something more complex that provides the same (or only marginally better) results.

 

In the third and final blog of our Future Cities, Future Health blog series we will look at the final theme around Mapping Cities (Creatively) which will showcase creative ideas of allocating healthcare resources and using sound to produce insights into complex health data.

 

Read Part I

Read Part III

 

 

*Concepts described are for investigational research only.

**Other names and brands may be claimed as the property of others.

Read more >

Part I: Future Health, Future Cities – Intel Physical Computing Module at IDE

Intel has sponsored a physical computing module on the topic of ‘Future Health, Future Cities’ as part of the first year of the Innovation Design and Engineering Masters programme run jointly by Imperial College and the Royal College of Arts (RCA). This module, coordinated by Dominic Southgate at Imperial College, was intended to be an investigation into the future of health in urban environments and a projection of how technology might support this multi-faceted theme.

 

The Intel team (John Somoza, Chiara Garattini and Duncan Wilson) suggested themes that the 40 students (10 allocated for each theme) had to work on individually over the course of four weeks:

 

1.  Food, Water, Air

The human body can only live for three weeks without food, three days without water, and three minutes without air. These ingredients are vital for our survival and key to our good health – how can we optimise each of them within our cities?

 

Food has an obvious connection to healthy living. But what about the more subtle relationships? How can food be analysed/customised/regulated to help with specific disorders or conditions? Meanwhile, how can technology help us in water catchment and distribution in the city or manage quality? Can we reuse water better?

 

Likewise, an invisible and yet vast component of a city is its air, which is key to human well-being. While air is currently rated by proxy metrics in many ways, what do air quality and pollution look like through a human lens? How can we re-think the air we breathe?

2.  Systems of Systems

A city is made of many systems, inextricably related and dependent on each other. One important aspect of a city is its healthcare system. How can we re-imagine a city-based healthcare service? For example, hospitals are currently hubs for providing health care when needed, yet they often may not be the first or best place we seek care when unwell. Can we reimagine what a hospital of the future would look like? What would a healthcare worker of the future look like, and what equipment would they use?

 

Although we currently use tools such as healthy-city indices that rate cities as healthy or unhealthy, how could we measure a healthy city in a way which reflects its complexity? Measuring the world in a new way at some point becomes experiencing the world in a new way — what tools do we need, and what are the implications?

 

Ultimately, if cities are systems of systems, then we are the nodes in those systems: how do we understand the impact of our individual accumulative actions on the larger systems? How can we see small, seemingly un-impactful actions, in their incremental, community wide scaling? How can we entangle (or disentangle) personal and collective responsibilities?

 

3.  Measuring and Mapping

There are various ways to measure a sustainable city, but none is perfect (e.g. carbon credits). What is the next thing for measuring a sustainable city? What would be the tools to do so? How local do we want our measures to be?

 

Our cities have different levels of language and communication embedded in their fabric (symbols, maps, and meanings). Some of these are more evident and readable than others, marking danger, places, and opportunities. One class of such signals relates to health. What kind of message does our city communicate in order to tell us about health? What symbols does it use and how do they originate and change through time?

 

4.  Cities of Data

Much of the current quantified-self movement is centred on metrics collected by individuals and shared with a relatively close, like-minded community. What would a ‘quantified-selfless’ citizen look like within the context of a city-wide community? How would people share data to improve their lives and that of other people? How could this impact on the environment and systems in which they live? How could the city augment and integrate people’s self-generated data and support you in an effort of being ‘healthy’ (e.g. personalised health cityscapes)? At the same time, individual and communities’ interests are sometimes harmonic and sometimes competing. How would citizens and cities of the future face this tension?

 

Commentary on selected projects

The underlying idea behind these themes was to stimulate design, engineering and innovation students to think about the complex relationship between connected cities and connected health. Because the task is wide and complex, we decided to start by pushing them to consider some broad issues, e.g., how can a city’s health infrastructure become more dynamic? How can we help cities reconsider the balance between formal and informal resourcing to meet demand? What are the triggers to help communities understand and engage with environmental and health data?

 

The aim was to encourage the upcoming generation of technology innovators to think of health and cities as vital to their work.

 

The IDE programme was ideal for the task. Imagined originally for engineers who wanted to become more familiar with design, it has now transformed into a multidisciplinary programme that attracts innovative students from disciplines as varied as design, business, fashion and archaeology. This shows a resurgence of the relevance of engineering among students, possibly stimulated by the accessibility and ubiquity of tools for development (e.g. mobile apps) as well as the desire to find solutions to pressing contemporary problems (e.g. aging population trends).

 

Students were able to explore different points of interest in Intel’s ‘Future Health, Future Cities’ physical computing module, each an interesting starting point into the challenges of designing for complex, living systems such as a city.

 

We will share eight of the projects in our next two blogs, selected not on their overall quality (which was instead assessed by their module coordinators) but rather on how their collective narrative, under three emergent sub-themes, helps highlight connections to some of the ongoing challenges and questions we face in our daily work.

 

Read Part II

Read Part III

 

 

*Concepts described are for investigational research only.

**Other names and brands may be claimed as the property of others.

Read more >

Part III: Future Health, Future Cities – Intel Physical Computing Module at IDE

by Chiara Garattini & Han Pham

 

Read Part I

Read Part II

 

In the third and final edition of our Future Cities, Future Health blog series, we look at the final theme, Mapping Cities (Creatively), which showcases creative ideas for allocating healthcare resources and for using sound to produce insights into complex health data, as part of the physical computing module on the Innovation Design and Engineering Masters programme run jointly by Imperial College and the Royal College of Arts (RCA).


Mapping cities (creatively)

In considering how to allocate resources, we also need to understand where resources are most needed, and how this changes dynamically within a city.

 


Figure 1. Ambulance Density Tracker


Antoni Pakowski asked how to distribute ambulances within a city to shorten response times for critical cases, and suggested this could be supported by anonymous tracking of people via their mobile phones. The expected service window of ambulance arrival in critical care cases is 8 minutes. However, in London, only around 40 percent of calls meet that target. This may be due to ambulances being tied to a static base station. How can the location of the ambulance change as people density changes across a city?

 

The ambulance density tracker (Figure 1) combined a mobile router and a hacked Pirate Box to anonymously retrieve the IPs of phones actively seeking Wi-Fi, creating a portable system to track the density of transient crowds. The prototype was designed to rely on only one point of data within a certain region, requiring less processing than an embedded phone app. He also created a scaled-down model of the prototype, to suggest a future small device that could potentially be affixed to static and moving infrastructure, such as taxis, within the city.
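A rough sketch of a related approach (our own illustration, not the student’s code) counts distinct devices sending Wi-Fi probe requests rather than collecting IPs; it assumes the scapy library, a wireless card in monitor mode with a placeholder interface name, and it hashes MAC addresses immediately so no raw identifiers are kept.

```python
# Illustrative sketch of a related approach: estimating transient crowd
# density by counting distinct devices sending Wi-Fi probe requests.
# Uses the scapy library; requires a wireless card in monitor mode and
# root privileges. The interface name is a placeholder, and MAC addresses
# are hashed immediately so no raw identifiers are stored.
import hashlib
from scapy.all import sniff, Dot11ProbeReq

INTERFACE = "wlan0mon"      # hypothetical monitor-mode interface
seen_devices = set()

def handle_packet(pkt):
    if pkt.haslayer(Dot11ProbeReq) and pkt.addr2:
        # Store only a salted hash of the MAC address, never the MAC itself.
        digest = hashlib.sha256(b"density-salt" + pkt.addr2.encode()).hexdigest()
        seen_devices.add(digest)

if __name__ == "__main__":
    sniff(iface=INTERFACE, prn=handle_packet, store=False, timeout=60)
    print(f"Approximate devices nearby in the last minute: {len(seen_devices)}")
```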

 

Although the original use case needs additional design work to be clearer, the prototype itself, as a lightweight, anonymous device offering a portable proxy for transient crowd density, may be useful as a complementary technology for other design projects geared toward designing impromptu and ad hoc health resources within a city as crowds shift.

 


Figure 2. ‘Citybeat’


The second project in this category is called ‘Citybeat’, by student Philippe Hohlfeld (Figure 2). Philippe wanted to look at the sound of a city and not only create ‘sound’ maps of the city, but also capture the ‘heartbeat’ of a city by exploring ‘sonified’ feedback from it. His thinking originated from three distinct scientific endeavours: a) turning preliminary Higgs boson data from the ATLAS experiment at CERN into a symphony to celebrate the connectedness of different scientific fields; b) turning solar flares into music at the University of Michigan to produce new scientific insights; and c) a blind scientist at NASA turning the gravitational fields of distant stars into sound to determine how they interact.

 

The project looked specifically at the Quality of Life Index (safety, security, general health, culture, transportation, etc.) and tried to attribute sounds to its different elements so as to create a ‘tune’ for each city. Sonification is good for finding trends and for comparison between two entities. What we most liked about the project, though, was the idea of using sound rather than visual tools to produce insights into complex data.


Personal data from wearables, for example, is generally presented in visual dashboards. Even though these are meant to make the data easier to consume, they do not always succeed. Sound could be quicker than visual displays in expressing, for example, rapid or slow progress (e.g. an upbeat tone) or regression (e.g. a downbeat one). In the current landscape of information overload, exploring sound as an alternative way of summarising data is something we thought very interesting.
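To show what such sonification could look like in practice, here is a minimal, hypothetical sketch that maps a series of scores to tone pitches and writes a short audio clip; the scores, pitch range and mapping are invented for illustration and are not the student’s implementation.

```python
# Illustrative sketch: "sonifying" a series of scores (e.g. quality-of-life
# sub-indices) by mapping each value to a tone pitch and writing a short
# audio clip. The scores and pitch mapping are invented for illustration.
import wave
import numpy as np

SAMPLE_RATE = 44100
scores = [62, 75, 48, 90, 81]            # hypothetical 0-100 sub-index values

def score_to_tone(score, duration=0.4):
    """Map a 0-100 score onto a 220-880 Hz sine tone (higher score = higher pitch)."""
    freq = 220 + (score / 100.0) * (880 - 220)
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    return 0.4 * np.sin(2 * np.pi * freq * t)

audio = np.concatenate([score_to_tone(s) for s in scores])
samples = (audio * 32767).astype(np.int16)

with wave.open("citybeat.wav", "wb") as wav:
    wav.setnchannels(1)          # mono
    wav.setsampwidth(2)          # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(samples.tobytes())
print("wrote citybeat.wav")
```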


Figure 3. ‘Bee gate’

Finally, the last selected project in this list is also one of the most unusual. Student James Batstone wanted to think about how bees interact with polluted environments and how they could be used as part of reclamation or decontamination programmes. He imagined a city (or territory) abandoned due to pollution, and proposed using bees to collect and analyse pollen to establish whether the territory was ready to be reclaimed for human habitation.

He built a prototype with ‘bee gates’ that would allow for the harmless capture of pollen from the individual insects when returning to the hive (Figure 3). He also theorised complementing this with automated software that uses cameras to track and automatically analyse the bees’ dance to establish provenance. What we liked about this project is the imaginative idea of using bees to monitor air and land quality, by analysing vegetation through their pollen as well as radiation and pollutants in honey, to create maps of land quality levels. Using natural resources and naturally occurring events to complement what technology can do (and vice versa) is the way to achieve sustainable solutions in the long term.

 

Final thoughts

As part of our work at Intel, we collaborate with the world’s top universities to look at the future of cities with an eye toward the intersection of technology, environment, and social sustainability. In our groups one can find entrepreneurs, designers, hacktivists, engineers, data artists, architects and more.

 

We seek to support the same diversity of inspiration in today’s students as the future technology innovators, by tapping into how to connect creativity to technology for more vibrant, connected cities and communities. In many ways, working with first-year master’s students offers a refreshing perspective on how to open up these questions with a beginner’s mind-set, and on how to embrace simplicity in the face of rising information – just because our digital traces and data footprint will keep increasing, our time to juggle what that means won’t.

 

Physical computing is coming into play in new ways, more often. It will not be enough to get lost in a screen – the interface of tomorrow will be everywhere, and interactions will leap off screens into the real world. ‘Future Health, Future Cities’ suggested how to consider the role of physical computing in helping create more sustainable services: for example, by making transparent what services are needed and where, by exploring how to communicate new urban information streams simply and well, and, last but not least, by reflecting on how to deliver resources where they will be most needed in a constantly changing city.

 

 

*Concepts described are for investigational research only.

**Other names and brands may be claimed as the property of others.

Read more >

Intel at Citrix Synergy 2015: Delivering a Foundation for Mobile Workspaces

From May 12-14, Citrix Synergy 2015 took over the Orange County Convention Center in Orlando, providing a showcase for the Citrix technologies in mobility management, desktop virtualization, server virtualization and cloud services that are leading the transition to the software-defined workplace. Intel and Citrix have worked together closely (https://www.youtube.com/watch?v=gsm26JHYIaY) for nearly 20 years to help businesses improve productivity and collaboration by securely delivering applications, desktops, data and services to any device on any network or cloud. Operating Citrix mobile workspace technologies on Intel® processor-based clients and Intel® Xeon® processor-based servers can help protect data, maintain compliance, and create trusted cloud and software-defined infrastructures that help businesses better manage mobile apps and devices, and enable collaboration from just about anywhere.

 

During Citrix Synergy, a number of Intel experts took part in presentations to highlight the business value of operating Citrix software solutions on Intel® Architectures.

 

Dave Miller, director of Intel’s Software Business Development group, appeared with Chris Matthieu, director of Internet of Things (IoT) engineering at Citrix, to discuss trends in IoT. In an interview on Citrix TV, Dave and Chris talked about how the combination of Intel hardware, Intel-based gateways and the Citrix* Octoblu IoT software platform makes it easy for businesses to build and deploy IoT solutions that collect the right data and help turn it into insights to improve business outcomes.

 

Dave looked in his crystal ball to discuss what he saw coming next for IoT technologies. He said that IoT’s initial stages have been about delivering products and integrated solutions to create a connected IoT workflow that is secure and easily managed. This will be followed by increasingly sophisticated technologies for handling and manipulating data to bring insights to businesses. A fourth wave will shift the IoT data to help fuel predictive systems, based on the increasing intelligence of compute resources and data analytics.

 

I also interviewed David Cowperthwaite, an engineer in Intel’s Visual and Parallel Computing Group and an architect for virtualization of Intel Processor Graphics. In this video, we discussed how Intel and Citrix work together to deliver rich virtual applications to mobile devices using Citrix* XenApp.  David explained how running XenApp on the new Intel® Xeon® processor E3 v4 family  with Intel® Iris™ Pro Graphics technology provides the perfect platform for mobile delivery of 3D graphics and multimedia applications on the highly integrated, cartridge-based HP* Moonshot System.  

 

One of the more popular demos showcased in the Intel booth was the Intel® NUC and Intel® Compute Stick as zero-client devices; take a live look in this video. We also released a joint paper on XenServer; take a look.

 

For a more light-hearted view of how Citrix and Intel work together to help you Work Anywhere on Any Device, watch this fun animation.

Read more >