
RECENT BLOG POSTS

Amplify Your Value: Take a Journey to the Cloud!



“Hello, my name is Jeff Ton and it has been one thousand, two hundred and seventy-two days since I last opened Outlook.”


February 6, 2012, an historic date in Indianapolis, Indiana. Yeah, there was some little football game that night, Super Bowl XLVI – New York Giants against the New England Patriots. But that is not the event that made the date historic (though it was great to watch a Manning beat Brady!). What made that date historic was our go-live on Google Apps, our first step in our Journey to the Cloud.


Now that I have offended everyone from the Pacific Northwest and New England, let me rewind and start at the beginning. In 2010, I arrived at Goodwill Industries of Central Indiana. We were running Microsoft Exchange 2003 coupled with Outlook 2010. Back in the day, the adage was “No one ever got fired for buying IBM”; I was in the “No one ever got fired for buying Microsoft” camp. In fact, when I learned the students in our high school were using Google, I was pretty adamant that they use Office. After all, that is what they would be using when they got jobs!


At about this same time, we were switching from Blackberry to Android-based smartphones. We were having horrible sync problems between Exchange and the Androids using ActiveSync. We needed to upgrade our Exchange environment desperately!


As we were beginning to make plans for upgraded servers to support the upgraded Exchange environment, I attended my first MIT Sloan CIO Symposium in Boston. Despite the fact that I bleed Colts blue, I actually love Boston: the history, the culture, the vibe. But I digress. At the conference I learned about AC vs. CF projects (see: That Project is a Real Cluster to learn more). I could not fathom a more likely CF project than an email upgrade. Why not look to the cloud? Since we were doing an upgrade anyway, perhaps this would be the LAST email upgrade we would have to do!


Enter the Google Whisperer. For months a former colleague-turned-Google-consultant had been telling me we should check out Google as an email platform. Usually my response was “Google? That’s for kids, not an enterprise!” (Ok, now I have offended everyone from Silicon Valley, too!) Every time I saw him, he would bring it up. I finally agreed to attend one of Google’s roadshow presentations. I came away from that event with an entirely different outlook (pun intended) on Google.


We decided to run an A/B pilot. We would convert 30 employees to the Google Apps platform for 60 days. We would then convert the same 30 employees to BPOS (the predecessor to Office 365) for 60 days, and may the best man, er, I mean platform, win. We handpicked the employees for the pilot. I purposely selected many who were staunchly in the Microsoft camp and several others who typically resisted change.


At the end of the pilot an amazing thing happened. Not one person on the pilot team wanted to switch off of Google onto BPOS; in fact, each and every person voted to recommend a Google migration to the Executive Team. Unanimous! When was the last time that ever happened in one of your projects?!


The decision made, we launched the project to migrate to the cloud! We leveraged this project to also implement our email retention policy (email is retained for five years). The vast majority of the work in the project involved locating all the .PST files in our environment and moving them to a central location from network file folders, local drives, and yes, even thumb drives and CDs. Once in that central location, they were uploaded to the Google platform. During this time, we also mirrored our email environment so every internal and external email also went to the Google platform in real time.


The process took about three months, but finally it was Super Bowl Sunday, time for go-live. Now before you think me an ogre of a boss for scheduling a major go-live for Super Bowl Sunday, I should tell you that the date of February 6, 2012 was selected by the project team. Their thought? No one is going to be doing email after the game is over. We announced a blackout period of eight hours beginning at midnight to do our conversion. Boy, were we ever wrong about the length of the blackout period! Our conversion that night took about 20 minutes. Twenty minutes, and email was flowing again in and out of the Google environment.


Our implementation included email, contacts, calendar, and groups for three domains. We made the decision to keep the other Google Apps available, but not to promote them. We also implemented our five-year archive and optional email encryption for sensitive communications. The other decision we made (ok, I made) was not to allow the use of Outlook to access Gmail. One of the tenets of our strategic plan was “Any time, Any place, Any device”; I felt that having a piece of client software violated that tenet and created additional support issues that were not necessary.


We learned several things as a result of the project. First, search is not sort. If you have used Gmail, then you know there is no way to sort your Inbox; it relies instead on the power of Google Search. People really like their sort, and it took some real handholding to get them comfortable.


Second, Google Groups are not Distribution Lists. We converted all of our Exchange Distribution Lists to Groups. Yes, they function in somewhat the same way; however, there are many more settings in Groups, settings that can have unexpected consequences. Consequences like the time our CFO replied to an email that had been sent to a Group, and even though he did not use reply-all, his reply went to everyone in the Group! We found that setting very quickly and turned it off! (Sorry, Dan!)


The third lesson learned was “You cannot train enough.” Yes, we held many classes during the lead-up to conversion and continued them long afterwards. A lot of the feedback we had heard (“everyone has Gmail at home, we already know how to use it”) led us to believe that once the initial project was complete we didn’t need to continue training. We were wrong; we recently started a series of Google Workshops to continue the learning process. Honestly, I think some of this is generational. Some love to click on links, watch a video, and then use the new functionality. Others really want a classroom environment. We now offer both.

One of the things that pleasantly surprised us (well, at least me) was the organic adoption of other Google tools. The first shared Google Doc came to me from outside the IT department. The first meeting conducted using Google Hangouts came from the Marketing department. People were finding the apps and falling in love with them.


Today, one thousand, two hundred and seventy-two days later, our first step to the cloud is seen as a great accomplishment. It has saved us tens of thousands (if not hundreds of thousands) of dollars and thousands of hours, and it has freed up our team to work on those AC projects!


Before I close, I do want to say that we are still a Microsoft shop. We have Office, Windows, Windows Server, SQL Server and many other Microsoft products. This post is not intended to be a promotion of one product over another. As I said in my previous post, your path may be different from ours. For us, a 3,000-employee non-profit, Google was the right choice. You may find it meets your requirements, or you may find another product is a better fit. The point here is not the specific product, but the product’s delivery method…cloud…SaaS. The project was such a resounding success that we changed one of our Application Guiding Principles. We are now “cloud-first” when selecting a new application or upgrading an existing one. In fact, almost all of the applications we have added in the last three and a half years have been SaaS-based, including Workday, Domo, Vonigo, ETO, Facility Dude and more.


Go and Get Your Google On!

Go and get your Google on, later hit your Twitter up

We out workin’ y’all from summer through the winter, bruh

Red eye precision with the speed of a stock car

You’re now tuned in to some Independent Rock Stars


Next month, we will explore a project that did more to take us to a Value-add revenue generating partner than just about any other project. Amplify Your Value: Reap the Rewards!


The series, “Amplify Your Value” explores our five year plan to move from an ad hoc reactionary IT department to a Value-add revenue generating partner. #AmplifyYourValue


We could not have made this journey without the support of several partners, including, but not limited to: Bluelock, Level 3 (TWTelecom), Lifeline Data Centers, Netfor, and CDW. (mentions of partner companies should be considered my personal endorsement based on our experience and on our projects and should NOT be considered an endorsement by my company or its affiliates).


Jeffrey Ton is the SVP of Corporate Connectivity and Chief Information Officer for Goodwill Industries of Central Indiana, providing vision and leadership in the continued development and implementation of the enterprise-wide information technology and marketing portfolios, including applications, information & data management, infrastructure, security and telecommunications.


Find him on LinkedIn.

Follow him on Twitter (@jtongici)

Add him to your circles on Google+

Check out more of his posts on Intel’s IT Peer Network

Read more from Jeff on Rivers of Thought


An Omni-Channel Think Tank at FIT

I had the privilege of representing Intel at the Fashion Institute of Technology’s (FIT) Symposium on Omni Retailing in New York in April.

 

And the privilege of listening to several industry leaders and – of great interest – a team of FIT’s top senior students, who presented their vision for the store of tomorrow.

 

Some common threads:

  • We’re living in a world of digital screens – brands can either get on board or get left behind.
  • Brand success is as much about effective storytelling as it is about product and operational efficiency. And the best brands tell their stories across the screens.
  • When it comes to the millennial shopper, it’s about authenticity and trust.

 

And, of course, technology is the thread that runs through it all.

 

Highlights

 

Jennifer Schmidt, Principal and leader of the Americas Apparel Fashion and Luxury practice at McKinsey & Company, emphasized the importance of storytelling in this important global segment. According to Ms. Schmidt, 50 percent of value creation in fashion and luxury is about perception – the ability of a brand to consistently deliver (in every facet of the business) a differentiating, conversation-building, relationship-building story.

 

(Those who joined Dr. Paula Payton’s NRF store tour in January will remember her emphasis on storytelling and narrative).

 

Ms. Schmidt also spoke to three elements of import in her current strategy work:
    • The change in the role of the store – which now shifts from solely emphasizing transactions to brand-building – and with 20-30% fewer doors than before;
    • The change in retail formats – which, in developed world retailing, now take five different shapes: 1) flagship store, 2) free-standing format, 3) mini- and urban-free standing, 4) shops within shops and 5) outlet;
    • The importance of international expansion, especially to the PRC and South Asia.

 

Daniella Yacobovsky, co-founder of online jewelry retailer BaubleBar, also noted the importance of brand building – and she explained that her brand story is equal parts product and speed. BaubleBar works on an eight-week production cycle, achieving previously unheard-of turns in jewelry. Data is Ms. Yacobovsky’s friend – she tracks search engine results, web traffic and social media to drive merchandising decisions.

 

And, last but certainly not least: FIT seniors Rebeccah Amos, Julianne Lemon, Rachel Martin and Alison McDermott, winners of FIT’s Experience Design for Millennials Competition, opined on what makes the best brand experience for millennials. Their unequivocal answer – paired with a lot of good, solid retailing advice – was videos and music.

 

It’s not just about entertainment. It’s also an issue of trust and authenticity (does a brand’s playlist resonate with you?), which ultimately leads to brand stickiness.

 

Envision video – and lots of it. On enormous, in-store video walls, on mobile, hand-held devices and on brand YouTube channels. To display products virtually or provide information on how to wear or accessorize them. With in-store video, retailers can orchestrate, curate and simplify, giving shoppers a fast, trusted way to be on trend.

 

Music? The students suggested that every brand needs a music director. Brand-right soundtracks and playlists and connections to the right bands and music events can be powerful influences on today’s largest consumer group.

 

Quite the day.

 

Jon Stine
Global Director, Retail Sales

Intel Corporation

 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

 

* Other names and brands may be claimed as the property of others.

 

© 2015 Intel Corporation


10 Mobile BI Strategy Questions: Design

When the term design is used in mobile business intelligence (BI), it often refers to the user interface (UI). However, when I consider the question of design in developing a mobile BI strategy, I go beyond what a report or dashboard looks like.

 

As I wrote in “Mobile BI” Doesn’t Mean “Mobile-Enabled Reports,” when designing a mobile BI solution, we need to consider all facets of user interactions and take a holistic approach in dealing with all aspects of the user experience. Here are three areas of design to consider when developing a mobile BI strategy.

 

How Should the Mobile BI Assets Be Delivered?

 

In BI, we typically consider three options for the delivery of assets: push, pull, and hybrid. The basic concept of a “push” strategy is similar to ordering a pizza for home delivery. The “users” passively receive the pizza when it’s delivered, and there’s nothing more that they need to actively do in order to enjoy it (ok, maybe they have to pay for it and tip the driver). Similarly, when users access a report with the push strategy, whether through regular e-mail or a mobile BI app, it’s no different from viewing an e-mail message from a colleague.

 

On the other hand, to have pizza with the pull strategy, users need to get into their cars and drive to the pizza place. They must take action and “retrieve the asset.” Likewise, users need to take action to “pull” the latest report and/or data, whether they log on using the app or mobile browser. The hybrid approach employs a combination of both the push and pull methods.

 

Selecting the right delivery system for the right role is critical. For example, the push method may be more valuable for executives and sales teams, who travel frequently and may be short on time. However, data updates are less frequent with the push method, so accessing the latest data can’t be critical if you choose this option. In contrast, the “pull” strategy may be more appropriate for analysts and customer service teams, who depend on the latest data.
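To make the push/pull/hybrid decision concrete, here is a minimal sketch (my illustration, not anything from a specific BI product) that maps roles to a delivery mode and a data-freshness expectation; the role names and thresholds are assumptions you would replace with your own governance decisions.

```python
from dataclasses import dataclass
from enum import Enum

class DeliveryMode(Enum):
    PUSH = "push"      # asset arrives automatically, e.g. a scheduled report
    PULL = "pull"      # user retrieves the latest data on demand
    HYBRID = "hybrid"  # scheduled snapshot plus on-demand refresh

@dataclass
class DeliveryPolicy:
    mode: DeliveryMode
    max_data_age_hours: int  # how stale the data may be for this audience

# Hypothetical role-to-policy mapping; real roles and thresholds would come
# from your own BI governance decisions.
POLICIES = {
    "executive":        DeliveryPolicy(DeliveryMode.PUSH, max_data_age_hours=24),
    "sales":            DeliveryPolicy(DeliveryMode.PUSH, max_data_age_hours=24),
    "analyst":          DeliveryPolicy(DeliveryMode.PULL, max_data_age_hours=1),
    "customer_service": DeliveryPolicy(DeliveryMode.PULL, max_data_age_hours=1),
}

def policy_for(role: str) -> DeliveryPolicy:
    """Fall back to a hybrid policy when a role has no explicit entry."""
    return POLICIES.get(role, DeliveryPolicy(DeliveryMode.HYBRID, max_data_age_hours=4))

for role in ("executive", "analyst", "warehouse"):
    p = policy_for(role)
    print(f"{role}: {p.mode.value}, data no older than {p.max_data_age_hours}h")
```

The benefit of writing the decision down this way is that it becomes reviewable data rather than tribal knowledge, and a new role simply falls back to a sensible hybrid default.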

 

Additional considerations include data security and enterprise mobility. Does the current BI solution or software support both options? Can the integrity of data security be maintained if data assets are delivered outside the demarcation lines (for example, mobile BI report delivered as an attachment to an e-mail)?

 

What Are the Format and Functionality of the Mobile BI Assets?

 

The format deals with the type and category of the asset that is delivered to mobile BI users. What does the end-user receive? Is it a static file in Adobe PDF or Microsoft Excel format with self-contained data, or is it dynamic such as a mobile BI app that employs native device functionality? Is the format limited to data consumption, or does it allow for interactions such as “what-if” scenarios or database write-back capability?

 

If the format supports exploration, what can I do with it? Can I select different data elements at run time as well as different visualization formats? How do I select different values to filter the result sets, like prompts? Does the format support offline viewing? Is the format conducive to collaboration?

 

Does the User Interface Optimize the BI Elements?

 

The UI represents the typical BI elements that are displayed on the screen: page layout, menus, action buttons, orientation, and so on. When you consider the design, decide if the elements really add value or if they’re just pointless visualizations like empty calories in a diet. You want to include just the “meat” of your assets in the UI. More often than not, a simple table with the right highlighting or alerts can do a better job than a colorful pie chart or bar graph.

 

In addition, the UI covers the navigation among different pages and/or components of a BI asset or package. How do the users navigate from one section to another on a dashboard?

 

Bottom Line: Design Is Key for the User Experience

 

The end-to-end mobile BI user experience is a critical component that requires a carefully thought-out design that includes not only soft elements (such as an inviting and engaging UI), but also hard elements (such as the optimal format for the right role and for the right device). Designing the right solution is both art and science.

 

The technical solution needs to be built and delivered based on specifications and following best practices – that’s the science part. How we go about it is completely art. It requires both ingenuity and critical thinking, since not all components of design come with hard-and-fast rules that we can rely on.

 

What other facets of the mobile BI user experience do you include in your design considerations?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter at @KaanTurnali and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.


Under the Hood: How Dynamic Resource Pooling Unlocks Innovation


If you have watched a movie on Netflix*, called for a ride from Uber* or paid somebody using Square*, you have participated in the digital services economy. Behind those services are data centers and networks that must be scalable, reliable and responsive.

 

Dynamic resource pooling is one of the benefits of a software defined infrastructure (SDI) and helps unlock scalability in data centers to enable innovative services.

 

How does it work? In a recent installment of Intel’s Under the Hood video series, Sandra Rivera, Intel Vice President, Data Center Group and General Manager, Network Platforms Group, provides a great explanation of dynamic resource pooling and what it takes to make it happen.

 

In the video, Sandra explains how legacy networks, built using fixed-function, purpose-built network elements, limit scalability and new service deployment. But when virtualization and software defined networking are combined into a software defined infrastructure, the network can be much more flexibly configured.

 

Pools of virtualized networking, compute and storage functionality can be provisioned in different configurations, all without changing the infrastructure, to support the needs of different applications. This is the essence of dynamic resource pooling.
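As a purely illustrative sketch of that idea (a toy model, not how any SDI stack actually implements it), the snippet below treats compute, storage and network capacity as shared pools that an application request draws from, with no change to the underlying infrastructure description; the pool sizes are made-up numbers.

```python
class ResourcePool:
    """Tracks available capacity for one resource type (vCPUs, GB, Gbps, ...)."""

    def __init__(self, name: str, capacity: float):
        self.name = name
        self.capacity = capacity
        self.allocated = 0.0

    def provision(self, amount: float) -> bool:
        if self.allocated + amount > self.capacity:
            return False  # not enough headroom left in this pool
        self.allocated += amount
        return True

# Hypothetical pool sizes for illustration only.
pools = {
    "compute_vcpus": ResourcePool("compute_vcpus", 512),
    "storage_gb":    ResourcePool("storage_gb", 100_000),
    "network_gbps":  ResourcePool("network_gbps", 400),
}

def provision_app(requirements: dict) -> bool:
    """Carve an application's slice out of the shared pools, all or nothing."""
    granted = []
    for pool_name, amount in requirements.items():
        if pools[pool_name].provision(amount):
            granted.append((pool_name, amount))
        else:
            for name, amt in granted:  # roll back partial grants
                pools[name].allocated -= amt
            return False
    return True

print(provision_app({"compute_vcpus": 64, "storage_gb": 2_000, "network_gbps": 10}))  # True
```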

 

Getting to an infrastructure that supports dynamic resource pooling takes the right platform. Sandra talks about how Intel is helping developers build these platforms with a strategy that starts with powerful silicon building blocks and software ingredient technology, in addition to support for open standards development, building an ecosystem, collaborating on technology trials and delivering open reference platforms.

 

It is an exciting time for the digital services economy – who knows what service will become the next Netflix, Uber or Square!

 

There’s much more to Sandra’s overview of dynamic resource pooling, so I encourage you to watch it in its entirety.

 


Empowering Wiltshire Police Employees with Mobile Technology

What enables you to do really great work? Motivation to do a good job and belief in what you are doing are important. You also need access to the right tools and resources — be they pen and paper, a complex software package, or your team and their expertise. And you need the freedom to decide how you are going to pull all this together to achieve your goals.

 

I’ve recently seen how Wiltshire Police Force has used technology to bring together the combination of drive, the right tools and the freedom to act. Working with Wiltshire Council, it has developed a new approach to policing that empowers staff members to decide how, when and where they work in order to best serve the local community.

 

The organization deployed 600 tablets and laptop PCs, all powered by Intel® Core™ i5 processors, placing one in each patrol vehicle and giving some to back-office support staff. The devices connect (using 3G) to all the applications and systems the officers need. This allows them to check case reports, look up number plates, take witness statements, record crime scene details, and even fill in HR appraisal forms, from any location.


It’s What You Do, Not Where You Do It


Kier Pritchard is the assistant chief constable who drove the project. He and his team follow the philosophy that “work should be what you do, not where you go”. By giving officers the flexibility to work anywhere, he’s empowering them to focus on doing their jobs, while staying out in the community.

 

“We’re seeing officers set up in a local coffee shop, or the town hall,” he said. “In this way they can keep up to date with their cases, but they’re also more in touch with the citizens they serve.”

 

The other advantage of the new model is that officers can be much more productive. There’s no more driving to and from the station to do administrative tasks. Instead, they can catch up on these in quiet periods during their shift. “This essentially means there’s no downtime at all for our officers now,” said Pritchard.

 

The introduction of this new policing approach has gone down well with Wiltshire’s officers. They’ve taken to the devices enthusiastically and are regularly coming up with their own ways of using them to improve efficiency and collaboration.

 

In addition to making the working day more productive and rewarding for its staff, the mobile devices have also made a big difference to Wiltshire residents. Specialists in different departments of the police force are able to collaborate much more effectively by sharing their findings and resources through an integrated platform, making the experience for citizens much smoother. Areas in which the devices are used have also seen an improvement in crime figures thanks to the increased police presence within the community  — for example in the town of Trowbridge, antisocial behaviour dropped by 15.8 percent, domestic burglaries by 34.1 percent, and vehicle crime by 33 percent.

 

You can read more about how the officers are using the devices to create their own ideal ways of working in this recently published case study or hear about it in the team’s own words in this video. In the meantime, I’d love to hear your views on the role of mobile technology in empowering the workforce — how does it work for you?

 

To continue this conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.


Find me on LinkedIn.

Keep up with me on Twitter.


The Evolution of Big Data Use at Intel

BigDataJourney.png

Since Intel IT generated US$351 million in value from Big Data and analytics during 2014, you might wonder how Intel started on the road to that milestone.  In this presentation, “Evolution of Big Data at Intel: Crawl, Walk and Run Approach,” from the 2015 Hadoop Summit in San Jose, Gomathy Bala, Director, and Chandhu Yalla, Manager and Architect, talk about Intel IT’s big data journey. They cover its beginning, current use cases and long-term vision.  Along the way, they offer some useful information for organizations just starting to explore big data techniques and uses.

 

One key piece of advice the presenters mention is to start with small, well-defined projects where you can see a clear return.  That allows an organization to develop the skills to use Big Data with lower risk and known reward, part of the “crawl” stage from the presentation title.  Interestingly enough, Intel IT did not rush out and try to hire people who could immediately start using tools like Hadoop.  Instead, they gathered engineers who were passionate about new technology and trained them on those tools.  This is part of the “walk” stage.  Finally, with that experience, they developed an architecture to use Big Data techniques more generally.  This “run” stage architecture is shown below, where all enterprise data can be analyzed in real time.  We will be talking about Intel’s Data Lake in an upcoming white paper.

 

Another lesson is to evaluate Hadoop distributions and use one that is core open source. This is one of a number of criteria that were established. You can see more on Intel IT’s Hadoop distribution evaluation criteria and how we migrated between Hadoop versions in a previous blog entry.

 

A video of “The Evolution of Big Data at Intel: Crawl, Walk and Run Approach” can be seen here, and the presentation slides are available on SlideShare.  A video of Intel CIO Kim Stevenson talking about Intel’s use of Big Data is shown in the presentation video, but a clearer version can be found here.

BigDataArchitecture.png


Mobility – Transforming the Mobile Field Worker


Mobilizing the Field Worker

 

I recently had the opportunity to host an industry panel discussing the business transformation that occurs when mobility solutions are deployed for field workers. Generally speaking, field workers span a spectrum of industries and currently operate in one of four ways: pen and paper, a laptop tethered to a truck, a consumer-grade tablet, or a single-function device like a bar code scanner.

 

Intel currently defines this market as 10 million workers divided into two general categories – hard hat workers and professional services. Hard hat workers generally function in a ruggedized environment – think construction or field repair teams. Professional services includes real estate appraisal, insurance agents, law enforcement, and many others.

 

Field teams are capable of improving customer service, generating new revenue streams, and actively driving cost reductions.  A successful mobile strategy can enable all three.  Regardless of the industry, field workers need access to vital data when they’re not in the office.

 

The panel of experts consisted of system integrators as well as communication, hardware, and security experts. Together, we discussed the elements required for the successful deployment of a mobile solution.

 

The panel comprised Geoff Goetz from BSQUARE, Nancy Green from Verizon Wireless, and Michael Seawright from Intel Security. They brought a wealth of information, expertise and insight to the panel.  I have tried to share the essence of this panel discussion – I am sure I will not do it justice, as they were truly outstanding.


The field worker segment represents a great business opportunity.  By definition, the field worker is on the front line delivering benefits and services to customers, and they rely on having the right information in real time. Frequently, this information is available only through applications and legacy software running back at headquarters.  In planning for a successful deployment, the enterprise must consider how to connect the field worker to this information. Hardware, applications, and back-office optimizations must all be considered.

 

Geoff Goetz from BSQUARE shared the perspective of both a hardware and system integrator. BSQUARE is a global leader in embedded software and customized hardware solutions. They enable smart connected systems at the device level, which are used by millions every day.  BSQUARE offers solutions for mobile field workers across a spectrum of vertical industries.  They have worked closely with Microsoft to develop a portfolio of Windows 10-based devices in 5-, 8- and 10-inch form factors.  What was interesting to me was the 5-inch Intel-based handheld capable of running Windows 8.1 and soon Windows 10.  The Inari5 fills the void for both field workers and IT managers; it is a compelling solution that doesn’t compromise on performance or functionality.  Geoff and his team truly understand the value of having the right device for the job, as well as the software and applications to accelerate an enterprise while achieving the full benefits of mobilizing its field teams.


Nancy Green from Verizon Wireless highlighted the advantages of utilizing an extensive network to deliver connectivity right to the job site. Verizon Wireless offers a full suite of software solutions and technical capabilities to accelerate mobile programs across industries. Verizon delivers on the value proposition for both the line-of-business manager seeking a competitive advantage and the IT manager looking to easily manage and secure devices in the field. As I mentioned before, one of the most critical requirements for field workers is access to information.  Verizon has worked with numerous companies to unlock workforce optimization by reducing costs, simplifying access to remote data, and increasing collaboration.  I was very impressed with the extensive resources Verizon can bring to bear in designing a mobile solution for field workers.


Michael Seawright from Intel Security is an industry advocate who has been successfully leading business transformation with Intel’s fellow travelers for more than 20 years.  In a hyper-competitive market, the field worker has the opportunity to drive customer goodwill and to address and fix problems the first time, all while driving sell-up.

Meanwhile, many companies are struggling to figure out the right level of management and security for their mobile workforce.

 

One advantage in deploying Intel-based mobile solutions is the built-in security at the processor level.  Ultimately, the device security is only as good as its user’s passwords. The Intel Security team is working to address the vulnerabilities associated with passwords.

 

Ultimately, mobility is a business disruptor offering a chance to transform business processes and gain a competitive advantage.  A successful program requires the IT department and its vendors to think beyond the device.  It requires a solution approach to successfully manage the development, implementation and rollout.  In addition, it may require back-office optimization.  The following image depicts my attempt to highlight the architecture framework that should be considered for a mobile program.


beyond tablets.png

 



Transform Data Centre Networking, Without the Small Talk

If I asked you to play a round of word associations starting with ‘Intel’, I doubt many of you would come back with ‘networking’. Intel is known for a lot of other things, but would it surprise you to know that we’ve been in the networking space for more than 30 years, collaborating with key leaders in the industry? I’m talking computer networking here of course, not the sort that involves small talk in a conference centre bar over wine and blinis. We’ve been part of the network journey from the early Ethernet days, through wireless connectivity, datacentre fabric and on to silicon photonics. And during this time we’ve shipped over 1 billion Ethernet ports.

 

As with many aspects of the move to the software-defined infrastructure, networking is changing – or if it’s not already, it needs to. We’ve spoken in this blog series about the datacentre being traditionally hardware-defined, and this is especially the case with networking. Today, most networks consist of a suite of fixed-function devices – routers, switches, firewalls and the like. This means that the control plane and the data plane are combined with the physical device, making network (re)configuration and management time-consuming, inflexible and complex. As a result, a datacentre that’s otherwise fully equipped with the latest software-defined goodies could still be costly and lumbering. Did you know, for example, that even in today’s leading technology companies, networking managers have weekly meetings to discuss what changes need to be made to the network (due to the global impact even small changes can have), which can then take further weeks to implement? Ideally, these changes should be made within hours or even minutes.

 

So we at Intel (and many of our peers and customers) are looking at how we can take the software-defined approach we’ve used with compute and apply it to the network as well. How, essentially, do we create a virtualised pool of network resources that runs on industry-standard hardware and that we can manage using our friend, the orchestration layer? We need to separate the control plane from the data plane.

 

Intels-workload-consolidation-strategy.png

Building virtual foundations

 

The first step in this journey of network liberation is making sure the infrastructure is in place to support it. Historically, traditional industry-standard hardware wasn’t designed to deal with networking workloads, so Intel adopted a 4:1 workload consolidation strategy which uses best practices from the telco industry to optimise the processing core, memory, I/O scalability and performance of a system to meet network requirements. In practice, this means combining general-purpose hardware with specially designed software to effectively and reliably manage network workloads for application, control, packet and signal processing.

 

With this uber-foundation in place, we’re ready to implement our network resource pools, where you can run a previously fixed network function (like a firewall, router or load balancer) on a virtual machine (VM) – just the same as running a database engine on a VM. This is network function virtualisation, or NFV, and it enables you to rapidly stand up a new network function VM, meeting those hours-and-minutes timescales rather than days-and-weeks. It also effectively and reliably addresses the OpEx and manual provisioning challenges associated with a fixed-function network environment, in the same way that compute virtualisation did for your server farm. And the stronger your fabric, the faster it’ll work – this is what’s driving many data centre managers to consider upgrading from 10Gb Ethernet, through to 40Gb Ethernet and on to 100Gb Ethernet.
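To give a feel for what “standing up a new network function VM” can look like from the operator’s side, here is a hedged, generic sketch: a firewall described as a declarative spec and submitted to an orchestrator over HTTP. The endpoint, field names and image are hypothetical, not any particular NFV product’s API.

```python
import json
import urllib.request

# Hypothetical virtual network function (VNF) spec: a firewall described
# declaratively instead of racked as a fixed-function appliance.
vnf_spec = {
    "name": "edge-firewall-01",
    "function": "firewall",
    "image": "vfw-1.2.qcow2",        # assumed image name
    "vcpus": 4,
    "memory_mb": 8192,
    "networks": ["mgmt", "tenant-a"],
}

def request_vnf(orchestrator_url: str, spec: dict) -> None:
    """POST the spec to a (hypothetical) orchestrator endpoint."""
    req = urllib.request.Request(
        f"{orchestrator_url}/v1/vnf-instances",
        data=json.dumps(spec).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # raises on HTTP errors
        print("orchestrator responded:", resp.status)

# request_vnf("https://orchestrator.example.local", vnf_spec)  # illustrative only
```

The point is that the network function becomes a request against shared capacity, provisioned in minutes, rather than a purchase order for a new appliance.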

 

Managing what you’ve built

 

So, hooray! We now have a path to virtualising our network functions, so we can take the rest of the week off, right? Well, not quite. The next area I want to address is software-defined networking (SDN), which is about how you orchestrate and manage your shiny new virtual network resources at a data centre level. It’s often confused with NFV but they’re actually separate and complementary approaches.

 

Again, SDN is nothing new as a concept. Take storage for example – you used to buy a fixed storage appliance, which came with management tools built-in. However, now it’s common to break the management out of the fixed appliance and manage all the resources centrally and from one location. It’s the same with SDN, and you can think of it as “Network Orchestration” in the context of SDI.

 

With SDN, administrators get a number of benefits:

 

  • Agility. They can dynamically adjust network-wide traffic flow to meet changing needs agilely and in near real-time.
  • Central management. They can maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
  • Programmatic configuration. They can configure, manage, secure and optimise network resources quickly, via dynamic, automated SDN programs which they write themselves (see the sketch after this list), making them tailored to the business.
  • Open standards and vendor neutral. They get simplified network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols. This open standards point is key from an end user perspective as it enables centralised management.
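As a hedged illustration of the programmatic configuration point above (a toy model, not OpenFlow or any real controller API), an SDN program can be thought of as reconciling a desired set of flow rules against what is currently installed, then pushing only the difference:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRule:
    match: str       # e.g. "dst=10.0.2.0/24"
    action: str      # e.g. "forward:port3" or "drop"
    priority: int = 100

def reconcile(desired: set, installed: set):
    """Compute what a controller would push to converge the network."""
    to_install = desired - installed
    to_remove = installed - desired
    return to_install, to_remove

installed = {
    FlowRule("dst=10.0.1.0/24", "forward:port1"),
    FlowRule("dst=0.0.0.0/0", "drop", priority=1),
}

# Business need changes: steer a new tenant subnet out of port 3.
desired = installed | {FlowRule("dst=10.0.2.0/24", "forward:port3")}

to_install, to_remove = reconcile(desired, installed)
print("install:", to_install)   # only the new tenant rule
print("remove: ", to_remove)    # nothing to remove in this case
```

Because the change is computed and pushed centrally, it lands in near real time instead of waiting on a weekly change meeting.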

 

Opening up

 

There’s still a way to go with NFV and SDN, but Intel is working across the networking industry to enable the transformation. We’re doing a lot of joint work in open source solutions and standards, such as OpenStack.org – unified computing management including networking, OpenDaylight.org – a platform for network programmability, and also the Cisco* Opflex Protocol – an extensible policy protocol. We’re also looking at how we proceed from here, and what needs to be done in order to build an open, programmable ecosystem.

 

Today I’ll leave you with this short interview with one of our cloud architects, talking about how Intel’s IT team has implemented software-defined, self-service networking. My next blog will be the last in this current series, and we’ll be looking at that other hot topic for all data centre managers – analytics. In the meantime, I’d love to hear your thoughts on how your business could use SDN to drive time, cost and labour out of the data centre.

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.


*Other names and brands may be claimed as the property of others.


SDI: The Foundation for Cloud

When it comes to the cloud, there is no single answer to the question of how to ensure the optimal performance, scalability, and portability of workloads. There are, in fact, many answers, and they are all tied to the interrelated layers of the software-defined infrastructure (SDI) stack. The recently announced Intel Cloud for All Initiative is focused directly on working with cloud software vendors and the community to deliver fully optimized SDI stacks that can serve a wide array of apps and data.  To better understand the underlying strategy driving the Cloud for All Initiative, it’s important to see the relationships between each layer of the SDI stack.

 

In this post, we will walk through the layers of the SDI stack, as shown here.

 

sdi-stack.png

 

The foundation

 

The foundation of Software Defined Infrastructure is the creation of infrastructure resource pools establishing compute, storage and network services.  These resource pools utilize the performance and platform capabilities of Intel architecture to enable applications to understand and then control what they consume. Our work with the infrastructure ecosystem is focused on ensuring that the infrastructure powering the resource pools is always optimized for a wide array of SDI stacks.

The OS layer

 

At the operating system level, the stack includes commonly used operating systems and software libraries that allow applications to achieve optimum performance while enabling portability from one environment to another. Intel has a long history of engineering with both OS vendors and the community, and has extended this work to lightweight OSes that provide greater efficiency for cloud native workloads.

 

The Virtualization layer

 

Moving up the stack, we have the virtualization layer, which is essential to software-defined infrastructure. Without virtualization, SDI would not be possible. But in this context, virtualization can include more than just typical hypervisors. In order to establish resource pools, the infrastructure components of compute, storage, and network are virtualized through various means.  The most effective resource pools are those that can continue to scale out to meet the growing needs of their consumers. Last but not least, the performance isolation provided by containers can be considered OS virtualization, which has enabled a whole new set of design patterns for developers to use.  For both containers and hypervisors, Intel is working with software providers to fully utilize the capabilities of Intel® Virtualization Technology (Intel® VT) to drastically reduce performance overhead and increase security isolation.  For both storage and network, we have additional libraries and instruction sets that help deliver the best performance possible for this wide array of infrastructure services.

 

The Orchestration layer

 

There are numerous orchestration layers and schedulers available; however, for this discussion we will focus on those being built in the open: OpenStack, Apache Mesos, and Kubernetes.  This layer provides central oversight of the status of the infrastructure, what is allocated and what is consumed, how applications or tenants are deployed, and how to best meet the goal of most DC infrastructure teams: increase utilization while maintaining performance. Intel’s engagement within the orchestration layer focuses on working with the industry to both harden this layer and bring in advanced algorithms that can help all DCs become more efficient.  Some examples are our work in the OpenStack community to improve the availability of the cloud services themselves and to provide rolling upgrades so that the cloud and tenants are always on.  In Mesos, we are working to help users of this technology use all available computing slack so they can improve their TCO.
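As a loose illustration of that “increase utilization while maintaining performance” goal (a toy heuristic of my own, not the scheduling logic of OpenStack, Mesos or Kubernetes), a scheduler can favour the node with the least spare capacity that still fits a workload, packing work tightly and leaving whole nodes free for larger tenants:

```python
def pick_node(free_capacity: dict, cpu_needed: float, mem_needed: float):
    """Best-fit choice: the node with the least spare CPU that can still hold
    the workload, so existing nodes fill up before new ones are touched."""
    candidates = [
        (name, free)
        for name, free in free_capacity.items()
        if free["cpu"] >= cpu_needed and free["mem"] >= mem_needed
    ]
    if not candidates:
        return None  # nothing fits: queue the workload or scale out
    return min(candidates, key=lambda item: item[1]["cpu"] - cpu_needed)[0]

# Hypothetical free capacity per node (CPU cores, memory in GB).
free_capacity = {
    "node-a": {"cpu": 10, "mem": 48},
    "node-b": {"cpu": 4,  "mem": 16},
    "node-c": {"cpu": 24, "mem": 96},
}

print(pick_node(free_capacity, cpu_needed=3, mem_needed=8))    # node-b (tightest fit)
print(pick_node(free_capacity, cpu_needed=32, mem_needed=8))   # None (no node fits)
```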

 

The Developer environment

 

The entire SDI infrastructure is really built to power the developers’ code and data, which all of us as consumers use every day of our lives.  Intel has a long history of helping improve debugging tools, making it easier for developers to move to new design patterns like multi-threading and now distributed systems, and helping developers get the most performance out of their code.  We will continue to increase our focus here to make sure that developers can focus on making the best software, and let the tools help them build always-on, highly performant apps and services.

 

For a close-up look at Intel’s focus on standards-based innovation for the SDI stack, check out the related sessions at the Intel Developer Forum, which takes place August 18 – 20 in San Francisco. These events will include a class that dives down into the Intel vision for the open, standards-based SDI stacks that are the key to mainstream cloud adoption.


Taking the Training Wheels off Cloud with Cloud for All Initiative

Cloud computing has been a tremendous driver of business growth over the past five years.  Digital services such as Uber, AirBnB, Coursera, and Netflix have defined the consumer zeitgeist while redefining entire industries in the process.  This first wave of cloud-fueled business growth has largely been created by businesses leveraging cloud native applications aimed at consumer services.  Traditional enterprises that seek the same agility and efficiency the cloud provides have viewed migration of traditional enterprise applications to the cloud as a slow and complex challenge.  At the same time, new cloud service providers are seeking to compete at cost parity with large providers, and industry-standard solutions that could help have been slow to arrive.  The industry simply isn’t moving fast enough to address these very real customer challenges, and our customers are asking for help.

 

To help solve these real issues, Intel is announcing the Cloud for All Initiative with the goal of accelerating the deployment of tens of thousands of clouds over the next five years. This initiative is focused solely on cloud adoption to deliver the benefits of cloud to all of our customers.  This represents an enormous efficiency and strategic transition for Enterprise IT and Cloud Service Providers.  The key to delivering the efficiency of the cloud to the enterprise is rooted in software defined infrastructure. This push for more intelligent and programmable infrastructure is something that we’ve been working on at Intel for several years. The ultimate goal of Software Defined Infrastructure is one where compute, storage and network resource pools are dynamically provisioned based on application requirements.

 

Cloud for All has three key objectives:

 

  1. Invest in broad industry collaborations to create enterprise ready, easy to deploy SDI solutions
  2. Optimize SDI stacks for high efficiency across workloads
  3. Align the industry towards standards and development focus to accelerate cloud deployment

 

Through investment, Intel will utilize our broad ecosystem relationships to ensure that a choice of SDI solutions supporting both traditional enterprise and cloud native applications is available in easy-to-consume options.  This work will include scores of industry collaborations that ensure SDI stacks have frictionless integration into data center infrastructure.

 

Through optimization, Intel will work with cloud software providers to ensure that SDI stacks are delivered with rich enterprise feature sets, highly available and secure, and scalable to thousands of nodes.  This work will include the full optimization of software to take advantage of Intel architecture features and technologies like Intel virtualization technology, cloud integrity technology, and platform telemetry, all to deliver optimal enterprise capabilities.

 

Through industry alignment, Intel will use its leadership role in industry organizations, as well as our work with the broad developer community, to ensure that the right standards are in place so that workloads have true portability across clouds. This standardization will help enterprises have the confidence to deploy a mix of traditional and cloud native applications.

 

This work has already started.  We have been engaged in the OpenStack community for a number of years as a consumer, and more recently through joining the Foundation board last year. We have used that user and leadership position to push for features needed in the enterprise. Our work does not stop there, however; over the past few months we’ve announced collaborations with cloud software leaders including CoreOS, Docker and Red Hat, highlighting enterprise readiness for OpenStack and container solutions.  We’ve joined with other industry leaders to form the Open Container Initiative and Cloud Native Computing Foundation to drive the industry standards and frameworks for cloud native applications.

 

Today, we’ve announced our next step in Cloud for All with a strategic collaboration with Rackspace, the co-founder of OpenStack and a company with a deep history of collaboration with Intel.  We’ve come together to deliver a stable, predictable, and easy-to-operate enterprise-ready OpenStack scalable to thousands of nodes. This will be accomplished through the creation of the OpenStack Innovation Center, where we will assemble large developer teams across Intel and Rackspace to work together to address the key challenges facing the OpenStack platform.  Our upstream contributions will align with the priorities of the OpenStack Foundation’s Enterprise Workgroup. To facilitate this effort we will create the Hybrid Cloud Testing Cluster, a large-scale environment open to all developers in the community who wish to test their code at scale, with the objective of improving the OpenStack platform.  In total, we expect this collaboration to engage hundreds of new developers, internally and through community engagement, to address critical requirements for the OpenStack community.

 

Of course, we’ve only just begun.  You can expect to hear dozens of announcements from us in the coming year including additional investments and collaborations, as well as the results of our optimization and delivery.  I’m delighted to be able to share this journey with you as Cloud for All gains momentum. We welcome discussion on how Intel can best work with industry leaders and customers to deliver the goals of Cloud for All to the enterprise.


10 Mobile BI Strategy Questions: Technology Infrastructure

When an organization is considering implementing a mobile BI strategy, it needs to consider whether its current information technology (IT) and business intelligence (BI) infrastructure can support mobile BI. It must determine if there are any gaps that need to be addressed prior to going live.

 

When we think of an end-to-end mobile BI solution, there are several areas that can impact the user experience. I refer to them as choke points. Some of the risks associated with these choke points can be eliminated; others will have to be mitigated. Depending on the business model and how the IT organization is set up, these choke points may be dependent on the configuration of technology or they may hinge on processes that are embedded into business or IT operations. Evaluating both infrastructures for mobile BI readiness is the first step.

 

IT Infrastructure’s Mobile BI Readiness

 

The IT infrastructure typically includes the mobile devices, wireless networks, and any other services or operations that will enable these devices to operate smoothly within a set of connected networks, which span those owned by the business or external networks managed by third party vendors. As mobile BI users move from one point of access to another, they consume data and assets on these connected networks and the mobile BI experience should be predictable within each network’s constraints of flexibility and bandwidth.

 

Mobile device management (MDM) systems also play a crucial role in the IT infrastructure. Before mobile users have a chance to access any dashboards or look at data on any reports, their mobile devices need to be set up first. Depending on the configuration, enablement may include device and user enrollment, single sign-on (SSO), remote access, and more.

 

Additionally, failing to properly enroll either the device or the user may result in compliance issues or other risks. It’s critical to know how much of this comes preconfigured with the device and how the user will manage these tasks. When you add to the mix the bring-your-own-device (BYOD) arrangements, the equation gets more complex.

 

BI Infrastructure’s Mobile BI Readiness

 

Once the user is enabled on the mobile device and business network, the BI infrastructure will be employed. The BI infrastructure typically includes the BI software, hardware, user profiles, and any other services or operations that will enable consumption of BI assets on mobile devices. The mobile BI software, whether it is an app or web-based solution, will need to be properly managed.

 

The first area of concern for an app-based solution is the installation of the app from an app store. For example, does the user download the app from iTunes (in the case of an iPad or iPhone) or from an IT-managed corporate app store or gallery? Is it a custom-built app developed in-house or is it part of the current BI software? Does the app come preconfigured with the company-supplied mobile device (similar to how email is set up on a PC) or is the user left alone to complete the installation?

 

When the app is installed, are we done? No. In many instances, the app will need to be configured to connect to the mobile BI servers. Moreover, this configuration step needs to come after obtaining proper authorizations, which involves entering the user’s access credentials (at minimum a user ID and password, unless SSO can be leveraged).

 

If the required authorization requests, regardless of existing BI user profiles, are not obtained in time, the user configuration can only be completed partially. More often than not, the mobile BI users will need assistance with technical and process-related topics. Hence, streamlining both installation and configurations steps will further improve the onboarding process.
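One lightweight way to streamline those installation and configuration steps is an onboarding preflight check that reports exactly which step is still blocking a user. The sketch below is a generic illustration; the step names are assumptions, and the lambdas stand in for real MDM, app-inventory and BI-server lookups.

```python
from typing import Callable, Dict, List

def blocking_steps(checks: Dict[str, Callable[[], bool]]) -> List[str]:
    """Return the onboarding steps that are still incomplete for this user."""
    return [step for step, check in checks.items() if not check()]

# The lambdas below are placeholders; in practice each one would query the
# MDM system, the corporate app store, or the BI server's security model.
checks = {
    "device enrolled in MDM":          lambda: True,
    "user enrolled / SSO configured":  lambda: True,
    "mobile BI app installed":         lambda: True,
    "app configured for BI servers":   lambda: False,
    "BI authorization granted":        lambda: False,
}

missing = blocking_steps(checks)
if missing:
    print("Onboarding incomplete:", "; ".join(missing))
else:
    print("User is ready for mobile BI.")
```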

 

Bottom Line

 

Infrastructure is the backbone of any technology operation, and it’s equally important for mobile BI. Close alignment with enterprise mobility, as I wrote in “10 Mobile BI Strategy Questions: Enterprise Mobility,” will help to close the gaps in many of these areas. When we’re developing a mobile BI strategy, we can’t take the existing IT or BI infrastructure for granted.

 

Where do you see the biggest gap when it comes to technology infrastructure in mobile BI planning?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter at @KaanTurnali and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.


Cloud Native Computing Foundation Charters Standards for Cloud Native App Portability


With the cloud software industry advancing on a selection of Software Defined Infrastructure ‘stacks’ to support enterprise data centers, the question of application portability comes squarely into focus. A new ‘style’ of application development has started to gather momentum in both the public cloud and the private cloud. Cloud native applications, as this new style has been named, are those applications that are container packaged, dynamically scheduled, and microservices oriented. They are rapidly gaining favor for their improved efficiency and agility as compared to more traditional monolithic data center applications.
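For readers newer to the style, a “microservices oriented” component can be as small as the sketch below: one stateless HTTP endpoint, written here with only Python’s standard library so it could be container packaged and handed to any scheduler. It is a generic illustration, not a reference to any specific application mentioned in this post.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import os

class CatalogHandler(BaseHTTPRequestHandler):
    """A single, narrowly scoped service: it answers /health and /price, nothing more."""

    def do_GET(self):
        if self.path == "/health":
            body = {"status": "ok"}
        elif self.path == "/price":
            body = {"sku": "demo-123", "price_usd": 9.99}  # illustrative payload
        else:
            self.send_response(404)
            self.end_headers()
            return
        data = json.dumps(body).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    # A scheduler (Kubernetes, Mesos, etc.) would inject configuration via
    # environment variables and decide where and how many copies run.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), CatalogHandler).serve_forever()
```

Even a service this small still needs networking, storage for anything it persists, and a scheduler to place and scale it, which is exactly where the SDI stack comes in.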

 

However, creating a cloud native application does not eliminate the dependencies on traditional data center services. Foundational services such as networking, storage, automation, and of course, compute are all still very much required.  In fact, since the concept of a full virtual machine may not be present in a cloud native application, these applications rely significantly on their infrastructure software to provide the right components. When done well, a ‘cloud native application’ SDI stack can provide efficiency and agility previously seen only in a few hyperscale environments.

 

Another key aspect of the cloud native application is that it should be highly portable. This portability between environments is a massive productivity gain for both developers and operators. An application developer wants the ability to package an application component once and have it be reusable across all clouds, both public and private. A cloud operator wants the freedom to position portions of their application where it makes the most sense. That location may be on their private cloud or with their public cloud partner. Cloud native applications are the next step in true hybrid cloud usage.

 

So, with this promise of efficiency, operational agility, and portability, where do data center managers seek the definitions for how the industry will address movement of apps between stacks? How can one deploy a cloud native app and ensure it can be moved across clouds and SDI stacks without issue?  Without a firm answer, can one really develop cloud native apps with the confidence that portability will not be limited to those environments running identical SDI stacks?  These are the types of questions that often stall organizational innovation, and they are the reason Intel has joined with other cloud leaders in the formation of the Cloud Native Computing Foundation (CNCF).

 

Announced this week at the first-ever KuberCon event, the CNCF has been chartered to provide guidance, operational patterns, standards and, over time, APIs to ensure container-based SDI stacks are both interoperable and optimized for a seamless, performant developer experience. The CNCF will work with the recently formed Open Container Initiative (OCI) towards a synergistic goal of addressing the full scope of container standards and the supporting services needed for success.

 

Why announce this at KuberCon? The goal of the CNCF is to foster innovation in the community around these application models. The best way to speed innovation is to start with some seed technologies. Much the same way it is easier to start writing (a blog perhaps?) when you have a few sentences on screen, rather than staring at a blank page, the CNCF is starting with some seed technologies. Kubernetes, having just passed its 1.0 release, will be one of the first technologies used to kick start this effort. Many more technologies, and even full stacks, will follow, with a goal of several ‘reference’ SDI platforms that support the portability required.

 

What is Intel’s role here? Based on our decades of experience helping lead industry innovation and standardization across computing hardware and open source software, we are firmly committed to the CNCF goals and plan to participate actively in the leadership body and Technical Oversight Committee of the Foundation. This effort reflects our broader commitment to working with the industry to accelerate the broad use of cloud computing through delivery of optimized SDI stacks that are easy to consume, easy to operate, and feature complete. The engagement complements our existing leadership roles in the OCI, the OpenStack Foundation, and the Cloud Foundry Foundation, as well as our ongoing work driving solutions with the SDI platform ecosystem.

 

With the cloud software industry accelerating its pace of innovation, please stay tuned for more details on Intel’s broad engagement in this space. To deepen your engagement with Intel, I invite you to join us at the upcoming Intel Developer Forum in San Francisco to gain a broader perspective on Intel’s strategy for accelerating the cloud.

Read more >

Intel, Industry Highlights Latest Collaboration at KuberCon


 

 

My hometown of Portland, Oregon is home this week to the first ever KuberCon Launch event, bringing together the Kubernetes ecosystem at OSCON. While the industry celebrates the delivery of Kubernetes 1.0 and the formation of the Cloud Native Computing Foundation, this week is also an opportunity to gauge the state of development around open source container solutions.

 

Why so much attention on containers? Basically, it is because containers help software developers and infrastructure operators at the same time. The technology will help put mainstream data centers and developers on the road to the advanced, easy-to-consume, easy-to-ship-and-run, hyperscale technologies that are a hallmark of the world’s largest and most sophisticated cloud data centers. The container approach packages up applications and software libraries to create units of computing that are both scalable and portable, two keys to the agile data center. With the addition of Kubernetes and other key technologies like Mesos, the orchestration and scheduling of those containers is making the formerly impossible simple.
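
As a rough sketch of what “packaging applications and software libraries into a portable unit” looks like in practice, the snippet below drives the standard docker CLI from Python to build an image from a directory containing a Dockerfile and run it locally. The image tag, directory, and port are hypothetical placeholders, and the hand-off to a scheduler such as Kubernetes or Mesos is only noted in comments.

```python
# Illustrative sketch: package an app directory (with its Dockerfile) into a
# container image and run it. Assumes the docker CLI is installed; the image
# tag, path, and port below are hypothetical placeholders.
import subprocess

IMAGE = "example/portable-service:latest"   # hypothetical image tag
APP_DIR = "./service"                       # directory containing a Dockerfile

# Build: the application plus its libraries become one shippable artifact.
subprocess.run(["docker", "build", "-t", IMAGE, APP_DIR], check=True)

# Run: the same artifact can be started here, on another host, or handed to a
# scheduler such as Kubernetes or Mesos for placement and scaling.
subprocess.run(["docker", "run", "--rm", "-p", "8080:8080", IMAGE], check=True)
```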

 

This is a topic close to the hearts of many people at Intel. We are an active participant in the ecosystem working to bring the container model to a wide range of users and data centers as part of our broader strategy of standards-based stack delivery for software defined infrastructure. This involvement was evidenced earlier this year through our collaborations with both CoreOS and Docker, two leading software players in this space, as well as our leadership engagement in the new Open Container Project.

 

As part of the effort to advance the container cause, Intel is highlighting the latest advancements in our collaboration with CoreOS to advance and optimize the Tectonic stack, a commercial distribution of Kubernetes plus CoreOS software. At KuberCon, Intel, Redapt, Supermicro and CoreOS are showing a Tectonic rack running on bare metal, highlighting the orchestration and portability that Tectonic brings to data center workloads. Local rock-star company Jive has been very successful running its workloads on this platform, showing that its app can move between public cloud and on-premises bare metal cloud. We’re also announcing extensions of our collaboration with CoreOS to drive broad developer training for Tectonic and title sponsorship of CoreOS’s Tectonic Summit event planned for December 2nd and 3rd in New York. For details, check out the CoreOS news release.

 

We’re also featuring an integration of an OpenStack environment running Kubernetes-based containers within an enterprise-ready appliance. This collaboration with Mirantis, Redapt and Dell highlights the industry’s work to turn open source SDI stacks into solutions that address enterprise customers’ need for simpler-to-deploy offerings, and it demonstrates the progress the industry has made in integrating Kubernetes with OpenStack as Kubernetes reaches 1.0.

 

Our final demonstration features a new software and hardware collaboration with Mesosphere, the company behind much of the engineering for Mesos, which provides container scheduling for Twitter, Apple’s Siri, and Airbnb, among other digital giants. Here, we’ve worked to integrate Mesosphere’s DCOS platform with Kubernetes on a curated and optimized hardware stack supplied by Quanta. This is yet another example of an open source SDI stack integrating efficient container-based virtualization to deliver the portability and orchestration of hyperscale.

 

For a closer look at Intel’s focus on standards-based innovation for the software-defined infrastructure stack, check out my upcoming presentation at the Intel Developer Forum (IDF). I’ll be detailing further advancements in our industry collaborations to deliver SDI to the masses, as well as going deeper into the technologies Intel is integrating into data center infrastructure to optimize SDI stacks for global workload requirements.

Read more >

5 Questions for Tracey Moorhead, President, VNAA

 

As populations age around the world, home healthcare will become a more vital part of caring for senior patients. To learn more about this growing trend, and how technology can play a role, we sat down with Tracey Moorhead, president and CEO of the Visiting Nurse Associations of America (VNAA), which represents non-profit providers of home health, hospice, and palliative care services and has more than 150 agency members in communities across the country.

 

Intel: How has technology impacted the visiting nurse profession?

 

Moorhead: Technology has impacted the profession of home care providers particularly by expanding the reach of our various agencies. It allows our agencies to cover greater territories. I have a member in Iowa who covers 24,000 square miles, and they utilize a variety of technologies to provide services to patients in communities located quite far from the agencies themselves. It has also impacted the individual providers by helping them communicate more quickly back to the home office and to the nurses making decisions about the course of care for individual patients.

 

The devices that our members and their nurses are utilizing are increasingly tablet-based. We do have some agencies who are utilizing smartphones, but for the most part the applications, the forms and checklists that our nurses utilize in home based care are better suited for a tablet-based app.

 

Intel: What is the biggest challenge your members face?

 

Moorhead: One of the biggest challenges we have in better utilizing technology in the home based care industry is interoperability, not only of devices but also of the platforms on those devices. An example is interoperability of electronic health records. Our individual agencies may be collaborating with two or more hospital systems, which may have two or more electronic health record systems in use. Combine that with different physician groups or practice models, each with different applications, and you have a recipe for chaos in terms of interoperability and the rapid sharing and coordination of care for patients out in the field. The challenges of interoperability are quite significant: they prevent effective handoffs, they cause great difficulty in effective and rapid care coordination among providers, and they continue to sustain the fragmentation of healthcare that we’ve seen.

 

Intel: What value are patients seeing with the integration of technology in care?

 

Moorhead: Patients and family caregivers have responded so positively to the integration of these new technologies and apps. Not only does technology allow for our nurses to communicate with family members and caregivers to help them understand how to best care for and support their loved ones, but it also allows the patients to have regular communication with their nurse care providers when they’re not in the home. Our patients are able to contact the home health agency or their nurse on days when there may not be a scheduled visit.

 

I visited a family in New Jersey with one of our agencies, and they were so excited that it was visit day. When the nurse arrived, not only was the wife there, but the two daughters, the daughter-in-law and the son were also there to greet the nurse and to talk with her at length about the father’s progress and the challenges they were having caring for him. That experience really brought home for me the person-centered, patient-centered, family-centered care that our members provide. The technologies being used in that home, including a tablet with an app the nurse had provided so the family could contact the home health agency, really made the family feel they had the support they needed to best care for their father and husband.

 

Intel: How are the next generation of home care providers adapting to technology?

 

Moorhead: The next generation of nurses, the younger nurses who are just entering the field and deciding to devote themselves to the home based care delivery system, are very accustomed to utilizing technologies, whether on their tablets or their mobile phones, and have integrated this quite rapidly into their care delivery models and processes. Many of them report to us that they feel it provides them a significant degree of freedom and support for the care delivery to their patients in the home.

 

Intel: Where will the home care profession be in five years from now?

 

Moorhead: I see significant change coming in our industry in the next five years. We are, right now, in the midst of a cataclysm of evolution for the home based care provider industry and I see only significant opportunities going forward. It’s certainly true that we have significant challenges, particularly on the regulatory and administrative burden side, but the opportunities in new care delivery models are particularly exciting for us. We see the quality improvement goals, the patient-centered goals and the cost reduction goals of care delivery models such as accountable care organizations and patient-centered medical homes as requiring the integration of home based care providers. Those organizations simply will not be able to achieve the outcomes or the quality improvement goals without moving care into the community and into the home. And so, I see a rapid expansion and increased valuation of home based care providers.

 

The technologies that we see implemented today will only continue to enhance the ability to care for these patients, to coordinate care and to communicate back to those nascent health delivery models, such as ACOs and PCMHs.

Read more >

IT Refresh is Over-Rated


 

I have to admit my headline is a little tongue-in-cheek, but please hear me out.

 

With the recent end of support for Windows Server 2003, the noise around refresh can be deafening.  I’d bet many people have already tuned out or have grown weary of hearing about it. They’ve listened to the incessant arguments about increased risks of security breaches, issues around compatibility, and estimates of costly repair bills should something go wrong.

 

While it’s true all of these things will leave a business vulnerable and less productive, it appears that’s just not enough for many companies to make a shift. 

 

Microsoft Canada estimates that 40% of its install base is running Windows Server 2003, illustrating Canadian companies’ conservatism when it comes to major changes to their infrastructure.  I’d suggest that in this current economic environment, being conservative in the adoption of new technology is leaving us vulnerable to an attack that could have a far reaching impact.

 

But let’s set that aside for the time being. You’ll be pleased to know I’m not going to talk about all the common reasons for refreshing your hardware and software, including security, productivity, downtime, and support costs. All these issues are important and valid; however, I have no doubt we will hear a great deal about them in the weeks leading up to July 14th.

 

Instead I offer you a slightly different perspective.

 

In a previous post I wrote about a global marketplace that is getting more competitive.  Canadian companies are, and will be, facing off against larger enterprises located around the world. Competition is no longer from across the street or in the neighboring town. Canadian trade agreements have been signed or updated with numerous countries or economic regions globally including the European Union, China, Korea, and Chile. While these agreements signal opportunities for businesses to gain access to new markets, they also herald the risk of increased domestic competition.

 

To continue to succeed, businesses will have to find more efficient ways of doing whatever they need to get done.  This means pushing beyond their traditional comfort zone towards greater innovation.  This push will undoubtedly be enabled by advances in technology to support productivity gains.

 

As companies consider what it will take to succeed in the future, I believe you need to look at the people working for your company. Are these the employees who can drive your company forward? Are they future leaders or innovators who can help you compete against global powerhouses?

 

Here’s where an important impetus to refresh your technology begins to take shape.

 

The employees of Generation Y and Generation C, also known as the connected generation, want to work for progressive, leading-edge companies and are shying away from large, stodgy traditional businesses and governments. Being perceived as dated will limit your recruitment options as the top candidates choose firms that are progressive in all areas of their business.

 

I’ve seen statistics indicating that 75% of the workforce will be made up of Generation Y workers by 2025. They are already sending ripples of change throughout corporate cultures and have started to cause a shift in employment expectations. These employees aren’t attracted to big blue chip firms, and in fact only 7% of millennials currently work for Fortune 500 companies.  Instead, they are attracted to the fun, dynamic, and flexible environments touted by start-ups.

 

The time has never been better to decide if you are going to continue to rinse and repeat, content to stick with the status quo, or if you are ready to embrace a shift that could take your business to the next level and at the same time position yourself to become more attractive to the next generation of employees.

 

So let’s talk a little about the opportunity here: In my experience from the UK, and I would argue it is similar in any market, new opportunities are realized first by small- and medium-sized businesses. They are more nimble and typically they are in a better position to make a significant change more quickly.  Since they are also closest to their customers and their local community, they can shift gears more rapidly to respond to changes they are seeing in their local market and benefit from offering solutions first that meet an emerging need.

 

SMBs are also in a strong position to navigate and overcome barriers to adoption of new technology since they don’t have that massive install base that requires a huge investment to change. In other words, they don’t have a mountain of technology to climb in order to deliver that completely new environment, but it takes vision and leadership willing to make a fundamental shift that will yield future dividends.

 

The stark truth is that millennials are attracted to and have already adopted the latest technology. They don’t want to take a step backward when they head into the workplace.  The technologies they will use and the environment they will be working in are already being factored into their decision about whether or not to accept a new position.

 

How do you think your workplace would be viewed by these future employees?  It goes without saying that we need to equip people to get their jobs done without worrying about the speed of the technology they’re using. No one wants delays caused by technology that is aging, slowing them down, and preventing them from doing what they need to get done.  Today’s employees are looking for more: more freedom, more flexibility, and more opportunities. But the drive to provide more is accelerating a parallel requirement for increased security to keep sensitive data safe.

 

I’d offer these final thoughts:  In addition to the security and productivity reasons, companies challenged to find talent should consider a PC refresh strategy as a tool to attract the best and brightest of the next generation.

 

Technology can be an enabler to fundamentally transform your workplace but you need a solid foundation on which to build.  A side benefit is that it will also help deliver the top talent to your door.

Read more >

Big Data in Life Sciences: The Cost of Not Being Prepared

For years, the term “Big Data” has been thrown around the healthcare and life science research fields as if it were a new fashion that was merely trendy to talk about. In some sense, everyone knew the day was coming when the amount of data being generated would outpace our ability to process it unless major steps were taken immediately to stave off that eventuality. But many IT organizations chose to treat the warnings of impending overload much as Y2K was treated in its aftermath: as a false threat, with no real issue to prepare for. That was five years ago, and the time for big data has come.

 

The pace at which life science-related data can be produced has increased at a rate that far exceeds Moore’s Law, and it has never been cheaper or easier for scientists and clinical researchers to acquire data in vast quantities. Many research computing environments have found themselves in the middle of a data storm, in which researchers and healthcare professionals need enormous amounts of storage and need to analyze the stored data with alacrity so that discoveries can be made and cures for disease become possible. Because of this lack of preparedness on the organizations’ part, researchers have found themselves stranded in a research computing desert with nowhere to go and the weight of that data threatening to collapse onto them.

 

Storage and Compute

The net result of IT calling the scientists’ assumed bluff is that IT organizations are unprepared to provide the sheer amount of storage the research requires, and even when they can provide that storage, they don’t have enough compute power to work through the data (so that it can be archived), causing a backlog of stored data that compounds as more and more data pours into the infrastructure. To make matters worse, scientists are left with the option of moving the data elsewhere for processing and analysis. Sometimes well-funded laboratories purchase their own HPC equipment, sometimes cloud-based compute and storage is purchased, and sometimes researchers find a collaborator with access to an HPC system they can use to chew through the backlog. Unfortunately, these solutions create another barrier: how to move that much data from one point to another. Most organizations don’t have Internet connections much above 1 Gbps for the entire organization, while many of these datasets are tens to hundreds of terabytes (TBs) in size and would take days to weeks to move over those connections even at saturation (which would effectively shut down the Internet connection for the organization). So, being the resourceful folks they are, scientists then take to physically shipping hard drives to their collaborators, which has its own complex set of issues to contend with.
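
As a rough illustration of that arithmetic, the sketch below estimates transfer times for a handful of dataset sizes over a 1 Gbps organizational link. The sizes, the 30% share figure, and the helper name are illustrative assumptions, not measurements from any particular site.

```python
# Back-of-the-envelope transfer-time estimate for moving large datasets over a
# shared organizational link. Dataset sizes and link speeds are illustrative.

def transfer_days(dataset_tb: float, link_gbps: float, efficiency: float = 1.0) -> float:
    """Days needed to move `dataset_tb` terabytes over a `link_gbps` link.

    `efficiency` < 1.0 models protocol overhead and competing traffic.
    """
    bits = dataset_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)   # bits / (bits per second)
    return seconds / 86400

for tb in (10, 50, 250):
    print(f"{tb:>4} TB over 1 Gbps (saturated): {transfer_days(tb, 1.0):6.1f} days")
    print(f"{tb:>4} TB over 1 Gbps (30% share): {transfer_days(tb, 1.0, 0.3):6.1f} days")
```

At full saturation, roughly 10 TB fits in about a day, while a few hundred terabytes stretches into weeks; on a link shared with the rest of the organization, the numbers get much worse.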

 

The depth of the issues that have arisen from this lack of preparedness in research- and healthcare-based organizations is so profound that many of them find it difficult to attract and hire the talent they need to accomplish their missions. New researchers, and those at the forefront of laboratory technologies, largely understand their computational requirements. If a hiring organization isn’t going to be able to provide for them, they look elsewhere.

 

Today and Tomorrow

As such, these organizations have finally started to make proper investments in research computing infrastructure, and the problem is slowly starting to get better. But many of them are taking the approach of funding only what they must today to get today’s jobs done. This approach is a bit like expanding a highway in a busy city to meet the current population’s needs rather than building for 10 years from now: by the time the highway is completed, the population will have already exceeded its capacity, and it won’t make a difference. Building this infrastructure the right way, for an unpredictable point in the future, is scary and quite expensive, but the alternative is the likely failure of the organization to meet its mission. Research computing is now a reality in life science and healthcare research, and not investing will only slow things down and cost organizations much more in the future.
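
To show why funding only today’s footprint falls short, here is a minimal planning sketch. The 60% annual growth rate, the 2 PB starting point, and the five-year horizon are hypothetical assumptions chosen for illustration, not figures from this post.

```python
# Simple capacity projection: storage provisioned to match today's need versus
# data that keeps growing. The growth rate and starting sizes are hypothetical
# planning assumptions, not figures from the original post.

def project(start_pb: float, annual_growth: float, years: int):
    """Yield (year, projected petabytes) assuming compound annual growth."""
    size = start_pb
    for year in range(years + 1):
        yield year, size
        size *= (1 + annual_growth)

provisioned_pb = 2.0   # capacity bought to cover only today's footprint
for year, need_pb in project(start_pb=2.0, annual_growth=0.60, years=5):
    status = "OK" if need_pb <= provisioned_pb else "SHORT"
    print(f"year {year}: need {need_pb:6.1f} PB, have {provisioned_pb:.1f} PB -> {status}")
```

Under these assumptions, the 2 PB bought for today is already short in year one and is out-grown by roughly a factor of ten within five years, which is the gap the highway analogy is pointing at.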

 

So, if this situation describes your organization, encourage your leadership to invest now in technologies for the five-years-from-now timeframe. Ask them to think big and to think strategically, instead of putting tactical bandages on the problems at hand. If we can get most organizations to invest in the needed technologies, scientists will be able to stop worrying about where their data goes and get back to work, which will result in an overall improvement in our health-span as a society.

 

What questions do you have?

Read more >

10 Mobile BI Strategy Questions: Enterprise Mobility

Is your mobile business intelligence (BI) strategy aligned with your organization’s enterprise mobility strategy? If you’re not sure what this means, you’re in big trouble. In its simplest form, enterprise mobility can be considered a framework to maximize the use of mobile devices, wireless networks, and all other related services in order to drive growth and profitability. However, it goes beyond just the mobile devices or the software that runs on them to include people and processes.

 

It goes without saying that enterprise mobility should exist in some shape or form before we can talk about mobile BI strategy, even if the mobile BI engagement happens to be the first pilot planned as a mobility project. Therefore, an enterprise mobility roadmap serves as both a prerequisite for mobile BI execution and as the foundation on which it relies.

 

When the development of a successful mobile BI strategy is closely aligned with the enterprise mobility strategy, the company benefits from the resulting cost savings, improvement in the execution of the mobile strategy, and increased value.

 

Alignment with Enterprise Mobility Results in Cost Savings

 

Although mobile BI will inherit most of its rules for data and reports from the underlying BI framework, the many components it relies on during execution will depend on the enterprise rules, or lack thereof. For example, the devices on which mobile BI assets (reports) are consumed will be offered and supported as part of an enterprise mobility management system, including bring-your-own-device (BYOD) arrangements. Operating outside of these boundaries could not only be costly to the organization but could also raise legal and compliance concerns.

 

Whether the mobile BI solutions are built in-house or purchased, as with any other technology initiative, it doesn’t make any sense to reinvent the wheel. Existing contracts with software and hardware vendors could offer major cost savings. Moreover, fragmented approaches in delivering the same requirement for multiple groups and/or for the same functionality won’t be a good use of scarce resources.

 

For example, building forecast reports for sales managers within the customer relationship management (CRM) system and forecast reports developed on the mobile BI platform may offer the same or similar functionality and content, resulting in confusion and duplicate efforts.

 

Leveraging Enterprise Mobility Leads to Improved Execution

 

If you think about it, execution of the mobile BI strategy can be improved in all aspects if an enterprise mobility framework exists that can be leveraged. The organization’s technology and support infrastructure (two topics I will discuss later in this series) are the obvious ones worth noting. Consider this: how can you guarantee effective delivery of BI content when you roll out to thousands of users without a robust mobile device support infrastructure?

 

If we arm our sales force with mobile devices around the same time we plan to deliver our first set of mobile BI assets, we can’t expect flawless execution and increased adoption. What if the users have difficulty setting up their devices and have nowhere to turn for immediate and effective support?

 

Enterprise Mobility Provides Increased Value for Mobile BI Solutions

 

By aligning our mobile BI strategy with our organization’s enterprise mobility framework, we not only increase our chances of success but, most importantly, gain the opportunity to provide value beyond pretty reports with colorful charts and tables. This increased value means we can deliver an end-to-end solution even though we may not be responsible for all of its components under the BI umbrella. Enterprise mobility components such as connectivity, device security, and device management contribute to a connected delivery system that mobile BI will share.

 

Bottom Line: Enterprise Mobility Plays an Important Role

 

Enterprise mobility will influence many of mobile BI’s success criteria. When we’re developing a mobile BI strategy, we need not only to stay in close alignment with the enterprise mobility strategy, so we can take advantage of the synergies that exist, but also to consider the potential gaps we may have to address if the roadmap does not provide timely solutions.

 

How do you see enterprise mobility influencing your mobile BI execution?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter at @KaanTurnali and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

Read more >

Intel France Collaborates with Teratec to Open Big Data Lab

By Valère Dussaux


Here at Intel in France, we recently announced a collaboration with the European-based Teratec consortium to help unlock new insights into sustainable cities, precision agriculture and personalized medicine. These three themes are closely interlinked because each of them requires significant high performance computing power and big data analysis.

 

Providing Technology and Knowledge

The Teratec campus, located south of Paris, comprises more than 80 organisations from the worlds of commerce and academia. It’s a fantastic opportunity for us at Intel to provide our expertise not only in the form of servers, networking solutions and big data analytics software but also by utilising the skills and knowledge of our data scientists, who will work closely with other scientists on the vast science and technology park.

 

The big data lab will be our principal lab for Europe and will initially focus on proof-of-concept work, with our first project being in the area of precision agriculture. As techniques mature, we will bring those learnings into the personalized medicine arena, where one of our big focuses is the analysis of merged clinical and genomic data that are currently stored in silos, as we seek to advance the processing of unstructured data.

 

Additionally, we will focus on the analysis of clinical data merged with open data, such as weather, traffic and other publicly available data, in order to help healthcare organizations enhance resource allocation and to help health insurers and payers build sustainable healthcare systems.

 

Lab Makes Global Impact

You may be asking why Intel is opening a big data lab in France. Well, the work we will be undertaking at Teratec will benefit not only colleagues and partners in France and Europe, but those around the globe too. The challenges we all face collectively around an ageing population and the movement of people towards big cities present unique problems, with healthcare very much towards the top of that list. And France presents a great environment for innovation, especially in these three focus areas, as the government here is in the process of promulgating a set of laws that will really help build a data society.

 

I highly recommend taking time to read about some of the healthcare concepts drawn up by students on the Intel-sponsored Innovation Design and Engineering Master programme, run jointly by Imperial College and the Royal College of Art (RCA), in our ‘Future Health, Future Cities’ series of blogs. For sustainable cities, the work done at Teratec will allow us to predict trends and help mitigate the risks associated with the expectation that more than two-thirds of the world’s population will live in big cities by 2050.

 

So far, we have seen research into solutions held back by both technical and knowledge constraints, but we look forward to overcoming these challenges with partners at Teratec in the coming years. We know there are significant breakthroughs to be made as we push towards providing personalized medicine at the bedside. Only then can we truly say we are forging ahead to build a future for healthcare that matches the future demands of our cities.

 


 

 

We’d love to keep in touch with you about the latest insight and trends in Health IT so please drop your details here to receive our quarterly newsletter.

Read more >

New HP and Intel Alliance to Optimize HPC Workload Performance for Targeted Industries

HP and Intel are again joining forces to develop and deliver industry-specific solutions with targeted workload optimization and deep domain expertise to meet the unique needs of High Performance Computing (HPC) customers. These solutions will leverage Intel’s HPC scalable system framework and HP’s solution framework for HPC to take HPC mainstream.

 

HP systems innovation augments Intel’s chip capabilities with end-to-end systems integration, density optimization and energy efficiency built into each HP Apollo platform. HP’s solution framework for HPC optimizes workload performance for targeted vertical industries. HP offers clients Solutions Reference Architectures that deliver the ability to process, analyze and manage data while addressing complex requirements across a variety of industries, including oil and gas, financial services and life sciences. With HP HPC solutions, customers can address their need for HPC innovation with an infrastructure that delivers the right compute for the right workload at the right economics… every time!

 

In addition to combining Intel’s HPC scalable system framework with HP’s solution framework for HPC to develop HPC-optimized solutions, the HPC Alliance goes a step further by introducing a new Center of Excellence (CoE) specifically designed to spur customer innovation. This CoE combines deep vertical industry expertise and technological understanding with the appropriate tools, services and support, making it simple and easy for customers to drive innovation with HPC. The service is open to all HPC customers, from academia to industry.

 

Today, in Grenoble, France, customers have access to HP and Intel engineers at the HP and Intel Solutions Center. Clients can conduct a proof of concept using the latest HP and Intel technologies. Furthermore, HP and Intel engineers stand ready to help customers modernize their codes to take advantage of new technologies, resulting in faster performance, improved efficiencies, and ultimately better business outcomes.

 

 

HP and Intel will make the HPC Alliance announcement at ISC’15 in Frankfurt, Germany, July 12-16, 2015. To learn more, visit www.hp.com and search ‘high performance computing’.

Read more >