Recent Blog Posts

To Get Where You Are Going…Amplify Your Value



To get where you are going…you have to know where you are! Sounds pretty obvious, doesn't it? To arrive at any destination, you have to leave where you are and move toward that destination. At the very least, you have to know where your starting point is in relation to your ending point.


Our journey started with my interview for the position of CIO. As a candidate for the role, one of the steps in the interview process was a meeting with the entire team. WOW! Talk about intimidating…walking into a room of 15 people, all of whom had my resume in front of them, while I knew nothing about them. I kicked things off by introducing myself and then having them introduce themselves. I then asked for questions. Two questions in particular stood out.


“I see from your resume that you have worked for XYZ Company. I know they use Oracle. Are you going to replace our Microsoft stuff with Oracle?”


Interesting question for an interview. My answer? I couldn’t possibly make any recommendation or decision of that magnitude with the information I have today. I would spend the first several weeks and months learning, listening, asking questions. If any change in technology seemed appropriate, it would be because WE decided it made sense.


“I see from your resume that you have done a lot of outsourcing. Are you going to outsource us?”


Another interesting question. My answer? Similar to the first. I couldn’t possibly make any recommendation or decision…


This time however, I went deeper into the answer. I explained each of the outsourcing projects that I had been involved with in prior roles. I explained the business reasons for each, the benefits achieved, and the issues we experienced. Every business is different, every company is different. I would never sit before you and say “never”, but I will say, we will come to those conclusions (whatever those conclusions are) together.


I got the job! A few weeks later, as an initial step in the creation of an Information Technology Strategic Plan, an assessment and evaluation of Goodwill’s current application architecture, infrastructure, staffing levels and skills was performed.


The project was conducted over an eight-week period. During this time we reviewed system documentation, attended various staff and project meetings, interviewed key IT resources, participated in issue resolution, and analyzed application configurations.


The purpose was to review the company's Information Technology assets and deliver an executive-level report detailing and summarizing our evaluation of the risks associated with the current IT environment, platforms, and required level of support.


The approach during the assessment focused on the following key characteristics:

1.  Scalability – how would the current environment handle double the volume, or more?

2.  Maintainability – how much duplicate effort is spent supporting the current environment?

3.  Reliability – how well does the current environment perform?

4.  Supportability – what organizational skills or investments in end-user training/tools are needed?

5.  Controllability – how secure is the environment?


This “Current State Assessment” was based on the principles of Control Objectives for Information and Related Technology (COBIT), the Information Technology Infrastructure Library (ITIL), and Capability Maturity Model Integration (CMMI). It did not, however, represent a full audit under any of these frameworks.


In addition, we reviewed the current “as-is” technical architecture and identified areas of risk, potential bottlenecks, and deviations from best practices. Those observations and others are detailed in the document. For each risk identified, we estimated the probability of it occurring and the impact an occurrence would have on the organization.
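
To make that scoring concrete, here is a minimal sketch of a probability-times-impact risk register. The risk names, scales, and values below are invented for illustration; they are not drawn from our actual report.

```python
# Illustrative risk register: rank risks by probability * impact.
risks = [
    {"name": "Unsupported application version", "probability": 0.7, "impact": 4},
    {"name": "Single point of failure in core switch", "probability": 0.3, "impact": 5},
    {"name": "Key-person dependency on one engineer", "probability": 0.5, "impact": 3},
]

for r in risks:
    r["score"] = r["probability"] * r["impact"]  # higher score = address sooner

for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{r['name']}: {r['score']:.1f}")
```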

During the assessment we used the “Business Model Continuum” from Gartner. This continuum categorizes IT departments into five different business models: 

  • The Silo Model describes a department that is very reactive, is viewed as a utility, and is treated with uncertainty. These departments have difficulty planning, typically do not adhere to schedules, and have very little documentation.
  • The Process-Based Model describes a department that has begun to implement some repeatable processes, which adds predictability to the services it provides. These departments are able to be more proactive, even though they are still viewed with skepticism.
  • The Internal Service Company Model describes a department that has matured into an organization that proactively manages its assets and works with the business to enhance its systems, thereby providing more value. It has a defined list of services, acts like a company within a company, and is an accepted partner with the business.
  • The Shared Services Model takes the Internal Service Company model a step further, providing additional services by centralizing some of the operation and maintenance of the applications themselves. This model is typically found in large multinational organizations with disparate business products.
  • The Profit Generator Model describes a department that truly becomes part of the product: it creates new ideas for generating revenue and is embedded in the design of new or enhanced product offerings. Very few companies ever reach this stage.


In understanding these models and using them to transform an IT department, there are two fundamental principles that must be adhered to. First, you can only move along the continuum as far as your business wants you to go. You cannot operate under the Profit Generator model if the business doesn't want you operating that way.


The second principle is that, as you move along the continuum, you cannot skip a model. For example, you cannot move to the Internal Service Company model without first becoming process-based.
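
The two principles are simple enough to state as a single rule. Purely as an illustration (this encoding is mine, not Gartner's), a transition along the continuum is valid only if it advances exactly one model and stays within the ceiling the business has set:

```python
# The five Gartner business models, in order.
CONTINUUM = ["Silo", "Process-Based", "Internal Service Company",
             "Shared Services", "Profit Generator"]

def can_move(current, target, business_ceiling):
    i, j, cap = (CONTINUUM.index(m) for m in (current, target, business_ceiling))
    # One model at a time, and never past what the business wants.
    return j == i + 1 and j <= cap

print(can_move("Silo", "Process-Based", "Internal Service Company"))     # True
print(can_move("Silo", "Internal Service Company", "Profit Generator"))  # False: skips a model
```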


At the end of the eight-week period, we believed ourselves to be operating under the Silo model, while occasionally exhibiting traits of the Process-Based model (some processes were implemented during the assessment because they were too important to defer until after it). Our report included approximately 25 “Action Areas” that would later be turned into projects. These ranged from applications requiring an upgrade, to filling open positions, to remodeling the IT area, and everything in between.


It had been a difficult, yet revealing, eight-week introspection of the department. However, we now knew where we were! We knew our strengths, we knew our weaknesses. We had identified and prioritized areas of risk and quick wins for improvement. We had identified projects that had the potential to deliver outstanding results for our company.


We were about ready to embark on an adventure of a lifetime, well, ok, the adventure of a career, at least. There was tons of work to do, dozens of projects to execute, thousands of tasks to complete, so what did we do? We partied of course! We partied like it was 2015 (apologies for the lame reference to Prince). Next up in this series: “Happy New Year 2016!”


The series “Amplify Your Value” explores our five-year plan to move from an ad hoc, reactionary IT department to a value-add, revenue-generating partner. #AmplifyYourValue

 

Jeffrey Ton is the SVP of Corporate Connectivity and Chief Information Officer for Goodwill Industries of Central Indiana, providing vision and leadership in the continued development and implementation of the enterprise-wide information technology and marketing portfolios, including applications, information & data management, infrastructure, security and telecommunications.


Find him on LinkedIn.

Follow him on Twitter (@jtongici)

Add him to your circles on Google+

Check out more of his posts on Intel’s IT Peer Network

Read more from Jeff on Rivers of Thought


Performance & Value with Oracle Engineered Systems Based on Intel Xeon Processors

By Juan F. Roche

 

Oracle’s new generation of X5 Engineered Systems, announced in late January, is powered by the latest top-of-the-line Intel® Xeon® E5 v3 processors, delivering to businesses significantly improved performance, power efficiency, virtualization, and security.

 

Oracle Engineered Systems, which are purpose-built converged systems with pre-integrated stacks of software and hardware engineered to work together, are a cost-effective, simple-to-deploy alternative to data center complexity. With Intel performance, security technologies, and flexibility co-engineered directly into the system, these integrated systems are built with business value in mind.

 

These cost-effective Oracle systems perfectly demonstrate the tight, ongoing collaboration between Oracle and Intel. For over twenty years, the two companies have worked closely together, and the relationship is built on much more than simply tuning CPUs for performance. The collaboration extends from silicon to systems, with both companies working to optimize architectures, operating systems, software, tools and designs for each other’s technologies. Intel and Oracle work together to co-engineer the entire software, middleware, and hardware stack to ensure that the Engineered Systems take maximum advantage of the power built into Intel® Xeon® processors.

 

How does this translate into business value for our customers? Here’s a real-world example. Intel worked closely with Oracle in developing the customized Intel® Xeon® E7-8895 v2 processor, creating an elastic enterprise workload SKU that allows users to vary core counts and frequencies to meet the needs of differing workloads. Because of optimizations between the Intel Xeon processors and Oracle operating system kernels and system BIOS, this single processor SKU can behave like a variety of other SKUs at runtime.
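
The quantities an elastic SKU varies, core counts and operating frequencies, are easy to observe from software. The sketch below is a generic Linux view via sysfs, offered only as a way to watch those knobs; it is not Oracle's elastic-SKU mechanism, and the paths assume a standard Linux cpufreq driver.

```python
# Read the online-CPU range and current per-core frequencies from sysfs.
from pathlib import Path

cpu_root = Path("/sys/devices/system/cpu")
print("Online CPUs:", (cpu_root / "online").read_text().strip())

for cpufreq in sorted(cpu_root.glob("cpu[0-9]*/cpufreq")):
    khz = int((cpufreq / "scaling_cur_freq").read_text())  # reported in kHz
    print(f"{cpufreq.parent.name}: {khz / 1_000_000:.2f} GHz")
```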

 

That means faster processing and more flexibility for addressing business computing requirements: Oracle Exalytics* workloads that would take nearly 12 hours to run on a non-optimized Intel Xeon-based platform drop to a 6.83-hour runtime on the customized Intel Xeon platform, a speed-up of 1.72x. That's the difference between being able to run analytical workloads overnight and having to wait until a weekend to get business-critical analytics. With on-demand analytics like this, businesses have timely, precise intelligence for real-time decision-making.

 

In addition, the Exalytics platform is flexible: not all workloads require heavy per-thread concurrency, and this elastic SKU can be tuned and balanced to vary core counts and frequency levels to meet the needs of different computing requirements.

 

Oracle Engineered Systems are also optimized to take advantage of Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI), built into advanced Intel Xeon processors. Intel AES-NI helps boost data security by running encryption in hardware, eliminating the performance penalty usually associated with software-based encryption. It speeds up the execution of encryption algorithms by as much as 300 percent, so businesses don't have to pay a performance overhead to keep data more secure.
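
For a rough feel of what hardware-accelerated encryption looks like from application code, here is a minimal Python sketch using the widely available `cryptography` package. Its OpenSSL backend uses AES-NI automatically when the CPU exposes it (the `aes` flag in /proc/cpuinfo on Linux); the key handling and data here are purely illustrative.

```python
# Minimal AES-256-GCM example; OpenSSL (via the `cryptography` package)
# transparently uses AES-NI on CPUs that support it.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # illustrative in-memory key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce; never reuse with a key
ciphertext = aesgcm.encrypt(nonce, b"quarterly revenue data", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"quarterly revenue data"
```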

 

To learn more about Oracle Exadata Engineered Systems powered by Intel Xeon processors, download our ebook What Will It Take to Manage Your Database Infrastructure?


Intel Commends Senate IoT Resolution and National Vision

By Marjorie Dickman, Global Director of Internet of Things Policy, Intel Corporation

The Internet of Things (IoT) is generating unprecedented opportunities for the US public and private sectors to develop new services, enhance productivity and efficiency, improve real-time decision making, …



How to Accelerate the Move to Mainstream NFV Deployment

By John Healy, General Manager, Software Defined Networking Division, Network Platforms Group, Intel

Mobile World Congress is upon us and there is plenty of buzz again about the progress of network functions virtualization (NFV). I'm looking forward to many new NFV demos, product announcements, and presentations on how mobile operators are solving problems using the technology.

I'm very bullish on the future of the NFV market. In this last year, the industry has successfully passed through a normative phase in which specifications and use cases were determined, applications were developed, and proofs of concept and demos were successfully conducted.





Now we are moving into the next phase, where NFV applications move into operation in production networks. I am excited by the progress our partners have made in translating trials into deployments and by the benefits they are beginning to realize and measure.

But at the same time, I realize that as an industry we still have significant work to do to accelerate the technology to a point where carriers can consider full deployment and scaled implementations. I believe there are two significant themes that need to be addressed in the coming year.

 

Challenge 1 – Technology Maturity

There have been plenty of successful NFV demos over the last 18 months proving the capability of virtualized services and the performance of standards-based computing platforms. Now we need to achieve mass-scale, ruggedized implementations, and for that the various building-block technologies need to be hardened and matured.

Through this work, the many virtual network functions (VNFs) will be "ruggedized" to provide the same service and reliability levels as today's fixed-function counterparts. Meeting this need for "carrier-grade reliability" is the necessary maturing that will occur.

Much of this ruggedization will happen as operators test these VNFs in practical demonstrations that feature the traffic types, patterns, and volumes found in production networks. Several announcements at MWC have highlighted the deployments into live networks that mark this new phase. We are actively involved in this critical activity with our partners and their customers.

But there's also a need for more orchestration functionality to be developed and proven, so that service providers can scale their networks through automated composition and implementation of network functions and services.

Intelligent placement of network functions, mapped to the capabilities of the underlying computing platforms, enables network services orchestration (NSO) to achieve the best performance. Exciting demos of NSO in practice in a multi-vendor environment are on show at MWC.

Many of our ecosystem partners are tackling the orchestration of lower-level functions such as inventory, security, faults, application settings, and other infrastructure elements following the ETSI management and orchestration (MANO) model. Others have focused on service orchestration based on models of the network's resources and policy-definition schemes.

The open source community is also a key enabler of this maturing phase, including projects such as Nova and Neutron, which are building orchestration functionality into OpenStack. The Open Platform for NFV (OPNFV) project is focused on hardening the NFV infrastructure and improving infrastructure management, which should improve the performance predictability of NFV services.

All of these initiatives are important and must be tested through implementation into carrier networks and stressed so that operators can be confident that services will perform predictably.

I've seen this performance evolution take place at Intel as we tackled the challenge of consolidating multiple processing workloads on our general-purpose Intel architecture CPUs while growing packet processing performance to enable replacement of fixed-function packet processors.

In the mid-2000s, packet processing performance on Intel processors was not where we wanted it to be, so we made modifications to the microarchitecture and, at the same time, developed a series of acceleration libraries and algorithms that became the Data Plane Development Kit (DPDK).

After several product generations, we can now provide wire-speed packet processing performance delivering 160Gbps of layer-three forwarding on a single core. This is made possible through our innovations and through deep collaborations with our partners, a concept we have extended to the world of NFV and from which many of the announcements at MWC have originated.

 

Challenge 2 – Interoperability

Interoperability on a grand scale is what will make widespread NFV possible. That means specification, standardization, and interoperability are major requirements for this phase of NFV.

The open source dimension of NFV creates the community-driven, community-supported approach that speeds innovation, but it needs to be married to the world of specification definition and standardization, which has traditionally moved at a much slower pace, too slow for the new world that NFV enables.

This is a significant opportunity and challenge for the industry – we need to collectively find the bridge between both worlds. This is new territory for many of the parties involved and many of the projects are just starting on the path.

 

Intel’s Four Phase Approach to NFV

Intel is leading efforts to accelerate the maturity of the NFV market and we have outlined four key ways to do that.

First, we’re very active in developing and promoting open source components and standards. We are doing this by contributing engineering and management talent and our own technology to open source efforts. The goal is to ensure that standards evolve in an open and interoperable way.

Next, we have developed the Open Network Platform to integrate open source and Intel technologies into a set of server and networking reference designs that VNF developers can use to shorten their time to market.

Working with the industry is important, which is why we have developed Intel Network Builders, a very active ecosystem of ISVs, hardware vendors, operating system vendors and VNF developers. Network Builders gives these companies opportunities to work together and with Intel, and gives operators and others in the industry a place to find solutions and keep a pulse on the industry.

And lastly, we are working closely with service providers to support them in converting POCs into full deployments in their networks. It was at last year's MWC that Telefonica announced its virtual CPE implementation, which Intel contributed to; this year there are several more, and we have many other similar projects in the works.

While these engineering challenges are significant, they are the growing pains that NFV must pass through to become a mature and tested solution. The key will be to keep openness and interoperability at the forefront and to keep the testing and development programs active so that they can scale to meet the needs of today's carriers. If MWC is an indicator of the future, it is definitely very bright.


Enabling Real-Time Apps: Supporting Open Source Software: Intel Open Network Platform Server Release 1.3

By Dana Nehama, Sr. Product Marketing Manager, Network Platforms Group (NPG), Intel

It's a busy time for the Intel Open Network Platform Server team and our Intel Network Builders partners. This week at Mobile World Congress in Barcelona, there are no fewer than six SDN/NFV demos based on Intel ONP Server, developed by our Intel Network Builders ecosystem partners. Back home, we are releasing Intel ONP Server release 1.3 with updates to the open source software as well as the addition of real-time Linux kernel support and 40GbE NIC support.

The Intel ONP Server is a reference architecture that brings together hardware and open source software building blocks used in SDN/NFV. It helps drive development of optimized SDN/NFV products in the telecom, cloud, and enterprise IT markets.

The MWC demos illustrate this perfectly as they all involve Intel Network Builders partners showcasing cutting-edge SDN/NFV solutions.

The ONP software stack comprises Intel- and community-developed open source software such as Fedora Linux, DPDK, Open vSwitch, OpenStack, OpenDaylight, and others. The key is that we address the integration gap across multiple open source projects and bring it all together in a single software release.

Here’s what’s in release 1.3:


  • OpenStack Juno 2014.2.2 release
  • OpenDaylight Helium.1 release
  • Open vSwitch 2.3.90 release
  • DPDK 1.7.1 release
  • Fedora 21 release
  • Real-Time Linux Kernel
  • Integration with 4×10 Gigabit Intel® Ethernet Controller XL710 (Fortville)
  • Validation with a server platform that incorporates the Intel® Xeon® Processor E5-2600 v3 product family

Developers who go to www.01.org to get the software will see the value of this bundle because it all works together. In addition, the reference architecture guide available on 01.org is a "cookbook" that provides guidelines on how to test ONP servers or build products based on Intel ONP Server software and hardware ingredients.

A first for this release is support for a Real-Time Linux Kernel, which makes ONP Server an option for applications that need deterministic, low-latency performance.

Another important aspect to the new release is the support for the 4x10GbE Intel Ethernet Controller XL710. This adapter delivers high performance with low power consumption. For applications like a vEPC, having the data throughput of the XL710 is a significant advance.

If you are an NFV / SDN developer who wants to get to market quickly, I hope you will take a closer look at the latest release of ONP Server and consider it as a reference for your NFV/SDN development.

If you can’t make it to Barcelona to see the demos, you can find more information at: www.intel.com/ONP or at www.01.org.


Moving from Maintenance to Growth in Retail Technology

Ready, set, welcome to the new retail year.

 

It’s time to start fresh, and for those of us in retail technology, it’s time to get the final word on budgets and do the final editing on the 2015 plan.

 

As you do that, keep two simple questions in mind. Is it enough? What’s beyond?

 

Role of Technology in Retail: Is It Enough?

 


Of course it’s not. These are tumultuous times. Few retailers need a consultant to explain the economic, demographic, and technological transitions that are blowing like a storm across the landscape.

 

Words like innovation, transformation, and disruption filled my recent face-to-face conversations with industry executives in North America and Europe. There was a clear understanding of the central and strategic importance of technology in retail.

 

But such understandings are not always accompanied by budget growth. And even if they are, there’s never enough funding (or time) to meet all of what a business truly needs. Especially in these times.

 

Let’s take a step, then, beyond discussions of portfolio management and prioritization. Let’s talk about what it takes to shift significant dollars from maintenance and ongoing operations to growth and innovation initiatives.

 

That’s easier said than done. But to compete today, let alone tomorrow, retailers must spend an ever-higher percentage of the funds available for IT on ways to make the business more efficient, the customers more satisfied, and the merchants, marketers, and store ops folk more informed.

 

We at Intel have some ideas on how and where you can do this.

 

The price of not doing so — of waving the white flag in the race to survival — is simply too high.

Envisioning the Future of Retail Technology: What’s Beyond?

 

As we look at the challenges ahead for retailers, “what’s beyond” has become our strategic mantra. And, for 2015, the focus is on three specific areas:

 

What’s beyond the digital store?

 

It’s been clear for several years that shoppers jump back and forth between channels. The so-called “showrooming” is multidirectional, and the Internet is increasingly the front door to the brand — that is, if shoppers are willing to resist the blandishments of Amazon.

 

We'd argue that we should all be thinking about the cross-channel brand, designing decision influence and delivery that works from sofa to shelf to post-sale service. (For some further thoughts on this, take a look at my next blog post.)

 

What’s beyond big data?

 

In a world of rapid advancements in data acquisition and analytics, big data becomes the strategic starting point and not the end goal.

 

How can we move from a descriptive understanding of trend and assortment to something more predictive? How can we put to work those interesting and real-time leading environmental indicators of demand, such as weather (e.g., temperature, humidity, and wind speed, to start) or the Twitterverse or the location of opt-in mobile app loyalists?
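
As one hedged illustration of that shift from descriptive to predictive, the sketch below fits a toy demand model on weather features. The data, features, and model choice are all assumptions invented for the example, not a recommendation of any particular approach.

```python
# Toy predictive-demand sketch: fit daily unit sales to weather features.
from sklearn.linear_model import LinearRegression

# columns: temperature (F), humidity (%), wind speed (mph) -- invented data
weather = [[68, 40, 5], [75, 55, 3], [52, 80, 12], [85, 30, 7]]
units_sold = [120, 150, 70, 180]            # hypothetical daily demand

model = LinearRegression().fit(weather, units_sold)
print(model.predict([[70, 45, 6]]))         # forecast for tomorrow's weather
```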

 

What’s beyond PCI compliance?

 

A lot, actually. All of it begging to be implemented to reduce the risk of data breach. There are security tools in silicon, security software, and, most importantly, a holistic, connected approach to data security architecture.

 

The cybercriminals have certainly upped their game. It’s time we do as well.

 

The New Year is here.

 

Let’s find a way to make it enough. And to move beyond.

 

Jon Stine
Global Director, Retail Sales

Intel Corporation


Accelerating Network Transformation via the Ecosystem

By Renu Navale, Director of Intel Network Builders Program, Network Platforms Group, Intel

As a die-hard Carl Sagan fan, I love his quote: "Imagination will often carry us to worlds that never were, but without it we go nowhere." There was a lot of imagination and strategic vision behind the beginnings of network functions virtualization (NFV) and software-defined networking (SDN). Now network transformation is an unstoppable force that has encompassed an entire industry ecosystem. The need for services agility, reductions in operational and capital expenses, and the rapid growth of the Internet of Things are driving a transformation of network infrastructure. Both telco and cloud service providers aim to accelerate delivery of new services and capabilities for consumers and businesses, improve their operational efficiencies, and use cloud computing to meet their customers' demand for more connectivity and delivery of real-time data.

 

 

With proven server, cloud, and virtualization technologies, Intel is in an excellent position to apply these same technologies to network infrastructure. Intel is working closely with the industry to drive this transformation by offering building blocks of standardized hardware and software, as well as server reference designs with supporting software, that address the performance, power, and security needs of the industry. Intel also actively participates in open source and open standards development, invests in building strong ecosystems, and brings a breadth of experience in enterprise and cloud computing innovation.

Execution is an integral facet of any strategy, and I consider the Intel Network Builders program part of the required execution for Intel's NFV and SDN strategy. First, what is the Intel Network Builders program? It is an Intel-led initiative to work with the larger industry ecosystem to accelerate network transformation on Intel architecture, products, and technologies. Since the program's inception, our ecosystem of partner companies has seen tremendous growth: we now have about 130 members, spanning hardware and software vendors, system integrators, and equipment manufacturers. The key value proposition for members is increased visibility and market awareness, technology enabling via POCs and reference architectures using Intel products and ingredients, and increased business opportunities via various tools, workshops, and summits.

The tremendous increase in membership over this past year has resulted in the upgrade of our website and other engagement tools to meet our ecosystem partners’ needs. Most recently, we have launched a revamped member portal, where Intel Network Builders members have the opportunity to directly engage with one another, foster new business relationships, learn about upcoming events and webinars, and highlight their solutions to other community members. If you are already an Intel Network Builders ecosystem partner, you are invited to start engaging with us today, and if you are in the industry seeking resources and general news, please check out our site at networkbuilders.intel.com.

 


 

It takes a whole village to raise a child. In a similar manner, it will take the whole networking industry ecosystem to accomplish this transformation. Hence a program like Intel Network Builders, which exists to connect and collaborate with that ecosystem, is absolutely essential to delivering on the promise of NFV and SDN. I am in the midst of this amazing transformation, and there are moments, as when writing this blog, that I am humbled to be part of this journey.

I hope to see you in Barcelona!


High Performance Packet Processing in the NFV World

Network transformation is taking off like a rocket … with the SDN, NFV, and network virtualization market accounting for nearly $10 billion (USD) in 2015, according to SNS Research.(1) This momentum will take front stage this week at Mobile World Congress (MWC) 2015, including dozens of solutions and demos that spotlight Intel technology.

 

New Ways to Speed up Packet Processing

Packet processing workloads are continuously evolving and becoming more complex, as seen in progressing SDN/network-overlay standards and signature-based deep packet inspection (DPI), to name just two examples. Highly flexible software and silicon ingredients are required to deliver cost-effective solutions for these workloads. NFV solutions are all judged on how fast they can move packets on virtualized, general-purpose hardware. This is why the Data Plane Development Kit (DPDK) is seen as a critical capability, delivering packet processing performance improvements in the range of 25 to 50 times (2, 3) on Intel® processors.

Building upon the DPDK, Intel will demonstrate at MWC how equipment manufacturers can boost performance further while making NFV more reliable. One way is to greatly reduce cache thrashing by pinning L3 cache memory to high-priority applications using Intel Cache Allocation Technology. Another is to use a DPDK-based pipeline to process packets instead of distributing the load across multiple cores, which can result in bottlenecks if the flows cannot be uniformly distributed.

 

Intel Cache Allocation Technology

It's no secret that virtualization inherently introduces overheads that lead to some level of application performance degradation compared to a non-virtualized environment. Most are aware of the more obvious speed bumps, like virtual machine (VM) entries/exits and memory address translations.

A lesser-known performance degrader is cache contention, caused by various VMs competing for the same cache space. When the hypervisor switches context to a VM that is a cache hog, cache entries for the other VMs get evicted, only to be reloaded when those VMs start up again. This can result in an endless cycle of cache reloads that can cut performance in half, as shown in the figure below. (2, 3)

 

 

[Figure: guest VM packet forwarding throughput with and without Intel Cache Allocation Technology]

 

 

On the left side, the guest VM implementing a three-stage packet processing pipeline (classify, L3 forward, and traffic shaper) has the L3 cache to itself, so it can forward packets at 11 Mpps. The middle pane introduces an aggressor VM that consumes more than half the cache, and the throughput of the guest VM drops to 4 Mpps. The right side implements Intel Cache Allocation Technology, which pins the majority of the cache to the guest VM, thus restoring the packet forwarding throughput to 11 Mpps. (2, 3)
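
The post doesn't describe the demo's exact tooling, but on recent Linux kernels Intel Cache Allocation Technology is exposed through the resctrl filesystem, so a cache pin like the one in the right-hand pane can be sketched as follows. This is a minimal sketch only; it requires root, assumes resctrl is mounted, and the group name, way mask, and PIDs are hypothetical.

```python
# Hedged sketch: reserve most L3 ways for high-priority packet threads via
# Linux resctrl (assumes `mount -t resctrl resctrl /sys/fs/resctrl`).
import os

group = "/sys/fs/resctrl/high_prio"          # hypothetical resource group
os.makedirs(group, exist_ok=True)            # mkdir creates a new CLOS group

# Give the group 8 of 12 L3 ways on socket 0; the bitmask must be contiguous.
with open(os.path.join(group, "schemata"), "w") as f:
    f.write("L3:0=ff0\n")

# Move the (hypothetical) packet-processing thread PIDs into the group.
for pid in (1234, 1235):
    with open(os.path.join(group, "tasks"), "w") as f:
        f.write(str(pid))
```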

 

IP Pipeline Using DPDK

There are two common models for processing packets on multi-core platforms:

 

  • Run-to-completion: A distributor divides incoming traffic flows among multiple processor cores, each of which processes its assigned flows to completion.
  • Pipeline: All traffic is processed by a pipeline constructed from several processor cores, each performing a different packet processing function in series (a toy sketch follows this list).
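
To make the pipeline model concrete, here is a toy sketch with Python threads and queues standing in for the cores and rings a real DPDK application would use. The stages mirror the classify / L3 forward / traffic-shape pipeline described above, but all packet logic is invented for illustration.

```python
# Toy three-stage packet pipeline: each stage pulls from an input queue,
# does its work, and passes the packet to the next stage.
from queue import Queue
from threading import Thread

def stage(work, inbox, outbox):
    def run():
        while True:
            pkt = inbox.get()
            if pkt is None:          # sentinel: propagate shutdown downstream
                outbox.put(None)
                return
            outbox.put(work(pkt))
    t = Thread(target=run)
    t.start()
    return t

q0, q1, q2, q3 = Queue(), Queue(), Queue(), Queue()
stage(lambda p: {**p, "flow": p["dst"] % 4}, q0, q1)   # classify
stage(lambda p: {**p, "port": p["flow"]},    q1, q2)   # L3 forward
stage(lambda p: p,                           q2, q3)   # traffic shape (no-op here)

for i in range(8):
    q0.put({"dst": i})               # inject a few fake packets
q0.put(None)                         # stop the pipeline

while (pkt := q3.get()) is not None:
    print(pkt)
```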

At MWC 2015, Intel will have a live demonstration of high-performance NFV running on an industry-standard, high-volume server, where copies of packet processing pipelines are implemented in multiple VMs and the performance of those VMs is governed using state-of-the-art Cache Monitoring and Cache Allocation Technologies.

Want to know more? Get more information on Intel in Packet Processing.

 

Are you at MWC 2015?


Check out the high-performance NFV demo at the Intel Booth and see the new Intel technologies developed to drive even higher levels of performance in SDN and NFV! Visit us at MWC 2015 – App Planet, hall 8.1, stand #8.1E41.

 

 

 

 

1 Source: PR Newswire, “The SDN, NFV & Network Virtualization Bible: 2015 – 2020 – Opportunities, Challenges, Strategies & Forecasts.” Nov 27, 2014, http://www.prnewswire.com/news-releases/the-sdn-nfv–network-virtualization-bible-2015–2020–opportunities-challenges-strategies–forecasts-300002078.html.

 

2 Performance estimates are based on L2/L3 packet forwarding measurements.

 

3 Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel® products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, visit Intel Performance Benchmark Limitations.


Fueling the Next Billion Connected Devices: MWC Day 2 Recap

The excitement this week in Barcelona would make you think that Messi is in town for a match against computing and the networks that feed the billions of devices that dot our globe. Mobile World Congress is in full swing with the who's who of the tech industry sharing their latest wares and meeting to discuss the next generation of innovation.

I can't overstate how struck I've been by the rate of telco equipment industry innovation at MWC. It was only two years ago that I attended MWC and learned about the new NFV specifications moving through ETSI; today I was fortunate to hear from network leaders Openet, Procera Networks, and Amartus about real telco billing solutions based on NFV-powered service delivery. This solution is a microcosm of the networking landscape today, as groups of companies work together to deliver application, orchestration, and infrastructure solutions that solve specific business challenges, in this case modernizing billing systems that historically were designed for voice-only accounts. With new NFV-based solutions, telco operators will be better able to accurately bill for different types of data consumption along with voice usage, and to more rapidly deploy solutions to market. Martin Morgan, VP of Marketing at Openet, stated that initial solutions are already being deployed by select customers, with customer bases ranging in scale from 50K to 50M.

Sandra Rivera, Intel's VP and GM of the Network Platforms Group, called out this type of ecosystem collaboration as being at the core of Intel's heritage. Her group's Network Builders program has grown from 30 to 125 vendors in the 18 months since its inception and has begun adding telco operators such as Telefonica and Korea Telecom to its member rolls. Sandra explained that collaboration between providers and operators will help accelerate adoption of NFV solutions: providers can prioritize the use cases with the best opportunity for financial reward, and operators can more quickly evaluate solutions coming to market. She highlighted shepherding this broad collaboration as critical to Intel's efforts to drive NFV adoption in 2015, and given the momentum behind the effort, there is little reason to expect anything other than continued growth in POC results and deployments this year. To keep track of the latest developments in network ecosystem innovation, visit the Intel Network Builders site.

A blog about MWC would not be complete without mention of mobile device innovation, and one topic that has risen to the surface once again this year is mobile security. I was fortunate to chat with executives from the Intel Security group to get the latest on Intel's security solutions. Mark Hocking, VP & GM of Safe Identity and Security at Intel Security, discussed Intel's latest innovation, TrueKey. This technology provides a central resource for password management, integrating facial recognition, biometrics, encryption technologies, and physical password entry to make passwords both easier to manage and more secure for the user. I have to admit that, as a person who has invented at least 50 ways to describe my dog to form the seemingly endless permutations of passwords required to navigate today's web, I was delighted to learn that soon simply smiling at my PC will provide a baseline of secure engagement with popular sites. When Mark explained that TrueKey could add levels of security based on my requirements, I felt even better about the technology.

With the growth in wearable devices, the landscape of mobile security is evolving. Intel's Chief Consumer Security Evangelist, Gary Davis, caught up with me to share Intel's strategy for addressing this new area of consumer vulnerability. With over 780 million wearables expected to be in use by 2019, users will increasingly rely on mobile devices such as smartphones and tablets as aggregators of personal data. Today's reality is far from pretty in terms of secure mobile device use: fewer than 35% of mobile users even set a phone PIN, and fewer still employ mobile security or encryption technology for their data. Intel is working on this challenge, Gary explained, by bringing security technology to mobile devices through integration in silicon, as well as by working with device manufacturers to design and deliver security-enabled solutions to market.

Come back tomorrow for my final update from Barcelona, and please listen to my interviews with these execs and more.


6 Easy Steps to Doing the Impossible

How do you take a Fortune 10 company and move the majority of their applications to the public cloud? On the most recent episode of the Transform IT Show, I spoke with Lance Weaver, CTO of cloud architecture at GE; he shared with me exactly how they're approaching this seemingly impossible task. And as we discussed his career and approaches, I found six easy steps that I think explain why Lance and GE are going to achieve the impossible.

 

Start with the Attitude

GE employees adopted a certain attitude when it came to the cloud. They didn't approach it like it was something to explore and think about. Instead, they viewed the cloud as necessary to ensure their customers maintain a competitive advantage in the marketplace. That perspective left no room for failure.

 

Go Wide

Lance had the courage to lead this project for two reasons: his broad technical experience, and the push from his mentor, his bosses, and the overall GE culture to step outside his comfort zone. He learned new jobs, learned new ways to lead people, and interacted with new lines of business. It was this ability to go wide that gave him the confidence he needed to attempt the impossible.

 

Shape the Culture – Starting with Yourself

Anyone can make this kind of change happen; a company's culture is really what you choose to make of it. Lance suggested that we all must be willing to challenge ourselves, to step outside our comfort zones, and to focus on the mission and strategy of the company, and by doing so create the kind of influence we need to drive meaningful change.

 

Balance Focus and Adaptability

Get a crystal clear understanding of your goals and the exact problems you’re trying to solve. By focusing on your business goals, you won’t get tied to any one technology — you’ll be willing to constantly reassess your situation to ensure you meet your benchmarks. While focus is critical, it can never be at the expense of adaptability. Always compare where you’re at to where you’re trying to go.

 

Communicate, Communicate, Communicate

Communication is imperative when it comes to succeeding with any major effort to change an organization. Lance discussed the need to communicate openly early in the process. Usually, executive leadership and surrounding teams want to be a part of the effort — you just need to have open discussions with them and give them a chance.

 


Stand in Place

Big change takes big risk. Your team will likely be unwilling to take those risks until they understand the ripple effect when one doesn't pay off. When things go wrong, the most important thing you can do as a leader is … nothing. Hold your ground. Take the heat. Don't shirk your responsibility or blame anyone else. Stand in place.

 

Lance went on to say that if you have a track record of delivering, if you’ve communicated, if people trust you, then your team will be willing to stand in place with you. And that will be the defining moment. It will be the moment when people understand that this is real and that they can take those big risks with you. And that’s when big change happens. That’s when you can begin to conquer the impossible.

 

If you haven’t yet seen the full episode of Transform IT with Lance Weaver, you can watch it here. Also, make sure to watch our Google Hangout where we discuss some of our key takeaways and highlights.

 

And don’t forget to tell us what you think: Comment on the episode page or connect on Twitter by using #TransformIT.
