“Hello, my name is Jeff Ton and it has been one thousand, two hundred and seventy two days since I last opened Outlook.”
February 6, 2012, an historic date in Indianapolis, Indiana. Yeah, there was some little football game that night, Super Bowl XLVI – New York Giants against the New England Patriots. But that is not the event that made the date historic (though it was great to watch a Manning beat Brady!). What made that date historic was our go-live on Google Apps, our first step in our Journey to the Cloud.
Now that I have offended everyone from the Pacific Northwest and New England, let me rewind and start at the beginning. In 2010, I arrived at Goodwill Industries of Central Indiana. We were running Microsoft Exchange 2003 coupled with Outlook 2010. Back in the day, the adage was “No one ever got fired for buying IBM”; I was firmly in the “No one ever got fired for buying Microsoft” camp. In fact, when I learned the students in our high school were using Google, I was pretty adamant that they use Office. After all, that is what they would be using when they got jobs!
At about this same time, we were switching from Blackberry to Android-based smartphones. We were having horrible sync problems between Exchange and the Androids using ActiveSync. We needed to upgrade our Exchange environment desperately!
As we were beginning to make plans for upgraded servers to support the upgraded Exchange environment, I attended my first MIT Sloan CIO Symposium in Boston. Despite the fact that I bleed Colts blue, I actually love Boston: the history, the culture, the vibe. But I digress. At the conference I learned about AC vs. CF projects (see That Project is a Real Cluster to learn more). I could not fathom a more likely CF project than an email upgrade. Why not look to the cloud? Since we were doing an upgrade anyway, perhaps this would be the LAST email upgrade we would ever have to do!
Enter the Google Whisperer. For months a former colleague-turned-Google-consultant had been telling me we should check out Google as an email platform. Usually my response was “Google? That’s for kids, not an enterprise!” (Ok, now I have offended everyone from Silicon Valley, too!) Every time I saw him, he would bring it up. I finally agreed to attend one of Google’s roadshow presentations. I came away from that event with an entirely different outlook (pun intended) on Google.
We decided to run an A/B pilot. We would convert 30 employees to the Google Apps platform for 60 days. We would then convert the same 30 employees to BPOS (predecessor to Office 365) for 60 days and may the best man, er, I mean platform, win. We handpicked the employees for the pilot. I purposely selected many who were staunchly in the Microsoft camp and several others who typically resisted change.
At the end of the pilot an amazing thing happened. Not one person on the pilot team wanted to switch off of Google onto BPOS; in fact, every single person voted to recommend a Google migration to the Executive Team. Unanimous! When was the last time that happened in one of your projects?!
The decision made, we launched the project to migrate to the cloud! We leveraged this project to also implement our email retention policy (email is retained for five years). The vast majority of the work in the project involved locating all the .PST files in our environment and moving them to a central location from network file folders, local drives, and yes, even thumb drives and CDs. Once in that central location, they were uploaded to the Google platform. During this time, we also mirrored our email environment so every internal and external email also went to the Google platform in real time.
The process took about three months, but finally it was Super Bowl Sunday, time for go-live. Now before you think me an ogre of a boss for scheduling a major go-live for Super Bowl Sunday, I should tell you, the date of February 6, 2012 was selected by the project team. Their thought? No one is going to be doing email after the game is over. We announced a blackout period of eight hours beginning at midnight to do our conversion. Boy, were we ever wrong about the length of the blackout period! Our conversion that night took about 20 minutes. 20 minutes and email was flowing again in and out of the Google environment.
Our implementation included email, contacts, calendar, and groups for three domains. We made the decision to keep the other Google Apps available, but not promote them. We also implemented our five-year archive and optional email encryption for sensitive communications. The other decision we made (ok, I made) was not to allow the use of Outlook to access Gmail. One of the tenets of our strategic plan was “Any time, Any place, Any device”; I felt having a piece of client software violated that tenet and created unnecessary additional support issues.
We learned several things as a result of the project. First, search is not sort. If you have used Gmail, then you know there is no way to sort your Inbox; it relies instead on the power of Google Search. People really like their sort. It took some real handholding to get them comfortable.
Second, Google Groups are not Distribution Lists. We converted all of our Exchange Distribution Lists to Groups. Yes, they do function in somewhat the same way, however, there are many more settings in Groups, settings that can have unexpected consequences. Consequences like the time our CFO replied to an email that had been sent to a Group, and even though he did not use reply all, his reply went to everyone in the Group! We found that setting very quickly and turned it off! (Sorry Dan!)
The third lesson learned was “You cannot train enough”. Yes, we held many classes during the lead up to conversion and continued them long afterwards. A lot of the feedback we had heard (“everyone has Gmail at home, we already know how to use it”) led us to believe once the initial project was complete we didn’t need to continue training. We recently started a series of Google Workshops to continue the learning process. Honestly, I think some of this is generational. Some love to click on links, watch a video, and then use the new functionality. Others, really want a classroom environment. We now offer both.
One of the things that pleasantly surprised us (well, at least me) was the organic adoption of other Google tools. The first shared Google Doc came to me from outside the IT department. The first meeting conducted using Google Hangouts came from the Marketing department. People were finding the apps and falling in love with them.
Today, one thousand, two hundred and seventy-two days later our first step to the cloud is seen as a great accomplishment. It has saved us tens of thousands (if not hundreds of thousands) of dollars, thousands of hours, and has freed up our team to work on those AC projects!
Before I close, I do want to say, we are still a Microsoft shop. We have Office, Windows, Server, SQL Server and many other Microsoft products. This post is not intended to be a promotion of one product over another. As I said in my previous post, your path may be different from ours. For us, a 3,000-employee non-profit, Google was the right choice. You may find it meets your requirements, or you may find another product is a better fit. The point here is not the specific product, but the product’s delivery method…cloud…SaaS. The project was such a resounding success, we changed one of our Application Guiding Principles. We are now “cloud-first” when selecting a new application or upgrading an existing one. In fact, almost all of the applications we have added in the last three and a half years have been SaaS-based, including Workday, Domo, Vonigo, ETO, Facility Dude and more.
Go and get your Google on, later hit your Twitter up
We out workin’ y’all from summer through the winter, bruh
Red eye precision with the speed of a stock car
You’re now tuned in to some Independent Rock Stars
Next month, we will explore a project that did more to take us to a Value-add revenue generating partner than just about any other project. Amplify Your Value: Reap the Rewards!
The series, “Amplify Your Value” explores our five year plan to move from an ad hoc reactionary IT department to a Value-add revenue generating partner. #AmplifyYourValue
We could not have made this journey without the support of several partners, including, but not limited to: Bluelock, Level 3 (TWTelecom), Lifeline Data Centers, Netfor, and CDW. (mentions of partner companies should be considered my personal endorsement based on our experience and on our projects and should NOT be considered an endorsement by my company or its affiliates).
Jeffrey Ton is the SVP of Corporate Connectivity and Chief Information Officer for Goodwill Industries of Central Indiana, providing vision and leadership in the continued development and implementation of the enterprise-wide information technology and marketing portfolios, including applications, information & data management, infrastructure, security and telecommunications.
Find him on LinkedIn.
Follow him on Twitter (@jtongici)
Add him to your circles on Google+
Check out more of his posts on Intel’s IT Peer Network
Read more from Jeff on Rivers of Thought
And the privilege of listening to several industry leaders and – of great interest – a team of FIT’s top senior students, who presented their vision for the store of tomorrow.
Some common threads:
- We’re living in a world of digital screens – brands can either get on board or get left behind.
- Brand success is as much about effective storytelling as it is about product and operational efficiency. And the best brands tell their stories across the screens.
- When it comes to the millennial shopper, it’s about authenticity and trust.
And, of course, technology is the thread that runs through it all.
Jennifer Schmidt, Principal and leader of the Americas Apparel Fashion and Luxury practice at McKinsey & Company, emphasized the importance of storytelling in this important global segment. According to Ms. Schmidt, 50 percent of value creation in fashion and luxury is about perception – the ability of a brand to consistently deliver (in every facet of the business) a differentiating, conversation-building, relationship-building story.
(Those who joined Dr. Paula Payton’s NRF store tour in January will remember her emphasis on storytelling and narrative).
Ms. Schmidt also spoke to three elements of import in her current strategy work:
- The change in the role of the store – which now shifts from solely emphasizing transactions to brand-building – and with 20-30% fewer doors than before;
- The change in retail formats – which, in developed world retailing, now take five different shapes: 1) flagship store, 2) free-standing format, 3) mini- and urban-free standing, 4) shops within shops and 5) outlet;
- The importance of international expansion, especially to the PRC and South Asia.
Daniella Yacobovsky, co-founder of online jewelry retailer Baublebar, also noted the importance of brand building – and she explained that her brand story is equal parts product and speed. Baublebar works on an eight-week production cycle, achieving previously unheard-of turns in jewelry. Data is Ms. Yacobovsky’s friend – she tracks search engine results, web traffic and social media to drive merchandising decisions.
And, last but certainly not least: FIT seniors Rebeccah Amos, Julianne Lemon, Rachel Martin and Alison McDermott, winners of FIT’s Experience Design for Millennials Competition, opined on what makes the best brand experience for millennials. Their unequivocal answer – paired with a lot of good, solid retailing advice – was videos and music.
It’s not just about entertainment. It’s also an issue of trust and authenticity (does a brand’s playlist resonate with you?), which ultimately leads to brand stickiness.
Envision video – and lots of it. On enormous, in-store video walls, on mobile, hand-held devices and on brand YouTube channels. To display products virtually or provide information on how to wear or accessorize them. With in-store video, retailers can orchestrate, curate and simplify, giving shoppers a fast, trusted way to be on trend.
Music? The students suggested that every brand needs a music director. Brand-right soundtracks and playlists and connections to the right bands and music events can be powerful influences on today’s largest consumer group.
Quite the day.
Global Director, Retail Sales
Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
* Other names and brands may be claimed as the property of others.
© 2015 Intel Corporation
When the term design is used in mobile business intelligence (BI), it often refers to the user interface (UI). However, when I consider the question of design in developing a mobile BI strategy, I go beyond what a report or dashboard looks like.
As I wrote in “Mobile BI” Doesn’t Mean “Mobile-Enabled Reports,” when designing a mobile BI solution, we need to consider all facets of user interactions and take a holistic approach in dealing with all aspects of the user experience. Here are three areas of design to consider when developing a mobile BI strategy.
How Should the Mobile BI Assets Be Delivered?
In BI, we typically consider three options for the delivery of assets: push, pull, and hybrid. The basic concept of a “push” strategy is similar to ordering a pizza for home delivery. The “users” passively receive the pizza when it’s delivered, and there’s nothing more that they need to actively do in order to enjoy it (ok, maybe they have to pay for it and tip the driver). Similarly, when users access a report with the push strategy, whether through regular e-mail or a mobile BI app, it’s no different than viewing an e-mail message from a colleague.
On the other hand, to have pizza with the pull strategy, users need to get into their cars and drive to the pizza place. They must take action and “retrieve the asset.” Likewise, users need to take action to “pull” the latest report and/or data, whether they log on using the app or mobile browser. The hybrid approach employs a combination of both the push and pull methods.
Selecting the right delivery system for the right role is critical. For example, the push method may be more valuable for executives and sales teams, who travel frequently and may be short on time. However, data updates are less frequent with the push method, so accessing the latest data can’t be critical if you choose this option. In contrast, the “pull” strategy may be more appropriate for analysts and customer service teams, who depend on the latest data.
Additional considerations include data security and enterprise mobility. Does the current BI solution or software support both options? Can the integrity of data security be maintained if data assets are delivered outside the demarcation lines (for example, mobile BI report delivered as an attachment to an e-mail)?
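The push/pull contrast above can be sketched in code. This is a minimal, hypothetical model (the class and names are illustrative, not any real BI product's API): push delivers a snapshot to subscribers without user action, while pull retrieves the freshest server-side copy on demand.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Report:
    name: str
    data: str  # rendered report content (e.g., a PDF path or HTML)

class BIServer:
    """Toy model contrasting push vs. pull delivery of BI assets."""

    def __init__(self) -> None:
        self.latest: Dict[str, Report] = {}
        self.subscribers: Dict[str, List[str]] = {}  # report name -> subscriber addresses

    def publish(self, report: Report, inboxes: Dict[str, List[str]]) -> None:
        # Refresh the server-side copy, then *push* a snapshot to subscribers:
        # users receive it passively, like the pizza delivery.
        self.latest[report.name] = report
        for user in self.subscribers.get(report.name, []):
            inboxes.setdefault(user, []).append(report.data)

    def fetch(self, name: str) -> Report:
        # *Pull*: the user takes action (logs in) and retrieves the latest copy.
        return self.latest[name]

# An executive gets a pushed snapshot; an analyst later pulls refreshed data.
server = BIServer()
server.subscribers["sales"] = ["exec@example.com"]
inboxes: Dict[str, List[str]] = {}
server.publish(Report("sales", "week 30 summary"), inboxes)
server.latest["sales"] = Report("sales", "week 30 summary (rev 2)")  # data refreshed later
print(inboxes["exec@example.com"][0])  # pushed snapshot is the older "week 30 summary"
print(server.fetch("sales").data)      # the pull gets "week 30 summary (rev 2)"
```

Note how the pushed snapshot goes stale once the server refreshes, which is exactly the trade-off above: push favors convenience for time-pressed roles, pull favors data freshness.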
What Are the Format and Functionality of the Mobile BI Assets?
The format deals with the type and category of the asset that is delivered to mobile BI users. What does the end-user receive? Is it a static file in Adobe PDF or Microsoft Excel format with self-contained data, or is it dynamic such as a mobile BI app that employs native device functionality? Is the format limited to data consumption, or does it allow for interactions such as “what-if” scenarios or database write-back capability?
If the format supports exploration, what can I do with it? Can I select different data elements at run time as well as different visualization formats? How do I select different values to filter the result sets, like prompts? Does the format support offline viewing? Is the format conducive to collaboration?
Does the User Interface Optimize the BI Elements?
The UI represents the typical BI elements that are displayed on the screen: page layout, menus, action buttons, orientation, and so on. When you consider the design, decide if the elements really add value or if they’re just pointless visualizations like empty calories in a diet. You want to include just the “meat” of your assets in the UI. More often than not, a simple table with the right highlighting or alerts can do a better job than a colorful pie chart or bar graph.
In addition, the UI covers the navigation among different pages and/or components of a BI asset or package. How do the users navigate from one section to another on a dashboard?
Bottom Line: Design Is Key for the User Experience
The end-to-end mobile BI user experience is a critical component that requires a carefully thought-out design that includes not only soft elements (such as an inviting and engaging UI), but also hard elements (such as the optimal format for the right role and for the right device). Designing the right solution is both art and science.
The technical solution needs to be built and delivered based on specifications and following best practices – that’s the science part. How we go about it? That’s completely art. It requires both ingenuity and critical thinking, since not all components of design come with hard-and-fast rules that we can rely on.
What other facets of the mobile BI user experience do you include in your design considerations?
Stay tuned for my next blog in the Mobile BI Strategy series
This story originally appeared on the SAP Analytics Blog.
If you have watched a movie on Netflix*, called for a ride from Uber* or paid somebody using Square*, you have participated in the digital services economy. Behind those services are data centers and networks that must be scalable, reliable and responsive.
Dynamic resource pooling is one of the benefits of a software defined infrastructure (SDI) and helps unlock scalability in data centers to enable innovative services.
How does it work? In a recent installment of Intel’s Under the Hood video series, Sandra Rivera, Intel Vice President, Data Center Group and General Manager, Network Platforms Group, provides a great explanation of dynamic resource pooling and what it takes to make it happen.
In the video, Sandra explains how legacy networks, built using fixed-function, purpose-built network elements, limit scalability and new service deployment. But when virtualization and software defined networking are combined into a software defined infrastructure, the network can be much more flexibly configured.
Pools of virtualized networking, compute and storage functionality can be provisioned in different configurations, all without changing the infrastructure, to support the needs of different applications. This is the essence of dynamic resource pooling.
To get to an infrastructure that supports dynamic resource pooling takes the right platform. Sandra talks about how Intel is helping developers build these platforms with a strategy that starts with powerful silicon building blocks and software ingredient technology, in addition to support for open standards development, building an ecosystem, collaborating on technology trials and delivering open reference platforms.
It is an exciting time for the digital services economy – who knows what service will become the next Netflix, Uber or Square!
There’s much more to Sandra’s overview of dynamic resource pooling, so I encourage you to watch it in its entirety.
What enables you to do really great work? Motivation to do a good job and belief in what you are doing are important. You also need access to the right tools and resources — be they pen and paper, a complex software package, or your team and their expertise. And you need the freedom to decide how you are going to pull all this together to achieve your goals.
I’ve recently seen how Wiltshire Police Force has used technology to bring together the combination of drive, the right tools and the freedom to act. Working with Wiltshire Council, it has developed a new approach to policing that empowers staff members to decide how, when and where they work in order to best serve the local community.
The organization deployed 600 tablets and laptop PCs, all powered by Intel® Core™ i5 processors, placing one in each patrol vehicle and giving some to back-office support staff. The devices connect (using 3G) to all the applications and systems the officers need. This allows them to check case reports, look up number plates, take witness statements, record crime scene details, and even fill in HR appraisal forms, from any location.
It’s What You Do, Not Where You Do It
Kier Pritchard is the assistant chief constable who drove the project. He and his team follow the philosophy that “work should be what you do, not where you go”. By giving officers the flexibility to work anywhere, he’s empowering them to focus on doing their jobs, while staying out in the community.
“We’re seeing officers set up in a local coffee shop, or the town hall,” he said. “In this way they can keep up to date with their cases, but they’re also more in touch with the citizens they serve.”
The other advantage of the new model is that officers can be much more productive. There’s no more driving to and from the station to do administrative tasks. Instead, they can catch up on these in quiet periods during their shift. “This essentially means there’s no downtime at all for our officers now,” said Pritchard.
The introduction of this new policing approach has gone down well with Wiltshire’s officers. They’ve taken to the devices enthusiastically and are regularly coming up with their own ways of using them to improve efficiency and collaboration.
In addition to making the working day more productive and rewarding for its staff, the mobile devices have also made a big difference to Wiltshire residents. Specialists in different departments of the police force are able to collaborate much more effectively by sharing their findings and resources through an integrated platform, making the experience for citizens much smoother. Areas in which the devices are used have also seen an improvement in crime figures thanks to the increased police presence within the community — for example in the town of Trowbridge, antisocial behaviour dropped by 15.8 percent, domestic burglaries by 34.1 percent, and vehicle crime by 33 percent.
You can read more about how the officers are using the devices to create their own ideal ways of working in this recently published case study or hear about it in the team’s own words in this video. In the meantime, I’d love to hear your views on the role of mobile technology in empowering the workforce — how does it work for you?
Find me on LinkedIn.
Keep up with me on Twitter.
Since Intel IT generated US$351 million in value from Big Data and analytics during 2014, you might wonder how Intel started on the road to reach that milestone. In this presentation named “Evolution of Big Data at Intel: Crawl, Walk and Run Approach” from the 2015 Hadoop Summit in San Jose, Gomathy Bala, Director, and Chandhu Yalla, Manager and Architect, talk about Intel IT’s big data journey. They cover its beginning, current use cases and long term vision. Along the way, they offer some useful information to organizations just starting out to explore big data techniques and uses.
One key piece of advice that the presenters mention is to start on small, well-defined projects where you can see a clear return. That allows an organization to develop the skills to use Big Data with lower risk and known reward, part of the “crawl” stage from the presentation title. Interestingly enough, Intel IT did not rush out and try to hire people who could immediately start using tools like Hadoop. Instead, they gathered engineers who were passionate about new technology and trained them to use those tools. This is part of the “walk” stage. Finally, with that experience, they developed an architecture to use Big Data techniques more generally. This “run” stage architecture is shown below, where all enterprise data can be analyzed in real time. We will be talking about Intel’s Data Lake in an upcoming white paper.
Another lesson is to evaluate Hadoop distributions and use one whose core is open source. This is one of a number of evaluation criteria that were established. You can see more on Intel IT’s Hadoop distribution evaluation criteria, and how we migrated between Hadoop versions, in a previous blog entry.
A video of “The Evolution of Big Data at Intel, Crawl, Walk and Run Approach” can be seen here, and the presentations slides are available as a slideshare. A video of Intel CIO Kim Stevenson talking about Intel’s use of Big Data is shown in the presentation video, but a clearer version can be found here.
Mobilizing the Field Worker
I recently had the opportunity to host an industry panel discussing the business transformation that occurs when mobility solutions are deployed for field workers. Generally speaking, field workers span a spectrum of industries and currently operate in one of four ways: pen and paper, a laptop tethered to a truck, a consumer-grade tablet, or a single-function device like a bar code scanner.
Intel currently defines this market as 10 million workers divided into two general categories – hard hat workers and professional services. Hard hat workers generally function in a ruggedized environment – think construction or field repair teams. Professional services includes real estate appraisal, insurance agents, law enforcement, and many others.
Field teams are capable of improving customer service, generating new revenue streams, and actively driving cost reductions. A successful mobile strategy can enable all three. Regardless of the industry, field workers need access to vital data when they’re not in the office.
The panel of experts consisted of system integrators as well as communication, hardware, and security experts. Together, we discussed the elements required for the successful deployment of a mobile solution.
The panel comprised Geoff Goetz from BSQUARE, Nancy Green from Verizon Wireless, and Michael Seawright from Intel Security. They brought a wealth of information, expertise and insight to the discussion. I have tried to share the essence of the panel here; I am sure I will not do it justice, as they were truly outstanding.
The field worker segment represents a great business opportunity. By the very definition the field worker is on the front line delivering benefits and services to customers. They are reliant upon having the right information in a real time manner. Frequently, this information is available only through applications and legacy based software running back at headquarters. In planning for a successful deployment the enterprise must consider how they connect the field worker to this information. Hardware, applications, and back-office optimizations must all be considered.
Geoff Goetz from BSQUARE shared the perspective of both a hardware and system integrator. BSQUARE is a global leader in embedded software and customized hardware solutions. They enable smart connected systems at the device level, which are used by millions every day. BSQUARE offers solutions for mobile field workers across a spectrum of vertical industries. They have worked closely with Microsoft to develop a portfolio of Windows 10-based devices in 5-, 8- and 10-inch form factors. What was interesting to me was the 5-inch Intel-based handheld capable of running Windows 8.1 and soon Windows 10. The Inari5 fills the void for both field workers and IT managers; it is a compelling solution that doesn’t compromise on performance or functionality. Geoff and his team truly understand the value of having the right device for the job, as well as the software and applications that accelerate an enterprise while achieving the full benefits of mobilizing its field teams.
Nancy Green from Verizon Wireless highlighted the advantages of utilizing an extensive network to deliver connectivity right to the job site. Verizon Wireless offers a full suite of software solutions and technical capabilities to accelerate mobile programs across industries. Verizon delivers upon the value proposition for both the Line of Business manager seeking a competitive advantage, as well as the IT manager looking to easily manage and secure the devices in the field. As I mentioned before, one of the most critical requirements for field workers is access to information. Verizon has worked with numerous companies to unlock workforce optimization by reducing costs, simplifying access to remote data, and increasing collaboration. I was very impressed with the extensive resources Verizon can bring to bear in designing a mobile solution for field workers.
Michael Seawright from Intel Security is an industry advocate who has been successfully leading business transformation with Intel’s fellow travelers for more than 20 years. In a hyper-competitive market, the field worker has the opportunity to drive customer goodwill, address and fix problems the first time, all the while driving sell-up.
Meanwhile, many companies are struggling to figure out the right level of management and security for their mobile workforce.
One advantage of deploying Intel-based mobile solutions is the security built in at the processor level. Even so, device security is only as good as its users’ passwords, and the Intel Security team is working to address the vulnerabilities associated with them.
Ultimately, mobility is a business disruptor, offering a chance to transform business processes and gain a competitive advantage. A successful program requires the IT department and its vendors to think beyond the device. It requires a solution approach to successfully manage the development, implementation and rollout, and it may also require back-office optimization. The following image is my attempt to highlight the architecture framework that should be considered for a mobile program.
By: David A. Hoffman: Associate General Counsel, Director of Security Policy and Global Privacy Officer On July 23rd, Intel Security’s Senior Vice President Chris Young spoke on a panel at the Aspen Security Forum. The panel was titled “Cyber Policy … Read more >
The post Paying Down the Cybersecurity Debt: A Shared Responsibility appeared first on Policy@Intel.
If I asked you to play a round of word associations starting with ‘Intel’, I doubt many of you would come back with ‘networking’. Intel is known for a lot of other things, but would it surprise you to know that we’ve been in the networking space for more than 30 years, collaborating with key leaders in the industry? I’m talking computer networking here of course, not the sort that involves small talk in a conference centre bar over wine and blinis. We’ve been part of the network journey from the early Ethernet days, through wireless connectivity, datacentre fabric and on to silicon photonics. And during this time we’ve shipped over 1 billion Ethernet ports.
As with many aspects of the move to the software-defined infrastructure, networking is changing – or if it’s not already, it needs to. We’ve spoken in this blog series about the datacentre being traditionally hardware-defined, and this is especially the case with networking. Today, most networks consist of a suite of fixed-function devices – routers, switches, firewalls and the like. This means that the control plane and the data plane are combined with the physical device, making network (re)configuration and management time-consuming, inflexible and complex. As a result, a datacentre that’s otherwise fully equipped with the latest software-defined goodies could still be costly and lumbering. Did you know, for example, that even in today’s leading technology companies, networking managers have weekly meetings to discuss what changes need to be made to the network (due to the global impact even small changes can have), which can then take further weeks to implement? Ideally, these changes should be made within hours or even minutes.
So we at Intel (and many of our peers and customers) are looking at how we can take the software-defined approach we’ve used with compute and apply it to the network as well. How, essentially, do we create a virtualised pool of network resources that runs on industry-standard hardware and that we can manage using our friend, the orchestration layer? We need to separate the control plane from the data plane.
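To make that separation concrete, here is a minimal sketch (illustrative only, not any real controller or switch API): the data plane is reduced to a fast table lookup, while a centralised control plane computes decisions and pushes them to every device at once.

```python
class DataPlane:
    """A 'switch' that forwards packets using a table it does not compute itself."""
    def __init__(self):
        self.forwarding_table = {}  # destination prefix -> output port

    def install_rule(self, destination, port):
        self.forwarding_table[destination] = port

    def forward(self, destination):
        # Fast path: a pure table lookup -- no routing logic lives here.
        return self.forwarding_table.get(destination, "drop")


class ControlPlane:
    """Centralised logic that decides routes and pushes them to many devices."""
    def __init__(self, devices):
        self.devices = devices

    def push_route(self, destination, port):
        # One decision, applied network-wide in one call -- reconfiguration
        # in seconds rather than weekly change-review meetings.
        for device in self.devices:
            device.install_rule(destination, port)


switches = [DataPlane(), DataPlane()]
controller = ControlPlane(switches)
controller.push_route("10.0.0.0/24", "eth1")

print(switches[0].forward("10.0.0.0/24"))    # eth1
print(switches[1].forward("192.168.1.0/24"))  # drop (no rule installed)
```

The point of the toy model is the shape, not the detail: once decision-making is hoisted out of each box, network-wide changes become a single controller operation.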
Building virtual foundations
The first step in this journey of network liberation is making sure the infrastructure is in place to support it. Traditionally, industry-standard hardware wasn’t designed to deal with networking workloads, so Intel adopted a 4:1 workload consolidation strategy that uses best practices from the telco industry to optimise the processing core, memory, I/O scalability and performance of a system to meet network requirements. In practice, this means combining general-purpose hardware with specially designed software to effectively and reliably manage network workloads for application, control, packet and signal processing.
With this uber-foundation in place, we’re ready to implement our network resource pools, where you can run a previously fixed network function (like a firewall, router or load balancer) on a virtual machine (VM) – just the same as running a database engine on a VM. This is network function virtualisation, or NFV, and it lets you rapidly stand up a new network-function VM, meeting those hours-and-minutes timescales rather than days-and-weeks. It also effectively and reliably addresses the OpEx and manual-provisioning challenges of a fixed-function network environment, in the same way that compute virtualisation did for your server farm. And the stronger your fabric, the faster it’ll work – this is what’s driving many data centre managers to consider upgrading from 10Gb Ethernet, through 40Gb Ethernet and on to 100Gb Ethernet.
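The idea of turning fixed-function boxes into software you can chain together can be sketched in a few lines. Everything below is invented for illustration – it is not a real NFV platform API – but it shows the pattern: each virtual network function is just software, and a “service chain” runs traffic through them in order.

```python
def firewall(packet, blocked=frozenset({"10.0.0.99"})):
    """A firewall as software: drop packets from blocked sources."""
    return None if packet["src"] in blocked else packet


def load_balancer(packet, backends=("app1", "app2")):
    """A load balancer as software: naive hash-based spread over backends."""
    packet["dest"] = backends[hash(packet["src"]) % len(backends)]
    return packet


def service_chain(packet, functions):
    """Run a packet through a chain of virtual network functions in order."""
    for fn in functions:
        packet = fn(packet)
        if packet is None:  # dropped by an earlier function in the chain
            return None
    return packet


chain = [firewall, load_balancer]
print(service_chain({"src": "10.0.0.5"}, chain))   # e.g. {'src': '10.0.0.5', 'dest': 'app1'}
print(service_chain({"src": "10.0.0.99"}, chain))  # None (blocked by the firewall)
```

Standing up a “new firewall” here is just adding a function to a list – which is exactly the agility NFV promises, with real VNFs running on VMs instead of Python functions.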
Managing what you’ve built
So, hooray! We now have a path to virtualising our network functions, so we can take the rest of the week off, right? Well, not quite. The next area I want to address is software-defined networking (SDN), which is about how you orchestrate and manage your shiny new virtual network resources at a data centre level. It’s often confused with NFV but they’re actually separate and complementary approaches.
Again, SDN is nothing new as a concept. Take storage for example – you used to buy a fixed storage appliance, which came with management tools built-in. However, now it’s common to break the management out of the fixed appliance and manage all the resources centrally and from one location. It’s the same with SDN, and you can think of it as “Network Orchestration” in the context of SDI.
With SDN, administrators get a number of benefits:
- Agility. They can dynamically adjust network-wide traffic flow to meet changing needs in near real-time.
- Central management. They can maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
- Programmatic configuration. They can configure, manage, secure and optimise network resources quickly, via dynamic, automated SDN programs which they write themselves, making them tailored to the business.
- Open standards and vendor neutrality. They get simplified network design and operation because instructions are provided by SDN controllers instead of multiple vendor-specific devices and protocols. This open-standards point is key from an end-user perspective, as it enables centralised management.
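The “programmatic configuration” benefit above is worth making concrete. The sketch below is purely illustrative (the policy format and functions are invented, not any real controller’s API), but it shows the shift: network behaviour expressed as a small, business-tailored program rather than per-device CLI commands.

```python
# A business-level policy, written by the administrator as data:
POLICY = [
    {"match": {"app": "voip"},   "action": {"queue": "priority"}},
    {"match": {"app": "backup"}, "action": {"queue": "bulk"}},
]


def compile_policy(policy):
    """Translate the business-level policy into per-application rules that
    an SDN controller would install across its global view of the network."""
    return {rule["match"]["app"]: rule["action"]["queue"] for rule in policy}


def classify(flow, rules):
    """Decide which queue a flow belongs in; unmatched traffic is best-effort."""
    return rules.get(flow["app"], "best-effort")


rules = compile_policy(POLICY)
print(classify({"app": "voip"}, rules))   # priority
print(classify({"app": "email"}, rules))  # best-effort
```

Changing the business policy is now a one-line edit to `POLICY` followed by a re-push from the controller – the near-real-time adjustment the list above describes.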
There’s still a way to go with NFV and SDN, but Intel is working across the networking industry to enable the transformation. We’re doing a lot of joint work in open source solutions and standards, such as OpenStack.org (unified computing management, including networking), OpenDaylight.org (a platform for network programmability) and the Cisco* OpFlex protocol (an extensible policy protocol). We’re also looking at how we proceed from here, and what needs to be done to build an open, programmable ecosystem.
Today I’ll leave you with this short interview with one of our cloud architects, talking about how Intel’s IT team has implemented software-defined, self-service networking. My next blog will be the last in this current series, and we’ll be looking at that other hot topic for all data centre managers – analytics. In the meantime, I’d love to hear your thoughts on how your business could use SDN to drive time, cost and labour out of the data centre.
To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.
*Other names and brands may be claimed as the property of others.