Unable to join us at the Intel Developer Forum in San Francisco this August? We have you covered. This session dives into real-world parallel programming optimization examples from around the world… Read more
If you have watched a movie on Netflix*, called for a ride from Uber* or paid somebody using Square*, you have participated in the digital services economy. Behind those services are data centers and networks that must be scalable, reliable and responsive.
Dynamic resource pooling is one of the benefits of a software defined infrastructure (SDI) and helps unlock scalability in data centers to enable innovative services.
How does it work? In a recent installment of Intel’s Under the Hood video series, Sandra Rivera, Intel Vice President, Data Center Group and General Manager, Network Platforms Group, provides a great explanation of dynamic resource pooling and what it takes to make it happen.
In the video, Sandra explains how legacy networks, built using fixed-function, purpose-built network elements, limit scalability and new service deployment. But when virtualization and software defined networking are combined into a software defined infrastructure, the network can be much more flexibly configured.
Pools of virtualized networking, compute and storage functionality can be provisioned in different configurations, all without changing the infrastructure, to support the needs of different applications. This is the essence of dynamic resource pooling.
To get to an infrastructure that supports dynamic resource pooling takes the right platform. Sandra talks about how Intel is helping developers build these platforms with a strategy that starts with powerful silicon building blocks and software ingredient technology, in addition to support for open standards development, building an ecosystem, collaborating on technology trials and delivering open reference platforms.
It is an exciting time for the digital services economy – who knows what service will become the next Netflix, Uber or Square!
There’s much more to Sandra’s overview of dynamic resource pooling, so I encourage you to watch it in its entirety.
What enables you to do really great work? Motivation to do a good job and belief in what you are doing are important. You also need access to the right tools and resources — be they pen and paper, a complex software package, or your team and their expertise. And you need the freedom to decide how you are going to pull all this together to achieve your goals.
I’ve recently seen how Wiltshire Police Force has used technology to bring together the combination of drive, the right tools and the freedom to act. Working with Wiltshire Council, it has developed a new approach to policing that empowers staff members to decide how, when and where they work in order to best serve the local community.
The organization deployed 600 tablets and laptop PCs, all powered by Intel® Core™ i5 processors, placing one in each patrol vehicle and giving some to back-office support staff. The devices connect (using 3G) to all the applications and systems the officers need. This allows them to check case reports, look up number plates, take witness statements, record crime scene details, and even fill in HR appraisal forms, from any location.
It’s What You Do, Not Where You Do It
Kier Pritchard is the assistant chief constable who drove the project. He and his team follow the philosophy that “work should be what you do, not where you go”. By giving officers the flexibility to work anywhere, he’s empowering them to focus on doing their jobs, while staying out in the community.
“We’re seeing officers set up in a local coffee shop, or the town hall,” he said. “In this way they can keep up to date with their cases, but they’re also more in touch with the citizens they serve.”
The other advantage of the new model is that officers can be much more productive. There’s no more driving to and from the station to do administrative tasks. Instead, they can catch up on these in quiet periods during their shift. “This essentially means there’s no downtime at all for our officers now,” said Pritchard.
The introduction of this new policing approach has gone down well with Wiltshire’s officers. They’ve taken to the devices enthusiastically and are regularly coming up with their own ways of using them to improve efficiency and collaboration.
As well as making the working day more productive and rewarding for staff, the mobile devices have made a big difference to Wiltshire residents. Specialists in different departments of the police force are able to collaborate much more effectively by sharing their findings and resources through an integrated platform, making the experience for citizens much smoother. Areas in which the devices are used have also seen an improvement in crime figures thanks to the increased police presence within the community: in the town of Trowbridge, for example, antisocial behaviour dropped by 15.8 percent, domestic burglaries by 34.1 percent, and vehicle crime by 33 percent.
You can read more about how the officers are using the devices to create their own ideal ways of working in this recently published case study or hear about it in the team’s own words in this video. In the meantime, I’d love to hear your views on the role of mobile technology in empowering the workforce — how does it work for you?
Find me on LinkedIn.
Keep up with me on Twitter.
Thanks to all who joined the Tech Connect Chat on Wednesday, July 22 at 1 p.m. EDT / 10 a.m. PDT. Intel's Blake Sweeten and Kelly Boyle led the discussion on how to leverage new cloud-based, Intel®-powered Chromebooks and technology to create a simple, streamlined … Read more >
The post #TechConnect July 22 Chat Recap: “Intel® 2-in-1 and Tablet Opportunities in Education” appeared first on Technology Provider.
Since Intel IT generated US$351 million in value from Big Data and analytics during 2014, you might wonder how Intel started on the road to that milestone. In this presentation, "Evolution of Big Data at Intel: Crawl, Walk and Run Approach," from the 2015 Hadoop Summit in San Jose, Gomathy Bala, Director, and Chandhu Yalla, Manager and Architect, talk about Intel IT's big data journey. They cover its beginning, current use cases and long-term vision. Along the way, they offer useful information to organizations just starting to explore big data techniques and uses.
One key piece of advice from the presenters is to start on small, well-defined projects where you can see a clear return. That allows an organization to develop Big Data skills with lower risk and known reward, the "crawl" stage from the presentation title. Interestingly enough, Intel IT did not rush out and try to hire people who could immediately start using tools like Hadoop. Instead, they gathered engineers who were passionate about new technology and trained them to use those tools. This is the "walk" stage. Finally, with that experience, they developed an architecture to use Big Data techniques more generally. This "run" stage architecture is shown below, where all enterprise data can be analyzed in real time. We will be talking about Intel's Data Lake in an upcoming white paper.
Another lesson is to evaluate Hadoop distributions and choose one that is based on core open source. This is one of a number of criteria that were established. You can see more on Intel IT's Hadoop distribution evaluation criteria, and how we migrated between Hadoop versions, in a previous blog entry.
A video of "The Evolution of Big Data at Intel: Crawl, Walk and Run Approach" can be seen here, and the presentation slides are available on SlideShare. A video of Intel CIO Kim Stevenson talking about Intel's use of Big Data is included in the presentation video, but a clearer version can be found here.
Read Part I of Storage Wars One of the frustrations that has made integrating variable power resources, such as wind and solar, an engineering challenge is the lack of grid-scale energy storage systems. That’s one of the issues that keeps … Read more >
– NEW – Intel® Iris™, Iris™ Pro, and HD Graphics Driver update posted for Haswell and Broadwell, version 220.127.116.1151
Please see the Release Notes and excerpt below for details.
The driver (version 18.104.22.16851) has been posted to the Intel Download Center at the following direct links:
32bit – … Read more
Mobilizing the Field Worker
I recently had the opportunity to host an industry panel discussing the business transformation that occurs when mobility solutions are deployed for field workers. Generally speaking, field workers span a spectrum of industries and currently operate in one of four ways: pen and paper, a laptop tethered to a truck, a consumer-grade tablet, or a single-function device like a bar code scanner.
Intel currently defines this market as 10 million workers divided into two general categories: hard hat workers and professional services. Hard hat workers generally function in a ruggedized environment; think construction or field repair teams. Professional services includes real estate appraisers, insurance agents, law enforcement, and many others.
Field teams are capable of improving customer service, generating new revenue streams, and actively driving cost reductions. A successful mobile strategy can enable all three. Regardless of the industry, field workers need access to vital data when they’re not in the office.
The panel of experts consisted of system integrators as well as communication, hardware, and security experts. Together, we discussed the elements required for the successful deployment of a mobile solution.
The panel comprised Geoff Goetz from BSquare, Nancy Green from Verizon Wireless, and Michael Seawright from Intel Security. They brought a wealth of information, expertise and insight to the panel. I have tried to share the essence of this panel discussion, though I am sure I will not do it justice as they were truly outstanding.
The field worker segment represents a great business opportunity. By the very definition the field worker is on the front line delivering benefits and services to customers. They are reliant upon having the right information in a real time manner. Frequently, this information is available only through applications and legacy based software running back at headquarters. In planning for a successful deployment the enterprise must consider how they connect the field worker to this information. Hardware, applications, and back-office optimizations must all be considered.
Geoff Goetz from BSQUARE shared the perspective of both a hardware and system integrator. BSQUARE is a global leader in embedded software and customized hardware solutions. They enable smart connected systems at the device level, which are used by millions every day. BSQUARE offers solutions for mobile field workers across a spectrum of vertical industries. They have worked closely with Microsoft to develop a portfolio of Windows 10 based devices in 5-, 8- and 10-inch form factors. What was interesting to me was the 5-inch Intel-based handheld capable of running Windows 8.1 and soon Windows 10. The Inari5 fills the void for both field workers and IT managers. The Inari5 is a compelling solution that doesn't compromise on performance or functionality. Geoff and his team truly understand the value of having the right device for the job, as well as the software and applications that let an enterprise move faster while achieving the full benefits of mobilizing its field teams.
Nancy Green from Verizon Wireless highlighted the advantages of utilizing an extensive network to deliver connectivity right to the job site. Verizon Wireless offers a full suite of software solutions and technical capabilities to accelerate mobile programs across industries. Verizon delivers on the value proposition for both the line-of-business manager seeking a competitive advantage and the IT manager looking to easily manage and secure the devices in the field. As I mentioned before, one of the most critical requirements for field workers is access to information. Verizon has worked with numerous companies to unlock workforce optimization by reducing costs, simplifying access to remote data, and increasing collaboration. I was very impressed with the extensive resources Verizon can bring to bear in designing a mobile solution for field workers.
Michael Seawright from Intel Security is an industry advocate who has been successfully leading business transformation with Intel's partners for more than 20 years. In a hyper-competitive market, the field worker has the opportunity to build customer goodwill, fix problems the first time, and drive sell-up along the way.
Meanwhile, many companies are struggling to figure out the right level of management and security for their mobile workforce.
One advantage in deploying Intel-based mobile solutions is the built-in security at the processor level. Ultimately, the device security is only as good as its user’s passwords. The Intel Security team is working to address the vulnerabilities associated with passwords.
Ultimately, mobility is a business disruptor offering a chance to transform business processes and gain a competitive advantage. A successful program requires the IT department and its vendors to think beyond the device. It requires a solution approach to successfully manage the development, implementation and rollout. In addition, it may require back-office optimization. The following image is my attempt to highlight the architecture framework that should be considered for a mobile program.
By David A. Hoffman, Associate General Counsel, Director of Security Policy and Global Privacy Officer. On July 23rd, Intel Security's Senior Vice President Chris Young spoke on a panel at the Aspen Security Forum. The panel was titled "Cyber Policy … Read more >
The post Paying Down the Cybersecurity Debt: A Shared Responsibility appeared first on Policy@Intel.
If I asked you to play a round of word associations starting with ‘Intel’, I doubt many of you would come back with ‘networking’. Intel is known for a lot of other things, but would it surprise you to know that we’ve been in the networking space for more than 30 years, collaborating with key leaders in the industry? I’m talking computer networking here of course, not the sort that involves small talk in a conference centre bar over wine and blinis. We’ve been part of the network journey from the early Ethernet days, through wireless connectivity, datacentre fabric and on to silicon photonics. And during this time we’ve shipped over 1 billion Ethernet ports.
As with many aspects of the move to the software-defined infrastructure, networking is changing – or if it’s not already, it needs to. We’ve spoken in this blog series about the datacentre being traditionally hardware-defined, and this is especially the case with networking. Today, most networks consist of a suite of fixed-function devices – routers, switches, firewalls and the like. This means that the control plane and the data plane are combined with the physical device, making network (re)configuration and management time-consuming, inflexible and complex. As a result, a datacentre that’s otherwise fully equipped with the latest software-defined goodies could still be costly and lumbering. Did you know, for example, that even in today’s leading technology companies, networking managers have weekly meetings to discuss what changes need to be made to the network (due to the global impact even small changes can have), which can then take further weeks to implement? Ideally, these changes should be made within hours or even minutes.
So we at Intel (and many of our peers and customers) are looking at how we can take the software-defined approach we’ve used with compute and apply it to the network as well. How, essentially, do we create a virtualised pool of network resources that runs on industry-standard hardware and that we can manage using our friend, the orchestration layer? We need to separate the control plane from the data plane.
Building virtual foundations
The first step in this journey of network liberation is making sure the infrastructure is in place to support it. Historically, traditional industry-standard hardware wasn’t designed to deal with networking workloads, so Intel adopted a 4:1 workload consolidation strategy which uses best practices from the telco industry to optimise the processing core, memory, I/O scalability and performance of a system to meet network requirements. In practice, this means combining general-purpose hardware with specially designed software to effectively and reliably manage network workloads for application, control, packet and signal processing.
With this uber-foundation in place, we're ready to implement our network resource pools, where you can run a previously fixed network function (like a firewall, router or load balancer) on a virtual machine (VM), just the same as running a database engine on a VM. This is network function virtualisation, or NFV, and it enables you to rapidly stand up a new network function VM, meeting those hours-and-minutes timescales rather than days-and-weeks. It also effectively and reliably addresses the OpEx and manual provisioning challenges associated with a fixed-function network environment, in the same way that compute virtualisation did for your server farm. And the stronger your fabric, the faster it'll work – this is what's driving many data centre managers to consider upgrading from 10Gb Ethernet, through to 40Gb Ethernet and on to 100Gb Ethernet.
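To make that a little more concrete, here is a minimal sketch, using the openstacksdk Python library, of how an operator might stand up a network function such as a virtual firewall as an ordinary VM. The cloud name, image, flavour and network names are hypothetical placeholders for illustration, not a prescribed NFV workflow.

```python
# Hedged sketch: standing up a network function (e.g. a virtual firewall) as a
# VM with the openstacksdk library. Cloud, image, flavor and network names are
# hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="my-nfv-cloud")  # credentials resolved from clouds.yaml

image = conn.compute.find_image("virtual-firewall-1.0")
flavor = conn.compute.find_flavor("m1.large")
network = conn.network.find_network("tenant-east")

server = conn.compute.create_server(
    name="edge-firewall-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the new network function VM is ACTIVE and ready to take traffic.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

The point of the sketch is simply that a firewall or router becomes just another server-create call, which is what collapses provisioning times from weeks to minutes.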
Managing what you’ve built
So, hooray! We now have a path to virtualising our network functions, so we can take the rest of the week off, right? Well, not quite. The next area I want to address is software-defined networking (SDN), which is about how you orchestrate and manage your shiny new virtual network resources at a data centre level. It’s often confused with NFV but they’re actually separate and complementary approaches.
Again, SDN is nothing new as a concept. Take storage for example – you used to buy a fixed storage appliance, which came with management tools built-in. However, now it’s common to break the management out of the fixed appliance and manage all the resources centrally and from one location. It’s the same with SDN, and you can think of it as “Network Orchestration” in the context of SDI.
With SDN, administrators get a number of benefits:
- Agility. They can dynamically adjust network-wide traffic flows to meet changing needs in near real-time.
- Central management. They can maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
- Programmatic configuration. They can configure, manage, secure and optimise network resources quickly, via dynamic, automated SDN programs which they write themselves, making them tailored to the business.
- Open standards and vendor neutral. They get simplified network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols. This open standards point is key from an end user perspective as it enables centralised management.
There’s still a way to go with NFV and SDN, but Intel is working across the networking industry to enable the transformation. We’re doing a lot of joint work in open source solutions and standards, such as OpenStack.org – unified computing management including networking, OpenDaylight.org – a platform for network programmability, and also the Cisco* Opflex Protocol – an extensible policy protocol. We’re also looking at how we proceed from here, and what needs to be done in order to build an open, programmable ecosystem.
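To give a feel for what "programmatic configuration" looks like in practice, here is a hedged sketch, in Python, of pushing a simple flow rule to an OpenDaylight-style controller over a RESTCONF interface. The controller address, credentials, node ID and flow definition are illustrative assumptions rather than a recommended policy, and RESTCONF paths differ between controller versions.

```python
# Hedged sketch: installing a flow rule (drop inbound TCP/23) on an
# OpenDaylight-style controller via RESTCONF. Host, credentials, node and
# flow IDs are assumptions; paths vary between controller versions.
import requests

CONTROLLER = "http://sdn-controller.example.local:8181"
AUTH = ("admin", "admin")  # placeholder credentials

flow = {
    "flow": [{
        "id": "drop-telnet",
        "table_id": 0,
        "priority": 200,
        "match": {
            "ethernet-match": {"ethernet-type": {"type": 2048}},  # IPv4
            "ip-match": {"ip-protocol": 6},                       # TCP
            "tcp-destination-port": 23,
        },
        "instructions": {
            "instruction": [{
                "order": 0,
                "apply-actions": {"action": [{"order": 0, "drop-action": {}}]},
            }]
        },
    }]
}

url = (f"{CONTROLLER}/restconf/config/opendaylight-inventory:nodes/"
       "node/openflow:1/flow-node-inventory:table/0/flow/drop-telnet")

resp = requests.put(url, json=flow, auth=AUTH)
resp.raise_for_status()
print("Flow installed, HTTP", resp.status_code)
```

Because the rule is expressed as data and applied through an API, the same change can be scripted, reviewed and rolled out across the network in minutes rather than scheduled into a weekly change meeting.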
Today I’ll leave you with this short interview with one of our cloud architects, talking about how Intel’s IT team has implemented software-defined, self-service networking. My next blog will be the last in this current series, and we’ll be looking at that other hot topic for all data centre managers – analytics. In the meantime, I’d love to hear your thoughts on how your business could use SDN to drive time, cost and labour out of the data centre.
To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.
*Other names and brands may be claimed as the property of others.
Intel will be hosting a Bay Area Spark meetup at the Intel Santa Clara campus (SC12 auditorium) on Thursday, Aug 20, 6:30 p.m. – 9:00 p.m.
This meetup will focus on large-scale distributed ML on… Read more
When it comes to the cloud, there is no single answer to the question of how to ensure the optimal performance, scalability, and portability of workloads. There are, in fact, many answers, and they are all tied to the interrelated layers of the software-defined infrastructure (SDI) stack. The recently announced Intel Cloud for All Initiative is focused directly on working with cloud software vendors and the community to deliver fully optimized SDI stacks that can serve a wide array of apps and data. To better understand the underlying strategy driving the Cloud for All Initiative, it's important to see the relationships between each layer of the SDI stack.
In this post, we will walk through the layers of the SDI stack, as shown here.
The foundation of Software Defined Infrastructure is the creation of infrastructure resource pools providing compute, storage and network services. These resource pools utilize the performance and platform capabilities of Intel architecture to enable applications to understand, and then control, what they consume. Our work with the infrastructure ecosystem is focused on ensuring that the infrastructure powering the resource pools is always optimized for a wide array of SDI stacks.
The OS layer
At the operating system level, the stack includes commonly used operating systems and software libraries that allow applications to achieve optimum performance while enabling portability from one environment to another. Intel has a long history of engineering with both OS vendors and the community, and has extended this work to lightweight OSes that provide greater efficiency for cloud native workloads.
The Virtualization layer
Moving up the stack, we have the virtualization layer, which is essential to software-defined infrastructure. Without virtualization, SDI would not be possible. But in this context, virtualization can include more than just typical hypervisors. In order to establish resource pools, the infrastructure components of compute, storage, and network are virtualized through various means. The best resource pools are those that can continue to scale out to meet the growing needs of their consumers. Last but not least, the performance isolation provided by containers can be considered OS virtualization, which has enabled a whole new set of design patterns for developers to use. For both containers and hypervisors, Intel is working with software providers to fully utilize the capabilities of Intel® Virtualization Technology (Intel® VT) to drastically reduce performance overhead and increase security isolation. For both storage and network, we have additional libraries and instruction sets that help deliver the best performance possible for this wide array of infrastructure services.
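As a small, hedged illustration of the hardware capability this layer builds on, the snippet below checks whether a Linux host advertises Intel VT-x (the "vmx" CPU flag) before it is treated as a virtualization-capable member of a resource pool. It is a sketch of one possible readiness check, not part of any Intel tool.

```python
# Hedged sketch: verify that a Linux host advertises Intel VT-x (the "vmx"
# CPU flag) before treating it as a virtualization-capable pool member.
def has_vtx(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                # CPU feature flags are a space-separated list after the colon.
                return "vmx" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    print("Intel VT-x detected" if has_vtx() else "No vmx flag found")
```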
The Orchestration layer
There are numerous orchestration layers and schedulers available; however, for this discussion we will focus on those being built in the open: OpenStack, Apache Mesos, and Kubernetes. This layer provides central oversight of the status of the infrastructure, what is allocated and what is consumed, how applications or tenants are deployed, and how to best meet the goal of most data center infrastructure teams: increase utilization while maintaining performance. Intel's engagement within the orchestration layer focuses on working with the industry both to harden this layer and to bring in advanced algorithms that can help all data centers become more efficient. Some examples are our work in the OpenStack community to improve the availability of the cloud services themselves, and to provide rolling upgrades so that the cloud and tenants are always on. In Mesos, we are working to help users of this technology use all available computing slack so they can improve their TCO.
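To give a feel for how that "increase utilization while maintaining performance" goal surfaces at this layer, here is a hedged sketch using the Kubernetes Python client to create a small deployment with explicit CPU and memory requests and limits, which is the information the scheduler uses to pack work densely without starving it. The namespace, image and sizing are illustrative assumptions.

```python
# Hedged sketch: a Deployment with explicit resource requests/limits so the
# scheduler can raise utilization without starving the workload. Namespace,
# image and sizing are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster

container = client.V1Container(
    name="web",
    image="registry.example.local/web:1.0",
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},  # what the scheduler reserves
        limits={"cpu": "500m", "memory": "512Mi"},    # hard ceiling for the container
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```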
The Developer environment
The entire SDI infrastructure is really built to power the developer's code and data, which all of us as consumers use every day of our lives. Intel has a long history of helping improve debugging tools, making it easier for developers to move to new design patterns like multi-threaded and, now, distributed systems, and helping developers get the most performance out of their code. We will continue to increase our focus here to make sure that developers can concentrate on writing the best software, and let the tools help them build always-on, highly performant apps and services.
For a close-up look at Intel’s focus on standards-based innovation for the SDI stack, check out the related sessions at the Intel Developer Forum, which takes place August 18 – 20 in San Francisco. These events will include a class that dives down into the Intel vision for the open, standards-based SDI stacks that are the key to mainstream cloud adoption.
Cloud computing has been a tremendous driver of business growth over the past five years. Digital services such as Uber, AirBnB, Coursera, and Netflix have defined the consumer zeitgeist while redefining entire industries in the process. This first wave of cloud-fueled business growth has largely been created by businesses leveraging cloud native applications aimed at consumer services. Traditional enterprises that seek the same agility and efficiency the cloud provides have viewed migration of traditional enterprise applications to the cloud as a slow and complex challenge. At the same time, new cloud service providers are seeking to compete at cost parity with large providers, and industry-standard solutions that can help have been slow in arriving. The industry simply isn't moving fast enough to address these very real customer challenges, and our customers are asking for help.
To help solve these real issues, Intel is announcing the Cloud for All Initiative with the goal of accelerating the deployment of tens of thousands of clouds over the next five years. This initiative is focused solely on cloud adoption, to deliver the benefits of cloud to all of our customers. It represents an enormous efficiency gain and strategic transition for enterprise IT and cloud service providers. The key to delivering the efficiency of the cloud to the enterprise is rooted in software-defined infrastructure. This push for more intelligent and programmable infrastructure is something we've been working on at Intel for several years. The ultimate goal of software-defined infrastructure is one where compute, storage and network resource pools are dynamically provisioned based on application requirements.
Cloud for All has three key objectives:
- Invest in broad industry collaborations to create enterprise ready, easy to deploy SDI solutions
- Optimize SDI stacks for high efficiency across workloads
- Align the industry towards standards and development focus to accelerate cloud deployment
Through investment, Intel will utilize our broad ecosystem relationships to ensure that a choice of SDI solutions, supporting both traditional enterprise and cloud native applications, is available in easy-to-consume options. This work will include scores of industry collaborations that ensure SDI stacks have frictionless integration into data center infrastructure.
Through optimization, Intel will work with cloud software providers to ensure that SDI stacks are delivered with rich enterprise feature sets, are highly available and secure, and scale to thousands of nodes. This work will include the full optimization of software to take advantage of Intel architecture features and technologies like Intel Virtualization Technology, cloud integrity technology, and platform telemetry, all to deliver optimal enterprise capabilities.
Through industry alignment, Intel will use its leadership role in industry organizations, as well as our work with the broad developer community, to ensure that the right standards are in place so that workloads have true portability across clouds. This standardization will help enterprises have the confidence to deploy a mix of traditional and cloud native applications.
This work has already started. We have been engaged in the OpenStack community for a number of years as a consumer, and more recently joined the Foundation board last year. We have used that user and leadership position to push for features needed in the enterprise. Our work does not stop there, however; over the past few months we've announced collaborations with cloud software leaders including CoreOS, Docker and Red Hat, highlighting enterprise readiness for OpenStack and container solutions. We've joined with other industry leaders to form the Open Container Initiative and the Cloud Native Computing Foundation to drive the industry standards and frameworks for cloud native applications.
Today, we've announced our next step in Cloud for All: a strategic collaboration with Rackspace, the co-founder of OpenStack and a company with a deep history of collaboration with Intel. We've come together to deliver a stable, predictable, and easy-to-operate enterprise-ready OpenStack scalable to thousands of nodes. This will be accomplished through the creation of the OpenStack Innovation Center, where we will assemble large developer teams across Intel and Rackspace to work together on the key challenges facing the OpenStack platform. Our upstream contributions will align with the priorities of the OpenStack Foundation's Enterprise Workgroup. To facilitate this effort, we will create the Hybrid Cloud Testing Cluster, a large-scale environment open to all developers in the community wishing to test their code at scale, with the objective of improving the OpenStack platform. In total, we expect this collaboration to engage hundreds of new developers, internally and through community engagement, to address critical requirements for the OpenStack community.
Of course, we’ve only just begun. You can expect to hear dozens of announcements from us in the coming year including additional investments and collaborations, as well as the results of our optimization and delivery. I’m delighted to be able to share this journey with you as Cloud for All gains momentum. We welcome discussion on how Intel can best work with industry leaders and customers to deliver the goals of Cloud for All to the enterprise.
Intel is a strong believer in the value cloud technology offers businesses around the world. It has enabled new usage models like Uber*, Waze*, Netflix* and even AirBnB*. Unfortunately, access to this technology has been limited because industry solutions are … Read more >
The post Intel® Cloud for All initiative to Invest in OpenStack Community appeared first on Intel Software and Services.
Much like two perfectly synced road trip buddies who have been traveling together for decades, Intel and QNX Software Systems, a subsidiary of BlackBerry, are embarking on a new adventure: advancing in-vehicle technologies that deliver new driving experiences. The collaboration … Read more >
The post Intel IoT and QNX Software Systems Collaborate on ADAS Solutions appeared first on IoT@Intel.
Imagine you are looking at a person in front of you, and you need to communicate. Before you speak, do you consider how many centimeters that person is away from you, or do you consider whether that person is relatively close or … Read more >
The post Developers Need to Consider Relative Input For Intel RealSense Technology appeared first on Intel Software and Services.
This past Saturday, the World Trade Organization (WTO) made significant progress toward expanding the Information Technology Agreement (ITA), an accord that has advanced innovation, trade and economic growth around the world for the past 18 years. In today’s rapidly evolving … Read more >
The post The Impact of ITA: the Progress of Today & the Promise of Tomorrow appeared first on Policy@Intel.
When an organization is considering implementing a mobile BI strategy, it needs to consider whether its current information technology (IT) and business intelligence (BI) infrastructure can support mobile BI. It must determine if there are any gaps that need to be addressed prior to going live.
When we think of an end-to-end mobile BI solution, there are several areas that can impact the user experience. I refer to them as choke points. Some of the risks associated with these choke points can be eliminated; others will have to be mitigated. Depending on the business model and how the IT organization is set up, these choke points may be dependent on the configuration of technology or they may hinge on processes that are embedded into business or IT operations. Evaluating both infrastructures for mobile BI readiness is the first step.
IT Infrastructure’s Mobile BI Readiness
The IT infrastructure typically includes the mobile devices, wireless networks, and any other services or operations that will enable these devices to operate smoothly within a set of connected networks, which span those owned by the business or external networks managed by third party vendors. As mobile BI users move from one point of access to another, they consume data and assets on these connected networks and the mobile BI experience should be predictable within each network’s constraints of flexibility and bandwidth.
Mobile device management (MDM) systems also play a crucial role in the IT infrastructure. Before mobile users have a chance to access any dashboards or look at data on any reports, their mobile devices need to be set up first. Depending on the configuration, enablement may include device and user enrollment, single sign-on (SSO), remote access, and more.
Additionally, failing to properly enroll either the device or the user may result in compliance issues or other risks. It's critical to know how much of this comes preconfigured with the device and how the user will manage these tasks. When you add bring-your-own-device (BYOD) arrangements to the mix, the equation gets more complex.
BI Infrastructure’s Mobile BI Readiness
Once the user is enabled on the mobile device and business network, the BI infrastructure will be employed. The BI infrastructure typically includes the BI software, hardware, user profiles, and any other services or operations that will enable consumption of BI assets on mobile devices. The mobile BI software, whether it is an app or web-based solution, will need to be properly managed.
The first area of concern for an app-based solution is the installation of the app from an app store. For example, does the user download the app from iTunes (in the case of an iPad or iPhone) or from an IT-managed corporate app store or gallery? Is it a custom-built app developed in-house or is it part of the current BI software? Does the app come preconfigured with the company-supplied mobile device (similar to how email is set up on a PC) or is the user left alone to complete the installation?
When the app is installed, are we done? No. In many instances, the app will need to be configured to connect to the mobile BI servers. Moreover, this configuration step needs to come after obtaining the proper authorizations, which involves entering the user's access credentials (at minimum a user ID and password, unless SSO can be leveraged).
If the required authorizations are not obtained in time, regardless of existing BI user profiles, the user configuration can only be partially completed. More often than not, mobile BI users will need assistance with technical and process-related topics. Hence, streamlining both the installation and configuration steps will further improve the onboarding process.
Infrastructure is the backbone of any technology operation, and it’s equally important for mobile BI. Close alignment with enterprise mobility, as I wrote in “10 Mobile BI Strategy Questions: Enterprise Mobility,” will help to close the gaps in many of these areas. When we’re developing a mobile BI strategy, we can’t take the existing IT or BI infrastructure for granted.
Where do you see the biggest gap when it comes to technology infrastructure in mobile BI planning?
Stay tuned for my next blog in the Mobile BI Strategy series.
This story originally appeared on the SAP Analytics Blog.
With the cloud software industry advancing on a selection of Software Defined Infrastructure 'stacks' to support enterprise data centers, the question of application portability comes squarely into focus. A new 'style' of application development has started to gather momentum in both the public cloud and the private cloud. Cloud native applications, as this new style has been named, are those applications that are container packaged, dynamically scheduled, and microservices oriented. They are rapidly gaining favor for their improved efficiency and agility as compared to more traditional monolithic data center applications.
However, creating a cloud native application does not eliminate the dependencies on traditional data center services. Foundational services such as networking, storage, automation, and of course compute are all still very much required. In fact, since the concept of a full virtual machine may not be present in a cloud native application, these applications rely significantly on their infrastructure software to provide the right components. When done well, a cloud native SDI stack can provide efficiency and agility previously seen only in a few hyperscale environments.
Another key aspect of the cloud native application is that it should be highly portable. This portability between environments is a massive productivity gain for both developers and operators. An application developer wants the ability to package an application component once and have it be reusable across all clouds, both public and private. A cloud operator wants the freedom to place portions of an application where they make the most sense, whether on a private cloud or with a public cloud partner. Cloud native applications are the next step in true hybrid cloud usage.
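As a minimal sketch of what that portability can look like at the application level, the toy microservice below takes every environment-specific detail from environment variables supplied by the platform, so the same container image can run unchanged on a private or public cloud. The variable names and port are assumptions for illustration only.

```python
# Hedged sketch: a toy microservice whose environment-specific settings come
# only from environment variables, so the same container image runs unchanged
# on any cloud. Variable names and port are illustrative assumptions.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

PORT = int(os.environ.get("PORT", "8080"))        # injected by the platform
BACKEND_URL = os.environ.get("BACKEND_URL", "")   # e.g. set via service discovery

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok", "backend": BACKEND_URL}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", PORT), HealthHandler).serve_forever()
```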
So, with this promise of efficiency, operational agility, and portability, where do data center managers look for definitions of how the industry will address movement of apps between stacks? How can one deploy a cloud native app and ensure it can be moved across clouds and SDI stacks without issue? Without a firm answer, can one really develop cloud native apps with confidence that portability will not be limited to environments running identical SDI stacks? These are the types of questions that often stall organizational innovation, and they are the reason why Intel has joined with other cloud leaders in the formation of the Cloud Native Computing Foundation (CNCF).
Announced this week at the first-ever KuberCon event, the CNCF has been chartered to provide guidance, operational patterns, standards and, over time, APIs to ensure container-based SDI stacks are both interoperable and optimized for a seamless, performant developer experience. The CNCF will work with the recently formed Open Container Initiative (OCI) toward the synergistic goal of addressing the full scope of container standards and the supporting services needed for success.
Why announce this at KuberCon? The goal of the CNCF is to foster innovation in the community around these application models, and the best way to speed innovation is to start with some seed technologies. Much the same way it is easier to start writing (a blog, perhaps?) when you have a few sentences on screen rather than staring at a blank page, the CNCF is not starting from a blank page either: Kubernetes, having just passed its 1.0 release, will be one of the first technologies used to kick-start this effort. Many more technologies, and even full stacks, will follow, with a goal of several 'reference' SDI platforms that support the portability required.
What is Intel’s role here? Based on our decades of experience helping lead industry innovation and standardization across computing hardware and open source software domains, we are firmly committed to the CNCF goals and plan to actively participate in the leadership body and Technical Oversight Committee of the Foundation. This effort is reflective of our broader commitment to working with the industry to accelerate the broad use of cloud computing through delivery of optimized, easy to consume, operate, and feature complete SDI stacks. This engagement complements our existing leadership roles in the OCI, the OpenStack Foundation, Cloud Foundry Foundation as well as our existing work driving solutions with the SDI platform ecosystem.
With the cloud software industry accelerating its pace of innovation, please stay tuned for more details on Intel's broad engagement in this space. To deepen your engagement with Intel, I invite you to join us at the upcoming Intel Developer Forum in San Francisco to gain a broader perspective on Intel's strategy for accelerating the cloud.