ADVISOR DETAILS

RECENT BLOG POSTS

High-Performance Computing Helping to Make Dreams Come True – HLRS

This year is shaping up to be one of the best years for cinema in a while. Some of Hollywood’s most iconic characters are returning to the big screen in 2015, with new releases from the James Bond*, Star Wars* and The Hunger Games* franchises. However, few of us stop to wonder how 007 can plummet through a glass ceiling unscathed or how those crashing X-wings look so realistic… It’s all down to hidden technology.

 

Technology in the Talkies

 

Space drama Gravity* won several Oscars at the 2014 Academy Awards, including Best Visual Effects, and it’s not hard to see why. Apparently, around 80 per cent of the scenes in Gravity were animated and computer generated. In many scenes, only Sandra Bullock’s and George Clooney’s faces existed as anything other than 1s and 0s on computers. Everything else, from the space shuttles and jetpacks to the space debris, was created by graphic artists using Dell* workstations powered by 4th generation Intel® Core™ i5 and i7 vPro™ processors.

 

Only last month, we released What Lives Inside, the fourth installment of Intel’s Inside Films series. Directed by two-time Oscar-winner Robert Stromberg, our latest social film stars Colin Hanks, J.K. Simmons and Catherine O’Hara alongside the recently launched Dell* Venue 8 7000 Series super-thin tablet with Intel® RealSense™ technology and powered by an Intel® Atom™ processor Z3580. The film took eight days to shoot and relied on 200 visual effects artists, which just goes to show what it takes to bring such whimsical worlds to life on the big screen.

 

HPC Helping the Film Industry

 

3D movies rely heavily on technology and require significant computing capacity. In a change of pace from the usual manufacturing or university research projects, The High Performance Computing Center in Stuttgart (HLRS) recently supported a local production company by rendering a 3D kids’ movie called Maya, The Bee*. This 3D flick, starring Kodi Smit-McPhee and Miriam Margolyes, is not your typical HPC challenge, but the amount of data behind the 3D visuals presented quite a mountain to climb for its makers, Studio 100.

 

To ensure the film was ready in time, the project was transferred to HLRS, which had recently upgraded to the Intel® Xeon® Processor E5-2680 v3. Because the new system delivers four times the performance of its previous supercomputer,1 HLRS can undertake more projects and better serve critical industry needs like this one. Thanks to the HPC capacity available at HLRS, Maya, The Bee was released last month in all its 3D glory.2 “We are addicted to giving best possible service, so it is vital that we run on reliable technology,” said Bastian Koller, Manager of HLRS. For more on the HLRS supercomputer, click here.

 

Bringing Characters to Life

 

Intel and Framestore have been working together for almost five years now. However, Paddington* is the first film Framestore has worked on with a computer-animated animal as the lead character, and the mischievous little bear posed quite a few challenges. Many characters brought to film, such as J. K. Rowling’s Dobby* or Andy Serkis’ Gollum*, are created using motion-capture technology to make them appear more lifelike. For Paddington, the actor voicing the bear wore a head-mounted camera during the voice recordings so animators could see how a human face moved at every point and mimic it in bear form. While this gave audiences an incredible character, animating and rendering a photo-real lead character for every shot required significant processing capacity. It took 350 people across two continents working for three years to bring Paddington the CGI bear to life, but with high-performance, flexible IT solutions based on servers powered by Intel® Xeon® processors, it was a piece of cake (or a marmalade sandwich!).

 

Intel VP & GM, Gordon Graylish, shared his thoughts on the red carpet at the Paddington premiere, saying: “It is wonderful when [Intel® technology] gets to be used for creativity like this, and this is something that would have been impossible or prohibitively expensive [to make] even two or three years ago. Technology allows you to take the limits off your imagination.”

 

I’d love to hear what you think about all this hidden HPC tech used to get fantastic blockbusters into our movie theaters, so please leave a comment below or continue this conversation by connecting with me on LinkedIn or Twitter, with #ITCenter – I’m happy to answer questions you may have.

 

1 Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance

 

2 Released in March 2015 in the US and the UK

 

*Other names and brands may be claimed as the property of others.

Read more >

The Challenge of the Smart Megacity

On my recent visit to China, I was struck by the country’s commitment to investing in smart cities. China’s most recent five-year plan set aside $70 billion for smart city technologies, with around 200 cities competing for funding. This is part of a huge project of urbanization, which saw $1 trillion allocated for urban infrastructure under the same plan. Last year, the Chinese government announced its intention to increase its urban population from 53.7 percent to 60 percent by 2020, and there are already 15 megacities in China with more than 10 million people each.

 

With 1.3 million people moving to, and trying to build lives in, cities globally every week through 2050, it’s no surprise that bringing “smart” to these locations has risen up the agenda of many of the world’s most prominent cities.

 

If done properly, a smarter city environment should have a measurable impact on the economy, citizens and their lifestyles, business and the environment. There’s certainly no shortage of examples of how technology can be applied to making a city smart.

 

 

BUT do applications like these amount to a “smart city”? One thing I’ve noticed from my discussions with Intel customers in China and other countries is that there is no single definition of what the smart city is. Government bodies recognize the opportunities presented by technologies like those I’ve mentioned, and it’s clear there’s a healthy degree of friendly competition amongst cities. Where I see many struggle, though, is in working out what they should do first or next, and what the smart city really means to them.

 

While this may be well understood by many of you, the focus areas we see coming up most frequently are:

 

  • Smart Transport/Mobility
  • Smart Home/Building/Facility
  • Smart Public Infrastructure & Community-Driven Services
  • Smart Fixed/Mobile Security Surveillance
  • Analytics and Big Data Strategy/Planning

 


Irrespective of which area a city focuses on first, one thing is for sure: with the proliferation of millions of smart connected devices – on the transport network, and in anything from buildings to street lights to manholes – the result is a huge amount of flowing data. To get the best return on investment, it’s essential to plan how that data will be managed, how value can be extracted from it and what you plan to do with it. While almost every customer I talk to acknowledges they need to do something with the data, most struggle to define what that something is. Without these plans in place, the data simply piles up and creates mountains in minutes. If you haven’t done so already, I’d recommend hiring some data scientists – typically mathematicians or statisticians – who can help you determine what data you need and what you might want to do with it.

 

On a somewhat related note, many of you will be familiar with the SMAC stack (social, mobile, analytics, cloud). This is the digital platform being laid down across industries to underpin transformation. It’s been a core part of the rapid rise we’ve seen in shared-economy companies like Uber and AirBnB. It is also fundamental to the smart city. The smart city is not just about adding connectivity to a building or other asset: it’s about the data you gather, the insights you gain, the services you can create and deliver, the accessibility you provide, the economic growth you stimulate and the communities you grow. Clearly, this all needs to be done and delivered in a secure and predictable manner. The point is not to use just one part of SMAC in isolation; the impact comes from the multiplicative effect of combining them.

 

In the smart city, as much as anywhere these days, all roads lead to data. The question we need to be asking is: which roads do we want to travel?

 

What do you think defines the smart city? I’d be interested to read your comments below.

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

Read more >

Ordering Kiosks Give Hardee’s a Tasty Solution for Satisfying Customers and Growing Sales

Today’s consumers move at breakneck speed, which is one reason quick-service restaurants like Hardee’s are so popular. And to stay out front, it’s essential for those restaurants to keep finding new ways to delight customers and keep them coming back. Hardee’s did it by installing new quick-service customer ordering kiosks with 24-inch, touch-based screens. Instead of waiting in line, customers can see enticing images of what’s on the menu and then order with a few quick taps on the screen. And since orders go directly to the kitchen, the food is ready sooner. It all means Hardee’s can serve more customers and bring in more revenue.

 

Based on Industry-Standard Technology

 

Hardee’s had investigated the idea of ordering kiosks a few years ago, but those available at the time were based on proprietary technology and too expensive to be practical.

 

 

 

The new kiosks Hardee’s chose are based on a Dell OptiPlex all-in-one system equipped with Intel® Core™ i5 vPro™ processors and Windows 8.1 Pro.

 

Using industry-standard technology like Windows gives Hardee’s the flexibility to run other applications, including software used by employees and managers. It’s also convenient for software developers, who can use familiar programming environments, and for the restaurant’s IT administrators, who can use existing Microsoft systems management tools.

 

The all-in-one form factor increases deployment flexibility, since Hardee’s can mount the kiosks in a variety of places, depending on the layout of each restaurant.

 

Controlling Costs in the Future

 

With the success of the kiosks, Hardee’s is now considering using all-in-one systems to gradually replace point-of-sale (POS) systems at the counter as a tasty solution for delivering an outstanding customer experience and controlling costs.

 

To learn more, take a look at the Hardee’s solution here or read more about it here. To explore more technology success stories, visit www.intel.com/itcasestudies or follow us on Twitter.

Read more >

Big Data is Changing the Football Game

The football authorities have been slow to embrace technology, at times actively resisting it. It’s only been two seasons since some of Europe’s top leagues were authorized to use goal-line technology to answer the relatively simple question of whether or not a goal has been scored, i.e., has the whole ball crossed the goal line.

 

This is something the games of tennis and cricket have been doing for nearly ten years, but for one of the world’s richest sports, it risked becoming a bit of a joke.  As one seasoned British manager once said, after seeing officials deny his team a perfectly good goal: “We can put a man on the moon, time serves of 100 miles per hour at Wimbledon, yet we cannot place a couple of sensors in a net to show when a goal has been scored.” The authorities eventually relented, of course, their hand forced by increasingly common, high profile and embarrassing slip-ups.

 

But while the sport’s governing bodies were in the grips of technological inertia, the world’s top clubs have dived in head first over the last ten to fifteen years, turning to big data analytics in search of a new competitive advantage. In turn, this has seen some innovative companies spring up to serve this new ‘industry’ – companies like Intel customer Scout7.

 

Taking the Guesswork out of the Beautiful Game

 

Big data has become important in football in part because football is big business. And for a trend that is only in its second decade, things have moved fast since the days when teams of hundreds of scouts collected ‘data’ in the form of thousands of written reports, in an effort to provide clubs with insights into the opposition or potential new signings.

 

Now, with tools like Scout7’s football database, powered by a solution based on the Intel® Xeon® Processor E3 Family, clubs have a fast, sophisticated system they can use to enhance their scouting and analysis operations.

 

For 138 clubs in 30 leagues, Scout7 makes videos of games from all over the world available for analysis within two hours of the final whistle[1]. At the touch of a button, clubs can take some of the guesswork and ‘instinct’ out of deciding who gets on the pitch, as well as the legwork out of keeping tabs on players and prospects from all over the world.

 


Pass master: Map of one player’s passes and average positions from the Italian Serie A during the 2014-15 season

 

Using big data analytics to enable smarter player recruitment is among Scout7’s specialties. For young players, without several seasons of experience on which to judge them, this can be especially crucial. How do you make a call on their temperament or readiness to make the step up? How will they handle the pressure? As we enter the busiest recruitment period of the football calendar – the summer transfer window – questions like this are being asked throughout the football world right now.

 

Delving into the Data

 

It’s a global game, and Scout7 deals in global data, so we can head to a league less travelled for an example: the Czech First League. The UEFA Under-21 European Championships also took place this summer and, with international tournaments often acting as shop windows for the summer transfer market (which opened on 1st July – a day after the tournament’s final), it makes sense to factor this into our analysis.

 

So, let’s look at the Scout7 player database for players in the Czech First League who are currently Under-21 internationals, to see who has had the most game time and therefore the most exposure to the rigors of competitive football. We can see that a 22-year-old FC Hradec Králové defender played every single minute of his team’s league campaign this season – 2,700 minutes in total.

 

Another player’s on-field time for this season was 97% — valuable experience for a youngster. Having identified two potential first-team ready players, Scout7’s database would allow us to take a closer look at the key moments from these games in high-definition video.
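To make that concrete, here is a minimal sketch of the kind of game-time filter described above, assuming a hypothetical CSV export of appearance data rather than Scout7’s actual database or API (the column names are invented for illustration):

```python
import csv

# Minimal sketch against a hypothetical CSV export of appearance data.
# Scout7's real schema and API are not shown here; column names are assumptions.
# Assumed columns: player, club, league, u21_international, minutes_played, team_minutes

def u21_game_time(path, league="Czech First League", min_share=0.90):
    """Rank current under-21 internationals in a league by share of available minutes."""
    results = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["league"] != league or row["u21_international"].lower() != "yes":
                continue
            share = int(row["minutes_played"]) / int(row["team_minutes"])
            if share >= min_share:
                results.append((row["player"], row["club"], int(row["minutes_played"]), share))
    # A player with 2,700 of 2,700 possible minutes scores a share of 1.0 (100%).
    return sorted(results, key=lambda r: r[3], reverse=True)

if __name__ == "__main__":
    for player, club, minutes, share in u21_game_time("appearances.csv"):
        print(f"{player} ({club}): {minutes} min, {share:.0%} of available minutes")
```

The real platform does this across whole leagues and links each result straight to the matching video, but the underlying idea is the same: filter, rank, then drill into the footage.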

 

Check out our infographic, detailing the fledgling career of another player in the context of the vast amount of data collection and analysis that takes place within Scout7.

 

Scout7 player profile

 

“Our customers are embracing this transition to data-driven business decision-making, breaking away from blind faith in the hunches of individuals and pulling insights from the raft of new information sources, including video, to extract value and insights from big data,” explains Lee Jamison, managing director and founder, Scout7.

 

Scout7’s platform uses Intel® technology to deliver the computing power and video transcoding speed that clubs need to mine and analyze more than 3 million minutes of footage per year, and its database holds 135,000 active player records.

 

Lonely at the Top

 

There’s only room at the top of the elite level of sport for one, and the margins between success and failure can be centimeters or split seconds. Identifying exactly where to find those winning centimeters and split seconds is where big data analytics really comes into its own.

 

Read the full case study.

 

To continue this conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.


Find me on LinkedIn.

Keep up with me on Twitter.

 

*Other names and brands may be claimed as the property of others.

 

[1] Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to www.intel.com/performance


Intel does not control or audit the design or implementation of third party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported and confirm whether the referenced benchmark data are accurate and reflect performance of systems available for purchase.


Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. Check with your system manufacturer or retailer or learn more at http://www.intel.com

Read more >

10 Mobile BI Strategy Questions: Executive Sponsorship

Of the ten mobile BI questions I outlined in my last post, “Do we have an executive sponsor?” is the most important one because the success of a mobile BI journey depends on it more than any other. While the role of an executive sponsor is critical in all tech projects, several aspects of mobile BI technology make it easy for executive management to be involved closely and play a unique role.

 

Moreover, although the CIO or the CTO plays a critical role in making sure the right technology is acquired or developed, executive sponsorship from the business side provides the right level of partnership to run on all three cylinders of BI: insight into the right data, for the right role, at the right time.

 

Why Do We Need an Executive Sponsor?

 

We need executive sponsorship because, unlike grassroots efforts, business and technology projects require a top-down approach. Whether the strategy is developed as part of a structured project or as a standalone engagement, the executive sponsor delivers three critical ingredients:

 

  1. The mobile BI strategy is in line with the overall business strategy.
  2. The required resources are made available.
  3. Necessary guidance is provided in order to stay the course.

 

Is Having an Executive Sponsor Enough?

 

Having an executive sponsor only on paper isn’t enough, however. The commitment an executive sponsor makes and the leadership he/she provides have a direct impact on the outcome of the strategy. Thus, the ideal executive sponsor of a mobile BI initiative is a champion of the cause, an ardent mobile user, and the most active consumer of its assets.

 

What Makes an Ideal Executive Sponsor for Mobile BI?

 

How does the executive champion the mobile BI initiative? First and foremost, he/she leads by example — no more printing paper copies of reports or dashboards. This means that the executive is keen not only to consume the data on mobile devices but also to apply the insight derived from these mobile assets to decisions that matter. Using the technology firsthand demonstrates the mobile mindset and sets an example for direct reports and their teams. In addition, by recognizing the information available in these mobile BI assets as the single version of the truth, the executive provides a clear and consistent message for everyone to follow.

 

Is Mobile BI Easier for Executive Sponsors to Adopt?

 

Without a doubt, mobile BI, just like mobility in general, appeals to a wide range of users, starting with executives. Unlike the PC, which wasn’t mobile at all, and the laptop, which provided limited mobility, tablets and smartphones offer a perfect combination of mobility and convenience. This ease of use makes these devices ideal candidates for winning over even those executives who may initially have been reluctant to include mobile BI in their arsenals or to use it in their daily decision-making.

 

The mobility and simplicity may give the executives additional incentives to get involved in the development of requirements for the first set of mobile BI assets because they can easily see the benefits of having access to critical information at their fingertips. These benefits include an additional opportunity for sales and marketing to use mobile BI to showcase new products and services to customers (an approach that reflects the innovation inherent in the use of this technology).

 

Bottom Line: Executive Sponsorship Matters

 

The most important goal of a mobile BI strategy is to enable faster, better-informed decision making. Executive sponsorship matters because with the right sponsorship, the mobile BI initiative will have the best chance to drive growth and profitability. Without this sponsorship — even with the most advanced technology in place — a strategy will face an uphill battle.

 

What other aspects of executive sponsorship do you see playing a role in mobile BI strategy?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter at @KaanTurnali and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

Read more >

NVM Express: Windows driver support decoded



NVMe Drivers and SSD support in Windows

Microsoft enabled native support for NVM Express (NVMe) in Windows 8.1 and Windows Server 2012 R2 by way of inbox drivers, and subsequent versions of each OS family are expected to have native support moving forward. Additionally, native support for NVMe in Windows 7 and Windows Server 2008 R2 was added via product updates.

 

Intel also provides an NVMe driver for Microsoft operating systems that is released with each version of our NVMe hardware products and validated both internally and using Microsoft’s WHCK. The list of supported OSs is the same as those above (in both 32-bit and 64-bit versions), along with Windows 8 and Windows Server 2012 (R2). The Intel NVMe driver supports only Intel SSDs and is required for power users or server administrators who plan to use the Intel® Solid-State Drive Data Center Tool to perform administrative commands on an NVMe SSD (e.g. firmware updates). The Intel driver is intended to provide the best overall experience in terms of performance and supportability, so it is strongly recommended.
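If you want a quick look at which NVMe driver modules a Windows machine already has installed, the hedged sketch below shells out to the built-in driverquery command and searches module names for “nvme”. Microsoft’s inbox driver ships as stornvme; vendor driver module names vary, so treat this as an illustrative sanity check rather than an authoritative inventory.

```python
import subprocess

def installed_nvme_drivers():
    """List installed Windows driver modules whose names mention NVMe.

    Uses the built-in `driverquery` command with CSV output and no header.
    Windows-only; run from an ordinary command prompt or PowerShell session.
    """
    out = subprocess.run(
        ["driverquery", "/FO", "CSV", "/NH"],
        capture_output=True, text=True, check=True
    ).stdout
    hits = []
    for line in out.splitlines():
        # CSV fields: "Module Name","Display Name","Driver Type","Link Date"
        fields = [f.strip('"') for f in line.split('","')]
        if fields and "nvme" in fields[0].lower():
            hits.append(fields[0])
    return hits

if __name__ == "__main__":
    drivers = installed_nvme_drivers()
    print("NVMe-related driver modules:", drivers or "none found")
    print("Inbox Microsoft driver present:", any(d.lower() == "stornvme" for d in drivers))
```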

 

 

Download Links by Operating Systems

 

NVMe Drivers for Windows

  • Windows 7: Intel driver (intel.com); Microsoft driver (microsoft.com)
  • Windows Server 2008 R2: Intel driver (intel.com); Microsoft driver (microsoft.com)
  • Windows 8: Intel driver (intel.com); Microsoft support via upgrade to Windows 8.1
  • Windows Server 2012: Intel driver (intel.com); Microsoft support via upgrade to Windows Server 2012 R2
  • Windows 8.1: Intel driver (intel.com); Microsoft inbox driver (no separate download)
  • Windows Server 2012 R2: Intel driver (intel.com); Microsoft inbox driver (no separate download)

 

 

Other Links of Interest

 

  • Intel® Solid-State Drive Data Center Tool: a drive management tool for the Intel SSD Data Center Family of products.
  • Intel® SSD Data Center Family Overview: access to more information on Intel’s NVMe PCIe SSDs.
  • nvmexpress.org: more information on what NVMe is, why you should consider using it, and news/upcoming events.

 

 

Other blogs with NVM Express driver information, by operating system:

NVM Express: Linux driver support decoded

The Skinny on NVM Express and ESXi


Read more >

Why Choose the Mini PC? Part 2

Retail and finance industries turn to Mini PCs for high-performance, compact computing power

 


Whether it’s tucked away on a bookshelf, hidden behind a fitting room mirror or mounted on a digital display, Intel technology-based solutions featuring the Mini PC are helping to power industries as varied as retail and finance. Thanks to their energy efficiency, compact design and high-performance computing power, these tiny form factors bring full-sized PC power to the smallest of spaces. Here are some real-world examples of the endless possibilities with Mini PCs:

 

Mini PCs as Part of An Overall Technology Solution for Retail

 

One of my favorite Mini PC success stories is that of Galleria Rizzoli in Milan, Italy. Galleria Rizzoli saw the impact of digital book sales firsthand, and decided to respond with a complete digital overhaul of its operations.

 

With the help of Intel technology, Galleria Rizzoli launched a pilot program that gave their store a complete technology makeover. Mini PCs powered new in-store digital signage and seven new in-store customer kiosks. Mini PCs replaced bulky desktop towers, freeing up valuable store space. Thanks to the technology makeover, sales increased 40 percent.

 

Galleria Rizzoli is a great example of how Mini PCs can enhance the user experience to help drive sales.

 

Overall, it’s a winning solution for Intel, for Rizzoli, and for consumers who might be looking to quickly find the perfect kids’ book for a boy who likes to play with trucks. Read the full story of how Mini PCs modernized the bookstore.

 

Embedded Mini PCs Enable Next-Gen Vending Machines

 

Whether you’re grabbing a quick snack at the office or simply refueling at the gas station, vending machines today are operating on a complex system of motherboards, dispensing tanks, and printing and credit card machines. Many new OEMs are currently working on consolidating all these disparate parts into one Mini PC solution.

 

Mini PCs in the Fitting Room

 

Instead of treating the fitting room like a revolving door, imagine being able to tap a screen to request a different size or color. Some retailers are exploring the idea of using the Mini PC to power touch-screen consoles in fitting rooms to provide instant inventory access to customers while also recommending related products for purchase.

 


National Grocery Chains Power POS with Mini PCs

 

The days of the bulky cash register have given way to more compact Mini PC-powered POS systems in grocery stores as well. Not only do Mini PCs leave a smaller footprint in tight cashier stalls, they also provide the high performance computing power necessary to ring up multiple items in quick succession.

 

Hospitality Industry Welcomes Mini PCs


Look inside many hotel business centers and you’ll likely see a row of monitors with Mini PCs tucked neatly behind them. The Mini PC offers a compact solution that won’t slow guests down. And some hotels are exploring the use of Mini PCs in guest rooms attached to the TVs along with concierge-type software to enhance the in-room guest experience.

 

Banks Turn to Mini PCs for Increased Efficiency


A growing number of banks are reaching for Mini PCs, not only for their compact size, but for their energy efficiency and speed. For many clients, a visit to the local bank reveals tellers relying on Mini PCs where desktop towers once stood. Mini PCs free up valuable desk space, offer compact security, and integrate with legacy systems.

 

Day Traders Turn to Mini PCs for Quick Calculations

 

For day traders, Mini PCs featuring solid-state-drives (SSDs) are the desktop PCs of choice. While traditional hard disk drives in PCs and laptops are fairly inexpensive, they are also slow. SSDs offer greater capacity, are considered more reliable, and enable faster access to data, which is critical to an industry where seconds matter.

 

Where have you seen the Mini PC in use? Join the conversation using #IntelDesktop or view our other posts in the Desktop World Series and rediscover the desktop.

 

To read part 1, click here: Why Choose the Mini PC? Part 1

Read more >

Population Health Management Best Practices for Today and Tomorrow’s Healthcare System

By Justin Barnes and Mason Beard

 

The transition to value-based care is not an easy one. Organizations will face numerous challenges on their journey towards population health management.

 

We believe there are five key elements and best practices to consider when transitioning from volume to value-based care:  managing multiple quality programs; supporting both employed and affiliated physicians and effectively managing your network and referrals; managing organizational risk and utilization patterns; implementing care management programs; and ensuring success with value-based reimbursement.

 

When considering the best way to proactively and concurrently manage multiple quality programs, such as pay-for-performance, accountable care and/or patient-centered medical home initiatives, you must rally your organization around a wide variety of outcomes-based programs. This requires a solution that supports quality program automation. Your platform must aggregate data from disparate sources, analyze that data through the lens of a program’s specific measures, and effectively enable the actions required to make improvements. Although this is a highly technical and complicated process, when done well it enables care teams to utilize real-time dashboards to monitor progress and identify focus areas for improving outcomes.
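As a rough illustration of what “quality program automation” boils down to, here is a minimal sketch that computes one invented measure (an HbA1c testing rate for diabetic patients) over aggregated records. Real measure specifications, exclusions and data models are far more involved than this toy example.

```python
from dataclasses import dataclass
from typing import Iterable, Optional

# Toy patient record aggregated from hypothetical sources (EHR, claims, labs).
@dataclass
class Patient:
    patient_id: str
    age: int
    has_diabetes: bool
    last_a1c_days_ago: Optional[int]  # None means no HbA1c result on record

def a1c_testing_rate(patients: Iterable[Patient], window_days: int = 365) -> float:
    """Share of diabetic patients aged 18-75 with an HbA1c result inside the window."""
    denominator = [p for p in patients if p.has_diabetes and 18 <= p.age <= 75]
    numerator = [p for p in denominator
                 if p.last_a1c_days_ago is not None and p.last_a1c_days_ago <= window_days]
    return len(numerator) / len(denominator) if denominator else 0.0

# A real-time dashboard would surface this rate per clinic, care team or program.
population = [
    Patient("a", 54, True, 120),
    Patient("b", 67, True, None),
    Patient("c", 41, False, None),
]
print(f"HbA1c testing rate: {a1c_testing_rate(population):.0%}")  # 50%
```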

 

In order to provide support to both employed and affiliated physicians, and effectively manage your network and referrals, an organization must demonstrate its value to healthcare providers. Organizations that do this successfully are best positioned to engage and align with their healthcare providers. This means providing community-wide solutions for value-based care delivery. This must include technology and innovation, transformation services and support, care coordination processes, referral management, and savvy representation with employers and payers based on experience and accurate insight into population health management as well as risk.

 

To effectively manage organizational risk and utilization patterns, it is imperative to optimize episodic and longitudinal risk, which requires applying vetted algorithms to your patient populations using a high-quality data set. In order to understand the difference in risk and utilization patterns you need to aggregate and normalize data from various clinical and administrative sources, and then ensure that the data quality is as high as possible. You must own your data and processes to be successful. And importantly, do not rely entirely on data received from payers.

 

It is also important to consider the implementation of care management programs to improve individual patient outcomes. More and more organizations are creating care management initiatives for improving outcomes during transitions of care and for complicated, chronically ill patients. These initiatives can be very effective.  It is important to leverage technology, innovation and processes across the continuum of care, while encompassing both primary and specialty care providers and care teams in the workflows. Accurate insight into your risk helps define your areas of focus. A scheduled, trended outcomes report can effectively identify what’s working and where areas of improvement remain.

 

Finally, your organization can ensure success with value-based reimbursement when the transition is navigated correctly. The shift to value-based reimbursement is a critical and complicated transformation—oftentimes a reinvention—of an organization. Ultimately, it boils down to leadership, experience, technology and commitment. The key to success is working with team members, consultants and vendor partners who understand the myriad details and programs, and who thrive in a culture of communication, collaboration, execution and accountability.

 

Whether it’s PCMH or PCMH-N, PQRS or GPRO, CIN or ACO, PFP or DSRIP, TCM or CCM, HEDIS or NQF, ACGs or HCCs, care management or provider engagement, governance or network tiering, or payer or employer contracting, you can find partners with the right experience to match your organization’s unique needs. Because much is at stake, it is necessary to ensure that you partner with the very best to help navigate your transition to value-based care.

 

Justin Barnes is a corporate, board and policy advisor who regularly appears in journals, magazines and broadcast media outlets relating to national leadership of healthcare and health IT. Barnes is also host of the weekly syndicated radio show, “This Just In.”

 

Mason Beard is Co-Founder and Chief Product Officer for Wellcentive. Wellcentive delivers population health solutions that enable healthcare organizations to focus on high quality care, while maximizing revenue and transforming to support value-based models.

Read more >

Make Your Data Centre Think for Itself

Wouldn’t it be nice if your data centre could think for itself and save you some headaches? In my last post, I outlined the principle of the orchestration layer in the software-defined infrastructure (SDI), and how it’s like the brain controlling your data centre organism. Today, I’m digging into this idea in a bit more detail, looking at the neurons that pass information from the hands and feet to the brain, as it were. In data centre terms, this means the telemetry that connects your resources to the orchestration layer.

 

Even the most carefully designed orchestration layer will only be effective if it can get constant and up-to-date, contextual information about the resources it is controlling: How are they performing? How much power are they using? What are their utilisation levels and are there any bottlenecks due to latency issues? And so on and so forth. Telemetry provides this real-time visibility by tracking resources’ physical attributes and sending the intelligence back to the orchestration software.

 

Let me give you an example of this in practice. I call it the ‘noisy neighbour’ scenario. Imagine we have four virtual machines (VMs) running on one server, but one of them is hogging a lot of the resource and this is impacting the performance of the other three. Intel’s cache monitoring telemetry on the server can report this right back to the orchestration layer, which will then migrate the noisy VM to a new server, leaving the others in peace. This is real-time situational feedback informing how the whole organism works. In other words, it’s the Watch, Decide, Act, Learn cycle that I described in my previous blog post – doesn’t it all fit together nicely?
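Here is a minimal sketch of that Watch, Decide, Act pass, with invented data structures standing in for the cache-occupancy telemetry and for the orchestrator’s migration API; it is not Intel’s actual cache-monitoring interface or any real orchestrator.

```python
# Hypothetical data structures; real cache-occupancy telemetry would come from
# the platform (e.g. cache monitoring counters) and migrations from the orchestrator.
def find_noisy_neighbours(cache_share_by_vm, threshold=0.5):
    """Return VMs using more than `threshold` of a server's last-level cache."""
    return [vm for vm, share in cache_share_by_vm.items() if share > threshold]

def rebalance(telemetry, spare_host, threshold=0.5):
    """One Watch-Decide-Act pass: telemetry maps host -> {vm: cache share}."""
    planned_moves = []
    for host, cache_share_by_vm in telemetry.items():                   # Watch
        for vm in find_noisy_neighbours(cache_share_by_vm, threshold):  # Decide
            planned_moves.append((vm, host, spare_host))                # Act (hand to orchestrator)
    return planned_moves

# Four VMs on one server; vm2 is hogging the cache and starving the other three.
telemetry = {"server-01": {"vm1": 0.12, "vm2": 0.61, "vm3": 0.14, "vm4": 0.13}}
print(rebalance(telemetry, spare_host="server-02"))
# -> [('vm2', 'server-01', 'server-02')]
```

The Learn step would then tune the threshold over time as the orchestration layer sees the results of its own decisions.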

 

Lessons from History…

 

Of course, successful telemetry relies on having the right hardware to transmit it. Just think about another data centre game changer of the recent past – virtualisation. Back in the early 2000s, demand for this technology was growing fast, but the software-only solutions available put tremendous overhead demand on the hardware behind them – not an efficient way to go about it. So, we at Intel helped build in more efficiencies with solutions like Intel® Virtualization Technology, delivering more memory, addressability and huge performance gains. Today, we’re applying that same logic to remove SDI bottlenecks. Another example is Intel® Intelligent Power Node Manager, a hardware engine that works with management software to monitor and control power usage at the server, rack and row level, allowing you to set the usage policies for each.

However, we’re not just adding telemetry capabilities at the chip level and boosting hardware performance, but also investing in high-bandwidth networking and storage technologies.

 

….Applied to Today’s Data Centres

 

With technologies already in the market to enable telemetry within the SDI, there are a number of real-life use cases we can look to for examples of how it can help drive time, cost and labour out of the data centre. Here are some examples of how end-user organizations are using Intelligent Power Node Manager to do this:

 

 


Another potential use case for the technology is to reduce the usage of intelligent power strips. You could also throttle back on server performance and extend the life of your uninterruptible power supply (UPS) in the event of a power outage, helping lower the risk of service downtime – something no business can afford.
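As a sketch of that policy decision only: the cap itself would be applied through Node Manager’s management interface, which is not shown here, and the wattage figures are assumptions rather than recommendations.

```python
# Illustrative policy only; the actual cap would be requested via the power
# management interface, and the numbers below are placeholder assumptions.
NORMAL_CAP_WATTS = None      # no cap while on utility power
ON_BATTERY_CAP_WATTS = 250   # assumed per-node cap chosen to stretch UPS runtime

def choose_power_cap(on_ups_battery: bool, current_draw_watts: float):
    """Return the per-node power cap a policy engine should request."""
    if not on_ups_battery:
        return NORMAL_CAP_WATTS
    # Throttling below the current draw is what buys extra minutes of UPS runtime.
    return min(ON_BATTERY_CAP_WATTS, int(current_draw_watts))

print(choose_power_cap(on_ups_battery=False, current_draw_watts=310))  # None
print(choose_power_cap(on_ups_battery=True, current_draw_watts=310))   # 250
```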

 

So, once you’ve got your data centre functioning like a highly evolved neural network, what’s next? Well, as data centre technologies continue to develop, the extent to which you can build agility into your infrastructure is growing all the time. In my next blog, I’m going to look into the future a bit and explore how silicon photonics can help you create composable architectures that will enable you to build and reconfigure resources on the fly.

 

To pass the time until then, I’d love to hear from any of you that have already started using telemetry to inform your orchestration layer. What impact has it had for you, and can you share any tips for those just starting out?

 

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance

 

You can find my previous blogs on data centres here:

Is Your Data Centre Ready for the IoT Age?

Have You Got Your Blueprints Ready?

Are You Smarter than a Data Centre?

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

 

*Other names and brands may be claimed as the property of others.

Read more >

Server Refresh Can Reduce Total Cost of Ownership

More bang for your buck. Essentially that is the driving force behind my team in Intel IT. Our IT department is on a tight budget, just like most enterprise IT departments. Therefore, return on investment and total cost of ownership are important considerations for deciding when to upgrade the servers that run our silicon design workloads. As a principal engineer in infrastructure engineering, I direct the comparison of the various models of each new generation of Intel® CPU to those of previous generations of processors. (We may re-evaluate the TCO of particular models between generations, if price points significantly change.) We evaluate all the Intel® Xeon® processor families – Intel Xeon processor E3 family, Intel Xeon processor E5 family, and Intel Xeon processor E7 family – each of which has a different niche in Intel’s silicon design efforts.

 

We use industry benchmarks and actual electronic design automation (EDA) workloads in our evaluations, which go beyond performance to address TCO – we include throughput, form factor (density), energy efficiency, cost, software licensing costs, and other factors. In many cases over the years, one of the models might turn out better in terms of price/watt, but performance is slower, or the software licensing fees are triple those for a different model.
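A hedged sketch of how such candidates might be ranked on cost per unit of throughput follows; the prices, power and licensing figures are placeholders, not Intel IT’s internal data, and only the 1.18x relative throughput echoes a figure quoted later in this post.

```python
from dataclasses import dataclass

# Placeholder inputs only; the real evaluation uses measured EDA throughput,
# energy, density and licensing figures that are not shown here.
@dataclass
class ServerOption:
    name: str
    relative_throughput: float   # jobs per hour vs. a baseline system = 1.0
    hardware_cost: float         # purchase price
    annual_power_cost: float     # energy + cooling
    annual_license_cost: float   # per-server EDA software licensing
    years: int = 4

    def tco(self) -> float:
        return self.hardware_cost + self.years * (self.annual_power_cost
                                                  + self.annual_license_cost)

    def cost_per_throughput(self) -> float:
        return self.tco() / self.relative_throughput

options = [
    ServerOption("previous generation", 1.00, 20000, 3000, 12000),
    ServerOption("current generation",  1.18, 24000, 2800, 12000),
]
for o in sorted(options, key=ServerOption.cost_per_throughput):
    print(f"{o.name}: TCO ${o.tco():,.0f}, ${o.cost_per_throughput():,.0f} per unit of throughput")
```

In practice the ranking often flips once licensing dominates, which is exactly why price/watt alone can be misleading.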

 

In silicon design, back-end design jobs are time critical and require servers with considerable processing power, large memory capacity, and memory bandwidth. For these types of jobs, the bottleneck has historically been memory, not CPU cycles; with more memory, we can run more jobs in parallel. The Intel Xeon processor E7-8800 v3 product family offers new features that can increase EDA throughput, including up to 20% more cores than the previous generation and DDR4 memory support for higher memory bandwidth. A server based on the Intel Xeon processor E7-8800 v3 can take either DDR3 (thereby protecting existing investment) or DDR4 DIMMs – and supports memory capacity up to 6 TB per 4-socket server (with 64 GB DIMMs) to deliver fast turnaround time for large silicon design jobs.

 

We recently completed an evaluation of the Intel Xeon processor E7-8800 v3 product family, as documented in our recent brief. According to our test results, the Intel Xeon processor E7 v3-based server delivers excellent gains in performance and supports larger models, faster iterations, and greater throughput than was possible with the previous generation of the processor. These improvements can accelerate long-running silicon design jobs and shorten the time required to bring new silicon design to market. These improvements can also reduce data center footprint and help control operational and software licensing costs by achieving greater throughput using fewer systems than were necessary with previous generations of processors.

 

Our tests used a large multi-threaded EDA application operating on current Intel® silicon design data sets. The result shows an Intel Xeon processor E7-8890 v3-based server completed a complex silicon design workload 1.18x faster than the previous-generation Intel Xeon processor E7-4890 v2-based server and 17.04x faster than a server based on Intel® Xeon® processor 7100 series (Intel Xeon processor 7140M).

 

The Intel Xeon processor E7-8800 v3 product family also supports the Intel® Advanced Vector Extensions 2 (Intel® AVX2) instruction set. Benefits of Intel AVX2 include doubling the number of FLOPS (floating-point operations per second) per clock cycle, 256-bit integer instructions, floating-point fused multiply-add instructions, and gather operations. While our silicon design jobs do not currently use AVX2 – mostly because the design cycles can take over a year to complete and during that time we cannot modify the plan of record (POR) EDA tools and infrastructure for those servers – we anticipate that Intel AVX2 can provide a performance boost for many technical applications.
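To see where that “doubling” comes from, here is a back-of-the-envelope calculation per SIMD execution unit; the core count and clock speed are hypothetical, not the specification of any particular processor model.

```python
# Rough peak-FLOPS arithmetic per SIMD execution unit. Core count and clock
# are hypothetical; real parts differ in FMA ports and turbo behaviour.
def peak_gflops_per_unit(cores, ghz, simd_bits=256, element_bits=64, fma=True):
    lanes = simd_bits // element_bits              # 4 double-precision lanes in a 256-bit register
    flops_per_cycle = lanes * (2 if fma else 1)    # an FMA counts as multiply + add
    return cores * ghz * flops_per_cycle

baseline = peak_gflops_per_unit(18, 2.5, fma=False)  # multiply or add per cycle
with_fma = peak_gflops_per_unit(18, 2.5, fma=True)   # fused multiply-add per cycle
print(baseline, with_fma, with_fma / baseline)        # ratio = 2.0
```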

 

I’d like to hear from other IT professionals – are you considering refreshing? If you’ve already refreshed, can you share your observed benefits and concrete ROI? What best-known methods have you developed and what are some remaining pain points? If you have any questions, I’d be happy to answer them and pass on our own best practices in deploying these servers. Please share your thoughts and insights with me – and your other IT colleagues – by leaving a comment below. Join our conversation here in the IT Peer Network.

Read more >

Looking to the Future: Smart Healthcare with Big Data – HLRS


The NHS is under unbelievable pressure to do more with less; according to the latest research from The King’s Fund, the NHS budget has increased by an average of just 0.8 per cent per year in real terms since 2010.

 

Clearly, there is a need for smart ways to improve healthcare around the world. One team of researchers at The University of Stuttgart is using cutting-edge technology and big data to simulate the longest and strongest bone in the human body — the femur — to improve implants.

 


Medical Marvels

 

For three years, this research team has been running simulations of the types of realistic forces that the thigh bone undergoes on a daily basis for different body types and at a variety of activity levels to try and inform doctors what is needed to create much better implants for patients with severe fractures or hip deterioration. Successful implants can make a significant impact on the wearer’s quality of life, so the lighter and more durable they are the better.

 

Femoral fractures are pretty common, with around 70,000 hip fractures taking place in the UK each year, at an estimated cost of up to £2 billion. So identifying better materials for bone implants that allow longer wear and better mobility would address a real need.

 

However, trying to achieve simulation results for a fractured bone requires a huge amount of data. Bone is not a compact structure; it is like a calcified sponge. Such a non-homogenous, non-uniform material behaves in different ways under different stresses for different people. This means that the team must collect hundreds of thousands of infinitesimally small scans from genuine bone samples to learn how different femurs are structured. The incredible detail and high resolution provided by high-performance machines powered by the Intel® Xeon® Processor E5-2680 v3 enable them to run full femur simulations with this exact material data.

 

Such a level of intricacy cannot be handled on a normal cluster. In the University of Stuttgart research team’s experience, one tiny part of the femoral head — a cube of only 0.6 mm² — generates approximately 90,000 samples, and each of these samples requires at least six finite-element simulations to get the field of anisotropic material data needed to cover the full femoral head. To carry out this large number of simulations they definitely need the supercomputer! To do this in a commercial way you’d need thousands of patients, but with one supercomputer this team can predict average bone remodelling and develop reliable material models for accurate implant simulations. This is real innovation.
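The arithmetic from that paragraph alone makes the point:

```python
# Using only the figures quoted above for one 0.6 mm² region of the femoral head.
samples_per_region = 90_000
fe_runs_per_sample = 6
fe_runs_per_region = samples_per_region * fe_runs_per_sample
print(f"{fe_runs_per_region:,} finite-element runs for a single tiny region")
# -> 540,000 runs before the rest of the femoral head is even covered
```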

 

High-Performance Computing for Healthcare

 

The High Performance Computing Center in Stuttgart (HLRS) is the institution that makes all this possible. One of just three large ‘tier 0’ supercomputer sites in Germany, it recently upgraded to the Intel Xeon Processor E5-2680 v3, which according to internal tests delivers four times the performance of its previous supercomputer.ii This is great for Stuttgart University, as its computing center now has four times the storage space.ii Research like this requires intensive data processing and accurate analysis, so significant computing capacity is crucial.

 


This new system enables breakthroughs that would be otherwise impossible. For more on HLRS’s cutting edge supercomputing offering, click here.

 

I’d love to get your thoughts on the healthcare innovation enabled by the Intel HPC technologies being used by HLRS and its researchers, so please leave a comment below — I’m happy to answer questions you may have too.

 

To continue this conversation on Twitter, please follow me at @Jane_Cooltech.

 

Join the debate in the Intel Health and Life Sciences Community, and check out Thomas Kellerer’s HLRS blog!

 

For other blogs on how Intel technology is used in the healthcare space, check out these blogs.

 

i ’Storage and Indexing of Fine Grain, Large Scale Data Sets’ by Ralf Schneider, in Michael M. Resch et. al, Sustained Simulation Performance 2013, Springer International Publishing, 2013, S. 89–104, isbn: 978-3-319-01438-8. doi: 10.1007/978-3-319-01439-5_7

 

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance

 

*Other names and brands may be claimed as the property of others.

Photo Credit: copyright Boris Lehner for HLRS

Read more >

Evaluating an OS update? Take a look at eDrive….

As users of Windows 7 consider moving to Windows 8.1 or Windows 10, a new BitLocker feature is available that should be considered.  Nicknamed “eDrive,” “Encrypted Hard Drive,” or “Encrypted Drive,” the feature provides the ability for BitLocker to take advantage of the hardware encryption capabilities of compatible drives, instead of using software encryption.   Hardware encryption provides benefits over software encryption in that encryption activation is near-immediate, and real-time performance isn’t impacted.

 

eDrive is Microsoft’s implementation of managed hardware-based encryption built on the TCG Opal framework and IEEE-1667 protocols.  It is implemented a bit differently from how third-party Independent Software Vendors (ISVs) implement and manage Opal-compatible drives.  It is important to understand the differences as you evaluate your data protection strategy and solution.

 

eDrive information on the internet is relatively sparse currently.  Here are a couple of resources from Intel that will help get you started:

 

And here are a couple of tools from Intel that will be useful when working with the Intel® SSD Pro 2500 Series:

 

If you’re going to do research on the internet, I’ve found that “Opal IEEE 1667 BitLocker” are good search terms to get you started.

 

A special note to those who want to evaluate eDrive with the Intel® SSD Pro 2500 Series: the Intel-provided tool to enable eDrive support only works on “channel SKUs.”  Intel provides SSDs through the retail market (channel) and directly to OEMs (the maker/seller of your laptop).  Support for eDrive on OEM SKUs must be provided by the OEM.  Channel SKUs can be verified by looking at the firmware version on the SSD label, or with the Intel® SSD Toolbox or Intel® SSD Pro Administrator Tool.  Firmware in the format of TG## (TG20, TG21, TG26, etc…)  confirms a channel SKU, and the ability to enable eDrive support on the Intel® SSD Pro 2500 Series.
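As a small illustrative check of that naming convention (how you read the firmware string, whether from the drive label or the Intel tools mentioned above, is up to you), the sketch below simply tests whether a reported firmware revision matches the TG## channel format:

```python
import re

# The post above says channel-SKU firmware on the Intel SSD Pro 2500 Series is
# reported as "TG" followed by two digits (TG20, TG21, TG26, ...).
CHANNEL_FW = re.compile(r"^TG\d{2}$")

def is_channel_sku(firmware: str) -> bool:
    """True if the firmware revision string matches the TG## channel format."""
    return bool(CHANNEL_FW.match(firmware.strip().upper()))

for fw in ["TG26", "tg20", "ABC123"]:
    print(fw, "->", "channel SKU (eDrive enable possible)" if is_channel_sku(fw)
          else "not a channel SKU; check with your OEM")
```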

 

Take a look at eDrive, or managed hardware-based encryption solutions from ISVs such as McAfee, WinMagic, Wave, and others.

 

As always, I look forward to your input on topics you would like covered.

 

Thanks for your time!

 

Doug
intel.com/ssd

Read more >

1, 2, 3…It’s Time to Migrate from Windows Server 2003

We’ve told you once. We’ve told you twice. Heck, we’ve told you four times: if you’re still running Windows Server 2003, you need to take action as soon as possible because the EOS is fast approaching on July 14th, 2015.

 

Need a refresher on what lies ahead? Well, good news, we’ve put together all the information you need to stay safe.

 

The upcoming Windows Server 2003 EOS means Microsoft will not be issuing any patches or security updates after the cut off date. While hackers are off rejoicing, this raises major security issues for those still running Windows Server 2003. And that appears to be quite a few of you.

 

According to Softchoice, a company specializing in technology procurement for organizations, 21 percent of all servers are still running Windows Server 2003. More worrisome is that 97 percent of all data centers are still running some form of Windows Server 2003 within their facilities.

 

But migrating from Windows Server 2003 and ending up with proper security doesn’t have to be a pain. In our previous posts in this series, we’ve highlighted three different options for migration and how to secure the target environment. Let’s recap them here:

 

Option 1: Upgrade to Windows Server 2012

 

Because Windows Server 2008 will be losing support in January 2016, it’s a good idea for organizations to directly upgrade to Windows Server 2012 R2. This will require 64-bit servers and a refreshed application stack for supported configuration.

 

Your organization might well be looking to invest in a hybrid cloud infrastructure as part of this upgrade. Depending on what a server is used for, you’ll need optimized security solutions to secure your private virtual machines.

 

Intel Security has you covered. No matter what you’re running, you should at least employ either McAfee Server Security Suite Essentials or McAfee Server Security Suite Advanced.

 

If you’re running an email, SharePoint, or database server, consider McAfee Security for Email Servers, McAfee Security for Microsoft SharePoint or McAfee Data Center Security Suite for Databases, depending on your needs.

 

Option 2: Secure the public cloud

 

As the cloud ecosystem matures, the public cloud is becoming a reasonable alternative for many infrastructure needs. However, one issue remains: while public cloud solutions secure the underlying infrastructure, each company is responsible for securing their virtual servers from the Guest OS and up. Meaning, you’ll need a security solution built for the cloud.

 

Luckily, we’ve a solution that will help you break through the haze and gain complete control over workloads running within an Infrastructure- as-a-Service environment: McAfee Public Cloud Server Security Suite.

 

McAfee Public Cloud Server Security Suite gives you comprehensive cloud security, broad visibility into server instances in the public cloud and dynamic management of cloud environments.

 

Option 3: Protecting the servers you can’t migrate right now

 

For the 1.6 million of you that are behind schedule on Windows Server upgrades and won’t be able to migrate by the EOS date, you have a tough challenge ahead. Hackers know full well that Microsoft won’t be patching any newly discovered security issues, and as such, your servers might be a target.

 

But it’s not all doom and gloom – Intel Security can tide you over and keep you protected until you’ve finished migrating.

 

With McAfee Application Control for Servers, you can command a centrally managed dynamic whitelisting solution. This solution will help you to protect your unsupported servers from malware and advanced threats, by blocking unauthorized applications, automatically categorizing threats and lowering manual input through a dynamic trust model.

 

Make your migration easy and get started today.

 

Be sure to follow along with @IntelSec_Biz on Twitter for real-time security updates. Stay safe out there!

 

 

Windows Server 2003 EOS

This blog post is episode #5 of 5 in the Tech Innovation Blog Series

View all episodes  >

Read more >

Stuck on Windows Server 2003? Migration Option #3

Pop quiz: what’s happening on July 14th, 2015?

 

If you’ve been following along with this series, you’ll know it’s the impending End of Support (EOS) for Windows Server 2003.

 

So far, we’ve covered two of the three migration options available to those still running Windows Server 2003: migrating to Windows Server 2012 or moving to the public cloud. Since migrating to a new server environment takes 200 days on average, and 300 for migrating applications, making the move by the mid-July end-of-life date may not be realistic.

 

This brings us to migration option #3: implementing additional protection for servers that cannot be immediately migrated by the EOS date.

 

Since Windows 2003 servers will no longer be compliant and will be vulnerable to new malware created after mid-July, you’ll need to take additional steps to keep them secure. Especially since hackers are patiently waiting for July 15th, knowing that Microsoft will no longer issue any security updates to these servers.

 

What’s even more concerning is that there are 23.8 million instances of Windows Server 2003 still running, making this a huge and potentially very lucrative target for hackers.

 

Fortunately, Intel Security can provide the security you need to keep your Windows Server 2003 instances safe from hackers’ targeted attacks.

 

If you have workloads that you cannot migrate off Windows Server 2003 by mid-July, be sure to install whitelisting protection to stay secure. McAfee Application Control for Servers is a centrally managed dynamic whitelisting solution which protects from malware and advanced threats. In more details, it:

  • Blocks unauthorized applications, protecting unsupported servers from malware and advanced persistent threats
  • Helps lower costs by eliminating the manual support requirements associated with other whitelisting technologies
  • Does all of this without requiring signature updates or labor-intensive lists to manage (see the conceptual sketch below)
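As a conceptual illustration of how whitelist enforcement with a dynamic trust model works, here is a minimal sketch. The hash values, publisher list, and updater names are hypothetical, and this is not McAfee Application Control’s actual implementation; it only shows the general default-deny idea.

```python
# Conceptual sketch of whitelist-based execution control -- illustrative only,
# not McAfee Application Control's implementation.
import hashlib
from pathlib import Path

# Hypothetical trust sources for a "dynamic trust model": fingerprints of
# binaries already approved, plus publishers and updater processes the
# administrator has chosen to trust.
APPROVED_HASHES = {"3f2a...example...", "9c1b...example..."}  # placeholder values
TRUSTED_PUBLISHERS = {"Example Corp"}
TRUSTED_UPDATERS = {"trusted_patch_agent.exe"}

def fingerprint(path: Path) -> str:
    """Hash the executable so it can be matched against the whitelist."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def may_execute(path: Path, publisher: str, installed_by: str) -> bool:
    """Allow execution only if the binary is already approved or was introduced
    through a trusted channel; block everything else by default."""
    digest = fingerprint(path)
    if digest in APPROVED_HASHES:
        return True
    if publisher in TRUSTED_PUBLISHERS or installed_by in TRUSTED_UPDATERS:
        APPROVED_HASHES.add(digest)  # dynamically extend the whitelist
        return True
    return False  # unauthorized application: blocked, no signature database needed
```

The point of the default-deny model is that new malware is blocked simply because it is not on the whitelist, which is why no stream of signature updates is required.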

 

Next week, we’ll wrap up this series with a summary of the migration options for Windows Server 2003 and highlight how to properly secure each path.

 

Want to learn more about the migration options available to those running Windows Server 2003? Register for an educational webcast, “The 1-2-3 Security Approach for Windows Server 2003 EOS,” by visiting this page.

 

In the meantime, follow @IntelSec_Biz on Twitter for real-time security updates. We’ll see you back here next week!

 

 

 

 

Windows Server 2003 EOS

This blog post is episode #4 of 5 in the Tech Innovation Blog Series



To the Public Cloud – and Beyond! Migrating from Windows Server 2003

As you may remember from the last blog in this series, End of Support (EOS) for Windows Server 2003 goes into effect on July 14th, 2015. The clock is winding down, and we’re highlighting the migration options available to those still running Windows Server 2003 in an effort to beat the buzzer.

 

Last week, we highlighted the first of three migration options available: upgrading to a newer version of Windows Server. Now, we will discuss the second migration path: moving to the public cloud.

 

Moving workloads to the public cloud brings real advantages: it can help reduce costs, provide greater agility, and enhance scalability. You will most likely realize these benefits as you move Windows Server 2003-based workloads into a public cloud environment.

 

However, a top concern when workloads are moved into a public cloud environment is security. Because the protection of a traditional network perimeter is not always available, a very common question is: what security controls are in place for workloads that have moved into public cloud environments?

 

To answer this question, let’s take a look at the importance of security for your public cloud deployments starting with Infrastructure as a Service (IaaS).

 

Many enterprises mistakenly assume that public cloud providers for IaaS will protect their operating systems, applications, and data running on the server instances. While public cloud providers will secure the underlying infrastructure – it’s up to you to secure the virtual servers that run in that infrastructure. In other words, security is a shared responsibility in the cloud.

 

Luckily, we’ve come up with a solution that will help you break through the haze and gain complete control over workloads running within an IaaS environment: McAfee Public Cloud Server Security Suite.

 

This suite of security solutions is ideal for the public cloud and uses hourly pricing, so you only pay for security for the hours that your virtual servers are running in the cloud (a simple cost sketch follows the list below). The protection provided in McAfee Public Cloud Server Security Suite includes:

  • Comprehensive cloud security to extend and manage security policies for virtual servers in Microsoft Azure, Amazon AWS, and other public clouds such as those based on OpenStack.
  • Broader visibility into server instances in the public cloud via a powerful combination of blacklisting and whitelisting technologies.
  • Dynamic management of cloud environments, in concert with on-premises endpoints and physical servers, via McAfee ePolicy Orchestrator (ePO) software.
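To make the hourly pricing model concrete, here is a tiny sketch of what usage-based security costs look like; the rate and the instance hours are made up for illustration and are not McAfee list prices.

```python
# Illustrative only: usage-based, hourly security pricing.
HOURLY_RATE_PER_INSTANCE = 0.02  # assumed cost per protected server-hour, in USD

def monthly_security_cost(instance_hours: list[int]) -> float:
    """Pay only for the hours each virtual server actually ran in the cloud."""
    return sum(instance_hours) * HOURLY_RATE_PER_INSTANCE

# Three instances running 720, 200, and 48 hours in a month:
print(monthly_security_cost([720, 200, 48]))  # 968 protected hours at the assumed rate -> about $19.36
```

An instance that runs only during business hours therefore costs a fraction of one that runs around the clock, which is the point of paying by the hour rather than per server.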

 

What about moving Windows Server 2003 applications such as email and collaboration into a Software as a Service (SaaS) public cloud environment?

 

If you’re planning on migrating to an environment such as Microsoft Office 365, McAfee Email Protection has you covered. This solution provides enterprise-grade security against targeted phishing threats, includes integrated data loss prevention technology, and protects Office 365 Exchange Online, Microsoft’s public cloud email solution.

 

Want to learn more about the security questions you need to ask about protecting email servers? Find all the information you need here.

 

If moving to the public cloud doesn’t suit you or your business needs, fear not! Next week, we’ll take a deep dive into the third option available to those running Windows Server 2003: implementing additional protection for servers that cannot immediately be migrated by the EOS date.

 

In the meantime, follow @IntelSec_Biz on Twitter for the latest security updates. We’ll see you next week!

 

 

 

Windows Server 2003 EOS

This blog post is episode #3 of 5 in the Tech Innovation Blog Series



Migrating from Windows Server 2003: Option #1

If you read last week’s blog, you are well aware that when Windows Server 2003 goes out of support on July 14th, 2015, it has the potential to result in the biggest vulnerability of the year. This End of Support (EOS) will effectively leave customers vulnerable and at risk of no longer being compliant.

 

Luckily, there are three options available to those still running Windows Server 2003. In this blog, we will highlight the first migration path: upgrading to a newer version of Windows Server.

 

You are currently faced with two options. The first, migrating to Windows Server 2008, is one we do not recommend: it will be taken off support in January 2016, setting you right back to square one. Instead, consider migrating to Windows Server 2012 R2, the newest available version.

 

Seems like a piece of cake, right? Not exactly – and here’s why.

 

As with any big change, there are a few challenges associated with this migration path. Here’s what lies ahead:

  • You will most likely need new hardware to install Windows Server 2012 R2, and these need to be 64-bit servers. You will also need to refresh your application stack so that you are running application versions supported on Windows Server 2012 R2.
  • At the same time, you will likely want to run your applications in virtual machines – in other words, in the private cloud and probably also in the public cloud.

Herein lies the security challenge: how can you best protect this hybrid-computing environment?

 

Intel Security offers a variety of server protection solutions to help you secure all types of servers, physical or virtual, including virtual servers in the cloud. No matter what you run on these servers, we strongly recommend one of the following options:

If your server is a specialized server such as an email, SharePoint, or database server, you’ll need further protection in addition to the above:

 

In next week’s Windows Server 2003 EOS blog, we will discuss the second migration path available to you: moving workloads to the public cloud instead of to Windows Server 2012. Trust us, it’s not as scary as it sounds!

 

Want to learn more about the migration options available to those running Windows Server 2003? Register for an educational webcast, “The 1-2-3 Security Approach for Windows Server 2003 EOS,” by visiting this page.

 

In the meantime, follow @IntelSec_Biz on Twitter and like us on Facebook for real-time security updates. See you back here next week!

 

 

Windows Server 2003 EOS

This blog post is episode #2 of 5 in the Tech Innovation Blog Series



Bye, Bye Windows Server 2003 – Hello, Hackers

Still running Windows Server 2003?

You’ll want to read this.


As you may or may not have heard, the end of life for Windows Server 2003 is now set for July 14th, 2015 – meaning after that date, it will no longer be supported by Microsoft.

 

So, what does this mean for you? It means that, along with other support, Microsoft will no longer be providing any security updates to Windows 2003 servers, leaving them exposed to malware and open for attack.

 

Security pros and IT specialists around the globe have already begun preparing, as there are still 23.8 million Windows 2003 servers out there. Some are even going so far as to call the end of life for this product the “biggest security threat of 2015.”

 

You can bet that most hackers have this date marked on their calendars and are eagerly awaiting its arrival.

 

Lucky for you, with this blog (the first in a series of five intended to outline your options and solutions), we’ll help you ensure you’re adequately protected when the date rolls around. Right now, there are three paths you can go down in order to prepare:

  1. Upgrade to a newer version of Windows Server, such as Windows Server 2012 R2
  2. Migrate Windows 2003 workloads to the public cloud
  3. Stay on Windows 2003 for now

However, each of these three paths comes with its own associated challenges, namely:

  • How to secure a hybrid compute environment and protect virtualized servers.
  • How to secure virtual servers in the public cloud.
  • How to secure Windows 2003 servers after July 14th, 2015.

 

Ultimately, the “do nothing” model is not an option as customers run the risk of no longer being compliant and leaving themselves vulnerable to malware.

 

At Intel Security, we’re preparing materials to help you transition as seamlessly as possible and overcome these challenges by securing your server environment, no matter which of the three paths you decide to take.

 

Stay tuned for our next blog where we will outline in more depth each of the three migration options available to you and how you can overcome the challenges inherent with each. And, be sure to follow @IntelSec_Biz on Twitter for the latest security updates.

 

 

Windows Server 2003 EOS

This blog post is episode #1 of 5 in the Tech Innovation Blog Series



How End-To-End Network Transformation Fuels the Digital Service Economy

To see the challenge facing the network infrastructure industry, I need look no further than the Apple Watch I wear on my wrist.

 

That new device is a symbol of the change that is challenging the telecommunications industry. This wearable technology is an example of the leading edge of the next phase of the digital service economy, where information technology becomes the basis of innovation, services and new business models.

 

I recently had the opportunity to share a view on the end-to-end network transformation needed to support the digital service economy with an audience of communications and cloud service providers during my keynote speech at the Big Telecom Event.

 

These service providers are seeking to transform their network infrastructure to meet customer demand for information that can help grow their businesses, enhance productivity and enrich their day-to-day lives.  Compelling new services are being innovated at cloud pace, and the underlying network infrastructure must be agile, scalable, and dynamic to support these new services.

 

The operator’s challenge is that the current network architecture is anchored in purpose-built, fixed-function equipment that cannot be used for anything other than the function for which it was originally designed. The dynamic nature of the telecommunications industry means that the infrastructure must be more responsive to changing market needs. The need to keep building out network capacity to meet customer requirements in a way that is more flexible and cost-effective is driving service providers and the industry to transform these networks to a different architectural paradigm, one anchored in innovation from the data center industry.

 

Network operators have worked with Intel to find ways to leverage server, cloud, and virtualization technologies to build networks that are easier and less costly to deploy and operate, while giving consumers and business users a great experience.

 

Transformation starts with reimagining the network

 

This transformation starts with reimagining what the network can do and how it can be redesigned for new devices and applications, even including those that have not yet been invented. Intel is working with the industry to reimagine the network using Network Functions Virtualization (NFV) and Software Defined Networking (SDN).

 

For example, the evolution of the wireless access network from macro base stations to a heterogeneous network, or “HetNet,” using a mix of macro cell and small cell base stations, together with the addition of mobile edge computing (MEC), will dramatically improve network efficiency by providing more efficient use of spectrum and new radio-aware service capabilities. This transformation will intelligently couple mobile devices to the access network, enabling greater innovation and an improved ability to scale capacity and improve coverage.

 

In wireline access, virtual customer premises equipment moves service provisioning intelligence from the home or business to the provider edge to accelerate delivery of new services and to optimize operating expenses. And NFV and SDN are also being deployed in the wireless core and in cloud and enterprise data center networks.

 

This network transformation also makes possible new Internet of Things (IoT) services and revenue streams. As virtualized compute capabilities are added to every network node, operators have the opportunity to add sensing points throughout the network and tiered analytics to dynamically meet the needs of any IoT application.

 

One example of IoT innovation is safety cameras in “smart city” applications. With IoT, cities can deploy surveillance video cameras to collect video and process it at the edge to detect patterns that would indicate a security issue. When an issue occurs, the edge node can signal the camera to switch to high-resolution mode, flag an alert and divert the video stream to a central command center in the cloud. With smart cities, safety personnel efficiency and citizen safety are improved, all enabled by an efficient underlying network infrastructure.
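As a rough sketch of that edge-analytics flow, the code below shows the per-frame decision an edge node might make. The function names, anomaly threshold, and command-center address are hypothetical illustrations, not a specific Intel or smart-city deployment.

```python
# Hypothetical sketch of edge analytics for a smart-city safety camera.
from dataclasses import dataclass

@dataclass
class Camera:
    camera_id: str
    resolution: str = "low"  # low resolution by default to conserve bandwidth

def looks_suspicious(frame_metadata: dict) -> bool:
    """Stand-in for the pattern detection an edge node would run locally,
    e.g. loitering, abandoned objects, or crowd-density anomalies."""
    return frame_metadata.get("anomaly_score", 0.0) > 0.8  # assumed threshold

def raise_alert(camera_id: str) -> None:
    print(f"ALERT: possible incident on camera {camera_id}")

def divert_stream(camera_id: str, destination: str) -> None:
    print(f"Diverting {camera_id} video stream to {destination}")

def handle_frame(camera: Camera, frame_metadata: dict) -> None:
    """Process video at the edge; escalate to the cloud only when needed."""
    if looks_suspicious(frame_metadata):
        camera.resolution = "high"      # switch the camera to high-resolution mode
        raise_alert(camera.camera_id)   # flag an alert for operators
        divert_stream(camera.camera_id, "command-center.example.net")

handle_frame(Camera("cam-017"), {"anomaly_score": 0.93})
```

Keeping the normal case entirely at the edge is what spares the network from hauling every low-value frame back to the data center.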

 

NFV and SDN deployment has begun in earnest, but broad-scale deployment will require even more innovation: standardized, commercial-grade solutions must be available; next-generation networks must be architected; and business processes must be transformed to consume this new paradigm. Intel is investing now to lead this transformation and is driving a four-pronged strategy anchored in technology leadership: support of industry consortia, delivery of open reference designs, collaboration on trials and deployments, and building an industry ecosystem.

 

The foundation of this strategy is Intel’s role as a technology innovator. Intel’s continued investment and development in manufacturing leadership, processor architecture, Ethernet controllers and switches, and optimized open source software provide a foundation for our network transformation strategy.

 

Open standards are critical to robust solutions, and Intel is engaged with all of the key consortia in this industry, including the European Telecommunications Standards Institute (ETSI), Open vSwitch, OpenDaylight, OpenStack, and others. Most recently, we dedicated significant engineering and lab investments to the Open Platform for NFV’s (OPNFV) release of OPNFV Arno, the first carrier-grade, open source NFV platform.

 

The next step for these open source solutions is to be integrated with operating systems and other software into open reference software that provides an on-ramp for developers into NFV and SDN. That’s what Intel is doing with our Open Network Platform (ONP), a reference architecture that enables software developers to lower their development cost and shorten their time to market. The innovations in ONP form the basis of many of our contributions back to the open source community. In the future, ONP will be based on OPNFV releases, enhanced by additional optimizations and proofs of concept in which we continue to invest.

 

We are also working to bring real-world solutions to market, collaborating on trials and deployments, and investing deeply in building an ecosystem that brings companies together to create interoperable solutions.

 

As just one example, my team is working with Cisco Systems on a service chaining proof of concept that demonstrates how Intel Ethernet 40GbE and 100GbE controllers, working with a Cisco UCS network, can provide service chaining using the network service header (NSH). This is one of dozens of PoCs that Intel has participated in this year alone, which collectively demonstrate the early momentum of NFV and SDN and their potential to transform service delivery.
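For readers unfamiliar with the network service header, the sketch below illustrates the core idea from RFC 8300: a 24-bit Service Path Identifier (SPI) selects an ordered chain of service functions, and an 8-bit Service Index (SI) is decremented at each hop to track progress. The chain contents are hypothetical, the index handling is simplified (real classifiers typically start the SI at 255), and this is not the Cisco and Intel proof of concept itself.

```python
# Simplified, illustrative model of NSH-based service chaining (see RFC 8300).
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional

@dataclass
class NSH:
    service_path_id: int  # 24-bit SPI: which ordered chain of functions to apply
    service_index: int    # 8-bit SI: how many functions remain on the path

# Hypothetical service function chain keyed by SPI.
SERVICE_PATHS = {42: ["firewall", "intrusion-detection", "load-balancer"]}

def next_hop(header: NSH) -> Optional[str]:
    """Return the next service function for the packet, or None when the chain
    is complete, decrementing the service index as a forwarder would."""
    chain = SERVICE_PATHS[header.service_path_id]
    position = len(chain) - header.service_index
    if header.service_index == 0 or position >= len(chain):
        return None
    header.service_index -= 1
    return chain[position]

# Walk a packet through SPI 42, starting the index at the chain length for simplicity.
hdr = NSH(service_path_id=42, service_index=3)
hop = next_hop(hdr)
while hop is not None:
    print(f"Forwarding to {hop} (remaining service index: {hdr.service_index})")
    hop = next_hop(hdr)
```

Because the chain is expressed in the packet header rather than in the physical topology, operators can insert, remove, or reorder functions without re-cabling the network, which is the kind of agility service chaining is meant to provide.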

 

A lot of our involvement in PoCs and trials comes from working with our ecosystem partners in the Intel Network Builders. I was very pleased to have had the opportunity to share the stage with Martin Bäckström and announce that Ericsson has joined Network Builders. Ericsson is an industry leader and innovator, and their presence in Network Builders demonstrates a commitment to a shared vision of end-to-end network transformation.

 

The companies in this ecosystem are passionate software and hardware vendors, as well as end users, that work together to develop new solutions. More than 150 Network Builders members are taking advantage of this program and driving forward with a shared vision to accelerate the availability of commercial-grade solutions.

 

NFV and SDN are being deployed now – but that is just the start of the end-to-end network transformation. There is still a great deal of technology and business innovation required to drive NFV and SDN to scale, and Intel will continue its commitment to driving this transformation.




I invited the BTE audience – and I invite you – to join us in this collaboration to create tomorrow’s user experiences and to lay the foundation for the next phase of the digital service economy.


10 Questions to Develop Your Mobile BI Strategy

In my post “Mobile BI” Doesn’t Mean “Mobile-Enabled Reports,” I articulated the importance of developing a mobile BI strategy. If designed, implemented, and executed effectively, mobile BI will not only complement the existing business intelligence (BI) framework, but it will also enable organizations to drive growth and profitability.

 

For my next ten posts, I want to chart a course that will highlight the key questions you need to ask before embarking on a mobile BI strategy. This is the critical first step in validating mobile BI readiness for any organization, whether it’s a Fortune 500 company, a small-to-medium enterprise, or a small team within a large enterprise. The size or the scope of the mobile BI engagement doesn’t negate the need for, or importance of, the pre-flight checklist.

 

Think about this for a moment. Would a flight crew skip the pre-flight planning because it expects only a small number of passengers on the flight? No, and we shouldn’t skip it either. We want to evaluate and identify any issues before the takeoff.

 

It doesn’t matter in what order you answer these questions. What matters is that you consider them all as you work to develop a comprehensive mobile BI strategy that will set you up for success.

 

1. Executive Sponsorship

Do we have an executive sponsor? It starts and ends with executive sponsorship. As with any engagement, this not only ensures alignment between your business and mobile strategies but also helps secure the required resources.

 

2. Security

How do we mitigate risks associated with all three layers of mobile BI security: device(s), mobile BI app, and data consumed on the app? Is there an existing corporate security policy or framework that can be leveraged?

 

3. Enterprise Mobility

Do we have either a formal enterprise mobility strategy that we need to align with or a road map that we can follow?

 

4. Technology Infrastructure

Can our current IT and BI infrastructure, which includes both hardware and software, support mobile BI? Are there any gaps that need to be addressed prior to going live?

 

5. Design

Do we have the know-how to apply mobile BI design best practices, whether for dashboards or operational reports? Does the existing software support effective use of metadata and modeling to leverage the “develop once, use many times” design philosophy?

 

6. Talent Management

Do we have internal talent with the required skill set that includes not only technical expertise but also soft skills such as critical thinking?

 

7. Support Infrastructure

Do we have a sufficient support infrastructure in place to ensure that both business (content, analysis) and technical (access, installation) challenges are addressed in a timely manner? Do we have the right resources to develop effective documentation? Can we leverage existing IT and/or BI resources?

 

8. Communication

What will be our communication strategy in the pre- and post-go-live phases? How will we update the user community on a regular basis?

 

9. Business Processes

Are there any business processes that need to be updated, changed, or created to support the mobile BI strategy? Are these changes feasible and can we complete them prior to development to ensure proper testing and validation?

 

10. System Integration

Are there any requirements or opportunities for integration with other internal apps, business systems, or processes?

 

Many of these topics are not unique to mobile BI. Moreover, additional areas of interest, such as project management or quality assurance (testing), are assumed to be part of the existing IT or BI framework. Although these questions may seem extensive at first, their primary purpose is to provide a checklist.

 

I subscribe to the notion that strategy planning for any engagement, not just IT projects, should invite questions that promote critical thinking. Only by encouraging questions can we make sure that we ask the right ones.

 

What key questions do you see as critical to the development of a comprehensive mobile BI strategy?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter at @KaanTurnali and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.
