Recent Blog Posts

Why Choose the Mini PC? Part 2

Retail and finance industries turn to Mini PCs for high performance, compact computing power

 


Whether it’s tucked away on a bookshelf, hidden behind a fitting room mirror or mounted on a digital display, Intel technology-based solutions featuring the Mini PC are helping to power sectors as varied as retail and finance. Thanks to their energy efficiency, compact design and high-performance computing power, these tiny form factors bring full-sized PC capability to the smallest of spaces. Here are some real-world examples of what Mini PCs make possible:

 

Mini PCs as Part of An Overall Technology Solution for Retail

 

One of my favorite Mini PC success stories is that of Galleria Rizzoli in Milan, Italy. Galleria Rizzoli saw the impact of digital book sales firsthand, and decided to respond with a complete digital overhaul of its operations.

 

With the help of Intel technology, Galleria Rizzoli launched a pilot program that gave its store a complete technology makeover. Mini PCs powered new in-store digital signage and seven new in-store customer kiosks, and replaced bulky desktop towers, freeing up valuable store space. Thanks to the technology makeover, sales increased 40 percent.

 

Galleria Rizzoli is a great example of how Mini PCs can enhance the user experience to help drive sales.

 

Overall, it’s a winning solution for Intel, for Rizzoli, and for consumers who might be looking to quickly find the perfect kids’ book for a boy who likes to play with trucks. Read the full story of how Mini PCs modernized the bookstore.

 

Embedded Mini PCs Enable Next-Gen Vending Machines

 

Whether you’re grabbing a quick snack at the office or simply refueling at the gas station, today’s vending machines run on a complex system of motherboards, dispensing tanks, and printing and credit card hardware. Many OEMs are currently working on consolidating these disparate parts into a single Mini PC solution.

 

Mini PCs in the Fitting Room

 

Instead of treating the fitting room like a revolving door, imagine being able to tap a screen to request a different size or color. Some retailers are exploring the idea of using the Mini PC to power touch-screen consoles in fitting rooms that give customers instant inventory access while also recommending related products for purchase.

 


National Grocery Chains Power POS with Mini PCs

 

The days of the bulky cash register have given way to more compact Mini PC-powered POS systems in grocery stores as well. Not only do Mini PCs leave a smaller footprint in tight cashier stalls, they also provide the high performance computing power necessary to ring up multiple items in quick succession.

 

Hospitality Industry Welcomes Mini PCs


Look inside many hotel business centers and you’ll likely see a row of monitors with Mini PCs tucked neatly behind them. The Mini PC offers a compact solution that won’t slow guests down. And some hotels are exploring the use of Mini PCs in guest rooms attached to the TVs along with concierge-type software to enhance the in-room guest experience.

 

Banks Turn to Mini PCs for Increased Efficiency


A growing number of banks are reaching for Mini PCs, not only for their compact size, but for their energy efficiency and speed. For many clients, a visit to the local bank reveals tellers relying on Mini PCs where desktop towers once stood. Mini PCs free up valuable desk space, offer compact security, and integrate with legacy systems.

 

Day Traders Turn to Mini PCs for Quick Calculations

 

For day traders, Mini PCs featuring solid-state drives (SSDs) are the desktop PCs of choice. While traditional hard disk drives in PCs and laptops are fairly inexpensive, they are also slow. SSDs offer greater capacity, are considered more reliable, and enable faster access to data, which is critical in an industry where seconds matter.

 

Where have you seen the Mini PC in use? Join the conversation using #IntelDesktop or view our other posts in the Desktop World Series and rediscover the desktop.

 

To read part 1, click here: Why Choose the Mini PC? Part 1

Read more >

Future of IoT: 5 Questions with Technology Futurist Angela Orebaugh

We sat down with technology futurist Angela Orebaugh recently to chat about emerging Internet of Things (IoT) trends. In 2011, Angela was named Booz Allen Hamilton’s first Cybersecurity Fellow, a position reserved for the firm’s most notable experts in their … Read more >

The post Future of IoT: 5 Questions with Technology Futurist Angela Orebaugh appeared first on IoT@Intel.

Read more >

Population Health Management Best Practices for Today and Tomorrow’s Healthcare System

By Justin Barnes and Mason Beard

 

The transition to value-based care is not an easy one. Organizations will face numerous challenges on their journey towards population health management.

 

We believe there are five key elements and best practices to consider when transitioning from volume to value-based care:  managing multiple quality programs; supporting both employed and affiliated physicians and effectively managing your network and referrals; managing organizational risk and utilization patterns; implementing care management programs; and ensuring success with value-based reimbursement.

 

When considering the best way to proactively and concurrently manage multiple quality programs, such as pay-for-performance, accountable care and/or patient-centered medical home initiatives, you must rally your organization around a wide variety of outcomes-based programs. This requires a solution that supports quality program automation. Your platform must aggregate data from disparate sources, analyze that data through the lens of a program’s specific measures, and effectively enable the actions required to make improvements. Although this is a highly technical and complicated process, when done well it enables care teams to use real-time dashboards to monitor progress and identify focus areas for improving outcomes.
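
As a purely illustrative sketch of that aggregate-analyse-act flow – the sources, field names and the A1c measure below are assumptions made for the example, not any particular vendor’s implementation – quality-program automation might look like this:

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class PatientRecord:
    patient_id: str
    source: str                       # e.g. "EHR", "claims", "lab" (illustrative)
    a1c_result: float | None = None   # hypothetical field for the example measure
    diabetic: bool = False

def aggregate(sources: Iterable[Iterable[PatientRecord]]) -> dict[str, PatientRecord]:
    """Merge records from disparate sources into one view per patient."""
    merged: dict[str, PatientRecord] = {}
    for source in sources:
        for rec in source:
            current = merged.setdefault(rec.patient_id, PatientRecord(rec.patient_id, "merged"))
            current.diabetic = current.diabetic or rec.diabetic
            if rec.a1c_result is not None:
                current.a1c_result = rec.a1c_result
    return merged

def a1c_control_measure(population: dict[str, PatientRecord]) -> tuple[int, int]:
    """Example outcomes measure: diabetic patients with A1c under 8.0 (denominator, numerator)."""
    denom = [p for p in population.values() if p.diabetic]
    numer = [p for p in denom if p.a1c_result is not None and p.a1c_result < 8.0]
    return len(denom), len(numer)

if __name__ == "__main__":
    ehr = [PatientRecord("p1", "EHR", diabetic=True), PatientRecord("p2", "EHR", diabetic=True)]
    labs = [PatientRecord("p1", "lab", a1c_result=7.2), PatientRecord("p2", "lab", a1c_result=9.1)]
    denom, numer = a1c_control_measure(aggregate([ehr, labs]))
    print(f"A1c-in-control rate: {numer}/{denom}")   # a dashboard would trend this over time
```

The point of the sketch is the shape of the pipeline – aggregate, score against a measure, surface the result – rather than the specific measure chosen.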

 

In order to provide support to both employed and affiliated physicians, and effectively manage your network and referrals, an organization must demonstrate its value to healthcare providers. Organizations that do this successfully are best positioned to engage and align with their healthcare providers. This means providing community-wide solutions for value-based care delivery. This must include technology and innovation, transformation services and support, care coordination processes, referral management, and savvy representation with employers and payers based on experience and accurate insight into population health management as well as risk.

 

To effectively manage organizational risk and utilization patterns, it is imperative to optimize episodic and longitudinal risk, which requires applying vetted algorithms to your patient populations using a high-quality data set. To understand differences in risk and utilization patterns, you need to aggregate and normalize data from various clinical and administrative sources, and then ensure that the data quality is as high as possible. You must own your data and processes to be successful. And importantly, do not rely entirely on data received from payers.
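
A minimal sketch of that aggregate-normalize-score flow is below; the field mappings and weights are toy placeholders, not a vetted risk algorithm such as ACG or HCC:

```python
# Illustrative only: map source-specific fields onto one schema, then score.

def normalize(raw: dict) -> dict:
    """Normalize differently named source fields onto a single schema before scoring."""
    return {
        "age": int(raw.get("age") or raw.get("patient_age", 0)),
        "chronic_conditions": int(raw.get("chronic_conditions", 0)),
        "ed_visits_12mo": int(raw.get("ed_visits", raw.get("er_visits_last_year", 0))),
    }

def risk_score(patient: dict) -> float:
    """Toy additive score: higher suggests more attention from care management."""
    return (0.02 * patient["age"]
            + 0.50 * patient["chronic_conditions"]
            + 0.30 * patient["ed_visits_12mo"])

clinical_feed = {"patient_age": 67, "chronic_conditions": 3, "er_visits_last_year": 2}
print(risk_score(normalize(clinical_feed)))   # e.g. flag patients above a chosen threshold
```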

 

It is also important to consider the implementation of care management programs to improve individual patient outcomes. More and more organizations are creating care management initiatives for improving outcomes during transitions of care and for complicated, chronically ill patients. These initiatives can be very effective.  It is important to leverage technology, innovation and processes across the continuum of care, while encompassing both primary and specialty care providers and care teams in the workflows. Accurate insight into your risk helps define your areas of focus. A scheduled, trended outcomes report can effectively identify what’s working and where areas of improvement remain.

 

Finally, your organization can ensure success with value-based reimbursement when the transition is navigated correctly. The shift to value-based reimbursement is a critical and complicated transformation—oftentimes a reinvention—of an organization. Ultimately, it boils down to leadership, experience, technology and commitment. The key to success is working with team members, consultants and vendor partners who understand the myriad details and programs, and who thrive in a culture of communication, collaboration, execution and accountability.

 

Whether it’s PCMH or PCMH-N, PQRS or GPRO, CIN or ACO, PFP or DSRIP, TCM or CCM, HEDIS or NQF, ACGs or HCCs, care management or provider engagement, governance or network tiering, or payer or employer contracting, you can find partners with the right experience to match your organization’s unique needs. Because so much is at stake, it is essential to partner with the very best to help navigate your transition to value-based care.

 

Justin Barnes is a corporate, board and policy advisor who regularly appears in journals, magazines and broadcast media outlets relating to national leadership of healthcare and health IT. Barnes is also host of the weekly syndicated radio show, “This Just In.”

 

Mason Beard is Co-Founder and Chief Product Officer for Wellcentive. Wellcentive delivers population health solutions that enable healthcare organizations to focus on high quality care, while maximizing revenue and transforming to support value-based models.

Read more >

Make Your Data Centre Think for Itself

Wouldn’t it be nice if your data centre could think for itself and save you some headaches? In my last post, I outlined the principle of the orchestration layer in the software-defined infrastructure (SDI), and how it’s like the brain controlling your data centre organism. Today, I’m digging into this idea in a bit more detail, looking at the neurons that pass information from the hands and feet to the brain, as it were. In data centre terms, this means the telemetry that connects your resources to the orchestration layer.

 

Even the most carefully designed orchestration layer will only be effective if it can get constant and up-to-date, contextual information about the resources it is controlling: How are they performing? How much power are they using? What are their utilisation levels and are there any bottlenecks due to latency issues? And so on and so forth. Telemetry provides this real-time visibility by tracking resources’ physical attributes and sending the intelligence back to the orchestration software.

 

Let me give you an example of this in practice. I call it the ‘noisy neighbour’ scenario. Imagine we have four virtual machines (VMs) running on one server, but one of them is hogging a lot of the resource and this is impacting the performance of the other three. Intel’s cache monitoring telemetry on the server can report this right back to the orchestration layer, which will then migrate the noisy VM to a new server, leaving the others in peace. This is real-time situational feedback informing how the whole organism works. In other words, it’s the Watch, Decide, Act, Learn cycle that I described in my previous blog post – doesn’t it all fit together nicely?
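
To make the Watch, Decide, Act cycle concrete, here is a minimal sketch of the noisy-neighbour logic in Python. The telemetry values and the migrate step are hypothetical stand-ins – a real deployment would read cache-occupancy counters from the platform and call the orchestrator’s own migration API:

```python
# Sketch of the 'noisy neighbour' decision loop described above.

NOISY_THRESHOLD = 0.60   # flag a VM using more than 60% of the shared last-level cache

def find_noisy_vm(occupancy_by_vm: dict[str, float]) -> str | None:
    """Watch + Decide: return the VM dominating the cache, if any."""
    vm, share = max(occupancy_by_vm.items(), key=lambda kv: kv[1])
    return vm if share > NOISY_THRESHOLD else None

def rebalance(host: str, occupancy_by_vm: dict[str, float], spare_hosts: list[str]) -> None:
    """Act: migrate the noisy VM to a host with headroom, leaving the others in peace."""
    noisy = find_noisy_vm(occupancy_by_vm)
    if noisy and spare_hosts:
        target = spare_hosts[0]
        print(f"migrating {noisy} from {host} to {target}")   # placeholder for a real migration call

# One pass of the loop with made-up telemetry for four VMs on one server:
rebalance("server-01", {"vm-a": 0.72, "vm-b": 0.11, "vm-c": 0.09, "vm-d": 0.08}, ["server-02"])
```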

 

Lessons from History…

 

Of course, successful telemetry relies on having the right hardware to transmit it. Just think about another data centre game changer of the recent past – virtualisation. Back in the early 2000s, demand for this technology was growing fast, but the software-only solutions available put tremendous overhead demand on the hardware behind them – not an efficient way to go about it. So, we at Intel helped build in more efficiency with solutions like Intel® Virtualization Technology, delivering more memory, better addressability and huge performance gains. Today, we’re applying that same logic to remove SDI bottlenecks. Another example is Intel® Intelligent Power Node Manager, a hardware engine that works with management software to monitor and control power usage at the server, rack and row level, allowing you to set usage policies for each.

However, we’re not just adding telemetry capabilities at the chip level and boosting hardware performance, but also investing in high-bandwidth networking and storage technologies.

 

…Applied to Today’s Data Centres

 

With technologies already in the market to enable telemetry within the SDI, there are a number of real-life use cases we can look to for examples of how it can help drive time, cost and labour out of the data centre. Here are some examples of how end-user organizations are using Intelligent Power Node Manager to do this:

 

 

[Image: orchestration-layer explanation for the data centre – examples of Intelligent Power Node Manager use cases]

Another potential use case for the technology is to reduce reliance on intelligent power strips. Or you could throttle back on server performance and extend the life of your uninterruptible power supply (UPS) in the event of a power outage, helping lower the risk of service downtime – something no business can afford.
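
As a rough illustration of that UPS scenario, the decision logic might look like the sketch below. The wattage values are invented and set_power_cap() is a placeholder for whatever management software actually drives Intel Intelligent Power Node Manager in your environment:

```python
# Illustrative sketch: when utility power is lost, apply a tighter power cap
# so the UPS lasts longer.

NORMAL_CAP_WATTS = 450
OUTAGE_CAP_WATTS = 250   # throttled level chosen to stretch UPS runtime (assumed values)

def set_power_cap(rack: str, watts: int) -> None:
    print(f"[{rack}] power cap set to {watts} W")   # placeholder for the real management call

def on_power_event(rack: str, on_utility_power: bool) -> None:
    """Decide which cap to apply based on the power source currently feeding the rack."""
    set_power_cap(rack, NORMAL_CAP_WATTS if on_utility_power else OUTAGE_CAP_WATTS)

on_power_event("rack-07", on_utility_power=False)   # outage detected: throttle to protect the UPS
```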

 

So, once you’ve got your data centre functioning like a highly evolved neural network, what’s next? Well, as data centre technologies continue to develop, the extent to which you can build agility into your infrastructure is growing all the time. In my next blog, I’m going to look into the future a bit and explore how silicon photonics can help you create composable architectures that will enable you to build and reconfigure resources on the fly.

 

To pass the time until then, I’d love to hear from any of you that have already started using telemetry to inform your orchestration layer. What impact has it had for you, and can you share any tips for those just starting out?

 

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance

 

You can find my earlier blogs on data centres here:

Is Your Data Centre Ready for the IoT Age?

Have You Got Your Blueprints Ready?

Are You Smarter than a Data Centre?

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

 

*Other names and brands may be claimed as the property of others.

Read more >

Server Refresh Can Reduce Total Cost of Ownership

More bang for your buck. Essentially that is the driving force behind my team in Intel IT. Our IT department is on a tight budget, just like most enterprise IT departments. Therefore, return on investment and total cost of ownership are important considerations for deciding when to upgrade the servers that run our silicon design workloads. As a principal engineer in infrastructure engineering, I direct the comparison of the various models of each new generation of Intel® CPU to those of previous generations of processors. (We may re-evaluate the TCO of particular models between generations, if price points significantly change.) We evaluate all the Intel® Xeon® processor families – Intel Xeon processor E3 family, Intel Xeon processor E5 family, and Intel Xeon processor E7 family – each of which have different niches in Intel’s silicon design efforts.

 

We use industry benchmarks and actual electronic design automation (EDA) workloads in our evaluations, which go beyond performance to address TCO – we include throughput, form factor (density), energy efficiency, cost, software licensing costs, and other factors. In many cases over the years, one of the models might turn out better in terms of price/watt, but performance is slower, or the software licensing fees are triple those for a different model.
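
To illustrate the kind of trade-off we weigh – this is not our actual model or data, just a toy example – here is a cost-per-throughput comparison in Python that folds purchase price, energy and licensing into a single number:

```python
# Toy TCO comparison: fold purchase price, power and software licensing into a
# cost per unit of throughput. All numbers are invented for illustration.

def cost_per_throughput(price: float, watts: float, license_per_year: float,
                        throughput: float, years: int = 4,
                        usd_per_kwh: float = 0.10) -> float:
    energy_cost = watts / 1000.0 * 24 * 365 * years * usd_per_kwh
    total_cost = price + energy_cost + license_per_year * years
    return total_cost / throughput      # lower is better

candidates = {
    "model A": cost_per_throughput(price=9000, watts=300, license_per_year=2000, throughput=1.0),
    "model B": cost_per_throughput(price=14000, watts=350, license_per_year=2000, throughput=1.4),
}
print(min(candidates, key=candidates.get), candidates)
```

A faster but pricier model can still win once licensing and energy are spread over the throughput it delivers, which is exactly why we never judge on performance alone.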

 

In silicon design, back-end design jobs are time critical and require servers with considerable processing power, large memory capacity, and memory bandwidth. For these types of jobs, the bottleneck has historically been memory, not CPU cycles; with more memory, we can run more jobs in parallel. The Intel Xeon processor E7-8800 v3 product family offers new features that can increase EDA throughput, including up to 20% more cores than the previous generation and DDR4 memory support for higher memory bandwidth. A server based on the Intel Xeon processor E7-8800 v3 can take either DDR3 (thereby protecting existing investment) or DDR4 DIMMs – and supports memory capacity up to 6 TB per 4-socket server (with 64 GB DIMMs) to deliver fast turnaround time for large silicon design jobs.
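
As a quick sanity check on that 6 TB figure, here is the arithmetic, assuming the 24 DIMM slots per socket typical of this platform class (an assumption, since slot count varies by system design):

```python
# 4 sockets x 24 DIMM slots per socket (assumed) x 64 GB per DIMM
sockets, dimms_per_socket, dimm_gb = 4, 24, 64
total_gb = sockets * dimms_per_socket * dimm_gb
print(total_gb, "GB =", total_gb / 1024, "TB")   # 6144 GB = 6.0 TB
```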

 

We recently completed an evaluation of the Intel Xeon processor E7-8800 v3 product family, as documented in our recent brief. According to our test results, the Intel Xeon processor E7 v3-based server delivers excellent gains in performance and supports larger models, faster iterations, and greater throughput than was possible with the previous generation of the processor. These improvements can accelerate long-running silicon design jobs and shorten the time required to bring new silicon design to market. These improvements can also reduce data center footprint and help control operational and software licensing costs by achieving greater throughput using fewer systems than were necessary with previous generations of processors.

 

Our tests used a large multi-threaded EDA application operating on current Intel® silicon design data sets. The results show that an Intel Xeon processor E7-8890 v3-based server completed a complex silicon design workload 1.18x faster than the previous-generation Intel Xeon processor E7-4890 v2-based server and 17.04x faster than a server based on the Intel® Xeon® processor 7100 series (Intel Xeon processor 7140M).

 

The Intel Xeon processor E7-8800 v3 product family also supports the Intel® Advanced Vector Extensions 2 (Intel® AVX2) instruction set. Benefits of Intel AVX2 include doubling the number of floating-point operations per clock cycle, 256-bit integer instructions, floating-point fused multiply-add instructions, and gather operations. While our silicon design jobs do not currently use AVX2 – mostly because the design cycles can take over a year to complete and during that time we cannot modify the plan-of-record (POR) EDA tools and infrastructure for those servers – we anticipate that Intel AVX2 can provide a performance boost for many technical applications.
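
For a feel of what “doubling floating-point operations per clock” means, here is a back-of-the-envelope peak-throughput estimate. The core count and frequency are example values and the two-FMA-units-per-core figure is an assumption about the microarchitecture, so treat the result as illustrative rather than a measured number:

```python
# Back-of-the-envelope peak double-precision FLOPS with AVX2 + FMA (illustrative).
doubles_per_vector = 256 // 64     # a 256-bit AVX2 register holds 4 doubles
flops_per_fma = 2                  # a fused multiply-add counts as 2 operations
fma_units_per_core = 2             # assumed number of FMA execution units per core
cores, ghz = 18, 2.5               # example core count and base frequency

flops_per_cycle_per_core = doubles_per_vector * flops_per_fma * fma_units_per_core   # 16
peak_gflops = cores * ghz * flops_per_cycle_per_core
print(peak_gflops, "GFLOPS peak (theoretical, double precision)")   # 720.0
```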

 

I’d like to hear from other IT professionals – are you considering refreshing? If you’ve already refreshed, can you share your observed benefits and concrete ROI? What best-known methods have you developed and what are some remaining pain points? If you have any questions, I’d be happy to answer them and pass on our own best practices in deploying these servers. Please share your thoughts and insights with me – and your other IT colleagues – by leaving a comment below. Join our conversation here in the IT Peer Network.

Read more >

Chromium Command-Line Options for Crosswalk Builds with the Intel XDK

If you are an HTML5 web app developer, something you probably take for granted with that Chrome browser on your desktop are the handy Chromium Command-Line Switches that can be used to enable experimental features in the browser or control useful debugging and … Read more >

The post Chromium Command-Line Options for Crosswalk Builds with the Intel XDK appeared first on Intel Software and Services.

Read more >

Looking to the Future: Smart Healthcare with Big Data – HLRS


The NHS is under unbelievable pressure to do more with less; according to the latest research from The King’s Fund, the NHS budget has increased by an average of just 0.8 per cent per year in real terms since 2010.

 

Clearly, there is a need for smart ways to improve healthcare around the world. One team of researchers at The University of Stuttgart is using cutting-edge technology and big data to simulate the longest and strongest bone in the human body — the femur — to improve implants.

 


Medical Marvels

 

For three years, this research team has been running simulations of the types of realistic forces that the thigh bone undergoes on a daily basis for different body types and at a variety of activity levels to try and inform doctors what is needed to create much better implants for patients with severe fractures or hip deterioration. Successful implants can make a significant impact on the wearer’s quality of life, so the lighter and more durable they are the better.

 

Femoral fractures are pretty common, with around 70,000 hip fractures taking place in the UK each year, at an estimated cost of up to £2 billion. So, finding better materials for bone implants that allow longer wear and better mobility would address a real need.

 

However, producing simulation results for a fractured bone requires a huge amount of data. Bone is not a compact structure; it is more like a calcified sponge. Such a non-homogeneous, non-uniform material behaves in different ways under different stresses for different people. This means the team must collect hundreds of thousands of extremely fine-grained scans from genuine bone samples to learn how different femurs are structured. The incredible detail and high resolution provided by high-performance machines powered by the Intel® Xeon® processor E5-2680 v3 enable them to replicate full femur simulations with this exact material data.

 

Such a level of intricacy cannot be handled on a normal cluster. In the University of Stuttgart research team’s experience, one tiny part of the femoral head – a cube measuring only about 0.6 mm across – generates approximately 90,000 samples, and each of these samples requires at least six finite-element simulations to obtain the field of anisotropic material data needed to cover the full femoral head. To carry out this many simulations they definitely need the supercomputer. To do this in a commercial way you’d need thousands of patients, but with one supercomputer this team can predict average bone remodelling and develop reliable material models for accurate implant simulations. This is real innovation.
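
To get a sense of the scale, multiplying the figures above gives the number of finite-element runs behind a single femoral-head region:

```python
# Rough scale of the work behind one femoral-head region, using the figures above.
samples_per_region = 90_000      # micro-scale samples from one ~0.6 mm cube region
fe_runs_per_sample = 6           # finite-element simulations per sample
print(samples_per_region * fe_runs_per_sample, "FE simulations for one femoral-head region")
# 540000 - the kind of job count that needs a supercomputer rather than a normal cluster
```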

 

High-Performance Computing for Healthcare

 

The High Performance Computing Center in Stuttgart (HLRS) is the institution that makes all this possible. One of just three large ‘tier 0’ supercomputer sites in Germany, it recently upgraded to the Intel Xeon processor E5-2680 v3, which according to internal tests delivers four times the performance of its previous supercomputer.ii This is great for Stuttgart University, as its computing center now has four times the storage space.ii Research like this requires intensive data processing and accurate analysis, so significant computing capacity is crucial.

 


This new system enables breakthroughs that would otherwise be impossible. For more on HLRS’s cutting-edge supercomputing offering, click here.

 

I’d love to get your thoughts on the healthcare innovation enabled by the Intel HPC technologies being used by HLRS and its researchers, so please leave a comment below — I’m happy to answer questions you may have too.

 

To continue this conversation on Twitter, please follow me at @Jane_Cooltech.

 

Join the debate in the Intel Health and Life Sciences Community, and look for Thomas Kellerer’s HLRS blog next week!

 

For other blogs on how Intel technology is used in the healthcare space, check out these blogs.

 

i ‘Storage and Indexing of Fine Grain, Large Scale Data Sets’ by Ralf Schneider, in Michael M. Resch et al. (eds.), Sustained Simulation Performance 2013, Springer International Publishing, 2013, pp. 89–104. ISBN: 978-3-319-01438-8. DOI: 10.1007/978-3-319-01439-5_7

 

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance

 

*Other names and brands may be claimed as the property of others.

Photo Credit: copyright Boris Lehner for HLRS

Read more >

Evaluating an OS update? Take a look at eDrive….

As users of Windows 7 consider moving to Windows 8.1 or Windows 10, a new BitLocker feature is available that should be considered.  Nicknamed “eDrive,” “Encrypted Hard Drive,” or “Encrypted Drive,” the feature provides the ability for BitLocker to take advantage of the hardware encryption capabilities of compatible drives, instead of using software encryption.   Hardware encryption provides benefits over software encryption in that encryption activation is near-immediate, and real-time performance isn’t impacted.

 

eDrive is Microsoft’s implementation of managed hardware-based encryption built on the TCG Opal framework and IEEE-1667 protocols.  It is implemented a bit differently from how third-party Independent Software Vendors (ISVs) implement and manage Opal-compatible drives.  It is important to understand the differences as you evaluate your data protection strategy and solution.

 

eDrive information on the internet is relatively sparse currently.  Here are a couple of resources from Intel that will help get you started:

 

And here are a couple of tools from Intel that will be useful when working with the Intel® SSD Pro 2500 Series:

 

If you’re going to do research on the internet, I’ve found that “Opal IEEE 1667 BitLocker” are good search terms to get you started.

 

A special note to those who want to evaluate eDrive with the Intel® SSD Pro 2500 Series: the Intel-provided tool to enable eDrive support only works on “channel SKUs.”  Intel provides SSDs through the retail market (channel) and directly to OEMs (the maker/seller of your laptop).  Support for eDrive on OEM SKUs must be provided by the OEM.  Channel SKUs can be verified by looking at the firmware version on the SSD label, or with the Intel® SSD Toolbox or Intel® SSD Pro Administrator Tool.  Firmware in the format of TG## (TG20, TG21, TG26, etc…)  confirms a channel SKU, and the ability to enable eDrive support on the Intel® SSD Pro 2500 Series.
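
If you want to script that check, a minimal sketch of matching the TG## firmware pattern might look like the following; reading the firmware string from the drive itself is left to the Intel SSD tools mentioned above, so here it is just a parameter:

```python
import re

# Channel-SKU check described above: firmware strings of the form TG##
# (TG20, TG21, TG26, ...) indicate a channel SKU of the Intel SSD Pro 2500 Series.

def is_channel_sku(firmware: str) -> bool:
    return re.fullmatch(r"TG\d{2}", firmware.strip().upper()) is not None

for fw in ("TG26", "TG21", "LE1i"):   # "LE1i" is a made-up example of a non-channel string
    verdict = "channel SKU (eDrive can be enabled)" if is_channel_sku(fw) else "OEM SKU - contact the OEM"
    print(fw, "->", verdict)
```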

 

Take a look at eDrive, or managed hardware-based encryption solutions from ISVs such as McAfee, WinMagic, Wave, and others.

 

As always, I look forward to your input on topics you would like covered.

 

Thanks for your time!

 

Doug
intel.com/ssd

Read more >

1, 2, 3…It’s Time to Migrate from Windows Server 2003

We’ve told you once. We’ve told you twice. Heck, we’ve told you four times: if you’re still running Windows Server 2003, you need to take action as soon as possible because the EOS is fast approaching on July 14th, 2015.

 

Need a refresher on what lies ahead? Well, good news, we’ve put together all the information you need to stay safe.

 

The upcoming Windows Server 2003 EOS means Microsoft will not be issuing any patches or security updates after the cut off date. While hackers are off rejoicing, this raises major security issues for those still running Windows Server 2003. And that appears to be quite a few of you.

 

According to Softchoice, a company specializing in technology procurement for organizations, 21 percent of all servers are still running Windows Server 2003. More worrisome is that 97 percent of all data centers are still running some form of Windows Server 2003 within their facilities.

 

But migrating from Windows Server 2003 and ending up with proper security doesn’t have to be a pain. In our previous posts in this series, we’ve highlighted three different options for migration and how to secure the target environment. Let’s recap them here:

 

Option 1: Upgrade to Windows Server 2012

 

Because Windows Server 2008 will be losing support in January 2016, it’s a good idea for organizations to upgrade directly to Windows Server 2012 R2. This will require 64-bit servers and a refreshed application stack for a supported configuration.

 

Your organization may well be looking to invest in a hybrid cloud infrastructure as part of this upgrade. Depending on what a server is used for, you’ll need optimized security solutions to secure your private virtual machines.

 

Intel Security has you covered. No matter what you’re running, you should at least employ either McAfee Server Security Suite Essentials or McAfee Server Security Suite Advanced.

 

If you’re running an email, SharePoint, or database server, consider McAfee Security for Email Servers, McAfee Security for Microsoft SharePoint or McAfee Data Center Security Suite for Databases, depending on your needs.

 

Option 2: Secure the public cloud

 

As the cloud ecosystem matures, the public cloud is becoming a reasonable alternative for many infrastructure needs. However, one issue remains: while public cloud solutions secure the underlying infrastructure, each company is responsible for securing its virtual servers from the guest OS up. That means you’ll need a security solution built for the cloud.

 

Luckily, we have a solution that will help you break through the haze and gain complete control over workloads running within an Infrastructure-as-a-Service environment: McAfee Public Cloud Server Security Suite.

 

McAfee Public Cloud Server Security Suite gives you comprehensive cloud security, broad visibility into server instances in the public cloud and dynamic management of cloud environments.

 

Option 3: Protecting the servers you can’t migrate right now

 

For the 1.6 million of you who are behind schedule on Windows Server upgrades and won’t be able to migrate by the EOS date, you have a tough challenge ahead. Hackers know full well that Microsoft won’t be patching any newly discovered security issues, and as such, your servers may become a target.

 

But it’s not all doom and gloom – Intel Security can tide you over and keep you protected until you’ve finished migrating.

 

With McAfee Application Control for Servers, you get a centrally managed, dynamic whitelisting solution. It helps protect your unsupported servers from malware and advanced threats by blocking unauthorized applications, automatically categorizing threats and reducing manual effort through a dynamic trust model.

 

Make your migration easy and get started today.

 

Be sure to follow along with @IntelSec_Biz on Twitter for real-time security updates. Stay safe out there!

 

 

Windows Server 2003 EOS

This blog post is episode #5 of 5 in the Tech Innovation Blog Series

View all episodes  >

Read more >