Recent Blog Posts

Big Data is Changing the Football Game

The football authorities have been slow to embrace technology, at times actively resisting it. It’s only been two seasons since some of Europe’s top leagues were authorized to use goal-line technology to answer the relatively simple question of whether or not a goal has been scored, that is, whether the whole ball has crossed the goal line.

 

This is something the games of tennis and cricket have been doing for nearly ten years, but for one of the world’s richest sports, it risked becoming a bit of a joke.  As one seasoned British manager once said, after seeing officials deny his team a perfectly good goal: “We can put a man on the moon, time serves of 100 miles per hour at Wimbledon, yet we cannot place a couple of sensors in a net to show when a goal has been scored.” The authorities eventually relented, of course, their hand forced by increasingly common, high profile and embarrassing slip-ups.

 

But while the sport’s governing bodies were in the grip of technological inertia, the world’s top clubs have dived in head first over the last ten to fifteen years, turning to big data analytics in search of a new competitive advantage. In turn, this has seen some innovative companies spring up to serve this new ‘industry’, companies like Intel customer Scout7.

 

Taking the Guesswork out of the Beautiful Game

 

Big data has become important in football in part because it is big business. And for a trend that is only in its second decade, things have moved fast since the days of teams of hundreds of scouts collecting ‘data’ in the form of thousands of written reports in an effort to provide teams with insights into the opposition or potential new signings.

 

Now, with tools like Scout7’s football database, powered by a solution based on the Intel® Xeon® processor E3 family, clubs have a fast, sophisticated system they can use to enhance their scouting and analysis operations.

 

For 138 clubs in 30 leagues, Scout7 makes videos of games from all over the world available for analysis within two hours of the final whistle[1]. At the touch of a button, clubs can take some of the guesswork and ‘instinct’ out of deciding who gets on the pitch, as well as the legwork out of keeping tabs on players and prospects from all over the world.

 


Pass master: Map of one player’s passes and average positions from the Italian Serie A during the 2014-15 season

 

Using big data analytics to enable smarter player recruitment is among Scout7’s specialties. For young players, without several seasons of experience on which to judge them, this can be especially crucial. How do you make a call on their temperament or readiness to make the step up? How will they handle the pressure? As we enter the busiest recruitment period of the football calendar, the summer transfer window, questions like these are being asked throughout the football world right now.

 

Delving into the Data

 

It’s a global game, and Scout7 deals in global data, so we can head to a league less travelled for an example: the Czech First League. The UEFA Under-21 European Championship also took place this summer and, with international tournaments often acting as shop windows for the summer transfer market (which opened on 1st July, a day after the tournament’s final), it makes sense to factor this into our analysis.

 

So, let’s look at the Scout7 player database for players in the Czech First League who are current Under-21 internationals, to see who has had the most game time and therefore the most exposure to the rigors of competitive football. We can see that a 22-year-old FC Hradec Králové defender played every single minute of his team’s league campaign this season: 2,700 minutes in total.

 

Another player was on the field for 97% of his team’s league minutes this season, valuable experience for a youngster. Having identified two potentially first-team-ready players, Scout7’s database would allow us to take a closer look at the key moments from these games in high-definition video.
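As a rough illustration of the kind of query this involves (a minimal sketch: the player names, field names, and the pandas approach are ours, not Scout7’s actual schema or API):

```python
import pandas as pd

# Hypothetical player records; Scout7's real schema is not public.
players = pd.DataFrame([
    {"name": "Defender A", "club": "FC Hradec Králové",
     "u21_international": True, "minutes_played": 2700},
    {"name": "Midfielder B", "club": "Club B",
     "u21_international": True, "minutes_played": 2619},
    {"name": "Forward C", "club": "Club C",
     "u21_international": False, "minutes_played": 2500},
])

# A 30-round league season offers 30 * 90 = 2,700 minutes per player.
SEASON_MINUTES = 30 * 90

eligible = players[players["u21_international"]].copy()
eligible["minutes_pct"] = eligible["minutes_played"] / SEASON_MINUTES * 100

# Rank eligible players by share of available minutes (2,700 -> 100%, 2,619 -> 97%).
print(eligible.sort_values("minutes_pct", ascending=False)
      [["name", "club", "minutes_played", "minutes_pct"]])
```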

 

Check out our infographic, detailing the fledgling career of another player in the context of the vast amount of data collection and analysis that takes place within Scout7.

 

Scout7 player profile

 

“Our customers are embracing this transition to data-driven business decision-making, breaking away from blind faith in the hunches of individuals and pulling insights from the raft of new information sources, including video, to extract value and insights from big data,” explains Lee Jamison, managing director and founder, Scout7.

 

Scout7’s platform uses Intel® technology to deliver the computing power and video transcoding speed that clubs need to mine and analyze more than 3 million minutes of footage per year, and its database holds 135,000 active player records.
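To put that footage figure in perspective, here is a quick back-of-the-envelope calculation (the 2x transcode speed is an assumption for illustration, not a published Scout7 or Intel number):

```python
# Back-of-the-envelope sizing of the transcoding workload described above.
FOOTAGE_MINUTES_PER_YEAR = 3_000_000

minutes_per_day = FOOTAGE_MINUTES_PER_YEAR / 365   # ~8,219 minutes of video per day
realtime_streams = minutes_per_day / (24 * 60)     # ~5.7 streams running around the clock

# Assuming each transcode worker runs at 2x real time (a hypothetical figure),
# roughly three workers keep pace.
ASSUMED_SPEEDUP = 2.0
workers_needed = realtime_streams / ASSUMED_SPEEDUP

print(f"{minutes_per_day:,.0f} min/day -> {realtime_streams:.1f} real-time streams, "
      f"~{workers_needed:.1f} workers at {ASSUMED_SPEEDUP}x real time")
```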

 

Lonely at the Top

 

At the elite level of sport there’s only room at the top for one, and the margins between success and failure can be centimeters or split seconds. Identifying exactly where to find those winning centimeters and split seconds is where big data analytics really comes into its own.

 

Read the full case study.

 

To continue this conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.


Find me on LinkedIn.

Keep up with me on Twitter.

 

*Other names and brands may be claimed as the property of others.

 

[1] Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to www.intel.com/performance


Intel does not control or audit the design or implementation of third party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported and confirm whether the referenced benchmark data are accurate and reflect performance of systems available for purchase.


Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. Check with your system manufacturer or retailer or learn more at http://www.intel.com

Read more >

10 Mobile BI Strategy Questions: Executive Sponsorship

Of the ten mobile BI questions I outlined in my last post, “Do we have an executive sponsor?” is the most important one because the success of a mobile BI journey depends on it more than any other. While the role of an executive sponsor is critical in all tech projects, several aspects of mobile BI technology make it easy for executive management to be involved closely and play a unique role.

 

Moreover, although the CIO or the CTO plays a critical role in making sure the right technology is acquired or developed, executive sponsorship from the business side provides the right level of partnership to run on all three cylinders of BI: insight into the right data, for the right role, at the right time.

 

Why Do We Need an Executive Sponsor?

 

We need executive sponsorship because, unlike grassroots efforts, business and technology projects require a top-down approach. Whether the strategy is developed as part of a structured project or as a standalone engagement, the executive sponsor delivers three critical ingredients:

 

  1. Ensuring the mobile BI strategy is in line with the overall business strategy.
  2. Making the required resources available.
  3. Providing the guidance necessary to stay the course.

 

Is Having an Executive Sponsor Enough?

 

Having an executive sponsor only on paper isn’t enough, however. The commitment an executive sponsor makes and the leadership he/she provides have a direct impact on the outcome of the strategy. Thus, the ideal executive sponsor of a mobile BI initiative is a champion of the cause, an ardent mobile user, and the most active consumer of its assets.

 

What Makes an Ideal Executive Sponsor for Mobile BI?

 

How does the executive champion the mobile BI initiative? First and foremost, he/she leads by example: no more printing paper copies of reports or dashboards. This means that the executive is keen not only to consume the data on mobile devices but also to apply the insight derived from these mobile assets to decisions that matter. Using the technology firsthand demonstrates the mobile mindset and sets an example for direct reports and their teams. In addition, by recognizing the information available in these mobile BI assets as the single version of the truth, the executive provides a clear and consistent message for everyone to follow.

 

Is Mobile BI Easier for Executive Sponsors to Adopt?

 

Without a doubt, mobile BI, just like mobility itself, appeals to a wide range of users, starting with executives. Unlike the PC, which wasn’t mobile at all, and the laptop, which provided limited mobility, tablets and smartphones offer a perfect combination of mobility and convenience. This ease of use makes these devices ideal for winning over even those executives who were initially hesitant to include mobile BI in their arsenals or to use it in their daily decision-making.

 

This mobility and simplicity may give executives additional incentive to get involved in developing requirements for the first set of mobile BI assets, because they can easily see the benefit of having access to critical information at their fingertips. These benefits include an additional opportunity for sales and marketing to use mobile BI to showcase new products and services to customers, an approach that reflects the innovation inherent in this technology.

 

Bottom Line: Executive Sponsorship Matters

 

The most important goal of a mobile BI strategy is to enable faster, better-informed decision making. Executive sponsorship matters because with the right sponsorship, the mobile BI initiative will have the best chance to drive growth and profitability. Without this sponsorship — even with the most advanced technology in place — a strategy will face an uphill battle.

What other aspects of executive sponsorship do you see playing a role in mobile BI strategy?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter at @KaanTurnali and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

Read more >

NVM Express: Windows driver support decoded



NVMe Drivers and SSD Support in Windows

Microsoft enabled native support for NVM Express (NVMe) in Windows 8.1 and Windows Server 2012 R2 by way of inbox drivers, and subsequent versions of each OS family are expected to have native support moving forward. Additionally, native support for NVMe in Windows 7 and Windows Server 2008 R2 was added via product updates.

 

Intel also provides an NVMe driver for Microsoft operating systems, released with each version of our NVMe hardware products and validated both internally and through Microsoft’s WHCK. The list of supported OSs is the same as those above (in both 32-bit and 64-bit versions), along with Windows 8 and Windows Server 2012. The Intel NVMe driver supports only Intel SSDs and is required for power users or server administrators who plan to use the Intel® Solid-State Drive Data Center Tool to perform administrative commands on an NVMe SSD (e.g., firmware updates). The Intel driver is intended to provide the best overall experience in terms of performance and supportability, and it is strongly recommended.

 

 

Download Links by Operating System

 

NVMe Drivers for Windows

Operating System       | Intel Driver Download | Microsoft Driver Download
-----------------------|-----------------------|---------------------------
Windows 7              | intel.com             | microsoft.com
Windows Server 2008 R2 | intel.com             | microsoft.com
Windows 8              | intel.com             | supported by upgrade to Windows 8.1
Windows Server 2012    | intel.com             | supported by upgrade to Windows Server 2012 R2
Windows 8.1            | intel.com             | N/A (inbox driver)
Windows Server 2012 R2 | intel.com             | N/A (inbox driver)
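If you are scripting deployments, the matrix above reduces to a simple lookup. A minimal sketch with the table encoded as data (illustrative Python only, not an Intel or Microsoft tool):

```python
# The driver table above, encoded for scripted lookups. Illustrative only.
NVME_DRIVER_SOURCES = {
    "Windows 7":              ("intel.com", "microsoft.com (product update)"),
    "Windows Server 2008 R2": ("intel.com", "microsoft.com (product update)"),
    "Windows 8":              ("intel.com", "upgrade to Windows 8.1 for the inbox driver"),
    "Windows Server 2012":    ("intel.com", "upgrade to Windows Server 2012 R2 for the inbox driver"),
    "Windows 8.1":            ("intel.com", "inbox driver (no download needed)"),
    "Windows Server 2012 R2": ("intel.com", "inbox driver (no download needed)"),
}

def nvme_driver_options(os_name: str) -> str:
    """Summarize where to get an NVMe driver for a given Windows version."""
    intel, microsoft = NVME_DRIVER_SOURCES[os_name]
    return f"{os_name}: Intel driver via {intel}; Microsoft path: {microsoft}"

print(nvme_driver_options("Windows Server 2012"))
```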

 

 

Other Links of Interest

 

Link                                      | Details
------------------------------------------|--------
Intel® Solid-State Drive Data Center Tool | The Intel® Solid-State Drive Data Center Tool (Intel SSD DCT) is a drive management tool for the Intel SSD Data Center Family of products.
Intel® SSD Data Center Family Overview    | Provides access to more information on Intel’s NVMe PCIe SSDs.
nvmexpress.org                            | More information on what NVMe is, why you should consider using it, and news/upcoming events.

 

 

Other blogs with NVM Express driver information for other operating systems:

NVM Express: Linux driver support decoded

The Skinny on NVM Express and ESXi


Read more >

Empathizing with Teachers and Learners Leads to the Read With Me App

Teaching is tough work. In one design thinking project that I mentioned in a previous blog post, empathy for teachers and students led to development of the Read With Me app (available now on Chrome and select Android devices) co-developed … Read more >

The post Empathizing with Teachers and Learners Leads to the Read With Me App appeared first on Intel Software and Services.

Read more >

Why Choose the Mini PC? Part 2

Retail and finance industries turn to Mini PCs for high performance, compact computing power

 


Whether it’s tucked away on a bookshelf, hidden behind a fitting room mirror or mounted on a digital display, Intel technology-based solutions featuring the Mini PC are helping to power industries as varied as the retail and the financial sectors. Thanks to their energy efficiency, compact design and high performance computing power, these tiny form factors bring full-sized PC power to the smallest of spaces. Here are some real-world examples of the endless possibilities with Mini PCs:

 

Mini PCs as Part of an Overall Technology Solution for Retail

 

One of my favorite Mini PC success stories is that of Galleria Rizzoli in Milan, Italy. Galleria Rizzoli saw the impact of digital book sales firsthand and decided to respond with a complete digital overhaul of its operations.

 

With the help of Intel technology, Galleria Rizzoli launched a pilot program that gave their store a complete technology makeover. Mini PCs powered new in-store digital signage and seven new in-store customer kiosks. Mini PCs replaced bulky desktop towers, freeing up valuable store space. Thanks to the technology makeover, sales increased 40 percent.

 

Galleria Rizzoli is a great example of how Mini PCs can enhance the user experience to help drive sales.

 

Overall, it’s a winning solution for Intel, for Rizzoli, and for consumers who might be looking to quickly find the perfect kids’ book for a boy who likes to play with trucks. Read the full story of how Mini PCs modernized the bookstore.

 

Embedded Mini PCs Enable Next-Gen Vending Machines

 

Whether you’re grabbing a quick snack at the office or simply refueling at the gas station, vending machines today operate on a complex system of motherboards, dispensing tanks, and printing and credit card machines. Many OEMs are currently working on consolidating all these disparate parts into one Mini PC-based solution.

 

Mini PCs in the Fitting Room

 

Instead of treating the fitting room like a revolving door, imagine being able to tap a screen to request a different size or color. Some retailers are exploring the idea of using the Mini PC to power touch-screen consoles in fitting rooms, giving customers instant inventory access while also recommending related products for purchase.

 


National Grocery Chains Power POS with Mini PCs

 

The days of the bulky cash register have given way to more compact Mini PC-powered POS systems in grocery stores as well. Not only do Mini PCs leave a smaller footprint in tight cashier stalls, they also provide the high performance computing power necessary to ring up multiple items in quick succession.

 

Hospitality Industry Welcomes Mini PCs


Look inside many hotel business centers and you’ll likely see a row of monitors with Mini PCs tucked neatly behind them. The Mini PC offers a compact solution that won’t slow guests down. And some hotels are exploring the use of Mini PCs attached to in-room TVs, paired with concierge-style software, to enhance the guest experience.

 

Banks Turn to Mini PCs for Increased Efficiency


A growing number of banks are reaching for Mini PCs, not only for their compact size but for their energy efficiency and speed. For many clients, a visit to the local bank reveals tellers relying on Mini PCs where desktop towers once stood. Mini PCs free up valuable desk space, can be secured out of sight, and integrate with legacy systems.

 

Day Traders Turn to Mini PCs for Quick Calculations

 

For day traders, Mini PCs featuring solid-state drives (SSDs) are the desktop PCs of choice. While traditional hard disk drives in PCs and laptops are fairly inexpensive, they are also slow. SSDs are considered more reliable and enable far faster access to data, which is critical in an industry where seconds matter.

 

Where have you seen the Mini PC in use? Join the conversation using #IntelDesktop or view our other posts in the Desktop World Series and rediscover the desktop.

 

To read part 1, click here: Why Choose the Mini PC? Part 1

Read more >

Future of IoT: 5 Questions with Technology Futurist Angela Orebaugh

We sat down with technology futurist Angela Orebaugh recently to chat about emerging Internet of Things (IoT) trends. In 2011, Angela was named Booz Allen Hamilton’s first Cybersecurity Fellow, a position reserved for the firm’s most notable experts in their … Read more >

The post Future of IoT: 5 Questions with Technology Futurist Angela Orebaugh appeared first on IoT@Intel.

Read more >

Population Health Management Best Practices for Today and Tomorrow’s Healthcare System

By Justin Barnes and Mason Beard

 

The transition to value-based care is not an easy one. Organizations will face numerous challenges on their journey towards population health management.

 

We believe there are five key elements and best practices to consider when transitioning from volume- to value-based care:

  1. Managing multiple quality programs.
  2. Supporting both employed and affiliated physicians, and effectively managing your network and referrals.
  3. Managing organizational risk and utilization patterns.
  4. Implementing care management programs.
  5. Ensuring success with value-based reimbursement.

 

When considering the best way to proactively and concurrently manage multiple quality programs, such as pay-for-performance, accountable care and/or patient-centered medical home initiatives, you must rally your organization around a wide variety of outcomes-based programs. This requires a solution that supports quality program automation. Your platform must aggregate data from disparate sources, analyze that data through the lens of a program’s specific measures, and effectively enable the actions required to make improvements. Although this is a highly technical and complicated process, when done well it enables care teams to use real-time dashboards to monitor progress and identify focus areas for improving outcomes.
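To make “analyzing data through the lens of a program’s specific measures” concrete, here is a minimal sketch of one outcomes measure computed over aggregated records (the measure, field names, and threshold are invented for illustration, not Wellcentive’s implementation):

```python
# Illustrative quality-measure calculation; field names are hypothetical.
patients = [
    {"id": 1, "diabetic": True,  "last_hba1c": 6.9},
    {"id": 2, "diabetic": True,  "last_hba1c": None},   # no test on record
    {"id": 3, "diabetic": False, "last_hba1c": None},
    {"id": 4, "diabetic": True,  "last_hba1c": 9.4},
]

# Denominator: diabetic patients; numerator: those with HbA1c in control (<8.0).
denominator = [p for p in patients if p["diabetic"]]
numerator = [p for p in denominator
             if p["last_hba1c"] is not None and p["last_hba1c"] < 8.0]

rate = len(numerator) / len(denominator) * 100
print(f"HbA1c-in-control rate: {rate:.0f}% "
      f"({len(numerator)}/{len(denominator)} eligible patients)")
```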

 

In order to support both employed and affiliated physicians, and to effectively manage your network and referrals, an organization must demonstrate its value to healthcare providers. Organizations that do this successfully are best positioned to engage and align with their healthcare providers. This means providing community-wide solutions for value-based care delivery, including technology and innovation, transformation services and support, care coordination processes, referral management, and savvy representation with employers and payers, grounded in experience and accurate insight into population health management and risk.

 

To effectively manage organizational risk and utilization patterns, it is imperative to optimize episodic and longitudinal risk, which requires applying vetted algorithms to your patient populations using a high-quality data set. To understand differences in risk and utilization patterns, you need to aggregate and normalize data from various clinical and administrative sources, and then ensure that the data quality is as high as possible. You must own your data and processes to be successful. And importantly, do not rely entirely on data received from payers.
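A toy sketch of that aggregate-and-normalize step, merging records keyed on a patient identifier (the schemas and field names are invented for illustration):

```python
# Hypothetical records from two source systems with inconsistent schemas.
ehr_records = [{"patient_id": "P1", "dob": "1954-03-02", "dx": ["E11.9"]}]
claims_records = [{"member": "P1", "birth_date": "03/02/1954", "icd10": ["E11.9", "I10"]}]

def normalize_claims(rec):
    """Map a claims-shaped record onto the EHR-shaped schema."""
    month, day, year = rec["birth_date"].split("/")
    return {"patient_id": rec["member"],
            "dob": f"{year}-{month}-{day}",
            "dx": rec["icd10"]}

# Aggregate: union the diagnosis codes per patient across sources.
merged = {}
for rec in ehr_records + [normalize_claims(r) for r in claims_records]:
    entry = merged.setdefault(rec["patient_id"], {"dob": rec["dob"], "dx": set()})
    entry["dx"].update(rec["dx"])

print(merged)   # {'P1': {'dob': '1954-03-02', 'dx': {'E11.9', 'I10'}}}
```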

 

It is also important to consider the implementation of care management programs to improve individual patient outcomes. More and more organizations are creating care management initiatives for improving outcomes during transitions of care and for complicated, chronically ill patients. These initiatives can be very effective.  It is important to leverage technology, innovation and processes across the continuum of care, while encompassing both primary and specialty care providers and care teams in the workflows. Accurate insight into your risk helps define your areas of focus. A scheduled, trended outcomes report can effectively identify what’s working and where areas of improvement remain.

 

Finally, your organization can ensure success with value-based reimbursement when the transition is navigated correctly. The shift to value-based reimbursement is a critical and complicated transformation—oftentimes a reinvention—of an organization. Ultimately, it boils down to leadership, experience, technology and commitment. The key to success is working with team members, consultants and vendor partners who understand the myriad details and programs, and who thrive in a culture of communication, collaboration, execution and accountability.

 

Whether it’s PCMH or PCMH-N, PQRS or GPRO, CIN or ACO, PFP or DSRIP, TCM or CCM, HEDIS or NQF, ACGs or HCCs, care management or provider engagement, governance or network tiering, or payer or employer contracting, you can find partners with the right experience to match your organization’s unique needs. Because much is at stake, it is necessary to ensure that you partner with the very best to help navigate your transition to value-based care.

 

Justin Barnes is a corporate, board and policy advisor who regularly appears in journals, magazines and broadcast media outlets speaking on national leadership in healthcare and health IT. Barnes is also host of the weekly syndicated radio show, “This Just In.”

 

Mason Beard is Co-Founder and Chief Product Officer for Wellcentive. Wellcentive delivers population health solutions that enable healthcare organizations to focus on high quality care, while maximizing revenue and transforming to support value-based models.

Read more >

Make Your Data Centre Think for Itself

Wouldn’t it be nice if your data centre could think for itself and save you some headaches? In my last post, I outlined the principle of the orchestration layer in the software-defined infrastructure (SDI), and how it’s like the brain controlling your data centre organism. Today, I’m digging into this idea in a bit more detail, looking at the neurons that pass information from the hands and feet to the brain, as it were. In data centre terms, this means the telemetry that connects your resources to the orchestration layer.

 

Even the most carefully designed orchestration layer will only be effective if it can get constant, up-to-date, contextual information about the resources it is controlling: How are they performing? How much power are they using? What are their utilisation levels, and are there any bottlenecks due to latency issues? And so on. Telemetry provides this real-time visibility by tracking resources’ physical attributes and sending the intelligence back to the orchestration software.

 

Let me give you an example of this in practice. I call it the ‘noisy neighbour’ scenario. Imagine we have four virtual machines (VMs) running on one server, but one of them is hogging a lot of the resource and this is impacting the performance of the other three. Intel’s cache monitoring telemetry on the server can report this right back to the orchestration layer, which will then migrate the noisy VM to a new server, leaving the others in peace. This is real-time situational feedback informing how the whole organism works. In other words, it’s the Watch, Decide, Act, Learn cycle that I described in my previous blog post – doesn’t it all fit together nicely?
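Here is a schematic of that Watch, Decide, Act cycle (a sketch only: the occupancy numbers are simulated and the migration call is a stand-in for whatever your orchestration layer exposes; real deployments read Intel’s cache monitoring counters through the platform’s monitoring interface):

```python
# Schematic Watch-Decide-Act loop for the noisy-neighbour scenario.
CACHE_OCCUPANCY_LIMIT = 0.60   # policy: flag a VM using >60% of last-level cache

def watch():
    """Simulated telemetry: share of last-level cache used by each VM."""
    return {"vm1": 0.72, "vm2": 0.11, "vm3": 0.09, "vm4": 0.08}

def decide(occupancy_by_vm):
    """Pick out VMs hogging cache beyond the policy limit."""
    return [vm for vm, share in occupancy_by_vm.items()
            if share > CACHE_OCCUPANCY_LIMIT]

def act(noisy_vms):
    """Stand-in for the orchestrator's live-migration call."""
    for vm in noisy_vms:
        print(f"migrating {vm} to a quieter host")

act(decide(watch()))   # -> migrating vm1 to a quieter host
```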

 

Lessons from History…

 

Of course, successful telemetry relies on having the right hardware to transmit it. Just think about another data centre game changer of the recent past: virtualisation. Back in the early 2000s, demand for this technology was growing fast, but the software-only solutions available put tremendous overhead demand on the hardware behind them, which was not an efficient way to go about it. So, we at Intel helped build in more efficiency with solutions like Intel® Virtualization Technology, more memory, greater addressability and huge performance gains. Today, we’re applying that same logic to remove SDI bottlenecks. Another example is Intel® Intelligent Power Node Manager, a hardware engine that works with management software to monitor and control power usage at the server, rack and row level, allowing you to set usage policies for each.

However, we’re not just adding telemetry capabilities at the chip level and boosting hardware performance, but also investing in high-bandwidth networking and storage technologies.

 

…Applied to Today’s Data Centres

 

With technologies already in the market to enable telemetry within the SDI, there are a number of real-life use cases we can look to for examples of how it can help drive time, cost and labour out of the data centre. Here are some examples of how end-user organizations are using Intelligent Power Node Manager to do this:

 

 

[Infographic: how the orchestration layer uses Intelligent Power Node Manager in the data centre]

Another potential use case for the technology is to reduce reliance on intelligent power strips. You could also throttle back server performance to extend the life of your uninterruptible power supply (UPS) in the event of a power outage, helping lower the risk of service downtime, something no business can afford.
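A toy calculation shows why throttling helps in that UPS scenario (every wattage and the UPS capacity below are invented; real caps are set through the management software that fronts Node Manager):

```python
# Toy numbers showing how a lower power cap stretches UPS runtime.
UPS_CAPACITY_WH = 5_000   # stored energy in the UPS (invented)
SERVERS = 10
NORMAL_CAP_W = 400        # per-server cap on utility power (invented)
ON_UPS_CAP_W = 250        # throttled cap once utility power is lost (invented)

def runtime_minutes(cap_watts: int) -> float:
    """Minutes of UPS runtime if every server draws its full cap."""
    return UPS_CAPACITY_WH / (cap_watts * SERVERS) * 60

print(f"full power: ~{runtime_minutes(NORMAL_CAP_W):.0f} min on UPS")
print(f"throttled:  ~{runtime_minutes(ON_UPS_CAP_W):.0f} min on UPS")
```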

 

So, once you’ve got your data centre functioning like a highly evolved neural network, what’s next? Well, as data centre technologies continue to develop, the extent to which you can build agility into your infrastructure is growing all the time. In my next blog, I’m going to look into the future a bit and explore how silicon photonics can help you create composable architectures that will enable you to build and reconfigure resources on the fly.

 

To pass the time until then, I’d love to hear from any of you that have already started using telemetry to inform your orchestration layer. What impact has it had for you, and can you share any tips for those just starting out?

 

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance

 

You can find my earlier blogs on data centres here:

Is Your Data Centre Ready for the IoT Age?

Have You Got Your Blueprints Ready?

Are You Smarter than a Data Centre?

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

 

*Other names and brands may be claimed as the property of others.

Read more >

Server Refresh Can Reduce Total Cost of Ownership

More bang for your buck. Essentially that is the driving force behind my team in Intel IT. Our IT department is on a tight budget, just like most enterprise IT departments. Therefore, return on investment and total cost of ownership are important considerations for deciding when to upgrade the servers that run our silicon design workloads. As a principal engineer in infrastructure engineering, I direct the comparison of the various models of each new generation of Intel® CPU to those of previous generations of processors. (We may re-evaluate the TCO of particular models between generations, if price points significantly change.) We evaluate all the Intel® Xeon® processor families – Intel Xeon processor E3 family, Intel Xeon processor E5 family, and Intel Xeon processor E7 family – each of which have different niches in Intel’s silicon design efforts.

 

We use industry benchmarks and actual electronic design automation (EDA) workloads in our evaluations, which go beyond performance to address TCO: we include throughput, form factor (density), energy efficiency, cost, software licensing costs, and other factors. In many cases over the years, one model might turn out better in terms of price per watt, but its performance is slower, or its software licensing fees are triple those of a different model.
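As a stripped-down sketch of that kind of comparison (every figure below is a placeholder rather than Intel IT data; our real model also weighs density, memory, and other factors):

```python
# Stripped-down cost-per-job comparison in the spirit described above.
# All figures are placeholders, not Intel IT data.
servers = {
    "candidate_a": {"price": 30_000, "watts": 600, "jobs_per_day": 120,
                    "annual_license": 12_000},
    "candidate_b": {"price": 24_000, "watts": 450, "jobs_per_day": 80,
                    "annual_license": 36_000},
}

YEARS = 4          # assumed service life
KWH_PRICE = 0.10   # assumed energy price, $/kWh

def cost_per_job(s):
    """Purchase + energy + licensing over the service life, per job run."""
    energy = s["watts"] / 1000 * 24 * 365 * YEARS * KWH_PRICE
    total = s["price"] + energy + s["annual_license"] * YEARS
    return total / (s["jobs_per_day"] * 365 * YEARS)

for name, s in servers.items():
    print(f"{name}: ${cost_per_job(s):.2f} per job")
```

Note how licensing can dominate: the cheaper, lower-power box can still lose badly once software fees per useful job are counted, which is exactly why we weigh these factors together rather than performance alone.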

 

In silicon design, back-end design jobs are time critical and require servers with considerable processing power, large memory capacity, and memory bandwidth. For these types of jobs, the bottleneck has historically been memory, not CPU cycles; with more memory, we can run more jobs in parallel. The Intel Xeon processor E7-8800 v3 product family offers new features that can increase EDA throughput, including up to 20% more cores than the previous generation and DDR4 memory support for higher memory bandwidth. A server based on the Intel Xeon processor E7-8800 v3 can take either DDR3 (thereby protecting existing investment) or DDR4 DIMMs – and supports memory capacity up to 6 TB per 4-socket server (with 64 GB DIMMs) to deliver fast turnaround time for large silicon design jobs.

 

We recently completed an evaluation of the Intel Xeon processor E7-8800 v3 product family, as documented in our recent brief. According to our test results, the Intel Xeon processor E7 v3-based server delivers excellent gains in performance and supports larger models, faster iterations, and greater throughput than was possible with the previous generation of the processor. These improvements can accelerate long-running silicon design jobs and shorten the time required to bring new silicon design to market. These improvements can also reduce data center footprint and help control operational and software licensing costs by achieving greater throughput using fewer systems than were necessary with previous generations of processors.

 

Our tests used a large multi-threaded EDA application operating on current Intel® silicon design data sets. The results show that an Intel Xeon processor E7-8890 v3-based server completed a complex silicon design workload 1.18x faster than the previous-generation Intel Xeon processor E7-4890 v2-based server and 17.04x faster than a server based on the Intel® Xeon® processor 7100 series (Intel Xeon processor 7140M).

 

The Intel Xeon processor E7-8800 v3 product family also supports the Intel® Advanced Vector Extensions 2 (Intel® AVX2) instruction set. Benefits of Intel AVX2 include doubling the number of FLOPS (floating-point operations per second) per clock cycle, 256-bit integer instructions, floating-point fused multiply-add instructions, and gather operations. While our silicon design jobs do not currently use AVX2 – mostly because the design cycles can take over a year to complete and during that time we cannot modify the plan of record (POR) EDA tools and infrastructure for those servers – we anticipate that Intel AVX2 can provide a performance boost for many technical applications.
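To see what that doubling means for peak throughput, a simplified calculation helps (the core count and clock below are hypothetical, and a fused multiply-add is counted as two floating-point operations):

```python
# Simplified peak double-precision FLOPS, with and without AVX2's FMA.
CORES = 18        # hypothetical core count
FREQ_GHZ = 2.5    # hypothetical sustained clock

# AVX (256-bit, no FMA): 4 DP lanes * (1 add + 1 multiply per cycle) = 8 FLOPs/cycle.
# AVX2 with FMA:         4 DP lanes * 2 FMA units * 2 FLOPs each     = 16 FLOPs/cycle.
FLOPS_PER_CYCLE = {"AVX": 8, "AVX2 + FMA": 16}

for isa, per_cycle in FLOPS_PER_CYCLE.items():
    gflops = CORES * FREQ_GHZ * per_cycle
    print(f"{isa}: {gflops:,.0f} peak GFLOPS per socket")
```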

 

I’d like to hear from other IT professionals – are you considering refreshing? If you’ve already refreshed, can you share your observed benefits and concrete ROI? What best-known methods have you developed and what are some remaining pain points? If you have any questions, I’d be happy to answer them and pass on our own best practices in deploying these servers. Please share your thoughts and insights with me – and your other IT colleagues – by leaving a comment below. Join our conversation here in the IT Peer Network.

Read more >