Recent Blog Posts

Is Your Data Center Ready for the IoT Age?

How many smartphones are there in your household? How about laptops, tablets, PCs? What about other gadgets like Internet-enabled TVs or smart room temperature sensors? Once you start to think about it, it’s clear that even the least tech-savvy of us has at least one of these connected devices. Each device is constantly sending or receiving data over the Internet, data which must be handled by a server somewhere. Without the data centres containing these servers, the devices (or the apps they run) are of little value. Intel estimates that for every 400 smartphones, one new server is needed. That’s about one server per street, I’d say.

 

We’re approaching 2 billion smartphones in service globally, each with (Intel estimates) an average of 26 apps installed. We check our phones an average of 110 times per day, and on top of that, each app needs to connect to its data centre around 20 times daily for updates. All of this adds up to around one trillion data centre accesses every day (2 billion phones × 26 apps × 20 connections is already more than a trillion). And that’s just for smartphones. Out-of-home IoT devices like wearable medical devices or factory sensors need even more server resource.

 

Sounds like a lot, right? Actually, if we were watching a movie about the Internet, it’d be an epic and we’d still just be in the opening credits. Only about 40 percent of the world’s population is connected today, so there’s a huge amount of story yet to tell as more and more people come to use, like and expect on-demand, online services. With use of these applications and websites set to go up, and connected devices expected to reach 50 billion by 2020, your data centre is a critically important piece of your business.

 

Here Comes the Hybrid Cloud

 

What fascinates me about all this is the impact it’s going to have on the data centre and how we manage it. Businesses are finding that staggering volumes of data and demand for more complex analytics mean that they must be more responsive than ever before. They need to boost their agility and, as always, keep costs down – all in the face of this tsunami of connected devices and data.

 

The cost point is an important one. It’s common knowledge that for a typical organisation, 75 percent of the IT budget goes on operating expenditure. In a bid to balance this cost/agility equation, many organizations have begun to adopt a hybrid cloud approach.

 

In the hybrid model, public cloud or SaaS is used to provide some of the more standard business services – such as HR, expenses or CRM systems – but also to provide overspill capacity in times of peak demand. In turn, the private cloud hosts the organization’s most sensitive or business-critical services, typically those delivering true business-differentiating capabilities.

 

This hybrid cloud model may mean you get leading edge, regularly updated commodity services which consume less of your own valuable time and resource. However, to be truly effective your private cloud also needs to deliver highly efficient cost/agility dynamics – especially when faced with the dawning of the IoT age and its associated demands.

 

For many organizations, the evolution of their data centre(s) to deliver on the promise of private cloud is a journey they’ve been on for a number of years, but one that’s brought near-term benefits along the way. In fact, each stage in the journey should help drive time, cost and labour out of running your data centre.

 

The typical journey can be viewed as a series of milestones:

 

  • Stage 1: Standardization. Consolidating storage, networking and compute resources across your data centres can create a simplified infrastructure that delivers cost and time savings. With a standardized operating system, management tools and development platform, you can reduce the tools, skills, licensing and maintenance needed to run your IT.
  • Stage 2: Virtualization. By virtualising your environment, you enable optimal use of compute resources, cutting the time needed to build new environments and eliminating the need to buy and operate one whole server for each application.
  • Stage 3: Automation. Automated management of workloads and compute resource pools increases your data centre agility and helps save time. With real-time environment monitoring and automated provisioning and patching, you can do more with less.
  • Stage 4: Orchestration. Highly agile, policy-based, rapid and intelligent management of cloud resource pools can be achieved with full virtualization of compute, storage and networking into software-defined resource pools (a simple sketch of such a policy loop follows this list). This frees up your staff to focus on higher-value, non-routine assignments.
  • Stage 5: Real-time Enterprise. Your ultra-agile, highly optimized, real-time management of federated cloud resources enables you to meet business-defined SLAs while monitoring your public and private cloud resources in real time. Fully automated management and composable resources enable your IT talent to focus on strategic imperatives for the business.
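
What might that policy-based management look like in practice? Here is a minimal sketch of a policy-driven orchestration loop, assuming a hypothetical ResourcePool interface; a real deployment would use its own cloud management platform’s APIs and telemetry:

    # Minimal sketch of a policy-driven orchestration loop (hypothetical API).
    POLICY = {"min_nodes": 2, "max_nodes": 20,
              "target_low": 0.40, "target_high": 0.75}

    class ResourcePool:
        """Stand-in for a software-defined pool of compute resources."""
        def __init__(self, nodes=4):
            self.nodes = nodes

        def utilization(self):
            # A real controller would query telemetry; this is a stub.
            return 0.82

        def scale_to(self, nodes):
            print(f"Scaling pool from {self.nodes} to {nodes} nodes")
            self.nodes = nodes

    def reconcile(pool, policy):
        """One pass of the loop: compare observed load to the policy."""
        used = pool.utilization()
        if used > policy["target_high"] and pool.nodes < policy["max_nodes"]:
            pool.scale_to(pool.nodes + 1)  # draw more capacity from the pool
        elif used < policy["target_low"] and pool.nodes > policy["min_nodes"]:
            pool.scale_to(pool.nodes - 1)  # hand capacity back to the pool

    reconcile(ResourcePool(), POLICY)  # in production this runs continuously

The point is not the scaling arithmetic but the shift in responsibility: the policy is declared once, and the software, not an administrator, keeps the infrastructure in line with it.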



A typical reaction from organizations first considering the journey is “That sounds great!” However, this is quickly followed by two questions, the first being “Where do I begin?”


Well, let’s start with the fact that it’s hard to build a highly efficient cloud platform that will enable real-time decision making using old infrastructure. The hardware really does matter, and it needs to be modern, efficient and regularly refreshed – evergreen, if you will. If you don’t do this, you could be losing an awful lot of efficiency.

 

Did you know, for example, that according to a survey conducted by a Global 100 company in 2012, 32 percent of its servers were more than four years old? These servers delivered just four percent of total server performance capability, yet they accounted for 65 percent of total energy consumption. Clearly, there are better ways to run a data centre.
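
It is worth pausing on those numbers. A quick back-of-the-envelope calculation, using only the percentages quoted above, shows the performance-per-watt gap between the old and new portions of that fleet:

    # Survey figures quoted above: servers older than four years delivered
    # 4% of total performance but consumed 65% of total energy.
    old_perf, old_energy = 0.04, 0.65
    new_perf, new_energy = 1 - old_perf, 1 - old_energy   # 0.96 and 0.35

    old_ratio = old_perf / old_energy   # ~0.06 performance per unit of energy
    new_ratio = new_perf / new_energy   # ~2.74
    print(f"Newer servers: ~{new_ratio / old_ratio:.0f}x the performance per watt")
    # -> Newer servers: ~45x the performance per watt

On those figures, the newer machines were roughly 45 times more efficient, which is exactly why an evergreen refresh policy matters.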

 

It’s All About Meeting Business Expectations

 

And as for that second question? You guessed it: “How can we achieve Stages 4 and 5?” This is a very real consideration, even for the most innovative of organisations. Even those companies considered leaders in their private cloud build-out are generally only at Stage 3: Automation, and experimenting with how to tackle Stage 4: Orchestration.

 

The key thing to remember is that your online services, websites and apps run the show. They are a main point of contact with your customers (both internal and external), so they must run smoothly and expectations must be met. This means your private cloud must be elastic – flexing on demand as the business requires. Responding to business needs in weeks to months is no longer acceptable as the clock speed of business continues to ramp. Hours to minutes to seconds is the new order.

 

Time for a New Data Centre Architecture

 

I believe the best way to achieve this hyper-efficient yet agile private cloud model is to shift from the hardware-defined data centre of today to a new paradigm that is defined by the software: the software-defined infrastructure (SDI).

 

Does this mean I’m saying the infrastructure doesn’t matter? Not at all, and we’ll come to this later in this blog series. I’ll be delving into the SDI architecture model in more detail, looking at what it is, Intel’s role in making it possible, and how it’ll enable your private cloud to reach the Holy Grail – Stage 5: Real-time Enterprise.

 

In the meantime, I’d love to hear from you. How is your organization responding to the connected device deluge, and what does your journey to the hybrid cloud look like?


To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

Read more >

Top 10 Predictions: Evolution of Cybersecurity in 2015

Cybersecurity is poised for a notorious year. The next 12 to 18 months will see greater, bolder, and more complex attacks emerge. This year’s installment of the top computer security predictions highlights how threats are advancing and outpacing defenders, and how the landscape is becoming more professional and organized. Although the view of our cybersecurity future is obscured, one thing is for certain: We’re in for an exciting ride.

 

In this blog I’ll discuss my top 10 predictions for Cybersecurity in 2015.

 

Top Predictions:

 

1. Cyber warfare becomes legitimate


Governments will leverage their professional cyber warfare assets as a recognized and accepted tool for governmental policy. For many years governments have been investing in cyber warfare capabilities, and these resources will begin to pay dividends.


2. Active government intervention


Governments will be more actively involved in responding to major hacking events affecting their citizens. Expect government responses and reprisals to foreign nation-state attacks, which ordinary business enterprises are in no position to counter. This is a shift in policy, both timely and necessary, to protect the public’s ability to enjoy life under the protection of a common defense.

3. Security talent in demand


The demand for security professionals is at an all-time high, but the workforce pool is largely barren of qualified candidates. The best talent has been scooped up. A lack of security workforce talent, especially in leadership roles, is a severe impediment to organizations in desperate need of building and staffing in-house teams. We will see many top-level security professionals jump between organizations, lured by better compensation packages. Academia will struggle to refill the talent supply in order to meet the demand.


4. High profile attacks continue


High-profile targets will continue to be victimized. As long as the return is high for attackers while the effort remains reasonable, they will continue to target prominent organizations. No organization, regardless of how large, is immune. Expect high-profile companies, industries, government organizations, and people to fall victim to theft, hijacking, forgery, and impersonation.


5. Attacks get personal


We will witness an expansion in attacker strategies in the next year, with attackers acting in ways that put individuals directly at risk. High-profile individuals will be threatened with embarrassment through the exposure of sensitive healthcare records, photos, online activities, and communications. Everyday citizens will be targeted with malware on their devices to siphon bank information, steal cryptocurrency, and hold their data for ransom. For many people this year, it will feel like they are being specifically targeted for abuse.


6. Enterprise risk perspectives change


Enterprises will overhaul how they view risks. Serious board-level discussions will be commonplace, with a focus on awareness and responsibility. More attention will be paid to the security of products and services, with the protection of privacy and customer data beginning to supersede “system availability” priorities. Enterprise leaders will adapt their perspectives to focus more attention on security as a critical aspect of sustainable business practices.

7. Security competency and attacker innovation increase


The security and attacker communities will make significant strides forward this year. Attackers will continue to maintain the initiative and succeed with many different types of attacks against large targets. Cybercrime will grow quickly in 2015, outpacing defenses and spurring smarter security practices across the community. Security industry innovation will advance as the next wave of investments emerges and begins to gain traction in protecting data centers and clouds and in identifying attackers.


8. Malware increases and evolves


Malware numbers will continue to skyrocket, increase in complexity, and expand more heavily beyond traditional PC devices. Malicious software will continue to swell at a relentless pace, averaging over 50 percent year-over-year growth. The rapid proliferation and rising complexity of malware will create significant problems for the security industry. The misuse of stolen certificates will compound the problems, and the success of ransomware will only reinforce more development by criminals.


9. Attacks follow technology growth


Attackers move into new opportunities as technology broadens to include more users, devices, data, and evolving supporting infrastructures. As expansion occurs, there is a normal lag for the development and inclusion of security. This creates a window of opportunity. Where the value of data, systems, and services increases, threats surely follow. Online services, phones, the IoT, and cryptocurrency are being heavily targeted.


10. Cybersecurity attacks evolve into something ugly


Cybersecurity is constantly changing and the attacks we see today will be superseded by more serious incursions in the future. We will witness the next big step in 2015, with attacks expanding from denial-of-service and data theft activities to include more sophisticated campaigns of monitoring and manipulation. The ability to maliciously alter transactions from the inside is highly coveted by attackers.


Welcome to the next evolution of security headaches.

I predict 2015 to be an extraordinary year in cybersecurity. Attackers will seek great profit and power, while defenders will strive for stability and confidence. In the middle will be a vicious knife fight between aggressors and security professionals. Overall, the world will take security more seriously and begin to act in more strategic ways. The intentional and deliberate protection of our digital assets, reputation, and capabilities will become a regular part of life and business.

 

If you’d like to check out my video series surrounding my predictions, you can find more here.

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts – https://communities.intel.com/people/MatthewRosenquist/blog/2015/03/04/why-ransomware-will-rise-in-2015

LinkedIn: http://linkedin.com/in/matthewrosenquist

Read more >

Accelerating Business Intelligence and Insights with Software Optimized for the Intel® Xeon® Processor E7 v3 Family

On May 5, 2015, Intel Corporation announced the release of its highly anticipated Intel® Xeon® processor E7 v3 family.  One key area of focus for the new processor family is that it is designed to accelerate business insight and optimize … Read more >

The post Accelerating Business Intelligence and Insights with Software Optimized for the Intel® Xeon® Processor E7 v3 Family appeared first on Intel Software and Services.

Read more >

Pharma Sales: The 90-Second Rule

I have just spent the better part of two weeks involved in the training of a new 50-strong sales team. Most of the team were experienced salespeople but very inexperienced in pharmaceutical sales. They had a proven record in B2B sales, but only 30 percent of the team had previously sold pharmaceutical or medical device products to health care professionals (HCPs). Clearly, after the logistical and bureaucratic aspects of the training had been completed, most of the time was spent training the team on the medical background, disease state, product specifics and treatment landscape/competitor products.

 

Preparing the team for all eventualities and every possible question or objection they might get from HCPs was key to making sure that, on the day of product launch, they would be competent to go out into their new territories and speak with any potential customer. With this particular product, it was equally important for the team to be in a position to speak with doctors, nurses and pharmacists.

 

The last part of the training was to certify each of the sales professionals and make sure that they not only delivered the key messages but could also answer most of the questions HCPs would fire at them. To do this, the sales professionals were allowed 10 minutes to deliver their presentation to trainers, managers and medical personnel. The assessors were randomly assigned questions/objections to be addressed during the presentation.

 

The question remains: does this really prepare the salesperson for that first interaction with a doctor or other HCP? Experience tells us that most HCPs are busy people who allow little or no time for pharmaceutical sales professionals in their working day. The 90 seconds that a sales professional gets with most potential customers is not a fixed allotment. Remember, doctors are used to getting the information they need to make decisions that benefit their patients by asking the questions themselves. So, starting the interaction with an open question is quite simply the worst thing to do, as most doctors will take this opportunity to back out and say they do not have time.

 

The trick is to get the doctor to ask the first question (that is what they spend their lives doing, and they are good at it), and to do so within the first 10-15 seconds. Making a statement that shows you understand their needs and have something beneficial to tell them is the way you will get “mental access.” Once the doctor is engaged in a discussion, the 90-second call will quickly extend to three or more minutes. Gaining “mental access” means showing the doctor that you have a solution to a problem in their clinical practice and the evidence to support your key message/solution. This has to be done in a way that lets the doctor see a potential benefit, most importantly, for their patients. To do this, the sales professional needs to really understand the clinical practice of the person they are seeing (i.e., they have done their pre-call planning) and have the materials available to instantly support their message/solution.

 

The digital visual aid is the single best means of providing this supporting information and data, as whatever direction the sales professional needs to take should be accessible within one or two touches of the screen. Knowing how to navigate the digital sales aid is essential, as this is where the HCP either stays engaged or finds a reason to move on.

 

What questions do you have? Agree or disagree?

Read more >

Should You Take the High Road or the Low Road to SDI?


When I started my career in IT, infrastructure provisioning involved a lot of manual labor. I installed the hardware, installed the operating systems, connected the terminals, and loaded the software and data to create a single stack supporting a specific application. It was common for one person to carry out all of these tasks on a single system, and an enterprise had very few systems in total.

 

Now let’s fast forward to the present. In today’s world, thanks to the dynamics of Moore’s Law and the falling cost of compute, storage, and networking, enterprises now have hundreds of applications that support the business. Infrastructure and applications are typically provisioned by teams of domain specialists—networking admins, system admins, storage admins, and software folks—each of whom puts together a few pieces of a complex technology puzzle to enable the business.

 

While it works, this approach to infrastructure provisioning has some obvious drawbacks. For starters, it’s labor-intensive, requiring too many hands to support; it’s costly in both people and software; and it can be rather slow from start to finish. While the first two matter for TCO, it is the third that I have heard the most about… just too slow for the pace of business in the era of fast-moving cloud services.

 

How do you solve this problem? That is what software-defined infrastructure is all about. With SDI, compute, network, and storage resources are deployed as services, potentially reducing deployment times from weeks to minutes. Once services are up and running, hardware is managed as a set of resources, and software has the intelligence to manage the hardware to the advantage of the supported workloads. The SDI environment automatically corrects issues and optimizes performance to ensure you can meet the service levels and security controls your business demands.
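
To make “resources deployed as services” concrete, here is a hedged sketch of a declarative request to an SDI control layer. The request format and the provision() helper are hypothetical illustrations, not a real product API; the point is that you state what the workload needs and the software decides which pooled hardware will deliver it:

    # Hypothetical desired-state request to an SDI control layer.
    request = {
        "service": "order-processing",
        "compute": {"vcpus": 16, "memory_gb": 64},
        "storage": {"capacity_gb": 500, "tier": "ssd"},
        "network": {"bandwidth_gbps": 10, "isolation": "private"},
        "sla":     {"availability": "99.95%"},
    }

    def provision(req):
        """Stand-in for a real SDI controller, which would schedule the
        request onto pooled hardware and keep reconciling it against
        the declared SLA."""
        print(f"Provisioning '{req['service']}' from pooled resources...")

    provision(request)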

 

So how do you get to SDI? My current response is that SDI is a destination that sits at the summit for most organizations. At the simplest level, there are two routes to this IT nirvana: a “buy it” high road and a “build-it-yourself” low road. I call the former the high road because it’s the easiest way forward; it starts you off closer to the summit. The low road has lots of curves and uphill stretches on the way to the higher plateau of SDI. Each of these approaches has its advantages and disadvantages.

 

The high road, or the buy-the-packaged-solution route, is defined by system architectures that bring together all the components for an SDI into a single deployable unit. Service providers who take you on the high road leverage products like Microsoft Cloud Platform System (CPS) and VMware EVO: RAIL to create standalone platform units with virtualized compute, storage, and networking resources.

 

On the plus side, the high road offers faster time to market for your SDI environment, a tested and certified solution, and the 24×7 support most enterprises are looking for: the things you can expect from a solution delivered by a single vendor. On the downside, the high road locks you into certain choices of hardware and software components and forces you to rely on the vendor for system upgrades and technology enhancements, which might happen faster with other solutions but will take place on the vendor’s timeline. This approach, of course, can be both Opex and Capex heavy, depending on the solution.

 

The low road, or the build-it-yourself route, gives you the flexibility to design your environment and select your solution components from the portfolios of various hardware and software vendors and from open source. You gain the agility and technology choices that come with an environment that is not defined by a single vendor. You can pick your own components and add new technologies on your timelines—not your vendor’s timelines—and probably enjoy lower Capex along the way, although at the expense of more internal technical resources.

 

Those advantages, of course, come with a price. The low road can be a slower route to SDI, and it can be a drain on your staff resources as you engage in all the heavy lifting that comes with a self-engineered solution set. Also, given the pace of innovation you see today in this area, it is quite possible that you never really achieve the vision of SDI because new choices keep arriving. You have to design your solution; procure, install, and configure the hardware and software; and add the platform-as-a-service (PaaS) layer. All of that just gets you to a place where you can start using the environment. You still haven’t optimized the system for your targeted workloads.

 

In practice, most enterprises will take what amounts to a middle road. This hybrid route takes the high road to SDI with various detours onto the low road to meet specific business requirements. For example, an organization might adopt key parts of a packaged solution but then add its own storage or networking components or decide to use containers to implement code faster.

 

Similarly, most organizations will get to SDI in a stepwise manner. That is to say, they will put elements of SDI in place over time—such as storage and network virtualization and IT automation—to gain some of the agility that comes with an SDI strategy. I will look at these concepts in an upcoming post that explores an SDI maturity model.

Read more >

The Path to Ethernet Standards and the Intel Ethernet, NBASE-T

The “Intel Ethernet” brand symbolizes the decades of hard work we’ve put into improving performance, features, and ease of use of our Ethernet products.

 

What Intel Ethernet doesn’t stand for, however, is any use of proprietary technology. In fact, Intel has been a driving force for Ethernet standards since we co-authored the original specification more than 40 years ago.

 

At Interop Las Vegas last week, we again demonstrated our commitment to open standards by taking part in the NBASE-T Alliance public multi-vendor interoperability demonstration. The demo leveraged our next generation single-chip 10GBASE-T controller supporting the NBASE-T intermediate speeds of 2.5Gbps and 5Gbps (see a video of that demonstration here).


 

Intel joined the NBASE-T Alliance in December 2014 at the highest level of membership, which allows us to fully participate in the technology development process, including sitting on the board and voting on changes to the specification.

 

The alliance, with its 33 members, is an industry-driven consortium that has developed a working 2.5GbE/5GbE specification that is the basis of multiple recent product announcements. Building on this experience, our engineers are now working diligently to develop the IEEE standard for 2.5G/5GBASE-T.

 

By first developing the technology in an industry alliance, vendors can have a working specification to develop products, and customers can be assured of interoperability.

 

The reason Ethernet has been so widely adopted over the past 40 years is its ability to adapt to new usage models. 10GBASE-T was originally defined to be backward compatible with 1GbE and 100Mbps, and required category 6a or category 7 cabling to reach 10GbE. Adoption of 10GBASE-T is growing very rapidly in the data center, and now we are seeing the need for more bandwidth in enterprise and campus networks to support next-generation 802.11ac access points, local servers, workstations, and high-end PCs.

 

Copper twisted pair has long been the cabling preference for enterprise data centers and campus networks, and most enterprises have miles and miles of this cable already installed throughout their buildings. In the past 10 years alone, about 70 billion meters of category 5e and category 6 cabling have been sold worldwide.


Supporting higher bandwidth connections over this installed cabling is a huge win for our customers. Industry alliances can be a useful tool to help Ethernet adapt, and the NBASE-T alliance enables the industry to address the need for higher bandwidth connections over installed cables.


Intel is the technology and market leader in 10GBASE-T network connectivity. I spoke about Intel’s investment in the technology in an earlier blog about Ethernet’s ubiquity.

 

We are seeing rapid adoption of our 10GBASE-T products in the data center, and now through the NBASE-T Alliance we have a clear path to address enterprise customers who need more than 1GbE. Customers are thrilled to hear that they can get 2.5GbE/5GbE over their installed Cat 5e copper cabling—making higher-speed networking between bandwidth-constrained endpoints achievable.

 

Ethernet is a rare technology in that it is both mature (originally defined in 1973, more than 40 years ago) and constantly evolving to meet new network demands. This has created an expectation among users that products will work the first time, even if they are based on brand new specifications. Our focus with Intel Ethernet products is to ensure that we implement solutions based on open standards and that these products seamlessly interoperate with products from the rest of the industry.

 

If you missed the NBASE-T demonstration at Interop, come see how it works at Cisco Live in June in San Diego.

Read more >

Demonstrating Commitment to iWARP Technology with Microsoft

By David Fair, Unified Networking Marketing Manager, Intel Networking Division

 

iWARP was on display recently in multiple contexts.  If you’re not familiar with iWARP, it is an enhancement to Ethernet based on an Internet Engineering Task Force (IETF) standard that delivers Remote Direct Memory Access (RDMA).

 

In a nutshell, RDMA allows an application to read or write a block of data from or to the memory space of another application, which can be in another virtual machine or even a server on the other side of the planet.  It delivers high bandwidth and low latency by bypassing the system software kernel, avoiding the interrupts and extra data copies that accompany kernel processing.
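
To picture those one-sided semantics, here is a minimal sketch of the shape of an RDMA read. The rdma object and its methods are hypothetical stand-ins (not the actual verbs API): memory is registered with the NIC, the peer shares a buffer address and access key, and the local side then reads remote memory with no remote CPU involvement:

    class StubRDMA:
        """Stand-in so the sketch runs; a real RDMA NIC does this work."""
        def register_memory(self, size): return bytearray(size)
        def post_read(self, buf, remote_addr, rkey, length): pass
        def wait_for_completion(self): pass

    def rdma_read(rdma, remote_addr, rkey, length):
        buf = rdma.register_memory(size=length)  # pin and register with the NIC
        rdma.post_read(buf, remote_addr=remote_addr, rkey=rkey, length=length)
        rdma.wait_for_completion()  # poll the completion queue: no kernel interrupt
        return buf

    # The peer's buffer address and key would arrive via an out-of-band exchange.
    data = rdma_read(StubRDMA(), remote_addr=0x7F00DEAD0000, rkey=0x1234, length=4096)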

 

A secondary benefit of kernel bypass is reduced CPU utilization, which is particularly important in cloud deployments. More information about iWARP has recently been posted to Intel’s website if you’d like to dig deeper.

 

Intel is planning to incorporate iWARP technology in future server chipsets and systems-on-a-chip (SoCs).  To emphasize our commitment and show how far along we are, Intel showed a demo using the RTL from that future chipset, running in FPGAs, with Windows* Server 2012 SMB Direct performing a boot and a virtual machine migration over iWARP.  Naturally it was slow – about 1 Gbps – since it was FPGA-based, but it demonstrated that our iWARP design is already very far along and robust.  (That’s Julie Cummings, the engineer who built the demo, in the photo with me.)

 


 

Jim Pinkerton, Windows Server Architect, from Microsoft joined me in a poster chat on iWARP and Microsoft’s SMB Direct technology, which scans the network for RDMA-capable resources and uses RDMA pathways to automatically accelerate SMB-aware applications.  With SMB Direct, no new software and no system configuration changes are required for system administrators to take advantage of iWARP.

 


 

Jim Pinkerton also co-taught the “Virtualizing the Network to Enable a Software Defined Infrastructure” session with Brian Johnson of Intel’s Networking Division.  Jim presented specific iWARP performance results in that session that Microsoft has measured with SMB Direct.

 

Lastly, the Non-Volatile Memory Express* (NVMe*) community demonstrated “remote NVMe,” made possible by iWARP.  NVMe is a specification for efficient communication with non-volatile memory like flash over PCI Express.  NVMe is many times faster than SATA or SAS, but like those technologies, it targets local communication with storage devices.  iWARP makes it possible to securely and efficiently access NVM across an Ethernet network.  The demo showed remote access occurring at the same throughput (~550K IOPS) with a latency penalty of less than 10 µs.**

 


 

Intel is supporting iWARP because it is layered on top of the TCP/IP industry standards.  iWARP goes anywhere the Internet goes and does it with all the benefits of TCP/IP, including reliable delivery and congestion management. iWARP works with all existing switches and routers and requires no special datacenter configurations to work. Intel believes the future is bright for iWARP.

 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others.

**Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.

Read more >

Redefining Sleep with Intel® Ready Mode Technology on Desktops

Did you know that many reptiles, marine mammals, and birds sleep with one side of their brains awake? This adaptation lets these creatures rest and conserve energy while remaining alert and instantly ready to respond to threats and opportunities. It also enables amazing behaviors such as allowing migrating birds to sleep while in flight. How’s that for maximizing productivity?

 

Taking a cue from nature, many new desktop PCs challenge how we define sleep with Intel® Ready Mode Technology. This innovation replaces traditional sleep mode with a low-power, active state that allows PCs to stay connected, up-to-date, and instantly available when not in use—offering businesses several advantages over existing client devices.

 

1. Always current, available, and productive

 

Users get the productivity boost of having real-time information ready the instant they are. Intel Ready Mode enhances third-party applications with the ability to constantly download or access the most current content, such as the latest email messages or media updates. It also allows some applications to operate behind the scenes while the PC is in a low-power state. This makes some interesting new timesaving capabilities possible—for example, facial recognition software that can authenticate and log in a user instantly upon their arrival.

 

In addition, when used with third-party apps like Dropbox*, Ready Mode can turn a desktop into a user’s personal cloud that stores the latest files and media from all of their mobile devices and makes them available remotely as well as at their desks. Meanwhile, IT can easily run virus scans, update software, and perform other tasks on user desktops during off hours, eliminating the need to interrupt users’ workdays with IT admin tasks.

 

2. Efficiently energized

 

PCs in Ready Mode consume only about 10 watts or less (compared to 30–60 watts when fully active) while remaining connected, current, and ready to go. That’s roughly the draw of an LED lamp with the brightness of a 60-watt bulb. Energy savings will vary, of course; but imagine how quickly a six-fold reduction in energy consumption would add up with, say, 1,000 users who actively use their PCs only a few hours a day.
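
To put rough numbers on that, here is a back-of-the-envelope estimate in the spirit of the example above. The 16 idle hours per day and the $0.12/kWh tariff are assumptions for illustration; the 10-watt and 60-watt figures come from the comparison above:

    # Assumed scenario: 1,000 desktops idle 16 h/day in Ready Mode (~10 W)
    # instead of staying fully awake (~60 W). Tariff is a placeholder.
    desktops      = 1000
    hours_per_day = 16
    watts_saved   = 60 - 10
    kwh_per_year  = desktops * hours_per_day * 365 * watts_saved / 1000
    print(f"{kwh_per_year:,.0f} kWh/year saved, "
          f"~${kwh_per_year * 0.12:,.0f} at $0.12/kWh")
    # -> 292,000 kWh/year saved, ~$35,040 at $0.12/kWh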

 

In the conference room, a desktop-powered display setup with Intel Ready Mode will wait patiently in an energy-sipping, low-power state when not in use, but will be instantly ready to go for meetings with the latest presentations and documents already downloaded. How much time would you estimate is wasted at the start of a typical meeting simply getting set up? Ten minutes? Multiply that by six attendees, and you have an hour of wasted productivity. Factor in all of your organization’s meetings, and it’s easy to see how Ready Mode can make a serious contribution to the bottom line.

 

3. Streamlined communication

 

Desktops with Intel Ready Mode help make it easier for businesses to move their landline or VoIP phone systems onto their desktop LAN infrastructures and upgrade from regular office phones to PC-based communication solutions such as Microsoft Lync*. Not only does this give IT fewer network infrastructures to support, but with Ready Mode, businesses can also deploy these solutions and be confident that calls, instant messages, and videoconference requests will go through even if a user’s desktop is idle. With traditional sleep mode, an idle PC is often an offline PC.

 

Ready to refresh with desktops featuring Intel® Ready Mode Technology today? Learn how at: www.intel.com/readymode

Read more >

FCC Holds Public Workshop on Broadband Consumer Privacy

By John Kincaide, Privacy and Security Policy Attorney at Intel The FCC’s (Federal Communications Commission) Wireline Competition and Consumer & Governmental Affairs Bureaus held a public workshop to explore the FCC’s role in protecting the privacy of consumers using broadband … Read more >

The post FCC Holds Public Workshop on Broadband Consumer Privacy appeared first on Policy@Intel.

Read more >