Since that brief aside on terminology is out of the way, let us continue with the kitchen analogy.
For the Intel® Xeon Phi™ family of products, and indeed for any processor, one of its cores is… Read more
How many smartphones are there in your household? How about laptops, tablets, PCs? What about other gadgets like Internet-enabled TVs or smart room temperature sensors? Once you start to think about it, it’s clear that even the least tech-savvy of us has at least one of these connected devices. Each device is constantly sending or receiving data over the Internet, data which must be handled by a server somewhere. Without the data centres containing these servers, the devices (or the apps they run) are of little value. Intel estimates that for every 400 smartphones, one new server is needed. That’s about one server per street I’d say.
We’re approaching 2 billion smartphones in service globally, each with (Intel estimates) an average of 26 apps installed. We check our phones an average of 110 times per day, and on top of that, each app needs to connect to its data centre around 20 times daily for updates. All of this adds up to around one trillion data centre accesses every day. And that’s just for smartphones. Out-of-home IoT devices like wearable medical devices or factory sensors need even more server resource.
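As a back-of-the-envelope check, the trillion-accesses figure follows directly from the rounded estimates above (these are the quoted estimates, not measured data):

```python
# Back-of-the-envelope check of the daily data centre access estimate.
# All figures are the rounded estimates quoted in the text, not measurements.
smartphones = 2_000_000_000      # ~2 billion smartphones in service globally
apps_per_phone = 26              # average installed apps (Intel estimate)
syncs_per_app_per_day = 20       # background update connections per app, daily

daily_accesses = smartphones * apps_per_phone * syncs_per_app_per_day
print(f"{daily_accesses:,}")     # 1,040,000,000,000 -> about one trillion a day
```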
Sounds like a lot, right? Actually, if we were watching a movie about the Internet, it’d be an epic and we’d still just be in the opening credits. Only about 40 percent of the world’s population is connected today, so there’s a huge amount of story yet to tell as more and more people come to use, like and expect on-demand, online services. With use of these applications and websites set to go up, and connected devices expected to reach 50 billion by 2020, your data centre is a critically important piece of your business.
What fascinates me about all this is the impact it’s going to have on the data centre and how we manage it. Businesses are finding that staggering volumes of data and demand for more complex analytics mean that they must be more responsive than ever before. They need to boost their agility and, as always, keep costs down – all in the face of this tsunami of connected devices and data.
The cost point is an important one. It’s common knowledge that for a typical organisation, 75 percent of the IT budget goes on operating expenditure. In a bid to balance this cost/agility equation, many organizations have begun to adopt a hybrid cloud approach.
In the hybrid model, public cloud or SaaS is used to provide some of the more standard business services, such as HR, expenses or CRM systems, but also to provide overspill capacity in times of peak demand. In turn, the private cloud hosts the organization’s most sensitive or business-critical services, typically those delivering true business-differentiating capabilities.
This hybrid cloud model may mean you get leading edge, regularly updated commodity services which consume less of your own valuable time and resource. However, to be truly effective your private cloud also needs to deliver highly efficient cost/agility dynamics – especially when faced with the dawning of the IoT age and its associated demands.
For many organizations, the evolution of their data centre(s) to deliver on the promise of private cloud is a journey they’ve been on for a number of years, but one that’s brought near-term benefits along the way. In fact, each stage in the journey should help drive time, cost and labour out of running your data centre.
The typical journey can be viewed as a series of milestones:
A typical reaction from organizations first considering the journey is “That sounds great!” However, this is quickly followed by two questions, the first being “Where do I begin?”
Well, let’s start with the fact that it’s hard to build a highly efficient cloud platform that will enable real-time decision making using old infrastructure. The hardware really does matter, and it needs to be modern, efficient and regularly refreshed – evergreen, if you will. If you don’t do this, you could be losing an awful lot of efficiency.
Did you know, for example, that according to a survey conducted by a Global 100 company in 2012, 32 percent of its servers were more than four years old? These servers delivered just four percent of total server performance capacity, yet they accounted for 65 percent of total energy consumption. Clearly, there are better ways to run a data centre.
And as for that second question? You guessed it, “How can we achieve steps 4 and 5?” This is a very real consideration, even for the most innovative of organisations. Even those companies considered leaders in their private cloud build-out are generally only at Stage 3: Automation, and experimenting with how to tackle Stage 4: Orchestration.
The key thing to remember is that your online services, websites and apps run the show. They are a main point of contact with your customers (both internal and external), so they must run smoothly and expectations must be met. This means your private cloud must be elastic, flexing on demand as the business requires. Responding to business needs in weeks to months is no longer acceptable as the clock speed of business continues to ramp. Hours to minutes to seconds is the new order.
I believe the best way to achieve this hyper-efficient yet agile private cloud model is to shift from the hardware-defined data centre of today to a new paradigm that is defined by the software: the software-defined infrastructure (SDI).
Does this mean I’m saying the infrastructure doesn’t matter? Not at all, and we’ll come on to this later in this blog series. I’ll be delving into the SDI architecture model in more detail, looking at what it is, Intel’s role in making it possible, and how it’ll enable your private cloud to reach the Holy Grail: Stage 5, the Real-time Enterprise.
In the meantime, I’d love to hear from you. How is your organization responding to the connected device deluge, and what does your journey to the hybrid cloud look like?
Our guest blogger for this post is Blanka Vlasak, an innovation and incubation specialist in Intel’s New Business Initiatives incubator, where she spent the last two years providing financial and business expertise to ventures focusing on areas ranging from wearables, … Read more >
After the great experience we had at Hackster.io Phoenix last month, I had the opportunity to help developers build hardware hacks at Hackster.io’s hardware weekend in Boston, earlier this… Read more
Cybersecurity is poised for a notorious year. The next 12 to 18 months will see greater, bolder, and more complex attacks emerge. This year’s installment of the top computer security predictions highlights how threats are advancing and outpacing defenders, and how the landscape is becoming more professional and organized. Although the view of our cybersecurity future is obscured, one thing is for certain: we’re in for an exciting ride.
In this blog I’ll discuss my top 10 predictions for Cybersecurity in 2015.
Governments will leverage their professional cyber warfare assets as a recognized and accepted tool for governmental policy. For many years governments have been investing in cyber warfare capabilities, and these resources will begin to pay dividends.
Governments will be more actively involved in responding to major hacking events affecting their citizens. Expect government responses and reprisals to foreign nation-state attacks, which ordinary business enterprises are not in a position to counter. This shift in policy is both timely and necessary if the public is to continue enjoying life under the protection of a common defense.
The demand for security professionals is at an all-time high, but the workforce pool is largely barren of qualified candidates. The best talent has been scooped up. A lack of security workforce talent, especially in leadership roles, is a severe impediment to organizations in desperate need of building and staffing in-house teams. We will see many top-level security professionals jump between organizations, lured by better compensation packages. Academia will struggle to refill the talent supply in order to meet the demand.
High-profile targets will continue to be victimized. As long as the return is high for attackers while the effort remains reasonable, they will continue to target prominent organizations. Nobody, regardless of how large, is immune. Expect high-profile companies, industries, government organizations, and people to fall victim to theft, hijacking, forgery, and impersonation.
We will witness an expansion in strategies in the next year, with attackers acting in ways that put individuals directly at risk. High-profile individuals will be threatened with embarrassment through the exposure of sensitive healthcare records, photos, online activities, and communications. Everyday citizens will be targeted with malware on their devices to siphon bank information, steal crypto-currency, and hold their data for ransom. For many people this year, it will feel like they are being specifically targeted for abuse.
Enterprises will overhaul how they view risks. Serious board-level discussions will be commonplace, with a focus on awareness and responsibility. More attention will be paid to the security of products and services, with the protection of privacy and customer data beginning to supersede “system availability” priorities. Enterprise leaders will adapt their perspectives to focus more attention on security as a critical aspect of sustainable business practices.
The security and attacker communities will make significant strides forward this year. Attackers will continue to maintain the initiative and succeed with many different types of attacks against large targets. Cybercrime will grow quickly in 2015, outpacing defenses and spurring smarter security practices across the community. Security industry innovation will advance as the next wave of investments emerge and begin to gain traction in protecting data centers, clouds, and the ability to identify attackers.
Malware numbers will continue to skyrocket, increase in complexity, and expand more heavily beyond traditional PC devices. Malicious software will continue to swell at a relentless pace, averaging over 50 percent year-over-year growth. The rapid proliferation and rising complexity of malware will create significant problems for the security industry. The misuse of stolen certificates will compound the problems, and the success of ransomware will only reinforce more development by criminals.
Attackers move into new opportunities as technology broadens to include more users, devices, data, and evolving supporting infrastructures. As expansion occurs, there is a normal lag for the development and inclusion of security. This creates a window of opportunity. Where the value of data, systems, and services increases, threats surely follow. Online services, phones, the IoT, and cryptocurrency are being heavily targeted.
Cybersecurity is constantly changing and the attacks we see today will be superseded by more serious incursions in the future. We will witness the next big step in 2015, with attacks expanding from denial-of-service and data theft activities to include more sophisticated campaigns of monitoring and manipulation. The ability to maliciously alter transactions from the inside is highly coveted by attackers.
I predict 2015 to be an extraordinary year in cybersecurity. Attackers will seek great profit and power, while defenders will strive for stability and confidence. In the middle will be a vicious knife fight between aggressors and security professionals. Overall, the world will take security more seriously and begin to act in more strategic ways. The intentional and deliberate protection of our digital assets, reputation, and capabilities will become a regular part of life and business.
If you’d like to check out my video series surrounding my predictions, you can find more here.
IT Peer Network: My Previous Posts
https://communities.intel.com/people/MatthewRosenquist/blog/2015/03/04/why-ransomware-will-rise-in-2015
On May 5, 2015, Intel Corporation announced the release of its highly anticipated Intel® Xeon® processor E7 v3 family. One key area of focus for the new processor family is that it is designed to accelerate business insight and optimize … Read more >
Thank you all for your participation in and submissions to Level Up! With the contest closing last night, I was eager to wake up this morning and see what was submitted and from where. Here’s some… Read more
I have just spent the better part of two weeks involved in the training of a new 50-strong sales team. Most of the team were experienced sales people but very inexperienced in pharmaceutical sales. They had a proven record in B2B sales, but only 30 percent of the team had previously sold pharmaceutical or medical device products to health care professionals (HCPs). Clearly, after the logistical and bureaucratic aspects of the training had been completed, most of the time was spent training the team on the medical background, disease state, product specifics and treatment landscape/competitor products.
Preparing the team for all eventualities and every possible question/objection they might get from HCPs was key to making sure that on the day of product launch they would be competent to go out into their new territories and speak with any potential customer. With particular reference to this product, it was equally important for the team to be in a position to speak with doctors, nurses and pharmacists alike.
The last part of the training was to certify each of the sales professionals and make sure that they not only delivered the key messages but could also answer most of the questions HCPs would fire at them. To do this, each sales professional was allowed 10 minutes to deliver their presentation to trainers, managers and medical personnel. The assessors were randomly assigned questions/objections to be addressed during the presentation.
The question remains, “does this really prepare the sales person for that first interaction with a doctor or other HCP?” Experience tells us that most HCPs are busy people who allow little or no time for pharmaceutical sales professionals in their working day. The 90 seconds that a sales professional gets with most potential customers is not a fixed amount, though. Remember, doctors are used to getting the information they need to make clinical decisions by asking questions, and they ask precisely the questions whose answers will benefit their patients. So, starting the interaction with an open question is quite simply the worst thing to do, as most doctors will take this opportunity to back out and say they do not have time.
The trick is to get the doctor to ask the first question (that is what they spend their lives doing and they are good at it) and within the first 10-15 seconds. Making a statement that shows you understand their needs and have something beneficial to tell them is the way you will get “mental access.” Once the doctor is engaged in a discussion, the 90-second call will quickly extend to 3+ minutes. Gaining “mental access” is showing the doctor that you have a solution to a problem they have in their clinical practice and that you have the necessary evidence to support your key message/solution. This has to be done in a way that the doctor will see a potential benefit for, most importantly, their patients. In order to do this the sales professional needs to really understand the clinical practice of the person that they are seeing (i.e. done their pre-call planning) and have the materials available to instantly support their message/solution.
The digital visual aid is singularly the best means of providing this supporting information and data, as whatever direction the sales professional needs to take should be accessible within one or two touches of the screen. Knowing how to navigate the digital sales aid is essential, as this is the point where the HCP either stays engaged or finds a reason to move on.
What questions do you have? Agree or disagree?
When I started my career in IT, infrastructure provisioning involved a lot of manual labor. I installed the hardware, installed the operating systems, connected the terminals, and loaded the software and data, to create a single stack to support a specific application. It was common to have one person who carried out all of these tasks on a single system with very few systems in an Enterprise.
Now let’s fast forward to the present. In today’s world, thanks to the dynamics of Moore’s Law and the falling cost of compute, storage, and networking, enterprises now have hundreds of applications that support the business. Infrastructure and applications are typically provisioned by teams of domain specialists—networking admins, system admins, storage admins, and software folks—each of whom puts together a few pieces of a complex technology puzzle to enable the business.
While it works, this approach to infrastructure provisioning has some obvious drawbacks. For starters, it’s labor-intensive, requiring too many hands to support; it’s costly in both people and software; and it can be rather slow from start to finish. While the first two are important for TCO, it is the third that I have heard the most about: it’s just too slow for the pace of business in the era of fast-moving cloud services.
How do you solve this problem? That is what the Software Defined Infrastructure is all about. With SDI, compute, network, and storage resources are deployed as services, potentially reducing deployment times from weeks to minutes. Once services are up and running, hardware is managed as a set of resources, and software has the intelligence to manage the hardware to the advantage of the supported workloads. The SDI environment automatically corrects issues and optimizes performance to ensure you can meet your service levels and security controls that your business demands.
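The self-correcting behaviour described above is, at its heart, a software control loop that continuously reconciles desired state against actual state. Here is a minimal, purely illustrative sketch of that idea (the service names and the `reconcile` function are invented for illustration; this is not any vendor’s SDI API):

```python
# Illustrative sketch of the SDI reconciliation idea: software compares the
# state the business requires with the state the hardware currently delivers,
# and emits corrective actions. Names and structure are hypothetical.
desired = {"web": 4, "db": 2}   # instance counts the service levels require
actual = {"web": 3, "db": 2}    # instance counts currently running

def reconcile(desired, actual):
    """Return the scaling actions needed to converge actual onto desired."""
    actions = []
    for service, want in desired.items():
        have = actual.get(service, 0)
        if have < want:
            actions.append(f"scale-up {service} by {want - have}")
        elif have > want:
            actions.append(f"scale-down {service} by {have - want}")
    return actions

print(reconcile(desired, actual))   # ['scale-up web by 1']
```

A real SDI stack runs a loop like this continuously, so drift between service levels and running resources is corrected in minutes rather than weeks.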
So how do you get to SDI? My current response is that SDI is a destination that sits at the summit for most organizations. At the simplest level, there are two routes to this IT nirvana—a “buy it” high road and a “build-it-yourself” low road. I call the former a high road because it’s the easiest way forward—it’s always easier to go downhill than uphill. The low road has lots of curves and uphill stretches on it to bring you to the higher plateau of SDI. Each of these approaches has its advantages and disadvantages.
The high road, or the buy-the-packaged-solution route, is defined by system architectures that bring together all the components for an SDI into a single deployable unit. Service providers who take you on the high road leverage products like Microsoft Cloud Platform System (CPS) and VMware EVO: RAIL to create standalone platform units with virtualized compute, storage, and networking resources.
On the plus side, the high road offers faster time to market for your SDI environment, a tested and certified solution, and the 24×7 support most enterprises are looking for: the things you can expect from a solution delivered by a single vendor. On the downside, the high road locks you into certain choices of hardware and software components and forces you to rely on the vendor for system upgrades and technology enhancements, which might happen faster with other solutions but take place on the vendor’s timeline. This approach, of course, can be both Opex and Capex heavy, depending on the solution.
The low road, or the build-it-yourself route, gives you the flexibility to design your environment and select your solution components from a portfolio of hardware vendors, software vendors and open source projects. You gain the agility and technology choices that come with an environment that is not defined by a single vendor. You can pick your own components and add new technologies on your timelines, not your vendor’s, and probably enjoy lower Capex along the way, although at the expense of more internal technical resources.
Those advantages, of course, come with a price. The low road can be a slower route to SDI, and it can be a drain on your staff resources as you engage in all the heavy lifting that comes with a self-engineered solution set. Also, given the pace of innovation in this area, it is quite possible that you never really achieve the full vision of SDI, because new choices keep appearing. You have to design your solution; procure, install, and configure the hardware and software; and add the platform-as-a-service (PaaS) layer. All of that just gets you to a place where you can start using the environment. You still haven’t optimized the system for your targeted workloads.
In practice, most enterprises will take what amounts to a middle road. This hybrid route takes the high road to SDI with various detours onto the low road to meet specific business requirements. For example, an organization might adopt key parts of a packaged solution but then add its own storage or networking components or decide to use containers to implement code faster.
Similarly, most organizations will get to SDI in a stepwise manner. That’s to say, they will put elements of SDI in place over time (such as storage and network virtualization and IT automation) to gain some of the agility that comes with an SDI strategy. I will look at these concepts in an upcoming post that explores an SDI maturity model.
The “Intel Ethernet” brand symbolizes the decades of hard work we’ve put into improving performance, features, and ease of use of our Ethernet products.
What Intel Ethernet doesn’t stand for, however, is any use of proprietary technology. In fact, Intel has been a driving force for Ethernet standards since we co-authored the original specification more than 40 years ago.
At Interop Las Vegas last week, we again demonstrated our commitment to open standards by taking part in the NBASE-T Alliance public multi-vendor interoperability demonstration. The demo leveraged our next generation single-chip 10GBASE-T controller supporting the NBASE-T intermediate speeds of 2.5Gbps and 5Gbps (see a video of that demonstration here).
Intel joined the NBASE-T Alliance in December 2014 at the highest level of membership, which allows us to fully participate in the technology development process including sitting on the board and voting for changes in the specification.
The alliance, with its 33 members, is an industry-driven consortium that has developed a working 2.5GbE/5GbE specification that is the basis of multiple recent product announcements. Building on this experience, our engineers are now working diligently to develop the IEEE standard for 2.5G/5GBASE-T.
By first developing the technology in an industry alliance, vendors can have a working specification to develop products, and customers can be assured of interoperability.
The reason Ethernet has been so widely adopted over the past 40 years is its ability to adapt to new usage models. 10GBASE-T was originally defined to be backward compatible with 1GbE and 100Mbps, and required category 6a or category 7 cabling to reach 10GbE. Adoption of 10GBASE-T is growing very rapidly in the datacenter, and now we are seeing the need for more bandwidth in enterprise and campus networks to support next-generation 802.11ac access points, local servers, workstations, and high-end PCs.
Copper twisted pair has long been the cabling preference for enterprise data centers and campus networks, and most enterprises have miles and miles of this cable already installed throughout their buildings. In the past 10 years alone, about 70 billion meters of category 5e and category 6 cabling have been sold worldwide.
Supporting higher bandwidth connections over this installed cabling is a huge win for our customers. Industry alliances can be a useful tool to help Ethernet adapt, and the NBASE-T alliance enables the industry to address the need for higher bandwidth connections over installed cables.
Intel is the technology and market leader in 10GBASE-T network connectivity. I spoke about Intel’s investment in the technology in an earlier blog about Ethernet’s ubiquity.
We are seeing rapid adoption of our 10GBASE-T products in the data center, and now through the NBASE-T Alliance we have a clear path to address enterprise customers with the need for more than 1GbE. Customers are thrilled to hear that they can get 2.5GbE/ 5GbE over their installed Cat 5e copper cabling—making higher speed networking between bandwidth-constrained endpoints achievable.
Ethernet is a rare technology in that it is both mature (more than 40 years old since its original definition in 1973) and constantly evolving to meet new network demands. Thus, it has created an expectation by users that the products will work the first time, even if they are based on brand new specifications. Our focus with Intel Ethernet products is to ensure that we implement solutions that are based on open standards and that these products seamlessly interoperate with products from the rest of the industry.
If you missed the NBASE-T demonstration at Interop, come see how it works at Cisco Live in June in San Diego.
By now, hopefully, you’ve heard or read the coverage about the recent Intel Solutions Summit 2015 – or even better yet, maybe you attended. One of the key takeaways from ISS 2015 was the identification of three areas of priority … Read more >
Hybrid HTML5 Cordova apps (aka PhoneGap and Intel XDK apps) are far too often considered as inadequate for high-performance apps. That is a shame, because it is simply not true. The idea that HTML5 Cordova apps cannot satisfy the performance … Read more >
The post Build High-Performance HTML5 Cordova Apps with Crosswalk appeared first on Intel Software and Services.
One of the themes that ran through this year’s Intel Software Conference, in EMEA, was programmer productivity. The event took place in Seville in April and gave invited resellers and journalists an… Read more
Management practices from the HPC world can get even bigger results in smaller-scale operations. In 2014, industry watchers have seen a major rise in hyperscale computing. Hadoop and other cluster architectures that originated in academic and rese… Read more
The @robodubinc battle arena is shaping up nicely for @makerfaire @intel #intelmaker pic.twitter.com/SzyMI2ltNX — Rex St John (@rexstjohn) May 12, 2015 Seattle’s own Robodub Inc is set to appear at this year’s MakerFaire with a brand new, Intel powered robotics … Read more >
By David Fair, Unified Networking Marketing Manager, Intel Networking Division
iWARP was on display recently in multiple contexts. If you’re not familiar with iWARP, it is an enhancement to Ethernet based on an Internet Engineering Task Force (IETF) standard that delivers Remote Direct Memory Access (RDMA).
In a nutshell, RDMA allows an application to read or write a block of data from or to the memory space of another application that can be in another virtual machine or even a server on the other side of the planet. It delivers high bandwidth and low latency by bypassing the kernel of system software and avoiding the interrupts and making of extra copies of data that accompany kernel processing.
A secondary benefit of kernel bypass is reduced CPU utilization, which is particularly important in cloud deployments. More information about iWARP has recently been posted to Intel’s website if you’d like to dig deeper.
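To make the one-sided nature of RDMA concrete, here is a conceptual model only; real RDMA is programmed through verbs APIs against registered memory regions, not Python. The class and function names below are invented to illustrate the semantics: the initiator pulls bytes out of a remote buffer without the remote application, or its kernel, taking part in the transfer.

```python
# Conceptual model of a one-sided RDMA read. Real implementations use
# verbs APIs, registered memory, and hardware offload; this sketch only
# illustrates the semantics described in the text.
class RemoteMemoryRegion:
    """Stands in for a registered, remotely accessible memory region."""
    def __init__(self, data: bytes):
        self.data = bytearray(data)

def rdma_read(region: RemoteMemoryRegion, offset: int, length: int) -> bytes:
    # One-sided: no handler runs on the remote CPU for this access, which
    # is why RDMA avoids interrupts, kernel copies, and CPU overhead.
    return bytes(region.data[offset:offset + length])

region = RemoteMemoryRegion(b"block of application data")
print(rdma_read(region, 0, 5))   # b'block'
```

The kernel-bypass benefit falls out of this model: because the remote CPU never touches the transfer, its cycles stay available for application work.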
Intel® is planning to incorporate iWARP technology in future server chipsets and systems-on-a-chip (SOCs). To emphasize our commitment and show how far along we are, Intel showed a demo using the RTL from that future chipset in FPGAs running Windows* Server 2012 SMB Direct and doing a boot and virtual machine migration over iWARP. Naturally it was slow – about 1 Gbps – since it was FPGA-based, but Intel demonstrated that our iWARP design is already very far along and robust. (That’s Julie Cummings, the engineer who built the demo, in the photo with me.)
Jim Pinkerton, Windows Server Architect, from Microsoft joined me in a poster chat on iWARP and Microsoft’s SMB Direct technology, which scans the network for RDMA-capable resources and uses RDMA pathways to automatically accelerate SMB-aware applications. With SMB Direct, no new software and no system configuration changes are required for system administrators to take advantage of iWARP.
Jim Pinkerton also co-taught the “Virtualizing the Network to Enable a Software Defined Infrastructure” session with Brian Johnson of Intel’s Networking Division. Jim presented specific iWARP performance results in that session that Microsoft has measured with SMB Direct.
Lastly, the Non-Volatile Memory Express* (NVMe*) community demonstrated “remote NVMe,” made possible by iWARP. NVMe is a specification for efficient communication to non-volatile memory like flash over PCI Express. NVMe is many times faster than SATA or SAS, but like those technologies, targets local communication with storage devices. iWARP makes it possible to securely and efficiently access NVM across an Ethernet network. The demo showed remote access occurring with the same bandwidth (~550k IOPS) with a latency penalty of less than 10 µs.**
Intel is supporting iWARP because it is layered on top of the TCP/IP industry standards. iWARP goes anywhere the Internet goes and does it with all the benefits of TCP/IP, including reliable delivery and congestion management. iWARP works with all existing switches and routers and requires no special datacenter configurations to work. Intel believes the future is bright for iWARP.
Intel, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
**Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.
Did you know that many reptiles, marine mammals, and birds sleep with one side of their brains awake? This adaptation lets these creatures rest and conserve energy while remaining alert and instantly ready to respond to threats and opportunities. It also enables amazing behaviors such as allowing migrating birds to sleep while in flight. How’s that for maximizing productivity?
Taking a cue from nature, many new desktop PCs challenge how we define sleep with Intel® Ready Mode Technology. This innovation replaces traditional sleep mode with a low-power, active state that allows PCs to stay connected, up-to-date, and instantly available when not in use—offering businesses several advantages over existing client devices.
Users get the productivity boost of having real-time information ready the instant that they are. Intel Ready Mode enhances third-party applications with the ability to constantly download or access the most current content, such as the latest email messages or media updates. It also allows some applications to operate behind the scenes while the PC is in a low-power state. This makes some interesting new timesaving capabilities possible—like, for example, facial recognition software that can authenticate and log in a user instantly upon their arrival.
In addition, when used with third-party apps like Dropbox*, Ready Mode can turn a desktop into a user’s personal cloud that stores the latest files and media from all of their mobile devices and makes them available remotely as well as at their desks. Meanwhile, IT can easily run virus scans, update software, and perform other tasks on user desktops anytime during off hours, eliminating the need to interrupt users’ workdays with IT admin tasks.
PCs in Ready Mode consume only about 10 watts or less (compared with 30 – 60 watts when active) while remaining connected, current, and ready to go. That’s roughly the draw of an LED lamp with the brightness of a 60-watt incandescent bulb. Energy savings will vary, of course; but imagine how quickly a six-fold reduction in energy consumption would add up with, say, 1,000 users who actively use their PCs only a few hours a day.
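To put a rough number on that thought experiment, here is the arithmetic using the figures quoted above (the idle-hours assumption is illustrative, not a measurement):

```python
# Rough illustration of fleet-wide Ready Mode savings using the wattages
# quoted in the text. The 20 idle hours/day figure is an assumption based
# on PCs being actively used only a few hours a day.
active_watts = 60          # upper end of the quoted 30-60 W active draw
ready_mode_watts = 10      # approximate Ready Mode draw
idle_hours_per_day = 20    # assumed hours/day the PC would otherwise sit active
users = 1_000

# Daily savings across the fleet, in kilowatt-hours
daily_kwh_saved = (active_watts - ready_mode_watts) * idle_hours_per_day * users / 1000
print(daily_kwh_saved)     # 1000.0 kWh saved per day across 1,000 users
```

Even with more conservative assumptions, savings on this order add up quickly across a year of operation.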
In the conference room, a desktop-powered display setup with Intel Ready Mode will wait patiently in an energy-sipping, low-power state when not in use, but will be instantly ready to go for meetings with the latest presentations and documents already downloaded. How much time would you estimate is wasted at the start of a typical meeting simply getting set up? Ten minutes? Multiply that by six attendees, and you have an hour of wasted productivity. Factor in all of your organization’s meetings, and it’s easy to see how Ready Mode can make a serious contribution to the bottom line.
Desktops with Intel Ready Mode help make it easier for businesses to move their landline or VoIP phone systems onto their desktop LAN infrastructures and upgrade from regular office phones to PC-based communication solutions such as Microsoft Lync*. Not only does this give IT fewer network infrastructures to support, but with Ready Mode, businesses can also deploy these solutions and be confident that calls, instant messages, and videoconference requests will go through even if a user’s desktop is idle. With traditional sleep mode, an idle PC is often an offline PC.
Ready to refresh with desktops featuring Intel® Ready Mode Technology today? Learn how at: www.intel.com/readymode
By John Kincaide, Privacy and Security Policy Attorney at Intel The FCC’s (Federal Communications Commission) Wireline Competition and Consumer & Governmental Affairs Bureaus held a public workshop to explore the FCC’s role in protecting the privacy of consumers using broadband … Read more >
The post FCC Holds Public Workshop on Broadband Consumer Privacy appeared first on Policy@Intel.
It’s hard to believe another Intel Solutions Summit is behind us. Thanks to all of our valued partners who attended ISS 2015. Intel Technology Providers play a vital role in our partner ecosystem, creating incredible solutions with Intel technology. We appreciate … Read more >
By Mike Pearce, Ph.D. Intel Developer Evangelist for the IDZ Server Community.