Recent Blog Posts

21st Century Nursing Brings Anytime, Anywhere Care

In my days as a practicing registered nurse, technology felt like something that just got in the way of doing the real job of looking after patients. The perception of technology held by my fellow RNs was that it was forced on them by an IT department and that ultimately it was more hassle than it was worth.

 

Today, things have changed. Nurses are truly embracing technology and, in many cases, I’d say they are pioneers of its use across the healthcare sector. Just one example is the flexibility offered by tablets and two-in-ones in patient care settings outside the norm of a hospital or clinic.

 

A couple of years ago we put together a video here at Intel showing a nurse transcribing handwritten notes from a home visit on what would now be considered a bulky laptop. Suffice it to say that in just a short space of time mobile solutions have come a long way. Writing notes on paper while with the patient, then heading back to the office to input them into the appropriate clinical systems on a desktop is, thankfully, a thing of the past.

 

Real-time note-taking

Nurses now capture notes in real time on a mobile device during a homecare visit, in a way the patient is comfortable with and finds unobtrusive. Where nurses used to hold a pen and paper, they now hold a tablet, phablet, or two-in-one, which helps maintain that all-important, trust-building eye contact with the patient.

 

All of this is possible because of advances in the computing power of mobile devices. To put this into perspective, it’s likely that the tablet carried by a nurse today has more computing power than the desktop of just a couple of years ago. Combine that performance with the anywhere-anytime, security-enhanced access to clinical applications via the cloud and you have nurses who do their jobs more efficiently and reduce the number of errors resulting from duplicating steps to document patient information.

 

Educating patients

We want to see patients engaging more in taking good care of themselves too. Mobile devices are helping patients better understand their condition, whether that be through showing x-rays or illustrating responses to treatment in graphical forms. Education is a crucial part of the modern nurse’s role and I’m happy to say that this part of the job is much easier today than it was when I was practicing.

 

We’ve only scratched the surface though, as when we look ahead to the opportunities presented by wearable technologies which can send information to a care team instantaneously, we start to see the true benefits of virtual care. As the population grows and people live longer, this virtual care will become increasingly important, if not essential.

 

Your future

I’d love to hear how you are using today’s technologies in your role – how are mobile devices helping you care for your patients more efficiently and effectively? What is the one feature that you couldn’t live without? And what capabilities do you need moving forward?

 

Leave a comment below or tweet me via @intelhealth – let’s keep the conversation going so that we can build the future of nursing together.

Read more >

Reaching one million database transactions per second… Aerospike + Intel SSD


We’ve known the innovators at Aerospike for a few years now, and today we are announcing more than 1 million transactions per second (TPS) on a single server with Aerospike’s NoSQL database. That might not seem like a big deal until you realize we are not using DRAM for this, as you’ve seen in some previous posts about Aerospike doing 1 million TPS. We are trading out DRAM for NVM (non-volatile memory) in the classic form of NAND flash. NAND is hot for database fanatics like us because you can store so much more. NoSQL innovators have learned how to utilize NVM with breathtaking performance and new data architectures, and NVM is plenty fast when your specification is 1 millisecond per row “get”. In fact, it’s the perfect trade-off: fast, lower cost, and non-volatile. The best thing is the price. Did I tell you about the price yet?

 

NVM, today and even more so tomorrow, is a small fraction of the price of DRAM. Better still, you are not constrained by, say, 256GB, or some sweet spot of memory pricing that always leaves you a bit short of your goal. Terabyte-class servers with NVM give you much more headroom to grow your business without having to reconstruct and upgrade your world in a few months. How does 6+ terabytes of NVM database memory on a single box sound?


Here at Intel, we say: be bold, and go deep into the terabyte class of database server!

 

So how did we do this? Our friends at Aerospike make it possible with a special file system (often called a database storage engine) that keeps the hash to the data in DRAM (a very small amount of DRAM; we set it to 64 GB), while the actual 1k-or-greater (key, value) rows are kept in a large, growable “namespace” on four PCIe SSDs. Aerospike likes Intel SSDs for their block-level response consistency, because when you replace DRAM and concurrently run at this level of process threading, consistency becomes paramount. In fact, during our tests we like to target 99 percent of reads completing in under 1 millisecond. Here are the core performance results.
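As a mental model of that split, here is a toy sketch of the general DRAM-index/flash-data pattern (my own illustration, not Aerospike’s actual engine): an in-memory hash maps each key to the on-device location of its row, so a “get” costs one hash lookup plus one device read.

```python
import os
import tempfile

class TinyKV:
    """Toy log-structured store: tiny index in RAM, row data on 'flash' (a file)."""

    def __init__(self, path):
        self.index = {}               # key -> (offset, length); this is the DRAM part
        self.log = open(path, "a+b")  # append-only file stands in for the SSD namespace

    def put(self, key, value):
        self.log.seek(0, os.SEEK_END)
        offset = self.log.tell()
        self.log.write(value)                    # the row itself goes to "flash"...
        self.index[key] = (offset, len(value))   # ...only a small index entry stays in RAM

    def get(self, key):
        offset, length = self.index[key]  # one DRAM hash lookup...
        self.log.seek(offset)
        return self.log.read(length)      # ...then one device read per row "get"

kv = TinyKV(os.path.join(tempfile.gettempdir(), "tinykv.log"))
kv.put("user:42", b"profile-data")
print(kv.get("user:42"))  # prints b'profile-data'
```

The point of the pattern is that per-key DRAM cost is a few dozen bytes of index entry, while the multi-kilobyte row lives on cheap, dense NVM.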

 

95% read Database Results (Aerospike’s asmonitor and Linux iostat)

asmonitor data

| Record Size | Client Threads | Total TPS | % Below 1 ms (Reads) | % Below 1 ms (Writes) | Read Latency Std Dev (ms) | Write Latency Std Dev (ms) | Database Size |
|---|---|---|---|---|---|---|---|
| 1k | 576 | 1,124,875 | 97.16 | 99.9 | 0.79 | 0.35 | 100G |
| 2k | 448 | 875,446 | 97.33 | 99.57 | 0.63 | 0.18 | 200G |
| 4k | 384 | 581,272 | 97.22 | 99.85 | 0.63 | 0.05 | 400G |
| 1k with replication | 512 | 1,003,471 | 96.11 | 99.98 | 0.87 | 0.30 | 200G |

 

 

iostat data

| Record Size | Read MB/sec | Write MB/sec | Avg Queue Depth on SSD | Avg Drive Latency (ms) | CPU % Busy |
|---|---|---|---|---|---|
| 1k | 418 | 29 | 31 | 0.11 | 93 |
| 2k | 547 | 43 | 27 | 0.13 | 81 |
| 4k | 653 | 52 | 20 | 0.16 | 52 |
| 1k (replication) | 396 | 51 | 30 | 0.13 | 94 |

 

Notes:

1. Data is averaged and summarized across 2 hours of warmed-up runs; many runs were executed for consistency.

2. The 4k test was network constrained, hence the lower CPU utilization attained during this test.

 

We ran our tests at 1k, 2k, and 4k row sizes, and at 1k again with asynchronous replication turned on. We kept the rows small, which is common for operational databases that manage cookies, user profiles, and trade/bidding information in an operational row structure. The Aerospike database does have a binning feature that can give you columns, but so many use cases are simple strings that we configured for single-bin (i.e., one column). This configuration gives you the highest performance with Aerospike.

 

The databases we built ranged from 100GB to 400GB, and as we made the database bigger we did not see any drop in performance. We kept the database small to maintain some agility in building and reworking this effort over and over. Our scaling limits appeared as we increased row sizes, and they were at the network level, no longer a balancing act between the SSDs and the threading levels on the CPU. We simply needed more network infrastructure to go to larger row sizes; taking a server beyond 20Gbit of networking at the 4k row size was a wall for us. Supporting nodes that produce 40Gbit and higher throughput rates can become an expensive undertaking. This network throughput and cost factor will affect your expense thresholds and be a deciding factor in how dense an Aerospike cluster you wish to build.
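A quick back-of-the-envelope check (my own arithmetic, not from the test logs) shows why the 4k run hit the network first: the measured TPS times the row size already approaches the 20Gbit ceiling before counting any protocol overhead.

```python
# Payload bandwidth implied by the measured 4k-row result, ignoring
# network protocol overhead (headers, acks, replication traffic, etc.).
tps = 581_272            # total TPS measured at the 4k row size
row_bytes = 4 * 1024     # 4 KB per record
gbit_per_s = tps * row_bytes * 8 / 1e9
print(f"{gbit_per_s:.1f} Gbit/s of payload alone")  # ≈ 19.0 Gbit/s
```

At 19 Gbit/s of raw payload, a dual-port 10G setup is effectively saturated, which matches the 20Gbit-per-server wall we observed.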

 

Configuration and Key Results

We used Intel’s best 18-core Xeon E5 v3 family servers, which support 72 CPU hardware threads per machine. Aerospike is very highly threaded and can use lots of cores and threads per server; with htop we were recording over 100 active threads per monitoring sample, loading the CPU queues nicely. As for the balance between SSD queue depths and threading, we found that our target range of 95 to 100 percent of database record retrievals under 1 ms was best achieved at queue depths under 32 on these Intel NVMe (Non-Volatile Memory Express) SSDs. The numbers in the asmonitor data table show that roughly 97 percent of all transactions ran in under 1 millisecond, a very high achievement.

 

Configuration details are below for those attempting to replicate this work. All components and software are available on the market today. Try the Aerospike Community Edition, free to download here.

 

AEROSPIKE DATABASE CONFIGURATION

 

| Description | Details |
|---|---|
| Edition | Community Edition |
| Version | 3.3.40 |
| Bin | Single bin |
| Number of nodes | Two |
| Replication factor | One (two used with 1k rows and replication) |
| RAM size | 64 GB |
| Devices | Two P3700 PCIe devices per node (4 total) |
| Write block size | 128k |
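For readers reproducing this, the configuration above corresponds roughly to a namespace stanza in aerospike.conf like the following. This is a hypothetical sketch in the style of Aerospike’s 3.x configuration format; the device paths and the exact stanza are illustrative assumptions, not copied from our test systems.

```
namespace test {
    replication-factor 1       # two used for the 1k replication run
    memory-size 64G            # DRAM reserved for the primary index
    single-bin true            # one column; the highest-performance layout

    storage-engine device {
        device /dev/nvme0n1    # illustrative paths; two P3700s per node in our setup
        device /dev/nvme1n1
        write-block-size 128K
        data-in-memory false   # rows stay on flash; only the index lives in DRAM
    }
}
```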


AEROSPIKE BENCHMARK TOOL CONFIGURATION

Example command used to load the database:

```
./run_benchmarks -h 172.16.5.32 -p 3000 -n test -k 100000000 -l 23 -b 1 -o S:2048 -w I -z 64
```

Example command used to run the benchmark from the client:

```
./run_benchmarks -h 172.16.5.32 -p 3000 -n test -k 100000000 -l 23 -b 1 -o S:2048 -w RU,95 -z 64 -g 125000
```

Flags of the Aerospike client:

```
-u    Full usage
-b    Set the number of Aerospike bins (default is 1)
-h    Set the Aerospike host node
-p    Set the port on which to connect to Aerospike
-n    Set the Aerospike namespace
-s    Set the Aerospike set name
-k    Set the number of keys the client is dealing with
-S    Set the starting value of the working set of keys
-w    Set the desired workload (I = linear insert | RU,<read %> = read-update, e.g. 80% reads and 20% writes)
-T    Set the read and write transaction timeout in milliseconds
-z    Set the number of threads the client will use to generate load
-o    Set the type of object(s) to use in Aerospike transactions (I = integer | S: = string | B: = Java blob)
-D    Run benchmarks in debug mode
```

 

| System | Details |
|---|---|
| Dell R730xd server system | One primary (dual system with replication testing); dual-CPU-socket, rack-mountable server; Dell A03 board, product name 0599V5 |
| CPU model used | 2 × Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz, max frequency 4GHz; 18 cores / 36 logical processors per CPU; 36 cores / 72 logical processors total |
| DDR4 DRAM memory | 128GB installed |
| BIOS version | Dell 1.0.4, 8/28/2014 |
| Network adapters | Intel® Ethernet Converged 10G X520-DA2 (dual-port PCIe add-in card); 1 embedded 1G network adapter for management; 2 × 10G ports for workload |
| Storage adapters | None |
| Internal drives and volumes | / (root) OS: Intel SSD Data Center Family S3500, 480GB; /dev/nvme0n1 through /dev/nvme3n1: four Intel SSD Data Center Family P3700, 1.6TB each, x4 PCIe AIC; 6.4TB raw capacity for Aerospike database namespaces |
| Operating system, kernel & NVMe driver | Red Hat Enterprise Linux Server 6.5; Linux kernel changed to 3.16.3; nvme block driver version 0.9 (vermagic: 3.16.3) |


Note: Intel PCIe drives use the Non-Volatile Memory Express (NVMe) storage standard, which requires an NVMe SSD driver in your Linux kernel. For benchmark work such as this, the currently recommended kernel is 3.19-based.

 

Latest firmware update and tool for Intel PCIe NVMe drives

Intel embeds its most stable maintenance-release support software for Intel SSDs into a tool we call the Intel Solid-State Drive Data Center Tool. Our latest release just landed, and it is important that you use the MR2 release included in the latest version, 2.2.0, to achieve these kinds of results with small blocks. Intel’s firmware for the Intel SSD Data Center PCIe family gets tested worldwide by hundreds of labs, many of them directly touched by software companies such as Aerospike. No other SSD manufacturer is as connected, both at the platform level and in software-vendor collaboration, which guarantees you the solution-level scalability you see in this blog. Intel’s SSD products are truly platform connected and end-user-software inspired.

https://downloadcenter.intel.com/Detail_Desc.aspx?DwnldID=23931

 

Conclusion

The world of deep servers that dish out row-based terabytes has arrived, and feeding a Hadoop cluster from these kinds of ultra-fast NoSQL clusters (or vice versa) is gaining traction. These are TPS numbers never before heard of from a single server in the relational SQL world. NoSQL has gained traction as purpose-built, fast, and excellent for use cases such as trading, session management, and profile management. Now this web-scale-friendly architecture is moving into the realm of immense data depth per node. If you think 256GB of DRAM per node is your only option for critical memory scale, think again: those days are behind us now.

 

Come see Holly Watson and Frank Ober at Strata + Hadoop World at the Intel booth, #415. We’d love to talk with you about our NVMe SSDs and how open industry standards are changing the future of databases and the hardware you run them on.

 

Special thanks to Swetha Rajendiran of Intel and Young Paik of Aerospike for their commitment and efforts in building and producing these test results with me.

Read more >

Wearable Technology Makes the Jump to the Workplace

Consumers can be a bit finicky when it comes to wearable technology. While wellness wearables have made a small impact in the consumer market, not much else has. Consumers face device fatigue, investment justification, fashion judgments, and a profound lack of benefits. Endeavour Partners studied early wearable adopters and found that more than half of the U.S. consumers who purchased an activity tracker no longer use their device. Moreover, one-third of those surveyed stopped using the device within six months of receiving it.

 

The real value proposition is in the enterprise, and 2015 is poised to be a year of change for wearables in the workplace. These devices allow for real-time access of data while freeing the hands for more tactile work, in turn giving the enterprise valuable information. “Wearables can help improve employee efficiency, enhance training and ongoing communication, reduce nonproductive time and rework, shrink decision time frames, minimize exposure to hazardous conditions, decrease travel time and more,” according to Accenture Technology. Companies have the opportunity to streamline training and the decision-making process by having real-time access to employees, which might be especially useful in fieldwork and manufacturing.

 

Wearables have the potential to disrupt every industry, but currently only 3 percent of companies are investing in enterprise wearables. In its Digital IQ study — slated for release in fall 2015 — PwC reported on the top five industries that have adopted wearables thus far: healthcare (10 percent), technology (7 percent), automotive (6 percent), industrial products (5 percent), and business and professional services (4 percent). In order to stay competitive and relevant, companies need to take notice of wearable technology and how it can positively impact their bottom line. Giants like Salesforce have already set the pace with the Salesforce Wear initiative, and the Apple Watch and Microsoft’s HoloLens are hot on their heels.

 

Unobtrusive Wearables

 

For wearables to be successfully adopted into the workplace, companies will need to plan for the following considerations: user experience, workflow modifications, analytics, IT infrastructure, privacy and security, and battery life. “To succeed,” according to PwC, “wearables must first and foremost be human-centered—that is, designed to meet the needs of the user without getting in his or her way.”

 

Workplace wearables might still be in the “clunky” phase, not unlike their technological predecessors — think flip phones, pagers, Bluetooth headsets, and so on. While the challenges are there, so is the technological capability to refine design, utility, and functionality. This technology will continue to evolve as our offices become wire-free, deskless, and remote. The possibilities are simply endless.

 

Check out our Make it Wearable campaign for more information on how we’re transforming the wearable market.

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

Read more >

What is the Future of the Medical Sales Professional?

The question I get asked more often than anything is, “given what is happening in the medical/pharmaceutical world, is there a future for the pharmaceutical sales representative?”

 

The short answer, in my opinion, is yes. But there is no doubt that, with all the restrictions on access, the Sunshine Act, and the changes in what representatives are allowed to say, the way a medical sales representative can interact with health care professionals (HCPs) has changed forever.

 

When it comes down to the essentials, HCPs still need to be kept updated, and who better to be the purveyor of that information than the representative of the company that provides the service, device, or drug? It happens in every other industry, so why not in medicine? Somewhere the perception has arisen that doctors will prescribe bad or wrong drugs for reasons beyond what is best for the patient. In all walks of life there are some people who do things for personal gain, but in medicine I still firmly believe that the vast majority of doctors will give a patient the most appropriate drug for the individual and their condition, as if the patient were a family member. At the same time, the day has come when the doctor does not have the prescribing freedom that he or she once had: with restricted formularies, financial constraints, and insurance coverage, the decision on which drug is most appropriate has to include these considerations.

 

In much of the research I have done over the last two years, HCPs, and in particular specialists, have told me that their available time during a working day to see medical sales professionals is limited at best. They have to see more patients and spend less time with each patient in order to maintain their income and cover their expenses. The attraction of private practice is diminishing, with more and more doctors joining larger groups or going “on staff,” as this is the only way they can practice medicine and still have a life.

 

So, back to the question: what is the future of the medical/device sales representative? The sales rep of the future is going to “carry” more than just two or three drugs in their hypothetical bag. They will have a scientific, nursing, or medical background and be trained on a portfolio of products, maybe even all the products their company has. They will be available at a time of day, and in a way, that suits the needs and workday of the specialists they cover.

 

Many doctors stated that the only time they can interact with pharmaceutical sales reps is out of hours or at conferences. Many of the big institutions do not allow sales reps into the building. Yet, doctors still need to be informed of current developments and new therapies. They will seek out this information and expect to be able to get it when they want it. Online sales professionals will be available for virtual sales calls at all times of the day and night.

 

In all the research I have done, doctors do not like telesales but are more than happy to be “detailed” online, as long as there is a visual component to the presentation and a real person at the other end of the interaction. The bottom line is that doctors cannot do their jobs without the pharmaceutical, medical device, and biotech industries. They need to know what is current and in the pipeline, and being people-oriented, they would prefer to get this information directly from representatives of that industry. The industry, in turn, needs doctors to test, use, and refine what it does. A symbiotic existence continues to be needed, but a new status quo has to be established in the new world we exist in.

 

What questions about the future of pharmaceutical sales do you have?

Read more >

Transforming the Workplace by Breaking Habits

Humans are creatures of habit. Whether it’s the tall, non-fat, extra foam vanilla latte that you can’t start your day without, the route you walk to work, or “your” table in the cafeteria, I’ll bet you follow a whole host of little routines every day. They put you at ease, remove the need to worry so much about what might be around the corner, and generally help you get through the day a little easier.

 

I’m not knocking it; I like my coffee as much as the next guy. But when this love of familiarity starts to infiltrate the way we work, it can become dead weight. So today, in this fourth blog in my series exploring the impact of the third Industrial Revolution on the financial services industry, we’re going to look at cultural change.

 

The Revolution from Within

 

As we’ve seen already, if you want to succeed in this new world of SMAC stack, cloud, and big data analytics, it’s essential that your business remains innovative and agile. You need to be able to change direction quickly when customer demand or the market calls for it. While driving up business velocity, organizations also need to optimize their productivity — there is no point in moving faster if you have to do everything twice. It’s also becoming increasingly important to attract and retain the best and brightest talent — including Millennials and Digital Natives — and to figure out ways to unlock the hidden intelligence buried in your organization.

 

At Intel, we are working on initiatives with the HR, IT, and facilities teams — both internally and at our customers’ sites — looking at how we can transform workplaces while inspiring employees to get on board with the changes. We’re exploring collaboration, facilities, and personal productivity to help change-wary workers see how they can benefit from more connected and efficient processes. Even simple things can make a big difference, like identifying the person who has the information you need for a project, finding a free conference room, or eliminating wires.

 

Creating a Better Way to Work

 

We have an exciting workplace transformation roadmap in place, with new user experiences for enterprise devices, which we shared at the Intel Developer Forum in San Francisco in September 2014. The new features include wireless display technology, wireless docking, wireless charging, and our You Are Your Password concept: a multifactor authentication model that uses biometrics, your phone, and your badge to identify you, dispensing with the need for employees to remember passwords.

 

Indeed, 2015 is set to see a lot of exciting advances and activity in this area. The key to making the most of them lies in making sure you communicate the benefits clearly to your teams, and share best practices with them. Once they see that by making a process more efficient they can spend an extra five minutes speaking to customers (or at their daily tai chi session), they’ll soon jump on board.

 

Tune in soon for the last blog in this series, where I’ll be discussing the importance of IT security for fostering trust in your brand’s identity.

 

To continue the conversation, let’s connect on Twitter.

 

Mike Blalock

Global Sales Director

Financial Services Industry, Intel

 

This is the fourth installment of a five part series on Tech & Finance. Click here to read blog 1, blog 2, and blog 3.

Read more >

Telco NFV and Security – Main Threads of Investigation

By Mike Bursell, Architect and Strategic Planner for the Software Defined Network Division at Intel and the chair of the ETSI Security Workgroup.

 

Telecom operators are getting very excited about network function virtualization (NFV). The basic premise is that operators get to leverage the virtualization technology created and perfected in the cloud to reduce their own CAPEX and OPEX. So great has been the enthusiasm, in fact, that NFV has crossed over and is now being used to also describe non-telecom deployments of network functions such as firewalls and routers.

 

A great deal of work and research is going on in areas where a telecom operator’s needs are different to those of other service providers. One such area is security. The ETSI NFV workgroup is a forum where operators, Telecom Equipment Manufacturers (TEMs), and other vendors have been meeting over the past two years to drive a consensus of understanding around the required NFV architecture and infrastructure. Within ETSI NFV, the “SEC” (Security) working group, of which I am the chair, is focusing on various NFV security architectural challenges. So far, the working group has published two documents:

 

 

The various issues that they address are worth discussing in more depth, and I plan to write some separate pieces in the future, but they can all be categorized as follows:

 

  • Host security
  • Infrastructure security
  • Virtualized network function (VNF)/tenant security
  • Trust management
  • Regulatory concerns

 

Let’s cover those briefly, one by one.

 

Host security

For NFV, the host is the hypervisor host (an older term is the VMM, or Virtual Machine Manager), which runs the virtual machines. In the future, hypervisor solutions are unlikely to be the only way of providing virtualization, but the types of security issues they raise are likely to be replicated with other technologies. There are two sub-categories: multi-tenant isolation and host compromise.

 

Infrastructure

The host is not the only component of the NFV infrastructure. There may be routers, switches, storage, and other elements that need to be considered; the infrastructure requirements, including the host and any virtual switches, local storage, and so on, should be included as part of the overall picture.

 

VNF/tenant security

The VNFs, and how they are protected from external threats, including the operator and each other in a multi-tenant environment, would fall under this point.

 

Trust management

Whenever a new component is incorporated into an NFV deployment, the question arises of how much – or whether – it should trust the other components. Sometimes trust is simple and does not need complex processes and technologies, but a number of the use cases that operators are interested in may require significant and complex trust relationships to be created, managed, and destroyed.

 

Regulatory concerns

In most markets, governments place regulatory constraints on telecom operators before they are allowed to offer services. These constraints may have security implications.

 

Although the ETSI NFV workgroup didn’t set out specifically to focus on problem areas, these categories have turned out to be useful for generating the concerns that need to be considered. In future blogs, I will consider these questions and a number of the possible answers.

Read more >

Unstructured Data Management: Finding Meaning in Unearthed Dark Data

Archaeologists dig up the earth looking for items from yesteryear; they find raw, untouched data — ecofacts, artifacts, architecture, tombs — and analyze it, hoping to uncover a snippet of past cultures or some buried treasure.

 

This is not unlike how organizations approach unstructured, or dark, data. Dark data represents a pooled set of untapped facts, documents, and media that sit stored and undisturbed until we dig into them, hoping to find the valuable gems in all the clutter that can give us opportunities for prediction and help us better understand the culture, strategy, or bottom line of our enterprise.

 

Dark Data Is Appearing – and Disappearing – at an Alarming Rate

 

With all the digitization of data we’ve seen since the late 20th century, we’ve got a data flood on our hands: in 2012 alone, we created 2.5 quintillion bytes of data per day, and that number has continued to grow at unprecedented rates since then. It’s estimated that in the next decade a whopping 90 percent of all data created will be unstructured, much of it dark data, which Gartner defines as “the information assets organizations collect, process and store during regular business activities, but generally fail to use for other purposes.”

 

A subset of this unstructured data will soon come from the growing popularity of the Internet of Things (IoT). According to Randy Bean at MIT Sloan Management Review, data generated by “things” is projected to grow from 2 percent in 2013 to 10 percent in 2020. Real-time data generated from the IoT will add a unique spin to the management of unstructured data, as the enterprise will need to find ways to use, process, and analyze this device-generated data as it occurs.

 



Finding and Using Dark Data Before It’s Archived

 

Dark data is not organized in a predefined, relational model database, like its structured counterpart. It’s variable and rich, and contains word processing documents, social media posts, images, presentations, and emails. The majority might be digital noise, but by linking unstructured and structured data, there is a real opportunity to make sense of this vast amount of information and unearth new intelligence.

 

Before we go on a digging expedition, however, it’s important to establish a system that can help your business analyze and create context around your dark data. An archaeologist only begins excavation once he or she formalizes an objective and surveys the land. Find out how much data you have and where it is. Find out what types of data you have. Find out what data should be destroyed, kept for further analysis, or migrated to a less expensive facility.

 

At Intel, we’ve employed a multiple-platform strategy for analyzing different data types, including an enterprise data warehouse (EDW) platform, an Apache Hadoop platform, and a low-cost massively parallel processing (MPP) platform. The Apache Hadoop platform is designed to process big batches of unstructured data, and Hadoop clusters work well with unstructured data because they act as a cheap storage repository where potentially valuable data can be stored until a strategy is implemented for its use.

 

While mining unstructured data can be a costly venture, it can deliver incredible value by pointing to trends that can cut cost, boost productivity, improve your ROI, and ultimately give you deeper insight into your organization.

 

For more resources on big data and predictive analytics, click here.

 

To continue this conversation, or to react to the topic, connect with me at @chris_p_intel or use #ITCenter.

Read more >

The Business Value of Enterprise Collaboration and the Intel 5th Gen Core Processor

“Capital isn’t that important in business. Experience isn’t that important. You can get both of these things. What is important is ideas.” – Harvey Firestone

 

The world is in need of a better way to work.



We come to the office with the expectation of getting things done. The workplace is supposed to be a technology-fueled hub for collaboration, connection, problem solving, and simplified information sharing. It’s where we collectively drive innovation in hopes of building a greater enterprise.

 

But when technology impedes that productive environment, employees lose valuable time. Frustration mounts, workflow is disrupted, ideas lie dormant, and collaboration grinds to a halt. And inevitably, the business loses money.

 

It’s time to change that. It’s time for business and technology to align to build a more constructive and inspiring environment. It’s time to talk about how Intel 5th Gen Core™ vPro™ processor-based systems will change the way we work.

 

The Power of 5th Gen Technology

 

When we imagined 5th Gen, this is what we saw — a workplace that is operating at the speed of your employees. A wire-free space that emphasizes mobility, flexibility, and maximum productivity without compromising user satisfaction. Technology that’s as fast as it is secure and manageable, that allows for greater information sharing and the delivery of actionable insights to the enterprise.

 

With Intel Wireless Display (Intel WiDi), Intel Wireless Docking, and the optimized performance of the 5th Gen Core vPro processor, we’re striving to bring you the workplace of the future. No cords and longer battery life mean more freedom and mobility. Wireless sharing capabilities lead to smoother teamwork. And our processor is equipped with Intel Identity Protection Technology that keeps users safe and secure.

 

The 5th Gen Core vPro family was built with a simple concept in mind: Technology should always act as an economic driver for your business. It should always be an enabler, never an inhibitor. Our desire was to minimize delays in daily processes so that employees can focus solely on the task at hand and maximize the probability of good ideas coming to light.

 

The Power of the Idea

 

If your employees are hindered by their devices, they’re not providing their fullest value to your organization. Think of the increments of time spent on menial tech-related tasks that detract from overall productivity. The tiny actions that add up to tremendous loss and drag down your business’s bottom line. The ideas that lie sleeping so long they are eventually forgotten.

 

Now think of what the 5th Gen Core vPro family could bring to your business. It’s time we change the way we work; join me as we enter the next phase of business computing and collaboration, and a new world of ideation for the enterprise.

 

To continue this conversation on Twitter, please use #ITCenter or #WorkingBetter.

Read more >