ADVISOR DETAILS

RECENT BLOG POSTS

A Bucket of Wings: A Case Study of Better-Informed Decisions

In my blog post Use Data To Support Arguments, Not Arguments To Support Data, I articulated how better-informed decisions are typically made and the role that business intelligence (BI) should play. Shortly after writing it, I experienced a real-life event that clearly illustrates the three main phases of “data-entangled decisions.”

 

Since my family likes to take a day off from cooking on Fridays, we recently visited the deli of our favorite organic grocery store. At the take-out bar, I noticed an unusually long line of people under a large sign reading, “In-House Made Wing Buckets. All You Can Fill. On Sale for $4.99, Regular $9.99.” Well, I love wings and couldn’t resist the temptation to get a few.

 

The opportunity was to add wings (one of my favorite appetizers) to my dinner. But instead of using the special wings bucket, I chose the regular salad bar container, which was priced at $8.99 per pound regardless of the contents. I reasoned that the regular container was an easier-to-use option (shaped like a plate) and a cheaper option (since I was buying only a few wings). My assumptions about the best container to use led to a split-second decision—I “blinked” instead of “thinking twice.”

 

Interestingly, a nice employee saw me putting the wings in the regular container and approached me. Wary of my reaction, he politely reminded me of the sale and pointed out that I might pay more with the regular container because the wing bucket had a fixed price (managed risk).

 

Although at first this sounded reasonable, when I asked whether my container would weigh enough to result in a higher cost, he took it to one of the scales behind the counter and discovered it held less than half a pound. This entire ordeal took less than 30 seconds, and now I had the information I needed to make a better-informed decision.

 

This clinched it, because now two factors were in my favor. I knew that less than half a pound at the regular $8.99-per-pound price would cost less than the $4.99 fixed-price bucket. And I knew that the weight of the regular deli container would be deducted at the register, resulting in an even lower price. I ended up paying $4.02.
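
As a quick sanity check on those numbers (the exact net weight of the wings isn't stated, so it is backed out of the $4.02 total and should be treated as approximate):

    # Sanity check of the price comparison above. The net weight is implied
    # from the $4.02 total, so treat it as approximate.
    price_per_pound = 8.99          # regular salad-bar container, per pound
    bucket_price = 4.99             # fixed-price wing bucket on sale
    final_total = 4.02              # what was actually paid

    half_pound_cost = 0.5 * price_per_pound         # just under $4.50, already below $4.99
    implied_net_weight = final_total / price_per_pound

    print(f"Half a pound by weight: ${half_pound_cost:.2f} (vs ${bucket_price:.2f} bucket)")
    print(f"Implied net weight paid for: {implied_net_weight:.2f} lb")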

 

This everyday event provides a good story to demonstrate the three phases as they relate to the business of better-informed decisions and the role of BI, or data in general.

 

Phase 1: Reaction

When the business opportunity (the wing purchase) presented itself, I made some assumptions with limited data and formed my preliminary conclusion. If it weren’t for the store employee, I would have proceeded to the cash register ignorant of the relevant data. Sometimes in business, we tend to do precisely the same thing: we either fail to validate our initial assumptions or we make decisions based on preliminary conclusions.

 

Phase 2: Validation

By weighing the container, I was able to obtain additional data and validate my assumptions quickly enough to take advantage of the opportunity, which is exactly what BI is supposed to do. With data, I was able to conclude with a high degree of confidence that I had chosen the right approach and mitigated the risk. This is also typical of how BI can shed more light on many business decisions.

 

Phase 3: Execution

I made my decision by taking into account reliable data to support my argument, not arguments to support data. I was able to do this because I (as the decision maker) had an interest in relying on data and the data I needed was available to me in an objective form (use of the scale). This allowed me to eliminate any false personal judgments (like my initial assumptions or the employee’s recommendation).

  • From the beginning, I could have disregarded the employee’s warning or simply not cared much about the final price. If that had been my attitude, then no data or BI tool would have made a difference in my final decision. And I might have been wrong.
  • On the other hand, if I had listened to the initial argument by that nice employee without backing it up with data, I would have been equally wrong. I would have made a bad decision based on what appeared to be a reasonable argument that was actually flawed.
  • When I insisted on asking the question that would validate the employee’s argument, I took a step that is the business equivalent of insisting on more data because we may not have enough to make a decision.
  • By resorting to an objective and reliable method (using the scale), I was able to remove personal judgments.

 

In 20/20 Hindsight

Now, I realize that business decisions are never this simple. Organizations’ risk is likely measured in the millions of dollars, not cents. And sometimes we don’t have the luxury of finding objective tools (such as the scale) in time to support our decision making. However, I believe that many business decisions follow the same sequence.

 

Consider the implications if this were a business decision that resulted in an error of $100 in the wrong direction. Now assume that these types of less-informed or uninformed decisions were made once a week throughout the year by 1,000 employees. The impact would be more than $5 million.
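
The rough arithmetic behind that figure, using the hypothetical numbers above:

    # A $100 error, made once a week for a year, by 1,000 employees.
    error_cost = 100
    weeks_per_year = 52
    employees = 1000
    annual_impact = error_cost * weeks_per_year * employees
    print(f"${annual_impact:,}")    # $5,200,000 -- roughly $5 million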

 

Hence, the cost to our organization increases as:

  • The cost of the error rises
  • Errors are made more frequently
  • The number of employees making the error grows

 

Bottom Line

Better-informed decisions start and end with leadership that is keen to promote the culture of data-driven decision making. BI, if designed and implemented effectively, can be the framework that enables organizations of all sizes to drive growth and profitability.

 

What other obstacles do you face in making better-informed decisions?

 

Connect with me on Twitter (@KaanTurnali) and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.


Malicious links could jump the air gap with the Tone Chrome extension

The new Google Tone extension is simple and elegant. On one machine, the browser generates audio tones that browsers on other machines listen for and then use to open a website. Brilliant. No need to be connected to the same network, spell out a long URL to your neighbor, or cut and paste a web address into a text message for everyone to join. But it carries some serious potential risks.
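
The post doesn't detail Tone's actual encoding, but the general data-over-sound idea is easy to illustrate. The sketch below is a toy, not Google's implementation; the frequency plan, message, and file name are invented purely for illustration. It maps each character of a short URL to its own sine-wave tone and writes the result to a WAV file:

    # Toy illustration of data-over-sound: one sine-wave tone per character.
    # NOT Tone's real encoding -- the frequency plan below is made up.
    import math
    import struct
    import wave

    SAMPLE_RATE = 44100      # samples per second
    TONE_SECONDS = 0.15      # duration of each character's tone
    BASE_FREQ = 1000.0       # frequency assigned to the first printable character
    STEP_FREQ = 50.0         # spacing between adjacent characters' frequencies

    def char_to_freq(ch):
        # Map printable ASCII (space = 32) onto a ladder of frequencies.
        return BASE_FREQ + (ord(ch) - 32) * STEP_FREQ

    def encode_to_wav(message, path):
        frames = bytearray()
        for ch in message:
            freq = char_to_freq(ch)
            for n in range(int(SAMPLE_RATE * TONE_SECONDS)):
                sample = math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
                frames += struct.pack("<h", int(sample * 32767))   # 16-bit PCM
        wav = wave.open(path, "wb")
        wav.setnchannels(1)           # mono
        wav.setsampwidth(2)           # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))
        wav.close()

    encode_to_wav("http://example.com", "tone_demo.wav")

A listening browser or device simply does the reverse, analyzing short slices of microphone input to recover the characters, which is what makes distribution without a shared network, and the risks described below, possible.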


Imagine being on an audio bridge, in a coffee shop, or in a crowded space with bored people on their phones, tablets, or laptops. One compromised system may be able to propagate and infect others on different networks, effectively jumping the proverbial ‘air gap’. Malware could leverage the Tone extension and introduce a series of audible instructions which, if enabled on targeted devices, would direct everyone to automatically open a malicious website, download malware, or be spammed with phishing messages.

 

Will such tones eventually be embedded in emails, documents, and texts? A Tone icon takes less space than a URL. It is convenient, but it obfuscates the destination, which may be a phishing site or other dangerous location. Tone could also be used to share files (an early use case for the Google team), and therefore it could share malware without the devices ever being on the same network. This bypasses a number of standard security controls.

 

On the less malicious side, but still annoying, what about walking by a billboard and having a tone open advertisements and marketing pages in your browser? The same could happen as you shop in a store, with tones pushing sales, products, and coupons. Will this open a new can of undesired marketing intruding into our lives?

 

That said, I must admit I like the technology. It has obviously useful functions, fills a need, and shows Google’s innovation in making technology a facilitator of information sharing for people. But we do need controls to protect against unintended and undesired usages, as well as security to protect against equally impressive malicious innovations. My advice: use with care. Enterprises should probably not enable it just yet, until the dust settles. I for one will be watching how creatively attackers wield this functionality and how long it takes for security companies to respond to this new type of threat.

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist


Why Ransomware will Rise in 2015

Be afraid. Seriously. Ransomware is growing up fast, causing painful disruptions across the Internet, and it will get much worse in 2015.


Ransomware is the criminal activity of taking hostage a victim’s important digital files and demanding a ransom payment to return access to the rightful owner. In most cases the files are never removed, simply encrypted in place with a very strong digital lock, denying access to the user. If you want the key to restore access to precious family photos, financial documents, or business files, you must pay.


An entertaining and enlightening opinion-editorial piece in The New York Times highlighted how an everyday citizen was impacted, the difficulties in paying the ransom, and how professional the attackers’ support structure has become.

 

Everyone is at risk. Recently, several law enforcement agencies and city governments were impacted, some of which paid the attackers for their “Decrypt Service.” This form of digital extortion has been around for some time, but until recently it was not much of a concern. It is now rapidly gaining in popularity as it proves an effective way of fleecing money from victims both large and small.

 

With success comes the motivation to continue and improve. Malware writers are investing in new capabilities, such as Elliptic Curve Cryptography for more robust locks, using the TOR network for covert communications, including customer support features to help victims pay with crypto-currency, and expanding the technology to target more than just static files.

 

Attackers are showing how smart, strategic, and dedicated they are. They are working hard to bypass evolving security controls and processes. It is a race. Host-based security is working to better identify malware as it lands on the device, but a new variant, Fessleak, bypasses the need to install files on disk by delivering malicious code directly into system memory. TorrentLocker has adapted to avoid spam filters on email systems. OphionLocker sneaks past web-browsing controls by using malicious advertising networks to infect unsuspecting surfers.

 

One of the most disturbing advances is newcomer RansomWeb’s ability to target databases and backups. This opens up an entirely new market for attackers. Web databases have traditionally been safe from such attacks due to the technical complexity of encrypting an active database and the likelihood of good backups that could be used in the event of an infection. RansomWeb, and the future generations that adopt its methods, will target more businesses. Every person and company on the web could come across these dastardly traps and should be worried.


Cybersecurity Predictions

 

In this year’s Top 10 Cybersecurity Predictions, I forecast the growth of ransomware and a shift toward more personal attacks. The short-term outlook definitely leans toward the attackers. In 2015 we will see the likes of CryptoWall, CoinVault, CryptoLocker, RansomWeb, OphionLocker, Fessleak, TeslaCrypt, TorrentLocker, Cryptobit and others continue to evolve and succeed at victimizing users across the globe. It will take the very best security minds and a depth of capabilities working together to stunt the growth of ransomware.


Security organizations will eventually get the upper hand, but it will take time, innovation, and a coordinated effort. Until then, do the best you can in the face of this threat. Be careful and follow the top practices to protect against ransomware:


  1. A layered defense (host, network, web, email, etc.) to block malware delivery
  2. Savvy web browsing and email practices to reduce the inadvertent risk of infection
  3. Be prepared to immediately disconnect from the network if you suspect malware has begun encrypting files
  4. Healthy, regular backups in the event you become a victim and must recover (see the sketch after this list)
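
As a concrete starting point for practice 4, here is a minimal, hedged sketch of a versioned backup job in Python; the source and destination paths are placeholders, and a real strategy should also keep offline or offsite copies that ransomware on the host cannot reach:

    # Minimal sketch of practice #4: keep regular, versioned backups so that
    # encrypted files can be restored without paying. Paths are hypothetical.
    import shutil
    import time
    from pathlib import Path

    SOURCE = Path.home() / "Documents"          # hypothetical data to protect
    BACKUP_ROOT = Path("/mnt/backup_drive")     # hypothetical backup target

    def make_backup():
        stamp = time.strftime("%Y%m%d-%H%M%S")
        archive_base = BACKUP_ROOT / f"documents-{stamp}"
        # Creates documents-<timestamp>.zip containing a copy of SOURCE.
        shutil.make_archive(str(archive_base), "zip", root_dir=str(SOURCE))

    if __name__ == "__main__":
        make_backup()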

 

Alternatively, if you choose not to take protective measures, I recommend becoming familiar with cryptocurrency transfers and stress management meditation techniques.

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist


Using Electronic Data Exchange to Coordinate Care and Improve Member Experience

The health and well-being of any workforce has a direct impact on worker productivity, efficiency and happiness, all critical components of any successful organization. With this in mind, Intel has developed a next-generation healthcare program, called Connected Care, which includes an integrated delivery system based on a patient-centered medical home (PCMH) approach to care.

The shift to value-based compensation and team-based care is driving the need for improved collaboration and patient data sharing between a growing number of providers and medical systems. While we’ve successfully introduced the Connected Care program in smaller locations, bringing it to Oregon and the larger Portland metropolitan area presented us with a common healthcare IT challenge: interoperability.

 

Advanced Interoperability Delivers Better Experiences for Clinicians, Patients

 

Intel is using industry standards geared toward advancing interoperability in healthcare to address these challenges. The ability to quickly share clinical information between on-site Health for Life Center clinics and delivery system partners (DSPs) enables:

 

  • Efficient and seamless experiences for members
  • Informed decision-making by clinicians
  • Improved patient safety
  • Increased provider efficiency
  • Reduced waste in the delivery of healthcare, by avoiding redundant testing

 

These improvements will help us make the Institute for Healthcare Improvement’s (IHI’s) Triple Aim a reality, by improving the patient experience (quality and satisfaction), the health of populations, and reducing the per-capita cost of health care.

 

Kaiser and Providence Part of Intel’s Connected Care Program

 

Intel’s Connected Care program is offering Intel employees and their dependents two new options in Oregon. Kaiser Permanente Connected Care and Providence Health & Services Connected Care have both been designed to meet the following requirements of Intel and its employees:

 

  • “Optimize my time” – member and provider have more quality interactions
  • “Don’t make me do your work” – no longer rely on members to provide medical history
  • “Respect my financial health” – lower incidence of dropped hand-offs/errors
  • “Seamless member and provider experience” – based on bi-directional flow of clinical data

 

Now that we have eliminated the interoperability barrier, we can enable strong coordination between providers at Health For Life Centers (on-campus clinics at Intel) and the Kaiser and Providence network providers, quickly sharing vital electronic health records (EHRs) between the different systems each organization uses.

 

In our efforts to deliver optimal care to every Intel employee, we sought solutions that would ensure all providers serving Intel Connected Care members are able to see an up-to-date patient health record, with accurate medications, allergies, problem lists and other key health data, every time a Connected Care member needs care.
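
The post doesn't name the specific exchange standard behind these integrations, but as a hedged illustration of the kind of lookup described above, here is a short sketch against an HL7 FHIR server (one widely used interoperability API); the endpoint URL and patient ID are placeholders, not Connected Care systems:

    # Hypothetical sketch of pulling a member's current medications and
    # allergies from a FHIR R4 server before a visit. Endpoint and patient
    # ID are placeholders; the actual Connected Care integrations may use
    # different standards and systems.
    import requests

    FHIR_BASE = "https://fhir.example-dsp.org/r4"   # hypothetical DSP endpoint
    PATIENT_ID = "12345"                            # hypothetical member

    def fetch(resource, params):
        resp = requests.get(f"{FHIR_BASE}/{resource}", params=params, timeout=10)
        resp.raise_for_status()
        return resp.json()

    meds = fetch("MedicationRequest", {"patient": PATIENT_ID, "status": "active"})
    allergies = fetch("AllergyIntolerance", {"patient": PATIENT_ID})

    print("Active medication orders:", meds.get("total"))
    print("Recorded allergies/intolerances:", allergies.get("total"))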

 

Learn More: Advancing Interoperability in Healthcare

 

What questions do you have?

 

Prashant Shah is a Healthcare Architect with Intel Health & Life Sciences


Today We Have True Clinical-Grade Devices

When I used to work for the UK National Health Service, I encouraged doctors and nurses to use mobile devices. But that was 15 years ago, and the devices available only had about two hours of battery life and weighed a ton. In other words, my IT colleagues and I were a bit overly optimistic about the mobile devices of the time being up to the task of supporting clinicians’ needs.

 

So it’s great to be able to stand up in front of health professionals today and genuinely say that we now have a number of clinical-grade devices available. They come in all shapes and sizes. Many can be sanitized and can be dropped without damaging them. And they often have a long battery life that lasts the length of a clinician’s shift. The work Intel has done over the last few years on improving device power usage and efficiency has helped drive the advancements in clinical-grade devices.

 

It is very clear that each role in health has different needs. And as you can see from the following real-world examples, today’s clinical-grade devices are up to the task whatever the role.

 

Wit-Gele Kruis nurses are using Windows 8 Dell Venue 11 Pro tablets to help them provide better care to elderly patients at home. The Belgian home-nursing organization selected the tablets based on feedback from the nurses who would be using them. “We opted for Dell mainly because of better battery life compared to the old devices,” says Marie-Jeanne Vandormael, Quality Manager, Inspection Service, at Wit-Gele Kruis, Limburg. “The new Dell tablets last at least two days without needing a charge. Our old devices lasted just four hours. Also, the Dell tablets are lightweight and sit nicely in the hand, and they have a built-in electronic ID smartcard reader, which we use daily to confirm our visits.”

 

In northern California, Dr. Brian Keeffe, a cardiologist at Marin General Hospital, loves that he can use the Microsoft Surface Pro 3 as either a tablet or a desktop computer, depending on where he is and the task at hand (watch video below).

 

 

When he’s with patients, Dr. Keeffe uses it in its tablet form. “With my Surface, I am able to investigate all of the clinical data available to me while sitting face-to-face with my patients and maintaining eye contact,” says Dr. Keeffe.

 

And when he wants to use his Surface Pro 3 as a desktop computer, Dr. Keeffe pops it into the Surface docking station, so he can be connected to multiple monitors, keyboards, mice, and other peripherals. ”In this setup, I can do all of my charting, voice recognition, and administrative work during the day on the Surface,” explains Dr. Keeffe.

 

These are just two examples of the wide range of devices on the market today that meet the needs of different roles in health. So if you’re an IT professional recommending mobile devices to your clinicians, unlike me 15 years ago, you can look them in the eye and tell them you have a number of great clinical-grade options to show them.

 

Gareth Hall is Director, Mobility and Devices, WW Health at Microsoft


OPNFV Day at OpenStack Summit

By Tony Dempsey


I’m here at the OpenStack Summit in Vancouver, BC, and wanted to find out more about OPNFV, a cross-industry initiative to develop a reference platform for operators to use in their NFV deployments. Intel is a leading contributor to OPNFV, and I was keen to learn more, so I attended a special event held as part of the conference.

 

Heather Kirksey (OPNFV Director) kicked off today’s event by describing what OPNFV is all about, including the history of why OPNFV was formed and an overview of the areas OPNFV is focused on. OPNFV is a carrier-grade, integrated, open source platform intended to accelerate the introduction of new NFV products and services. The initiative grew out of the ETSI NFV ISG’s work, and its initial focus is on the NFVI layer.

 

OPNFV’s first release will be called Arno (releases are named after rivers) and will include OpenStack, OpenDaylight, and Open vSwitch. No date for the release is available just yet, but it is thought to be soon. Notably, Arno is expected to be used in lab environments initially rather than in commercial deployments. High Availability (HA) will be part of the first release (the control and deployment side is supported). The plan is to make OpenStack itself telco-grade rather than to create a separate telco-grade version of OpenStack. AT&T gave an example of how they plan to use the initial Arno release: they will bring it into their lab, add additional elements to it, and test for performance and security. They see this release very much as a means to uncover gaps in open source projects, help identify fixes, and upstream those fixes. OPNFV is committed to working with the upstream communities to ensure a good relationship. Down the road it might be possible for OPNFV releases to be deployed by service providers, but currently this is a development tool.

 

An overview of OPNFV’s Continuous Integration (CI) activities was given, along with a demo. The aim of the CI activity is to give fast feedback to developers in order to increase and improve the rate at which software is developed. Chris Price (TSC Chair) spoke about requirements for the projects and working with upstream communities. According to Chris, OPNFV’s focus is working with the open source projects to define the issues, understand which open source community can likely solve each problem, work with that community to find a solution, and then upstream that solution. Mark Shuttleworth (founder of Canonical) gave an auto-scaling demo showing a live vIMS core (from Metaswitch) with CSCF auto-scaling running on top of Arno.

 

I will be on the lookout for more OPNFV news throughout the Summit to share. In the meantime, check out Intel Network Builders for more information on Intel’s support of OPNFV and solutions delivery from the networking ecosystem.


Intel’s Role in Building Diversity at the OpenStack Summit

By Suzi Jewett, Diversity & Inclusion Manager, Data Center Group, Intel

 

 

I have the fantastic job of driving diversity and inclusion strategy for the Data Center Group at Intel. For me it is the perfect opportunity to align my skills, passions, and business imperatives in a full-time role. I have always had the skills and passion, but it was not until recently that the business imperative grew within the company to the point that we needed a full-time person in this role, and in many similar roles throughout Intel. As a female mechanical engineer, I have always known I am one of the few, and at times that was awkward, but even I didn’t know the business impact of not having diverse teams.


Over the last two to three years, the evidence on the bottom-line business results of having diverse people on teams and in leadership positions has become clear and overwhelming: we can no longer be okay with flat or dwindling representation of diverse people on our teams. We also know that all employees have more passion for their work and are able to bring their whole selves to work when we have an inclusive environment. Therefore, we will not achieve the business imperatives we need to unless we embrace diverse backgrounds, experiences, and thoughts in our culture and in our every decision.

 

Within the Data Center Group one area that we recognize as well below where we need it to be is female participation in open source technologies. So, I decided that we should host a networking event for women at the OpenStack Summit this year and really start making our mark in increasing the number of women in the field.

 

Today I had my first opportunity to interact with people working in OpenStack at the Women of OpenStack event. We had a beautiful cruise around the Vancouver Harbor and then chatted the night away at Black + Blue Steakhouse. About 125 women attended, along with a handful of male allies (yeah!). The event was put on by the OpenStack Foundation and sponsored by Intel and IBM. The excitement and non-stop conversation were energizing to be a part of, and it was obvious that the women loved having some kindred spirits to talk tech and talk life with. I was able to learn more about how OpenStack works, why it’s important, and the passion of everyone in the room to work together to make it better. I learned that many of the companies design features together, meeting weekly and assigning ownership to divvy up the work between the companies to complete feature delivery to the code. Being new to open source software, I was amazed that this is even possible, and excited to see the opportunity for real diversity in our teams, because collaborative design can bring in a vast amount of diversity and create a better end product.

 

A month or so ago I got asked to help create a video to be used today to highlight the work Intel is doing in OpenStack and the importance to Intel and the industry of having women as contributors. The video was shown tonight along with a great video from IBM and got lots of applause and support throughout the venue as different Intel women appeared to talk about their experiences. Our Intel ‘stars’ were a hit and it was great to have them be recognized for their technical contributions to the code and leadership efforts for Women of OpenStack. What’s even more exciting is that this video (LINK HERE) will play at a keynote this week for all 5000 attendees to highlight what Intel is doing to foster inclusiveness and diversity in OpenStack!

 


Accelerating Business Intelligence and Insights

By Mike Pearce, Ph.D. Intel Developer Evangelist for the IDZ Server Community

 

 

On May 5, 2015, Intel Corporation announced the release of its highly anticipated Intel® Xeon® processor E7 v3 family.  One key area of focus for the new processor family is that it is designed to accelerate business insight and optimize business operations—in healthcare, financial, enterprise data center, and telecommunications environments—through real-time analytics. The new Xeon processor is a game-changer for those organizations seeking better decision-making, improved operational efficiency, and a competitive edge.

 

The Intel Xeon processor E7 v3 family’s performance, memory capacity, and advanced reliability now make mainstream adoption of real-time analytics possible. The rise of the digital service economy, and the recognized potential of “big data,” open new opportunities for organizations to process, analyze, and extract real-time insights. The Intel Xeon processor E7 v3 family tames the large volumes of data accumulated by cloud-based services, social media networks, and intelligent sensors, and enables data analytics insights, aided by optimized software solutions.

 

A key enhancement to the new processor family is its increased memory capacity – the industry’s largest per socket1 – enabling entire datasets to be analyzed directly in high-performance, low-latency memory rather than traditional disk-based storage. For software solutions running on and/or optimized for the new Xeon processor family, this means businesses can now obtain real-time analytics to accelerate decision-making—such as analyzing and reacting to complex global sales data in minutes, not hours.  Retailers can personalize a customer’s shopping experience based on real-time activity, so they can capitalize on opportunities to up-sell and cross-sell.  Healthcare organizations can instantly monitor clinical data from electronic health records and other medical systems to improve treatment plans and patient outcomes.

 

By automatically analyzing very large amounts of data streaming in from various sources (e.g., utility monitors, global weather readings, and transportation systems data, among others), organizations can deliver real-time, business-critical services to optimize operations and unleash new business opportunities. With the latest Xeon processors, businesses can expect improved performance from their applications, and realize greater ROI from their software investments.

 

 

Real Time Analytics: Intelligence Begins with Intel

 

Today, organizations like IBM, SAS, and Software AG are placing increased emphasis on business-intelligence (BI) strategies. The ability to extract insights from data is something customers expect from their software in order to maintain a competitive edge. Below are just a few examples of how these firms are able to use the new Intel Xeon processor E7 v3 family to meet and exceed customer expectations.

 

Intel and IBM have collaborated closely on a hardware/software big data analytics combination that can accommodate any size workload. IBM DB2* with BLU Acceleration is a next-generation database technology and a game-changer for in-memory computing. When run on servers with Intel’s latest processors, IBM DB2 with BLU Acceleration optimizes CPU cache and system memory to deliver breakthrough performance for speed-of-thought analytics. Notably, the same workload can be processed 246 times faster3 on the latest processor than with the previous version, IBM DB2 10.1, running on the Intel Xeon processor E7-4870.
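
As a quick check, the 246x figure can be reproduced from the timings given in footnote 3 (about 3.58 hours on the previous-generation configuration versus about 52.3 seconds on the new one):

    # Reproducing the "246 times faster" figure from the footnote 3 timings.
    baseline_seconds = 3.58 * 3600    # previous-generation run: ~3.58 hours
    new_seconds = 52.3                # Intel Xeon E7 v3 run: ~52.3 seconds
    print(round(baseline_seconds / new_seconds))   # ~246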

 

By running IBM DB2 with BLU Acceleration on servers powered by the new generation of Intel processors, users can quickly and easily transform a torrent of data into valuable, contextualized business insights. Complex queries that once took hours or days to yield insights can now be analyzed as fast as the data is gathered.  See how to capture and capitalize on business intelligence with Intel and IBM.

 

From a performance perspective, Apama* streaming analytics has proven to be equally impressive. Apama (a division of Software AG) is a complex event processing engine that looks at streams of incoming data, then filters, analyzes, and takes automated action on that fast-moving big data. Benchmarking tests have shown huge performance gains with the newest Intel Xeon processors. Test results show 59 percent higher throughput with Apama running on a server powered by the Intel Xeon processor E7 v3 family compared to the previous-generation processor.4

 

Drawing on this level of processing power, the Apama platform can tap the value hidden in streaming data to uncover critical events and trends in real time. Users can take real-time action on customer behaviors, instantly identify unusual behavior or possible fraud, and rapidly detect faulty market trades, among other real-world applications. For more information, watch the video on Driving Big Data Insight from Software AG. This infographic shows Apama performance gains achieved when running its software on the newest Intel Xeon processors.

 

SAS applications provide a unified and scalable platform for predictive modeling, data mining, text analytics, forecasting, and other advanced analytics and business intelligence solutions. Running SAS applications on the latest Xeon processors provides an advanced platform that can help increase performance and headroom, while dramatically reducing infrastructure cost and complexity. It also helps make analytics more approachable for end customers. This video illustrates how the combination of SAS and Intel® technologies delivers the performance and scale to enable self-service tools for analytics, with optimized support for new, transformative applications. Further, by combining SAS* Analytics 9.4 with the Intel Xeon processor E7 v3 family and the Intel® Solid-State Drive Data Center Family for PCIe*, customers can experience throughput gains of up to 72 percent.5

 

The new Intel Xeon processor E7 v3 family’s ability to drive new levels of application performance also extends to healthcare. To accelerate Epic* EMR’s data-driven healthcare workloads and deliver reliable, affordable performance and scalability for other healthcare applications, the company needed a very robust, high-throughput foundation for data-intensive computing. Epic’s engineers benchmark-tested a new generation of key technologies, including a high-performance data platform from InterSystems*, new virtualization tools from VMware*, and the Intel Xeon processor E7 v3 family. The result was an increase in database scalability of 60 percent,6,7 a level of performance that can keep pace with the rising data access demands in the healthcare enterprise while creating a more reliable, cost-effective, and agile data center. With this kind of performance improvement, healthcare organizations can deliver increasingly sophisticated analytics and turn clinical data into actionable insight to improve treatment plans and, ultimately, patient outcomes.

 

These are only a handful of the optimized software solutions that, when powered by the latest generation of Intel processors, enable tremendous business benefits and competitive advantage. With highly improved performance, memory capacity, and scalability, the Intel Xeon processor E7 v3 family helps deliver more sockets, heightened security, increased data center efficiency, and the critical reliability to handle any workload across a range of industries, so that your data center can bring your business’s best ideas to life. To learn more, visit our software solutions page and take a look at our Enabled Applications Marketing Guide.

 

 

 

 

 

 

1 Intel Xeon processor E7 v3 family provides the largest memory footprint of 1.5 TB per socket compared to up to 1TB per socket delivered by alternative architectures, based on published specs.

2 Up to 6x business processing application performance improvement claim based on SAP* OLTP internal in-memory workload measuring transactions per minute (tpm) on SuSE* Linux* Enterprise Server 11 SP3. Configurations: 1) Baseline 1.0: 4S Intel® Xeon® processor E7-4890 v2, 512 GB memory, SAP HANA* 1 SPS08. 2) Up to 6x more tpm: 4S Intel® Xeon® processor E7-8890 v3, 512 GB memory, SAP HANA* 1 SPS09, which includes 1.8x improvement from general software tuning, 1.5x generational scaling, and an additional boost of 2.2x for enabling Intel TSX.

3 Software and workloads used in the performance test may have been optimized for performance only on Intel® microprocessors. Previous generation baseline configuration: SuSE Linux Enterprise Server 11 SP3 x86-64, IBM DB2* 10.1 + 4-socket Intel® Xeon® processor E7-4870 using IBM Gen3 XIV FC SAN solution completing the queries in about 3.58 hours.  ‘New Generation’ new configuration: Red Hat* Enterprise LINUX 6.5, IBM DB2 10.5 with BLU Acceleration + 4-socket Intel® Xeon® processor E7-8890 v3 using tables in-memory (1 TB total) completing the same queries in about 52.3 seconds.  For more complete information visit http://www.intel.com/performance/datacenter

4 One server was powered by a four-socket Intel® Xeon® processor E7-8890 v3 and another server with a four-socket Intel Xeon processor E7-4890 v2. Each server was configured with 512 GB DDR4 DRAM, Red Hat Enterprise Linux 6.5*, and Apama 5.2*. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

5 Up to 1.72x generational claim based on SAS* Mixed Analytics workload measuring sessions per hour using SAS* Business Analytics 9.4 M2 on Red Hat* Enterprise Linux* 7. Configurations: 1) Baseline: 4S Intel® Xeon® processor E7-4890 v2, 512 GB DDR3-1066 memory, 16x 800 GB Intel® Solid-State Drive Data Center S3700, scoring 0.11 sessions/hour. 2) Up to 1.72x more sessions per hour: 4S Intel® Xeon® processor E7-8890 v3, 512 GB DDR4-1600 memory, 4x 2.0 TB Intel® Solid-State Drive Data Center P3700 + 8x 800 GB Intel® Solid-State Drive Data Center S3700, scoring 0.19 sessions/hour.

6 Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to www.intel.com/performance

7 Intel does not control or audit the design or implementation of third party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported and confirm whether the referenced benchmark data are accurate and reflect performance of systems available for purchase.


Introduction to Intel Ethernet Flow Director

By David Fair, Unified Networking Marketing Manager, Intel Networking Division

 

Certainly one of the miracles of technology is that Ethernet continues to be a fast-growing technology 40 years after its initial definition. That was May 23, 1973, when Bob Metcalfe wrote his memo to his Xerox PARC managers proposing “Ethernet.” To put things in perspective, 1973 was the year a signed ceasefire ended the Vietnam War. The U.S. Supreme Court issued its Roe v. Wade decision. Pink Floyd released “Dark Side of the Moon.”

 

In New York City, Motorola made the first handheld mobile phone call (and, no, it would not fit in your pocket).  1973 was four years before the first Apple II computer became available, and eight years before the launch of the first IBM PC. In 1973, all consumer music was analog: vinyl LPs and tape.  It would be nine more years before consumer digital audio arrived in the form of the compact disc—which, ironically, has long since been eclipsed by Ethernet packets as the primary way digital audio gets to consumers.

 


 

The key reason for Ethernet’s longevity, IMHO, is its uncanny, Darwinian ability to evolve to adapt to ever-changing technology landscapes.  A tome could be written about the many technological challenges to Ethernet and its evolutionary response, but I want to focus here on just one of these: the emergence of multi-core processors in the first decade of this century.

 

The problem Bob Metcalfe was trying to solve was how to get packets of data from computer to computer and, of course, to Xerox laser printers. But multi-core challenges that paradigm, because Ethernet’s job, as Bob defined it, is done when data gets to a computer’s processor, before it reaches the correct core in that processor waiting to consume that data.

 

Intel developed a technology to help address that problem, and we call it Intel® Ethernet Flow Director.  We implemented it in all of Intel’s most current 10GbE and 40GbE controllers. What Intel® Ethernet Flow Director does, in a nutshell, is establish an affinity between a flow of Ethernet traffic and the specific core in a processor waiting to consume that traffic.

 

I encourage you to watch a two and a half minute video explanation of how Intel® Ethernet Flow Director works.  If that, as I hope, whets your appetite to learn more about this Intel technology, we also have a white paper that delves into deeper details with an illustration of what Intel® Ethernet Flow Director does for a “network stress test” application like Memcached.  I hope you find both the video and white paper enjoyable and illuminating.
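
As a rough idea of how such a flow-to-core affinity is typically expressed on Linux, the sketch below drives ethtool's ntuple (flow steering) interface, which Intel's 10GbE/40GbE drivers commonly use for this purpose, for a Memcached-style workload. The interface name, port, and queue number are placeholders, and exact option support depends on the NIC and driver, so treat it as illustrative rather than a definitive recipe:

    # Illustrative sketch (requires root): steer a Memcached-style flow
    # (TCP port 11211) to a specific RX queue using ethtool's ntuple rules.
    # Interface, port, and queue values are hypothetical placeholders.
    import subprocess

    IFACE = "eth0"        # hypothetical interface backed by an Intel NIC
    PORT = "11211"        # Memcached's default TCP port
    QUEUE = "6"           # RX queue serviced by the core running Memcached

    def run(*args):
        print("+", " ".join(args))          # echo the command being run
        subprocess.run(args, check=True)

    run("ethtool", "-K", IFACE, "ntuple", "on")        # enable ntuple filters
    run("ethtool", "-N", IFACE, "flow-type", "tcp4",
        "dst-port", PORT, "action", QUEUE)             # steer the flow to a queue
    run("ethtool", "-n", IFACE)                        # show configured rules

The effect is that packets for that port land on one RX queue, whose interrupt can then be pinned to the core running the consuming application.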

 

Intel, the Intel logo, and Intel Ethernet Flow Director are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others.


Intel and Citrix Take 3D Visualization to Remote Engineers

In today’s world, engineering teams can be located just about anywhere in the world, and the engineers themselves can work from just about any location, including home offices. This geographic dispersion creates a dilemma for corporations that need to arm engineers with tools that make them more productive while simultaneously protecting valuable intellectual property—and doing it all in an affordable manner.

 

Those goals are at the heart of hosted workstations that leverage new combinations of technologies from Intel and Citrix*. These solutions, unveiled this week at the Citrix Synergy 2015 show in Orlando, allow engineers to work with demanding 3D graphics applications from virtually anywhere in the world, with all data and applications hosted in a secure data center. Remote users can work from the same data set, with no need for high-volume data transfers, while enjoying the benefits of fast, clear graphics running on a dense, cost-effective infrastructure.

 

These solutions are in the spotlight at Citrix Synergy. Event participants had the opportunity to see demos of remote workstations capitalizing on the capabilities of the Intel® Xeon® processor E3-1200 product family and Citrix XenApp*, XenServer*, XenDesktop*, and HDX 3D Pro* software.

 

Show participants also had a chance to see demos of graphics passthrough with Intel® GVT-d in Citrix XenServer* 6.5, running Autodesk* Inventor*, SOLIDWORKS*, and Autodesk Revit* software. Other highlights included a technology preview of Intel GVT-g with Citrix HDX 3D Pro running Autodesk AutoCAD*, Adobe* Photoshop*, and Google* Earth.

 

Intel GVT-d and Intel GVT-g are two of the variants of Intel® Graphics Virtualization Technology. Intel GVT-d allows direct assignment of an entire GPU’s capabilities to a single user—it passes all of the native driver capabilities through the hypervisor. Intel GVT-g allows multiple concurrent users to share the resources of a single GPU.

 

The new remote workstation solutions showcased at Citrix Synergy build on a long, collaborative relationship between engineers at Intel and Citrix. Our teams have worked together for many years to help our mutual customers deliver a seamless mobile and remote workspace experience to a distributed workforce. Users and enterprises both benefit from the secure and cost-effective delivery of desktops, apps, and data from the data center to the latest Intel Architecture-based endpoints.

 

For a closer look at the Intel Xeon processor E3-1200 product family and hosted workstation infrastructure, visit intel.com/workstation.

 

 

Intel, the Intel logo, Intel inside, and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Citrix, the Citrix logo, XenDesktop, XenApp, XenServer, and HDX are trademarks of Citrix Systems, Inc. and/or one of its subsidiaries, and may be registered in the U.S. and other countries. * Other names and brands may be claimed as the property of others.


Health IT Critical for Underserved, Entire Ecosystem

The National Health IT (NHIT) Collaborative for the Underserved kicked off its Spring Summit with a briefing at the White House in April to commemorate the 30th anniversary of the Heckler Report.

 

This landmark task force report, published in 1985 by then-DHHS Secretary Margaret Heckler, first introduced the country to the documented health disparities that our racial and ethnic minority populations were facing.

 

While we have made progress since, recent advances in technology have provided us with a unique opportunity to introduce real change, right now. To help carry this momentum, I participated in a lively panel discussion with industry leaders at the Summit, “Moving the Needle” for innovation success, where we discussed key action items that will help us deliver an effective and efficient healthcare ecosystem:

 

• Engage consumers to participate and manage their own health and wellness through education.

• Work with providers serving multicultural communities to increase Health IT adoption and their participation in programs that support delivery of high quality, cost effective care.

• Deliver effective educational, training and placement programs that can prepare members of multicultural communities for Health IT related careers.

• Establish and implement policies that support individual and community health empowerment and promote system transformation.

• Identify priority areas where gaps exist regarding the ability to use innovative health technologies to address disparities and plan actionable next steps.

 

Reactive approach to healthcare costly for payers and providers

Managing the complex health needs of the underserved has long been labor intensive and costly for both patients and clinicians. The lack of health coverage and other complications have traditionally presented significant challenges for a large portion of this population.

 

While the Affordable Care Act (ACA) now makes healthcare financially feasible for millions of newly insured individuals, a troubling trend may persist among some members of underserved communities who continue to only seek care after experiencing an acute health emergency, making their visits extremely costly to payers and providers. These visits usually require several medications, frequent monitoring of vitals, and lifestyle changes in diet and exercise.

 

They also typically require people who may live with instability in multiple aspects of life to schedule and adhere to ongoing medical appointments and diagnostic tests. This isn’t an effective, realistic, or affordable approach to health and wellness for payers, providers, or consumers. But it can be addressed through raised awareness of the impact of health decisions and improved access to healthy options.

 

Organized data critical for effective education and outreach

Access to accurate and organized data is key when we talk about making personalized healthcare a reality. Actionable data is driving today’s cutting-edge research, leading to improvements in preventative health and wellness, as well as life-saving treatments.

 

Edge devices, like wearables, biosensors, and other consumer devices, can gather large amounts of data from various segments of the population, correlating behaviors related to diet and exercise. With end-to-end edge management systems, researchers and clinicians can have real-time access to locally filtered actionable data, helping them make accurate and educated discoveries on population behavior with amazing levels of insight.

 

Understanding where individual and population health trends are headed in advance will enable providers to customize their education and outreach services, saving time and resources from being wasted on programs with little to no impact. With electronic health records (EHR), clinicians can access a patient’s history on secure mobile devices, tracking analyzed data that impacts wellness plans and treatments.

 

Quality measures for prevention, risk factor screening, and chronic disease management are then identified and evaluated to provide support for practice interventions and outreach initiatives. Along with edge and embedded devices, they can play a key role in promoting self-management and self-empowerment through better communication with clinical staff.

 

Gathering data from the underserved population

Providers who treat underserved populations and vulnerable citizens often have less access to EHRs and other technologies that help them collect, sort, and analyze patient data. Another factor is that these clinics, hospitals, and community centers are often reacting to crises instead of focusing on preventative outreach and education. This places greater strain on staff, patients, and resources, while stretching budgets that are partly limited by payer reimbursement.

 

So the big question is, “how do we leverage the power of data within complex populations that are often consumed by competing real-world priorities?”

 

It starts with education, outreach, and improved access to healthier lifestyle options. It continues by equipping clinics, hospitals and resource centers in underserved communities with the latest Health IT devices, wearables, software and services. As innovators it is our job to craft and articulate a value proposition that is so compelling, payers will realize that an initial investment in innovation, while potentially costly, will reduce expenditures significantly in the long run.

 

By educating and empowering all consumers to more actively participate in the management of their own wellness, the need for costly procedures, medications and repeated visits will go down, saving time and resources for payer and provider – while delivering a better “quality of life” for everyone.


Is Your Data Center Ready for the IoT Age?

How many smartphones are there in your household? How about laptops, tablets, PCs? What about other gadgets like Internet-enabled TVs or smart room temperature sensors? Once you start to think about it, it’s clear that even the least tech-savvy of us has at least one of these connected devices. Each device is constantly sending or receiving data over the Internet, data which must be handled by a server somewhere. Without the data centres containing these servers, the devices (or the apps they run) are of little value. Intel estimates that for every 400 smartphones, one new server is needed. That’s about one server per street I’d say.

 

We’re approaching 2 billion smartphones in service globally, each with (Intel estimates) an average of 26 apps installed. We check our phones an average of 110 times per day, and on top of that, each app needs to connect to its data centre around 20 times daily for updates. All of this adds up to around one trillion data centre accesses every day. And that’s just for smartphones. Out-of-home IoT devices like wearable medical devices or factory sensors need even more server resource.
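
Those estimates multiply out as follows; a few lines of Python make the rough arithmetic explicit (the figures are the approximations quoted above, not measurements):

    # Multiplying out the estimates quoted above.
    smartphones = 2_000_000_000             # ~2 billion in service
    apps_per_phone = 26                     # average installed apps
    connects_per_app_per_day = 20           # daily update checks per app
    accesses_per_day = smartphones * apps_per_phone * connects_per_app_per_day
    servers_needed = smartphones // 400     # one new server per 400 smartphones
    print(f"{accesses_per_day:.2e} data centre accesses per day")   # ~1.0e+12
    print(f"{servers_needed:,} servers just for smartphones")       # 5,000,000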

 

Sounds like a lot, right? Actually, if we were watching a movie about the Internet, it’d be an epic and we’d still just be in the opening credits. Only about 40 percent of the world’s population is connected today, so there’s a huge amount of story yet to tell as more and more people come to use, like and expect on-demand, online services. With use of these applications and websites set to go up, and connected devices expected to reach 50 billion by 2020, your data centre is a critically important piece of your business.

 

Here Comes the Hybrid Cloud

 

What fascinates me about all this is the impact it’s going to have on the data centre and how we manage it. Businesses are finding that staggering volumes of data and demand for more complex analytics mean that they must be more responsive than ever before. They need to boost their agility and, as always, keep costs down – all in the face of this tsunami of connected devices and data.

 

The cost point is an important one. It’s common knowledge that for a typical organisation, 75 percent of the IT budget goes on operating expenditure. In a bid to balance this cost/agility equation, many organizations have begun to adopt a hybrid cloud approach.

 

In the hybrid model, public cloud or SaaS is used to provide some of the more standard business services, such as HR, expenses, or CRM systems, but also to provide overspill capacity in times of peak demand. In turn, the private cloud hosts the organization’s most sensitive or business-critical services, typically those delivering true business-differentiating capabilities.

 

This hybrid cloud model may mean you get leading edge, regularly updated commodity services which consume less of your own valuable time and resource. However, to be truly effective your private cloud also needs to deliver highly efficient cost/agility dynamics – especially when faced with the dawning of the IoT age and its associated demands.

 

For many organizations, the evolution of their data centre(s) to deliver upon the promise of private cloud is a journey they’ve been on for a number of years, but one that’s brought near-term benefits along the way. In fact, each stage in the journey should help drive time, cost, and labour out of running your data centre.

 

The typical journey can be viewed as a series of milestones:

 

  • Stage 1: Standardization. Consolidating storage, networking and compute resources across your data centres can create a simplified infrastructure that delivers cost and time savings. With standardized operating system, management tools and development platform, you can reduce the tools, skills, licensing and maintenance needed to run your IT.
  • Stage 2: Virtualization. By virtualising your environment, you enable optimal use of compute resources, cutting the time needed to build new environments and eliminating the need to buy and operate one whole server for each application.
  • Stage 3: Automation. Automated management of workloads and compute resource pools increases your data centre agility and helps save time. With real-time environment monitoring and automated provisioning and patching, you can do more with less.
  • Stage 4: Orchestration. Highly agile, policy-based rapid and intelligent management of cloud resource pools can be achieved with full virtualization of compute, storage and networking into software-defined resource pools. This frees up your staff to focus on higher-value, non-routine assignments.
  • Stage 5: Real-time Enterprise. Your ultra-agile, highly optimized, real-time management of federated cloud resources enables you to meet business-defined SLAs while monitoring your public and private cloud resources in real time. Fully automated management and composable resources enable your IT talent to focus on strategic imperatives for the business.



A typical reaction from organizations first considering the journey is “That sounds great!” However, this is quickly followed by two questions, the first being “Where do I begin?”


Well, let’s start with the fact that it’s hard to build a highly efficient cloud platform that will enable real-time decision making using old infrastructure. The hardware really does matter, and it needs to be modern, efficient and regularly refreshed – evergreen, if you will. If you don’t do this, you could be losing an awful lot of efficiency.

 

Did you know, for example, that according to a survey conducted by a Global 100 company in 2012, 32 percent of its servers were more than four years old? These servers delivered just four percent of total server performance capability, yet they accounted for 65 percent of total energy consumption. Clearly, there are better ways to run a data centre.

 

It’s All About Meeting Business Expectations

 

And as for that second question? You guessed it, “How can we achieve steps 4 and 5?” This is a very real consideration, even for the most innovative of organisations. Even those companies considered leaders in their private cloud build-out are generally only at Stage 3: Automation, and experimenting with how to tackle Stage 4: Orchestration.

 

The key thing to remember is that your online services, websites, and apps run the show. They are a main point of contact with your customers (both internal and external), so they must run smoothly and expectations must be met. This means your private cloud must be elastic, flexing on demand as the business requires. Responding to business needs in weeks to months is no longer acceptable as the clock speed of business continues to ramp. Hours to minutes to seconds is the new order.

 

Time for a New Data Centre Architecture

 

I believe the best way to achieve this hyper-efficient yet agile private cloud model is to shift from the hardware-defined data centre of today to a new paradigm that is defined by the software: the software-defined infrastructure (SDI).

 

Does this mean I’m saying the infrastructure doesn’t matter? Not at all, and we’ll come to this later in this blog series. I’ll be delving into the SDI architecture model in more detail, looking at what it is, Intel’s role in making it possible, and how it will enable your private cloud to reach the Holy Grail: Stage 5, the Real-time Enterprise.

 

In the meantime, I’d love to hear from you. How is your organization responding to the connected device deluge, and what does your journey to the hybrid cloud look like?


To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.


Top 10 Predictions: Evolution of Cybersecurity in 2015

Cybersecurity is poised for a notorious year. The next 12 to 18 months will see greater, bolder, and more complex attacks emerge. This year’s installment of the top computer security predictions highlights how threats are advancing and outpacing defenders, and how the landscape is becoming more professional and organized. Although the view of our cybersecurity future is obscured, one thing is for certain: We’re in for an exciting ride.

 

In this blog I’ll discuss my top 10 predictions for Cybersecurity in 2015.

 

Top Predictions:

 

1. Cyber warfare becomes legitimate


Governments will leverage their professional cyber warfare assets as a recognized and accepted tool for governmental policy. For many years governments have been investing in cyber warfare capabilities, and these resources will begin to pay dividends.

 

 

 

 



2. Active government intervention


Governments will be more actively involved in responding to major hacking events affecting their citizens. Expect government responses and reprisals to foreign nation-state attacks that ordinary business enterprises are in no position to counter on their own. This is a shift in policy, both timely and necessary, so the public can continue to enjoy life under the protection of a common defense.




 

 

 

3. Security talent in demand


The demand for security professionals is at an all-time high, but the workforce pool is largely barren of qualified candidates. The best talent has been scooped up. A lack of security workforce talent, especially in leadership roles, is a severe impediment to organizations in desperate need of building and staffing in-house teams. We will see many top-level security professionals jump between organizations, lured by better compensation packages. Academia will struggle to refill the talent supply in order to meet the demand.

 

 

 

 

 

4. High profile attacks continue


High-profile targets will continue to be victimized. As long as the return is high for attackers while the effort remains reasonable, they will continue to target prominent organizations. Nobody, regardless of how large, is immune. Expect high-profile companies, industries, government organizations, and people to fall victim to theft, hijacking, forgery, and impersonation.

 

 

 

 

 

5. Attacks get personal


We will witness an expansion in strategies in the next year, with attackers acting in ways that put individuals directly at risk. High-profile individuals will be threatened with embarrassment through the exposure of sensitive healthcare records, photos, online activities, and communications. Everyday citizens will be targeted with malware on their devices to siphon bank information, steal cryptocurrency, and hold their data for ransom. For many people this year, it will feel like they are being specifically targeted for abuse.

 

 

 

 

6. Enterprise risk perspectives change


Enterprises will overhaul how they view risks. Serious board-level discussions will be commonplace, with a focus on awareness and responsibility. More attention will be paid to the security of products and services, with the protection of privacy and customer data beginning to supersede “system availability” priorities. Enterprise leaders will adapt their perspectives to focus more attention on security as a critical aspect of sustainable business practices.




 



7. Security competency and attacker innovation increase


The security and attacker communities will make significant strides forward this year. Attackers will continue to maintain the initiative and succeed with many different types of attacks against large targets. Cybercrime will grow quickly in 2015, outpacing defenses and spurring smarter security practices across the community. Security industry innovation will advance as the next wave of investments emerges and begins to gain traction in protecting data centers and clouds and in identifying attackers.

 

 

 

 

 

8. Malware increases and evolves


Malware numbers will continue to skyrocket, increase in complexity, and expand more heavily beyond traditional PC devices. Malicious software will continue to swell at a relentless pace, averaging over 50 percent year-over-year growth. The rapid proliferation and rising complexity of malware will create significant problems for the security industry. The misuse of stolen certificates will compound the problems, and the success of ransomware will only reinforce more development by criminals.

 

 

 

 

 

9. Attacks follow technology growth


Attackers move into new opportunities as technology broadens to include more users, devices, data, and evolving supporting infrastructures. As expansion occurs, there is a normal lag for the development and inclusion of security. This creates a window of opportunity. Where the value of data, systems, and services increases, threats surely follow. Online services, phones, the IoT, and cryptocurrency are being heavily targeted.

 

 

 

 

 

10. Cybersecurity attacks evolve into something ugly


Cybersecurity is constantly changing and the attacks we see today will be superseded by more serious incursions in the future. We will witness the next big step in 2015, with attacks expanding from denial-of-service and data theft activities to include more sophisticated campaigns of monitoring and manipulation. The ability to maliciously alter transactions from the inside is highly coveted by attackers.

 

 

 

Welcome to the next evolution of security headaches.

I predict 2015 to be an extraordinary year in cybersecurity. Attackers will seek great profit and power, while defenders will strive for stability and confidence. In the middle will be a vicious knife fight between aggressors and security professionals. Overall, the world will take security more seriously and begin to act in more strategic ways. The intentional and deliberate protection of our digital assets, reputation, and capabilities will become a regular part of life and business.

 

If you’d like to check out my video series surrounding my predictions, you can find more here.

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts: https://communities.intel.com/people/MatthewRosenquist/blog/2015/03/04/why-ransomware-will-rise-in-2015

LinkedIn: http://linkedin.com/in/matthewrosenquist

Read more >

Pharma Sales: The 90-Second Rule

I have just spent the better part of two weeks involved in the training of a new 50-strong sales team. Most of the team were experienced sales people but very inexperienced in pharmaceutical sales. They had a proven record in B2B sales, but only 30 percent of the team had previously sold pharmaceutical or medical device products to health care professionals (HCPs). Clearly, after the logistical and bureaucratic aspects of the training had been completed, most of the time was spent training the team on the medical background, disease state, product specifics and treatment landscape/competitor products.

 

Preparing the team for all eventualities and every possible question/objection they may get from HCPs was key to making sure that on the day of product launch they would be competent to go out into their new territories and speak with any potential customer. With particular reference to this product, it was equally important for the team to be in a position to speak with doctors, nurses and pharmacists.

 

The last part of the training was to certify each of the sales professionals and make sure that they not only delivered the key messages but could also answer most of the questions HCPs would fire at them. In order to do this, the sales professionals were allowed 10 minutes to deliver their presentation to trainers, managers and medical personnel. The assessors were randomly assigned questions/objections to be addressed during the presentation.

 

The question remains, “does this really prepare the sales person for that first interaction with a doctor or other HCP?” Experience tells us that most HCPs are busy people and they allow little or no time for pharmaceutical sales professionals in their working day. The 90 seconds that a sales professional gets with most of their potential customers is not a pre-fixed amount. Remember, doctors are used to getting the information they need to make clinical decisions by asking the questions they need answers to in order to make a decision that will beneficially affect their patient(s). So, starting the interaction with an open question is quite simply the worst thing to do, as most doctors will take this opportunity to back out and say they do not have time.

 

The trick is to get the doctor to ask the first question (that is what they spend their lives doing and they are good at it) and within the first 10-15 seconds. Making a statement that shows you understand their needs and have something beneficial to tell them is the way you will get “mental access.” Once the doctor is engaged in a discussion, the 90-second call will quickly extend to 3+ minutes. Gaining “mental access” is showing the doctor that you have a solution to a problem they have in their clinical practice and that you have the necessary evidence to support your key message/solution. This has to be done in a way that the doctor will see a potential benefit for, most importantly, their patients. In order to do this the sales professional needs to really understand the clinical practice of the person that they are seeing (i.e. done their pre-call planning) and have the materials available to instantly support their message/solution.

 

The digital visual aid is by far the best means of providing this supporting information/data, as whatever direction the sales professional needs to take should be accessible within one or two touches of the screen. Knowing how to navigate the digital sales aid is essential, as this is the moment when the HCP either stays engaged or finds a reason to move on.

 

What questions do you have? Agree or disagree?

Read more >

Should You Take the High Road or the Low Road to SDI?


When I started my career in IT, infrastructure provisioning involved a lot of manual labor. I installed the hardware, installed the operating systems, connected the terminals, and loaded the software and data to create a single stack to support a specific application. It was common to have one person carry out all of these tasks on a single system, and an enterprise had very few such systems.

 

Now let’s fast forward to the present. In today’s world, thanks to the dynamics of Moore’s Law and the falling cost of compute, storage, and networking, enterprises now have hundreds of applications that support the business. Infrastructure and applications are typically provisioned by teams of domain specialists—networking admins, system admins, storage admins, and software folks—each of whom puts together a few pieces of a complex technology puzzle to enable the business.

 

While it works, this approach to infrastructure provisioning has some obvious drawbacks. For starters, it’s labor-intensive, requiring too many hands to support; it’s costly in both people and software; and it can be rather slow from start to finish. While the first two matter for TCO, it is the third that I have heard the most about… just too slow for the pace of business in the era of fast-moving cloud services.

 

How do you solve this problem? That is what software-defined infrastructure (SDI) is all about. With SDI, compute, network, and storage resources are deployed as services, potentially reducing deployment times from weeks to minutes. Once services are up and running, the hardware is managed as a set of resources, and the software has the intelligence to manage that hardware to the advantage of the supported workloads. The SDI environment automatically corrects issues and optimizes performance to ensure you can meet the service levels and security controls your business demands.
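To make “the software has the intelligence to manage the hardware” a little more concrete, here is a minimal, purely illustrative sketch of the desired-state pattern most SDI control planes follow; the service names, observed values, and provision call are hypothetical, not any particular vendor’s API. You declare what each service should look like, and a control loop keeps comparing that declaration with observed reality and corrects any drift.

# Illustrative sketch only: a toy desired-state reconcile loop in the spirit of SDI.
# Service names, observed values, and the provision() call are hypothetical.
import time

desired_state = {
    "web-tier": {"vm_count": 4, "vcpus": 2, "storage_gb": 100},
    "db-tier":  {"vm_count": 2, "vcpus": 8, "storage_gb": 500},
}

def observe(service):
    """Stand-in for querying the actual state of the infrastructure."""
    return {"vm_count": 3, "vcpus": 2, "storage_gb": 100}  # e.g., one VM has failed

def reconcile(service, desired, actual):
    """Correct drift between the declared policy and observed reality."""
    if actual["vm_count"] < desired["vm_count"]:
        missing = desired["vm_count"] - actual["vm_count"]
        print(f"{service}: provisioning {missing} replacement VM(s)")
        # provision(service, missing)  # hypothetical call into the SDI layer

for _ in range(3):  # a real control loop would run continuously
    for service, spec in desired_state.items():
        reconcile(service, spec, observe(service))
    time.sleep(5)

The point of the pattern is that remediation happens automatically and continuously, rather than waiting on a ticket queue and a team of domain specialists.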

 

So how do you get to SDI? My current response is that SDI is a destination that sits at the summit for most organizations. At the simplest level, there are two routes to this IT nirvana—a “buy it” high road and a “build-it-yourself” low road. I call the former a high road because it’s the easiest way forward—it’s always easier to go downhill than uphill. The low road has lots of curves and uphill stretches on it to bring you to the higher plateau of SDI.  Each of these approaches has its advantages and disadvantages.

 

The high road, or the buy-the-packaged-solution route, is defined by system architectures that bring together all the components for an SDI into a single deployable unit. Service providers who take you on the high road leverage products like Microsoft Cloud Platform System (CPS) and VMware EVO: RAIL to create standalone platform units with virtualized compute, storage, and networking resources.

 

On the plus side, the high road offers faster time to market for your SDI environment, a tested and certified solution, and the 24×7 support most enterprises are looking for; these are the things you can expect from a solution delivered by a single vendor. On the downside, the high road locks you into certain hardware and software choices and forces you to rely on the vendor for system upgrades and technology enhancements, which may appear sooner in other solutions but will arrive only on the vendor’s timeline. This approach, of course, can be both Opex and Capex heavy, depending on the solution.

 

The low road, or the build-it-yourself route, gives you the flexibility to design your environment and select your solution components from the portfolios of various hardware and software vendors and from open source. You gain the agility and technology choices that come with an environment that is not defined by a single vendor. You can pick your own components and add new technologies on your timelines—not your vendor’s timelines—and probably enjoy lower Capex along the way, although at the expense of more internal technical resources.

 

Those advantages, of course, come with a price. The low road can be a slower route to SDI, and it can be a drain on your staff resources as you engage in all the heavy lifting that comes with a self-engineered solution set. Also, given the pace of innovation you see in this area today, it is quite possible that you never really achieve the full vision of SDI because new choices keep appearing. You have to design your solution; procure, install, and configure the hardware and software; and add the platform-as-a-service (PaaS) layer. All of that just gets you to a place where you can start using the environment. You still haven’t optimized the system for your targeted workloads.

 

In practice, most enterprises will take what amounts to a middle road. This hybrid route takes the high road to SDI with various detours onto the low road to meet specific business requirements. For example, an organization might adopt key parts of a packaged solution but then add its own storage or networking components or decide to use containers to implement code faster.

 

Similarly, most organizations will get to SDI in a stepwise manner. That’s to say they will put elements of SDI in place over time—such as storage and network virtualization and IT automation—to gain some of the agility that comes with an SDI strategy. I will look at these concepts in an upcoming post that explores an SDI maturity model.

Read more >

The Path to Ethernet Standards and the Intel Ethernet, NBASE-T

The “Intel Ethernet” brand symbolizes the decades of hard work we’ve put into improving performance, features, and ease of use of our Ethernet products.

 

What Intel Ethernet doesn’t stand for, however, is any use of proprietary technology. In fact, Intel has been a driving force for Ethernet standards since we co-authored the original specification more than 40 years ago.

 

At Interop Las Vegas last week, we again demonstrated our commitment to open standards by taking part in the NBASE-T Alliance public multi-vendor interoperability demonstration. The demo leveraged our next generation single-chip 10GBASE-T controller supporting the NBASE-T intermediate speeds of 2.5Gbps and 5Gbps (see a video of that demonstration here).


 

Intel joined the NBASE-T Alliance in December 2014 at the highest level of membership, which allows us to fully participate in the technology development process including sitting on the board and voting for changes in the specification.

 

The alliance, with its 33 members, is an industry-driven consortium that has developed a working 2.5GbE/5GbE specification that is the basis of multiple recent product announcements. Building on this experience, our engineers are now working diligently to develop the IEEE standard for 2.5G/5GBASE-T.

 

By first developing the technology in an industry alliance, vendors can have a working specification to develop products, and customers can be assured of interoperability.

 

The reason Ethernet has been so widely adopted over the past 40 years is its ability to adapt to new usage models. 10GBASE-T was originally defined to be backwards compatible with 1GbE and 100Mbps, and required Category 6a or Category 7 cabling to reach 10GbE. Adoption of 10GBASE-T is growing very rapidly in the datacenter, and now we are seeing the need for more bandwidth in enterprise and campus networks to support next-generation 802.11ac access points, local servers, workstations, and high-end PCs.

 

Copper twisted pair has long been the cabling preference for enterprise data centers and campus networks, and most enterprises have miles and miles of this cable already installed throughout their buildings. In the past 10 years alone, about 70 billion meters of category 5e and category 6 cabling have been sold worldwide.


Supporting higher bandwidth connections over this installed cabling is a huge win for our customers. Industry alliances can be a useful tool to help Ethernet adapt, and the NBASE-T alliance enables the industry to address the need for higher bandwidth connections over installed cables.


Intel is the technology and market leader in 10GBASE-T network connectivity. I spoke about Intel’s investment in the technology in an earlier blog about Ethernet’s ubiquity.

 

We are seeing rapid adoption of our 10GBASE-T products in the data center, and now through the NBASE-T Alliance we have a clear path to address enterprise customers who need more than 1GbE. Customers are thrilled to hear that they can get 2.5GbE/5GbE over their installed Cat 5e copper cabling—making higher speed networking between bandwidth-constrained endpoints achievable.
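As a rough illustration of what those intermediate speeds mean in practice (my own back-of-the-envelope arithmetic, not benchmark results), here is the idealized time to move a 50 GB dataset at each line rate, ignoring protocol overhead and storage bottlenecks:

# Idealized transfer times; real-world numbers will be lower due to protocol
# overhead, disk speeds, and other bottlenecks. The 50 GB dataset is an example.
dataset_bits = 50 * 8e9  # 50 GB expressed in bits (decimal units)

for label, gbps in [("1GbE", 1), ("2.5GbE", 2.5), ("5GbE", 5), ("10GBASE-T", 10)]:
    minutes = dataset_bits / (gbps * 1e9) / 60
    print(f"{label:>10}: ~{minutes:.1f} minutes")

Even the jump from 1GbE to 2.5GbE cuts that example transfer from roughly seven minutes to under three, all over cabling that is already in the walls.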

 

Ethernet is a rare technology in that it is both mature (more than 40 years old since its original definition in 1973) and constantly evolving to meet new network demands. Thus, it has created an expectation by users that the products will work the first time, even if they are based on brand new specifications. Our focus with Intel Ethernet products is to ensure that we implement solutions that are based on open standards and that these products seamlessly interoperate with products from the rest of the industry.

 

If you missed the NBASE-T demonstration at Interop, come see how it works at Cisco Live in June in San Diego.

Read more >

Demonstrating Commitment to iWARP Technology with Microsoft

By David Fair, Unified Networking Marketing Manager, Intel Networking Division

 

iWARP was on display recently in multiple contexts.  If you’re not familiar with iWARP, it is an enhancement to Ethernet based on an Internet Engineering Task Force (IETF) standard that delivers Remote Direct Memory Access (RDMA).

 

In a nutshell, RDMA allows an application to read or write a block of data from or to the memory space of another application, which can be in another virtual machine or even a server on the other side of the planet.  It delivers high bandwidth and low latency by bypassing the system software kernel, avoiding the interrupts and extra data copies that accompany kernel processing.

 

A secondary benefit of kernel bypass is reduced CPU utilization, which is particularly important in cloud deployments. More information about iWARP has recently been posted to Intel’s website if you’d like to dig deeper.

 

Intel® is planning to incorporate iWARP technology in future server chipsets and systems-on-a-chip (SOCs).  To emphasize our commitment and show how far along we are, Intel showed a demo using the RTL from that future chipset in FPGAs running Windows* Server 2012 SMB Direct and doing a boot and virtual machine migration over iWARP.  Naturally it was slow – about 1 Gbps – since it was FPGA-based, but Intel demonstrated that our iWARP design is already very far along and robust.  (That’s Julie Cummings, the engineer who built the demo, in the photo with me.)

 


 

Jim Pinkerton, Windows Server Architect, from Microsoft joined me in a poster chat on iWARP and Microsoft’s SMB Direct technology, which scans the network for RDMA-capable resources and uses RDMA pathways to automatically accelerate SMB-aware applications.  With SMB Direct, no new software and no system configuration changes are required for system administrators to take advantage of iWARP.

 


 

Jim Pinkerton also co-taught the “Virtualizing the Network to Enable a Software Defined Infrastructure” session with Brian Johnson of Intel’s Networking Division.  Jim presented specific iWARP performance results in that session that Microsoft has measured with SMB Direct.

 

Lastly, the Non-Volatile Memory Express* (NVMe*) community demonstrated “remote NVMe,” made possible by iWARP.  NVMe is a specification for efficient communication to non-volatile memory like flash over PCI Express.  NVMe is many times faster than SATA or SAS, but like those technologies, targets local communication with storage devices.  iWARP makes it possible to securely and efficiently access NVM across an Ethernet network.  The demo showed remote access occurring with the same bandwidth (~550k IOPS) with a latency penalty of less than 10 µs.**
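For a sense of scale, here is a quick calculation; the demo quoted ~550k IOPS, and the 4 KB I/O size below is my assumption purely for illustration, not a detail reported from the demo.

# Back-of-the-envelope only: ~550k IOPS as quoted by the demo; the 4 KB I/O
# size is an assumption for illustration.
iops = 550_000
block_bytes = 4096  # assumed I/O size

bytes_per_sec = iops * block_bytes
print(f"~{bytes_per_sec / 1e9:.1f} GB/s, i.e. ~{bytes_per_sec * 8 / 1e9:.0f} Gb/s on the wire")

Under that assumption, the demo was moving on the order of 2.3 GB/s of storage traffic across an ordinary Ethernet network while adding less than 10 µs of latency.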

 


 

Intel is supporting iWARP because it is layered on top of the TCP/IP industry standards.  iWARP goes anywhere the Internet goes and does it with all the benefits of TCP/IP, including reliable delivery and congestion management. iWARP works with all existing switches and routers and requires no special datacenter configurations to work. Intel believes the future is bright for iWARP.

 

Intel, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others.

**Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.

Read more >

Redefining Sleep with Intel® Ready Mode Technology on Desktops

Did you know that many reptiles, marine mammals, and birds sleep with one side of their brains awake? This adaptation lets these creatures rest and conserve energy while remaining alert and instantly ready to respond to threats and opportunities. It also enables amazing behaviors such as allowing migrating birds to sleep while in flight. How’s that for maximizing productivity?

 

Taking a cue from nature, many new desktop PCs challenge how we define sleep with Intel® Ready Mode Technology. This innovation replaces traditional sleep mode with a low-power, active state that allows PCs to stay connected, up-to-date, and instantly available when not in use—offering businesses several advantages over existing client devices.

 

1. Always current, available, and productive

 

Users get the productivity boost of having real-time information ready the instant that they are. Intel Ready Mode enhances third-party applications with the ability to constantly download or access the most current content, such as the latest email messages or media updates. It also allows some applications to operate behind the scenes while the PC is in a low-power state. This makes some interesting new timesaving capabilities possible—like, for example, facial recognition software that can authenticate and log in a user instantly upon their arrival.

 

In addition, when used with third-party apps like Dropbox*, Ready Mode can turn a desktop into a user’s personal cloud that stores the latest files and media from all of their mobile devices and makes them available remotely as well as at their desk. Meanwhile, IT can easily run virus scans, update software, and perform other tasks on user desktops anytime during off hours, eliminating the need to interrupt users’ workdays with IT admin tasks.

 

2. Efficiently energized

 

PCs in Ready Mode consume only about 10 watts or less (compared to 30 – 60 watts active) while remaining connected, current, and ready to go. That’s roughly what an LED lamp with the brightness of a 60-watt incandescent bulb consumes. Energy savings will vary, of course; but imagine how quickly a six-fold reduction in energy consumption would add up with, say, 1,000 users who actively use their PCs only a few hours a day.
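To make “imagine how quickly it adds up” concrete, here is a rough estimate with my own assumed numbers (the idle hours, wattages, and electricity price are illustrative, not Intel measurements): 1,000 desktops that spend 16 hours a day outside active use, sitting in Ready Mode at 10 watts instead of remaining fully active at around 40 watts.

# Rough estimate; the hours, wattages, and price per kWh below are assumptions.
pcs = 1000
idle_hours_per_day = 16                 # assumed time outside active use
active_watts, ready_watts = 40, 10      # mid-range active draw vs. Ready Mode draw
price_per_kwh = 0.10                    # assumed electricity price in dollars

kwh_saved = pcs * idle_hours_per_day * 365 * (active_watts - ready_watts) / 1000
print(f"~{kwh_saved:,.0f} kWh saved per year, roughly ${kwh_saved * price_per_kwh:,.0f}")

Under those assumptions the fleet saves on the order of 175,000 kWh a year; your own numbers will differ, but the direction is clear.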

 

In the conference room, a desktop-powered display setup with Intel Ready Mode will wait patiently in an energy-sipping, low-power state when not in use, but will be instantly ready to go for meetings with the latest presentations and documents already downloaded. How much time would you estimate is wasted at the start of a typical meeting simply getting set up? Ten minutes? Multiply that by six attendees, and you have an hour of wasted productivity. Factor in all of your organization’s meetings, and it’s easy to see how Ready Mode can make a serious contribution to the bottom line.

 

3. Streamlined communication

 

Desktops with Intel Ready Mode help make it easier for businesses to move their landline or VoIP phone systems onto their desktop LAN infrastructures and upgrade from regular office phones to PC-based communication solutions such as Microsoft Lync*. Not only does this give IT fewer network infrastructures to support, but with Ready Mode, businesses can also deploy these solutions and be confident that calls, instant messages, and videoconference requests will go through even if a user’s desktop is idle. With traditional sleep mode, an idle PC is often an offline PC.

 

Ready to refresh with desktops featuring Intel® Ready Mode Technology today? Learn how at: www.intel.com/readymode

Read more >