Recent Blog Posts

Results Are In: What IT Fires Do You Need to Put Out First?

The role of the IT decision maker has changed dramatically in the past few decades, as technology weaves ever more tightly into business strategy. IT leaders are helping business leaders build a successful roadmap by implementing strategies built on cloud, analytics, and new digital tools. Big initiatives, however, come with big decisions and require the wherewithal to know which projects take priority.

 

We launched a poll on our Intel IT Center LinkedIn showcase page to find out what fires IT decision makers tend to extinguish first. The Internet is inundated with lists, blogs, and articles dedicated to top issues and concerns plaguing IT. These buzz-worthy topics include cloud, security, and big data, and we expected one of those to top the list.

 

Some IT Surprises

In our poll of more than 300 participants, 34 percent pinpointed hardware refresh as their top concern. Cloud structure (20 percent), software refresh (17 percent), and mastering data analytics (12 percent) rounded out the top four.

 

Security finished seventh with a little over 1 percent; this was one of the biggest surprises of the poll, especially with the large number of high-profile breaches and cybersecurity issues troubling enterprises of late. Cloud concerns were lower than projected as well, even after Microsoft’s recent release of Windows 10.

 

Some notables in the “Other” category (which accounted for 4 percent of the results) included customer-facing systems and hiring. Should IT be putting more thought into retaining talent, company culture, or customer needs?

 

IT Decision Makers Pick Hardware Over All Else

As noted, IT executives have a lot on their plates. The majority of respondents are focusing on top-notch hardware first, ditching legacy technology in favor of higher productivity, greater flexibility, and less downtime. The much-discussed data analytics, cloud, and security didn’t rank as high as we thought, but we’re more interested in knowing what you think. How would you rank your biggest concerns as an IT decision maker?

Read more >

10 Mobile BI Strategy Questions: System Integration


Mobile devices are becoming more and more tightly connected with the software that runs on them. But the true value of mobility can’t be realized until those devices take advantage of the necessary integration among the underlying systems.

 

The same principles hold true for mobile business intelligence (BI). Therefore, when you’re developing a mobile BI strategy, you need to capitalize on opportunities for system integration that can enhance your end product. Typically, system integration in mobile BI can be categorized into three options.

 

 

Option One: Standard Mobile Features Expand Capabilities

 

Depending on the type of solution (built in-house or purchased), these features are considered standard because they use existing, well-known capabilities of mobile devices such as e-mailing, sharing a link, or capturing a device screenshot. They provide ways to share mobile BI content and to collaborate without a large investment by development teams.

 

A typical example is the ability to share a report’s output with other users via e-mail with a simple tap of a button located on the report. This simple yet extremely powerful option allows immediate action on the insight delivered. Additional capabilities, such as annotating a report or sharing specific sections of it, add precision and focus to the message being delivered or the content being shared. In custom-designed mobile BI solutions, the share-via-e-mail option can be further programmed to attach a copy of the report to an e-mail template, eliminating the need for the user to compose the e-mail message from scratch.

 

Taking advantage of dialing phone numbers or posting content to internal or external collaboration sites is another example. An account executive (AE) could run a mobile BI report that lists the top 10 customers, including their phone numbers. Then, when the AE taps on the phone number, the mobile device will automatically call the number.
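To make this concrete, here is a minimal sketch of how standard device capabilities can be exposed from a report: building mailto: and tel: links so that a tap shares the report output or dials the customer. The report row and helper names are hypothetical and not taken from any particular BI product.

```python
from urllib.parse import quote

def share_link(report_name: str, recipient: str = "", body_text: str = "") -> str:
    """Build a mailto: URI that pre-fills the subject and body for sharing a report."""
    subject = quote(f"Mobile BI report: {report_name}")
    body = quote(body_text)
    return f"mailto:{recipient}?subject={subject}&body={body}"

def dial_link(phone_number: str) -> str:
    """Build a tel: URI; tapping it lets the device place the call."""
    digits = "".join(ch for ch in phone_number if ch.isdigit() or ch == "+")
    return f"tel:{digits}"

# Hypothetical row from a "top 10 customers" report
customer = {"name": "Acme Corp", "phone": "+1 (555) 010-0199"}
print(share_link("Top 10 Customers", body_text=f"{customer['name']}: {customer['phone']}"))
print(dial_link(customer["phone"]))
```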

 

 

Option Two: Basic Integration with Other Systems Improves Productivity

 

A basic integration example is the ability to launch another mobile application from a mobile BI report. Unlike in Option One, this step requires the mobile BI report to pass the required input parameters to the target application. Looking at the same example of a top 10 customers report, the AE may need to review additional detail before making the phone call to the customer. The mobile BI report can be designed so that the customer account name is listed as a hotlink. When the AE taps the customer name, the CRM application is launched automatically and the account number is passed on, as well as the AE’s user credentials.

 

This type of integration can be considered basic because it automates steps that the user could otherwise have performed manually: run the mobile BI report, copy or write down the customer account number, open the CRM app, log in to the system, and search for the account number. All of these manual steps can be considered “productivity leaks.” However, this type of integration differs from the one described in Option One because there is a handshake between the two systems, which talk to each other. When using standard features, the report is attached to the e-mail message without any additional logic to check for anything else; hence, no handshake is required.
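As an illustration of that handshake, the sketch below builds a deep link that hands the account number and the AE’s identity to a hypothetical CRM app so the manual steps above disappear. The URL scheme and parameter names are assumptions for illustration, not a specific vendor’s interface.

```python
from urllib.parse import urlencode

def crm_deep_link(account_number: str, user_id: str, scheme: str = "examplecrm") -> str:
    """Build a deep link that launches a (hypothetical) CRM app and passes it the
    account number plus the caller's identity, removing the copy/paste and login steps."""
    params = urlencode({"account": account_number, "user": user_id})
    return f"{scheme}://account/open?{params}"

# Tapping the customer name in the mobile BI report would open a link like this:
print(crm_deep_link("ACCT-004217", "ae.jsmith"))
```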

 

 

Option Three: Advanced Integration with Other Systems Offers Maximum Value

 

Of the three options, this is the most complicated one because it requires a “true” integration of the systems involved. This category includes those cases where the handshake among the systems involved (it could be more than two) may require execution of additional logic or tasks that the end user may not be able to perform manually (unlike those mentioned in Option Two).

 

Taking it a step further, the integration may require write-back capabilities and/or what-if scenarios that may be linked to specific business processes. For example, a sales manager may run a sales forecast report and have the ability to manually override one of the forecast measures. This action would then trigger multiple updates to reflect the change, not only on the mobile BI report but also in the source system. To make things more interesting, the update may need to happen in real time, a requirement that further complicates the design and implementation of the mobile BI solution.
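In outline, a write-back flow like the one described might look like the sketch below: validate the override, write it back to the source system, then push the change so the mobile report refreshes. The function names and the in-process "subscribers" stand in for real APIs and messaging infrastructure; this is an assumption-laden sketch, not a product design.

```python
import time

def write_back_to_source(account: str, measure: str, new_value: float) -> dict:
    """Placeholder for the call that updates the source system of record."""
    return {"account": account, measure: new_value, "updated_at": time.time()}

def publish_change(change: dict, subscribers) -> None:
    """Placeholder for the real-time push that refreshes the mobile BI report."""
    for notify in subscribers:
        notify(change)

def override_forecast(account: str, measure: str, new_value: float, subscribers) -> dict:
    """A sales manager overrides a forecast measure from the mobile report:
    validate, write back to the source system, then propagate the change."""
    if new_value < 0:
        raise ValueError("forecast cannot be negative")
    change = write_back_to_source(account, measure, new_value)
    publish_change(change, subscribers)
    return change

# The mobile report re-renders when it receives the pushed change.
override_forecast("ACCT-004217", "q4_forecast", 1_250_000,
                  subscribers=[lambda c: print("refresh report with", c)])
```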

 

 

Bottom Line: System Integration Improves the Overall Value

 

No matter what opportunities for system integration exist, you must find a way to capitalize on them without, of course, jeopardizing your deliverables. You need to weigh the benefits and costs for these opportunities against your scope, timeline, and budget. If mobile BI is going to provide a framework for faster, better-informed decision making that will drive growth and profitability, system integration can become another tool in your arsenal.

 

Think about it: how can we achieve productivity gains if we’re asking our users to do the heavy lifting for tasks that could be automated through system integration?

 

Where do you see the biggest opportunity for system integration in your mobile BI strategy?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter (@KaanTurnali) and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

Read more >

Ransomware is a Favorite of Cybercriminals

Cybercriminals are fully embracing ransomware.  Ransomware, a specific form of malware that encrypts files and extorts money from victims, is quickly becoming a favorite among criminals.  It is easy to develop, simple to execute, and does a very good job of compelling users to pay in order to regain access to their precious files or systems.  Almost anyone, and almost every business, is a potential victim.  More importantly, people are paying.  Even law enforcement organizations have fallen victim, only to cede defeat and pay the criminals to restore access to their digital files or computers.

 

Ransomware is on the rise in 2015. The Intel Security Group’s August 2015 McAfee Labs Threat Report shows new ransomware growth at 58% for the second quarter of 2015. 


In just the first half of 2015, the number of ransomware samples exploded, with a gain of nearly 190%.  Compare that to the 127% growth for the whole of 2014.  We predicted a spike in such personal attacks for this year, but I am shocked at how fast the criminals have accelerated code development. 

 

Total ransomware has quickly exceeded 4 million unique samples in the wild.  If the trend continues, by the end of the year we will have over 5 million types of this malware to deal with.

 

Cybercriminals have found a spectacular method of fleecing a broad community of potential victims.  Ransomware uses proven technology to undermine security.  Encryption, the long-time friend of cybersecurity professionals, can also be used by nefarious elements to cause harm.  It is just a tool; how it is wielded determines whether it is beneficial or caustic.  In this case, ransomware uses encryption to scramble select data or critical system files in a way that is recoverable only with a key the attackers possess.  The locked files never leave the system, but they are unusable until decrypted.  Attackers then offer to provide the key or an unlocking service for a fee.  Normally in the hundreds of dollars, the fee is typically requested in a cryptocurrency such as Bitcoin.  This makes the payment irrevocable and makes it extremely difficult to trace attribution and know who is on the receiving end. 

 

This type of attack is very personal in nature and specific in what it targets.  It may lock treasured pictures, game accounts, financial records, legal documents, or work files.  These are important to us personally or professionally, and that importance is a strong motivator to pay the criminals. 


Payment simply reinforces the attackers’ motivation to use this method again and adds resources for continued investment in new tools and techniques.  The technical bar for entry into this criminal activity is lowering as malware writers make this type of attack easier for anyone to attempt.  In June, the author of the TOX variant offered ransomware as a service: the criminal made software available for other criminals to distribute, handled all the back-end transactions, and took a 20% skim of the ransoms being paid.  Fortunately, the author was influenced to a better path after being exposed by Intel Security.  More recently, an open source kit named Hidden Tear was developed to let novices create their own fully functional ransomware.  Although not too sophisticated, it is a watershed moment showing just how accessible making this type of malware is becoming.  I expect future open source and software-as-a-service efforts to rapidly improve in quality, features, and availability.

 

Ransomware will continue to be a major problem.  More sophisticated cybercriminals will begin integrating it with other exploitation techniques such as malvertising ad services, malicious websites, bot uploads, fake software updates, watering-hole attacks, spoofed emails, personalized phishing, signed Trojan downloads, and more.  Ransomware will grow, more people and businesses will be affected, and it will become more difficult to recover without paying the ransom.  The growth in new ransomware samples is an indication of things to come.

Read more >

Developing New Standards in Clinical Care through Precision Medicine

Today I gave a presentation at the NHS England Health and Care Innovation Expo alongside Dr. Jonathan Sheldon, Global VP Healthcare at Oracle, where we discussed the role of precision medicine. I wanted to share some of our thoughts from the session with a wider audience here in our Healthcare and Life Sciences community.

 

More specifically, we talked through the trends impacting healthcare and population health, what’s driving innovation to enable the convergence of precision medicine and population health, and how we at Intel are working with Oracle on a shared vision.

 

Delivering Precision Medicine to Tackle Chronic Conditions

I’d like to underline everything we discuss in precision medicine by reinforcing what I’ve said in a previous blog: as somebody who spends a portion of my time each week working in a GP surgery, it’s essential that I am able to utilise some of the fantastic research outcomes to help deliver better healthcare to my patients. And for me, that means focusing on the chronic conditions, such as diabetes, which are a drain on current healthcare resources.

 

The link between obesity and diabetes is well known, but it’s only when we see that a third of the global population is obese, and that every 30 seconds a leg is lost to diabetes somewhere in the world, that we can start to grasp the scale of the problem. The data we have available around diabetes in the UK highlights the scale succinctly:

 

  • 1 in 7 hospital beds are taken up by diabetics
  • 3.9m Britons have diabetes (majority Type 2, linked to obesity)
  • 2.5m thought to have diabetes but not yet diagnosed

 

To combat the rise of diabetes, the NHS spends some £14bn each year treating the condition, including £869m spent by family doctors. What role can precision medicine play in creating a new standard of clinical care to help meet the challenges presented by chronic conditions such as diabetes?

 

Changing Care to Reduce Costs and Improve Outcomes

I see three changing narratives around care, all driven by technology. First, ‘Care Networking’ will see a move from individuals working in silos to a team-based approach across both organisations and IT systems. Second, ‘Care Anywhere’ means a move to more mobile, home-based and community care away from the hospital setting. And third, ‘Care Customization’ brings a shift from population-based to person-based treatment. Combine those three elements and I believe we have a real chance at tackling those chronic conditions and consequently reducing healthcare costs and improving healthcare outcomes.

 

How, though, do we achieve better care at lower cost from a technology point of view? This is where Intel and Oracle, together with industry and customers, are working to make it possible: by overcoming the challenges of storing and analysing scattered structured and unstructured data, moving irreproducible manual analysis processes to reproducible analysis, and clearing performance bottlenecks through scalable, secure, enterprise-grade, mission-critical infrastructure.


Convergence of Precision Medicine and Population Health

Currently we have two separate themes around healthcare delivery: Precision Medicine and Population Health. On the one hand, Population Health is concerned with operational issues, cutting costs, and resource allocation around chronic diseases; on the other, Precision Medicine still very much operates in silos and is research-oriented, with isolated decision-making. Both Intel and Oracle are focused on bringing Precision Medicine and Population Health together to provide a more integrated view of all healthcare-related data, simplify patient stratification across care settings, and deliver faster and deeper visibility into operational and financial drivers.

 

Shared Vision of All-in-One Day Genome Analysis by 2020

We have a shared vision to deliver All-in-One Day primary genome analysis for individuals by 2020 which can potentially help clinicians deliver a targeted treatment plan. Today, we’re not quite at the point where I can utilize the shared learning and applied knowledge of precision medicine to help me coordinate care and engage my patients, but I do know that our technology is helping to speed up the convergence between healthcare and life sciences to help reduce costs and deliver better care.

 

Keep up-to-date with our healthcare and life sciences work by leaving your details here.

Read more >

Pushing Machine Learning to a New Level with Intel Xeon and Intel Xeon Phi Processors

Traditionally, there has been a balance of intelligence between computers and humans where all forms of number crunching and bit manipulations are left to computers, and the intelligent decision-making is left to us humans.  We are now at the cusp of a major transformation poised to disrupt this balance. There are two triggers for this: first, trillions of connected devices (the “Internet of Things”) converting the large untapped analog world around us to a digital world, and second, (thanks to Moore’s Law) the availability of beyond-exaflop levels of compute, making a large class of inferencing and decision-making problems now computationally tractable.

 

This leads to a new level of applications and services in the form of “Machine Intelligence Led Services”.  These services will be distinguished by machines being in the ‘lead’ for tasks that were traditionally human-led, simply because computer-led implementations will reach and even surpass the best human-led quality metrics.  Self-driving cars, where machines have literally taken the front seat, and IBM’s Watson winning the game of Jeopardy are just the tip of the iceberg in terms of what is computationally feasible now.  This extends the reach of computing to largely untapped sectors of modern society: health, education, farming, and transportation, all of which often operate well below the desired levels of efficiency.

 

At the heart of this enablement is a class of algorithms generally known as machine learning. Machine learning was most concisely and precisely defined by Prof. Tom Mitchell of CMU almost two decades ago as, “A computer program learns, if its performance improves with experience”.  Or alternately, “Machine Learning is the study, development, and application of algorithms that improve their performance at some task based on experience (previous iterations).”   Its human-like nature is apparent in the definition itself.

 

The theory of machine learning is not new; its potential however has largely been unrealized due to the absence of the vast amounts of data needed to take machine performance to useful levels.  All of this has now changed with the explosion of available data, making machine learning one of the most active areas of emerging algorithm research. Our research group, the Parallel Computing Lab, part of Intel Labs, has been at the forefront of such research.  We seek to be an industry role-model for application-driven architectural research. We work in close collaboration with leading academic and industry co-travelers to understand architectural implications—hardware and software—for Intel’s upcoming multicore/many-core compute platforms.

 

At the Intel Developer Forum this week, I summarized our progress and findings.  Specifically, I shared our analysis and optimization work with respect to core functions of machine learning for Intel architectures.  We observe that the majority of today’s publicly available machine learning code delivers sub-optimal compute performance. The reasons for this include the complexity of these algorithms, their rapidly evolving nature, and a general lack of parallelism-awareness. This, in turn, has led to a myth that industry standard CPUs can’t achieve the performance required for machine learning algorithms. However, we can “bust” this myth with optimized code, or code modernization to use another term, to demonstrate the CPU performance and productivity benefits.

 

Our optimized code running on Intel’s family of latest Xeon processors delivers significantly higher performance (often more than two orders of magnitude) over corresponding best-published performance figures to date on the same processing platform.  Our optimizations for core machine learning functions such as K-means based clustering, collaborative filtering, logistic regression, support vector machine training, and deep learning classification and training achieve high levels of architectural, cost and energy efficiency.
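For readers who want a baseline to experiment against, the sketch below runs one of the core functions mentioned above, K-means clustering, with an off-the-shelf library on synthetic data. It is only a reference point; it is not the Parallel Computing Lab’s optimized code, and the data sizes and parameters are arbitrary.

```python
import time
import numpy as np
from sklearn.cluster import KMeans

# Synthetic data: one million points in 50 dimensions (arbitrary sizes for illustration).
rng = np.random.default_rng(0)
X = rng.standard_normal((1_000_000, 50)).astype(np.float32)

start = time.time()
km = KMeans(n_clusters=16, n_init=1, max_iter=20, random_state=0).fit(X)
elapsed = time.time() - start
print(f"K-means on {X.shape[0]:,} x {X.shape[1]} points: {elapsed:.1f} s, "
      f"inertia = {km.inertia_:.3e}")
```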

 

In most cases, our achieved performance also exceeds best-published-to-date compute performance of special-purpose offload accelerators like GPUs. These accelerators, being special-purpose, often have significantly higher peak flops and bandwidth than our general-purpose processors. They also require significant software engineering efforts to isolate and offload parts of computations, through their own programming model and tool chain. In contrast to this, the Intel® Xeon® processor and upcoming Intel® Xeon Phi™ processor (codename Knights Landing) each offer common, non-offload-based, general-purpose processing platforms for parallel and highly parallel application segments respectively.

 

A single-socket Knights Landing system is expected to deliver over 2.5x the performance of a dual socket Intel Xeon processor E5 v3 family based system (E5-2697v3; Haswell) as measured by images per second using the popular AlexNet neural network topology.  Arguably, the most complex computational task in machine learning today is scaling state-of-the art deep neural network topologies to large distributed systems. For this challenging task, using 64 nodes of Knights Landing, we expect to train the OverFeat-FAST topology (trained to 80% classification accuracy in 70 epochs using synchronous minibatch SGD) in a mere 3-4 hours.  This represents more than a 2x improvement over the same sized two socket Intel Xeon processor E5-2697 v3 based Intel® Endeavour cluster result.
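For readers unfamiliar with the optimizer named in that result, here is a minimal sketch of minibatch SGD applied to logistic regression on toy data. It only illustrates the update rule; it says nothing about the distributed, synchronous, multi-node implementation or the deep network topology discussed above.

```python
import numpy as np

def minibatch_sgd(X, y, lr=0.1, epochs=10, batch_size=256, seed=0):
    """Minibatch SGD for logistic regression: each step averages the gradient
    over one minibatch and moves the weights a small step against it."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        order = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            preds = 1.0 / (1.0 + np.exp(-X[idx] @ w))      # sigmoid
            grad = X[idx].T @ (preds - y[idx]) / len(idx)  # average minibatch gradient
            w -= lr * grad
    return w

# Toy data: 10,000 samples, 20 features, labels from a random linear rule.
rng = np.random.default_rng(1)
X = rng.standard_normal((10_000, 20))
y = (X @ rng.standard_normal(20) > 0).astype(float)
w = minibatch_sgd(X, y)
print(f"training accuracy: {((X @ w > 0) == (y > 0.5)).mean():.3f}")
```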

 

More importantly, the coding and optimization techniques employed here deliver optimal performance on both Intel Xeon and Intel Xeon Phi processors, at both the single-node and multi-node levels.  This is possible because of their shared programming model and architecture.  It preserves the software investment the industry has made in Intel Xeon, and hence reduces TCO for data center operators.

 

Perhaps more importantly, we are making these performance optimizations available to our developers through the familiar Intel-architecture tool chain, specifically through enhancements over the coming couple of quarters to the Intel® Math Kernel Library (MKL) and Data Analytics Acceleration Library (DAAL).  This significantly lowers the software barrier for developers while delivering highly performant, efficient, and portable implementations.

 

Let us together grow the use of machine learning and analytics to turn big data into deep insights and prescriptive analytics – getting machines to reason and prescribe a course of action in real-time for a smart and connected world of tomorrow, and extend the benefit of Moore’s Law to new application sectors of our society.

 

For further information click here to view the full presentation or visit http://www.intel.com/idfsessionsSF and search for SPCS008.

Read more >

Customer Service Still the Heart of a Business

Now more than ever, bookings are on the rise for the cruise industry. Cruise Lines International Association, the industry’s largest trade association, estimates that 23 million people will board ocean-bound cruise ships this year, an increase of 4.4 percent over last year.



With an increasing number of potential passengers desiring to take their first cruise, making strategic business choices that strengthen the relationship between a cruise line and its passengers is of paramount importance. Princess Cruises can say confidently that they have the technology necessary to ensure and support a better user experience for both their customers and their staff. The understanding that customer service is the heart of their business, as well as the support of Intel technology, enables them to position themselves ahead of their competitors. Why is that?

 

Love at First Call

 

Princess Cruises, one of the largest cruise lines in the world, transports passengers to more than 300 exotic locations yearly. While providing world-class amenities, activities, and entertainment for its guests, Princess Cruises uses more than 6,000 geographically dispersed PCs to handle a full range of business tasks.

 

Of these PCs, 780 make up the company’s worldwide customer call center, which provides the first interaction between potential clients and Princess Cruises’ vacation planners. With all new PCs equipped with Intel Core vPro processors, Princess Cruises can confidently meet customer service demands. Whether guests are booking their first cruise or are out on the open sea exploring exotic locales, these mighty PCs are able to run a variety of business applications that are essential to meeting customer demands and building a relationship of trust.

 

Princess Cruises utilizes a special system in which cruise vacation planners are linked with the same passengers for all correspondence, forever. This allows passengers to have a more personal, familiar experience when calling with updates or questions. This level of customer engagement is made possible by Intel technology. With a 15 percent performance increase for Princess Cruises’ internal booking engine, technical difficulties and poor performance are things of the past. Passengers are happy to speak with cruise vacation planners who can search, look up, and deliver information in a much more timely fashion. Who says love doesn’t start at first call?


 

Global Connectivity Support

 

One of the challenges Princess Cruises faces is ensuring that their employees are ready to provide answers and solutions to their guests’ questions and problems effectively. The solution to this challenge is making sure that employees are not only skilled in customer service, but are given the right tools to enable efficiency.

 

Being a global business, Princess Cruises has many remote locations, like Alaska, where certain services available for cruise passengers, such as motor coach or hotel services, can only be arranged through applications found on a PC. In the past, if an employee were to experience PC problems, they would be unable to move forward until they received a desk-side visit or chose to mail in their PC for repairs.

 

In our fast-paced world, where guests are increasingly accustomed to instant satisfaction, waiting days for a PC to be repaired in order to fulfill a guest’s request is simply not a viable option.

 

Luckily, Princess Cruises’ new PCs allow employees to take advantage of the Intel vPro platform, which enables remote management. Finally, no matter where in the world employees are located, they can be assured that they will have access to PC help.

 

What does that mean? More valuable time that can be spent engaging with customers and building a relationship of trust. With constant accessibility to PC help, employees can feel confident in their tools, and in turn, empowered to provide world-class customer service to their guests.

 

Read more >

Amplify Your Value: Reap the Rewards!

Amplify Your Value and you can Reap the Rewards…that’s kind of the theme of this entire series…how you can amplify the value of your IT department and how your company can reap the rewards. But this post is not a summary of our journey; it is about the next step on our journey. It is a post about an organization moving head first into the cloud, moving head first into buy versus build, and moving head first into changing its operating model, deciding to develop its own loyalty card program and execute one of the most impactful “IT Projects” in the 85-year history of our company. But…let me start at the beginning.


It was mid-2010 and I had just joined Goodwill Industries of Central Indiana as CIO. That first week, one of the meetings I had, in fact the first meeting with a peer VP, was with our VP of Marketing. That meeting covered a lot of ground and various topics. One that stood out for me was when she mentioned Goodwill had been discussing gift cards and loyalty cards for about eight or ten years, but it never seemed to move forward. She even pulled out a folder with enough of a thud factor to make any contract attorney jealous. It contained page after page of meeting minutes, email correspondence, and requirements. I was floored…eight years? Of talking? What was the roadblock?


A few days later, I was meeting with the VP of Retail. Again, we talked about a lot of different topics. Sure enough, the conversation soon rolled around to gift cards and loyalty cards. We’ve been talking about it for eight years…and we’ve made no progress…eight years? Of talking? What was the roadblock?


That afternoon, I met with a couple of folks from my new staff. “What’s up with this gift card and loyalty card thing?”, I asked. Eight years? Of talking? What was the roadblock?


So, since this is my blog, I get to use my “bully pulpit” to air some dirty laundry and perhaps, depending on who you ask, some revisionist history. It seemed the problem was this: Marketing blamed Retail’s inability to define requirements, Retail blamed IT for always saying “no, we can’t do that”, and IT blamed Marketing for wanting to discuss ad nauseam but never move forward. I vowed this was going to change. So, in the midst of our Strategic Planning process, I called a meeting to discuss: gift cards and loyalty cards. After all, it was very near to my sweet spot…early in my career I had spent 12 years in banking, specifically in credit cards.


As the year progressed, we began to define requirements and search commercial offerings for gift and loyalty cards. Within a few short months, the team decided to separate the project into two phases. Phase one would be gift cards and phase two would be loyalty cards. With that decision, the project kicked into high gear. Given our Point of Sale system and our requirements, we very quickly identified a gift card software provider. Within a few short months, we launched our gift card program.


Several weeks later, we reconvened our team of Marketing, Retail and IT to start on loyalty cards. We further defined our requirements: we wanted a random reward system, not a points-based system; we wanted flexibility in the rewards offered; and, most importantly, we wanted to track and drive two different behaviors on the same card: shopping and donating. Throughout the winter, we evaluated many off-the-shelf solutions. However, it was becoming readily apparent that no off-the-shelf solution was going to meet our requirements. Sure, they all offered flexibility in the rewards, but they were all based on earning points, and none of them could track two different behaviors on the same card. Even taking that into consideration, the team was narrowing the selection down to a handful of packages that met at least some of the requirements.
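For illustration only, and emphatically not Goodwill’s actual implementation, a single card record that tracks both behaviors and hands out random (not points-based) rewards might look roughly like this sketch; the odds, reward list, and field names are made up.

```python
import random

# One card record tracks both behaviors the team wanted on a single card.
card = {"card_id": "GW-0001234", "shop_visits": 0, "donation_visits": 0}

REWARDS = ["10% off next purchase", "free coffee", "surprise gift card"]

def record_visit(card: dict, behavior: str, reward_odds: float = 0.15):
    """Log a shopping or donating visit and randomly decide whether it earns a reward."""
    if behavior not in ("shop", "donate"):
        raise ValueError("behavior must be 'shop' or 'donate'")
    key = "shop_visits" if behavior == "shop" else "donation_visits"
    card[key] += 1
    return random.choice(REWARDS) if random.random() < reward_odds else None

print(record_visit(card, "shop"))
print(record_visit(card, "donate"))
print(card)
```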


I knew we had to build it. We had to deviate from our cloud-first, buy strategy and build it ourselves. There was no other way. With that in mind, we developed a response to the RFP we had issued. It was basically a general design document of what could be built. We submitted our “RFP Response” to the team along with the two or three commercial packages that had been down-selected. As selection day quickly approached, I made it a point to discuss the proposal in detail with the VP of Retail and the VP of Marketing. I could tell they were skeptical that IT could pull it off. I assured them we could, and quite frankly, played the “new guy card” and asked for a chance.


Our proposal was selected; now it was time to put up or shut up. We engaged with a local firm (Arete Software) to build the initial database and prototype, and then shifted to the internal team. As we worked feverishly on the code, the project team defined the goals and the targets for success. The launch date would be November 11, 2011 (11/11/11); we would achieve an 11% increase in retail sales; our average shopping cart would increase by $5; and we would have 100,000 cardholders at the end of the first year.


Over the course of the summer and the fall, the team worked faithfully to hit the target date. Finally…go live…the organization that was moving head first into the cloud, moving head first into buy versus build, moving head first into changing its operating model, launched its loyalty card program…Goodwill Rewards (™).


Yes, we hit our target dates; yes, we hit our budget; but how did we do on our goals? Our increase in retail sales was 13%, beating our target by two percentage points; our average shopping cart did improve, but fell short of our goal (our lessons-learned review identified some areas for improvement here); and we blew past the 100,000-cardholder mark in under six months. In fact, at the end of year one we had over a quarter of a million cardholders, and today we have over 550,000 (remarkable, considering our geographic territory is 29 counties in Central Indiana…yes, 550,000 cardholders in just 29 counties in Indiana).


To further validate our success, we were awarded the Society of Information Management of Indiana’s Innovation of the Year award in 2012. Additionally, we licensed the software to a couple other Goodwill organizations in the US, turning us into, if not a profit generator, at least a revenue generator for the company.


How were we able to achieve this? First, it truly was a team effort. In fact, I believe one of the most important outcomes of this project was for Marketing, Retail and IT to work together, as a team, to achieve a common goal. Second, our path to amplify our value by leveraging cloud technologies and avoiding C-F Projects (see That Project is a Real Cluster!) enabled us to spend our energy on this A-C project. Third, the environment and culture enabled us to take a risk, to step into the unknown, to ask for and receive the support to move forward.


Next month, we will eliminate even more C-F Projects by looking at disaster recovery in: Amplify Your Value: A Tale of Two Recoveries.


The series “Amplify Your Value” explores our five-year plan to move from an ad hoc, reactionary IT department to a value-add, revenue-generating partner. #AmplifyYourValue


We could not have made this journey without the support of several partners, including, but not limited to: Bluelock, Level 3 (TWTelecom), Lifeline Data Centers, Netfor, and CDW. (mentions of partner companies should be considered my personal endorsement based on our experience and on our projects and should NOT be considered an endorsement by my company or its affiliates).


Jeffrey Ton is the SVP and Chief Information Officer for Goodwill Industries of Central Indiana, providing vision and leadership in the continued development and implementation of the enterprise-wide information technology and marketing portfolios, including applications, information & data management, infrastructure, security and telecommunications.


Find him on LinkedIn.

Follow him on Twitter (@jtongici)

Add him to your circles on Google+

Check out more of his posts on Intel’s IT Peer Network

Read more from Jeff on Rivers of Thought

Read more >

Improving User Experience through Big Data


Enterprise IT users switch between a multitude of programs and devices on a daily basis. Inconsistencies between user interfaces can slow enterprise users’ productivity, as those users may enter the same information repeatedly or need to figure out what format to enter data in (e.g., an employee might be specified by an employee number, a name, or an e-mail address). On the application development side, code for user interfaces may be written over and over again. One approach to solving these problems is to create a common User Experience (UX) framework that facilitates discussion and the production of shareable interface templates and code. Intel IT took on the challenge to do just that, with the goals of increasing employee productivity by at least 25% and achieving 100% adoption. To create that unified enterprise UX framework, Big Data approaches were critical, as described in this white paper from IT@Intel.

 

To understand the requirements for the enterprise UX, two sources of data are available, but both have unique problems. Traditional UX research methods like surveys, narratives, or observations typically produce unstructured data and often lack statistical significance. Usage data from logs comes in large volumes, and user privacy is at risk. Unstructured, varied, and voluminous data are a perfect fit for Big Data techniques. We used de-identification (also known as anonymization) to hide the personal information of users, and combined those de-identification techniques with Big Data to create a Cloudera Hadoop-based analysis platform.
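As a simple illustration of the kind of de-identification described here (not the method Intel IT actually used), user identifiers in usage logs can be replaced with keyed one-way hashes before the data lands in the analysis cluster, so analysts can still count and join events per user without seeing who the user is. The salt value and field names are placeholders.

```python
import hashlib
import hmac

SECRET_SALT = b"store-and-rotate-this-outside-the-analysis-cluster"  # placeholder value

def de_identify(user_id: str) -> str:
    """Replace a user identifier with a keyed one-way hash (a stable pseudonym)."""
    return hmac.new(SECRET_SALT, user_id.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

log_event = {"user": "jdoe@example.com", "app": "expense-tool", "action": "submit"}
log_event["user"] = de_identify(log_event["user"])
print(log_event)   # same pseudonym for every event from this user, but no identity exposed
```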

 

Using that analysis platform, Intel IT’s UX team created a single framework standard for all enterprise solutions.  60% of Intel IT’s staff can take advantage of it.   Data from this platform was also used to select and implement a new internal social platform.  The analysis platform has also been used to analyze other aspects of user behavior, which we are planning to write about in a future IT@Intel white paper.

 

In addition to the white paper, more detail on the development of the UX framework can be found in the following papers:

 

Regarding our use of de-identification/anonymization, we talked about our early explorations in this white paper, and a more detailed analysis of the challenges of using de-identification in an enterprise setting is given in this conference paper:

Read more >

Malware Trend Continues its Relentless Climb

Malware development remains healthy.  The Intel Security Group’s August 2015 McAfee Labs Threat Report shows quarterly malware growth of 12% for the second quarter of 2015.  In total, the count of known unique malware samples has reached a mesmerizing 433 million. 


Oddly, this has become a very stable trend.   For many years, the rate of newly detected malware has remained relatively consistent, at about a 50% increase annually. 

 

Which makes absolutely no sense! 

 

Cybersecurity is an industry of radical changes, volatile events, and chaotic metrics.  The growth of users, devices, data, new technologies, adaptive security controls, and dissimilar types of attacks differs each year.  Yet the volume of malware being developed plods on with a consistent and predictable gain. 

 

What is going on?

 

Well colleagues, I believe we are witnessing a macro trend which incorporates the natural equilibrium occurring between symbiotic adversaries. 

 

Let me jump off topic for a moment.  Yes, cyber attackers and defenders have a symbiotic relationship.  There, I said it.  Without attacks, security would have no justification for existence.  Nobody would invest, and most, if not all, of the security we have today would not exist.  Conversely, attackers need security to keep their potential victims healthy, online, and valuable as targets.  Just as lions need a healthy herd to hunt if they are to avoid extinction, attackers need defenders to ensure computing continues to grow and become more relevant.  If security were not present to hold everything together, attackers would decimate systems, and in short order nobody would use them.  The herd would disappear.  So yes, a healthy electronic ecosystem has either a proper balance of predator and prey or a complete absence of both.

 

Back to this mind-boggling trend.  I believe the steady growth of malware samples is a high-level manifestation of the innumerable combined maneuverings of micro-strategies and counter-tactics.  As one group moves for an advantage, the other counters to ensure it is not defeated.  This continues on many fronts, all the time.  There is no clear winner, but no complete loser either.  The players don’t consciously think this way; instead, it is simply the nature of the symbiotic adversarial relationship.

I have a Malware Theory, and only time will tell whether it turns into a law or into dust.  My theory, that malware totals will continue to increase steadily by 50% annually regardless of security or threat maneuvering, reflects the adversarial equilibrium that exists between attackers and defenders.  Only something staggering, something that would profoundly upset the balance, will change that rate.  If my theory is correct, we should break the half-billion mark in Q4 2015.
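As a sanity check on that prediction, compounding the 433 million figure forward at roughly 12 percent per quarter (about the 50 percent annual rate the theory assumes, since 1.12 to the fourth power is approximately 1.57) does pass half a billion in Q4 2015. The sketch below is only that arithmetic, nothing more.

```python
def project(total: float, quarterly_rate: float, quarters: int) -> float:
    """Compound a sample count forward at a fixed quarterly growth rate."""
    for _ in range(quarters):
        total *= 1 + quarterly_rate
    return total

# ~433 million unique samples as of Q2 2015, assumed ~12% quarterly growth.
q3 = project(433e6, 0.12, 1)
q4 = project(433e6, 0.12, 2)
print(f"Q3 2015: {q3 / 1e6:.0f}M, Q4 2015: {q4 / 1e6:.0f}M")  # Q4 passes the half-billion mark
```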

 

So I believe this trend is likely here to stay.  It also provides important insights into our crazy industry and into why we are at this balance point.

 

Even in the face of new security technologies, innovative controls, and improved configurations, malware writers continue to invest in this method because it remains successful.  Malware continues to be the preferred method to control and manipulate systems, and access information.  It just works.  Attackers, if nothing else, are practical.  Why strive to develop elaborate methods when malware gets the job done?  (See my rants on path of least resistance for more on understanding the threats.) 

 

Defensive strategies are not slowing down malware growth.  This does not mean defensive tools and practices are worthless.  I suspect the innovation in security is keeping it in check somewhat, but not slowing it down enough to reduce the overall growth rates.  In fact, without continued investment we would likely be overrun.  We must remain vigilant in malware defense.

 

The rate of increase is a reflection of the overall efficacy of security.  Malware must be generated at a rate of 150% per year in order to compensate for security intervention and achieve the desired success.  Flooding defenders is only one strategy; attackers are also demanding higher-quality, feature-rich, smarter, and more timely weapons.

 

Malware must land somewhere in order to operate and do its dirty deeds.  PCs, tablets, phones, servers, and cloud and VM hosting systems, soon to be joined more prominently by droves of IoT devices, are all potential hosts.  Therefore, endpoints will continue to be heavily targeted, and defense will continue to be hotly contested on this crucial battleground.  Ignore anyone who claims host-based defenses are going away.  Just the opposite, my friends.

 

At a rate of over three hundred thousand new unique samples created per day, I speculate that much of the malware is being generated automatically.  Interestingly, on the defensive side, anti-malware companies are beginning to apply machine learning, community reporting, and peer validation to identify malicious code.  It is showing promise.  But just wait.  Malware writers can use the same types of machine learning and community reporting to dynamically write code that either subverts detection or takes advantage of time delays in verification.  Malware code can quickly reinvent itself before it is verified and prosecuted.  This should be an interesting arms race.  Can the malware theory hold?  Strangely, I suspect this battle, although potentially significant, may be exactly what the malware model anticipates.  The malware metronome ticks on.

 

 

Connect with me:

Twitter: @Matt_Rosenquist

Intel IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

Read more >

SmartGrid Security: Q&A with Bob Radvanosky, Co-Founder, Infracritical

Bob Radvanosky is one of the world’s leading cyber security experts, with more than two decades of experience designing IT security solutions for the utility, homeland security, healthcare, transportation, and nuclear power industries. The author of nine books on cyber … Read more >

The post SmartGrid Security: Q&A with Bob Radvanosky, Co-Founder, Infracritical appeared first on Grid Insights by Intel.

Read more >

Paving a Path to Reliable Hyper-Converged Solutions

In conversations with enterprise and cloud data center operators, hyper-converged infrastructure is a very hot topic. This new approach brings server, storage, and networking components together into an appliance designed for quicker installation and easier management. Some industry observers say hyper-converged systems are likely to play a significant role in meeting the scalability and deployment requirements of tomorrow’s data centers.

 

One view, for example, comes from IDC analyst Eric Sheppard: “As businesses embark on a transformation to become data-driven entities, they will demand a data infrastructure that supports extreme scalability and flexible acquisition patterns and offer unprecedented economies of scale. Hyperconverged systems hold the promise and the potential to assist buyers along this data-driven journey.”

 

Today, Intel is helping fuel this hyper-converged infrastructure trend with a line of new server products announced at this week’s VMworld 2015 U.S. conference in San Francisco. Intel® Server Products for Hyper-Converged Infrastructure are designed to be high quality, unbranded, semi-integrated, and configure-to-order server building blocks optimized for the hyper-converged infrastructure solutions that enterprise IT and cloud environments have requested.

 

These new offerings, which provide certified hardware for VMware EVO:RAIL* solutions, combine storage, networking, and compute in an all-in-one system to support homogenous enterprise IT environments in a manner that reduces labor costs. OEMs and channel partners can now provide hyper-converged infrastructure solutions featuring Intel’s most innovative technologies, along with world-class validation, compatibility, certification, warranty, and support.

 

For OEMs and channel partners, these products pave a path to the rapidly growing and potentially lucrative market for hyper-converged solutions. Just how big a market are we talking about? According to IDC, workload and geographic expansion will help push global hyper-converged systems revenues past the $800 million mark this year, up 116 percent over 2014.  Intel® Server Products for Hyper-Converged Infrastructure also bring together key pieces of the infrastructure puzzle, including Intel’s most innovative technologies designed for hyper-converged infrastructure enterprise workloads.

 

Intel® Server Products for Hyper-Converged Infrastructure include a 2U 4-Node chassis supporting up to 24 hot-swap hard disk drives, dual-socket compute modules offering dense performance and support for the Intel® Xeon® processor E5-2600 v3 product family, and eight high-speed NVMe* solid-state drives acting as cache to deliver high performance for VMware Virtual SAN* (VSAN*).

 

With all key server, storage, and networking components bundled together, OEMs and channel partners have what they need to accelerate the delivery of hyper-converged solutions that are easily tuned to the requirements of customer environments. Better still, they can provide their customers with the confidence that comes with Intel hardware that is fully validated and optimized for VMware EVO:RAIL and integrated into enterprise-class VSAN-certified solutions.

 

For a closer look at these new groundbreaking server products, visit the Intel hyper-converged infrastructure site.

 

 

 

 

 

 

 

1 IDC MarketScape: Worldwide Hyperconverged Systems 2014 Vendor Assessment. December 2014. Doc # 253267.

2 IDC news release. “Workload and Geographic Expansion Will Help Push Hyperconverged Systems Revenues Past $800 Million in 2015, According to IDC” April 30, 2015.

Read more >