Recent Blog Posts

How does a business recover from a large-scale cyber security disaster?

Corporations need to get three things right in cyberspace: protect their valuable information, ensure that business operations continue during disturbances, and maintain their reputation as trustworthy. These goals support one another and enable successful use of the digital world. Yet because of its dynamic nature, there is no absolute security in cyberspace. What should you do when something goes wrong? The best way to survive a blast is to prepare for it in advance.

 

Cyber security requires transformed security thinking. Security should not be seen as an end-state, achieved once through a tailored investment in technology, but as an ongoing process that must adapt to changes in the environment. Effective security production is agile and innovative. It aligns cyber security with the overall business process so that the former supports the latter. When maintaining cyber security is treated as one of the corporation’s core managerial functions, its importance is raised to the correct level. It is not only IT managers and officers who need to understand cyberspace and how it relates to their areas of responsibility.

 

A cyber security point of view can be integrated into the business process, for example, by constructing and executing a dedicated cyber strategy for the corporation. This should start from enablement and consider the opportunities that the corporation wishes to seize in the digital world. It should also recognize threats in cyberspace and specify how they will be countered. The strategy process should be led by the highest managerial level yet be responsive to ideas and feedback from both the operational and technical levels of execution. That way the entire organization will be committed to the strategy and feel ownership of it. Moreover, the strategy will be realistic, neither chasing unachievable goals nor relying on processes that are technically impossible to build.

 

It is common practice for corporations to do business continuity planning. However, operations in the digital world are not always included in it – despite the acknowledged dependency on cyberspace that characterizes modern business. There seems to be a strong belief in bits: that they won’t let us down. The importance of a plan B is often neglected, and the ability to operate without a functioning cyberspace is lost. Plan B – an essential building block of any cyber strategy – should contain guidelines for partners, managers and employees in case of a security breach or a large cyber security incident: what to do, whom to inform, and how to address the issue in public.

 

Plan B should include enhanced intrusion detection, adequate responses to security incidents, and a communication strategy. Whom to inform, at what level of detail, and at which stage of the recovery process? Too little communication may give the impression that the corporation is trying to hide something or isn’t on top of its responsibilities. Too much communication too early in the mitigation and restoration process may lead to panic or exaggerated loss estimates. In both cases the corporation’s reputation suffers. Openness and correct timing are the key words here.

 

A resilient corporation is able to continue its business operations even when the digital world does not function the way it is supposed to. Digital services may be scaled down without the customer experience suffering too much. Effective detection of both breaches and the associated losses, together with fast restoration of services, not only serves the corporation’s immediate business goals but also projects good cyber security. Admitting that there are problems while demonstrating that the necessary security measures are being taken is essential throughout the recovery period. So is honest communication to stakeholders at the right level of detail.

 

Without adequate strategy work and execution, the trust placed in the corporation and its digital operations is easily lost. Without trust it is difficult to find partners for cyber-dependent business operations, and customers turn away from the corporation’s digital offerings. Trust is the most valuable asset in cyberspace.

 

Planning in advance and building a resilient business entity safeguard corporations against digital disasters. If disaster has already struck, it is important to speak up, demonstrate that lessons have been learned, and show what will be done differently from now on. The corporation must listen to those who have suffered and live up to its responsibilities. Only this way can market trust be restored.

 

- Jarno

 

Find Jarno on LinkedIn

Start a conversation with Jarno on Twitter

Read previous content from Jarno

Read more >

Breaking Down Battery Life

Many consumer devices have become almost exclusively portable. As we rely more and more on our tablets, laptops, 2-in-1s, and smartphones, we expect more and more out of our devices’ batteries. The good news is, we’re getting there. As our devices evolve, so do the batteries that power them. However, efficient batteries are only one component of a device’s battery life. Displays, processors, radios, and peripherals all play a key role in determining how long your phone or tablet will stay powered.


Processing Power

Surprisingly, the most powerful processors can also be the most power-friendly. By quickly completing computationally intensive jobs, full-power processors like the Intel Core™ i5 processor can return to a lower power state faster than many so-called “power-efficient” processors. While it may seem counterintuitive at first glance, laptops and mobile devices armed with these full-powered processors can have battery lives that exceed those of smaller devices. Additionally, chip makers like Intel work closely with operating system developers like Google and Microsoft in order to optimize processors to work seamlessly and efficiently.


Display

One of the biggest power draws on your device is its display. Bright LCD screens require quite a bit of power when fully lit. As screens evolve to contain more and more pixels, battery manufacturers have tried to keep up. The growing demand for crisp high-definition displays makes it even more crucial for companies to find new avenues for power efficiency.

 

Radios

Almost all consumer electronic devices being produced today have the capacity to connect to an array of networks. LTE, Wi-Fi, NFC, GPS — all of these acronyms pertain to some form of radio in your mobile phone or tablet, and ultimately mean varying levels of battery drain. As the methods of wireless data transfer have evolved, the amount of power required for these data transfers has changed. For example, trying to download a large file using a device equipped with older wireless technology may actually drain your battery faster than downloading the same file using a faster wireless technology. Faster downloads mean your device can stay at rest more often, which equals longer battery life.

 

Storage

It’s becoming more and more common for new devices to come equipped with solid-state drives (SSDs) rather than hard-disk drives (HDDs). By the nature of the technology, HDDs can use up to three times the power of SSDs, and they have significantly slower data transfer rates.

 

These represent just a few of the things you should evaluate before purchasing your next laptop, tablet, 2-in-1, or smartphone. For more information on what goes into evaluating a device’s battery life, check out this white paper. To join the conversation on Twitter, follow us at @IntelITCenter or use #ITCenter.

Read more >

Dishing up Some SMAC Talk

I have been a huge proponent of social media and social networking for the past few years. It’s been interesting to see how social networking, once reserved for friends and family, has made its way into the enterprise workplace. Individuals are now more mobile and have a range of choices for what device(s) they use for any given task. There is more data than ever before, and a desire to turn those bits of information into insights and actions. And the cloud has created new opportunities to deliver applications, services, and value.

 

The combination of these transformative trends is known as SMAC: social, mobile, analytics, and cloud. And it’s the result of the increasing consumerization of IT, with users demanding the devices and capabilities they enjoy at home.

 

Intel IT has embraced the SMAC model with fervor. It’s a great way to give Intel employees the information and services they want, no matter where they are or what device they are using. And it helps IT continually improve the speed and efficiency of resource and service delivery.

 

You can find out more about our SMAC model – and how David Aires, Intel Vice President and General Manager of IT, and his team are moving to the leading edge of change – in the Intel IT Business Review.

 

http://itbusinessreview.intel.com/leading-it/110-moving-to-the-leading-edge-of-the-change-wave

 

Here are a few examples of the progress made by David and his team:

 

  • Intel IT distributed nearly 14,000 touch-enabled Ultrabooks to our workforce in 2013 to give users a lighter, more mobile computing platform than traditional PCs and laptops.

 

  • Intel IT implemented a BYOD program two years ago, and a majority of the 45,000 mobile devices at Intel are now employee-owned.

 

  • The increase in mobile devices has upped the demand for mobile apps. Intel IT developed 57 enterprise mobile apps in 2013 alone, and has delivered 123 mobile apps to the Intel workforce since 2011.


  • To increase IT agility and efficiency, they have virtualized more than 80 percent of Intel’s infrastructure and are delivering more services through IT’s internal cloud.


 

These changes aren’t just good for our employees. They are also good for business. By adopting and promoting SMAC, this Intel IT team is boosting productivity, keeping costs down, and staying in front of industry trends.

 

To learn more about how this team is delivering operational excellence, increasing employee productivity, reducing costs, and deploying new technologies that raise expectations of IT, download the Intel IT Business Review mobile app: http://itbusinessreview.intel.com/

 


 

 

Download the Intel IT Business Review mobile app to see how we are putting the latest technology trends to use.

 

And perhaps we can engage in some friendly “SMAC talk.”  Follow me on Twitter: @davidlaires #IntelIT

 

David Aires

General Manager of Operations

Intel Information Technology

Read more >

Episode Recap – Transform IT with Guest Ray Noonan, CEO, Cogent

How did you like what Ray Noonan, CEO of Cogent, had to say about collaboration and the need to focus on business value?

 

Did it challenge you?

 

It probably should have. If I can summarize what Ray shared with us, it would be that we need to:

 

Break down the walls that keep us apart, and always put business value above the needs of IT.


I’m quite sure that some of what he said sent shivers down the spines of IT people everywhere. But Ray wasn’t focused on “IT” – only on what IT can do to deliver value to the organization.

 

He believes that IT is too important to be segregated into a separate function, so he integrated it directly into the business units. He believes that we should all be technologists, and that we need to trust our people with technology decisions. He believes that the sense of “ownership” – to the degree that it inhibits sharing and collaboration – must be eliminated so that our teams can work together rapidly and fluidly. And he believes that the only thing that matters is the value generated for the business – so if an IT process or policy is somehow disrupting the delivery of value, it should be changed.

 

If you keep your “IT hat” on, these ideas can seem scary and downright heretical. But if you think like a CEO, they make a lot more sense.

 

And that was Ray’s big challenge to all of us.

 

To break down our “ownership walls”.

To focus, instead, on how we create value for the organization.

To understand and embrace that value.

And then to deliver and protect it.

 

The question for you is how you’re going to start doing that. How will you begin?

 

Share with us the first step that you’re going to take to begin breaking down your own “ownership walls” and to focus on value. I believe that your ability to understand how value is created for your business, and how you personally contribute to that value, is perhaps one of the most critical first steps in your own transformation into a true digital leader.

 

So decide what you will do to begin this process and start now. There’s no time to wait!

 

If you missed Episode 2, you can watch it on-demand here: http://intel.ly/1rrfyg1

 

Also, make sure you tune in on October 14th when I’ll be talking to Patty Hatter, Sr. VP of Operations & CIO at McAfee, about “Life at the Intersection of IT and Business.” You can register for a calendar reminder here.


You can join the Transform IT conversation anytime using the Twitter hashtags #TransformIT and #ITChat.

Read more >

Upgrade to an NVMe-Capable Linux Kernel

Here in the Intel NVM and SSD group (NSG) we build and test Linux systems a lot, and we’ve been working to mature the NVMe driver stack on all kinds of operating systems. The Linux kernel is the innovation platform today, and it has come a long way with NVMe stability. We have always had a high-level kernel build document, but never in a blog (bad Intel, we are changing those ways). We also wanted to refresh it a bit, as NVMe on Linux is now well along in maturity. Kernel 3.10 is when integration really happened, and the important data center Linux OS vendors fully support the driver. If you are on a 2.6 kernel and want to move up to a newer kernel, here are the steps to build one for your test platform and try out one of Intel’s Data Center SSDs for PCIe and NVMe. This assumes you want the latest and greatest for testing and are not interested in an older or vendor-supported kernel. By the way, on those “6.5 distributions” you won’t be able to get a supported 3.x kernel; that’s one reason I wrote this blog. But it will run and let you test with something newer. You may have your own reasons, I am sure. For production use you will probably want to work together with your OS vendor.

 

I run a 3.16.3 kernel on some of the popular 6.5 distros; you can too.

 

1.    NVM Express background

NVM Express (NVMe) is an optimized interface for PCI Express SSDs. The NVM Express specification defines an optimized register interface, command set, and feature set for PCI Express (PCIe)-based solid-state drives (SSDs). Please refer to www.nvmexpress.org for background on NVMe.

The NVM Express Linux driver development follows the typical open-source process used by kernel.org. The development mailing list is linux-nvme@lists.infradead.org.

The Linux NVMe driver was integrated in kernel 3.10 and is included in all later kernels.
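
If you want to check whether the kernel you are currently running already ships the driver, here is a quick sketch (assuming a modular driver build; the commands are standard, but the output varies by distribution):

uname -r              # running kernel version
modinfo nvme          # prints driver details if the nvme module is available
lsmod | grep nvme     # shows whether the module is currently loaded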

 

2.    Development tools required (possible prerequisites)

In order to clone, compile, and build the new kernel/driver, the following packages are needed:

  1. ncurses
  2. build tools
  3. git (optional; you can use wget instead to fetch the kernel source)

You must be root to install these packages.

Ubuntu based

apt-get install git-core build-essential libncurses5-dev  

RHEL based

yum install git-core ncurses ncurses-devel
yum groupinstall "Development Tools"

SLES based        

zypper install ncurses-devel git-core
zypper install --type pattern Basis-Devel

 

3.    Build a new Linux kernel with the NVMe driver

Pick a starting distribution. From the driver’s perspective it does not matter which distribution you use, since you are going to put a new kernel on top of it, so use whatever you are most comfortable with and/or whatever has the required tools.

Get the kernel and driver

  1. Download a “snapshot” of the kernel source from the top commit, or clone it with git (here’s an example using wget):

            wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.16.3.tar.xz

            tar -xvf linux-3.16.3.tar.xz

 

    2.      Build and install

Run menuconfig (which uses ncurses):

make menuconfig

Confirm that the NVMe driver under Block devices is set to <M>:

Device Drivers -> Block devices -> NVM Express block device

This creates a .config file in the same directory.
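
To double-check the selection without re-opening menuconfig, you can grep the generated file (BLK_DEV_NVME is the upstream config symbol for this driver):

grep BLK_DEV_NVME .config     # expect CONFIG_BLK_DEV_NVME=m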

Then, run these make commands as root (set the -j flag to about half your core count to improve build time):

make -j10

make modules_install -j10

make install -j10

 

Depending on the distribution you use, you may have to run update-initramfs and update-grub, but this is typically unnecessary. Once the install succeeds, reboot the system to load the new kernel and drivers. The new kernel usually becomes the default boot entry, i.e. the top line of menu.lst. After booting, verify with “uname -a” that the running kernel is what you expect, and use “dmesg | grep -i error” to find and resolve any kernel loading issues.
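
If you do need those steps, here is a sketch; the exact file names are assumptions, so adjust the version string to the kernel you built:

On Debian/Ubuntu style systems:

update-initramfs -c -k 3.16.3     # create an initramfs for the new kernel
update-grub                       # regenerate the GRUB menu

On RHEL style systems, “make install” normally runs dracut for you; if it does not:

dracut --force /boot/initramfs-3.16.3.img 3.16.3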

 

4.  NVMe Driver basic tests and tools

          There are some basic open-source NVMe test programs you can use for checking NVMe devices:

          http://git.infradead.org/users/kbusch/nvme-user.git

          Git’ing the source code

git clone git://git.infradead.org/users/kbusch/nvme-user.git

Making testing programs

Add to or modify the Makefile with the proper library and header paths, then compile the programs:

make

 

For example, check the NVMe device controller “identify” and namespace “identify” data:

sudo ./nvme_id_ctrl /dev/nvme0n1

sudo ./nvme_id_ns /dev/nvme0n1
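
These tools assume the device nodes are present. To confirm that the kernel actually enumerated the drive, a quick sketch:

ls -l /dev/nvme*                  # controller (nvme0) and namespace (nvme0n1) nodes
cat /proc/partitions | grep nvme  # the namespace should appear as a block device
dmesg | grep -i nvme              # driver probe and initialization messages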

 

The Intel SSD Data Center Tool 2.0 also supports NVMe.

 

Here are more commands you’ll find useful.

Zero out and condition a drive sequentially for performance testing:

dd if=/dev/zero of=/dev/nvme0n1 bs=2048k count=400000 oflag=direct

Quick-test a drive – is it reading at over 2 GB a second?

hdparm -tT --direct /dev/nvme0n1
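
hdparm only gives a rough sequential number. For a more controlled measurement, fio (a separate package; this invocation is a sketch, not a tuned benchmark) can drive the raw device directly:

fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=128k \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=30 --time_based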

 

Again, enjoy these gigabyte-per-second class SSDs and their low-microsecond controller latency!


Read more >

Quick Q & A: Safer Highways, Smarter Cities, and Fewer Traffic Jams

It’s fascinating to think about how innovations in transportation, including the trend toward Internet of Things implementations, can enhance the quality of life for people across the globe. Indeed, technology can help us address significant challenges around really important aspects … Read more >

The post Quick Q & A: Safer Highways, Smarter Cities, and Fewer Traffic Jams appeared first on IoT@Intel.

Read more >

Health IT Does Not Transform Healthcare; Healthcare Cannot Transform Without Health IT

Below is a guest post from Steven E. Waldren, MD MS.

 

I was listening to the Intel Health videocast[1] of Eric Dishman, Dr. Bill Crounse, Dr. Andy Litt, and Dr. Graham Hughes. There was an introductory line that rang true, “EHR does not transform healthcare.” This statement prompted me to write this post.

 

The healthcare industry and policy makers have frequently seen health information technology (health IT) as a relatively easy fix to the quality and cost issues plaguing the U.S. health system. If we adopt health IT and make it interoperable, we will drastically improve quality and lower cost. Research provides evidence that health IT can do both.

 

I believe, however, that this interpretation of the research misses a very important dependent variable: the sociotechnical system within which the health IT is deployed. For the uninitiated, Wikipedia provides a good description of a sociotechnical system.[2] In essence, it is the system of people, workflow, information, and technology in a complex work environment. Healthcare is definitely a complex adaptive environment.[3] To put a finer point on this: if you deploy health IT in an environment in which the people, workflow, and information are aligned to improve quality and lower cost, then you are likely to see those results. On the other hand, if you implement the technology in an environment in which the people, workflow, and information are not aligned, you will likely not see improvement in either area.

 

Another reason it is important to look at health IT as a sociotechnical system is to couple the provider needs and capabilities to the health IT functions needed. I think, as an industry, we have not done this well. We too quickly jump into the technology, be it patient portal, registry, or e-prescribing, instead of focusing on the capability the IT is designed to enable, for example, patient collaboration, population management, or medication management, respectively.

 

Generally, the current crop of health IT has been focused on automating the business of healthcare, not on automating care delivery. The focus has been on generating and submitting billing, and on generating documentation to justify billing. Supporting chronic disease management, prevention, or wellness promotion takes a side seat, if not a backseat. As the healthcare industry transitions to value-based payment, the focus has begun to change. As a healthcare system, we should focus on the capabilities that providers and hospitals need to support effective and efficient care delivery. From those capabilities, we can define the roles, workflows, data, and technology needed to support practices and hospitals in achieving them. By loosely coupling our work to these capabilities – rather than simply adopting a standard, acquiring a piece of technology, or sending a message – we gain a metric to determine whether we are successful.

 

If we do not focus on the people, workflow, data, and technology, but instead only focus on adopting health IT, we will struggle to achieve the “Triple Aim™,” to see any return on investment, or to improve the satisfaction of providers and patients. At this time, a real opportunity exists to further our understanding of the optimization of sociotechnical systems in healthcare and to create resources to deploy those learnings into the healthcare system. The opportunity requires us to expand our focus to the people, workflow, information, AND technology.

 

What questions do you have about healthcare IT?

 

Steven E. Waldren, MD MS, is the director of the Alliance for eHealth Innovation at the American Academy of Family Physicians.

 


[1] https://t.co/J7jISyg2NI

[2] http://en.wikipedia.org/wiki/Sociotechnical_system

[3] http://ti.gatech.edu/docs/Rouse%20NAEBridge2008%20HealthcareComplexity.pdf

Read more >

Will the Invincible Buckeyes Team from OSU and OSC Prove to be Invincible?

Mike Bernhardt is the Community Evangelist for Intel’s Technical Computing Group

 

Karen Tomko, Scientific Applications Group Manager at the Ohio Supercomputer Center (OSC), has assembled a team of fellow Buckeyes to attempt the Intel Parallel Universe Computing Challenge (PUCC) at SC14 in November.

 

We asked Karen a few questions about her team, called the Invincible Buckeyes (IB), and their proposed participation in the PUCC.

 

The 2014 Invincible Buckeyes (IB) team includes (from l to r) Khaled Hamidouche, a post-doctoral researcher at The Ohio State University (OSU); Raghunath Raja, Ph.D. student (CS) at OSU; team captain Karen Tomko; and Akshay Venkatesh, Ph.D. student (CS) at OSU. Not pictured is Hari Subramoni, a senior research associate at OSU.

 

Q: What was the most exciting thing about last year’s PUCC?

A: Taking a piece of code from sequential to running in parallel on the Xeon Phi in 15 minutes, in a very close performance battle against the Illinois team, was a lot of fun.

 

Q: How will your team prepare for this year’s challenge?

A: We’ll do our homework for the trivia, brush up on the parallel constructs, look at some Fortran codes, and make sure we have at least one vi user on the team.

 

Q: What would you suggest to other teams who are considering participation?

A: First I’d say, if you are considering it, then sign up. It’s a fun break from the many obligations and talks at SC. When you’re in a match, don’t overthink – the time goes very quickly. Also, watch out for the ‘Invincible Buckeyes’!

 

Q: SC14 is using the theme “HPC Matters” for the conference. Can you explain why “HPC Matters” to you?

A: HPC systems allow scientists and engineers to tackle grand challenge problems in their respective domains and make significant contributions to their fields. HPC has enabled innumerable discoveries in fields such as astrophysics, earthquake analysis, weather prediction, nanoscience modeling, multi-scale and multi-physics modeling, biological computation, and computational fluid dynamics, to name a few. Being able to contribute directly or indirectly to these discoveries through the research we do matters a lot to our team.

Read more >