Recent Blog Posts

Unlock Bio IT Puzzles with New Code Pipelines

The saying that “life sciences is like a puzzle” has never been more true than it is today. The life sciences are in the midst of a dramatic transformation as technology redefines what is possible for human health and healthcare. That’s why the upcoming Bio-IT World event in Boston, April 21-23, holds so much promise for moving the conversation forward and sharing knowledge that truly helps people.

 

As the show approaches, we’re excited to roll out a new resource for you that offers an optimized compendium of codes with benchmarks and replication recipes. When used on Intel®-based computing platforms, and in concert with other Intel® software tools and products, such as Intel® Solid-State Drives (Intel® SSDs), the optimized code can help you decipher data and accelerate the path to discovery.

 

Industry leaders and authors of key genomic codes have supported this new resource to ensure that genome processing runs as fast as possible on Intel®-based systems and clusters. The results have been significantly improved speed for key genomic programs and the development of new hardware and system solutions that bring genome sequencing and processing down to minutes instead of days.

 

Download codes

On the new resource page, you can currently download the following codes to run on Intel® Xeon® processors:

 

  • BWA
  • MPI-HMMER
  • BLASTn/BLASTp
  • GATK

 

If you’re looking for new tools to help handle growing molecular dynamics packages, which can span from hundreds to millions of particles, take advantage of these codes that are compatible with both Intel® Xeon® processors and Intel® Xeon® Phi™ coprocessors and allow you to “reuse” rather than “recode”:

 

  • AMBER 14
  • GROMACS 5.0 RC1
  • NAMD
  • LAMMPS
  • Quantum ESPRESSO
  • NWChem


Solve the cube

Finally, because life sciences is like a puzzle, look for a little fun and games at Bio-IT World that will test your puzzle-solving skills and benefit charity.

 

If you’ll be at the show, be sure to grab a customized, genomic-themed Rubik’s Cube at the keynote session on Thursday, April 23, and join the fun trying to solve the puzzle after the speeches at our location on the show floor. Just by participating you will be eligible to win great prizes like a tablet, a Basis watch, or SMS headphones. Here’s a little Rubik’s Cube insight if you need help.

 

Plus, we’re giving away up to $10,000 to the Translational Genomics Research Institute (TGen) in a tweet campaign that you can support. Watch for more details.

 

What questions do you have? We’re looking forward to seeing you at Bio-IT World next month.

Read more >

How to Configure Oracle Redo on the Intel PCIe SSD DC P3700

Back in 2011, I made the statement, “I have put my Oracle redo logs or SQL Server transaction log on nothing but SSDs” (Improve Database Performance: Redo and Transaction Logs on Solid State Disks (SSDs)). In fact, since the release of the Intel® SSD X25-E series in 2008, it is fair to say I have never looked back. Even though those X25-Es have long since been retired, every new product has convinced me further still that, from a performance perspective, a hard drive configuration simply cannot compete. This is not to say that there have not been new skills to learn, such as the configuration details explained in How to Configure Oracle Redo on SSD (Solid State Disks) with ASM. The Intel® SSD 910 series provided a definite step up from the X25-E for Oracle workloads (Comparing Performance of Oracle Redo on Solid State Disks (SSDs)) and proved that concerns about write peaks were unfounded (Should you put Oracle Database Redo on Solid State Disks (SSDs)). Now, with the PCIe*-based Intel® SSD DC P3600/P3700 series, we have the next step in the evolutionary development of SSDs for all types of Oracle workloads.

 

Additionally, we have updates in operating system and driver support, so a refresh of the previous posts on SSDs for Oracle is warranted to help you get the best out of the Intel SSD DC P3700 series for Oracle redo.

 

NVMe

 

One significant difference in the new SSDs is the change in interface and driver from SATA and AHCI to NVMe (Non-Volatile Memory Express). For an introduction to NVMe, see this video by James Myers, and to understand the efficiency that NVMe brings, read this post by Christian Black. As James noted, high-performance, consistent, low-latency Oracle redo logging also needs high endurance, so the P3700 is the drive to use. With a new interface comes a new driver, which fortunately is included in the Linux kernel for the Oracle-supported Linux releases: Red Hat Enterprise Linux and Oracle Linux 6.5, 6.6, and 7.

I am using Oracle Linux 7.
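
A quick way to confirm that the in-box NVMe driver is present on your kernel is shown below (generic commands, not part of the original configuration steps):

# modinfo nvme | head -3
# lsmod | grep nvme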


Booting my system with both a RAID array of Intel SSD DC S3700 series drives and an Intel SSD DC P3700 series drive shows two new disk devices (listed here with fdisk):


First, the S3700 array using the previous interface:


Disk /dev/sdb1: 2394.0 GB, 2393997574144 bytes, 4675776512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Second, the new PCIe P3700 using NVMe:

 

Disk /dev/nvme0n1: 800.2 GB, 800166076416 bytes, 1562824368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Changing the Sector Size to 4KB

 

As Oracle introduced support for 4KB sector sizes in release 11g R2, you need to be at that release or later (I am using Oracle 12c) to take full advantage of SSDs for Oracle redo. However, ‘out of the box’, as shown above, the P3700 presents a 512-byte sector size. We can use this ‘as is’ and set the Oracle parameter ‘disk_sector_size_override’ to true. With this we can then specify a 4KB blocksize when creating a redo log file, and Oracle will use 4KB redo log blocks with no compromise to performance.
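
As a minimal sketch of that first option (illustrative only: the override parameter is commonly documented in the underscore-prefixed hidden form shown here, so check with Oracle Support before setting it on a production system):

SQL> alter system set "_disk_sector_size_override"=TRUE scope=spfile;
SQL> alter database add logfile '+REDO' size 32g blocksize 4096;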


As a second option, the P3700 offers a feature called ‘Variable Sector Size’. Because we know we need 4KB sectors, we can set up the P3700 to present a 4KB sector size instead. Oracle can then use this transparently, without any additional parameters. It is important to do this before you configure or start to use the drive for Oracle, as the operation destroys any existing data on the device.

 

To do this, first check that everything is up to date using the Intel Solid State Drive Data Center Tool, available from https://downloadcenter.intel.com/download/23931/Intel-Solid-State-Drive-Data-Center-Tool. Be aware that after running the format command below it will be necessary to reboot the system to pick up the new configuration and use the device.


[root@haswex1 ~]# isdct show -intelssd
- IntelSSD Index 0 -
Bootloader: 8B1B012D
DevicePath: /dev/nvme0n1
DeviceStatus: Healthy
Firmware: 8DV10130
FirmwareUpdateAvailable: Firmware is up to date as of this tool release.
Index: 0
ProductFamily: Intel SSD DC P3700 Series
ModelNumber: INTEL SSDPEDMD800G4
SerialNumber: CVFT421500GT800CGN


Then run the following command to change the sector size. The parameter LBAFormat=3 sets it to 4KB; LBAFormat=0 sets it back to 512 bytes.

 

[root@haswex1 ~]# isdct start -intelssd 0 Function=NVMeFormat LBAFormat=3 SecureEraseSetting=2 ProtectionInformation=0 MetaDataSetting=0
WARNING! You have selected to format the drive! 
Proceed with the format? (Y|N): Y
Running NVMe Format...
NVMe Format Successful.


A reboot is necessary in my case because I am on Oracle Linux 7 with a UEK kernel at 3.8.13-35.3.1 and the NVMe device needs to be reset. On Linux kernels 3.10 and above you can instead run the following command with the system online to do the reset:

 

echo 1 > /sys/class/misc/nvme0/device/reset


The disk should now present the 4KB sector size we want for Oracle redo.

 

Disk /dev/nvme0n1: 800.2 GB, 800166076416 bytes, 195353046 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Configuring the P3700 for ASM

 

For ASM (Automatic Storage Management) we need a disk with a single partition. After giving the disk a gpt label, I use the following commands to create and verify an aligned partition.
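
The labeling step itself is not shown below; a minimal version, assuming parted is run against the NVMe device from earlier, would be:

# parted /dev/nvme0n1 mklabel gpt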

 

(parted) mkpart primary 2048s 100%                                        
(parted) print                                                            
Model: Unknown (unknown)
Disk /dev/nvme0n1: 195353046s
Sector size (logical/physical): 4096B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start  End         Size        File system  Name     Flags
1      2048s  195352831s  195350784s               primary

(parted) align-check optimal 1
1 aligned
(parted)  

     

I then use udev to set the device permissions. Note: the scsi_id command can be run independently to find the device ID to put in the rules file, and the udevadm command can be used to apply the rules. Rebooting the system during configuration is also useful to ensure that the correct permissions are applied on boot.
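
A minimal sketch of those two steps (illustrative commands, using the device path from earlier):

# /usr/lib/udev/scsi_id -g -u -d /dev/nvme0n1
# udevadm control --reload-rules && udevadm trigger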

 

[root@haswex1 ~]# cd /etc/udev/rules.d/
[root@haswex1 rules.d]# more 99-oracleasm.rules 
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="3600508e000000000c52195372b1d6008", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="nvme0n1p1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="365cd2e4080864356494e000000010000", OWNER="oracle", GROUP="dba", MODE="0660"


Successfully applied, the oracle user now has ownership of both the DC S3700 RAID array device and the P3700 device presented by NVMe.

 

[root@haswex1 rules.d]# ls -l /dev/sdb1
brw-rw---- 1 oracle dba 8, 17 Mar  9 14:47 /dev/sdb1
[root@haswex1 rules.d]# ls -l /dev/nvme0n1p1 
brw-rw---- 1 oracle dba 259, 1 Mar  9 14:39 /dev/nvme0n1p1


Use ASMLIB to mark both disks for ASM.

 

[root@haswex1 rules.d]# oracleasm createdisk VOL2 /dev/nvme0n1p1
Writing disk header: done
Instantiating disk: done

[root@haswex1 rules.d]# oracleasm listdisks
VOL1
VOL2


As the Oracle user, use the ASMCA utility to create the ASM disk groups.

 

[Screenshot: creating the ASM disk groups with ASMCA]

 

I now have two disk groups created under ASM.

 

[Screenshot: the two ASM disk groups]

 

Because of the way the disks were configured, Oracle has automatically detected and applied the 4KB sector size.

 

[oracle@haswex1 ~]$ sqlplus sys/oracle as sysasm
SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 12 10:30:04 2015
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
SQL> select name, sector_size from v$asm_diskgroup;

NAME                     SECTOR_SIZE
------------------------------ -----------
REDO                          4096
DATA                          4096

 

 

SPFILES in 4K DISKGROUPS

 

In previous posts I noted Oracle bug “16870214 : DB STARTUP FAILS WITH ORA-17510 IF SPFILE IS IN 4K SECTOR SIZE DISKGROUP”, and even with Oracle 12.1.0.2 this bug is still with us. As both of my disk groups have a 4KB sector size, it will affect me if I try to create a database in either of them without having applied patch 16870214.


With this bug, upon creating a database with DBCA, you will see the following error:

 

[Screenshot: DBCA error caused by the spfile residing in a 4KB sector size diskgroup]


The database is created and the spfile does exist, so it can be extracted as follows:

 

ASMCMD> cd PARAMETERFILE
ASMCMD> ls
spfile.282.873892817
ASMCMD> cp spfile.282.873892817 /home/oracle/testspfile
copying +DATA/TEST/PARAMETERFILE/spfile.282.873892817 -> /home/oracle/testspfile


This spfile is corrupt, and attempts to reuse it will result in errors:

 

ORA-17510: Attempt to do i/o beyond file size
ORA-17512: Block Verification Failed


However, you can extract the parameters using the strings command and then create an external spfile, or an spfile in a diskgroup with a 512-byte sector size. Once complete, the Oracle instance can be started.
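
A minimal sketch of the extraction step (the resulting pfile will need a manual tidy-up of any binary fragments before use):

$ strings /home/oracle/testspfile > /home/oracle/testpfile
$ vi /home/oracle/testpfile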

 

SQL> create spfile='/u01/app/oracle/product/12.1.0/dbhome_1/dbs/spfileTEST.ora' from pfile='/home/oracle/testpfile';
SQL> startup
ORACLE instance started


Creating Redo Logs under ASM


Viewing the same disk groups from within the Oracle instance shows that the underlying sector size has been passed right through to the database:

 

SQL> select name, SECTOR_SIZE BLOCK_SIZE from v$asm_diskgroup;

NAME                   BLOCK_SIZE
------------------------------ ----------
REDO                      4096
DATA                      4096


Now it is possible to create a redo log file with a command such as the following:

 

SQL> alter database add logfile '+REDO' size 32g;


…and Oracle will automatically create the redo log with the optimal 4KB blocksize:

 

SQL> select v$log.group#, member, blocksize from v$log, v$logfile where v$log.group#=3 and v$logfile.group#=3;

    GROUP# MEMBER                                         BLOCKSIZE
---------- ---------------------------------------------- ----------
         3 +REDO/HWEXDB1/ONLINELOG/group_3.256.874146809       4096


Running an OLTP workload with Oracle Redo on Intel® SSD DC P3700 series


To put Oracle redo on the P3700 through its paces I used a HammerDB workload. The redo is set up with a standard production-type configuration, without the commit_write and commit_wait parameters. A test shows we are running almost 100,000 transactions per second with redo generation at over 500 MB/second, meaning we would be archiving almost 2 TB per hour.
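
As a quick sanity check that commits were not being batched or deferred, both parameters can be confirmed as unset (an illustrative check, not part of the original test transcript):

SQL> show parameter commit_wait
SQL> show parameter commit_write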

 

                    Per Second      Per Transaction  Per Exec  Per Call
Redo size (bytes):  504,694,043.7   5,350.6


Log file sync, even at this level of throughput, is just above 1 ms:

 

Event           Waits       Total Wait Time (sec)  Wait Avg(ms)  % DB time  Wait Class
DB CPU                      35.4K                                59.1
log file sync   19,927,449  23.2K                  1.16          38.7       Commit


…and the average log file parallel write shows an average disk response time of just 0.13 ms:

 

Event                    Waits      %Time -outs  Total Wait Time (s)  Avg wait (ms)  Waits /txn  % bg time
log file parallel write  3,359,023  0            442                  0.13           0.12        2237277.09


 

There are six log writers on this system. As with previous blog posts on SSDs, I observed the log activity to be heaviest on the first three, so I traced the log file parallel write activity on the first one with the following method:

 

SQL> oradebug setospid 67810;
Oracle pid: 18, Unix process pid: 67810, image: oracle@haswex1.example.com (LG00)
SQL> oradebug event 10046 trace name context forever level 8;
ORA-49100: Failed to process event statement [10046 trace name context forever level 8]
SQL> oradebug event 10046 trace name context forever, level 8;

The trace file shows the following results for log file parallel write latency to the P3700. (Note that the event statement requires the comma before “level 8”; the first attempt above failed without it.)

 

Log Writer Worker  Over 1ms  Over 10ms  Over 20ms  Max Elapsed
LG00               1.04%     0.01%      0.00%      14.83ms

 

A scatter plot of all the log file parallel write latencies, recorded in microseconds on the y axis, clearly illustrates that the outliers are statistically insignificant and that none exceeds 15 milliseconds. Most of the writes are sub-millisecond, on a system that is processing many millions of transactions a minute.

[Figure: scatter plot of log file parallel write latencies in microseconds]

A subset of iostat data shows that the device is also far from fully utilized.
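
Extended device statistics such as those below can be gathered with something like the following (an assumed invocation, as the exact flags were not given; -x selects extended statistics and -m reports megabytes per second):

# iostat -xm 1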

 

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          77.30    0.00    8.07    0.24    0.00   14.39
Device:         wMB/s avgrq-sz avgqu-sz   await w_await  svctm  %util
nvme0n1        589.59    24.32     1.33    0.03    0.03   0.01  27.47

 

Conclusion


As a confirmed believer in SSDs, I have long been convinced that most experiences of poor Oracle redo performance on SSDs have been due to errors in configuration, such as sector size, block size, and/or alignment, rather than to the performance of the underlying device itself. Following the configuration steps I have outlined here, the Intel SSD DC P3700 series proves an ideal candidate for taking Oracle redo to the next level of performance without compromising endurance.

Read more >

Tablets Improve Engagements, Workflows

 

Mobility is expected to be a hot topic once again at HIMSS 2015 in Chicago. Tablets like the Surface and Windows-based versions of electronic health records (EHRs) from companies such as Allscripts are helping clinicians provide better care and be more efficient with their daily workflows.

 

The above video shows how the Surface and Allscripts’ Wand application are helping one cardiologist improve patient engagement while allowing more appointments throughout the day.  You can read more in this blog.

 

Watch the video and let us know what questions you have. How are you leveraging mobile technology in your facility?

Read more >

OpenStack® Kilo Release is Shaping Up to Be a Milestone for Enhanced Platform Awareness

By: Adrian Hoban

 

The performance needs of virtualized applications in the telecom network are distinctly different from those in the cloud or in the data center. These NFV applications are implemented on a slice of a virtual server, yet they need to match the performance delivered by a discrete appliance in which the application is tightly tuned to the platform.

 

The Enhanced Platform Awareness initiative that I am a part of is a continuous program to enable fine-tuning of the platform for virtualized network functions. This is done by exposing the processor and platform capabilities through the management and orchestration layers. When a virtual network function is instantiated by an Enhanced Platform Awareness enabled orchestrator, the application requirements can be more efficiently matched with the platform capabilities.

 

Enhanced Platform Awareness is composed of several open source technologies that the orchestration layers can treat as “tuning knobs” to adjust in order to meaningfully improve a range of packet-processing and application performance parameters.

 

These technologies have been developed and standardized through a two-year collaborative effort in the open source community.  We have worked with the ETSI NFV Performance Portability Working Group to refine these concepts.

 

At the same time, we have been working with developers to integrate the code into OpenStack®. Some of the features are available in the OpenStack Juno release, but I anticipate a more complete implementation will be a part of the Kilo release that is due in late April 2015.

 

How Enhanced Platform Awareness Helps NFV to Scale

In cloud environments, virtual application performance may often be increased by using a scale-out strategy, such as increasing the number of VMs the application can use. However, for virtualized telecom networks, applying a scale-out strategy to improve network performance may not achieve the desired results.

 

Scaling out an NFV deployment will not by itself ensure improvement in all of the important traffic characteristics (such as latency and jitter), and these are essential to providing the predictable service and application performance that network operators require. Using Enhanced Platform Awareness, we aim to address both performance and predictability requirements using technologies such as those below (a brief orchestration example follows the list):

 

  • Single Root IO Virtualization (SR-IOV): SR-IOV divides a PCIe physical function into multiple virtual functions (VFs), each with the capability to have its own bandwidth allocation. When a virtual machine is assigned its own VF, it gains a high-performance, low-latency data path to the NIC.
  • Non-Uniform Memory Architecture (NUMA): With a NUMA design, the memory allocation process for an application prioritizes the highest-performing memory, which is local to a processor core. In the case of Enhanced Platform Awareness, OpenStack® will be able to configure VMs to use CPU cores from the same processor socket and to choose the optimal socket based on the locality of the NIC device that is providing the data connectivity for the VM.
  • CPU Pinning: In CPU pinning, a process or thread has an affinity configured with one or multiple cores. In a 1:1 pinning configuration between virtual CPUs and physical CPUs, some predictability is introduced into the system by preventing host and guest schedulers from moving workloads around, which in turn facilitates other efficiencies such as improved cache hit rates.
  • Huge Page support: Provides page table entry sizes of up to 1 GB, which reduces I/O translation look-aside buffer (IOTLB) misses and improves networking performance, particularly for small packets.
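
As a minimal sketch of how an orchestrator requests these capabilities, OpenStack® exposes them through Nova flavor extra specs; the flavor name below is hypothetical, and the hw:cpu_policy spec lands with the Kilo-era work described in this post:

$ nova flavor-key vnf.large set hw:mem_page_size=1GB
$ nova flavor-key vnf.large set hw:cpu_policy=dedicated

A VM booted with this flavor would be scheduled onto a host that can back its memory with 1-GB huge pages and dedicate physical CPU cores to its virtual CPUs.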

 

A more detailed explanation of these technologies and how they work together can be found in a recently posted paper that I co-authored: A Path to Line-Rate-Capable NFV Deployments with Intel® Architecture and the OpenStack® Juno Release.

 

 

Virtual BNG/BRAS Example

The whitepaper also has a detailed example of a simulation we conducted to demonstrate the impact of these technologies.

 

We created a VNF using the Intel® Data Plane Performance Demonstrator (DPPD) as a tool to benchmark platform performance under simulated traffic loads and to show the impact of adding the Enhanced Platform Awareness technologies. DPPD was developed to emulate many of the functions of a virtual broadband network gateway (BNG) / broadband remote access server (BRAS).

 

We used the Juno release of OpenStack® for the test, patched with huge page support. A number of manual steps were applied to simulate capabilities that should be available in the Kilo release, such as CPU pinning and I/O-aware NUMA scheduling.

 

The results shown in the figure below are the relative gains in data throughput, expressed as a percentage of 10Gbps, achieved through the use of these EPA technologies. Latency and packet delay variation are also important characteristics for BNGs; another study of this sample BNG includes results related to these metrics: Network Function Virtualization: Quality of Service in Broadband Remote Access Servers with Linux* and Intel® Architecture.

 


Cumulative performance impact on Intel® Data Plane Performance Demonstrators (Intel® DPPD) from platform optimizations

 

 

The order in which the features were applied affects the incremental gains, so it is important to consider the results as a whole rather than infer relative value from the individual increases. There are also a number of other procedures that you can read more about in the whitepaper.

 

Two years of hard work by the open source community have brought us to the verge of a very important and fundamental step forward for delivering carrier-class NFV performance. Be sure to check back here for more of my blogs on this topic, and you can also follow the progress of Kilo on the OpenStack Kilo Release Schedule website.

Read more >

Bring Your Own Device in EMEA – Part 2 – Finding the Balance

In my second blog focusing on Bring Your Own Device (BYOD) in EMEA I’ll be taking a look at the positives and negatives of introducing a BYOD culture into a healthcare organisation. All too often we hear of blanket bans on clinicians and administrators using their personal devices at work, but with the right security protocols in place and enhanced training there is a huge opportunity for BYOD to help solve many of the challenges facing healthcare.

 

Much of the negativity surrounding BYOD stems from the impact of data breaches in EMEA on both patients (privacy) and healthcare organisations (business/financial). While I’d agree that the headline numbers outlined in my first blog are alarming, they do need to be considered in the context of the size of the wider national healthcare systems.

 

A great example I’ve seen of an organisation seeking to operate a more efficient health service through the implementation of BYOD is the Madrid Community Health Department in Spain. Intel and security expert Stack Overflow assessed several mobile operating systems with a view to supporting BYOD for physicians in hospitals within their organisation. I highly recommend you read more about how Madrid Community Health Department is managing mobile with Microsoft Windows-based tablets.

 

 

The Upside of BYOD

There’s no doubt that BYOD is a fantastic enabler in modern healthcare systems. But why? We’ll look at some best practice tips in a later blog but suffice to say here that much of the list below should be underpinned by a robust but flexible BYOD policy, an enhanced level of staff training, and a holistic and multi-layered approach to security.

 

1) Reduces Cost of IT

Perhaps the most obvious benefit to healthcare organisations is a reduction in the cost of purchasing IT equipment. Not only that, it’s likely that employees will take greater care of their own devices than they would of a corporate device, thus reducing wastage and replacement costs.

 

2) Upgrade and Update

Product refresh rates are likely to be more rapid for personal devices, enabling employees to take advantage of the latest technologies such as enhanced encryption and improved processing power. And with personal devices we also expect individuals to update software/apps more regularly, ensuring that the latest security updates are installed.

 

3) Knowledge & Understanding

Training employees on new devices or software can be costly and a significant drain on time, not to mention the difficulty of scheduling that time with busy clinicians and healthcare administrators. I believe that allowing employees to use the personal everyday device with which they are already familiar reduces the need for device-level training. There may still be a requirement for app-level training, but that very much depends on the intuitiveness of the apps/services being used.

 

4) More Mobile Workforce

The holy grail of a modern healthcare organisation – a truly mobile workforce. My points above all lead to clinicians and administrators being equipped with the latest mobile technology to be able to work anytime and anywhere to deliver a fantastic patient experience.

 

 

The Downside of BYOD

As I’ve mentioned previously, much of the comment around BYOD is negative and very much driven by headline news of medical records lost or stolen, the ensuing privacy ramifications and significant fines for healthcare organisations following a data breach.

 

It would be remiss of me to ignore the flip-side of the BYOD story, but I would hasten to add that much of the risk associated with the list below can be mitigated with a multi-layered approach: one that combines multiple technical safeguards with administrative safeguards such as policy, training, audit and compliance, and with physical safeguards such as locks and the secure use, transport and storage of devices.


1)  Encourages a laissez-faire approach to security

We’ve all heard the phrase ‘familiarity breeds contempt’ and there’s a good argument to apply this to BYOD in healthcare. It’s all too easy for employees to use some of the same workarounds used in their personal life when it comes to handling sensitive health data on their personal device. The most obvious example is sharing via the multitude of wireless options available today.


2) Unauthorised sharing of information

Data held at rest on a personal device is at high risk of loss or theft, and is consequently also at high risk of unauthorised access or breach. Consumers are increasingly adopting cloud services to store personal information, including photos and documents.

 

When a clinician or healthcare administrator is in a pressured working situation with their focus primarily on the care of the patient there is a temptation to use a workaround – the most obvious being the use of a familiar and personal cloud-based file sharing service to transmit data. In most cases this is a breach of BYOD and wider data protection policies, and increases risk to the confidentiality of sensitive healthcare data.


3) Loss of Devices

The loss of a personal mobile device can be distressing for the owner but it’s likely that they’ll simply upgrade or purchase a new model. Loss of personal data is quickly forgotten but loss of healthcare data on a personal device can have far-reaching and costly consequences both for patients whose privacy is compromised and for the healthcare organisation employer of the healthcare worker. An effective BYOD policy should explicitly deal with loss of devices used by healthcare employees and their responsibilities in terms of securing such devices, responsible use, and timely reporting in the event of loss or theft of such devices.


4) Integration / Compatibility

I speak regularly with healthcare organisations and I know that IT managers see BYOD as a mixed blessing. On the one hand the cost savings can be tremendous, but on the other they are often left having to integrate multiple devices and operating systems into the corporate IT environment. What I often see is a fragmented BYOD policy which excludes certain devices and operating systems, leaving some employees disgruntled and feeling left out. A side-effect of this is that it can lead to the sharing of devices, which can compromise audit and compliance controls and also brings us back to point 2 above.

 

These are just some of the positives and negatives around implementing BYOD in a healthcare setting. I firmly sit on the positive side of the fence when it comes to BYOD, and here at Intel Security we have solutions to help you overcome the challenges in your organisation, such as Multi-Factor Authentication (MFA) and Solid State Drives (SSDs) with built-in encryption, which complement the administrative and physical safeguards you use in your holistic approach to managing risk.

 

Don’t forget to check out the great example from the Madrid Community Health Department to see how our work is having a positive impact on healthcare in Spain. We’d love to hear your own views on BYOD so do leave us a comment below or if you have a question I’d be happy to answer it.

 

 

David Houlding, MSc, CISSP, CIPP is a Healthcare Privacy and Security lead at Intel and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts

Read more >

Tackling Information Overload in Industrial IoT Environments

Feeling inundated by too much industrial IoT data? Well, you’re not alone. According to an Economist Intelligence Unit report, most manufacturers are experiencing  information overload due to the increasing volume of data generated by automated processes. Senior factory executives in … Read more >

The post Tackling Information Overload in Industrial IoT Environments appeared first on IoT@Intel.

Read more >

Ready, Set, Action. Enhanced Platform Awareness in OpenStack for Line Rate NFV

By: Frank Schapfel

 

One of the challenges in deploying Network Functions Virtualization (NFV) is creating the right software management for the virtualized network. There are differences between managing an IT Cloud and a Telco Cloud. IT Cloud providers take advantage of centralized and standardized servers in large-scale data centers, and IT Cloud architects aim to maximize the utilization (efficiency) of the servers and automate operations management. Telco Cloud application workloads, in contrast, have real-time constraints, government regulatory constraints, and network setup and teardown constraints. New tools are needed to build a Telco Cloud to these requirements.

 

OpenStack is the open software community that has been developing IT Cloud orchestration management since 2010. The Telco service provider community of end users, telecom equipment manufacturers (TEMs), and software vendors has rallied around adapting OpenStack cloud orchestration for the Telco Cloud, and over the last few releases of OpenStack the industry has been shaping and delivering Telco Cloud-ready solutions. For now, let’s just focus on the real-time constraints. In an IT Cloud, the data center is viewed as a large pool of compute resources that needs to operate at maximum utilization, even to the point of over-subscription of the server resources; waiting a few milliseconds is imperceptible to the end user. A network, on the other hand, is real-time sensitive and therefore cannot tolerate over-subscription of resources.

 

To adapt OpenStack to be more Telco Cloud friendly, Intel contributed the concept of “Enhanced Platform Awareness” to OpenStack. Enhanced Platform Awareness in OpenStack offers fine-grained matching of virtualized network resources to the server platform capabilities. Having a fine-grained view of the server platform allows the orchestrator to accurately assign the Telco Cloud application workload to the best virtual resource. The orchestrator needs NUMA (Non-Uniform Memory Architecture) awareness so that it can understand how the server resources are partitioned and how CPUs, IO devices, and memory are attached to sockets. For instance, when workloads need line-rate bandwidth, high-speed memory access is critical, and huge page support is among the latest technologies in the Intel® Xeon® E5-2600 v3 processor.
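
As a minimal illustration of the platform view the orchestrator needs, the same information can be inspected on a Linux host with standard tools (generic commands, not from the demo described below):

$ numactl --hardware       # NUMA nodes, and the CPUs and memory attached to each socket
$ grep Huge /proc/meminfo  # configured huge page sizes and counts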

 

Now in action at the Oracle Industry Connect event in Washington, DC, Oracle and Intel are demonstrating this collaboration using Enhanced Platform Awareness in OpenStack. The Oracle Communications Network Service Orchestration solution uses OpenStack Enhanced Platform Awareness to achieve carrier-grade performance for the Telco Cloud: Virtualized Network Functions are assigned server resources based on their needs for huge page access and NUMA awareness, while other cloud workloads, which are not network functions, are not assigned specific server resources.

 

The good news: the Enhanced Platform Awareness contributions are already upstream in the OpenStack repository and will be in the OpenStack Kilo release later this year. At Oracle Industry Connect this week, there is a keynote, panel discussions, and demos to get even further “under the hood.” And if you want even more detail, there is a new Intel white paper: A Path to Line-Rate-Capable NFV Deployments with Intel® Architecture and the OpenStack® Juno Release.

 

Adapting OpenStack for the Telco Cloud is happening now, and Enhanced Platform Awareness is finding its way into a real, carrier-grade orchestration solution.

Read more >