Recent Blog Posts

Analytics – Delivering insights worth millions

In my final insight into the Intel IT Business Review, I am looking at the impact of one of the biggest trends in business IT: Big Data, or, as I prefer to call it, Analytics.

 

In an age when organizations such as Intel are rich in data, finding value in this data lies in the ability to analyze it and derive actionable business intelligence (BI). Intel IT continues to invest in tools that can transform data into insights to solve high-value business problems. We have seen significant BI results from our investments in a number of areas.

 

For example, Intel IT has developed a recommendation engine to help Intel sales teams strategically focus their sales efforts to deliver greater revenue. This engine uses predictive algorithms and real-time data analysis to prioritize sales engagements with resellers that show the greatest potential for high-volume sales. We saw USD 76.2 million in revenue uplift for 2014 through the use of this capability.

 

Integrating multiple data sources has enabled Intel to use its decision support system to significantly impact revenue and margins by optimizing supply, demand, and pricing decisions. This work resulted in revenue optimization of USD 264 million for 2014.

 

And the big data platform for web analytics is yielding insights that enable more focused and effective marketing campaigns, which, in turn, increase customer engagement and sales.

 

The exploration and implementation of Assembly Test Manufacturing (ATM) cost reduction initiatives involve complex algorithms and strong computation capabilities due to the high volume and velocity of data that must be processed quickly. The ATM data sets, containing up to billions of rows, cannot be effectively processed with traditional SQL platforms. To address this gap, IT has implemented a reusable big data analytics correlation engine. This tool will support various high-value projects. The estimated value for the first of these projects, a pilot project for one of Intel’s future processors, is greater than USD 13 million.

 

Intel IT is exploring additional use cases for data collection and analytics across Intel’s manufacturing, supply chain, marketing, and other operations to improve Intel’s operational efficiency, market reach, and business results. In 2014 alone, Intel IT’s use of BI and analytics tools increased Intel revenue by USD 351 million.

 

To read the Intel IT Business Review in full, go to www.intel.com/ITAnnualReport

Read more >

Creating Confidence in the Cloud

In every industry, we continue to see a transition to the cloud. It’s easy to see why: the cloud gives companies a way to deliver their services quickly and efficiently, in a very agile and cost-effective way.

 

Financial services is a good example of where the cloud is powering digital transformation. We’re seeing more and more financial enterprises move their infrastructure, platforms, and software to the cloud to quickly deploy new services and new ways of interacting with customers.

 

But what about security? In financial services, where security breaches are a constant threat, organizations must focus on security and data protection above all other cloud requirements.

 

This is an area Intel is highly committed to, and we offer solutions and capabilities designed to help customers maintain data security, privacy, and governance, regardless of whether they’re utilizing public, private, or hybrid clouds.

 

Here’s a brief overview of specific Intel® solutions that help enhance security in cloud environments in three critical areas:

  • Enhancing data protection efficiency. Intel® AES-NI is a set of processor instructions that accelerate encryption based on the widely used Advanced Encryption Standard (AES) algorithm. These instructions enable fast and secure data encryption and decryption, removing the performance barrier that has limited more extensive use of this vital data protection mechanism. With the performance penalty reduced, cloud providers are starting to embrace AES-NI to promote the use of encryption. (A quick way to verify and measure this on Linux is sketched after this list.)
  • Enhancing data protection strength. Intel® Data Protection Technology with AES-NI and Secure Key provides a foundation for strong cryptography without sacrificing performance. These solutions enable faster, higher-quality cryptographic keys and certificates than pseudo-random, software-based approaches, in a manner better suited to shared, virtual environments.
  • Protecting the systems used in the cloud or compute infrastructure. Intel® Trusted Execution Technology (Intel® TXT) is a set of hardware extensions to Intel® processors and chipsets with security capabilities such as measured launch and protected execution. Intel TXT provides a hardware-enforced, tamper-resistant mechanism to evaluate critical, low-level system firmware and OS/hypervisor components from power-on. With this, malicious or inadvertent code changes can be detected, helping assure the integrity of the underlying machine that your data resides on. Ultimately, if the platform cannot be proven secure, the data on it cannot really be considered secure.
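
As a quick, concrete illustration of the first point, here is a minimal sketch, assuming a Linux host with OpenSSL installed (the capability mask shown is the commonly documented bit for disabling AES-NI in OpenSSL), of how you might verify AES-NI support and compare throughput with and without it:

# Confirm the CPU advertises the AES instruction set
grep -m1 -o aes /proc/cpuinfo

# Measure AES-128-CBC throughput with AES-NI (the default when supported)
openssl speed -evp aes-128-cbc

# Measure again with AES-NI masked off, for comparison
OPENSSL_ia32cap="~0x200000000000000" openssl speed -evp aes-128-cbc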

 

Financial services customers worldwide are using these solutions to provide added security at both the platform and data level in public, private, and hybrid cloud deployments.

 

Putting It into Practice with Our Partners

 

At Intel®, we are actively engaged with our global partners to put these security-focused solutions into practice. One of the more high-profile examples is our work with IBM. IBM is using Intel TXT to deliver a secure, compliant, and trusted global cloud for SoftLayer, its managed hosting and cloud computing provider. When IBM SoftLayer customers order cloud services on the IBM website, Intel TXT creates an extra layer of trust and control at the platform level. We are also working with IBM to offer Intel TXT-enhanced secure processing solutions including VMware/HyTrust, SAP, and IBM Cloud OpenStack Services.

 

In addition, Amazon Web Services (AWS), a major player in financial services, uses Intel AES-NI for additional protection on its Elastic Compute Cloud (EC2) web service instances. Using this technology, AWS can speed up encryption and reduce exposure to software-based vulnerabilities, because encryption and decryption instructions are executed efficiently in hardware.

 

End-to-End Security

 

Intel security technologies are not meant only to help customers in the cloud. They are designed to work as end-to-end solutions that offer protection from the client to the cloud. In my previous blog, for example, I talked about Intel® Identity Protection Technology (Intel® IPT), a hardware-based identity technology that embeds identity management directly into the customer’s device. Intel IPT can offer customers critical authentication capabilities that can be integrated as part of a comprehensive security solution.

 

It’s exciting to see how our technologies are helping financial services customers increase confidence that their cloud environments and devices are secure. In my next blog, I’ll talk about another important Intel® initiative: data center transformation. Intel® is helping customers transform their data centers through software-defined infrastructures, which are changing the way enterprises think about defining, building, and managing their data centers.

 

 

Mike Blalock

Global Sales Director

Financial Services Industry, Intel

 

This is the final installment of a seven-part series on Tech & Finance. Click here to read blog 1, blog 2, blog 3, blog 4, blog 5, and blog 6.

Read more >

Unlock Bio IT Puzzles with New Code Pipelines

The saying that “life sciences is like a puzzle” has never been more true than it is today. The life sciences are in the midst of a dramatic transformation as technology redefines what is possible for human health and healthcare. That’s why the upcoming Bio-IT World event in Boston, April 21-23, holds so much promise for moving the conversation forward and sharing knowledge that truly helps people.

 

As the show approaches, we’re excited to roll out a new resource for you that offers an optimized compendium of codes with benchmarks and replication recipes. When used on Intel®-based computing platforms, and in concert with other Intel® software tools and products, such as Intel® Solid-State Drives (Intel® SSDs), the optimized code can help you decipher data and accelerate the path to discovery.

 

Industry leaders and authors of key genomic codes have supported this new resource to ensure that genome processing runs as fast as possible on Intel®-based systems and clusters. The results have been significantly improved speeds for key genomic programs and the development of new hardware and system solutions that get genome sequencing and processing down to minutes instead of days.

 

Download codes

On the new resource page, you can currently download the following codes to run on Intel® Xeon® processors:

 

  • BWA
  • MPI-HMMER
  • BLASTn/BLASTp
  • GATK

 

If you’re looking for new tools to help handle growing molecular dynamics packages, which can span from hundreds to millions of particles, take advantage of these codes, which are compatible with both Intel® Xeon® processors and Intel® Xeon® Phi™ coprocessors and allow you to “reuse” rather than “recode”:

 

  • AMBER 14
  • GROMACS 5.0 RC1
  • NAMD
  • LAMMPS
  • Quantum ESPRESSO
  • NWChem


Solve the cube

Finally, because life sciences is like a puzzle, look for a little fun and games at Bio-IT World that will test your puzzle-solving skills and benefit charity.

 

If you’ll be at the show, be sure to grab a customized, genomic-themed Rubik’s Cube at the keynote session on Thursday, April 23, and join the fun trying to solve the puzzle after the speeches at our location on the show floor. Just by participating you will be eligible to win great prizes like a tablet, a Basis watch, or SMS headphones. Here’s a little Rubik’s Cube insight if you need help.

 

Plus, we’re giving away up to $10,000 to the Translational Genomics Research Institute (TGen) in a tweet campaign that you can support. Watch for more details.

 

What questions do you have? We’re looking forward to seeing you at Bio-IT World next month.

Read more >

How to Configure Oracle Redo on the Intel PCIe SSD DC P3700

Back in 2011, I made the statement, “I have put my Oracle redo logs or SQL Server transaction log on nothing but SSDs” (Improve Database Performance: Redo and Transaction Logs on Solid State Disks (SSDs)). In fact, since the release of the Intel® SSD X25-E series in 2008, it is fair to say I have never looked back. Even though those X25-Es have long since been retired, every new product has convinced me further that, from a performance perspective, a hard drive configuration simply cannot compete. This is not to say that there have not been new skills to learn, such as the configuration details explained in How to Configure Oracle Redo on SSD (Solid State Disks) with ASM. The Intel® SSD 910 series provided a definite step up from the X25-E for Oracle workloads (Comparing Performance of Oracle Redo on Solid State Disks (SSDs)) and proved that concerns about write peaks were unfounded (Should you put Oracle Database Redo on Solid State Disks (SSDs)). Now, with the PCIe*-based Intel® SSD DC P3600/P3700 series, we have the next step in the evolutionary development of SSDs for all types of Oracle workloads.

 

Additionally, we have updates in operating system and driver support, so a refresh of the previous posts on SSDs for Oracle is warranted to help you get the best out of the Intel SSD DC P3700 series for Oracle redo.

 

NVMe

 

One significant difference in the new SSDs is the change in interface and driver from AHCI and SATA to NVMe (Non-Volatile Memory Express). For an introduction to NVMe, see this video by James Myers; to understand the efficiency that NVMe brings, read this post by Christian Black. As James noted, high-performance, consistent, low-latency Oracle redo logging also needs high endurance, so the P3700 is the drive to use. With a new interface comes a new driver, which fortunately is included in the Linux kernel at the Oracle-supported Linux releases of Red Hat and Oracle Linux 6.5, 6.6, and 7.

I am using Oracle Linux 7.
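
Before going further, you can confirm that the kernel sees the drive through the new driver; a minimal sketch (device naming as used throughout this post):

[root@haswex1 ~]# ls -l /dev/nvme*
[root@haswex1 ~]# grep nvme /proc/partitions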


Booting my system with both a RAID array of Intel SSD DC S3700 series and Intel SSD DC P3700 series shows two new disk devices:


First the S3700 array using the previous interface


Disk /dev/sdb1: 2394.0 GB, 2393997574144 bytes, 4675776512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Second the new PCIe P3700 using NVMe

 

Disk /dev/nvme0n1: 800.2 GB, 800166076416 bytes, 1562824368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Changing the Sector Size to 4KB

 

As Oracle introduced support for 4KB sector sizes at release 11g R2, it is important to be at this release or later (I am using Oracle 12c) to take full advantage of SSDs for Oracle redo. However, ‘out of the box’, as shown, the P3700 presents a 512-byte sector size. We can use this ‘as is’ and set the Oracle parameter ‘disk_sector_size_override’ to true. With this we can then specify the blocksize to be 4KB when creating a redo log file. Oracle will then use 4KB redo log blocks and performance will not be compromised.
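
A minimal sketch of this first option (note: the exact parameter name is an assumption; on many releases it is the hidden parameter _disk_sector_size_override, so confirm the name for your version with Oracle Support before setting it):

SQL> alter system set "_disk_sector_size_override"=TRUE scope=spfile;
SQL> alter database add logfile size 32g blocksize 4096;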


As a second option, the P3700 offers a feature called ‘Variable Sector Size’. Because we know we need 4KB sectors, we can set up the P3700 to present a 4KB sector size instead. This can then be used transparently by Oracle without the need for additional parameters. It is important to do this before you have configured or started to use the drive for Oracle, as the operation destroys any existing data on the device.

 

To do this, first check that everything is up to date by using the Intel Solid State Drive Data Center Tool (available from https://downloadcenter.intel.com/download/23931/Intel-Solid-State-Drive-Data-Center-Tool). Be aware that after running the command it will be necessary to reboot the system to pick up the new configuration and use the device.


[root@haswex1 ~]# isdct show -intelssd
- IntelSSD Index 0 -
Bootloader: 8B1B012D
DevicePath: /dev/nvme0n1
DeviceStatus: Healthy
Firmware: 8DV10130
FirmwareUpdateAvailable: Firmware is up to date as of this tool release.
Index: 0
ProductFamily: Intel SSD DC P3700 Series
ModelNumber: INTEL SSDPEDMD800G4
SerialNumber: CVFT421500GT800CGN


Then run the following command to change the sector size. The parameter LBAFormat=3 sets the sector size to 4KB; LBAFormat=0 sets it back to 512 bytes.

 

[root@haswex1 ~]# isdct start -intelssd 0 Function=NVMeFormat LBAFormat=3 SecureEraseSetting=2 ProtectionInformation=0 MetaDataSetting=0
WARNING! You have selected to format the drive! 
Proceed with the format? (Y|N): Y
Running NVMe Format...
NVMe Format Successful.


A reboot is necessary because I am on Oracle Linux 7 with a UEK kernel at 3.8.13-35.3.1 and the NVMe device needs to be reset. At Linux kernels 3.10 and above, you can instead run the following command with the system online to do the reset.

 

echo 1 > /sys/class/misc/nvme0/device/reset


The disk should now present the 4KB sector size we want for Oracle redo.

 

Disk /dev/nvme0n1: 800.2 GB, 800166076416 bytes, 195353046 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Configuring the P3700 for ASM

 

For ASM (Automatic Storage Management) we need a disk with a single partition and, after giving the disk a GPT label, I use the following commands to create and check the use of an aligned partition.
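
If the disk has not yet been labeled, that can be done from within parted first; a minimal sketch (destructive to any existing partition table):

[root@haswex1 ~]# parted /dev/nvme0n1
(parted) mklabel gpt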

 

(parted) mkpart primary 2048s 100%                                        
(parted) print                                                            
Model: Unknown (unknown)
Disk /dev/nvme0n1: 195353046s
Sector size (logical/physical): 4096B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start  End         Size        File system  Name     Flags
1      2048s  195352831s  195350784s               primary

(parted) align-check optimal 1
1 aligned
(parted)  

     

I then use udev to set the device permissions. Note: the scsi_id command can be run independently to find the device ID to put in the file, and the udevadm command can be used to apply the rules; a sketch of both follows the rules file below. Rebooting the system is useful during configuration to ensure that the correct permissions are applied on boot.

 

[root@haswex1 ~]# cd /etc/udev/rules.d/
[root@haswex1 rules.d]# more 99-oracleasm.rules 
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="3600508e000000000c52195372b1d6008", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="nvme0n1p1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="365cd2e4080864356494e000000010000", OWNER="oracle", GROUP="dba", MODE="0660"


Successfully applied, the rules give the oracle user ownership of both the DC S3700 RAID array device and the P3700 presented by NVMe.

 

[root@haswex1 rules.d]# ls -l /dev/sdb1
brw-rw---- 1 oracle dba 8, 17 Mar  9 14:47 /dev/sdb1
[root@haswex1 rules.d]# ls -l /dev/nvme0n1p1 
brw-rw---- 1 oracle dba 259, 1 Mar  9 14:39 /dev/nvme0n1p1


Use ASMLIB to mark both disks for ASM.

 

[root@haswex1 rules.d]# oracleasm createdisk VOL2 /dev/nvme0n1p1
Writing disk header: done
Instantiating disk: done

[root@haswex1 rules.d]# oracleasm listdisks
VOL1
VOL2


As the Oracle user, use the ASMCA utility to create the ASM disk groups.
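
Equivalently, the disk groups can be created from SQL*Plus connected as sysasm; a minimal sketch, assuming external redundancy and the ASMLIB disk names created above:

SQL> create diskgroup DATA external redundancy disk 'ORCL:VOL1';
SQL> create diskgroup REDO external redundancy disk 'ORCL:VOL2';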

 


 

I now have 2 disk groups created under ASM.

 


 

Because of the way the disks were configured, Oracle has automatically detected and applied the sector size of 4KB.

 

[oracle@haswex1 ~]$ sqlplus sys/oracle as sysasm
SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 12 10:30:04 2015
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
SQL> select name, sector_size from v$asm_diskgroup;

NAME                     SECTOR_SIZE
------------------------------ -----------
REDO                          4096
DATA                          4096

 

 

SPFILES in 4K DISKGROUPS

 

In previous posts I noted Oracle bug “16870214 : DB STARTUP FAILS WITH ORA-17510 IF SPFILE IS IN 4K SECTOR SIZE DISKGROUP”, and even with Oracle 12.1.0.2 this bug is still with us. As both of my disk groups have a 4KB sector size, this will affect me if I try to create a database in either without having applied patch 16870214.


With this bug, upon creating a database with DBCA, you will see an ORA-17510 error.

 



The database is created and the spfile does exist, so it can be extracted as follows:

 

ASMCMD> cd PARAMETERFILE
ASMCMD> ls
spfile.282.873892817
ASMCMD> cp spfile.282.873892817 /home/oracle/testspfile
copying +DATA/TEST/PARAMETERFILE/spfile.282.873892817 -> /home/oracle/testspfile


This spfile is corrupt and attempts to reuse it will result in errors.

 

ORA-17510: Attempt to do i/o beyond file size
ORA-17512: Block Verification Failed


However, you can extract the parameters by using the strings command and create an external spfile, or an spfile in a diskgroup with a 512-byte sector size. Once complete, the Oracle instance can be started.
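
A minimal sketch of that extraction (the strings output usually needs a light manual tidy-up into valid parameter syntax before use):

[oracle@haswex1 ~]$ strings /home/oracle/testspfile > /home/oracle/testpfile
[oracle@haswex1 ~]$ vi /home/oracle/testpfile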

 

SQL> create spfile='/u01/app/oracle/product/12.1.0/dbhome_1/dbs/spfileTEST.ora' from pfile='/home/oracle/testpfile';
SQL> startup
ORACLE instance started


Creating Redo Logs under ASM


Viewing the same disks from within the Oracle instance shows that the underlying sector size has been passed right through to the database.

 

SQL> select name, SECTOR_SIZE BLOCK_SIZE from v$asm_diskgroup;

NAME                   BLOCK_SIZE
------------------------------ ----------
REDO                      4096
DATA                      4096


Now it is possible to create a redo log file with a command such as the following:

 

SQL> alter database add logfile '+REDO' size 32g;


…and Oracle will create a redo log automatically with an optimal blocksize of 4KB.

 

SQL> select v$log.group#, member, blocksize from v$log, v$logfile where v$log.group#=3 and v$logfile.group#=3;

GROUP#
----------
MEMBER
-----------
BLOCKSIZE
----------
       3
+REDO/HWEXDB1/ONLINELOG/group_3.256.874146809
      4096


Running an OLTP workload with Oracle Redo on Intel® SSD DC P3700 series


To put Oracle redo on the P3700 through its paces, I used a HammerDB workload. The redo is set with a standard production-type configuration, without the commit_write and commit_wait parameters. A test shows we are running almost 100,000 transactions per second, with redo generation above 500 MB/second; at that rate we would be archiving almost 2 TB per hour (500 MB/s × 3,600 s ≈ 1.8 TB).

 

                     Per Second       Per Transaction   Per Exec   Per Call
Redo size (bytes):   504,694,043.7    5,350.6


Log file sync, even at this level of throughput, is just above 1 ms:

 

Event            Waits        Total Wait Time (sec)   Wait Avg(ms)   % DB time   Wait Class
DB CPU                        35.4K                                  59.1
log file sync    19,927,449   23.2K                   1.16           38.7        Commit


…and the average log file parallel write shows an average disk response time of just 0.13 ms:

 

Event                     Waits       %Time -outs   Total Wait Time (s)   Avg wait (ms)   Waits /txn   % bg time
log file parallel write   3,359,023   0             442                   0.13            0.12         2237277.09


 

There are six log writers on this system. As in previous blog posts on SSDs, I observed the log activity to be heaviest on the first three, and therefore traced the log file parallel write activity on the first one with the following method:

 

SQL> oradebug setospid 67810;
Oracle pid: 18, Unix process pid: 67810, image: oracle@haswex1.example.com (LG00)
SQL> oradebug event 10046 trace name context forever level 8;
ORA-49100: Failed to process event statement [10046 trace name context forever level 8]
SQL> oradebug event 10046 trace name context forever, level 8;

(Note that the first oradebug event command fails with ORA-49100 because it omits the comma before “level”.) The trace file shows the following results for log file parallel write latency to the P3700:

 

Log Writer Worker   Over 1ms   Over 10ms   Over 20ms   Max Elapsed
LG00                1.04%      0.01%       0.00%       14.83ms

 

A scatter plot of all of the log file parallel write latencies, recorded in microseconds on the y-axis, clearly illustrates that any outliers are statistically insignificant and that none exceed 15 milliseconds. Most of the writes are sub-millisecond, on a system that is processing many millions of transactions a minute.

A subset of iostat data shows that the device is also far from full utilization.

 

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          77.30    0.00    8.07    0.24    0.00   14.39
Device:         wMB/s avgrq-sz avgqu-sz   await w_await  svctm  %util
nvme0n1        589.59    24.32     1.33    0.03    0.03   0.01  27.47

 

Conclusion


As a confirmed believer in SSDs, I have long been convinced that most experiences of poor Oracle redo performance on SSDs have been due to errors in configuration, such as sector size, block size, and/or alignment, as opposed to the performance of the underlying device itself. Following the configuration steps I have outlined here, the Intel SSD DC P3700 series shows itself to be an ideal candidate to take Oracle redo to the next level of performance without compromising endurance.

Read more >

Cloud – Putting the Cloud to work for Intel

In my second insight into the Intel IT Business Review, I am focusing on the impact of the cloud inside Intel.

 

The cloud is changing the business landscape, and here at Intel it has transformed the IT culture to align with the strategies of the business groups. Intel IT brings technical expertise and business acumen to bear on the highest-priority projects at Intel to accelerate business at a faster pace than ever before. Intel IT has simplified the way Intel’s business groups interact with IT to identify workflow and process improvements that IT can drive. Because they understand their businesses, they can tailor cloud hosting decisions to specific business priorities.

 

Our private cloud, with on-demand self-service, enables Intel business groups to innovate quickly and securely. In the annual Intel IT Business Review Intel IT reveals that 85 percent of all new services installed for our Office, Enterprise and Services divisions are hosted in the cloud.

 

Intel IT attributes the success of our private cloud to implementing a provider-like cloud hosting strategy, advancing self-service infrastructure as a service and platform as a service, and enabling cloud-aware applications. Intel’s private cloud saves about USD 7.5 million annually while supporting a 17 percent increase in operating system instances in the environment.

 

Cloud-aware applications can maximize cloud advantages such as self-service provisioning, elasticity, run-anywhere design, multi-tenancy, and design for failure. To enhance Intel developers’ skill sets, in 2013 Intel IT delivered 8 code-a-thons in 3 geographical regions, training over 100 Intel developers in how to build cloud-aware applications.

 

To increase our understanding of how hybrid clouds can benefit Intel, IT is also conducting a hybrid cloud proof of concept using open source OpenStack APIs. Hybrid cloud hosting can provide additional external capacity to augment our own private cloud while enabling us to optimize our internal capacity.

 

Hybrid cloud hosting also increases flexibility, allowing us to dynamically adjust capacity when needed to support business initiatives efficiently.

 

Intel IT has accelerated hosting decisions for its business customers by developing a methodical approach to determining the best hosting option. They consider security, control, cost, location, application requirements, capacity, and availability before arriving at a hosting decision for each use case. Offering optimized hosting solutions improves business agility and velocity while reducing costs.

 

For more go to www.intel.com/ITAnnualReport

 

Read more >

How to benchmark SSDs with FIO Visualizer

There are many software tools available for benchmarking SSDs today. Many of them are consumer oriented, with very nice-looking interfaces; others are command-line based and less approachable. I’m not going to criticize any of them in this blog; instead, I’ll share the approach we use in the Solution Architecture team at the Intel NVM Solutions Group.

 

There are two proven software tools for I/O benchmarking used there: Iometer (http://www.iometer.org) for Windows and FIO (http://freecode.com/projects/fio) for Linux. Both offer many advanced features for simulating different types of workloads. Unfortunately, FIO lacks a GUI; it is command-line only. An amazing feature set alone was simply not enough for a demo tool. That’s how the idea of FIO Visualizer (http://01.org/fio-visualizer) appeared; it was developed at Intel and released as open source.

 

What is FIO Visualizer? It’s a GUI for FIO. It parses console output in real time and displays visual details of IOPS, bandwidth, and latency for each device’s workload. The data is gathered from the FIO console output at assigned time intervals, and the graphs update immediately. It is especially valuable for benchmarking SSDs, particularly those based on the NVMe specification.

 

Let’s have a quick look at the interface features:


  • Real time. The minimum interval is 1 second, and it can be reduced further by a simple FIO source code change.
  • Monitors IOPS, bandwidth, and latency for reads and writes, with unique QoS analytics.
  • Multi-thread/multi-job support, which is valuable for NVMe SSD benchmarking.
  • A single GUI window, with no overlapping windows or complicated menus.
  • Customizable layout: the user defines which parameters need to be monitored.
  • Workload manager for FIO settings. Comes with the base workload settings used in all Intel SSD datasheets.
  • Written in Python with pyqtgraph; uses third-party libraries to simplify the GUI code.

 

FIO Visualizer GUI screen with an example of a running workload.

 

The graph screen is divided into two vertical blocks corresponding to read and write statistics, and into three horizontal segments displaying IOPS, bandwidth, and latency. Every graph supports auto-scaling in both dimensions, as well as individual zoom. Once zoomed, a graph can be returned to auto-scaling with a popup button. Certain graphs can be disabled, and the view of the control panel on the right can be changed.

 


This example demonstrates handling of multi-job workloads, which are executed by FIO in separate threads.

 

 

Running FIO Visualizer.

 

Having a GUI written in Python gives us great flexibility to make changes and adopt enhancements. However, the tool uses a few external Python libraries that are not part of a default installation, which results in some OS compatibility and dependency requirements.

 

Here are the exact steps to get it running under CentOS 7:

 

  0. You should have Python and PyQt installed with the OS.

 

  1. Install pyqtgraph-develop (0.9.9 required) from http://www.pyqtgraph.org

        $ python setup.py install

 

  2. Install Cython from http://cython.org. Version 0.21 or higher is required.

        $ python setup.py install

 

  3. Install Numpy from http://numpy.org

        $ python setup.py build

        $ python setup.py install

 

  4. Install FIO 2.1.14 (latest supported at the moment) from http://freecode.com/projects/fio

        # ./configure

        # make

        # make install

 

  5. Run Visualizer under root.

        # ./fio-visualizer.py

 

 

SSD Preconditioning.


Before running the benchmark you need to prepare the drive. This is usually called “SSD preconditioning”: bringing a “fresh” drive to its sustained performance state. Here are the basic steps to follow to get reliable results at the end:

 

  • Secure erase the SSD with vendor tools. For Intel® Data Center SSDs, this tool is called the Intel® Solid-State Drive Data Center Tool.
  • Fill the SSD with sequential data, twice its capacity. This guarantees that all available memory, including the factory-provisioned area, is filled with data. dd is the easiest way to do so (run it twice, since each pass stops at the end of the device):

          dd if=/dev/zero bs=1024k of=/dev/<devicename>

  • If you’re running a sequential workload to estimate read or write throughput, skip the next step.
  • Fill the drive with 4K random data. The same rule applies: the total amount of data is twice the drive’s capacity.

          Use FIO for this purpose. Here is an example script for an NVMe SSD:

      [global]
      name=4k random write 4 ios in the queue in 32 queues
      filename=/dev/nvme0n1
      ioengine=libaio
      direct=1
      bs=4k
      rw=randwrite
      iodepth=4
      numjobs=32
      size=100%
      loops=2

      [job1]

  • Now you’re ready to run your workload; see the example just below this list. Measurements usually start after 5 minutes of runtime, to let the SSD firmware adapt to the workload and settle the drive into its sustained performance state.
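
For instance, if the job file above is saved as precondition.fio (a hypothetical file name), the preconditioning pass can be started directly from the shell:

      # fio precondition.fio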

 


Workload manager.


The workload manager is a set of FIO settings grouped in files; it comes with the FIO Visualizer package. Each file represents a specific workload and can be loaded directly into the FIO Visualizer tool, which then starts the FIO job automatically.

Typical workload scenarios are included in the package. These are the basic datasheet workloads used for Intel® Data Center SSDs, plus some additional ones that simulate real use cases. The configuration files can be easily changed in any text editor. It’s a great starting point for benchmarking.

 


You will see that some workload definitions have a SATA prefix, while others come with NVMe. There are a few important reasons why they are separate. The AHCI and NVMe software stacks are very different: SATA drives utilize a single queue of at most 32 I/Os (AHCI), while NVMe drives were architected as massively parallel devices. According to the NVMe specification, these drives may support up to 64 thousand queues of 64 thousand commands each. In practice, that means certain workloads, such as small-block random ones, benefit from being executed in parallel. That’s the reason random workloads for NVMe drives use multiple FIO jobs at a time; check the “numjobs” setting.
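
To make the difference concrete, here is a hedged sketch of two standalone job definitions for the same 4K random-read measurement (device names, queue depths, and job counts are illustrative, not datasheet values):

      # sata-randread.fio: one AHCI queue, moderate depth
      [sata-4k-randread]
      filename=/dev/sdb
      ioengine=libaio
      direct=1
      bs=4k
      rw=randread
      iodepth=32
      numjobs=1

      # nvme-randread.fio: many parallel jobs to exploit NVMe queues
      [nvme-4k-randread]
      filename=/dev/nvme0n1
      ioengine=libaio
      direct=1
      bs=4k
      rw=randread
      iodepth=4
      numjobs=32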

 

To learn more about NVMe, please see these public IDF presentations, which explain all the details:

 

NVM Express*: Going Mainstream and What’s Next

 

Supercharge Your Data Transfers with NVM Express* based PCI Express* Solid-State Drives

Read more >

Rethinking Cybersecurity Strategy

Cybersecurity is a significant problem, and it continues to grow. Addressing symptoms will not achieve the desired results. A holistic approach must be applied, one that involves improving the entire technology ecosystem. Smarter security innovation, open collaboration, trustworthy practices, technology designed to be hardened against compromise, and comprehensive protections wherever data flows are all required.

The technology industry must change in order to meet ever-growing cybersecurity demands. It will not be easy, but technologists, security leaders, and end users must work together to make the future of computing safer.

 


 

I recently spoke at the CTO Forum Rethink Technology event on Feb 13, 2015, presenting to an audience of thought-leading CTOs and executives. I was privileged to speak on a panel including Marcus Sachs (VP National Security Policy, Verizon), Eran Feigenbaum (Director of Security for Google for Work, Google), Rob Fry (Senior Information Security Architect, Netflix), and Rick Howard (CSO, Palo Alto Networks). We discussed the challenges facing the cybersecurity sector and the steps required to help companies strengthen their security.

 

I focused on the cybersecurity reality we are in, how we have all contributed to the problem, and, consequently, how we must all work together to transform the high-technology industry to become sustainably secure.


The complete panel video is available at the CTO Forum website http://www.ctoforum.org/

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

Read more >

Tablets Improve Engagements, Workflows

 

Mobility is expected to be a hot topic once again at HIMSS 2015 in Chicago. Tablets like the Surface and Windows-based versions of electronic health records (EHRs) from companies such as Allscripts are helping clinicians provide better care and be more efficient with their daily workflows.

 

The above video shows how the Surface and Allscripts’ Wand application are helping one cardiologist improve patient engagement while allowing more appointments throughout the day.  You can read more in this blog.

 

Watch the video and let us know what questions you have. How are you leveraging mobile technology in your facility?

Read more >

The Intel Core Family and the Upcoming Windows 10

Many people are still asking about how Intel Core processors will work with Windows 10, but Microsoft guarantees that it is working together with Intel so that there is no loss of performance or stability in its Windows 10 operating system. The question that won’t go away: how is the adoption of Intel Core M processors in tablets progressing with the arrival of Microsoft’s Windows 10?

 

However much we speculate, we can still be certain that much more is yet to come.

Read more >