Recent Blog Posts

The case for building a large in-house IT department

In ITwiz magazine, we described an interesting case from Gaspol that illustrates the seven-year "cycle" of cooperation between a vendor and a client, from its beginning to the point where rates become unrealistic. This coincided with the dynamic growth of the company's software, including ERP systems and business applications. As a result, Gaspol redefined its IT strategy: instead of expanding its portfolio of external software development vendors, it chose to build up its own IT resources. Today that team numbers about 30 people.


The impetus for change at Gaspol was the rising cost of maintaining its ERP system, including the fees of external consultants. "On top of that, at Gaspol we are very creative when it comes to developing our business. We were the first company in Poland to become a multi-energy provider: alongside LPG, we added electricity, piped natural gas, liquefied natural gas (LNG), renewable energy, and hybrid systems to our offer. That is why our new brand is GASPOL ENERGY," recalls Michał Kozieł, director of the IT Department at Gaspol.


Limiting outsourcing to IT infrastructure

 

As the offer expanded, Gaspol began work on integrating its sales and marketing systems and on building applications that would make it easier for customers to manage the different utilities they can now buy from a single company. "While I can imagine renting servers or IT infrastructure from an external company, I preferred to keep the development of our system in our own hands, especially since we do not sell standard products. My intention was to adapt our processes to the way they are described in the ERP system, while retaining the ability to introduce changes flexibly," says Michał Kozieł.

 

Gaspol needed a system to handle retail sales of gas and electricity. It also needed the ability to issue a single, comprehensive invoice covering both electricity and natural gas. A system supporting the supplier-switching process for energy customers was implemented as well. Work is under way on a system for so-called scheduling, that is, forecasting energy consumption and planning purchases of other energy carriers, such as gas. Gaspol currently works with a small company that built a dedicated solution covering its basic needs for billing electricity and natural gas; the same company also maintains the system. "Our scan of the market for external vendors confirmed that the strongest know-how about how our business works is inside the organization. It is here, in the still-forming energy department, that the ideas for the future functionality of the target solution are born," explains Michał Kozieł.


On top of all this comes a kind of "cycle" of cooperation with the company implementing and maintaining Gaspol's IT systems. "It usually lasts about seven years, from the start of the cooperation to the moment when rates are raised to a level we can no longer accept. That forces us to look for a new vendor and start the whole process over again," he adds. "It also happened that an external vendor needed three times as long to complete a task as an employee in my department. Testing was hit or miss, too. Practice has shown that it is best to have your testers in-house as well."


Reasons for building a large in-house IT department

 

"As I was finishing the cost analysis for our new IT strategy, one of our partners demanded additional payment for two modifications required by changes in the law, on the grounds that the work had to follow the legislative calendar. Yet under the contract he was obliged to adapt our solution to new legislation. That incident only accelerated the final decision," recalls Michał Kozieł. As a result, Gaspol is building its own business application development department, to be completed by April 1 of this year. The department will ultimately number 32 people, of whom 20 consultants and developers will be responsible for maintaining and developing the Microsoft Dynamics AX system (formerly Axapta) and mobile applications. The company has adopted the Scrum methodology. "When we calculated the cost of maintaining the ERP system with internal staff, it came out about 40% lower than external vendors' rates. On top of that, modifications are delivered much faster and at higher quality," concludes Michał Kozieł.

 

As it turns out, companies today, especially those with distributed structures, increasingly choose to strengthen their own competencies, mainly in software and database development and maintenance, while outsourcing the infrastructure side. Some practitioners also believe that even if an in-house IT department, sensibly built, of course, is only 50% utilized, it can still work out many times cheaper overall than development, maintenance, and enhancement contracted out.


"Suppose a vendor and an internal IT department can each be in only one of two states: strong or weak. The sine wave, the mismatch, appears only when the internal IT department or the vendor is weak. In the first case, the IT department will not make use of the vendor in key projects and solutions; in the second, the vendor will come in with competitive prices at first, which for it is an El Dorado. The conclusion is simple: you need a strong IT department to negotiate anything sensible with a strong vendor," others comment.

 

As for the cycle of cooperation, the first contract is always fairly well optimized on cost, because we put enough effort into it and companies genuinely fight to win the deal. The problem lies in the subsequent contracts, which very often are just a plain extension of the current one. The result is a cost sine wave, and that is what drives the growth and/or shrinking of in-house IT departments.

Read more >

How Intel Xeon Processors Are Securing Tomorrow’s Food Supply

One of the challenges facing the world today is securing the food supply for everyone in the face of rising populations and environmental change. An important part of the solution is to analyze the DNA of crops in order to better understand what makes crops resistant to pests, drought, and other environmental stresses. That’s a huge challenge. The wheat genome is at least five times larger than the human genome and contains many repeated sequences. Bread wheat also has three distinct ancestral subgenomes, so trying to sequence and assemble the bread wheat genome is as difficult as sequencing and trying to interpret the genomes of a human, chimpanzee, and gorilla at the same time.

Genome Sequencing Powered by Intel Processors

 

The Genome Analysis Centre (TGAC) has taken on that challenge, using one of the largest SGI UV 2000 HPC systems in the U.K., which is powered by the Intel Xeon processor E5-4650L product family.

 

Laboratory data comes from high-throughput sequencers that analyze physical DNA samples. After this primary analysis, the data is interpreted to read out the sequence of letters representing each strand of DNA. It then goes through quality control, assembly, and annotation, after which TGAC’s scientists can start to interpret it in order to understand each part of the genome.

 

Richard Leggett, project leader for quality control and primary analysis at TGAC, explains the assembly process: “We think of a ‘genome’ as a string of millions or billions of letters that represent four basic biological compounds — the wheat genome, for example, is represented by a string of 17 billion characters. But the most common DNA sequencing machines can only ‘read’ around 100 to 300 letters of DNA at a time, so when we sequence a genome we have to split it up into lots of smaller chunks. Assembly is the process of putting them back together again; unfortunately, there is no way to know where in the genome each sequenced chunk comes from. It’s a bit like taking 30 copies of a novel, cutting up all the words, putting them together in a big pile and then trying to re-create the novel. It requires a lot of computing power.”
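To make the reassembly idea concrete, here is a deliberately tiny Python sketch of greedy overlap assembly. It is a toy for intuition only, not the pipeline TGAC uses; the sample reads and the minimum-overlap threshold are assumptions for the example.

def overlap(a, b, min_len=3):
    # Length of the longest suffix of a that matches a prefix of b.
    start = 0
    while True:
        start = a.find(b[:min_len], start)
        if start == -1:
            return 0
        if b.startswith(a[start:]):
            return len(a) - start
        start += 1

def greedy_assemble(reads):
    # Repeatedly merge the pair of reads with the longest overlap.
    reads = list(reads)
    while len(reads) > 1:
        best_len, best_pair = 0, None
        for a in reads:
            for b in reads:
                if a is not b:
                    olen = overlap(a, b)
                    if olen > best_len:
                        best_len, best_pair = olen, (a, b)
        if best_pair is None:
            break  # no overlaps left; remaining reads stay as separate contigs
        a, b = best_pair
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[best_len:])
    return reads

# Three short 'reads' cut from the sequence ATGGCGTGCA
print(greedy_assemble(["ATGGCG", "GCGTGC", "TGCA"]))  # ['ATGGCGTGCA']

Real assemblers work at a vastly larger scale with far more sophisticated graph-based algorithms, which is exactly why the computing power described below matters.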

Xeon Proves Vital to Sequencing

 

Lab operations can process between 2 and 4 TB of data per week, approximately 2 TB of which then needs to be stored. The SGI UV 2000 is a large shared-memory platform which, in combination with the Intel Xeon processor E5-4650L product family, makes up to 4,096 cores and 64 TB of coherent main memory available for in-memory computing in a single system image. TGAC is using 2,560 cores, 20 TB of coherent main memory, and 64 TB of RAM.

 

TGAC’s scientists have now sequenced and assembled 17 of the 21 chromosomes of the wheat genome. Researchers estimate that the full wheat genome sequence will be available within three years. It will help take the guesswork out of breeding new crops and give farmers rapid insight into which crops can resist local pathogens, so crop failures can be avoided.

 

Watch the video below to find out more.

 

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

 

Jane Williams

Online Sales Development Manager

Intel Corporation

Read more >

Insights from NRF: Top Trends from Retail’s BIG Show 2015

Once the dust settles from Black Friday doorbusters and end-of-year clearance sales, retailers — and the tech vendors that work with them — gather in New York City for the BIG Show, the National Retail Federation’s (NRF) Annual Convention and EXPO.

 


Every year, the NRF EXPO offers intriguing glimpses into where the future of retail technology is heading, and this year was no exception. For 2015, I noted several big trends with the potential to revolutionize how retailers engage with and delight their customers. They included the following:

 

Endless Aisles

 

Consumer research reveals that even though tech-savvy shoppers do a significant amount of their buying online, they still love the in-store shopping experience. Intelligent endless aisle solutions let retailers offer the best of both worlds: self-service kiosks that expand inventory selection to include not only products in the store, but also products in other retail locations. Advances in design for the classic PC have ignited innovation within retail solutions. In addition to the traditional tower design, new PC categories, including All-in-One and Mini PCs, are enabling manageable and engaging virtual merchandising solutions. These tap easily into a store’s ecommerce systems to give customers a convenient way to explore a virtually unlimited array of additional products, sizes, colors, and options, and to arrange fast home delivery.

 

4K Signage

 

Store signage has taken a huge leap forward with 4K ultra-high-definition (UHD) displays. With four times the resolution of 1080p HD (3840 x 2160 pixels versus 1920 x 1080), 4K displays are not only capable of blowing customers’ minds with stunning detail and color, but, as part of a digital signage solution, they also allow retailers to offer customers richer shopping experiences with dynamic, personalized promotions and immersive, interactive displays throughout their stores.

 

3D Meets VR

 

Another recent breakthrough pushing retail solutions into the next dimension is the ability to capture 3D images. Intel RealSense 3D cameras make it increasingly easy to scan anything from auto parts to xylophones in highly detailed 3D, and give customers a more complete and appealing view of products.

 

Meanwhile, augmented reality solutions such as MemoryMirror enable Neiman Marcus and other clothing retailers to offer large screen digital fitting rooms that delight their customers with virtual try-ons, 360-degree views, and the ability to remember and share outfits.

 

A common thread linking all of these emerging retail solutions is how they utilize the performance and versatility of today’s Tower, Mini, and All-in-One PCs to blur the lines between online and in-store to offer customers consistently outstanding experiences everywhere.

 

Want more trends and highlights from retail’s BIG Show? Visit Intel’s NRF 2015 page.

 

To continue this conversation on Twitter, please use #IntelDesktop.

Read more >

Intel Atom x7 Processor Powers Microsoft’s Thinnest, Lightest Portable Device – the Surface 3

We’re thrilled that Microsoft today announced its newest addition to the Surface family, the Surface 3, powered by the recently announced Intel® Atom™ x7 processor, the highest-performing Intel Atom processor currently available. Surface 3 powered by the Intel Atom … Read more >

The post Intel Atom x7 Processor Powers Microsoft’s Thinnest, Lightest Portable Device – the Surface 3 appeared first on Technology@Intel.

Read more >

Analytics – Delivering insights worth millions

In my last insight into the Intel IT Business Review, I am looking at the impact of one of the BIGGEST trends in business IT: Big Data, or, as I prefer to call it, Analytics.

 

In an age when organizations such as Intel are rich in data, finding value in this data lies in the ability to analyze it and derive actionable business intelligence (BI). Intel IT continues to invest in tools that can transform data into insights to solve high-value business problems. We have seen significant BI results from our investments in a number of areas.

 

For example, Intel IT has developed a recommendation engine that helps Intel sales teams strategically focus their efforts to deliver greater revenue. The engine uses predictive algorithms and real-time data analysis to prioritize sales engagements with the resellers that show the greatest potential for high-volume sales. We saw USD 76.2 million in revenue uplift for 2014 through the use of this capability.
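As a purely illustrative sketch (Intel has not published the engine's internals), reseller prioritization of this kind can be thought of as training a regression model on historical account features and ranking accounts by predicted sales. The feature names, model choice, and data below are invented for the example.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Hypothetical historical features: past volume, growth rate, engagement score
X_hist = rng.random((500, 3))
y_hist = X_hist @ np.array([0.6, 0.3, 0.1]) + rng.normal(0, 0.05, 500)

model = GradientBoostingRegressor().fit(X_hist, y_hist)

# Score current reseller accounts and rank them for the sales team
resellers = {"A": [0.9, 0.8, 0.7], "B": [0.2, 0.1, 0.3], "C": [0.7, 0.9, 0.2]}
scores = {name: model.predict([feats])[0] for name, feats in resellers.items()}
for name in sorted(scores, key=scores.get, reverse=True):
    print(name, round(scores[name], 3))  # work the top of the list first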

 

Integrating multiple data sources has enabled Intel to use its decision support system to significantly impact revenue and margins by optimizing supply, demand, and pricing decisions. This work resulted in revenue optimization of USD 264 million for 2014.

 

And the big data platform for web analytics is yielding insights that enable more focused and effective marketing campaigns, which, in turn, increase customer engagement and sales.

 

The exploration and implementation of Assembly Test Manufacturing (ATM) cost-reduction initiatives involves complex algorithms and strong computational capability, because of the high volume and velocity of data that must be processed quickly. The ATM data sets, which can contain billions of rows, cannot be processed effectively on traditional SQL platforms. To address this gap, Intel IT has implemented a reusable big data analytics correlation engine that will support various high-value projects. The estimated value of the first of these, a pilot project for one of Intel’s future processors, is greater than USD 13 million.

 

Intel IT is exploring additional use cases for data collection and analytics across Intel’s manufacturing, supply chain, marketing, and other operations to improve Intel’s operational efficiency, market reach, and business results. In 2014 alone, Intel IT’s use of BI and analytics tools increased Intel revenue by USD 351 million.

 

To read the Intel IT Business Review in full, go to www.intel.com/ITAnnualReport

Read more >

Creating Confidence in the Cloud

In every industry, we continue to see a transition to the cloud. It’s easy to see why: the cloud gives companies a way to deliver their services quickly and efficiently, in a very agile and cost-effective way.

 

Financial services is a good example of where the cloud is powering digital transformation. We’re seeing more and more financial enterprises moving their infrastructure, platforms, and software to the cloud to quickly deploy new services and new ways of interacting with customers.

 

But what about security? In financial services, where security breaches are a constant threat, organizations must focus on security and data protection above all other cloud requirements.

 

This is an area Intel is highly committed to, and we offer solutions and capabilities designed to help customers maintain data security, privacy, and governance, regardless of whether they’re utilizing public, private, or hybrid clouds.

 

Here’s a brief overview of specific Intel® solutions that help enhance security in cloud environments in three critical areas:

  • Enhancing data protection efficiency. Intel® AES-NI is a set of processor instructions that accelerates encryption based on the widely used Advanced Encryption Standard (AES) algorithm. These instructions enable fast and secure data encryption and decryption, removing the performance barrier to more extensive use of this vital data protection mechanism. With the performance penalty reduced, cloud providers are starting to embrace AES-NI to promote the use of encryption (see the sketch after this list).
  • Enhancing data protection strength. Intel® Data Protection Technology with AES-NI and Secure Key is the foundation for cryptography without sacrificing performance. These solutions can enable faster, higher quality cryptographic keys and certificates than pseudo-random, software-based approaches in a manner better suited to shared, virtual environments.
  • Protecting the systems used in the cloud or compute infrastructure. Intel® Trusted Execution Technology (Intel® TXT) is a set of hardware extensions to Intel® processors and chipsets with security capabilities such as measured launch and protected execution. Intel TXT provides a hardware-enforced, tamper-resistant mechanism to evaluate critical, low-level system firmware and OS/hypervisor components from power-on. With this, malicious or inadvertent code changes can be detected, helping assure the integrity of the underlying machine that your data resides on. At the end of the day, if the platform can’t be proven secure, the data on it can’t really be considered secure.
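To illustrate the first point: AES-NI is transparent to application code. A standard AES call through a library such as Python’s cryptography package (which uses OpenSSL underneath) is dispatched to the hardware instructions automatically on CPUs that support them. This is a minimal sketch, not Intel sample code; the key handling, message, and associated data are placeholders.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit AES key
nonce = os.urandom(12)                     # 96-bit nonce, unique per message
aesgcm = AESGCM(key)

# Encrypt and authenticate; OpenSSL routes this through AES-NI when available
ciphertext = aesgcm.encrypt(nonce, b"account ledger row", b"header")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"header")
assert plaintext == b"account ledger row"

The application code is identical with or without AES-NI; the difference shows up purely as throughput.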

 

Financial services customers worldwide are using these solutions to add security at both the platform and data level in public, private, and hybrid cloud deployments.

 

Putting It into Practice with our Partners

 

At Intel®, we are actively engaged with our global partners to put these security-focused solutions into practice. One of the more high-profile examples is our work with IBM. IBM is using Intel TXT to deliver a secure, compliant, and trusted global cloud for SoftLayer, its managed hosting and cloud computing provider. When IBM SoftLayer customers order cloud services on the IBM website, Intel TXT creates an extra layer of trust and control at the platform level. We are also working with IBM to offer Intel TXT-enhanced secure processing solutions including VMware/Hytrust, SAP, and the IBM Cloud OpenStack Services.

 

In addition, Amazon Web Services (AWS), a major player in financial services, uses Intel AES-NI for additional protection on its Elastic Compute Cloud (EC2) web service instances. Using this technology, AWS can speed up encryption and avoid software-based vulnerabilities, because encryption and decryption are executed so efficiently in hardware.

 

End-to-End Security

 

Intel security technologies are not only meant to help customers in the cloud. They are designed to work as end-to-end solutions that offer protection — from the client to the cloud. In my previous blog, for example, I talked about Intel® Identity Protection Technology (Intel® IPT), a hardware-based identity technology that embeds identity management directly into the customer’s device. Intel IPT can offer customers critical authentication capabilities that can be integrated as part of a comprehensive security solution.

 

It’s exciting to see how our technologies are helping financial services customers increase confidence that their cloud environments and devices are secure. In my next blog, I’ll talk about another important Intel® initiative: data center transformation. Intel® is helping customers transform their data centers through software-defined infrastructures, which are changing the way enterprises think about defining, building, and managing their data centers.

 

 

Mike Blalock

Global Sales Director

Financial Services Industry, Intel

 

This is the final installment of a seven-part series on Tech & Finance. Click here to read blog 1, blog 2, blog 3, blog 4, blog 5, and blog 6.

Read more >

Unlock Bio IT Puzzles with New Code Pipelines

The saying that “life sciences is like a puzzle” has never been more true than it is today. The life sciences are in the midst of a dramatic transformation as technology redefines what is possible for human health and healthcare. That’s why the upcoming Bio-IT World event in Boston, April 21-23, holds so much promise for moving the conversation forward and sharing knowledge that truly helps people.

 

As the show approaches, we’re excited to roll out a new resource for you that offers an optimized compendium of codes with benchmarks and replication recipes. When used on Intel®-based computing platforms, and in concert with other Intel® software tools and products, such as Intel® Solid-State Drives (Intel® SSDs), the optimized code can help you decipher data and accelerate the path to discovery.

 

Industry leaders and the authors of key genomic codes have supported this new resource to ensure that genome processing runs as fast as possible on Intel®-based systems and clusters. The results have been significantly faster key genomic programs and new hardware and system solutions that bring genome sequencing and processing down to minutes instead of days.

 

Download codes

On the new resource page, you can currently download the following codes to run on Intel® Xeon® processors:

 

  • BWA
  • MPI-HMMER
  • BLASTn/BLASTp
  • GATK

 

If you’re looking for new tools to help handle growing molecular dynamics packages, which can span from hundreds to millions of particles, take advantage of these codes that are compatible with both Intel® Xeon® processors and Intel® Xeon® Phi™ coprocessors and allow you to “reuse” rather than “recode”:

 

  • AMBER 14
  • GROMACS 5.0 RC1
  • NAMD
  • LAMMPS
  • Quantum ESPRESSO
  • NWChem


Solve the cube

Finally, because life sciences is like a puzzle, look for a little fun and games at Bio-IT World that will test your puzzle solving skills and benefit charity.

 

If you’ll be at the show, be sure to grab a customized, genomic-themed Rubik’s Cube at the keynote session on Thursday, April 23, and join the fun trying to solve the puzzle after the speeches at our location on the show floor. Just by participating you will be eligible to win great prizes like a tablet, a Basis watch, or SMS headphones. Here’s a little Rubik’s Cube insight if you need help.

 

Plus, we’re giving away up to $10,000 to the Translational Genomics Research Institute (TGEN) in a tweet campaign that you can support. Watch for more details.

 

What questions do you have? We’re looking forward to seeing you at Bio-IT World next month.

Read more >

How to Configure Oracle Redo on the Intel PCIe SSD DC P3700

Back in 2011, I made the statement, “I have put my Oracle redo logs or SQL Server transaction log on nothing but SSDs” (Improve Database Performance: Redo and Transaction Logs on Solid State Disks (SSDs)). In fact, since the release of the Intel® SSD X25-E series in 2008, it is fair to say I have never looked back. Even though those X25-Es have long since been retired, every new product has convinced me further that, from a performance perspective, a hard drive configuration simply cannot compete. This is not to say that there have not been new skills to learn, such as the configuration details explained here (How to Configure Oracle Redo on SSD (Solid State Disks) with ASM). The Intel® SSD 910 series provided a definite step up from the X25-E for Oracle workloads (Comparing Performance of Oracle Redo on Solid State Disks (SSDs)) and proved that concerns about write peaks were unfounded (Should you put Oracle Database Redo on Solid State Disks (SSDs)). Now, with the PCIe*-based Intel® SSD DC P3600/P3700 series, we have the next step in the evolutionary development of SSDs for all types of Oracle workloads.

 

Additionally, we have updates in operating system and driver support, so a refresh of the previous posts on SSDs for Oracle is warranted to help you get the best out of the Intel SSD DC P3700 series for Oracle redo.

 

NVMe

 

One significant difference in the new SSDs is the change in interface and driver from AHCI and SATA to NVMe (Non-Volatile Memory Express). For an introduction to NVMe, see this video by James Myers, and to understand the efficiency that NVMe brings, read this post by Christian Black. As James noted, high-performance, consistent, low-latency Oracle redo logging also needs high endurance, so the P3700 is the drive to use. With a new interface comes a new driver, which fortunately is included in the Linux kernel at the Oracle-supported Linux releases of Red Hat and Oracle Linux 6.5, 6.6, and 7.

I am using Oracle Linux 7.


Booting my system with both a RAID array of Intel SSD DC S3700 series and Intel SSD DC P3700 series shows two new disk devices:


First the S3700 array using the previous interface


Disk /dev/sdb1: 2394.0 GB, 2393997574144 bytes, 4675776512 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Second the new PCIe P3700 using NVMe

 

Disk /dev/nvme0n1: 800.2 GB, 800166076416 bytes, 1562824368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Changing the Sector Size to 4KB

 

As Oracle introduced support for 4KB sector sizes in release 11g R2, it is important to be at a minimum of this release (I am using Oracle 12c) to take full advantage of SSDs for Oracle redo. However, ‘out of the box’, as shown, the P3700 presents a 512-byte sector size. We can use this ‘as is’ and set the Oracle parameter ‘disk_sector_size_override’ to true; with this we can then specify a 4KB blocksize when creating a redo log file. Oracle will then use 4KB redo log blocks and performance will not be compromised.
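As a sketch of that first option (the diskgroup name ‘+REDO’ anticipates the one created later in this post, and on these releases the parameter appears as the hidden parameter “_disk_sector_size_override”):

SQL> alter system set "_disk_sector_size_override" = true scope=spfile;
SQL> -- after an instance restart, request the 4KB blocksize explicitly:
SQL> alter database add logfile '+REDO' size 32g blocksize 4096;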


As a second option, the P3700 offers a feature called ‘Variable Sector Size’. Because we know we need 4KB sectors, we can set up the P3700 to present a 4KB sector size instead. This can then be used transparently by Oracle without additional parameters. It is important to do this before you configure or start to use the drive for Oracle, as the operation destroys any existing data on the device.

 

To do this, first check that everything is up to date by using the Intel Solid State Drive Data Center Tool from https://downloadcenter.intel.com/download/23931/Intel-Solid-State-Drive-Data-Center-Tool. Be aware that after running the command it will be necessary to reboot the system to pick up the new configuration and use the device.


[root@haswex1 ~]# isdct show -intelssd
- IntelSSD Index 0 -
Bootloader: 8B1B012D
DevicePath: /dev/nvme0n1
DeviceStatus: Healthy
Firmware: 8DV10130
FirmwareUpdateAvailable: Firmware is up to date as of this tool release.
Index: 0
ProductFamily: Intel SSD DC P3700 Series
ModelNumber: INTEL SSDPEDMD800G4
SerialNumber: CVFT421500GT800CGN


Then run the following command to change the sector size. The parameter LBAFormat=3 sets it to 4KB and LBAFormat=0 sets it back to 512b.

 

[root@haswex1 ~]# isdct start -intelssd 0 Function=NVMeFormat LBAFormat=3 SecureEraseSetting=2 ProtectionInformation=0 MetaDataSetting=0
WARNING! You have selected to format the drive! 
Proceed with the format? (Y|N): Y
Running NVMe Format...
NVMe Format Successful.


A reboot is necessary because I am on Oracle Linux 7 with a UEK kernel at 3.8.13-35.3.1, and the NVMe device needs to be reset. On Linux kernels 3.10 and above, you can instead run the following command with the system online to do the reset.

 

echo 1 > /sys/class/misc/nvme0/device/reset


The disk should now present the 4KB sector size we want for Oracle redo.

 

Disk /dev/nvme0n1: 800.2 GB, 800166076416 bytes, 195353046 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Configuring the P3700 for ASM

 

For ASM (Automatic Storage Management) we need a disk with a single partition. After giving the disk a GPT label, I use the following commands to create a partition and check that it is aligned.

 

(parted) mkpart primary 2048s 100%                                        
(parted) print                                                            
Model: Unknown (unknown)
Disk /dev/nvme0n1: 195353046s
Sector size (logical/physical): 4096B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start  End         Size        File system  Name     Flags
1      2048s  195352831s  195350784s               primary

(parted) align-check optimal 1
1 aligned
(parted)  

     

I then use udev to set the device permissions. Note: the scsi_id command can be run independently to find the device ID to put in the file, and the udevadm command can be used to apply the rules (see the commands after the rules file below). Rebooting the system during configuration is useful to ensure that the correct permissions are applied on boot.

 

[root@haswex1 ~]# cd /etc/udev/rules.d/
[root@haswex1 rules.d]# more 99-oracleasm.rules 
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="3600508e000000000c52195372b1d6008", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="nvme0n1p1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="365cd2e4080864356494e000000010000", OWNER="oracle", GROUP="dba", MODE="0660"


With the rules successfully applied, the oracle user now has ownership of both the DC S3700 RAID array device and the P3700 device presented by NVMe.

 

[root@haswex1 rules.d]# ls -l /dev/sdb1
brw-rw---- 1 oracle dba 8, 17 Mar  9 14:47 /dev/sdb1
[root@haswex1 rules.d]# ls -l /dev/nvme0n1p1 
brw-rw---- 1 oracle dba 259, 1 Mar  9 14:39 /dev/nvme0n1p1


Use ASMLIB to mark both disks for ASM.

 

[root@haswex1 rules.d]# oracleasm createdisk VOL2 /dev/nvme0n1p1
Writing disk header: done
Instantiating disk: done

[root@haswex1 rules.d]# oracleasm listdisks
VOL1
VOL2


As the Oracle user, use the ASMCA utility to create the ASM disk groups.

 


 

I now have 2 disk groups created under ASM.

 


 

Because of the way the disks were configured, Oracle has automatically detected and applied the 4KB sector size.

 

[oracle@haswex1 ~]$ sqlplus sys/oracle as sysasm
SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 12 10:30:04 2015
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
SQL> select name, sector_size from v$asm_diskgroup;

NAME                     SECTOR_SIZE
------------------------------ -----------
REDO                          4096
DATA                          4096

 

 

SPFILEs in 4KB Diskgroups

 

In previous posts I noted Oracle bug “16870214 : DB STARTUP FAILS WITH ORA-17510 IF SPFILE IS IN 4K SECTOR SIZE DISKGROUP”, and even with Oracle 12.1.0.2 this bug is still with us. As both of my diskgroups have a 4KB sector size, it will affect me if I try to create a database in either of them without having applied patch 16870214.


With this bug, upon creating a database with DBCA you will see the following error.

 

[Screenshot: DBCA error message]


The database is created and the spfile does exist, so it can be extracted as follows:

 

ASMCMD> cd PARAMETERFILE
ASMCMD> ls
spfile.282.873892817
ASMCMD> cp spfile.282.873892817 /home/oracle/testspfile
copying +DATA/TEST/PARAMETERFILE/spfile.282.873892817 -> /home/oracle/testspfile


This spfile is corrupt, and attempts to reuse it result in the following errors:

 

ORA-17510: Attempt to do i/o beyond file size
ORA-17512: Block Verification Failed


However, you can extract the parameters by using the strings command and create an external spfile, or an spfile in a diskgroup with a 512-byte sector size. Once complete, the Oracle instance can be started.
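That extraction is a one-liner (file names follow the example above; any parameter lines that strings splits should be rejoined by hand):

[oracle@haswex1 ~]$ strings /home/oracle/testspfile > /home/oracle/testpfile

The cleaned-up pfile can then be used to create a usable spfile: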

 

SQL> create spfile='/u01/app/oracle/product/12.1.0/dbhome_1/dbs/spfileTEST.ora' from pfile='/home/oracle/testpfile';
SQL> startup
ORACLE instance started


Creating Redo Logs under ASM


Viewing the same diskgroups from within the Oracle instance shows that the underlying sector size has been passed right through to the database.

 

SQL> select name, SECTOR_SIZE BLOCK_SIZE from v$asm_diskgroup;

NAME                   BLOCK_SIZE
------------------------------ ----------
REDO                      4096
DATA                      4096


Now it is possible to create a redo log file with a command such as the following:

 

SQL> alter database add logfile '+REDO' size 32g;


…and Oracle will create a redo log automatically with an optimal blocksize of 4KB.

 

SQL> select v$log.group#, member, blocksize from v$log, v$logfile where v$log.group#=3 and v$logfile.group#=3;

GROUP#
----------
MEMBER
-----------
BLOCKSIZE
----------
       3
+REDO/HWEXDB1/ONLINELOG/group_3.256.874146809
      4096


Running an OLTP workload with Oracle Redo on Intel® SSD DC P3700 series


To put Oracle redo on the P3700 through its paces, I used a HammerDB workload. The redo is set up with a standard production-type configuration, without the commit_write and commit_wait parameters. A test shows we are running almost 100,000 transactions per second with redo over 500 MB/second, meaning we would be archiving almost 2 TB per hour.

 

                    Per Second     Per Transaction  Per Exec  Per Call
Redo size (bytes):  504,694,043.7  5,350.6


Log file sync, even at this level of throughput, is just above 1 ms:

 

Event          Waits       Total Wait Time (sec)  Wait Avg(ms)  % DB time  Wait Class
DB CPU                     35.4K                                59.1
log file sync  19,927,449  23.2K                  1.16          38.7       Commit


…and log file parallel write shows an average disk response time of just 0.13 ms:

 

Event                    Waits      %Time -outs  Total Wait Time (s)  Avg wait (ms)  Waits /txn  % bg time
log file parallel write  3,359,023  0            442                  0.13           0.12        2237277.09


 

There are six log writers on this system. As with previous blog posts on SSDs, I observed the log activity to be heaviest on the first three, and therefore traced the log file parallel write activity on the first one with the following method:

 

SQL> oradebug setospid 67810;
Oracle pid: 18, Unix process pid: 67810, image: oracle@haswex1.example.com (LG00)
SQL> oradebug event 10046 trace name context forever level 8;
ORA-49100: Failed to process event statement [10046 trace name context forever level 8]
SQL> oradebug event 10046 trace name context forever, level 8;
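When the test is complete, the same oradebug session can switch the trace off again (a sketch following the event syntax above):

SQL> oradebug event 10046 trace name context off;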

The trace file shows the following results for log file parallel write latency to the P3700.

 

Log Writer Worker  Over 1ms  Over 10ms  Over 20ms  Max Elapsed
LG00               1.04%     0.01%      0.00%      14.83ms

 

Looking at a scatter plot of all the log file parallel write latencies, recorded in microseconds on the y-axis, clearly illustrates that the outliers are statistically insignificant and none exceeds 15 milliseconds. Most of the writes are sub-millisecond, on a system that is processing many millions of transactions a minute.


A subset of iostat data shows that the device is also far from full utilization.

 

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          77.30    0.00    8.07    0.24    0.00   14.39
Device:         wMB/s avgrq-sz avgqu-sz   await w_await  svctm  %util
nvme0n1        589.59    24.32     1.33    0.03    0.03   0.01  27.47

 

Conclusion


As a confirmed believer in SSDs, I have long been convinced that most experiences of poor Oracle redo performance on SSDs have been due to errors in configuration, such as sector size, block size, and/or alignment, as opposed to the performance of the underlying device itself. Following the configuration steps I have outlined here, the Intel SSD DC P3700 series shows itself to be an ideal candidate to take Oracle redo to the next level of performance without compromising endurance.

Read more >

Cloud – Putting the Cloud to work for Intel

In my second insight into the Intel IT Business Review, I am focusing on the impact of the cloud inside Intel.

 

The cloud is changing the business landscape, and here at Intel it has transformed the IT culture to align with the strategies of the business groups. Intel IT brings technical expertise and business acumen to bear on the highest-priority projects at Intel to accelerate business at a faster pace than ever before. Intel IT has simplified the way Intel’s business groups interact with IT to identify workflow and process improvements that IT can drive. Because they understand their businesses, they can tailor cloud hosting decisions to specific business priorities.

 

Our private cloud, with on-demand self-service, enables Intel business groups to innovate quickly and securely. In the annual Intel IT Business Review Intel IT reveals that 85 percent of all new services installed for our Office, Enterprise and Services divisions are hosted in the cloud.

 

Intel IT attributes the success of our private cloud to implementing a provider-like cloud hosting strategy, advancing self-service infrastructure as a service and platform as a service, and enabling cloud-aware applications. Intel’s private cloud saves about USD 7.5 million annually while supporting a 17 percent increase in operating system instances in the environment.

 

Cloud-aware applications can maximize cloud advantages such as self-service provisioning, elasticity, run-anywhere design, multi-tenancy, and design for failure. To enhance Intel developers’ skill sets, in 2013 Intel IT delivered 8 code-a-thons in 3 geographical regions, training over 100 Intel developers in how to build cloud-aware applications.

 

To increase our understanding of how hybrid clouds can benefit Intel, Intel IT is also conducting a hybrid cloud proof of concept using open-source OpenStack APIs. Hybrid cloud hosting can provide additional external capacity to augment our own private cloud while enabling us to optimize our internal capacity.

 

Hybrid cloud hosting also increases flexibility, allowing us to dynamically adjust capacity when needed to support business initiatives efficiently.

 

Intel IT has accelerated hosting decisions for its business customers by developing a methodical approach to determining the best hosting option. It considers security, control, cost, location, application requirements, capacity, and availability before arriving at a hosting decision for each use case. Offering optimized hosting solutions improves business agility and velocity while reducing costs.

 

For more go to www.intel.com/ITAnnualReport

 

Read more >