ADVISOR DETAILS

RECENT BLOG POSTS

Citrix XenClient XT and Intel vPro Combine for Mission Critical Security

A recent article in Government Computer News (GCN) highlights a “groundbreaking” solution known as SecureView. In the past, to meet security requirements, government agencies have had to deploy multiple computer systems on an analyst’s desk, each connected to a different network with different security requirements. SecureView vastly simplifies this scenario without compromising security or performance.

 

It’s important to note that SecureView is based on Citrix XenClient XT, which uses a Type 1, or bare-metal, hypervisor. SecureView is also designed to run on Intel vPro Technology, which provides both the performance and the underlying hardware-enhanced security. With XenClient XT and Intel vPro Technology as the foundation, SecureView delivers the level of security required by key government agencies.

 

The author mentions that SecureView had to “meet seemingly conflicting security and performance requirements.” Because SecureView uses a bare-metal hypervisor, it provides two important benefits. First, on startup, Intel vPro Technology provides a hardware-based trusted boot. This hardware root of trust adds an extra measure of protection against threats like rootkits and other malware. Second, because a Type 1 hypervisor runs directly on the hardware, it is able to deliver outstanding performance.

 

As mentioned, SecureView is based on Citrix XenClient XT and Intel vPro Technology. You can see XenClient XT running on the latest Intel vPro Technology at the Intel Developer Forum, held in San Francisco from September 10th to 12th. Look for the Citrix booth in the Business Client Zone on the demo floor at IDF. And while you’re there, be sure to check out the other new capabilities and new Ultrabook 2-in-1 designs.

 

Follow all of the action at #IDF13 on @IntelITS and see what to expect at IDF 2013 in this short video from Intel Corp.

Read more >

InformationWeek 500 Recognizes Companies for Digital Business

With the release of the 2013 InformationWeek 500 list, InformationWeek recognized some of the nation’s most innovative users of business technology. The theme for this year’s award is digital business. According to InformationWeek Editor in Chief Rob Preston, “It’s a movement, rooted in data analytics, mobile computing, social networking and other customer-focused technologies that are turning companies and industries on their ear”.

 

Intel is continually learning and adapting from our own digital transformation journey. For example, when we first implemented a Bring Your Own Device (BYOD) program, we discovered a diversity of devices and applications that employees used in their personal lives that made them more productive. We then spent a lot of engineering time to be able to support a broad list of devices and capabilities so that employees could use devices that would make them more productive in their professional lives.

 

We also foster and encourage direct feedback from our employees to understand their usage and interaction with technology. This has taught us not to stay married to a solution if it doesn’t work in real world usage. This feedback mechanism forms a critical part of our user-centered approach to delivering IT services. Our philosophy is not about looking at what’s easiest to implement but what makes employees more productive and provides greater user satisfaction. After all, no one wants to provide technology that users find frustrating.

 

You can find out more about this approach from our white paper on Best Practices in User-Centered IT to learn why Intel was ranked 40th on the InformationWeek list.

 

What are some lessons that your organization has learned in its digital business journey?

Read more >


Code-For-Good in the Cloud Hackathon #IDF13


This team event brings college students and IT professionals together in a good-natured mini-hackathon to code and do good. The participants will be divided into three teams to develop innovative math apps for middle school students. These apps will be hosted in our IDF private cloud, which incorporates Intel Xeon E5 servers and Intel IT’s Platform as a Service (PaaS). Apps will be designed to be cloud-aware, including a stateless model, a web services approach, massive scaling, and design for failure. In addition, the apps will run across a spectrum of client devices via HTML5 and utilize sensor technology, cross-platform design, and Intel HR pedagogical best practices. Success will be judged by our team of subject matter experts from Intel and academia. Fun prizes will be distributed at our award ceremony.

 

This event showcases the hackathon learning approach, which benefits students and professionals alike. As opposed to a rigid, instructor-led class with specific exercises, a hackathon allows participants to explore ideas at their own pace, immersing them in technology areas that excite and interest them. The Code-For-Good program uses this approach to introduce new technology to the next generation of engineers. Intel IT uses a similar approach, called “code-a-thons,” to expand the skills of its workforce.

 

This event demonstrates Intel leadership in multiple ways:

 

  • Cloud Computing and Data Center technologies
  • Client and Mobile Computing
  • Education and training

 

While you’re in the Technology Showcase don’t miss going to the Networking Plaza where there are special activities planned such as Chip-Chat and Engineering Power Hour. Additional inspiring topics around hackathons, robots, and innovation promise to be hits.

 

Follow all of the action at #IDF13 on @IntelITS and see what to expect at IDF 2013 in this short video from Intel Corp.

Read more >


Networking is a Key Component in Intel® Atom™ Processor C2000-based Microserver Systems

On Wednesday, Intel took a product leadership role in the emerging market for microserver processors, thanks in part to our strength in networking.

 

We announced the second-generation, 64-bit Intel® Atom™ processor C2000 product family of system-on-chip (SoC) designs for microservers and cold storage (code-named “Avoton”) and entry networking (code-named “Rangeley”).

 

One of the key features in this product line is the integrated Intel® Ethernet controller technology derived from Intel’s extensive line of Ethernet controllers. This provides the Intel Atom processor C2000 family with high-bandwidth integrated Ethernet network connections up to 2.5Gbps.  Some of the first microserver products to feature the Intel Atom processor C2000 SoC come from NEC, Quanta, and Supermicro.

 

2.5Gbps Ethernet is ideal for dense microserver designs, providing headroom for applications requiring more than a gigabit per second of bandwidth. This, plus the integration of Intel Ethernet controller technology, allows the use of Intel’s industry-proven software drivers along with support for features such as I/O virtualization, stateless offloads, receive-side scaling, IEEE 1588 time stamping, and Energy Efficient Ethernet (EEE) power management.

 

Along with the Intel Atom processor C2000 product family, Intel introduced the Intel® Ethernet Switch FM5224, which is an ideal network switch chip to complement the Intel Atom processor C2000 family in microserver systems due to its high number of 2.5GbE ports, enabling high-density compute installations.

 

The Intel Ethernet Switch FM5224 runs Open Network Software from Wind River Systems, extending the Intel® Open Network Platform from currently available top-of-rack switch reference designs into microserver systems. The Intel Ethernet Switch FM5224 includes the following features tailored to microserver environments:

  • Up to 64 2.5GbE ports along with two 40GbE or eight 10GbE uplinks
  • Intel® FlexPipe™ frame processing pipeline optimized for software-defined networking (SDN) environments
  • Advanced load distribution mechanisms to balance loads across microserver modules
  • Support for data center tunneling features such as VXLAN and NVGRE
  • Low cut-through latency to improve clustering performance
  • Built-in KR and KR4 PHY technology for DA-copper uplinks

 

Compared to competing microserver networking solutions, the Intel Ethernet Switch FM5224 provides 25 percent higher node density, 2.5 times higher bandwidth and two times lower latency[1].  If you want more information on the FM5224 along with how it can be used in microserver systems, come and see session #CLDS006 at the Intel Developer Forum in San Francisco next week or visit the following link:

http://www.intel.com/content/www/us/en/switch-silicon/ethernet-switch-fm5224-series.html

 

 


[1] Source: Lippis Report: Open Industry Network Performance & Power Test for Cloud Networks Evaluating 10/40 GbE Switches, Fall 2011 Edition

Read more >

Intel Embracing Consumerization Panel at IDF 2013


 

Time: Wednesday, September 11, 2013, at 4pm

Where: Networking Plaza near The Business Client Pavilion


Attend the Intel Embracing Consumerization Panel at IDF 2013 and join us for a candid discussion with CTOs of industry leading companies as they explore Consumerization in the Enterprise.

 

This is your chance to be a part of the discussion on how these big companies are handling consumerization in their own environments and, just as importantly, how they see these trends playing out across the industry. Panel host and Intel CTO Yasser Rasheed will lead this intriguing discussion among industry peers. Panelists include:


Yasser Rasheed

Intel’s Yasser Rasheed is the CTO and Director of Architecture in the Business Client Platforms Division at Intel Corporation. He drives technical roadmap definition for next generation business client platforms and solutions, covering business desktops, notebooks, tablets and other end-point devices. Yasser Rasheed also leads the Business Client partner innovation program, focusing on advanced innovation initiatives with strategic business partners. Dr. Rasheed has been with Intel since 2000.


Rich Stern

As Corporate VP, Global Technology Infrastructure, Rich Stern leads the global consulting workforce for Workplace Technology Services. Rich and his leadership team are responsible for setting the strategic direction of the business via the development of assets and solutions, working closely with sales teams to understand customer demand and delivering successful project outcomes. Rich has a Bachelor of Science in Marketing from Saint John’s University. He and his wife, Carol, live in Northern California.


Dave Buchholz

Dave Buchholz is a Principal Engineer and Director of Consumerization for Intel’s Information Technology group, where he is responsible for investigating future client technology adoption and engineering for approximately 100,000 end users. With over 20 years of experience, Dave specializes in consumerization, BYO, Generation Y, technology trends, and usage models.


Michael Fey

Michael Fey is the Executive Vice President, General Manager of Corporate Products and Chief Technology Officer for McAfee, where he drives the company’s long-term strategic vision and core innovation efforts. He is responsible for overall business operations and strategy for McAfee’s Corporate Product Business Units. In his role, Fey also oversees McAfee’s go-to-market initiatives and cross-functional collaboration across McAfee’s sales, product management, development, and research teams.


Ahmed Sallam

Ahmed Sallam is the VP of Product Strategy and CTO of Client Virtualization at Citrix Systems, driving technology innovation and product strategy for emerging virtualization-based device management and security, and working closely with software and hardware ecosystem partners as they adopt and integrate Citrix’s open, extensible platforms. A renowned expert across the industry, Ahmed is well known for pioneering new models of virtualization-based security and management that deliver a flexible, well-managed, and secure computing experience with high safety assurances.

 

Don’t forget to drop off your entry at the Intel Business Client Pavilion by 4pm. The prize drawing is at the Networking Plaza at 4:55pm. Must be present to win!

 

While you’re in the Technology Showcase don’t miss going to the Networking Plaza where there are special activities planned such as Chip-Chat and Engineering Power Hour. Additional inspiring topics around hackathons, robots, and innovation promise to be hits.

 

Follow all of the action at #IDF13 on @IntelITS and see what to expect at IDF 2013 in this short video from Intel Corp.

Read more >


Intel Unveils New Technologies for Efficient Cloud Datacenters

It’s no secret the world is becoming more and more mobile.  As a result, the pressure to support billions of devices and users is changing the very composition of datacenters, which is why Intel is introducing a portfolio of datacenter products and technologies for cloud service providers that handle a diverse set of lightweight workloads in the microserver, cold storage and entry networking segments.

 

Our goal at Intel is to provide key innovations original equipment manufacturers (OEMs), telecommunications equipment makers and cloud service providers can use to build the datacenters of the future.

 

Our leadership in silicon and system-on-chip (SoC) design, rack architecture, and software enabling is what allows us to create the new Intel® Atom™ processor C2000 product family.

 

Intel® Atom™ Processor C2000 Product Family


The portfolio includes the second-generation, 64-bit Intel Atom C2000 product family of SoC designs. These new SoCs are Intel’s first products based on the Silvermont microarchitecture and include 13 customized configurations.


 

New 64-bit, System-on-Chip Family for the Datacenter


We also introduced new silicon, the Intel® Ethernet Switch FM5224, which, when combined with the Wind River Open Network Software suite, brings Software Defined Networking (SDN) solutions to servers for improved density and lower power.

 

Switches based on the Intel Ethernet Switch FM5224 silicon can connect up to 64 microservers, providing 30 percent higher node density, 2.5 times higher bandwidth and two times lower latency.

 

First Live Demo of an RSA-Based System


In addition to the silicon and system announcements Intel made today, we are showing the first operational Intel Rack Scale Architecture (RSA)-based rack with Intel® Silicon Photonics Technology, using the MXC connector and ClearCurve optical fiber developed in partnership with Corning.

 

The RSA-based rack enables greater data density, with speeds of up to 1.6 terabits per second at distances of up to 300 meters.

 

For more information on the announcements, including Diane Bryant’s presentation, additional documents, and pictures, please check out Intel’s newsroom.

Read more >

Inside IT: Developing an Enterprise Mobile Application Framework

Emerging technologies are creating great new opportunities in the enterprise. Context-aware computing and new interaction methods like perceptual computing promise increased productivity and greater collaboration. But they also create new challenges that must be met in order to leverage them to the fullest. Intel IT has developed an enterprise mobile application framework to support cross-platform mobile application development. It’s designed to give developers the necessary tools to take advantage of the latest technologies and platforms. In this podcast we hear from Krishnan Saikrishnan, Manager in Intel’s Information Technology Group, and Arijit Bandyopadhyay, Enterprise Architect in the IT Strategy, Architecture and Innovation Group. They outline the development and deployment of the framework, the challenges they faced implementing it, and the benefits it brings to the entire enterprise. Listen to more on developing a mobile application framework now.

Read more >


Perfect for Web, Big Data or Hosting Applications: HP ProLiant m300 Server Cartridge

Guest blog written by Nigel Church, HP Servers


 

HP announces the HP ProLiant m300 Server Cartridge for the HP Moonshot System. This server features the new Intel® Atom™ processor C2000, an eight-core system on a chip (SoC) running at 2.4GHz with up to 32GB of memory.

 

Now, in just one Moonshot System with ProLiant m300 servers, it’s possible to have 360 cores, 1,440GB of memory, and up to 45TB of storage. For select workloads, you can accomplish the same work using just 19% of the power of a traditional server!

 

Examples of workloads it can support are as follows:

 

  • Companies serving web pages or files over the Internet at scale need to carry out simultaneous lightweight computing tasks over and over, at distributed locations. Traditional servers usually come with more horsepower, and more cost, than what’s needed for these lightweight computing tasks.
  • Hosters looking for right-sized IT to keep utilization rates high at the lowest possible cost will find this server cartridge interesting in providing yet another competitive tier of service.
  • Companies embracing NoSQL/NewSQL technologies designed to operate in distributed clusters of shared-nothing nodes struggle at scale, because the only choices today are powerful single-node 1U servers that are not right-sized for the job. ProLiant m300 servers provide a very cost-effective way to scale.

 

For more information visit HP Moonshot or learn more about the Intel® Atom™ Processors & Intel® Xeon® Processor E3 Family.

Read more >

Join Intel’s Dena Lumbang and Dan Brunton at IDF 2013 in SFO

For IDF13, we’ve renewed our commitment to go deep into our technology, and to share Intel’s roadmaps and plans. We look forward to seeing you there. You think you know IDF? Think again.


Dena Lumbang

Intel SCS Product Marketing Engineer
PC Client Group / Business Client Platform Division
17 Years @ Intel


Dan Brunton

Senior Implementation Architect
PC Client Group / Business Client Platform Division
13 Years @ Intel

All technical sessions will be presented by Intel and industry experts, including Dena Lumbang and Dan Brunton. Their session on Intel Setup and Configuration Software (Intel SCS) will explain the framework for discovering and enabling Intel architecture capabilities on the vPro platform. Topics will include:

 

  • Use Intel SCS to configure, tune, and discover Intel technologies, including Intel Active Management Technology and Intel SSD Pro Series
  • Augment third-party manageability business processes (e.g. management consoles) with the Intel SCS framework and tools
  • Maintain interoperability with Intel SCS, making for a consistent customer experience with your solution

When not presenting, Dena Lumbang can be found at the software demos in the Intel booth. She’ll also be mingling with some of our key software partners throughout the week.

“Software developers can take advantage of this extensibility and focus on the value-added use cases ‘post-provisioning’,” says Dena Lumbang.

Access to Intel IT experts won’t be limited to technical sessions. IDF 2013 attendees can interact with other IT professionals and participate in product demos performed by Intel IT experts in the Business Client pavilion.

 

“When I’m not presenting, I will be in the Business Client Pavilion to answer questions and provide demos of our many technologies,” says Dan Brunton. “Dena Lumbang and I will be giving a session on the Intel Setup and Configuration Software. We hope that the attendees who join us in our session will walk away with a clear understanding of the capabilities of the Intel SCS and how it can be easily integrated into both IT and third party ISV solutions.”

 

Be sure to join Dan, Dena and other Intel IT experts in the Business Client Pavilion or the Advanced Technology Pavilion.

 

IDF13 will take place September 10-12th in San Francisco, CA at the Moscone Convention Center.

 

To join the conversation, follow @IntelITS and @IntelvPro on Twitter. For more details, visit Intel® Setup and Configuration Software.

Read more >

Cloud Computing Cost: Saving with a Hybrid Model

Intel IT needed to determine an investment strategy that could meet our growing capacity needs for new businesses that required hosting solutions. We could have built out the capacity and capability internally. Or we could have outsourced everything to a public cloud provider. However, we explored and chose a third option—a hybrid cloud model (a mix of public and private clouds). This flexible approach allows us to dynamically adjust the amount of capacity we are using in the public or private hosting environment, thereby achieving high levels of agility, scalability, and efficiency.

 

We are working to accelerate Intel’s adoption of a hybrid cloud by establishing the following key design strategies:

• Design applications and the hosting environment for automated self-healing.

• Design our hybrid cloud so that it can meet unpredictable demand automatically.

• Design cloud-aware applications that accommodate infrastructure outages and that can be concurrently active at multiple locations.

 

Developing a hybrid cloud model required us to look at various trade-offs between the public and private cloud environments, choosing a mix that represents the best solution from several perspectives. You can read more about this hybrid cloud model in our recently released white paper.

Read more >

Hadoop Tutorials: Using Hive with HBase

About Chandeep Singh: I am a Software Engineer at Intel, working in Professional Services for the Intel Distribution for Apache Hadoop. I’ve been working in the Hadoop world for a while and have gotten my hands dirty with MapReduce, Hive, Pig, and Sqoop, and have enjoyed every bit of it.


Here is another interesting use case that came up when I was working with one of our clients in the insurance industry. The client had an enormous amount of claim data residing in multiple SQL Server databases, which were to be consolidated into one. Some of the queries on this data took days, so we looked for an alternative solution that could process the data in a distributed fashion and save us some time. We started looking into a Hadoop-based solution, since the company was already using Hadoop.

 

We had a few options on the table, such as Hive, Pig, and HBase, and after some brainstorming decided to go with HBase for the following reasons:

  1. It is an open source distributed database that yields higher performance while being cost-effective at the same time.
  2. We do not have to worry about distributing the data for faster processing, since Hadoop takes care of it.
  3. Batch processing with no real indexes.
  4. Data integrity: HBase confirms a write only after its write-ahead log reaches all three in-memory HDFS replicas.
  5. Easily scalable, fault tolerant, and highly available.

 

Now the next step was to move data from the SQL database to HDFS, for which we used Sqoop. It imports all the data, stores it as CSV by default, and can be used as follows:

 

sqoop import --connect 'jdbc:sqlserver://<ServerName>;username=<UserName>;password=<Password>;database=<Database Name>' --table <Table Name> --target-dir <Destination Path>

 

The next step was to create an HBase table and load the data from the CSV into it, in one of the following ways:

 

hadoop jar <Path To HBase Jar> importtsv -Dimporttsv.columns=<Column Names> '-Dimporttsv.separator=,' <Table To Import Into> <Input Directory>


Or use the complete bulk loader:


hadoop jar hbase-VERSION.jar completebulkload [-c /path/to/hbase/config/hbase-site.xml] <Input Directory> <Table To Import Into>

 

We gained everything we had hoped for by moving to HBase, but something was still missing. Querying an HBase database was not everyone’s cup of tea, so the process needed to be simplified. Since the insurance folks were already familiar with SQL, the easiest way out was to build a Hive schema on top of the HBase table.

 

There are two cases to consider when creating a Hive table on top of HBase:

  1. We do not know the column names, or we need all the columns; in this case we can explode all the data into a map of key-value pairs.
  2. We need only specific columns, in which case we must specify the mapping for every column.
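
To make the two cases concrete, here is a small Python sketch (purely illustrative, not Hive or HBase code; the row data mirrors the MY_TABLE example below) of how the same column family surfaces either as a map or as fixed columns:

```python
# Hypothetical model of one HBase column family: {row_key: {qualifier: value}}.
rows = {
    "1": {"mydata1": "value1"},
    "2": {"mydata2": "value2"},
    "5": {"mydata1": "value1", "mydata5": "value5"},
}

# Case 1: expose the whole family as a map of key-value pairs
# (analogous to the ":key,TEST:" mapping used later in this post).
def as_map(rows):
    return [(key, dict(quals)) for key, quals in sorted(rows.items())]

# Case 2: map only named qualifiers to fixed columns. Absent qualifiers
# become NULL (None), and rows containing none of the mapped qualifiers
# are skipped, matching the query output shown later in this post.
def as_columns(rows, qualifiers):
    return [
        (key, *(quals.get(q) for q in qualifiers))
        for key, quals in sorted(rows.items())
        if any(q in quals for q in qualifiers)
    ]
```

Calling `as_map(rows)` returns every row with all of its qualifiers, while `as_columns(rows, ["mydata1", "mydata5"])` returns only rows 1 and 5, with `None` standing in for a missing qualifier.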

 

Let’s look at an example:

 

The first step is to create a sample HBase table.

create 'MY_TABLE', {NAME => 'TEST', VERSIONS => '3', COMPRESSION => 'NONE', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'false'}
describe 'MY_TABLE'
enable 'MY_TABLE'

 

The next step is to insert some sample data into MY_TABLE

put 'MY_TABLE', '1', 'TEST:mydata1', 'value1'
put 'MY_TABLE', '2', 'TEST:mydata2', 'value2'
put 'MY_TABLE', '3', 'TEST:mydata3', 'value3'
put 'MY_TABLE', '4', 'TEST:mydata4', 'value4'
put 'MY_TABLE', '5', 'TEST:mydata5', 'value5'
put 'MY_TABLE', '5', 'TEST:mydata1', 'value1'

 

As mentioned above, we can create the Hive external table in one of two ways:


Use all columns:

CREATE EXTERNAL TABLE test_all (id string, colname map<string,string>)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,TEST:")
TBLPROPERTIES ("hbase.table.name" = "MY_TABLE");

hive> SELECT * FROM test_all;
1 {"mydata1":"value1"}
2 {"mydata2":"value2"}
3 {"mydata3":"value3"}
4 {"mydata4":"value4"}
5 {"mydata1":"value1","mydata5":"value5"}

Or map every column by name:

CREATE EXTERNAL TABLE test_map (id string, colname1 string, colname2 string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,TEST:mydata1,TEST:mydata5")
TBLPROPERTIES ("hbase.table.name" = "MY_TABLE");

hive> SELECT * FROM test_map;
1       value1  NULL
5       value1  value5

 

Please let me know if you have any questions about this example by posting a comment below. Thanks for your time!

 

Chandeep


Great Things Come in Micro Packages

By Drew Schulke, Executive Director, Dell Data Center Solutions

 

Five years ago, a large hosting customer in France provided Dell with a problem statement and a question:

 

“Our data center is out of power and space, but we need to add at least 10,000 more servers to support (our) business plan.  What can you do to help us?” 


This problem statement became a challenge, and a small team of highly skilled and creative engineers was unleashed to solve it.  Just a few short months later, Dell created the world’s first microserver…and the data center hasn’t been the same since.

 

For the better part of the five years since that customer conversation, I’ve been fortunate to have a front-row seat as the concept of microservers gained traction and credibility in the industry.  Here in Dell’s Data Center Solutions (DCS) organization, we’ve had countless more customer conversations on the workloads that drive today’s web and cloud-based business models.

 

One of those critical, emerging workloads is known as “cold storage.”  The business need for cold storage is relatively straightforward. Web-based businesses produce huge volumes of data every day but over time the frequency with which such data is accessed decreases rapidly — yet the data itself cannot be destroyed.

dell_cold_storage_graphic.png

To justify keeping access to such “cold” data, one needs an extremely cost-effective and flexible solution. Such a solution must balance reasonable response time with minimal space and power consumption while also providing a seamless transition from an application and architectural perspective…all at the lowest possible cost-per-gigabyte.

 

That brings us to today’s announcement of our DCS 1300 platform, built specifically for the needs of cold storage workloads.  Featuring the just-announced Intel Atom C2000 (code name ‘Avoton’) product family, this solution is the next leap forward in the microserver journey we began five years ago.

 

Customers asked for breakthrough levels of storage density and power efficiency, and Dell has delivered once again with an industry-standard 64-bit x86 solution that provides extreme storage density at an unbelievably low cost, making the DCS 1300 a compelling solution for cold storage environments.


Detecting Fraud Using Big Data Analytics with IDH

Palanivelu Balasubramanian (Bala), Business Development Manager, Big-Data, Intel.

Bala has more than 25 years of experience in the information management (IM) and analytics domain. Over the years as a consultant, he has built an excellent track record of influencing customers and architecting solutions for Fortune 100 customers in the IM space (big data, BI, data warehousing, …). He has held leadership roles in various capacities supporting sales and delivery organizations. Prior to joining Intel, he was the Practice Principal (FSI) within the Information Management & Analytics division at HP.  He joined HP with the acquisition of Knightsbridge Solutions.


1. Introduction

Industry research says financial institutions lose billions of dollars to fraudulent activities. The impact of fraud is not limited to money; it also affects the customer relationship, reputation, and goodwill of the institution.  As the influence of technology increases, fraudsters use creative ways to manipulate the system to their advantage. Fraud schemes include money laundering (AML), forgery, identity theft, fraudulent claims, insider trading, credit card fraud, mortgage fraud, wire transfer fraud, and cyber-attacks. Preventing and detecting fraud has always been one of the biggest challenges in the financial services industry. As client interactions become more complex, instantaneous, and data-intensive, banks have to adapt by deploying smarter ways to prevent fraud, enforce governance measures, and reduce risks.


Staying ahead of fraudsters is the key to preventing fraud, and analytics can aid in the detection and prevention process. The first step is to learn from past history to prevent similar future events: understanding fraud history, patterns of fraud, situations that trigger fraud, and customer behavior patterns, along with knowing your customer/employee and applying sentiment analysis, are some of the key analysis steps institutions need to follow. The next step is to define the rules and models that detect fraud, and to build alert mechanisms that monitor automatically on an ongoing basis. Using machine learning methods to predict future incidents is also a key detection step. Securing the data and the access to it is equally important to effectively combat fraud.
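
As a minimal sketch of the rules-and-alerts idea (the threshold, function name, and data below are illustrative, not from any production system), a simple rule can flag a transaction that deviates sharply from a customer's historical behavior:

```python
from statistics import mean, stdev

def flag_transaction(history, amount, threshold=3.0):
    """Flag an amount more than `threshold` standard deviations away from
    the customer's historical mean (a deliberately simple rule; real
    systems combine many such rules with learned models)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

history = [100.0, 105.0, 98.0, 102.0, 110.0, 95.0]  # hypothetical past claims
suspicious = flag_transaction(history, 5000.0)  # True: far outside the pattern
normal = flag_transaction(history, 103.0)       # False: within normal range
```

In practice such a rule would run continuously over incoming events, with flagged cases routed to an alerting workflow for review.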


Analyzing years of historical data and integrating new kinds of data are normal steps in such activities. A scalable, high-performance, cost-effective, and robust data management framework is essential to support the ever-growing data volume in these data-intensive processes. Some of the core data research activities include mining, profiling, searching, match-merging, building predictive models, and adopting machine learning methods. It’s important to note that these activities use both structured and semi-structured/unstructured data.


2. Hadoop for Analytics

Apache Hadoop* is an open-source framework that uses a simple programming model to enable distributed processing of large data sets on clusters of computers. The full stack includes common utilities, a distributed file system, analytics and data storage platforms, and an application layer that manages distributed processing, parallel computation, workflow, and configuration management. In addition to high availability, the Hadoop framework is more cost-effective at handling large, complex, or unstructured data sets than conventional approaches and offers massive scalability and speed.


Hadoop can store any kind of data, both structured and unstructured, and doesn’t require any data conversion. Since data can be stored as-is from the source (no schema design, no data loss), it enables faster implementation and quick data exploration. Traditional tools and infrastructure struggle to address larger and more varied data sets arriving at high speed. As the volume, variety, and velocity of data increase, enterprises are turning to a new approach to data analytics based on the open source Apache Hadoop* platform.
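
As a toy illustration of this schema-on-read idea (the field layout and values below are made up), raw records can be stored untouched and a schema applied only when a query runs:

```python
import csv
import io

# Raw claims data stored exactly as it arrived from the source system
# (hypothetical rows; no schema was imposed at load time).
raw = "1,ACME,1200.50\n2,Globex,87.25\n"

# Schema-on-read: the (id, name, amount) interpretation is applied only
# at query time, so differently shaped records never require a reload.
def total_amount(raw_text):
    reader = csv.reader(io.StringIO(raw_text))
    return sum(float(row[2]) for row in reader)

total = total_amount(raw)
```

Contrast this with a traditional warehouse, where the schema must be designed and the data converted before a single query can run.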


Traditional data analysis of structured data is managed through models that define the parameters for a type of query. As data grew from megabyte to gigabyte and then to terabyte, data warehouse appliances that use massively parallel processing (MPP) to distribute processing across the compute nodes emerged. Over time, these traditional systems were optimized to work at terascale with structured data.


At petabyte scale, RDBMSs and MPP systems are unable to handle the volume of unstructured data. MPP systems have limited horizontal scalability, and the cost of adding proprietary appliances is often prohibitive.


The following core Apache Hadoop ecosystem provides capabilities for effective data management:

  • Core: A set of shared libraries
  • HDFS: The Hadoop filesystem
  • MapReduce: Parallel computation framework
  • ZooKeeper: Configuration management and coordination
  • HBase: Column-oriented database on HDFS
  • Hive: Data warehouse on HDFS with SQL-like access
  • Pig: Higher-level programming language for Hadoop computations
  • Oozie: Orchestration and workflow management
  • Mahout: A library of machine learning and data mining algorithms
  • Flume: Collection and import of log and event data
  • Sqoop: Imports data from relational databases
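
As a toy illustration of the MapReduce model listed above (plain Python standing in for the actual Hadoop API), the map phase emits key-value pairs, the framework shuffles them by key, and the reduce phase aggregates each group:

```python
from collections import defaultdict

# Map phase: emit (word, 1) for every word in the input records.
def map_phase(records):
    for record in records:
        for word in record.split():
            yield word.lower(), 1

# Shuffle: group emitted values by key (Hadoop performs this step
# between the map and reduce phases, across the cluster).
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: aggregate each key's values independently.
def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["fraud alert", "fraud claim"])))
```

Because each map call and each reduce call is independent, Hadoop can run them in parallel across many nodes, which is what gives the framework its scalability.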


Using the Hadoop ecosystem for fraud analytics can be the preferred way to address the business and technology needs that are disrupting traditional data management and processing. Enterprises can gain competitive advantage by adopting big data analytics.

 

3. Why IDH – Intel® Distribution for Apache Hadoop software?

IDH is a software platform that provides distributed data processing and data management for enterprise applications that analyze massive amounts of diverse data. The Intel Distribution includes Apache Hadoop and other software components with enhancements from Intel. It is an enterprise-grade big data storage and analytics system that delivers real-time big data processing optimized for Intel processor-based infrastructure. It is supported by experts at Intel with deep optimization experience in the Apache Hadoop software stack as well as knowledge of the underlying processor, storage, and networking components.

IDH architecture.jpg

IDH is designed to enable the widest range of use cases on Hadoop by delivering the performance and security that enterprises need. Intel® Manager provides the management console for IDH. Designed to meet the needs of some of the most demanding enterprises in the world, Intel Manager simplifies the deployment, configuration, tuning, monitoring, and security of your Hadoop deployment. Along with the Intel® Xeon processor, SSDs, and Intel® 10GbE networking, IDH offers a robust platform upon which the ecosystem can innovate in delivering new analytics solutions. Intel delivers platform innovation in open source and is committed to supporting the Apache developer community with code and collaboration. Intel believes that every organization and individual should have the ability to generate value from all the data they can access.

The bottom line…

IDH is designed to reflect ongoing innovation in the hardware platform by delivering value in the Apache Hadoop software stack. Software engineers at Intel continue to enable advanced hardware capabilities in every layer of the software stack, from the hypervisor and Linux operating system to Java, Hadoop, HDFS, HBase, and Hive. This robust platform enables the entire software ecosystem to build innovative solutions for analytics. Intel is committed to continuously strengthening IDH’s enterprise capabilities from the ground up; the above are some of the key focus areas.

 

In summary, staying ahead of fraudsters is the key to preventing fraud. Organizations have to build an efficient people, process, and technology framework to effectively combat fraud. From a technology perspective, a robust data management framework is essential to enable analytics-driven fraud detection; IDH provides those capabilities and has enhanced the Hadoop framework to support enterprise needs. Intel continuously invests in research and enhances capabilities in both the hardware and software layers. Intel’s global team of experienced professionals brings decades of deployment expertise in Hadoop, BI, and security; they are developers of best-practice deployment methodologies and tools, and providers of advanced integration and operational assistance.
