ADVISOR DETAILS

RECENT BLOG POSTS

Public Cloud is Insecure… Isn’t It?

When the discussion turns to the use of public cloud, statements are often made that the cloud is not secure. According to the National Institute of Standards and Technology, public cloud is an “infrastructure [that] is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider.”

 

While enterprises often do not question the security of their private clouds, the level of security concerns seems to rise when it comes to public clouds.

 

Why is that?

 

Public Cloud Suffers from a Lack of Transparency

 

Traditionally, large public cloud providers have been very secretive about their security measures – not responding to clients’ requests for information, and definitely not allowing their clients to audit their environments to verify that appropriate security measures are in place.

 

The lack of breaches seems to demonstrate that these environments are actually quite secure, but the lack of communication tends to leave the situation open to interpretation. Amazon Web Services (AWS) definitely improved its communication when it released the Amazon Web Services: Overview of Security Processes document last June. Google has also documented its security approach (although it is still questionable whether it truly addresses the issue in a transparent manner). And Microsoft Azure’s white paper on security, dated August 2010, can be found publicly here.

 

These documents describe the security aspects of the infrastructure on which your applications will run. However, they do not describe the end-to-end security that will protect your application once you expose it to the Internet. Ultimately, that is what you need to think about.

 

Public Cloud Security is a Shared Responsibility

 

Infrastructure security is more often than not handled by the service provider. In other words, the service provider will ensure your applications and data are fully isolated from other companies in a multi-tenant environment. This ensures that another user of the same service cannot access your applications and data from within the infrastructure environment in which both companies run their applications. But it is your responsibility to ensure an external hacker cannot get into your applications and steal your data. You cannot expect your service provider to take on that responsibility if you use IaaS.
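To make that split concrete, here is a minimal sketch of the kind of control that sits on your side of the line when you consume IaaS: deciding exactly what the Internet may reach on the instances running your application. It uses the AWS SDK for Python (boto3) purely as an illustration; the security group ID and address ranges are hypothetical placeholders, and your provider, tooling, and rule set will differ.

    import boto3

    # Illustrative only: the provider secures the underlying infrastructure,
    # but which ports and networks your application exposes is your call.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Hypothetical security group attached to the instances running your app.
    APP_SECURITY_GROUP = "sg-0123456789abcdef0"

    # Allow only HTTPS from the public Internet...
    ec2.authorize_security_group_ingress(
        GroupId=APP_SECURITY_GROUP,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "Public HTTPS"}],
        }],
    )

    # ...and limit SSH administration to a corporate address range (placeholder CIDR).
    ec2.authorize_security_group_ingress(
        GroupId=APP_SECURITY_GROUP,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "Admin network only"}],
        }],
    )

The provider guarantees the isolation of the compartment itself; which ports, protocols, and source networks that compartment exposes to the outside world remains a decision only you can make.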

 

How Should I Manage My Public Cloud?

 

So what do you need to take care of when you develop an application or a service that will run on a public cloud environment?

 

The Cloud Security Alliance published a document titled “Practices for Secure Development of Cloud Applications.” AWS also developed an interesting document describing the best practices to secure applications that run on their service. Although some of them are quite specific to AWS, it’s interesting to look at these documents and extract best practices.

 

In a nutshell, the public cloud service provider will ensure your compartment is secure.

 

This leaves you responsible for ensuring the content of your compartment is not hacked into from the outside. And this is actually not that different from what you do in your own datacenter. The difference, of course, is that you are now operating in a virtual environment rather than a physical one.

 

I often hear that open source is not secure. Let me share with you the OpenStack security guide, a very comprehensive document describing how OpenStack security is set up. As you will see, it is quite similar to what other service providers are already doing.

 

Nevertheless, public cloud service providers should be more transparent in describing their services. The only way you can truly compare their security levels is by using the CSA (Cloud Security Alliance) STAR (Security, Trust and Assurance Registry) submissions. While it doesn’t tell the whole story, it’s a good starting point.

 

What do you think? 

 

Let’s continue the conversation. I would love to hear your opinions, stories and experience.

 

- Christian V

Read more >

Securing your enterprise with the trifecta of partnerships at HP Discover

Adversaries R Us continue to gain momentum, using collaboration and innovative techniques to plan their next attack. They get better and better at gaining access to the most valuable asset of every enterprise – data. The most effective way to combat Adversaries R Us is to beat them at their own game through collaborative security intelligence. Such ecosystems are best enabled through global partnerships across all layers of the IT infrastructure, including the processor, the servers, the operating system, and the software. This theme clearly emerges from the HP Discover Security Track sessions, which highlight how the trifecta of HP, Microsoft, and Intel come together to empower global organizations with multiple options to build out a better, more secure enterprise.

[Image: the three CEOs at HP Discover]

At the last HP Discover Conference in Las Vegas, HP CEO Meg Whitman quipped, “Sometimes 30-year marriages need a little rejuvenation,” on stage with Intel CEO Brian Krzanich and Microsoft CEO Satya Nadella (who joined by video). Perhaps one can witness the outcomes of said rejuvenation through the trifecta of this partnership.

 

In the IT6611 session, Jonathan Donaldson – General Manager, Software Defined Infrastructure, Cloud Platforms, Intel – asks what we tend to give up when we carry on our daily lives in our own ecosystem of devices, apps, and the Internet of Things. Donaldson highlights how Intel is collaborating with partners to protect the privacy of individuals. Intel’s approach rests on a foundation that requires the datacenter to be more agile, secure, and anonymous.

 

Frank Mong – HP VP & GM, Security Solutions – and Bret Arsenault – Microsoft VP & CISO – encourage you to start thinking like a bad guy in TK6315. “While the security industry remains overinvested in products and technology, and underinvested in people and processes, hackers are spending more money and sharing information,” reads the abstract. The next generation of security challenges requires a new style of thinking fostered by meaningful collaboration. In IT6680, Arsenault asserts: “Cyber Security: It is not if, but when!”

 

Guess who is responsible for cloud security? In B6557, Intel Fellow Nigel Cook and HP Director Michael Aday detail how HP Helion OpenStack can enable enterprises to better trust their clouds and ensure that their most sensitive and important information is treated appropriately. Remember the 3 equations for the most effective cloud solutions? The session highlights the HP+Intel strategy for enabling business-critical and highly secure workloads in the cloud through a variety of delivery models to address security requirements such as data governance and sovereignty. Check out DT4252, which details a service provider’s experience deploying a scalable, secure, enterprise-class, and cost-effective platform with HP Helion OpenStack.

 

These are small but significant windows into the work being done by each of these partners. The synergies realized by such partnerships to build a better, more secure enterprise manifest themselves through these sessions.

 

How about you? What are some of the partnership strategies that apply in the context of your enterprise?  Overall, what is the role of partnerships from the perspective of Enterprise Security?


Tell us your story.

 

Team up with HP Technology Expert E.G. Nadhan

 

Connect with Nadhan on: Twitter, Facebook, LinkedIn and Journey Blog.

 


Read more >

Intel and HP Team Up in Barcelona for HP Discover 2014

Are you equipped with all you need to know to transform your business in 2015? Come and see what Intel has to say about the technology and business hot topics you are interested in – data center, cloud, analytics, storage, big data, and clients – and our vision of how these are essential to the Internet of Things.

 

We are very honored to once again be the premier Conference Sponsor at HP Discover and are looking forward to seeing you all in Barcelona this year! As ever, the pace of innovation and change we see in the information technology market continues to excite. No doubt the recent announcement from HP on the separation of its business in 2015 and how this development will position HP for further accelerated performance and clear industry leadership in key areas is going to be a hot conversation topic throughout the three days.

 

However, let’s stay focused on the leading business and technology solutions that Intel and HP are dynamically collaborating on to bring to market over the next year; we promise there will be no shortage of exciting things for us to talk about as we count down the final days of 2014 and launch into 2015.

 

Better Business With Intel

 

This year the Intel showcase theme is Intelligent Business Transformation, powered by your Intel-based data center and business clients. Are you ready for some energized discussions, and keen to learn how Intel technology, in conjunction with our partners’ enhanced solutions, can help shape your business strategy?

 

We invite you to come and find us at HP Discover to experience exciting platform innovations and groundbreaking technologies which will change the way you work, live, and interact in the future.

 

Welcome to the Innovation Station

 

We’d especially like to highlight one of our Innovation Theater sessions, which explores how the Internet of Things – built on the foundations of cloud, analytics, and the data center – needs to fundamentally protect the privacy of individuals as we fuel the build-out of enhanced services made possible by analytics and cloud architectures. That build-out requires the data center to be more agile, secure, and anonymous. Come and learn how Intel and partners are working toward a world driven by data-rich services that maintain your digital privacy.

 

Please sign up for Intel sessions when you register for HP Discover and check out our Partner Page to learn how you can start following us on social media.

 

Finally, don’t forget to get in touch and sign up for the Intel IT Center — we’re looking forward to meeting you in Barcelona!

Read more >

Hybrid Cloud Offers More Flexibility for Innovation

According to Franklin Morris at PCWorld, almost 90 percent of businesses are using cloud computing in some form. The question in the last few years isn’t if the cloud should be implemented, but how to do so most effectively. With private, public, and hybrid options available, IT decision makers are faced with the tough decision of choosing the model that best fits the needs of their organization at any given moment.

 

The benefits of public cloud computing include cost-efficiency and scalable architecture. Organizations gravitate toward public cloud solutions for business support applications like email and CRM systems. The benefits of private cloud services include superior customization, security, and privacy, which is attractive to businesses that want tighter control over core applications.

[Image: cloud computing models]

According to Morris, “Businesses now have an expanded set of options for operating in the cloud. They can choose to manage their cloud infrastructure in-house, or opt for a managed cloud and have their cloud provider shoulder the burden of day-to-day management. In short, the cloud of the past was a one-size-fits-all offering. Today it is easy for businesses to design a custom solution around their precise needs.”

 

It’s clear in today’s business environment that the needs of an organization aren’t being met by choosing just one way of doing things; by combining the services of both public and private clouds, IT decision makers are increasingly looking toward a hybrid cloud solution to provide the best of both worlds.

 

In a recent Wired article, Jeff Borek states that “hybrid cloud models using both private, dedicated IT resources and public variable infrastructure are likely to be less expensive for clients than either private or public clouds alone. However, each organization must evaluate its own business requirements to determine which type of cloud is the best fit for them.”

 

The hybrid model offers the most flexible, agile, and scalable option for your business and your IT department. Hybrid allows businesses to keep costs down and adapt to changing environments without sacrificing the customizability needed for continuing innovation.

 

For more information on the different ways cloud can work for your business, watch this helpful video on the changing face of cloud computing.

 

And to join the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

Read more >

The Coding Illini Claim Victory in the Intel® 2014 Parallel Universe Computing Challenge

$26,000 awarded to National Center for Women and Information Technology charity

 

The Coding Illini, a team from NCSA and the University of Illinois at Urbana–Champaign, was declared the winner of the 2014 Intel® Parallel Universe Computing Challenge (PUCC) after a final competition that had plenty of excitement as both the Coding Illini and the Brilliant Dummies met their match with a tough coding round.

 

The final challenge was more substantial than the prior matches and was the only one this year that used Fortran. The larger code was the undoing of both teams, as each made more changes than they were able to debug in their short ten minutes. The Coding Illini added to the drama when their final submission contained a coding error that appears to have broken the convergence of a key algorithm in the application. Their modified application continued iterating until long after the victor was declared and the crowds had dispersed. James Reinders, co-host of the event, suspected that both teams were only a few minutes away from success based on their progress, and that if either team had attempted a little less, they could easily have won by posting a working programming result. The Coding Illini were declared the winner of the match based on the strength of their performance in the trivia round. Based on the Illini’s choice of charitable organization, Intel will award the National Center for Women and Information Technology a donation of $26,000.

 

The Coding Illini, who were runners-up in the 2013 competition, celebrate the charitable award Intel will make to the National Center for Women and Information Technology on their behalf. The team includes team captain Mike Showerman, Andriy Kot, Omar Padron, Ana Gianaru, Phil Miller, and Simon Garcia de Gonzalo.

 

 

James later revealed that all the coding rounds were based on code featured in the new book High Performance Parallelism Pearls (specifically, code from Chapters 5, 9, 19, 28, 8, 24, and 4, in that order; the original programs – effectively the solutions – are available from http://lotsofcores.com). The competition problems were created by minimally changing the programs through the deletion of some pragmas, directives, and keywords associated with the parallel execution of the applications.

 

Complete Recap

 

This year’s PUCC at SC14 in New Orleans started with broad global participation with three U.S. teams, two teams each from Asia and Europe, and a Latin American team. In recognition of SC’s 26th anniversary, the teams were playing for a $26,000 prize to be awarded to the winning team’s selected charity.

 

On the opening night of the SC14 exhibition hall, last year’s winners, the Gaussian Elimination Squad from Germany who were playing for World Vision, eliminated their first round opponent, the Invincible Buckeyes from the Ohio Supercomputer Center and the Ohio State University who were playing for CARE. The German team had a slight lead after the first round that included SC conference and HPC trivia. Then their masterful performance in the coding round even amazed James Reinders, Intel’s Software Evangelist and the designer of the parallel coding challenge.

 

In the second match, The Brilliant Dummies from Korea selected the Asia Injury Prevention Foundation as their charity. They faced off against the Linear Scalers from Argonne National Lab, who chose Asha for Education. After round one, the Brilliant Dummies were in the lead with their quick and accurate answers to the trivia questions. Then in round two, the Seoul National University students managed to get the best Intel® Xeon® and Intel® Xeon Phi™ performance with their changes to parallelize the code in the challenge. This performance cemented their lead and sent them on to the next round.

 

With the first two matches complete, the participants for the initial semi-final round were now identified. The Gaussian Elimination Squad would face The Brilliant Dummies.

 

Match number three, another preliminary round match, pitted Super Computación y Calculo Cientifico (SC3) representing four Latin American countries against the Coding Illini. The Coding Illini had reached the finals in the 2013 PUCC, and were aiming to improve their performance this year.  This was the first year for SC3, who chose to play for Forum for African Women Educationalists. In a tightly fought match, the Coding Illini came out on top.

 

In the final preliminary-round match, Team Taiji, representing four of the top universities in China, chose the Children and Youth Science Center, China Association for Science and Technology, as their charity. They faced the EXAMEN, representing the EXA2CT project in Europe and playing for Room to Read. The team from China employed a rarely used strategy, fielding four different contestants across the trivia and coding rounds of the match, and held the lead after the first round. Up until the very last seconds of the match it looked as though Taiji might be victorious. However, the EXAMEN submitted a MAKE at the very last second that improved the code performance significantly. That last-second edit proved to be the deciding factor in the victory for the team from Europe.

 

So the Coding Illini would face the EXAMEN in the other semifinal round.

 

When the first semifinal match between the Gaussian Elimination Squad and The Brilliant Dummies started, the Germans were pretty confident. After all, they were the defending champions and had performed extraordinarily well in their first match. They built up a slight lead after the trivia round. When the coding round commenced, both teams struggled with what was a fairly difficult coding challenge that Reinders had selected for this match. As he had often reminded the teams, if they were not constrained by the 10 minute time limit, these parallel coding experts could have optimized the code to perform at the same or even better level than the original code had before Reinders deconstructed it for the purposes of the competition. As time ran out, The Brilliant Dummies managed to eke out slightly better performance and thus defeated the defending champions. The Brilliant Dummies would move on to the final round to face the winner of the EXAMEN/Coding Illini semi-final match.

 

In the other semifinal match, the Coding Illini took on the EXAMEN. At the end of the trivia round, the Coding Illini were in the lead. But as the parallel coding portion of the challenge kicked in, the EXAMEN looked to be the winner…until the Coding Illini submitted multiple MAKE commands at the last second to pull out a victory by just a small margin. They had used the same strategy on the EXAMEN that the EXAMEN had used in their match against Taiji. Coding Illini had once again made it to the final round and set up the final match with The Brilliant Dummies.

Read more >

Building a Truly Collaborative Enterprise

The benefits of a highly collaborative enterprise are a given. It’s not just the positive impact on business results, such as reduced time to market, better product quality, and improved customer satisfaction; the benefits also translate into better knowledge and people retention, workforce motivation, and cohesiveness across the overall organization. On the other hand, the challenges of ingraining a culture of collaboration within the organization are equally large, if not larger.

 

A fundamental level of collaboration does happen in every enterprise. People share content, files, e-mails, ideas, apps, and whatever else is necessary to get the work done. I call this collaboration by necessity. It includes demonstrations of collaborative behavior when ‘collaboration’ is mandated by senior management. Collaboration by choice is when people proactively start any task with a collaborative mindset, in the absence of any mandate, necessity, or obligation.

 

When thinking of creating a collaborative organization, start with a people-centric approach instead of tools and technology. Don’t be afraid to review and revamp the holy cows of annual performance reviews, rewards and recognition, and career promotions. Identify the key areas where you would like to see more collaboration and remove any hurdles – process, workflow, budget, and tools – that would be a hindrance. Define a balanced scorecard that gives you an indication of progress, not just motion.

 

Tools and Technology


Ask any IT manager or technologist how to improve collaboration within the organization, and they will come up with a list of tools and technologies that, once deployed, will supposedly guarantee improved collaboration. A fancy-looking dashboard will show how many groups have been created, how many documents and other pieces of content have been shared, comments posted, adoption rates, and other indicators that, collectively, are expected to show how much collaboration is happening within the organization.

 

As someone once said, “Do not confuse motion with progress. A rocking horse keeps moving but does not make much progress.” Indicators and dashboards have to be developed that reflect the impact on business results. Have we accelerated the design, development, or some other process? Has the day-1 quality of our product improved? In order to track return on investment, the dashboard has to include hard data that shows a clear and direct impact on business results; e.g., the number of support calls dropped by 50% with the new product launch compared to the previous one.

 

Processes and Workflow

In most cases, when an organization selects tools and technologies for enabling collaboration, it compares features and functions. In fact, I would go out on a limb and say that there is never any mapping done to see whether the selected tools will adapt to the processes and workflow of the organization. It is usually assumed that management mandate, training, and change management will encourage users to adapt to the tools instead of the other way around.

 

This assumption works only if management is also willing to do away with the processes that are a hurdle to frictionless collaboration. If the processes and workflow are not in sync with the tools, the extra burden of adapting to those tools will erode the productivity of the workforce. Yes, there will be some productivity loss during the ramp-up phase, but in steady state the collaboration tools and the organization’s processes should be in sync so that collaboration is frictionless.

 

People and Incentives

While tools, technologies, and processes enable or facilitate collaboration, it is people who actually collaborate. Unfortunately, this fact usually comes as an afterthought to most organizational leaders. On more than one occasion I have read and heard about the typical management chutzpah of announcing restructuring, cutbacks, and layoffs on one hand while ‘encouraging’ the organization to become more collaborative and share knowledge on the other!

The other irony I see is that in most knowledge-based industries, where collaboration is of paramount importance and can clearly create a differentiator, the incentives are stacked against it. Individual performance is rewarded more than team performance. Deep expertise is touted more than collaborative results. Teams are scattered around the globe without any globalization strategy in place that is conducive to collaboration. Travel budgets are cut on the assumption that video conferencing can replace face-to-face, highly interactive discussions and team building. In short, the human and humane aspect is ignored under the faulty assumption that technology can bridge the gap.

Increasing collaboration within the organization is about culture shift, management and leadership, and people empowerment, supplemented with tools and technologies. The strategy should be thought out at the highest possible level of the organization instead of being driven bottom up.

This shift in the mindset and behavior of the organization is complex and requires focused attention from management. It cannot happen overnight and, if ignored, the organization will revert to non-collaborative behavior very rapidly.

It can be done, and the rewards are worth the effort!

 

Opinions expressed herein are my own and do not reflect that of my employer, Intel Corporation. My other posts can be read here and more about me is available on my website.

Read more >

What Is Mobile Business Intelligence?

You might have heard this statistic by now: more people own a cell phone than a toothbrush.

 

In a Forbes post, Maribel Lopez lists a number of recent statistics about mobility. “While we could debate the numbers, the trend is clear,” she writes. “The pace of mobile adoption across devices and applications is accelerating.” Mobility is no longer a nice-to-have option. Instead, it’s become a must for many businesses. Many surveys support this view. According to the Accenture CIO Mobility Survey 2013, “79% of respondents cited mobility as a revenue-generator and 84% said mobility would significantly improve customer interactions.”

 

The evolution of mobile BI

 

With this paradigm shift comes the natural extension of business intelligence (BI) to mobile business intelligence (mobile BI), sometimes called mobile intelligence. These terms may mean different things to different people and are sometimes used interchangeably, but your perception of mobile BI will be influenced primarily by your understanding of BI.

 

In my post “What Is Business Intelligence?” I defined BI as the framework that enables organizations of all sizes to make faster, better-informed business decisions. Mobile BI extends this definition and puts the emphasis on the application of mobile devices such as smartphones or tablet computers.

 

Therefore, you can argue that the fundamentals remain unchanged—Mobile BI is the enabler that, if designed, implemented, and executed effectively, can help organizations drive growth and profitability.

 

However, the way organizations go about realizing the true value of mobile BI may depend on the state of their enterprise mobility (for example, whether or not a formal mobile enterprise strategy and a road map exist) and the level of their BI maturity.

 

Harnessing the power of mobile BI

 

Mobile BI is more prevalent and more relevant today because the gap between the experience of traditional BI content consumed on a desktop PC and that accessed on a mobile device is disappearing rapidly. We now talk about the gap between a smartphone and a tablet device. The tablet devices are getting smaller both in size and weight to compete with our smartphones.

 

Rapid growth in areas such as the cloud, in-memory technology, big data, and predictive analytics is fueling this innovation cycle. As a result, companies are looking for ways to harness the power of mobile BI through innovation and without disruption.

 

As businesses face more obstacles and are forced to deal with more complex challenges, they increasingly require greater mobile access to more processed data coming from both structured sources (such as sales data by markets and geography), and unstructured sources (like social media or email data that can’t be easily queried with traditional tools and technologies).

 

Companies at the leading edge seek to exploit mobile BI to support a workforce that’s becoming more and more mobile.

 

Mobile BI can become a key differentiator

 

According to IDC, the “world’s mobile worker population will reach 1.3 billion, representing 37.2% of the total workforce by 2015.” The share of the mobile workforce is even higher if we focus on business roles such as sales, where mobility is a critical component of success. Business models that rely on insight delivered through outdated or limited capabilities can no longer compete in an ever-expanding global market, which simply dictates mobile execution.

 

Today, there’s no doubt that both for-profit and not-for-profit organizations must deliver more for their customers and stakeholders. In this context, mobile BI can become a key differentiator in helping organizations cope with both the complexity and the real-time challenges they face with the execution of their strategy.

 

It’s a transformative force that has the power to change how businesses deliver value today, because mobile BI further breaks down the walls of information silos, dramatically extending the ability to gain actionable insight through data-driven analyses for decision makers at all levels of an organization. Where do you see mobile BI adding value to your organization?

 

Connect with me on Twitter (@KaanTurnali) and LinkedIn.

 

This story originally appeared on The Decision Factor.

Read more >

Part 4 – Transforming the Workplace: An Integrated Strategy for Change

This is part 4 of my blog series about transforming the workplace. Be sure to read part 1, part 2, and part 3 in the series.


“If you can’t help people change, technology changing all around them won’t make the slightest difference.”

– Dave Coplin, Business Reimagined

 

It’s a common misconception in the business world that new technology equals change. This blog series has been exploring how the workplace is changing and the inevitable challenges of innovation. And while we know that technology is key to achieving transformation in the workplace, it’s only part of the story. Here I want to discuss the final component: applying an inclusive, integrated strategy to facilitate change throughout the organization with the right partnerships and culture change.


The need for a triumvirate approach: Culture, IT, and facilities

After the technology foundation is established, the rubber meets the road. The next step is putting the vision of workplace transformation into practice. To enable true transformation across the business, Intel recommends a triumvirate approach to address company culture, IT, and facilities.


Culture: Supporting change at every turn
A few companies are leading the pack when it comes to progressive culture. Why? Because they have embraced new styles of working from the top down, which in many cases involves playing games, supporting physical fitness, and so on. To facilitate change throughout your organization, it’s important to embrace the following key attributes:

  • Innovation
  • Velocity
  • Openness
  • Accountability

 

And a final note on the technology angle: One of the major challenges companies face is “tool fatigue.” If a new tool is brought in without an explanation of its value and an introduction, employees may forgo it as unnecessary and, ultimately, the project is seen as a failure. The missing link here is simply leadership and communication.

IT via the SMAC stack

There is consensus across the IT industry and analyst community that the social, mobile, analytics, cloud (SMAC) paradigm is the new platform for enabling the digital business. In the convergence of these four components, IT can change the way work gets done and ultimately drive transformation.

 

Social

Social computing provides a natural, intuitive way for people to communicate and collaborate by eliminating traditional communication hierarchies.

Mobile
Today, work is no longer a place that you go to; it’s what you do. Mobile computing is what makes this possible, with the ability to work anywhere, anytime, for greater business agility.

Analytics

Advanced analytics deliver insights at the point of decision to help speed decision making. Analytics can also enable a “Smart Advisor” to bring business-critical data to all employees.

Cloud

With shared IT systems in the cloud, employees can have access to the information they need anytime, on any device, from any location—including device and data synchronization.

 

Facilities innovation
Finally, to support new ways of working, you need the right work environment. It all boils down to achieving harmony and alignment between the workplace and the work style. This means that physical spaces should be places that employees actively want to engage in, rather than feeling like they have to be there.

On one hand, facilities need to cater to the needs of the work group and collaborators, yet they must also serve those needing interruption-free environments for intensive tasks. Unfortunately, many offices today offer little to inspire people, poor collaboration facilities, and inefficient space utilization that ultimately impacts the bottom line.

It’s also interesting to consider how facilities and IT are set to come together. For example, a conference table in a meeting room today is just a table. Yet in the near future, it may be equipped with a touch-screen surface and Internet connectivity. Due to this inevitable crossover between facilities and IT, an ideal workplace transformation strategy requires those responsible for facilities and IT to work together to realize the best environment.

 

Intel paves the way

In the next and final blog in this series, I’ll step through some examples of how Intel has implemented a triumvirate approach across its culture, IT, and facilities. And as previously mentioned, I’m currently working on a paper that expands on Intel’s vision of workplace transformation; it will be available soon.

How is your organization managing workplace transformation? Please join the conversation and share your thoughts. And be sure to click over to the Intel® IT Center to find resources on the latest IT topics.

 

Until the next time …

 

Jim Henrys, Principal Strategist

Read more >

Intel Adds New Dimension to SSDs

Imagine a fast and powerful 1 terabyte solid-state drive (SSD) that fits on your fingertip.

 

That’s enough storage capacity to hold more than 200,000 songs or more than 150 hours of high definition video! The day is coming when your tablet will have enough room to hold every song you can imagine, plus all your photos, videos and more. And it’s coming sooner than you think.

 

At Intel’s Investor Day yesterday, Rob Crooke, Intel vice president and general manager of the Non-Volatile Memory Solutions Group (NSG), unveiled Intel’s plans to begin production of 3D NAND for use in consumer and data center SSDs starting in the second half of 2015.

 

3D NAND is a sensational technological advancement allowing SSDs to store more data in less space, increase overall drive capacity, reduce power consumption and improve system-level performance at a lower cost to users. Intel achieves this by packing more storage density onto the SSD. It’s like taking a plot of land and building a high-rise apartment building as opposed to a single-family home. To show off the new 3D NAND, Rob presented from a computer featuring a prototype SSD utilizing the new technology.

 

Intel capitalized on its decades-long history of microchip manufacturing innovation to overcome the challenge of drilling 4 billion holes in a silicon chip. The result is an unprecedented density of 256 Gbits per die, which means we can deliver higher capacities at a lower cost. This enables us to continue to deliver on the promise of Moore’s Law by doubling storage capacity and letting our CPUs really show off their unique capabilities and tremendous performance. The potential 3D NAND brings to Intel SSDs is truly inspiring.

 

In data center applications, having more storage closer to the CPU enables fast transactions, quick access to real-time data, and short wait times for content. Intel’s 3D NAND delivers stunning performance and is very cost effective. Just one 4-inch server rack of Intel SSDs can deliver 11 million IOPS (input/output operations per second). For comparison, you would need a rack of hard disk drives measuring 500 feet tall to churn out the same performance. Beyond the savings in the cost of the drives, imagine the immense savings in power and cooling!
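As a rough sanity check on that comparison, here is a back-of-envelope sketch. The per-drive and packaging figures are assumptions chosen for illustration (roughly 200 random IOPS for a fast enterprise hard drive, a dozen drives per rack unit, 1.75 inches per rack unit); they are not numbers from Intel’s announcement.

    # Back-of-envelope: how tall a stack of hard drives would need to be to match
    # the 11 million IOPS quoted for one rack of Intel SSDs.
    # All per-drive and packaging figures are illustrative assumptions.

    TARGET_IOPS = 11_000_000      # IOPS quoted for one rack of Intel SSDs
    HDD_IOPS = 200                # assumed random IOPS for a 15K RPM enterprise HDD
    HDDS_PER_RACK_UNIT = 12       # assumed drives packed into one 1U enclosure
    INCHES_PER_RACK_UNIT = 1.75   # standard rack unit height

    hdds_needed = TARGET_IOPS / HDD_IOPS
    rack_units = hdds_needed / HDDS_PER_RACK_UNIT
    height_feet = rack_units * INCHES_PER_RACK_UNIT / 12

    print(f"HDDs needed:      {hdds_needed:,.0f}")   # ~55,000 drives
    print(f"Rack units:       {rack_units:,.0f}")    # ~4,600U
    print(f"Rack height (ft): {height_feet:,.0f}")   # several hundred feet

With these assumed figures the stack comes out in the hundreds of feet, the same order of magnitude as the 500-foot comparison above.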

 

For consumers it means more storage where you need it: tablets and notebooks for photos, music and games; home theaters for hours of HD content delivered with almost no lag; and in vehicle infotainment systems to store maps, music and more. These benefits are just the tip of the iceberg.

 

Intel will continue its fruitful and long-term relationship with Micron and jointly held IM Flash Technologies (IMFT) to produce the new multi-level cell (MLC) flash chips with products available in the second half of 2015. For more information on Intel SSDs and non-volatile memory, visit http://www.intel.com/ssd.

 

Frank Ober

Read more of Frank’s SSD related posts

Read more >

The Finer Points Of Evaluating Battery Life

Our increasingly mobile lifestyles force us to rely heavily on our devices’ batteries. We’re constantly seeking to get a little extra juice out of our laptops, phones, and tablets. Tablets, in particular, have become a prominent platform for both the home and the office, and we rely on them to offer better battery life than many of our other devices. While some tablets boast 12+ hours of battery life, it’s important to understand that these devices are much more than just a battery – the rest of the device’s hardware specifications may have even more to do with battery life than the battery itself does.

 

For example, it’s a common misconception that so-called “power-efficient” processors drain batteries more slowly and therefore give you a device that can put in a full day of work. In many cases the opposite is true. Full-powered processors that perform computations quickly and efficiently can actually have less impact on a device’s battery by completing tasks and returning the device to a resting state faster.
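A quick worked example makes this “race to idle” effect concrete. The wattages and task durations below are invented purely for illustration and do not describe any particular processor.

    # Illustrative "race to idle" arithmetic: energy = power x time, summed over
    # the active and idle phases of a fixed one-hour window.
    # All numbers are invented for illustration only.

    def energy_wh(active_watts, active_hours, idle_watts, window_hours):
        """Watt-hours consumed over the window: active phase plus idle remainder."""
        idle_hours = window_hours - active_hours
        return active_watts * active_hours + idle_watts * idle_hours

    WINDOW_HOURS = 1.0

    # "Power-efficient" part: lower draw, but takes most of the hour to finish the task.
    slow = energy_wh(active_watts=4.0, active_hours=0.9, idle_watts=0.5, window_hours=WINDOW_HOURS)

    # Full-powered part: higher draw, but finishes quickly and idles the rest of the hour.
    fast = energy_wh(active_watts=8.0, active_hours=0.3, idle_watts=0.5, window_hours=WINDOW_HOURS)

    print(f"Slower, lower-power processor: {slow:.2f} Wh")  # 4*0.9 + 0.5*0.1 = 3.65 Wh
    print(f"Faster, full-power processor:  {fast:.2f} Wh")  # 8*0.3 + 0.5*0.7 = 2.75 Wh

Under these made-up figures the faster processor draws twice the power while active yet uses less energy over the hour, because it spends far more of that hour in its low-power resting state.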


Battery life also depends on many factors beyond processing speed. While this is primarily a concern for laptops and 2-in-1 devices, connected peripherals like external hard drives and speakers can drain battery life from your device, lowering the probability that you’ll make it through the day without a charge. Other factors that determine your device’s battery life include your operating system, the number of running programs, and whether or not you’re running an animated wallpaper.

 

Operating System & Battery Life

 

Some operating systems are designed to work in conjunction with your device’s processor to optimize battery life. Google and Microsoft coordinate with chip makers to ensure tablet processors are designed with a specific mobile operating system in mind. Additionally, your operating system may have power-saving features that allow you to control display brightness and other settings to decrease power consumption.

 

Wallpapers & Background Processes

 

Some of the biggest battery killers hide behind the scenes. While animated wallpapers can be a fun way to personalize your device, enabling them on your tablet can drain your battery faster than you want. The animations represent a persistent task that your processor has to run, which lowers power efficiency.

 

In addition to your wallpaper, the number of apps running in the background can significantly affect your battery life. To keep them in check, consider quitting any applications not in immediate use in order to give your processor and battery a rest.

 

These are only a few of the factors that determine your device’s battery life. To learn more, read the blog Breaking Down Battery Life. You can also get a comprehensive look at how your device distributes power by checking out this white paper on evaluating battery life.

Read more >

Transforming Healthcare through Big Data

Frustration with electronic health record (EHR) systems notwithstanding, the data aggregation processes that have grown out of healthcare’s adoption of the electronic health record are now spawning analytical capabilities that were unthinkable just 15 years ago. By leveraging big data to track everything from patient recovery rates to hospital finances, healthcare organizations are capturing and storing data sets that are changing the way doctors, caregivers and payers tackle larger scale health issues.

 

It’s not just happening on the clinical side, either, where EHRs are extending real-time patient information to doctors and predictive analytics are helping physicians to better track and understand their patients’ medical conditions.

 

In Kentucky, for example, tech investments by the state’s largest provider systems are estimated at over $600 million, a number that doesn’t even reflect investments from two of the biggest local organizations, Baptist Health and University of Kentucky HealthCare. The data collected by these hospitals includes—and far exceeds—the EMR basics mandated under ARRA, according to an article in The Lane Report.

 

While the goal of improving quality of care is, of course, a key driver of such investments, so is the government mandate tying Medicare and Medicaid reimbursement to outcomes. According to a recent report from McKinsey & Company, more than 50 percent of doctors’ offices and almost 75 percent of hospitals nationwide are managing patient information electronically. So, it’s not surprising that big data is catching the attention of healthcare’s management teams.

 

By quantifying and analyzing an endless variety of metrics—including things like R&D, claims, costs, and insights gleaned from patients—the industry is refining its approach to both preventative care and treatment, and saving money in the process. A good example can be found in the analysis of data surrounding readmission rates, which some hospitals are now using to stave off premature releases and, by extension, exorbitant penalties.

 

Others, such as Brigham and Women’s Hospital, already are applying algorithms to generate savings beyond readmissions, in areas that include: high-cost patients, triage, decompensation, adverse events, and treatment optimization.

 

While there’s room to debate the extent to which big data is improving patient outcomes—or the scope of savings attributable to big data initiatives given the associated system costs—the trend toward leveraging data for better outcomes and savings will only continue to grow as CIOs advance meaningful implementations of solutions, and major technology companies continue to expand the industry’s basket of options.

 

How is your healthcare organization applying big data to overcome challenges? Have the results proven worthwhile?

 

As a B2B journalist, John Farrell has covered healthcare IT since 1997 and is a sponsored correspondent for Intel Health & Life Sciences.

Read John’s other blog posts

Read more >

Recapping Intel Highlights at SAP TechEd 2014: Videos and Animations

SAP TechEd 2014 at Las Vegas was an exciting and enjoyable show, brimming with opportunities to learn about the latest innovations and advances in the SAP ecosystem. Intel had its own highlights, as I explain in this video overview of Intel’s key activities. These included the walk-on appearance of Shannon Poulin, vice president of Intel’s Data Center Group, during SAP President Steve Lucas’s executive keynote. Shannon did his best to upstage the shiny blue Ford Mustang that Steve gave away during the keynote, but that was a hard act to top. Curt Aubley, Intel Data Center Group’s vice president and CTO, took part in an executive summit with Nico Groh, SAP’s data center intelligence project owner, that addressed ongoing Intel and SAP engineering efforts to optimize SAP HANA* power and performance management on Intel® architecture.

 

I was at the conference filming man-on-the-street interviews with some of Intel’s visiting executives. I had a great conversation with Pauline Nist, general manager of Intel’s Enterprise Software Strategy, on the subject of Cloud: Public, Private, and Hybrid for the Enterprise, and the future of the in-memory data center. I also spoke to Curt Aubley about How Intel is Influencing the Ecosystem Data Center and how sensors and telemetry can provide real-time diagnostics on the health of your data center.

 

In the Intel booth, we also had the fun of launching our latest animation, Intel and SAP: The Perfect Team for Your Real-Time Business, a light-hearted look at the rich, long-standing alliance between SAP and Intel. In the video, the joint SAP HANA and Intel® Xeon® processor platform has the power of a space rocket—a bit of an exaggeration, perhaps. But SAP HANA is a mighty powerful in-memory database, designed from the ground up for Intel Xeon processors. Dozens of Intel engineers were involved in the development of SAP HANA, working directly with SAP to optimize SAP HANA for Intel architectures.

 

 

 

It’s not too late to catch some of the action from our booth! We filmed a number of our Intel Tech Talks, so click on these links to watch industry experts discussing the latest news and advances in the overlapping orbits of SAP and Intel.

 

 

Follow me at @TimIntel and search #TechTim to get the latest on analytics and data center news and trends.

Read more >

On the Ground at SC14: Fellow Traveler Companies

Let’s talk about Fellow Travelers at SC14 – companies that Intel is committed to collaborating with in the HPC community. In addition to the end-user demos in the corporate booth, Intel took the opportunity to highlight a few more companies in the channel booth and on the Fellow Traveler tour.

 

Intel is hosting three different Fellow Traveler tours: Discovery, Innovation, and Vision. A tour guide leads a small group of SC14 attendees through the show floor to visit eight company booths (with a few call-outs to additional Fellow Travelers along the way). Yes, you wear an audio headset to hear your tour guide. And yes, you follow a flag around the show floor. On our 30-minute journey around the floor, my Discovery tour visited (official stops are bolded):

  • Supermicro: Green/power efficient supercomputer installation at the San Diego Supercomputer Center
  • Cycle Computing: Simple and secure cloud HPC solutions
  • ACE Computers: ACE builds customized HPC solutions, and customers include scientific research/national labs/large enterprises. The company’s systems handle everything from chemistry to auto racing and are powered by the Intel Xeon processor E5 v3. Fun fact, the company’s CEO is working on the next EPEAT standard for servers.
  • Kitware: ParaView (co-developed by Los Alamos National Laboratory) is an open-source, multi-platform, extensible application designed for visualizing large data sets.
  • NAG: A non-profit working on numerical analysis theory, they also take on private customers and have worked with Intel for decades on tuning algorithms for modern architecture. NAG’s code library is an industry standard.
  • Colfax: Offering training for parallel programming (over 1,000 trained so far).
  • Iceotope: Liquid cooling experts, their solutions offer better performance/watt than liquid and air cooling hybrid.
  • Huawei: Offering servers, clusters (they’re Intel Cluster Ready certified) and Xeon Phi coprocessor solutions.
  • Obsidian Strategics: Showcasing a high-density Lustre installation.
  • AEON: Offering fast and tailored Lustre storage solutions in a variety of industries including research, scientific computing and entertainment; they are currently architecting a Lustre storage system for the San Diego Supercomputer Center.
  • NetApp: Their booth highlighted NetApp’s storage and data management solutions. A current real-world deployment includes 55PB of NetApp E-Series storage that provides over 1TB/sec to a Lustre file system.
  • Rave Computer: The company showcased the RT1251 flagship workstation, featuring dual Intel Xeon processor E5-2600 series with up to 36 cores and up to 90MB of combined cache. It can also make use of the Intel Xeon Phi co-processor for 3D modeling, visualization, simulation, CAD, CFD, numerical analytics, computational chemistry, computational finance, and digital content creation.
  • RAID Inc: Demo included a SAN for use in big data, running the Intel Enterprise Edition of Lustre with OpenZFS support. RAID’s systems accelerate time to results while lowering costs.
  • SGI: Showcased the SGI ICE X supercomputer, the sixth generation in the product line and the most powerful distributed memory system on the market today. It is powered by the Intel Xeon processor E5 v3 and includes warm water cooling technology.
  • NCAR: Answering the question: how do you refactor an entire climate code? NCAR, in collaboration with the University of Colorado at Boulder, is an Intel Parallel Computing Center aiming to develop the tools and knowledge needed to improve the performance of CESM, WRF, and MPAS on Intel Xeon and Intel Xeon Phi processors.


Intel Booth – Fellow Traveler Tours depart from the front right counter

 

After turning in my headset, I decided to check out the Intel Channel Pavilion next to Intel’s corporate booth. The Channel Pavilion has multiple kiosks (so many that they switched halfway through the show), each showcasing a demo with Intel Xeon and/or Xeon Phi processors, and highlighting a number of products and technologies. Here’s a quick rundown:

  • Aberdeen: Custom servers and storage featuring Intel Xeon processors
  • Acme Micro: Solutions utilizing the Intel Xeon processor and Intel SSD PCIe cards
  • Advanced Clustering Technologies: Clustered solutions in 2U of space
  • AIC: Alternative storage hierarchy to achieve high bandwidth and low latency via Intel Xeon processors
  • AMAX: Many core HPC solutions featuring Intel Xeon processor E5-2600 v3 and Intel Xeon Phi coprocessors
  • ASA Computers: Einstein@Home uses an Intel Xeon processor based server to search for weak astrophysical signals from spinning neutron stars
  • Atipa Technologies: Featuring servers, clustering solutions, workstations and parallel storage
  • Ciara: The Orion HF 620-G3 featuring the Intel Xeon processor E5-2600 v3
  • Colfax: Colfax Developer Training on efficient parallel programming for Xeon Phi coprocessors
  • Exxact Corporation: Accelerating simulation code up to 3X with custom Intel Xeon Phi coprocessor solutions
  • Koi Computers: Ultra Enterprise Class servers with the Intel Xeon processor E5-2600 v3 and a wide range of networking options
  • Nor-Tech: Featuring a range of HPC clusters/configurations and integrated with Intel, ANSYS, Dassault, Simula, NICE and Altair
  • One Stop Systems: The OSS 3U high density compute accelerator can utilize up to 16 Intel Xeon Phi coprocessors and connect to 1-4 servers

 

The Intel Channel Pavilion

 

Once completing the booth tours, I decided to head back to the Intel Parallel Computing Theater to listen to a few more presentations on how companies and organizations are putting these systems into action.

 

Joseph Lombardo, from the National Supercomputing Center for Energy and the Environment, stopped by the theater to talk about the new data center the NSCEE has recently put into action, as well as its use of a data center from Switch Communications. The NSCEE faces a few challenges – massive computing needs (storage and compute power), time-sensitive projects (those with governmental and environmental significance), and numerous, complex workloads. In its Alzheimer’s research, the NSCEE compares the genomes of Alzheimer’s patients with normal genomes. The center worked with Altair and Intel on a system that reduced its runtime from 8 hours to 3 hours while improving system manageability and extensibility.

 

Joseph Lombardo from the NSCEE

 

Then I listened in to Michael Klemm from Intel talking about offloading Python to the Intel Xeon Phi coprocessor. Python is a quick, high-productivity language (packages include IPython, NumPy/SciPy, and pandas) that can help compose scientific applications. Michael talked through the design principles for the pyMIC offload infrastructure: simple usage, a slim API, fast code, and keeping control in the programmer’s hands.
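To give a feel for what a slim, explicit offload API looks like in practice, here is a minimal sketch in the spirit of those design principles. The offload module, stream methods, and kernel library names below are hypothetical stand-ins rather than the exact pyMIC API, which you should take from the project’s own documentation and samples.

    import numpy as np

    # Hypothetical offload module written in the spirit of the design principles
    # described above (simple usage, slim API, the programmer keeps control).
    # The names are illustrative stand-ins, not the released pyMIC interface.
    import offload  # hypothetical

    # Pick a coprocessor and an execution stream explicitly.
    device = offload.devices[0]
    stream = device.get_default_stream()

    # Plain NumPy arrays on the host.
    a = np.random.random((4096, 4096))
    b = np.random.random((4096, 4096))
    c = np.zeros((4096, 4096))

    # Explicitly stage the operands on the coprocessor.
    offl_a = stream.bind(a)
    offl_b = stream.bind(b)
    offl_c = stream.bind(c)

    # Invoke a natively compiled kernel (for example, a dgemm wrapper) on the device.
    kernels = device.load_library("libmykernels.so")  # hypothetical shared library
    stream.invoke(kernels.dgemm_kernel, offl_a, offl_b, offl_c, 4096, 4096, 4096)

    # Copy the result back and wait for completion before using it on the host.
    offl_c.update_host()
    stream.sync()

    print(c[0, 0])

The point of the sketch is the shape of the workflow (bind, invoke, update, sync) with nothing hidden from the programmer, which is what keeps the API slim and the control explicit.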

 

Michael Klemm from Intel

 

Wolfgang Gentzsch from UberCloud covered HPC for the masses via cloud computing. Currently, more than 90 percent of an engineer’s or scientist’s in-house HPC work is done on workstations and about 5 percent on servers. Less than 1 percent is done using HPC clouds, which represents a ripe opportunity if challenges like security/privacy/trust, control of data (where and how your data runs), software licensing, and the transfer of heavy data can be resolved. There are some hefty benefits – pay per use, easily scaling resources up or down, low risk with a specific cloud provider – that may start to entice more users shortly. UberCloud currently has 19 providers and 50 products in its marketplace.

 

Wolfgang Gentzsch from UberCloud

 

The Large Hadron Collider is probably tops on my list of places to see before I die, so I was excited to see Niko Neufeld from LHCb CERN talk about their data acquisition and storage challenge. I know, yet another big data problem. But the LHC generates one petabyte of data EVERY DAY. Niko talked through how they’re able to use sophisticated filtering (via ASICs and FPGAs) to get that down to 30PB of stored data a year, but that’s still an enormous challenge. The team at CERN is interested in looking at the Intel Omni-Path Architecture to help them move data faster, and then integrating Intel Xeon + FPGA alongside Intel Xeon and Intel Xeon Phi processors to shave the amount of stored data down even further.

 

Niko Neufeld from LHCb CERN

 

And finally, the PUCC held matches 4 and 5 today – the last of the initial matches and the first of the playoffs. In the last regular match, Taji took on the Examen and, in a stunning last-second “make” run, the Examen took it by a score of 4763 to 2900. In the afternoon match, the Brilliant Dummies took on the Gaussian Elimination Squad (the defending champs). It was a hard-fought battle – for many of the questions, both teams had answered before the multiple-choice options were even shown to the audience. In the end, the Brilliant Dummies were able to eliminate the defending champions by a score of 5082 to 2082. Congratulations to the Brilliant Dummies; we’ll see you in the final on Thursday.

 

We’ll see the Brilliant Dummies in the PUCC finals on Thursday

Read more >

The Final Day for the 2014 Parallel Universe Computing Challenge @ SC14

Thursday, November 20, 2014

Dateline:  New Orleans, LA, USA

 

This morning at 11:00 AM (Central time, New Orleans, LA), the second semi-final match of the 2014 Parallel Universe Computing Challenge will take place at the Intel Parallel Theater (Booth 1315) as the Coding Illini team from NCSA and UIUC faces off against the EXAMEN from Europe. The Coding Illini earned their spot in this semi-final match by beating the team from Latin America (SC3), and the EXAMEN earned their semi-final slot by beating team Taiji from China.

 

The winner of this morning’s semi-final match will go on to play the Brilliant Dummies from Korea in the final competition match this afternoon at 1:30PM, live on stage from Intel’s Parallel Universe Theater.

 

The teams are playing for the grand prize of $26,000 to be donated to a charitable organization of their choice.

 

Don’t miss the excitement:

  • Match #5 is scheduled at 11:00AM
  • The Final Match is scheduled at 1:30PM

 

Packed crowd watching the PUCC

Read more >

5 Questions for Dr. Sandhya Pruthi, Medical Director for Patient Experience, Breast Diagnostic Clinic, Mayo Clinic Rochester

Clinicians are on the front lines when it comes to using healthcare technology. To get a doctor’s perspective on health IT, we caught up with Dr. Sandhya Pruthi, medical director for patient experience, breast diagnostic clinic, at Mayo Clinic Rochester, for her thoughts on telemedicine and the work she has been undertaking with remote patients in Alaska.

 

Dr. Sandhya Pruthi

 

Intel: How are you involved in virtual care?

 

Pruthi: I have a very personal interest in virtual care. I have been providing telemedicine care to women in Anchorage, Alaska, right here from my telemedicine clinic in Rochester, Minnesota. I have referrals from providers in Anchorage who ask me to meet their patients using virtual telemedicine. We call it our virtual breast clinic, and we’ve been offering the service twice a month for the past three years.

 

Intel: What services do you provide through telemedicine?

 

Pruthi: We know that in some remote parts of the country, it’s hard to get access to experts. What I’ve been able to provide remotely is medical counseling for women who are considered high risk for breast cancer. I remotely counsel them on breast cancer prevention and answer questions about genetic testing for breast cancer when there is a very strong family history. The beauty is that I get to see them and they get to see me, rather than just writing out a note to their provider and saying, “Here’s what I would recommend that the patient do.”

 

Intel: How have patients and providers in Alaska responded to telemedicine?

 

Pruthi: We did a survey and asked patients about their experience and whether they felt that they received the care they were expecting when they came to a virtual clinic. The result was 100 percent satisfaction by the patients. We also surveyed the providers and asked if their needs were met through the referral process. The results were that providers said they were very pleased and would recommend the service again to their patients.

 

Intel: Where would you like to see telemedicine go next?

 

Pruthi: The next level that I would love to see is the ability to go to the remote villages in the state of Alaska, where people have an even harder time coming to a medical center. I’d also like to be able to have a pre-visit with patients who may need to come in for treatment so we can better coordinate their care before they arrive.

 

Intel: When it comes to telemedicine, what keeps you up at night?

 

Pruthi: Thinking about how we can improve the patient experience. I really feel that for a patient who is dealing with an illness, the medical experience should wow them. It should be worthwhile to the patient and it should follow them on their entire journey—when they make their appointment, when they meet with their physician, when they have tests done in the lab, when they undergo procedures. Every step plays a role in how they feel when they go home. That’s what we call patient-centered care.

Read more >

On the Ground at SC14: Technical Sessions, Women in Science and Technology, and the Community Hub

Apparently there’s a whole world that exists beyond the SC14 showcase floor…the technical sessions. Intel staffers have been presenting papers (on Lattice Quantum Chromodynamics and Recycled Error Bits), participating in panels (HPC Productivity or Performance) and delivering workshops (covering OpenMP and OpenCL) over the past few days, with a plethora still to come.

 

To get a flavor for the sessions, I sat in on the ACM Gordon Bell finalist presentation: Petascale High Order Dynamic Rupture Earthquake Simulations on Heterogeneous Supercomputers. It’s one of five papers in the running for the Gordon Bell Prize and was presented at the conference by Michael Bader from TUM. The team included scientists from TUM, LMU Munich, the Leibniz Supercomputing Centre, TACC, the National University of Defense Technology, and Intel. Their paper details the optimization of the seismic software SeisSol for Intel Xeon Phi coprocessor platforms, achieving impressive model complexity in simulating the propagation of seismic waves. The hope is that optimized software and supercomputing can be used to understand the wave movement of earthquakes, eventually anticipating real-world consequences to help adequately prepare for and minimize aftereffects. The Gordon Bell Prize will be announced on Thursday, so good luck to the team!

 

Michael Bader from TUM

 

From there I headed back to the Intel booth to see how the demos are helping to solve additional real-world problems. First up was the GEOS-5/University of Tennessee team, which deployed a workstation with two Intel Xeon processors E5 v3 and two Intel Xeon Phi coprocessors to run the VisIt app for visual compute analysis and rendering. GEOS-5 simulates climate variability on a wide range of time scales, from near-term to multi-century, helping scientists comprehend atmospheric transport patterns that affect climate change. It’s a real climate model (on a workstation!) that could be used to predict something like the spread and concentration of radiation around the world.

 

Predicting Climate Change with GEOS-5

 

Next up was the Ayasdi demo on precision medicine – a data analytics platform running on the Intel Xeon processor E5 v3 and a cluster with Intel True Scale Fabric that looks for similarities in data rather than relying on specific queries as searches. The demo shows how the shape of data can be employed to find unknown insights in large and complex data sets, something like “usually three hours after this type of surgery there is a fluctuation in vitals across patients.” The goal is to combine new mathematical approaches (topological data analysis, or TDA) with big data to identify biomarkers, drug targets, and potential adverse effects to support more successful patient treatment.

 

 

Ayasdi Precision Medicine Demo

 

Since I’m usually on a plane every couple of weeks, I was excited to talk to the Onera team about how they’re using the elsA simulation software to streamline aerospace engineering. The simulation capabilities of elsA enable reductions in ground-based and in-flight testing requirements. The Onera team optimized elsA to run in the highly scalable environment of an Intel Xeon and Intel Xeon Phi processor-based cluster with Intel True Scale Fabric and SSDs, allowing for large-scale modeling.

 

Aerospace Design Demo from Onera

 

Last up, I headed over to the team at the Texas Advanced Computing Center to talk about their demo combining ray tracing (OSPRay) and computing power (Intel Xeon processor E5 v3) to run computational fluid dynamics simulations and assemble flow data from every pore in the rock of Florida’s Biscayne Bay. Understanding how the aquifer transports water and contaminants is critical to providing safe resources, but eventually the researchers hope to apply the same flow-simulation approach to the human brain.

 

TACC Demo in Action

 

One of the areas in the Intel booth I’d yet to visit was the Community Hub, an area to socialize and collaborate on ideas that can help drive discoveries faster. Inside the Hub, Intel and various third parties are on hand to discuss technology directions, best known methods, and future use cases across a wide variety of technologies and topics. The hope is that attendees will create, improve, or expand their network of peers engaged in similar optimization and algorithm development.

 

One of the community discussions with the highest interest on Tuesday was led by Debra Goldfarb, Senior Director of Strategy and Pathfinding, Technical Computing at Intel. The Hub was packed for a session on encouraging Women in Science and Technology – the stats are pretty dismal and Intel is committed to changing that. The group brainstormed reasons for the gap and how we can begin to address it. A couple of resources for those interested in the topic: www.intel.com/girlsintech and www.womeninhpc.org.uk. Intel also attended the “Women in HPC: Mentorship and Leadership” BOF and will participate in the “Women in HPC” panel on Friday.

 

 

Above and below: Women in Science and Technology Community Hub discussion led by Debra Goldfarb

 

 

 

 

Women in HPC BOF

 

Community Hub discussions coming up on Wednesday include Fortran & Vectorization, OpenMP, MKL, Data Intensive HPC, Life Sciences and HPC, and HPC and the Oil and Gas industry.

 

At the other end of the booth, the Intel Parallel Universe Theater was hopping all day. I checked out a presentation from Eldon Walker of the Lerner Research Institute at the Cleveland Clinic, who discussed their 1.2-petabyte mirrored storage system (data center and server room) and their 270 terabytes of Lustre storage, which enable DNA sequence analysis, finite element analysis, natural language processing, image processing, and computational fluid dynamics. Dr. Eng Lim Goh from SGI presented the company’s energy-efficient supercomputers, innovative cooling systems, and SGI MineSet for machine learning. And Tim Cutts from the Wellcome Trust Sanger Institute made it through some audio and visual issues to present his topic on working with genomics and the Lustre file system, and how they solved a couple of tricky issues (a denial-of-service issue via samtools and performance issues with concurrent file access).

 

Eldon Walker, Lerner Research Institute

 

Dr. Eng Lim Goh, SGI

 

 

Tim Cutts, Wellcome Trust Sanger

 

And lastly, for those following along with the Intel Parallel Universe Computing Challenge: in match two, the Brilliant Dummies from Korea defeated the Linear Scalers from Argonne by a score of 5790 to 3588. And in match three, SC3 (Latin America) fell to the Coding Illini (NCSA and UIUC) by a score of 2359 to 5359, which means both the Brilliant Dummies and the Coding Illini move on in the Challenge. Matches 4 and 5 will be up on Wednesday. See you in booth 1315!

 

Read more >

The World is Your Office: A Study in Telecommuting

Image source: bestreviews.com


If you look down at your workspace right now and analyze the way it has changed in the past few decades, you’ll likely be amazed by the contrast. Technology has given us the capacity to eliminate waste and optimize our workplaces for productivity, but it has also fundamentally changed the way we work. Fewer ties to a physical desk in a physical workspace have led to an upswing in the mobile workforce. According to “The State of Telework in the U.S.” — which is based on U.S. Census Bureau statistics — businesses saw a 61% increase in telecommuters between 2005 and 2009.

 

IT decision makers have witnessed this growth from the trenches, where they enable the business to grow through technological advancements.  But there are several key questions IT leaders will face in the coming waves of virtualization…

 

  • What type of work model should be used to manage knowledge workers?
  • When workers are increasingly distributed globally at multiple physical locations, how do effective interpersonal relationships form and grow?
  • How will technology and people considerations impact the locations where people come together?
  • How can the office environment be configured to foster optimum worker productivity?
  • How will organizations source the best workers and cope with differing attitudes across a five-generation workforce?

 

Telecommuters Today

 

Though there are a significant number of mobile workers today, that number is still small compared to what it will one day be. According to “The State of Telework in the U.S.,” 50 million U.S. employees hold jobs that are telework-compatible, but only 2.9 million consider home their primary place of work, which represents 2.3 percent of the workforce. This means the full impact of virtualization has yet to be realized.

 

Some are dubious as to whether the workplace will continue to move in a virtualized direction. Rawn Shah, director and social business architect at Rising Edge, recently wrote on Forbes, “We are only starting to understand what the future of work looks like. In my view, the imagined idea of entirely virtual organizations is similar to how we used to think of the future as full of flying cars and colonies in space. Reality is much more invested in hybrid in-office plus remote scenarios. Physical space is still a strong element of work that we need to keep track of, and understand better to learn how we truly collaborate.”

 

Telecommuters Tomorrow

 

According to Tim Hansen in his white paper “The Future of Knowledge Work,” there are already several trends influencing the current workplace that will directly impact virtualization of the enterprise in the future:

 

  • The definition of “employee” is on the cusp of transformation
  • Dynamic, agile team structures will become the norm
  • The location of work will vary widely
  • Smart systems will emerge and collaborate with humans
  • A second wave of consumerization is coming via services

 

The questions IT leaders are asking now can be answered by isolating these already-present factors driving virtualization.

 

Our offices are changing rapidly — don’t let your employees suffer through legacy work models. Recognizing the change swirling around you will help you strategize for what’s coming on the horizon.

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

Read more >

Chief Human Resources Officers will be the Next Security Champion in the C-suite

HR and security? Don’t be surprised. Although a latecomer to the security party, HR organizations can play an important role in protecting assets and influencing good security behaviors. They are an influential force when managing the risks of internal threats, and they excel at the human aspects that are generally snubbed in the technology-heavy world of cybersecurity. At a recent presentation given to the CHO community, I discussed several overlapping areas of responsibility that highlight the growing influence HR can have on the security posture of an organization.

 

The audience was lively and passionate in its desire to become more involved and apply its unique expertise to the common goal. The biggest questions revolved around how best HR could contribute to security. Six areas were discussed: HR leadership can strengthen hiring practices, tighten responses to disgruntled employees, spearhead effective employee security education, advocate regulatory compliance and exemplify good privacy practices, be a good custodian of HR data, and rise to the challenge of hiring good cybersecurity professionals. Wake up, security folks: the HR team might just be your next best partner and a welcome advocate in the evolving world of cybersecurity.

 



 

 

Presentation available via SlideShare.net: http://www.slideshare.net/MatthewRosenquist/pivotal-role-of-hr-in-cybersecurity-cho-event-nov-2014

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

My Blog: Information Security Strategy

Read more >

SC14: Understanding Gene Expression through Machine Learning

This guest blog is by Sanchit Misra, Research Scientist, Intel Labs, Parallel Computing Lab, who will be presenting a paper by Intel and Georgia Tech this week at SC14.

 

Did you know that the process of winemaking relies on yeast optimizing itself for survival? When we put yeast in a sugar solution, it turns on genes that produce the enzymes that convert sugar molecules to alcohol. The yeast cell makes a living from this process (by gaining energy to multiply) and humans get wine.

 

This process of turning on a gene is called expression. The genes that an organism can express are all encoded in its DNA. In multi-cellular organisms like humans, the DNA of each cell is the same, but cells in different parts of the body express different genes to perform the corresponding functions. A gene also interacts with several other genes during the execution of a biological process. These interactions, modeled mathematically using “gene networks,” are not only essential to developing a holistic understanding of an organism’s biological processes; they are also invaluable in formulating hypotheses to further the understanding of numerous interesting biological pathways, thus playing a fundamental role in accelerating the pace and diminishing the costs of new biological discoveries. This is the subject of a paper presented at SC14 by Intel Labs and Georgia Tech.

 

Owing to the importance of the problem, numerous mathematical modeling techniques have been developed to learn the structure of gene networks. There appears, not surprisingly, to be a correlation between the quality of learned gene networks and the computational burden imposed by the underlying mathematical models. A gene network based on Bayesian networks is of very high quality but requires a lot of computation to construct. To understand Bayesian networks, consider the following example.

 

A patient visits a doctor with symptoms A, B, and C. The doctor says there is a high probability that the patient is suffering from ailment X or Y and recommends further tests to zero in on one of them. What the doctor does is an example of probabilistic inference, in which the probability that a variable has a certain value is estimated based on the values of other related variables. Inference based on Bayes’ law of probability is called Bayesian inference. The relationships between variables can be stored in the form of a Bayesian network. Bayesian networks are used in a wide range of fields, including science, engineering, philosophy, medicine, law, and finance. In the case of gene networks, the variables are genes, and the corresponding Bayesian network models, for each gene, which other genes are related to it and the probability that the gene is expressed given the expression values of those related genes.
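
 

To make the doctor example concrete, here is a minimal sketch of Bayesian inference over the two candidate ailments; the probabilities are made-up, illustrative numbers, not data from the paper:

    # Hypothetical priors and likelihoods for the doctor example above.
    priors = {"X": 0.02, "Y": 0.01, "neither": 0.97}       # P(ailment)
    likelihoods = {"X": 0.60, "Y": 0.40, "neither": 0.01}   # P(symptoms A, B, C | ailment)

    # Bayes' law: P(ailment | symptoms) = P(symptoms | ailment) * P(ailment) / P(symptoms)
    evidence = sum(likelihoods[a] * priors[a] for a in priors)
    posteriors = {a: likelihoods[a] * priors[a] / evidence for a in priors}

    for ailment, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
        print(f"P({ailment} | A, B, C) = {p:.3f}")

Even with low prior probabilities, observing the symptoms shifts most of the probability mass toward X and Y, which is the same kind of update a Bayesian network performs for every gene given the expression values of its related genes.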

 

Through a collaboration between Intel Labs’ Parallel Computing Lab and researchers at Georgia Tech and IIT Bombay, we now have the first-ever genome-scale approach for constructing gene networks using Bayesian network structure learning. We demonstrated this capability by constructing the whole-genome network of the plant Arabidopsis thaliana from over 168.5 million gene expression values, which required computing a mathematical function 7.3 trillion times with different inputs. For this, we collected a total of 11,760 Arabidopsis gene expression datasets (from the NASC, AtGenExpress, and GEO public repositories). A problem of this scale would have consumed about six months using the state-of-the-art solution. We can now solve the same problem in less than three minutes!

 

To achieve this, we not only scaled the problem to a much bigger machine – 1.5 million cores of the Tianhe-2 supercomputer with 28 PFLOP/s of peak performance – we also applied algorithm-level innovations, including avoiding redundant computation, a novel parallel work decomposition technique, and dynamic task distribution. We also made implementation optimizations to extract maximum performance from the underlying machine.
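
 

As a rough illustration of the dynamic task distribution idea, here is a minimal master-worker sketch; the scoring function, the task list, and the use of Python’s multiprocessing are all stand-ins, not the MPI-based implementation that ran on Tianhe-2:

    from multiprocessing import Pool

    def score_candidate(task):
        gene, parent_set = task
        # Stand-in for the expensive scoring function that is evaluated
        # trillions of times during Bayesian network structure learning.
        return gene, sum(parent_set) % 1000

    # One task per (gene, candidate parent set) pair -- toy-sized here.
    tasks = [(g, (p,)) for g in range(8) for p in range(8) if p != g]

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            # imap_unordered hands out tasks as workers free up, so faster
            # workers naturally take on more work: dynamic load balancing.
            for gene, score in pool.imap_unordered(score_candidate, tasks, chunksize=4):
                pass  # aggregate the best-scoring parent sets per gene here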

 


 


 

(Top) Root Development subnetwork. (Bottom) Cold Stress subnetwork.

 

Using our software, we generated gene regulatory networks for several datasets – subsets of the Arabidopsis dataset – and validated them against known information from the TAIR (The Arabidopsis Information Resource) database. As a demonstration of the validity of the networks and of how genome-scale networks can be used to aid biological research, we conducted the following experiment. We picked the genes that are known to be involved in root development and cold stress and randomly selected a subset of those genes (the red nodes in the figures above). We then took the whole-genome network generated by our software for Arabidopsis and extracted subnetworks containing our randomly selected genes and all the other genes connected to them. The extracted subnetworks contain a rich presence of other genes known to be in the respective pathways (green nodes) and in closely associated pathways (blue nodes), serving as a validation test. The nodes shown in yellow are genes with no known function; their presence in the root development subnetwork indicates they might function in the same pathway. The biologists at Georgia Tech are performing experiments to see whether the genes corresponding to the yellow nodes are indeed involved in root development. Similar experiments are being conducted for several other biological processes.
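
 

For readers curious what the subnetwork-extraction step looks like mechanically, here is a minimal sketch that keeps a set of seed genes plus every gene directly connected to them; the gene names and edges are placeholders, not the actual Arabidopsis network:

    # Toy adjacency sets standing in for the whole-genome network.
    network = {
        "seed1": {"geneA", "geneB"},
        "geneA": {"seed1", "geneC"},
        "geneB": {"seed1"},
        "geneC": {"geneA", "geneD"},
        "geneD": {"geneC"},
    }

    def extract_subnetwork(adjacency, seeds):
        nodes = set(seeds)
        for gene in seeds:
            nodes |= adjacency.get(gene, set())  # add direct neighbors
        # Keep only edges whose endpoints both survive the cut.
        return {g: adjacency.get(g, set()) & nodes for g in nodes}

    sub = extract_subnetwork(network, {"seed1"})
    for gene, neighbors in sorted(sub.items()):
        print(gene, "->", sorted(neighbors))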

 

Arabidopsis is a model plant for which the NSF launched a 10-year initiative in 2000 to find the functions of all of its genes, yet the functions of 40 percent of its genes are still not known. This method can help accelerate the discovery of the functions of the remaining genes. Moreover, it can easily be scaled to other species, including humans. Understanding how genes function and interact with each other in a broad variety of organisms can pave the way for new medicines and treatments. We can also compare gene networks across organisms to enhance our understanding of the similarities and differences between them, ultimately aiding in a deeper understanding of evolution.

 

What questions do you have?

Read more >