ADVISOR DETAILS

RECENT BLOG POSTS

Podcast: Talking Data Security and Privacy in Healthcare

I recently spoke to the Apps Alliance, a non-profit global membership organization that supports developers as creators, innovators, and entrepreneurs, on the latest trends in healthcare security.

 

It was a fascinating 40 minutes and a great opportunity to look at security issues not just from the healthcare professional or patient perspective, but also from a developer’s point of view. In this podcast, we take a look at what’s important to all three groups when it comes to privacy, security and risk around healthcare data.


Listen to the podcast here

 

We discussed:

 

  • Best practices for developers looking to secure healthcare data
  • Security challenges that stem from the flow of data from mobile healthcare devices
  • The relationship between usability and security

 

I recently wrote a blog looking at the perceived trade-off between usability and security in healthcare IT and how you can mitigate risks in your own organisation. We have solutions to help you overcome these challenges, many of which are outlined in our Healthcare Friendly Security whitepaper.

 

We’d love to get your feedback on the issues discussed in the podcast so please leave a comment below – we’re happy to answer questions you may have too.

 

Thanks for listening.


David Houlding, MSc, CISSP, CIPP is a Healthcare Privacy and Security lead at Intel and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts

Read more >

The Evolving Workplace

Ever since visiting my father’s office as a small child, I have understood the importance of personalizing my workplace. Seeing pictures of the family on his desk, diplomas on the wall, and the surrounding library of books communicated who he was and how capable he was at his job.

 

Mixed with those personal effects were productivity tools: whiteboards, inbox/outbox, paper, printers, Rolodexes, various office supplies, and the ever-abundant Post-it note. Today, much of this clutter has been automated by modern applications, technology and devices — and this is especially true when I consider social media tools. I found the cartoon below summed up the integration of these desk-based productivity tools nicely.

We’ve significantly reduced our desktop clutter through digital devices and applications. As a result, many of the traditional workplace effects are now relics of the past, replaced with new tools that house photos, collect our notes, organize our contacts, send our communications, and so on.

 

As new digital tools are introduced and virtualized workspaces continue to evolve, the traditional desk setup will evolve again. Yet if we look at what is cluttering our desks today, why do we still deal with the mess of tangled wires and the plugging and unplugging of the devices we need to get our work done?


Last week, Intel introduced its 5th Generation Core processors, which offer a wire-free work experience and mobile collaboration tools that improve work productivity. Thinner, lighter devices will leave us free and flexible while working on the go — all day long, with battery life topping eight hours on a single charge. A secure wireless docking experience and wireless display, paired with new hands-free voice command technology, take the hassle out of working on the go.

 

I found this Intel IT Center white paper highlighting modern collaborative technologies to be helpful in showcasing the future of work technology and experiences. As a remote worker, I find conventional office accessories ineffective and unwieldy for my daily tasks. The modern collaboration technology I look forward to most is the shared virtual interactive whiteboard.

 

What technology do you want that can provide you a better way to work?

 

Chris

 

To continue this conversation on Twitter, connect with me at @chris_p_intel or use #workingbetter.

Read more >

Checklist For Designing a New Server Room

Designing a new server room may initially seem a daunting task; there are, after all, many factors and standards to consider. However, setting up the space and equipment doesn’t have to be an ordeal as long as you plan in advance and make sure you have all the necessary items. Here’s a checklist to facilitate the design of your data center.


Spatial Specifications

  • The room should have no windows.
  • Ensure the space is large enough for future growth.
  • The ceiling should be at least nine feet high.
  • Include a drop-ceiling return to exhaust heat.

 

Equipment Specifications

 

  • Computer racks should have a clearance of at least 42 inches.
  • All racks should have proper grounding and seismic bracing.
  • Computing equipment should have a maximum electrical intensity of 300 watts per square foot.
  • Server room should contain fire, smoke, water and humidity monitors.

 

Cooling Specifications

 

  • Racks should be arranged in a hot-aisle/cold-aisle configuration.
  • Use cooling equipment with variable-speed fans.
  • Plan for redundancy; do not rely on building cooling for backup.
  • Underfloor cooling systems require a raised floor with a minimum height of 24 inches and the ability to hold the weight of server racks and equipment.


Electrical Systems Specifications

  • Computer equipment and HVAC should have separate power panels.
  • There should be no heat-generating support equipment.
  • Electrical systems should have an isolated ground, grounding grid and dedicated neutral.
  • Separate back-up power should be available for the data center.
  • The electrical system should have a shunt trip for purposes of emergency shutdown.

 

Data Center Resources has built a reputation for providing superior data center solutions since 2002. Our dedicated team understands that while the solution is important, it is only a part of the overall relationship with our clients. Responsiveness, after-sale service, ease of purchasing and breadth of product offerings are other important factors, and we are committed to exceeding expectations in all of these areas. Our principals and project specialists each have several years of experience in providing technical environment solutions. Contact our team today to find out how we can help you design a new server room.

Read more >

2015 CIOs: Need to Be in the Digital Driving Seat – Not a Passenger

2014 was another challenging year for the CIO, with plenty of column inches given over to debating the control and usage of technology across the enterprise and much speculation about the validity of the role itself.

Personally, I think talk of the demise of the CIO role is presumptuous, though what is critical right now is that the role evolves, with 2015 being the time for CIOs to flourish and show their true worth in helping set the strategic direction of their organisation. The CIO role is like no other in that it allows visibility across the organisation that others rarely achieve. Those who are commercially astute, with a capacity to add tangible value to the business, will excel; those who are not will likely be sitting in a different chair at the start of 2016.

As a result of the recent economic turmoil and the rapidity of change across the commercial landscape, many organisations are now looking for a different type of CIO or technology leader than they have in the past. The need is shifting from a more technically focused individual to one who is able to unravel the complexity of IT, increase accessibility to technology, and stay open to new ideas, with the ability to work with peers on getting the right things done. One of the key factors in this evolution of the CIO role is the need to understand and accept that CIOs no longer have the ultimate say over which technologies are used within their organisation, yet they will still be held accountable for making sure it all works.

Gartner research has shown that 38% of IT spend is already outside of IT, and Gartner expects this to reach 50% by 2017. That will send a shiver down the spine of many a CIO, but they must understand the diversification of technology usage and need across their organisation. This is quite the culture shift for many who have migrated into the CIO role from the traditional ‘lights on’ IT director role of old, but it will make absolute sense for those who have the ability to evolve into this new model, which will free them up to get more involved in defining and executing the ‘big picture’ strategy. For too long the CIO has been identified as the strategic and commercial weak link in the C-suite, not adding tangible value across the business. They must seize this opportunity to transform their role and reputation into one that thinks collectively, understands how best to resolve the issues that matter across the business and ultimately delivers commercial value.

 

The main theme and focus for many of us this year is how to transform and drive a digital business. Naturally this is a hot topic for CIOs, and the challenge of how to implement and transform your business into a digital operating model is now top billing on the agendas of many boardrooms across the globe. This is exactly where the CIO can step up and work with peers and key stakeholders across the business to define a strategy moulded around a ‘customer first’ approach, where digital technologies form the cornerstones of how your services are delivered and consumed going forward. This will require much managing of change, process and incumbent technology, and possibly a marked change in strategic direction – a role tailor-made for the commercially astute CIO working in harness with the CMO.

The impact of digital business on industries and individual organisations cannot be overstated, and Gartner has predicted that by 2017 one in five industry leaders will have ceded their market dominance to a company founded after 2000. This is a bold claim, but one I support: you can no longer rely on historical dominance of your sector – either embrace disruption now or start planning your burial in the corporate graveyard alongside luminaries such as Kodak and Blockbuster.

 

CIOs must embrace a “Bi-Modal IT” mind-set, simultaneously embarking on the digital transformation journey whilst maintaining Business as Usual (BAU) services.
It’s no secret that the most successful CIOs are those who are able to run the business and transform it at the same time. Many industry observers and consultants will tell you that they have witnessed more transformation in the last three years than in the previous 20 combined, which shows how important these skills are in the modern CIO. I don’t see this pace lessening, as the demand for new and simpler ways to consume data, information, products and solutions will only increase year on year as the technology, and accessibility to it, improves.

 

CIOs will also need to start concentrating on what talent they need to bring into their organisations this year to manage this “Bi-Modal IT” approach, as the market for the best talent is already stretched and growing ever more taut. CIOs should help their business colleagues and the CEO think outside the box to imagine new scenarios for digital business that cross companies and industries, providing a great opportunity for CIOs to amplify their role in the organisation.

 

Gone are the days when you could supply rigid corporate systems accessible only on site – the corporate world has evolved, and everyone wants to consume technology in different ways, with previously inaccessible data now coveted for the new operational and commercial insights it can yield.

 

CIOs need to help create the right mind-set and a shared understanding among key decision makers in the enterprise – to help them “get” the possibilities of digital business.
They must take a leadership role in helping their organisations change their mind-set to what’s possible – and what’s inevitable – in a digital business future.

 

This should not be done in isolation or be detrimental to any key relationships, such as that with the CMO; it’s imperative you work together and deliver the ‘right’ digital strategy for your organisation.

 

Get yourself in the digital driving seat and don’t become a passenger.  It’s going to be a busy year with a fair amount of turbulence, so buckle up and enjoy the ride.

 

Christian McMahon

Read more >

Part II: 5 Significant Health IT Trends for 2015

In my last post, we looked at two of the top five health IT trends I’m seeing for 2015. In this blog, we’ll conclude with a more in-depth look at the remaining three trends.

 

To recap, the five areas that I strategically see growing rapidly in 2015 are focused on the consumerism of healthcare, personalization of medicine, consumer-facing mobile strategies, advancements in health information interoperability including consumer-directed data exchange and finally, innovation focused on tele-health and virtual care.

 

While each of these trends can stand on its own and will grow separately, I see the fastest growth occurring where they are combined or integrated, because they reinforce each other.

 

Here’s my take on the three remaining trends:

 

  1. Consumer-facing mobile strategies: To control spiraling healthcare costs related to managing patients with chronic conditions as well as to navigate new policy regulations, 70 percent of healthcare organizations worldwide will invest in consumer-facing mobile applications, wearables, remote health monitoring and virtual care by 2018. This will create more demand for big data and analytics capability to support population health management initiatives. And to further my earlier points, the personalization of medicine relies on additional quality and population health management initiatives so these innovations and trends will fuel each other at faster rates as they become more integrated and mature.

  2. Consumer-directed interoperability: Along with the evolution of the consumerism of healthcare, you will see the convergence of health information exchange with consumer-directed data exchange. While this has been on the proverbial roadmap for many years, consumers are getting savvier as they engage with their healthcare and look to better manage their own and their families’ rising healthcare costs. Meaningful use stage 3 regulations will drive this strategy this year, but the sheer demand from consumers will be a force as well. I am personally seeing a lot of exciting innovation in this area today.
  3. Virtual care: Last but certainly not least, tele-health, tele-medicine and virtual care will be top-of-mind in 2015. The progression of tele-health in recent years is perhaps best demonstrated by a recent report finding that the number of patients worldwide using tele-health services is expected to grow from 350,000 in 2013 to approximately 7 million by 2018. Moreover, three-fourths of the 100 million electronic visits expected to occur in 2015 will occur in North America. We are seeing progress not only on the innovation and provider adoption side but slowly public policy is starting to evolve. While the policy evolution should have occurred much sooner, last Congressional session we saw 57 bills introduced and as of June 2013, 40 out of 50 states had introduced legislation addressing tele-health policy. I see in every corner of the country that care providers want to use this type of technology and innovation to improve care coordination, increase access and efficiency, increase quality and decrease costs. Patients do as well so let’s keep pushing policy and regulation to catch up with reality.

 

While the headlines this year will be dominated by meaningful use (good and bad stories), ICD-10, interoperability (or data-blocking), and other sensational as well as eye-catching topics, I am extremely encouraged by the innovations emerging across this country. We are starting to bend the cost curve by implementing advanced payment and care delivery models. While change and evolution are never easy, we are surrounded by clinicians, patients, consumers, administrators, innovators and even legislators and regulators who are all thinking and acting in similar directions with respect to healthcare. This is fueling these changes “on the ground” in all of our communities. This year will be as tough as ever in the industry but also a great opportunity to be a part of history.

 

What do you think? Agree or disagree with these trends?

 

As a healthcare innovation executive and strategist, Justin is a corporate, board and policy advisor who also serves as an Entrepreneur-in-Residence with the Georgia Institute of Technology’s Advanced Technology Development Center (ATDC). In addition, Mr. Barnes is Chairman Emeritus of the HIMSS EHR Association as well as Co-Chairman of the Accountable Care Community of Practice. Barnes has appeared in more than 800 journals, magazines and broadcast media outlets relating to national leadership of healthcare and health IT. He recently launched a weekly radio show, “This Just In.”

Read more >

Enabling Software Defined Infrastructure through the Convergence of Great Technologies

By Albert Diaz, Intel VP Data Center Group, GM Product Collaboration and Systems Division



When Intel’s Platform Collaboration Solution Division (PCSD) was approached by EMC & VMware® about collaborating on a best-of-breed hyper-converged infrastructure appliance, we realized that PCSD had the ability to integrate assets across Intel’s product groups and make a compelling solution to meet a growing storage market need. The EMC® VSPEX® BLUE hyper-converged infrastructure appliance is more than the convergence of software-defined compute, networking and storage infrastructure; it is the convergence of great brands. Each company brings its expertise to ensure that the resulting product addresses the many challenges enterprise IT faces as organizations evolve to meet the real-time workload demands of private/hybrid cloud deployments. VSPEX BLUE gives IT managers what they need without complicating their lives: the product just works!


Hiding all the complexity from the user is…well, it’s complex, but we knew we were up for the challenge. It is all about ensuring that pluggable fixed-configuration H/W is all synchronized through a common S/W stack. We needed to ensure the product had the memory and I/O bandwidth to meet the demands of the enterprise and mid-market, and what better choice than the Intel® Xeon® Processor E5-2600 Product Family. Putting 8 processors into a dense 2U chassis was made easier by our 20+ years of experience in the server board and system business. The solution includes a modular Intel 10GbE network connection from our Networking Division (ND), giving users the choice of fiber SFP+ or copper RJ45 connectivity and ensuring that users have the flexibility to integrate as their cable plant requires. Adding high-performance solid state disks with technology from the Intel Non-Volatile Memory Solutions Group (NSG), and being able to seamlessly scale from 100 to 400 Virtual Machines and 250 to 1000 VDTs with the goal of getting customers up and running in 15 minutes, was super challenging from a H/W integration perspective.


From the initial requirements discussion with EMC and VMware through final production release, we always kept the design goal of SIMPLICITY in mind.   Installation and management needed to be fully orchestrated.   Patching and upgrading needed to be intuitive.  Very importantly, the EMC VSPEX BLUE appliance had to easily grow and contract based on business needs in order to offer mid-market enterprise customers the fastest, lowest-risk path to new application and technology adoption.


I want to be sure to mention that our team enjoyed working on the VSPEX BLUE project.  Storage has reached an important inflection point.  Delivering truly converged solutions that have great brands doing the validation and integration together makes successful deployments of private/hybrid clouds predictable. IT Directors want proven configurations that enable the businesses they support to go from idea to solution without incurring the risk normally associated with new cloud deployments.  Our team is proud to have been part of the creation of a product that is simple to manage and simple to scale, so that IT Directors can invest their valuable resources elsewhere, because I know their lives are complex enough!

Read more >

Part I: 5 Significant Health IT Trends for 2015

While I know meaningful use (stages 2 and 3), electronic health record (EHR) interoperability, ICD-10 readiness, patient safety and mobile health will all continue to trend upwards with great importance, the five areas that I strategically see growing rapidly in 2015 are focused on the consumerism of healthcare, personalization of medicine, consumer-facing mobile strategies, advancements in health information interoperability including consumer-directed data exchange and finally, innovation focused on tele-health and virtual care.

 

While each of these trends can stand on its own and will grow separately, I see the fastest growth occurring where they are combined or integrated, because they reinforce each other. It’s like a great marriage where the spouses make each other better and usually more successful because of their unity. I see the same occurring in 2015, which is why I am so bullish on these integrated opportunities and innovations.

 

In this first part of my 2015 outlook blog, we’ll look at two of the top trends:

 

  1. Treating the patient as a consumer: This is due to numerous factors but a significant driver is the shift in various CMS regulations and incentives that have care providers and healthcare organizations focused on increased patient engagement as well as patient empowerment to improve communication, care coordination, patient satisfaction and even discharge management with hospitals. As a result of an increased focus on improving the patient/consumer experience, 65 percent of consumer transactions with healthcare organizations will be mobile by 2018, thus requiring healthcare organizations to develop omni-channel strategies to provide a consistent experience across the web, mobile and telephonic channels. I have already begun to see this in hundreds of area hospitals and practices in Georgia and know it is occurring across the country.

  2. Personalized medicine: While this concept is not new, the actual care plan implementation, along with the technology and services innovations supporting it, is being driven quickly by the increased pressure on all care providers to improve quality and manage costs. You will see this increase dramatically once Congress passes SGR reform, which received bipartisan and bicameral support last Congressional session; Congressional leaders are poised to take up this legislation again in the next month. The latest statistics show that 15 percent of hospitals will create a comprehensive patient profile by 2016 that will allow them to deliver personalized treatment plans.

 

Tomorrow we’ll look closely at the other three 2015 trends in health IT.

 

What questions do you have? What are the trends you are seeing in the marketplace?

 

As a healthcare innovation executive and strategist, Justin is a corporate, board and policy advisor who also serves as an Entrepreneur-in-Residence with the Georgia Institute of Technology’s Advanced Technology Development Center (ATDC). In addition, Mr. Barnes is Chairman Emeritus of the HIMSS EHR Association as well as Co-Chairman of the Accountable Care Community of Practice. Barnes has appeared in more than 800 journals, magazines and broadcast media outlets relating to national leadership of healthcare and health IT. He recently launched a weekly radio show, “This Just In.”

Read more >

Addressing Analytics: Extracting Value from the New Data Currency

This is the third installment of a Mike Blalock blog series on Tech & Finance.

Click here to read blog #1.

Click here to read blog #2.


As many financial services organizations are discovering, there’s a new currency in town and it’s not like any we’ve dealt with before. The more of it you have, the more each piece is worth. And many banks and other financial institutions are sitting on huge stocks of it yet failing to get any return.


This new currency is data, and today I’m continuing my exploration of the Third Industrial Revolution by taking a look at analytics. Because it’s not just about how much data you have, but whether you can extract the value from it.


Financial services is a data-driven enterprise. Banks manipulate and process data like a manufacturing company processes raw materials. It’s no surprise, then, that almost every financial services customer I have spoken to in the last year has identified big data and analytics as top priorities. They know it’s critical, but many still struggle with what to do and how to do it.


Learning to Manage Volumes of Data


Intel recently sponsored a report on Big Data Cases in Banking and Securities, created with the STAC Benchmark Council, which looked at the big data/analytics use cases common in both investment and retail banking today. Among other things, the report revealed a mix of approaches, with some organizations using big data to do old things faster or better, and others using it to do completely new things. Of the famous three Vs, volume was found to be the most challenging issue among participating financial organizations.


To avoid being overwhelmed, a good first step is to narrow the focus to the top two or three use cases that will provide the most value or impact on the business. In my view, these are the three pillars of big data/analytics workloads in financial services that represent the greatest opportunities for investment:


1. Risk management and portfolio optimization: A consolidated view of data across the enterprise, required by regulators. This touches areas like enterprise credit risk reporting, securities fraud early warning, credit card fraud detection, and anti-money laundering.


2. Customer engagement optimization: Achieving a 360-degree view of the customer (both consumer and business) with personalized and contextual information to enable targeted cross-selling and up-selling.


3. Increasing operational efficiency: Using big data to improve internal processes and drive incremental innovation in areas such as modeling branch behavior or IT operations analysis.


When bringing big data analytics to one of these areas, there’s a lot to consider. How much will it, and should it, cost? How can companies hire the right data scientists? Most obviously, how can financial services companies cope with the volume, velocity, and variety of data, and develop usage models that will help drive insight from it?

 

Empowering Customers to Leverage Analytics


Our goal when approaching these areas with our financial services clients is to help create an open, interoperable analytics infrastructure and data platform that will empower them to develop the solutions, approaches and processes that will work for them and their customers. In addition to core platform technology like CPUs, solid-state drives, networking, fabric, and security, we also encourage them to think about easier implementation and management (e.g., using analytic data management software such as Cloudera, which is based on the open-source Hadoop framework). Using standards-based architecture helps with the recruitment challenge and also helps to reduce up-front and ongoing technology costs.


As a data-rich financial organization, think of big data, analytics, and the technologies that enable them as your new toolkit. They’re just as important as your online banking platform, your CRM software or your sales database. In fact, they’re the piece that will bring all these disparate elements together and help you extract maximum value from your data currency.

 

Let’s continue the conversation on Twitter: @blalockm


Mike Blalock

Global Sales Director

Financial Services Industry, Intel


This is the third installment of a Mike Blalock blog series on Tech & Finance.

Click here to read blog #1.

Click here to read blog #2.

Read more >

How Security Doesn’t Always Mean a Trade-Off for Usability in Healthcare

The unprecedented rate of technological advance has brought the usability of devices in the workplace to the fore. Usability used to be a ‘nice to have’, but with experiences and expectations heightened by the fantastic usability of personal mobile devices, it has become a ‘must-have’. The corporate healthcare IT environment is faced with a challenge.

 

Taming the BYOD culture

Either organisations invest in great corporate IT user experiences for employees, or they’ll be exposed to the dangers of the ‘Bring Your Own Device’ (BYOD) to work movement. And healthcare workers are amongst the first to look for workarounds such as BYOD when the usability of their IT has a negative impact on their workflow.

 

If organisations allow a BYOD culture to become established they face heightened security and privacy risks which can often result in data breaches. Since 2010, the Information Commissioner’s Office (ICO) in the UK has fined organisations more than £6.7m for data protection breaches. Of this, the healthcare sector suffered fines of some £1.3m alone, which accounts for nearly 30% of the British public sector penalties.

 

These costs highlight the importance of avoiding data breaches, particularly as the UK’s public sector health organisations move rapidly towards cloud-based electronic health records under the Personalised Health and Care 2020 framework. If data security is undermined by workarounds, it may well negate the predicted cost-effectiveness benefits of moving to electronic health records for both patient and provider.

 

The 2020 framework acknowledges that, “In part, some of the barriers to reaping those benefits are comparatively mundane: a lack of universal Wi-Fi access, a failure to provide computers or tablets to ward or community-based staff, and outmoded security procedures that, by frustrating health and care professionals, encourage inappropriate ‘workarounds.’”

 

Mitigating risk of loss or theft

Loss or theft of devices is another common cause of data breaches in healthcare. An audit of 19 UK health-related organisations by the ICO concluded that “a number of organisations visited did not have effective asset management in place for IT hardware and software; this raises the risk of the business not knowing what devices are in circulation and therefore not becoming aware if one is lost or stolen.”

 

There are a number of options to mitigate risk in these circumstances. First, usability and security can be vastly enhanced using Multi-Factor Authentication (MFA), which, when combined with Single Sign-On (SSO), reduces the overall number of device logins required. Second, replacing unencrypted conventional hard drives with encrypted Solid State Drives (SSDs) lowers the risk in the event of theft or loss while also improving data access performance. And that’s a win-win result for all healthcare professionals.

 

Effective security is like a chain: it requires securing every point and either removing or repairing the weak links. Intel Security Group’s solutions have security covered from mobile devices, through networks, to back-end servers. We’re already helping healthcare organisations across the globe to embrace the rapidly changing face of technology in the healthcare sector while managing risk and improving that all-important usability.

 

We’ve produced a whitepaper on Healthcare Friendly Security which will help you strike the balance between fantastic usability and industry-leading security in your organisation. Grab your free download today.

 

 

David Houlding, MSc, CISSP, CIPP is a Healthcare Privacy and Security lead at Intel and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts

Read more >

Amplify Your Value – A Seismic Shift Forward

Did you feel it? Did you feel the shifting of the earth? Did you hear it? Did you hear the rumble so loud you could feel it in your bones? What? You missed it? What were you doing on Saturday night January 10th that you missed it? By now, my friends in SoCal are asking each other, “What? Was there a quake on the 10th? Must have been so small I slept through it.” Well, you can rest easy, because no, you didn’t miss another earthquake. What you missed was a seismic shift forward in technology, well at least a seismic shift forward in the way technology is delivered for my organization.


On January 10th, we completed a journey that we started five years ago (in some ways it started long before that), and we finished it a year ahead of schedule. We have succeeded in taking an 80-year-old organization and migrating its architecture to one that is 100% cloud-based! (OK, those purists out there will dispute my math of 100% and perhaps even my definition of cloud, but stick with me).


2014 (plus the first 10 days of 2015) saw the completion of three major steps on this journey. The year started with our implementation of Recovery-as-a-Service (RaaS), modernizing our disaster recovery processes and providing a Recovery Time Objective (RTO) of under two hours and a Recovery Point Objective (RPO) of…get this…THIRTY SECONDS! (Insert legal disclaimer here: “our experience may not match your actual results; SLAs do apply.”) Too good to be true? We’ve done it!


The next step was the migration of the head-end of our network to a Tier 4 data center on the east side of town. We have had a hub-and-spoke network topology for a number of years; however, the hub was our headquarters building. Far from a Tier 4 data center, we had a server room (read: glorified closet), no raised floor, cooling provided by a single Liebert unit, water-based fire suppression (ok, we fixed THAT three years ago), and no backup power supply. Now, our headquarters building is just another of 70+ spokes on the network.


The final step we took on January 10th, was the migration of ALL our production and test servers and data storage arrays into a private cloud hosted by an Indianapolis-based tech company. The task was incredibly complex. Imagine…taking apart your car…shipping the pieces to another location…and then putting it back together again…AND have it all work when you are done…AND not disrupting your family’s transportation needs during the move. THAT is basically what the team accomplished. They moved 75 Servers, over 200 applications, several THOUSAND device addresses and 15 terabytes of data, all on a Saturday night.


What does our company gain from these efforts? In short, agility and elasticity. First agility…ANY new project going forward will no longer require the lead time to order a server, configure the server and deploy the server. This step now takes…minutes…TALK about agile! We no longer have to replace the servers every 3 or 4 years…taking capital and taking months of IT’s time to plan, order, and execute an upgrade. We no longer have to upgrade the server operating systems, or our middleware software. That frees up our staff to work on projects that provide more value to Goodwill, like the call tree for retail Home Pick Up, or the mobile devices for nursing, or our data architecture project that will provide improved reporting, data analytics and insights.


Elasticity? What does that mean? That means as we continue to grow we do not have to stair-step the infrastructure to keep pace; buying more capacity than we need to ensure the capacity is there when we do need it. This is the typical approach to infrastructure because the lead time to deploy capacity is so long. Now, for us, it is moments. Conversely, if we ever had to shrink our infrastructure footprint, it only takes a phone call…no contract negotiations, no selling off of assets…just a phone call.


Where we are today is unique among most other non-profits and the majority of for-profit companies. We are in a fantastic position for the future. A set of amazing accomplishments by an amazing team of IT professionals! In my 35+ years of IT work, I have never seen anything like it. I have seen data center moves that took days to complete (and the business was down during those times), I have seen at least one case where the entire move was canceled because it was deemed too costly and too risky to do the move after all was said and done. It is truly an honor and a privilege to work with this group. I thank them for a great job. I thank the organization for the freedom to execute such forward thinking strategies.


I know I have used the analogy of journey to describe our path of the last five years, and I know, a journey of this sort is never really complete. There is still plenty of work ahead. It is an exciting time to be in IT. The impact we can have on our businesses is profound. Over the course of my next several posts, I will lay out the path we have followed to get here. What worked, what didn’t. Why we even started on this journey to begin with. The series, “Amplify Your Value” will explore our five year plan to move from an ad hoc reactionary IT department to a Value-add revenue generating partner. Next up? “To Get Where You Are Going…You Have to Know Where You Are!”


We could not have made this journey without the support of several partners, including, but not limited to: Bluelock, Level 3 (TWTelecom), Lifeline Data Centers, and CDW. (insert legal disclaimer #2: mentions of partner companies should be considered my personal endorsement based on our experience on our projects and should NOT be considered an endorsement by my company or its affiliates).


Jeffrey Ton is the SVP of Corporate Connectivity and Chief Information Officer for Goodwill Industries of Central Indiana, providing vision and leadership in the continued development and implementation of the enterprise-wide information technology and marketing portfolios, including applications, information & data management, infrastructure, security and telecommunications.


Find him on LinkedIn.

Follow him on Twitter (@jtongici)

Add him to your circles on Google+

Check out more of his posts on Intel’s IT Peer Network

Read more from Jeff on Rivers of Thought

Read more >

January 2014 Intel® Chip Chat Podcast Round-up

In January, Chip Chat continued archiving OpenStack Summit podcasts. We’ve got episodes covering enterprise deployments for OpenStack and key concerns regarding security and trust, as well as software as a service and utilizing OpenStack to streamline compute, network and storage. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!

 

Intel® Chip Chat:

  • Commercial OpenStack for Enterprises – Intel® Chip Chat episode 362: In this archive of a livecast from the OpenStack Summit, Boris Renski (twitter.com/zer0tweets), the co-founder and CMO of Mirantis stops by to talk about the OpenStack ecosystem and the company’s Mirantis OpenStack distribution. Enterprises are now in the adoption phase for OpenStack, with one particular use case standing out for Boris – OpenStack as a data center wide Web server. For more information, visit www.mirantis.com.
  • OpenStack Maturity and Development – Intel® Chip Chat episode 363: In this archive of a livecast from the OpenStack Summit, Krish Raghuram, the Enterprise Marketing Manager in the Open Source Technology Center at Intel, stops by to talk about working with developers directly to get technologies quickly proven and tested, as well as Intel’s investment and work as an OpenStack Platinum member, the need for developing cloud-aware/stateless apps, and utilizing OpenStack to cut operational and capital expense costs. For more information, visit https://software.intel.com/en-us/articles/open-source-openstack.
  • OpenStack and SaaS Deployments – Intel® Chip Chat episode 364: In this archive of a livecast from the OpenStack Summit, Carmine Rimi, the Director of Cloud Engineering at Workday stops by to talk about the evolution of software as a service, as well as scalability and reliability of apps in a cloud environment. Workday deploys various finance and HR apps for enterprises, government and education and is moving its infrastructure onto OpenStack to deploy software-defined compute, networking, and storage. For more information, visit www.workday.com.
  • OpenStack and Service Assurance for Enterprises – Intel® Chip Chat episode 365: In this archive of a livecast, Kamesh Pemmaraju (www.twitter.com/kpemmaraju), a Sr. Product Manager for OpenStack Solutions at Dell, stops by to talk about a few acute needs when deploying OpenStack for enterprises: Security, trust and SLAs and how enterprises can make sure their workloads are running in a trusted environment via the company’s work with Red Hat and Intel® Service Assurance Administrator. He also discusses the OpenStack maturity roadmap including the upgrade path, networking transitions, and ease of deployment. For more information, visit www.dell.com/learn/us/en/04/solutions/openstack.

Read more >

Executives Must Manage Cyber Risks Differently in 2015

The Sony breach should be a wake-up call for big enterprises worldwide. Not only was it a massive loss of intellectual property, but it took the international stage with geopolitical extortion and even stepped beyond the boundaries of the cyber world to include threats of harm to employees and patrons. It was definitely a dark set of events, and one that will likely be repeated by a variety of threat agents against other organizations. This is not just a Sony problem or a media industry problem. This is the problem facing every large company, industry, and government.

 

The effects can be blinding. As recently reported, Sony’s critical systems won’t be back online until February. Executives and board members must consider that Sony has tremendous resources working to get the company back up and running, yet vital systems may be down for another month. Attackers can be incredibly difficult to evict. They dig in, disrupt attempts to deny them access, and leave hidden backdoors for later. Repairing the damage can be time-consuming and meticulous work, even with proper backups and quality IT resources. Services must be restored in a way that is more protected than before, without sacrificing performance or usability.

 

The lesson to all: your company’s operational availability, among other things, can be severely affected over a long period of time, even if you have substantial resources. 2015 may well be a defining year for many organizations in protecting against and recovering from cyber-attacks. Be ready. Manage your risks professionally.

 

 

Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

 

Read more >

The Essence of Cloud: Beyond Applications, Deliver Services?

You can look at cloud in a variety of ways.


I typically recognize two fundamentally different ways of looking at cloud. One I call an IT-oriented cloud, the other, a user-oriented cloud.


An IT-oriented cloud is one where infrastructure is provisioned to facilitate the installation of an application as and when a new instance of that application is required. It consists of automating the provisioning of the appropriate number of virtual and/or physical machines with the right configurations, and connecting them through secured networking with appropriate storage capacity. The installation of the target application can also be automated. But ultimately you are talking about the installation of an application for x users. You are looking at the cloud as an easy way to provision infrastructure. In other words, you use your cloud as IaaS.

 

Let me take an example that we all know. If you are part of IT and have to provision an Exchange server for 5000 users, an IT-oriented cloud will do the job. It will provision the right amount of physical and virtual servers, set up the databases and the connections between the systems, and install the appropriate Exchange modules in the right place. Exchange is now available and you can start configuring users. In this case you request an application.

 

But what if you happen to be a manager in the business and have a new employee starting on Monday? You may want to make him feel at home in his new job by setting up his mailbox and sending him a welcome message even before he is really onboard. You provision one mailbox. In most cases there is no need to provide more hardware or install software, just to configure the mailbox of one user on an already provisioned environment. Obviously, if your request happens to be for the 5001st mailbox, the environment may have to provision a second Exchange environment, but this is hidden from you. You request a service. This is a completely different way to look at cloud. From a user perspective, cloud is a SaaS service. When you request a new user on Salesforce.com, you do not care about what that implies for Salesforce.com; you are just interested in getting your seat.


Cloud Enabling Legacy Applications


Let’s now assume you did set up a private cloud environment. The first question is: which applications should you transfer to that cloud environment – legacy applications or new developments? And it’s a really good question.

 

If you decide on legacy applications, you may want to think about choosing applications that will truly benefit from cloud. There are two main reasons why an application might benefit from moving to the cloud: the application may have varying usage patterns requiring quick ramp-up and ramp-down of capacity over time, or the application may have to be configured for many users. The cloud may not add that much value for applications that have stable, consistent usage, although it may facilitate the quick delivery of the appropriate infrastructure and so make life easier for the IT department.

 

The first can be addressed with a cloud in which you can provision applications; the second requires the provisioning of services. Let’s review the characteristics the application needs to have in each situation.


Application Provisioning


I suggested it makes sense to migrate an application with varying usage patterns to cloud. Why? We all have our frustrations when an application responds very slowly due to the number of parallel requests. Cloud can address this by initiating a second instance of the application when performance degrades. Using a load balancer, requests can be routed to either of the instances to ensure appropriate response times.

 

Now, what I just wrote bears with it an assumption. And this assumption is that multiple instances of the application can run together without affecting the actions performed by the application. If your application is a web server, managing the display of multiple web pages, there is obviously no issue at all. But on the other hand, if your application is an order management system, things may be a little more tricky. You will need to maintain one single database to ensure all users have access to all orders in the system. So, the first question is whether your application is the bottleneck or the database. In the latter case, creating two instances of the application won’t solve the problem. You will first have to work on the database and maybe create a database cluster to remove the bottleneck. Once that is done, if the problem remains, you may look at creating multiple instances of the application itself.

 

Now, realize that the duplication of the application or some of its components in case of increased traffic may require you to have a flexible licensing scheme for the application, the middleware used and potentially the database. Ideally you would like a pay-per-use model in which you only pay license fees when you actually use the software. Unfortunately, many ISVs have not yet developed that level of flexibility in their license schemes.

 

From an automation perspective, you will have to develop the scripts for provisioning an application instance. Ideally you will equip that application instance with a probe that analyzes its responsiveness in real time. You will then develop the rules for when it makes sense to create a second instance; with that second instance will come the configuration of the load balancer.
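To make this concrete, here is a minimal sketch of such a scaling rule in Python. The probe and cloud objects and their methods (get_response_time, provision_instance, register_with_load_balancer, and so on) are hypothetical placeholders, not the API of any particular product.

import time

MAX_RESPONSE_MS = 800      # threshold at which user experience is considered degraded
COOL_DOWN_SECONDS = 300    # wait between decisions to avoid flapping

def autoscale(app_instances, probe, cloud):
    """Create or retire application instances based on the real-time responsiveness probe."""
    while True:
        response_ms = probe.get_response_time()                      # probe attached to the application
        if response_ms > MAX_RESPONSE_MS:
            new_instance = cloud.provision_instance("app-template")  # scripted instance provisioning
            cloud.register_with_load_balancer(new_instance)          # route requests to the new instance
            app_instances.append(new_instance)
        elif response_ms < MAX_RESPONSE_MS / 2 and len(app_instances) > 1:
            idle = app_instances.pop()                               # automated shut-down of spare capacity
            cloud.deregister_from_load_balancer(idle)
            cloud.deprovision_instance(idle)
        time.sleep(COOL_DOWN_SECONDS)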

 

All this should be transparent to the end-user. It’s the IT department that manages the instances created, including the automated or manual shut-down of instances when they are no longer needed.


Service Provisioning


Service provisioning requires a much greater adaptation of the application. Indeed, you now expect to automatically perform a number of tasks typically handled manually by the service desk. So the first point to check is whether a way exists to initiate the configuration transactions via APIs or any other means. Can the appropriate information be transferred to the application? Is it possible to get the actual status of the request back at completion?

 

Indeed, to set up the service provisioning, you will have to create a number of workflows that automate the different processes required to configure a user, give him access to the environment, and so on.

 

When a business user requests the provisioning of a mailbox, for example, he will be asked to provide information. That information will then be automatically transferred to the application so the configuration can take place. In return, the application will provide the status of the transaction (succeeded or failed, and in the latter case preferably the reason for the failure), so the cloud platform can inform the user and retain the status and the information necessary to access the service once provisioned.
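To illustrate, here is a minimal sketch of such a provisioning workflow in Python. The mail_system client and its create_mailbox call are hypothetical stand-ins for whatever interface the target application actually exposes.

from dataclasses import dataclass

@dataclass
class ProvisioningResult:
    succeeded: bool
    detail: str  # reason for failure, or access details on success

def provision_mailbox(request, mail_system):
    """Turn a business user's request into a configured mailbox and report the outcome."""
    # 1. Collect the information the requester supplied in the self-service portal.
    user = request["user_name"]
    quota_mb = request.get("quota_mb", 2048)

    # 2. Hand the request to the application through its configuration interface.
    try:
        mailbox = mail_system.create_mailbox(user=user, quota_mb=quota_mb)
    except Exception as error:
        # 3a. Report the failure and its reason back to the cloud platform.
        return ProvisioningResult(False, f"Mailbox creation failed: {error}")

    # 3b. Report success and the details needed to access the new service.
    return ProvisioningResult(True, f"Mailbox ready at {mailbox.address}")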

 

What is important here is that the “services” delivered by the application are accessible. Companies often create web services to interface between the cloud environment and these applications, shielding users from changes made in the applications. Once the application is encapsulated this way, it can be transformed and made more cloud-friendly without users being aware of the changes, so you may want to consider such an approach if you plan to transform or re-architect your application (a minimal sketch of the idea follows below). Obviously some applications may have both characteristics (variability in workload and user configurations); in that case, both questions should be asked.
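As an illustration of that encapsulation, here is a minimal sketch of a thin web-service facade in Python, using Flask purely as an example framework; the call_legacy_add_user function stands in for however the existing application is actually invoked.

from flask import Flask, jsonify, request

app = Flask(__name__)

def call_legacy_add_user(name: str, department: str) -> bool:
    # Placeholder for the call into the legacy application; this is the only piece
    # that changes when the application behind the facade is re-architected.
    return True

@app.route("/users", methods=["POST"])
def create_user():
    """Stable endpoint the cloud platform calls, shielding it from legacy changes."""
    payload = request.get_json()
    ok = call_legacy_add_user(payload["name"], payload["department"])
    return jsonify({"status": "ok" if ok else "failed"})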


Should I start with cloud enabling legacy?


Having discussed the two key reasons why you might want to bring an existing application to the cloud, the question remains: should you start by taking an existing application and transforming it, or should you rather keep your legacy environments as they are and surround them with new functionality specifically developed for the cloud? Frankly, there is not one answer to this question. It really depends on where your application is in its lifecycle. Obviously, if you plan to replace that application in the foreseeable future, you may not want to take the time and effort to adapt it. On the other hand, if this is a critical application you plan to keep for quite a while, you probably would. Make sure of one thing, though: build any new functionality with cloud in mind.

Read more >

Urban Growth and Sustainability: Building Smart Cities with the Internet of Things





This is the first installment of a four part series on Smart Cities with Dawn Olsen (#1 of 4).

Click here to read blog #2

Click here to read blog #3

Click here to read blog #4


Not long ago, the human race hit a significant milestone. In 2009, for the first time in our history, more of us lived in urban areas than rural ones. It’s estimated that 54% of today’s global population lives in cities, and this figure is expected to rocket up to 66% by 2050. With this increase in city inhabitants, we’re quickly heading towards the “Megacity” era. Soon a city with a population of 10 million or more will seem typical. As these burgeoning metropolises drive industrial and financial growth on a global scale, powerful new economies are beginning to emerge and develop around the world.


Despite being financial powerhouses, cities can also generate their fair share of problems. For example, they consume two thirds of today’s available energy and other valuable resources, leaving the other third for the millions who still live in smaller settlements and rural areas. As urban populations get bigger, it is vital to make sure that our cities are ready to deal with more people, more traffic, more pollutants and more energy use in a scalable and sustainable way. In short, we need our cities to be smarter.


This is the challenge that gets me out of bed in the morning. I’m excited to be part of Intel’s smart cities initiative, which is focused on putting the Internet of Things (IoT) to use in any way that will benefit urban societies.



The IoT blocks that build smart cities may include anything from technical components like sensors that measure air quality or temperature, to end-to-end city management solutions that control traffic flow based on analysis of citywide congestion data. The combinations in which these blocks can be applied are almost limitless, and we are exploring innovative new applications to improve quality of life, cost efficiencies and environmental impact.  For example, the work Intel is undertaking with the City of San Jose, California, uses IoT technology to build more sustainable infrastructure, and the project has been recognized by the White House as part of its Smart America initiative. 



In this blog series (this is the first of four posts), I’ll be sharing my thoughts on some of the key areas in which we’re driving the smart cities of the future, based on innovative trials and deployments already completed, or going on now. My blog posts will cover three main areas:

  • Smart security and the evolving challenge of safeguarding our increasingly connected cities
  • Technology driving innovation in traffic and transport management
  • Sustainable solutions to the problem of rising air pollution.


Check back soon for my next post (next Thursday – 1/15/2015), which will explore how Intel’s smart city initiatives can help enhance citizens’ safety and security. I’ll give you a clue: it’s not by just rolling out more CCTV cameras.


Let’s get smart.


This is the first installment of a four part series on Smart Cities with Dawn Olsen (#1 of 4).

Click here to read blog #2

Click here to read blog #3


To continue the conversation, let’s connect on Twitter @DawnOlsen


Dawn Olsen

Global Sales Director

Government Enterprise, Intel

Read more >

Improving Air and Water Quality in Smart Cities

This is the fourth and final installment of a mini-blog series on Smart Cities with Dawn Olsen (#4 of 4).

Click here to read blog #1

Click here to read blog #2

Click here to read blog #3


In this blog series, we’ve been looking at the ways in which Intel’s smart cities initiatives are using the Internet of Things (IoT) to address the challenges faced by growing cities.


We first covered smart security, one of the most important areas of IoT technology for city authorities policing events and managing crowds. Then we looked at how Intel is implementing smart transport to alleviate congestion and improve traffic flow, which is especially important for emergency services routing. In this final post, we cover a topic that goes hand in hand with the issues of overcrowding and congestion in densely populated urban areas: the challenge of minimizing pollution and improving air and water quality in our cities.


There are many regions today where pollution is a well-documented problem. In China, for example, blankets of smog are a familiar sight over its metropolises. In New Zealand, water quality is a growing concern, with fresh water supplies susceptible to pollutants such as sediment and pathogens. Furthermore, regulations have been introduced to limit the use of wood burners, which release polluting particles into the air.

While identifying pollution problems is the easy part, taking timely, informed action to improve air and water quality is the real challenge that local authorities face.

 

Intel has invested in smart city initiatives to build end-to-end solutions that utilize a full range of IoT tools.  City authorities can now monitor pollution levels by analyzing sensor data and automating real-time responses to the changing environment – all from a single management system. 
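As a rough illustration (not a description of any particular Intel system), a minimal sketch of that monitor-and-respond loop might look like this, assuming an invented sensor feed and an example threshold:

```python
# Hypothetical sketch: flag monitoring stations whose particulate readings exceed a limit.
# The sensor feed, limit value, and responses are illustrative assumptions only.

PM25_LIMIT = 25.0  # micrograms per cubic metre, example threshold


def evaluate_station(station_id, pm25_reading):
    """Return the action a management system might take for one monitoring station."""
    if pm25_reading > PM25_LIMIT:
        return f"ALERT {station_id}: PM2.5 {pm25_reading:.1f} exceeds {PM25_LIMIT}"
    return f"OK {station_id}: PM2.5 {pm25_reading:.1f} within limit"


if __name__ == "__main__":
    sample_feed = {"station-north": 31.4, "station-river": 12.8}  # invented readings
    for station, reading in sample_feed.items():
        print(evaluate_station(station, reading))
```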


The recent investment made in the London-based Intel Collaborative Research Institute for Sustainable Connected Cities is helping lay the technological foundations for smart cities. For example, connected solutions can deliver greater efficiencies in cities like London where old utilities infrastructure is difficult to maintain. Systems that account for local water demand, combined with up-to-date weather data, enable authorities to adapt their water systems accordingly. This extends the lifetime of the infrastructure while minimizing the risk of leaks, flooding or contamination.
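A minimal sketch of that adaptation logic, using made-up demand figures and a hypothetical weather input, could look like the following:

```python
# Hypothetical sketch: combine local water demand with a weather forecast to adjust
# supply pressure on an ageing network. Figures and rules are illustrative assumptions.

def recommend_pressure(demand_litres_per_min, heavy_rain_expected):
    """Suggest a relative supply-pressure setting (1.0 = nominal)."""
    pressure = 1.0
    if demand_litres_per_min < 400:
        pressure -= 0.1   # ease pressure off-peak to reduce stress on old pipes
    if heavy_rain_expected:
        pressure -= 0.1   # lower pressure further when flooding risk is elevated
    return round(pressure, 2)


print(recommend_pressure(demand_litres_per_min=350, heavy_rain_expected=True))  # 0.8
```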


In cities like Dublin, Ireland, Intel is working on several pilot programs to improve air quality. These initiatives have the potential to connect the full spectrum of devices through the IoT. Just imagine: when an individual with asthma is planning her morning jog, she can now use an app to find the route with the best air quality. If pollution levels rise along the way, an alert can be triggered and a new route can be automatically suggested – all from a handheld device!
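As a rough sketch of how such an app might choose and re-suggest a route, with invented route names and air-quality scores:

```python
# Hypothetical sketch: pick the jogging route with the best current air quality and
# suggest a new one if conditions deteriorate. Routes and AQI values are invented.

def best_route(routes):
    """Return the route name with the lowest air-quality index (lower is cleaner)."""
    return min(routes, key=routes.get)


routes = {"riverside": 42, "park_loop": 18, "city_centre": 67}  # example AQI values
chosen = best_route(routes)
print("Suggested route:", chosen)

# Later, a fresh reading shows pollution rising on the chosen route...
routes["park_loop"] = 80
if routes[chosen] > 50:  # example alert threshold
    print("Air quality alert - new suggestion:", best_route(routes))
```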


In addition to the tools and technology that power the IoT, education will be crucial to the continued success of these programs. We at Intel are committed for the long term and know that even the best smart city solutions are only effective with the proper support before, during and after implementation. From training city authorities to make the most of their water management systems, to getting children involved on the ground (as in Christchurch, New Zealand, where school pupils conducted water quality tests on the Avon River), our focus is on providing the right support at every stage.


As our cities grow, so too does our responsibility to deal with the mounting pressures of more people, traffic and pollution. That’s the challenge Intel is helping city leaders to address through smart cities initiatives.  Let’s continue to work together to build more efficient and well-connected management systems for the smart cities of the future. After all, protecting our environment, safety and overall well-being is just the smart thing to do. 


To continue the conversation, let’s connect on Twitter @DawnOlsen


Dawn Olsen

Global Sales Director

Government Enterprise, Intel


This is the fourth and final installment of a mini-blog series on Smart Cities with Dawn Olsen (#4 of 4).

Click here to read blog #1

Click here to read blog #2

Click here to read blog #3

Read more >

What Does CIO Reporting Structure Mean for IT at Large?

A previous manager of mine used to say that structure follows strategy. So it seems logical to conclude that a business’s organizational structure contains significant insights about – and implications for – the role of IT within that company.

 

Gone is the traditional view of IT as a cost center, and with it the expectation that the CIO reports directly to the CFO. Every new reporting structure that emerges starts a new conversation about strategy and importance. For example, here are a few that I ran across on Twitter:

 

When the CIO reports to the CEO, IT has a chance at being a valuable part of the business.

— Scott W. Ambler (@scottwambler) September 18, 2014

 

#CIO reporting to the #CMO? It may be a hot trend but is the wrong strategic move! http://t.co/RxhartTnhx

— Jeffrey Fenter (@JeffreyFenter) March 9, 2014

 

If a CIO reports up into the CFO, the CFO must be willing to sacrifice finance risk to make systems risk the priority. Can that ever happen?

— Wes Miller (@getwired) September 26, 2014

With IT on its way to being seen as a driver, an enabler, and – most importantly – a partner of the business, it seems that the CIO’s natural evolution would be to report directly to the CEO. This relationship may solidify the business’s view of IT as a strategic differentiator – a segment of the business worthy of the CEO’s direct attention.

 

A Gartner report released this past October showed that CIOs are already pulling up a prominent seat at the proverbial table, with 41% reporting directly to their CEO.

 

This made me wonder – who do the CIOs of the readers of the Intel IT Peer Network and followers of the Intel IT Center (LinkedIn, Twitter, Google+, Facebook) report to? So we created a poll to discover whether this reporting trend extends to our community of IT leaders as well.

 

The results were interesting – the majority of our readers responded that their CIOs report directly to their CEO, while the traditional CIO/CFO model was cited as the second most common reporting structure.

 

[Chart: CIO reporting poll results]

To keep building a picture of the reporting-structure landscape, I’ve left the poll open for further votes – let me know who your CIO reports to, and I’ll check in again in a few months.

 

Connect with me in the comments below or on Twitter (@chris_p_intel) – I’d love to know how you view organizational structure and its impact on IT (or vice versa).

 

Does who the CIO reports to imply anything about the importance of the role or is it simply a meaningless line on an org chart?

Read more >

BI Does Not Guarantee Better Decisions, Only Better-Informed Decisions

In my blog, What is Business Intelligence? (BI), I talked about faster, better-informed decision making. I want to expand on these two key pieces. What does it mean when we say “faster” decision making? And why do we say “better-informed” decisions instead of “better decisions”?

 

Putting aside the semantic differences and nuances of meaning, these two concepts play a significant role in delivering BI solutions that can address both the urgency needed by business and the agility required by IT.

 

Moreover, exploring these concepts, regardless of your interpretation, will facilitate better engagement and produce tangible outcomes that benefit the entire organization, both in the short term and in the long run.

 

BI is all about speed when capitalizing on opportunities

 

Speed plays a more important role than ever before when capitalizing on opportunities, whether they contribute to growth or to the bottom line. Moreover, speed plays a role in every facet of business transactions – sometimes before a transaction is even completed – wherever business data is born or created. We no longer operate in a world of business PCs chained to desks and accessed only during working hours. Instead, mobility fuels global transactions that take place around the clock.

 

  • Speed dictates our options. For example, when the opportunity to enter a new market or adjust a marketing campaign’s variables presents itself, the need for insight grows exponentially as we weigh our options while the clock is ticking. As questions are formulated about both the past and the future, historical data provides only a starting point for decisions that will eventually shape our company’s future direction. This doesn’t happen only occasionally or on a fixed, predictable schedule that would allow us to prepare our teams.
  • Business operations are modeled to match the pace of change even if our existing infrastructure isn’t equipped to handle the heavy load and sudden curves of the road. We often hear the words “uncertainty” and “risk” when executives talk about trying to make business decisions.
  • The questions we ask today aren’t the same ones we asked last week, nor are they the questions we’ll ask next week. We can no longer deliver business information (forget insight for a moment) using traditional methods that may require long gestation periods. Hence, “faster” demands speed and agility, both of which require not only ability but also accuracy.

 

The speed at which we gain insight is critical because it allows us to take advantage of the opportunity at full throttle. Agility is essential because most of these opportunities or challenges don’t RSVP before they show up at our doorstep. They are identified by talented individuals who move organizations forward.

 

Ability is what makes this whole thing feasible under pressure. Besides, how can we even talk about insight if we don’t have the data or can’t obtain it to begin with? Accuracy—even if it isn’t perfect—plays a vital part because many times we can’t afford unforced errors that would otherwise defeat the purpose of data-driven decision making.

 

BI can make us better-informed decision makers—but it does not necessarily make us smarter

 

With the exception of those automated business processes, such as online credit card applications, many critical business decisions are still made by humans (despite what many sci-fi movies portray). Whether we’re developing a business strategy or executing that strategy, leaders and managers still want to rely on insight derived from solid business data. Though there are many factors that play into the decision-making process, ultimately our goal must be to employ data-based analysis and to look at the evidence using critical thinking.

 

Data has to be solid; otherwise it becomes “garbage in, garbage out.” Do we have a single version of the truth? Do we trust the data? Do we ask the right questions? We need to be ready and willing to admit that we may be wrong about our assumptions or conclusions if we can identify flaws (supported by reliable data) in our initial assessment. We must be willing to play devil’s advocate. And maybe we don’t just blink – we think twice when we can afford to. As the old saying goes, “measure twice and cut once.”

 

No matter how we get there, data alone will not suffice – we know that. All of these variables will inevitably shape not only the final decision we make, but also the path we take to arrive at it. History is filled with examples of leaders making “bad” decisions even with ample data to support the decision-making process.

 

Bottom Line

 

We may not be able to prevent all of the bad or flawed decisions, but we can promote a culture of data-driven decision making at all levels of our organization so that corporate data is seen as a strategic asset. Informed patients are able to make better-informed healthcare decisions. Informed consumers are able to make better-informed buying decisions. Likewise, BI should be a framework to enable “better-informed” decision making at all levels of an organization, while still allowing the final call to lie with us—the humans (at least for now).

 

Connect with me on Twitter (@KaanTurnali) and LinkedIn.

This story originally appeared on The Decision Factor.

Read more >

Mobile Allows Doctors to Answer, ‘How Did You Do This Week?’



Mobile devices and technology have allowed clinicians to gather patient data at the point of care, access vital information on the go, and untether from traditional wired health IT infrastructures. One hidden benefit of mobile capability is that doctors can gain access to data that analyzes their own performance.


In the video above, Jeff Zavaleta, MD, chief medical officer at Graphium Health and a practicing anesthesiologist in Dallas, shares his insight on how mobile devices offer a new opportunity for practitioners to self-evaluate, answer the question “How did you do this week?”, and see key performance indicators such as their average patient recovery times and on-time appointment starts.
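As a rough illustration of the kind of self-evaluation those indicators enable, here is a small sketch that computes two of them from sample case records; the field names and values are invented for this post, not Graphium Health’s actual data model:

```python
# Hypothetical sketch: compute weekly performance indicators from point-of-care records.
# Record fields and sample values are invented for illustration.

cases = [
    {"recovery_minutes": 42, "scheduled_start": "07:30", "actual_start": "07:30"},
    {"recovery_minutes": 55, "scheduled_start": "09:00", "actual_start": "09:12"},
    {"recovery_minutes": 38, "scheduled_start": "11:15", "actual_start": "11:14"},
]

average_recovery = sum(c["recovery_minutes"] for c in cases) / len(cases)
on_time_starts = sum(1 for c in cases if c["actual_start"] <= c["scheduled_start"])

print(f"Average patient recovery time: {average_recovery:.0f} minutes")
print(f"On-time appointment starts: {on_time_starts}/{len(cases)}")
```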

 

Watch the short video and let us know what questions you have about the future of mobile health IT and where you think it’s headed. How are you using mobile technology to improve your practice?

 

Also, be on the lookout for new blogs from Dr. Zavaleta, who will be a guest contributor to the Intel Health & Life Sciences Community.

 

Read more >

Checklist For Designing a New Server Room

Designing a new server room may initially seem a daunting task; there are, after all, many factors and standards to consider. However, setting up the space and equipment doesn’t have to be an ordeal as long as you plan in advance and make sure you have all the necessary items. Here’s a checklist to facilitate the design of your data center.


Spatial Specifications

  • Room should have no windows.
  • Ceilings, doors and walls should be sound-proofed.
  • Ceiling height should be nine feet.
  • Floor should be raised, with an anti-static surface.

 

Equipment Specifications

 

  • Computer racks should have a clearance of at least 42 inches.
  • All racks should have proper grounding and seismic bracing.
  • Computing equipment should have a maximum electrical intensity of 300 watts per square foot (a quick check is sketched after this list).
  • Server room should contain fire, smoke, water and humidity monitors.
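For the electrical-intensity item above, a quick back-of-the-envelope check might look like this; the room area and equipment wattage are example figures only:

```python
# Hypothetical sketch: check planned equipment load against the 300 W per square foot
# guideline from the checklist. Room size and equipment wattage are invented examples.

MAX_WATTS_PER_SQFT = 300


def power_density(total_equipment_watts, room_area_sqft):
    """Return watts per square foot for the planned room."""
    return total_equipment_watts / room_area_sqft


room_area = 400           # square feet (example)
equipment_watts = 90_000  # sum of nameplate ratings for all racks (example)

density = power_density(equipment_watts, room_area)
print(f"Planned density: {density:.0f} W/sq ft "
      f"({'within' if density <= MAX_WATTS_PER_SQFT else 'exceeds'} the guideline)")
```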

 

Cooling Specifications

 

  • Racks should be arranged in a hot-aisle/cold-aisle configuration.
  • Under floor cooling systems require a raised floor with a minimum height of 24 inches, with the ability to hold the weight of server racks and equipment.


Electrical Systems Specifications

  • Computer equipment and HVAC should have separate power panels.
  • No heat-generating support equipment should be housed in the server room.
  • Electrical systems should have an isolated ground, grounding grid and dedicated neutral.
  • Separate back-up power should be available for the data center.
  • The electrical system should have a shunt trip for purposes of emergency shutdown.

 

Data Center Resources has had a reputation for providing superior data center solutions since 2002. Our dedicated team understands that while the solution is important, it is only a part of the overall relationship with our clients. Responsiveness, after sale service, ease of purchasing and breadth of product offerings are other important factors, and we are committed to exceeding expectations in all of these areas. Our principals and project specialists each have several years of experience in providing technical environment solutions.  Contact our team today to find out how we can help you design a new server room.

Read more >