Recent Blog Posts

CitizenM Moves Towards the Hotel of the Future



I recently had an opportunity to discover how the citizenM* hotel in Amsterdam is pioneering a move towards the new digital hotel: using mobile devices to create better customer experiences, and consolidated analytics to understand its guests better.

 

The hotel's aim is to tackle a significant challenge in the hospitality industry: personalizing the service to the needs of each guest. The answer, it believes, is to replace the disparate systems it previously used for storing guest data with a single guest profile. As a result, the hotel will, in future, be able to remember your preferences for room temperature and TV channel. It will even know how you like your coffee.

 

Employees will have a tailored dashboard that provides the information they need for their role, such as whether a room is empty (for the cleaners) or what the guest’s favorite tipple is (for the bar staff).

Consumerization in Hotel Technology

 

The hospitality industry has often used tablets for employees, but a novel twist at citizenM is that each room will have a tablet guests can use to control the TV, windows, radio and lights from a single device.

 

It is one more example of how hotels worldwide are exploring ways to streamline the check-in process and improve the guest experience: Starwood is planning to use smartphones or tablets as digital room keys, for example, and Nine Zero is offering iris scanning for access to its penthouse suite.

Using the Cloud to Synchronize Data

 

The data will be stored in a Microsoft Azure* cloud using servers based on the Intel® Xeon® processor, with a local server used for backup should the internet connection drop. The idea is to use the cloud to synchronize customer data between hotels, so a hotel in London could remember your preferences from Amsterdam, for example.
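Neither citizenM nor Ireckon has published implementation details, but the pattern described here, writing guest-profile updates to a central cloud store and queuing them locally when connectivity drops, is straightforward to sketch. The Python below is a minimal, hypothetical illustration; the `cloud_client` object and the profile fields are assumptions for the example, not part of the actual IreckonU solution.

```python
import time
from collections import deque


class GuestProfileSync:
    """Push guest-profile updates to a central cloud store,
    falling back to a local queue when the connection drops."""

    def __init__(self, cloud_client):
        self.cloud_client = cloud_client   # hypothetical wrapper around the cloud profile API
        self.local_backlog = deque()       # updates held while offline

    def record_preference(self, guest_id, key, value):
        update = {"guest_id": guest_id, "key": key, "value": value, "ts": time.time()}
        try:
            self.cloud_client.upsert_profile(update)   # normal path: straight to the cloud
        except ConnectionError:
            self.local_backlog.append(update)          # offline: hold the update locally

    def flush_backlog(self):
        """Replay queued updates once connectivity returns."""
        while self.local_backlog:
            try:
                self.cloud_client.upsert_profile(self.local_backlog[0])
                self.local_backlog.popleft()
            except ConnectionError:
                break   # still offline; try again later
```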

Sharing Data on the Service Bus with IreckonU

 

The solution is called IreckonU* and was developed by Dutch software company Ireckon. IreckonU* is built on a highly scalable base layer, which consists of a service bus and middleware containing business-specific logic. All the systems plug into the service bus, from the website booking systems to the minibar, so they can all communicate effectively with each other. This architecture avoids the maintenance and support headaches usually associated with point-to-point integration, because each application only needs to connect to the bus (a minimal sketch of the pattern follows the list below). At the same time, all the applications will be able to access the full guest profile and update it, ensuring there is a complete picture of the guest. The solution includes several standard building blocks. They enable the hotel to:

 

  • Create dashboards for employees, from the CEO to the housekeeper.
  • Optimize reservation flows, room availability and status, housekeeping, payment and client interaction.
  • Provide a personalized guest experience in the hotel and with external services such as flight data.
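The IreckonU service bus itself is proprietary, so the sketch below only illustrates the general pattern the post describes: every system publishes and subscribes to guest-profile events on a shared bus instead of integrating point-to-point. It is written in Python, and the topic names and event payloads are invented for the example.

```python
from collections import defaultdict


class ServiceBus:
    """Minimal in-process publish/subscribe bus: each hotel system
    connects once to the bus rather than to every other system."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)


bus = ServiceBus()
guest_profile = {}   # the single shared guest profile the post describes

# The booking system and the in-room tablet both update the profile through the
# bus; an employee dashboard reacts to the same events.
bus.subscribe("guest.preference", lambda e: guest_profile.update({e["key"]: e["value"]}))
bus.subscribe("guest.preference", lambda e: print(f"dashboard: {e['key']} -> {e['value']}"))

bus.publish("guest.preference", {"key": "room_temperature", "value": 21})
bus.publish("guest.preference", {"key": "favourite_drink", "value": "gin and tonic"})
```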

Future Possibilities

 

IreckonU* also provides hotel and hospitality brands with a whole range of additional features out of the box, so expect to see some of these services start to appear in hotels in the not-too-distant future:

 

  • If you’re feeling peckish, you’ll be able to order room service through your tablet.
  • If you prefer, you will be able to use your own tablet instead of the one provided to control the room. To ensure great performance, each room will have its own private Wi-Fi network too.
  • You’ll be able to set your alarm according to your flight time, and the system will be able to let you sleep in later and reschedule your taxi if your flight is delayed. Now that’s what I call service!
  • You’ll even be able to use your phone to check in, and unlock your room door using an app. This will avoid any delays at the reception desk on arrival, and will spare guests the need to carry a separate key with them.

 

Watch the video to see the whole thing through the eyes of a guest. It provides a real insight into how hotel brands like citizenM and software company Ireckon are approaching the challenges in today’s hospitality industry. I’d love to hear your thoughts on the hotel of the future in the comments below.

 

*Other names and brands may be claimed as the property of others.

 

Jane Williams

Find me on LinkedIn
Keep up with me on Twitter

Check out my previous posts

Read more >

INOX Communication Boosts the Shopping Experience With Robust Intel IoT Retail Solutions

The rapid evolution of retail, from digital signage to virtual shopping, wouldn’t be possible without collaboration, optimism, and futuristic IoT envisioning. In this guest blog post, INOX Communication Founder and CEO Lats Kladny reveals how envisioning the future of the … Read more >

The post INOX Communication Boosts the Shopping Experience With Robust Intel IoT Retail Solutions appeared first on IoT@Intel.

Read more >

Top Tweets – 3D Avatars, Fast JavaScript and Twitter for Developers

Follow Gael on Twitter: @GaelHof. Subscribe to the IntelDeveloperZone Subreddit. My Google+ profile. Check out my Slideshare profile. I was just going through my Twitter analytics and thought I would share what topics my followers are the most interested in. While I … Read more >

The post Top Tweets – 3D Avatars, Fast JavaScript and Twitter for Developers appeared first on Intel Software and Services.

Read more >

Where Are You on the Road to Software-Defined Infrastructure?

In enterprise IT and service provider environments these days, you’re likely to hear lots of discussion about software-defined infrastructure. In one way or another, everybody now seems to understand that IT is moving into the era of SDI.

 

There are good reasons for this transformation, of course. SDI architectures enable new levels of IT agility and efficiency. When everything is managed and orchestrated in software, IT resources, including compute, storage, and networking, can be provisioned on demand and automated to meet service-level agreements and the demands of a dynamic business.

 

For most organizations, the question isn’t, “Should we move to SDI?” It’s, “How do we get there?” In a previous post, I explored this topic in terms of a high road that uses prepackaged SDI solutions, a low road that relies on build-it-yourself strategies, and a middle road that blends the two approaches together.

 

In this post, I will offer up a maturity-model framework for evaluating where you are in your journey to SDI. This maturity model has five stages in the progression from traditional hard-wired architecture to software-defined infrastructure. Let’s walk through these stages.

 

Standardized

 

At this stage of maturity, the IT organization has standardized and consolidated servers, storage systems, and networking devices. Standardization is an essential building block for all that follows. Most organizations are already here.

 

Virtualized

 

By now, most organizations have leveraged virtualization in their server environments. While enabling a high level of consolidation and greater utilization of physical resources, server virtualization accelerates service deployment and facilitates workload optimization. The next step is to virtualize storage and networking resources to achieve similar gains.

 

Automated

 

At this stage, IT resources are pooled and provisioned in an automated manner. In a step toward a cloud-like model, automation tools enable the creation of self-service provisioning portals that, for example, allow a development and test team to provision its own infrastructure, moving the organization closer to frictionless IT.
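No particular automation product is implied here, but the essence of this stage can be sketched: a self-service request is checked against a shared resource pool and granted without a human in the loop. A minimal Python illustration, with invented capacities:

```python
class ResourcePool:
    """Toy pool of compute capacity that a self-service portal
    can draw from without manual intervention."""

    def __init__(self, total_vcpus, total_gb_ram):
        self.free_vcpus = total_vcpus
        self.free_gb_ram = total_gb_ram

    def provision(self, team, vcpus, gb_ram):
        """Grant the request if capacity exists; otherwise reject it."""
        if vcpus <= self.free_vcpus and gb_ram <= self.free_gb_ram:
            self.free_vcpus -= vcpus
            self.free_gb_ram -= gb_ram
            return {"team": team, "vcpus": vcpus, "gb_ram": gb_ram, "status": "provisioned"}
        return {"team": team, "status": "rejected: insufficient capacity"}


pool = ResourcePool(total_vcpus=256, total_gb_ram=1024)
print(pool.provision("dev-test", vcpus=16, gb_ram=64))   # the test team provisions its own infrastructure
```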

 

Orchestrated

 

At this higher stage of IT maturity, an orchestration engine optimizes the allocation of data center resources. It collects hardware platform telemetry and uses that information to place applications on the best-suited servers: those with features that accelerate the workload, located in approved locations, and offering the assigned levels of trust. The orchestration engine acts as an IT watchdog that spots performance issues and takes remedial action, and then learns from those events to continue to meet or exceed the customer’s needs.
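Commercial orchestration engines implement this in far more depth, but the placement decision itself can be sketched: filter candidate servers on location, trust level, and acceleration features, then pick the least-loaded one based on telemetry. The attributes and values below are assumptions for illustration only.

```python
def place_workload(workload, servers):
    """Pick the server that best satisfies the workload's requirements,
    using platform telemetry plus feature, location, and trust checks."""
    candidates = [
        s for s in servers
        if s["location"] in workload["approved_locations"]
        and s["trust_level"] >= workload["required_trust"]
        and workload["needs_feature"] in s["features"]
    ]
    if not candidates:
        raise RuntimeError("no server meets the workload's constraints")
    # Prefer the least-loaded eligible server (the telemetry-driven choice).
    return min(candidates, key=lambda s: s["cpu_utilization"])


servers = [
    {"name": "node-a", "location": "eu-west", "trust_level": 2,
     "features": {"aes-ni"}, "cpu_utilization": 0.72},
    {"name": "node-b", "location": "eu-west", "trust_level": 3,
     "features": {"aes-ni", "qat"}, "cpu_utilization": 0.35},
]
workload = {"approved_locations": {"eu-west"}, "required_trust": 3, "needs_feature": "qat"}
print(place_workload(workload, servers)["name"])   # -> node-b
```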

 

SLA Managed

 

At this ultimate stage—the stage of the real-time enterprise—an organization uses IT service management software to maintain targeted service levels for each application in a holistic manner. Resources are automatically assigned to applications to maintain SLA compliance without manual intervention. The SDI environment makes sure the application gets the infrastructure it needs for optimal performance and compliance with the policies that govern it.
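In the same illustrative spirit, the SLA Managed stage reduces to a feedback loop: measure each application's service level, compare it against its target, and adjust resources automatically. A compact Python sketch, with invented metrics and a placeholder scaling hook:

```python
def enforce_slas(apps, scale):
    """One pass of an SLA control loop: compare measured latency with the
    target and ask the scaler to add or remove capacity accordingly."""
    for app in apps:
        if app["p99_latency_ms"] > app["sla_latency_ms"]:
            scale(app["name"], delta=+1)    # SLA at risk: grow the application
        elif app["p99_latency_ms"] < 0.5 * app["sla_latency_ms"] and app["replicas"] > 1:
            scale(app["name"], delta=-1)    # comfortably within SLA: reclaim capacity


apps = [
    {"name": "billing", "p99_latency_ms": 180, "sla_latency_ms": 150, "replicas": 3},
    {"name": "reports", "p99_latency_ms": 40,  "sla_latency_ms": 200, "replicas": 4},
]
enforce_slas(apps, scale=lambda name, delta: print(f"scale {name} by {delta:+d}"))
```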

 

In subsequent posts, I will take a closer look at the Automated, Orchestrated, and SLA Managed stages. For now, the key is to understand where your organization falls in the SDI maturity model and what challenges need to be solved in order to take this journey. This understanding lays the groundwork for the development of strategies that move your data center closer to SDI—and the data center of the future.

Read more >

Blurred Boundaries: Hidden Data Center Savings

Every disruptive technology in the data center forces IT teams to rethink the related practices and approaches. Virtualization, for example, led to new resource provisioning practices and service delivery models.

 

Cloud technologies and services are driving similar change. Data center managers have many choices for service delivery, and workloads can be more easily shifted between the available compute resources distributed across both private and public data centers.

 

Among the benefits stemming from this agility, new approaches for lowering data center energy costs have many organizations considering cloud alternatives.

 

Shifting Workloads to Lower Energy Costs

 

Every data center service and resource has an associated power and cooling cost. Energy, therefore, should be a factor in capacity planning and service deployment decisions. But many companies do not leverage all of the energy-related data available to them, and so struggle to make sense of the information being generated by servers, power distribution, airflow and cooling units, and other smart equipment.

 

That’s why holistic energy management is essential to optimizing power usage across the data center. IT and facilities teams can rely on user-friendly consoles, such as graphical thermal and power maps of the data center, to gain a complete picture of the patterns that correlate workloads and activity levels with power consumption and dissipated heat. Specific services and workloads can also be profiled, and logged data builds a historical database for establishing and analyzing temperature patterns. Having one cohesive view of energy consumption also reduces the need to rely on less accurate theoretical models, manufacturer specifications, or manual measurements that are time consuming and quickly out of date.

 

A Case for Cloud Computing

 

This makes the case for cloud computing as a means to manage energy costs. Knowing how workload shifting will decrease the energy requirements for one site and increase them for another makes it possible to factor in the different utility rates and implement the most energy-efficient scheduling. Within a private cloud, workloads can be mapped to available resources at the location with the lowest energy rates at the time of the service request. Public cloud services can be considered, with the cost comparison taking into account the change to the in-house energy costs.
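To make that scheduling decision concrete, here is a minimal sketch of the idea: estimate what the workload would cost in energy at each site at request time and place it wherever that cost is lowest. The rates and consumption figure are invented; a real scheduler would pull them from utility feeds and the energy management platform.

```python
def cheapest_site(sites, workload_kwh):
    """Return the site where running the workload costs least,
    given each site's current utility rate in dollars per kWh."""
    costs = {name: rate * workload_kwh for name, rate in sites.items()}
    best = min(costs, key=costs.get)
    return best, costs[best]


# Illustrative rates only.
sites = {"oregon": 0.07, "virginia": 0.11, "frankfurt": 0.19}
site, cost = cheapest_site(sites, workload_kwh=420)
print(f"run in {site} for an estimated ${cost:.2f}")
```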

 

From a technology standpoint, any company can achieve this level of visibility and use it to take advantage of the cheapest energy rates across its data center sites. Almost every data center is tied to at least one other site for disaster recovery, and distributed data centers are common for a variety of reasons. Add to this scenario all of the domestic and offshore regions where Infrastructure-as-a-Service is booming, and businesses have the opportunity to tap into global compute resources that leverage lower-cost power, and into areas where infrastructure providers can pass along cost savings from government subsidies.

 

Other Benefits of Fine-Grained Visibility

 

For the workloads that remain in the company’s data centers, increased visibility also arms data center managers with knowledge that can drive down the associated energy costs. Energy management solutions, especially those that include at-a-glance dashboards, make it easy to identify idle servers. Since these servers still draw approximately 60 percent of their maximum power requirements, identifying them can help adjust server provisioning and workload balancing to drive up utilization.
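As a rough illustration of that dashboard logic, the sketch below flags servers that are doing almost no work yet are still drawing a large share of their rated maximum power. The thresholds and figures are assumptions, not values from any specific energy management product.

```python
def find_idle_servers(servers, util_threshold=0.05, power_fraction=0.5):
    """Flag servers that are nearly idle yet still drawing a large
    share of their maximum power."""
    idle = []
    for s in servers:
        drawing = s["watts"] / s["max_watts"]
        if s["cpu_utilization"] < util_threshold and drawing >= power_fraction:
            idle.append((s["name"], round(drawing * 100)))
    return idle


servers = [
    {"name": "db-01",  "cpu_utilization": 0.62, "watts": 410, "max_watts": 500},
    {"name": "web-17", "cpu_utilization": 0.02, "watts": 310, "max_watts": 500},  # idle but burning power
]
print(find_idle_servers(servers))   # -> [('web-17', 62)]
```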

 

Hot spots can also be identified. Knowing which servers or racks are consistently running hot can allow adjustments to the airflow handlers, cooling systems, or workloads to bring the temperature down before any equipment is damaged or services disrupted.

 

Visibility of the thermal patterns can be put to use for adjusting the ambient temperature in a data center. Every degree that temperature is raised equates to a significant reduction in cooling costs. Therefore, many data centers operate at higher ambient temperatures today, especially since modern data center equipment providers warrant equipment for operation at the higher temperatures.

 

Some of the same energy management solutions that boost visibility also provide a range of control features. Thresholds can be set to trigger notification and corrective actions in the event of power spikes, and can even help identify the systems that will be at greatest risk in the event of a spike. Those servers operating near their power and temperature limits can be proactively adjusted, and configured with built-in protection such as power capping.

 

Power capping can also provide a foundation for priority-based energy allocations. The capability protects mission-critical services, and can also extend battery life during outages. Based on knowledge extracted from historical power data, capping can be implemented in tandem with dynamic adjustments to server performance. Lowering clock speeds can be an effective way to lower energy consumption, and can yield measurable energy savings while minimizing or eliminating any discernable degradation of service levels.
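The control interfaces are vendor-specific, but the threshold-and-cap behavior described above can be sketched generically: when a server crosses its power threshold, raise an alert and apply a cap sized to the priority of its workload. The cap values and priorities below are assumptions for illustration.

```python
def apply_power_policy(server, threshold_watts, caps_by_priority):
    """If the server exceeds its power threshold, alert and return the cap
    (in watts) appropriate to its workload priority; otherwise take no action."""
    if server["watts"] <= threshold_watts:
        return None
    cap = caps_by_priority[server["priority"]]
    print(f"ALERT: {server['name']} at {server['watts']} W, capping to {cap} W")
    return cap


caps_by_priority = {"mission-critical": 450, "batch": 300}   # illustrative caps
apply_power_policy({"name": "etl-09", "watts": 480, "priority": "batch"},
                   threshold_watts=400, caps_by_priority=caps_by_priority)
```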

 

Documented use cases for real-time feedback and control features such as thresholds and power capping prove that fine-grained energy management can yield significant cost reductions. Typical savings of 15 to 20 percent of the utility budget have been measured in numerous data centers that have introduced energy and temperature monitoring and control.

 

Understand and Utilize Energy Profiles

 

As the next step in the journey that began with virtualization, cloud computing is delivering on its promises: more data center agility, centralized management that lowers operating expenses, and cost-effective support for the needs of fast-changing businesses.

 

With an intelligent energy management platform, the cloud also positions data center managers to more cost-effectively assign workloads to leverage lower utility rates in various locations. With energy prices remaining at historically high levels and no relief in sight, this provides a very compelling incentive for building out internal clouds or starting to move some services out to public clouds.

 

Every increase in data center agility, whether from earlier advances such as virtualization or the latest cloud innovations, emphasizes the need to understand and utilize energy profiles within the data center. Ignoring the energy component of the overall cost can hide a significant operating expense from the decision-making process.

Read more >

Increasing Scalability and Cost-Effectiveness for InterSystems Caché with the Intel® Xeon® Processor E7 v3 Family

Healthcare systems are coping with an unprecedented level of change. They’re managing a new regulatory environment, a more complex healthcare ecosystem, and an ever-increasing demand for services—all while facing intense cost pressures.

 

These trends are having a dramatic impact on EMR systems and healthcare databases, which have to maintain responsiveness even as they handle more concurrent users, more data, more diverse workflows, and a wider range of application functionality.

 

As Intel prepared to introduce the Intel® Xeon® processor E7 v3 family, we worked with engineers from Epic and InterSystems to ensure system configurations that would provide robust, reliable performance. InterSystems and VMware were also launching their next-generation solutions, so the test team ran a series of performance tests pairing the Intel Xeon processor E7-8890 v3 with InterSystems Caché 2015.1 and a beta version of VMware vSphere ESXi 6.0.

                                

The results were impressive. “We saw the scalability of a single operational database server increase by 60 percent,” said Epic senior performance engineer Seth Hain. “With these gains, we expect our customers to scale further with a smaller data center footprint and lower total cost of ownership.” Those results were also more than triple the end-user database accesses per second (global references or GREFs) achieved using the Intel® Xeon® processor E7-4860 with Caché® 2011.1.

 


 

These results show that your healthcare organization can use the Intel Xeon processor E7 v3 family to implement larger-scale deployments with confidence on a single, scale-up platform.

 

In addition, if you exceed the vertical scalability of a single server, you can use InterSystems Caché’s Enterprise Cache Protocol (ECP) to scale horizontally. Here again, recent benchmarks show great scalability. A paper published earlier this year reported more than a threefold increase in GREFs for horizontal scalability compared to previous-generation technologies.

 

This combination of outstanding horizontal and vertical scalability—in the cost-effective environment of the Intel® platform—is exactly what is needed to meet rising demands and create a more agile, adaptable, and affordable healthcare enterprise.

                                                                              

What will these scalability advances mean for your healthcare IT decision makers and data center planners? How will they empower your organization to deliver outstanding patient care and enhance efficiency? I hope you’ll read the whitepapers and share your thoughts. And please keep in mind: Epic uses many factors, along with benchmarking results, to provide practical sizing guidelines, so talk to your Epic system representative as you develop your scalability roadmap.

 

Read the whitepaper about vertical scalability with the Intel Xeon processor E7 v3.

 

Read the whitepaper about horizontal scalability with Intel Xeon processors.

 

Join and participate in the Intel Health and Life Sciences Community

 

Follow us on Twitter: @IntelHealth, @IntelITCenter, @InterSystems, @vmwareHIT

 

Steve Leibforth is a Strategic Relationship Manager at Intel Corporation

Read more >

Network World Article Highlights Advances in iWARP Specification

The industry continues to advance the iWARP specification for RDMA over Ethernet, first ratified by the Internet Engineering Task Force (IETF) in 2007.

 

This article in Network World, “iWARP Update Advances RDMA over Ethernet for Data Center and Cloud Networks,” co-authored by me and Wael Noureddine of Chelsio Communications, describes two new extensions that have been added to help software developers of RDMA code by aligning iWARP more tightly with the RDMA technologies that are based on the InfiniBand network and transport, i.e., InfiniBand itself and RoCE. By bringing these technologies into alignment, we move closer to the OpenFabrics Alliance’s goal that application developers need not concern themselves with which of these is the underlying network technology: RDMA will “just work” on all of them.

 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others.

Read more >

A Bucket of Wings: A Case Study of Better-Informed Decisions

In my blog Use Data To Support Arguments, Not Arguments To Support Data, I articulated how better-informed decisions are typically made and the role that business intelligence (BI) should play. Shortly after I wrote the blog, I experienced a real-life event that clearly illustrates three main phases of “data-entangled decisions.”

 

Since my family likes to take a day off from cooking on Fridays, we recently visited the deli of our favorite organic grocery store. At the take-out bar, I noticed an unusually long line of people under a large sign reading, “In-House Made Wing Buckets. All You Can Fill. On Sale for $4.99, Regular $9.99.” Well, I love wings and couldn’t resist the temptation to get a few.

 

The opportunity was to add wings (one of my favorite appetizers) to my dinner. But instead of using the special wings bucket, I chose the regular salad bar container, which was priced at $8.99 per pound regardless of the contents. I reasoned that the regular container was an easier-to-use option (shaped like a plate) and a cheaper option (since I was buying only a few wings). My assumptions about the best container to use led to a split-second decision—I “blinked” instead of “thinking twice.”

 

Interestingly, a nice employee saw me getting the wings in the regular container and approached me. Wary of my reaction, he politely reminded me of the sale and pointed out that I might pay more if I used the regular container, because the wing bucket had a fixed price (managed risk).

 

Although at first this sounded reasonable, when I asked if it would weigh enough to result in a higher cost, he took it to one of the scales behind the counter and discovered it was less than half a pound. This entire ordeal took less than 30 seconds and now I had the information I needed to make a better-informed decision.

 

This clinched it, because now two factors were in my favor. I knew that less than half a pound at the regular price of $8.99 per pound came to under $4.50, which was cheaper than the $4.99 fixed-price bucket option. And I knew that they would deduct the weight of the regular deli container at the register, resulting in an even lower price. I ended up paying $4.02.

 

This everyday event provides a good story to demonstrate the three phases as they relate to the business of better-informed decisions and the role of BI—or data in general.

 

Phase 1: Reaction

When the business opportunity (wing purchase) presented itself, I made some assumptions with limited data and formed my preliminary conclusion. If it hadn’t been for the store employee, I would have proceeded to the cash register ignorant of all the data. Sometimes in business, we tend to do precisely the same thing: we either don’t validate our initial assumptions, or we make a decision based on our preliminary conclusions.

 

Phase 2: Validation

By weighing the container, I was able to obtain additional data and validate my assumptions so I could quickly take advantage of the business opportunity, which is exactly what BI is supposed to do. With that data, I was able to conclude with a great degree of confidence that I had chosen the right approach and mitigated the risk. This is also typical of how BI can shed more light on many business decisions.

 

Phase 3: Execution

I made my decision by taking into account reliable data to support my argument, not arguments to support data. I was able to do this because I (as the decision maker) had an interest in relying on data and the data I needed was available to me in an objective form (use of the scale). This allowed me to eliminate any false personal judgments (like my initial assumptions or the employee’s recommendation).

  • From the beginning, I could have disregarded the employee’s warning or simply not cared much about the final price. If that had been my attitude, then no data or BI tool would have made a difference in my final decision. And I might have been wrong.
  • On the other hand, if I had listened to the initial argument by that nice employee without backing it up with data, I would have been equally wrong. I would have made a bad decision based on what appeared to be a reasonable argument that was actually flawed.
  • When I insisted on asking the question that would validate the employee’s argument, I took a step that is the business equivalent of insisting on more data because we may not have enough to make a decision.
  • By resorting to an objective and reliable method (using the scale), I was able to remove personal judgments.

 

In 20/20 Hindsight

Now, I realize that business decisions are never this simple. Organizations’ risk is likely measured in the millions of dollars, not cents. And sometimes we don’t have the luxury of finding objective tools (such as the scale) in time to support our decision making. However, I believe that many business decisions mirror the same sequences.

 

Consider the implications if this were a business decision that sent $100 in the wrong direction. Now simply assume that these types of less-informed or uninformed decisions were made once a week throughout the year by 1,000 employees: at $100 × 52 weeks × 1,000 employees, the impact would be more than $5 million.

 

Hence, the cost to our organization increases as:

  • The cost of the error rises
  • Errors are made more frequently
  • The number of employees making the error grows

 

Bottom Line

Better-informed decisions start and end with leadership that is keen to promote the culture of data-driven decision making. BI, if designed and implemented effectively, can be the framework that enables organizations of all sizes to drive growth and profitability.

 

What other obstacles do you face in making better-informed decisions?

 

Connect with me on Twitter (@KaanTurnali) and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

Read more >