Lester Memmott on Context Aware Computing

Last week the Intel Developer Forum (IDF) was held in Shanghai, China, and one of the key messages was Carry Small, Live Large (CSLL), Intel's vision for future mobile computers. In a nutshell, it is the vision of more powerful small form factor devices that are more aware of your environment and offer a more personal interaction. Such a device has rich computing capabilities, telephony, media, gaming and the Internet among them, but that isn't the limit: it is also a platform for creating new kinds of applications and interactions. For example, imagine the new kinds of social networking applications that could be built with such a device. Intel Senior Fellow Kevin Kahn wrote a great blog about CSLL.

On occasion, when I explain CSLL to someone new to the topic, I get questions about the need for a full-sized, "usable" keyboard or better mouse input. Another common concern is that with such a little display you can't show pictures or movies to friends, or a PowerPoint presentation to colleagues, which is the perfect segue into Dynamic Composable Computing (DCC). DCC aims to let the mobile device connect dynamically and wirelessly to keyboards, mice, displays and audio systems, to name just a few. For example, you walk into a friend's home wanting to show a group of people pictures and music from your last vacation. You "borrow" your friend's large, wall-mounted flat-screen TV to show the pictures, the stereo system to play the music, and a keyboard or remote to step forward and back between pictures and videos, all wirelessly. It happens seamlessly and easily because the mobile device dynamically discovers the devices available and composes with them. Roy Want recently wrote a blog on this very subject that has more information.
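The post doesn't spell out DCC's programming interfaces, but the discover-then-compose flow reads roughly like the sketch below. Every name here (`Device`, `discover_devices`, `compose`) is a hypothetical illustration, not Intel's actual DCC API:

```python
# Hypothetical sketch of the discover-then-compose flow described
# above. None of these names come from Intel's DCC software; they
# only illustrate the idea of borrowing nearby devices wirelessly.

from dataclasses import dataclass
from typing import List


@dataclass
class Device:
    name: str
    kind: str  # e.g. "display", "audio", "input"


def discover_devices() -> List[Device]:
    """Stand-in for wireless discovery of composable devices."""
    return [
        Device("Wall-mounted flat-screen TV", "display"),
        Device("Stereo system", "audio"),
        Device("Wireless keyboard", "input"),
    ]


def compose(handheld: str, device: Device) -> None:
    """Stand-in for establishing a wireless session with a device."""
    print(f"{handheld} is now using {device.name} ({device.kind})")


# Walk into the room, discover what's available, and borrow it all.
for device in discover_devices():
    compose("my mobile device", device)
```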

To go a step further, think about a case where there are lots of devices and services available for dynamic composition. Imagine that the coffee shops and cafés across the globe that today provide WiFi service also started offering tables with large LCD screens at one end for sharing pictures and videos, 7.1 surround sound at each booth for sharing music, and full-size keyboards and mice. When I walk into such an environment, it could be a laborious task to discover all the available DCC devices and then connect to the ones I want to use based on the table I'm sitting at.

My team, the Software Pathfinding and Innovation group (SPI) within Intel's Software and Solutions Group, is working on this problem, along with many others, through our research into a general-use Context Aware Computing (CAC) framework and engine. In our research we've designed and built a running prototype of a context aware computing engine. The engine provides a plug-in architecture for data collection (the plug-ins are called Providers) from a variety of source types. The data schema is also extensible, allowing third parties to enhance and extend it as needed. Internally, a data collection mechanism known as the Aggregator makes the data readily available to any number of data consumers, and a programmable Analyzer processes the context data to draw higher-level conclusions from it. Finally, a set of client APIs gives applications access to both the raw context data and the analyzed data through poll-based and event-based methods. To circle back to the coffee shop example above, this context engine can suggest which booth to sit at based on the user's preferences for display size and sound equipment, the fact that the user is with friends (and thus likely to share media content), nearness to windows and so forth. It can also carry out the composition actions once the user makes a decision.
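To make that architecture concrete, here is a minimal sketch of how Providers, the Aggregator, the Analyzer and the poll/event client APIs could fit together. The post doesn't publish the real interfaces, so all class and method names below are assumptions for illustration only:

```python
# A minimal sketch of the context engine described above: Provider
# plug-ins feed an Aggregator, an Analyzer draws higher-level
# conclusions, and clients consume data by polling or by events.
# All names here are assumptions for illustration, not Intel's APIs.

from typing import Callable, Dict, List


class Provider:
    """Plug-in that collects raw context data from one source type."""

    def collect(self) -> Dict:
        raise NotImplementedError


class Aggregator:
    """Gathers data from all registered Providers for any consumer."""

    def __init__(self) -> None:
        self._providers: List[Provider] = []
        self._listeners: List[Callable[[Dict], None]] = []
        self._context: Dict = {}

    def register(self, provider: Provider) -> None:
        self._providers.append(provider)

    def poll(self) -> Dict:
        """Poll-based client API: fetch the latest aggregated context."""
        return dict(self._context)

    def subscribe(self, callback: Callable[[Dict], None]) -> None:
        """Event-based client API: be notified on every update."""
        self._listeners.append(callback)

    def update(self) -> None:
        """Collect from every provider, then notify subscribers."""
        for provider in self._providers:
            self._context.update(provider.collect())
        for callback in self._listeners:
            callback(self._context)


class Analyzer:
    """Programmable stage that turns raw context into conclusions."""

    def __init__(self, aggregator: Aggregator,
                 rule: Callable[[Dict], Dict]) -> None:
        self.rule = rule
        self.conclusions: Dict = {}
        aggregator.subscribe(self._on_update)

    def _on_update(self, context: Dict) -> None:
        self.conclusions = self.rule(context)
```

The key design point the post describes is the separation of concerns: Providers know only their own source, the Aggregator knows nothing about sources or rules, and the Analyzer is just a programmable function over whatever context happens to be available.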

For IDF, I, along with members of my team, developed a Context Aware Composition demo called "Automated Conference Room Composition," which combined features of the context engine with the composition engine from Roy Want's team mentioned above. Sri Sridharan, our group's marketing guru, showed the demo, which used the composition engine to discover and compose with conference room display devices (an LCD projector in this case). The context engine developed by my team was programmed to automatically compose with the projector if the following was true: the LCD projector is available AND I'm the meeting owner AND I'm physically in the conference room AND I've sat down for the meeting. This was done through a variety of plug-in providers: one interacted with the composition engine to determine which projectors were available, another inspected my Outlook calendar to see if I had a meeting scheduled, another (simulated for the demo) indicated my location (at my desk vs. in the conference room), and the last communicated over Bluetooth with a Multiple Sensor Platform (MSP) device to determine whether I was still walking or had sat down to start the meeting. Once these criteria were met, the context engine automatically composed with the projector and started showing my presentation.
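Building on the hypothetical engine sketch above, the demo's rule could be encoded roughly like this. Again, the provider classes and context keys are illustrative assumptions, not the demo's actual code:

```python
# Hypothetical encoding of the demo's composition rule, building on
# the Provider/Aggregator/Analyzer sketch above. The provider classes
# and context keys are illustrative, not the demo's actual code.


class ProjectorProvider(Provider):
    def collect(self) -> Dict:
        # The real plug-in queried the composition engine for displays.
        return {"projector_available": True}


class CalendarProvider(Provider):
    def collect(self) -> Dict:
        # The real plug-in inspected the Outlook calendar.
        return {"meeting_owner": True}


class LocationProvider(Provider):
    def collect(self) -> Dict:
        # Location was simulated in the demo: desk vs. conference room.
        return {"in_conference_room": True}


class MotionProvider(Provider):
    def collect(self) -> Dict:
        # The real plug-in read the Bluetooth MSP sensor (walking/seated).
        return {"seated": True}


def conference_room_rule(ctx: Dict) -> Dict:
    """Projector available AND meeting owner AND in room AND seated."""
    ready = all(ctx.get(key) for key in (
        "projector_available", "meeting_owner",
        "in_conference_room", "seated"))
    return {"compose_with_projector": ready}


aggregator = Aggregator()
for provider in (ProjectorProvider(), CalendarProvider(),
                 LocationProvider(), MotionProvider()):
    aggregator.register(provider)

analyzer = Analyzer(aggregator, conference_room_rule)
aggregator.update()

if analyzer.conclusions["compose_with_projector"]:
    print("Composing with the projector and starting the presentation...")
```

In the live demo each provider wrapped a real data source; only the location provider was simulated. The rule itself is ordinary boolean logic over the aggregated context, which is what makes the Analyzer programmable.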

To summarize, the industry is in the midst of change. Mobile computers are becoming more capable and more powerful. With the CSLL efforts from Intel and the research on Dynamic Composable Computing and Context Aware Computing, you'll have new-found capability on your mobile computer. You'll be able to dynamically compose with devices and services to interact with them more easily and to share your media with friends. You'll also get a better experience as mobile devices adapt to your ever-changing context and help you make decisions and choices more easily. You'll be able to… Oh, wait! I've got to go. There's my context aware device telling me that my next meeting got moved an hour earlier, so I'd better catch lunch soon or I'll go hungry all afternoon. Enjoy!

Lester Memmott is a senior software architect in the Software Pathfinding and Innovation group in Intel SSG. After employment with Novell and IBM, Lester joined Intel in 1995 and has worked in a variety of software-related areas, from product development to technology research. Most recently he has been designing context aware technology aimed at making mobile computers easier to use. He holds two patents, with others pending. He received B.S. and M.S. degrees in electrical engineering from Brigham Young University.

2 Responses to Lester Memmott on Context Aware Computing

  1. David C says:

    Great concept, and in sync with where "smart" mobile devices are actually going: much more personalized. Makes one wonder how fast the industry can move to make the "simulated" portions real.

  2. MSJ says:

    The vision of a seamless, context-based, technology-enabled life (personal and business) is very compelling. Making technology a slave to the user, rather than the other way around, is obviously the breakthrough we can all appreciate. Apple has differentiated itself by making technology user friendly, and Intel's research is clearly aimed at making this a reality for the broader market. I know of a company called ZuluTime that is aiming to make all wireless/mobile devices inherently location aware using only software, which seems to be an important aspect of the ultimate solution and the CSLL vision.