This past month I took part in a technology showcase that we held in New York City to introduce the media to some innovations on the horizon that we think will change the lives of everyday people – not just technophiles. At this event, in addition to technical and scientific publications, we also spoke to several outlets that cover very mainstream topics such as fitness, lifestyle, and fashion. I found this to be a refreshing change of pace – instead of talking about many-core prototypes or silicon waveguides, I found myself talking about things like the idea of a virtual dressing room for online shopping.

Virtual shopping is a very good example of what we are trying to do with visual computing and the tera-scale processors that will power more immersive internet experiences in the future. The main barrier to my shopping online for clothing today is that I don't know how anything will really fit until I try it on. For shirts I run between a men's "L" and an "XL," and which one fits varies wildly with the brand, the cut, and the fabric. Sleeve length is also hit or miss. The end result is that I don't buy clothes online, because I don't want to deal with returning a bunch of stuff. What would help significantly is some kind of "dressing room" experience online that at least roughly approximates the real deal (minus the lines, surveillance cameras, and cluttered booths).

In fact, some of the basic capabilities needed to do this are ones we have been actively researching, because they will require the tens to hundreds of cores we expect to see over the next decade: computer vision, physics modeling, and human body tracking. Imagine if you could walk into a booth at a department store, and a computer embedded in the booth used a few webcams and some computer vision software to take a scan of your body. That scan might be uploaded to a secure server or stored on one of your devices (such as your cell phone).
Then, at home, when you go to that store's website, you are able to bring up a 3D model of yourself that you can move around. When you click on an article of clothing, it appears hanging on your virtual body approximately like it would in real life.

Making that "approximately" better and better is the realm of physics modeling – simulating the properties of a material so that it looks and acts realistic. Imagine that the virtual cloth on your 3D body actually drapes, folds, and moves just like cotton, denim, or silk. Click the image above for an example of cloth modeling work by one of our collaborators, Prof. Ron Fedkiw at Stanford (and see his page for more). You could also use one or more webcams to track your real body and transfer the movements to your virtual self, allowing you to walk a virtual runway to see how the outfit would move on your body. Here's a screen capture of body tracking research where we've used four cameras and a lot of processing to do this without the need to wear any special equipment. You can also check out a video of me showing off both of these examples (from the aforementioned New York event) posted on neuronspark.

I'm certainly not a fashion nut, by any stretch of the imagination. But even so, I'd much prefer to get some new clothes online than have to drive out to a store. And take this a step further – you might be able to share your body models, so you could buy something online that looks nice on your significant other, or buy a new winter sweater for your dad back home (one that fits well but is still appropriately cheesy-looking, to keep with tradition).

Shopping is one example, but these basic technologies for vision and realistic physics apply to a wide variety of virtual environments. Gaming is an obvious one. Virtual worlds are another emerging area – I've been watching headlines on the Virtual Worlds News blog recently, and the investment and activity in this area is encouraging.
In fact, here's my recently created avatar for ScienceSim, a new virtual world we are helping to create for immersive scientific collaboration. Certainly some of that cloth modeling could improve its looks. And body tracking could make it much easier to control, incorporating subtle gestures and facial expressions to give virtual interactions a more natural feel. But, as I said – this world is for immersive science. As we develop this world and other immersive connected experiences, we'll be able to collaborate on real research and interact with things that are too small (viruses), too large (galaxies), or too extinct (how about dinosaurs?) to touch and feel in the real world. The further these technologies progress, the wider the scope of possibility.