This year's CES was filled with a variety of wireless display and wireless HDMI solutions using various combinations of radios (proprietary radios in the UWB or 5 GHz unlicensed bands, WiFi-based, UWB/W-USB-based, and 60 GHz-based) and compression algorithms (uncompressed, proprietary lossless and lossy, JPEG2000-based, and H.264-based). So, there is clearly interest in the industry in enabling this usage model, but how can we rein in all this chaos? A lot of industry harmonization and standards work will be needed before this application can really become ubiquitous. There will be a session on wireless displays at the Intel Developer Forum (IDF) in April, which we hope will spark further discussion and collaboration in this area.

Rather than trying to address all the issues related to wireless displays, I'd like to focus this discussion on compression for short-range wireless applications. Depending on who you talk to and their background, there are a number of different opinions on whether compression can meet the quality demands of this application (in short, replacing the HDMI or video cable with a wireless link). Clearly, replacing a wire with the same quality over wireless is not a trivial task, and the goal would be 'visually lossless quality' (i.e., the end user cannot see the difference between wired and wireless). So, can compression (any kind of compression) meet this strict requirement?

Let's first ask the question, 'Why not send video and display content uncompressed?' As an example, a 1080p screen requires approximately a 3 Gbps link. Existing radios (UWB- and WiFi-based) clearly can't meet these rates today, so some form of compression would be needed, but future 60 GHz radios might. So, assuming I had a 3+ Gbps radio, is it still best to send video streams uncompressed? What if I had other devices that wanted to share that bandwidth (for large file transfers, for example)?
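The "approximately 3 Gbps" figure for 1080p follows from simple arithmetic; a quick sketch (function name is mine, assuming 24-bit color at a 60 Hz refresh, ignoring blanking intervals and link overhead):

```python
def uncompressed_bitrate_gbps(width, height, bits_per_pixel, refresh_hz):
    """Raw (uncompressed) display link bandwidth in Gbps.
    Counts active pixels only; real links add blanking and protocol overhead."""
    return width * height * bits_per_pixel * refresh_hz / 1e9

# 1080p at 24-bit color and 60 Hz:
print(uncompressed_bitrate_gbps(1920, 1080, 24, 60))  # ~2.99 Gbps
```

Higher resolutions, refresh rates, or color depths scale this number linearly, which is why the bandwidth question only gets harder over time.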
What if I wanted to support more than one screen? What happens to my wireless bandwidth needs as screen resolutions increase over time (will radio throughput be able to keep up with display resolutions)? And finally, aren't you burning a lot of power continuously transmitting at a constant 3 Gbps or higher? Hopefully, these questions suggest that sending video content uncompressed is not the obvious answer even if the radio is capable of it, and that there are a number of engineering trade-offs to explore.

So, what if we were able to achieve comparable quality (where a consumer can't tell the difference between compressed and uncompressed) with just a fraction of the throughput (say, 1/10 or 1/20 or even less)? Why wouldn't we want to do that? I agree that this will require some complex circuits, but process scaling should keep that impact relatively small. If this were possible, what could I do with it? I could reduce my radio usage to, say, 1/10, and save roughly 90% of my radio power (you won't be able to turn off all radio circuits, but this is just for illustration). I could increase my range by a factor of 3, or better penetrate a cabinet or wall. For some applications, like PC displays, very little is changing on the screen at any one time, so I could achieve an overall reduction in average throughput (and power consumption) by a factor of 100 or even 1,000. In this last example, that could be done while maintaining mathematically lossless quality by combining simple temporal compression with a lossless codec. So, aren't these benefits worth exploring, even with a multi-Gbps radio? My opinion, of course, is yes. It also seems that some of these advantages could benefit wired displays; at the least, it should be worth exploring for future generations of the HDMI and DisplayPort interfaces.

The first hurdle to overcome with compression is quality, and whether or not it can meet consumer demands.
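The power and range numbers above are back-of-the-envelope, and can be sketched as such. This is an idealized model of my own (the function names are mine): it assumes radio power scales with transmit duty cycle, that required link margin scales with data rate, and free-space-like path loss; real radios and channels will do worse.

```python
import math

def duty_cycle_savings(compression_ratio):
    """Fraction of radio airtime saved if transmit time scales with bits sent.
    Idealized: ignores radio circuits that cannot be fully powered down."""
    return 1.0 - compression_ratio

def range_multiplier(compression_ratio, path_loss_exponent=2.0):
    """Range gained by spending the throughput reduction as link margin.
    Assumes required receive power scales ~linearly with data rate and
    path loss of the form distance**path_loss_exponent (2.0 = free space)."""
    margin_db = 10.0 * math.log10(1.0 / compression_ratio)
    return 10.0 ** (margin_db / (10.0 * path_loss_exponent))

print(duty_cycle_savings(1 / 10))   # 0.9  -> roughly 90% of airtime saved
print(range_multiplier(1 / 10))     # ~3.16 -> roughly 3x range in free space
```

A 1/10 compression ratio yields 10 dB of margin, which under free-space path loss is about a 3x range increase, consistent with the factor-of-3 claim above; indoors, with higher path-loss exponents, the multiplier is smaller but the extra margin instead buys wall and cabinet penetration.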
Recognize that virtually all video content is compressed at one stage or another before a person sees it. So, we're already viewing compressed content, which should give hope that it's possible. There are cases where we don't have access to pre-compressed content (like a PC display or a video game), so we would need to be able to compress in real time. To be convinced, though, people really have to see it to believe it. I have spoken with several skeptics and found that people are genuinely surprised at the quality that can be achieved even with a fairly aggressive compression ratio (1/20 and smaller) using current state-of-the-art codecs like H.264. So, I would encourage people to see it for themselves first (for example, the demos at IDF in April in China and others shown at CES), and then consider the benefits that would be possible if compression can satisfy consumer demands on quality. Of course, we also have to keep overall latency, cost, and power low, which should be part of the evolution of the technology.

I recognize that compression is just one piece of the puzzle in enabling wireless displays. Clearly, the performance has to be proven over a wireless channel (error-recovery mechanisms are needed), content protection must be addressed to protect premium content, audio/video synchronization must be wire-equivalent, and so on. These are the kinds of problems engineers love to attack, and I have no doubt that novel solutions can be achieved. So, I think we should take a fresh look at compression technology for short-range video and display transport (both wireless and wired), and see what new benefits and usage models it can enable.
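The earlier point about mostly-static PC screens can be illustrated with a toy temporal-compression sketch: compare consecutive frames tile by tile and send only the tiles that changed. This is my own illustrative code, not any particular codec; a real system would entropy-code the changed tiles losslessly.

```python
def changed_tiles(prev, curr, tile=16):
    """Yield (x, y) of top-left corners of tiles that differ between frames.
    Frames are 2D lists of pixel values. Only changed tiles need to be sent,
    so a mostly static screen costs almost no bandwidth."""
    h, w = len(curr), len(curr[0])
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            same = all(
                prev[yy][xx] == curr[yy][xx]
                for yy in range(y, min(y + tile, h))
                for xx in range(x, min(x + tile, w))
            )
            if not same:
                yield (x, y)

# A mostly static 64x64 frame with a single changed pixel:
prev = [[0] * 64 for _ in range(64)]
curr = [row[:] for row in prev]
curr[40][10] = 255
print(list(changed_tiles(prev, curr)))  # only the one tile containing (10, 40)
```

Here only 1 of 16 tiles is transmitted; for a static desktop, frame after frame of identical content transmits nothing at all, which is where the 100x to 1,000x average-throughput reductions mentioned above come from.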