Wireless Displays: To Compress or Not Compress

This year’s CES was filled with a variety of wireless display and wireless HDMI solutions using various combinations of radios (proprietary radios in the UWB or 5 GHz unlicensed bands, WiFi-based, UWB/W-USB based, and 60 GHz based) and compression algorithms (uncompressed, proprietary lossless and lossy, JPEG2000 based, and H.264 based). So, it appears there is interest in the industry to enable this usage model, but how can we rein in all this chaos? Clearly, lots of industry harmonization and standards will be needed before this application can really be ubiquitous. There will be a session on wireless displays during the Intel Developer Forum (IDF) in April, which we hope will spark further discussion and collaboration in this area.

Rather than trying to address all the issues related to the wireless display area, I’d like to focus this discussion on compression for short-range wireless applications. Depending on who you talk to and what their background is, there appear to be a number of different opinions on whether compression can meet the quality demands for this application (in short, trying to replace the HDMI or video cable via a wireless link). Clearly, replacing a wire with the same quality over wireless is not a trivial task, and the goal would be to have ‘visually lossless quality’ (i.e., the end user cannot see the difference between the wire and wireless). So, can compression (any kind of compression) meet this strict requirement?

Let’s first ask the question, ‘Why not send video and display content uncompressed?’ As an example, a 1080p resolution screen requires approximately a 3 Gbps link. Existing radios (UWB and WiFi based) clearly can’t meet these rates today, so some form of compression would be needed, though future 60 GHz radios might reach them. So, assuming I had a 3+ Gbps radio, is it still best to send video streams uncompressed? What if I had other devices that wanted to share that bandwidth (for large file transfers, for example)? What if I wanted to support more than one screen? What happens as screen resolutions increase over time, and what happens to my wireless bandwidth needs (will radio throughput be able to keep up with display resolutions)? And finally, aren’t you burning a lot of power continuously transmitting at a constant 3 Gbps rate or higher? Hopefully, these questions suggest that the answer of sending video content uncompressed is not obvious even if the radio is capable of doing so, and there are a number of engineering trade-offs that have to be explored.
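As a rough sanity check on that 3 Gbps figure, the raw pixel bandwidth of an uncompressed display stream can be computed directly (this is a simplified model: real links like HDMI add blanking intervals and line-coding overhead on top of the pixel data):

```python
def uncompressed_bps(width, height, bits_per_pixel=24, fps=60):
    """Raw pixel bandwidth (bits/second) for an uncompressed display stream."""
    return width * height * bits_per_pixel * fps

# 1080p at 60 Hz, 24-bit color: roughly 3 Gbps of raw pixel data
print(f"1080p60: {uncompressed_bps(1920, 1080) / 1e9:.2f} Gbps")   # ~2.99 Gbps

# The same math for a higher-resolution 2560x1600 panel
print(f"2560x1600@60: {uncompressed_bps(2560, 1600) / 1e9:.2f} Gbps")  # ~5.90 Gbps
```

The second line also illustrates the scaling question raised above: each step up in resolution roughly doubles the raw bandwidth the radio would have to carry.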

So, what if we were able to achieve comparable quality (where a consumer can’t tell the difference between compressed and uncompressed) with just a fraction of the throughput (say, 1/10 or 1/20 or even less)? Why wouldn’t we want to do that? I agree that this will require some complex circuits to achieve, but process scaling should keep this impact relatively small. If this were possible, what can I do with it? I can reduce my radio usage by, say, 1/10, and save roughly 90% of my radio power (you won’t be able to turn off all radio circuits, but this is just for explanation). I can increase my range by a factor of 3, or I can better go through a cabinet or wall. For some applications, like PC displays, very little is changing on the screen at any one time, and so I can achieve an overall reduction in average throughput (and power consumption) by a factor of 100 or even 1,000. For the last example, this could be done while maintaining mathematically lossless quality by implementing simple temporal compression and a lossless codec. So, aren’t these benefits worth exploring, even if we had a multi-Gbps radio? Of course, my opinion is yes. Also, it seems that some of these advantages could also benefit wired displays…at least it should be worth exploring for future generation HDMI and DisplayPort interfaces.
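The PC-display argument above can be made concrete with a back-of-the-envelope model. The 1% changed-pixel figure and the 2:1 lossless ratio below are illustrative assumptions, not measurements; the point is only how quickly the average throughput collapses when most of the screen is static:

```python
RAW_BPS = 1920 * 1080 * 24 * 60  # ~3 Gbps: uncompressed 1080p at 60 Hz

def avg_bps(spatial_ratio, changed_fraction):
    """Average throughput when only changed screen regions are sent,
    each compressed by spatial_ratio (e.g. 1/20 = a 20:1 codec)."""
    return RAW_BPS * spatial_ratio * changed_fraction

# Full-screen video at a 20:1 lossy ratio: ~150 Mbps
print(f"{avg_bps(1/20, 1.0) / 1e6:.0f} Mbps")

# Desktop work where ~1% of pixels change per frame, sent with a
# 2:1 lossless codec: ~15 Mbps on average, a ~200x reduction
print(f"{avg_bps(1/2, 0.01) / 1e6:.0f} Mbps")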

The first hurdle to overcome with compression is quality, and whether or not it can meet consumer demands. Recognize that virtually all video content is compressed at one stage or another before a person sees it. So, we’re already viewing compressed content, which should give hope that it’s possible. Clearly, there are cases where we don’t have access to compressed content (like a PC display, or video game), and so we would need to be able to compress in real-time. People really have to see it to believe it. I have spoken to several skeptics and have found that people are genuinely surprised at the quality that can be achieved even with fairly aggressive compression (to 1/20 of the original rate and below) using some of the current state-of-the-art codecs like H.264. So, I would encourage people to explore for themselves first (for example, see some of the demos at IDF in April in China and others shown at CES), and then consider the benefits that could be possible if compression can satisfy consumer demands in quality. Of course, we also have to keep overall latency, cost, and power low as well, which should be part of the evolution of the technology.

I recognize that compression is just one piece of the puzzle to enable wireless displays. Clearly, the performance has to be proven over a wireless channel (error recovery mechanisms needed), content protection must be addressed to protect the premium content, audio/video synchronization must be wire equivalent, etc. These are the kinds of problems engineers love to attack, and I have no doubt novel solutions for these can be achieved. So, I think we should take a fresh look at compression technology for short-range video and display transport (for both wireless and wired), and see what new benefits and usage models can be enabled by it.

5 Responses to Wireless Displays: To Compress or Not Compress

  1. Joseph says:

    As you point out, compression occurs all throughout the pipe from actors to viewer. However, digital restrictions management schemes can cause the video to be re-compressed again and again (e.g. the DRM done by my IPTV provider between head end and set-top-box). This causes artifacts because of re-compression (e.g. when recording the output of said set-top-box on a TiVo or MythTV box), leading to a loss in quality. What are you doing to help the viewer with the inherent diminishment of their experience due to DRM?

  2. Igor Kozintsev says:

    In one scenario that you mentioned (PC display) compression definitely makes sense. In fact SW protocols go beyond just video compression and handle 3D and video separately. Examples include RDP on Windows and VNC on Linux. Implementing something like this for WD may require more HW and SW resources – an ideal task for IA SoC. And why not extend this idea and include other platform I/O devices in the same way?

  3. JamesB says:

    Hmm. Lossless compression would make a lot of sense for 99% of PC usage. That said, lossy forms, even if good, really shouldn’t be done on screens. What if a graphic artist wants something exactly right? Why would a gamer want to buy a big screen only to get ‘good enough for TV’ quality as far as smoothness at polygon edges, etc.?
    The bandwidth issue will only get worse over time as screen sizes go up, to be sure, but we are also using higher and higher frequencies these days.
    As for improving range, please make sure that devices don’t broadcast at higher powers than they absolutely need to reach each other. In high-density environments the idea of your neighbor’s wireless TV messing up your computer is not pleasant. Higher frequencies do poorly between walls, too, so as bandwidth requirements go up in the future (let’s be honest, compressing video AGAIN is a stopgap at best, and will not gain you anything long-term) you can just keep upping the frequency…

  4. JamesB says:

    Actually, what you *could* do is this…
    Start off with lossless compression as the default. This will work for 99% of PC usage – word processing, YouTube (the window is tiny), windowed and slow-paced games…
    If the lossless compression is using a significant amount of bandwidth, *and only if* the transmitter finds that it isn’t able to transmit everything, have it drop down gracefully to whatever compression level is able to get through. In other words, have it degrade gracefully.
    I mention the ‘if the lossless compression is using a significant amount of bandwidth’ part specifically because people who are not watching video or playing a full-screen game should never have their visuals degraded. Instead, if you have two wireless displays, the one with video which is only able to get 990 Mbps of its 1000 Mbps should be the one to drop. If at the low end of the usage scale the connection refuses to drop rate, the other side will have to.
    Also, if a region of pixels is totally static for more than a couple of frames, it should definitely be retransmitted losslessly if it was sent lossy before. That way, someone watching YouTube while doing Photoshop work will not have their Photoshop work degraded.
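    The adaptive policy this comment proposes could be sketched as a simple per-region decision rule (the function name, the two-frame threshold, and the bandwidth figures below are all hypothetical, chosen only to illustrate the idea):

```python
def pick_mode(lossless_bps, link_budget_bps, static_frames, was_lossy):
    """Choose a coding mode for one screen region: prefer lossless,
    degrade to lossy only when the link can't carry the lossless
    stream, and re-send long-static regions losslessly to repair
    quality lost in an earlier lossy pass."""
    if static_frames > 2 and was_lossy:
        return "refresh-lossless"  # static region: restore exact pixels
    if lossless_bps <= link_budget_bps:
        return "lossless"          # budget allows exact reproduction
    return "lossy"                 # degrade gracefully under pressure

print(pick_mode(800e6, 1000e6, 0, False))   # lossless
print(pick_mode(1200e6, 1000e6, 0, False))  # lossy
print(pick_mode(0.1e6, 1000e6, 5, True))    # refresh-lossless
```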

  5. Yaron says:

    So, what are the conclusions? Can compressed HD content over 11n satisfy the majority of users, or should we go for a solution like WHDI (amimon.com) or the 60 GHz solutions? I would say that the majority will be OK with compressed data over 11n (maybe over 11g?), and will consider what they get as good enough. Do you have any analysis/research to prove or reject that claim?