Researchers build Quad HD TV chip

Feb 20, 2013 by Larry Hardesty
A new video format — known variously as ultrahigh-definition (UHD), Quad HD or 4K — promises four times the resolution of today's high-definition video.

It took only a few years for high-definition televisions to make the transition from high-priced novelty to ubiquitous commodity—and they now seem to be heading for obsolescence just as quickly. At the Consumer Electronics Show (CES) in January, several manufacturers debuted new ultrahigh-definition, or UHD, models (also known as 4K or Quad HD) with four times the resolution of today's HD TVs.

In addition to screens with four times the pixels, however, UHD also requires a new coding standard, known as high-efficiency video coding, or HEVC. Also at CES, Broadcom announced the first commercial HEVC chip, which it said will go into production in mid-2014.

At the International Solid-State Circuits Conference this week, MIT researchers unveiled their own HEVC chip. The researchers' design was executed by the Taiwan Semiconductor Manufacturing Company (TSMC), through its University Shuttle Program, and Texas Instruments (TI) funded the chip's development.

Although the MIT chip isn't intended for commercial release, its developers believe that the challenge of implementing HEVC algorithms in silicon helps illustrate design principles that could be broadly useful. Moreover, "because now we have the chip with us, it is now possible for us to figure out ways in which different types of video data actually interact with hardware," says Mehul Tikekar, an MIT graduate student in electrical engineering and computer science and lead author of the new paper. "People don't really know, 'What is the hardware complexity of doing, say, different types of video streams?'"

In the pipeline

Like older coding standards, the HEVC standard exploits the fact that in successive frames of video, most of the pixels stay the same. Rather than transmitting entire frames, it's usually enough for broadcasters to transmit just the moving pixels, saving a great deal of bandwidth. The first step in the encoding process is thus to calculate "motion vectors"—mathematical descriptions of the motion of objects in the frame.
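
To make this concrete, here is a minimal sketch of block-matching motion estimation in Python with NumPy. It illustrates the principle only; HEVC's actual motion search is far more sophisticated, and the block size and search range here are arbitrary choices.

    import numpy as np

    def find_motion_vector(prev_frame, cur_frame, by, bx, block=8, search=4):
        """Exhaustively search prev_frame for the offset (dy, dx) that best
        predicts the block of cur_frame whose top-left corner is (by, bx)."""
        target = cur_frame[by:by + block, bx:bx + block].astype(int)
        best_mv, best_err = (0, 0), float("inf")
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = by + dy, bx + dx
                if (y < 0 or x < 0 or y + block > prev_frame.shape[0]
                        or x + block > prev_frame.shape[1]):
                    continue
                candidate = prev_frame[y:y + block, x:x + block].astype(int)
                err = np.abs(candidate - target).sum()  # sum of absolute differences
                if err < best_err:
                    best_mv, best_err = (dy, dx), err
        return best_mv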

On the receiving end, however, that description will not yield a perfectly faithful image, as the orientation of a moving object and the way it's illuminated can change as it moves. So the next step is to add a little extra information to correct motion estimates that are based solely on the vectors. Finally, to save even more bandwidth, the motion vectors and the corrective information are run through a standard data-compression algorithm, and the results are sent to the receiver.
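
Continuing the sketch, the prediction named by the motion vector is subtracted from the true block to form the corrective residual, which is then run through a general-purpose compressor. Here zlib merely stands in for HEVC's entropy coder, the helper names are invented for illustration, and the motion vector is assumed to stay within the frame.

    import zlib
    import numpy as np

    def encode_block(prev_frame, cur_block, by, bx, mv, block=8):
        """Encode one block as its motion vector plus a compressed residual."""
        dy, dx = mv
        prediction = prev_frame[by + dy:by + dy + block, bx + dx:bx + dx + block]
        # The residual corrects whatever the vector alone gets wrong,
        # e.g. changes in illumination or orientation.
        residual = cur_block.astype(np.int16) - prediction.astype(np.int16)
        return mv, zlib.compress(residual.tobytes())

    def decode_block(prev_frame, mv, payload, by, bx, block=8):
        """The receiver's job in reverse: motion compensation, then correction."""
        dy, dx = mv
        prediction = prev_frame[by + dy:by + dy + block, bx + dx:bx + dx + block]
        residual = np.frombuffer(zlib.decompress(payload),
                                 dtype=np.int16).reshape(block, block)
        return (prediction.astype(np.int16) + residual).astype(np.uint8)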

The new chip performs this process in reverse. It was designed by researchers in the lab of Anantha Chandrakasan, the Joseph F. and Nancy P. Keithley Professor of Electrical Engineering and head of the MIT Department of Electrical Engineering and Computer Science. In addition to Chandrakasan and Tikekar, the chip's designers include Chiraag Juvekar, another graduate student in Chandrakasan's group; former postdoc Chao-Tsung Huang; and former graduate student Vivienne Sze, now at TI.

The chip's first trick for increasing efficiency is to "pipeline" the decoding process: A chunk of data is decompressed and passed to a motion-compensation circuit, but as soon as the motion compensation begins, the decompression circuit takes in the next chunk of data. After motion compensation is complete, the data passes to a circuit that applies the corrective data and, finally, to a filtering circuit that smooths out whatever rough edges remain.
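
A toy software model of the same idea, with each stage as a thread handing chunks to the next through a small queue: while the motion-compensation stage works on chunk k, the decompression stage is already pulling in chunk k+1. The stage functions here are placeholders; on the chip, each stage is a dedicated hardware circuit.

    import threading, queue

    def stage(fn, inbox, outbox):
        """Apply fn to each chunk from inbox and pass the result downstream."""
        while (chunk := inbox.get()) is not None:
            outbox.put(fn(chunk))
        outbox.put(None)  # propagate end-of-stream

    def run_pipeline(chunks, funcs):
        queues = [queue.Queue(maxsize=1) for _ in range(len(funcs) + 1)]
        for fn, inbox, outbox in zip(funcs, queues, queues[1:]):
            threading.Thread(target=stage, args=(fn, inbox, outbox),
                             daemon=True).start()
        for chunk in chunks:
            queues[0].put(chunk)  # feed the front of the pipeline
        queues[0].put(None)
        while (result := queues[-1].get()) is not None:
            yield result

    # Placeholder stages named after the article's four steps:
    stages = [lambda c: f"decompressed({c})",
              lambda c: f"motion_compensated({c})",
              lambda c: f"corrected({c})",
              lambda c: f"filtered({c})"]
    print(list(run_pipeline(["chunk0", "chunk1", "chunk2"], stages)))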

Fine-tuning

Pipelining is fairly standard in most video chips, but the MIT researchers developed a couple of other tricks to further improve efficiency. The application of the corrective data, for instance, is a single calculation known as matrix multiplication. A matrix is just a big grid of numbers; in matrix multiplication, numbers in the rows of one matrix are multiplied by numbers in the columns of another, and the results are added together to produce entries in a new matrix.

"We observed that the matrix has some patterns in it," Tikekar explains. In the new standard, a 32-by-32 matrix, representing a 32-by-32 block of pixels, is multiplied by another 32-by-32 matrix, containing corrective information. In principle, the corrective matrix could contain 1,024 different values. But the MIT researchers observed that, in practice, "there are only 32 unique numbers," Tikekar says. "So we can efficiently implement one of these [multiplications] and then use the same hardware to do the rest."

Similarly, Juvekar developed a more efficient way to store video data in memory. The "naive way," he explains, would be to store the values of each row of pixels at successive memory addresses. In that scheme, the values of pixels that are next to each other in a row would also be adjacent in memory, but the value of the pixels below them would be far away.

In video decoding, however, "it is highly likely that if you need the pixel on top, you also need the pixel right below it," Juvekar says. "So we optimize the data into small square blocks that are stored together. When you access something from memory, you not only get the pixels on the right and left, but you also get the pixels on the top and bottom in the same request."
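
A sketch of the two layouts in Python, with an illustrative 8-by-8 tile (the chip's actual block dimensions may differ):

    def raster_address(x, y, width):
        """Row-major layout: the pixel one row down is a full frame width away."""
        return y * width + x

    def tiled_address(x, y, width, tile=8):
        """Tiled layout: tile x tile squares are stored contiguously, so
        vertical neighbours usually land in the same memory burst."""
        tiles_per_row = width // tile
        tile_index = (y // tile) * tiles_per_row + (x // tile)
        offset = (y % tile) * tile + (x % tile)
        return tile_index * tile * tile + offset

    # In a 1920-pixel-wide frame, the pixel just below (100, 100) is 1,920
    # addresses away in raster order but only 8 away within its tile:
    print(raster_address(100, 101, 1920) - raster_address(100, 100, 1920))  # 1920
    print(tiled_address(100, 101, 1920) - tiled_address(100, 100, 1920))    # 8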

Chandrakasan's group specializes in low-power devices, and in ongoing work, the researchers are trying to reduce the power consumption of the chip even further, to prolong the battery life of quad-HD cell phones or tablet computers. One design modification they plan to investigate, Tikekar says, is the use of several smaller decoding pipelines that work in parallel. Reducing the computational demands on each group of circuits would also reduce the chip's operating voltage.
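
The logic behind that last point follows from the standard first-order model of CMOS switching power, P ≈ C·V²·f: splitting the work across parallel pipelines lets each run at a lower clock frequency, the resulting slack permits a lower supply voltage, and power falls with the square of that voltage. A rough illustration with invented numbers:

    def dynamic_power(c, v, f):
        """First-order CMOS switching-power model: P = C * V^2 * f."""
        return c * v**2 * f

    # Hypothetical figures for illustration only.
    baseline = dynamic_power(c=1.0, v=1.0, f=200e6)       # one fast pipeline
    parallel = 2 * dynamic_power(c=1.0, v=0.8, f=100e6)   # two at half the clock
    print(parallel / baseline)  # 0.64: same throughput, roughly a third less power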

User comments: 17

Lurker2358
2.2 / 5 (14) Feb 20, 2013
It makes no sense for home entertainment systems and personal computing devices to keep multiplying resolution and color depth far beyond human perception. The only thing it accomplishes is eating up all the gains in processor power.
Manitou
3.9 / 5 (7) Feb 20, 2013
Lurker2358, we are far from the limits of human vision. We want to achieve virtual reality, a sense of being "there".

Even if we did achieve those limits, there are benefits to going beyond them, e.g. zoom and stabilization.
baudrunner
2.4 / 5 (7) Feb 20, 2013
we are far from the limits of human vision
@Manitou: Actually, that's wrong. The thing about reality is that it is not backlit, like LCD or plasma screens are, for example, so there's really no comparison. The fact is that high definition TV screens can provide better apparent resolution than reality, and what's apparent is what we see.
slackjaw_hickspit
2 / 5 (4) Feb 20, 2013
I wonder if resolution will stop increasing once the human eye cannot resolve the pixels any further?
ShotmanMaslo
3.2 / 5 (6) Feb 20, 2013
VR is the ultimate goal, but TVs are not virtual reality; they fill only a small fraction of our field of view. So for TVs the resolution is already good, unless you sit close. Now virtual reality devices that envelop the whole visual field (Oculus Rift?) are another matter. That's where 4K and even higher resolutions will be important.
Eikka
3.7 / 5 (7) Feb 20, 2013
"The thing about reality is that it is not backlit"

That hardly makes a difference. It's still the light arriving from the object that we see.

"I wonder if resolution will stop increasing once the human eye cannot resolve the pixels any further?"

Probably.

The thing is that if you want to cover the full field of view of a human with as many pixels as the eye can see, it appears you need a screen with about 576 megapixels. Technically, human color vision could be compared to only a 6-megapixel camera, but the point is that we can look around. The sense for contrast, as opposed to color, is greater, at around 125 million sensing rods, but they are more spread out.

The regular HD screen has two megapixels, and the 4K screen has eight.

It's about time regular monitors became able to display a photograph from a cheap point-and-shoot camera of yesteryear in full detail.
Frostiken
1.8 / 5 (9) Feb 20, 2013
You people are idiots. I currently use a 2560x1600 30" screen and you can still make out individual pixels and 'jaggies'. You people saying that 1080 is already 'enough' and 'at the limits of human vision' are embarrassingly naive. You sound like the people who used to say that SDTV was 'good enough'. Unless you've used a true high-resolution display, which I doubt you have if you're arguing "high definition TV screens can provide better apparent resolution than reality", then shut up.

Seriously. Stop.

Also these screens would be useless for gaming, as you'd need at least a quad-SLI setup to drive that resolution. Additionally, I'm skeptical about this codec, as it sounds extremely prone to errors, and I honestly cannot see it being that effective. More resolution means more detail, more detail means a higher chance of picking up minute details that WOULD require pixels to change, which would defeat the entire point of their 'only flip the changing pixels' plan.
ChangBroot
1.4 / 5 (5) Feb 20, 2013
The main reason behind this kind of technology is not to impress people or for human perception. One of the reasons we want to achieve higher resolution is to increase zoom-ability, if you will. If you were to project a UHD movie and a current HD movie in a theater, you would see a humongous difference. In other words, in a UHD movie/picture, you could almost zoom in to each cell of your body. In a UHD movie you could keep zooming in far more than you can in the current HD, without losing any details.
sirchick
not rated yet Feb 21, 2013
"It makes no sense for home entertainment systems and personal computing devices to keep multiplying resolution and color depth far beyond human perception. The only thing it accomplishes is eating up all the gains in processor power."

At what resolution would our eyes no longer detect a difference? I don't think we are there yet, but one day we will probably surpass the human eye's ability to see the fine detail.
VendicarE
1.8 / 5 (5) Feb 21, 2013
Americans should hold out until they produce a 1,000,000 x 1,000,000 display.

There is just no point in upgrading unless Americans can watch their favorite fast food and spray on hair commercials in trillion pixel resolution.
triplehelix
1 / 5 (3) Feb 21, 2013
If you had a 3000ft screen, then HDTV resolution would be awful, unless you stepped quite far back. Like people are saying, it is mostly about zoom functions.

HDTV is as good as it needs to be for anyone simply watching a film, unless they want to zoom in for some bizarre reason? You will notice that the new UHD TVs Samsung unveiled are 84-110" big, so sure, more pixels, but it's more pixels over a bigger screen, so effectively its visuals will look almost identical to HDTV - just a bigger screen.

It's all about zooming really.

I still can't see any difference between DVD and Blu-ray. I love the Google comparison of the two; essentially the difference is colour. It's called your contrast button. A 10-second contrast increase/decrease and you have pretty much the same quality. Sure, if you take a still snapshot and zoom and look REALLY close you may find something, but that's 1 frame in a 5-minute look. Your brain during the film will see that 1 frame for 1/30th of a second. It won't notice.
machinephilosophy
not rated yet Feb 23, 2013
"I wonder if resolution will stop increasing once the human eye cannot resolve the pixels any further?"

Actually, this issue was resolved decades ago. While distinguishing individual pixels at extremely high resolutions is difficult for the human eye, there is still a synergy effect in overall pictures and videos. That's why the realism is still enhanced in spite of our pixel-by-pixel differentiation difficulties.

But all I want out of all this technology is to be able to plug in or bluetooth a pair of glasses and not have to use a regular computer screen. If it's a common need and a huge market, you can count on tech collectives to ignore it.
vertex
not rated yet Feb 23, 2013
Existing 40 inch 1920x1080 HDTV is a "Retina Display" when viewed from 5.2 feet or more
Existing 50 inch 1920x1080 HDTV is a "Retina Display" when viewed from 6.5 feet or more
Existing 60 inch 1920x1080 HDTV is a "Retina Display" when viewed from 7.8 feet or more

Unless this technology is used in an up-close computer screen, you won't see a difference in a typical television scenario.
LastQuestion
1 / 5 (1) Feb 24, 2013
I've spent quite a bit of time lately learning how to calibrate displays, and from what I've gathered, 4K UHD is pointless for almost all consumers and has certain drawbacks concerning field of view and gaming. Beyond this, current displays are not capable of producing the full range of color perceptible by the human eye.

In fact, the Rec. 709 standard that HD content uses covers but a small part of what is perceptible. It's reiterated that calibration is about viewing content as the director intended, not as it is in real life. The displays are physically incapable of producing all perceptible colors, and the prototypes that can face the reality that film, or rather every piece of content consumers desire to view, cannot make use of them. Films, however, have been recorded at high resolutions for ages now, so there would be a plethora of content for a 4K display.

So it is that they push for 4k displays when the greatest benefit would be had from increasing color reproduction.
_etabeta_
5 / 5 (1) Feb 24, 2013
"prolong the battery life of quad-HD cell phones"
QuadHD on a cell phone?? Give me a break!! What is the use of such a resolution on a cell phone unless you use a microscope to look at the screen?
TheKnowItAll
1 / 5 (1) Feb 24, 2013
Everyone does not perceive visual content the exact same way; our body parts are not absolute carbon copies. I for one have a higher-than-average perception of angles and flickering, while many do not notice that, but I have partial color blindness, and the flickering makes me dizzy. My point is that we can all disagree on whether the current displays are good enough, or we can agree to upgrade them so everyone is happy. If I had my way I would make it illegal to market any displays with a frame rate of less than 120Hz (or 240Hz for 3D displays), and maybe use circular RGB dots instead of those damn obvious squares lol. Also I wouldn't mind a 16'x9' display that covers my whole living room wall, so yeah, bring on the resolution!
georgeb1962
not rated yet Feb 25, 2013
At first I thought "quad hd cellphones" was ridiculous, but then I remembered that there are already accessories that equip your cellphone with a laser projector. Quad HD cellphones will presumably have those built-in. Not to mention that hi-res displays will be ubiquitous and will be able to display your cellphone's output.
