Turning conventional video coding wisdom on its head

May 19, 2008

A major drawback of the latest generation of video products and applications has been the complex requirements for coding and decoding signals. An alternative put forward by European researchers turns the traditional video coding paradigm on its head.

Since digital television services began, there has been an accepted way of encoding and decoding video signals. The encoding process is more complex and requires a great deal more processing power than the decoding process.

A television station transmits its signal from a single location, and highly complex equipment encodes the video content for transmission. At the receiving end are large numbers of viewers with simple aerials and television sets allowing them to decode and watch the broadcast.

Any other way of encoding and decoding would be less practical because the viewers would not be able to afford the expensive equipment needed to decode the signal if the complexity were built into the receiving end.

Video services, such as video on demand and streaming, have followed this paradigm of complex encoders operating with simple decoders. With the switch from analogue to digital broadcasting, new standards and video coding technologies have emerged, but again, these follow the same basic principle.

Something happened in the 1970s that set the scene for a rethink. US researchers put forward a new mathematical theory that pointed to a total overhaul of codecs – the devices or programs that perform encoding and decoding on a digital data stream or signal.
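
The theory in question is usually identified with distributed source coding – in particular the Slepian-Wolf theorem (1973) and the Wyner-Ziv theorem (1976). In sketch form, for two correlated sources X and Y compressed by separate encoders but reconstructed by a joint decoder, lossless recovery is possible whenever the rates satisfy

\[
R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y),
\]

so the total rate can match the joint entropy H(X, Y) that a single joint encoder would need. In practical terms, an encoder can stay simple and ignorant of the correlated side information, with the work of exploiting that correlation shifted to the decoder – exactly the reversal that distributed video coding builds on.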

For years, little was done with these predictions, until around the new millennium, when a raft of new video devices started appearing in research laboratories and even on the market. Because they had limited memory and battery capacity, these real-life applications, such as wireless video cameras, needed simple encoders and complex decoders.

Entering the picture

Since the year 2000, researchers around the world have been looking into this ‘reversal’, and trying to develop new codecs under the banner of Distributed Video Coding (DVC).

But it was only in 2004 that the first serious DVC research project in Europe, called Discover, was set up by six European universities to look at the problem from a European perspective.

“Getting applications to work was not the problem,” says project coordinator Luis Torres. “For example I can already use my mobile phone for videoconferencing, but the complexity of equipment for encoding to the same quality as a conventional digital television picture was the challenge.”

Despite entering the picture later than their American counterparts, Discover’s scientists looked at the state of the art and set about improving on it. Within a few months, they had developed a new codec, a sophisticated software algorithm, which Torres says was already “very competitive” with those developed in the USA.

Improvements were made to the software during the two-year project, and it has been made available free of charge on the project website to the research community and other interested parties.

Quickly seizing the lead

During the EU-funded project, the partners delved into the performance of DVC theory, and produced a series of technical documents detailing the latest advances and a publicly available benchmark for the international research community to evaluate.

By the end of 2007, Discover was able to exhibit the best rate-distortion performance – a measure that weighs compression rate against picture quality – of any DVC codec in the world.
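
For readers unfamiliar with the metric, a rate-distortion point pairs the bitrate spent on a sequence with an objective quality score, most commonly PSNR. The following is a minimal illustrative sketch in Python (the function names are hypothetical and this is not the Discover evaluation code):

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit frames."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def rd_point(original_frames, decoded_frames, total_bits, fps):
    """One rate-distortion point: (bitrate in kbit/s, mean PSNR in dB)."""
    kbps = total_bits * fps / (len(decoded_frames) * 1000.0)
    quality = np.mean([psnr(o, d) for o, d in zip(original_frames, decoded_frames)])
    return kbps, quality
```

A codec with better rate-distortion performance delivers a higher PSNR at the same bitrate, or the same PSNR at a lower bitrate – the sense in which Discover’s codec led the DVC field.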

Torres is at pains to point out that this advantage still does not make the codec very competitive when compared to the compression performance of current video standards. There is a long way to go before picture quality will be anything like that of television. But the groundwork has been laid for other researchers to develop the codec for commercial use.

“I am quite sure, in the future, new projects will see DVC quality catch up with current mainstream broadcast technology and become indistinguishable from it,” he says.

When this does happen, there are large numbers of existing and planned applications that could benefit from such an advance. The applications are available, but are far from properly optimised.

“With our new techniques, they could become optimal,” Torres says.

These applications include wireless video transmission and wireless surveillance networks providing a high-quality video feed in real time. Medical applications, including tiny cameras transmitting video from inside patients, are also envisaged.

Also in the works is a new multi-view image acquisition standard, in which several unlinked cameras film the same scene from different angles and positions to create a 3D effect.

Although such advances are still only future concepts, Discover has brought them a lot closer to reality.

Discover received funding from the EU's Sixth Framework Programme for research.

Source: ICT Results
