Researcher finds optimal fix-free codes

Apr 03, 2009
Dr. Serap Savari

More than 50 years after David Huffman developed Huffman coding, an entropy encoding algorithm used for lossless data compression in computer science and information theory, an electrical and computer engineering faculty member at Texas A&M University has discovered a way to construct the most efficient fix-free codes.

Huffman coding uses a variable-length code table for choosing the representation of each symbol, resulting in a prefix code (that is, the bit string representing any particular symbol is never a prefix of the bit string representing any other symbol) that expresses the most common characters using shorter strings of bits than are used for less common source symbols. Huffman was able to design the most efficient compression method of this type, since no other mapping of individual source symbols to unique strings of bits will produce a smaller average output size when the actual symbol frequencies agree with those used to create the code.
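The classic Huffman construction described above can be sketched in a few lines of Python. This is an illustrative implementation of Huffman's original prefix-code algorithm, not Savari's new method; the function name `huffman_code` is chosen for this example.

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Build a binary Huffman (prefix) code for a {symbol: frequency} map."""
    ticket = count()  # tie-breaker so the heap never compares symbol tuples
    heap = [(f, next(ticket), (sym,)) for sym, f in freqs.items()]
    heapq.heapify(heap)
    codes = {sym: "" for sym in freqs}
    while len(heap) > 1:
        fa, _, group_a = heapq.heappop(heap)  # two least frequent subtrees
        fb, _, group_b = heapq.heappop(heap)
        for sym in group_a:                   # left branch gets a leading 0
            codes[sym] = "0" + codes[sym]
        for sym in group_b:                   # right branch gets a leading 1
            codes[sym] = "1" + codes[sym]
        heapq.heappush(heap, (fa + fb, next(ticket), group_a + group_b))
    return codes

codes = huffman_code({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
```

Common symbols ("a" here) receive short codewords; rare ones ("e", "f") receive long ones, and no codeword is a prefix of another.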

Dr. Serap Savari, an associate professor in the Department of Electrical and Computer Engineering at Texas A&M, has developed the first approach to finding optimal fix-free codes: variable-length codes in which no codeword is a prefix or a suffix of any other codeword.

“My method of finding optimal fix-free codes is computationally demanding, but no one has solved the problem before even though it was first posed in 1990,” Savari said. “Earlier algorithms produced good fix-free codes in a reasonably time-efficient way, but without the guarantee of optimality.”

While there are numerous applications for fix-free codes, the most important have been in communications. Fix-free codes have been investigated for joint source-channel coding and have been applied within the video standards H.263+ and MPEG-4, because their property of efficient decoding in both the forward and backward directions assists with error resilience. They are also of interest for problems in information retrieval, such as searching for patterns directly in compressed text. Savari is uncertain how her discovery will affect these and other applications of fix-free codes, but she hopes her work will be used by researchers and by people implementing practical systems.

“My work is like Huffman’s in that it is basic research that is motivated by practically important problems and which contributes to the theory of data compression,” she said.

Savari has already been invited to discuss her findings at numerous seminars throughout the United States, including Stanford University, the University of Illinois at Urbana-Champaign, the University of California, Berkeley, the University of California, San Diego, Caltech, the University of Southern California and possibly MIT in the fall.

Provided by Texas A&M University

