Cardan Samples has two favorite prime-time TV shows: "The Office" and the science fiction series "Fringe."
But between classes and homework at McCombs School of Business, the Management Information Systems junior doesn't have time to watch them when they air.
Instead, once or twice a week he uses the 15-20 minute bus ride from The University of Texas at Austin campus to his apartment on Riverside Drive to catch up on episodes by streaming them on his 4x2-inch cell phone.
"It's so convenient," Samples said. "When I'm on the bus or have free time on campus, I can just pull out my phone."
Samples is not alone in his thinking. Thanks to a combination of popular video streaming sites, like Netflix and YouTube, along with revolutionary and ever-growing mobile technologies that make watching or sharing a video as easy as switching on your iPhone, Americans are now spending the majority of their online time streaming videos.
Mobile video streaming is growing so quickly that wireless cell phone traffic is expected to increase by as much as 65 times by 2014, with most of this increase in the form of streaming digital video. If the forecast holds true, 3G and 4G wireless networks won't be able to support the demand, and the influx of video traffic could grind mobile networks to a halt.
This reality has commanded the attention of global technology companies, two of which are looking to professors at the Cockrell School of Engineering for a solution.
Five faculty members in the school's Electrical and Computer Engineering Department have been selected to receive a $900,000 gift from Intel and Cisco to develop novel algorithms that could improve wireless networks' ability to store, stream and share mobile videos more efficiently. The gift is part of an interdisciplinary, multi-year research effort among engineering faculty at five universities, selected out of 18 submissions worldwide: The University of Texas at Austin, Cornell University, the University of California San Diego, the University of Southern California and Moscow State University.
"It takes a multidisciplinary approach to address this," said Jeffery Foerster, a principal engineer at Intel Labs, "and The University of Texas at Austin has unique skills and knowledge behind the understanding of video quality measurement and metrics, and it has a close-knit group in wireless communications that has been looking at these video analysis techniques from not just the theoretical side but the practical side. Each of the universities has unique capabilities that combine for a comprehensive solution."
The problem ahead
Cockrell School professors Robert W. Heath Jr., Alan Bovik, Gustavo de Veciana, Jeffrey G. Andrews and Constantine Caramanis bring to the project years of collective expertise, and they'll need to draw on all of it for the mammoth problem ahead.
For starters, in 2010 almost 2 billion people around the world connected to the Internet, and more are doing so each year. In 2010, 143 countries offered 3G services commercially compared to only 95 three years prior.
Supporting these online interactions are the wireless network's base stations, which can interfere with one another. Depending on where you are in relation to each station and how many other users you're competing with to watch a video on your phone, the transfer of information can feel more like braking through an urban grid of stop-and-go traffic lights instead of cruising past intelligently timed signals that make for a seamless, efficient ride, saving time and frustration.
"Our goal is to provide high perceptual quality video," said Heath, associate professor in electrical and computer engineering and the David and Doris Lybarger Endowed Faculty Fellow in Engineering. "Doing so requires the delivery of fewer, more perceptually relevant bits per video stream, communicating those bits more efficiently throughout the network and creating a more capable perception and video-content-aware network infrastructure."
How to solve it
But to make the network more aware of video content and, more important, of the quality users perceive when they view it, the researchers have to first understand how humans judge and rate videos. For instance, what in the human brain determines whether a video's quality is good or poor? And what compromises are viewers willing to make to watch a video on their phone? The screen is smaller, sure, but will a viewer notice or care if images in the background of a video aren't as sharp as the main subject in focus?
Such questions are important because with so much demand for video and limited bandwidth to send it, the researchers must know what aspects of a video are less important to a viewer so the layers in the video can be labeled accordingly and the network can adjust to send only the critical pieces when bandwidth is stretched thin. Traditionally, the network's only measurement for quality has been to compare the copy of a video sent to a person's phone to an original, undistorted version. But even then inaccuracies arise in the measurements.
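The article mentions that the network has traditionally judged quality by comparing the delivered copy of a video against the pristine original. A minimal sketch of that idea is the classic peak signal-to-noise ratio (PSNR), a simple full-reference metric; the "frames" below are tiny hypothetical pixel lists for illustration, not the researchers' actual metric:

```python
import math

def psnr(original, received, max_val=255.0):
    """Peak signal-to-noise ratio: a classic full-reference quality metric
    that scores a received frame against the undistorted original."""
    mse = sum((o - r) ** 2 for o, r in zip(original, received)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames: no distortion at all
    return 10 * math.log10(max_val ** 2 / mse)

# Hypothetical 4-pixel "frames": one lightly distorted copy, one heavily distorted
pristine = [100, 120, 130, 140]
light    = [101, 119, 130, 141]
heavy    = [ 90, 140, 110, 160]

print(psnr(pristine, light) > psnr(pristine, heavy))  # True: lighter distortion scores higher
```

As the article notes, such pixel-difference measurements can be inaccurate precisely because they ignore how the human visual system weighs distortions, which is the gap the researchers' perceptual metrics aim to close.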
To improve the system, the researchers are developing novel algorithms that could be built into the network and devices so that they know what a human would deem acceptable or poor video quality.
The researchers recently finished a massive study in which people were asked to rate the quality of 3-D video images. The results are still being analyzed but metrics from the study will be important tools for developing the algorithms, which, once plugged into the network, will allow it to support 3-D video to multiple users by sending only what those users would view as necessary for the video's quality to be good.
Dr. Bovik, director of the Cockrell School's Laboratory for Image and Video Engineering (LIVE), demonstrates this point by holding up a photo. It's of a little girl whose face takes up most of the frame. But the image is distorted and pixelated, something that's most noticeable on her face and less noticeable on the flowers behind her. Because of how the brain processes visual information, the distortions are more visible in some places than others, even though the actual distortion level is the same everywhere. Modeling these kinds of perceptual processes is key to understanding how the brain perceives visual distortions, and how they might be measured digitally, Bovik said.
Another aspect of the professors' research is to manage interference within the network. To understand this, let's go back to Samples.
When he makes his commute from campus to home, wireless base stations are feeding video to his phone. But these stations can pick up interference from each other and, in the current system, don't communicate with one another. This results in stations expending huge amounts of energy and bandwidth to get a video to Samples' phone, even when another station is much closer to him and could do it with less bandwidth.
The professors are working to design a more intelligent system, one in which the different stations communicate with one another in real-time and coordinate which should transmit information to Samples based on his proximity. More important, the professors say, the stations can coordinate so that bandwidth capacity can be added on demand wherever it's most needed at a given time.
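The coordination idea above can be sketched very simply: pick the station that is closest to the user among those with spare capacity. The station names, positions and capacity figures below are hypothetical, and this is only an illustration of the selection logic, not the professors' actual system:

```python
import math

# Hypothetical base stations: position (km) and spare capacity (Mbps)
stations = [
    {"name": "campus",    "pos": (0.0, 0.0), "spare_mbps": 2.0},
    {"name": "riverside", "pos": (3.0, 1.0), "spare_mbps": 20.0},
]

def pick_station(user_pos, stations, needed_mbps):
    """Choose the closest station that still has capacity for the stream,
    mimicking the real-time coordination described above."""
    candidates = [s for s in stations if s["spare_mbps"] >= needed_mbps]
    if not candidates:
        return None  # no station can serve the request right now
    return min(candidates, key=lambda s: math.dist(user_pos, s["pos"]))

# A rider near Riverside asking for a 5 Mbps stream gets the nearby station,
# because the distant campus station lacks the spare capacity anyway.
print(pick_station((2.5, 0.8), stations, needed_mbps=5.0)["name"])
```

In a real network the decision would also weigh signal strength, interference and load, but the core trade-off is the same: serve each user from the station that can do it most cheaply.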
"You can watch the Super Bowl on your phone now, so if it's on there would be a tremendous amount of video traffic at that time," Bovik said. "In a more intelligent wireless system, the base stations could communicate and would know to give more juice to the area where video traffic is highest."
The researchers are also trying to design a system that can take advantage of good opportunities when it has them. For instance, when someone has a speedy wireless connection on his or her phone, the network would be aware, and in turn would react by sending much more information, or parts of the video, during the buffering process. Currently, the network sends the same amount of video during this time, regardless of whether the connection is slow or fast.
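One way to picture this opportunistic behavior is to size each buffered chunk of video to the measured link speed rather than using a fixed amount. The function below is a hypothetical sketch (the chunk duration and safety margin are invented parameters, not values from the research):

```python
def chunk_size_bytes(measured_mbps, chunk_seconds=2.0, safety=0.8):
    """Size the next buffered chunk to the link: a fast connection gets
    more video per request instead of a fixed amount."""
    usable_bits = measured_mbps * 1e6 * chunk_seconds * safety
    return int(usable_bits / 8)

print(chunk_size_bytes(1.0))   # slow link: small chunk
print(chunk_size_bytes(20.0))  # fast link: a 20x larger chunk while conditions are good
```

The point is simply that the sender exploits a fast connection while it lasts, instead of drip-feeding the same amount of data regardless of conditions.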
The goal is to use the perceptual quality metrics to deliver all video at high quality. This is a challenge: each piece of the network can be optimized separately, but that does not mean the pieces will all work together. To address this, the Cockrell School researchers are turning to mathematical techniques such as data mining, widely used by companies like Amazon and Netflix. These techniques will help the network figure out how to configure itself based on perception.
For example, if Samples is watching a video far away from a base station, the station may send the video at a lower rate. It could achieve this in different ways, such as reducing the video's resolution or frame rate. It could also vary the amount of redundancy added to the video to ensure it arrives correctly. These decisions may depend on the specific video and where exactly Samples is located. The network will try different strategies for sending video, and over time it will learn the best ways to serve different users.
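The "try different strategies and learn over time" idea can be sketched as a simple trial-and-error learner. The strategy names and quality scores below are entirely hypothetical, and this epsilon-greedy loop is a textbook stand-in for whatever learning methods the researchers actually use:

```python
import random

random.seed(0)

# Hypothetical delivery strategies the network can try for a given user
strategies = ["lower_resolution", "lower_frame_rate", "more_redundancy"]

# Hidden "true" perceptual-quality score of each strategy (unknown to the learner)
true_quality = {"lower_resolution": 0.6, "lower_frame_rate": 0.5, "more_redundancy": 0.8}

counts = {s: 0 for s in strategies}
totals = {s: 0.0 for s in strategies}

def choose(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best-known strategy, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(strategies)
    return max(strategies, key=lambda s: totals[s] / counts[s] if counts[s] else 0.0)

for _ in range(2000):
    s = choose()
    reward = true_quality[s] + random.gauss(0, 0.05)  # noisy viewer rating
    counts[s] += 1
    totals[s] += reward

best = max(strategies, key=lambda s: totals[s] / counts[s])
print(best)  # the learner converges on the highest-quality strategy
```

Over many streams, the network accumulates evidence about which delivery choice viewers rate best in each situation and shifts its traffic toward it, much as recommendation systems at Amazon or Netflix learn from user behavior.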
Streaming smoothly to everyone
While Samples likes to catch up on his favorite shows by streaming on his cell phone, Foerster, the principal engineer at Intel Labs, prefers watching short news or entertainment video clips while waiting at the airport. He also lets his twin seven-year-old girls watch videos wirelessly while on road trips.
Regardless of how these videos get to Samples and Foerster, the goal for them is the same: to hit "play" and watch their video without interruption or disturbances.