Researchers use AI to add 4-D effects to movies

July 6, 2018, University of Toronto
Fourth-year computer engineering undergraduate Yuhao Zhou (right) and U of T postdoctoral fellow Makarand Tapaswi are using neural networks to automatically turn older films into 4-D movie experiences. Credit: Ryan Perez

James Cameron's 3-D film Avatar sought to revolutionize the movie-going experience when it was released in 2009, creating an immersive world for viewers. But what if you also wanted to feel the heat and the wind while flying on a banshee, right from your cinema seat?

While a small number of so-called "4-D" cinemas that add a physical element to films already exist, researchers from the University of Toronto are working on a way to apply the technology more broadly.

"Usually the chair will shake, there can be splashing or some other kind of interaction while watching the film," says Yuhao Zhou, a fourth-year undergraduate in the Edward S. Rogers Sr. department of electrical & computer engineering, of the emerging entertainment. "Right now all these effects are created from the first phase of production. We'd like to automate this kind of process for movies that were not originally created for 4-D cinemas."

Zhou is working with Makarand Tapaswi, a U of T postdoctoral fellow of computer science, and Sanja Fidler, an assistant professor at U of T Mississauga's department of mathematical and computational sciences and the tri-campus graduate department of computer science. They recently had their work, Now You Shake Me: Towards Automatic 4-D Cinema, featured in a spotlight presentation at the Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, Utah.

Zhou says a 4-D movie is usually perceived from the first-person viewpoint, or camera. If Will Turner in Pirates of the Caribbean is feeling the wind blowing in his face, and the moviegoer wants to experience being Turner, then they, too, would feel the wind in their face.

"We want to have a feature where you can just flip a switch and experience what characters are feeling," Zhou says.

To take a regular or 3-D movie to 4-D, the researchers hired workers through a freelancing website to annotate each film's effects for their 4-D prediction model.

"For example, [in Lord of the Rings: The Fellowship of the Ring] Frodo pulls Sam out of the water, but there are several effects happening simultaneously," says Zhou, who began working with Fidler during his third-year of undergraduate studies. "First, he pulls him – there's a physical interaction with the hand. When Sam goes back down into the water, he pulls Frodo, and the boat shakes.

"The camera is your input," adds Tapaswi. "But in this case you want to experience not only what the camera sees, but also one of the characters – relive how the characters felt shaking and so on."

While 4-D technology cannot yet reproduce physical interactions such as a hand pulling, Tapaswi envisions pressure sensors simulating touch as the technology advances. The model could also prove useful in other areas such as virtual reality or augmented reality.

"We're collecting these types of annotations for future studies," Zhou says.

For their dataset, the researchers applied both effect classification and detection. For effect classification, Zhou says, their neural network, a machine-learning system that learns patterns from data, extracts features from a short clip, including movement and audio, and predicts which effects are present. For detection, he says, the neural net predicts both what the effects are and where they occur in a long video clip.
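As a rough illustration of the classification step, the sketch below fuses precomputed visual (motion) and audio features from a short clip and scores several possible effects at once; the feature dimensions, layer sizes and fusion strategy are assumptions for illustration, not the architecture described in the paper.

    import torch
    import torch.nn as nn

    class EffectClassifier(nn.Module):
        """Multi-label effect classifier for a short clip (illustrative sketch).

        Assumes visual (motion) and audio feature vectors have already been
        extracted from the clip, e.g. by pretrained video and audio networks.
        """
        def __init__(self, visual_dim=512, audio_dim=128, num_effects=6):
            super().__init__()
            self.fuse = nn.Sequential(
                nn.Linear(visual_dim + audio_dim, 256),
                nn.ReLU(),
                nn.Linear(256, num_effects),
            )

        def forward(self, visual_feat, audio_feat):
            # Concatenate the two modalities and score each effect independently,
            # since several effects (touch, splash, shake) can co-occur in one clip.
            x = torch.cat([visual_feat, audio_feat], dim=-1)
            return self.fuse(x)  # raw logits; apply sigmoid for per-effect probabilities

    model = EffectClassifier()
    logits = model(torch.randn(1, 512), torch.randn(1, 128))
    probs = torch.sigmoid(logits)  # one probability per effect class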

"You don't only want to know what happens to a character in a particular shot. You want to be able to say, '[the ] is wind now, not only because I see the wind right now, but [because] it was probably windy before,'" Tapaswi says.

The researchers also found that certain genres of movies tend to share similar effects; films set in space, such as Interstellar or Gravity, are one example. This can be seen as a novel way to cluster films, says Tapaswi.

"Usually with 3-D movies, film-goers wear glasses and sit in a chair," says Zhou. "With automatic 4-D cinema, the neural network would process 2-D and 3-D movie information, feed it into the chair, and simulate the effects."
