Computer scientists reveal how aquatic Olympic gold is captured -- above and below the surface

Aug 09, 2012
Courant's Chris Bregler and his team have isolated the movements of Olympic swimmers and divers through a cutting-edge technique that reveals their motions above and below the water’s surface. Pictured above is U.S. swimmer Dana Vollmer, who captured three gold medals at the 2012 Summer Olympic Games in London.

(Phys.org) -- Computer scientists have isolated the movements of Olympic swimmers and divers through a cutting-edge technique that reveals their motions above and below the water's surface.

The work, conducted by Manhattan Mocap, LLC, together with New York University’s Movement Laboratory and The New York Times, analyzes Dana Vollmer, who won three gold medals at the 2012 Summer Olympics in London, as well as Abby Johnston, who won a silver medal in synchronized diving, and Nicholas McCrory, a bronze medalist in synchronized diving.

The research team, headed by Chris Bregler, a professor in NYU’s Courant Institute of Mathematical Sciences, followed these athletes during their training in pools across the United States this spring and deployed ground-breaking motion-capture techniques to reveal their movements above and below the water’s surface.

Their work may be viewed here.

Of particular note is the team’s creation of AquaCap™, a system that captures underwater motion. It was used to display Vollmer’s butterfly stroke and underwater dolphin kick, breaking down the technique she used to win the gold medal in the 100-meter butterfly in world-record time. Through a comparison of the two motions, the video illustrates how closely Vollmer’s kick resembles that of a dolphin swimming through the water.
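For illustration only, here is a hedged sketch of one way such a resemblance could be quantified: normalize the vertical displacement of a point tracked on the swimmer’s feet and a point on a dolphin’s fluke, then take the peak of their cross-correlation. The function name, signals, and synthetic data below are hypothetical; the article does not describe the team’s actual comparison metric.

```python
# Hypothetical sketch: compare two undulation signals by normalized
# cross-correlation. Not the team's method, only an illustration.
import numpy as np

def kick_similarity(swimmer_y: np.ndarray, dolphin_y: np.ndarray) -> float:
    """Return the peak normalized cross-correlation of two undulation signals."""
    # Zero-mean, unit-variance normalization so amplitude differences
    # (a human kick vs. a dolphin fluke) do not dominate the score.
    s = (swimmer_y - swimmer_y.mean()) / swimmer_y.std()
    d = (dolphin_y - dolphin_y.mean()) / dolphin_y.std()
    corr = np.correlate(s, d, mode="full") / min(len(s), len(d))
    return float(corr.max())  # values near 1.0 mean nearly identical wave shapes

# Example with synthetic sinusoidal undulations at slightly different phases.
t = np.linspace(0, 4 * np.pi, 200)
print(kick_similarity(np.sin(t), np.sin(t + 0.3)))
```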

Subsequent work analyzed Johnston and McCrory, showing their somersaults from the 3-meter springboard and 10-meter platform from previously unseen angles and marking another technical breakthrough in motion capture.

Conventional motion capture records the movements of performers wearing suits covered with light-reflecting markers, then translates those movements into digital models for the 3D animation used in video games and in movies such as “Avatar” and “Iron Man.” Bregler and his team used a more sophisticated computer-vision technology that tracks and records these movements directly from video, without motion-capture suits.
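The general idea behind such markerless tracking can be sketched in a few lines: pick distinctive image features on the athlete, follow them from frame to frame, and collect their trajectories. The sketch below, assuming OpenCV and a hypothetical input clip, uses Lucas-Kanade optical flow to do this; it only illustrates the principle and is not Bregler’s AquaCap system.

```python
# Minimal markerless tracking sketch: follow feature points across video
# frames with Lucas-Kanade optical flow (assumes OpenCV; clip name is
# hypothetical). Illustrative only, not the team's actual pipeline.
import cv2

cap = cv2.VideoCapture("swimmer.mp4")  # hypothetical input clip
ok, prev_frame = cap.read()
if not ok:
    raise RuntimeError("could not read video")
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

# Pick strong corner features on the first frame (ideally on the athlete).
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.3, minDistance=7)
trajectories = [[] for _ in range(len(points))]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Optical flow estimates where each point moved in the new frame.
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                     points, None)
    for i, (pt, st) in enumerate(zip(new_points, status)):
        if st:
            trajectories[i].append(pt.ravel())  # (x, y) position this frame

    prev_gray, points = gray, new_points

cap.release()
# Each trajectory is now a sequence of image positions that could be lifted
# into a digital model of the athlete's motion.
```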


