A Dartmouth College team has created the first light-sensing system that reconstructs human postures continuously and unobtrusively, furthering efforts to create smart spaces in which people control their environment with simple gestures.
The findings and a demonstration video will be presented Sept. 9 at MobiCom, the 21st annual International Conference on Mobile Computing and Networking.
Light plays many roles in our lives, from illumination to energy source, but advances in visible light communication (VLC) add a new dimension to the list: data communication. VLC encodes data into light intensity changes at a frequency too high for human eyes to perceive. Unlike conventional radio frequency systems that require complex signal processing, VLC uses energy-efficient light-emitting diodes to transmit data inexpensively, securely, cleanly and with virtually unlimited bandwidth. Any device equipped with a light sensor can recover the data by monitoring light changes.
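The article does not specify which modulation scheme the researchers use; as a minimal sketch, the sentence above can be illustrated with on-off keying (OOK), one common VLC scheme, in which each bit becomes a short run of high or low light-intensity samples that a photodiode can threshold back into bits:

```python
def encode_ook(bits, samples_per_bit=4):
    """Map each bit to a run of light-intensity samples (1 = LED on, 0 = off).
    At a high enough rate, the eye sees only steady illumination."""
    return [level for b in bits for level in [b] * samples_per_bit]

def decode_ook(samples, samples_per_bit=4, threshold=0.5):
    """Recover the bits from photodiode samples by averaging each run
    and comparing against a brightness threshold."""
    bits = []
    for i in range(0, len(samples), samples_per_bit):
        chunk = samples[i:i + samples_per_bit]
        bits.append(1 if sum(chunk) / len(chunk) > threshold else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
transmitted = encode_ook(message)
recovered = decode_ook(transmitted)   # round-trips back to `message`
```

The function names and parameters here are illustrative, not taken from the paper; a real link would also handle ambient light, clock recovery and error correction.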
"Here we are pushing the envelope further and ask: Can light turn into a ubiquitous sensing medium that tracks what we do and senses how we behave?" says senior author Xia Zhou, an assistant professor of computer science and co-director of the DartNets (Dartmouth Networking and Ubiquitous Systems) Lab.
Envision a smart space such as a home, office or gym that takes advantage of the ubiquity of light as a medium integrating data communication and human sensing. Smart devices such as smart glasses, smart watches and smartphones equipped with photodiodes communicate using VLC. More importantly, light also serves as a passive sensing medium. Users can continuously gesture and interact with appliances and objects in a room (for example, a wall-mounted display, computers, doors, windows or a coffee machine), much as they would use a Kinect or Wii in front of a TV. But there are no cameras (high-fidelity sensors that raise privacy concerns) monitoring users, and no on-body devices or sensors that users must constantly wear or carry, just LED lights on the ceiling and photodiodes on the floor. Compared with existing methods that use wireless radio signals such as Wi-Fi to track user gestures, light-based sensing introduces no electromagnetic interference and is not limited to classifying a predefined set of gestures and activities.
In their new study, the researchers developed a system called LiSense that uses VLC to reconstruct a human skeleton's movements in real time (60 Hz). They built a first-of-its-kind light-sensing testbed in the DartNets lab using off-the-shelf LED lights, photodiodes and microcontrollers. LiSense uses the shadows the human body casts by blocking light to reconstruct 3-D human skeleton postures in real time. The researchers overcame two key challenges to realize shadow-based human sensing. First, multiple lights on the ceiling produce diminished and overlapping shadow patterns on the floor, so they designed VLC-enabled light beacons to separate the rays from different light sources and recover the shadow pattern cast by each individual light. Second, they designed an algorithm to reconstruct human postures from the limited-resolution 2-D shadow information collected by photodiodes embedded in the floor.
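The beacon idea in the first challenge can be sketched in a few lines. The details below are assumptions, not the paper's actual design: suppose each ceiling LED flickers at its own carrier frequency, so a floor photodiode sees a sum of carriers, each scaled by how much of that light reaches it. Correlating the mixed signal against each carrier then recovers the per-light level, and a low level means the body is shadowing that particular LED:

```python
import numpy as np

# Hypothetical parameters: 3 ceiling LEDs, each flickering at its own
# beacon frequency (all chosen to fit a whole number of cycles in the window).
FS = 4000                          # photodiode sampling rate (Hz)
T = 0.05                           # observation window (s)
BEACONS = [200.0, 340.0, 500.0]    # one carrier frequency per light (Hz)

t = np.arange(0, T, 1 / FS)

def photodiode_signal(visibility):
    """Mixed signal at one floor photodiode; visibility[i] in [0, 1] is
    the fraction of light i NOT blocked by the body."""
    return sum(v * np.sin(2 * np.pi * f * t)
               for v, f in zip(visibility, BEACONS))

def separate(signal):
    """Correlate against each beacon carrier to recover the per-light level,
    i.e. the shadow each individual LED casts on this photodiode."""
    return [2 * np.mean(signal * np.sin(2 * np.pi * f * t)) for f in BEACONS]

# Example: the body fully blocks light 1 and partially blocks light 3
# at this photodiode.
mixed = photodiode_signal([1.0, 0.0, 0.8])
levels = separate(mixed)   # recovers approximately [1.0, 0.0, 0.8]
```

Repeating this at every photodiode in the floor yields one 2-D shadow map per LED, which is the input the posture-reconstruction algorithm in the second challenge would work from.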
"Light is everywhere and we are making light very smart," Zhou says. "Imagine a future where light knows and responds to what we do. We can naturally interact with surrounding smart objects such as drones and smart appliances and play games, using purely the light around us. It can also enable a new, passive health and behavioral monitoring paradigm to foster healthy lifestyles or identify early symptoms of certain diseases. The possibilities are unlimited."