Operating smart devices from the space on and above the back of your hand
The new input method relies on a depth sensor that tracks movements of the thumb and index finger on and above the back of the hand. This makes it possible to control not only smartwatches, but also smartphones, smart TVs, and devices for augmented and virtual reality.
They're called the "Apple Watch Series 2", "LG Watch", "Samsung GEAR S3" or "Moto 360 2nd Gen" but they all have the same problem. "Every new product generation has better screens, better processors, better cameras, and new sensors, but regarding input, the limitations remain," explains Srinath Sridhar, a researcher in the Graphics, Vision and Video group at the Max Planck Institute for Informatics.
Together with Christian Theobalt, head of the Graphics, Vision and Video group at MPI, Anders Markussen and Sebastian Boring at the University of Copenhagen and Antti Oulasvirta at Aalto University in Finland, Srinath Sridhar has therefore developed an input method that requires only a small camera to track fingertips in mid-air, and touch and position of the fingers on the back of the hand. This combination enables more expressive interactions than any previous sensing technique.
Regarding hardware, the prototype, which the researchers have named "WatchSense", requires only a depth sensor, a much smaller version of the well-known "Kinect" controller for the Xbox 360 video game console. With WatchSense, the depth sensor is worn on the user's forearm, about 20 cm from the watch. Acting as a kind of 3D camera, it captures the movements of the thumb and index finger, not only on the back of the hand but also in the space above it. The software developed by the researchers recognizes the position and movement of the fingers within the 3D image, allowing the user to control apps on smartphones or other devices. "The currently available depth sensors do not fit inside a smartwatch, but from the trend it's clear that in the near future, smaller depth sensors will be integrated into smartwatches," Sridhar says.
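The per-frame pipeline described above — a depth image in, an estimated fingertip position out — can be sketched roughly as follows. This is a minimal illustration, not the authors' actual code: the distance band and the nearest-point heuristic are assumptions, and a real system like WatchSense instead uses a machine-learned classifier so that individual fingers can be told apart.

```python
import numpy as np

def segment_hand(depth_frame, near=0.05, far=0.35):
    """Keep only pixels within a plausible distance band of the sensor.

    depth_frame: 2D array of distances in meters (0 = invalid pixel).
    The band [near, far] is a made-up assumption for a wrist-worn sensor.
    """
    return (depth_frame > near) & (depth_frame < far)

def nearest_point(depth_frame, mask):
    """Crude fingertip guess: the closest valid pixel to the sensor.

    WatchSense itself classifies pixels with a trained model rather than
    using this simple heuristic, which cannot distinguish fingers.
    """
    masked = np.where(mask, depth_frame, np.inf)
    idx = np.unravel_index(np.argmin(masked), masked.shape)
    return idx, masked[idx]

# Tiny synthetic 4x4 depth frame (meters); 0 marks invalid pixels.
frame = np.array([
    [0.00, 0.40, 0.30, 0.30],
    [0.30, 0.12, 0.20, 0.30],
    [0.30, 0.25, 0.28, 0.30],
    [0.40, 0.40, 0.40, 0.00],
])
mask = segment_hand(frame)
(tip_row, tip_col), tip_depth = nearest_point(frame, mask)
print(tip_row, tip_col, tip_depth)  # closest in-band pixel: 1 1 0.12
```

In practice such a segmentation step would run on every frame of the depth stream, with the per-pixel classification replacing the nearest-point shortcut shown here.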
But that is not all. According to Sridhar, the scientists' software also had to cope with the unevenness of the back of the hand and with the fact that fingers can occlude one another as they move. "The most important thing is that we can not only recognize the fingers, but also distinguish between them," explains Sridhar, "which nobody else had managed to do before in a wearable form factor. We can now do this even in real time." The software recognizes the exact positions of the thumb and index finger in the 3D image from the depth sensor because the researchers trained it to do so via machine learning.

The researchers have also successfully tested their prototype in combination with several mobile devices and in various scenarios. "Smartphones can be operated with one or more fingers on the display, but they do not use the space above it. Combining the two enables previously impossible forms of interaction," explains Sridhar. He and his colleagues showed that with WatchSense, the volume in a music program could be adjusted and a new song selected more quickly than with a smartphone's Android app. The researchers also tested WatchSense for tasks in virtual and augmented reality, in a map application, and for controlling a large external screen. In preliminary studies, WatchSense proved more satisfying than conventional touch-sensitive displays in each case. Sridhar is confident: "We need something like WatchSense whenever we want to be productive while moving. WatchSense is the first to enable expressive input for devices while on the move."
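The music-player demo combines the two input channels the article describes: mid-air finger position and touch on the back of the hand. A toy sketch of how such events might be mapped to player actions could look like this (all names, the 10 cm height range, and the event interface are hypothetical, invented for illustration):

```python
def height_to_volume(tip_height_m, max_height_m=0.10):
    """Map the index fingertip's height above the back of the hand
    (in meters) to a 0-100 volume level.
    The 10 cm usable range is an assumption, not from the paper."""
    clamped = min(max(tip_height_m, 0.0), max_height_m)
    return round(100 * clamped / max_height_m)

class MusicController:
    """Hypothetical event handler: mid-air finger height adjusts the
    volume, while a tap on the back of the hand skips to the next song."""

    def __init__(self, playlist):
        self.playlist = list(playlist)
        self.current = 0
        self.volume = 50

    def on_midair_move(self, tip_height_m):
        # Continuous mid-air gesture -> continuous parameter (volume).
        self.volume = height_to_volume(tip_height_m)

    def on_hand_tap(self):
        # Discrete on-skin touch -> discrete action (next song).
        self.current = (self.current + 1) % len(self.playlist)
        return self.playlist[self.current]

ctrl = MusicController(["Song A", "Song B", "Song C"])
ctrl.on_midair_move(0.075)   # raise the finger 7.5 cm above the hand
print(ctrl.volume)           # 75
print(ctrl.on_hand_tap())    # Song B
```

The design point this illustrates is the one Sridhar makes: the space above the hand carries a continuous control dimension that a flat touchscreen alone does not offer.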
From May 6, the researchers will present WatchSense at the renowned Conference on Human Factors in Computing Systems, or CHI for short, which this year takes place in Denver in the US.