Touch, feel, see and hear the data

Feb 14, 2014 by Anthony King

It is now possible to sense scientific data. Applying the brain's subconscious processing to big data analysis offers a way to deal with the mountains of information we face in our environment.

Imagine that data could be transposed into a tactile experience. This is precisely what the EU-funded CEEDs project promises. It uses integrated technologies to support the human experience of making sense of very large datasets. Jonathan Freeman, Professor of Psychology at Goldsmiths, University of London, UK, talks to youris.com about how the project can help present data better, based on the feedback participants give as they explore it: the system closely monitors their explicit responses, such as eye movement, and their implicit reactions, such as heart rate.

What inspired you to get involved in data representation?

I felt there was a disconnect between our online and offline experience. For example, when you shop online or search for a product, maybe a pair of jeans, the webpage you land on receives information about your previous searches via cookies. It can then make inferences about you and target content appropriately. In the physical environment of a shop, there just isn't that level of insight and information available. One big driver was to ask whether there are ways we can serve content that better suits a user's needs. And it does not have to be in a commercial environment.

What solution do you suggest?

We realised that humans have to deal with all this data, and the problem is that our ability to analyse and understand it is a massive bottleneck. At the same time, the brain does an awful lot of processing that goes unused: we are not consciously aware of it, and it never surfaces in our behaviour. Our idea was therefore to marry the two and apply human subconscious processing to the analysis of big data.

Could you provide a specific example of how this could be of benefit?

Take a scientist analysing, say, a huge neuroscience dataset in our project's experience induction machine. We apply measurements that tell us whether they are getting fatigued or overloaded with information. If that is the case, the system does one of two things. It either simplifies the visualisations to reduce the cognitive load [the mental effort involved], keeping the user less stressed and better able to focus. Or it implicitly guides the person to areas of the data representation that are less heavy in information. We can use subliminal cues to do this, such as arrows that flash so quickly that people are not aware of them.
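
To make the mechanism concrete, here is a minimal sketch of an adaptive loop of the kind Freeman describes. The class names, thresholds and cue timing below are illustrative assumptions, not details of the CEEDs system:

```python
# Hypothetical sketch of the adaptive loop described above. Class names,
# thresholds and cue duration are assumptions, not taken from CEEDs.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    information_density: float  # 0..1, how much detail this area holds

@dataclass
class Scene:
    regions: list
    detail_level: int = 3       # current visual complexity

    def flash_cue(self, at: Region, duration_ms: int):
        # Stand-in for a renderer call: flash an arrow towards `at` for
        # a single frame, below the threshold of conscious awareness.
        print(f"subliminal cue -> {at.name} for {duration_ms} ms")

def adapt(pupil_dilation: float, skin_response: float, scene: Scene):
    """If implicit signals show overload, simplify or redirect attention."""
    overloaded = pupil_dilation > 0.7 or skin_response > 0.8
    if not overloaded:
        return
    if scene.detail_level > 1:
        # Option 1: reduce cognitive load by simplifying the visualisation.
        scene.detail_level -= 1
    else:
        # Option 2: guide the user towards a less information-heavy area.
        sparsest = min(scene.regions, key=lambda r: r.information_density)
        scene.flash_cue(at=sparsest, duration_ms=16)

scene = Scene(regions=[Region("cortex map", 0.9), Region("summary view", 0.2)],
              detail_level=1)
adapt(pupil_dilation=0.8, skin_response=0.5, scene=scene)
```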

Part of your approach involves watching the watchers use data. So what kind of technology do you rely on?

We devised an immersive setup in which the user is constantly monitored. We use a Kinect [a motion-sensing device] to track people's posture and body responses. A glove tracks their hand movements in more detail and measures galvanic skin response, a measure of stress. An eye tracker tells us whereabouts in the data the user is focusing, and also watches their pupils to see how dilated they are, as a measure of cognitive work rate. In parallel, a camera system analyses facial expressions and a voice analysis system measures the emotional characteristics of the voice. Finally, in this mixed-reality space, called the CEEDs eXperience Induction Machine (XIM), users can wear a vest we developed under the project that measures their heart rate.
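
As an illustration of how such heterogeneous streams might be combined, here is a sketch of a per-frame sensor record and a crude overload score. The field names, units and weights are assumptions made for this sketch, not the CEEDs data model:

```python
# Illustrative fusion of the sensor streams mentioned above. Field names,
# units and weights are invented for this sketch, not CEEDs internals.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    timestamp: float
    gaze_xy: tuple                 # eye tracker: point of focus in the display
    pupil_diameter_mm: float       # dilation as a proxy for cognitive work rate
    skin_conductance_us: float     # glove GSR in microsiemens (stress)
    heart_rate_bpm: float          # vest
    posture: str                   # Kinect skeleton summary
    facial_valence: float          # camera: -1 negative .. +1 positive
    voice_arousal: float           # voice analysis: 0 calm .. 1 excited

def overload_score(f: SensorFrame) -> float:
    """Blend the implicit signals into a single 0..1 load estimate."""
    pupil = min(f.pupil_diameter_mm / 8.0, 1.0)              # ~8 mm ceiling
    gsr = min(f.skin_conductance_us / 20.0, 1.0)
    hr = min(max(f.heart_rate_bpm - 60.0, 0.0) / 60.0, 1.0)  # above resting
    return 0.4 * pupil + 0.3 * gsr + 0.3 * hr

frame = SensorFrame(0.0, (512, 300), 6.5, 14.0, 95.0,
                    "leaning_forward", -0.2, 0.6)
print(f"overload: {overload_score(frame):.2f}")
```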

Is the visual part of the project important?

Visualisation technologies in the experience induction machine are important because people are in an immersive 3D environment. But the representations we use for the data are not just visual. There are also audio representations: spatialisation of audio and sonification of data, so that users can hear the data. For example, the activity flowing through part of the brain can be represented so that greater activity sounds louder or higher pitched to the neuroscientists studying these flows. There are also tactile actuators in the glove that allow users to grab data and feel feedback in their fingertips.
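
A toy example of the sonification idea: mapping a normalised activity value to pitch and loudness. The frequency range and gain curve here are assumptions for illustration, not the mapping CEEDs actually uses:

```python
# Toy sonification in the spirit described above. The frequency range and
# gain curve are assumed; the CEEDs mapping may differ.
def sonify(activity: float, f_min: float = 220.0, f_max: float = 880.0):
    """Map activity in 0..1 to a tone frequency (Hz) and a gain (0..1)."""
    a = max(0.0, min(1.0, activity))
    # Exponential pitch mapping: equal steps in activity give equal
    # musical intervals, which the ear perceives as even spacing.
    freq = f_min * (f_max / f_min) ** a
    gain = 0.2 + 0.8 * a          # more activity sounds louder
    return freq, gain

for a in (0.1, 0.5, 0.9):
    f, g = sonify(a)
    print(f"activity={a:.1f} -> {f:6.1f} Hz at gain {g:.2f}")
```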

What is so novel about this approach?

The Kinect is available off the shelf. But never before has anyone put all these components in one place and trialled them together. And never before has such an advanced set of sensors been assembled with the goal of optimising human understanding of big data. This is novel, cutting-edge and ambitious. It is not simple product development; it is about pushing the boundaries and taking risks.

Who will this technology be useful to?

Initially, those who deal with massive datasets, such as economists, neuroscientists and data analysts, will benefit. But ordinary people will benefit, too. We are all bombarded with information, and there are going to be real benefits in systems that respond to your implicit cues, whether as a consumer or simply as a person. It does not have to be in a consumption context.

Could you provide an example of an application outside the commercial sphere?

Imagine you are an archaeologist working in the field and you come across a piece of pottery. You look at it and say it comes from the 4th century and from such and such an object. It takes years of experience for an archaeologist to be able to do that. In our project, we are measuring how expert archaeologists look at objects and evaluate them. Then we feed this interpretation into a database to speed up the matching of pottery pieces. It makes the machine better, sharpening the predictive search powers of the technology.
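
One way to picture how expert gaze could sharpen database matching: weight each feature of a sherd by how long the expert dwelt on it, then rank candidates by weighted similarity. Everything below, from the feature names to the weights, is a hypothetical illustration, not the project's actual method:

```python
# Hypothetical sketch: rank pottery records by similarity, weighting the
# features an expert's gaze dwelt on longest. All names/values are invented.
import math

# Each record is a feature vector, e.g. (rim curvature, glaze tone,
# decoration density), normalised to 0..1.
DATABASE = {
    "amphora_4thC": (0.90, 0.20, 0.70),
    "bowl_2ndC":    (0.30, 0.80, 0.10),
    "jar_4thC":     (0.80, 0.30, 0.60),
}

def weighted_similarity(query, record, gaze_weights):
    """Cosine similarity with each feature scaled by expert dwell time."""
    q = [w * x for w, x in zip(gaze_weights, query)]
    r = [w * x for w, x in zip(gaze_weights, record)]
    dot = sum(a * b for a, b in zip(q, r))
    norm = math.hypot(*q) * math.hypot(*r)
    return dot / norm if norm else 0.0

gaze_weights = (0.6, 0.1, 0.3)   # the expert mostly studied the rim
query = (0.85, 0.25, 0.65)       # features of the newly found sherd
best = max(DATABASE, key=lambda k: weighted_similarity(query, DATABASE[k], gaze_weights))
print("best match:", best)
```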

More information: ceeds-project.eu/
