Comforting chatbot

February 5, 2014

Chatting with customer service is now considered normal. But what if 'Eva', 'John' or 'Julia' were capable not just of solving technical problems but also of providing us with emotional support? Janneke van der Zwaan investigated this possibility in the NWO research programme Responsible Innovation. She will defend her doctoral thesis on March 10.

Someone who is upset, angry or depressed is less capable of making wise decisions and solving problems. That was the starting point for the research of Janneke van der Zwaan, who will shortly defend her doctoral thesis at Delft University of Technology. Sometimes a person needs a sympathetic listening ear before they can be receptive to good advice. That applies just as much in the virtual world as it does in real life. But an empathic chatbot did not yet exist.

Bullied children

Van der Zwaan is an expert in the area of artificial intelligence. She developed a prototype for an empathic virtual 'buddy' for children who are being bullied. His name is Robin. 'I was mainly interested in the effects of Robin's appearance and behaviour on his conversation partners', says Van der Zwaan. 'I therefore kept the programme behind the robot as simple as possible. For example, I made use of multiple-choice answers. Chatting in a natural language is in itself already very difficult and that would only obscure our view of the interaction.'

Imitating people

Robin looks like a sort of SpongeBob who can do two things: chat (according to a preprogrammed question and answer model) and show emotions through his facial expressions. The emotions Robin can show have also been programmed. How can a fully programmed chatbot do something so inherently human as offer comfort? Very simple, explains Van der Zwaan: 'By imitating people.'


She based the software behind Robin on models for human conversations in which comfort is provided. Some of these are 'from textbooks', prescribed as an effective method for holding coaching conversations. Others have been acquired from hundreds of informal conversations in which friends, acquaintances or colleagues tried to support each other. From the various models, Van der Zwaan distilled a single conversation model with questions such as 'what happened exactly?', 'how do you feel now?' and 'have you tried this before?' and answers such as 'I am really sorry for you' and 'well done', 'smart, that you blocked this bully', or 'well done, that you talked to someone about the bullying'.
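A preprogrammed conversation model of this kind can be pictured as a small dialogue tree in which the child picks from multiple-choice options and the buddy responds with a scripted question, comforting remark or piece of advice. The sketch below is purely illustrative and uses hypothetical node names and texts (only the sample questions and answers quoted above come from the article); it is not Van der Zwaan's actual software.

```python
# Illustrative sketch of a multiple-choice comforting dialogue,
# loosely modelled on the conversation structure described in the
# article. All node names and transitions are hypothetical.

DIALOGUE = {
    "start": {
        "bot": "What happened exactly?",
        "choices": {
            "Someone keeps calling me names online.": "feelings",
            "A classmate spreads rumours about me.": "feelings",
        },
    },
    "feelings": {
        "bot": "I am really sorry for you. How do you feel now?",
        "choices": {"Sad": "advice", "Angry": "advice"},
    },
    "advice": {
        "bot": "Have you tried blocking this bully?",
        "choices": {"Yes, I blocked them.": "praise", "No, not yet.": "suggest"},
    },
    "praise": {"bot": "Well done! Smart, that you blocked this bully.", "choices": {}},
    "suggest": {"bot": "Talking to someone you trust can help too.", "choices": {}},
}


def reply(node="start", pick=None):
    """Return the bot's utterance; `pick` selects a choice and follows the tree."""
    step = DIALOGUE[node]
    if pick is not None:
        return DIALOGUE[step["choices"][pick]]["bot"]
    return step["bot"]


print(reply())  # opening question
print(reply("advice", "Yes, I blocked them."))  # scripted praise
```

Because every question, answer and transition is fixed in advance, such a model sidesteps natural-language understanding entirely, which is exactly the simplification Van der Zwaan describes.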


Van der Zwaan first tested the comforting cyberbuddy among one hundred students, then among professional care providers and finally among a group of children. The majority were convinced that a bullied child would feel better after chatting with Robin. The professional care providers found Robin's appearance too square and unnatural, whereas the children liked this aspect. The professional care providers found the combination of comforting answers particularly important, whereas the children were most happy with the practical advice.

First step

'On balance, Robin therefore appears to be a successful empathic virtual buddy', says Van der Zwaan. 'This is still just a first step. The animations can be refined far more and greater variation needs to be brought into the conversations. But the underlying principle works; this study has demonstrated that. Even people who initially felt resistance towards the idea of a virtual character that got involved with human emotions still "came round" once they had talked with Robin.'


Bullying is not the only area where empathic chatbots can play a role. Van der Zwaan: 'I deliberately chose this subject because children find it difficult to talk about it with their parents. The threshold for bringing up the subject with a virtual buddy is far less. There are more such taboo subjects. Examples are abuse, coping with grief or loneliness. A buddy could also be used as a coach for people who are on a diet, need to take medicines or quit smoking.'

Further development

Van der Zwaan is now looking for investors with whom she can develop her prototype further. Of course a virtual buddy can never replace real human contact, she emphasises, let alone treatment. 'However, chatbots can certainly play a role in the first step towards a broader package of support or be a part of such a package, as long as they are warm and empathic. My research has shown that we can construct such "warm" virtual personalities.'
