New machine learning models can detect hate speech and violence from texts

April 13, 2017

The words we use and our writing styles can reveal information about our preferences, thoughts, emotions and behaviours. Using this information, a new study from the University of Eastern Finland has developed machine learning models that can detect antisocial behaviours, such as hate speech and indications of violence, from texts.

Historically, most attempts to address antisocial behaviour have come from educational, social and psychological perspectives. This new study, however, demonstrates the potential of machine learning and natural language processing techniques for developing state-of-the-art solutions to combat antisocial behaviour in written communication.

The study created solutions that can be integrated into web forums or social media websites to automatically or semi-automatically detect potential incidents of antisocial behaviour with high accuracy, enabling fast and reliable warnings and interventions before possible acts of violence are committed.
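As a rough illustration of what such semi-automatic handling could look like in a forum, the sketch below maps a detector's probability estimate to a moderation action. The thresholds and action names are assumptions made for this example, not details taken from the study.

```python
# A minimal sketch of how a detector's score might drive forum moderation.
# The thresholds and action labels below are illustrative assumptions only.
def moderation_action(prob_antisocial: float) -> str:
    """Map a classifier's probability estimate to a moderation decision."""
    if prob_antisocial >= 0.9:
        return "block and alert moderators"   # high-confidence automatic intervention
    if prob_antisocial >= 0.5:
        return "flag for human review"        # semi-automatic handling
    return "publish normally"

for score in (0.95, 0.6, 0.1):
    print(score, "->", moderation_action(score))
```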

One of the great challenges in detecting antisocial behaviour is first defining what precisely counts as antisocial behaviour and then determining how to detect such phenomena. Thus, using an exploratory and interdisciplinary approach, the study applied natural language processing techniques to identify, extract and utilise linguistic features, including emotion-based features, pertaining to antisocial behaviour.
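The dissertation describes emotion and word-based features; purely as an illustrative sketch, the snippet below shows one common way such a text classifier can be assembled, using TF-IDF word n-grams and logistic regression in scikit-learn on a tiny invented toy dataset. None of the data, features or model choices here reflect the study's actual setup.

```python
# A minimal, hypothetical sketch of a word-feature text classifier (not the study's model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy examples invented for illustration; labels: 1 = antisocial, 0 = neutral
texts = [
    "I will hurt you if you show up again",
    "People like you should be driven out",
    "Thanks for the helpful answer, much appreciated",
    "Does anyone know a good tutorial on this topic?",
]
labels = [1, 1, 0, 0]

# Word uni- and bigram TF-IDF features feeding a logistic regression classifier
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("clf", LogisticRegression()),
])
model.fit(texts, labels)

# Estimated probability that a new post is antisocial
new_post = "You deserve to get hurt"
prob = model.predict_proba([new_post])[0][1]
print(f"estimated probability of antisocial content: {prob:.2f}")
```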

The study investigated the role and presence of emotions in antisocial behaviour. Literature in psychology and cognitive science shows that emotions play a direct or indirect role in instigating antisocial behaviour. The study therefore created a novel resource for analysing emotions in written language, which further contributes to subfields of natural language processing such as emotion and sentiment analysis. The study also created a novel corpus of antisocial behaviour texts, allowing for deeper insight into how antisocial behaviour is expressed in written language.
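To illustrate the general idea of emotion-based features (not the study's actual resource), the sketch below counts occurrences of words from a tiny, invented emotion lexicon; such category counts could then be combined with word-based features like those above.

```python
# A minimal sketch of emotion-based features: count how often words from each
# emotion category occur in a text. The tiny lexicon is invented for illustration;
# the study built its own, much richer emotion resource.
from collections import Counter
import re

EMOTION_LEXICON = {
    "hate": "anger", "furious": "anger", "destroy": "anger",
    "afraid": "fear", "threat": "fear",
    "happy": "joy", "grateful": "joy",
}

def emotion_features(text: str) -> Counter:
    """Return counts of emotion categories found in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(EMOTION_LEXICON[t] for t in tokens if t in EMOTION_LEXICON)

print(emotion_features("I hate this and I will destroy everything"))
# Counter({'anger': 2}) -- such counts can be appended to word-based feature vectors
```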

The study shows that natural language processing techniques can help detect antisocial behaviour, which is a step towards its prevention in society. With continued research on the relationship between natural language processing and societal concerns, and with a multidisciplinary effort in building automated means to assess the probability of harmful behaviour, much progress can be made.

More information: Leveraging Emotion and Word-Based Features for Antisocial Behavior Detection in User-Generated Content: epublications.uef.fi/pub/urn_i … 78-952-61-2464-3.pdf
