How to engineer intelligence

How to Engineer Intelligence takes place at Cambridge Science Festival on March 20, 2012. Credit: FlySi via Flickr

"Do we actually want machines to interact with humans in an emotional way? Will it be possible for them to interact with us?"

Those are just two of the questions posed by UCL academic David Barber as he prepares for his appearance at Cambridge Science Festival on March 20 (6pm).

Barber, who works on machine learning and applications of probability in information processing, will discuss biological inspirations for computing and how these can help humans interact with machines in his talk ‘How to engineer intelligence’.

He will discuss the challenges of getting computers to process information in ways that make interaction with humans more natural. To an extent this is already happening, with smartphones equipped with speech-recognition software such as Siri.

According to Barber, people increasingly expect to interact naturally with machines: to have them understand what we say and move naturally in our environment.

He said: “There are already research programmes that attempt to gauge the emotion in someone’s voice or face, but I’m more interested in a machine that could recognise the emotional significance of an event for a person. In my talk, however, I’m going to mainly address biological inspiration for computing and how this can help humans interact with machines.”

Barber is keen to point out that these machines might never look like the robots we’ve seen in films such as Blade Runner or The Terminator, despite the vast progress made in the past 20 years.

The ultimate dream for researchers in the field of machine learning would be for machines to comprehend what we say not only in the pure semantic sense, but in an emotional sense as well.

How might a machine in the future react when reading an emotional novel? Could it ever act similarly to a human? Could these intelligent machines feel sad or happy? Would they understand the emotional consequences of the human sentence ‘I’ve lost my job’?

These questions represent some of the fundamental challenges that lie ahead – necessitating a large database of information about humans and the human world. Any machine that wishes to understand the complexity of social interaction, society and behaviour, needs to have some grasp of what it really means to be human.

Perhaps the first step in reverse-engineering intelligence is to understand the theoretical aspects of information processing in the brain. From this, researchers can then analyse how an ‘artificial brain’ would be able to process or store information in the same way.

Barber highlights a plurality of approaches to understanding intelligence, all of which may help to create a free-thinking machine in the near future. The question that remains, however, is: for what purposes might these machines be used?


More information: ‘How to engineer intelligence’, 20 March, 6pm – 7pm. For more info please visit our website at
Citation: How to engineer intelligence (2012, March 20) retrieved 17 October 2019 from


User comments

Mar 20, 2012
No, no, no.

As great as it is in science fiction, there must never be a "Commander Data" or a "Doctor" like in Star Trek.

Computers are too powerful and have too perfect a memory and execution; obviously, self-aware, emotional computers would be an immediate hazard to humans.

If you have ever played a modern Real Time Strategy game, you know that at peak performance at the most critical points in a game, the computer player can play at over 1000 actions per minute and average 300 to 400 for an entire game...that's with the processors splitting time between the OS, the game engine, and the graphics and the human player's interface...and splitting time between 7 computer "players"...

Now imagine 10 years from now: Moore's law and all, with core counts doubling every 1.5 to 2 years and incremental tweaks improving clock speed, computers will be roughly 50 times more powerful.
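For what it's worth, that back-of-envelope figure can be checked in a few lines of Python. The 1.5-to-2-year doubling period is the commenter's assumption, not a measured number; the claimed "about 50 times" falls between the two projections:

```python
# Moore's-law-style projection: speedup = 2 ** (years / doubling_period).
# Assumption (from the comment above): doubling every 1.5 to 2 years.
years = 10
for doubling_period in (1.5, 2.0):
    speedup = 2 ** (years / doubling_period)
    print(f"doubling every {doubling_period} years -> ~{speedup:.0f}x")
```

With a 2-year doubling period this gives about 32x, and with 1.5 years about 100x, so "about 50 times" is a plausible middle estimate.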

Now imagine that raw calculations and multi-tasking power, but with human or even near-human "understanding"...

Mar 20, 2012
Truth be told, I discovered the MAJORITY of human players cannot defeat one "Very Hard" computer in StarCraft 2 in 1v1, except possibly by cheese cannon-rushing it...even though the computer has almost no "understanding" of the game. It's just running a script with a very bad build order, and doesn't even make the right units in some matchups!

It overwhelms most players through sheer multi-tasking and unit ability micromanagement. For example, it can use the Ghost's "Snipe" ability about 10 times per second, which no human being could possibly do, while simultaneously continuing to build and micro and maneuver other units.

Now again, imagine that sort of multi-tasking in a system with human or even dog level "understanding".

You would never be able to defeat it at anything. Further, since it's pure software, it could also interface with what we call "expert systems" as needed to better itself in specific areas...

This can't be allowed to exist, as much as every boy wants a robot.

Mar 21, 2012
Now again, imagine that sort of multi-tasking in a system with human or even dog level "understanding".

You seem to think that intelligence in a machine would be just like playing a more complex version of tic-tac-toe, when the problem is really how to make anything humanly recognisable into a form that can be computed by a program.

A computer is just a very fast calculator, and the intelligence you get out of it is a property of the software it runs. It takes all the multi-tasking capability of a modern computer just to run an insect-level "understanding" of things.

Mar 21, 2012
Computers ... have too perfect of memory and execution

Just wait till you get your first girlfriend.

emotional computers would be an immediate hazard to humans.

Why? Computers/robots do not need to have the kind of emotions which cause all the nasty things we do to each other (greed, envy, ... - as you demonstrate - the fear of the unknown, ...). These emotions arise from our biological nature (limited time of survival, need for resources, drive for procreation, ...).

And an intelligent computer would not automatically be better at everything than a human. E.g., just because a computer could think and understand what we tell it would not automatically make it an expert hacker of other computers (just as you - by virtue of being human - are not automatically an expert at making other humans do what you want).
