On the hunt for universal intelligence

Jan 27, 2011
On the hunt for the universal intelligence test. Credit: SINC

How do you use a scientific method to measure the intelligence of a human being, an animal, a machine or an extra-terrestrial? So far this has not been possible, but a team of Spanish and Australian researchers has taken a first step by presenting, in the journal Artificial Intelligence, the foundations for such a method, along with a new intelligence test.

"We have developed an 'anytime' intelligence test, in other words a test that can be interrupted at any time, but that gives a more accurate idea of the intelligence of the test subject if there is a longer time available in which to carry it out", José Hernández-Orallo, a researcher at the Polytechnic University of Valencia (UPV), tells SINC.

This is just one of the many determining factors of the universal intelligence test. "The others are that it can be applied to any subject – whether biological or not – at any point in its development (child or adult, for example), for any system now or in the future, and with any level of intelligence or speed", points out Hernández-Orallo.

The researcher, along with his colleague David L. Dowe of Monash University in Clayton, Australia, has suggested the use of mathematical and computational concepts to encompass all these conditions. The study has been published in the journal Artificial Intelligence and forms part of the "Anytime Universal Intelligence" project, in which other scientists from the UPV and the Complutense University of Madrid are taking part.

The authors have used interactive exercises in settings whose difficulty level is estimated by calculating the so-called 'Kolmogorov complexity', which measures the computational resources needed to describe an object or a piece of information. This makes them different from traditional psychometric tests and artificial intelligence tests such as the Turing test.
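Kolmogorov complexity itself is uncomputable, but the size of a compressed description gives a practical upper bound on it. A minimal sketch of that idea in Python, using zlib as the compressor (the function name is illustrative, not from the paper):

```python
import os
import zlib

def complexity_estimate(data: bytes) -> int:
    """Upper-bound estimate of Kolmogorov complexity: the length of a
    zlib-compressed description of the data (shorter = simpler)."""
    return len(zlib.compress(data, 9))

# A regular pattern compresses well (low estimated complexity),
# while random bytes barely compress at all (high estimated complexity).
regular = b"ab" * 500        # 1000 highly patterned bytes
noise = os.urandom(1000)     # 1000 effectively incompressible bytes
assert complexity_estimate(regular) < complexity_estimate(noise)
```

Any general-purpose compressor works here; the better the compressor, the tighter the bound.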

Use in artificial intelligence

The most direct application of this study is in the field of artificial intelligence. Until now there has been no way of checking whether current systems are more intelligent than the ones in use 20 years ago, "but the existence of tests with these characteristics may make it possible to systematically evaluate the progress of this discipline", says Hernández-Orallo.

What is even "more important", he adds, is that there have been no theories or tools to evaluate and compare future intelligent systems that may demonstrate intelligence greater than human intelligence.

The implications of a universal intelligence test also extend to many other disciplines. This could have a significant impact on most cognitive sciences, since each discipline depends largely on the specific techniques and systems it uses and the mathematical basis that underpins them.

"The universal and unified evaluation of intelligence, be it human, non-human animal, artificial or extraterrestrial, has not been approached from a scientific viewpoint before, and this is a first step", the researcher concludes.


More information: José Hernández-Orallo and David L. Dowe. "Measuring Universal Intelligence: Towards an Anytime Intelligence Test". Artificial Intelligence 174(18): 1508, Dec 2010. DOI: 10.1016/j.artint.2010.09.006

Provided by FECYT - Spanish Foundation for Science and Technology



User comments: 30


Zitface
5 / 5 (6) Jan 27, 2011
We may soon reach a point where we must distinguish intelligence from consciousness. Or not.
Pyle
5 / 5 (8) Jan 27, 2011
Semantics.
Ultimately, we are biased by our physical form.

Development of a test such as this is exciting because of its potential to push those working on AI in new directions. The more people we have working on it, the more likely we are to progress faster.
Quantum_Conundrum
1.3 / 5 (6) Jan 27, 2011
"Greater than human intelligence" can theoretically be made safe if it is either inhibited by some absolutely safe control mechanism, or is limited to intelligence of a certain category.

An example of a safe superintelligence would be an underground computer system that is totally isolated from all other computers, and can only communicate with the outside world via a human and a portable disk drive. This disk drive would then be inspected by a second isolated computer, a common "non-self-intelligent" system, which would scan it for viruses and hostile code to be sure the "super computer" hadn't tried to rig anything.

Dr. Kaku says our robots are like "stupid, retard, cockroaches". I think we've made some progress even in the past few years.

I've seen robot demonstrations of live-action pathfinders, and now the quad-rotor builders, which at least in one specific area, greatly exceed cockroaches.
donjoe0
5 / 5 (5) Jan 27, 2011
"a safe superintelligence would be an underground computer system that is totally isolated from all other computers, and can only communicate with the outside world via a human and a portable disk drive."

On the contrary, as twice demonstrated by Eliezer Yudkowsky in the "AI Box" challenge, a superintelligence trapped in a computer would be able to convince its human guard to let it out.
MorituriMax
3.6 / 5 (5) Jan 27, 2011
I kind of wonder if we couldn't break any intelligence test down to the following question, "Is it interested in talking to me?"
lexington
5 / 5 (4) Jan 27, 2011
But how did they define "intelligence"? That's really the core issue when it comes to any testing of this sort.
Skeptic_Heretic
5 / 5 (2) Jan 27, 2011
This seems to be more a measure of perception than intellect.
alfredh
not rated yet Jan 27, 2011
A bit off topic, but related: is there any research directed at granularizing language meanings along the same lines as we do with the phonemes of speech? It seems like a great way to build a universal translator. Is this part of intelligence analysis?
blazingspark
not rated yet Jan 27, 2011
This seems to be more a measure of perception than intellect.
I think perception and intellect are very closely related. In nature it seems that way, at least. For example, jumping spiders are smart compared to other arthropods. It seems the massive I/O capability needed to run eyes, i.e. a visual cortex, can be used to run many other functions as well.
maxcypher
3 / 5 (2) Jan 27, 2011
Since the article doesn't explain the parameters of this supposed "universal intelligence test", we really have nothing to base our opinions on.
Pyle
not rated yet Jan 27, 2011
Intellect and perception are linked, but intelligence is much more. Intelligence is a very broad term that any universal test would, necessarily, need to break down into many categories.

Identifying the means of perception of an entity and their limits would have to be the first step in any intelligence test. You have to be able to talk to an entity before you can test its intelligence.

Seems like a universal translator would be a must before we could do a universal intelligence test.
Quantum_Conundrum
3.3 / 5 (6) Jan 27, 2011
On the contrary, as twice demonstrated by Eliezer Yudkowsky in the "AI Box" challenge, a superintelligence trapped in a computer would be able to convince its human guard to let it out.


The guard is under strict orders to never, ever do such a thing, no matter what the computer says...
Blakut
5 / 5 (2) Jan 28, 2011
Yeah, humans, because in their minds if something is ultra intelligent it must be ultra violent. Figures...
Inco
4 / 5 (2) Jan 28, 2011
What we generally see as intelligence is the ability to predict the future given a set of rules we already know.
What we often test is: if a pattern evolves over n steps, how will it look at step n+1? This works with moving objects, number series and such, and is often seen in IQ tests.
But we have a base of knowledge. For example, manipulating a person to do something is also predicting the effect of a cause. And even coding a computer program falls into this: predicting the future, or rather, predicting what will happen when the code executes.

Now, to think some day a computer might be able to predict stuff better than us isn't totally far-fetched. But why do we always assume anything intelligent will be selfish and want to dominate and change the world for its personal gain?
Might it be Hollywood making us believe everything would be hostile, or our own insecurity about what we would do if we were that intelligent?
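The "step n+1" style of test Inco describes can be sketched as a toy predictor. This is only an illustration (function name and pattern set are hypothetical): it handles arithmetic and geometric number series, the two patterns most common in IQ-test items.

```python
def predict_next(seq):
    """Toy IQ-test-style predictor: if the sequence has a constant
    difference (arithmetic) or constant ratio (geometric), extrapolate;
    otherwise report that no pattern was recognized."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:                    # arithmetic pattern
        return seq[-1] + diffs[0]
    ratios = [b / a for a, b in zip(seq, seq[1:]) if a != 0]
    if len(ratios) == len(seq) - 1 and len(set(ratios)) == 1:
        return seq[-1] * ratios[0]              # geometric pattern
    return None                                 # no pattern recognized

predict_next([2, 4, 6, 8])   # → 10
predict_next([3, 9, 27])     # → 81.0
```

A richer predictor would search over many candidate rules and prefer the shortest one that fits, which is exactly where Kolmogorov complexity enters.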
ODesign
5 / 5 (1) Jan 28, 2011
I applaud the authors for recognizing the significance of Kolmogorov complexity as a measure of intelligence. Compression algorithms such as JPEG or H.264 are also ways of approximating Kolmogorov complexity for visual objects and objects moving in time. We acknowledge this colloquially by calling one codec smarter than another, but in that usage "smart" is a synonym for Kolmogorov complexity.

I suggest the way the brain stores information is highly dependent on Kolmogorov complexity, using least-effort/energy results as a synonym for smart. For example, instead of remembering all the exact colors, sizes and dimensions of a chair we saw, we use extremely aggressive information compression to conceptualize a chair plus the instance differences. This can be experimentally demonstrated and actually calculated by measuring memory retention and speed: new objects we are unfamiliar with compress poorly, while more familiar objects compress well.
Kedas
1 / 5 (1) Jan 28, 2011
I guess they will have to define a goal/target for the subject, and maybe even a predefined environment, before you can tell how good/smart 'it' is.

Different senses give different forms of information, so how do you compare the processing behind them if you provided different info?
They received different info because they have different goals/methods for survival.
So intelligence can only be measured/compared after the goal (way of surviving) has been defined.
ODesign
5 / 5 (1) Jan 28, 2011
Defining a goal target is not required for AI.

The ability to predict the future is a form of information compression, aka Kolmogorov complexity. For instance, rather than storing past information on the motion of objects around us (cars, trees, birds, balls, etc.) as a long series of recorded data, we dramatically reduce the data we remember by applying a Kolmogorov-style translation (gravity makes things fall down) to predict future movement.

So basically: make a machine try some method of predicting the future based on past observations. Measure the Kolmogorov complexity. Start compressing this information using a variety of randomly generated strategies, and then adopt the compression method, or information-storing strategy, with the best Kolmogorov complexity. Keep doing this with enough data and artificial intelligence is the likely result.
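ODesign's "try strategies, keep the shortest description" loop can be sketched in Python. As a simplification (not the project's actual method), off-the-shelf compressors stand in for the "randomly generated strategies", and the winner is simply the one producing the shortest output:

```python
import bz2
import lzma
import zlib

def best_model(data: bytes):
    """Try several compression strategies and keep the one yielding the
    shortest description, i.e. the lowest estimated Kolmogorov complexity.
    Each stdlib compressor stands in for a candidate 'model' of the data."""
    strategies = {
        "zlib": lambda d: zlib.compress(d, 9),
        "bz2": lambda d: bz2.compress(d, 9),
        "lzma": lambda d: lzma.compress(d),
    }
    sizes = {name: len(fn(data)) for name, fn in strategies.items()}
    best = min(sizes, key=sizes.get)  # shortest compressed output wins
    return best, sizes

winner, sizes = best_model(b"the quick brown fox " * 200)
```

A real search would generate and mutate strategies rather than pick from a fixed menu, but the selection criterion, shortest description of the observations, is the same.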
El_Nose
3 / 5 (2) Jan 28, 2011
@QC

There is an issue with the statement
this disk drive would then be inspected by a second isolated computer, a common "non-self-intelligent" system, which would scan it for viruses and hostile code to be sure the "super computer" hadn't tried to rig anything.


unless you code the checking machine to be just as intelligent as the super-intelligent computer you have underground, it will never be able to SAFELY check that disk. -- Its programming will by default be limited to that of human intelligence -- which I imagine will fall short of the super computer's.

gawell
not rated yet Jan 28, 2011
Presently it might cost $36 to read the full article, perhaps double that to take the test or possibly administer it to a viable candidate... in other words, someone or something whose potential intelligence is greater than its present command. Parameters might not tell much more than has already been said. A really good test would be one that could tell whether a test was even necessary. Yes... a Universal Translator... having one of those would surely qualify as a successful pass of a Universal Intelligence Test!
Lordjavathe3rd
4.7 / 5 (3) Jan 28, 2011
How do you create a machine super intelligence which is capable of hostility? Hostility through ignorance or purpose? Purpose through? Most hostility committed by humans is related very strongly to social status. Money is stolen in so much as it prevents death or far more commonly enhances the social status and respect paid to the individual.

I would just call the people afraid of a violent machine morons, but that doesn't achieve the desired effect. So I have partially explained why intelligence in itself does not have motivation to be hostile. What causes hostility is far more related to social status. Or in the case of animals, sexuality or starvation.

OK, now I've more completely explained it. Morons, fools, incompetents, idiots. Remove yourselves from your positions of authority for all the idiocy you spread.
Pyle
2 / 5 (3) Jan 28, 2011
LJ3:
Wow, troll much?

As far as I know, survival is usually the precursor for violence. Lions are violent cause they gotta eat. Lions attack other lions because they gotta eat or wanna breed.

Ultimately a super intelligence might be provoked to hostility against humans if it saw us as a threat.
DrDubious
not rated yet Jan 28, 2011
Hostility is not a prerequisite to danger. I feel no hostility toward ants crossing my sidewalk, but if I inadvertently step on them, I will be perceived as very dangerous and they would not be wrong.
If the ants tried to restrain me first, I would consider their action hostile and attempt to resist.
The_Morgan_Doctrine
not rated yet Jan 28, 2011
The article claims the most immediate use of this technology is for artificial intelligence. I contend the main use is "grokking" an alien computer architecture.
Thinkcarrier
not rated yet Jan 28, 2011
Intelligence cannot be effectively or faithfully measured through the intellectual behavior of sampled entities, simply because such measurements would only be sampling the capacity of said entities to react and/or respond.
In other words, you would only measure the intellectual reactivity of the entity, not its intellectual capacity, much less generate a true measurement of intelligence itself.
How could anybody effectively measure something that they, themselves, do not fully understand?
While sampling may be used as a clue to the internal and operational mechanisms of intellect, sampling itself is not a means of measure.
-If something is not understood, it cannot be built, much less measured-
Kedas
1 / 5 (1) Jan 29, 2011
@ODesign
You can say being able to predict the future based on the past = AI
But you can only measure it based on the reactions.
If a person knows a man will walk in and shoot him in about 30 seconds, and the person does nothing, just waits and dies, then you would think he was stupid (didn't predict it correctly, no right reaction) -- but he did predict it; his goal was simply to die, not to live.
So to measure it right you need to know the goal.
Code_Warrior
1 / 5 (1) Jan 29, 2011
If I am understanding the research correctly, they have found a universal intelligence measuring technique that requires a specific complexity model that describes the complexity of the intelligence being measured. If my understanding is correct (and I'm not saying it is) then the measure is only meaningful when comparing like kinds. Also, the more specific the type of intelligence, the more confident we can be in the measurements. From this I can only conclude that measures of "general" intelligence are of little value since intelligence is usually highly specialized in individuals, and different individuals will have high levels of intelligence in different areas and it would be difficult to construct a complexity model that can accurately measure intelligences of different kinds. Am I understanding this correctly?
gwrede
2.3 / 5 (3) Jan 29, 2011
So, your neighbor's 4-year-old has invented an IQ test that he thinks can measure the difference between the physics professor and the math professor at the local university. I don't think so.

He *may* have invented a way to measure the complexity of a subject by interviewing it. But if that is what he did, then a bunch of barrio bullies together count as more intelligent than the current chess champion. (Their Kolmogorov complexity is greater. Or, if it's not, add 2 more bullies and test again.) See if together they can beat him even at tic-tac-toe.

You know, you don't become Einstein just by calling your sorry piece of software Intelligent. In the '80s, there were several brands of AI software, in flashy boxes, in any serious computer store. The shares of those companies were fought over. And, in hindsight, those programs were about as Intelligent as the phonebook in your cell phone.
meBigGuy
2 / 5 (4) Jan 30, 2011
I hope phys-org can apply this technology to the comments section -- or, perhaps to their own articles.
stealthc
1 / 5 (2) Jan 31, 2011
this would be a great test to give to our puppet politicians to see if they can do anything other than act out pre-scripted teleprompter content.
Nartoon
not rated yet Feb 05, 2011
And so the search for intelligent life on earth goes on...