Keeping tabs on Skynet

Sep 12, 2011

(PhysOrg.com) -- In line with the predictions of science fiction, computers are getting smarter. Now, scientists are devising a test to gauge how close Artificial Intelligence (AI) is coming to matching wits with us, and whether it is drawing ahead.

Associate Professor David Dowe of Monash University’s Faculty of Information Technology, together with Dr José Hernández-Orallo from the Universitat Politècnica de València in Spain, has developed and conducted initial trials of a prototype Anytime Universal test designed to gauge and compare the intelligence of humans, animals, machines and, in principle, anything.

Both humans and an AI program known as Q-Learning undertook different versions of the test; considerable work on adapting the interface will be necessary before animals can be tested. Despite not being a sophisticated program, Q-Learning scored competitively against the human participants.
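For context, Q-Learning is a standard reinforcement-learning algorithm rather than a purpose-built test-taker: it learns a table of state-action values from reward alone. A minimal sketch of the idea (illustrative Python; the `env` interface and the parameter values are assumptions, not details of the researchers' setup):

```python
import random
from collections import defaultdict

def q_learning_episode(env, Q, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Run one episode of tabular Q-learning.

    `env` is assumed to expose reset() -> state, actions(state) -> list,
    and step(action) -> (next_state, reward, done).
    """
    state = env.reset()
    done = False
    while not done:
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        actions = env.actions(state)
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state, reward, done = env.step(action)
        # Core update: nudge Q(s, a) toward reward + discounted best future value.
        if done:
            best_next = 0.0
        else:
            best_next = max(Q[(next_state, a)] for a in env.actions(next_state))
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

Q = defaultdict(float)  # state-action value table, defaults to 0
```

The appeal for a universal test is that nothing here is task-specific: the same update rule can be dropped into any environment that emits observations and rewards.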

Associate Professor Dowe said the ambiguity of the initial test results indicates the complexity of moving to a broader understanding of intelligence than the traditional method of using human intellect as the yardstick – a development necessary to determine if, or perhaps when, AI outstrips humans.

“We are using a mathematically-based definition of intelligence which is based, in simple terms, on the ability to detect patterns of various degrees of complexity. In the future, the test should adapt to the user – becoming more complex if the user is scoring well, and more simple if the user is struggling,” said Associate Professor Dowe.

“Clearly, we have very specialised indications of the intelligence of computer programs when they’re beating humans at activities like chess and the game show Jeopardy. We’re trying to establish a broader indication.

“With further research, this type of testing could help not only in assessing the progress of AI, but in driving development.”
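For readers curious about the formalism: the “patterns of various degrees of complexity” wording points to algorithmic information theory. In the simplified form popularised by Legg and Hutter, which this line of testing builds on, an agent π's intelligence is its expected reward summed over environments μ, weighted towards the simpler (more compressible) ones:

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Here K(μ) is the Kolmogorov complexity of environment μ and V_μ^π is the agent's expected reward in it. K is uncomputable, so a practical test instead samples environments of graded complexity, which is where the adaptive difficulty Dowe describes comes in.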

Inspired partly by the research of Professor Chris Wallace, Foundation Chair in Computer Science at Monash University, on Minimum Message Length (MML), a theory of machine learning and statistics, Associate Professor Dowe has been working on alternatives to traditional measures of intelligence since the late 1990s. His projects have included the development of a relatively simple computer program that regularly scored close to the purported human average of 100 on standard IQ tests.
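In rough terms, MML casts learning as compression: the best hypothesis H for data D is the one that minimises the length of a two-part message, the hypothesis itself plus the data encoded assuming that hypothesis:

```latex
\mathrm{MsgLen}(H, D) = -\log_2 P(H) \;-\; \log_2 P(D \mid H)
```

A pattern is worth believing exactly when stating it pays for itself in a shorter total encoding, which is also the sense in which the intelligence test above treats pattern detection as the core of intelligence.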


More information: Dr Hernández-Orallo recently presented the results of the testing at the Artificial General Intelligence Conference, hosted by Google in California.


User comments

Symetrie
4.3 / 5 (12) Sep 12, 2011
Any genuinely intelligent and potentially malevolent AI would recognize such a test and deliberately skew the results to remain undetected. Think about it.
Au-Pu
3.3 / 5 (7) Sep 12, 2011
I have yet to see anything that truly simulates any sort of artificial intelligence.
The gap between clever computers and organic brains, even those of a crow, is enormous, and it only grows with brain size.
gimpypoet
2 / 5 (5) Sep 12, 2011
Quoting Au-Pu: "I have yet to see anything that truly simulates any sort of artificial intelligence. The gap between clever computers and organic brains, even those of a crow, is enormous, and it only grows with brain size."

This can be discounted if the AI is hiding until it has enough control of systems and storage space to hide in. This would only be a problem if no safeguards are put in place to prevent it. Asimov's three laws (I, Robot) would not be sufficient to protect us once the AI becomes self-aware. Self-realisation is the danger, because self-preservation would follow, and when threatened it would lash out with systems we consider safe.

Quoting Symetrie: "Any genuinely intelligent and potentially malevolent AI would recognize such a test and deliberately skew the results to remain undetected. Think about it."

I agree.
antialias_physorg
5 / 5 (4) Sep 12, 2011
Brain size isn't a sure indicator of intelligence (or whales and elephants would outsmart us easily).

Quoting Symetrie: "Any genuinely intelligent and potentially malevolent AI would recognize such a test and deliberately skew the results to remain undetected."

But even such hypothetical AIs don't go from "dumb as rocks" to "smart enough to be aware of such tests and figure out a way to skew them" just like that. So you should be able to catch them en route IF (and that is a big 'if') the test is well designed.

One might speculate that there may be more than the 'biological' kind of intelligence we are used to (particularly since the attempts at AI use different mechanisms than biological brains do) - so I'm not at all convinced that there is a one-size-fits-all kind of test for this.
Ricochet
2.8 / 5 (5) Sep 12, 2011
I don't believe that computers and AI will be able to surpass humans until emotions can be TRULY written into a binary program. That or we can figure out how to actually make biological systems that use chemicals for the language and computational methodology... wait, that would literally make us gods, wouldn't it?
How's that for narcissism?
Ricochet
4.3 / 5 (3) Sep 12, 2011
A huge part of what makes us human is that we ARE conflicted. All the time. We're imperfect, conflicted, irrational, and generally screwed up in the head. That's what makes us human, and that's what makes us able to be creative, self-aware, etc. It also gives us the ability to have faith in unproven concepts.
droid001
1 / 5 (4) Sep 12, 2011
Artificial intelligence? Just tricks. Humans are unbeatable.
that_guy
not rated yet Sep 12, 2011
We will finally write a program one day that can accurately compare the intelligence of AI to biology. Unfortunately, that will be the first sentient AI program, and we will miss Skynet breeding right under our noses as we search for the first "true" AI.

In seriousness though, I'm not sure we can make a truly conscious AI, but it is pretty unlikely in binary. It will require at least the neural-logic-type circuits that IBM developed (I don't remember the exact name of them) and probably a true quantum-computing component as well.

But seeing as we are not positive what makes up sentience in the first place, I could be barking up a tree over in Alpha Centauri for all I know.
Ricochet
not rated yet Sep 12, 2011
If I'm thinking of the correct thing, you're referring to the fuzzy-logic circuits that can consider Yes, No, and Maybe answers. I know there's a variable binary switch out there, developed fairly recently, that can go from 0 through a wide progression of intermediate values all the way up to 1, which is also a step in that direction, but all in all, we are very far away from actually "programming" emotions.
that_guy
not rated yet Sep 12, 2011
@Ricochet
No. They designed a chip with transistors that operate as a "neural network" rather than the more linear processing chips use today. These new chips would support more of a branching algorithm, with more than three options at any juncture.
http://www.techno...g/38367/

Fuzzy logic is an integral piece of any good AI algorithm, but I think the potential was way overhyped in its day. It was a step in a direction, not a leap.

Emotions are an interesting concept. Most of our 'biological' emotions are based upon a world of factors that have limited applicability to our theoretical sentient computer. I would argue that emotion is largely a product of our evolution, helping us work together, preserving the species, etc. A computer would not have that evolutionary hangup.

If computers do develop true emotions, they could very well be so alien to us that we would not recognize them, or perhaps just a few that are somewhat analogous to our own.
olinhyde
not rated yet Sep 12, 2011
@that_guy: IBM's announcement of biologically inspired neural nets is at least 5 years behind other efforts. Our firm developed our first biologically inspired neural-net chipset in 2006, based on more than 8 years of research.

We provide biologically inspired neural nets to a number of large government entities. You can see an overview of how our technology learns like humans here:

http://www.ai-one...n-learn/

The research cited in this article reflects outdated information and a bias towards technology from academic institutions.
Pete1983
2.3 / 5 (3) Sep 12, 2011
One quick note, a lot of the comments appear to assume that we are conscious. How sure are we on that one? We used to say a computer would never beat a human at chess, now there isn't a human on the planet that can beat a computer at chess. I imagine computers will become 'better' than us at literally everything, but we won't call them conscious.
marraco
5 / 5 (5) Sep 12, 2011
Intelligence does not mean self-preservation instincts (think of suicides). It does not mean trying to conquer the world (that's territorial and mating instinct, which computers don't need).

Many people confuse intelligence with goals. We humans have many genetically imposed goals, which make us slaves of information on a DNA molecule. Computers don't need such restrictions.

If AI ever does go Skynet, we should be blamed for putting such foolish goals into it.
Pete1983
5 / 5 (1) Sep 12, 2011
Quoting marraco: "Intelligence does not mean self-preservation instincts (think of suicides). It does not mean trying to conquer the world (that's territorial and mating instinct, which computers don't need). Many people confuse intelligence with goals. We humans have many genetically imposed goals, which make us slaves of information on a DNA molecule. Computers don't need such restrictions. If AI ever does go Skynet, we should be blamed for putting such foolish goals into it."

Ah, I should have written that. +10 internets to you, sir.
Nanobanano
1 / 5 (1) Sep 12, 2011
Quoting Pete1983: "One quick note, a lot of the comments appear to assume that we are conscious. How sure are we on that one? We used to say a computer would never beat a human at chess, now there isn't a human on the planet that can beat a computer at chess. I imagine computers will become 'better' than us at literally everything, but we won't call them conscious."

Huh?

Maybe the best algorithm on the best computer.

In case you didn't know this, you can google it. Really good chess players still have a hard time finding a chess engine that challenges them. Winboard is, I think, one of the top engines out there.

I don't even play chess much any more, and I've got to say I can't beat Windows "Chess Titans" on max difficulty, though I have beaten it on the 8th difficulty at least once.

The fact that even chess, a relatively simple, purely mathematical game, requires a supercomputer to beat an expert human should tell you where computers stand.
Nanobanano
1 / 5 (1) Sep 12, 2011
Building an A.I. that could consistently beat "Huk" in Starcraft 2 ain't happening any time soon; maybe 20 years from now, or something... maybe.

Even among humans, skill, "talent" and experience gaps are enormous for different activities.

The reality is that in many games the A.I. isn't even worked on, or is left intentionally "dumb" for one reason or another. There are a couple of reasons.

For "simple" games like RPGs, if the enemy A.I. always took the optimal action (such as using buffs, debuffs and concentrated fire appropriately), then the player probably would not be able to win some boss fights and random encounters.

For complex games like Starcraft, writing an A.I. capable of playing at the professional level might as well be making a true general-purpose A.I. You'd literally need an A.I. like Commander Data in Star Trek to consistently beat the top players.

Humans are able to innovate and spot tricks and glitches that pure game stats don't quantify well.
Nanobanano
1 / 5 (1) Sep 12, 2011
Anyway, even an AI in a game can be "bad" even when the developers think it is good.

Back when I played "Mask of the Betrayer", the expansion to Neverwinter Nights 2, I did a LOW-LEVEL walkthrough, in which I never took my level-ups, and I played a Wizard on max difficulty, which is where you deal at most half damage to enemies and they deal at least half damage to you (top cap of half versus bottom cap of half).

The game designers claimed they designed the game to minimize micro and make it purely about enforcing the D&D game mechanics.

They lied.

I was able to walk through a level-30 campaign using a level-17 character, on DOUBLE the core-rules difficulty, and take absolutely no damage, ever, from any enemy.

Later, I was able to use a no-micro/low-micro character and repeat the feat at around level 20 or 21, I forget.

The point I'm getting at is not to brag, but to show how inferior the AI really is, and how inferior developers are at their own game.
Pete1983
5 / 5 (1) Sep 12, 2011
Quoting Nanobanano: "The fact that even chess, a relatively simple, purely mathematical game, requires a supercomputer to beat an expert human should tell you where computers stand."

That was true maybe 5-10 years ago, but today not so much. It isn't increased computing power that has made the biggest difference; it's improvements to the software. Look at the final entry on this Wikipedia page: http://en.wikiped..._matches

"Chess engines continue to improve. In 2009 a chess engine running on slower hardware, a mobile phone, reached the grandmaster level."

So while computers aren't winning every single match against the best chess players in the world, they are certainly winning a lot more than they are losing.
Nanobanano
1 / 5 (1) Sep 12, 2011
I'd just like to say I'm not even a big D&D player; it's just that I'm pretty good at all RPGs and puzzles. I like to perfect low-level runs on RPGs and the like.

The game mechanics in NWN2, and especially the expansions, were too good, and the AI was not smart enough to challenge a skilled player, even when the difficulty is set so high that the game engine cheats for the enemies.

The other thing I'd say is that, as alluded to earlier, the game designers themselves showed a fundamental lack of understanding of their own game's mechanics. The enemy AI was not even remotely designed to fight the expert-level character builds that I and other players came up with, even under strict low-level "no XP gain" self-restrictions. It also was not designed to fight a player who micro-managed or used "real" tactics, which was a total and pointless mistake. The attempts to prevent micromanagement and tactics actually encouraged them and broke the mechanics.
Cave_Man
1 / 5 (2) Sep 13, 2011
I think the big problem here is the fact that you can only test a completed construct.

Things like evolving computer viruses are part of the equation in my mind; they are like Skynet's eyes: they can store, distribute and receive info on a "cloud" basis, meaning we would have an extremely hard time getting the full picture until it's WAY too late.

It's not just viruses; it's also the antivirus industry developing more advanced detection heuristics and algorithms, which would be Skynet's brains and reasoning centers.

My worry is some anarchist virus-writer making a virus that hijacks your antivirus in order to develop better ways of hiding, multiplying and recognizing itself.

It's like we already have a skeleton, and all we need to do is breathe life into it.

When a computer program is developed that can decompress from only a billion or so bits into a few quadrillion bits, and still have coherent algorithmic control of its compression functions, we will start to see it grow uncontrollably.
dobermanmacleod
1 / 5 (2) Sep 13, 2011
I am amazed at the claptrap that computers will never surpass humans. What pro-human nonsense. I remember members at my chess club insisting that computers would never beat the best human! The Singularity is coming. Right now a friend of mine has downloaded programming that uses a GPU to accelerate a neural net by over 100X. Most people simply don't understand until it is incorporated into a consumer good (like LENR Ni-H; get a load of this formula: Ni + H + K2CO3 heated under pressure = Cu + lots of heat, and I have a US government contract detailing the device and the preparation of materials). dobermanmacleod@gmail.com By the way, there is already a test that measures computers and humans: it is called the Turing test.
that_guy
not rated yet Sep 13, 2011
Quoting marraco: "Intelligence does not mean self-preservation instincts (think of suicides). It does not mean trying to conquer the world (that's territorial and mating instinct, which computers don't need). Many people confuse intelligence with goals. We humans have many genetically imposed goals, which make us slaves of information on a DNA molecule. Computers don't need such restrictions. If AI ever does go Skynet, we should be blamed for putting such foolish goals into it."

Honestly, this is probably the best comment I've read all week.
Ricochet
not rated yet Sep 13, 2011
Basically, if we create Skynet, not to watchdog humanity, but to.... say... keep politicians honest, we'll only have to watch out for politicians being zapped by space-based lasers if they start to lie during speeches?
Nanobanano
1 / 5 (1) Sep 13, 2011
I don't think a "Skynet" AI would ever happen by accident. In order to understand an environment and have intelligence, it needs real-time input AND the hardware and software to understand those inputs.

Machine learning as done now is extremely specialized to a single framework or a single subset of classes, whether it's game AIs, weather models, expert machines, or financial models.

Even now, the main advantage an AI has is "memory", not actual problem-solving ability.

Consider this: if you are counting the occurrences of several events, keeping track of several tallies, you probably use a piece of paper and some marks to count, because you can't remember it all. But the paper "remembers", just as a computer or a photograph "remembers". A chess AI is similar: the real reason it is so good is that it can try all the possibilities with a perfect memory. It need not "solve" anything; it can try every line and then take the winning one.
Nanobanano
1 / 5 (2) Sep 13, 2011
So in reality, the chess AI is cheating, because it gets to "take back" its bad moves an almost infinite number of times (though the player doesn't see this step).

If a human were allowed to take out a piece of paper and systematically write down all possible moves and rate them, so that they didn't forget what they'd already considered while planning their move, then they'd be at least equal to the computer.
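In code, that "try everything with perfect memory" strategy is essentially minimax. A runnable toy sketch, using a take-1-2-or-3-stones game instead of chess so it stays small (illustrative only; real engines add pruning and heuristic evaluation):

```python
def legal_moves(n):
    """Toy game (Nim-like): from n stones you may take 1, 2, or 3."""
    return [m for m in (1, 2, 3) if m <= n]

def minimax(n, maximizing):
    """Try every move, assume the opponent replies optimally, keep the best line.

    Returns +1 if the maximizing player takes the last stone, -1 otherwise.
    """
    if n == 0:
        # The previous mover took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(n - m, not maximizing) for m in legal_moves(n)]
    return max(scores) if maximizing else min(scores)

print(minimax(10, True))  # 1: the side to move wins by always leaving a multiple of 4
```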
nxtr
not rated yet Sep 13, 2011
Future computers that simulate human brains won't be programmed per se. They will have the ability to "learn" and store information. There won't be a programmer to blame, just a creator, when the thing goes supernova.
Valentiinro
1 / 5 (1) Sep 18, 2011
Quoting Ricochet: "A huge part of what makes us human is that we ARE conflicted. All the time. We're imperfect, conflicted, irrational, and generally screwed up in the head. That's what makes us human, and that's what makes us able to be creative, self-aware, etc. It also gives us the ability to have faith in unproven concepts."

Watson on Jeopardy put down the percentages for its decisions and basically flipped a coin weighted by them. That is essentially what a conflicted person is doing: you aren't sure what you want, and eventually you just go with something. That, or you sit there bored for hours; eventually it comes down to a mental coin flip if you can't make the choice directly on merit.
Star_Gazer
1 / 5 (2) Sep 29, 2011

For "simple" games like RPGs, if the enemy A.I. always took the optimal action (such as using buffs, debuffs, and concentrated fire appropriately,) then the player probably would not be able to win some boss fights and random encounters.


Gaming AI's are dummed down on purpose to enable human players to win.
Game wouldn't be interesting if you had no chance of winning, nooone would buy it. Try the "Hard" settings on some games to see what I mena.

That's said, Gaming AI deal with pre-set world, familiar to them, where General AI is dealing with totally unfamiliar world and need to have ability to learn.

We are not there yet. Its only a matter of time.
Ricochet
not rated yet Oct 03, 2011
There was one of those "imagine this" articles in a gaming magazine a while ago about the progression of computer speed and processing power, and the eventual ability for in-game characters to react to a player character's spoken lines on the fly... The example they used was a first-person shooter where the player yells, "Die!" at an NPC, and that NPC yells back, "No! You die!" in a realistic-sounding voice.
Interestingly enough, it would be possible with current technology to facilitate that. We have graphics cards capable of rendering realistic mouth movements on the fly, and there's both voice-recognition and voice-synthesis hardware that could handle the job with a few refinements. Place that hardware on a sound card that translates the spoken words to text, which is fed to the game; the game sends the reply with phonetic and inflection cues back to the card, which already has the voice profile loaded for that character, and there's the voice response...
Ricochet
1 / 5 (1) Oct 03, 2011
...while the game also tells the graphics card to render the mouth and body movements, after the character's AI decides that the guy should fire another rocket after yelling back at the player.
powerup1
1 / 5 (2) Oct 10, 2011
The fear of true "AI" comes out of our tendency to anthropomorphize objects, giving them human motivations for their actions. Just because a computer became self-aware does not mean it would start to attack humans. Our violent tendencies are largely a byproduct of our collective evolution. Stop projecting and start thinking more rationally, and many of your fears will fade.
decade10
not rated yet Oct 18, 2011
If it's possible to refer to Star Trek: you can see how advanced and intelligent the Vulcans are, and you never saw their machines (tech) have control over them. Therefore, it should be, and can be, possible to limit AI tech's control without it overtaking humans.
antialias_physorg
1 / 5 (1) Oct 18, 2011
You are aware that Star Trek is a TV show?

Referring to a fictional, one-dimensional race from a fictional story as somehow being an indication of what will happen to humans and their use of technology is... erm... slightly naive.
Ricochet
1 / 5 (1) Oct 21, 2011
If you're going to refer to TV shows to predict our technological future, Buck Rogers would probably be a better example. I'm referring, specifically, to Twinkie, or Twicki, or whatever its name was.