Professor proposes alternative to 'Turing Test'

Mark Riedl, Associate Professor, School of Interactive Computing

(Phys.org) —A Georgia Tech professor is offering an alternative to the celebrated "Turing Test" to determine whether a machine or computer program exhibits human-level intelligence. The Turing Test - originally called the Imitation Game - was proposed by computing pioneer Alan Turing in 1950. In practice, some applications of the test require a machine to engage in dialogue and convince a human judge that it is an actual person.

Creating certain types of art also requires intelligence, observed Mark Riedl, an associate professor in the School of Interactive Computing at Georgia Tech, prompting him to consider whether that might lead to a better gauge of whether a machine can replicate thought.

"It's important to note that Turing never meant for his test to be the official benchmark as to whether a machine can actually think like a human," Riedl said. "And yet it has become one, and it has proven to be a weak measure because it relies on deception. This proposal suggests that a better measure would be a test that asks an artificial agent to create an artifact requiring a wide range of human-level intelligent capabilities."

To that end, Riedl has created the Lovelace 2.0 Test of Artificial Creativity and Intelligence.

For the test, the artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence and the artifact meets certain creative constraints given by a human evaluator. Further, the human evaluator must determine that the object is a valid representative of the creative subset and that it meets the criteria. The created artifact need only meet these criteria; it does not need to have any aesthetic value. Finally, a human referee must determine that the combination of the subset and criteria is not an impossible standard.
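The pass/fail logic described above can be sketched as a short program. This is a minimal illustration only, not Riedl's formal definition: the function name and the three callables standing in for the human evaluator and referee are hypothetical.

```python
def lovelace_2_0_passes(artifact, genre, constraints,
                        is_valid_member, meets_constraint,
                        combination_is_feasible):
    """One round of the Lovelace 2.0 pass/fail logic (illustrative sketch).

    The three callables stand in for the human referee and evaluator.
    """
    # The referee first confirms the genre/constraint combination
    # is not an impossible standard.
    if not combination_is_feasible(genre, constraints):
        return False
    # The evaluator checks the artifact is a valid member of the genre...
    if not is_valid_member(artifact, genre):
        return False
    # ...and that it meets every stated constraint. Aesthetic value
    # is deliberately not judged.
    return all(meets_constraint(artifact, c) for c in constraints)


# Toy example: "write a short story that mentions a cat and a storm".
story = "The cat hid from the storm under the porch."
result = lovelace_2_0_passes(
    story, "short story", ["cat", "storm"],
    is_valid_member=lambda a, g: isinstance(a, str) and len(a) > 0,
    meets_constraint=lambda a, c: c in a,
    combination_is_feasible=lambda g, cs: len(cs) > 0,
)
print(result)  # True
```

The toy lambdas reduce the human judges to trivial string checks; in the actual test, those judgments are made by people, which is the point of the proposal.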

The Lovelace 2.0 Test stems from the original Lovelace Test as proposed by Bringsjord, Bello and Ferrucci in 2001. The original test required that an artificial agent produce a creative item in such a way that the agent's designer cannot explain how it developed the item. The item thus had to be valuable, novel and surprising.

Riedl contends that the original Lovelace Test does not establish clear or measurable parameters. Lovelace 2.0, however, enables the evaluator to work with defined constraints without making value judgments such as whether the created object is surprising.

Riedl's paper will be presented at Beyond the Turing Test, an Association for the Advancement of Artificial Intelligence (AAAI) workshop to be held January 25 - 29, 2015, in Austin, Texas.



Citation: Professor proposes alternative to 'Turing Test' (2014, November 19) retrieved 15 September 2019 from https://phys.org/news/2014-11-professor-alternative-turing.html

User comments

Nov 19, 2014
As dumb as dumb can be. The one test is for the machine to generate its own questions. What tells us people are intelligent? That's easy: we started philosophizing. Animals engage in meeting their basic needs. The next step beyond being need-driven is that animals play on their own. The step after that is humans, on their own, asking the meaning of life. We began to inquire about our lives, how long we will live, and so on. On any task that humans set for it, the machine isn't intelligent; it is need-driven. In order to be intelligent, a machine needs to be independent.

Nov 20, 2014
I disagree that the Turing Test is "a weak measure because it relies on deception." My understanding is that a questioner can pose the machine/person as many questions - of which the machine, of course, has no prior knowledge - as he/she wishes, and must determine from the replies whether the respondent is intelligent.

As far as I am concerned, this is far more accurate than any "arty", "creative" test, where the criteria to be satisfied are EXTREMELY subjective. As an Engineer, I consider much "modern" art (and architecture) which commands high prices to be "The king's new clothes", so enough said about that type of test, I think.
