Professor Finally Publishes Controversial Brain Theory

Nov 19, 2008 By Lisa Zyga feature
The human brain. A new brain model in which some parts control other parts, developed by Professor Asim Roy, could overcome some of the limitations faced by the more conventional connectionist brain model, and possibly open the doors to autonomous learning systems. Image credit: SW Ranson.

(PhysOrg.com) -- In the late '90s, Asim Roy, a professor of information systems at Arizona State University, began to write a paper on a new brain theory. Now, 10 years later and after several rejections and resubmissions, the paper “Connectionism, Controllers, and a Brain Theory” has finally been published in the November issue of IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans.

However, Roy’s controversial ideas on how the brain works and learns probably won’t immediately win over many of his colleagues, who have spent decades teaching robots and artificial intelligence (AI) systems how to think using the classic connectionist theory of the brain. Connectionists propose that the brain consists of an interacting network of neurons and cells, and that it solves problems based on how these components are connected. In this theory, there are no separate controllers for higher level brain functions, but all control is local and distributed fairly equally among all the parts.

In his paper, Roy argues for a controller theory of the brain. In this view, some parts of the brain control other parts, making it a hierarchical system. In the controller theory, which fits with the so-called computational theory, the brain learns a large number of rules and applies them in a top-down processing method. IBM’s Deep Blue computer, for example, which famously defeated world chess champion Garry Kasparov in 1997, operated on countless rules entered by its programmers.
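The top-down, rule-based style of processing described here can be sketched in a few lines. This is an illustrative toy, not Roy’s model or Deep Blue’s actual code: the rules, conditions, and action names are all invented for the example.

```python
# A toy rule-based, top-down controller: a fixed hierarchy of hand-written
# rules is consulted in priority order, mirroring the "learn rules and apply
# them top-down" idea described above. All rule names here are hypothetical.

def choose_chess_move(position):
    """Return an action by scanning a fixed rule hierarchy, top-down."""
    rules = [
        # (condition, action) pairs, highest priority first
        (lambda p: p.get("in_check"),          "escape_check"),
        (lambda p: p.get("can_capture_queen"), "capture_queen"),
        (lambda p: p.get("center_open"),       "occupy_center"),
    ]
    for condition, action in rules:
        if condition(position):
            return action
    return "develop_piece"  # default when no rule fires

print(choose_chess_move({"in_check": True}))     # escape_check
print(choose_chess_move({"center_open": True}))  # occupy_center
```

The controller is the rule hierarchy itself: higher-priority rules override lower ones, which is exactly the kind of asymmetric, hierarchical control the connectionist view rejects.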

Despite the success of rule-based AI in chess, no AI system has come close to learning and interacting with the world at the human level, using either the connectionist or the computational approach. And although the human brain may not be the best model for every AI system, a machine meant to be human-like should, by its very nature, be patterned after the human brain.

Brains without external babysitters

In his paper, Roy uses a logical argument and neurological evidence to show that the connectionist theory is itself controller-based. He explains that some of the simplest connectionist systems use controllers to execute operations and, since more complex connectionist systems are built from simpler ones, these too use controllers. If Roy’s logic correctly describes how the brain functions, it could help AI researchers overcome some inherent limitations of connectionist algorithms.

“Connectionism can never create autonomous learning machines, and that’s where its flaw is,” Roy told PhysOrg.com. “Connectionism requires human babysitting of their learning algorithms, and that’s not very brain-like. We don’t guide and control the learning inside our head. Wish we could tweak our brain from outside, but we can’t.”

In his argument, Roy uses examples of a human using a TV remote control or driving a car to demonstrate a general controller-based system. In these systems, the human is the controller, whether changing the TV channels or accelerating the vehicle, while the TV and car are the subservient systems.

In response, connectionists have argued that such systems are not controller-based, but connectionist – or, more specifically, that these are feedback systems, where the components are codependent on each other. In the examples, the TV screen displays the show on that channel, which the human sees and decides whether or not to change the channel again. Or, the car’s speedometer registers 25 mph, which the driver sees and decides whether to accelerate or slow down. This feedback is essential for the human to act, connectionists argue, making the notion of a single controller in the system meaningless.

However, Roy responds that the controller doesn’t need feedback to control the TV or car. The human can act completely arbitrarily, without any feedback, such as by closing his eyes, and still continue to change channels and press the accelerator. The key, Roy emphasizes, is that the controller has the ability to act in an arbitrary mode.
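The distinction Roy is drawing is essentially the control-theory distinction between closed-loop (feedback) and open-loop control. A minimal sketch, with invented numbers, makes the point: the same controller can drive the car with the speedometer in the loop, or with "eyes closed," ignoring it entirely.

```python
# Illustrative sketch of Roy's point: a controller can drive a subservient
# system with or without feedback. The Car class, gains, and targets are
# invented for illustration.

class Car:
    def __init__(self):
        self.speed = 0.0

    def press_accelerator(self, amount):
        self.speed += amount

def closed_loop_driver(car, target=25.0):
    """Feedback control: the driver watches the speedometer each step."""
    for _ in range(10):
        error = target - car.speed
        car.press_accelerator(0.5 * error)  # proportional correction
    return car.speed

def open_loop_driver(car):
    """No feedback: 'eyes closed' -- the driver still acts, arbitrarily."""
    for _ in range(10):
        car.press_accelerator(3.0)  # fixed input, ignores the speedometer
    return car.speed

print(closed_loop_driver(Car()))  # converges toward 25.0
print(open_loop_driver(Car()))    # 30.0, regardless of any target
```

Both drivers control the car; only one uses feedback. That the open-loop driver still functions is Roy’s point: feedback is useful to a controller, but not what makes it a controller.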

Self-supervision

He then examines a simple connectionist learning method, the back-propagation algorithm. This method consists of an interconnected network (of neurons, for example) but also uses an external supervisor whenever the network makes an error. The supervisor determines which neurons caused the error, and these neurons then adjust their connection weights in an attempt to reduce the system error and come closer to the desired output.
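A minimal sketch of this setup, with the "external supervisor" made explicit, looks as follows. This is a toy single-neuron example, not Roy’s formulation: the network only computes a forward pass, while a separate supervisor object measures the error and dictates the weight adjustments.

```python
# Back-propagation with the supervisor separated out: the Network is the
# subservient system, the Supervisor is the controller that adjusts it.
# Class names and the learning rate are illustrative choices.

import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class Network:
    """The subservient system: it only maps inputs to an output."""
    def __init__(self, n_inputs):
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]

    def forward(self, inputs):
        return sigmoid(sum(w * x for w, x in zip(self.weights, inputs)))

class Supervisor:
    """The controller: it observes errors and rewrites the network's weights."""
    def __init__(self, learning_rate=0.5):
        self.lr = learning_rate

    def correct(self, net, inputs, target):
        out = net.forward(inputs)
        error = target - out
        delta = error * out * (1.0 - out)  # gradient through the sigmoid
        for i, x in enumerate(inputs):
            net.weights[i] += self.lr * delta * x
        return error

# Train the network to output 1 for the input (1, 1): the supervisor,
# not the network, decides how the weights change.
random.seed(0)
net, sup = Network(2), Supervisor()
for _ in range(2000):
    sup.correct(net, [1.0, 1.0], 1.0)
print(net.forward([1.0, 1.0]) > 0.9)  # True after training
```

Standard presentations fold the supervisor’s arithmetic into "the algorithm"; writing it as a separate object is exactly Roy’s reading, that one component sends instructions to the others.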

In this algorithm, connectionists see the supervisory abilities as distributed throughout the entire learning system, since the system uses a feed-forward approach to respond to an error in a pre-defined way. But Roy argues that there is a distinct supervisor that has the ability to act in an arbitrary manner, completely neglecting the types of errors the network generates. Therefore, he sees the supervisor as the controller.

Roy acknowledges that only neuroscience – not information science – can determine if connectionist algorithms such as the back-propagation algorithm actually exist in the brain. But if they do, he says that they must rely on controllers.

He highlights evidence from a variety of neuroscience studies that support the existence of controllers in the brain. For example, past research has proposed the existence of control centers of the brain, such as the prefrontal portion of the cerebral cortex. Studies of dopamine and other neural transmitters, as well as the presence of “neurogenesis” (cell birth) in adults, are also compatible with a controller-based brain theory.

Roy clarifies that there is not necessarily a single executive controller in the brain, but rather that multiple distributed controllers could be responsible for different subsystems of the brain. After all, he says, if the connectionist theory is correct and the brain is a network of changing connections, there must be some control mechanism to determine how all those connections are made.

“The controller theory actually takes down a big chunk of connectionism, although not all of it,” Roy said. “The parallel computing idea, within a network of neurons, is still valid. But the controller theory allows one to design and train neural networks in a completely different way than connectionism. So it almost becomes a new science, and we are looking forward to a new generation of human-like learning algorithms.”

Resistance to a new science

Roy’s theory undermines the roots of connectionism, and that’s why his ideas have experienced a tremendous amount of resistance from the cognitive science community. For the past 15 years, Roy has engaged researchers in public debates, in which it’s usually him arguing against a dozen or so connectionist researchers. Roy says he wasn’t surprised at the resistance, though.

“I was attempting to take down their whole body of science,” he explained. “So I would probably have behaved the same way if I were in their shoes.”

One reason it was so difficult for researchers to accept his theory, he explained, is that in cognitive science terms are not always defined as strictly as in other sciences. He says he ran into many circular arguments with reviewers of his paper, which is why much of it is dedicated to defining what a controller is.

“It’s not that connectionism does not use controllers,” Roy says. “They do, but they use them at the level of individual neurons and call the whole thing as distributed control. And that kind of control is fine with them. They visualize each neuron as ‘deciding’ on its own how to adjust connection strengths during learning, but have a hard time accepting the notion that some neurons in the brain could be sending instructions to some other neurons and tell them what to do. That is, until I showed them that that is exactly what they are doing in their systems, and that there is also growing neuroscience evidence for signals coming from elsewhere in the brain.”

However, Roy’s controller theory may not be quite as at odds with some connectionists’ perspectives as he supposes. Psychology professor James McClelland of Stanford University, whose early work Roy cites in his paper, thinks that modern connectionist thought allows for some controlling parts of the brain, though not to the extent of Roy’s model.

“Roy appears to be using a quote from our 1986 book [see below] to mischaracterize our position,” McClelland said. “Work shortly after the publication of our 1986 book began to address the issue of cognitive control. I still favor the view that control is an emergent function of neural populations distributed over several brain areas, but there is no doubt that some parts of the system (most notably, left lateral inferior prefrontal cortex) play a special role in control. Roy's position appears more modular than ours, but I don't think there's anyone who disputes the idea that there are mechanisms that exert some degree of control over cognition.”

Neuroscientist Walter J. Freeman of the University of California at Berkeley also said that he agreed with the notion that there are controllers or guidance systems within the brain. Freeman, who has taught brain sciences at Berkeley since 1959, has developed a model of the brain’s intentional system. The model involves a control loop that predicts future states, future sensory input and future plans of action. The spatiotemporal pattern that implements this plan is transmitted by cortical neurons into the brain stem and spinal cord, using feedback from various parts of the brain. So guidance, control and monitoring of actions play an important part in Freeman’s model.

Autonomous learning machines

No matter exactly where or what the brain’s controllers are, Roy hopes his theory will enable research on new kinds of learning algorithms. Currently, restrictions such as local and memoryless learning have limited AI designers, but these restrictions derive directly from the idea that control is local rather than high-level. A controller-based theory could possibly lead to truly autonomous learning systems, and to a next generation of intelligent robots.

“The controller theory gives us much more freedom in creating brain-like learning systems,” he said. “The science is currently stuck, and we have not made any significant progress towards creating robots that can learn on their own like humans.”

The sentiment that the “science is stuck” is becoming common among AI researchers. In July 2007, the National Science Foundation (NSF) hosted a workshop on the “Future Challenges for the Science and Engineering of Learning.” The NSF’s summary of the “Open Questions in Both Biological and Machine Learning” [see below] from the workshop emphasizes the limitations of current approaches to machine learning, especially compared with biological learners’ ability to learn autonomously under their own self-supervision:

“Virtually all current approaches to machine learning typically require a human supervisor to design the learning architecture, select the training examples, design the form of the representation of the training examples, choose the learning algorithm, set the learning parameters, decide when to stop learning, and choose the way in which the performance of the learning algorithm is evaluated. This strong dependence on human supervision is greatly retarding the development and ubiquitous deployment of autonomous artificial learning systems. Although we are beginning to understand some of the learning systems used by brains, many aspects of autonomous learning have not yet been identified.”

Roy sees the NSF’s call for a new science as an open door for a new theory, and he plans to work hard to ensure that his colleagues realize the potential of the controller model. Next April, he will present a four-hour workshop on autonomous machine learning, having been invited by the Program Committee of the International Joint Conference on Neural Networks (IJCNN).

“At this time, the plan is to show this community that it is feasible to construct machines that can learn on their own like humans,” he said. “I did a similar workshop last week at ANNIE (Artificial Neural Networks in Engineering), and I had people come up to me and say that they would like to automate their learning algorithms in a similar way. So, at this point, the focus is to build a worldwide team of researchers to collaborate on this new science and move aggressively towards building human-like robots (software and hardware) that can learn on their own. The applications of these systems would be limited only by imagination.”

More information: Roy, Asim. “Connectionism, Controllers, and a Brain Theory.” IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, Vol. 38, No. 6, November 2008.

Rumelhart, D. E. and J. L. McClelland, Eds., Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1. Cambridge, MA: MIT Press, 1986, pp. 318–362.

NSF’s summary of the “Open Questions in Both Biological and Machine Learning” www.cnl.salk.edu/Media/NSFWorkshopReport.v4.pdf

ANNIE Conference Web site annie.mst.edu/annie_2008/ANNIE2008.html

Copyright 2008 PhysOrg.com.
All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.


User comments : 23


Sonhouse
3.3 / 5 (7) Nov 19, 2008
In 15 years he has not demonstrated a superior learning system? Shouldn't that come first THEN the big paper? Or is it his position that the problem is too difficult for one person or team to do?
Velanarris
2.7 / 5 (9) Nov 19, 2008
I think Roy is going to be proven correct down the road. It would explain some of the lesser understood mechanisms of the mind like the fight or flight response.
BigTone
3.5 / 5 (4) Nov 19, 2008
I agree with this Roy that current AI techniques leave much to be desired... I'm not seeing the connection how his construct will apply to any significant improvement to the state of affairs in this field of science.

It seems he is just making these brute force algorithms more efficient - not fundamentally changing the strategy behind machine learning.
vlam67
3.8 / 5 (11) Nov 19, 2008
Partial quote:
...Now, 10 years later and after several rejections and resubmissions, the paper “Connectionism, Controllers, and a Brain Theory” has finally been published in the November issue of IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans...


This is a sordid state of affairs. I would have thought that academia valued and sought after knowledge, and was eager to evaluate the strange, the different, and ultimately, the unknown. Not so. Apparently the top dogs who have tenure, reputation and standing are a really tight-arsed lot, much tighter than a gnat's anus, and will not allow upstarts who might offer something new to upset the prevailing order and their livelihood! I think somebody said that the smartest ones on the planet are also more capable of creative back-stabbing, viciousness and vanity than the rest of us, and that has some truth to it.
They created the "peer review system" to cut the chaff...and more often, the throats of their perceived enemies!

Freedom of thought and expression of it are still an ideal on paper. The reality is, the alpha primate descendants of the species Homo Jealousus Academicus still screw you six ways to Sunday in the name and the game of knowledge!
MongHTan,PhD
1.4 / 5 (8) Nov 19, 2008
RE: Can "AI-dreaming" (theorizing) explain "wet-dreams"!?

Absolutely not! And that is the reason Why the AI specialists would never understand nor present a final theory of our Human Mind (or Brain)!

Briefly and neurophysiologically, our Brain controls and regulates both our Consciousness (via the voluntary central nervous system) and the Subconscious or the Unconscious (via the autonomous neuro-endocrino-cardiac system). And our dreams (including wet-dreams) are all modulated by our biologically-unconscious hormonal, neural system -- a "wet" autonomous system that the AI specialists would never be able to emulate, simulate, and/or duplicate by and/or through the brightest theory of their "dry" algorithmic engineering!

For more indepth arguments on this very complex issue, please see my seminal book "Gods, Genes, Conscience" Chapter 15: The Universal Theory of Mind here: http://www.amazon...?ie=UTF8&s=books&qid=1226262931&sr=1-1 (Look Inside) especially Chapter 15.4: Memory Modulation and Recall: A new hypothesis of psychic imagery, perceptivity, creativity, and reflectivity (including how to evaluate and resolve all these very complicated life-mind issues, scientifically and spiritually).

Best wishes, Mong, author "Decoding Scientism" and "Consciousness & the Subconscious" (works in progress since July 2007), "Gods, Genes, Conscience" (2006: www.iuniverse.com...95379907 ) and "Gods, Genes, Conscience: Global Dialogues Now" (since 2006: http://www2.blogg...50569778 ).
MGraser
2.7 / 5 (6) Nov 19, 2008
AI systems lack motivation, which I believe is necessary for spontaneity. As humans, we are hardwired to avoid pain and increase pleasure. How much pain or pleasure we derive is built upon complex variables, such as nerves and neural connections.

If we want an AI system to "spontaneously" learn, then it needs built-in motivation to do so - basically something it can recognize as a reward for doing so. After all, we wouldn't bother thinking about much unless it benefited us (although there does seem to be some thought that occurs, regardless of our desire to do so, which indicates some lack of control over the function).

An AI system will also seem less human-like, because if it is setup to receive the same input under the same conditions, it will have the same response. Humans each have uniqueness in how their systems are assembled, which means that they will perceive things differently. Also, we are prone to forgetfulness and cannot recall with complete clarity - this influences our future perceptions. So, unless we want to build in variability in design and imperfect access to information into these systems, they will never seem quite human.

Thoughts?
BigTone
2.5 / 5 (4) Nov 19, 2008
MGraser,

Actually, depending on how you define it... Motivation is easy to achieve for a program. Having goals and creating cause effect ruleset based systems can be done by a first year computer science student.

You are more on the right path when you discuss the differences in the interpretation of data from unique humans. It is very hard to create computer programs that come up with a new theory on how the universe is evolving based on the same unrelated data points as given to a human.

The high level extrapolations and combinations that humans create to innovate has not been artificially replicated and cannot be brute forced. As silly as it sounds, if you think of the analogy of how many monkeys randomly typing in a room and for how long would type out Einstein's theory of relativity or Shakespeare etc. Then you would have a more correct understanding of what I mean by trying to brute force innovation. The more difficult the problem you are trying to solve and the more unrelated the data sets... the less applicable any current computing technique we have can produce a meaningful result, because they can't calculate and test every permutation, which I concede, is a dramatic oversimplification of the methods used today.
x646d63
3.6 / 5 (5) Nov 19, 2008
This is a sordid state of affairs. I would have thought that the academia value and sought after knowledge, and eager to evaluate the strange, the different, and ultimately, the unknown. Not so...They created the "peer review system" to cut the chaff...and more often, the throats of their perceived enemies!


Actually, I think this demonstrates exactly what this paper is suggesting. A controller in human minds creates the bias that "generally accepted" ideas are correct. People who suggest alternatives to the "generally accepted" methods are often castigated for doing so.

Slowly, we break through "generally accepted" theories (science has tremendously aided us here) but it takes time and lots of evidence!

I don't think it's coincidence that we have concepts that are thousands of years old that don't seem to have any real validity other than mass acceptance...
BSmith
1.2 / 5 (5) Nov 19, 2008
I don't think it's coincidence that we have concepts that are thousands of years old that don't seem to have any real validity other than mass acceptance...

What is that supposed to mean? Up until that last "throw-a-way line,"x646d63 and I were in accord, but now he/she strikes me as condescending. He/She needs to consider that before something tangible can be made/brought into existence there must first be a "concept." While I have no idea who x646d63 is, and I'm sure he's a fine person, my guess is that he/she is an engineer. As a class engineers have problems with things that they cannot maniuplate (weigh, measure, stretch,hammer, burn, etc.), believing there is a mechanical explanation for everything, therefore, missing all of the really important things in life. I have two conjectures. The first is that the mind is of the brain, but not in the brain. (It's an idea I stole from Mortimer Adler.) the second is that AI, ultimately, is a dead end. While its adherents will construct devices that are more and more clever, they will never fully explain, let alone replicate, the human mind. Or were he talking about Judeo/Christian Theology?
brant
2.3 / 5 (4) Nov 19, 2008
Actually the question is whether you can have a conscious machine or is it just an extremely complex machine that mimics life.....

I suspect that man will never be able to create conscious machinery..... And I doubt that his theory is correct even though it is probably a good stepping stone. We are too young as a civilization....
superhuman
3.4 / 5 (5) Nov 19, 2008
This whole controversy seems absurd from the outside. The field is in serious trouble if they put that much weight on their pet model so as to oppose publication of another model for 10 years!

It means that the model is the only thing they achieved as opposed to any real results which could stand for themselves!
It also means the authors think their model won't be able to defend itself.

So generally it looks like all their scientific work is useless and they simply try to hold to their positions by means of politics.

Who cares what model you are using, just make a better circuit.

Besides the whole dispute looks pointless as its pretty damn obvious there are controllers in the brain, after all brain evolved in stages and the older parts were naturally positioned to control the younger ones.

A rough example - unconscious house keeping circuit has top priority, then there are primal instincts which decide general goals like procreation or self preservation, then there's circuit which governs emotions - more subtle control and allows for better cooperation with other pack members, finally there's the youngest part which grants us higher cognitive abilities like speech, resoning and complex values, this one evolved to deal with our complex social environment.

Its obvious that those circuits are not on equal footing, for example when you panic, you are not able to reason clearly.

Brain is a very complex composition of various more or less specialized modules with equally complex hierarchy.
out7x
1.3 / 5 (4) Nov 20, 2008
We already have robotic learning, and self-reproducing. That is much different from understanding how the brain works. Soon fMRI will have enough resolution to find out.
vidyunmaya
1 / 5 (11) Nov 20, 2008
SUB:GOD,CONSCIOUSNESS, BRAIN
SEARCH LINKS:COSMOLOGY VEDAS INTERLINKS
1.Resource : Reflectors,3-Tier Consciousness, Source, Fields and Flows
2Noble Cause : Human-Being, Environment, Divine Nature and Harmony
The Physical Nature wobbles between two extremes unable to compromise with Nature's Function.
How does one search for Control ? Nature Leads the way
JNANINAH TATVA DARSINAH- DIVINE KNOWLEDGE INDEXES PHILOSOPHY IN NATURE
Frames of Minds: Human Being has various frames of Minds
1.Biological Frame
2.Philosophical Frame
3.Divine Frame
4.Nature Divine Frame
5.Cosmic Divine Frame
Today Minds are in conflicts due to Horse Nebulae.
One needs to maintain Cool Senses and search for Cosmic Index
Ten Log scale LIGHT YEARS constitutes the Universe
Reproduced from my BOOK-
HEART OF THE UNIVERSE-2007
CONCENTRATION, MEDITATION AND DEDICATION ARE THE KEYS FOR PROGRESS INDEX-
All Books - CONTACT AUTHOR- Copyright © Vidyardhi Nanduri
http://www.ebooko...?Aid=241
http://www.newciv...hp/_v162
Vidyardhi Nanduri
Ant
1.4 / 5 (5) Nov 20, 2008
AAAAAH. Give me strengnth! AI Does NOT think it is simply a complex conditional system weighted toward one result or another dependant upon inputs and previous parameters. IT DOES NOT THINK. Only the sentient beings of this planet THINK. There is no such entity as true artificial inteligence, thank god, because if there was we would now not exist. certanly those of you beleive otherwise would be long gone.

x646d63
3.3 / 5 (3) Nov 20, 2008
I don't think it's coincidence that we have concepts that are thousands of years old that don't seem to have any real validity other than mass acceptance...


What is that supposed to mean? Up until that last "throw-a-way line,"x646d63 and I were in accord, but now he/she strikes me as condescending. He/She needs to consider that before something tangible can be made/brought into existence there must first be a "concept." While I have no idea who x646d63 is, and I'm sure he's a fine person, my guess is that he/she is an engineer. As a class engineers have problems with things that they cannot maniuplate (weigh, measure, stretch,hammer, burn, etc.), believing there is a mechanical explanation for everything, therefore, missing all of the really important things in life. ... Or were he talking about Judeo/Christian Theology?


My "throw-away" comment does, in fact, refer to theology and mythology, but also to any other applicable belief systems (like the Big Bang and Global Warming.)

For thousands of years human societies have created, evolved, and ultimately discarded hundreds of theologies and mythologies. Many remain, and all modern ones are evolving with science.

I'm suggesting that part of the brain (a controller, perhaps) may cause us to give special importance to existing theories and concepts rather than allowing us to carefully and objectively analyze them. This would explain why people cling to belief systems even in the face of significant contrary evidence.
Velanarris
1.3 / 5 (3) Nov 21, 2008
AAAAAH. Give me strengnth! AI Does NOT think it is simply a complex conditional system weighted toward one result or another dependant upon inputs and previous parameters. IT DOES NOT THINK. Only the sentient beings of this planet THINK. There is no such entity as true artificial inteligence, thank god, because if there was we would now not exist. certanly those of you beleive otherwise would be long gone.


Hey buddy, thought is a complex conditional system. There's nothing that differentiates an IF Then statement from a thought process other than complexity.
Pointedly
not rated yet Nov 22, 2008
The last sentence of this article states, "The applications of these systems would be limited only by imagination." My first thought is: Who's imagination...a human's or a machine's? I can foresee a time when human imagination will be surpassed by machines.
denijane
not rated yet Nov 24, 2008
Hm, on first glance, his theory makes sense. Although I would vote for a hierarchy that is a result from the levels of organisation-which would mean that it's not a micro but a macro effect. Or in other words, that in order for such system to function, there would be always some parts that would get on top. It's simply the most logical way.

It's hard to put it in words, but it makes sense keeping in mind that cells look much more versatile than we thought.
Vikstar
2 / 5 (2) Nov 26, 2008
I lol'ed until I remembered that it was actually published. The AI community has been doing this type of learning for years. He chose to compare his hypothesis to supervised learning, too bad he didn't do a proper literature review to discover that we also use unsupervised learning, reinforcement learning and evolutionary computation that better fit "his" model. To create and analogy with film, he says that black and white films stunt our ability to see colour in them (yes, it is just as rediculous as it sounds), and he proposes that we use colour in films... but he hasn't done his homework to realise that colour films already exist. I suspect his previous submission attempts were a failure since they contributed nothing unique to the body of knowledge.
techisbest
4 / 5 (1) Dec 19, 2008
The submit-for-publication, review-comment-reject, submit-for-publication, review-comment-reject, submit-for-publication, review-comment-accept, publish feedback loop is science at its best. No doubt the final, published paper has much more clarity than the originally submitted paper. This strengthens the arguments and enables the ideas to move forward with less resistance.

No doubt the ideas presented will still be challenged, but the challenges will be at a higher level.
deepsand
1.6 / 5 (7) Jan 14, 2009
AAAAAH. Give me strengnth! AI Does NOT think it is simply a complex conditional system weighted toward one result or another dependant upon inputs and previous parameters. IT DOES NOT THINK. Only the sentient beings of this planet THINK. There is no such entity as true artificial inteligence, thank god, because if there was we would now not exist. certanly those of you beleive otherwise would be long gone.


Sentience and thinking are 2 quite different things.

The etymology of the former is :

Latin sentient-, sentiens, present participle of sentire to perceive, feel

Clearly a non-thinking entity can possess the ability to perceive, including that of SELF-awareness.

Likewise, a thinking entity need not possess such perception, but only the ability to internally contemplate data.

Bren
1 / 5 (1) Mar 04, 2009
Hmm. I actually kind of see what Roy is talking about. You know, science is really just a bunch of guesses and theories... Nothing is really definite in a bunch of cases. I truly believe anything is possible. Heck if they said they'd found a way to make pigs fly- hell- bring it on.
