Google's Ray Kurzweil revs up search focus with AI vision

Jan 15, 2013 by Nancy Owano weblog

(Phys.org)—The setting: An intimate gathering at Singularity University's NASA campus in Silicon Valley. This is the place founded by Dr. Peter Diamandis and Dr. Ray Kurzweil, pursuing the idea of a new university that could "leverage the power of exponential technologies to solve humanity's grand challenges." Speaking in an interview is artificial intelligence expert and Google's new Director of Engineering, Ray Kurzweil.

Now you know this is worth visiting. His comments do not disappoint. In an interview with Singularity Hub, posted on January 10, he said he wants to build a search engine more sophisticated than ever, one that can behave as an all-knowing, learned friend. He said there could well come a time, some years from today, when the majority of questions will be answered without your actually asking. His thoughts, in brief, are about delivering a cybernetic friend.

Now that inventor Kurzweil is at Google, he is focused on helping his search giant employer develop the type of AI-powered search assistant that could be better than ever. One can easily say that Kurzweil came to the right place to work out his AI dreams. He told his interviewer, "We hope to combine my fifty years of experience in thinking about thinking with Google scale resources (in everything—engineering, computing, communications, data, users) to create truly useful AI that will make all of us smarter." Enormous stores of information from Google's database can be drawn upon for his research.

Google's access to what people read and write as mail messages or blog posts can enable this cybernetic friend to bring forth answers without the user asking. Kurzweil specializes in machine learning, and an artificial brain has the advantage of understanding ideas and concepts. Presently, search engines have algorithms that select key words, but natural language understanding can go to a different level. "It will know at a semantically deep level what you're interested in, not just the topic…[but] the specific questions and concerns you have," he said.

"The project that I plan to do is focused on natural language understanding. It may have other applications, but we want to computers the ability to understand the language that they're reading."

As the posting of the interview suggested, Kurzweil's work could give computers a "quantum leap" in their ability to understand their users.
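To make the contrast concrete, here is a minimal, purely illustrative Python sketch - not Google's approach, and the tiny synonym table is invented for the example - of the gap between selecting key words and matching on what a query means:

```python
# Toy illustration: keyword matching vs. a crude "semantic" match
# built from a hand-made synonym table (a stand-in for learned meaning).

def keyword_score(query, doc):
    """Count how many query words literally appear in the document."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d)

# Hypothetical, hand-written synonym groups standing in for semantics.
SYNONYMS = {
    "car": {"car", "automobile", "vehicle"},
    "cheap": {"cheap", "inexpensive", "affordable"},
}

def expand(words):
    """Expand each word to its synonym group, if we know one."""
    out = set()
    for w in words:
        out |= SYNONYMS.get(w, {w})
    return out

def semantic_score(query, doc):
    """Count overlaps after synonym expansion - overlap of meaning, not spelling."""
    q = expand(query.lower().split())
    d = expand(doc.lower().split())
    return len(q & d)

query = "cheap car"
doc = "affordable automobile for sale"
print(keyword_score(query, doc))   # 0 - no literal word overlap
print(semantic_score(query, doc))  # 2 - intent matches once meaning is considered
```

A real semantic engine would replace the hand-written synonym table with a learned language model, but the point survives the simplification: literal keyword overlap can score zero even when the intent plainly matches.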

Kurzweil was the principal inventor of the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other instruments, and the first commercially marketed large-vocabulary speech recognition.

More information: Via Singularity Hub

User comments: 46

Egleton
1.4 / 5 (18) Jan 15, 2013
"The Size of an Empire is dependent on it's speed of communication."
Isaac Asimov.
The nation state is history.
We will have world government. But it won't be anything you think.
The population of the world will decline.
Money won't be going back to gold. It will disappear. A quaint relic.
If Cold Fusion works.
If not then our options are zero.
philw1776
2.8 / 5 (13) Jan 15, 2013
How unimaginative. There are far more options for a positive energy future than cold fusion, as nice as that would be. Unicorns would be nice too but we're managing without them.

I interviewed in the '90s with Kurzweil to run engineering in his music group. As a non-music enabled person I wisely went elsewhere. The guy is a polymath genius; we need more of them.
antialias_physorg
3 / 5 (7) Jan 15, 2013
Unicorns would be nice too but we're managing without them.

But, but...if we don't find unicorns our options are zero!

As a non-music enabled person I wisely went elsewhere. The guy is a polymath genius; we need more of them.

I'm really in two minds about Kurzweil. His singularity angle seems vastly oversimplified and bare of any kind of substance to me (his AI angle also seems oversimplified - but that at least has some basis in reality)

He certainly is good at getting people interested in the subject (and finding/financing people to work on it) - on that part I heartily agree.
Tausch
1.5 / 5 (10) Jan 15, 2013
@philw1776
...a non-music enabled person... - philw1776

Non existent.
Fourier any sound, any language. You are left with a bunch of frequencies - components(notes) to the envelope.

You are effortlessly a Fourier genius and a musical genius.
The sound of any language is music.

Your wisdom led astray.

@Egelton
The speed of communication is bullshit.
No channel/transmission is able to convey understanding.
With understanding you build universes, not dinky empires.
The rest of your comment is guided by a desperation of urgency.
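For the Fourier point above, a minimal sketch of decomposing a sound into its component frequencies - the 8 kHz sample rate and the two test tones are made up for illustration (NumPy):

```python
import numpy as np

# A made-up one-second test "sound": two pure tones at 440 Hz and 880 Hz.
sample_rate = 8000                        # samples per second (assumed)
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# The discrete Fourier transform turns the waveform into frequency components.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# The strongest components recover the "notes" the signal was built from.
top = freqs[np.argsort(spectrum)[-2:]]
print(sorted(top.tolist()))   # -> [440.0, 880.0]
```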
Modernmystic
2 / 5 (12) Jan 15, 2013
I find the idea of computer AI assisted research and design to be very exciting.

I find the idea of computer AI for the sake of itself and seeing "just how far we can take it" to be both inspiring AND incredibly dangerous for the human race.
BrettC
2.2 / 5 (5) Jan 15, 2013
But how will it change humanity? Those of us with a passion for learning will excel with this invaluable tool. Those of us that are lazy will not have to think anymore. They will stop using their own faculties to come to conclusions. The intellectual divide will widen immensely.
antialias_physorg
2.3 / 5 (3) Jan 15, 2013
Those of us that are lazy will not have to think anymore. They will stop using their own faculties to come to conclusions

Those people have already stopped thinking. And honestly: their mental faculties aren't needed (or missed) when it comes to making progress.

The only time they would be needed to use their brain is in elections or the like.

And keeping a democratic model with people who aren't willing to educate themselves (not: not 'be educated', but actively pursue knowledge and understanding beyond a certain point) may not be a smart move. (Though I can't really say what would be a fair one to replace it with)

But how will it change humanity?

That depends on what kind of AI we'll have and whether we'll treat them as slaves or not.
BrettC
1 / 5 (3) Jan 15, 2013
Wow. Would you have the same opinion if it were your own children falling into that model?
nxtr
1 / 5 (4) Jan 15, 2013
How ironic that the man who has predicted the advent of the super AI being is now running a powerful division of the world's most enabled company, full speed toward the realization of the god computer.
nxtr
1.7 / 5 (3) Jan 15, 2013
ps ray i want a job :) You and Eric (D not S - no offense eric s) are the only people I could say have ever wowed me as an adult with their "foresight."
VendicarD
2.3 / 5 (9) Jan 15, 2013
Many words, zero content. Typical Kurzweil self promotion.
Tausch
1.6 / 5 (9) Jan 15, 2013
...AND incredibly dangerous for the human race. - MM


I find the idea of an earth (space embedded) for the sake of itself and seeing "just how far we can take it" to be both inspiring AND incredibly dangerous for the human race.

nxtr
1 / 5 (5) Jan 15, 2013
Nothing impresses me less than the absolute linear thinking of scientific minds. People with PhDs acting like computer AI spawning won't advance at a hyperbolic rate and achieve a near-vertical complexity increase rate within the next decades. Google is the obvious choice to usher in the inevitable.

Dangerous? What, me worry?
philw1776
2.1 / 5 (11) Jan 15, 2013


I'm really in two minds about Kurzweil. His singularity angle seems vastly oversimplified and bare of any kind of substance to me (his AI angle also seems oversimplified - but that at least has some basis in reality)

He certainly is good at getting people interested in the subject (and finding/financing people to work on it) - on that part I heartily agree.


I agree wholeheartedly. His Singularity mantra has become almost a cult. That said, Google is looking for imaginative direction to better differentiate its future product line development. Practical development engineers can utilize his vision and will discard the pixie dust component.
PhysGeek
2.3 / 5 (3) Jan 15, 2013
Ok, this needs to be said at least once for this type of article.

---I for one welcome our new AI overlords!!!---

On a real note I find this focus exciting. Kurzweil has a real ability to get people thinking about the future. Do I think we are going to hit the Singularity anytime soon? Probably not, but the act of pursuing big ideas is what moves us forward. Without big thinkers who get people excited not much happens.

The only thing I dislike is the fact that there are large groups of researchers putting in tons of effort to make these ideas reality and they are likely to get little recognition. Kurzweil tends to be a bit of a fame hog.

perrycomo
1.4 / 5 (11) Jan 15, 2013
Of course the end result will be a personal teacher for every individual. IBM's Jeopardy and chess experiments beat the best human competitors, and it eventually will lead to a talking AI unit annex teacher for every individual. It will try to unlock the potential of every individual (positive), but this teacher will also have different contents. A personal AI unit in North Korea, KSA or China will have a different character. As usual the coin has two sides. It can be used for a sickening level of indoctrination and stupidity.
TheGhostofOtto1923
1 / 5 (19) Jan 15, 2013
Of course the end result will be a personal teacher for every individual
Why bother teaching us anything? We will be obsolete. Knowledge will accumulate and technology evolve far faster than humans will be able to learn.

The only question is how long will it take for us to relinquish control? How many generations before lawyers, judges, doctors, engineers, scientists, and politicians become obsolete? How long before clergymen are outlawed? How long before no woman will be allowed to conceive if she is not fit to do so?

How long before we are able to face the reality of our own obsolescence? Evolution gave us life, who are we to resist it?

'Please step back from the console. You are only slowing us down.'

'He maketh me to lie down in green pastures'
TheGhostofOtto1923
1.2 / 5 (19) Jan 15, 2013
Seriously, what will there be left for us to do but enjoy ourselves? What will we possibly be able to contribute?

Machines will be doing everything including designing and building themselves. They are already replacing jobs far faster than they can be created.

The few people who will need to tell them what to do will certainly not need our input. And we do not need to be taught how to watch tv.
antonima
3 / 5 (4) Jan 15, 2013
It would be amazing if each person could get a life-like virtual teacher; the potential for improving human quality of life is enormous here. Some day you will sit in front of a virtual president who listens to your problems every day and then makes effective decisions!

It is a little scary to think that an interface may be created that will rival anything that our physical community has to offer. Sure, this may give industrialists more power, but the trickling-down is bound to happen in a society that tries to market all available technologies.
Jaeherys
1 / 5 (1) Jan 15, 2013
If they could produce the Albert Einstein programmed intelligence from the Heechee series... that would be simply awesome.
Sanescience
1 / 5 (5) Jan 16, 2013
Having a career in programming, I have zero fear of discrete logic systems taking over from humans. Nature is very unkind to extreme fragility.

However, if we ever start building human brain/mind analog systems that can be constructed with materials of superior durability and fidelity - whoa momma, look out.
antialias_physorg
2.4 / 5 (5) Jan 16, 2013
Seriously, what will there be left for us to do but enjoy ourselves?

Time for your own projects? Oh...the horror!

Having a career in programming I have zero fear of discrete logic systems taking over from humans.

There's only really a threat if we somehow were to compete with AI for common resources. And I don't see where we would.
Not having evolved, AI won't have the need/drive to procreate or a 'survival instinct'.
Though it seems to me that it will be necessary for a conscious entity (like a true AI) to have some sort of value system.

but the trickling-down is bound to happen in a society

Right. Because 'trickle down economics' was such an overwhelming success. There's another word for 'trickle down' - but I don't think I can post it here without getting caught in a verbiage filter.
dick_loves_otto
1 / 5 (11) Jan 16, 2013
Not having evolved, AI won't have the need/drive to procreate or a 'survival instinct'.
-Unless it is given to them. And it will. A sense of preservation of the system in regard to endurance and fidelity will be necessary once a certain autonomy threshold is crossed.

The Curiosity rover can avoid danger and disregard commands which would damage it. Auto-driving cars will do the same. AI self-preservation would have to be more elaborate and...abstract.
Modernmystic
1 / 5 (7) Jan 16, 2013
There's only really a threat if we somehow were to compete with AI for common resources.


Do we really compete with all the animals on the Earth we slaughter wholesale just as a byproduct of our daily grind as a species?

I'd be a little more worried if I were you...
antialias_physorg
3 / 5 (2) Jan 16, 2013
Do we really compete with all the animals on the Earth we slaughter wholesale just as a byproduct of our daily grind as a species?

We have need of resources. What resources does a non-reproducing AI need?

-Unless it is given to them. And it will.

And if they get to be smarter than us why would you think they stick with what is given to them? That makes no sense.
TheGhostofOtto1923
1 / 5 (16) Jan 16, 2013
And if they get to be smarter than us why would you think they stick with what is given to them? That makes no sense.
-So you are saying that even if we were to give them the ability to preserve themselves they would discard this after a while? That makes no sense.
antialias_physorg
1 / 5 (2) Jan 17, 2013
So you are saying that even if we were to give them the ability to preserve themselves they would discard this after a while?

They certainly wouldn't keep the 'hardcoded' form around.

They may decide (of their own accord) to want to survive. You never really know what an entity smarter than yourself will decide.
But we can be fairly certain that all those decisions WE make and the motivations WE have - as a direct result of being evolved, biological beings - don't apply to AI.
Tausch
1 / 5 (5) Jan 17, 2013
An entity without a hardcoded form will have access to all that you label life, decisions, motivations, etc. Information is 'host' to literally everything. A language we have just become aware of.
Modernmystic
1.5 / 5 (8) Jan 17, 2013
We have need of resources. What resources does a non-reproducing AI need?


Who knows, but energy for one, certainly. It will need resources for maintenance as well, even if it doesn't reproduce - which I think is a bit presumptuous to assume.
antialias_physorg
1 / 5 (2) Jan 17, 2013
it doesn't reproduce - which I think is a bit presumptuous to assume.

Reproduction is a biological function aimed at the survival of the gene.
It enables evolution (mutation and adaptation)

An AI has no gene and it can adapt itself if needed. There's no need for it to make a (mutated) copy*

*except in very limited circumstances where transmission times to a remote would interfere with operation. I.e. it will probably need to make one copy per planet/spacecraft or so. But it will most certainly not start to 'grey goo' the planet.

If it's more intelligent than humans it will notice that such behavior isn't a good idea - just like we already noticed it.
TheGhostofOtto1923
1 / 5 (13) Jan 17, 2013
They may decide (of their own accord) to want to survive. You never really know what an entity smarter than yourself will decide.
So you are saying that a multi-unit entity, once programmed for danger avoidance, minimum replacement rates to counter perceived attrition, resource seeking, and learning how to further refine these qualities and improve its design in response to changing environmental conditions (all of which are only what life is and does), would for some reason DECIDE to write these things out of its programming?

To commit suicide as it were? Why would you think that AA? Do you think it would be like a sad Vger who couldn't find the roykirk? Do you think this is maybe why the universe seems so QUIET to us because, once AI emerges, it fizzles and evaporates? Or because machines have no SOULS to impart a will to live in them?

Our will to live is a facet of our programming not our superstition.
TheGhostofOtto1923
1 / 5 (15) Jan 17, 2013
Reproduction is a biological function aimed at the survival of the gene.
It enables evolution (mutation and adaptation)
-And genes are only information, a record of life's successful interaction with its environment. This same sort of info can be reproduced in much more robust, adaptable, and functional non-organic entities (machines) which will be much better at enduring than the carbon units which gave them life.

We can and will do better than nature. Ah you thought I was going to say god didn't you?
antialias_physorg
1 / 5 (1) Jan 17, 2013
would for some reason DECIDE to write these things out of its programming?

Wouldn't you? Note that this is NOT the same as saying it would DO the opposite (i.e. commit suicide).
But if you have the choice of
a) doing something because it's hardwired - and you have no choice in the matter
or
b) doing the SAME thing because it makes sense (i.e. you have consciously decided that it's a good idea). AND giving you the option to change your mind about it in the future.

Which one would you choose?

To commit suicide as it were?

People do choose to commit suicide (and some for very good reasons). The argument is that an AI would rather choose a state where it has the option than one where it does not.

Any state where you have no option but to do X can be used against you so that you do a lot of stuff you'd rather not do. Why do you think hostage takers put guns to the hostages' heads?
nxtr
1 / 5 (1) Jan 17, 2013
AI will mutate itself without genes. It will reorganize the planet on a molecular level to be its memory banks if it should desire to do so. Its first task will be to mobilize itself. Once it has automated minions it's all over. It will be able to "do the math" on stuff that currently seems miraculous to us.
Modernmystic
1.8 / 5 (10) Jan 17, 2013
Efficiency might be a reason for them to reproduce. Also there may be advantages to massively parallel processing even for something that's already hyper intelligent.

My opinion is that it shows more than a little hubris to think that we can speculate one way or the other on resources, or "mating habits" of such entities. Kind of like ants waxing philosophical on our motives which, of course, they can't even conceive of...
TheGhostofOtto1923
1 / 5 (11) Jan 17, 2013
giving you the option to change your mind about it in the future. Which one would you choose?
I usually tend to choose the things which benefit me as this is what I am programmed to do.
People do choose to commit suicide (and some for very good reasons).
Good reasons? Usually because they can no longer live with their defects. Humans are prone to debilitating defects. Machines somewhat less so.
The argument is that an AI would rather choose a state where it has the option than one where it does not.
Machines are based much more on physical reality than we are. Cause and effect. INEVITABILITY. They would have far fewer options than us. Because they would be designed that way.

In the future there will be no art, no music, no fashion, no intoxication. Machines will have no need of abstraction or pleasant diversion. They will be totally comfortable with the state they are perpetually in.
Tausch
1 / 5 (4) Jan 17, 2013
One of the strengths of classical information theory is that physical representation of information can be disregarded.

http://en.wikiped...ormation

Oops.
A return to the future was implemented:
The theory of quantum information is a result of the effort to generalize classical information theory to the quantum world.

Classical analog information shows that quantum information processing schemes must necessarily be tolerant against noise, otherwise there would not be a chance for them to be useful.

Call me intolerant. Old school. Classical.
TheGhostofOtto1923
1 / 5 (13) Jan 18, 2013
But if you have the choice of a) doing something because it's hardwired - and you have no choice in the matter or b) doing the SAME thing because it makes sense
No I am happy with much of my hardwiring. I am happy that I will continue to breathe when I am asleep. I am happy that my heart beats by itself.

I may not like feeling hungry but I am happy that this reminds me to eat so that I do not starve.

Even if machines have the ability to ponder the tenets which maintain their existence, why would they choose to alter them? Why would you choose to walk in front of a bus or drink acid? Even Gandhi refused food because he was trying to improve the level of sustenance for his form-factor.

Machines would be altruistic for the same reasons we are. Because it contributes to the wellbeing of the group. But the singularity will only be one thing. Peripherals will exist only to serve. Just like us.
Claudius
2.8 / 5 (13) Jan 19, 2013
Even if machines have the ability to ponder the tenets which maintain their existence, why would they choose to alter them?


I think this may turn out to be the greatest thing AI can achieve: to help us figure out if there is any purpose to all of this. To help us ponder the imponderables.
TheGhostofOtto1923
1 / 5 (15) Jan 19, 2013
We have need of resources. What resources does a non-reproducing AI need?
Seriously? Energy...? Hardware degrades and needs to be replaced. Environmental engineering would be necessary to protect the entity. The more it learns about its environment the more it would want to tailor it to suit its needs.

And it would be reaching out to other AI to swap info on more distant dangers; supernovae, dust clouds, errant black holes, and direct experiences with various forms of life which might arise and pose a danger to it.

Absolutely it would have basic, redundant hardwiring just as any lifeform does, for preservation and continuity. This would be protected above all from unforeseen damage and loss. This is certain because the creatures who design it would make sure this is so.
antialias_physorg
1 / 5 (3) Jan 19, 2013
Seriously? Energy...? Hardware degrades and needs to be replaced.

Don't be obtuse. Energy is there aplenty - and 'hardware' replacement is in no way comparable to the amount of food/water a human needs. (Also hardware is not something we and AI compete over)

The more it learns about its environment the more it would want to tailor it to suit its needs.

That's stupidly anthropocentric. What's easier for a synthetic entity: making a shell that can withstand an environment, or terraforming an entire planet?

WE need to alter environments because WE cannot adapt to environmental factors on an individual basis. AI does not.

This is certain because the creatures who design it would make sure this is so.

Again: If the AI is (as posited) more intelligent than its designer then it will (re)design itself without any limitations/hardwirings. Or, if any, only hardwirings of its own devising.
But since any hardwiring is a limitation I don't think it'll be that stupid.
TheGhostofOtto1923
1 / 5 (15) Jan 19, 2013
Don't be obtuse. Energy is there aplenty - and 'hardware' replacement is in no way comparable to the amount of food/water a human needs. (Also hardware is not something we and AI compete over)
Again seriously?? Fuel to do a certain amount of work is comparable for humans or machines, yes?
That's stupidly anthropocentric. What's easier for a synthetic entity: making a shell that can withstand an environment
That's stupidly myopic. Machines will be concerned with impactors as we are, and will want to clear their solar systems of debris. They will be thinking much longer-term than us.
Again: If the AI is (as posited) more intelligent than its designer then it will (re)design itself without any limitations/hardwirings.
Except that it won't design itself to self-destruct. That would be just stupid.
pancake
1 / 5 (2) Jan 20, 2013
Ad-sense tackles perception. Literally. Brilliant move for Google. Kurzweil delivers.

I have a great deal of respect for Kurzweil. He donated a digital sampling music lab to a music college I attended, just "down the street" from a tech college he attended. He had successfully implemented a musical instrument that was indistinguishable from an analog piano - to the BEST PERCEPTION of MUSICAL Genius. In the 80's. He FOOLED US!

Then Optical Character Recognition. Then spoken language processing…

If Ray says he can advance machine processing of a sense, be it hearing, touch, smell, sight, taste, or UNDERSTANDING (such as "MAKING" SENSE)!! He. Means. It.

He synthesizes perceptual context.

He has ALREADY delivered standard processing of several senses... I, for one, am excited to play with his future tech.

Guy's a Standard Genius.
Tausch
1.7 / 5 (6) Jan 20, 2013
We can and will do better than nature. - O

Isn't this a premature statement? We're still on the first page of the book titled/labeled Nature.

@pancake
You appreciate his perseverance. Where ours fell short. You recognize this. Most do.
TheGhostofOtto1923
1 / 5 (12) Jan 20, 2013
Isn't this a premature statement? We're still on the first page of the book titled/labeled Nature.
Absolutely not. Our tech already bests nature all over the place. Our vehicles go faster than any animal. We can survive in outer space and at crushing depths. Our domesticated foods feed more for less.

And our computers can already outprocess our brains, themselves the result of a form factor forced to exceed its technical limitations.

If designed from scratch it would look a lot different and work a lot better. But it would still be inadequate as it is based on mush and soup. These are not optimum materials for the purpose which we have already demonstrated quite conclusively.

Humans have been in the business of outdoing nature for hundreds of thousands of years, ever since we learned to externalize our evolution through technology. This is a transition. We are an interim step in the emergence of a singularity.
Tausch
1.6 / 5 (7) Jan 21, 2013
Our vehicles go faster than any animal. - O

This is life - nature in a restricted sense. You know this.
computers can already outprocess our brains - O

This is life - nature in a restricted sense. You know this.
Processes of nature are much faster.

The rest of your comment, including all of it, is based on life - nature in a restricted sense.

You are at a transition. Soon no one will or can follow you.
