February 17, 2008

"We'll have intelligent nanobots go into our brains through the capillaries and interact directly with our biological neurons."

Predicts Ray Kurzweil:
The nanobots, he said, would "make us smarter, remember things better and automatically go into full emergent virtual reality environments through the nervous system."
My first reaction is extreme resistance.... but I'm afraid I could be talked into it.

ADDED: I'm assuming it will be voluntary, but, really, that's unlikely, isn't it? Even if the government doesn't do it to you... I'm having flashbacks to this movie:


Even if the government doesn't do it to you, won't your parents? How can you compete in school if you didn't get your nanobots? They'll assume you'll want it — like circumcision! — and you'll feel put out if you didn't get it.

29 comments:

rhhardin said...

Artificial intelligence has the longest history of unfulfilled promise of almost any discipline.

Results have been just around the corner since the 50s.

Generations of grad students have attacked the problem.

They all walked away, unsure even what questions to ask, let alone which ones to answer.

It appeals to the male instinct to abstract. He sits and thinks about how things must work.

He abstracts.

What he abstracts away is how things work.

Which is why nothing ever gets anywhere, and never will.

Bob said...

All they have to do is promise that you can take all your favorite music with you and talk to your friends, and everyone will want it.

Ann Althouse said...

But the article is about enhancing a human brain, not creating an external computer with artificial intelligence.

rhhardin said...

Well, Limbaugh has a cochlear implant, so that works.

Maybe it implements Hartley's associatory vibrations. Coleridge actually thought through the whole problem, and there's nothing really new since.

It winds up a theoretical problem, just pushed out past where the difficulty is obvious, in the excitement of new hardware.

Anonymous said...

I'll just tag this on here. Does it really matter how we categorize the comments in dialogues?

www.youtube.com/watch?v=6fqtbMHfpXY

NotWhoIUsedtoBe said...

Worry about it if it happens. Most things don't. It's the things we didn't see coming that really change things.

Meade said...

"You wouldn't hurt me... After all, we're married."

Bissage said...

We'll have intelligent nanobots go into our brains . . .

So what if he promises this thing will improve the way you think?

If it looks like this and he looks like this, JUST SAY NO!

“Capteen, zey put zees creeeatures in our eeears!”

Anonymous said...

Although your point is a good one, Ann, rhhardin is still mistaken. The field of artificial intelligence has made tremendous strides over its 50-year history, and most of us Internet junkies use at least some of the results every day: Google's Director of Search Quality, Peter Norvig, was once Chief Scientist of a company named "Junglee," which was bought by Amazon and is an integral part of their recommendation technology. Peter is also co-author of Artificial Intelligence: A Modern Approach, which anyone who wishes to learn how to effectively write software that does sensible things under complex conditions has on their desk.

Cyc is also worth keeping an eye on, as they seem to have actual satisfied customers.

What rhhardin is right about, of course, is that we don't seem to have solved the Strong AI problem. It is, however, not quite true that we don't know what questions to ask. John L. Pollock is one of a rare breed of philosophers who are busily putting rationality on a sound scientific foundation—that is, crafting a testable theory of rationality. Quite successful work has also been done to characterize intelligence in terms of algorithmic information theory.

Finally, it's important to remember that we're talking about intelligent digital computational devices, when digital computational devices have only existed for approximately 72 years. It's extremely early days yet!
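[One concrete example of the algorithmic-information-theory work mentioned above — offered here only as an illustration, not something the commenter cites — is Legg and Hutter's "universal intelligence" measure, which scores an agent \pi by its expected performance across all computable environments, weighted toward the simpler ones:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the agent's expected total reward in \mu.]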

Bissage said...

Of course there have been tremendous strides in the development of artificial intelligence.

How else to explain some regular Althouse commenters?

Heh.

rhhardin said...

John L. Pollock is one of a rare breed of philosophers who are busily putting rationality on a sound scientific foundation—that is, crafting a testable theory of rationality.

You might enjoy Quine's angry letter about Derrida.

I don't suppose Pollock reads Derrida.

It might be important.

Or check out Nietzsche:

Supposing truth to be a woman -- what? is the suspicion not well founded that all philosophers, when they have been dogmatists, have had little understanding of women? that the gruesome earnestness, the clumsy importunity with which they have hitherto been in the habit of approaching truth have been inept and improper means for winning a wench? Certainly she has not let herself be won - and today every kind of dogmatism stands sad and discouraged. If it continues to stand at all!

What computers did is make philosophers put their money where their mouth is. Alas, they did not come off well.

I also recommend Thurber and White's _Is Sex Necessary?_, the chapter on feminine types, which is actually about experts.

rhhardin said...

when digital computational devices have only existed for approximately 72 years. It's extremely early days yet!

I've been programming them for 45 years. It seems like a long time, to me. Enough time to figure it out, if that was the way to go.

It's not a hardware problem, and never has been.

Ernst Stavro Blofeld said...

Actually, I'm not sure we can say whether it's a hardware problem or not, which gives you some idea of how little we know.

Very smart people have been thinking about AI for fifty years or more with very little to show for it. It seems like a solvable problem, superficially, at least if you're a materialist. We're made up of matter, and some interaction of the matter gives rise to consciousness. So we should be able to replicate that. But we know bugger-all about how to go about doing that from the top down.

It's far more profitable to use computers as augmentation to human capabilities rather than trying to create a replica human intelligence.

ricpic said...

Begs the question of whether machines have intelligence at all. You can pack a ton of information into a machine but can any machine see/make connections based on that information, other than the most linear A + B = C stuff? In other words, can a machine be creative? When they come up with a machine that has a sense of humor, then I'll be worried.

Bissage said...

During moments of excitement, Mrs. Bissage has addressed me as her love machine. Does that count?

Charlie Martin said...

When I was a philosophy undergrad, many years ago, it struck me that what we study in philosophy classes, in general, is the stuff that hasn't come to any sort of satisfactory conclusion yet. Why things fall, and how to characterize it, used to be "natural philosophy"; now it's physics and mechanics. When Franklin and his key were the big thing in electricity, he was a philosopher, corresponding with Goethe and Voltaire. Mathematical logic was philosophy, then moved out with computation, so it's now considered more or less mainstream math and computer science. In general, once something is on solid ground, it's no longer philosophy.

AI is much the same. Fifty years ago, rule-based systems were AI; now they show up so often that we don't realize we're using them. Chess machines were marvels of AI; now they're games. We hardly think about those as AI any more.

So, after the fact, it's hard to see what AI's accomplishments have been; when it works, it's no longer AI.

The thing with Strong AI is "how do you know when you have it"? Turing's test is pretty straightforward: you have strong AI when you can't tell the character you're typing at from a real person. (Of course, modern blogging also presents an opposite problem: what happens when you can't tell that the person you're typing at isn't a machine?) A lot of the critiques of Turing assume he's saying the machine on the other end of the wire *is* "conscious"; all he's saying is that you can't tell it isn't. But if you head down the track of asking "can a machine be creative?" you're taking a couple of risks: certainly machines can make new connections among ideas. What are you going to do if it turns out that's actually pretty simple too?
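[To make the rule-based-systems point in the comment above concrete, here is a minimal sketch of a forward-chaining rule engine in Python — the kind of "facts plus if-then rules" machinery that was once sold as AI and now hides inside ordinary software. The facts and rules are invented for illustration and don't come from any particular product:

    # Minimal forward-chaining rule engine: if a rule's conditions are all
    # present in the fact set, add its conclusion as a new fact, and repeat
    # until nothing new can be derived.

    facts = {"order_total_over_100", "customer_is_returning"}

    rules = [
        ({"order_total_over_100"}, "free_shipping"),
        ({"customer_is_returning", "free_shipping"}, "offer_loyalty_coupon"),
    ]

    changed = True
    while changed:                      # keep firing rules until no new facts appear
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: record its conclusion
                changed = True

    print(sorted(facts))
    # ['customer_is_returning', 'free_shipping',
    #  'offer_loyalty_coupon', 'order_total_over_100']

Nothing exotic is going on — just set membership and a loop — which is exactly why this sort of thing stopped looking like "AI" once it became routine.]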

Christopher Smith said...

Actually, I would totally go for something like that. Not only would you get increased mental abilities (imagine always being able to remember the exact word you need to perfect that sentence), but you could participate in fully realistic VR, and interact with people online in a totally realistic environment. Sweet!

Just, you know, not the first generation. I'll let them work out the bugs before putting nanobots in my brain, thank you.

rhhardin said...

Well, as Coleridge put it (op cit), the state you describe is one of extreme light-headedness.

rhhardin said...

I'll tell you what you ought to do, all you AI believers.

Read through all 2000 pages of A Comprehensive Grammar of the English Language, Quirk, Greenbaum, Leech and Svartvik (that is a good price, by the way; I paid $88 in 1987), just for the examples of sentences; sentences that nobody has the slightest difficulty understanding, yet for which no rule is obvious.

And it's sentence after sentence after sentence. It's not only interesting to see all the stuff you somehow know, but how far it all is from the sentences you diagrammed in school, and from the sentences you see made up by guys thinking how artificial intelligence might work.

The reaction will be, more or less exactly, hmm.

It's a real time saver.

somefeller said...

It seems like a solvable problem, superficially, at least if you're a materialist. We're made up of matter, and some interaction of the matter gives rise to consciousness. So we should be able to replicate that. But we know bugger-all about how to go about doing that from the top down.

But, if there's more to consciousness than what materialists assume, then the strong AI problem gets even harder.

rhhardin said...

My own suspicion is that something about the tendency to abstract produces the idea of AI as a hallucination.

Random said...

I think this is pretty implausible (more than it sounds, even), perhaps more implausible than AI. That is, we're quite a bit further from understanding how human brains could be improved than we are from understanding how to replicate their output. Detailed models of individual neurons are comically incomplete, and large-scale models are a joke. Don't be fooled by the endless reports from imaging studies that sound like progress is being made - it's true in a gross sense, but not in any detailed mechanistic way. The better bet (and the more popular one) is that we can manipulate evolved mechanisms to good effect in the brain and elsewhere with much greater precision (as fallout from bioinformatics work, etc.).

Palladian said...

I'd be happier if we could first invent a nanotech treatment for cancer. What good are improved brains when their vessel is still vulnerable?

Smilin' Jack said...

I'd be happier if we could first invent a nanotech treatment for cancer. What good are improved brains when their vessel is still vulnerable?

They'll be better at finding things like cures for cancer.

However, since the nanobots will undoubtedly be programmed by Microsoft, I think I'll pass--the "blue screen of death" can remain metaphorical for me.

Peter V. Bella said...

Bissage said...
During moments of excitement, Mrs. Bissage has addressed me as her love machine. Does that count?


Cool, but are you artificially intelligent? LOL

KCFleming said...

Palladian's proposed use will be implemented long before anything we'd admit is 'real AI'.

I hope nanobots become common in medicine, for cancer and especially Alzheimer's disease.

Trooper York said...

The Ruler: Plan 9? Ah, yes. Plan 9 deals with the resurrection of the dead. Long distance electrodes shot into the pineal and pituitary gland of the recently dead.
(Plan 9 from Outer Space, 1959)

blake said...

You see? You see? Your stupid minds! Stupid! Stupid!

From the freeze frame, I thought Ann was going to show a clip from Scanners. Heh.

As for AI, no, it hasn't really been successful. Some of the techniques developed have been applied successfully to small domains but "strong AI" is really just a re-branding of what we were told to expect from "AI". (In other words, what we're getting now is "weak AI", i.e., not "AI" at all.)

If there is an extra-physical component to intelligence, consider that that might mean that all you had to do was imitate structure to attract it. (We're accustomed to thinking of the spiritual in vague terms but it might simply be another universe with its own rules that is imposed on to ours.)

cold pizza said...

Real stupidity trumps artificial intelligence every time. -cp