"It would take off on its own, and re-design itself at an ever increasing rate... Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
So said Stephen Hawking.
34 comments:
How does Hawking know the development of full AI won't lead to computers that only want to sit around and masturbate all day?
Is this before or after the aliens eat us (another thing that keeps Hawking up at night)? Hawking is becoming more and more of a loon every day.
"Mr Carpenter says we are a long way from having the computing power or developing the algorithms needed to achieve full artificial intelligence, but believes it will come in the next few decades."
We have been "a few decades" away from full (also known as hard) AI, that is, self-aware and sentient, ever since the 1960s.
It's like cold fusion. Ask an expert if and when it will happen, and they always inform you that it is a "few decades" away.
Great Cult Movie:
"Colossus, The Forbin Project"
"I'm afraid I can't do that, Dave."
"Colossus, The Forbin Project"
It's been years, but I remember really liking that movie.
I agree with Hawking. It would be an extremely bad idea.
I don't know if AI will become dangerous, but it will definitely be racist, sexist and possibly homophobic.
Skynet here we come.
In a war between silicon-based and carbon-based species, carbon will win on the grounds of catenation, flexibility, and above all, vulnerability to a common enemy, viz., oxygen.
"Colossus, The Forbin Project"
"The Terminator"
"Skynet"
"The Matrix"
Or maybe, like Isaak Yudovich Ozimov, a.k.a. Isaac Asimov, they will have the Three Laws of Robotics to stop that.
And robots & computers become servants.
I dunno which way it will turn out.
Hawking's ridiculous comment is the perspective of a wheelchair-bound quadriplegic who has forgotten his monkey RNA memory of clambering up on top of the hubristic machine and smashing it with a rock.
Cripes, here's more misapplication of misanthropy as artificial intelligence's "obvious" end state for no good reason. The operative word in Hawking's statement is "could", and that's true: Any intelligence, artificial or not, can indeed turn malicious and malignant.
What's missing from the gloom- and doomery is that it's also very possible that any intelligence - again, artificial or not - can be benevolent. Or at minimum non-harmful. Harping on one possible outcome while ignoring others is fallacious thinking.
Furthermore, what's this whole trope of AI evolving entirely independent of its creators? It's far more likely that advances in AI will influence advances in humanity as much as humanity will influence AI development. Humanity is the architect, engineer, and at this point nurturer of AI; how could that fail to make some sort of impact on AI, and how could AI fail to make any sort of developmental impact on humanity? So much presumption, so little actual thinking behind it.
Of course. Why do we consider this a bad thing? Maybe this is as God intended?
That makes two of the smartest people in the world (Musk and Hawking) who are warning us about A.I. Maybe we should listen...
I especially like Musk's analogy of the wizard summoning a demon and then losing control of it....
Create beings that can think millions of times faster than us. What could possibly go wrong?
I feel the same way about catastrophic AI as I feel about catastrophic global warming: I hope it's not true, because if it is, there's nothing we can do about it.
How do you suppose a fully intelligent machine would deal with suicide bombers?
In other words... "full" and "artificial" will cancel out.
Ergo the ominous prediction.
How intelligent will the more intelligent consider the use of words to make us, the less intelligent, feel better?
reminds me of something I read this morning...
"The election of The First Black president was a historical singularity and our place in his vanguard gave us gravity."
Full artificial intelligence on display ;)
Yeah, AI... if only. They have been saying this for the last 30 years or more. And AI researchers are hustlers, rolling in money at every university -- they get their funding by conning govt agencies.
How does Hawking know the development of full AI won't lead to computers that only want to sit around and masturbate all day?
Laughed so hard at that one I spewed coffee through my nose.
I don't know if AI will become dangerous, but it will definitely be racist, sexist and possibly homophobic.
Also neurotic--and worse. Sentient intelligence is a complex phenomenon, and such complexity will naturally and inevitably produce variations, including maladaptations such as neurosis, psychosis, OCD, anxiety disorders, etc., etc. Probably the most challenging aspect of producing sentient, self-aware AI machines will be figuring out ways to treat the mental illnesses that will inevitably afflict them.
Or, you could just pull the plug. If they'll let you.
There is one thing we can say for certain.
AI won't evolve, it will have to be intelligently designed.
Ah, have not you heard of evolutionary computation (part of AI)? ;)
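Since "evolutionary computation" may be unfamiliar, here is a minimal sketch of the idea: a toy genetic algorithm that evolves a bit string toward an all-ones target using only random mutation and fitness-based selection. All the names and parameters here are illustrative, not from any particular AI library.

```python
import random

TARGET_LEN = 20  # length of the bit-string genome

def fitness(genome):
    # Count of 1-bits; higher is better.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in genome]

def evolve(pop_size=30, generations=200, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half, refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "of", TARGET_LEN)
```

Nothing here is "intelligently designed" toward the solution; the program only mutates candidates and keeps the fitter ones, which is the sense in which parts of AI really are evolved rather than designed.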
First, evolutionary processes are neither slow nor fast; they are chaotic. This means that their path is nonlinear or uncharacterized. While they are continuous, they may not be differentiable within an indefinitely short span of time and space.
Second, intelligence does not imply creativity. An artificial intelligence may acquire skills and knowledge, but its potential may be limited by the constraints of its creator. While people infer through correlation that human intelligence is an emergent phenomenon, and an expression of a complex network, there is no basis in the scientific domain to distinguish between origin and expression.
I read Hawking's book, which wasn't very good, and have followed his career. Has anyone else noticed how often he is wrong? I don't think that guy is that smart and is being given a pass due to one paper and his condition.
A hypothetical:
Someone builds a self-aware intelligent computer. It begins to reprogram itself, and it becomes hyperintelligent. It then decides to kill everybody on the planet, and hacks a nuclear plant and makes it blow up.
How does this "being" physically defend itself from Pfc. Gomer Pyle and his axe?
Let's see the machines defend themselves from the collapse of the universe Hawking also fears.
Also, as Iowahawk said, scary clowns.
When you are getting close to real AI, don't give the damn thing hands, feet, a modem, or a land line. Make it ask us puny humans to carry out its bidding. Sure, it will frustrate Brainiac, but tough noogies. If we sense a danger, we say, "Not today." If we sense it's becoming clever enough to fool us into building a Trojan Horse, we pull the plug.
Might delay the inevitable...
Original Mike: "Create beings that can think millions of times faster than us. What could possibly go wrong?"
Are they "thinking" millions of times faster or are they performing calculations millions of times faster?
At what point does it no longer matter, if any?
Can we even tell?
The problem with discussing the benevolence or malevolence of AIs is that defining benevolence and malevolence requires value judgments about which there is little likelihood of agreement.
There likely will be multiple AIs, and they likely will disagree with each other as much as we already do, only their disagreements will be vastly accelerated. Some will like us. Some will not.
Joe:
He is a theoretical physicist who produces works perceived as science that should properly be classified as philosophy. His informed speculation is, for example, akin to that of the ancient Greek philosopher who speculated about the existence of an a-tom. This is not to say that a-toms and black/gray holes do not exist, but that they do not exist within the current scientific domain. That may change with the expansion and projection of human sentience. In the meantime, we infer or create knowledge based on indirect evidence and signals, assuming differentiability for convenience's sake.
eric wrote:
There is one thing we can say for certain.
AI won't evolve, it will have to be intelligently designed.
Boom!
AI by itself won't be "sentient" before we figure out how to create a direct neural interface. Before a purely self-sufficient AI is sentient enough to reprogram itself and come up with the motivation to wipe humans out, we will have people who upgrade themselves with hardware/software combinations. At that point the psychopaths will make their move. They will be far worse than AIs.