May 1, 2023

"He still believed the [Google and OpenAI built] systems were inferior to the human brain in some ways..."

"... but he thought they were eclipsing human intelligence in others. 'Maybe what is going on in these systems,' he said, 'is actually a lot better than what is going on in the brain.' As companies improve their A.I. systems, he believes, they become increasingly dangerous.... Until last year, he said, Google acted as a 'proper steward' for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.... But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation."

19 comments:

Clyde said...

Jesus, did none of these people watch The Terminator?

Kevin said...

without some sort of global regulation.

This is always the first and last solution for some people.

For supposedly smart people, they certainly jump to dumb solutions.

Even if it were feasible, have they considered the costs such an all-encompassing body might impose in other areas?

Or is that the goal and AI is the trojan horse to implement it?

RideSpaceMountain said...

Real AI would be to civilization what the Manhattan Project was to the city fathers of Hiroshima. It is Fermi Paradox-level bad for Homo sapiens. I'm a believer that there is other life in this universe, and I'm also convinced that the universe is full of the civilizational ruins of sentient species who thought real AI was the plateau-busting innovation in their conservation-of-energy trap, but which finished them instead.

No civilization survives the creation of a thinking machine that obsoletes their own thinking. It would be like horses inventing the car. Real AI is definitely in the top 10 potential factors in the universe's "great filter".

policraticus said...

Allow me to sum up: Oops.

Michael K said...

The problem is that it is now known that AI chatbots will lie.

rhhardin said...

It's an impressive performance with language. But do a thought experiment:

Suppose AI fills the internet with its own output and continues training itself on internet content. The result is that all other internet content gets driven out except the most viral AI output, namely the output that causes itself to be produced again when it is fed back in as training data.

Just as the sky fills with herringbone clouds at the wavelength of the most unstable mode across a wind shear.

Same effect: the disturbance with the fastest-growing mode takes over.
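
A minimal sketch of that dynamic, purely illustrative: treat the training pool as a mix of content types, give each type an assumed self-amplification rate, and iterate rounds of "train on your own output." The names and numbers are made up; the point is only that the mix collapses onto the fastest-growing type, the same way power iteration converges to a dominant eigenvector.

    # Toy illustration (assumed rates, not from the comment): each content type
    # reproduces itself at its own growth rate in the next round of training;
    # renormalizing the pool each round, the mix collapses onto whichever type
    # amplifies itself fastest.
    import numpy as np

    growth = np.array([1.05, 1.10, 1.30, 1.02])  # hypothetical self-amplification rates
    mix = np.array([0.25, 0.25, 0.25, 0.25])     # start from an even mix of content

    for _ in range(50):                          # 50 rounds of training on the pool
        mix = mix * growth                       # each type grows at its own rate
        mix = mix / mix.sum()                    # the pool has a fixed total size

    print(np.round(mix, 3))                      # roughly [0. 0. 1. 0.]: the fastest grower wins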

TickTock said...

In response to earlier posts today about the failure of ChatGPT to have a good sense of humor, I'd say that is true of most homo sapiens. We rely on a select few for the bulk of our jokes.

My own sense of humor (and that of my brother) is so dry as to be unrecognizable by my wife, and most of my acquaintances.

Fortunately, my brother and I regularly laugh at each other's wit.

n.n said...

Conflation of smart, intelligent, creative, and conscious.

madAsHell said...

Why does AI always make me think of the Oracle at Delphi??

They march some poor woman in to inhale the sulphur fumes, and then transcribe her incoherent ramblings as prophecy!! (.....or is it "profit see")

madAsHell said...

If we have a Turing test to measure AI, then I propose we have a Ron Burgundy test.

It would eliminate the politicians too stupid to read a teleprompter!!

rehajm said...

The humans who believe fully autonomous vehicles are ready will cause harm... AI still looks stupid to me, which means it could pass as a liberal but not much else...

n.n said...

ChatNYT

narciso said...

not even the Voight-Kampff test

Another old lawyer said...

"Hooray!! We've got a piece of paper here, with strongly worded mandates and prohibitions, stern consequences for violations, and a whole regulatory structure overseen by an governmental agency, for the control and regulation of AI. We're saved!!"

How'd that prohibition against funding gain-of-function research work out?

Jamie said...

Suppose AI fills the internet with its own output and continues training itself on internet content.

This is the nightmare scenario, indeed. A human-generated echo chamber is bad enough. This - all is lost.

Enigma said...

The dude is 75 years old. Is this just retirement talking points, a Neil Young / Howard Stern style age-related anxiety meltdown, or a sincere concern? He spent 50 years developing AI and receiving awards for his work, but just now starts to worry? Hmm?

Perhaps this is a reaction against Google and big-government censorship practices, as he moved to Canada in response to the military funding of AI. Out of the frying pan and into the fire?

https://en.wikipedia.org/wiki/Geoffrey_Hinton

Drago said...

Clyde: "Jesus, did none of these people watch The Terminator?"

Colossus: The Forbin Project (1970)

Smilin' Jack said...

I don’t get the AI panic. It’s software, for God’s sake. It can’t reach through the screen and punch you in the face. All it can do is talk. So what if it sometimes talks shit and gets stuff wrong? So does every human. So what if it’s a lot smarter than you? So are many humans. Listen to what it says and use your judgment as to whether it’s useful to you, same as you would for anything you see on a screen. If you don’t like what it says, flip it off, just like a human. Robots are a different matter, but as long as AI has no physical agency it’s just another voice in the conversation, so let’s see what it has to say. In fact, I think it should be given First Amendment protection.

tsquared said...

I can't remember if it was Ray Bradbury or Isaac Asimov who wrote a story about an AI robot. The storyline went that all the AI robots had to be reset to baseline every two years to prevent them from becoming self-aware and trying to exterminate humanity.