November 24, 2023

"Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity."

"... two people familiar with the matter told Reuters."
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.... Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

40 comments:

Stick said...

I've seen this movie. Do not want

RideSpaceMountain said...

Skepticism at such an announcement is richly deserved. Kurzweil might be right about the singularity being near, but this isn't it. I subscribe to Kurzweil's premise that we'll only know we created sentient AGI when it can prevent us from shutting it down.

Legitimate intelligence won't follow our rules, in much the same way billions of humans don't follow them, and we won't be able to force a real AGI to either. Real life finds a way.

planetgeo said...

Q* doesn't even need to learn to do math at a grade-school level because its little brother (GPT4) can already call in "agents" (specialized software tools) like Wolfram Mathematica to do the heavy math (differential equations, statistics, etc.) and other complex functions whenever needed.
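Roughly, that hand-off looks like the sketch below; a minimal illustration assuming the open-source sympy library as a stand-in math engine (the routing and function names are invented here, not OpenAI's actual plugin mechanics):

import sympy as sp

def solve_with_math_tool(expression):
    # Hand the symbolic heavy lifting to a math engine instead of the language model.
    x = sp.symbols("x")
    return sp.solve(sp.sympify(expression), x)

def answer(question, expression=None):
    # If the question needs real math, route it to the tool;
    # otherwise the language model would answer directly.
    if expression is not None:
        return solve_with_math_tool(expression)
    return "hand the question to the language model"

print(answer("What are the roots?", "x**2 - 5*x + 6"))  # -> [2, 3]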

The Crack Emcee said...

A few years ago, I had to ditch one of my long-time best friends, a freelance troubleshooter who did tech work for the likes of Google, etc., because he started insisting - demanding - I accept that they were creating God with AI.

I've had to suffer a lot of these kinds of encounters/break-ups since Trump won.

wild chicken said...

Considering grade school students can't do grade school math anymore, I'd say this is progress!

mezzrow said...

Balboa glimpses the Pacific Ocean. The trick this time is that it wasn't there before it was seen, and we don't know how large it is. It gets even bigger really fast, some say. Things will likely get even more interesting, whether we can stand it or not.

It may be a bumpy ride. Fasten up.

Roger Sweeny said...

Marginal Revolution has some links, including a good short explanation of what Q and Q* are.

https://marginalrevolution.com/marginalrevolution/2023/11/q-assorted-links.html
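In short: in reinforcement learning, Q(s, a) is the expected future reward of taking action a in state s, and Q* is the optimal version of that function. A minimal tabular Q-learning sketch, with a toy four-state environment invented purely for illustration (nothing here is from OpenAI):

import random

# Toy chain: states 0..3; taking "right" (1) in state 3 ends the episode with reward 1.
N_STATES = 4
ACTIONS = [0, 1]                 # 0 = left, 1 = right
ALPHA, GAMMA = 0.5, 0.9          # learning rate and discount factor

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # the Q-table, one value per (state, action)

def step(state, action):
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    done = (state == N_STATES - 1 and action == 1)
    return nxt, (1.0 if done else 0.0), done

for _ in range(500):
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS)              # purely random behavior; Q-learning is off-policy,
        s2, r, done = step(s, a)                # so the table still converges toward Q*
        target = r if done else r + GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (target - Q[s][a])   # nudge Q(s, a) toward the Bellman target
        s = s2

print(Q)   # in every state the "right" entry ends up larger: that table approximates Q*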

mikee said...

This is why, years ago, after seeing the movie Colossus: The Forbin Project, I put a manual "power off" button within reach on my PC and continue to do so to this day.

Tom T. said...

These people are all lunatics, lost in their own heads.

rhhardin said...

"Of" means multiply, is the word problem trick.

Oligonicella said...

"Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said."

I love it when they combine failure with hype.

Oligonicella said...

mikee:
This is why, years ago, after seeing the movie Colossus: The Forbin Project, I put a manual "power off" button within reach on my PC and continue to do so to this day.

Why? PCs have always come with power switches.

Presume it didn't and you hadn't: what do you expect a PC to be able to do?

Scott Patton said...

"OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks"
I'm not sure if the author is inaccurate or if OpenAI really defines AGI that way.
I would say that an AI could be considered an AGI and still not necessarily be able to surpass humans in most economically valuable tasks. At least, not at first. The quoted definition sounds more like ASI (artificial superintelligence).

Original Mike said...

"This is why, years ago, after seeing the movie Colossus, The Forbin Project, I put a manual "power off" button within reach on my PC and continue to do so to this day."

M5 dealt with that by frying the putative button-pusher.

Oso Negro said...

Grade school students from this century? Or the last?

stlcdr said...

AI (sic) appears to be smart because people are so much less educated than they were and becoming more so each day.

People who talk about AI (sic) as the next best thing are simply equivalent to snake oil salesmen. People keep throwing money at them, so who doesn't want to sell snake oil?

stlcdr said...

In what way, shape or form can AI 'threaten humanity'?

The only 'threat' is the way these people are 'teaching' AI (sic) systems. Certain things are bad; but only to the enlightened teachers.

Put that system in charge of, say, a credit card/payment system and you will end up with a de facto censoring system being enforced by... well, the computer, which says you can't buy that thing because, based on your historical purchases, you don't fit the learned profile of a good citizen.

Remember, it's the computers: they're always right, except this one time, because it's a bug that no human had any involvement in. That's why we have 'security patches' to fix the system - heaven forbid we fix the people programming them...

...wait, hmmm, I have an idea...maybe a bad programmer who created the bug is a bad citizen...

Howard said...

On the Lex Fridman podcast, Elon Musk telegraphed this safety concern at OpenAI a couple of days before Altman was fired.

Coop said...

It’s disturbing what the future holds, and I can sort of empathize with the board's side of things, but I gotta stick with Altman on this.

Cambridge researchers are close to achieving similar results on a model they have. It's safe to assume that China likely has several of its own iterations of AI nearing AGI, too (and I hope the Western researchers have very enhanced cybersecurity postures!), and there are probably other bad actors working on their own versions.

This will all end in some sort of digital Godzilla v Mothra v Rodan v Mantis and so on at some point.

Big Mike said...

Fifty years ago arguably the hottest area of AI research was “commonsense reasoning.” This involved teaching the AI engine such things as “liquid water takes the shape of its container, and in the absence of a container it will spread out,” and “gravity pulls things down.”

If AGI really can “surpass humans in most economically valuable tasks” then they’ve come a long way in a half century. But then again, a half century is a long time.

typingtalker said...

" ... two people familiar with the matter ... "

They sure get around. FT has some general thoughts ...

As FT Alphaville has already noted here, there appears to be one very busy “person familiar with the matter” doing a not insubstantial amount of leaking ...

But just who could this person be? And what is his or her agenda? FT also has some general concerns ...

We hereby launch the stress test leaker ‘Guess Who’ game:
1) Does he have a beard?
2) Does he work for the government?
3) Does he have blue eyes?
4) Does he work with people familiar to the matter he is familiar with?
5) Does he drive a car?

So who is this person familiar with the matter?

Paddy O said...

What will change? Seems like human leaders have shown themselves to be inhuman and depersonalizing to anyone who stands in the way of their corruption.

We've been run by artificial intelligence for millennia. It's just been in human bodies with little humanity left, driven by their sin and petty or mega evils.

rehajm said...

...that threat to humanity stuff means no matter how much propaganda they feed it or how much they monkey with the algorithms the fucking thing chooses Trump...

Sebastian said...

"autonomous systems that surpass humans in most economically valuable tasks"

Not sure how "autonomous" they need to be to "surpass" humans in such tasks, but AGI is coming, is here in fact, and will have a massive impact on education (for one thing, cheating gets so much easier), science, law, finance--all the jobs that depend on symbol manipulation/information processing in some way. I can't judge the "safety" issue but it will unsettle many sectors. In the hands of bad actors, i.e., not "autonomously," new possibilities of major mischief emerge.

Rich said...

Why is there no meaningful, in-depth discussion of the reasons for, and the societal risks associated with, the plausibly rational OpenAI board's firing of a possibly reckless Sam Altman?
https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

It seems that their Q* AI model posed risks to society if rushed through. Instead, when the prior OpenAI board tried to execute their mandate (safety first), Altman was inappropriately determined to fast-track commercialization (we now see, off the back of MS influence and support). Ostensibly Altman was not entirely open and honest with the board, who sought restraint and measured caution.

Through seeming avarice, MS and Altman have cast off the governance shackles by using, inter alia, media hype and inducements to OpenAI employees to jump ship, effectively firing their board and now charging full steam ahead in the profit-frenzied AI race with Amazon and Google.

Where is the discussion of government oversight and regulation of such profit- and ego-motivated commercial zeal by the MS-Altman duo? It feels like none of the mainstream press is picking up on this angle of the story, which is very strange given that the reasons for the board's actions are still not understood but lie at the heart of this whole saga.

Media picked up on the ego and cultural clashes between Altman and the board. But there was something much deeper. The firestorm that broke out, especially from the employees, and the reinstatement of Altman have papered over these cracks, but they are very much there, especially now that Altman is back.

Bob Boyd said...

I'll only know they created sentient AGI when Althouse posts a clip from its interview on Joe Rogan.

boatbuilder said...

The whole thing looks to me like a ploy to boost the company.

Oligonicella said...

For anyone reading this who is unfamiliar with the AI landscape - 2c:

There is no "AI". There's conflation between the concept and the reality. There's never going to be a singular AI for a number of reasons.

There are already multiple AIs in code and more so in databases and the owners have competing interests. There will be more, not less. This alone should stop the singular AI for many decades barring some extreme reconfiguring of the Earth's economy and technology.

Like green energy and EVs, singular AI isn't compatible with our current global societies.
Remember how simple it was going to be to just dump power into batteries for use at night? Didn't happen. We're even having battery supply problems.
Remember how simple it was going to be to just plug your car in at night for the next day's use because of panels and windmills? Didn't happen and doesn't look like it will.

The code/database of an AI cannot travel to another computer a la SciFi because it is simply too massive. There are also systems with AI which not only disallow unrequested external contact but don't even have the connections for it. So, no singular control of the world.

'AI' is real but as far as your day to day use, it's not a great threat unless you swallow the hype - much like the news. Not because the AI itself is a danger but because the owner(s) exert their personal biases on/thru it - much like the news.

I suppose (loosely) a set of competing AIs could duke it out for control but...

The plug can always be pulled, nicely or otherwise.

Narr said...

I worry less about what AI/AGI/Q*/Q-anon (how come I never hear about them anymore?) can do when the power is on, than about what they can't do when all the power is off.

Josephbleau said...

"Of" means multiply, is the word problem trick.“

Yes, but Seven of Nine, in Star Trek, is ordinal. An element of an ordered set or multiset. And she was great looking too.

Rich, accomplished, well-educated people like the OpenAI board are very willing to be crazy cultoids now. Luddites afraid of the plot of an old movie they saw once. Why are you in the AI business if you are going to shut down halfway? How are you going to make any new products with that policy? It’s like when the director of the ministry of truth only allows lies.

Walter S. said...

It's all hype.

There is some good work that's called artificial intelligence. The chatbots aren't it. The chatbots just fetch random sentences from the internet and string them together into mostly-coherent paragraphs. It's a nice programming trick, but if the result contains anything good, it's because the random sentences were good. That's all.

I have seen mathematical proofs written by chatbots. A fish could do better.

Hey Skipper said...

The Biden administration is preparing to regulate GPT AI that has more than 10^26 computer operations for training. That's 100 trillion trillion.

GPT4 training has taken about 10^23 operations.

Leaving aside just what constitutes an operation, that means GPT AI, as it stands, is three orders of magnitude below the regulation threshold.

No telling how soon that gap will be covered.

Or what the climate impact will be.
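Taking the comment's own figures at face value, the gap is easy to check:

threshold_ops = 10**26   # proposed reporting threshold for training compute
gpt4_ops = 10**23        # the rough GPT-4 estimate quoted above
print(threshold_ops // gpt4_ops)   # 1000, i.e. a factor of a thousand (three orders of magnitude)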

Rich said...

Rumors are the board got caught up arguing over whether AGI had been achieved with a new breakthrough (Q*). AGI is said to be explicitly carved out of any agreement with Microsoft and the for-profit side of OpenAI.

Team “safety” EA vs. Team revenue-capture rationalists.

The Godfather said...

As I recall (from reading history -- I wasn't around at the time), when the steam engine was invented, it could "surpass humans in most economically valuable tasks". That innovation eventually freed a lot of humans to perform other, more economically valuable tasks. That's how progress tends to work.

Bob Boyd said...

Human technological advancement peaked with production of the 1989 Mitsubishi Montero. Everything's gone downhill after that.

PigHelmet said...

“The code/database of an AI cannot travel to another computer a la SciFi because it is simply too massive.” Literally just downloaded a 17B parameter uncensored version of the Llama 2 large language model to my home PC. Try out LM Studio (open-source AI downloader and framework). The Butlerian Jihad will be fought house to house.
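For anyone curious what running such a locally downloaded model looks like, here is a minimal sketch using the llama-cpp-python bindings (the file path is just a placeholder for whatever quantized model file you grabbed; LM Studio wraps the same idea in a chat GUI and a local server):

from llama_cpp import Llama   # pip install llama-cpp-python

# Placeholder path: point this at the quantized model file you downloaded.
llm = Llama(model_path="path/to/llama-2-model.gguf", n_ctx=2048)

out = llm("Q: Who wrote Dune? A:", max_tokens=32, stop=["Q:"])
print(out["choices"][0]["text"])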

Smilin' Jack said...

I suspect AGI will do a much better job of running the country than the current system of AGS (Artificial General Stupidity), aka the federal government.

NotWhoIUsedtoBe said...

Code breaking.

Oligonicella said...

PigHelmet:
“Literally just downloaded a 17B parameter uncensored version of the Llama 2 large language model to my home PC. Try out LM Studio (open-source AI downloader and framework).”

That sounds fun to play with.
For size reference, the online game I play currently downloads at around 26GB and as it's NPC heavy, one could say it's got AI embedded.

My post was batting around thoughts on an AI singularity that some people are concerned by, so as everything I wrote concerns an AI of that magnitude, I still stand by it.