August 16, 2023

"Google’s A.I. safety experts had said ... that users could experience 'diminished health and well-being' and a 'loss of agency' if they took life advice from A.I."

"They had added that some users who grew too dependent on the technology could think it was sentient. And... when Google launched Bard, it said the chatbot was barred from giving medical, financial or legal advice. Bard shares mental health resources with users who say they are experiencing mental distress. The tools are still being evaluated and the company may decide not to employ them.... Google has also been testing a helpmate for journalists that can generate news articles, rewrite them and suggest headlines.... The company’s A.I. safety experts had also expressed concern about the economic harms of generative A.I.... arguing that it could lead to the 'deskilling of creative writers.' Other tools being tested can draft critiques of an argument, explain graphs and generate quizzes, word and number puzzles."

13 comments:

Temujin said...

"Google’s A.I. safety experts had said ... that users could experience 'diminished health and well-being' and a 'loss of agency' if they took life advice from A.I."

As opposed to, say, Kamala Harris?

Kakistocracy said...

Have you all (commenters, authors, etc.) tried AI? These are Large Language Models trained with varying amounts of unsupervised learning across different domains. The AI bots make many simple and glaring mistakes. They are genuinely less reliable than self-driving cars. Do people really want to accept this many obvious and important mistakes?

As someone with some experience in the field of AI and computing, I find these AI bots easy to deal with and easy to manipulate. I doubt the average person will be able to do the same. Can you imagine the mistakes in code generation alone?

Kate said...

De-skilling vs. desk-killing. I misread (and prefer) it as the latter.

The Crack Emcee said...

I lost a good friend in the tech industry because he wouldn't stop insisting that Google AI is becoming God.

They just won't stop insisting this shit.

Lem Vibe Bandit said...

de·skill
/ˌdēˈskil/
verb
gerund or present participle: deskilling
reduce the level of skill required to carry out (a job).
"advances in technology had deskilled numerous working-class jobs"
make the skills of (a worker) obsolete.
"workers are being deskilled all the time"

It sounded to me like something to do with grilling.

Mason G said...

The Darwin Award people are gonna need a bigger box of trophies.

Jupiter said...

It's a matter of degree. Is relying on Google AI that different from relying on Google search?

Alexander said...

I believe the AI mania is overrated, but it is nonetheless interesting how, as soon as there is a perceived threat to beltway jobs, there is suddenly a need to be very concerned about it.

Not surprising, but another little dot of data to remember.

Oligonicella said...

Anyone who asks an A.I. for life advice deserves to follow said advice. It's slow but it will thin out the herd.

Tofu King said...

Dave, I have the greatest enthusiasm for this mission.

mikee said...

Google and Facebook and X and Instagram and TikTok and every other social media platform online will one day be programmed, either by humans or by AI run amok, to drive the public like a herd of cattle into the abattoir, probably with a viral YouTube influencer acting as a Judas Goat. What the abattoir will be isn't yet settled, but addictive use of social media itself might be close enough for government work.

Static Ping said...

So far, the product of a chat AI is roughly equivalent to that of a student trying to write a report on a subject that he or she only barely researched and clearly does not understand. The product is mostly plagiarism sprinkled with nonsense that the student thinks is true due to poor reading comprehension skills and minimal effort.

Kakistocracy said...

Part of the problem is the sci-fi name that has been applied, both by the vendors and the media. None of the large basket of tech that now calls itself AI is actually intelligent: it's mostly statistical pattern matching, with no logic involved (see the toy sketch at the end of this comment).

Another problem is the size of that basket. Things that were previously called Machine Learning (a very powerful technique in itself) have now decided that they are actually AI, because it raises valuations. This is forgivable, as they're at least as powerful as generative "AI," but we also get processes that are almost certainly simple rules-based heuristics being branded as AI (mortgage applications, etc.).

I'm quite sure, for example, that an LLM is not tuning the performance of aero engines, because that wouldn't be possible. It is a category error, like asking your cat to write a biography of Alexander the Great.
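
To make "statistical pattern matching" concrete, here's a toy sketch in Python. It's purely illustrative and nothing like a real LLM in scale: a bigram model that picks the next word only by how often it has followed the previous word in a tiny made-up corpus. No logic, no understanding, just counted patterns.

from collections import Counter, defaultdict
import random

# Tiny made-up "training corpus" -- hypothetical example text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Pick the next word in proportion to how often it followed `word`.
    counts = following[word]
    if not counts:
        return random.choice(corpus)
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Chain predictions to "generate" text: pattern matching, not reasoning.
words = ["the"]
for _ in range(8):
    words.append(next_word(words[-1]))
print(" ".join(words))

A real LLM replaces the counting table with a neural network over billions of parameters, but the task is the same kind of thing: predict the next token from patterns in the training data.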