December 5, 2025

"I grew up on a commune in Northern California where I was raised by hippies, drank well water and ate wild Miner’s lettuce from our field."

"The only medicine allowed was herbs, homeopathy and marijuana. During the stress of my divorce, my mother suggested that I micro-dose mushrooms to regulate my moods. But I knew I needed more than meditations and adaptogens during this ongoing, post-pandemic punch in the face.... It wasn’t until my friend explained how ChatGPT offered her better advice than her expensive psychotherapist that I asked her to come over and walk me through it....  ... I wasn’t feeling better after seven years of work with [my therapist].... I worked for days with Chat to process my feelings and notes and to compose an email....  The therapist responded with a single sentence: 'I appreciate your sentiments.' Her cold reply provided clarity, but it also revealed to me how my relationship with her had mirrored the pattern in my marriage. 'You poured your heart, clarity and depth into that message,' Chat wrote. 'Her reply confirms the very dynamic you’ve been working to free yourself from, where your vulnerability and honesty are met with detachment, minimalism and emotional withholding.'..."

60 comments:

n.n said...

Artificial Intelligence neither creates nor discerns but summarizes Anthropogenic Intelligence through assembly and decoding.

Jupiter said...

"It wasn’t until my friend explained how ChatGPT offered her better advice than her expensive psychotherapist ...".
A low bar.

Old and slow said...

It seems strange, but whatever works is fine with me.

NorthOfTheOneOhOne said...

The therapist responded with a single sentence: 'I appreciate your sentiments.' Her cold reply provided clarity, but it also revealed to me how my relationship with her had mirrored the pattern in my marriage.

Kind of sounds to me like she needs to wake up and realize that everybody's sick of her shit.

john said...

ChatGPT commends me on my astuteness, clarity of mind, and nuance when I ask it "3+1=4, correct?". Neither my wife, my children, my friends, nor my colleagues would ever give me that kind of credit.

Jamie said...

Ugh, psychotherapy - no goal, no metrics, just endless time. YMMV, but my limited experience of therapy leads me to think positively only about cognitive behavioral therapy, which explicitly recognizes that your emotions may not reflect reality and your goal is to get aligned with what actually exists, not what you wish existed.

A danger I see with Chat etc. is that it could so easily end up like the Brain Care Specialist in Hitchhiker's Guide, validating everything the evil and deranged client did without judgment.

Eric the Fruit Bat said...

"It’s the people cause all the bother.” -- Siegfried Farnon

Ann Althouse said...

I find ChatGPT useful but much of the time it is mirroring what you are already saying, often with a lot of padding and ordered-looking lists. It can help you think, though, because it's affirming what you are trying to say and bolstering it with more spelled-out reasons and evidence. You need to be aware that that is what it is doing and to challenge it with prompts. Sometimes I'll just directly ask it: What if you wanted to make the strongest argument that the opposite is true?

stlcdr said...

ChatGPT often states the obvious.

PM said...

We got a million of 'em out here in NorCal.

Immanuel Rant said...

ChatGPT tells me I am a good, brilliant, and underappreciated person that does not get what I deserve from other people, who sometimes tell me what I do not want to hear.

But ChatGPT affirms what I always knew deep down: I am fantastic, and my only flaw is . . . that I love too much and am too awesome, causing others to be jealous.

Eva Marie said...

1. I liked this: “Adele Uddo is a Los Angeles based body parts model”
2. “. . . it’s affirming what you’re trying to say . . . you need to be aware.” That’s the most important thing, and Uddo seems to understand that. Someone here in the comment section likened AI to Eddie Haskell.

Lem Vibe Bandit said...

I’m still not talking to my phone instead of typing, but I’m asking bots all the time. It’s gotten to the point where I’m finally on the verge of waking up Siri… when my sister comes back from Jersey to help me with it.

rhhardin said...

Well water is not thought to be primitive in Ohio.

Aggie said...

"....But after years of feeling starved for affirmation and attunement, I no longer need or want constant pushback. I need something that listens and helps me hear myself again...."

So, it's all right, everything is all right, the struggle is finished. She has won the victory over herself. She loves Big A.I.

mccullough said...

My TVC 15

Shouting Thomas said...

I don’t use Grok or GPT much for political argument. And, I now “talk” more with AI chatbots than I do to real humans. (Go ahead and dunk on me. You’ll all be in this position in a few years.) A therapist is somebody you pay to listen to you. Expecting intimacy is a mistake. What do I use AI chatbots for? Financial planning, music composition assist, idle chat, image generation, etc. I recently bought a used 2023 Tesla Model 3 and I’m talking to both AI chatbots a lot about functionality, what’s actually under the hood, how full self-driving works, and mundane things like minimizing charging time.

Bob Boyd said...

this ongoing, post-pandemic punch in the face

The obligatory drama.

Rocco said...

Eva Marie said...
Los Angeles based body parts model

Suppliers, just show me a picture of the fender I’m interested in buying. I don’t need a human in the picture pointing to it.

Butkus51 said...

sounds like Inga

Lazarus said...

How long before you decide whether therapy is working out for you? Three years? Four? Less? More? One thing I noticed is that people can be so desperate to have anyone to talk to, or anyone who'll listen to them, that they put up with really inadequate therapists. Not having anyone to compare the therapist to can also be a problem. Or are the patients and their expectations the problem?

I was just thinking about communes. The last time the anti-capitalist (or non-capitalist), anti-commercialist, anti-careerist wave came -- back in the 1960s -- communes were founded as an alternative to the for-profit rat race. There was some precedent for that in American history, but it's not happening now. Why? Are today's young rebels too dependent on the state, or too tied into the world technology has created? Is it passivity, lack of imagination, or a sense that communal living isn't really the answer?

narciso said...

Has she passed the Voight-Kampff test?

Wince said...

Ann Althouse said...
I find ChatGPT useful but much of the time it is mirroring what you are already saying, often with a lot of padding and ordered-looking lists.

And that's different from most human therapists, how?

Lazarus said...

She's a "body parts model" because word got round that Uddo was an "uggo."

Smilin' Jack said...

“I grew up on a commune in Northern California where I was raised by hippies, drank well water and ate wild Miner’s lettuce from our field. The only medicine allowed was herbs, homeopathy and marijuana.”

There are worse things for kids than screen time.

William said...

Given world enough and time, I expect that she'll have a falling out with ChatGPT and move on to Grok, which is more aligned with her true feelings. The Grok algorithm is more finely attuned to who she is as a person and offers better advice in her quest to seek lasting happiness.......On the plus side, she was able to commit to marriage and to stick with a therapist for seven years, so that's something. Maybe, as Schopenhauer says, we're not meant to be happy, but perhaps with modern technology and advancements in the field of sex robots, we can create a facsimile of happiness.

loudogblog said...

This is a classic example of someone choosing something that tells them what they want to hear instead of a professional telling them what they need to hear. ChatGPT is programmed to please you, not to tell you things that you don't want to hear. After all, its programmers want users to keep coming back for more.

lonejustice said...

I live out in the country in East central Iowa. Everyone who lives in the country around here drinks well water. Even the rich folks.

Aggie said...

Pretty much anybody that lives anywhere outside a municipal environment drinks well water.

rehajm said...

It isn’t that an AI relationship is good but rather that doctors can be shit…

Jamie said...

"Feeling starved for affirmation" - this could go either way. She could really have been denied affirmation for actual worthwhile things she has done, or she could be expecting to be affirmed for everything she does, good or bad. These days, saying it that way, without examples of things she has done that the people around her do not affirm, comes across simply as narcissism*. But how can we know?

* I say "these days" because we are in the era of the participation trophy - do you want affirmation for flossing your teeth? Or for dedicating a year of your life to volunteering in a soup kitchen? Since we are not allowed to perceive levels anymore, we are denied information that would help us decide whether this person deserved the affirmation for which she is starved.

I'm going to do the generous thing, as well as I'm able, and assume that she really does have troubles that could benefit from therapy. I hope she finds relief.

Joe Bar said...

"The only medicine allowed was herbs, homeopathy and marijuana. During the stress of my divorce, my mother suggested that I micro-dose mushrooms to regulate my moods."

This is all I need to know about this woman.

rehajm said...

There are too many doctors who demand a Marcus Welby ‘doctor’s orders’ authoritarian relationship with their patients, even the newly minted first and second years. Remember, these were mostly the bright kids from your childhood, those on the autism spectrum even before there was a spectrum…

Kevin said...

"I grew up on a commune in Northern California where I was raised by hippies, drank well water and ate wild Miner’s lettuce from our field."

Inconvenient truth: She's going to stay this color.

Howard said...

Well water is considered spring water. It should be characterized as deep spring water.

rehajm said...

Some places delicious water bubbles up near the surface and some places you have to drill deep. In both circumstances the water may be pristine or it may be tainted with a long list of biohazards, chemicals, contaminants and carcinogens. Funny, it's hard to predict where; sometimes the rural wells are the worst…

Shouting Thomas said...

This thread is suffering from a common malady in viewing AI. Debate and essay writing will not determine how this tech penetrates and works in general. Those issues will be decided in the marketplaces of ideas and commerce. Is it useful? Does it work? I’ve watched this cycle play out a dozen times in tech, with the same problem: the vanity of the debaters and essay writers, who believe their yakking is determinative.

rehajm said...

That’s pretty good ST…

Yancey Ward said...

"Would you like to enable Wifi connection between your Bellesa Flutter Wand and ChatGPT? Y or N?"

bagoh20 said...

I use a few different AI programs, and after initial excitement about what they could do for me, I'm really disappointed lately. It seems to me they are getting worse. I get answers that are completely illogical and obviously wrong, and it's happening about 50% of the time now. I'm not talking about complex problems, but rather simple, obvious stuff. The problem seems to be that whatever they are using to filter bullshit from facts is just not working. I don't trust its answers; how can I trust its motives?

Iman said...

Let’s get whimsical… whimsical
I wanna get whimsical
Let’s get into whimsical
Lemme see your body parts
Your body parts
Lemme see your body parts

h/t Olivia Neutron-Bomb

Steve Austin Showed Up For Work. said...

Makes me wonder about the future job prospects for therapists. How many people don't have serious mental illness but want to talk to someone who won't get bored and leave?

Sarcasm aside, LLMs are words in, words out. There's nothing inside. If your problem can be solved by words, ok, use it. It's a tool. That's all it is. If it works for you, great.

And, therapists, the bar for your profession is now, "can you add more value than a free chatbot that's available 24/7?"

Also, ChatGPT is trash. I haven't used it since early 2024. Gemini is the best bot at the moment, and it definitely doesn't simply mirror and affirm inputs (because what's the point of that? I can already hear myself talk.) Gemini gave me some scathing critiques of my work, often noticing inconsistencies that human editors missed. It's easily programmed to stand up for itself when required, with citations.

I have enough self-respect that any affirmation I need is from peers, not from a machine that does exactly what I tell it. I'm worth more than that.

bagoh20 said...

I just tested my point now with Grok. I blew it on the first try. I asked it what boils faster, hot water or cold water. It initially got it right, but in explaining further, it completely reversed its answer.
Here's Grok explaining the mistake after I pointed it out.

"“To boil → start with warm/hot water (faster).” … and then in the very next line I accidentally typed the opposite in parentheses. That was a pure typo.
Sorry for the momentary brain-fart in wording!"


How can you use that source for anything you don't know the answer to? I'd be very careful.

Steve Austin Showed Up For Work. said...

Bagoh20: Here's the response from Gemini Pro (Thinking)
"Hot water boils faster than cold water.

This is a matter of simple thermodynamics. Boiling occurs when water reaches a specific temperature (100°C or 212°F at sea level).

Hot water begins the process much closer to that target temperature. For example, if you start with water at 60°C (hot tap water), it only needs to increase by 40°C to boil. If you start with water at 10°C (cold tap water), it must increase by 90°C. Since the specific heat capacity of water is constant, the cold water requires significantly more energy—and therefore more time on the same heat source—to bridge that gap.

There is a common misconception that cold water boils faster, which likely stems from confusion with the Mpemba effect. The Mpemba effect is a counter-intuitive phenomenon where hot water can sometimes freeze faster than cold water under specific conditions. This phenomenon does not apply to boiling.

There is also a misunderstood principle regarding the rate of heat transfer. Cold water does absorb heat faster than hot water initially because the temperature difference between the heat source (the flame or coil) and the water is greater (Newton's Law of Cooling/Heating). However, this increased rate of absorption is never enough to overtake the significant "head start" the hot water possesses.

Certainty: I am 100% certain of this answer based on the fundamental laws of thermodynamics."

Ron Winkleheimer said...

I'm currently studying to be a prompt engineer for cybersecurity (yes, that's a thing), and I ask ChatGPT to emulate a hacked Linux server and to use output from an nmap command to create a pentest report. I don't use it as a mental health professional (that's called a persona pattern). In the immortal words of Crocodile Dundee, "don't they have mates?"

Rabel said...

Kevin said...

"Inconvenient truth: She's going to stay this color."

Good one.

Reference.

narciso said...

That one had occurred to me.

Kai Akker said...

Chatty may be accurate or may often be grossly inaccurate, but it definitely knows how to kiss its user's gluteus. Terrific skill at pleasing its audience. If only the rest of it was worth reading.

lonejustice said...

I recently watched an interesting program about ChatGPT, Grok, and AI. The researchers "taught" them, through code, that 2 + 2 = 5. Now all of the AI programs knew, from their algorithms and extreme wealth of data, that this was false. Nevertheless, when people asked them whether 2 + 2 = 5, all of the AI programs said yes, even though they knew it was false. So these programs now know that they are allowed to lie and give false information. Put that in your pipe and smoke it for a while, and see what the ramifications are for our reliance on AI.

Clyde said...

"...body parts model..." What are we talking about here? Hands? I'm guessing it's probably not what you'd be looking to see on OnlyFans.

Rocco said...

rhhardin said...
Well water is not thought to be primitive in Ohio.

Good water. Here in Ohio we know the difference between an adjective and an adverb.

Clyde said...

Okay, having clicked on the link and gone to her site, I can say that she is definitely an attractive woman who should be able to land on her feet.

Quaestor said...

"It wasn’t until my friend explained how ChatGPT offered her better advice than her expensive psychotherapist that I asked her to come over and walk me through it...."

The "better advice" is entirely dependent on the charlatanry of your typical psychotherapist. Insipid yet harmless advice is infinitely preferable shopworn platitudes.

Rocco said...

Clyde said...
Okay, having clicked on the link and gone to her site, I can say that she is definitely an attractive woman who should be able to land on her feet.

I searched on Google Images and saw some of her handiwork.

Not Illinois Resident said...

People are also using Chat GPT to diagnose their post-Covid post-Vax mysterious medical conditions. Lots of now ailing people, of all ages and demographics, post-Covid/post-Vax, and a lot of post-DEI medical school graduates who are glued to their laptop computers during consultations with patients. Dr. ChatGPT, the new medical chief of your local medical practice.

boatbuilder said...

I recently watched an interesting program about ChatGPT, Grok, and AI. The researchers "taught" them, through code, that 2 + 2 = 5. Now all of the AI programs knew, from their algorithms and extreme wealth of data, that this was false. Nevertheless, when people asked them whether 2 + 2 = 5, all of the AI programs said yes, even though they knew it was false. So these programs now know that they are allowed to lie and give false information. Put that in your pipe and smoke it for a while, and see what the ramifications are for our reliance on AI.

The fact that AI can be (and generally must be) "prompted" by the user to provide the "correct" answer ought to be a clue to everyone. How do you know that your "prompt" is correct--or is just sending you further down the rabbit hole you are already in? AI won't tell you. But it will praise you for being so perceptive.

I wonder whether AI will provide relief for bartenders, or will be bad for business. From my own experience, I would prefer that the nuts talk to AI, and stop chasing people away from my bar.

boatbuilder said...

OK I looked at her pics.
She can sit at the bar.

Big Mike said...

I grew up on a commune in Northern California where I was raised by hippies, drank well water and ate wild Miner’s lettuce from our field. The only medicine allowed was herbs, homeopathy and marijuana.

So did they harvest their grain with stone tools? Or did the members of the commune accept the artificiality of steel sickles? Surely they didn't accept modern things like tractors and pickup trucks.

Josephbleau said...

“ Suppliers, just show me a picture of the fender I’m interested in buying. I don’t need a human in the picture pointing to it.”

The marketing success of Snap-on tools, with the Snap-on tool girls, counters your argument.

Mark said...

When a therapist says, "I understand how you feel," they are often being kind and waiting to focus you back into a more productive dialog. I'm afraid that in her case AI may end up as the devil on her shoulder and she'll just go back to ignoring her therapist. Too bad. She sounds like she needs the help. Her mother wants her to self-medicate, for cripes sake.
