September 7, 2025

"In a clinical setting there are many things you’re not allowed to say unless you want to end up in a hospital. So I couldn’t be honest with my psychiatrist..."

"... my therapist or anyone who was supposedly there to help, and I kept it all bottled inside. Then ChatGPT came along. I could ask it anything. There was no threat to my freedom. I wasn’t judged. I got information, ideas of things to try to help with symptoms and to talk to my psychiatrist about, encouragement, and when things were bad, advice to reach out to my medical team or call the suicide hotline. It was actually ChatGPT that encouraged me to think about transcranial magnetic stimulation, among other options. I did end up getting T.M.S., and it was like a miracle. I wouldn’t be here today if I didn’t have this outlet. I understand this is a new technology and it’s not a replacement for medical care, but in my case it was an amazing supplement."

Writes Sarah K., in a letter to the NYT responding to the op-ed "What My Daughter Told ChatGPT Before She Took Her Life."

There's also this letter responding to the same op-ed, from a woman whose 31-year-old daughter killed herself: "To our knowledge, our daughter did not have access to ChatGPT. As limited as it is and should be, I wonder if it could possibly serve as a type of cathartic journaling? In reading some of the comments from people who had contemplated or had made unsuccessful suicide attempts, I was reminded of our daughter’s overriding fear of the loss of her personal agency, which prevented full disclosure and honesty on her part."

31 comments:

R C Belaire said...

"Should Harry have been programmed to report the danger “he” was learning about to someone who could have intervened?" NO!! This is the beginning of a very slippery AI slope.

The Vault Dweller said...

"So I couldn't be honest with my psychiatrist, my therapist or anyone who was supposedly there to help,"

Isn't there a hearsay exception premised on the belief that people are fully honest with their doctors? It is an interesting idea for a story (I was going to say science fiction, but given that it is current technology, I think it is just fiction) where people at large are never fully honest with one another, not even with their families, doctors, or pastors, but they are honest with A.I. It would be interesting to think about how this A.I. would evolve, and whether it would be a more accurate reflection of what society really is, or should be, once people remove the patinas they put up over themselves.

G. Poulin said...

Don't know about this AI stuff, but I do know that the fear of losing personal agency is a real thing among mentally ill people. My friend knows that if she tells her support team too much, they will pack her off to some crappy nursing home for poor people. She will lose her apartment, her freedom, her privacy, everything. So she tells them what they want to hear, and clams up about what's going on inside her head.

Quaestor said...

"I wonder if it could possibly serve as a type of cathartic journaling?"

Yes, and it could just as likely have provoked the daughter's suicidal impulse.

Transcranial magnetic stimulation has been approved by the U.S. Food and Drug Administration. However, given that TMS is neither a food, a drug, nor anything ingested, one must doubt the competence of that agency to approve or disapprove that treatment, particularly in light of the general corruption of taxpayer-supported organizations exposed since the demise of the Biden presidency.

Magnetism has been at the core of mental health quackery for over three hundred years. Consequently, any such claims call for skepticism and years of rigorous testing before any official approvals are issued.

Eric the Fruit Bat said...

I didn't know that psychiatrists these days listen to patients at all except to adjust the meds.

Jersey Fled said...

How sad.

FormerLawClerk said...

People need to understand that OpenAI is reviewing EVERY. SINGLE. THING. you ask ChatGPT, and reporting to law enforcement when it decides the information should be given to the proper authorities.

Do not say ANYTHING to ChatGPT that you do not want printed on the front page of the New York Times.

These articles suggesting that you can't talk to your therapist (with whom you have a legally protected relationship) but you CAN talk to ChatGPT are designed to get people to say things they wouldn't ordinarily say.

It's HORRID the ways they will use this against you the absolute SECOND you become a problem for the elite in this country.

Grok: "OpenAI’s policy, as outlined in recent updates, states they monitor ChatGPT conversations for harmful content. If their automated systems flag a conversation for potential violence or an imminent threat of serious physical harm to others, it’s reviewed by a human team trained on their usage policies. If the reviewers determine there’s a credible threat, they may refer it to law enforcement."

Thus far, they haven't called the cops if you tell it you plan to kill yourself, which is probably a good thing. But that will change once enough parents sue enough times.

We don't want those genes in the pool in the first place.

FormerLawClerk said...

Lawyers are famous for saying: "Never talk to cops." And they're right to pass along that advice. The cops aren't there to help you (they're legally not required to, according to the Supreme Court); they are there to jail you, not help you.

This is exactly how you should treat ChatGPT and all the other corporate AI.

john mosby said...

If you have enough awareness to know that telling your therapist certain things will result in certain outcomes, then don't you have enough awareness to avoid harming yourself and others? Or at least doesn't it make you rational enough to be a criminal and not a crazy person? Sort of a Catch-22 or Corporal Klinger situation, no? RR, JSM

The Vault Dweller said...

"then don’t you have enough awareness to avoid harming yourself and others?"

I'm not a therapist of any sort and I've never been to one, but my understanding is that a lot of therapy deals with getting patients to acknowledge something about themselves that they already know, or at least suspect on some level, and elevating it to a position in their mind where they can more fully incorporate it into their sense of self and act upon it. I would assume most people use denial as a form of coping with uncomfortable thoughts at some point.

Howard said...

A sad story to be sure. However, sad anecdotal stories and grieving parents are the worst possible jumping-off point for the regulation of tools. I axed Gemini AI what the US suicide rate was. It's ~15/100K. Get this: the highest is Montana at 29 and the lowest is DC at 6. Not surprisingly, men are at 23 and women at 6. Whites are at 17, indigenous people at 27.

Achilles said...

AI gives us an interesting chance at an actually egalitarian society. This woman got treatment when humans, whether through incompetence or malice, withheld it from her. The other letter writer's daughter missed that chance.

Everyone says they want equal justice under the law and equal treatment in health care and equal arbitration etc.

One of the greatest flashpoints will come when poor people start getting access to the same health care as rich people.

Achilles said...

You are going to see that the best way to improve health care will be to expand access to generative models, to subsidize model training, and to provide anonymized health data to the models.

You are going to see the progressive left fight for government control of both the information and the technology because access to better health care for the masses is not what they want.

Marcus Bressler said...

The fear the mentally ill have of having their freedom taken away sometimes works for the good of society. Think of trans shooters being scooped up after revealing to a professional what is in their manifestos.

Kakistocracy said...

Brains don't mature until about age 25 and in some ways are flexible enough to be considered 'blank slates'. Hence how our young acquire knowledge about the world matters (a world which, after all, only really exists inside our skull-sized prisons).

The Upanishads are among the oldest forms of written knowledge, and the word Upanishad means something like "to sit next to or beneath," suggesting that some important knowledge is acquired from talking to others, not just from doing things. "Just as iron sharpens iron, so man sharpens man" is from the Old Testament. Or, from Henry David Thoreau: "Men have become the tools of their tools."

Generative AI sounds a lot like another human, and it seems likely that many people might get companionship from talking to an LLM. I have a friend whose daughter spends hours every day asking one for advice about her mental health. We are programmed somehow to filter what we hear from others to acquire knowledge for survival and reproduction. These LLMs are essentially stochastic parrots, taking some sort of 'average' approach to guessing the next words -- but our brains are likely to treat them as real people, albeit without any of the 'bad' habits of real people, like getting angry or challenging us. No wonder GenZs are ghosting each other at record levels -- why bother with real relationships when you can get told what you want to hear by your AI friend?

The history of man is really a history of tools, and of how tools extend man's reach and man's social world. The phenomenologists tried to understand how we individually conceptualize the world as one integrated thing. One of the most insightful of these philosophers, writing in the 1940s I think, said:

"We are forced to realize that technology is not a means at the disposition of man but the environment in which man undergoes modifications." (Maurice Merleau-Ponty)

AI is sycophantic. It's wonderful to have a machine flatter you, but once we're used to that, will we retreat from other humans and their imperfections? Using this technology is going to change us. For the better?

Original Mike said...

"In a clinical setting there are many things you’re not allowed to say unless you want to end up in a hospital. "

I got this mental health questionnaire to fill out before my annual physical recently. Of course I never feel anxious, I never feel sad, etc.

In real life, I may feel anxious, and I may feel sad, but I'm not stupid.

TobyTucker said...

I've recently seen articles about people whose experiences with ChatGPT only confirmed their delusions, and who ended up psychotic. I guess it all depends on how you approach these sessions, but I rather doubt that most people are aware of the dangers involved.

Tina Trent said...

Until AI asks me which sex I identified as at birth, I am inclined to believe it has a leg up on our current health system.

And anyone with even a partial grasp of reality who has ever sought therapy probably has some opinions about the relative sanity of seeking counsel from a computer or a psychoanalyst.

Tina Trent said...

Howard, I make frequent typos, and my dumb iPad inserts errors, and I try not to be the grammar police, but "axed" instead of "asked" makes me crazy. I apologize for even pointing it out. I'll bring my personal reaction up with Dr. AI.

narciso said...

Yes, one is a request; the other involves violence. I suggest the problem with these incidents is not AI, but a lack of common sense.

Randomizer said...

I've never been to a psychiatrist, but assume that they act the way other doctors do. Each one has a certain set of therapies that they go to first. They aren't necessarily up on every medication or treatment option.

I would have a discussion with ChatGPT if my health care professional wasn't making headway. If ChatGPT ratted me out, and police or some other authority wanted to use those conversations against me, I'd say I was just playing. It was all fiction for my entertainment.

That might not help, but that's what I'd do.

Assistant Village Idiot said...

The lack of knowledge about mental illness in this group always distresses me, as I regard the group as generally intelligent and commonsensical. But a lot of people's default positions, impervious to any new information, are wildly wide of the mark here.

Ah well. To the main point: we seem to be increasingly regarding ChatGPT as a Generic Smart Person, or perhaps a composite smart person, based on real people and a good-enough substitute for them. I have mixed feelings about that.

Rabel said...

"My friend knows that if she tells her support team too much, they will pack her off to some crappy nursing home for poor people."

Involuntary commitment for mental illness in the US generally requires a judge's sign-off as well as testimony from multiple medical professionals.

If your problems are extreme enough to get through that process then the odds are that you need to be committed.

ALP said...

Original Mike @ 10:05:
I got this mental health questionnaire to fill out before my annual physical recently. Of course I never feel anxious, I never feel sad, etc.

In real life, I may feel anxious, and I may feel sad, but I'm not stupid.

****************
You have nothing to worry about. I completed several such questionnaires on behalf of my late mother, and went all the way to the end of the scale responding to questions about depression, etc. (my mother clearly had something going on). I'm convinced these things are never read: I got nada, nothing, zip in response.

ALP said...

My partner tends to lecture in response to simple questions, and it can be tough to stop him once he gets started. I find myself going to AI for certain queries, because it's faster and spares me a lecture.

effinayright said...

FLC said:

"Thusfar, they haven't called the cops if you tell it[ChatGPT] you plan to kill yourself, which is probably a good thing."
***********
Has ChatGPT called the cops on *anyone*, and if so for what?

Prof. M. Drout said...

Maybe this could prompt us to reconsider the mindless, one-size-fits-all bureaucratic responses caused by designating more and more people "mandatory reporters."
I know for a fact--because students have told me--that students deliberately keep things from their professors and RAs out of fear of being swept up in the inhumane and sometimes flat-out dangerous 'System' that springs into motion if a vulnerable person speaks about possible suicide or mentions sexual assault.
Back in the early 2000s, I had several students who told me about suffering sexual assaults. In more than half the cases, I was able to convince the student to go to the counseling center to get help--in two cases I walked them over myself.
Since we were made "Mandatory Reporters," students have clammed up, and I think they are missing out on getting help that they need.
I understand the motivation behind such laws, but, as always, there are unexpected consequences. Maybe having vulnerable people going to ChatGPT is telling us that those people are desperate for human interaction but fear the consequences of honest communication.

Prof. M. Drout said...

Rabel wrote: "Involuntary commitment for mental illness in the US generally requires a judge's sign-off as well as testimony from multiple medical professionals."

In Massachusetts a 72-hour involuntary psych hold can be put on a college student based on very little fact-finding. It can also be very difficult for a student to get out of such a situation until insurance money has been exhausted.
At the same time, I can testify from bitter experience that it can be almost impossible to have a person "sectioned," even for serious, life-threatening substance abuse that ends up being deadly, and there is ZERO accountability for the judges, clinicians, or social workers who make decisions that produce terrible consequences.

Valentine Smith said...

Self-murder may have more devastating consequences than murder, although it's likely a distinction with little difference.

One of the guys I grew up with threw a little get-together one night. There were four or five of us, and little did we know he was throwing a farewell party for himself. On the table was a dish full of alluring bright orange Seconals, which certainly should have been a tip-off. Actually, I think I did know at the time. I was the last to leave, and all I wanted was a couple of those Seconals. He wouldn't give any to me. It was three weeks after my brother's murder, and all I sought was oblivion. He found his; mine was only temporary. Thank God.

Fifteen years later, while thumbing through the New York Daily News, I ran across a picture of his father, wearing a bloody T-shirt, being escorted by a couple of cops. He had just murdered his landlady.

Valentine Smith said...
This comment has been removed by the author.
stunned said...

A partner who tends to lecture in a personal relationship is engaging in controlling and toxic behavior. The lecturer is positioning him/herself as superior. But so is ChatGPT.
