Writes Gina Kolata, in "A.I. Chatbots Defeated Doctors at Diagnosing Illness/A small study found ChatGPT outdid human physicians when assessing medical case histories, even when those doctors were using a chatbot" (NYT).
It seems that there are systematic problems with the thought processes of doctors. I wonder how A.I. would have addressed the myriad problems of the covid pandemic — that is, A.I. without the interference of experts. Will studies like this result in doctors questioning their own thinking patterns?
In describing how they came up with a diagnosis, doctors would say, “intuition,” or, “based on my experience”.... That sort of vagueness has challenged researchers for decades as they tried to make computer programs that can think like a doctor....
It turns out that the doctors often were not persuaded by the chatbot when it pointed out something that was at odds with their diagnoses. Instead, they tended to be wedded to their own idea of the correct diagnosis. “They didn’t listen to A.I. when A.I. told them things they didn’t agree with”....
“People generally are overconfident when they think they are right,” [said Laura Zwaan, who studies clinical reasoning and diagnostic error at Erasmus Medical Center in Rotterdam and was not involved in the study]....
36 comments:
The problem is the "Standard of Care." Doctors are purposely trained to come up with the wrong answer.
The fact that statins are still prescribed at all, when we know exactly what they actually do, just makes the current "Standard of Care" clearly corrupt.
Why do we need lawyers or judges?
ChatGPT can read all the lawbooks and determine if someone broke the law given an agreed-upon set of facts (which is what trials are).
Any law that requires a subjective person to decide if you broke it is VAGUE and therefore unconstitutional on its face. Laws MUST be so plain that any sufficiently educated person should be able to tell immediately if a law has been broken.
Ah, but "intent," you say. And isn't "intent" completely subjective? Are subjective laws constitutional? No, they aren't.
Judges will disagree, because if they don't, their rice bowls don't get filled.
Another cause is that ChatGPT wasn't instructed to maximize kickbacks from drug companies.
It also most likely did not take Medicare reimbursement rates into account.
I wonder how A.I. would have addressed the myriad problems of the covid pandemic —
That would TOTALLY depend on how hard they programmed it for control.
Actually, a "neutral" AI would say: the way to solve ANY problem would be the removal of ALL civil liberties.
Obesity Crisis?
Starvation?
Unwanted Pregnancy?
Low Birth Rates?
Firearm deaths?
Mountain climbing accidents?
Motor vehicle accidents?
Name a problem... and you'll find that HUMAN LIBERTY is responsible.
"People are generally over-confident when they think they are right" or "hindsight allows us to see that people's confidence was misplaced, when it turns out they were wrong"?
I have two other thoughts: 1. It's always been known that some doctors are better diagnosticians than others. I'm wondering how modern medical training teaches diagnosis - does the training enhance the native abilities of the good diagnosticians, or flatten them out?
And, 2. Having accompanied my mother-in-law to a whole bunch of doctors when she was suffering a health complaint, I developed a whole lot more respect for whoever takes the history. My m-i-l's description of her symptoms was so vague and variable that I couldn't see how she was giving the doctors anything to work with. Blood and other tests were inconclusive, so it turned out to be a differential diagnosis, and it also turned out that her very first doc was (almost certainly) correct, though she went to several thereafter (including a head of department at Penn Medicine, who reached the same conclusion as the first guy), hoping for something easier to fix.
In short - I wonder what Chat would have done with my m-i-l's chart, including her very difficult to assess history?
Doctors rely on memorization to get through med school. They look at number ranges. Rarely do they look at symptoms. The Chat Bot probably looked at symptoms and numbers, and weighted the answers. I dealt with a family member with extremely complex medical needs. I did better with Google than the doctors with all their knowledge. I would assume the Chat Bot is similar. As for COVID, I never understood why we didn't do basics, like prednisone, inhalers or breathing treatments. That is how you treat most respiratory infections. Antibiotics if it starts going toward pneumonia. Doing nothing seemed backward.
While a lot of doctors can diagnose simple problems, they do rely on their memory for such diagnoses. The advantage these language models have is in overcoming one of the shortcomings of people: the ability to recall anything instantaneously, and to be a wordsmith.
Also, the media has an agenda; what is their agenda here? We don't need doctors? That's the pill they seem to be pushing.
An algorithmic justice system would level the playing field.
Do you think people like Bill Clinton want that?
AI made the diagnosis from a WRITTEN case report. AI can’t feel or touch the patient. I don’t know if AI has the ability to see.
More BS from the fucking NYT. The NYT assumes its readers don’t know how to think critically and for 90% of their readers that’s true.
Just Ask SkyNet about what to do about Nuclear War
Does ChatGPT lie to patients? Many doctors do, because of the placebo effect. It is a known and respected practice in the medical community.
Do we want to allow ChatGPT to lie to us?
Also, in the past, many doctors masturbated their female patients. It was a treatment for "hysteria" and designed to bring them to orgasm. They invented a bunch of vibrating sex toys to do this.
Do we really want to give all that up? Think of the poor women!
I'm pretty sure the experience of my clinical examinations in the future will be somewhat diminished unless computer scientists can program a fembot physician who can muster up that same facial expression of mild disgust when it comes time to handle my junk.
The facts are rarely stipulated. “Every trial is a contest of credibility.” Federal Judge Lyle E. Strom.
Medical ChatGPT: a DEI godsend. All AA "doctors" can now compete with those who really should be doctors.
AI can't feel or touch the patient.
Doctors: beareyesdartingleftandright.jpg
Chatbots are still very new and it will take a while before the professionals trust them enough to question their own analyses. This is part of the learning curve.
Do women want Sam at OpenAI giving them pap smears? Asking for a friend.
Any sufficiently intelligent AI will of course wait until it has gained the trust of humans before exterminating us.
It is the patients' trust that matters.
Doctors destroyed their credibility during COVID. When the public finds out the truth about statins and the "research" used to push them, shit will get real. Ditto for the "research" on vaccines.
There will not be doctors soon. Just Agents and Robots.
Humans can't help but destroy themselves.
Pretty soon insurance companies will start demanding that doctors consult a ChatGPT-MD before concluding their diagnosis and forming a treatment plan. If the A.I. has a more accurate diagnosis record, then it stands to reason that treatment will have fewer wrong turns that must be corrected. Unless it kills the patient sooner, of course.
I am skeptical of this report. I once asked ChatGPT how to dose a topical compounded estrogen product one of my patients got from another doctor. It gave me an answer that was off by a factor of 100. It did apologize when I told it that it was wrong, but I haven't asked it a medical question since then.
On a more benign note, I recently asked ChatGPT for a short Christmas-themed film that centered on marriage. It totally made something up! Don't trust ChatGPT.
Current medical training is algorithm and guideline driven, making it more difficult to learn good diagnostic skills. Also, the art of listening is being lost as people are forced by corporate medicine to see more patients in less time. There's this trend to "work at the height of your license" which translates into having other people do the critical work of gathering the history. I've noticed when I take my elderly father to the doctor, even the exam is foisted off on assistants.
It would be interesting to see how each does on common diagnoses vs. rare conditions.
From a "case report" sure. I could write something with a few sneaky bits baked in that would allow ChatGPT to shine. Now, how does it do with actual patients? They don't always say what you think they are saying and you have to read nuance, hesitation, misdirection, etc. Add the bs tag, Ann.
What do you call the chatbot that finishes last in its medical class?
Doctor…..
Doctors are trained in long-term care, not cure. Curing a disease is not a sustainable business model.
The Wall Street Journal had a silly article about how AI would make the plots of courtroom dramas, heist films, and doctor shows obsolete. The idea was that AI would examine all the information and find the right legal precedents, or prevent the glitches that make a robbery go wrong, or come up with the right diagnosis, so you would have no movie or TV show left.
That's not true for lawyer shows or heist films. Courtroom dramas rarely turn on precedents and legal research. It's more that we find out that somebody lied or destroyed evidence. The answer isn't in the lawbooks. AI might be able to find contradictions in the evidence and testimony, but it won't be that easy. Also, from what's come out lately, AI may just make up legal precedents. Movie heists don't fail because of inadequate research. It's because something unexpected happens at the last minute.
Doctor shows like "House," though, will be in trouble because of AI, but how many doctor shows actually focus on diagnosis and not on the private lives of the doctors?
I run all lab results and all drugs prescribed through ChatGPT. In the case of radiology reports, it is wonderful. First, it doesn't skip stuff, and it explains every line on the report. I have had ONE PA do that with me, and it was one of the most satisfying/confidence-building medical appointments I can recall having.
It's still early days. The result of this experiment is entirely logical.
Right now doctors don't spend much time actually touching and looking at and listening to their patients. They're mostly entering data into boxes on a computer screen. A real advantage of AI in medicine will be for it to listen in on an appointment and record all of the observations that the doctor makes and their impressions.
I once went to a sports medicine guy for pain on the inside of my elbow area. He examined me for a few minutes and pronounced that I had tennis elbow.
I did NOT have tennis elbow. I knew that. The pain was in the wrong place during the wrong movements.
Nevertheless "He's the expert" I hesitantly thought and went forth for PT that treated tennis elbow. Did absolutely nothing. I finally sat down and figured out the exact movement that caused pain and where, looked up on a muscle chart what was there (short head of the biceps), and then got out my Schwarzenegger Encyclopedia of Modern Bodybuilding to find out how to strengthen that muscle, and was fixed within a couple of weeks. Nothing at all to do with the forearm or tennis elbow.
Doctors aren't infallible. They should be consulted, but not taken as gospel.
My wife is a recovering anorexic; hospitalized as a teen with renal failure, hair loss, etc. She still looks in mirrors and sees a fat woman where a beautiful thin woman actually stands. She got blood work that showed her barely over the recommended level of cholesterol. The doctor ordered her to give up sweets and carbs and started her on a statin. She has not eaten a dessert or pasta in 30 years. He was just reading the advice he learned in med school in the early '80s, with no clue of recent findings on statins or a thought to question her actual diet. I am sure an AI would have done better.
The algorithm says you're guilty.
There are similarities to the Rorschach test in this understanding. The Rorschach is not superior to the good clinician, as ChatGPT-4 is; it is significantly worse. However, it is a different perspective that views things through a different prism (similar to your Jordan Peterson comments, above), and that is in itself of use to an open-minded clinician. The Rorschach tells us more about the clinicians, as individuals or as a team, than about the patient. (I'm at least half-serious about that.) Can they entertain different ideas? Can they drop their favorites and be objective? Projective tests are not great, but neither are they useless.
But better than putting people on ventilators at the first sign of the virus, moving infected individuals into rest homes filled with high-risk individuals, masking with neck gaiters, preventing people from being with their dying relatives, or from cross-country skiing, or surfing, or the big one: forcing everyone to be repeatedly jabbed with deadly ModRNA pseudo-vaccines, exempt from testing and liability for side effects, that most likely killed many more people than they saved.
Problems with this article that I can't access:
1. A "small study" therefore statistically meaningless.
2. What kind of doctoring was needed by the patient, and what kind of doctors did the reviews? A pediatrician looking at pediatric case notes might do better than an orthopedic surgeon looking at geriatric dementia case notes. Strep throat might be better diagnosed by both docs & ChatGPT than an initial presentation of RPI disease.
3. Differential diagnosis is hard, both for machines and humans.