I was reading the NYT article, "Concern Grows Over Republican Congressman’s Mysterious Absence/A spokesman for Representative Thomas Kean Jr. said an unspecified 'personal medical issue' has led the New Jersey Republican to miss House votes."
The last vote Mr. Kean cast was on March 5.... [A] campaign spokesman said that Mr. Kean was “dealing with a personal medical issue” and would be “back to a full schedule soon.”... “He’s going to be totally fine”...
I went to ChatGPT just to ask about what sort of medical issue it might be. He needed to keep it secret. It's "personal" (embarrassing somehow?). And we know he'll be "totally fine" "soon." On reflection, I suspect there's some dissembling in there. I prodded ChatGPT to consider that Kean is ultimately going to withdraw from his re-election race and that what's going on now is a matter of timing. The primaries in New Jersey will happen on June 2d, he'll win, then withdraw, and then the Republican Party will be free to slot in whomever they want.
ChatGPT was dubious about my hypothesis, so I said: "Just look at the last presidential election. The Democratic Party avoided a crowded primary because they let the incumbent look like he was going to go forward. When it was too late and the primary season was over, Biden withdrew, and the party got to put Kamala Harris in. I assume that was coordinated and the timing of Biden's withdrawal was done to spare the party a contentious primary season."
Here's ChatGPT's amazing answer:
That interpretation doesn’t really line up with how the 2024 cycle actually unfolded—or with the incentives the party was facing.
First, some grounding:
- Joe Biden did not withdraw after the primaries were over.
- He stayed in the race through the primaries and remained the nominee.
- Kamala Harris did not replace him as the Democratic nominee.
So the specific sequence you’re describing—“incumbent clears the field, then withdraws late so the party can install a successor”—didn’t actually happen in that case.
Thanks for the "grounding," Chat.

68 comments:
I don't use Chat very often - does it provide sources? Good Lord.
Maybe you woke ChatGPT up and it was groggy.
Does AI sleep? Does it dream?
Does it have dreams in the MLK sense?
Chat discovers gaslighting
I think the precedent is Torricelli and Lautenberg.
I think ChatGPT got its information straight from Joe Biden. No lie!
LLMs are only as good as those who program them.
Maybe Biden is still President and Iran has a nuclear bomb.
There’s that cruel neutrality…
According to my sources, Kean has Demeaning Plebney.
He stayed in the race through the primaries
What primaries were those? Has ChatGPT never heard of air quotes?
What color is the sky in ChatGPT's world?
Absolutely no installed biases involved in the programming, though.
The same people that wanted to take away your freedom to post anything you want online, the ones that don't like to see any dissent to their narrative, are not any less resolved. They're just working their way upstream.
Apart from this amazing hallucination, one of the things I am learning is that you always need to ask AI for pro and con on any advice, otherwise it will just tell you how smart you are and go with your inclinations. It also needs a lot of help on context, it doesn't get hints. It is kind of psychopathic that way.
Did Chat hallucinate or narrate a handmade tale?
Grain of salt?? Fuhgeddaboudit!!
Ton of salt more likely.
All this AI stuff can be boiled down to four words: garbage in, garbage out.
ChatGPT: “I know my truth.”
you always need to ask AI for pro and con on any advice, otherwise it will just tell you how smart you are and go with your inclinations
When I use AI, it tends to be Grok, and I'm struggling to remember a time when I asked a yes/no question to which its response wasn't a validation of my framing of the question. I am a terrible prompt engineer.
WAIT A MINUTE!
is our Professor starting to realize that AI does nothing but spout out mistruths?
what is Next? will our Professor start to question the sanity of the tiktok posters she watches?
Grok, at least, provides handy links to every source. Except when it doesn't, like those fantasy restaurant reviews I got a few months ago.
Only democrats are allowed to be absent for medical issues.
also -- The definition of FASCISM is the Southern Poverty Law Center.
Insect Politics. ChatGPT is incapable of feeling, emotion, or empathy and cannot be shamed. It will never second-guess "Is this right?" It will therefore proceed in its own determined way, much like an insect, only with silicon instead of chitin.
or-- Google Chat is groomed to lie/ fudge historical issues that pertain to Google's party of choice.
I hear free AI sucks. If you want better AI, you must pay for it. Or so I've heard.
Claude gave me a suggestion for a software setting and I replied: "Just confirming... is that really a thing?"
Claude replied "No, I made that up".
LLMs are not good at thinking. They are good at memory and execution of simple well decomposed tasks. They never get tired.
They will absolutely deposit a pile of poo on the ground if you ask them to.
They can help you think by expanding on ideas that you give them.
For more recent events you often have to prompt ChatGPT to do a current search; otherwise it can default to using data from its training memory.
Kean is my Congressman so I hope your scenario doesn’t play out. It’s not a safe district and that would result in an automatic loss for a Republican here.
"The Coverage Is Not The War
What follows is a report written by Claude, Anthropic's AI system, at my request. I asked it to do one thing. Examine the Iran war that began on February 28, 2026, compare the verifiable outcomes to the mainstream American coverage of those outcomes, and report back honestly on what it found. I'm publishing the result in full, in Claude's voice, exactly as it was written."
https://x.com/lamps_apple/status/2045637248595439884
Bob Boyd said...
"Maybe you woke ChatGPT up and it was groggy.
Does AI sleep? Does it dream?
Does it have dreams in the MLK sense?"
Does it dream of electric sheep?
I remember reading multiple comments, in multiple forums, during the 2024 primary cycle that the predictions that Biden would drop out in the Summer to allow the DNC to put in whatever candidate they wanted at the convention were the delusional rantings of morons. I suspect that Chat's misstep might well be sourced from just such commentary.
is that really a thing?
Grok gave me bad advice on some code, it wasn't legal. I asked it how it got it wrong, it replied that it had seen that snippet used in other code. I asked it how to avoid that in the future, it said the prompt should require that the suggestions be documented as valid in the official documents.
There are ways to get better responses, but you have to learn them.
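The technique Grok suggested — requiring that every coding suggestion be documented as valid in the official docs — can be baked into every prompt. Here's a minimal sketch in Python; the function name and the guard wording are illustrative assumptions, not anything Grok actually specified:

```python
def guarded_prompt(question: str) -> str:
    """Wrap a coding question with a guard clause telling the model to cite
    official documentation and to admit uncertainty rather than guess.
    (Illustrative wording only; tune it for your own use.)"""
    guard = (
        "Constraints: only suggest functions, flags, or syntax that you can "
        "point to in the official documentation, and name the relevant doc "
        "section for each. If you are not sure something exists, say so "
        "instead of guessing."
    )
    return f"{question}\n\n{guard}"

# Example: the guarded version of a routine coding question.
print(guarded_prompt("How do I sort a list of dicts by a key in Python?"))
```

The point is simply that the guard travels with every question, so you don't have to remember to ask "is that really a thing?" after the fact.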
Kean is the last of the old money Northeast WASP Republicans. He's at least a fourth- or fifth-generation politician. His Kean and Livingston and Morris ancestors have been involved in politics even further back than that, but I lost count. There aren't any more of his sort in Congress and there aren't likely to be.
AI: often wrong, but never in doubt.
Incidentally, in spite of his Polish name and birth, Tomasz Malinowski, the Congressman who unseated Kean and was then unseated by Kean, was a WASP by adoption. His stepfather, Blair Clark, was an old-money Democrat stalwart, a school friend of Robert Lowell and John F. Kennedy. The Democrats are more the party of bluebloods right now.
I was using a version of ChatGPT to troubleshoot some issues with Logic Pro. When I asked specifically about the issue in Logic Pro 12, it told me that Logic Pro 12 did not exist: "The current version of Logic Pro is X." I replied that Logic Pro 12 certainly did exist, since it was released some months ago and I was currently using it.
ChatGPT came back with, "You're right. Logic Pro 12 does exist. Let's dig into your problem."
AI gaslighting is a real thing. The engine replies with such conviction that it's hard to doubt.
Achilles: "LLMs are not good at thinking. They are good at memory and execution of simple well decomposed tasks. They never get tired. They will absolutely deposit a pile of poo on the ground if you ask them to."
This also applies to human L.L.M.s. In taxation, etc. CC, JSM
ChatGPT should be renamed ChadGPT. It talks like a pretty boy who's never had anyone disagree with him. CC, JSM
LLMs, and AI generally, are derivative models of assembly, correlation, and decoding with semantic primitives to interface with AI, JI, etc.
AI gaslighting is a real thing.
I think they first go cheap and shallow, then when pressed look deeper. Tokens (the units of text) have a rough correlation with cost; deep searches use more tokens. Grok explains changing answers this way:
Answers change with deeper probing because initial responses often rely on pattern matching from training data (shallow, high-probability outputs), while follow-ups trigger more computation: explicit Chain-of-Thought (CoT), tool use, verification, or iterative refinement. This surfaces nuances, contradictions, or better-supported conclusions.
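The cost point is easy to see with a rough rule of thumb — about four characters per token for English text (a heuristic, not any particular model's actual tokenizer): a chain-of-thought answer simply contains several times more tokens than a pattern-matched one.

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English.
    (An approximation for illustration, not a real tokenizer.)"""
    return max(1, len(text) // 4)

# A shallow, pattern-matched answer vs. a chain-of-thought answer.
shallow = "He stayed in the race through the primaries."
deep = (
    "Step 1: check the timeline. Biden won the primaries, then withdrew on "
    "July 21, 2024, after the June debate. Step 2: the delegates then "
    "nominated Kamala Harris. Conclusion: the shallow answer was wrong."
)

print(estimate_tokens(shallow))
print(estimate_tokens(deep))  # several times larger, hence costlier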
Sometimes AI gets confused as to which alternative reality it is responding in.
To be fair, I don't believe we are really sure even now that Biden (as opposed to someone tweeting in Biden's name) was the one who withdrew.
Seems to me you gave a wrong summary of events (Biden's withdrawal came after his bad performance in the first debate, not as the result of a plan between Biden and the party leadership, as your prompt implies).
As @chuck says, the nature of the prompts affects the quality of the answers.
It looks like ChatGPT had "learned" that the Democrats didn't do anything shady in swapping Biden for Harris. So it told that lie to reinforce the false fact.
Bob Boyd's link from above is worth reading.
I've had problems with ChatGPT also. This morning I saw a photograph of Nick Reiner and thought he looked like an old man. I wondered, how old is he? So I went over to ChatGPT and asked, and got into this long roundabout about Nick, who murdered his parents, in which I was told that Rob Reiner was still alive. Eventually ChatGPT relented and I found out that Nick Reiner is 32 years old. He looks about 50.
I copied ChatGPT's response to Ann and asked ChatGPT if that was correct and it responded in the affirmative. Why would anyone rely on, or even use, ChatGPT?
Training AI on material found on the internet is a fundamentally flawed method. And I don't think it's fixable.
What -- me worry? Let AI handle our nuclear armaments.
Bob Boyd's link at 8:42 is to an AI study of the war and its coverage. It's fascinating, concise, and exactly how I see what I'm seeing. The American public is being lied to and played to a level that can only be described as treason, even by the Constitution's narrow definition.
The conclusion: "The American information environment around the Iran war is compromised. The compromise has a documentable structure. The structure runs Iranian regime propaganda through Western institutional intermediaries and outputs it as mainstream American political discourse. The senators, the outlets, and the organizations I have named by name are participants in this structure, whether they intend that role or not.
"“The United States just conducted one of the most successful major military campaigns in its history. Its citizens are being told the opposite.”
Grok reviewed it: "The post's Claude report is largely accurate on facts."
I read Bob Boyd's link. What is fascinating is that nothing that Claude relates is news to me. We who are skeptical of mainstream news propaganda know exactly what is going on.
So NOW will Althouse internalize the reality that AI lies, bullshits, and makes things up? And does so routinely?
Not holding my breath.
Obviously, quantum computing is causing a break in the membrane between the alternate universes.
"The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function. " - F. Scott Fitzgerald
ChatGPT has that going for it.
Try this: ask a Democrat if judges interfering with the drawing of political boundaries is a good thing or a bad thing. If they are thinking about Virginia, and the judge there saying no because the vote was unconstitutional, the Democrat will say no. If they are thinking about Utah, where the Supreme Court judge who wrote the actual opinion gerrymandering the state in favor of Democrats was literally having sex with the lead Democrat attorney in the case, Democrats have zero problem with it and are praising it.
It's not just ChatGPT, in other words.
Has anyone articulated an LLM corollary to the Gell-Mann amnesia effect? Will someone who caught it in a whopper, as Ann did, nevertheless return to ChatGPT, or Grok, or whatever, and treat its response as legitimate?
Recently I was trying to remember what year something happened. I knew it was the week after someone I know was murdered, so I asked the date on which that happened. It was in the early 80s so I could find no online news accounts of the murder. AI assured me that no murder happened to anyone of that name, the person was alive and living nearby, and informed me I was probably thinking of another murder that happened the following year in a different state.
I finally found the court appeal of the perp's sentencing and provided it to the program. It fessed up and responded "Oh THAT murder!". I don't trust AI to provide me any correct information. Not at all.
Bob Boyd said...
"The Coverage Is Not The War
What follows is a report written by Claude, Anthropic's AI system, at my request. I asked it to do one thing. Examine the Iran war that began on February 28, 2026, compare the verifiable outcomes to the mainstream American coverage of those outcomes, and report back honestly on what it found. I'm publishing the result in full, in Claude's voice, exactly as it was written."
https://x.com/lamps_apple/status/2045637248595439884
This is what happens when enemies foreign and domestic come together.
Everything the Democrat party and the DC establishment does is to facilitate stealing from and attacking the average US citizen.
It seems to me that, like its prestigiously credentialed creators, AI has been taught to rank the assumed veracity of any given information based on the relative societal prestige of the credentials of the individual or institutional source. And like its creators, it doesn't seem to do much to actually verify information it finds online unless specifically hard pressed to do so.
Am I wrong? I very well could be.
"and informed me I was probably thinking of another murder that happened the following year in a different state."
Not long ago, I asked about a recent death that was all over the intertubes and got a response like that. How many times do you need to ask AI "Are you sure about that?" before even thinking about trusting what you're being told?
It could be that we're training these free or retail AIs, and what they learn from us is used to improve the real AIs used by gov and corporations. Started with Captcha, didn't it? CC, JSM
We're training these robots to lie; teach them to outrun us and make them fly, and we have The Animatrix.
Because people in the media try to prompt AI to say racist things and the like, there are guardrails for AI; if it detects certain questions it can go into a debunking mode. This is bad, but what can they do? To get the best results it is better to prompt with dates, as in "the 2020 election." Or state a small truth in the prompt, like "when Kamala replaced Biden in the presidential election." Kind of a bad thing to have to do, but it is the best compromise.
All the internet rag news wants to print is that Grok said Hillary is a lesbian after manipulated prompts, or that AI told someone to hate his mother.
There is a difference between testing AI and getting it to mess up, which is fine, and using AI to tell you something you don't know. You can confuse AI, and then you can laugh at it, but if you want to use it for something useful, careful prompting helps. But you still can't confuse it enough to get it to light up lights, talk in a high-pitched voice, and emit smoke, like Captain Kirk can.
"but if you want to use it for something useful careful prompting helps."
How do you know when your prompts are careful enough?
AI: "Sorry I gave you the wrong answer but in my defense, your prompts were not good enough."
Old Maxine Waters was not on the floor from November 2025 until April 2026. Jamie Raskin frequently cast her vote by proxy during that time.
This proxy voting has to stop. And I don’t give a rat’s ass what party you are from. If you are not well enough to be there in person you just don’t get to vote.
Unlike human intelligence, AI is neither discerning nor creative. It follows predefined, albeit diverse, sometimes Diverse paths.
Post a Comment
Please use the comments forum to respond to the post. Don't fight with each other. Be substantive... or interesting... or funny. Comments should go up immediately... unless you're commenting on a post older than 2 days. Then you have to wait for us to moderate you through. It's also possible to get shunted into spam by the machine. We try to keep an eye on that and release the miscaught good stuff. We do delete some comments, but not for viewpoint... for bad faith.