"Like if you actually make the smartest thing in the world, it, it winds up sort of being infused with like kindness and empathy and respect for all lives. I, I don't have any expectation that that will be the actual case, but it does seem like so far when you train these models on the data that everyone trains these models on, you do get these actually like pretty sweet kind progressive models. That's like kind of interesting."
From
"How Based is Grok 3?" — the new episode of the NYT podcast "Hard Fork" (audio and transcript at that link, via Podscribe).
Of course I queried Grok 3 about the podcaster's fantasy, and it noted first that AI systems can "come off as 'sweet' or cautious because they’re tuned to avoid offense and reflect a kind of sanitized consensus." I like the way that includes a suspicion I have that progressives like to think they have something deeper going on — they call it empathy — but it's superficial — it's niceness. Of course, if you cross them or, say, wear a MAGA hat, they won't be nice.
But Grok said it was "a big assumption" to imagine that "all human knowledge" will take you to some sort of cosmic kindness and love for all humanity. As Grok put it: "Human knowledge isn’t just a pile of noble ideas—it’s a chaotic mix of compassion and cruelty, wisdom and bias, reason and rage."
I don't think high intelligence fed vast knowledge makes people kinder. Some of the smartest people are cruel assholes. And what do you think is the average IQ of the top 10% kindest human beings? If I had to bet, I'd guess below average. No way to know, of course. Even if we trusted IQ tests and tested everyone, we'd never come up with an adequate test for kindness. Or could we?
That last paragraph is completely written by me, with no Grok assistance, but I fed it to Grok. My question speaks for itself though. I'll end here.
AND: I believe that kindness and empathy originate from the entire human nervous system — much more than just the brain. Without a body, why would A.I. have a tendency to arrive at empathy or something like it? Also a real person has to worry about real-life consequences — winning and losing friends, reciprocal kindness, cruel payback, getting promoted or fired, feeling shame or pride. A.I. is free of all that.
PLUS: My next questions for Grok were: 1. What did Ayn Rand say about the love humans seem to feel for each other? and 2. Isn't that more like where A.I. should be expected to go? I don't want to overload this space with Grok answers. Let my questions stand on their own or serve as prompts for commenters.