"... as well as by many A.I. companies — was largely nonsense. It was never going to be possible to contain the spread of this powerful emergent technology, and certainly not just by placing trade restrictions on components like graphic chips. That was a self-serving fiction, foisted on out-of-touch leaders by an industry that wanted the government to kneecap its competitors. Instead of a futile effort to keep this genie bottled up, the government and the industry should be preparing our society for the sweeping changes that are soon to come...."
Writes Zeynep Tufekci, in "The Dangerous A.I. Nonsense That Trump and Biden Fell For" (NYT) (free-access link).
February 5, 2025
23 comments:
I have a 1099-R, apparently filled out by AI, that says this year's payment is 100% nontaxable. This is impossible, so I just adjusted it to the usual stable historical percentage and paid it. That's how to deal with AI.
The Douthat-Andreessen interview from a week or so ago is illuminating on this point. Andreessen said Silicon Valley abandoned its support for Biden in May 2024, when Biden aides said they would contain the AI industry in just three large corporations, which the government would regulate and control. Silicon Valley's allegiance switched to Trump at that point.
"That was a self-serving fiction, foisted on out-of-touch leaders by an industry that wanted the government to kneecap its competitors....."
The way I heard it, Biden's administrative lackeys were A-OK with doing the kneecapping all by themselves, without being asked. It was more like an announcement to the industry, to its profound shock. And subsequently, the industry decided that this presented a danger.
I've also read that DeepSeek was not done cheaply, that this is a Chinese lie, and that there are strong indications that it's based on stolen technology. But good news: It's just as biased and untrustworthy as our own A.I. efforts, just using Chinese nationalist bias instead.
Seems like our intrepid writer doesn't believe too strongly in balanced reporting, though..... "...It is time, too, to admit that the interests of a few large, multinational companies aren’t good proxies for the interests of the people facing such ...." Ah, there it is. The anti-Trump, anti-capitalist journalista; vive le résistant!
The NYT is often the mouthpiece of the business/banking elite. So the current message is "don't worry, be happy": leave all that AI to Big Business, it'll work out fine. Trust us.
Sorry, I don't believe that. Not after 2008, when "financial deregulation" came crashing down. I want the government involved, and I want people seeing what AI is doing and what its dangers are.
Don’t trust the government or the New York Times.
A sociology professor is everyone's go-to expert on the bleeding edge of technology.
Zeynep Tufekci (@zeynep) is a professor of sociology and public affairs at Princeton University, the author of "Twitter and Tear Gas: The Power and Fragility of Networked Protest," and a New York Times Opinion columnist.
She almost always has it wrong; another USAID recipient. Besides, Stargate has a whole other purpose.
What was the name of the movie where the AI became self-aware?
The times continue to get more interesting.
Labor and environmental regulatory arbitrage or improved Automated Intelligence?
Are you back on your meds?
Silicon Valley has only succeeded in wasting hundreds of billions, or even trillions, of dollars on huge numbers of GPUs. They're not necessary. Open source has won. There's no moat. And it's impossible to monetize this stuff in a way that justifies anything like what has been spent.
Silicon Valley has failed Business 101.
"the government and the industry should be preparing our society for the sweeping changes that are soon to come....".
Fortunately, the government and industry are completely clueless about AI. Industry, because it is their current hype, and they are required by the rules of the game to constantly emit detail-free fantasies of its immense impact, so they can sell more chips. The government, because the best and brightest don't go into government. They don't just buy into the hype, they get a warm-and-fuzzy just from the fact that the cool kids are willing to accept money from them. For them, it's not really about the magic beans.
Terminator, The Forbin Project, Age of Ultron, Singularity.
"Artificial life finds a way." —With apologies to Jurassic Park.
I think that the jury is still out as to whether China cheated, but then I also think the jury is still out as to whether AI is going to be, well, it brings to mind what Pedro said in Napoleon Dynamite: "Vote for me and all of your wildest dreams will come true."
China definitely cheated. Under-reported costs. Over-reported capabilities. DeepSeek is a distillation of an existing LLM, so it wouldn't be anything without the foundational LLM that it stole from.
Now, is the LLM the be-all and end-all approach to AI? Probably not, so they probably just came up with a way to make slightly cheaper tulips.
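For readers unfamiliar with the term: "distillation" means training a smaller or cheaper student model to imitate a larger teacher model's outputs, rather than learning everything from raw data alone. Here is a minimal sketch of the classic soft-label distillation loss in Python/PyTorch, offered purely as illustration; this is not DeepSeek's actual training code, and the temperature value is an arbitrary assumption:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation: push the student's output distribution
    toward the teacher's, with both softened by a temperature."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence from teacher to student; the T^2 factor keeps
    # gradient magnitudes comparable across temperature settings.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2
```

In practice this term is usually combined with the ordinary next-token loss. The point is that the student inherits the teacher's behavior, which is why the commenter says a distilled model "wouldn't be anything" without its foundation.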
There are some good things here. But it misses the key bit, which is that they released it open source.
There won't be any Google-like tech monopoly generating $10 bn in cash flow. Maybe OpenAI will be the leader. Maybe DeepSeek will be. But competition is wide open, and as long as a good model keeps coming out open source, there won't be any excess profits to collect.
This reflects the key difference between the US business model (one company earning billions in producer surplus) and the China model (low profit, consumer surplus only).
Kurzweil smiles.
The sterling test is what DeepSeek can do vs., say, OpenAI.
Big difference between the various AIs and versions. Which were you using?
If you want to convince Republican voters that A.I. is a real, viable thing, then just create one that can generate a Rush Limbaugh broadcast and play it every day in this environment.
Yes, emphatically yes. I will posit - for purposes of argument - that we may never get wholly accurate, reliable AI. However, we don't need that for what has been done so far, and what is in the works, to have a dramatic effect (an increase) on productivity. Among those affected will be lawyers and law firms. Practice and the profession will be unrecognizable within 10 years.
Concerns about AI safety are absolutely not nonsense. One large multinational software company with which I do business is working furiously to develop their own proprietary AI, and they do two things (that I know of) that are very different from how the public-facing AIs do things. One, they only use very specific data to train their AI. It doesn't see anything outside of its specific area of concern. Two, they spend more than 50% of their budget monitoring for, anticipating, and correcting hallucinations. They don't get very many, but it's not immune. Just yesterday I read an account from a historian that asked ChatGPT to summarize a particular historical event. Not only did it get it wrong, it invented a book and an author to support its incorrect version of history. LLMs are absolutely not trustworthy.
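To make the monitoring idea concrete, here is a hedged sketch of one very simple guardrail in the spirit the commenter describes: checking a model's cited sources against a curated corpus before trusting the answer. Everything here (the corpus, the function name, the sample citations) is hypothetical:

```python
# Hypothetical guardrail: flag citations that cannot be matched to a
# curated corpus of known, verified sources.

KNOWN_SOURCES = {
    "Smith, A History of the Event (1994)",
    "Jones, The Event Reconsidered (2008)",
}

def flag_unverified_citations(cited_sources):
    """Return citations that do not appear in the corpus; a non-empty
    result means the answer needs human review before use."""
    return [c for c in cited_sources if c not in KNOWN_SOURCES]

answer_citations = [
    "Smith, A History of the Event (1994)",
    "Invented Author, A Book That Does Not Exist (2021)",  # a hallucination
]
print(flag_unverified_citations(answer_citations))
# -> ['Invented Author, A Book That Does Not Exist (2021)']
```

Real systems would use fuzzier matching and retrieval rather than exact string lookup, but the principle is the same: never let a generated citation into the record unverified.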
Evaluations of (public-facing) AI range broadly. Many commentators feel that AI frequently goes wrong and hallucinates often. A smaller subset sings its praises. I believe that there are two key differences driving these divergent evaluations.
In the first instance, those who use it more often are better at writing prompts that get the result they seek. Obviously some component of this is self-reinforcing. But I also think that some element of good prompt writing is unexplained, although I suspect it is in some way related to the second difference.
Those who find value in AI are seldom looking for final answers. Per the billing on Deep Research, they are looking for a research assistant that will give them a leg up on understanding a problem, or a partner to converse with about ideas and possibilities. They are not looking for a final result. Thus, their use cases are more tolerant of slop and misdirection.
I'll give my own experience from back in early 2023 when I first tried ChatGPT. I asked it to evaluate some legal issues and provide citations. At least 30 percent of the citations were incorrect. But here is the thing. When I searched (Google) on those incorrect citations, the search results were useful in their own right. And AI has progressed a good deal since then.
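The two differences above can be made concrete with a hypothetical example: the same legal question framed as a demand for a final answer versus as a research-assistant request that builds in verification. The wording is purely illustrative:

```python
# Two framings of the same question. The second reflects the
# "research assistant" use case: leads plus verification steps,
# not an authoritative final answer.

naive_prompt = "What are the leading cases on this issue? Give me citations."

assistant_prompt = """You are a research assistant helping me get oriented.
Question: what are the leading cases on this issue?

Instructions:
- List candidate cases with a one-line summary of each.
- Mark every citation 'verify' -- I will check it in a real database.
- Flag anything you are unsure about instead of guessing.
- Suggest search terms I can use to confirm your answers."""
```

The second framing tolerates exactly the failure mode described above: even a citation list that is 30 percent wrong can still be a useful set of search leads, so long as nothing is trusted unverified.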