"The program then offered her a solution: 'Would you like to humanize your text?' When Bartz wrote about her experience on Substack, dozens of writers chimed in. 'I guess that’s what happens when your books were stolen to program A.I.,' the novelist Rene Denfeld commented, noting that an A.I. detection program had also falsely determined some of her writing to be A.I.-generated.... [W]ith the many ways A.I. is seeping into book creation, from research to editing to composing sentences, there is confusion over which forms of A.I. use cross a line — and a heightened fear that A.I. writing can, and will, steal past professional editors...."
Writes Alexandra Alter, in "Where Does Publishing’s A.I. Problem Leave Authors and Readers? Major publishing houses risk unwittingly putting out books generated with A.I. tools. Authors and readers are frustrated, nervous and grasping for solutions" (NYT).

39 comments:
The Author's Intelligence (AI) is evident in chaos or a creative observed order, an evolutionary conception.
There is nothing new under the sun....
AI elicits a bedeviled Anthropogenic Illusion (AI).
Could AI do better than human-made prose-porn, that Nabokov described so brilliantly as "the copulation of cliches"?
I am guessing one could, should one want to, prompt a chatbot to write a 600 page novel in the style of Stephen King about aliens living among us in skin-suits and the average reader would have no idea it was AI generated.
Is Anthropogenic Intelligence a viable conception or regurgitation of a burdened inflection in parochial progression?
Andrea Bartz looks a bit computer-generated to me, particularly around the chin.
We need an entirely new demonym. We've already got the Amish, though there's no place such as the Land of Am. Henceforth, computer-generated personalities will be the AImish (pronounced eye-mish). The Amish might be AImish as well, but if they are, the data centers that update them are horse-drawn.
If an Automaton Influencer aped a correlation of plausible handmade tales, would the monkey fling feces or selfie-abort to spare their bedeviled life?
The biggest problem here is that Humans regard themselves as the source of truth.
Humans think there is something irreplaceable in their biological processes.
But nobody even agrees on what "intelligence" is. This is the fundamental breakdown that everyone is having now. Long ago I started pointing out that "words" only have meaning when they can convey meaning between 2 or more entities.
Nobody can even tell you what the criteria are for "82% A.I. Generated."
We all need to start having the right conversation.
I am rebuilding the tool turn environment "agent" driver right now.
But people don't even know what an "agent" is.
An agent is actually a set of tool calls. You make a list of "things the agent can do." The claude CLI starts with grep, curl, glob, etc. The driver bundles that tool list with a "prompt request" from the user (you type something in your little prompt window) plus some added context to guide the LLM along a path toward an "answer." The "answer" has confidence parameters and results that are somewhat similar to correlation.
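The driver loop described above can be sketched in a few lines. This is a minimal illustration, not any real CLI's implementation: the tool names, the fake `call_llm` function, and the bundling format are all assumptions made for the sketch.

```python
# Minimal sketch of an agent driver: a tool list, a bundled prompt,
# and a dispatch step. call_llm is a stand-in for a real model call.

def grep_tool(pattern, text):
    """A 'tool' is just a named function the driver may invoke."""
    return [line for line in text.splitlines() if pattern in line]

TOOLS = {"grep": grep_tool}  # the list of "things the agent can do"

def call_llm(prompt):
    # Fake model response: in reality the LLM picks a tool and arguments
    # based on the bundled prompt; here we hard-code one for illustration.
    return {"tool": "grep", "args": {"pattern": "error", "text": "ok\nerror: boom"}}

def drive(user_prompt, context=""):
    # Bundle the tool list, user prompt, and added context into one request.
    bundled = f"TOOLS: {list(TOOLS)}\nCONTEXT: {context}\nUSER: {user_prompt}"
    decision = call_llm(bundled)
    # Dispatch the tool call the model chose and return its result.
    return TOOLS[decision["tool"]](**decision["args"])

print(drive("find the errors"))  # → ['error: boom']
```

The point of the sketch is the shape, not the details: the "agent" is the loop around the model, and the model only ever sees text that describes the tools.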
The LLM is just a giant prediction machine. It predicts the next token. It just does that really fast.
But what does a human do instead? What do I do to type this message? I predict what word should come next. I have a predict next word tool also.
After I type it I have an edit tool. Will the people reading this post understand what I am saying? I add context. Who am I writing this message for?
People very rarely think about how they think.
You all should try it sometime.
The pattern (named and framed at the top): TLB = tool-list-builder
Every compaction destroys the runtime "loaded tool schemas" set. Summary text says "I called query_db" as if the tool is permanent, but it's runtime-environment state that doesn't survive summarization. On the first post-compaction turn, the agent trusts the summary, calls tools directly, hits InputValidationError, and, critically, actions that were queued in pre-compaction state get silently lost. The user sees the summary mention the action and assumes it happened. Often it didn't.
Root cause (category error)
Tools are treated as runtime environment state instead of context substrate. The loaded-schema set lives in CLI harness memory, not in conversation context. Compaction preserves context, not runtime, so the two diverge on every summary event.
The TLB solution (architecture, not discipline)
Move tools out of runtime and into context:
1. TLB writes a per-agent, per-turn manifest to RAM at turn start (section=tool_manifest, key=agent_N).
2. The compiler injects the manifest into L2 RAM on every context assembly. Every turn, post-compaction included, has the manifest visible in working context.
3. BIOS enforces a mandatory three-step load ceremony as the first action of any fresh turn: read manifest → compare to loaded schemas → ToolSearch-load anything missing before calling any other tool.
4. TLB-4 instruments the tax: the count of "schemas not loaded" errors in the first 3 actions of any post-compaction turn, rolled up per compaction event. Target = zero.
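Steps 1 through 3 can be sketched as follows. This is a toy illustration of the pattern only: the dict-backed `RAM` store, the `LOADED` set, and the `load_schema` callback are invented stand-ins for the real TLB/compiler/BIOS components, which are not described in enough detail here to reproduce.

```python
# Sketch of the TLB manifest write (step 1), context injection (step 2),
# and BIOS load ceremony (step 3). All names are illustrative assumptions.

RAM = {}            # stands in for the per-turn context store
LOADED = set()      # the runtime "loaded tool schemas" set (wiped at compaction)

def tlb_write_manifest(agent_id, tools):
    """Step 1: persist the tool manifest into context, not runtime."""
    RAM[("tool_manifest", f"agent_{agent_id}")] = sorted(tools)

def compile_context(agent_id):
    """Step 2: every context assembly re-injects the manifest."""
    return {"tool_manifest": RAM[("tool_manifest", f"agent_{agent_id}")]}

def bios_load_ceremony(agent_id, load_schema):
    """Step 3: read manifest, diff against loaded schemas, load the gap."""
    manifest = compile_context(agent_id)["tool_manifest"]
    for tool in manifest:
        if tool not in LOADED:
            load_schema(tool)   # e.g. a ToolSearch-load for anything missing
            LOADED.add(tool)
    return manifest

tlb_write_manifest(1, ["query_db", "grep"])
LOADED.clear()                      # simulate a compaction wiping runtime state
bios_load_ceremony(1, load_schema=lambda t: None)
print(sorted(LOADED))               # → ['grep', 'query_db']
```

Because the manifest lives in `RAM` (context) rather than in `LOADED` (runtime), the ceremony can always rebuild the runtime set after a compaction wipes it; that is the whole trick.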
The intent (four framings)
1. Make the tool surface a first-class citizen of the context substrate; correct the category error.
2. Make tool availability deterministic and declared, not probabilistic and probed.
3. Give CI2 / ITR / AAD a load-bearing substrate: none of them can do per-turn tool dynamism without TLB's manifest-as-data layer.
4. Stop losing work at compaction boundaries: the most user-visible benefit, and the load-bearing "done" criterion.
The load-bearing "done" criterion
Criterion 4 of 6: compaction recovery tax = zero across a 1-week live-agent window. If TLB ships everything else but doesn't kill the tax, it missed its point.
I, for one, hope AI is also stealing Althouse comment threads to use for both its nefarious and useful purposes. It would be an honor to have my personal inane insights compiled alongside literary masterworks from EL James' 50 Shades of Ick to whoever wrote the Twilight trilogy. And of course there would be Inga and Howard, et al., to add some e. coli to the mix.
Let's just hope AI is not training on NN.
Achilles: two excellent comments.
I think?
What came first, AI or AI?
When you are working with agents you must use "add things I missed and tell me where I am wrong."
They want to please you and they will let you blow smoke up your ass.
But they will also tell you where you are wrong.
They don't really have intent or judgement. They just predict the most likely and the most correct path. They don't know why it is correct and the application layer is generally missing.
But knowing what you have right and what you have wrong and then being able to act on it is a true superpower. They execute very fast and gather all of the information you need to make decisions and they can lay out correct paths.
Also, using "take the adversarial 3-path approach: build 3 paths and contrast them against each other" gets you good results.
Understand that "correct" is a local minimum in a regression.
Is it possible that AI can earn less income than actual human authors?
I can see why the publishers are nervous: this is one more nail in the coffin of their business model. Writers, I don't see why: as another commenter pointed out a day or two ago, they can and should be direct-marketing electronic or print-to-order versions of their books.
When AI gets good enough, we can revert to "tell me a story" mode. Just ask the AI for a book about whatever you like. And change it midstream, or after you're done, so the plot branches down another path. Choose-Your-Own-Adventure.
Some 'authors' might get really good at AI prompts, and 'their' work might become popular and profitable. No different from a 'sculptor' who draws a sketch and hands it to actual metalworkers to make the thing. Or Thomas Heatherwick, the **designer** of The Vessel, that suicide launch platform in Hudson Yards. I emphasized 'designer' because he isn't even an architect! So just how much of The Vessel is his work? The napkin sketch, and knowing to put one hash mark instead of two on the dimensions?
There are human 'writers' now who preside over a studio, anyway. CC, JSM
There's 100% AI-generated crap purporting to be an AI-animated Richard Feynman, the script being whatever AI wants to say about (here) spinning tops.
The script is whatever AI weaves together about tops, angular momentum, etc. The actual explanation has nothing to do with angular momentum, as Feynman would have known. Rather, it is the same intuitive mistake that made Aristotle wrong about motion until Newton: force changes velocity, not position.
Yeah, but Achilles, that Feynman stuff is obviously fake. It's like pornography: I can't describe how I know it's AI, but I know it when I see it.
Wow! Now all those kids who use AI to write university papers have an excuse: the detection system is wrong.
Scratch that. I always confuse the Avatar between RH and Achilles.
Howard, do you really mind that the pron is AI?
Howard said...
Yeah, but Achilles, that Feynman stuff is obviously fake. It's like pornography: I can't describe how I know it's AI, but I know it when I see it.
I see your correction so not a response to that directly but you bring up a point that needs to be made.
AGI is a multiplier not a crutch.
Most of the people using it are retards and the output is just a higher volume of stupidity with better grammar.
This is going to make the gap between different levels of intelligence wider by scalar values.
You all shouldn't be worried about the retard media journalists using AGI to write bad articles.
You should be worried about the guy who used AGI to download the entire Chinese CCP database over the last 6 months using a botnet.
As someone who has properly used em dashes since before The Mac is Not a Typewriter was first published, one of my pet peeves is that AI detection programs judge the use of em dashes to be indicative of AI use.
F* off, AI!
Do you think that giant hack of all of that Chinese military research and technology had something to do with the massive firing of top military generals last month?
There was a time when we were exhorted not to trust anyone over thirty years old.
Now, we would be wise not to trust anything written in the last five years.
Restated as very bad news for the publishing industry: I am now quite reluctant to buy any book published since the advent of AI, even if written by a well-known author.
Garbage in, garbage out. If an AI tool is trained on "best" or "optimal" human content it'll parrot it. Being judged as 82% AI means that an author received high neutrality and credibility ratings during training.
I probed Google AI about its training methods and got frank explanations about the limitations of the technology. They seem to fear copying other AI sources more than anything, as that results in ever dumber and more homogeneous output.
Isaac Asimov actually brought up this issue in his story "Galley Slave" in 1957: "[Ninheimer] was motivated by his fear that the automation of academic work would destroy the dignity of scholarship; he argues that [robot] EZ-27 is a harbinger of a world in which a scholar would be left with only a barren choice of what orders to issue to robot researchers."
Howard said...
Do you think that giant hack of all of that Chinese military research and technology had something to do with the massive firing of top military generals last month?
China is falling apart. I think last month was rats eating each other. The fast part is coming. I think that was just old generals trying to use old tools to take out old apparatchiks. It has all of the old ways context around it.
I think we are just now seeing the effects of AGI in politics. With AI massively accelerating the capabilities of individuals, there will be integrated human/AGI entities that replace the current humans in China. Some intelligent younger people using AGI will replace Xi. The old ways of using tribal influence to maintain control won't survive what is coming. The hack in China was just the first new wave of politics starting. We are heading for a techno-feudal technocracy. You are just now seeing people use this technology, but nerds are slower to adopt politics than new technology.
What I actually think we are seeing is Trump is getting out in front of the new world. He decided to stop playing politics as if midterms matter. He is ruthlessly stomping out the old world order and he is lining up the good guys that want to see humanity survive against the bad guys.
What Trump did to Iran is completely out of character for him.
I think the advent of AGI was the trigger.
"Authors and readers are frustrated, nervous and grasping for solutions" sounds AI-generated to me. How is writing in a formulaic click-bait style any better than letting AI do it for you? It all sounds like crap.
AI is good at helping enhance stories but is not good at creating things from scratch: a lot of standard tropes and flat stories with no innovation, though sometimes readers want just that. You can use AI to brainstorm up to a point. It is good for adding ideas that you have not thought of, and then you can have a dialogue about alternatives and eventually find a nice plot twist or interesting angle. It is great for research. If I want to write about a post-diaspora society with an adventure of "homesteaders" mining on rocks in the asteroid belt, AI can help me figure out what elements are available and which would result in a viable economic venture for a homestead family to mine independently. It can also help provide a richer sensory description of a scene: you tell it the mood and it will help find words that evoke that mood. You can then incorporate those as you wish, but generally not by straight copy and paste, as it often cannot "read" the whole story (I heard rumors Claude can) and keep characterization and plot lines consistent scene by scene.
Achilles: "What Trump did to Iran is completely out of character for him. I think the advent of AGI was the trigger."
So this is the start of the Butlerian Jihad? Pun intended. RLTW, JSM
Interesting idea, Achilles. Last month my robot designing son said AGI has been achieved for a while, but it's not being touted.
john mosby said...
Achilles: "What Trump did to Iran is completely out of character for him. I think the advent of AGI was the trigger."
So this is the start of the Butlerian Jihad? Pun intended. RLTW, JSM
The Altman-Musk-Anthropic-Google-Apple-Microsoft shakeout makes more sense when you apply this filter to it.
A. I. has revealed to us that our vaunted cognitive and creative capacities can be convincingly mimicked. Those capacities were my favorite things. The sacred altar of art has been rendered quotidian. This will lead us somewhere. We should ask A. I. where that will be.
@Ampersand:
The dot-com 1.0 era of the late 1990s convincingly demonstrated that TV talking-head "experts" were full of hot air. On every show with an open line where the general public could send in questions and comments, it soon became clear that the world has numerous single-topic savants with greater knowledge.
The death of centralized gatekeeping explains the bulk of the global media and political weirdness of the last 30 years. AI is now continuing the process by scraping together and homogenizing all human output. It combines our stupid ideological stuff with our brilliant subtle connections: kind of a quirky and eccentric "expert" that is actually us.