Imagine an airline pilot who sticks 92% of landings.
Or let's lower the stakes. A journalist who is accurate 92% of the time.
A baseball pitcher who throws the ball towards home plate 92% of the time.
Near-perfection is the goal in a lot of industries. Even creative ones. That last 8% might be really, really difficult.
I see your point, but I will say it doesn’t require any imagination at all to picture a journalist who is right 92% of the time.
I think what you're saying is that journalists mess up about 8% of the time (or more). That's the conventional wisdom, because mistakes can be high-profile and produce tons of commentary, but it's not really true of the industry as a whole, and I'm sure the same goes for other high-profile industries I'm just not as familiar with. We're not talking about bad opinions, or things people disagree with, or predictions that turn out to be wrong. A starting reporter at, say, a local daily newspaper would be expected to get all the facts in their reports right - dates, times, names - nearly all the time. More than one or two corrections a month would raise huge red flags, and this person is producing dozens of stories. Again, that's just for the most junior entry-level person.
This is one of the most useful things I've read about AI. As a scientist and programmer, I work with AI a lot, and one thing I notice all the time is that despite being incredibly powerful in some domains, it just doesn't have any "taste" at all. I've struggled to explain what I mean by this to people, and now I can just send them your post, which perfectly encapsulates the issue.
"We took a survey of 4 people whose net worth and self-worth depend on them being uniquely good at writing. We asked them if they had been superseded by machines. You'll be shocked by what we found."
Meanwhile when the NYT surveyed their readers (as opposed to their writers), guess what they found: https://www.nytimes.com/interactive/2026/03/09/business/ai-writing-quiz.html
Also when people talk about investing billions in AI as a means to radically advance science, they're not talking about psychology, so I don't think your experience is relevant here...
Sorry if this sounds harsh. I just think we need to face AI honestly. Burying our heads in the sand and pretending that, actually, it sucks and always will, is not helpful.
I strongly agree, Trevor. And part of facing AI honestly will mean putting effort into:
1. Making sure the wealth generated by AI allows everyone to enjoy a materially abundant life.
2. Building a culture where human beings are valued even after our labor is kinda worthless. Not loading up our pockets with stones and jumping into a lake!
The longer we bury our heads in the sand, the less time we'll have to tackle these problems.
Psychology is not yet a science.
Excellent, thank you - I don't care if it's right or wrong, that was a good read.
Henceforth, when asked about “my views on LLMs and AI as a writer,” I will just forward people this article instead of filling my pockets with rocks and walking into the sea, as they seem to be suggesting I consider when they ask.
I had some notes for a piece about the current zeitgeist around the failure of LLMs to improve their ability to write (and the small resurgence of “I-told-you-I-still-matter” talk from writers like me). Thankfully you already wrote the piece much better than I could, and now I don’t have to try. <insert applause sounds here>
Perhaps I missed it but my concern is that society will discover that the majority of activities (jobs, research, what-have-you) actually don't require much subjective intelligence. Wholly agree that faster problem solving just hastens the next roadblock.
You might be interested in the Un-slop Fiction Prize (https://www.hyperstitionai.com/unslop): $10k for the best purely AI-generated short story (500-10k words), with >$100 in token usage strongly recommended ("It's up to you how you do so; hundreds of generations, elaborate multi-pass pipelines, whatever"), judged by gwern, roon, Alexander Wales, and Jamie Wahls. The application deadline just passed today, so the winners should be out soon.
This was a fun read! I think your characterization of intelligence is too narrow, but then again, giving it its full treatment would make this essay longer than the Bible. I think that kind of hints at the distinction you are trying to illustrate with your objective/subjective intelligence. Accuracy is not the most important part of rhetoric - what matters is whether the reader, when they follow the recipe you give them, cooks up more or less the same dish.
What also hints at that distinction is your implicit conflation of "intelligence" with "success." The idea that someone with a lot of objective intelligence is successful at solving a lot of kinds of problems seems totally reasonable on its face (and for the purposes of the point you are making, it is sufficient), but the true nature of even "objective" intelligence is far more nebulous. If a person can only solve math problems because they have a Wolfram Alpha subscription, that's very different from someone who can calculate square roots of large numbers with a pencil and paper. At the end of the day, even these "objective" domains are far more subjective than they initially seem, and that is exactly what you are getting at with your points about being bored by the right things.
Ultimately, I decided against going to graduate school because I didn't want to spend my time locked in a room with knowledge that I wouldn't be able to talk to anyone else about. I wanted to be able to share my knowledge with the world and ordinary people. Communicating ideas to people, i.e., actual education, is a far different skill from "objective intelligence." Your point about data accumulation (that it should be "no contest" between you and the machines) is also fascinating, because again, it shows that objective intelligence isn't even necessarily correlated with lots of data points. In fact, I often find it is subjective knowledge that requires the accumulation of data. But you can't just look at "every single thing" and magic up a through line. You have to gain experience, build a mental model at each stage, and slowly tweak it over time. In this sense, I think people forget that the scientific method is a form of humanistic inquiry. So is mathematics! It's just that mathematicians tend to agree on things.
Anyway, I am just rambling at this point. Thank you for the read!
And even subjective domains are far more objective than they might seem.
Yes, absolutely!
I'm not sure this is perfectly relevant. A lot of people never wanted to pay a journalist, they wanted answers, and journalists gave them answers mixed with other things. Those people can now pay for a robot to generate just the answers they want, which seems like it still threatens journalists, even those of extremely high skill.
The same holds for many other professions.
Great piece, bravo! There is one bit of subjective data I REALLY wish could be deconstructed, analyzed, and accurately predicted by AI: the inherent quality of a work of music (whether a pop song, a string quartet, or a banjo rag). I'm a Juilliard School graduate...and yet, I feel wholly incapable of really explaining (and justifying) why The Beatles are the greatest band in history, or why a Mahler symphony towers over a John Williams movie soundtrack. But the vast majority of quasi-intelligent people know both of the above to be true.
Sure, some might argue the points, but you get my drift. In any field, there are universal standards—the "greats" that we all look up to, that are studied in school, and against which everything else is compared. But every time I start trying to rationalize (using objective intelligence) WHY these things are great, I can't. I hit a wall. But I KNOW Joni Mitchell is the greatest female singer-songwriter in human history. Even most other famous female singer-songwriters will readily agree. But exactly how is she the greatest?
And incidentally, AI is pathetically bad at music. Like, REALLY bad. You think it can't write? Try listening to the lowest-common-denominator elevator music created by Suno, which is supposedly state-of-the-art for AI music making. It's awful. Good musicians (like good writers) are 100% safe from this idiotic "revolution" happening around us...
I disagree just a tiny bit, but in a way that seems significant. I don’t really know much, though.
I certainly agree that subjective intelligence and objective intelligence are different things, but I think there is just a tiny sliver of subjective intelligence that helps you decide whether to become an artist or go on holiday to Tuscany or have bananas and custard for dessert — but after that, it’s objective intelligence all the way down. There’s not much call for subjective intelligence after that initial choice.
With writing, we need that tiny bit of subjective intelligence to write the very best books and poems; but only a tiny proportion of readers have enough subjective intelligence to tell the difference. AI poems will be good enough for most people, and those great writers will have an ever-shrinking audience. And perhaps the computer can’t taste the meal, but I’m just waiting for the meal-tasting robots to come along.
A question for someone who knows a lot more than I do: I’m thinking of Kahneman’s Thinking Fast and Slow. I wonder if there is any correlation between objective/subjective thinking and System 1/System 2 thinking. Perhaps AI is stuck on System 1 thinking?
This raises the questions ... What is a new thought? Where do new thoughts come from? LLMs certainly can *offer* an answer to the "What is a good life?" question, but by technical definition, that answer will be a variation on answers that already exist on the internet. That variation will be vapid and shallow, but it will likely reflect at least some of the answers real people offer for this question.
Anyway, that's the question I'd like an answer to: Where do truly new thoughts come from? I'm happy for machines to optimize the electrical grid, control production of Snickers vs. Cheerios, and so on. But we humans need to up our collective game, thinking-wise, so this debate can go away.
A central claim here is that AI-generated blog posts & fiction are soulless. It's true that the default outputs of LLMs are bland. But can the LLMs be prompted in ways that make their outputs less bland and more soulful? This is an empirical question that deserves more inquiry than "LLMs can't write. Trust me bro."
“‘Well, let’s just get some data!’ and then we waste a few months being like ‘hmm, what does this data mean, so many numbers, so mysterious’”
This sounds like much of biology these days. And then they hope that AI will help make sense of all the data…
Confession time! Yes, I have used AI to help me manage my science fiction universe WIP. It’s really good (embarrassingly good, if we’re being honest) at assembling plot lines, helping me figure out essential beats, and keeping track of all the bells and whistles that go into worldbuilding a comprehensive setting.
And then it asked me if I wanted it to write Chapter 1 of the first book in the series. I said yes, partly out of politeness but mostly because I wanted to see what the robot would write.
Calling the resulting scene a disappointment would be an insult to disappointment itself. It wrote a properly sequenced, grammatically exact, syntactically perfect scene of robots reading robotic lines to other robots. They went about their robot mission. They did robot things. The scene included a bunch of plot points that the AI had discussed during the worldbuilding exercises I ran it through. But I doubt any human would ever confuse the result with a real chapter I would write. (At least I hope not.)
I couldn’t find anything wrong with the scene. But it sucked. I realized no matter how I tweaked it, it would always suck, because it would never be anything more than a robot’s dream of other robots.
What really hurt was the fact that I really and truly like robots. But real robots in the real world would have more interesting things to say than this… or they would if they could talk, which some of them can, at least in certain situations. If any of them ever start writing on their own, I hope they’ll do better than my AI.
“Writing is a task that takes both objective and subjective intelligence… good writing requires an additional bit of juju that makes the prose live and breathe..” Yes! The mojo is in the juju!