48 Comments
Gillian Hill

I want to quote all of this. Such a good way of thinking about AI. I will now picture an old Crown Royale bag full of slightly dented Scrabble tiles whenever I think about AI.

Alexander Simonelis

Since you're an editor, it's Crown Royal. Ahem.

:-)

Gillian Hill

Ah crud! Do I get a pass because I don’t drink it?!

Alexander Simonelis

Sure!

:-D

Andrew

Some of the best writing on AI I've ever read. It should be a mandatory primer before anyone uses it, talks about it, or even thinks about it.

Douglas Engelbart (pioneer of the mouse, the modern filesystem, networking, multi-user collaboration, and just about everything else in computers) talked about machines as human enhancers, and it seems to me that AI should be thought of that way. It can do the things that we are bad at so we can be better at doing the things we're really good at, like being creative.

Nate

I love this! I also think it's important to note that the companies have a tremendous interest in making us anthropomorphize the AI and perceive it to have a mind. They didn't have to make the interaction a chat interface (like the way we talk to a human), or make the output sound so much like someone we're talking to, or give them names, or voices. But all those things serve to make us anthropomorphize them more.

And some of the things that lead to the anthropomorphism interact with the AI in various weird and unintuitive ways. For instance, I've worked on some research about how making an AI behave less repetitively makes people trust it more, because they perceive it to have a more humanlike mind -- but certain ways of making the AI's outputs less repetitive actually make it less accurate in some cases. Basically, people's intuitions as to what indicates a humanlike mind work against them, because AIs don't have minds in the sense humans do.
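
(A concrete example of one such knob: sampling temperature, a common way to make outputs less repetitive. Below is a minimal Python sketch of temperature-scaled sampling; the function and numbers are illustrative, not from any particular model or from our research.)

```python
import numpy as np

def sample_token(logits, temperature=1.0):
    """Sample a token id from raw model scores (logits).

    Low temperature sharpens the distribution: output is more repetitive,
    but the model's top guess (usually its best-supported one) wins more
    often. High temperature flattens it: output feels more varied and
    humanlike, but lower-probability (riskier) tokens get picked more.
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return np.random.choice(len(probs), p=probs)

# Same scores, different temperatures:
logits = [3.0, 1.0, 0.2]
print(sample_token(logits, temperature=0.3))  # almost always token 0
print(sample_token(logits, temperature=2.0))  # tokens 1 and 2 show up often
```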

Amica Terra

Something I bring up a lot when I talk to people about AI is something I wrote about previously (https://asmallkernel.substack.com/i/139181752/who-even-plays-piano-anymore), but it can pretty easily be summed up as: do you think Kurt Vonnegut knew about hobbies when he wrote Player Piano?

I like to ask three questions:

1. Since the advent of high-quality recorded music, radio, and music streaming, do you think more or fewer people know how to play instruments?

Answer: more people

2. Since the advent of computers that can absolutely obliterate chess masters, has chess become more or less popular?

Answer: more popular

3. Since John Henry died after losing to that steam engine, what are bodybuilders and strongmen doing?

I think that people who write feel very threatened by AI because they have invested a lot of their identity into their ability to write, and they want to feel like it is special. So I think it's good to think about historical analogies. From my post: "it seems a bit provincial to proclaim that what makes the human life worth something is thinking well, or that we won’t be able to adapt to life where machines can think as good or better than us. Does it really seem fair to the median schoolkid to say that thinking is what gives human life dignity or worth? Does that even seem right? One mustn’t be fooled by the fact that humans are the most intelligent animal into thinking that being the most human means being the most intelligent."

David Howard

The relevant category in the case of John Henry isn't bodybuilders, it's steel-driving men. There are absolutely fewer steel-driving men than there were before the steam engine. You're right that AI can't take away our will to live, but man does not live by will alone.

Amica Terra

My point is precisely that the relevant category for John Henry is indeed bodybuilders--that what previously was the province of work required for the necessities of life has become a playful passion project. There are fewer steel-driving men and more bodybuilders, fewer troubadours and more music-players, no more humans at the forefront of chess but more chess players.

I agree that man does not live by will alone, but if AI can't take away our will to live, what will it take away? If we believe AI will bring material abundance, what are we concerned about other than our will to live? My point is that we will be able to make meaning and have passions and challenge ourselves in a world of AI abundance, as we have every time some piece of technology has rendered a practice unnecessary.

David Howard

I don't agree with your supposition that AI will lead to widespread material abundance, but I grant that if we assume it does, your point holds.

King Cnut

I think this is an interesting point. Just to offer a countervailing note, there's a section in Benjamin Labatut's book The Maniac where he talks about AlphaGo and Lee Sedol, and mentions that because of the new technology Lee Sedol ended up quitting the game of Go. This quote is from the top of Lee Sedol's Wikipedia page: "losing to AI, in a sense, meant my entire world was collapsing. ... I could no longer enjoy the game. So I retired."

The game ended up losing one of its best players.

After the machine plays move 37, the AlphaGo documentary discusses the new technology's potential to find new creative patterns, but the deep impact on the psyche of those who are invested in the sport is worth noting.

I think young people will now grow up with Go and computers together and think little of the fact that they could never beat the best computers. But these sorts of painful transitions are likely to occur for lots of people in lots of fields.

You're right that it's provincial to proclaim that what makes human life worth living is thinking well, but many people believe this. Many people have their identities bound up in their skills in thinking or animating or creating art. If machines can do it in such a way that renders those skills historical artifacts, then this will inflict pain, and we should be aware of that.

I think the great concern is that initially many AI companies were founded on a similar idea to you - we should automate everything, so that people can (to borrow from Marx): "hunt in the morning, fish in the afternoon, rear cattle in the evening, and criticise after dinner, just as I have in mind, without ever becoming hunter, fisherman, shepherd or critic".

And yet, some of these companies are now moving to privatise the gains of this automation, which was built off the corpus of collective human knowledge. And in a society structured around work, my concern is that removing people's ability to work without changing the structure of the surrounding systems of labour could create extremes of inequality and deprivation.

Amica Terra

I don't deny that changes in the locus of meaning are usually difficult or even wrenching for people. In fact, I would love to see a fictional story about what it would look like to have an older generation used to our contemporary ways and values rearing a younger one absolutely native to the new ways. I think there could be much tragedy and interest in such a story. At this point, we might be working with what Nietzsche would call a "transvaluation of values," which by no means happens without a great deal of sorrow.

What I mean to rebut is the sense that such aimlessness is perpetual. Dystopias generally deal with societies that are stable and bad. My point is that there is no reason to think that stasis would prevail. There are few things harder to pull off than a change in the meaning of your life--existential crisis, identity crisis, spiritual crisis, whatever you want to call it. In some sense, the person you were has to die, whether biologically or psychologically. I just think that, eventually, we will figure something out. We've done it before.

As for the privatization of gains, I just don't really buy this. For there to be "gains," the companies have to be selling things to people. If everyone is miserable, who are they selling to?

Or will the companies just produce products and say "actually only our shareholders/employees/owner/C-Suite can use these products"? Why? Even if you think all these guys are egomaniacs, what is better for an egomaniac than to solve the world and say "my works that I have given to you today shall be on your heart. You shall teach them diligently to your children, and shall talk of them when you sit in your house, and when you walk by the way, and when you lie down, and when you rise. You shall bind my name as a sign on your hand, and it shall be as frontlets between your eyes. You shall write my name on the doorposts of your house and on your gates"? Maybe it will be an eyesore having statues of Sam Altman everywhere, but I think we can figure out a way to live with it if it means no more material plight.

Furthermore, unless we're getting into misalignment territory, why would only one company ever figure out the AI sauce? And if we're in super-AI territory, it only needs to be one company that decides on general abundance. And if we have figured out THE technology and there's no more need for the general structure of IP law or contracts or whatever to ensure economic organization and growth, then why would the government let everyone be immiserated? It might take 2 or 3 election cycles, but eventually you're going to get a majority in the House and Senate that will say "yeah, actually that's ours now" or "yeah, so you have to publish all your data/processes. Thanks, though!" These might create different problems, but not privatization.

Sorry for the long reply. These are questions that I've been thinking about a lot and I haven't really been able to figure out, mechanically, on a step-by-step basis, how AI could, in the long run, immiserate us (putting misalignment to the side).

SkinShallow

Totally. Replace "write" with "write code" and you have exactly the same effect, but an order of magnitude (or two) larger.

victor

I can't see the reference to piano playing in the link you provide. Can you point to it? I was wondering about that. Also, is that just for piano? How about other musical instruments, and singing? Piano is an expensive, expansive, and modern instrument. Thanks

Amica Terra

There were two links, but only this one still works: https://today.yougov.com/society/articles/43512-young-americans-increasingly-exposed-music?redirect_from=%2Ftopics%2Fsociety%2Farticles-reports%2F2022%2F08%2F23%2Fyoung-americans-increasingly-exposed-music

The other link extended the effect to the UK.

It's not piano playing but musical-instrument playing in general. "Player Piano" is a book by Kurt Vonnegut about a society where automation has created a class of people with nothing to do; the title is meant to evoke the automated "player pianos" of the early 20th century. Piano is actually the only instrument the YouGov survey asked about that fewer young people than older people are learning to play.

victor

Thanks so much!

David Howard

Your last sentence illustrates a big problem with this line of thinking: "We don't understand humans, but we do understand the machine, so the machine is unlike a human." It would be great for all of us if the current line of "AI" research never produces something with greater agency than the backhoe, but unless and until someone can show me where agency/intelligence/interiority/whatever comes from in the things we all agree already possess those qualities, I will have to entertain the idea that the thing which walks like a duck and talks like a duck may in fact be a duck.

Marek Veneny

Thanks for the post, Adam! What resonated with me the most, funnily enough, was the bracketed paragraph with the lifting metaphor: you don't want to outsource your thinking to AI, just like you don't want to outsource weight-lifting to forklifts.

As for the argument, I believe the primary issue with AI is that it's overhyped and misused. If, as an individual user, you set aside the hype, use it in situations it's good at, and learn some basic prompting techniques, you're doing okay. If you follow every headline of "look what AI can do now, say goodbye to xyz job", don't prompt it well, and use it in situations it's bad at, you'll be very quickly gathering all the negative instances to feed your confirmation bias.

Case in point, the source you provided (https://svpow.com/2025/02/14/if-you-believe-in-artificial-intelligence-take-five-minutes-to-ask-it-about-stuff-you-know-well/). The author, Michael, asks AI something he knows well: "Who reassigned the species Brachiosaurus brancai to its own genus, and when?". The AI fails spectacularly: it names people who didn't contribute to the naming at all, invents new people when called out on this, and, most problematically, communicates these "facts" with plausible sincerity the whole time. From this interaction, Michael calls Frankfurtian bullshit on AI output in general.

I think he's right and wrong. Right in the sense that yes, that's how current AI works: it tries to predict the most likely sequence of tokens from the tokens you provided it with. There is no concept of "truth" or "meaning" in this. It's a bag of words (look at me, using the metaphor) spilling out some letters. And yet he's wrong, because he didn't prompt the AI all that well: he gave it no context, nor what output he expected.

And so, from the depths of AI irredeemability, where I was already reading up on how exactly Ned Ludd sabotaged the weaving looms to see if some of it would apply to AI, out comes a commenter who a) writes a decent prompt and b) uses one of the thinking models. Voila, out spills a correct sequence of letters, arrayed just so. Funny, that, isn't it? The same machinery produces either utter garbage or a useful summary, depending on how you use it. So, maybe instead of a new metaphor, it's more important to apply the correct mental model to AI. Here's one, encapsulated as a proverb: trust, but verify.
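
(To make "predict the most likely sequence of tokens" concrete, here is a toy sketch of the autoregressive loop. The tiny_lm lookup table is a made-up stand-in for a real network, not any actual model or API; note that "truth" never enters the loop, only likelihood.)

```python
# Toy sketch of autoregressive generation: the model only ever answers
# "which token is likely to come next?", conditioned on recent context.

def tiny_lm(context: tuple) -> dict:
    # A real model scores every token in its vocabulary; here we
    # hard-code a couple of patterns keyed on the last two tokens.
    table = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "quantum": 0.1},
        ("cat", "sat"): {"on": 0.9, "beneath": 0.1},
    }
    return table.get(context, {"<eos>": 1.0})

def generate(prompt: str, max_tokens: int = 10) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        probs = tiny_lm(tuple(tokens[-2:]))     # condition on recent context
        next_token = max(probs, key=probs.get)  # greedy: most likely token
        if next_token == "<eos>":
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate("the cat"))  # -> "the cat sat on"
```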

Steve Byrnes

I wish you wouldn’t use the word “AI” when you mean “specifically LLMs”.

The human brain does not work via magic. An entire civilization of collaborating human brains and bodies does not work via magic either. Whatever those things do, future AI algorithms running on chips could do that too, and someday will.

That day is not today, and I feel pretty much the same way as you do about LLMs. But someday! When? I don't know, and nobody else does either. Maybe quite soon! Maybe not for decades! But when it comes, this OP will be an extremely, dangerously mistaken way to think about it.

You can (and should) push back against people over-hyping LLMs. But just use the word “LLMs”, not “AI”. “AI” is a broader term that also includes yet-to-be-invented algorithms that would be less “bag of words” and more “a new intelligent species on our planet”.

ToSummarise

I was about to post almost exactly this and you saved me the time - thanks! It is incredibly dangerous to think that "AI" is limited to chatbots that only generate text and images.

Dave Palmer

This is a superb essay. Insightful, helpful, and calming.

SkinShallow

Excellent metaphor. I think it also explains very well why I get the best results when I use rambling voice to text prompts, instead of concise, clean, well thought out ones.

But for funsies, I like throwing in subjects that are not REALLY obscure or local but just happen not to be in the current bag of words, and watching it go off into the realm of the completely, obviously, staggeringly factually wrong. When you see the kinds of errors it generates for limited-data topics, or for clearly stated yet unusual starting assumptions -- probably a relatively rare experience for most English speakers living in the cultural anglosphere -- the "autocomplete on steroids with turbo boosters" metaphor becomes obvious.

Rob Nelson

Love the bag of words metaphor. When I steal it, I plan to say "bag of words and math," because it does seem to me that what comes out of these things surprised everyone who kinda understood the way deep neural networks work. There was no reason to suspect that bagging up an internet's worth of words, throwing in some tricky probabilistic math, and having low-paid workers in Kenya check to make sure nothing too gross ends up in the outputs would be as delightfully weird and confusing as ChatGPT's outputs. Yet, here we are.
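
(Side note: "bag of words" is also a literal term of art from older NLP, meaning a representation that keeps word counts and throws away order. A minimal sketch of that classic sense, purely for flavor; the modern models obviously do far more than this.)

```python
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """The classic NLP 'bag of words': keep word counts, discard order."""
    return Counter(text.lower().split())

print(bag_of_words("the words in the bag"))
# Counter({'the': 2, 'words': 1, 'in': 1, 'bag': 1})
```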

Charles Justice

This is the best, most thoughtful critique of AI I've seen. And I believe you are right on the nose about our propensity for anthropomorphizing computers and worrying about our competitive status as if they were superior persons.

Tony Martyr

Once again, looking at things from a different perspective yields some insights, and some laughs. Win/win. Your penchant for the killer final paragraph gets there again:

"It’s unfortunate that the computer scientists figured out how to make something that kinda looks like intelligence before the psychologists could actually figure out what intelligence is, but here we are."

Well done, that man!

Eitan Rude

Really like this. I feel like there's an underrated amount of value in describing this as a "metaphor" versus simply saying "this is how it is." I've seen a lot of pushback from the ultimate source of all truthiness and wisdomitude (social media) whenever folks suggest that AI is just "fancy autocomplete," but I think there's a lot of "heuristic value" (a term I'm surely abusing) in constructing simple mental models like this one for the purpose of playing with an idea and communicating with the masses.

Ekene Moses

As always, a very brilliant perspective!

Jack Hooper

I wish I could like this post a thousand times.

Michael Sylvester

Excellent and useful metaphor, as usual.

But you’ll need to find another example for the appreciation of live human vocal performance. Swifties are indeed willing to pay for lip-synced songs at an otherwise ‘live’ performance.

(See Eric from Wings of Pegasus for a robust proof if you care.)
