Nothing ever dies. It merely becomes embarrassing.
OR: the Halo theory of science
Here’s a reasonable thought: as the replication crisis has unfolded over the past 10-15 years, a bunch of psychological phenomena have been debunked and discarded forever. Power posing, ego depletion, growth mindset, stereotype threat, walking slower after reading the word “Florida”—all gone for good. Surely, nobody studies or publishes on these topics anymore, except maybe to debunk them a little further, like infantrymen wandering around a battlefield after the fighting is done and issuing the coup de grâce to those poor wounded soldiers who are dying, but not yet dead.
This isn’t true. All of these ideas live on, mostly undaunted by news of their deaths. Nobody calls it “power posing” anymore, but you can still find plenty of new studies on “embodiment” and “expansive posture”, like this one, this one, and this one. Ego depletion studies keep coming out. I count over a thousand papers published on growth mindset just in the first three months of 2026. People are even doing variations on the slow-walking study, but now in virtual reality.
This leads to some absurd situations. One psychologist who used to work on stereotype threat now disavows the theory entirely: “I no longer believe it is real, but you can make up your own mind.” But another psychologist claims that “stereotype threat is real and virtually universal [...] there is a lot of evidence supporting [its] existence (and impact)”.
We’re not arguing about whether stereotype threat is powerful or weak, or whether it is pervasive or rare, but whether it is obviously alive or obviously dead. That’s literally the premise of a Monty Python sketch.

It might seem like the way to resolve these disputes is to weigh up all the evidence, meta-analyze all the data, deploy your p-curves, your moderator analyses, and your tests for heterogeneity, maybe even run a big, multi-lab, preregistered replication. Do all that and then we’ll finally know whether these effects are real or not!
This is a trap. We have spent the past decade doing exactly those things, and yet here we are. Clearly, no amount of data-collecting, number-crunching, or bias-correcting is going to lay these theories to rest, nor will it return them to the land of the living.
We need a different approach. And so we must turn, as we so often do, to the source of all truly important ideas in the philosophy of science: the sci-fi universe of Halo.
SPARTANS NEVER DIE
In Halo, Spartan super soldiers never officially die; they are only ever listed as “missing in action”. (This is meant to keep morale high among a hyper-militarized human culture that is on the verge of being exterminated by evil aliens.) I think we should adopt a similar scheme for scientific phenomena: they never die. They merely become embarrassing.
This isn’t how science is supposed to work, of course. The secret sauce of science is supposed to be falsifiability: it ain’t science unless you can kill it. If I claim that all swans are white, and you show up with a black swan, then I’m supposed to bid a tearful goodbye to my theory and send it to that big farm upstate where it can frolic and play with all the other failed hypotheses.
Falsification sounds straightforward until you actually try it. You show up with your black swan, and instead of admitting defeat, I go, “Hmm, well is it really black? Is it actually a swan? Seems more like a dusky-looking duck to me!” And we publish dueling papers until the end of our days.

Falsifiability depends not only on the qualities of the theory itself, but also on the whims and biases of the people who engage with it. And because there are so many people with so many different whims and biases, few theories are ever going to be left with zero adherents. For instance, there are still physics PhDs trying to prove that the sun orbits the Earth. That might be disturbing, but it’s also necessary—if no one was ever willing to entertain crazy ideas, we wouldn’t have any scientific progress at all. We have to keep some kooks around because occasionally, as the economic historian Joel Mokyr puts it, “a crackpot hits the jackpot”.1
The persistence and necessity of kookiness means we’ll never be able to say that a theory is well and truly dead. We can, however, say when a theory is embarrassing. If I deny the possibility of a black swan, and you produce something that looks awfully like a black swan, it’s still possible that I will prevail—maybe we’ll discover your black swan is actually a white swan covered in soot, or DNA analysis will vindicate my “dusky duck” theory. But if I didn’t expect a black swan-looking thing to exist at all, my hypothesis is a lot less plausible than it was before, and it’s much more embarrassing to believe in it.
REAL TALK
This is the situation we appear to be in with many theories in psychology. We can’t say whether they’re “real” or not. Somewhere out there, the Spartans may live on. But if we’ve been studying something for decades and people look at all the evidence and they still doubt whether it exists at all, we have to admit: that’s cringe.
Cringe doesn’t mean wrong! Continental drift was cringe.2 Germ theory was cringe.3 Smallpox vaccination was cringe.4 All of them went from mortifying to undeniable. Maybe truly revolutionary theories must follow that trajectory. If a scientific idea is young and it’s not cringe, it probably has no promise. But if it’s old and it’s still cringe, it probably has no merit. That’s why I am not optimistic about any big-name theory in psychology that has gone the wrong direction on the Cool-Cringe Continuum over the past ten years—it’s not impossible for them to make a comeback, but it’s not the way things usually go.
Still, no matter how ropey things get for these theories, it makes no sense to write them off as “not real”. If stereotype threat truly doesn’t exist, that means you could never, under any circumstances, run a study that produces results in line with the theory. That’s a crazy claim to make! We don’t have nearly enough evidence to support such a conclusion, and we never will.5
In fact, insisting that certain effects are “not real” merely provides an incentive for people to keep studying them, because it makes their results newsworthy: “Look, we’re proving the existence of a supposedly nonexistent effect!” But of course it’s rare for anyone to actually prove such a thing. Instead, they are almost always “proving” that, given infinitely flexible theories and infinite ways to test them, you can produce some small effect that kind-of sort-of accords with some version of the hypothesis, broadly construed. No one should claim that this is impossible, and no one should get credit for showing that it is possible.
If we appreciated how hard it is to kill a theory for good, maybe we’d stop wasting our time trying to do exactly that. For instance, ego depletion—the idea that willpower is a “muscle” that can get “fatigued” by overuse—has been the subject of at least three big replication attempts. This preregistered multi-lab replication from 2016 found no effect (N = 2,141). This preregistered multi-lab replication from 2022 also found no effect (N = 3,531). But oops, this other preregistered multi-lab replication from 2022 did find an effect (N = 1,775). At this point, maybe we should cut it out with all the preregistered multi-lab replications and just admit this theory is never going to die, and to spend any more effort investigating it would be embarrassing for all involved.

SEEING THE LIGHT AND LYING ABOUT IT
Science snobs love to claim that this problem is unique to the social sciences, as if falsification were a breeze everywhere else. But it isn’t.
For example, when Arthur Eddington went out to test the theory of relativity by photographing an eclipse in 1919, he ended up throwing out several pictures that “didn’t work”. He reasoned that the sun had heated the glass of his telescope unevenly, throwing off the results. Was that fair? Was it right? Were those discarded pictures legitimate tests that the theory failed, or were they tainted by faulty equipment?
Now that other results have independently supported Einstein’s theory, Eddington’s choice to ditch the disconfirming data seems appropriate and wise. But in the moment, it sure looked like p-hacking (where the p in this case stands for “photograph”).6

When you omit the inconvenient details, Eddington’s adventure seems like a classic case of falsification. That’s probably why it partly inspired the philosopher of science Karl Popper to come up with the idea of falsification in the first place.7
The Eddington affair isn’t unique. Knock-down, drag-out disconfirmations are rarer than we would like to admit. Francesco Redi performed the first experiments “disproving” spontaneous generation in the 1660s, but Louis Pasteur was still “disproving” it in the 1860s! Remember during the pandemic, when people were arguing about whether respiratory viruses can spread via aerosols, whether masks work, and whether UV light can protect us from infection? It seemed like those arguments arose around the same time that a novel coronavirus jumped down someone’s windpipe, but in fact they had been going on for a hundred years.8
Here’s a fun one: in 1906, when Camillo Golgi won the Nobel Prize for Physiology or Medicine, he used his acceptance speech to argue against the “neuron doctrine”, the idea that the brain is made up of functionally independent cells. This was surprising to the neuroanatomist Santiago Ramón y Cajal, who happened to be the biggest proponent of the neuron doctrine, and who also happened to be in the audience, sharing the other half of the prize. Ramón y Cajal, for his part, would later retort that Golgi’s images were “artificially distorted and falsified”.9
These disputes didn’t end because someone recanted their beliefs or committed scientific seppuku. Max Planck famously quipped that science advances one funeral at a time10, but that’s not quite right, because nothing changes if everyone at the funeral vows to continue the legacy of the dead. It seems to me that science actually advances one young person’s decision at a time. Do they choose to keep carrying the banner for increasingly cringe hypotheses, do they enter into endless disputes over the aliveness or deadness of theories, or do they take another page from the Monty Python playbook and decide to do something completely different?

FINISH THE FIGHT
When the replication crisis kicked off a decade ago and “classic” psychological phenomena started looking shaky, we could have asked ourselves, Talking Heads-style, “Well, how did we get here?”11 When we run a study and get a result, what does that mean? Which studies should we be running in the first place? What the hell are we doing? What’s it all for?
We did not ask ourselves these questions. Instead, we checked “have a reckoning” off our to-do lists and then we went back to business as usual, just with bigger sample sizes and occasional preregistration. I fear that we are concluding our come-to-Jesus moment without interrogating any of the mistakes that put us face-to-face with the Son of Man in the first place.
We seem to believe that tighter stats, more transparent methods, and time-stamped analysis plans will separate the true ideas from the false ideas, like Jesus separating the sheep from the goats on the Day of Judgment. Unfortunately, it turns out that some of the sheep and goats are hard to tell apart. And some of the goat-owners are absolutely insisting that their goats are sheep.12
We could spend the rest of time trying to get to the bottom of this. Or we could admit that, if we’ve argued about it this long and we’ve gotten nowhere, maybe we’ve crossed into cringe territory and it’s time to call it quits. We can’t say the Spartan is dead. But we can say he’s probably not coming back.
1. The Lever of Riches, p. 252.
2. “Further discussion of it [continental drift] merely encumbers the literature and befogs the minds of fellow students. [It is] as antiquated as pre-Curie physics. It is a fairy tale.”
3. “There is no end to the absurdities connected with this doctrine. Suffice it to say, that in the ordinary sense of the word, there is no proof, such as would be admitted in any scientific inquiry, that there is any such thing as ‘contagion.’”
4. “Extermination [of smallpox] will be proved to be impossible, unless the vaccinators be mightier than the Almighty God himself.”
5. Imagine you read that some guy in your city was murdered. Then the next day you find out that, actually, the guy’s fine; he merely faked his death for tax reasons. You would not conclude, “Ah, turns out murders never happen, and could never happen!” Similarly, when a result fails to replicate, it doesn’t mean that every possible version of the theory is invalidated forever. It merely tightens the space of possibilities around the idea, which almost always leaves it less interesting and more embarrassing.
6. See The Knowledge Machine by Michael Strevens, pp. 41-46.
7. The Knowledge Machine, pp. 38-39.
8. See Carl Zimmer’s Airborne.
9. See Lorraine Daston and Peter Galison’s Objectivity, pp. 115-116.
10. As is usually the case with quotes like these, the canonical version was said by someone else.






