Any jackass can kick a barn door down, but it takes a carpenter to build it back.
In psychology, we’ve recently been kicking down a lot of barn doors. In 2015, a bunch of psychologists tried to redo 100 studies and fewer than half replicated. Even the ones that “worked” produced much smaller effects than the original authors reported. Another group tried to replicate 28 studies and found the same thing. Individual replication failures have thrown famous effects into doubt: power posing, growth mindset, social priming, stereotype threat, ego depletion.
What do you do when the barn doors come a-tumbling down? You can join in and start kicking, or you can kick back. After all, the replicators sometimes deserve the business end of a boot, too. Some of them make unjustified changes to the original studies, like trying to replicate a study about race in America using Italian participants. When choosing studies to replicate, they prefer convenience over scientific importance. And occasional replication failures are a sign of a healthy science: if everything works when we try it again, we’re not asking very interesting questions.
All of this kicking has produced lots of splinters and bruised shins. I’m here to take a different approach. Instead of asking “How bad is it?”, I’m here to ask “What’s good?” When people start kicking down barn doors, you could politely ask them to stop, or to at least kick down a representative sample of barn doors instead. Or you could go looking for barn doors that don’t deserve kicking down at all. Maybe if you look closely at these barn doors, you can learn something about building good ones. And you might rediscover why you cared about barn doors in the first place.
So let’s look.
The fading affect bias
Two short stories:
When I was 12, I got third place in a Yu-Gi-Oh! tournament. (Yu-Gi-Oh! is like Pokémon but worse.) At the time, it felt great. When I remember it now, it feels less great, but it still feels pretty good.
When I was 18, my high school girlfriend dumped me. “I never actually liked you,” she said, “And you’re boring.” At the time, this felt terrible. But when I remember it now, it doesn’t feel terrible at all; it feels hilarious. My immediate, flailing response to her was, “Could you give me some feedback so that I can be a better boyfriend in my next relationship?” How could you not laugh at a guy whose first instinct in a breakup is to conduct an exit interview?
These stories illustrate two truths. 1) I’m a big ol’ nerd, and 2) the goodness and badness of memories fade over time, but the badness fades faster—that’s the fading affect bias. Some bad memories even become good memories, while good memories rarely become bad memories.
It makes sense that both joy and pain fade with time—stuff just feels less intense when it's farther away—but why does pain fade faster? It’s because when bad stuff happens to us, our psychological immune systems turn on. We start to rationalize (“Why would I want to be with someone who doesn’t want to be with me?”), downplay (“Breakups happen all the time in high school, it’s no big deal”), distance (“I never liked her that much anyway!”) and distract (“I’m gonna go play video games”). These mental processes function like emotional antibodies, taking the sting out of bad memories. We don’t use them on good memories, so good memories keep their luster longer.
The fading affect bias deserves higher regard because it’s got a message everybody should hear: things that feel bad today will probably feel less bad in the future. “Tragedy + time = comedy” is the closest thing psychology has to a chemical equation. So when your girlfriend dumps you and you’re feeling awful, pretty much all you have to do is wait, and you’re allowed to play Yu-Gi-Oh! in the meantime. Plus, when something feels great, you can rest assured that it will probably still feel good decades from now. Everything is temporary, bad stuff especially. I find that very comforting.
The illusion of explanatory depth
I use a computer every day, so I definitely know how computers work. Let me explain it to you. So, there’s a screen. And the screen lights up because of…electrons? And when I press buttons, stuff happens on the screen, and I think that’s because of electrons too. The computer remembers stuff and can do math and that’s definitely because of electrons. And there is the internet because of lots of electrons zipping around.
Okay look I thought I knew how a computer works but I really don’t. That’s the illusion of explanatory depth. I have it for everything: toilets (the water goes whoosh and the poop goes away; I have no idea how this happens), cars (gasoline makes the pistons go up and down and something something the car moves), and capitalism (people sell stuff and people buy stuff and this is bad, I guess?).
Calling this the illusion of explanatory depth undersells a really profound insight: people generally know exactly as much about the world as they need to know, and they rarely realize the limits of their knowledge. There are only a few things you need to know about toilets, cars, and capitalism in order to get what you need out of them. That makes it feel like you know a lot about them, but you don’t. You can flush a million times, drive a million miles, and spend a million bucks without learning much about cars, toilets, or capitalism.
Not knowing stuff is fine; the real problem is that we don’t know that we don’t know. Ignorance plus ignorance about your ignorance is a recipe for overconfidence. When people get into shouting matches about pandemic policy or how the Ukrainian army can beat Russia or who’s going to win the next election, you can bet their illusions of explanatory depth are on full blast.
(This effect is related to the much more famous Dunning-Kruger effect, but that one is, I think, overrated.)
The focusing illusion
There’s a whole big world out there and we only have two eyes to see it with and one brain to think about it. We can’t behold everything at once, so we have to pick one thing at a time and filter the rest, and the mental process of picking and filtering is called attention.
You might think that attention is a window between our brains and the world. You peer over here, you peer over there, like a submarine twisting its periscope around. And that’s mostly true, except the window is also a magnifying glass, enlarging whatever you see through it. Simply attending to something can make that thing seem more important than it really is.
The focusing illusion especially distorts our view when we think about what makes us happy. For instance, how much happier would you be if you had an extra $10,000? You might picture yourself paying off your credit card or hopping a plane to Bali. Those things really would make you happy—just not as much as you think. For one thing, you’re not thinking about all the less glamorous details: you may pay off your Mastercard but you’re still behind on rent, and that flight to Bali might be at 6am and get delayed two hours. For another, you tend to get used to good things and end up just as happy as you were before. (I recently wrote about how to short-circuit this process.)
I think the focusing illusion is underrated because it may secretly be the reason we do anything at all. We have to make a billion stupid little decisions every day: wear the plaid or the paisley? Order the shrimp fried rice or beef lo mein? Watch the next episode or go to sleep? Almost none of these decisions will actually affect our happiness much at all, but we have to choose something. How do we make all these unimportant choices? Maybe our mind tricks us into thinking they’re actually important choices. Perhaps a pinch of focusing illusion helps us quickly pick the shrimp over the beef and move on with our day, but a pile of focusing illusion sends us spiraling: shrimp! no, beef! no, wait, shrimp!
The region beta paradox
Imagine you have a rule: you always walk whenever you’re traveling a mile or less, and you always drive whenever you’re going more than a mile. If you follow that rule, you will, paradoxically, travel two miles faster than you travel one mile. That’s the region beta paradox.
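To make the arithmetic concrete, here’s a minimal sketch of the rule. The speeds are illustrative assumptions on my part (a 3 mph walking pace, a 30 mph driving speed), not part of the original thought experiment:

```python
def travel_minutes(miles, walk_mph=3, drive_mph=30):
    """Travel time under the rule: walk for a mile or less, drive otherwise."""
    # Illustrative speeds; any walking speed slower than driving produces the paradox.
    speed = walk_mph if miles <= 1 else drive_mph
    return miles / speed * 60

print(travel_minutes(1))  # 20.0 minutes on foot
print(travel_minutes(2))  # 4.0 minutes by car
```

The one-mile trip takes twenty minutes; the two-mile trip takes four. The longer journey really is the faster one.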
This effect has largely been forgotten, and that’s a shame, because the region beta paradox points out something important: if you only take action when things cross a certain threshold of badness, mildly bad situations can end up worse than truly bad ones, because only the truly bad ones push you to act. If you feel miserable for a month, you might go to therapy. But if you feel a little bleh for a month, you might never do anything about it—“I mean, I’m not depressed”—and a month of bleh can stretch into years.
Look around and you’ll find lots of people stuck in region beta: the guy who sticks around his just-okay job instead of ditching it for the chance of something better, the couple who should break up but can’t bring themselves to do it, the friend who refuses to get a new apartment because their current one only has some black mold. All of these people would actually be better off if their situations were worse, because they’d leave their jobs, partners, and apartments, and be glad they did. Their only regret would be not leaving sooner.
The planning fallacy
Things tend to take longer than we think, especially when we don’t think much, and even when we should know better. This costs us dearly. The Sydney Opera House was supposed to take four years to build; it took fourteen. My students never plan on turning their final project in late, but the night before it’s due, my email fills with extension requests. I’m no better: I was supposed to have this post done a couple days ago and here I am still fiddling with the commas.
We all replicate the planning fallacy pretty much every day. We make too-long to-do lists, we show up late to meetings because we didn’t account for traffic, we promise to get things done by the end of the day and end up turning them in next week. Everybody knows that when the mayor promises the new bridge will be done by 2024 and will cost $10 million, it really won’t be done until 2027 and it’s going to cost $20 million.
That’s why, even though the planning fallacy is highly rated, it’s still wildly underrated. There should be more conversations that go like this:
“I’ll probably finish my novel in six months.”
“Planning fallacy.”
“I’ll probably finish my novel in a year.”
“PLANNING FALLACY!”
“I’m never going to finish my novel.”
What do these underrated ideas have in common?
I reckon these are five mighty fine barn doors that do not need to be kicked down. That doesn’t mean they’re flawless, infallible, or unbounded—these are psychological truths, not physical laws, and they should be admired, not idolized.
I can’t help but notice that these ideas have a couple things in common. First, you can do a decent job of replicating all of them yourself. To replicate the planning fallacy, for instance, just keep track of how long you think it will take to do stuff and how long it actually takes to do stuff. To replicate the fading affect bias, keep track of how stuff feels at the time, wait a couple days, weeks, months, or years, then rate how it feels to remember it. Better yet, do these things to someone who doesn’t know the hypotheses.
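If you kept such a log for the planning fallacy, analyzing it takes a few lines. Here’s a toy sketch; the numbers are made up for illustration, not real data:

```python
# Hypothetical log of (estimated hours, actual hours) for recent tasks.
tasks = [(2, 3.5), (1, 1.5), (4, 9), (0.5, 0.5)]

# How much longer each task took than predicted, as a ratio.
overruns = [actual / estimated for estimated, actual in tasks]
mean_overrun = sum(overruns) / len(overruns)
underestimated = sum(actual > estimated for estimated, actual in tasks)

print(f"{underestimated}/{len(tasks)} tasks ran over; "
      f"on average they took {mean_overrun:.1f}x as long as predicted")
```

If your log looks anything like this fake one, you’ve replicated the effect on a sample of one.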
That’s why, no matter what happens to the original studies backing these ideas—notice I didn’t even mention them!—the ideas themselves are unlikely to be purged from the scientific record because they “failed to replicate.” In fact, those original studies are more like allegories than experiments, meant to illustrate rather than to prove. To paraphrase the theologian Rob Bell, the studies backing these ideas are like Shakespearean plays or Biblical parables: the point is not that they happened, but that they happen. A good word for these ideas, then, might be empirical art.
And, much like works of art, the most important feature that binds these ideas together is simply that they’re beautiful and looking at them feels good and right. That sensation you get when you hear a major chord or take the first bite of a perfectly grilled steak or stare at the ceiling of the Sistine Chapel—I get that same sensation when I think about these ideas. It’s a deep rightness and a rich pleasure, as if my soul starts to purr in the presence of Truth. (And if you don’t feel this way because you think these ideas are obvious, may I introduce you to an honorable mention on the list of underrated ideas: hindsight bias.)
That feeling is precious not just because it feels good, but because it is good. Appreciating beauty is both a scientific virtue and a regular virtue. Kicking down barn doors may feel fun for a while, but if you keep doing it, all you’ll get is a world full of kicked-down barn doors. In psychology, in science, and in everything, the point of kicking something down is to make room for something better. The world has plenty of jackasses. It needs more carpenters.