Discussion about this post

Geoff:

The question “why aren’t smart people happier?” takes the question asked by books such as Stumbling on Happiness (why aren’t people better at attaining happiness?) and narrows it: why aren’t smart people at least better at it?

This assumes people seek happiness as their primary goal, but I think there are well-supported reasons to doubt that. The literature showing that many people spend far too little time on activities that make them happy, and too much on those that don’t, is commonly used to argue that people are fundamentally irrational or misinformed about what makes them happy. But the same facts can instead be read as evidence that happiness is not people’s primary goal; that things like social status, money, or self-identity fulfillment matter more to them. Under these alternative explanations, the finding that people work more, commute longer, and take fewer vacations than the hedonistic assumption would predict is unsurprising. Smart people are, in fact, better at solving the problems of attaining higher social status, earning more money, and so on.

As for what can actually explain any correlation between intelligence and happiness, the author again does a poor job. He points out that there are very few things people can do that reliably increase their happiness by a large amount, but leaves out that this is because the majority of the between-person variance in happiness is driven by factors like genetics, which set a person’s baseline level of happiness. These factors are also correlated with personality traits such as extroversion, so it seems likely that any correlation between intelligence and happiness can be explained by correlations between intelligence and personality. And isn’t the idea that happiness is best attained by those who don’t attack it as a well-defined problem just a restatement of the negative correlation between neuroticism and happiness?

James Carrico:

I've been using the phrase "ungoogleable question" for a few years now to try to get at something like the issue you're describing. The poorly-defined vs. well-defined distinction brings me a great deal of clarity on the matter.

In another Substack thread there was some discussion of how DALL-E-type AI systems can't really distinguish between "X does Y" and "X tries to do Y". Capturing the essence of "trying" turns out to be a very difficult concept, and it certainly tilts things toward the less-well-defined end of the spectrum. And anyway, AI doesn't try; it only does.

I think of the Daft Punk song, Human After All.

