A great article, but it starts with a wrong premise. Not all problems are either weak-link or strong-link problems. Some problems are "average-level problems", where there is value in lifting the average quality. I think science is both a strong-link problem and an average-level problem. Science is a strong-link problem when you want large leaps forward in our understanding of the world. That's the focus of the article. But in today's society science is also used to answer lots of small empirical questions in many different fields, and it matters a great deal that the quality of that research is reasonably good. It's not quite a weak-link problem (although it is good to avoid outright fraud). I use research papers all the time for my work. It matters that the methods, data and theories get better and more rigorous over time. It also matters that scientific findings in the aggregate are trustworthy when they are used to guide important decisions.
I don't think my field is unique. A good friend of mine who is a doctor constantly complains that some of the research used to justify the prescription of some very common drugs is actually quite weak. Interestingly enough, there are hundreds of papers and dozens of meta-studies on those particular drugs, yet large uncertainties remain that could be resolved with the right research design. This is something that really matters!
I don't think this detracts from the overall point that the organisation of science today is broken. Whilst there certainly is constant improvement in some qualitative aspects, it seems to me that academia doesn't value the kind of rigorous and slightly boring empirical work that is actually useful here and now. When I use empirical findings from research in my work, they are usually not the main focus of the papers. The main focus is usually some shiny new "advance" in theory that often doesn't amount to much, but is the reason the paper gets published in a top journal. As a practitioner I would easily trade that for more and better data. But assembling data is hard and pretty unglamorous.
Love this framing. I think of different sports to explain the concept: basketball tends towards strong-link (your team will go as far as LeBron James takes you) vs. soccer as weak-link (it's great to have Messi on your team, but if your goalie sucks it's not going to matter).
This is a wonderful, thought-provoking, and sobering essay. The academic treadmill needs an overhaul, but it isn't clear how that can happen.
dear adam,
this is a strong-link piece of writing! i'm going to strongly link it to people!
love,
myq
One thing I came to realise recently is the problem of a lack of shared vocabulary, particularly in the case of interdisciplinary work, which you mention in the article as “hard to fund”, but which probably also applies when trying to pitch novel ideas (which bring their own vocabulary) or to overcome inertia, peer conformity, or the pull of compliance and consistency.
I would argue that in many cases of interdisciplinary work, the vocabulary used in the different disciplines is not the same or does not have the same meaning (case in point is “innovation” which habitually means different things to an engineer and an economist).
Dave Snowden highlights this nicely in his article on The Importance of Silos, where he shows that the less shared knowledge (thus vocabulary) a group has, the higher the cost of codification of knowledge becomes.
As the cost of codifying knowledge increases with “group size” (in this case, more disciplines with different vocabularies), understanding each other and working together becomes much more difficult and costly in interdisciplinary settings (a rough numerical sketch follows this comment).
This also means that people doing the boundary spanning across silos need not only be good communicators per se, but actually be good translators between vocabularies of different silos.
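To make the "cost grows with group size" point concrete, here is a back-of-the-envelope sketch of my own (an illustration, not something taken from Snowden's article): if every discipline brings its own vocabulary, the number of vocabulary pairs that need bridging grows roughly quadratically with the number of disciplines involved.

```python
# A rough, hypothetical illustration (not from Snowden's article) of why codification
# costs climb with "group size": if each discipline carries its own vocabulary, the
# number of vocabulary pairs that need translating grows roughly quadratically.
def translation_pairs(num_disciplines: int) -> int:
    return num_disciplines * (num_disciplines - 1) // 2

for n in (2, 3, 5, 8):
    print(f"{n} disciplines -> {translation_pairs(n)} vocabulary pairs to bridge")
```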
Fascinating. What I missed was how to allocate finite assets. Whether research slots or funding, not everyone can play.
I love this as a heuristic, but from where I sit in the humanities, I'm less concerned about science's strong-link problem in terms of innovation and very worried about its weak-link problems in mission and ethics.
Take the vaccination/autism fraud. Following your reasoning, that's a weak link and not something that we should be worried about. Except of course we have to be worried about it because it has precipitated and complicated numerous health crises.
I think what's important to recognize here is that the scientific and technological advances that have proven to be incredibly damaging, or at least potentially so (genetic manipulation, chemical and nuclear weapons, AI) are all "strong-link" advances. They represent some amazing, groundbreaking science. They just happen to be mostly terrible.
So both weak and strong science can be damaging. Why? Because science (especially when paired with business) has a very weak link when it comes to understanding its proper mission of improving well-being, and ethically applying its advances accordingly. If we want to fix science, that's the link we need to focus on.
Thanks Adam - excellent piece. A pedantic point perhaps, but re: John Sulston's quote:
'...a 2:1 [a middling GPA in the British system]...'
A 2:1 (an upper second class honours degree) is actually only one step down from a first class honours degree, the highest grade possible. So, not really 'middling'. ;)
I agree that science is a strong-link problem, but double-blind peer review is actually beneficial for strong-link problems, because it improves sensitivity at the cost of specificity.
I've written about this here: https://calvinmccarter.writeas.com/peer-review-worsens-precision-but-improves-recall
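To illustrate the sensitivity/specificity framing in the comment above, here is a toy simulation of my own (the thresholds, noise level, and cutoffs are all made up, not taken from the essay or the linked post). Reviewers see a noisy signal of a paper's true quality and accept anything above a bar; a lenient bar rarely rejects the rare top-tail paper, at the cost of letting more weak papers through.

```python
import random

random.seed(0)

# Toy model: latent paper quality is standard normal; reviewers observe quality plus
# noise and accept anything above a threshold. Lenient thresholds = high sensitivity
# (few great papers missed), low specificity (more weak papers accepted).
def simulate(threshold, n_papers=10_000, noise=1.0):
    missed_top, accepted_weak = 0, 0
    for _ in range(n_papers):
        quality = random.gauss(0, 1)                # "true" quality
        signal = quality + random.gauss(0, noise)   # what review actually sees
        accepted = signal > threshold
        if quality > 2 and not accepted:            # a top-tail paper got rejected
            missed_top += 1
        if quality < 0 and accepted:                # a below-average paper got in
            accepted_weak += 1
    return missed_top, accepted_weak

for thr in (-1.0, 0.0, 1.0):                        # lenient -> strict review bar
    missed, weak = simulate(thr)
    print(f"threshold {thr:+.1f}: top papers rejected = {missed}, weak papers accepted = {weak}")
```

On a strong-link reading, the first column (top papers rejected) is the number that matters, and the second is mostly tolerable noise.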
Great article, I thoroughly enjoyed it. However, science is very much *not* a strong-link problem. Progress in science fundamentally relies on an ever-rising floor of tooling; without such a floor of mathematics, applied science, engineering, and a massive corpus of knowledge, science would be nothing more than cocktail-and-cigar banter (or fantastical sketches in da Vinci’s notebooks). The emergence of strong-link-esque scientists depends on such tools, devices, methods, and economic structures, without which there would be no platform for them to emerge from. This feedback loop, or bootstrapping, is actually iterating on the weak-link premise. In this bootstrapping system, the exploration energy required to advance outstanding unknowns increases superlinearly and not smoothly, which is why progress can often feel stagnant. But as the floor rises, the infinitesimally small innovation a single scientist can hope to achieve in a lifetime becomes possible.
The proof is readily demonstrable: sabotage aside, nation-states that do not have legitimately functioning bootstrapping systems (only grafting systems) struggle to replicate even decades-old science.
>>> 'Of course, it’s also easy to make the opposite mistake, to think you’re facing a strong-link problem when in fact you’ve got a weak-link problem on your hands. It doesn’t really matter how rich the richest are when the poorest are starving..... Whenever we demand laissez-faire, the cutting of red tape, the letting of a thousand flowers bloom, we are saying: “this is a strong-link problem; we must promote the good!”'
Had the world approached economic growth as a weak-link problem, many more hundreds of millions of poor people would be facing starvation today than actually are. You have very strong natural experiments that demonstrate this clearly: China pre- and post-1979, India pre- and post-1991, North and South Korea, West and East Germany.
If you want to 'find what's true and make it useful', learn more economics before throwing opinions out there.
Love this simple dichotomy for framing priorities. I'm curious about the attendant characteristics of weak- and strong-link paradigms, especially in more ambiguous situations or when the paradigms are applied inappropriately...are there pros and cons inherent to each, or do they depend on the application to the situation or both? I'm also curious about the process of transitioning from weak- to strong-link paradigms (or vice versa) in any given situation...what variables must change in order to declare the paradigm 'shifted'? Not that you can answer these questions...food for thought!
I do wonder if a possible reason why we treat a strong-link problem as a weak-link one is some version of the Von Restorff effect.
We have more “documentation” about how the world works, and history that explains how we figured it out. When we have a lot of it, the bad stories stick out and override the success stories.
In the past, we had the sense that we could afford the hit of an unfortunate event for the sake of progress. Nowadays we “know” a lot so we feel obliged to protect ourselves from the “bad” instead of silencing it by making the “good” stronger.
An interesting mental framework with possible applications. It's unfortunate that you immediately use it for a bad example.
Dreck science has consequences; how much bad institutional policy has come out of small numbers of scientific papers on subjects like psychological priming, which turn out to be unreplicable (and thus almost certainly not true)? Not to mention that the desire for novel research means that bad research actually drives out good.
I was very sceptical reading this article, but it had an interesting premise, so I kept reading. But when I came to this spot -
"There’s no point in picking some studies that are convenient to replicate, doing ‘em over, and reporting “only 36% of them replicate!” In a strong-link situation, most studies don’t matter. To borrow the words of a wise colleague: “What do I care if it happened a second time? I didn’t care when it happened the first time!”
- I realized that you don't actually understand science. Seeing whether you can repeat results under different conditions is fundamental to the scientific process. You don't want to invent a fancy new microwave that explodes because it suddenly got 2 degrees warmer than in the first and only study.
The rest of the essay was similarly questionable.
"I think there are two reasons why scientists act like science is a weak-link problem. [...] Fear."
Okay, sure. I'm familiar with how shaky the financial security of academics is. The second point has to be ethics, right? Right?
"The second reason is status." ????
You just made the same point again but from a different angle. What about ethics, seriously?
"The US government spends ~10x more on science today than it did in 1956, adjusted for inflation. We’ve got loads more scientists, and they publish way more papers. And yet science is less disruptive than ever, scientific productivity has been falling for decades, and scientists rate the discoveries of decades ago as worthier than the discoveries of today."
Wow, I wonder what brilliant and ethical studies happened during that time that advanced science by titanic steps - like the Tuskegee Experiment and several other top hits.
Seriously, I was waiting for you to start praising Elon Musk and Tesla's death trap cars at the end. With all due respect: What the fuck?
I'm honestly disappointed with this essay.
What this misses about the weak-link problem is that you can't really ignore the low-quality stuff; it has a cost. The more low-quality stuff there is, the harder it is to find the good stuff, especially if you have no good way to measure quality. What you want is to be able to rank papers as objectively as possible. That's where ratings are so helpful. A good system to replace peer review would have scientists rating other papers in their field. I can envision a token-gifting system.
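For what it's worth, here is a minimal sketch of the kind of rating-and-ranking system the last comment gestures at (the scale, shrinkage prior, and paper names are all hypothetical; the comment doesn't specify a design, and the token-gifting part isn't modelled). It ranks papers by a shrunk mean so that a paper with two glowing ratings doesn't automatically outrank one with fifty solid ones.

```python
from collections import defaultdict

ratings = defaultdict(list)   # paper_id -> list of scores on a 1-5 scale

def rate(paper_id: str, score: int) -> None:
    """Record one scientist's rating of a paper."""
    ratings[paper_id].append(score)

def ranked(prior_mean: float = 3.0, prior_weight: int = 5) -> list[str]:
    """Rank papers by a mean shrunk toward the prior, so sparsely rated papers
    don't dominate on a couple of enthusiastic scores."""
    def shrunk_mean(scores):
        return (sum(scores) + prior_mean * prior_weight) / (len(scores) + prior_weight)
    return sorted(ratings, key=lambda p: shrunk_mean(ratings[p]), reverse=True)

# Toy usage: paper-B, with many solid ratings, ranks above paper-A's two perfect scores.
rate("paper-A", 5); rate("paper-A", 5)
for s in (4, 4, 5, 4, 4, 5, 4, 4):
    rate("paper-B", s)
print(ranked())   # ['paper-B', 'paper-A']
```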