74 Comments

A great article, but it starts with a wrong premise. Not all problems are either weak-link or strong-link problems. Some problems are "average-level problems", where there is value in lifting the average quality. I think science is both a strong-link problem and an average-level problem. Science is a strong-link problem when you want large leaps forward in our understanding of the world. That's the focus of the article. But in today's society science is also used to answer lots of small empirical questions in many different fields, and it matters a great deal that the quality of that research is reasonably good. It's not quite a weak-link problem (although it is good to avoid outright fraud). I use research papers all the time for my work. It matters that the methods, data and theories get better and more rigorous over time. It also matters that scientific findings in the aggregate are trustworthy when they are used to guide important decisions.

I don't think my field is unique. A good friend of mine who is a doctor is constantly complaining that some of the research used to justify the prescription of some very common drugs is actually quite weak. Interestingly enough there are hundreds of papers and dozens of metastudies on those particular drugs, but still large uncertainties that could be solved with the right research design. This is something that really matters!

I don't think that this detracts from the overall point that the organisation of science today is broken. Whilst there certainly is constant improvement in some qualitative aspects, it seems to me that academia doesn't value the kind of rigorous and slightly boring empirical work that is actually useful here and now. When I use empirical findings from research in my work, they are usually not the main focus of the papers. The main focus is usually some shiny new "advance" in the theory that often doesn't amount to much, but is the reason the paper gets published in a top journal. As a practitioner I would happily trade that for more and better data. But assembling data is hard and pretty unglamorous.

Love this framing. I think of different sports to explain the concept - basketball tends towards strong link (your team will go as far as LeBron James takes you) vs. soccer as weak link (it's great to have Messi on your team, but if your goalie sucks it's not going to matter).

Apr 11, 2023 · Liked by Adam Mastroianni

This is a wonderful, thought-provoking, and sobering essay. The academic treadmill needs an overhaul, but it isn't clear how that can happen.

Apr 12, 2023 · Liked by Adam Mastroianni

dear adam,

this is a strong-link piece of writing! i'm going to strongly link it to people!

love,

myq

One thing I came to realise recently is the problem of a lack of shared vocabulary, particularly in the case of interdisciplinary work, which you mention in the article as "hard to fund", but which is probably also at play when trying to pitch novel ideas (which bring their own vocabulary) or to overcome inertia, peer conformity, or the pull of consistency.

I would argue that in many cases of interdisciplinary work, the vocabulary used in the different disciplines is not the same or does not have the same meaning (case in point is “innovation” which habitually means different things to an engineer and an economist).

Dave Snowden highlights this nicely in his article on The Importance of Silos, where he shows that the less shared knowledge (thus vocabulary) a group has, the higher the cost of codification of knowledge becomes.

As cost of codification of knowledge increases with “group size” (in that case, more disciplines with different vocabularies), getting to understand each other and work together gets much more difficult and costly in interdisciplinary settings.

This also means that people doing the boundary spanning across silos need not only be good communicators per se, but actually be good translators between vocabularies of different silos.

I love this as a heuristic, but from where I sit in the humanities, I'm less concerned about science's strong-link problem in terms of innovation and very worried about its weak-link problems in mission and ethics.

Take the vaccination/autism fraud. Following your reasoning, that's a weak link and not something that we should be worried about. Except of course we have to be worried about it because it has precipitated and complicated numerous health crises.

I think what's important to recognize here is that the scientific and technological advances that have proven to be incredibly damaging, or at least potentially so (genetic manipulation, chemical and nuclear weapons, AI) are all "strong-link" advances. They represent some amazing, groundbreaking science. They just happen to be mostly terrible.

So both weak and strong science can be damaging. Why? Because science (especially when paired with business) has a very weak link when it comes to understanding its proper mission of improving well-being, and ethically applying its advances accordingly. If we want to fix science, that's the link we need to focus on.

Apr 11, 2023 · Liked by Adam Mastroianni

Fascinating. What I missed was how to allocate finite assets. Whether it's research slots or funding, not everyone can play.

Apr 12, 2023 · Liked by Adam Mastroianni

Thanks Adam - excellent piece. A pedantic point perhaps, but re: John Sulston's quote:

'...a 2:1 [a middling GPA in the British system]...'

A 2:1 (an upper second class honours degree) is actually only one step down from a first class honours degree, the highest grade possible. So, not really 'middling'. ;)

Excuse me, Dr. Mastroianni, but I'd like to question your idea of "skulls of dead theories in science." Several of the dead theories you cite never truly died... although they were factually wrong IN THEIR IMMEDIATE CONTEXT, they went on to later, unintentionally contribute something great to science. Some examples:

"No more invisible rays shooting out of people’s eyes"

Ray tracing is a massive innovation in computer graphics. It involves invisible rays shooting out of the user's camera, which may have been inspired by the theory of invisible rays shooting out of people's eyes. Thus, this dead theory may have gone on to help create a recent computational innovation.

"No more measuring someone’s character by the bumps on their head"

Phrenology is now a cautionary tale told to young scientists, to warn them of the dangers of dedicating themselves to a lifetime of false science. In July 2023, for example, Simon Rose told a room of researchers the story of phrenology at the Cambridge University Evidence-Based Policing Conference. He used this story as a way to illustrate the dangers of blindly believing a methodology. In this light, phrenology can be seen as a massive 'discovery' of fake science.

I guess my point is that many of these "wrong" scientific ideas never really die. They go on to become angels that guide us away from bad practices (e.g., phrenology), inspire future scientific innovations (e.g., ray tracing), or something else entirely. Seen in this light, the cost of stomping out bad science is even more sinister... it may be stomping out a study that, although incorrect in its immediate context, may later become an angel that inspires a future scientific discovery...

Great article, I thoroughly enjoyed it. However, science is very much *not* a strong-link problem. Progress in science fundamentally relies on an ever-rising floor of tooling; without such a floor of mathematics, applied science, engineering, and a massive corpus of knowledge, science would be nothing more than cocktail-and-cigar banter (or fantastical sketches in da Vinci’s notebooks). The emergence of strong-link-esque scientists depends on such tools, devices, methods, and economic structures; without them there would be no platform for those scientists to emerge from. This feedback loop, or bootstrapping, is actually iterating on the weak-link premise. In this bootstrapping system, the exploration energy required to advance outstanding unknowns increases superlinearly and not smoothly, which is why progress can often feel stagnant. But as the floor rises, the infinitesimally small innovation a single scientist can hope to achieve in a lifetime is made possible.

The proof is readily demonstrable: sabotage aside, nation-states that do not have legitimately functioning bootstrapping systems (only grafting systems) struggle to replicate even decades-old science.

I agree that science is a strong-link problem, but double-blind peer review is actually beneficial for strong-link problems, because it improves sensitivity at the cost of specificity.

I've written about this here: https://calvinmccarter.writeas.com/peer-review-worsens-precision-but-improves-recall

I do wonder if a possible reason why we treat a strong-link problem as a weak-link one is some version of the Von Restorff effect.

We have more “documentation” about how the world works and history that explains how we figured it out. If we have a lot of it, lots of bad stories will stick out, overriding success stories.

In the past, we had the sense that we could afford the hit of an unfortunate event for the sake of progress. Nowadays we “know” a lot so we feel obliged to protect ourselves from the “bad” instead of silencing it by making the “good” stronger.

An interesting mental framework with possible applications. It's unfortunate that you immediately use it for a bad example.

Dreck science has consequences; how much bad institutional policy has come out of a small number of scientific papers on subjects like psychological priming, which turned out to be unreplicable (and thus almost certainly not true)? Not to mention that the desire for novel research means that bad research actually drives out good.

Apr 17, 2023 · edited Apr 17, 2023

>>> 'Of course, it’s also easy to make the opposite mistake, to think you’re facing a strong-link problem when in fact you’ve got a weak-link problem on your hands. It doesn’t really matter how rich the richest are when the poorest are starving.....Whenever we demand laissez-faire, the cutting of red tape, the letting of a thousand flowers bloom, we are saying: “this is a strong-link problem; we must promote the good!” '

Had the world approached economic growth as a weak link problem, many more hundreds of millions of poor would have been facing starvation today than actually are. You have very strong natural experiments that demonstrate this clearly - China pre and post 1979, India pre and post 1991, North and South Korea, West and East Germany.

If you want to 'find what's true and make it useful', learn more economics before throwing opinions out there.

Great post, Dr. Mastroianni. I like how you took the concept from philanthropy and applied it to science and research. I hope you don't mind if I apply the lessons from your post to my writings on leadership.

I have some concerns that you dismissed giant replication projects as weak-link problems. What I've found in my line of work is that scientific research (even bad, non-reproducible research) is used as the foundation of policies and procedures that take decades to un-learn. Take cybersecurity policy (related to password strength) and building temperature standards, for example. Both are widely known by practitioners to be horribly outdated (women in my office bring electric heating blankets to avoid shivering in their cubicles), yet these policies persist. A society that follows ill-conceived rules for decades does real, measurable harm.

In the age where a lie can get halfway around the world before the truth can get its pants on, would you agree that some money spent on replication studies is worthwhile, if those studies can discredit bad science and stop people from relying on it? I understand that, on a long-enough time horizon, good science will prevail. But if it takes a major calamity (like a 9/11-scale disaster) to motivate structural change, would you agree that an ounce of prevention is worth a pound of cure? We can certainly let time filter out bad science, bad books, and bad music...but what if it's cost-effective to speed up the search for truth?

After all, knowing what NOT to do is as important as knowing what SHOULD be done.

Maybe math is different. I have spent 150 hours over the last 10 years reviewing math papers, mostly for my favorite math journal. I check every line looking for mistakes. I feel like over 90% (maybe over 99%) of the theorems in the papers that I have reviewed are correct, and because of that, future researchers can expect that 95% (maybe more) of the theorems that they use from our journal are correct.

Sometimes mistakes happen even with good authors and good reviewers. I remember years ago when I found two errors in the proof of a theorem in a published article in our journal, written by a friend. I sent an email on a Friday to the author. Within 24 hours, he had fixed one of the errors, but on Monday he had to admit that the other error was uncorrectable. (I ended up finding a counterexample to his idea, possibly a lemma.) We spent the next three months trying to find an alternative proof, and we were quite happy when we found one. We published a correction along with the corrected proof. The theorem was correct, but the original proof was wrong.

My point is that in mathematics, we want all the published theorems to be true so that future authors can depend on the veracity of the published theorems. Some math journals are better than others and I am guessing that only 90% of the published theorems are perfectly true, but we reviewers do try to move that up to 99%.
