75 Comments

A great article, but it starts with a wrong premise. Not all problems are either weak-link or strong-link problems. Some problems are "average-level problems", where there is value in lifting the average quality. I think science is both a strong-link problem and an average-level problem. Science is a strong-link problem when you want large leaps forward in our understanding of the world. That's the focus of the article. But in today's society science is also used to answer lots of small empirical questions in many different fields, and it matters a great deal that the quality of that research is reasonably good. It's not quite a weak-link problem (although it is good to avoid outright fraud). I use research papers all the time for my work. It matters that the methods, data and theories get better and more rigorous over time. It also matters that scientific findings in the aggregate are trustworthy when they are used to guide important decisions.

I don't think my field is unique. A good friend of mine who is a doctor is constantly complaining that some of the research used to justify the prescription of some very common drugs is actually quite weak. Interestingly enough, there are hundreds of papers and dozens of meta-studies on those particular drugs, but still large uncertainties that could be resolved with the right research design. This is something that really matters!

I don't think that this detracts from the overall point that the organisation of science today is broken. Whilst there certainly is constant improvement in some qualitative aspects, it seems to me that academia doesn't value the kind of rigorous and slightly boring empirical work that is actually useful here and now. When I use empirical findings from research in my work, they are usually not the main focus of the papers. The main focus is usually some shiny new "advance" in the theory that often doesn't amount to much, but is the reason the paper gets published in a top journal. As a practitioner I would gladly trade that for more and better data. But assembling data is hard and pretty unglamorous.


You're saying the right things. Science touches many aspects of life, and if its research is not good enough, that will be a huge problem for all of us.


Yeah, ordinary science is totally an average-level problem. It's paradigm-building science that's incommensurable with today's scientific institutions.


Today's academic scientific institutions are hybrids, like home dishwashers: not as good at debris removal as a spray gun, not as good at sanitization as a dedicated five-minute under-counter sanitizer. But dishes aren't the addressable audience, people are, and on the average night this worst-of-both-worlds machine still places fewer and lower-stakes decision-making demands on the average small household than manually attending and gatekeeping dishes between two specialized lanes would. Academic science is the same kind of hybrid: it has to prepare teacher-scholars but also justify itself and fund itself. Weak links are a problem in the time dimension when they spread faster than strong links and choke out the seedlings of this institutional-historical hybrid like weeds. And because other research institutions recruit from academic ones, this is everybody's long-term problem in short-term weed management.


Love this framing. I think of different sports to explain the concept - basketball tends toward strong-link (your team will go as far as LeBron James takes you) vs. soccer as weak-link (it's great to have Messi on your team, but if your goalie sucks it's not going to matter).


Within football (i.e. soccer) analytics, a classic finding was that attack is more strong-link (Messi by himself instantly makes a world-class attack) and defence is more weak-link ("a defence is only as good as its worst defender").
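A toy sketch makes the distinction concrete (made-up ratings, purely illustrative): model strong-link quality as the best member's rating and weak-link quality as the worst's, and watch which one moves when you add a superstar or a weak spot.

```python
# Strong-link quality is set by the best member; weak-link by the worst.
# Ratings are invented numbers, just to show which aggregate moves.

def strong_link_quality(ratings):
    return max(ratings)  # one superstar sets the ceiling (attack)

def weak_link_quality(ratings):
    return min(ratings)  # one weak spot sets the floor (defence)

team = [60, 70, 75]
print(strong_link_quality(team + [99]))  # 99: adding a Messi transforms the attack
print(weak_link_quality(team + [99]))    # 60: the superstar doesn't fix the defence
print(weak_link_quality(team + [20]))    # 20: one leaky goalie sinks the back line
```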


I had similar thoughts reading this. What do you think about other sports, like team tennis?

Apr 11, 2023 · Liked by Adam Mastroianni

This is a wonderful, thought-provoking, and sobering essay. The academic treadmill needs an overhaul, but it isn't clear how that can happen.


I also think this essay woke me up from my boredom. It brought me to my senses.

Apr 12, 2023 · Liked by Adam Mastroianni

dear adam,

this is a strong-link piece of writing! i'm going to strongly link it to people!

love,

myq


One thing I came to realise recently is the problem of a lack of shared vocabulary, particularly in the case of interdisciplinary work, which you mention in the article as "hard to fund", but which probably also applies when trying to pitch novel ideas (which bring their own vocabulary) or to overcome inertia, peer conformity, or the pressure for consistency.

I would argue that in many cases of interdisciplinary work, the vocabulary used in the different disciplines is not the same or does not have the same meaning (case in point is “innovation” which habitually means different things to an engineer and an economist).

Dave Snowden highlights this nicely in his article on The Importance of Silos, where he shows that the less shared knowledge (thus vocabulary) a group has, the higher the cost of codification of knowledge becomes.

As the cost of codifying knowledge increases with "group size" (in this case, more disciplines with different vocabularies), understanding each other and working together gets much more difficult and costly in interdisciplinary settings.

This also means that people doing the boundary spanning across silos need not only be good communicators per se, but actually be good translators between vocabularies of different silos.


Wow, I'm impressed with your knowledge. From what you said, I too am coming to the conclusion that a general vocabulary is needed. You've got me hooked, I will definitely add this book to my list.


Agreed! In fact it’s hard enough finding two scientists who agree on what interdisciplinarity really is let alone how it can operate effectively.


I love this as a heuristic, but from where I sit in the humanities, I'm less concerned about science's strong-link problem in terms of innovation and very worried about its weak-link problems in mission and ethics.

Take the vaccination/autism fraud. Following your reasoning, that's a weak link and not something that we should be worried about. Except of course we have to be worried about it because it has precipitated and complicated numerous health crises.

I think what's important to recognize here is that the scientific and technological advances that have proven to be incredibly damaging, or at least potentially so (genetic manipulation, chemical and nuclear weapons, AI) are all "strong-link" advances. They represent some amazing, groundbreaking science. They just happen to be mostly terrible.

So both weak and strong science can be damaging. Why? Because science (especially when paired with business) has a very weak link when it comes to understanding its proper mission of improving well-being, and ethically applying its advances accordingly. If we want to fix science, that's the link we need to focus on.


For Wakefield's paper, lawyers for a lawsuit against the MMR vaccine maker recruited some of the 12 children in his study, and they paid him £400,000, which he did not disclose. So the problem with Wakefield's paper was conflict of interest verging on fraud, rather than uninteresting science. Indeed, if Wakefield hadn't committed fraud and his results were real, we might have called his paper strong-link science: a non-mainstream, controversial paper with big implications if true.

So one thing we might want to do in shifting toward a strong-link model is to reorient scientific policing even more heavily toward scrutinizing papers that are likely to be controversial amongst the public and have immediate policy relevance. A paper saying that the MMR vaccine is linked to autism should get intense scrutiny both for the science and for potential conflicts of interest. It shouldn't take a journalist more than a decade [1] to come out with a public account of the layers and layers of deception.

[1] https://www-healio-com.proxy.lib.umich.edu/news/pediatrics/20120325/wakefield-study-linking-mmr-vaccine-autism-uncovered-as-complete-fraud


You are right; claims such as that the vaccine is linked to autism must be tested scientifically.


The "R" in the MMR vaccine was strongly linked to autism in the 1970s. In that pregnant women who don't have rubella while pregnant are less likely to have children with autism.


Actually, one of the largest problems is viewing ethics and mission as a weak-link problem. That is literally the expression of Puritanism, and it comes with all of Puritanism's downstream consequences.


I'm willing to rethink this - perhaps we should focus on reinforcing good ethics and ignoring bad ethics – but there are many cases where the consequences of moral failure are so dire that you have to treat them as a weak link.

Puritanism? I'm not sure what you mean by that, unless it's in the general sense that Puritans were moral absolutists. You can work to eradicate ethical failure without being an absolutist.


There must be something more in the model; we know where moral absolutism in a weak-link model goes: zero-defect mentality. Ostracism. Circular firing squads.

However, simply ignoring moral violations (the strong-link approach) is also a model for failure, because the violations get rewarded. Cheaters always win, and then they propagate; and while they may not matter in small numbers, eventually the entire community decays.

So we need something else. Game theory offers options for repeated games, and behavioral research adds recommendations of its own. Punishment should be swift and as certain as possible, but it need not be severe. Making a few mistakes with punishments (including punishing the innocent) isn't a huge problem, if the goal is to reduce the willingness to cheat. Mixed strategies with opportunities for second chances are what game theory recommends.
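To make that concrete, here's a toy simulation of the repeated-game point (a sketch with made-up payoffs and error rates, not drawn from any particular source): a forgiving strategy punishes swiftly but mildly and offers second chances, while a zero-defect "grim trigger" punishes forever.

```python
import random

# Classic prisoner's dilemma payoffs for (my move, their move).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def generous_tit_for_tat(opp_history):
    # Punish a defection immediately (swift), but forgive 10% of the
    # time (not severe; second chances -- a mixed strategy).
    if not opp_history or opp_history[-1] == 'C':
        return 'C'
    return 'C' if random.random() < 0.1 else 'D'

def grim_trigger(opp_history):
    # Zero-defect mentality: one defection and cooperation never returns.
    return 'D' if 'D' in opp_history else 'C'

def play(strat_a, strat_b, rounds=1000, noise=0.02):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        # Occasional honest mistakes: intended cooperation comes out wrong.
        if a == 'C' and random.random() < noise:
            a = 'D'
        if b == 'C' and random.random() < noise:
            b = 'D'
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(generous_tit_for_tat, generous_tit_for_tat))  # stays near mutual cooperation
print(play(grim_trigger, grim_trigger))                  # collapses after the first slip
```

Run it a few times: the forgiving pair recovers from every slip, while the grim pair locks into mutual punishment early and stays there.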

This doesn't really look like strong or weak link models.


Very insightful - thanks. I think you’re absolutely right that the “link” model doesn’t work for ethics.

Though I'll also reiterate my initial point about science: its biggest problem isn't that innovation is being held back (even if it is) but that it is so often pursued without much consideration for morality.

Mar 20·edited Mar 20

Yes, while science finds truths, the impact of those truths on people (and animals, plants and everything else) is also important.

Morality is based on the idea that some things are better than others. Science is typically neutral on that matter.

A commonly held moral precept is that people being alive is better than them being dead. Science, as such, doesn't share that opinion.

The "link" model is useful in moral thinking. Food safety (a weak-link issue mentioned in the article) is an important part of Jewish and Islamic morality, and I think should be an important part of morality more broadly. If someone's morality thinks it's important that a terminally ill person should continue to live if at all possible, then the strong-link situation with radical genius doctors mentioned in the article comes into play.

Morality is probably best practiced by not limiting oneself to either weak-link or strong-link thinking; using both is a good idea.

Apr 11, 2023 · Liked by Adam Mastroianni

Fascinating. What I missed was how to allocate finite resources. Whether research slots or funding, not everyone can play.

author

Thanks. Fascinating idea. It would never pass government accountability requirements, though. I hope some wealthy altruists are convinced.

Apr 12, 2023 · Liked by Adam Mastroianni

Thanks Adam - excellent piece. A pedantic point perhaps, but re: John Sulston's quote:

'...a 2:1 [a middling GPA in the British system]...'

A 2:1 (an upper second class honours degree) is actually only one step down from a first class honours degree, the highest grade possible. So, not really 'middling'. ;)

author

Fair point. My intuitions on this may be warped by hanging around a bunch of Oxford students who acted like getting a 2:1 was a tragedy. Maybe that's more reasonable today, when 35% of people get a first and less than 25% get less than a 2:1: https://en.wikipedia.org/wiki/British_undergraduate_degree_classification. It seems like that has increased over time, so I imagine fewer people got firsts in the 60s. I'll change it to "roughly a B" because that seems more accurate.


Excuse me, Dr. Mastroianni, but I'd like to question your idea of "skulls of dead theories in science." Several of the dead theories you cite never truly died: although they were factually wrong IN THEIR IMMEDIATE CONTEXT, they later went on to unintentionally contribute something great to science. Some examples:

"No more invisible rays shooting out of people’s eyes"

Ray tracing is a massive innovation in computer graphics. It involves invisible rays shooting out of the user's camera, which may have been inspired by the theory of invisible rays shooting out of people's eyes. Thus, this dead theory may have gone on to help create a recent computational innovation.
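The structural echo is real, for what it's worth: a ray tracer really does march rays outward from the eye point and test them against the scene. A minimal sketch (a made-up toy scene, not any real renderer's API):

```python
# Cast rays from the "eye" through a pixel grid and test each against one
# sphere -- computationally, extramission rather than intromission.

def hit_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t;
    # a non-negative discriminant means the ray meets the sphere.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4 * a * c >= 0

eye = (0.0, 0.0, 0.0)                     # rays start at the eye...
center, radius = (0.0, 0.0, -3.0), 1.0    # ...and probe the scene

for y in range(2, -3, -1):
    row = ''
    for x in range(-4, 5):
        ray = (x * 0.25, y * 0.25, -1.0)  # ray through pixel (x, y)
        row += '#' if hit_sphere(eye, ray, center, radius) else '.'
    print(row)  # prints a small ASCII disc where rays hit the sphere
```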

"No more measuring someone’s character by the bumps on their head"

Phrenology is now a cautionary story told to young scientists, to warn them of the danger of dedicating themselves to a lifetime of false science. In July 2023, for example, Simon Rose told a room of researchers the story of phrenology at the Cambridge University Evidence-Based Policing Conference. He used this story to illustrate the dangers of blindly believing a methodology. In this light, phrenology can be seen as a massive 'discovery' of fake science.

I guess my point is that many of these "wrong" scientific ideas never really die. They go on to become angels that guide us away from bad practices (e.g., phrenology), inspire future scientific innovations (e.g., ray tracing), or something else entirely. Looked at in this light, the cost of stomping out bad science is even more sinister: it may be stomping out a study that, although incorrect in its immediate context, could later become an angel that inspires a future scientific discovery...


Great article, I thoroughly enjoyed it. However, science is very much *not* a strong-link problem. Progress in science fundamentally relies on an ever-rising floor of tooling; without such a floor of mathematics, applied science, engineering, and a massive corpus of knowledge, science would be nothing more than cocktail-and-cigar banter (or fantastical sketches in da Vinci's notebooks). The emergence of strong-link-esque scientists depends on such tools, devices, methods, and economic structures; without them there would be no platform for those scientists to emerge from. This feedback loop, or bootstrapping, actually iterates on the weak-link premise. In this bootstrapping system, the exploration energy required to advance outstanding unknowns increases superlinearly and not smoothly, which is why progress can often feel stagnant. But as the floor rises, the infinitesimally small innovation a single scientist can hope to achieve in a lifetime is made possible.

The proof is readily demonstrable: sabotage aside, nation-states that do not have legitimately functioning bootstrapping systems (grafting systems only) struggle to replicate even decades-old science.


I agree that science is a strong-link problem, but double-blind peer review is actually beneficial for strong-link problems, because it improves sensitivity at the cost of specificity.

I've written about this here: https://calvinmccarter.writeas.com/peer-review-worsens-precision-but-improves-recall
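A toy way to see that tradeoff (made-up numbers, not the model from the linked post): as a filter's acceptance threshold loosens, recall of the truly great papers rises while precision falls.

```python
import random

random.seed(0)
papers = []
for _ in range(10_000):
    quality = random.gauss(0, 1)           # a paper's true quality
    score = quality + random.gauss(0, 1)   # a reviewer's noisy estimate
    papers.append((quality, score))

great = [p for p in papers if p[0] > 2.0]  # the rare, truly great papers

for threshold in (3.0, 2.0, 1.0):          # stricter -> laxer filters
    accepted = [p for p in papers if p[1] > threshold]
    tp = sum(1 for q, s in accepted if q > 2.0)
    print(f"threshold={threshold}: "
          f"precision={tp / len(accepted):.2f}, "  # accepted papers that are great
          f"recall={tp / len(great):.2f}")         # great papers that get in
```

For a strong-link problem, missing a great paper (low recall) is the costly error, which is the comment's point.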


I do wonder if a possible reason why we treat a strong-link problem as a weak-link one is some version of the Von Restorff effect.

We have more “documentation” about how the world works and history that explains how we figured it out. If we have a lot of it, lots of bad stories will stick out, overriding success stories.

In the past, we had the sense that we could afford the hit of an unfortunate event for the sake of progress. Nowadays we “know” a lot so we feel obliged to protect ourselves from the “bad” instead of silencing it by making the “good” stronger.


If people tend to be conservative and cautious, then knowing more things to fear will lead to more cautious action.


An interesting mental framework with possible applications. It's unfortunate that you immediately use it for a bad example.

Dreck science has consequences: how much bad institutional policy has come out of a small number of scientific papers on subjects like psychological priming, which turned out to be unreplicable (and thus almost certainly not true)? Not to mention that the desire for novel research means that bad research actually drives out good.


You are quite right that the weak-link filter needs to be applied to every paper before it is adopted as truth. Of course, this filter doesn't need to be applied at all stages of a scientific development, and doing so has some downsides. The article is perhaps too focused on the problems caused by not funding the investigation of radical hypotheses.

There were many edible plants that people didn't bother eating much because they weren't very interesting. One that was only a little more interesting than the others was wild asparagus. An enterprising person decided to cultivate it. Through selective breeding, we now have a strong-tasting vegetable that fills a niche better than any other. A plant with strong traits was focused on, while weaker plants were ignored. A strong-link process.

This process having been completed, it is more important now to ensure that the individual stalks are not contaminated with beetle eggs. A weak-link process.

Apr 17, 2023·edited Apr 17, 2023

>>> 'Of course, it’s also easy to make the opposite mistake, to think you’re facing a strong-link problem when in fact you’ve got a weak-link problem on your hands. It doesn’t really matter how rich the richest are when the poorest are starving..... Whenever we demand laissez-faire, the cutting of red tape, the letting of a thousand flowers bloom, we are saying: “this is a strong-link problem; we must promote the good!” '

Had the world approached economic growth as a weak-link problem, many hundreds of millions more poor people would be facing starvation today than actually are. There are very strong natural experiments that demonstrate this clearly: China pre- and post-1979, India pre- and post-1991, North and South Korea, West and East Germany.

If you want to 'find what's true and make it useful', learn more economics before throwing opinions out there.


"It doesn’t really matter how rich the richest are when the poorest are starving" was the position of people like Deng rather than those like Mao. The author is agreeing with you - the starving socialist countries you list were overly fixated on "how rich the richest are". They were too focused on ensuring no-one (other than the ruler) was rich, creating a situation where the poor starved.


If you're right and the author agrees with my position, I wish they had written more clearly.


Great post, Dr. Mastroianni. I like how you took the concept from philanthropy and applied it to science and research. I hope you don't mind if I apply the lessons from your post to my writings on leadership.

I have some concerns that you dismissed giant replication projects as weak-link problems. What I've found in my line of work is that scientific research (even bad, non-reproducible research) is used as the foundation of policies and procedures that take decades to un-learn. Take cybersecurity policy (related to password strength) and building temperature standards, for example. Both are widely known by practitioners to be horribly outdated (women in my office bring electric heating blankets to avoid shivering in their cubicles), yet these policies persist. A society that follows ill-conceived rules for decades does real, measurable harm.

In the age where a lie can get halfway around the world before the truth can get its pants on, would you agree that some money spent on replication studies is worthwhile, if those studies can discredit bad science and stop people from relying on it? I understand that, on a long enough time horizon, good science will prevail. But if it takes a major calamity (like a 9/11-scale disaster) to motivate structural change, would you agree that an ounce of prevention is worth a pound of cure? We can certainly let time filter out bad science, bad books, and bad music...but what if it's cost-effective to speed up the search for truth?

After all, knowing what NOT to do is as important as knowing what SHOULD be done.


Maybe math is different. I have spent 150 hours over the last 10 years reviewing math papers, mostly for my favorite math journal. I check every line looking for mistakes. I feel like over 90% (maybe over 99%) of the theorems in the papers that I have reviewed are correct and because of that, future researchers can expect that 95% (maybe more) of the theorems that they use from our journal are correct.

Sometimes mistakes happen even with good authors and good reviewers. I remember years ago when I found two errors in the proof of a theorem in a published article in our journal, written by a friend. I sent an email to the author on a Friday. Within 24 hours, he had fixed one of the errors, but on Monday he had to admit that the other error was uncorrectable. (I ended up finding a counterexample to his idea, possibly a lemma.) We spent the next three months trying to find an alternative proof, and we were quite happy when we found one. We published a correction along with the corrected proof. The theorem was correct, but the original proof was wrong.

My point is that in mathematics, we want all the published theorems to be true so that future authors can depend on the veracity of the published theorems. Some math journals are better than others and I am guessing that only 90% of the published theorems are perfectly true, but we reviewers do try to move that up to 99%.
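A back-of-the-envelope calculation (a simplifying sketch using the rough figures above, and treating theorems as independent) shows why pushing 90% toward 99% matters once proofs stack on prior results:

```python
# If a new proof leans on k previously published theorems, each correct
# independently with probability p, then all of its foundations are sound
# with probability p**k.
for p in (0.90, 0.99):
    for k in (5, 10, 20):
        print(f"p={p:.2f}, k={k:2d}: P(all sound) = {p**k:.2f}")
# At 90% per theorem, a proof resting on ten prior results has only a ~35%
# chance that all ten are true; at 99% per theorem, that rises to ~90%.
```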


Your comment is a stark reminder of the two types of problem that are relevant here - given a well-defined theory, one can examine postulated theorems that are stated in that theory, and hopefully ascertain whether the particular theorem is correct according to the rules of the theory. There's a set of rules, and the examiner simply checks everything against the pre-existing rules in a mechanical way. There's no opinion involved.

On the other hand, given a postulated new theory, or a postulated change to an existing theory, there isn't any pre-existing set of rules that can be used in a mechanical way to determine acceptance or otherwise of the postulates. And in these cases, "peer review" boils down to some combination of opinion, career interest, maintenance of the status quo, rejection of novelty, etc. Which is why mathematics is today stuck with arcane notions such as set theory, which began life inspired by religion and is chock-full of contradictions that are euphemistically called paradoxes.
