102 Comments

If only more academics had your style and originality, I might not have started passing on all requests to review papers years ago. It is indeed incredibly boring.

I'm in a scientific field, but thankfully my job success is not tied to publication quantity. In general, I love presenting and talking about my work in great detail but hate writing papers, though I do it occasionally. That's my perspective as a "producer" of science.

As a "consumer" of science, I long ago gave up on keeping up with the literature in my field. If I tried, I'd have no time to do my actual job. It doesn't help that the vast majority of published papers represent minuscule progress at best, despite the typical overstated claims of originality in their introductions. At worst, they represent results of poorly designed experiments that do not measure what they claim to measure or do not support the claimed conclusions. So I must agree with you that as a method of quality control, peer review has failed. Whether or not peer review is replaced with something else, we have an excessive volume problem that requires a solution for scientists who want to stay current but can't spend hours and hours each week reading the literature.


I love this.

I got an MS degree in computer science, and I'm the kind of person who would probably contribute best to the world by doing academic research. But in my time at graduate school, I felt like the academics were playing some weird, hierarchical game that was only tangentially related to advancing the frontiers of human knowledge and solving difficult problems that would aid humanity.

What I see you doing looks to me like the kind of thing that, at scale, would generate more light than heat.


This is an incredibly refreshing commentary on peer review, thanks for sharing the comments and your thoughts.

Jan 4, 2023 · Liked by Adam Mastroianni

hallelujah thank you for writing this. my friend just sent me your earlier post today. i am here at my desk spending my umpteenth hour of free labor reviewing a manuscript (coming in at a hot 159 pages w/SI) and i am gritting my teeth through every minute. i actually hate it? as i was reading this follow-up post i was thinking how one messed up thing about the process now is that projects, universities, grant funders etc need some kind of metric to gauge project 'success'. of course they use # of publications as that metric, or the most significant one. nevermind whether the project actually created the change it proposed to or had the desired impact on actual people. # of pubs is a mostly fake measure of 'productivity' that justifies a project (and its personnel). i mean, i guess it's not fake as it surely represents a shitton of person hours, but fake as in actual impact or meaning or importance. as university research staff on 100% soft $ mostly from federal grants, i feel locked into this dumb situation. but anyway maybe some of the fear and backlash from the post is like, if we get rid of peer review publications we won't have a tick sheet to evaluate what's good and we will be faced with the issue of confronting our own obsolescence.

Dec 27, 2022 · edited Dec 27, 2022

There are times I think many scientists don't really "believe" in the science they claim to do. The fact that they use the word "believe" is telling, and it's unsurprising that so many these days treat science as something built on a foundation of belief.

They often think of science more as a shield to protect their status from criticism than as the bicycle they must ride to get their actual work done.


Very excited to hear what you have in the works for funding. I know quite a few people working on that exact problem in various ways, including Alexey Guzey and the Arc Institute people. Is there any way I can help brainstorm or introduce you to people?

And yeah, people really can be dicks on the Internet, especially when you break out of the friendly neighborhood of your own newsletter into the broader open combat arena of Twitter/super large newsletters. It gets very exhausting.

Jan 13, 2023 · Liked by Adam Mastroianni

The next place to apply this critique is in grant application evaluation. Some years ago, after twenty years or so of being an NSF grant reviewer at various levels, I was asked to fill out a satisfaction survey. In it, I pointed out that they had the process exactly backward. They hire hundreds of reviewers, fly them to DC, lock them in a hotel for days, and have them individually review thousands of grant applications so that three independent teams of reviewers look at each one. The applications that pass muster are sent on to a higher level of review and looked at again, and then the successful ones are announced -- and then roughly 12% of them are funded because of budget restrictions!

I suggested that instead they should hire grad students or recent grads to read the initial applications and weed out the ones that make no sense at all, then have a lottery to select a fraction of the resulting group for a serious detailed review by a small group of reviewers, and fund the ones that were selected. If they then had some money left for more, they could randomly select another cohort and find some more to fund.

I was disinvited from ever participating in NSF reviews again.


messing with the system will buck entrenched interests, not only of the scholar with prestige to capture, but of the journals with revenue streams to protect and of readers who want ex ante assurance that time spent reading will be justified. (I would also add that in my world (surgery) industry likes the current system too, as a paper in a good journal is a credible token that might be used to endorse their wares.)

to the extent that scarcity creates/reflects value, scarcity will dominate. until the recent past, journal space was rate-limiting; now, bandwidth and attention are critical. it makes sense that with a new coin of the realm, new systems of sorting will be needed


I think you've done a good job critiquing peer review as a barrier to entry and as a service to paper-readers, but that's not how it really functions in practice, for better or worse. Peer review is a system by which random, anonymous paper-writers are able to force other researchers to read their work. In other words, peer review is a service to paper-writers. It is especially a service to paper-writers lacking prestigious affiliations and notable previous work -- authors whose papers would go unread if they just posted on arXiv. So in this sense, peer review is a path to entry, not a barrier to entry.

This is why being a reviewer is so painful: it imposes a uniform prior for paper quality, even though we know that some researchers do much better work than others, and we would prefer to focus on reading their work. This is why even peer-review is not a guarantee of quality: the process ignores said priors, and also reduces the incentive for developing a reputation as someone who only submits quality work.

At the same time, blinded paper reviewing is probably the only way to identify good work coming from new and unknown researchers. In other words, there is a precision-recall tradeoff to peer review. If reviewers were not forced to read papers they otherwise would not, precision would increase but recall would decrease.


I think that if we got rid of peer review tomorrow then rational scientists would rapidly organize an ad hoc system for vetting papers that would look a lot like peer review, just without the for profit publishers. Your analysis misses this because it focuses too much attention on the role of arguments (and experiments, and evidence) and not enough on the role of trust and reputation.

Sure, at the end of the day science is supposed to be about falsifiability. In theory everyone is supposed to be able to replicate all the experiments, in order to verify or refute for themselves. But this takes time and energy that none of us have. Therefore, the overwhelming majority of what we "know" to be true comes second hand, relying on assertions made by people that we trust. We "know" global warming is happening and is man-made. But have *you* gone out and spent 50 years measuring temperatures, measuring levels of CO2, measuring how much is produced by human activity? No: you are relying on others to tell you that it's happening, and to tell you why it's happening.

The same is true within a narrow scientific discipline. Given that anyone can post a PDF on the internet (and often does), none of us in the field have time to read and assess every argument/experiment that is presented. We have to rely on heuristics to determine what we pay attention to and rely on in the unending flow of new material. And the best heuristic we've come up with so far is to rely on the judgement of other experts in the field. This creates a circularity of course: we define experts as people who have done lots of good work, while we rely on the experts to tell us which work is good. This does have flaws---it creates the opportunity for a field to be captured by an oligarchy of non-experts. But we haven't come up with anything better. In particular your proposed approach will certainly let all the papers get published, but it does nothing to address the need for us to filter the good work from bad in order to make progress.

Meanwhile, though imperfect, the existing network-of-trust approach does have some effective self-correction mechanisms. Peer review means that even an expert has their work checked by others. And if they lose their mojo and start submitting bad work? Well, it generally fails to get published, and over time their reputation as experts suffers and they lose their power to influence what is considered "good". Conversely, the graduate student you mention in your post can start with zero reputation, submit work that is double-blind reviewed by experts, get it accepted, and over time accrue reputation as an expert that comes with the power to assess others' work in the field.

A concrete example: I used to have a strong reputation as a theoretician and received many requests to review theoretical work. But ten years ago I shifted areas; requests continued to come in for a while but eventually they trailed off as I lost my status as expert in that field; meanwhile my reviewing work in my new field has ramped up as I have gained a reputation for expertise in it.

You take it as a sign of failure that fraudulent papers get published all the time. I take it as a sign of success that we ultimately *know* those papers are fraudulent. And again, we don't have a better option: if we held up publication of every paper until it was carefully replicated, science would slow to a crawl. Instead, we rely on the trust network. The work that I rely on has been reviewed by (anonymous) people in my community who I don't know, but who have been *selected* by people in my community who I do know. That gives me confidence to trust it. Sometimes that confidence is misplaced, but that imperfection is unavoidable if we want to make progress.

There's nothing unusual about this system; it's the way most human knowledge is constructed. Which I suspect you know better than I do, given your research area. But for those less familiar, I suggest Amy Bruckman's new book, "Should You Believe Wikipedia? Online Communities and the Construction of Knowledge".


Greatly enjoyed this and the previous article on peer review. As you hint at, it's really all about the winners writing history. Fortunately for science, prestige never wins in the long run! (Tho that can be a frustratingly long run.)


Love this commentary Adam.

I’m wondering, what do you think about the value of graduate school / formal academic training? To what extent is graduate school just a system for bringing people into the peer review publishing environment? You went to a good graduate school; was that a valuable experience? Would you recommend it to others who want to do research? Under your vision for publishing, it doesn’t seem like graduate school would be that important... or am I missing something? Basically, how do you square graduate school / academic careers with your vision for publishing?

I finished my BS a couple years ago, and have been constantly going back and forth on going to graduate school. I’m super interested in doing research, but I probably wouldn’t be able to get into a very good grad school, and I am not super optimistic about the academic environment... plus, graduate school is an enormous opportunity cost since I’m already making decent money in industry.


Nobody defends the systemic status quo like a tenured professor. Just as in politics, we find some of the greatest and most positively influential figures in history in academia... And just as in politics, they're drowning in a sea of self-serving hypocrites.


Adam,

I'm a long-time business consultant, the author of 9 books, and have worked in 300+ industries. I am also deeply involved in science, including publishing papers, participating in peer review, and organizing a $10 million technology prize.

Across those 300+ industries, I have never seen people in any other profession more afraid to say what they really think - or to discuss what needs to be discussed - than in professional science.


One upside of publishing (peer-reviewed or otherwise) is permanence. Science is a cumulative enterprise, and an internet full of constantly moving, disappearing, or changing content is no place for a citation. Jonathan Zittrain discussed the scale of this issue here:

https://www.theatlantic.com/technology/archive/2021/06/the-internet-is-a-collective-hallucination/619320/

And I think this is related to an answer to your question, "But why would you stop listening to comments and making your paper better just because it’s now publicly accessible?" Because I might cite it. Then you might make your paper "better" -- okay, maybe it is better, or maybe you actually had it right the first time and have gone off the rails, but either way you may have destroyed the context for understanding my paper. This needs to be weighed against the idea of ever-evolving, unpublished online papers.

In conclusion, all papers should be uploaded to GitHub and their citations should be included as submodules.
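
(Taking the joke at least half-seriously: here is a minimal sketch of what "citations as submodules" could look like, assuming each cited paper lived in its own Git repository. The repository URL, local path, and commit hash below are hypothetical, and the script must be run inside the citing paper's own repository.)

```python
import subprocess

def cite_as_submodule(repo_url: str, path: str, commit: str) -> None:
    """Pin a cited paper's repository to the exact version being cited."""
    # Add the cited paper's repo as a submodule of the citing paper's repo
    subprocess.run(["git", "submodule", "add", repo_url, path], check=True)
    # Check out the specific commit the citation refers to
    subprocess.run(["git", "-C", path, "checkout", commit], check=True)
    # Record the pinned submodule state in the citing paper's history
    subprocess.run(["git", "add", ".gitmodules", path], check=True)
    subprocess.run(["git", "commit", "-m", f"Cite {repo_url} @ {commit}"], check=True)

# Hypothetical example: cite "some-paper" at the revision that was actually read
cite_as_submodule("https://github.com/example/some-paper.git",
                  "citations/some-paper",
                  "abc1234")
```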


15,000 years per year is an extraordinary number, but phrasing it like that really undersells it. Just to put it in perspective, it means that at any given moment, the equivalent of 15 thousand scientists are reviewing around the clock. A normal full-time job has you working about a quarter of the time (5 out of the 21 eight-hour blocks in a week). This means that the peer-review project uses the equivalent of sixty thousand full-time science positions.

Sixty thousand! That's a lot. It's five CERNs. It's bigger than any scientific organization I could find in 2 minutes on Google. If I were employing sixty thousand scientists, I would be expected to show a lot of results.
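
(If you want to check that arithmetic yourself, here it is as a short Python sketch; the inputs are just the figures quoted above, not independently verified.)

```python
# Reviewer time quoted above: 15,000 person-years of reviewing per calendar year.
review_years_per_year = 15_000

# 15,000 years of work compressed into one year is the equivalent of
# 15,000 people reviewing around the clock at any given moment.
continuous_reviewers = review_years_per_year

# A normal full-time job covers about 5 of the 21 eight-hour blocks in a week
# (roughly 40 of 168 hours), i.e. about a quarter of the time.
full_time_fraction = 5 / 21

full_time_equivalents = continuous_reviewers / full_time_fraction
print(round(full_time_equivalents))  # -> 63000, i.e. roughly sixty thousand positions
```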
