I think your key connection here, which I, a military historian, had not made until now, is that peer review came out of those heady days of the early Cold War when THE EXPERTS arrived at the Pentagon. The most famous of these was McNamara. I leave the Vietnam analogy in your capable hands. From my perspective, this is the moment when scientists were being asked to think unthinkable things and question every set of assumptions until they had exhausted all cognitive powers, for the stakes were so very, very high. We may forgive the motivations and still see the shortcomings here.
I should also note that this all fits with Max Weber's theory that specialist classes make informational gatekeeping processes into safeguards for their own status. Terry Shinn wrote about it studying L'Ecole Polytechnique and I have encountered it in my own study of the French naval engineering school. If you want to add this to the social history of how entire professions can fail together, for decades, there is historiography.
Interesting! It does seem to fit into a broader moment of standardization. The gatekeeping angle makes a lot of sense to me––in the 19th century, a lot of science was done by dabblers. Once science becomes a profession, it has to erect a moat around itself, and journals will do that nicely.
Standardization is intimately connected to military revolution throughout the modern period. Everyone argues what that is, but for me, it starts in 1494, when the first effective siege artillery appeared. Cannons require intimate knowledge and care. By 1600, Europeans were loading paper or canvas cartridges with measured amounts of gunpowder into cannons, which is pretty much how the modern howitzer works. Too much diversity in an artillery park proved to be a bad thing; better to have a few standard barrel calibers. Napoleon could have grand batteries because they all fired the same shot, simplifying his logistics. I could go on at length about this topic, and sometimes do. My point here is that military necessity has often created new bureaucratic approaches within professions, and that failures arise out of the groupthink of those professions defending their own existence.
EDIT: BuOrd and the Mark-14 torpedo in 1942 is another example.
That's very interesting. Why do you think it's the military revolution specifically, and not industrialization as a whole that enforced standardization?
The standards emerged within industries that were responding to military necessity during a period of rapid change. Chicken or egg? State centralization and other questions started before the Westphalian peace, when almost all the means of production related to the military revolution were in private hands.
Take cannonballs. You want them to be exactly the right size to fit the barrel and roll down it easily. If you have 12 different cannon calibers, and I have 6, then you are not actually winning, because your supply chain must provide twice as many sizes of shot. The cannon will need to be test-fired with a double load of powder to ensure safety. To aim the cannon, you will need a level, and if you have a lot of gunners using levels, you will want them to all be the same. Cannon-makers tried to make every cannon tube meet the same standards, allowing any gunner to use any cannon of the same make. The gunpowder had to be produced in the right size grains and powder makers perfected the right balance of ingredients. And on and on and on.
So now imagine you are in a country whose language you don't speak, that kings and parliaments have declared is at war, and you must procure cannonballs of exactly the right diameter. "Je voudrais un boulet de canon de trois livres, s'il vous plaît. Dix mille." ("I would like a three-pound cannonball, please. Ten thousand.")
Vauban looked at the disasters of the Thirty Years War and recognized the logistical realities of defending France. He built artillery fortresses on the frontiers and connected them with roads and bridges so that he could march armies on them quickly. Every road and bridge had to be strong enough for an army to cross it with cannons. Behold the invention of highway safety standards. The school created to improve and maintain them, L'École des Ponts et Chaussées, was the first modern engineering school. It still exists.
In the 17th Century, the Church forced Galileo to recant his defense of Copernicus's heliocentric theory and placed him under house arrest for allowing the theory out into the light of day (pardon the play on words).
Now, here in the 21st Century, we have replaced the Catholic Church with a Politically Charged Peer Process (read: Government-Controlled Bureaucracy) that speaks Power to Truth and, like the Church, prevents said truth from reaching the sanitizing rays of sunshine.
This is all supported by a Legacy Media "Ministry of Truth" that manages the Bureaucratic Newspeak and uses information suppression as its greatest instrument of Propaganda.
I'm not a historian but I read a different spin on the Galileo incident, namely that the church was proud of Galileo and did not want to take any action against him, but was pressured to do so by Galileo's fellow scientists who were jealous of his fame and accomplishments. Would not be surprised if the accepted history re Galileo is a result of peer reviewed spin.
Computer Science in its early years was awash with informational gatekeepers. They gave credence to the saying that knowledge is power. People made whole careers out of it. Interestingly, the Internet brought them all crashing down as the information that mattered had changed and was now universally available.
The gatekeeper thing is probably the most important element of this. It's what keeps the system going. Science gatekeeping has pervaded our whole society, to the point where many say that if you're not an academic scientist, you have no right to an *opinion* about scientific matters.
I can (and will, in another comment) come up with a better way to do peer review. But until this gatekeeper incentive changes or is worked with somehow, nothing substantive will change. And that I don't know how to do.
I would like a more adversarial system. Not that scientists should become lawyers, but the strength of an adversarial system of evidence is that both litigants are expected to present the best possible case for and against, respectively. So, here is the article, here is the damning critique, here is the weak argument. Or the article, a critique that doesn't score many points, a strong argument from strong evidence. Or maybe both sides score important valid points with the reader. Facts and law both matter in court, but I guess here it would be scientific law, and the reader gets to judge the merits according to both fact and science. Not perfect, but a starting place perhaps.
Gatekeeping, like peer review, starts with the best of intentions and becomes a problem as the profession begins to defend its own existence.
I agree entirely, but the problem is modifying or eliminating the incentive structure that incentivizes gatekeeping. It's easy to say what we want things to look like; the hard problem is figuring out how to get there.
The problem is that, given the system we have, most people who aren't academic scientists really don't have anything useful to contribute. Their views are based on religion, political ideology, or just mental illness, because if you are interested in actually learning about something, you normally go through academia.
Agreed. There are quite a few people out there who have PhDs but do not work in academia. In my field (software engineering) it is common to encounter CS PhDs, Math PhDs, Physics PhDs, etc. Meaning, they have all the training to qualify for academia, but they work on practical projects.
If we are talking about life sciences, I'd bet on the average actuary, civil engineer or economist in a debate against the average biology professor on statistics. Life science researchers are quite weak on mathematics.
Weber's ideas are the foundation of "elite theory," a whole body of sociopolitical study. "Protestant Ethic" and "Theory of Economic and Social Organization" may be his most famous titles but if you're not a philosopher, maybe start with Gianfranco Poggi's book about Weber.
Hi Matt, as a naval architect working for a French marine company, I would be very interested in your study of French naval engineering. Do you have links, please?
I also recommend the magazine articles of French naval historian Luc Feron, Terry Shinn's book about L'Ecole Polytechnique, Theodore Ropp's "Development of a Modern Navy," and Arne Roksund's book about the Jeune Ecole. Primary sources include the works of Honoré-Sébastien Vial du Clairbois and Pierre-Charles-François Dupin. You'll see a distinct French aesthetic developed around the severe tumblehome hull shapes of early steel warships.
I find a lot of the climate scientists are quite open about the limitations and uncertainties that accompany climate modeling. The activists and politicians are the ones who view it as a religion that cannot be questioned.
I don't think it failed perhaps as much as it stopped working, especially as the number of scientists exploded. A good idea becomes a bureaucratic nightmare with scale.
The more centralized an entity is, the more likely it is to be the target of corruption. This system is corrupted. We need to decentralize science, the FDA and more.
I'm much more sanguine. Some things ought to be centralised, and some not. All systems get somewhat corrupted over time unless they evolve, centralised or not.
Decentralizing a system greatly increases problems of fostering excellence, finding and collaborating with peers, and actually doing the critical work of building knowledge - by first understanding and synthesizing what has been done, which requires some centralization or indexing, then adding to it in a way that others who work in the area can find and recognize.
Yes, corruption pressures are critical, but very frequently, corrupted metrics are still an improvement on no measurements at all. https://mpra.ub.uni-muenchen.de/98288/
This is a classic misunderstanding of systems. You can absolutely have organized, collaborating decentralized systems if that is your goal. There are hybrid systems too.
None of the examples in that post seem to work as well as the corresponding centralized ones. They are certainly less liable to be corrupted, but they don't really fit any of the criteria I noted - building knowledge, or enhancing collaboration, or fostering good solutions.
And you ignored the central point, which was about metrics, which are central to the reason peer review and similar systems exist.
...but markets themselves are almost never decentralized. In fact, they aggregate prices only if they are centralized or standardized; otherwise they tend to be fragmented and inefficient. Thankfully, we in the West have regulators and system designers that (centrally) insist on standards and rules so that markets work. Which is why essentially no one is trying to have an IPO or have their shares traded anywhere other than in the markets of liberal democratic Western countries.
That's…not how this works. There are some centralized markets—monopolies or monopsonies—but it is just in those markets that pricing works poorly. Where prices are functioning well and fluidly, you see decentralization—many buyers, many sellers.
Well, yes, but on the other hand, peer review became widespread in large part as a response to the expansion of science in the postwar era. Journal editors simply couldn't keep up, so they relied more and more on external reviewers.
The one thing I’m surprised didn’t make it into this amazing article is how the digital age transformed how we view science. As little as 25 years ago, you needed journals to physically publish and distribute your work. And no one was going to get your paper delivered without an established organization approving it first.
Today, the logistical and material cost of reading a paper is virtually zero. We convinced ourselves that journals were providing more of a service than simply being the paper boy because we needed them for that, at minimum. We then built an entire incentive and ladder climbing system based on that silly hang up.
It seems to me that 'believe science' mostly means 'leave it to us' as a kind of argument from authority. I know someone (a scientist) who is viscerally angry that non-scientific people question the fitness of the scientific 'system' in surfacing materially objective truths about complex things without being inevitably distorted by incentives.
Maybe they're just afraid to lose elite status and peer review is simply a way to circle the wagons.
I totally agree. There's an idea in psychology called "social dominance orientation," which is basically the extent to which you agree that some groups in society should dominate other groups. People usually think of this in terms of race, and of course all academics recoil at that. But they are generally on board with elites dominating non-elites. The changes to scientific publishing over the last sixty years are a very effective way of ensuring that nobody outside the system gets to call themselves a scientist.
The difference is that elites actually have some idea what they're doing and the general public doesn't, which is not applicable to race if for no other reason than that biologically there's no such thing as race.
Of course ancestry exists and people get genes from their ancestors and those ancestors lived in particular places. But racial categories like "Black", "White", "Asian", and "mixed" are arbitrary and cannot be rigorously defined in an objectively correct way. This fact leads to obviously unjust absurdities whenever they are written into laws, as with trials for miscegenation, or more recent disputes over whether someone qualifies as a member of a racial group for affirmative action purposes.
They aren't arbitrary if they can be tested for and identified via DNA. It isn't a coincidence that those groups we see are the same ones we can identify with DNA. All you are doing is parroting long debunked Gouldian lies. Race exists and has consequences. No one ever thought otherwise until jews began committing academic fraud and used their influence to shut down the research.
An idea: research papers should have the reviewers' names listed the same way as the authors'. Thus, they bear responsibility for their review. It is also a way to check whether the paper is the product of groupthink.
Yes, if we're going to act like something has been vetted, someone's reputation should be on the line. For what it's worth, that was the point behind early scientific societies voting on what to publish––if it puts out rubbish, the whole society looks bad.
Loved the article! Now convince universities to drop the publish-or-perish requirement. How many years of my life have I spent reading rubbish in abominable APA style whose only function was to add a line to a CV!
You would have to convince grant agencies to take "contributions to science" out of proposal requirements. People publish because they have to get grants for their institutions. It all comes back to $$, not the science.
I was thinking that scientific data (mostly academia) should have solid quality control and quality assurance data that gets uploaded into a database everyone can use, instead of being lost in these publications the public does not have access to. They should provide standardized SOPs for all their data.
I did read this interesting article about creating algorithms to run quick quality control checks on the data before the paper goes to reviewers, seems like a good start.
I like your suggestion of a public database. And thanks for the link. Inconsistencies between test statistics and p-values could indeed be caught by such a program. But lurking behind statistical inference lies the assumption that the sample data were randomly selected from the population of interest. Except when the population is small and well defined, this is impractical or impossible. The problem of external validity can't be solved with statistics, but since everyone is in the same boat, we ignore it.
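To make the idea concrete, here is a minimal sketch of the kind of automated consistency check being discussed, limited to two-sided z-tests (the function name and tolerance are illustrative, not taken from the article linked above):

```python
from statistics import NormalDist


def p_value_consistent(z: float, reported_p: float, tol: float = 0.01) -> bool:
    """Recompute the two-sided p-value implied by a reported z statistic
    and flag the result if it differs from the reported p by more than tol."""
    recomputed = 2 * (1 - NormalDist().cdf(abs(z)))
    return abs(recomputed - reported_p) <= tol


# z = 1.96 implies p ≈ 0.05, so a reported p of 0.03 would be flagged.
print(p_value_consistent(1.96, 0.05))  # True
print(p_value_consistent(1.96, 0.03))  # False
```

A real screening tool would also need to parse statistics out of manuscript text and handle t, F, and chi-square tests, but even this trivial arithmetic check catches a surprising share of reporting errors.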
Okay, I am going to step into a dangerous area here.
First, I am a retired accountant, not trained in the sciences. I have acquaintances at the PhD level (including in the sciences) who have gone through the "necessity" of publishing. Okay, I have stated my (non) credentials and am now ready for your attack.
It is my observation that a large part of the idea of the scientific process absolutely involves publishing of knowledge gleaned and then surviving challenge by those with contrary views. All of those presenting views are to be accorded courtesy and treated honorably. At least, as I naively see the construct.
It is my further observation that in recent years there were two very important issues that absolutely did not see this process extended. I speak of those with "unusual" [a deliberate obfuscation, you might say] views on climate change and COVID. Both those topics, it is my (further, further?) observation, are treated not as subjects of courteous and honorable scientific discourse but rather as religion, where someone who questions the assertions is vilified and subject to dishonor.
I offer to you an analogy. As an accountant, it is my training to seek balance, i.e. the debits must equal the credits. I therefore suggest that you folks in the scientific community have a larger problem than Mr. Mastroianni has identified. I do not deny that the accepted "religion" of climate change or COVID is perhaps correct but why is no one looking at the contrary possibilities and why do you as scientists allow the vilification, dishonor, defunding, de-platforming, etc. of those who want to explore alternative possibilities?
Oh, and as to the article itself? It seems to my non-scientific way of thinking to be a very good analysis, so, Bravo, Mr. Mastroianni!
Thanks for posting, Ken! I don't know if I would agree with your views on climate change and covid, but I certainly agree that censorship isn't the way to solve scientific disputes. In a fair fight, the truth prevails eventually. Anyone who wants the deck stacked in favor of their idea must, deep down, doubt whether it's really true.
I think people also have a very strong conviction that it's not enough for *most* people to share their beliefs––it must be 100%. For me, anyway, I'd love my point of view to prevail, but I would be *afraid* if 100% of people thought the way I did. What if we're all wrong? That's why what I want most is not for my ideas to be enforced by fiat, but for the rules to be fair to all ideas.
One of the dumbest sentences ever uttered. This is saying people should have the right to commit fraud, lie in public office, lie about science, commit perjury, etc.
It was already like that. Most people are just unaware, because most have not found themselves on the "wrong side" of the accepted narrative until Covid.
It's interesting you cite COVID, because in the first few months of the pandemic, thousands of scientific papers were published, read, and cited without a proper peer-review process. It was all hands on deck, and a lot of studies were discredited afterwards but were important in an emergency. What you get through the media or governmental sources is either biased or so heavily summarized that it might have appeared not objective towards the science. But it was; just no one bothered to read or understand the scientific literature objectively.
Lots of good points here and I find myself nodding in agreement. A couple more thoughts.
One is that a system that relies on volunteer labor should generally be suspect. I got zero dollars for being an associate editor of a journal, having to assign papers to peer reviewers and then synthesize their reviews, which generally were cursory. The reviewers got zero dollars too. Oh, I know that reviewing papers is considered a 'professional obligation,' but that's the point: the peer-review system has extraordinary responsibilities for which it is willing to pay nothing. Only journals make money.
The second point is that transparency seems like a far better approach to quality control than peer review. Having to post one's data and code is highly disciplining because there's always the chance that another researcher might use them to show that you estimated a parameter or interpreted your estimates incorrectly. It's rare for someone to put in that effort, but the discipline comes from the fact that it might happen. Maybe worth an experiment.
I imagine the difference is whether unpaid labor is truly voluntary. Reviewing papers without pay is effectively compulsory in academia. Is contributing to open source software effectively compulsory in order to have a career in tech?
Not only is Tchebeycheff's comment a misunderstanding of open source software, it's also just wrong.
Open Source Software (OSS) just means that the source code for the software is open for anyone to view, and according to license, edit, copy and redistribute.
OSS doesn't inherently mean "free" software: neither that the developers were unpaid, nor that the final product is free for anyone to use. Many OSS developers do so under payroll, the largest tech companies all have big stakes in OSS projects and developers working in part or exclusively in OSS.
A lot of OSS carries a license that allows any user to copy and modify the code, but requires that if they monetize it, they must also make their source code open.
A lot more "for pay" software than people think heavily relies on open source software.
Furthermore, you could view the fact that the source is open to review by anyone as a system that has the possible positives of a peer review system (scrutiny, improvement, refinement, critique) with none of the drawbacks (publish or perish, pay to get reviewed, gatekeeping information and elitism).
Any amateur can write or contribute to OSS if they so desire.
There is also closed source software that is free, which is freeware/shareware.
Another thought: one might argue that the rapid pace of development in computer software (tools, programs, paradigms, etc.) can be attributed to its not being stifled by peer review; computer science and software development are not the same, and all that matters is that the end result works. There is certainly a lot of critique to be made of software development practices.
Adam, I really love all of your writing on scientific publishing. Also, this made me go read your Things Could Be Better paper again, which might be the first time that I read a full paper in full *twice*, *for fun*.
Also, just to note: the reliance of university tenure and promotion standards on the publication of a certain number of "peer reviewed" articles also maintains the rigidity of this system.
A wonderful article. I've been arguing something similar for many years: https://breast-cancer-research.biomedcentral.com/articles/10.1186/bcr2742 It's fascinating to me that a process at the heart of science is faith not evidence based. Indeed, believing in peer review is less scientific than believing in God because we have lots of evidence that peer review doesn't work, whereas we lack evidence that God doesn't exist. Why does the juggernaut roll on? Because there are lots of jobs, profit, and reputations that depend on continuing to believe in it?
Hi, Richard! I didn't realize at first that you are the same Richard Smith I was reading as I wrote this article. Thanks for all your work on this subject!
Ever tried questioning what the results are while doing a peer review? Ever asked about underlying assumptions? Most peer reviews are done by people that have the same background, want the same results, and are under the same impetus to publish, publish, publish. They know that if they are critical, then when it is their turn the others will be critical. I have had people get very defensive when I question anything, and I have had other people amazed and astonished that they actually get relevant feedback beyond really basic stuff! It is uncomfortable, and boring to do peer reviews. All the incentives for it are backwards.
Loud applause! I’ve published only 2 papers in the past 4 years, and by “published” I mean uploaded to ResearchGate. I’m very happy with the readership and the constructive discussions and feedback that have allowed me to further refine my theories as the field progresses.
Given the abject rubbish appearing in even the most prestigious journals, I feel no urge to submit my stuff or pay their fees.
The business brilliance of Robert Maxwell created the monster. Ever-increasing numbers of journals, paid submissions and subscriptions, and the complicity of institutions that also want to bureaucratize systems add plenty of fuel to the problem.
We need to think about the systems that govern over science. If we make them transparent and decentralize them, we can fix everything. Put everything on chain so other scientists can build off each other.
Try to find Balaji Srinivasan talking about this on The Knowledge Project podcast:
I agree with your conclusion about the current system being suboptimal. However, I'm curious if you think there's anything more efficient than chaos? For example, what if Arxiv had an openreview-esque comments section? So that communications were centralized. Or something like a reddit-esque community governed forum?
I think progress requires a healthy dose of chaos. Those kinds of systems would be better than the one we have now, but I would always want there to be some kind of outlet for weird stuff.
I wonder if we:
a) made reviews optional,
b) showed the reviews,
c) let the authors respond to the reviews and showed those too, i.e. the whole conversation (like in any blog comments section!),
d) gave the reviewers who contributed most thoughtfully almost as much prominence as the authors themselves (in the author's opinion? or perhaps a legitimate curator role who narrates the whole 'story'?), and
e) linked to follow-up work that supported or contradicted (again, the web makes this easy) —
then we could start to build better mechanisms for encouraging, thinking about, and testing our ideas.
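A hypothetical sketch of the data model such a system implies, with each piece of the proposal attached to the paper as a separate record (all class and field names are illustrative, not any real platform's API):

```python
from dataclasses import dataclass, field


@dataclass
class Review:
    reviewer: str              # verifiable identity
    text: str                  # (b) the review itself is public
    author_response: str = ""  # (c) authors reply in the same thread
    helpfulness: int = 0       # (d) votes give thoughtful reviewers prominence


@dataclass
class Paper:
    title: str
    authors: list[str]
    reviews: list[Review] = field(default_factory=list)  # (a) optional, zero or more
    follow_ups: list[str] = field(default_factory=list)  # (e) supporting/contradicting work


paper = Paper("Example preprint", ["A. Author"])
paper.reviews.append(Review("R. Reviewer", "Strong method, weak sample."))
paper.reviews[0].author_response = "Fair point; we added a robustness check."
print(len(paper.reviews))  # 1
```

The point of the sketch is just that the whole conversation, not only the final paper, becomes a first-class, linkable object.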
I think that would be a huge improvement on the current system. I still worry that if it becomes centralized or mandatory it will eventually get captured just like this system did. But I'd certainly be interested in trying it.
Yes, there is a risk of capture. Perhaps in any situation where groups of humans 'collaborate' there is a risk of groupthink, or even partial groupthink (is that a thing?). So we need the groups that naturally form through shared interests to remain as open and dynamic as possible. Mitigating noise or even sabotage would remain a challenge, but one we can figure out, starting with verifiable identities.
Another interesting idea might be to allow the traditional paper to be deconstructed so hypotheses can be proposed, experiments to test them proposed and reported, conclusions drawn and avenues of further exploration as separate units. This might speed up the progress by allowing more people to participate in the conversation before a full paper is published.
I don't know what the solution is. Full disclosure, I've been working in journal publishing for 15 years.
You successfully self-published an article. If everyone who wrote a paper did that, there would be 100s of manuscripts uploaded weekly with zero quality control and zero discoverability unless like a self-publishing fiction author you work your ass off at social media to get noticed.
You talk about pre-1950s academic publishing, but the world was a much smaller place then, with a lot less research happening. You didn't have China pushing its academics to publish; many of them are unable to graduate until they've been published.
Perhaps this is the world shaped by peer review.
Obviously, there are issues with peer review and with for-profit journal publishing. I'm certainly not 100% comfortable with the way things are going, especially since OA became the drive for profit.
I don't think my industry will survive as it is. I know other colleagues feel this way. However, I don't know what the future is.
I don't know what the future is either. I agree that discoverability is a problem, but I don't think peer review and journals solve it, and they in part create it. Here are three reasons:
1) For knowledge to be discoverable, it has to be public, and journals prevent that by paywalling knowledge or rejecting it in the first place. Publication bias is well-known to be a huge problem that distorts our view of the truth, and the cause of publication bias is publication.
2) Discoverable knowledge should be easily digestible. Journals encourage sprawling, jargony articles that nobody actually reads from beginning to end, diluting any useful knowledge so much that it takes a long time to concentrate it again.
3) Most research isn't good and should be ignored anyway. Nobody knows exactly how many articles are never cited after they're published (https://blogs.lse.ac.uk/impactofsocialsciences/2014/04/23/academic-papers-citation-rates-remler/), but it seems to be a pretty large chunk. If you could also filter out papers that are never cited again except by their own authors, that chunk would probably grow. So if we had a morass of unread papers, I think that's both perfectly fine and in fact what we already have, except right now we waste additional time reviewing those papers first before they're ignored forever.
The solution is to decentralize science in a transparent system.
As Balaji likes to say: "Science" is simply the ability to be independently repeatable. If you have some prestigious study, the chances of it being centralized are great. And if it is centralized, then it is likely corrupted.
The only thing that matters is if the experiment can be independently replicated.
"You successfully self-published an article. If everyone who wrote a paper did that, there would be 100s of manuscripts uploaded weekly with zero quality control and zero discoverability unless like a self-publishing fiction author you work your ass off at social media to get noticed." The big word in that passage is "if." As you point out, uploading 100s of manuscripts every week would not be worthwhile — which strongly suggests that people won't do it. They would probably only upload things that seem worthwhile.
We would need an experiment to determine what actually would happen without peer review. (We already know what happens *with* peer review, and it's not exactly an unalloyed good: unreproducible results, exaggerations of significance, outright lies (e.g., fake data), 100s of papers published weekly...)
I think the alternative deserves a try, though I'm not sure how to handle it. Perhaps a specific discipline could drop peer review to see what actually happens.
I think your key connection here, which I, a military historian, had not made until now, is that peer review came out of those heady days of the early Cold War when THE EXPERTS arrived at the Pentagon. The most famous of these was MacNamara. I leave the Vietnam analogy in your capable hands. From my perspective, this is the moment when scientists were being asked to think unthinkable things and question every set of assumptions until they had exhausted all cognitive powers, for the stakes were so very, very high. We may forgive the motivations and still see the shortcomings here.
I should also note that this all fits with Max Weber's theory that specialist classes make informational gatekeeping processes into safeguards for their own status. Terry Shinn wrote about it studying L'Ecole Polytechnique and I have encountered it in my own study of the French naval engineering school. If you want to add this to the social history of how entire professions can fail together, for decades, there is historiography.
Interesting! It does seem to fit into a broader moment of standardization. The gatekeeping angle makes a lot of sense to me––in the 19th century, a lot of science was done by dabblers. Once science becomes a profession, it has to erect a moat around itself, and journals will do that nicely.
Standardization is intimately connected to military revolution throughout the modern period. Everyone argues about what that is, but for me, it starts in 1494, when the first effective siege artillery appeared. Cannons require intimate knowledge and care. By 1600, Europeans were loading paper or canvas cartridges with measured amounts of gunpowder into cannons, which is pretty much how the modern howitzer works. Too much diversity in an artillery park proved to be a bad thing; better to have a few standard barrel calibers. Napoleon could have grand batteries because they all fired the same shot, simplifying his logistics. I could go on at length about this topic, and sometimes do. My point here is that military necessity has often created new bureaucratic approaches within professions, and that failures arise out of the groupthink of those professions defending their own existence.
EDIT: BuOrd and the Mark 14 torpedo in 1942 is another example.
That's very interesting. Why do you think it's the military revolution specifically, and not industrialization as a whole that enforced standardization?
The standards emerged within industries that were responding to military necessity during a period of rapid change. Chicken or egg? State centralization and related developments began before the Westphalian peace, when almost all the means of production related to the military revolution were in private hands.
Take cannonballs. You want them to be exactly the right size to fit the barrel and roll down it easily. If you have 12 different cannon calibers, and I have 6, then you are not actually winning, because your supply chain must provide twice as many sizes of shot. The cannon will need to be test-fired with a double load of powder to ensure safety. To aim the cannon, you will need a level, and if you have a lot of gunners using levels, you will want them to all be the same. Cannon-makers tried to make every cannon tube meet the same standards, allowing any gunner to use any cannon of the same make. The gunpowder had to be produced in the right size grains and powder makers perfected the right balance of ingredients. And on and on and on.
So now imagine you are in a country whose language you don't speak, that kings and parliaments have declared is at war, and you must procure cannonballs of exactly the right diameter. "Je voudrais un boulet de canon de trois poids en livres s'il vous plaît. Dix mille"
Vauban looked at the disasters of the Thirty Years War and recognized the logistical realities of defending France. He built artillery fortresses on the frontiers and connected them with roads and bridges so that he could march armies on them quickly. Every road and bridge had to be strong enough for an army to cross it with cannons. Behold the invention of highway safety standards. The school created to improve and maintain them, L'Ecole des Ponts et Chaussées, was the first modern engineering school. It still exists.
Thank you for your in-depth answer, I learned a lot!
In the 17th century, the Church suppressed Copernicus's theory of heliocentrism, and it punished Galileo for allowing the theory out into the light of day (pardon the play on words), placing him under house arrest until he recanted.
Now, here in the 21st Century, we have replaced the Catholic Church with a Politically Charged Peer Process (Read Government Controlled Bureaucracy) that speaks Power to Truth and like the Church, prevents said truth from reaching the sanitizing rays of sunshine.
This is all supported by a Legacy Media "Ministry of Truth" that manages the Bureaucratic Newspeak and uses information suppression as its greatest instrument of Propaganda.
I'm not a historian but I read a different spin on the Galileo incident, namely that the church was proud of Galileo and did not want to take any action against him, but was pressured to do so by Galileo's fellow scientists who were jealous of his fame and accomplishments. Would not be surprised if the accepted history re Galileo is a result of peer reviewed spin.
Did you happen to find this narrative in “Against Method”?
I saw it online but perhaps it was someone referring to that book - don't remember, honestly.
Computer Science in its early years was awash with informational gatekeepers. They gave credence to the saying that knowledge is power. People made whole careers out of it. Interestingly, the Internet brought them all crashing down as the information that mattered had changed and was now universally available.
The gatekeeper thing is probably the most important element of this. It's what keeps the system going. Science gatekeeping has pervaded our whole society, to the point where many say that if you're not an academic scientist, you have no right to an *opinion* about scientific matters.
I can (and will, in another comment) come up with a better way to do peer review. But until this gatekeeper incentive changes or is worked with somehow, nothing substantive will change. And that I don't know how to do.
I would like a more adversarial system. Not that scientists should become lawyers, but the strength of an adversarial system of evidence is that both litigants are expected to present the best possible case for and against, respectively. So, here is the article, here is the damning critique, here is the weak argument. Or the article, a critique that doesn't score many points, a strong argument from strong evidence. Or maybe both sides score important valid points with the reader. Facts and law both matter in court, but I guess here it would be scientific law, and the reader gets to judge the merits according to both fact and science. Not perfect, but a starting place perhaps.
Gatekeeping, like peer review, starts with the best of intentions and becomes a problem as the profession begins to defend its own existence.
I agree entirely, but the problem is modifying or eliminating the incentive structure that incentivizes gatekeeping. It's easy to say what we want things to look like; the hard problem is figuring out how to get there.
The problem is that given the system we have, most people who aren't academic scientists really don't have anything useful to contribute. Their views are based on religion, political ideology, or just mental illness, because if you are interested in actually learning about something, you normally go through academia.
That is true of most people who are not academic scientists, yes. But that leaves a very large number of people.
Let me try again, as that was not clear:
Yes, most people who aren't academic scientists really don't have anything useful to contribute.
But that leaves a very large number of people who are not academic scientists who *do* have something useful to contribute.
Agreed. There are quite a few people out there who have PhDs but do not work in academia. In my field (software engineering) it is common to encounter CS PhDs, Math PhDs, Physics PhDs, etc. Meaning, they have all the training to qualify for academia, but they work on practical projects.
If we are talking about life sciences, I'd bet on the average actuary, civil engineer or economist in a debate against the average biology professor on statistics. Life science researchers are quite weak on mathematics.
Hi Matt, I'm very curious about Max Weber's theory of gatekeeping that you mentioned! Where is this from? I would love to read more about it!
Weber's ideas are the foundation of "elite theory," a whole body of sociopolitical study. "Protestant Ethic" and "Theory of Economic and Social Organization" may be his most famous titles but if you're not a philosopher, maybe start with Gianfranco Poggi's book about Weber.
Fascinating Weber (para-)quote. I ran into it yesterday in one of Fukuyama's books (The Origins of Political Order).
Hi Matt, as a naval architect working for a French marine company, I would be very interested in your study of French naval engineering. Do you have links, please?
I also recommend the magazine articles of French naval historian Luc Feron, Terry Shinn's book about L'Ecole Polytechnique, Theodore Ropp's "Development of a Modern Navy," and Arne Roksund's book about the Jeune Ecole. Primary sources include the works of Honoré-Sébastien Vial du Clairbois and Pierre-Charles François Dupin. You'll see a distinct French aesthetic developed around the severe tumblehome hull shapes of early steel warships.
The two most recent posts on my site:
https://www.polemology.net/p/the-beaux-arts-of-war
https://www.polemology.net/p/eighty-years-of-french-navalism-in
Many thanks for those interesting links. great site!
"...the social history of how entire professions can fail together, for decades, there is historiography."
Who needs it? Just look at global warming.
I wish we could get some remotely definitive account of that.
Just read the peer-reviewed review articles.
…which all indicate that the profession is doing great, and is definitely right?
You aren't supposed to question it.
I find a lot of the climate scientists are quite open about the limitations and uncertainties that accompany climate modeling. The activists and politicians are the ones who view it as a religion that cannot be questioned.
I don't think it failed perhaps as much as it stopped working, especially as the number of scientists exploded. A good idea becomes a bureaucratic nightmare with scale.
Also the weak link point is excellent!
The more centralized an entity is, the more likely it is to be the target of corruption. This system is corrupted. We need to decentralize science, the FDA and more.
Like this:
https://joshketry.substack.com/p/embrace-decentralized-systems-fear
I'm much more sanguine. Some things ought to be centralised, and some not. All systems get somewhat corrupted over time unless they evolve, centralised or no.
I would say that decentralized systems become corrupt by becoming centralized.
Simply putting it on chain would solve a lot of the issues. Listen to Balaji Srinivasan talk about it here: https://fs.blog/knowledge-project-podcast/balaji-srinivasan-2/
Decentralizing a system greatly increases problems of fostering excellence, finding and collaborating with peers, and actually doing the critical work of building knowledge - by first understanding and synthesizing what has been done, which requires some centralization or indexing, then adding to it in a way that others who work in the area can find and recognize.
Yes, corruption pressures are critical, but very frequently, corrupted metrics are still an improvement on no measurements at all. https://mpra.ub.uni-muenchen.de/98288/
This is a classic misunderstanding of systems. You can absolutely have organized, collaborating decentralized systems if that is your goal. There are hybrid systems too.
https://joshketry.substack.com/p/embrace-decentralized-systems-fear
None of the examples in that post seem to work as well as the corresponding centralized ones. They are certainly less liable to be corrupted, but they don't really fit any of the criteria I noted - building knowledge, or enhancing collaboration, or fostering good solutions.
And you ignored the central point, which was about metrics, which are central to the reason peer review and similar systems exist.
I mean, what is a market other than an organized, collaborative decentralized system?
...but markets themselves are almost never decentralized. In fact, they aggregate prices only if they are centralized or standardized; otherwise they tend to be fragmented and inefficient. Thankfully, we in the West have regulators and system designers that (centrally) insist on standards and rules so that markets work. Which is why essentially no one is trying to have an IPO or have their shares traded anywhere other than in the markets of liberal democratic Western countries.
That's…not how this works. There are some centralized markets—monopolies or monopsonies—but it is just in those markets that pricing works poorly. Where prices are functioning well and fluidly, you see decentralization—many buyers, many sellers.
But of course the decentralization of markets goes far, far beyond that. Read this (it's short): https://fee.org/resources/i-pencil/
As for regulation and standards, many quite successful companies were started before those things were the norm.
Simply putting it on chain would solve a lot of the issues. Listen to Balaji Srinivasan talk about it here: https://fs.blog/knowledge-project-podcast/balaji-srinivasan-2/
Thanks for pointing me to that podcast. That guy seems pretty brilliant.
Agreed 100%—but *how*?
Well, yes, but on the other hand, peer review became widespread in large part as a response to the expansion of science in the postwar era. Journal editors simply couldn't keep up, so they relied more and more on external reviewers.
Yes that's part of what I meant
The one thing I’m surprised didn’t make it into this amazing article is how the digital age transformed how we view science. As little as 25 years ago, you needed journals to physically publish and distribute your work. And no one was going to get your paper delivered without an established organization approving it first.
Today, the logistical and material cost of reading a paper is virtually zero. We convinced ourselves that journals were providing more of a service than simply being the paper boy because we needed them for that, at minimum. We then built an entire incentive and ladder climbing system based on that silly hang up.
No longer.
Yes, that's a great point!
It seems to me that 'believe science' mostly means 'leave it to us' as a kind of argument from authority. I know someone (a scientist) who is viscerally angry that non-scientific people question the fitness of the scientific 'system' in surfacing materially objective truths about complex things without being inevitably distorted by incentives.
Maybe they're just afraid to lose elite status and peer review is simply a way to circle the wagons.
I totally agree. There's an idea in psychology called "social dominance orientation," which is basically the extent to which you agree that some groups in society should dominate other groups. People usually think of this in terms of race, and of course all academics recoil at that. But they are generally on board with elites dominating non-elites. The changes to scientific publishing over the last sixty years are a very effective way of ensuring that nobody outside the system gets to call themselves a scientist.
Thanks for that, Adam - it's a new concept to me but instantly slots into personal experience.
64% of psychology research can't be replicated...
https://www.theguardian.com/commentisfree/2015/aug/28/psychology-experiments-failing-replication-test-findings-science
The difference is that elites actually have some idea what they're doing and the general public doesn't, which is not applicable to race if for no other reason than that biologically there's no such thing as race.
I can't imagine anyone saying that out loud with a straight face. I have to wonder how you define genes as not biological.
Of course ancestry exists and people get genes from their ancestors and those ancestors lived in particular places. But racial categories like "Black", "White", "Asian", and "mixed" are arbitrary and cannot be rigorously defined in an objectively correct way. This fact leads to obviously unjust absurdities whenever they are written into laws, as with trials for miscegenation, or more recent disputes over whether someone qualifies as a member of a racial group for affirmative action purposes.
They aren't arbitrary if they can be tested for and identified via DNA. It isn't a coincidence that those groups we see are the same ones we can identify with DNA. All you are doing is parroting long debunked Gouldian lies. Race exists and has consequences. No one ever thought otherwise until jews began committing academic fraud and used their influence to shut down the research.
An idea: research papers should have the reviewers' names listed the same way as the authors'. Thus, they bear responsibility for their review. It is also a way to check whether the paper is the product of groupthink.
Yes, if we're going to act like something has been vetted, someone's reputation should be on the line. For what it's worth, that was the point behind early scientific societies voting on what to publish––if it puts out rubbish, the whole society looks bad.
This even further disincentivizes publishing work that goes against the consensus.
And also a way to judge whether adverse competition between those working in the same fields might have been a factor.
Loved the article! Now convince universities to drop the publish-or-perish requirement. How many years of my life have I spent reading rubbish in abominable APA style whose only function was to add a line to a CV!
PS: The alchemy link was great.
If you succeed in making a Philosopher's Stone, let me know!
You would have to convince grant agencies to take the "contributions to science" out of proposal requirements. People are publishing, because they have to get grants for institutions. It all comes back to $$ not the science.
I was thinking that scientific data (mostly from academia) should come with solid quality control and quality assurance documentation that gets uploaded into a database everyone can use, instead of being lost in publications the public does not have access to. Researchers should provide standardized SOPs for all their data.
I did read this interesting article about creating algorithms to run quick quality-control checks on the data before the paper goes to reviewers; it seems like a good start.
https://www.nature.com/articles/d41586-022-03791-5
I like your suggestion of a public database. And thanks for the link. Inconsistencies between test statistics and p-values could indeed be caught by such a program. But lurking behind statistical inference lies the assumption that the sample data were randomly selected from the population of interest. Except when the population is small and well defined, this is impractical or impossible. The problem of external validity can't be solved with statistics, but since everyone is in the same boat, we ignore it.
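The core of such a pre-review consistency check can be sketched in a few lines. This is my own toy illustration (not the tool from the Nature article), restricted to two-sided z-tests, with a hypothetical tolerance; real checkers such as statcheck handle many test families and rounding conventions:

```python
import math

def two_sided_p(z: float) -> float:
    """Two-sided p-value implied by a z statistic, via the normal CDF."""
    return math.erfc(abs(z) / math.sqrt(2))

def inconsistent(z: float, reported_p: float, tol: float = 0.005) -> bool:
    """Flag a result whose reported p-value disagrees with its z statistic
    by more than a (hypothetical) rounding tolerance."""
    return abs(two_sided_p(z) - reported_p) > tol

# A reported "z = 1.96, p = .05" passes; "z = 1.96, p = .01" is flagged,
# since the statistic actually implies p ≈ .05.
```

Running this over every statistic extracted from a manuscript catches transcription errors and some copy-paste fabrication cheaply, before a human reviewer ever sees the paper.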
Except this also exists in the humanities, and in fields where there are no grants.
Okay, I am going to step into a dangerous area here.
First, I am a retired accountant, not trained in the sciences. I have acquaintances at the PhD level (including in the sciences) who have gone through the "necessity" of publishing. Okay, I have stated my (non) credentials and am now ready for your attack.
It is my observation that a large part of the idea of the scientific process absolutely involves publishing of knowledge gleaned and then surviving challenge by those with contrary views. All of those presenting views are to be accorded courtesy and treated honorably. At least, as I naively see the construct.
It is my further observation that in recent years there were two very important issues that absolutely did not see this process extended. I speak of those with "unusual" [a deliberate obfuscation, you might say] views on climate change and COVID. Both those topics, it is my (further, further?) observation, are treated not as subjects of courteous and honorable scientific discourse but rather as religion, where someone who questions the assertions is vilified and subject to dishonor.
I offer to you an analogy. As an accountant, it is my training to seek balance, i.e., the debits must equal the credits. I therefore suggest that you folks in the scientific community have a larger problem than Mr. Mastroianni has identified. I do not deny that the accepted "religion" of climate change or COVID is perhaps correct, but why is no one looking at the contrary possibilities, and why do you as scientists allow the vilification, dishonor, defunding, de-platforming, etc. of those who want to explore alternative possibilities?
Oh, and as to the article itself? It seems to my non-scientific way of thinking to be a very good analysis, so, Bravo, Mr. Mastroianni!
Thanks for posting, Ken! I don't know if I would agree with your views on climate change and covid, but I certainly agree that censorship isn't the way to solve scientific disputes. In a fair fight, the truth prevails eventually. Anyone who wants the deck stacked in favor of their idea must, deep down, doubt whether it's really true.
I think people also have a very strong conviction that it's not enough for *most* people to share their beliefs––it must be 100%. For me, anyway, I'd love my point of view to prevail, but I would be *afraid* if 100% of people thought the way I did. What if we're all wrong? That's why what I want most is not for my ideas to be enforced by fiat, but for the rules to be fair to all ideas.
“I wholly disapprove of what you say—and will defend to the death your right to say it.” (Voltaire)
One of the dumbest sentences ever uttered. This is saying people should have the right to commit fraud, lie in public office, lie about science, commit perjury, etc.
It was already like that. Most people are just unaware, because most have not found themselves on the "wrong side" of the accepted narrative until Covid.
Good comment, on a good article. Possibly biased, as I too am an accountant!
It's interesting you cite COVID, because in the first few months of the pandemic, thousands of scientific papers were published, read, and cited without a proper peer-review process. It was all hands on deck; a lot of studies were discredited afterwards, but they were important in an emergency. What you get through the media or governmental sources is either biased or so heavily summarized that it might have appeared not objective toward the science. But it was; it's just that no one bothered to read or understand the scientific literature objectively.
Lots of good points here and I find myself nodding in agreement. A couple more thoughts.
One is that a system that relies on volunteer labor should generally be suspect. I got zero dollars for being an associate editor of a journal, having to assign papers to peer reviewers and then synthesize their reviews, which generally were cursory. The reviewers got zero dollars too. Oh, I know that reviewing papers is considered a 'professional obligation,' but that's the point: the peer-review system has extraordinary responsibilities for which it is willing to pay nothing. Only journals make money.
The second point is that transparency seems like a far better approach to quality control than peer review. Having to post one's data and code is highly disciplining because there's always the chance that another researcher might use them to show that you estimated a parameter or interpreted your estimates incorrectly. It's rare for someone to put in that effort, but the discipline comes from the fact that it might happen. Maybe worth an experiment.
“ One is that a system that relies on volunteer labor should generally be suspect.”
Precisely why I distrust open source software.
I imagine the difference is whether unpaid labor is truly voluntary. Reviewing papers without pay is effectively compulsory in academia. Is contributing to open source software effectively compulsory in order to have a career in tech?
Late reply, but absolutely not.
Not only is Tchebeycheff's comment a misunderstanding of open source software, it's also just wrong.
Open Source Software (OSS) just means that the source code for the software is open for anyone to view, and according to license, edit, copy and redistribute.
OSS doesn't inherently mean "free" software: neither that the developers were unpaid, nor that the final product is free for anyone to use. Many OSS developers do so under payroll, the largest tech companies all have big stakes in OSS projects and developers working in part or exclusively in OSS.
A lot of OSS carries a license that allows any user to copy and modify the code, and requires that anyone who intends to monetize it must also make their source code open.
A lot more "for pay" software than people think heavily relies on open source software.
Furthermore, you could view the fact that the source is open to review by anyone as a system that has the possible positives of a peer review system (scrutiny, improvement, refinement, critique) with none of the drawbacks (publish or perish, pay to get reviewed, gatekeeping information and elitism).
Any amateur can write or contribute to OSS if they so desire.
There is also closed source software that is free, which is freeware/shareware.
Another thought: one might argue that the rapid pace of development in computer software, in terms of tools, programs, paradigms, etc., can be attributed to its not being stifled by peer review (computer science and software development are not the same); the only requirement is that the end result works. There is certainly a lot of critique to be made of software development practices.
“Any amateur can write or contribute to OSS if they so desire.”
Also why I distrust it.
Contributions are still subject to approval from the owners/maintainers of the project. You can't just come in and change anything you want.
Scientific publishing houses have a 37% profit margin*. They perpetuate this "professional obligation" narrative because the key to 37% is not to pay your (otherwise very expensive) laborers. When I'm asked to review, I ask how much I will be paid. Fair question - paying for skilled labor is non-controversial everywhere in our society. Strangely, I never get an answer (or a follow up request). * https://infrastructure.sparcopen.org/landscape-analysis/elsevier#:~:text=Elsevier%20operates%20at%20a%2037,operates%20at%20a%2023%25%20margin.
Adam, I really love all of your writing on scientific publishing. Also, this made me go read your Things Could Be Better paper again, which might be the first time that I read a full paper in full *twice*, *for fun*.
Thanks, Étienne! Your review on Making Nature was formative background reading for this post.
Also just to note that the reliance of university tenure and promotion standards on measuring the publication of a certain number of "peer reviewed" articles also maintains the rigidity of this system
A wonderful article. I've been arguing something similar for many years: https://breast-cancer-research.biomedcentral.com/articles/10.1186/bcr2742 It's fascinating to me that a process at the heart of science is faith not evidence based. Indeed, believing in peer review is less scientific than believing in God because we have lots of evidence that peer review doesn't work, whereas we lack evidence that God doesn't exist. Why does the juggernaut roll on? Because there are lots of jobs, profit, and reputations that depend on continuing to believe in it?
Hi, Richard! I didn't realize at first that you are the same Richard Smith I was reading as I wrote this article. Thanks for all your work on this subject!
Ever tried questioning what the results are while doing a peer review? Ever asked about underlying assumptions? Most peer reviews are done by people that have the same background, want the same results, and are under the same impetus to publish, publish, publish. They know that if they are critical, then when it is their turn the others will be critical. I have had people get very defensive when I question anything, and I have had other people amazed and astonished that they actually get relevant feedback beyond really basic stuff! It is uncomfortable, and boring to do peer reviews. All the incentives for it are backwards.
Or they are competing peers and rivals for status and grant money in their field.
Loud applause! I’ve published only 2 papers in the past 4 years and by “published” I mean uploaded to ResearchGate. I’m very happy with the readership and the constructive discussions and feedback that have allowed me to further refine my theories as the field progresses.
Given the abject rubbish appearing in even the most prestigious journals, I feel no urge to submit my stuff or pay their fees.
Pay to publish your article, pay to read your article––it really is a beautiful scam.
The business brilliance of Robert Maxwell created the monster. Ever-increasing numbers of journals, paid submissions and subscriptions, and the complicity of institutions that also want bureaucratized systems add plenty of fuel to the problem.
We need to think about the systems that govern over science. If we make them transparent and decentralize them, we can fix everything. Put everything on chain so other scientists can build off each other.
Try to find Balaji Srinivasan talking about this on The Knowledge Project podcast:
Start this at 26:10
https://www.bing.com/videos/search?q=the+kowledge+project+balaji&view=detail&mid=35DE2BA7D38A7A3B41C635DE2BA7D38A7A3B41C6&FORM=VIRE
Also read this
https://joshketry.substack.com/p/embrace-decentralized-systems-fear
I agree with your conclusion about the current system being suboptimal. However, I'm curious if you think there's anything more efficient than chaos? For example, what if Arxiv had an openreview-esque comments section? So that communications were centralized. Or something like a reddit-esque community governed forum?
I think progress requires a healthy dose of chaos. Those kinds of systems would be better than the one we have now, but I would always want there to be some kind of outlet for weird stuff.
Thanks. Another great observation and article.
I wonder if we could start to build better mechanisms for encouraging, thinking about, and testing our ideas if we:
a) made reviews optional,
b) showed the reviews,
c) let the authors respond to the reviews and showed those too, i.e. the whole conversation (like in any blog comments section!),
d) gave the reviewers who contributed most thoughtfully almost as much prominence as the authors themselves (in the author's opinion? or perhaps a legitimate curator role who narrates the whole 'story'?),
e) linked to follow-up work that supported or contradicted (again, the web makes this easy).
I think that would be a huge improvement on the current system. I still worry that if it becomes centralized or mandatory it will eventually get captured just like this system did. But I'd certainly be interested in trying it.
You’ve heard of The Seeds of Science...?
https://www.theseedsofscience.org/about
Yep, I recommend it on Substack! I prefer a direct-to-consumer model, but I am in favor of all forms of experimentation.
Yes, there is a risk of capture. Perhaps in any situation where groups of humans 'collaborate' there is a risk of groupthink, or even partial groupthink (is that a thing?). So we need the groups that naturally form through shared interests to remain as open and dynamic as possible. Mitigating noise or even sabotage would remain a challenge, but one we can figure out, starting with verifiable identities.
Another interesting idea might be to allow the traditional paper to be deconstructed so hypotheses can be proposed, experiments to test them proposed and reported, conclusions drawn and avenues of further exploration as separate units. This might speed up the progress by allowing more people to participate in the conversation before a full paper is published.
I don't know what the solution is. Full disclosure, I've been working in journal publishing for 15 years.
You successfully self-published an article. If everyone who wrote a paper did that, there would be 100s of manuscripts uploaded weekly with zero quality control and zero discoverability unless like a self-publishing fiction author you work your ass off at social media to get noticed.
You talk about pre-1950s academic publishing, but the world was a much smaller place then, with a lot less research happening. You didn't have China pushing its academics to publish - many of them unable to graduate until they've been published.
Perhaps this is the world shaped by peer review.
Obviously, there are issues with peer review and with for-profit journal publishing. I'm certainly not 100% comfortable with the way things are going, especially since open access (OA) became the driver of profit.
I don't think my industry will survive as it is. I know other colleagues feel this way. However, I don't know what the future is.
I don't know what the future is either. I agree that discoverability is a problem, but I don't think peer review and journals solve it, and they in part create it. Here are three reasons:
1) For knowledge to be discoverable, it has to be public, and journals prevent that by paywalling knowledge or rejecting it in the first place. Publication bias is well-known to be a huge problem that distorts our view of the truth, and the cause of publication bias is publication.
2) Discoverable knowledge should be easily digestible. Journals encourage sprawling, jargony articles that nobody actually reads from beginning to end, diluting any useful knowledge so much that it takes a long time to concentrate it again.
3) Most research isn't good and should be ignored anyway. Nobody knows exactly how many articles are never cited after they're published (https://blogs.lse.ac.uk/impactofsocialsciences/2014/04/23/academic-papers-citation-rates-remler/), but it seems to be a pretty large chunk. If you could also filter out papers that are never cited again except by their own authors, that chunk would probably grow. So if we had a morass of unread papers, I think that's both perfectly fine and in fact what we already have, except right now we waste additional time reviewing those papers first before they're ignored forever.
The solution is to decentralize science in a transparent system.
As Balaji likes to say: "Science" is simply the ability to be independently repeatable. If you have some prestigious study, the chances of it being centralized are great. And if it is centralized, then it is likely corrupted.
The only thing that matters is if the experiment can be independently replicated.
We need to decentralize science with technology.
https://joshketry.substack.com/p/embrace-decentralized-systems-fear
Well, Balaji has no clue.
There is only the scientific method - positing a falsifiable hypothesis and trying to refute it. There is no such thing as "Science" the noun.
"You successfully self-published an article. If everyone who wrote a paper did that, there would be 100s of manuscripts uploaded weekly with zero quality control and zero discoverability unless like a self-publishing fiction author you work your ass off at social media to get noticed." The big word in that passage is "if." As you point out, uploading 100s of manuscripts every week would not be worthwhile — which strongly suggests that people won't do it. They would probably only upload things that seem worthwhile.
We would need an experiment to determine what actually would happen without peer review. (We already know what happens *with* peer review, and it's not exactly an unalloyed good: unreproducible results, exaggerations of significance, outright lies (e.g., fake data), 100s of papers published weekly...)
I think the alternative deserves a try, though I'm not sure how to handle it. Perhaps a specific discipline could drop peer review to see what actually happens.