26 Comments

What's your argument against doing grant lotteries? You mention Goodhart's law up front. But part of that idea is that if you replace suboptimal metrics with other metrics, people are just going to figure out other ways of optimizing for those new metrics, even if they're fuzzier and less specific. So why not go all the way toward the unhackable? Maybe there's a compromise in there—with a threshold for who counts as a sufficiently trustworthy / competent applicant, or a weighted probability based on some subset of your proposed criteria.

The main problem that I have with your Trust Windfall argument is that it sounds, in essence, like the way being rich works. You get a little bit rich. This gets you closer to other rich people, who begin to trust you more. Then you get more rich, because they're in charge of the assets and these assets get allocated disproportionately to rich-adjacent you via trust. That totally seems like something you could hack! In your example, it's kind of like replacing one Lin-Manuel problem with another. The extent to which lots of people trust him is also going to go up after he wins a Tony. So is the criterion to identify the trustworthy people who are currently most undervalued by the market, e.g., your gulag thespian friend?

To put my concern another way: you point out this is overtly based on choosing people who are your friends. I feel like people will either unconsciously choose in-group members / people they are interpersonally fond of; or they will try to correct for the unconscious bias and choose people who are blatantly different from themselves. Both are hard to square with exactly how much they cloud judgments of "merit". Curious to know what you think about that!

At any rate, I agree with the primary argument of your piece: Oxford degrees are useless.

Lotteries are a great baseline. Can we do better than giving money away completely at random, to anybody?

One way is to have people at least apply for the money, so that we give money only to people who have some use for it. But then we encourage lots of frivolous applications, so we need some way of excluding the worst ones, and now we're applying criteria, and pretty soon we're Goodhart'd again.

I think Windfalls are practically unhackable, for a few reasons. You don't know who the Agents are and they're constantly changing, so you don't know who to hack. The lag time between becoming an Agent and giving out Windfalls could be a matter of hours or days, so you don't have much time to hack even if you discovered one's identity. Plus, Agents should be giving Windfalls based on evaluations of past behavior, not present behavior, so you'd have to be doing your hacks before the Windfalls even reached your social network. And each Agent would have slightly different criteria, so hacks that would work on one Agent might not work on another.

If I become the first Agent, my choices are pretty much locked in. I nominate B as the second Agent, someone whose judgment I trust but who has a different social network. She makes her choices. At this point, even if people know I was an Agent, they don't know who the next Agent is. You could even wait to announce the first round of Windfalls until the second round is selected, wait to announce the second until the third is selected, etc.

I select B because I think she has good judgment and will pick people who would do something broadly beneficial if they had more money. I am sure the kind of people she would pick are different from the ones I would pick. I want her to pick people she likes, because I think being liked by B is a good signal of being a good person. I also know her well enough to know that she won't reward based on liking alone.

Any Agent could betray the trust put in them or just choose poorly, and it's possible that Windfalls would just circle around the same few social networks and keep rewarding the same kind of people. I think Windfalls are less likely to do that than any other form of awarding grants, for the same reason I would let a friend take care of my child: they could do something bad, but I trust them not to.

Yeah, I can see all that. I love the idea of having trust be at the center of the system. Like, these are scientists. They're here because they want to do the most impactful / interesting science. So we should trust them to operate in a way that's consistent with that goal. You've outlined the most important ways that the current grant system prevents them from doing that — most notably, in my opinion, wasting their time by having them spend all day filling out grant applications rather than doing actual science. And because science is by definition about finding out something we don't already know, there's a certain level of fundamental unknowability in what's going to work and what isn't. The money-allocation system should lean into that.

I'm still not convinced by your lottery-skepticism though. I think if you could find a way of eliminating un-serious contenders, then random chance allocation is the only strategy that is truly unhackable. Plus it completely eliminates the problem of having to waste time on applications, since you know nothing you do beyond being eligible for the lottery will affect your chances of winning. I can still see three issues with the Agents-picking-other-Agents scheme: (a) People really aren't going to be biased toward giving money to people from fancy institutions with fancy positions? I dunno, man. I feel like post-Tony LMM is still gonna be disproportionately favored; (b) There's an incentive to pick people who are in your specialized area. Why? Because if they can do more and better science, then those papers will be more likely to cite your work. Gulag guy isn't going to cite your shit. That seems like a pretty strong incentive, at least under the current academic pressures; and (c) It still seems to me like the hack here is "be good at networking". And maybe that isn't quite as bad as "have an impressive looking CV" but still — stereotypically, scientists (especially, perhaps, the most inventive ones) are pretty bad at networking.

Trying to anticipate your counterargument... The whole point of your scheme is that you trust the individual Agents to say "ah, here's someone whose work deserves credit." And then because of that interpersonal trust, the recipient is rewarded. It doesn't take a big social network. It just takes one single person to acknowledge an individual researcher's value. So in the fullness of time, you'd eventually explore every node in the network. To which I'd reply: yeah, maybe.

I think potentially one larger point here is what incentivizes otherwise decently-motivated people to hack the system. That definitely has something to do with highly-contested competition for scarce resources. Like, in the early days of cognitive science (say, the 1950s) the funding system worked pretty well! Because it was a genius funding scheme? Of course not. It was because the ratio of available money to people vying for it was super generous. In the 1970s, that ratio got fucked up and academia hasn't recovered since.

So I'm 100% on board with your argument that the funding system is screwed up. And I'm also totally on board with trying to use "trust" as the ultimate outcome/metric within a re-vamped scheme. But I'm not 100% sure that any scheme, no matter how well-thought-out it is, is going to solve the problem of scarce resources as effectively as figuring out how to flood the system with more resources (... or fewer people).

I think the main problems you outline are problems with the scientific establishment, and my hope would be that Windfalls have no tie to it at all; they pop in and out at the whims of Agents. If I were the first Agent, the second one would be outside science.

I think most of the scientific establishment itself exists as one big funding hack. So long as there are big pots of money that you could get if you hack hard enough, it makes sense to stick around and try to get them. I think this encourages a parasitic form of science that asks "what can I do to get funded?"

I know there's a lot of debate about whether scientific progress has slowed down since the 70s, but it seems like a massive increase in investment should have created an indisputably massive leap in progress. The fact that it didn't suggests that science isn't a system where dumping more money in makes progress come out. So I'm sympathetic to the "fewer people" solution. Christianity was at its best when being a Christian could get you thrown to the lions; maybe science is at its best when it's done by weirdos and independents, rather than corporate types. Breaking the funding apart would be a step towards de-professionalization and de-institutionalization.

To your points:

A) If I get to pick five people to receive $1 million each, I'm not picking the person who has plenty of money already. But maybe other people would be different.

B) That's true if I'm still optimizing for succeeding in a hierarchical system; my hope would be receiving a big influx of cash encourages people to step outside of it. If someone gave me $1 million, I'd say sayonara to academia.

C) You can network your way into someone's acquaintance, but networking into their inner circle is hard.

This is excellent! It's a cool idea and definitely deserving of at least a few turns of the evidence dice.

I like the idea of the evidence dice; it suggests that God has a hand in determining whether we get to learn something or not.

Thanks for sharing this Adam--I really appreciate the problem statement. One concern with your proposed solutions is that networks tend to be very homophilic. I can easily imagine sincere, well-intended use of social networks to leverage trust working well, but also being very homogenous, and open to challenges that they constrain diversity.

Agreed. I think the same problems pervade conventional grant funding, but each step is institutionalized so the effects of homophily seem more acceptable––committees picking professors to fund, professors picking grad students to mentor, colleges picking high schoolers to admit, teachers agreeing to recommend students, etc. I don't know of any efforts to mitigate homophily at any of these stages that aren't hackable. One reason I believe in this idea is that if I got to nominate an Agent, I know at least a few people who I think would do as well at overcoming their homophilic tendencies as anyone reasonably could––and I expect they know someone who they think could do this too.

Haha, I'm guessing you were applying to the FTX Future Fund? (We were also doing the same for Manifold Markets.) FTX FF is actually already trying something similar to Trust Windfalls with their regranting program: https://ftxfuturefund.org/announcing-our-regranting-program/

You might also appreciate the way Speculation Grants work, which also have this function of "moving trust into the hands of others" https://survivalandflourishing.fund/speculation-grants

The entire space of new grant funding mechanisms seems to be very hot at the moment! I'm personally bullish on Impact Certificates/Retroactive Public Goods Funding, the "new type of philanthropic institution" that Scott alluded to.

Thanks for both! I'm also happy to see these new attempts at philanthropy pop up. I hope that very soon anybody who tries to give $300 million to Yale is so publicly humiliated that they have to do something else.

A "Secret Shopper" method to award grant money? I like the tone of that. As you touched on, the greatest hurdle is for the Agent to maintain that secrecy-anonymity. One blabby remark and the Agent would be the proverbial lottery winner with "friends" and "long lost relatives" coming out of the weeds.

For sure! One possibility is that once you're selected as an Agent you're whisked away to a bunker and you have to make your decisions before you come out (though you're allowed to stay and think as long as you want). Agents shouldn't need much more information than they already have, but if they really need to know something, someone on the outside acquires it for them.

Trust Windfalls remind me of self-organized funding allocation: https://www.science.org/content/article/new-system-scientists-never-have-write-grant-application-again

My own idea to supplement the usual alternatives of flat or random (lottery) funding is a bootstrap method: let anyone apply for a small amount, e.g. $20,000, where the bar for approval is basically being able to spell their own name. Then, if they can demonstrate success or progress (including null results), they unlock the chance to apply for double the money in a similarly streamlined process, and the process repeats.
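The doubling ladder is easy to make concrete. Here's a minimal sketch; the $20,000 entry amount comes from the comment, while the cap and the shape of the progress check are illustrative assumptions:

```python
def next_award(current_award, demonstrated_progress, base=20_000, cap=1_280_000):
    """Bootstrap funding ladder: start small, double on demonstrated progress.

    `demonstrated_progress` stands in for any evidence of success or
    progress, explicitly including null results. The cap is an assumed
    ceiling, not part of the original proposal.
    """
    if current_award is None:
        return base  # first-timers get the entry-level amount, no vetting
    if not demonstrated_progress:
        return 0  # no progress shown: back to applying at the entry level
    return min(current_award * 2, cap)


# A researcher who keeps delivering climbs the ladder geometrically.
award = None
ladder = []
for _ in range(5):
    award = next_award(award, demonstrated_progress=True)
    ladder.append(award)
# ladder == [20000, 40000, 80000, 160000, 320000]
```

The appeal of the geometric schedule is that total exposure to any one unproven applicant stays small: a grantee only reaches large sums after repeatedly clearing the (deliberately low) bar.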

This is cool! Thanks for sharing, I hadn't heard of this. I think this would be a cool experiment. I wish it wasn't tied to being a professor, since every step toward being a professor is also hacked.

In your system, how do you measure success? This has come up a lot in discussions I've had about this idea (see DoctorDT below). Publications and citations are both pretty dismal. Valuing science seems easier decades later––for instance, everybody now pretty much agrees Kahneman and Tversky did something pretty useful in the 70s––but valuing it before it's done or shortly after seems hard, at least when we all have to agree to a number.

While not exactly the same approach, I see something similar successfully happening at work, with funding awarded to more applied, technology-oriented projects.

We have a heuristic where we basically implement this: first time funding requesters will be told up front that they can only expect moderate funding amounts. If they “successfully” (success being an elusive concept in itself, I know) run that project, win some more funding in open competitions and generally prove their worthiness of the funding awarded by making progress and convincing others with their progress to invest as well, the funding awarded will increase.

It’s not exactly personal trust as with the agents, but given the repeated and very direct contact over the years, it gets close.

We even deliberately designed some “entry level” funding schemes to have a facility to test this more systematically with newcomers.

We are quite open about this process and what we want to achieve, so there are inevitably attempts at “grant hacking”. But usually it’s so obvious that we can call it out even if it's done via those in power, and in the rare case of a bad choice, the requesters will fail at the next attempt b/c of non-performance.

It also helps to understand that in order to find the truly outstanding raw diamonds, you must be ready to sift through a lot of sand. If you can do the latter with smaller entry level projects, chances are you’ll still have substantial amounts for the diamonds once uncovered. ☺️

The definition of success is obviously the important implementation detail. I envision it as potentially a sliding scale depending on how much funding is requested, where the lower end of the scale could just be an accepted conference submission or journal publication (not even considering impact or citations). The handling of null results could follow some of the related ideas from https://www.worksinprogress.co/issue/escaping-sciences-paradox/. I don't have a solid idea yet on how to scale this up for research that needs larger amounts of funding, where a more stringent definition and level of success would be desired in practice.

Trust Windfalls are a really interesting idea. I wonder if they would end up captured, though? There's an adage that says "A's hire A's, but B's hire C's", meaning that people who are good-at-what-they-do in a general sense have the skills to recognize and admire other people who are good-at-what-they-do and want to work with them, whereas posers just want to work with people who will suck up to them.

So say you institute a Trust Windfall Foundation and identify your first round of 100 Agents. You do your best to identify people who are trustworthy, but because recognizing ability is hard, only 90 of them are really "A"s and the other 10 are "B" or "C"s. Now in the next round, a bunch of people get funded, and the baton gets passed. The people who get the batons also have the same level of noise as you did, so 81 of them are "A"s, and 19 are "B"s and "C"s. After a few iterations of this simple model, you end up in a situation where almost all of the Agents are not-As, and almost all of the money goes to cronies or people the agents want to suck up to them.
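That decay model is simple enough to write down. A sketch in Python, where the 90% hit rate and the assumption that non-A selectors never pick an A are taken from the comment; the variant where non-A's sometimes pick A's is an added assumption of mine:

```python
def a_fraction(generations, p_a_picks_a=0.9, p_nona_picks_a=0.0):
    """Fraction of 'A' Agents after repeated baton-passing.

    The comment's model: an A selects another A 90% of the time, and a
    non-A never does, so the A-fraction decays geometrically as 0.9**k
    (1.00 -> 0.90 -> 0.81 -> ...). If non-A selectors pick an A with
    some probability p_n > 0, the recurrence
        f' = f * p_a + (1 - f) * p_n
    instead settles at the floor p_n / (1 - p_a + p_n) rather than
    decaying to zero.
    """
    frac = 1.0  # round zero: the Foundation picks all A's by assumption
    for _ in range(generations):
        frac = frac * p_a_picks_a + (1 - frac) * p_nona_picks_a
    return frac
```

Under the comment's own assumptions capture is inevitable; but if even half of the "B"s and "C"s manage to hand the baton to an A, the chain stabilizes with roughly 83% of Agents being A's (0.5 / 0.6), which is one way to formalize the disagreement about how badly the chains degrade.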

1. Is this model accurate? Probably not: generally people are pretty good at understanding other people, and I feel like people are generally pretty trustworthy as long as they're not incentivized not to be. But how do you incentivize the agents themselves to grant money to people who will actually do interesting things with it, rather than to people who are fun to hang out with or will use the money in ways that benefit the agent? How do you incentivize the agents not to reveal that they are agents to their friends and use their granting power to encourage cronyism and sucking up?

2. How to get around this? I suppose you, the Foundation, could keep records on the success of each grant, and terminate germlines that aren't fruitful after a few generations, while seeding new ones. I suppose you could select next-agents from a list of suggested ones, and inform your choice using information from broader sources.

This whole thing sounds a bit like a sampling problem to me. Given the social network, how do you get your money-sampling Monte Carlo chain to spend the most time around the nodes that maximize "Value"? A big part of that is defining "Value".
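The sampling framing can be sketched as a value-weighted random walk; everything here (the network, the `value` scores, the move rule) is an illustrative toy, since defining "Value" is exactly the open problem:

```python
import random


def windfall_walk(network, value, start, steps, rng=random):
    """Toy Monte Carlo chain over a social network.

    At each step the 'money' moves to a neighbor of the current node
    with probability proportional to that neighbor's (assumed) Value,
    so high-Value nodes get visited -- i.e., funded -- more often.
    `network` maps each node to its list of neighbors; `value` maps
    each node to a positive score.
    """
    visits = {node: 0 for node in network}
    node = start
    for _ in range(steps):
        neighbors = network[node]
        weights = [value[n] for n in neighbors]
        node = rng.choices(neighbors, weights=weights, k=1)[0]
        visits[node] += 1
    return visits
```

Even in this toy, the chain only spends time where the `value` function says it should, which is the point of the comment: the mechanism design is secondary to the valuation problem.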

Thanks for these questions and this model! It's a helpful way to think about it.

Whether the chains degrade over time is an empirical question, as is whether they asymptote to a higher level than conventional grants are at now. So your guess is as good as mine, but here's why I feel confident that degradation won't be as bad as your model suggests, and why I think the asymptote would be higher regardless.

1) I think when people are actually distributing windfalls, they'll be giving them to people they know very, very well. If I'm an agent, I'm not giving a million bucks to someone I met at a conference once. These are people I'd trust with my life. I could be wrong about who to trust, but as far as social judgments go, I think this is as good as they get. The rate at which windfalls get misplaced should thus be similar to the rate at which best friends betray each other. So nonzero, but I bet it's way lower than the rate that conventional grants get misplaced.

2) All things operate on chains of trust anyway, and I think this form is less subject to degradation than others. NSF grant committees are made out of academics who got picked by other academics, etc., and they in turn pick academics to receive grants. But part of the way they pick is hacked––applications, interviews, etc.––so I think the NSF grant committee chain is worse than an unhacked chain.

3) Even if the chains go awry sometimes, the chains aren't the only thing differentiating this system from conventional grants. It seems pretty easy to outdo bankrolling scammy master's degrees and p-hacked papers just by giving unrestricted grants and ditching applications. How will we know we're doing better? It's a hard question, though we already don't know how well we're doing with conventional grants because we don't have good measures. But my hope is that the difference will be big enough that you could see it with your bare eyes, however you want to measure it.

We could try to build in lots of incentives, checks, etc. to try to maintain the quality of the system, but if they actually end up being necessary, I'd expect them to get captured just like current grant funding has been captured. I think windfalls either work because trust works, or they don't work at all. And they might not! But I think the problems with the way we do things now are obvious and serious enough that it's at least worth an experiment.

Perhaps one way to mitigate against capture would be to introduce randomness into the choice of agent. For example, 10% of the time the next agent is chosen at random or from a pool of people who wish to be agents and register themselves - no qualifications necessary. Of course, those agents are unlikely to be in the network of trust, so there's perhaps a greater chance of funding being wasted, but also a chance of new, isolated networks of worthy recipients being found.
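This is essentially an epsilon-greedy choice rule. A minimal sketch, where the 10% rate comes from the comment and the names are hypothetical:

```python
import random


def pick_next_agent(trusted_nominee, open_pool, explore_rate=0.1, rng=random):
    """Epsilon-greedy agent selection, as the comment proposes.

    With probability `explore_rate` the next agent is drawn uniformly
    from a self-registered open pool (no qualifications necessary);
    otherwise the current agent's trusted nominee gets the baton. An
    empty pool falls back to the nominee.
    """
    if open_pool and rng.random() < explore_rate:
        return rng.choice(open_pool)
    return trusted_nominee
```

The explore rate trades off wasted funding against the chance of reaching trust networks that the existing chain of agents would never touch, much like the exploration parameter in a bandit algorithm.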

Could the idea of a basic income guarantee be viewed as the state trusting everyone? At least enough to ensure they have a walk away option from any unreasonable situation. Of course, it would not be substantial enough to let everyone do their best work e.g. a nuclear scientist with an intriguing (but unconventional) idea to improve fusion reactors which needs millions to test. But a basic income guarantee might be a good baseline on which to build a trust based grant system.

“Noted anti-vaxxer Naomi Wolf” had me spitting out my tea. You are on the wrong side of morality and history here.

I quite like your proposal. Also, a network of secret funding agents, going around and trying to identify who is essentially a mensch? This is a great idea for a show or a novel.

People being people, though, I'm afraid not everyone can separate how they feel about their friends and the work their friends might be doing. Maybe your point is that that feeling itself should be trusted. But we know people have biases, and that extraverted people have larger social networks... So it might still be a bit unfair, but probably better than the system we have now.

Also, Agents have to evaluate not only who is deserving of a Trust Windfall, but also who would make a good next Agent. What if there were some sort of central group to whom Agents had to file a report, or with whom they could discuss their choices?

Variables here are discoverability, effectiveness, and likely magnitude. The problem, I think, is that it's a classic "pick two" problem. We do see glimmers: Venkatesh Rao's Summer of Protocols experience is an idea, even if it skimps on magnitude. (His circle of people can solve for the first two...within their larger circle, which itself is not large.)

MacArthur, which I'd argue is even more successful than the impressive stats you quote, seems to have solved for this about as well as you might expect. (And I suspect Miranda entered consideration long before his Tony.) That is, to attract the attention of the "invisible college" that selects the award, you need to be truly extraordinary, so the circle knows about you. You need to be personable enough, which is a *fantastic* proxy for effectiveness, and your work needs to be potentially pathbreaking, which also solves for a lot of the other two.

And if anyone tries to start this, I'll tell you what I tell my startups: solve for discoverability first.

I like the idea. It reminds me of a system present in some artistic areas: headhunters go to theaters and mark actors they like, so that when they need someone for a project, they can invite them.

Similarly, in scientific circles there is a related scheme, where a project leader wants to hire a specific person they have encountered. And then the job announcement would be tailored as much as possible for this specific person. This hacks public selection procedures, but in reality implements exactly the proposed trust scheme. Someone (e.g., the grant agency, through a grant application) gave the money to the project leader, and now they determine to whom the money should go.

This may sound a lot like corruption. But when I explicitly asked one person doing this, they responded it is not corruption. It would be had they been getting something inappropriate from that action for themselves - but they are not.

So, in a sense, the system does operate... in the shadows.

Apart from the trust issue, there is one more drawback of the proposed system. The grantee is passive in this approach, so it may stimulate an "everyone is obliged to me" attitude.

But. Having funding schemes based on different principles mitigates that risk.

This is late and a bit superfluous, but, on the MacArthur grants specifically, Thomas Frank had a nice piece in The Baffler a few years back.

' What James English tells us about the countless foundations and academies that make these awards is that they are not simply neutral observers, impartially recognizing merit from some lofty height. They are always engaged in a cultural project of their own—usually to establish themselves as authorities and their own concerns as correct ones.

In pursuit of that project, all award programs face the same problems. Because the reputation of the prize must itself be established for the academy in question to set about judging the merits of others, all prize programs gravitate toward convention. They tend overwhelmingly to reward people whose reputations are already made. Indeed, as the competition between prizes grows more intense, English tells us, the pressure to associate a prize with safe and unquestionably prestigious figures only grows. This is why competing prizes within a field always tend to converge on the same individuals, virtual prize magnets who are fated to stagger through life under the weight of their accumulated laurels.

...

The real promise of the MacArthur Fellowship program is that it does not require grant writing, or applications, or even achievement of a conventional sort. It could theoretically be used to bypass the world of foundation favorites altogether. It could single out worthy individuals who have been unfairly overlooked, lift them up, launch their careers, and force the world to pay attention. That even seems to have been one of the ideas for the program in the beginning.

Well, today the program rarely does any of those things. Instead, and just like nearly every other prize program in the world, it chooses noncontroversial figures and rewards the much-rewarded, giving in to what James English calls “the desire to have already famous and massively consecrated individuals on their list of winners.”

Sort through that list of winners and you’ll find lots of the usual prize magnets, the foundation favorites, the celebrated New Yorker authors, the “20 Under 40” set, the people you heard profiled on NPR a short while ago, the person who just got the National Book Award, or the John Bates Clark medal, or a Ford Foundation Leadership Grant. The resumes of certain winners are thick with honors: Junot Diaz was a literary champion many times over by the time he won in 2012, while Robert Penn Warren, who received one of the very first MacArthur Fellowships, had won the Pulitzer three times by that point in his life and had received more than a dozen honorary degrees.'

https://thebaffler.com/latest/phone-rings-genius-cult
