My theory is that as more people compete, the top candidates become those who are best at gaming the system rather than those who are actually the best. Someone has probably studied this. My only evidence is job applications at GAFAM and Tinder, though.
I've spent most of my career working, chatting and hanging out with what might be best described as "passionate weirdos" in various quantitative areas of research. I say "weirdos" because they're people driven by an obsession with a topic, but they don't always fit the mold: they lack the ideal combination of background, credentials and personality that would land them on a big tech company research team.
The other day I was spending some time with a researcher from DeepMind, and I was surprised to find that while they were sharp and curious to an extent, nearly every ounce of energy they expended on research was strategic. They didn't write about research they were fascinated by; they wrote and researched on topics they strategically felt had the highest probability of getting into a major conference in a short period of time to earn them a promotion. While I was a bit disappointed, I certainly didn't judge them because they are just playing the game. This person probably earns more than many rooms of smart, passionate people I've been in, and that money isn't for smarts alone; it's for appealing to the interests of people with the money.
You can see this very clearly by comparing the work being done in the LLM space to that being done in the image/video diffusion model space. There's much more money in LLMs right now, and the field is flooded with papers on any random topic. If you dive in, most of them are not reproducible or draw very questionable conclusions from the data they present, but that's not of much concern so long as the paper can be added to a CV.
In the stable diffusion world it's mostly people driven by personal interest (usually very non-commercial personal interests), and you see tons of innovation in that field but almost no papers. In fact, if you really want to understand a lot of the most novel work coming out of the image generation world, you often need to dig into PRs made by anonymous users with anime-themed profile pics.
The bummer of course is that there are very hard limits on what any researcher can do with a home GPU training setup. It does lead to creative solutions to problems, but I can't help but wonder what the world would look like if more of these people had even a fraction of the resources available exclusively to people playing the game.
This is such a nuanced problem. Like any creative endeavour, the most powerful and significant research is driven by an innate joy of learning, creating, and sharing ideas with others. How far the research can be taken is then shaped by resource constraints. The more money you throw at the researchers, the more results they can get. But there seems to be a diminishing-returns kind of effect as individual contributors become less able to produce results independently. The research narrative also gets distorted by who has the most money and influence, and not always for the better (as recent events in Alzheimer's research have shown).
The problem is once people's livelihoods depend on their research output rather than the research process, the whole research process becomes steadily distorted to optimise for being able to reliably produce outputs.
Anyone who has invested a great deal of time and effort into solving a hard problem knows that the 'eureka' moment is not really something that you can force. So people end up spending less time working on problems that would contribute to 'breakthroughs' and more time working on problems that will publish.
The tragedy is exactly what you said: all that energy, creativity, and deep domain obsession locked out of impact because it’s not institutionally “strategic.”
> I certainly didn't judge them because they are just playing the game.
Please do judge them for being parasitical. They might seem successful by certain measures, like the amount of money they make, but I for one simply dislike it when people only think about themselves.
As a society, we should be more cautious about narcissism and similar behaviors. Also, in the long run, this kind of behavior makes them an annoying person at parties.
There is an implication that passionate weirdos are good by nature. You either add value in the world or you don't. A passionate, strange actor or musician who keeps trying to "make it" but isn't good enough to be entertaining is a parasite and/or narcissist. A plumber who is doing the job purely for money is a value add (assuming they aren't ripping people off) - and they are playing the game - the money-for-work game.
I'm not a plumber, but I think the analogy would require the plumber to be strategic about every job they take on. Within a few months, our plumber would only be plumbing for millionaires, installing golden faucets at extreme price points. I would then stop befriending said plumber.
You have a friend who is a plumber, and he figures a way to serve customers with money at high price points with an honest service, and you kick him to the curb?
Extreme price points, not honest service. It's all fairly hypothetical. I don't know exactly where you'd like the discussion to go. In reality things are always more complicated. I just think that it is generally a good idea to call people out on anti-social behavior. When, where and how exactly to do that differs in every situation.
This take is simply wrong in a way that I would normally just sigh and move on, but it's such a privileged HN typical pov that I feel like I need to address it. If a plumber did plumbing specifically because someone needed it and he would be paid, would you call them a narcissist? If a gardener built a garden how their customer wanted would you call them a narcissist? Most of the world doesn't get to float around in a sea of VC money doing whatever feels good. They find a need, address it, and get to live another day. Productively addressing what other people need and making money from it isn't narcissism, it's productivity.
You are comparing a skilled trade that commands ~100k in annual compensation to positions that have recently commanded 100 million dollars in compensation upon signing, with no immediate productivity required, as this denial of talent to competitors is considered strategic.
You consider the person who expects eventual ethical behavior from people that have 'won' capitalism (never have to labour again) to be privileged.
Please don't read too much into this single word. The comment above mentioned "nearly every ounce of energy they expended on research was strategic", and I was keeping that in mind while writing my remark.
Please read my sibling comment where I expand a bit on what I meant to say.
You dislike them because they don’t benefit you indirectly by benefiting society at large.
The incentive structure is wrong. Incentivizing things that benefit society would be the solution, not judging those who exist in the current system while pretending altruism is somehow not part of the same game.
I agree that the system itself is dysfunctional, and I understand the argument that individuals are shaped or even constrained by it. However, in this case, we are talking about people who are both exceptionally intelligent and materially secure. I think it's reasonable to expect such individuals to feel some moral responsibility to use their abilities for broader good.
As for whether that expectation is "selfish" on my part, I think that question has been debated for centuries in ethics, and I'm quite comfortable landing on the side that says not all disapproval is self-interest. In my own case, I'm not benefiting much either :)
I just don't think so. These exceptionally intelligent people are masters at pattern recognition, logic, hyper-focus, and task completion in a field. Every single thing will tell them don't go against the flow, don't stick your neck out, don't be a hero, don't take on risk. Or you will end up nailed to a cross.
To me this is an insane position to take or to expect from anyone; it's some just-world-fallacy thing perpetuated by too much Hollywood.
I am going to flip the script for a minute. I am a killer, driver, pilot, mechanic, one of the best ones out there; I beat the game, I won. So let me just stop and change the world, for what?
> Every single thing will tell them don't go against the flow, don't stick your neck out, don't be a hero, don't take on risk. Or you will end up nailed to a cross.
Except the situation is more like monkeys and a ladder. The ones "nailing them to the cross" are the same ones in those positions. This is the same logic as "life was tough for me, so life should be tough for you." It's idiotic!
> So let me just stop and change the world, for what?
This is some real "fuck you, I got mine" attitude. Pulling the ladder up behind you.
We have a long history in science of seeing that sticking your neck out, taking risks, and being different are successful tools to progressing science[0]. Why? Because you can't make paradigm shifts by maintaining the current paradigm. We've also seen that this behavior is frequently combated by established players. Why? Because of the same attitude, ego.
So we've created this weird system where we tell people to think different and then punish them for doing so. Yeah, people are upset about it. I find that unsurprising. So yeah, fuck you, stop pulling the ladder up behind you. You're talking as if they just leave the ladder alone, but these are the same people who end up reviewing papers, grants, and are thus the gatekeepers of progress. Their success gives them control of the ladders and they make the rules.
[0] Galileo, Darwin, Gauss, Kepler, Einstein, and Turing are not the only members of this large club. Even more recently we have Karikó who ended up getting the 2023 Nobel prize in Medicine and Akerlof, Spence, Stiglitz who got the 2001 Nobel prize in economics for their rejected work. This seems to even be more common among Nobel laureates!
There is a difference between being selfish in the sense that you want others to contribute back to the society that we are all part of, and being selfish in the sense that you want to compete for exclusive rewards.
You can call this difference whatever you want, but don't pretend that they are morally or effectively equivalent.
Thanks for sharing. I did not know this law existed and had a name. I know nothing about nothing, but it appears that the interpretation of metrics for policies implicitly assumes the "shape" of the domain. E.g., in RL for games we see a bunch of outlier behavior from policies just gaming the signal.
There seem to be two types (a toy sketch follows the list):
- Specification failure: the signal is bad-ish and yields completely broken behavior --> policies reach locally optimal points that phenomenologically do not represent what was expected/desired --> signaling an improvable reward-signal definition
- Domain constraint failure: the signal is still good and optimization is "legitimate", but you are prompted with the question "do I need to constrain my domain of solutions?"
- finding a bug that reduces time to completion of a game in a speedrun setting would become a new acceptable baseline, because there are no rules against finishing the game earlier
- shooting amphetamines before a 100m run would probably minimize time, but other factors will make people consider disallowing such practices.
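To make the first failure mode concrete, here's a minimal toy sketch (everything in it is invented for illustration, not from any real RL library): the proxy reward is a respawning coin, so a reward-maximizing policy farms the coin forever instead of pursuing the intended goal of finishing the level.

```python
# Toy "specification failure": the proxy reward (a respawning coin)
# can be farmed indefinitely, so maximizing reward diverges from the
# intended objective (reaching the finish line).

def run_episode(policy, max_steps=100):
    pos, total_reward = 0, 0.0
    coin_at, goal_at = 3, 10        # coin respawns; goal ends the episode
    for _ in range(max_steps):
        pos += policy(pos)          # a policy maps position to a move of -1 or +1
        if pos == coin_at:
            total_reward += 1.0     # proxy signal: shiny coin
        if pos == goal_at:
            total_reward += 10.0    # intended objective: finish the level
            break
    return total_reward

finisher = lambda pos: 1                    # head straight for the goal
farmer = lambda pos: 1 if pos < 3 else -1   # oscillate around the coin

print(run_episode(finisher))   # 11.0: one coin plus the finish bonus
print(run_episode(farmer))     # 49.0: endless coin farming "wins"
```

The farmer is a perfectly legitimate optimum of the reward as written, which is exactly what signals an improvable reward-signal definition.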
I view Goodhart's law more as a lesson for why we can never achieve a goal by offering specific incentives if we are measuring success by the outcome of the incentives and not by the achievement of the goal.
This is of course inevitable if the goal cannot be directly measured but is composed of many constantly moving variables such as education or public health.
This doesn't mean we shouldn't bother having such goals, it just means we have to be diligent at pivoting the incentives when it becomes evident that secondary effects are being produced at the expense of the desired effect.
> This is of course inevitable if the goal cannot be directly measured
It's worth noting that no goal can be directly measured[0].
I agree with you; this doesn't mean we shouldn't bother with goals. They are fantastic tools. But they are guides. The better aligned our proxy measurement is with the intended measurement, the less we have to interpret our results. We have to think less, spending less energy. But even poorly defined goals can be helpful, as they get refined as we progress. We've all done this since we were kids and we do it to this day. All long-term goals are updated as we progress in them. It's not like we just state a goal and then hop on the railroad to success.
It's like writing tests for code. Tests don't prove that your code is bug-free (you can't write a test for a bug you don't know about: an unknown unknown). But tests are still helpful because they help evidence that the code is bug-free and constrain the domain in which bugs can live. It's also why TDD is naive: tests aren't proof, and you have to continue to think beyond the tests.
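A minimal sketch of that point (the function and its test are invented for illustration): the suite passes, yet a bug survives on an input the tests never constrain.

```python
def mean(xs):
    return sum(xs) / len(xs)      # bug: divides by zero on an empty list

def test_mean():
    assert mean([1, 2, 3]) == 2
    assert mean([10]) == 10

test_mean()  # passes -- but mean([]) still raises ZeroDivisionError
```

The tests constrain where bugs can live without proving they don't exist, which is the whole point.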
If I hadn't seen it in action countless times, I would believe you. Changelists, line counts, documents made, collaborator counts, teams led, reference counts in peer-reviewed journals... the list goes on.
You are welcome to prove me wrong though. You might even restore some faith in humanity, too!
The Zoological Survey of India would like to know but hasn't figured out a good way to do a full census. If you have any ideas they would love to hear them.
Naja naja has Least Concern conservation status, so there isn't much funding in doing a full count, but there are concerns as encroachment both reduces their livable habitat and puts them into more frequent contact with humans and livestock.
If it were a good metric there wouldn't be a few phone books' worth of regulations on what you can do before and during running 100 meters. From banning rocket shoes, to steroids, to robot legs, the 100 meter run is a perfect example of a terrible metric, both intrinsically as a measure of running speed and extrinsically as a measure of fitness.
What do you mean? People start doping or showing up with creatively designed shoes, and you need to layer on a complicated system to decide if that's cheating. But some of the methods are harder to detect, and then some people cheat anyway. Or you ban steroids or stimulants but allow them by prescription for an unrelated medical condition, and then people start getting prescriptions under false pretexts in order to get better times. Or worse, someone notices that the competition can't set a good time with a broken leg.
So what is your argument, that it doesn't apply everywhere therefore it applies nowhere?
You're misunderstanding the root cause. Your example works because the metric is well aligned. I'm sure you can also think of many examples where the metric is not well aligned and maximizing it becomes harmful. How do you think we ended up with clickbait titles? Why was everyone so focused on clicks? Let's think about engagement metrics. Is that what we really want to measure? Do we have no preference over users being happy vs users being angry or sad? Or are those things much harder to measure, if not impossible, and thus we focus on our proxies instead? So what happens when someone doesn't realize it is a proxy and becomes hyper-fixated on it? What happens if someone does realize it is a proxy but is rewarded via the metric, so they don't really care?
Your example works in the simple case, but a lot of things look trivial when you only approach them from a first order approximation. You left out all the hard stuff. It's kinda like...
Edit: Looks like some people are bringing up metric limits that I couldn't come up with. Thanks!
> So what is your argument, that it doesn't apply everywhere therefore it applies nowhere?
I never said that. Someone said the law collapses, someone asked for a link, and I gave an example to prove it does break down in at least some cases, and in many cases once you think more about it. I never said all cases.
If it works sometimes and not others, it's not a law. It's just an observation of something that can happen or not.
You're right. My bad. I inferred that through the context of the conversation.
> If it works sometimes and not others, it's not a law.
I think you are misreading, and that is likely what led to the aforementioned misunderstanding. You're right that it isn't a scientific law, but the term "law" gets thrown around a lot in a more colloquial manner. Unfortunately words are overloaded and have multiple meanings. We do the same thing to "hypothesis", "paradox", and lots of other terms. I hope this clarifies the context. (Even many of the physics laws aren't as strong as you might think.)
But there are many "laws" used in the same form. They're eponymous laws[0], not scientific ones. Read "adage". You'll also find that word used in the opening sentence of the Wiki article I linked, as well as in most (if not all) of the entries in [0].
I disagree with all of those examples; they misunderstand what it means for the metric to break down in the context of the law, but alas. "If you run a different race" lol.
That's the key part. The metric has context, right?
And that's where Goodhart's "Law" comes in. A metric has no meaning without context. This is why metrics need to be interpreted. They need to be evaluated in context. Sometimes this context is explicit, but other times it is implicit. Often people will hack the metric when the implicit rule is not made explicit, and, well, that's usually a quick way to make those rules explicit.
Here's another way to think about it: no rule can be so perfectly written that it has no exceptions.
Could you explain what you think the difference is?
A metric is chosen, and people start to game the system by doing things that make the metric improve while the original intent is lost. Increasingly specific rules/laws have to be made up to make the metric appear to work, but it becomes a lost cause as more and more creative ways are found to work around the rules.
Exactly, that's the definition. It doesn't apply to timing a 100m race. There are many such situations that are simple enough, and with perfect information available, where this doesn't break down: a metric is just a metric and it works great.
Which is not to the detriment of the observation being true in other contexts; all I did was provide a counterexample. But the example requires the metric AND the context.
Do you know certain shoes are banned in running competitions?
There's a really fine line here. We make shoes to help us run faster and keep our feet safe, right? Those two are directly related, as we can't run very fast if our feet are injured. But how far can this be taken? You can make shoes that dramatically reduce the impact when the foot strikes the ground, which reduces stress on the foot and legs. But that might take away running energy, which adds stresses and strains to the muscles and ligaments. So you modify your material to put energy back into the person's motion. This all makes running safer. But it also makes the runner faster.
Does that example hack the metric? You might say yes, but I'm certain someone will disagree with you. There are always things like this that get hairy when you get down to the details. Context isn't perfectly defined and things aren't trivial to understand. Hell, that's why we use pedantic programming languages in the first place: because we're dealing with machines that have to operate void of context[0]. Even dealing with humans is hard because there are multiple ways to interpret anything. Natural language isn't pedantic enough for perfect interpretation.
Do you have an example that doesn't involve an objective metric? Of course objective metrics won't turn bad. They're more measurements than metrics, really.
I'd like to push back on this a little, because I think it's important to understanding why Goodhart's Law shows up so frequently.
*There are no objective metrics*, only proxies.
You can't measure a meter directly; you have to use a proxy like a tape measure. Similarly, you can't measure time directly; you have to use a stopwatch. In a normal conversation I wouldn't nitpick like this, because those proxies are so well aligned with our intended measures and the lack of precision is generally inconsequential. But once you start measuring anything with precision, you cannot ignore the fact that you're limited to proxies.
The situation when we get more abstract in our goals is not too dissimilar. Our measuring tools are just really imprecise. So we have to take great care to understand the meaning of our metrics and their limits, just as we would if we were doing high-precision measurements of something more "mundane" like distance.
I think this is something most people don't have to contend with because frankly, very few people do high precision work. And unfortunately we often use algorithms as black boxes. But the more complex a subject is the more important an expert is. It looks like they are just throwing data into a black box and reading the answer, but that's just a naive interpretation.
Sure, if you get a ruler from the store it might be off by a fraction of a percent in a way that usually doesn't matter and occasionally does, but even if you could measure distance exactly that doesn't get you out of it.
Because what Goodhart's law is really about is bureaucratic cleavage. People care about lots of diverging and overlapping things, but bureaucratic rules don't. As soon as you make something a target, you've created the incentive to make that number go up at the expense of all the other things you're not targeting but still care about.
You can take something which is clearly what you actually want. Suppose you're commissioning a spaceship to take you to Alpha Centauri and then it's important that it go fast because otherwise it'll take too long. We don't even need to get into exactly how fast it needs to go or how to measure a meter or anything like that, we can just say that going fast is a target. And it's a valid target; it actually needs to do that.
Which leaves you already in trouble. If your organization solicits bids for the spaceship and that's the only target, you better not accept one before you notice that you also need things like "has the ability to carry occupants" and "doesn't kill the occupants" and "doesn't cost 999 trillion dollars" or else those are all on the chopping block in the interest of going fast.
So you add those things as targets too and then people come up with new and fascinating ways to meet them by sacrificing other things you wanted but didn't require.
What's really happening here is that if you set targets and then require someone else to meet them, they will meet the targets in ways that you will not like. It's the principal-agent problem. The only real way out of it is for principals to be their own agents, which is exactly the thing a bureaucracy isn't.
I've just taken another step to understand the philosophy of those bureaucrats. Clearly they have some logic, right? So we have to understand why they think they can organize and regulate from the spreadsheet. Ultimately it comes down to a belief that the measurements (or numbers) are "good enough" and that they have a good understanding of how to interpret them. Which with many bureaucracies that is the belief that no interpretation is needed. But we also see that behavior with armchair experts who try to use data to evidence their conclusion rather than interpret data and conclude from that interpretation.
Goodhart focused on the incentive structure of the rule, but that does not tell us how this all happens or why the rule is so persistent. I think you're absolutely right that there is a problem with agents, and it's no surprise that when many people introduce the concept of "reward hacking" they reference Goodhart's Law. Yes, humans can typically see beyond the metric and infer the intended outcome, but some ignore this because they don't care, and so they fixate on the measurement because that is what gives them the reward. Bureaucracies no doubt amplify this behavior, as they are well known to be soul-crushing.
But we should also be asking ourselves if the same effect can apply in settings where we have the best of intentions and all the agents are acting in good faith and trying to interpret the measure instead of just gaming it. The answer is yes. Idk, call it Godelski's Corollary if you want (I wouldn't), but it relates to Goodhart's Law at a fundamental level. You can still have metric hacking even when agents aren't aware of it or intending to do it. Bureaucracy is not required.
In a sense you can do the same thing to yourself. If you self-impose a target and try to meet it while ignoring a lot of things that you're not measuring even though they're still important, you can unintentionally sacrifice those things. But there's a difference.
In that case you have to not notice it, which sets a much lower cap on how messed up things can get. If things are really on fire then you notice right away and you have the agency to do something different.
Whereas if the target is imposed by a far-off hierarchy or regulatory bureaucracy, the people on the ground who notice that things are going wrong have no authority to change it, which means they carry on going wrong.
Or put it this way: The degree to which it's a problem is proportional to the size of the bureaucracy. You can cause some trouble for yourself if you're not paying attention but you're still directly exposed to "hear reason or she'll make you feel her". If it's just you and your boss who you talk to every day, that's not as good but it's still not that bad. But if the people imposing the target aren't even in the same state, you can be filling the morgue with bodies and still not have them notice.
> In a sense you can do the same thing to yourself.
Of course. I said you can do it unknowingly too.
> The degree to which it's a problem is proportional to the size of the bureaucracy.
Now take a few steps more and answer "why". What are the reasons this happens, and what are the reasons people think it is reasonable? Do you think it happens purely because people are dumb? Or smart, but with unintended results? I think you should look back at my comment, because it handles both cases.
To be clear, I'm not saying you're wrong. We're just talking about the concept at different depths.
I don't think the premise that everything is a proxy is right. We can distinguish between proxies and components.
A proxy is something like, you're trying to tell if hiring discrimination is happening or to minimize it so you look at the proportion of each race in some occupation compared to their proportion of the general population. That's only a proxy because there could be reasons other than hiring discrimination for a disparity.
A component is something like, a spaceship needs to go fast. That's not the only thing it needs to do, but space is really big so going fast is kind of a sine qua non of making a spaceship useful and that's the direct requirement rather than a proxy for it.
Goodhart's law can apply to both. The problem with proxies is they're misaligned. The problem with components is they're incomplete. But this is where we come back to the principal-agent problem.
If you could enumerate all of the components and target them all then you'd have a way out of Goodhart's law. Of course, you can't because there are too many of them. But, many of the components -- especially the ones people take for granted and fail to list -- are satisfied by default or with minimal effort. And then enumerating the others, the ones that are both important and hard to satisfy, gets you what you're after in practice.
As long as the person setting the target and the person meeting it are the same person. When they're not, the person setting the target can't take anything for granted because otherwise the person meeting the target can take advantage of that.
> What are the reasons this happens, and what are the reasons people think it is reasonable? Do you think it happens purely because people are dumb? Or smart, but with unintended results?
In many cases it's because there are people (regulators, corporate bureaucrats) who aren't in a position to do something without causing significant collateral damage because they only have access to weak proxies, and then they cause the collateral damage because we required them to do it regardless, when we shouldn't have been trying to get them to do something they're in no position to do well.
> I don't think the premise that everything is a proxy is right.
I said every measurement. That is a key word.
I know we're operating at a level that most people never encounter, but you cannot in fact measure a meter. You can use a calibrated reference tool like a ruler to try to measure distance. But that's a proxy: you aren't measuring a meter, you're measuring with a tool that is estimating a meter. You can get really precise and use a laser. But now you're actually doing a time-of-flight measurement, where a laser is bouncing off of something and you're measuring the time it takes to come back. Technically you're always getting 2x the distance, and either way you're not measuring distance; you're measuring a light impulse (which is going to have units like candelas or watts) and timing it, and then converting those units to meters. You can continue this further and recognize the limits of each of those estimates, which is an important factor if you're trying to determine the sensitivity (and thus error) of your device.
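As a tiny arithmetic sketch of that time-of-flight idea (the numbers are invented for illustration): you record a round-trip time and infer the distance from it; the distance itself is never measured directly.

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def distance_from_round_trip(t_seconds):
    # Halve the round trip: the pulse travels out and back.
    return C * t_seconds / 2

print(distance_from_round_trip(6.671e-9))  # ~1.0 m
```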
So I think you really aren't understanding this point. There is no possible way you can directly measure even the most fundamental scientific units (your best chance is probably going to be the mole, but quantum mechanics is going to fuck you up).
> The problem with proxies is they're misaligned. The problem with components is they're incomplete.
If you pay close attention to what I'm talking about then you might find that these aren't as different as you think they are.
> If you could enumerate all of the components and target them all then you'd have a way out of Goodhart's law.
Which is my point. It isn't just that you can't because they are abstract; you can't because the physical limits of the universe prevent you from doing so even in the non-abstract cases.
I am 100% behind you in that we should better define what we're trying to measure. But this is no different than talking about measuring something with higher precision. Our example above moved from a physical reference device to a laser and a stopwatch. That's a pretty dramatic shift, right? Uses completely different mechanisms. So abstract what you're thinking just a little so we can generalize the concept. I think if you do then we'll be on the same page.
> In many cases
I think you misunderstood my point here. Those were rhetorical questions and the last sentence tells you why I used them. They were not questions I needed answering. Frankly, I believe something similar is happening throughout our conversation since you are frequently trying to answer questions that don't need answering and telling me things which I have even directly acknowledged. It's creating a weird situation where I don't know how to answer because I don't know how you'll interpret what I'm saying. You seem to think that I'm disagreeing with you on everything and that just isn't true. For the most part I do agree. But to get you on the same level as me I need you to be addressing why these things are happening. Keep asking why until you don't know. That exists at some depth, right? It's true for everyone since we're not omniscient gods. My conclusion certainly isn't all comprehensive, but it does find this interesting and critical part where we run into something you would probably be less surprised about if you looked at my name.
But there is no way to know who is truly the 'best'. The people who position and market themselves to be viewed as the best are the only ones who even have a chance to be viewed as such. So if you're a great researcher but don't project yourself that way, no one will ever know you're a great researcher (except for the other great researchers who aren't really invested in communicating how great you are). The system seems to incentivize people to not only optimize for their output but also their image. This isn't a bad thing per se, but is sort of antithetical to the whole shoulder of giants ethos of science.
The problem is that the best research is not a competitive process but a collaborative one. Positioning research output as a race or a competition is already problematic.
Right. Also, the idea that there is a "best" researcher is already problematic. You could have 10 great people on a team, and it would be hard to rank them. Rating people in order of performance on a team is contradictory to the idea of building a great team; i.e., you could have 10 people all rated 10, which is really the goal when building a team.
Yeah, I think this is a general principle. Just look at the quality of US presidents over time, or generations of top physicists. I guess it's just a numbers game: the number of genuinely interested people is relatively constant while the number of gamers grows with the compensation and perceived status of the activity. So when compensation and perceived status skyrocket, the ratio between those numbers changes drastically.
I think the number of genuinely interested people goes up. Maybe the percentage stays the same? But honestly, I think we kill passion for a lot of people. To be cliche, how many people lose the curiosity of a child? I think the cliche exists for a reason. It seems the capacity is in all of us, and at one point we all exercised it.
To some extent I think that’s just human nature, or even animal nature. The optimal explore / exploit tradeoff changes as we age. When we’re children it’s beneficial to explore. As adults it’s often more beneficial to exploit. But you need cultural and organizational safeguards that protect those of us who are more childish and explorative from those that are more cynical and exploitative. Otherwise pursuits of truth aren’t very fruitful.
I have seen absolutely incredible, best-in-the-world type engineers, much smarter than myself, get fired from my FAANG because of the performance games.
I persist because I'm fantastic at politics while being good enough to do my job. Feels weird, man.
It is pretty simple - if the rewards are great enough and the objective difficult enough, at some point it becomes more efficient to kneecap your competitors rather than to try to outrun them.
I genuinely think science would be better served if scientists got paid modest salaries to pursue their own research interests and all results became public domain. So many universities now fancy themselves startup factories, and startups are great for some things, no doubt, but I don't think pure research is always served by this strategy.
> if scientists got paid modest salaries to pursue their own research interests and all results became public domain
I would make that deal in a heartbeat[0,1].
We made a mistake by making academia a business. The point was that certain research creates the foundation for others to stand on; it is difficult to profit off those innovations directly, but by making them public, society at large profits by several orders of magnitude more than you ever could have. Newton and Leibniz didn't become billionaires by inventing calculus, yet we wouldn't have the trillion-dollar businesses and half the technology we have today if they hadn't. You could say the same about Tim Berners-Lee's innovation.
The idea that we have to justify our research and sell it as profitable is insane. It is as if we are unaware of our own past. Yeah, there are lots of failures in research; it's hard to push the bounds of human knowledge (surprise?). But there are hundreds, if not millions, of examples where an innovation results in so much value that the entire global revenue is not enough to repay it, because the entire global revenue stands on that very foundation. I'm not saying scientists need to be billionaires, but it's fucking ridiculous that we have to fight so hard to justify buying a fucking laptop. It is beyond absurd.
I would categorize people into 2 broad extremes: 1) those that don't care two hoots about what others or the system expects of them, and in that sense are authentic, and 2) those that only care about what others or the system expects of them, and in that sense are not authentic. There is a spectrum in between.
Anytime a system gets hyper-competitive and the stakes are high, it starts selecting for people who are good at playing the system rather than those who simply excel at the underlying skill.
That's what happens at the top of most competitive domains. Just take a look at pro sports: guys are looking for millimeters to shave off, and they turn to "playing the game" rather than merely improving athletic performance. Watch a football game (either kind) and a not-small portion of the action is guys trying to draw penalties or exploit the rules to get an edge.