Hacker News

The underlying idea is admirable, but in practice this could create a market for high-reputation accounts that people buy or trade at a premium.

Once an account is already vouched, it will likely face far less scrutiny on future contributions — which could actually make it easier for bad actors to slip in malware or low-quality patches under the guise of trust.



That's fine? I mean, this is how the world works in general. Your friend X recommends Y. If Y turns out to suck, you stop listening to recommendations from X. If Y happens to be spam or malware, maybe you unfriend X or revoke all of his/her endorsements.

It's not a perfect solution, but it is a solution that evolves towards a high-trust network because there is a traceable mechanism that excludes abusers.
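Mechanically, the vouch-then-revoke dynamic described above is tiny. A toy sketch (hypothetical names and data model, not any real service's API):

```python
# Toy model of the "X vouches for Y; if Y misbehaves, revoke X's
# endorsements" dynamic. All names are hypothetical.
from collections import defaultdict

vouches = defaultdict(set)   # voucher -> set of accounts they vouched for

def vouch(voucher, account):
    vouches[voucher].add(account)

def is_trusted(account):
    # Trusted if at least one voucher still endorses the account.
    return any(account in vouched for vouched in vouches.values())

def revoke_all(voucher):
    """If a voucher's judgement proves bad, drop every endorsement they made."""
    vouches.pop(voucher, None)
```

The traceability comes from keeping the voucher attached to each endorsement: a bad outcome maps back to a specific person whose endorsements can be dropped wholesale.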


That's true. And this is also actually how the global routing of the internet works (the BGP protocol).

My comment was just to highlight a possible set of issues. Hardly any system is perfect, but it's important to understand where the flaws lie so we are more careful about how we go about using it.

BGP, for example, a system that makes the entire internet work, also suffers from similar issues.


Amazing idea - absolutely loving vouch. However, as a security person, this comment immediately caught my attention.

A few things come to mind (it's late here, so apologies in advance if they're trivial and not thought through):

- Threat actors compromising an account and using it to vouch for another account. I have a hunch it could fly under the radar, though admittedly I can't see how it would be different from any other rogue commit by the compromised account (hence the hunch).

- Threat actors creating fake chains of trust, working the human factor by creating fake personas and inflating stats on GitHub to manufacture credibility (similar to how a video's like count influences whether other people like it; I've noticed I may not like a video with a low count that I would have liked if it had millions). Could a threat actor's inflated repo stats exploit the same effect here?

- Can I use this to perform a contribution DDoS against a specific person?


The idea is sound, and we definitely need something to address the surge in low-effort PRs, especially in the post-LLM era.

Regarding your points:

"Threat Actors compromising an account..." You're spot on. A vouch-based system inevitably puts a huge target on high-reputation accounts. They become high-value assets for account takeovers.

"Threat actors creating fake chains of trust..." This is already prevalent in the crypto landscape... we saw similar dynamics play out recently with OpenClaw. If there is a metric for trust, it will be gamed.

From my experience, you cannot successfully layer a centralized reputation system over a decentralized (open contribution) ecosystem. The reputation mechanism itself needs to be decentralized, evolving, and heuristics-based rather than static.

I actually proposed a similar heuristic approach (on a smaller scale) for the expressjs repo a few months back when they were the first to get hit by mass low-quality PRs: https://gist.github.com/freakynit/c351872e4e8f2d73e3f21c4678... (sorry, couldn't link to the original comment due to some GitHub UI issue... it was not showing me the link)


This is a strange comment because this is literally the world we live in now. We just assume that everyone is vouched for by someone (perhaps GitHub/GitLab). Adding this layer of vouching will basically cull all of those very cheap and meaningless vouches. Now you have to work to earn the trust, and if you lose that trust, you actually lose something.


I belong to a community that uses a chain of trust like this for inviting new people. The process for avoiding the bad-actor-chain problem is pretty trivial: if someone catches a ban, everyone downstream of them loses access pending review, and everyone upstream of them loses invite permissions, pending review. Typically, some or most of the downstream people quickly get vouched for again by existing members of the community, and it tends to be pretty easy to find who messed up with a poorly vetted invite (most often, it was the banned person's inviter). The person with poor judgement loses their invite permissions for a bit, and everyone upstream from them gets theirs back.
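The cascade described above can be sketched in a few lines (a hypothetical model, not that community's actual implementation):

```python
# Hypothetical sketch of the ban cascade: a ban suspends everyone
# downstream in the invite tree and freezes invite permissions upstream.

class Member:
    def __init__(self, name, inviter=None):
        self.name = name
        self.inviter = inviter      # who vouched this member in
        self.invitees = []          # everyone they vouched in
        self.active = True          # can participate
        self.can_invite = True      # can vouch for new members
        if inviter:
            inviter.invitees.append(self)

def ban(member):
    member.active = False
    # Everyone downstream loses access pending review...
    for invitee in member.invitees:
        suspend_downstream(invitee)
    # ...and everyone upstream loses invite permissions pending review.
    cur = member.inviter
    while cur:
        cur.can_invite = False
        cur = cur.inviter

def suspend_downstream(member):
    member.active = False
    for invitee in member.invitees:
        suspend_downstream(invitee)
```

The "pending review" steps are where humans re-vouch suspended members and restore upstream invite permissions; that part is a judgement call, not code.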


How is that different from what happens now, where someone who contributes regularly to a project faces less scrutiny than a new person?


The difference is that today this trust is local and organic to a specific project. A centralized reputation system shared across many repos turns that into delegated trust... meaning, maintainers start relying on an external signal instead of their own review/intuition. That's a meaningful shift, and it risks reducing scrutiny overall.


I am still not going to merge random code from a supposedly trusted individual. As it is now, everyone is supposedly trusted enough to be able to contribute code. This vouching system will make me want to spend more time, not less, when contributing.


Trust signals change behavior at scale, even if individuals believe they're immune.

You personally might stay careful, but the whole point of vouching systems is to reduce review effort in aggregate. If they don't change behavior, they add complexity without benefit... and if they do, that's exactly where supply-chain risk comes from.


I think something people are missing here is, this is a response to the groundswell in vibecoded slop PRs. The point of the vouch system is not to blindly merge code from trusted individuals; it's to completely ignore code from untrusted individuals, permitting you to spend more time reviewing the MRs which remain.


Would it not be better to report accounts then?


To whom? It's not against Github's ToS to submit a bad PR. Anyway, bad actors can just create new accounts. It makes more sense to circulate whitelists of people who are known not to be bad actors.

I also like the flexibility of a system like this. You don't have to completely refuse contributions from people who aren't whitelisted, but since the general admission queue is much longer and full of slop, it makes sense to give known good actors a shortcut to being given your attention.
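The shortcut described above amounts to a simple stable partition of the review queue. A minimal sketch, assuming a per-project whitelist (the names and PR shape are made up for illustration):

```python
# Hypothetical triage: PRs from vouched authors jump the queue, while
# unvouched PRs still get reviewed, just after the fast lane drains.

vouched = {"alice", "bob"}   # per-project whitelist (assumed, not a real API)

def triage(prs):
    """Return PRs with vouched authors first, preserving arrival order
    within each group (a stable partition)."""
    fast = [pr for pr in prs if pr["author"] in vouched]
    slow = [pr for pr in prs if pr["author"] not in vouched]
    return fast + slow
```

Nothing is refused outright; the whitelist only decides who gets the maintainer's attention first.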


Sufficiently bad PRs/comments/etc. are against the GitHub Terms of Service, look under section C (Acceptable Use), which links to https://docs.github.com/en/site-policy/acceptable-use-polici..., which then includes https://docs.github.com/en/site-policy/acceptable-use-polici..., on which you'll find multiple actions would describe posting AI slop (or things ancillary to it).

I wouldn't do this where it's not clear there was an issue, but for something like the really poor OCaml PR that was floating around, reporting the user to me seems like a logical step to reduce the flood.


This isn't a centralised reputation system, though, is it? Each project keeps its own whitelist.


That's true.


I don't think the intent is for trust to be delegated to infinity. It can just be shared easily. I could imagine a web of trust being shared between projects directly working together.


That could happen... but then it would end up becoming a development model similar to the one followed by SQLite and FFmpeg, i.e., open for reads but (almost?) closed to external contributions.

I don't know whether that's good or bad for the overall open-source ecosystem.



