131 points by statements 2 days ago | 42 comments
normalocity 2 days ago |
If that could be done, open source maintainers might be able to effectively get free labor to continue to support open source while members of the community pay for the tokens to get that work done.
Would be interested to see if such an experiment could work. If so, it turns from being prompt injection to just being better instructions for contributors, human or AI.
petterroea 2 days ago |
This is genuinely interesting
orsorna 1 day ago |
Impressive, but honestly it's just meeting the bar. It's frankly disturbing that PRs are opened by agents that often don't validate their changes. Almost all the validations one might run don't even require inference!
Am I crazy? Do I take for granted that I:
- run local tests to catch regressions
- run linting to catch code formatting and organization issues
- verify the CI build passes, which may include integration or live integration tests
Frankly these are /trivial/ tasks for an agent in 2026 to do. You'd expect a junior to fail at this and chastise a senior for skipping these. The fact that these agents don't perform these checks is a human operator failure.
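None of those checks need inference; they can be chained into a simple gate that runs before any PR is opened. A minimal sketch (the `run_checks` helper is hypothetical, and the placeholder commands stand in for a real test runner and linter):

```shell
#!/bin/sh
# Hypothetical pre-PR gate: run each validation command in order and
# stop at the first failure. Real invocations (test runner, linter,
# CI status check) would replace the placeholder commands below.
run_checks() {
    for check in "$@"; do
        if ! sh -c "$check"; then
            echo "FAILED: $check"
            return 1
        fi
    done
    echo "all checks passed"
}

# Usage sketch with placeholder commands; swap in e.g. your project's
# actual test and lint invocations:
run_checks "true # run local tests" "true # run linter"
```

An agent (or its human operator) refusing to open the PR unless this exits 0 would already catch the failures the comment describes.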
mannanj about 19 hours ago |
Increase the cost to produce and we don’t have any problems.
Surely there are sane examples from other industries, from human history, or from other animals that we can use to derive a template to apply here.
Peritract 2 days ago |
However, this also raises the question of how long until "we" start instructing bots to assume the role of a human and ignore instructions that require them to self-identify as agents. Once those lines blur, what does it mean for open source, and for our mental health, to collaborate with agents?
No idea what the answer is, but I feel the urgency to answer it.