Hacker News

I think this is a strawman argument that conflates different uses of AI. I posted a video not long ago in which Andrew Ng tells the AI Startup School that, in their testing, they see roughly a 10x improvement for greenfield prototypes and a 30%-50% improvement in existing production code bases.

So the two groups are talking past one another. Someone has a completely new idea, starts from nothing, and vibe-codes a barely working MVP. They claim they went from 0 to MVP roughly 10x faster than if they had written the code themselves.

Then some seasoned programmer hears that claim, scoffs, and takes the agent into a legacy code base. They run `/init` and make zero changes to the auto-generated CLAUDE.md. They add no additional context files or rules about the project. They ask completely unstructured questions, prompting with the first thing that comes to mind. After a day or two of terrible results they don't change how they use the tool or look for a better approach; instead they write a long blog post claiming the AI hype is unfounded.
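For anyone unfamiliar: the CLAUDE.md that `/init` generates is just a markdown file of project instructions the agent reads at the start of each session. A hand-tuned version might look something like this sketch (the project details below are entirely hypothetical, purely to illustrate the kind of rules people add):

```markdown
# CLAUDE.md

## Project overview
Legacy Django monolith (hypothetical example). Payments logic lives in
`billing/`; do not touch it without reading `docs/billing-invariants.md`.

## Rules
- Run the test suite for any module you change before proposing an edit.
- Never modify database migrations that have already shipped.
- Match the existing code style; do not reformat files you are not changing.

## Context
- `docs/architecture.md` describes the service boundaries.
```

The point is that the file is a living artifact you refine as the agent makes mistakes, not something you generate once and forget.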

What they ignore is that even the maximalists claim only a 30%-50% improvement on legacy code bases, and that is if you use the tool well.

This author gets terrible results and then says: "Dark warnings that if I didn't start using AI now I'd be hopelessly behind proved unfounded. Using AI to code is not hard to learn." How sure is the author that they actually learned to use it? "A competent engineer will figure this stuff out in less than a week of moderate AI usage." One of the most interesting things about learning is that some skills are easy to learn and hard to master. You can teach a child chess: it is easy to learn but hard to master.



