Hacker News

So your assumption is that it will ultimately be the users of software themselves who throw some everyday language at an AI, and it will reliably generate something that meets those users' intuitive expectations?



Yes, it will be at least as reliable as an average software engineer at an average company (probably more reliable than that) — or at least as reliable as a self-driving car that, told "get me to this address," drives there better (statistically) than an average human driver.

I think this could work for some tasks but not for others.

We didn't invent formal languages to give commands to computers. We invented them as a tool for thinking and communicating things that are hard to express in natural language.

I doubt that we will stop thinking and I doubt that it will ever be efficient to specify tasks purely in terms of natural language.
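A concrete illustration of that point (my own toy example, not from the thread): even a request as plain as "round 2.5 to the nearest integer" is underspecified in English, and a formal language forces the decision out into the open.

```python
import math

# "Round to the nearest integer" sounds unambiguous in English,
# but 2.5 is exactly halfway — the formal language makes you choose a rule.

# Python's built-in round() uses banker's rounding (round half to even):
banker = round(2.5)            # -> 2

# "Round half up" is what many users intuitively expect:
half_up = math.floor(2.5 + 0.5)  # -> 3

print(banker, half_up)  # prints: 2 3
```

Two reasonable readings of the same everyday sentence, two different answers — which is exactly the ambiguity formal languages were invented to eliminate.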

One of my first jobs as a software engineer was at a bank (~30 years ago). The bank manager wasn't a man of many words: he just handed us an Excel sheet as the specification for what he wanted us to implement.


My job right now is to translate natural English statements from my bosses/colleagues into natural English instructions for Claude. Yes, it takes skill and experience to do this effectively. But I don't see any reason Gemini 4, Opus 5, or GPT-6 won't be able to do this just as well as I do.



