I agree with both you and the GP. Yes, coding is being totally revolutionized by AI, and we don't really know where the ceiling will be (though I'm skeptical we'll reach true AGI any time soon), but I believe there is still an essential element of understanding how computer systems work that is required to leverage AI in a sustainable way.
There is some combination of curiosity about inner workings and precision of thought that has always been essential to becoming a successful engineer. In my very first CS 101 class I remember the professor alluding to two hurdles (pointers and recursion) which a significant portion of the class would not be able to surpass, and they would change majors. Throughout the subsequent decades I saw this pattern again and again with junior engineers, bootcamp grads, etc. There are some people who, no matter how hard they work, can't grok abstraction and unlock a general understanding of what computing makes possible.
With AI you don't need to know syntax anymore, but to write the right prompts to maintain a system and (crucially) the integrity of its data over time, you still need this understanding. I'm not sure how the AI-native generation of software engineers will develop this without writing code hands-on, but I am confident they will figure it out, because I believe it comes down to an innate, often pedantic thirst for understanding that some people have and some don't. That is the essential quality for succeeding in software, both in the past and in the future. Although vibe coding lowers the barrier to entry dramatically, there is a brick wall looming just beyond the toy app/prototype phase for anyone without a technical mindset.
I get that it's necessary for investment, but I'd be a lot happier with these tools if we didn't keep making these wild claims, because I'm certainly not seeing 10x the output. When I ask for examples, 90% of the time it's Claude Code (not a beacon of good software anyway, but if nearly everyone is pointing to one example, it tells you that's the best you can probably expect) and 10% weekend projects, which are cool, but not 10x cool. Opus 4.5 was released in Dec 2025; by this point people should be churning out year-long projects in a month, and I certainly haven't seen that.
I've used them a few times, and they're pretty cool. If they were just sold as that (again, couldn't be, see: trillion dollar investments), I wouldn't have nearly as much of a leg to stand on.
Any semi-capable coder could build a Reddit clone by themselves in a week since forever. It's a glorified CRUD app.
The barrier to creating a full-blown Reddit is the huge scaling, not the functionality. But with AWS, Azure, Google Cloud, and backends like S3, CF, etc., this hasn't been a barrier for a decade or more, either.
What I could do in a week is maybe set up an open source clone of reddit (that was written by many people for many months) and customize it a little bit.
And I have a pretty decent career behind me as a software developer, and my peers perceived me as kinda good.
Even capable coders can’t create a Reddit clone in a week. Because it’s not just a glorified CRUD app. And I encourage you to think a bit harder before arguing like that.
Yes you can create a CRUD app in some kind of framework and style it like Reddit. But that’s like putting lines on your lawn and calling it a clone of the Bernabeu.
But even if you were right, the real barrier to building a Reddit clone is getting traction. Even if you went viral and did everything right, you’d still have to wait years before you have the brand recognition and SEO rankings they enjoy.
In what way (that's not related to the difficulty of scaling it, which I already addressed separately)?
The point of my comment was:
"Somebody with AI cloning Reddit in a week is not as special as you make it out to be, all things considered. A Reddit clone is not that difficult; it's basically a CRUD app. The difficult part of replicating it, or at least all the basics of it, is its scaling - and even that wouldn't be as difficult for a dev in 2026, the era of widespread elastic cloud backends".
The Bernabeu analogy handwavingly assumes that Reddit is more challenging than a homegrown clone, but doesn't address in what way Reddit differs from a CRUD app, and how my comment doesn't hold.
And even if it did, it would be moot regarding the main point I make, unless the recent AI-clone also handles those differentiating non-CRUD elements and thus also differs from a CRUD app.
>But even if you were right, the real barrier to building a Reddit clone is getting traction.
True, but not relevant to my point, which is about the difficulty of cloning Reddit coding-wise, not business-wise, and whether it is or isn't any great feat for someone using AI to do it.
Calling Reddit a CRUD app isn’t wrong, it’s just vacuous.
It strips away every part that actually makes Reddit hard.
What happens when you sign up?
A CRUD app shows a form and inserts a row.
Reddit runs bot detection, rate limits, fingerprinting, shadow restrictions, and abuse heuristics you don’t even see, and you don’t know which ones, because that knowledge is their moat.
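Even one sliver of that signup path is real engineering. Here is a minimal token-bucket rate limiter as an illustration of the kind of machinery involved; this is my own sketch, not anything from Reddit's actual implementation:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch)."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

And that is just rate limiting, with none of the fingerprinting or abuse heuristics layered on top.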
What happens when you upvote or downvote?
CRUD says “increment a counter.”
Reddit says “run a ranking algorithm refined over years, with vote fuzzing, decay, abuse detection, and intentional lies in the UI,” because the number you see is not the number stored.
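The basic shape of Reddit's hot ranking is public from its open-source days (log-scaled score plus time decay); the fuzzing details, by contrast, are deliberately undocumented, so the `fuzzed` function below is purely my invention to show the idea:

```python
import math
import random

EPOCH = 1134028003  # reference timestamp used by the decay term

def hot(ups, downs, created_utc):
    """Hot rank: log-scaled net votes plus time decay, following the
    shape of Reddit's open-sourced formula (illustrative)."""
    score = ups - downs
    order = math.log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = created_utc - EPOCH
    return round(sign * order + seconds / 45000, 7)

def fuzzed(score):
    """Vote fuzzing: display a slightly perturbed count so scrapers
    can't tell which votes were discarded (details made up here)."""
    return score + random.randint(-2, 2)
```

Note how a newer post with the same votes outranks an older one, and how the displayed (fuzzed) number diverges from the stored one - exactly the "intentional lie in the UI" described above.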
What happens when you add a comment?
CRUD says “insert record.”
Reddit applies subreddit-specific rules, spam filters, block lists, automod logic, visibility rules, notifications, and delayed or conditional propagation.
What happens when you post a URL?
CRUD stores a string.
Reddit fingerprints it, deduplicates it, fetches metadata, detects spam domains, applies subreddit constraints, and feeds it into ranking and moderation systems.
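Even the fingerprint/dedup step alone is more than "store a string". A toy canonicalizer shows the idea; this is illustrative only (real URL canonicalization is far messier, and lowercasing the whole URL, including the path, is already a simplification):

```python
import hashlib
from urllib.parse import urlsplit, urlunsplit

def url_fingerprint(url):
    """Normalize a URL and hash it so trivially different submissions
    of the same link collide (toy sketch, not Reddit's actual rules)."""
    parts = urlsplit(url.strip().lower())
    # Force one scheme, drop trailing slash and fragment for matching.
    canonical = urlunsplit(("https", parts.netloc, parts.path.rstrip("/"),
                            parts.query, ""))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Two superficially different URLs for the same page now map to one key, which is what lets the dedup and ranking systems treat them as one submission.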
Yes, anyone can scaffold a CRUD app and style it like Reddit.
But calling that a clone is like putting white lines on your lawn and calling it the Bernabeu.
You haven’t cloned the system, only its silhouette.
> Reddit runs bot detection, rate limits, fingerprinting, shadow restrictions, and abuse heuristics you don’t even see, and you don’t know which ones, because that knowledge is their moat.
> Reddit says “run a ranking algorithm refined over years, with vote fuzzing, decay, abuse detection, and intentional lies in the UI.” As the number you see is not the number stored.
> etc...
The question is: is Moltbook doing this? That was the original point. It took a week to build a basic Reddit clone (the silhouette, as you call it) with AI; that should surely be the point of comparison to what a human could do in that time.
I mean, as has already been pointed out, the fact that it's a clone is a big reason why, but then I also think I could probably churn out a simple clone of Reddit in less than a week. We've been through this before with Twitter: the value isn't the tech (which is relatively straightforward), it's the userbase. Of course Reddit has some more advanced features which would be more difficult, but I think the public db probably tells you that wasn't much of a concern to Moltbook either, so yeah, I reckon I could do that.
Double your estimate and switch the unit of time to the next larger one. That's how programmer time estimates tend to go. So two months, and I'm right there with you.
Even if I am only slightly more productive, it feels like I am flying. The mental toll is severely reduced and the feel good factor of getting stuff done easily (rather than as a slog) is immense. That's got to be worth something in terms of the mental wellbeing of our profession.
FWIW I generally treat the AI as a pair programmer. It does most of the typing and I ask it: why did you do it this way? Is that the most idiomatic approach? That seems hacky. Did you consider edge case foo? Oh wait, let's call it a BarWidget, not a FooWidget: rename everything in all the other code/test/make/doc files. Etc etc.
I save a lot of time typing boilerplate, and I find myself more willing (and a lot less grumpy!!!) to bin a load of things I've been working on but then realise are the wrong approach, or when the requirements change (in the past I might try to modify something I'd been working on for a week rather than start from scratch; with AI there is zero activation energy to start again the right way). That's super valuable in my mind.
I absolutely share your feelings. And I realise I'm way less hesitant to pick up the drudge tasks: migrating to new major versions of dependencies, adding missing edge-case tests, adding CRUD endpoints, nasty refactorings. All these things you usually postpone, or go on procrastination sprees on HN over, are suddenly very simple undertakings that you can trivially review.
Because the world is still filled with problems that would once have been on the wrong side of the is it worth your time matrix ( https://xkcd.com/1205/ )
There are all sorts of things that I, personally, should have automated long ago that I threw at Claude to do for me. What was the cost to me? A prompt and a code review.
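The linked xkcd (1205) boils down to one multiplication: how much total time a tweak saves over the comic's five-year horizon, which is your budget for automating it. The function below is my own phrasing of that table, not anything from the comic itself:

```python
def automation_budget(seconds_shaved, uses_per_day, horizon_days=5 * 365):
    """Total seconds saved over the horizon, per xkcd 1205's framing.
    Spend less than this automating the task and you come out ahead."""
    return seconds_shaved * uses_per_day * horizon_days

# Shaving 5 seconds off a task done 5 times a day buys roughly
# 12-13 hours of automation budget over five years.
budget_hours = automation_budget(5, 5) / 3600
```

The point of the comment above is that AI collapses the left-hand side: when automating costs a prompt and a review, tasks that used to fall below the line suddenly clear it.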
Meanwhile, on larger tasks an LLM deeply integrated into my IDE has been a boon. Having an internal debate on how to solve a problem? Try both, write a test, and prove out which is going to be better. Pair program, function by function, with your LLM; treat it like a jr dev who can type faster than you if you give it clear instructions. I think you will be shocked at how quickly you can massively scale up your productivity.
Yup, I've already revived like 6 of my personal projects, including 1 for my wife, that I had lost interest in. For a few dollars, these are now actually running and being used by my family. These tools are a great enabler for people like me. lol
I used to complain when my friends and family gave me ideas for something they wanted or needed help with because I was just too tired to do it after a day's work. Now I can sit next to them and we can pair program an entire idea in an evening.
If it is 20% slower for you to write with AI, but you are not stressed out and enjoy it, so you actually code, then the AI is a win and you are more productive with it.
I think that's what is missing from the conversation. It doesn't make developers faster, nor better, but it can automate what some devs detest and feel burned out having to write, and for those devs it is a big win.
If you can productively code 40 hours a week with AI and only 30 hours a week without AI then the AI doesn't have to be as good, just close to as good.
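The arithmetic behind that claim is simple; here's a toy comparison with made-up numbers, purely for illustration:

```python
def effective_output(hours, quality_per_hour):
    """Toy model: sustained hours matter as much as per-hour quality."""
    return hours * quality_per_hour

# 40 enjoyable AI-assisted hours at 90% effectiveness still beat
# 30 grinding solo hours at 100%.
with_ai = effective_output(40, 0.9)
without_ai = effective_output(30, 1.0)
```

So the AI only has to stay "close to as good" per hour for the extra sustainable hours to tip the total in its favor.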
I'm in agreement with you 100%. A lot of my job is coming into projects that have been running already and having to understand how the code was written, the patterns, and everything else. Generating a project with an LLM feels like doing the same thing. It's not going to be a perfect code base, but it's enough.
Last night I was working on trying to find a correlation between some malicious users we had found and information we could glean from our internet traffic and I was able to crunch a ton of data automatically without having to do it myself. I had a hunch but it made it verifiable and then I was able to use the queries it had used to verify myself. Saved me probably 4 or 5 hours and I was able to wash the dishes.
The matrix framing is a very nice way to put it. This morning I asked my assistant to code up a nice debugger for a particular flow in my application. It's much better than I would have had time/patience to build myself for a nice-to-have.
I sort of have a different view of that time matrix. If AI is only able to help me do tasks that are of low value, where I previously wouldn't have bothered, is it really saving me anything? Before, where I'd simply ignore auxiliary tasks and focus on what matters, I'm now constantly detoured by them, thinking "it'll only take ten minutes."
I also primarily write Elixir, and I have found most agents are only capable of writing small pieces well. More complicated asks tend to produce unnecessarily complicated solutions, ones that may “work” on the surface but don't hold up in practice. I've seen a large increase in small bugs with more AI coding assistance.
When I write code, I want to write it and forget about it. As a result, I’ve written a LOT of code which has gone on to work for years without touching it. The amount of time I spent writing it is inconsequential in every sense. I personally have not found AI capable of producing code like that (yet, as all things, that could change).
Does AI help with some stuff? Sure. I always forget common patterns in Terraform because I don't often have to use it. Writing some initial resources and asking it to “make it normal” is helpful. That does save time. Asking it to write a GenServer correctly is an act of self-harm, because it fundamentally does not understand concurrency in Erlang/BEAM/OTP. It very much looks like it does, but it 100% does not.
tl;dr: I think the ease of use of AI can cause us to over-produce, and as a result we miss the forest for the trees.
It excels at this, and if you have it deeply integrated into your workflow and IDE/dev env, the loop should feel more like pair programming, like tennis, than like it's doing everything for you.
> I also primarily write Elixir,
I would also venture that it has less to do with the language (it is a factor) and more to do with what you are working on. Domain will matter in terms of sample size (code) and understanding (language to support it). There could be 1000s of examples in its training data of what you want, but if no one wrote a comment that accurately describes what that does...
> I think the ease of use of AI can cause us to over produce and as a result we miss the forest for the trees.
This is spot on. I stopped thinking of it as "AI" and started thinking of it as "power tools". Useful, and like a power tool you should be cautious, because there is danger there... It isn't smart, it's not doing anything that isn't in its training data, but there is a lot there, everything, and it can do some basic synthesis.
Like others are saying, AI will accelerate the gap between competent devs and mediocre devs. It is a multiplier. AI cannot replace fundamentals; it cannot substitute for a good helmsman with a rational, detail-oriented mind. Having fundamentals (skill & knowledge) + using AI will be the cheat code for the next 10 years.
The only historical analogue of this is perhaps differentiating a good project manager from an excellent one. No matter how advanced, technology will not substitute for competence.
At the company I work for, despite pushing widespread adoption, I have seen exactly a zero percent increase in the rate at which major projects get shipped.
This is what keeps getting me. People here keep posting benchmarks, bragging about 5x, 10x, 20x. Yet none of the companies we work with are putting anything out faster.
The evangelist response is to call it a skill issue, but looking around it seems like no one anywhere is actually pushing out new products meaningfully faster.
Several experiments have shown that quality of output drops at every skill level.
In many cases the quantity of output is good enough to compensate, but quality is extremely difficult to improve at scale. Beefing up QA to handle significantly more code of noticeably lower quality only goes so far.
> But something I'd bet money on is that devs are 10x more productive at using these tools.
If this were true, we should be seeing evidence of it by now, either in vastly increased output by companies (and open source projects, and indie game devs, etc), or in really _dramatic_ job losses.
This is assuming a sensible definition of 'productive'; if you mean 'lines of code' or 'self-assessment', then, eh, maybe, but those aren't useful metrics of productivity.
It is tempting to think that we can delegate describing the mental model to AI, but it seems like all of this boils down to humans making bets, and it also seems like the fundamental bets engineers are making are about the formalisms that encode the product and make it valuable.
What an awful professor! When I first tried to learn pointers, I didn't get it. I tried again 6 months later and suddenly it clicked. The same thing happened for another guy I was learning with.
So the professor just gaslit years of students into thinking they were too dumb to get programming, and also left them with the developmental disability of "if you can't figure something out in a few days, you'll never get it".
I don’t think there will be an “AI native” generation of developers. AI will be the entity that “groks pointers” and no one else will know it or care what goes on under the hood.