Hacker News

> This is almost surely wrong for most developers, or else rewrites wouldn't fail to deliver within the estimated time so often. A rewrite by definition already has a perfect specification in the old code: just write something that works the same way using a new architecture. But even that is apparently still really hard to deliver.

This is precisely why rewrites fail!

I've never seen a rewrite where the devs had a perfect understanding of what the old code was doing. They understand the happy path, probably. Not the millions of edge cases accumulated over the years.

They only learn the real requirements after knocking out the easy stuff and then hitting the gritty work of carrying over all the edge cases that didn't fit their new mental model.



On the flip side, a full rewrite is really the only way to surface and understand all of those edge cases. People seem to harp on the idea that rewrites are bad, but I find them to be a natural part of the SDLC. It's a way to refresh the mental model for the devs currently working on it, since the original dev(s) probably moved on long ago. Updating the tech or architecture itself is just a byproduct.


That's an interesting take, and getting that context is valuable, but there really should be a way to do it that's less disruptive and destructive to actually delivering new features than a full rewrite that stops the world for months or longer...



