Hacker News

This is honestly the key difference here. I’m morally okay with using Claude Max Whatever with something like OpenCode because it’s literally the same thing from a usage-pattern perspective. Plugging Nanoclaw into it is a whole other thing.


It probably doesn't help that the creator of OpenClaw just got hired by Anthropic's competitor.

This sounds like engineering, finance, and legal got together and decided they were in an untenable position if OpenAI started nudging OpenClaw to burn even more tokens on Anthropic (or simply never optimize), plus shipping continually updated workarounds for subscription auth. But I'm sure OpenAI would never do something like that...

At the end of the day, it's the same "fixed price plan for variable use of a constrained resource" problem cellular carriers faced: profitability becomes directly linked to actual average usage.
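That linkage can be sketched with toy numbers (all figures below are hypothetical, just to show the shape of the problem):

```python
# Toy model (all numbers hypothetical): under a flat-rate plan,
# revenue per subscriber is fixed while marginal cost scales with
# tokens served, so margin depends only on average usage.

def margin_per_user(monthly_price, avg_tokens, cost_per_million_tokens):
    serving_cost = avg_tokens / 1_000_000 * cost_per_million_tokens
    return monthly_price - serving_cost

# Same $100/month plan, two different average-usage cohorts:
print(margin_per_user(100, 5_000_000, 10))   # 50.0   -> profitable
print(margin_per_user(100, 20_000_000, 10))  # -100.0 -> loss-making
```

The point is just that nothing in the provider's control changes between those two lines except how hard subscribers (or their agents) drive the plan.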


> OpenAI started nudging OpenClaw to burn even more tokens on Anthropic

Not possible: OpenClaw is run by a foundation, and is open source, which means OpenAI has no leverage to do such a thing.


Because open source has always been completely independent of unrelated corporate entities who employ people to work on it?


Because anyone can actually check the code, which means if there's any funny business, someone will come across it eventually and blow it open.


There probably wouldn’t be anything funny-looking – it might look like a genuine implementation mistake that somehow burns 2× or 3× the tokens (which, considering OpenClaw is vibe coded in the purest sense of the term, would blend right in).


Regardless, such things would eventually be found. Just as OpenClaw was tasked with finding and improving science repos (however unwelcome that was), it could - and very likely will - be tasked with improving its own codebase.


The bug that was causing the crazy token burn was added on Feb 15. It was claimed to have been fixed on Feb 19 (see https://github.com/openclaw/openclaw/pull/20597 ) but it's unclear to me whether that fix has been rolled out yet or if it completely solved the problem. (see https://github.com/openclaw/openclaw/issues/21785 )

TL;DR: the commit broke caching, so the entire conversation history was billed as fresh input on each call instead of most of it being read from cache.
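A rough sketch of why that kind of caching break is so expensive (numbers and function are made up for illustration, and this ignores that cache reads are usually billed at a discounted rate rather than free):

```python
# Hypothetical sketch: without prompt caching, every API call re-sends
# the full conversation history as fresh input tokens; with caching,
# only the new suffix since the last call is uncached.

def billed_input_tokens(prompt_sizes, caching_works):
    """prompt_sizes: cumulative prompt length (tokens) at each call."""
    billed = 0
    cached = 0
    for size in prompt_sizes:
        if caching_works:
            billed += size - cached  # only the newly added tokens
            cached = size            # whole prompt is now cached
        else:
            billed += size           # entire history re-billed each call
    return billed

# A 10-turn conversation whose prompt grows by 1,000 tokens per turn:
turns = [1000 * i for i in range(1, 11)]
print(billed_input_tokens(turns, caching_works=True))   # 10000
print(billed_input_tokens(turns, caching_works=False))  # 55000
```

Even in this small example the broken path bills 5.5× the tokens, and the multiplier grows with conversation length.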



