
Assuming this experiment involved isolating the LLM from its training set?




Of course it didn't. I'm not sure you really can do that: an LLM is a collection of weights derived from its training set, so take away the training set and it doesn't really exist. You'd have to train one from scratch while somehow excluding these books and all excerpts and articles about them, which would be very expensive, and I'm pretty sure the OP didn't do that.

So the test seems nonsensical to me.
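For what it's worth, here's a minimal sketch of what "excluding these books" would even mean at the data level, assuming a naive keyword filter over the pretraining corpus. The titles and corpus below are placeholders, and a filter like this would still miss paraphrases and indirect discussion of the excluded works:

    # Hypothetical sketch: drop any training document that mentions an
    # excluded title before pretraining. Not anyone's actual pipeline.

    EXCLUDED_TITLES = [
        "Pride and Prejudice",   # placeholder titles to hold out
        "Moby-Dick",
    ]

    def should_exclude(document: str) -> bool:
        """Return True if the document appears to quote or discuss an excluded work."""
        text = document.lower()
        return any(title.lower() in text for title in EXCLUDED_TITLES)

    def filter_corpus(documents):
        """Yield only documents considered safe to include in the pretraining set."""
        for doc in documents:
            if not should_exclude(doc):
                yield doc

    if __name__ == "__main__":
        corpus = [
            "An essay about Moby-Dick and the whaling industry.",
            "A recipe blog post with no literary references.",
        ]
        kept = list(filter_corpus(corpus))
        print(f"kept {len(kept)} of {len(corpus)} documents")

Even if you did this, you'd then have to pay for a full pretraining run on the filtered corpus, which is the expensive part.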



