I don't get how this solves the problem of edge cases with self-driving.
Even if you can generate simulated training data, don't you still have the problem where you don't even know what the edge cases you need to simulate are in the first place?
Well, it certainly helps, doesn't it? This system is going to encounter more edge cases than a single human driver ever would. Hopefully the lessons from known unknowns generalise to the unknown unknowns. And once an edge case has been seen, it too can become part of the corpus.
It might be "never-ending", but you're going to encounter edge cases in approximate proportion to the rate at which they actually occur. Anyway, the hope would be to learn behaviors which generalize, not to respond to each edge case ad hoc; the edge cases provide out-of-sample tests of generalizability.
Neither does the car; it won't drive into what LIDAR sees as a wall. But stopping is not good enough: it needs to be able to navigate around the obstacle as well.
Also, even if the car behaved perfectly anyway, these scenarios are useful for testing — validating that the expected behavior happens.
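To make that concrete, here's a minimal sketch of what replaying logged edge cases as regression tests could look like. Everything here is hypothetical (the `Scenario` record, the fixed-deceleration braking model, the corpus entries); real stacks validate far richer behavior than "can it stop in time", but the structure is the same: a corpus of scenarios, an expected behavior, and a check run on every change.

```python
from dataclasses import dataclass

# Hypothetical scenario record: a logged edge case reduced to the
# parameters needed to replay it in simulation.
@dataclass
class Scenario:
    name: str
    obstacle_distance_m: float  # distance LIDAR reports to the obstacle
    speed_mps: float            # vehicle speed when the scenario starts

def braking_distance_m(speed_mps: float, decel_mps2: float = 6.0) -> float:
    # Simple kinematics: d = v^2 / (2a). Real vehicle models are far
    # more complex; this stands in for the simulator's physics.
    return speed_mps ** 2 / (2 * decel_mps2)

def expected_to_stop(s: Scenario) -> bool:
    # The expected behavior under test: the car can come to a stop
    # before reaching the obstacle.
    return braking_distance_m(s.speed_mps) < s.obstacle_distance_m

# A corpus of previously encountered edge cases, replayed on every
# change to validate that the expected behavior still happens.
corpus = [
    Scenario("wall_ahead_low_speed", obstacle_distance_m=20.0, speed_mps=10.0),
    Scenario("wall_ahead_highway", obstacle_distance_m=40.0, speed_mps=30.0),
]

for s in corpus:
    print(f"{s.name}: stops in time = {expected_to_stop(s)}")
```

The second scenario fails the check (braking from 30 m/s at 6 m/s² needs 75 m, more than the 40 m available), which is exactly the point: once a scenario like that is in the corpus, any system that merely stops rather than navigating around the obstacle gets flagged automatically.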