I have a question. With the logic of neural networks and pattern recognition, is it not then possible to "predict" everything in everything? Like predicting the future to an exact "thing"? Is this not a tool that could be used to manipulate, for instance, the stock market?
It is possible to try, and some people do (high-speed trading is exactly that, plus taking advantage of the privileged information that speed provides to react before anyone else).
However, there are two fundamental problems with computational prediction. The first, obviously, is accuracy. A model is a compressed memorization of everything observed so far; predicting with it means projecting the observed patterns into the future. In a chaotic system that only goes so far: the most regular, predictable patterns are obvious to everybody and therefore yield less return, while the chaotic regimes where a prediction would be most valuable are exactly where it is least reliable. You cannot build a perfect oracle that fixes that.
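To make the "projecting patterns only goes so far" point concrete, here is a toy sketch in plain numpy (a made-up setup, not any real forecasting model): the logistic map is a textbook chaotic system, and even a near-perfect one-step model of it loses the trajectory after a few dozen steps.

    import numpy as np

    rng = np.random.default_rng(0)
    r = 3.9  # logistic map parameter in its chaotic regime

    def step(x):
        return r * x * (1.0 - x)

    # Simulate the "real" system; keep slightly noisy observations as training data
    true = [0.2]
    for _ in range(2000):
        true.append(step(true[-1]))
    true = np.array(true)
    obs = true + rng.normal(0.0, 1e-3, size=true.shape)  # measurement noise

    # The learned "model": a quadratic fit of x_{t+1} against x_t.
    # It is almost the true map, so one-step predictions look excellent.
    model = np.poly1d(np.polyfit(obs[:-1], obs[1:], deg=2))

    # Roll the model forward and compare against the real system
    x_true = x_model = true[-1]
    for t in range(1, 41):
        x_true = step(x_true)
        x_model = model(x_model)
        if t in (1, 5, 10, 20, 40):
            print(f"t={t:2d}  |error| = {abs(x_true - x_model):.6f}")
    # The one-step error is tiny, but it grows roughly exponentially, so within
    # a few dozen steps the forecast is no better than a random point on the
    # attractor. The model is fine; the prediction horizon just isn't.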
The second problem is more insidious. Even if you could build a perfect oracle, acting on its predictions would make you part of the system itself. That changes the outcomes, so the system behaves differently from the one the model was trained on, and the model becomes less reliable. If several people do it at the same time, there's no way to retrain the model to take the new behaviour into account.
There's a possibility (but no guarantee) of reaching a fixed point, a Nash equilibrium in which the whole thing settles into a stable cycle, but that's unlikely in a changing environment where everybody tries to outdo everyone else.
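That feedback is easy to caricature in code. A deliberately crude numpy toy (every number here is invented; it is not a market model): a recurring, predictable mispricing exists, and the more traders act on the same oracle, the faster their own trading erases the very thing being predicted.

    import numpy as np

    rng = np.random.default_rng(1)

    def run(n_traders, impact_per_trader=0.02, days=250, noise=0.05):
        """Toy: a recurring mispricing of size `edge` appears each day.
        Every trader acting on the same prediction captures a slice of it,
        and their combined trading shrinks the mispricing itself."""
        edge, profits = 1.0, []
        for _ in range(days):
            realized = edge + rng.normal(0.0, noise)      # what the pattern pays today
            profits.append(realized / max(n_traders, 1))  # split among identical traders
            edge = max(edge * (1.0 - impact_per_trader * n_traders), 0.0)
        return sum(profits)

    for n in (1, 5, 20, 100):
        print(f"{n:3d} traders acting on the same oracle -> "
              f"profit per trader over a year: {run(n):7.2f}")
    # With one trader the pattern persists all year; once many act on the
    # same predictions it is gone within weeks and the per-trader profit
    # collapses. The oracle's output became part of the system it modeled.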
Ah, this actually connects a few dots for me. It helps explain why models seem to have a natural lifetime: once deployed at scale, they start interacting with and shaping the environment they were trained on. Over time, data distributions, usage patterns, and incentives shift enough that the model no longer functions as the one originally created, even if the weights themselves haven't changed.
That also makes sense of the common perception that a model feels “decayed” right before a new release. It’s probably not that the model is getting worse, but that expectations and use cases have moved on, people push it into new regimes, and feedback loops expose mismatches between current tasks and what it was originally tuned for.
In that light, releasing a new model isn’t just about incremental improvements in architecture or scale; it’s also a reset against drift, reflexivity, and a changing world. Prediction and performance don’t disappear, but they’re transient, bounded by how long the underlying assumptions remain valid.
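If I understand it right, that drift is also something you can put a number on. A rough sketch (the distributions below are invented, and the Population Stability Index is just one common, crude drift monitor, not anything specific to these models):

    import numpy as np

    def psi(expected, actual, bins=10):
        """Population Stability Index between a reference sample (inputs seen
        at training time) and a live sample. Higher = more drift."""
        edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
        e = np.histogram(expected, edges)[0] / len(expected)
        a = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)[0] / len(actual)
        e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
        return float(np.sum((a - e) * np.log(a / e)))

    rng = np.random.default_rng(2)
    train_inputs = rng.normal(0.0, 1.0, 50_000)   # what the model saw when trained
    same_world   = rng.normal(0.0, 1.0, 50_000)   # live traffic, unchanged world
    drifted      = rng.normal(0.6, 1.3, 50_000)   # users and incentives have moved on

    print(f"PSI, unchanged world: {psi(train_inputs, same_world):.3f}")  # close to 0
    print(f"PSI, drifted world:   {psi(train_inputs, drifted):.3f}")     # well above 0.25
    # A rule of thumb sometimes quoted: < 0.1 stable, 0.1-0.25 some drift,
    # > 0.25 the model is no longer seeing the population it was fit on.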
Does that mean that when AI companies "retire" a model, it's not only because their new model is better, but also because of this decay?
PS. I cleaned up my writing above with AI (not a native English speaker).
Well, nothing is stopping you from attempting to predict everything with neural networks, but that doesn't mean your predictions will be (1) good, (2) consistently useful, or (3) economical. Transformer models, for example, suffer from (2) and especially (3) in their current iteration.
DNNs learn patterns; for them to work, there have to be patterns in the first place. The stock market is almost entirely driven by random real-world events that don't recur, so you can't predict much at all.
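A quick sanity check of that in numpy (a synthetic random walk, not real market data, and the linear predictor is just a stand-in for any pattern learner): fit a model of the next return from recent returns and the out-of-sample numbers come out at chance level.

    import numpy as np

    rng = np.random.default_rng(3)

    # A price series that is by construction a pure random walk:
    # returns are i.i.d. noise, so there is no recurring pattern to learn.
    returns = rng.normal(0.0, 0.01, 20_000)

    # Supervised dataset: predict the next return from the previous 5
    k = 5
    X = np.lib.stride_tricks.sliding_window_view(returns[:-1], k)
    y = returns[k:]

    # Fit on the first half, evaluate on the second half
    half = len(y) // 2
    coef, *_ = np.linalg.lstsq(X[:half], y[:half], rcond=None)
    pred = X[half:] @ coef

    hit_rate = np.mean(np.sign(pred) == np.sign(y[half:]))
    r2 = 1.0 - np.sum((y[half:] - pred) ** 2) / np.sum((y[half:] - y[half:].mean()) ** 2)
    print(f"out-of-sample directional accuracy: {hit_rate:.3f}  (coin flip = 0.500)")
    print(f"out-of-sample R^2:                  {r2:+.4f} (no skill = 0)")
    # Swap the linear fit for a deep net and these numbers don't improve:
    # if the signal isn't in the data, pattern learning can't find it.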