When learning, we humans take in new information, break it into discrete pieces, and repeatedly replay it in our heads until it is understood.

And if we have some base of knowledge to rely on, this usually works. In current ML, this whole process is compressed into one single step: preparation, i.e. training the model.

The next generation of ML models must be able to learn on the fly, retraining on new information instantly. Chain-of-Thought prompting gets part of the way there, but the model isn't improving itself; it is just solving each problem as a single fixed function.
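A conceptual sketch of the distinction, using a hypothetical toy `Model` class (not any real library; a single scalar stands in for all parameters): chain-of-thought adds extra reasoning steps at inference time but leaves the weights untouched, while on-the-fly learning would update the parameters after each example.

```python
class Model:
    """Toy model: one scalar 'weight' stands in for all parameters."""

    def __init__(self, weight=1.0):
        self.weight = weight

    def chain_of_thought(self, steps):
        # Several reasoning steps at inference time,
        # but self.weight never changes: the function is fixed.
        answer = 0.0
        for s in steps:
            answer += self.weight * s
        return answer

    def online_update(self, example, target, lr=0.1):
        # Hypothetical on-the-fly learning: the model adjusts its own
        # parameters after a single example (one gradient step on
        # squared error between prediction and target).
        prediction = self.weight * example
        error = prediction - target
        self.weight -= lr * error * example


m = Model()
before = m.weight
m.chain_of_thought([1.0, 2.0, 3.0])   # weights unchanged after "reasoning"
m.online_update(2.0, 6.0)             # weights changed after one example
```

The point of the sketch is only the contrast: however many steps `chain_of_thought` runs, the model that answers the next question is identical to the one that answered the last.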

Imagine a humanity where, to learn a new language, you had to raise a child who learns it from scratch. How slowly would we evolve that way? Aren't AI models evolving exactly that way now?