That’s kinda scary and inspiring. It sounds unbelievable, but it’s true: no one predicted that LLMs would work the way they do. It was literally an accident. People just tried to see what would happen if you mixed an enormous amount of data and trained algorithms on it.

Next was Chain of Thought. I remember people just trying to optimize responses and improve quality. They accidentally invented reasoning, at least in the form we see it now. Embarrassing.

I’m not saying we must invent something like this intentionally, but to me it looks like the invention of electricity: it existed long before us, and we just described it and added rules for how to use it. But does that mean our role in it is overestimated? Could such a thing ever happen without us, through some “lightning hits the tree” situation?

Another question is what else we can achieve just by throwing a huge amount of resources together.