One thing you can do with an LLM is, in some sense, ask the collective consciousness questions, in a conversation format. You ask something, get an answer, and that answer is a statistically plausible one for the region of the LLM that your question activated. Ask about perennial flowers and your question shoots off inside the LLM and reaches a region shaped by statistics over text on this topic. The LLM spits out tokens plucked from that region, and there's your answer.
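To make that metaphor a bit more concrete, here's a toy sketch of the underlying mechanism: the prompt selects a conditional next-token distribution, and the "answer" is just tokens sampled from it. The `NEXT_TOKEN_PROBS` table and the function names are invented for illustration; a real LLM derives these probabilities from billions of parameters rather than a lookup table, but the sampling loop is essentially this.

```python
import random

# Hypothetical stand-in for an LLM's next-token distribution.
# The prompt "activates a region": here, the last two tokens
# select which conditional distribution we sample from.
NEXT_TOKEN_PROBS = {
    ("perennial", "flowers"): {"bloom": 0.5, "return": 0.3, "need": 0.2},
    ("flowers", "bloom"):     {"every": 0.6, "in": 0.4},
    ("flowers", "return"):    {"each": 0.7, "every": 0.3},
}

def sample_next(context):
    """Sample one token from the distribution the context activates."""
    probs = NEXT_TOKEN_PROBS.get(tuple(context[-2:]))
    if probs is None:
        return None  # the context reaches no region of our toy table
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

def generate(prompt, max_tokens=3):
    """Repeatedly sample: each new token extends the context."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = sample_next(tokens)
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("perennial flowers"))
# e.g. "perennial flowers bloom every": a statistically plausible
# continuation, plucked from the activated region, not understanding.
```

Run it a few times and the continuations vary, because each one is a fresh draw from the same distribution. That's the whole trick: plausibility, not comprehension.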
The illusion is that you're having a conversation with an entity that understands and responds to what you're saying. But I think it's important to remind yourself not to approach LLMs this way. I'm not sure AGI will make a difference in this regard, but it will probably make the illusion much more convincing.