I can assure you LLMs are, in general, not great at extracting concepts and working with them the way a human mind does. LLMs are statistical parrots that have learned to associate queries with certain output patterns: code chunks, text chunks, etc. They are not really intelligent, certainly not like a human is, and because of this they cannot follow instructions the way a human does. The problem is, they seem just intelligent enough to fool someone who wants to believe they're intelligent, even though there is no intelligence, by any measure, behind their replies.
It also doesn’t help that you have AGI evangelists like Yarvin and Musk who keep saying that the techno-singularity/techno-god is the ONLY WAY TO SAVE US, and that we’re RIGHT ON THE EDGE, so a lot of dumb fucks see that and go “well, obviously this is like querying an average human mind with access to all of human knowledge, since superhuman intelligence is right around the corner.”