I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.
Any good examples on how to explain this in simple terms?
Edit: some good answers already! I find especially that the emotional barrier is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?
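One demonstration that sometimes helps: show people a toy next-word predictor. This is a minimal sketch (a bigram counter, enormously simpler than a real LLM, and the tiny corpus here is made up for illustration), but it makes the same point: text that looks grammatical can fall out of pure word statistics, with no understanding or intent anywhere in the program.

```python
from collections import Counter, defaultdict

# A tiny made-up training corpus.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog .").split()

# "Training": just count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=6):
    # "Generation": repeatedly emit the most common next word.
    out = [word]
    for _ in range(length):
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # grammatical-looking output, zero comprehension
```

The output reads like English ("the cat sat on ..."), yet the program is only looking up counts in a table. A real LLM replaces the count table with a neural network trained on far more text, which is why its sentences are so much more convincing, but the "predict the next word" framing is the same.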
I agree with you, and you worded what I was clumsily trying to say. Thank you :)
By naturalism I mean the philosophical idea that only natural laws and forces are present in this world, or, as an extension, the idea that there is only matter.