This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems.

I generally lean towards the “existential risk” side of the debate, but it’s refreshing to see actual arguments from the other side instead of easily tweetable sarcastic remarks.

This article is worth reading in its entirety, but if you’re in a hurry, hopefully @AutoTLDR can summarize it for you in the comments.

  • Bryan Elliott@programming.dev · 1 year ago

    I’m of the mind that the whole “AI could be an existential threat” mindset is deeply “one simple trick” thinking mixed with “fear of the unknown” thinking. That is, it assumes there’s some convoluted path to an otherwise unattainable goal that a superhuman AI could suss out, and would have the resources to execute, that individual humans or groups thereof could not - and that this path necessarily leads to destruction. I’m not convinced by it.