Chairman Meow

  • 0 Posts
  • 117 Comments
Joined 1 year ago
Cake day: August 16th, 2023

  • The difference between “ban” and “suspend” isn’t a temporal one. Here’s the Cambridge Dictionary definition of “suspend”:

    to stop something from being active, either temporarily or permanently (see: https://dictionary.cambridge.org/dictionary/english/suspend)

    Here’s the definition for “ban”:

    to forbid (= refuse to allow) something, especially officially (see https://dictionary.cambridge.org/dictionary/english/ban?q=Ban)

    The difference between the two is the subject: an active process or service can be suspended, but something specific (e.g. an action, object or person) can be banned. “Ban” also implies a more official act meant to punish someone or prevent something (Johnny was banned from the bus), whereas a suspension doesn’t necessarily have that negative context (e.g. the bus service was suspended, which doesn’t imply the driver was drunk or something).

    In a more Lemmy-specific context, you could say you suspended someone’s access to the platform, or that you banned them from the platform. Neither way of saying it implies anything about the duration. You can’t, however, really say you suspended someone from the platform; that phrasing doesn’t work.

    In this context, I think the direct implication that a ban is handed out because someone did something bad is a lot clearer than with the word suspension. Because of that, I believe “ban” is the more context-appropriate word here. “Suspend” doesn’t carry that connotation, as something can be suspended for a whole host of reasons, none of which have to be related to rule-breaking. For example, federation with another instance could be suspended temporarily until the other instance does (or doesn’t do) something that’s required for technical reasons.

  • > What they didn’t prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It’s just that this particular method of inferential training, what they call “AI-by-Learning,” is an NP-hard computational problem.

    This is exactly what they’ve proven. They found that if you could solve AI-by-Learning in polynomial time, you could also solve random-vs-chance (or whatever it was called), a known NP-hard problem, in tractable time. Ergo, the current learning techniques, which are tractable, will never result in AGI, and any technique that could would necessarily be considerably slower (otherwise the exact same proof from the paper would apply to it as well).
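
    To spell out the shape of that argument (the notation below is mine and schematic, not the paper’s; I’m writing D for whatever the known NP-hard problem was):

    ```latex
    % Schematic hardness argument; D is the known NP-hard problem
    % the paper reduces from.
    D \le_p \textsf{AI-by-Learning}
    \quad\Longrightarrow\quad
    \bigl( \textsf{AI-by-Learning} \in \mathrm{P}
       \;\Rightarrow\; D \in \mathrm{P}
       \;\Rightarrow\; \mathrm{P} = \mathrm{NP} \bigr)
    ```

    Since almost nobody believes P = NP, a tractable learner would be a contradiction, which is why the argument rules out every tractable technique at once rather than any one architecture.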

    They merely mentioned those methods to show that it doesn’t matter which one you pick. The explicit point is that whether you use LLMs, RNNs, or whatever else, none of them will ever turn into a true AGI. Any of them could be a good AI, of course, but that G is pretty important here.

    > But it’s easy to just define general intelligence as something approximating what humans already do.

    No, general intelligence has a set definition that the paper’s authors stick to. It’s not as simple as “it’s a human-like intelligence” or something that merely approximates it.

  • > Our squishy brains (or perhaps more accurately, our nervous systems contained within a biochemical organism influenced by a microbiome) arose out of evolutionary selection algorithms, so general intelligence is clearly possible.

    That’s assuming that we are a general intelligence. I’m actually unsure if that’s even true.

    > That doesn’t mean they’ve proven there’s no pathway at all.

    True, they’ve only calculated that it’d take perhaps millions of years. Which might be accurate; I’m not sure what kind of computer global evolution, over trillions of organisms across millions of years, adds up to. And yes, perhaps some breakthrough will happen, but it’s still very unlikely and definitely not “right around the corner” as the AI bros claim (and that near-future claim is what the paper set out to disprove).

  • The actual paper is an interesting read. They present an actual computational proof: even if you have essentially infinite memory, a computer that’s a billion times faster than what we have now, perfect training data that you can sample without bias, and you’re only aiming for an AGI that performs slightly better than chance, it’s still completely infeasible to do within the next few millennia. Ergo, it’s definitely not “right around the corner”. We’re light-years off still.

    They prove this by showing that if you could train such an AI in a tractable amount of time, you would have proven P = NP; training an AI this way is therefore NP-hard. Given the minimum amount of data that needs to be learned to do better than chance, this results in a ridiculously long training time, well beyond the realm of what’s even remotely feasible. And that’s before you even have to deal with all the constraints that exist in the real world.
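
    As a rough back-of-the-envelope illustration (all the numbers below are my own assumptions for the sketch, not figures from the paper), a constant-factor hardware speedup barely dents an exponential training cost:

    ```python
    # Illustrative only: exponential cost vs. a massive constant speedup.
    # All figures are assumptions for this sketch, not from the paper.
    SECONDS_PER_YEAR = 3.156e7
    ops_per_sec = 1e18 * 1e9  # exaflop-class machine, a billion times faster

    for n in (100, 150, 200):      # modest problem sizes
        steps = 2 ** n             # assume training costs 2^n steps
        years = steps / ops_per_sec / SECONDS_PER_YEAR
        print(f"n = {n}: ~{years:.1e} years")

    # n = 100: ~4.0e-05 years (about 21 minutes)
    # n = 150: ~4.5e+10 years (a few times the age of the universe)
    # n = 200: ~5.1e+25 years
    ```

    The point: once the cost grows exponentially, even a billion-fold speedup only buys you a few dozen extra bits of problem size, which is why the conclusion survives the paper’s absurdly generous hardware assumptions.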

    We’d perhaps need some breakthrough in quantum computing to get closer. That is not to say that AI won’t improve; it’ll get a bit better. But there is a computationally proven ceiling here, and breaking through it is exceptionally hard.

    It also raises (imo) the question of whether we can truly consider humans to have general intelligence. Perhaps we’re not as smart as we think we are.