• 0 Posts
  • 8 Comments
Joined 4 months ago
Cake day: July 7th, 2024

  • i’d agree that we don’t really understand consciousness. i’d argue it’s more an issue of defining consciousness and what that encompasses than knowing its biological background.

    Personally, no offense, but I think this is a contradiction in terms. If we cannot define “consciousness,” then we cannot say we don’t understand it. Don’t understand what? If you have not defined it, then saying we don’t understand it is like saying we don’t understand akokasdo. There is nothing to understand about akokasdo because it doesn’t mean anything.

    In my opinion, “consciousness” is largely a buzzword, so there is just nothing to understand about it. When we actually talk about meaningful things like intelligence, self-awareness, experience, etc., I can at least have an idea of what is being talked about. But when people talk about “consciousness,” it becomes entirely unclear what the conversation is even about, and in none of these cases is it ever an additional substance that needs some sort of special explanation.

    I have never been convinced of panpsychism, IIT, idealism, dualism, or any of these philosophies or models because they seem to be solutions in search of a problem. They have to convince you there really is a problem in the first place, but they only do so by talking about consciousness so vaguely that you can’t pin it down, which makes people think we need some sort of special theory of consciousness. But if you can’t pin down what consciousness is, then we don’t need a theory of it at all, as there is simply nothing meaningful being discussed.

    They cannot justify themselves in a vacuum. Take IIT, for example. In a vacuum, you can say it gives a quantifiable prediction of consciousness, but “consciousness” would just be defined as whatever IIT is quantifying. The issue here is that IIT has not given me a reason why I should care about it quantifying what it is quantifying. There is a reason, of course; it is implicit. The implicit reason is that what it is quantifying is the same as the “special” consciousness that supposedly needs some sort of “special” explanation (i.e. the “hard problem”), but this implicit reason requires you to not treat IIT in a vacuum.


  • Bruh. We literally don’t even know what consciousness is.

    You are starting from the premise that there is this thing out there called “consciousness” that needs some sort of unique “explanation.” You have to justify that premise. I do agree there is difficulty in figuring out the precise algorithms and physical mechanics that the brain uses to learn so efficiently, but somehow I don’t think this is what you mean by that.

    We don’t know how anesthesia works either, so he looked into that and the best he got was it interrupts a quantum wave collapse in our brains

    There is no such thing as “wave function collapse.” The state vector is just a list of probability amplitudes, and you reduce that list of probability amplitudes to a definite outcome because you observed what that outcome is. If I flip a coin and it has a 50% chance of being heads and a 50% chance of being tails, and it lands on tails, I reduce the probability distribution to 100% probability for tails. There is no “collapse” going on here. Objectifying the state vector is a popular trend when talking about quantum mechanics, but it has never made any sense at all.
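    Here is a minimal sketch (plain NumPy, purely illustrative) of what I mean by treating the state vector as a list of amplitudes and the “reduction” as nothing more than updating on an observed outcome:

    ```python
    import numpy as np

    # State vector for an equal superposition: just a list of probability amplitudes.
    psi = np.array([1.0, 1.0]) / np.sqrt(2)   # amplitudes for outcome 0 ("heads") and 1 ("tails")
    probs = np.abs(psi) ** 2                  # Born rule: [0.5, 0.5]

    # "Measurement": we learn which outcome actually occurred...
    outcome = np.random.choice([0, 1], p=probs)

    # ...and update the description accordingly, exactly like conditioning a coin flip
    # on having seen tails. Nothing physical "collapses"; the bookkeeping is updated.
    updated = np.zeros(2)
    updated[outcome] = 1.0
    print(f"observed outcome {outcome}, updated distribution {updated}")
    ```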

    So maybe Roger Penrose just wasted his retirement on this passion project?

    Depends on whether or not he is enjoying himself. If he’s having fun, then it isn’t a waste.


  • The only observer of the mind would be an outside observer looking at you. You yourself are not an observer of your own mind, nor could you ever be. I think it was Feuerbach who originally made the analogy that if your eyeballs evolved to look inwardly at themselves, then they could not look outwardly at the outside world. We cannot observe our own brains, as they exist only to build models of reality; if a brain had to contain a model of itself, it would have no room left over to model the outside world.

    We can only assign an object to be what is “sensing” our thoughts through reflection. Reflection is ultimately still building models of the outside world, but the outside world contains a piece of ourselves reflected back, and this allows us to have some limited sense of what we are. If we lived in a universe where we somehow could never leave an impression upon the world, if we could not see our own hands or see our own faces reflected upon a still lake, we would never assign an entity to ourselves at all.

    We assign an entity to ourselves for the specific purpose of distinguishing ourselves as an object from other objects, but this is not an a priori notion (“I think therefore I am” is lazy sophistry). It is an a posteriori notion derived through reflection upon what we observe. We never actually observe ourselves, as such a thing is impossible. At best we can observe reflections of ourselves and derive some limited model of what “we” are, but there will always be a gap between what we really are and the reflection of what we are.

    Precisely what is “sensing your thoughts” is yourself as derived through reflection, which itself derives from observation of the natural world. Without reflection, it is meaningless to even ask what is “behind” it. If we could not reflect, we would have no reason to assign anything there at all. If we do include reflection, then the answer to what is there is trivially obvious: what you see in a mirror.


  • There shouldn’t be a distinction between quantum and non-quantum objects. That’s the mystery. Why can’t large objects exhibit quantum properties?

    What makes quantum mechanics distinct from classical mechanics is the fact that not only are there interference effects, but statistically correlated systems (i.e. “entangled”) can seem to interfere with one another in a way that cannot be explained classically, at least not without superluminal communication, or introducing something else strange like the existence of negative probabilities.

    If it wasn’t for these kinds of interference effects, then we could just chalk up quantum randomness to classical randomness, i.e. it would just be the same as any old form of statistical mechanics. The randomness itself isn’t really that much of a defining feature of quantum mechanics.

    The reason I say all this is because we actually do know why there is a distinction between quantum and non-quantum objects and why large objects do not exhibit quantum properties. It is a mixture of two factors. First, larger systems like big molecules have smaller wavelengths, so interference with other molecules becomes harder and harder to detect. Second, there is decoherence: even for small particles, if they interact with a ton of other particles and you average over these interactions, you will find that the interference terms (the “coherences” in the density matrix) converge to zero, i.e. when you inject noise into a system, its average behavior converges to a classical probability distribution.
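    Here is a toy illustration of that second factor (a dephasing model I am making up purely for illustration, not a full open-systems treatment): averaging a qubit’s density matrix over random phase kicks from the environment drives the off-diagonal coherences to zero, leaving an ordinary classical 50/50 mixture.

    ```python
    import numpy as np

    # A qubit in an equal superposition, |+> = (|0> + |1>)/sqrt(2).
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    rho = np.outer(plus, plus.conj())          # density matrix; off-diagonal "coherences" = 0.5

    def dephase(rho, sigma, n_samples=20_000):
        """Average over random phase kicks (environmental noise) of spread sigma."""
        out = np.zeros_like(rho, dtype=complex)
        for _ in range(n_samples):
            phi = np.random.normal(0.0, sigma)
            U = np.diag([1.0, np.exp(1j * phi)])   # random relative phase between |0> and |1>
            out += U @ rho @ U.conj().T
        return out / n_samples

    for sigma in (0.0, 0.5, 2.0):
        coherence = abs(dephase(rho, sigma)[0, 1])
        print(f"noise spread {sigma}: |coherence| ≈ {coherence:.3f}")
    # As the noise grows, the off-diagonal terms go to zero and only the classical
    # 50/50 mixture on the diagonal survives, so interference effects disappear.
    ```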

    Hence, we already know why there is a seeming “transition” from quantum to classical. This doesn’t get rid of the fact that it is still statistical in nature: it doesn’t give you a reason why a particle that had a 50% chance of being over there and a 50% chance of being over here turns out, when you measure it, to be over here rather than over there. Decoherence doesn’t tell you why you actually get the results you do from a measurement; it’s still fundamentally random (which bothers people for some reason?).

    But it is well-understood how quantum probabilities converge to classical probabilities. There have even been studies that have reversed the process of decoherence.




  • I have never understood the argument that QM is evidence for a simulation because the universe is using fewer resources or something like that by not “rendering” things at that low of a level. The problem is that, yes, it’s probabilistic, but it is not merely probabilistic. We have probability in classical mechanics already, like when dealing with gases in statistical mechanics, and we can model that just fine. Modeling wave functions is far more computationally expensive because they do not even exist in traditional spacetime but in an abstract Hilbert space whose complexity grows exponentially faster than that of classical systems. That’s the whole reason for building quantum computers: it is so much more computationally expensive to simulate this that it is more efficient just to have a machine that can do it. The laws of physics at a fundamental level get far more complex and far more computationally expensive, not the reverse.
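    To put rough numbers on that cost (a back-of-the-envelope sketch only, assuming a brute-force state-vector simulation at 16 bytes per complex amplitude):

    ```python
    # An n-qubit state vector has 2**n complex amplitudes. At 16 bytes each
    # (double-precision complex), brute-force simulation blows up very quickly.
    for n in (10, 30, 50, 300):
        amplitudes = 2 ** n
        gib = amplitudes * 16 / 2**30
        print(f"{n:>3} qubits: 2^{n} = {amplitudes:.3e} amplitudes ≈ {gib:.3e} GiB")
    # ~10 qubits fits in a few kilobytes, ~30 qubits already needs ~16 GiB,
    # ~50 qubits is in the petabyte range, and ~300 qubits would need more
    # amplitudes than there are atoms in the observable universe.
    ```

    For comparison, a molecular-dynamics simulation of a classical gas only needs memory linear in the number of particles tracked, which is part of why the “saving resources” framing never made sense to me.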


  • Quantum internet is way overhyped and likely will never exist. There are not only no practical benefits to using QM for the internet, but it also has huge inherent problems that make it unlikely ever to scale.

    • While technically, yes, you can make “unbreakable encryption,” this is just a glorified one-time pad, which requires the key to be the same length as the message, and AES-256 is already considered unbreakable even by quantum computers. So good luck cutting your internet bandwidth in half for purely theoretical benefits that exist on paper but will never be noticeable in practice!
    • Since it’s a symmetric cipher, it doesn’t even work for internet communication unless you have a way to distribute keys, and there is something called quantum key distribution (QKD), based around protocols like BB84 (see the sketch after this list). However, BB84 only lets you guarantee that nobody can snoop on the key exchange without being detected; it does not actually stop them from snooping on your key the way Diffie-Hellman does. Meaning, a person can shut down the entire network’s traffic just by observing the packets in transit, without having to do anything else to them. How could governments and private companies possibly build an internet that depends on guaranteeing nobody ever looks at packets as they’re transmitted through the network?
    • QKD is also susceptible to man-in-the-middle attacks just like Diffie-Hellman, a problem we solve in classical cryptography with digital signature algorithms. There are quantum digital signature (QDS) schemes, but they rely on Holevo’s theorem, which says that the “collapse” is effectively a one-way process and only a limited amount of information can be extracted from it, and thus you cannot derive a qubit’s initial state simply by measuring it. The problem, however, is that Holevo’s theorem also says that if you had tons of copies of the same qubit, you could extract more and more information from it. Meaning, all public keys would have to be consumable, because making copies of them would undermine their security, and that makes it just not something that can scale.
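    Here is the toy sketch mentioned above: a purely classical intercept-and-resend simulation of BB84 basis sifting (my own illustrative code, not any real QKD stack). It shows what I mean by detection rather than prevention: an eavesdropper shows up as a roughly 25% error rate on the sifted key, which lets the two parties abort, but it doesn’t stop her from having looked, and it doesn’t keep the link usable.

    ```python
    import random

    def bb84_qber(n_rounds=4000, eavesdrop=False):
        """Toy intercept-and-resend simulation of BB84 basis sifting.
        Returns the error rate on the sifted key: ~0% with no eavesdropper,
        ~25% with one. Snooping is detected after the fact, not prevented."""
        errors = kept = 0
        for _ in range(n_rounds):
            bit        = random.randint(0, 1)   # Alice's raw key bit
            alice_base = random.randint(0, 1)   # 0 = rectilinear, 1 = diagonal
            bob_base   = random.randint(0, 1)

            send_bit, send_base = bit, alice_base
            if eavesdrop:                        # Eve measures in a random basis and re-sends
                eve_base = random.randint(0, 1)
                send_bit  = bit if eve_base == alice_base else random.randint(0, 1)
                send_base = eve_base

            if bob_base == alice_base:           # sifting: keep only matching-basis rounds
                bob_bit = send_bit if bob_base == send_base else random.randint(0, 1)
                kept += 1
                errors += (bob_bit != bit)
        return errors / kept

    print(f"error rate without Eve: {bb84_qber():.1%}")
    print(f"error rate with Eve:    {bb84_qber(eavesdrop=True):.1%}")
    ```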

    And all this for what? You take on all these drawbacks for what? Imagined security benefits that you won’t actually notice in real life? The only people I could ever see using this are hyperparanoid governments. A government intranet could be highly controlled, highly centralized, and, by the very nature of not wanting many people to have access to it, not particularly large scale. So I could see such a government getting something like that to work, but there would be no reason to replace the internet with it.