• NaibofTabr@infosec.pub
    link
    fedilink
    English
    arrow-up
    118
    arrow-down
    8
    ·
    7 months ago

    Even if it were possible to scan the contents of your brain and reproduce them in a digital form, there’s no reason that scan would be anything more than bits of data on the digital system. You could have a database of your brain… but it wouldn’t be conscious.

    No one has any idea how to replicate the activity of the brain. As far as I know there aren’t any practical proposals in this area. All we have are vague theories about what might be going on, and a limited grasp of neurochemistry. It will be a very long time before reproducing the functions of a conscious mind is anything more than fantasy.

    • ƬΉΣӨЯΣƬIKΣЯ@feddit.de
      link
      fedilink
      arrow-up
      52
      arrow-down
      1
      ·
      7 months ago

      Counterpoint, from a complex systems perspective:

      We don’t fully know, nor can we yet model, the details of neurochemistry, but we do know some essential features which we can model: action potentials in spiking neuron models, for example.
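
      A minimal leaky integrate-and-fire sketch of the kind of spiking model meant here; every parameter below (membrane time constant, threshold, drive strength) is an illustrative stand-in, not measured neurophysiology:

```python
def lif_spike_times(input_current, dt=1e-3, tau=0.02, v_rest=-65.0,
                    v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Leaky integrate-and-fire neuron: leak toward rest, integrate
    the input, emit a spike and reset when the threshold is crossed."""
    v = v_rest
    spike_times = []
    for step, i_ext in enumerate(input_current):
        # Membrane potential decays toward rest while the input drives it up.
        v += (-(v - v_rest) + r_m * i_ext) * (dt / tau)
        if v >= v_thresh:              # action potential
            spike_times.append(step * dt)
            v = v_reset                # reset after the spike
    return spike_times

# A constant drive held for 100 ms produces regular, repeated spiking.
spikes = lif_spike_times([2.0] * 100)
```

      The model ignores neurochemistry entirely, yet still reproduces the essential spike-and-reset behaviour.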

      It’s likely that the details don’t actually matter much. Take traffic jams as an example: there are lots of details involved, driver psychology, the physical mechanics of the cars, and so on, but you only need a handful of very rough parameters to reproduce traffic jams in a computer.
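
      The traffic example can be made concrete with the Nagel–Schreckenberg cellular automaton, which produces spontaneous jams from just four rough rules; the parameter values here are arbitrary:

```python
import random

def nagel_schreckenberg(road_length=100, n_cars=30, v_max=5,
                        p_slow=0.3, steps=100, seed=1):
    """Minimal Nagel-Schreckenberg traffic model on a ring road.
    Four rough rules are enough for jams to emerge."""
    rng = random.Random(seed)
    positions = sorted(rng.sample(range(road_length), n_cars))
    velocities = [0] * n_cars
    for _ in range(steps):
        new_velocities = []
        for i in range(n_cars):
            # Gap to the car ahead (ring road, so wrap around).
            gap = (positions[(i + 1) % n_cars] - positions[i]) % road_length
            v = min(velocities[i] + 1, v_max)       # 1. accelerate
            v = min(v, gap - 1)                     # 2. brake to keep distance
            if v > 0 and rng.random() < p_slow:     # 3. random slowdown
                v -= 1
            new_velocities.append(v)
        velocities = new_velocities
        positions = [(p + v) % road_length          # 4. move all cars at once
                     for p, v in zip(positions, velocities)]
    return positions, velocities

positions, velocities = nagel_schreckenberg()
```

      No driver psychology and no car mechanics, yet stop-and-go waves appear on their own once the density is high enough.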

      That’s the thing with “emergent” phenomena, they are less complicated than the sum of their parts, which means you can achieve the same dynamics using other parts.

      • tburkhol@lemmy.world
        link
        fedilink
        arrow-up
        32
        ·
        7 months ago

        Even if you ignore all the neuromodulatory chemistry, much of the interesting processing happens at sub-threshold depolarizations, depending on millisecond-scale coincidence detection from synapses distributed through an enormous, slow-conducting dendritic network. The simple electrical signal transmission model, where an input neuron causes reliable spiking in an output neuron, comes from skeletal muscle, which served as the model for synaptic transmission for decades simply because it was a lot easier to study than actual inter-neural synapses.

        But even that doesn’t matter if we can’t map the inter-neuronal connections, and so far that’s only been done for the 302 neurons of the C. elegans nervous system (i.e., not even a ‘real’ brain), after a decade of work. We’re nowhere close to mapping the neuroscientists’ favorite model, Aplysia, which only has 20,000 neurons. Maybe statistics will wash out some of those details by the time you get to humans’ 10^11-neuron systems, but considering how bad current network models are at predicting even simple behaviors, I’m going to say more details matter than we will discover any time soon.

        • Dr. Bob@lemmy.ca
          link
          fedilink
          English
          arrow-up
          16
          arrow-down
          1
          ·
          7 months ago

          Thanks, fellow traveller, for punching holes in computational stupidity. Everything you said is true, but I also want to point out that the brain is an analog system, so the information in a neuron is infinite relative to a digital system (cf. digitizing analog recordings). As I tell my students: if you are looking for a binary event to start modeling, look to individual ions moving across the membrane.

          • Blue_Morpho@lemmy.world
            cake
            link
            fedilink
            arrow-up
            14
            arrow-down
            1
            ·
            7 months ago

            As I tell my students: if you are looking for a binary event to start modeling, look to individual ions moving across the membrane.

            So it’s not infinite and can be digitized. :)

            But to be more serious, digitizing analog recordings is a bad analogy, because audio can be digitized and perfectly reproduced. The Nyquist–Shannon sampling theorem means the output can be perfectly reproduced. It’s not approximate. It’s perfect.

            https://en.m.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem
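
            A quick sketch of what the theorem buys you: a band-limited tone sampled above twice its frequency can be rebuilt between the sample instants with Whittaker–Shannon (sinc) interpolation. The tone and rates below are arbitrary choices, and the small residual error comes only from using finitely many samples:

```python
import numpy as np

f_sig = 1000.0          # a 1 kHz tone: band-limited by construction
f_samp = 8000.0         # sampling well above the Nyquist rate of 2 kHz
dt = 1.0 / f_samp

k = np.arange(64)
samples = np.sin(2 * np.pi * f_sig * k * dt)   # the "digitized recording"

def reconstruct(t):
    """Whittaker-Shannon interpolation: one sinc kernel per sample."""
    return float(np.sum(samples * np.sinc((t - k * dt) / dt)))

# Evaluate halfway between two sample instants and compare to the
# original analog tone; with infinitely many samples the match is exact.
t = 20.5 * dt
error = abs(reconstruct(t) - np.sin(2 * np.pi * f_sig * t))
```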

            • intensely_human@lemm.ee
              link
              fedilink
              arrow-up
              2
              ·
              7 months ago

              Analog signals can only be “perfectly” reproduced up to a specific target frequency. Given that the actual signal is composed of infinite frequencies, you’d need twice that infinite sampling frequency to completely reproduce it.

              • Blue_Morpho@lemmy.world
                cake
                link
                fedilink
                arrow-up
                2
                ·
                7 months ago

                There aren’t infinite frequencies.

                “The mean free path in air is 68 nm, and the mean inter-atomic spacing is some tens of nm (about 30), while the speed of sound in air is 300 m/s, so the absolute maximum frequency is about 5 GHz.”

                • intensely_human@lemm.ee
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  7 months ago

                  The term “mean free path” sounds a lot like an average to me, implying a distribution which extends beyond that number.

                  • Blue_Morpho@lemmy.world
                    cake
                    link
                    fedilink
                    arrow-up
                    1
                    ·
                    edit-2
                    7 months ago

                    One cubic centimeter of air contains roughly 2.5 × 10^19 molecules. In that context, the mean free path is 68 nm up to the limits of your ability to measure. It’s like flipping a coin 10^19 times and averaging the heads and tails: the result is going to be extremely close to 50%.

                    Not to mention that at 5 GHz, the sound can only propagate 68 nm.
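
                    The coin-flip claim is just the law of large numbers; even a quick simulation with far fewer flips shows the deviation shrinking like 1/sqrt(n):

```python
import random

rng = random.Random(0)
n_flips = 1_000_000
heads = sum(rng.random() < 0.5 for _ in range(n_flips))
deviation = abs(heads / n_flips - 0.5)
# By the law of large numbers the deviation shrinks like 1/sqrt(n):
# around 5e-4 at a million flips, and around 1e-10 at the ~1e19
# molecules in a cubic centimeter of air.
```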

            • Dr. Bob@lemmy.ca
              link
              fedilink
              English
              arrow-up
              2
              ·
              7 months ago

              It’s an analogy. There’s actually an academic joke about the point you’re making.

              A mathematician and an engineer are sitting at a table drinking when a very beautiful woman walks in and sits down at the bar.

              The mathematician sighs. “I’d like to talk to her, but first I have to cover half the distance between where we are and where she is, then half of the distance that remains, then half of that distance, and so on. The series is infinite. There’ll always be some finite distance between us.”

              The engineer gets up and starts walking. “Ah, well, I figure I can get close enough for all practical purposes.”

              The point of the analogy is not that one can’t get close enough for the ear to miss the difference; it’s that, in theory, analog carries infinite information. It’s true that vinyl recordings are not perfect analog systems because of physical limitations in the cutting process, and the same goes for magnetic tape and so on. But don’t mistake the metaphor for the idea.

              Ionic movement across membranes, especially at the scale we are talking about and given the density of channels in the system, is much closer to an ideal analog system. How much of that fidelity can you lose before it’s not your consciousness?

              • Blue_Morpho@lemmy.world
                cake
                link
                fedilink
                arrow-up
                1
                arrow-down
                1
                ·
                edit-2
                7 months ago

                “I’d like to talk to her, but first I have to cover half the distance between where we are and where she is, then half of the distance that remains, then half of that distance, and so on. The series is infinite.”

                I get that it’s a joke, but it’s a bad joke: that’s a convergent series, so the total distance is finite even though it has infinitely many terms. Any first-year calculus student would know that.
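
                A quick numeric check of why the joke’s series converges: each step covers half the remaining distance, so the partial sums approach 1 instead of growing without bound:

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... : each step halves the
# remaining distance, so the total converges to 1.
remaining = 1.0
total = 0.0
for _ in range(60):
    step = remaining / 2      # walk half of what is left
    total += step
    remaining -= step
```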

                “it’s that in theory analog carries infinite information.”

                But in reality it can’t. The universe isn’t continuous, it’s discrete. That’s why we have quantum mechanics: it’s the math for handling non-continuous transitions between states.

                How much of that fidelity can you lose before it’s not your consciousness?

                That can be tested with C. elegans: you can introduce changes and measure the point at which a difference propagates.

                • Dr. Bob@lemmy.ca
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  7 months ago

                  Measure differences in what? We can’t ask *C. elegans* about its state of mind, let alone its consciousness. There are several issues here: a philosophical issue about what you are modeling (e.g. mind, consciousness, or something else), a biological issue about which physical parameters and states you need to capture to produce that model, and the question of how you would test the fidelity of that model against the original organism. The scope of these issues is well outside a reply chain on Lemmy.

        • ƬΉΣӨЯΣƬIKΣЯ@feddit.de
          link
          fedilink
          arrow-up
          2
          ·
          7 months ago

          Yes, the connectome is kind of critical. But other than that, sub-threshold oscillations can be and are being modeled. It also doesn’t really matter that we are digitizing here: fluid dynamics are continuous, and we can still study, model, and predict them using finite lattices.

          There are some things that are missing, but we very clearly won’t need to model individual ions, and there is lots of other complexity that will not affect the outcome.

      • Yondoza@sh.itjust.works
        link
        fedilink
        arrow-up
        9
        ·
        7 months ago

        I heard a hypothesis that the first human-made consciousness will be an AI algorithm designed to monitor and coordinate other AI algorithms, which makes a lot of sense to me.

        Our consciousness is just the monitoring system for all our body’s subsystems. It is most certainly an emergent phenomenon of the interaction and management of different functions competing or coordinating for resources within the body.

        To me it seems very likely that the first human-made consciousness will not be designed to be conscious. It also seems likely that we won’t be aware of the first consciousnesses, because we won’t be looking for them: consciousness won’t be the goal of the development that makes it possible.

      • intensely_human@lemm.ee
        link
        fedilink
        arrow-up
        2
        ·
        7 months ago

        I’d say the details matter, based on the PEAR laboratory’s findings that consciousness can affect the outcomes of chaotic systems.

        Perhaps the reason evolution selected for enormous brains is that’s the minimum necessary complexity to get a system chaotic enough to be sensitive to and hence swayed by conscious will.

        • ƬΉΣӨЯΣƬIKΣЯ@feddit.de
          link
          fedilink
          arrow-up
          3
          ·
          7 months ago

          PEAR? Where staff participated in trials, rather than doing double blind experiments? Whose results could not be reproduced by independent research groups? Who were found to employ p-hacking and data cherry picking?

          You might as well argue that simulating a human mind is not possible because it wouldn’t have a zodiac sign.

    • Sombyr@lemmy.zip
      link
      fedilink
      arrow-up
      25
      ·
      7 months ago

      We don’t even know what consciousness is, let alone if it’s technically “real” (as in physical in any way.) It’s perfectly possible an uploaded brain would be just as conscious as a real brain because there was no physical thing making us conscious, and rather it was just a result of our ability to think at all.
      Similarly, I’ve heard people argue a machine couldn’t feel emotions because it doesn’t have the physical parts of the brain that allow that, so it could only ever simulate them. That argument has the same hole: we don’t actually know that we need those parts to feel emotions, or whether the final result is all that matters. If we replaced the whole “this happens, release this hormone to cause these changes in behavior and physical function” with a simple “this happened, change behavior and function,” maybe there isn’t really enough of a difference to call one simulated and the other real. Just different ways of achieving the same result.

      My point is, we treat all these things, consciousness, emotions, etc, like they’re special things that can’t be replicated, but we have no evidence to suggest this. It’s basically the scientific equivalent of mysticism, like the insistence that free will must exist even though all evidence points to the contrary.

      • merc@sh.itjust.works
        link
        fedilink
        arrow-up
        8
        ·
        7 months ago

        Also, some of what happens in the brain is just storytelling. Like when the doctor hits your patellar tendon, just under your knee, with a reflex hammer: your knee jerks, but the signals telling it to do that don’t even make it to the brain. Instead the signal gets to your spinal cord, which “instructs” your knee muscles.

        But, they’ve studied similar things and have found out that in many cases where the brain isn’t involved in making a decision, the brain does make up a story that explains why you did something, to make it seem like it was a decision, not merely a reaction to stimulus.

        • intensely_human@lemm.ee
          link
          fedilink
          arrow-up
          1
          ·
          7 months ago

          That seems like a lot of wasted energy, to produce that illusion. Doesn’t nature select out wasteful designs ruthlessly?

          • wols@lemm.ee
            link
            fedilink
            arrow-up
            5
            ·
            7 months ago

            TLDR:
            Nature can’t simply select out consciousness because it emerges from hardware that is useful in other ways. The brain doesn’t waste energy on consciousness, it uses energy for computation, which is useful in a myriad ways.

            The usefulness of consciousness from an evolutionary fitness perspective is a tricky question to answer in general terms. An easy intuition might be to look at the utility of pain for the survival of an individual.

            I personally think that, ultimately, consciousness is a byproduct of a complex brain. The evolutionary advantage is mainly given by other features enabled by said complexity (generally more sophisticated and adaptable behavior, social interactions, memory, communication, intentional environment manipulation, etc.) and consciousness basically gets a free ride on that already-useful brain.
            Species with more complex brains have an easier time adapting to changes in their environment because their brains allow them to change their behavior much faster than random genetic mutations would. This opens up many new ecological niches that simpler organisms wouldn’t be able to fill.

            I don’t think nature selects out waste. As long as a species is able to proliferate its genes, it can be as wasteful as it “wants”. It only has to be fit enough, not as fit as possible. E.g. if there’s enough energy available to sustain a complex brain, there’s no pressure to make it more economical by simplifying its function. (And there are many pressures that can be reacted to without mutation when you have a complex brain, so I would guess that, on the whole, evolution in the direction of simpler brains requires stronger pressures than other adaptations)

            • merc@sh.itjust.works
              link
              fedilink
              arrow-up
              2
              ·
              7 months ago

              Yeah. This is related to supernatural beliefs. If the grass moves it might just be a gust of wind, or it might be a snake. Even if snakes are rare, it’s better to be safe than sorry. But, that eventually leads to assuming that the drought is the result of an angry god, and not just some random natural phenomenon.

              So, brains are hard-wired to look for causes, even inventing supernatural causes, because it helps avoid snakes.

      • arendjr@programming.dev
        link
        fedilink
        arrow-up
        4
        arrow-down
        2
        ·
        7 months ago

        let alone if it’s technically “real” (as in physical in any way.)

        This right here might already be a flaw in your argument. Something doesn’t need to be physical to be real. In fact, there’s scientific evidence that physical reality itself is an illusion created through observation. That implies (although it cannot prove) that consciousness may be a higher construct that exists outside of physical reality itself.

        If you’re interested in the philosophical questions this raises, there’s a great summary article that was published in Nature: https://www.nature.com/articles/436029a

        • Sombyr@lemmy.zip
          link
          fedilink
          arrow-up
          15
          ·
          7 months ago

          On the contrary, it’s not a flaw in my argument, it is my argument. I’m saying we can’t be sure a machine could not be conscious because we don’t know that our brain is what makes us conscious. Nor do we know where the threshold is where consciousness arises. It’s perfectly possible all we need is to upload an exact copy of our brain into a machine, and it’d be conscious by default.

          • NaibofTabr@infosec.pub
            link
            fedilink
            English
            arrow-up
            2
            ·
            7 months ago

            The problem with this is that even if a machine is conscious, there’s no reason it would be conscious like us. I fully agree that consciousness could take many forms, probably infinite forms - and there’s no reason to expect that one form would be functionally or technically compatible with another.

            What does the idea “exact copy of our brain” mean to you? Would it involve emulating the physical structure of a human brain? Would it attempt to abstract the brain’s operations from the physical structure? Would it be a collection of electrical potentials? Simulations of the behavior of specific neurochemicals? What would it be in practice, that would not be hand-wavy fantasy?

            • Sombyr@lemmy.zip
              link
              fedilink
              arrow-up
              4
              ·
              7 months ago

              I suppose I was overly vague about what I meant by “exact copy.” I mean all of the knowledge, memories, and an exact map of the state of our neurons at the time of upload being uploaded to a computer, and then the functions being simulated from there. Many people believe that even if we could simulate it so perfectly that it matched a human brain’s functions exactly, it still wouldn’t be conscious because it’s still not a real human brain. That’s the point I was arguing against. My argument was that if we could mimic human brain functions closely enough, there’s no reason to believe the brain is so special that a simulation could not achieve consciousness too.
              And you’re right, it may not be conscious in the same way. We have no reason to believe either way that it would or wouldn’t be, because the only thing we can actually verify is conscious is ourself. Not humans in general, just you, individually. Therefore, how conscious something is is more of a philosophical debate than a scientific one because we simply cannot test if it’s true. We couldn’t even test if it was conscious at all, and my point wasn’t that it would be, my point is that we have no reason to believe it’s possible or impossible.

              • intensely_human@lemm.ee
                link
                fedilink
                arrow-up
                2
                ·
                7 months ago

                Unfortunately, the physics underlying brain function is chaotic, meaning infinite (or “maximum”) precision is required to ensure two systems evolve to the same later states.

                That level of precision cannot be achieved in measuring the state, without altering the state into something unknown after the moment of measurement.

                Nothing quantum is necessary for this inability to determine state. Consider the problem of trying to map out where the eight ball is on a pool table, but you can’t see the eight ball. All you can do is throw other balls at it and observe how their velocities change. Now imagine you can’t see those balls either, because the sensing mechanism you’re using is composed of balls of equal or greater size.

                Unsolvable problem. Like a box trying to contain itself.

                • Blue_Morpho@lemmy.world
                  cake
                  link
                  fedilink
                  arrow-up
                  2
                  ·
                  7 months ago

                  Chaos comes into play as a state changes. The poster above you talks about copying the state. Once copied the two states will diverge because of chaos. But that doesn’t preclude consciousness. It means the copy will soon have different thoughts.
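
                  That divergence is easy to see in any chaotic system; here the logistic map stands in for the brain’s dynamics (an illustrative toy, not a neural model), with two copies of a state differing by one part in ten million:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a textbook chaotic system."""
    return r * x * (1.0 - x)

# Two "copies" of the same state, differing by one part in ten million.
a, b = 0.4, 0.4000001
for _ in range(50):
    a, b = logistic(a), logistic(b)

divergence = abs(a - b)   # the tiny copy error is amplified step by step
```

                  Both copies are perfectly valid trajectories; they just stop agreeing with each other, which is the point above: divergence means different thoughts, not no thoughts.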

            • intensely_human@lemm.ee
              link
              fedilink
              arrow-up
              1
              ·
              7 months ago

              We make a giant theme park where people can interact with androids. Then we make a practically infinite number of copies of this theme park. We put androids in the copies and keep providing feedback to alter their behavior until they behave exactly like the people in the theme park.

          • arendjr@programming.dev
            link
            fedilink
            arrow-up
            3
            arrow-down
            2
            ·
            7 months ago

            I see that’s certainly a different way of looking at it :) Of course I can’t say with any authority that it must be wrong, but I think it’s a flaw because it seems you’re presuming that consciousness arises from physical properties. If the physical act of copying a brain’s data were to give rise to consciousness, that would imply consciousness is a product of physical reality. But my position (and that of the paper I linked) is that physical reality is a product of mental consciousness.

            • Gabu@lemmy.world
              link
              fedilink
              arrow-up
              3
              ·
              edit-2
              7 months ago

              That’s based on a pseudoscientific interpretation of quantum physics not related to actual physics.

              • arendjr@programming.dev
                link
                fedilink
                arrow-up
                1
                arrow-down
                2
                ·
                7 months ago

                Do elaborate on the batshit part :) It’s a scientific fact that physical matter does not exist in its physical form when unobserved. This may not prove the existence of consciousness, but it certainly makes it plausible. It certainly invalidates physical reality as the “source of truth”, so to speak. Which makes the explanation that physical reality is a product of consciousness not just plausible, but more likely than the other way around. Again, not proof, but far from batshit.

                • Sombyr@lemmy.zip
                  link
                  fedilink
                  arrow-up
                  5
                  ·
                  7 months ago

                  I think you’re a little confused about what observed means and what it does.
                   When unobserved, elementary particles behave like a wave, but they do not stop existing; a wave is still a physical thing. Additionally, observation does not require consciousness. For instance, a building, such as a house, does not begin to behave like a wave when nobody is looking at it; it is still a physical building. “Observation” is therefore a bit of a misnomer: it really means that a complex interaction we don’t fully understand causes particles to behave like particles rather than waves. It just happens that human observation is one of the ways this interaction can take place.
                  An unobserved black hole will still feed, an unobserved house is still a house.
                  To be clear, I’m not insulting you or your idea like the other dude, but I wanted to clear that up.

                  • arendjr@programming.dev
                    link
                    fedilink
                    arrow-up
                    1
                    ·
                    edit-2
                    7 months ago

                    Thanks, that seems a fair approach, although it doesn’t have me entirely convinced yet. Can you explain what the physical form of a wave function is? Because it’s not like a wave, such as waves in the sea. It’s really a wave function, an abstract representation of probabilities which in my understanding does not have any physical representation.

                     You say the building does not start acting like a wave, and you’re right, that would be silly. But it does enter into a superposition where the building can be either collapsed or not. Like Schrödinger’s cat, which can be dead or alive, and will be in a superposition of both until observation happens again. And yes, the probabilities of this superposition are indeed expressed through the wave function, even though there is no physical wave.

                    It’s true observation does not require consciousness. But until we know what does constitute observation, I believe consciousness provides a plausible explanation.

                • Gabu@lemmy.world
                  link
                  fedilink
                  arrow-up
                  4
                  ·
                  7 months ago

                  It’s a scientific fact that physical matter does not exist in its physical form when unobserved.

                   No, it’s not. The quantum field and the quantum wave exist whether or not you observe them; only the particle behavior changes based on interaction. Note how I specifically used the word “interaction”, not “observation”, because that’s what a quantum physicist means when they say the wave–particle duality depends on the observer: a wave function collapses once a definite interaction occurs, not only when a person looks at it.

                  It certainly invalidates physical reality as the “source of truth”, so to say

                   How so, when the interpretation you’re citing is specifically dependent on the mechanics of quantum field fluctuation? How can physical reality not exist when it is physical reality that gives you the means to (badly) justify your hypothesis?

        • Gabu@lemmy.world
          link
          fedilink
          arrow-up
          3
          ·
          7 months ago

          That’s pseudoscientific bullshit. Quantum physics absolutely does tell us that there is a real physical world. It’s incredibly counterintuitive and impossible to fully describe, but does exist.

          • NaibofTabr@infosec.pub
            link
            fedilink
            English
            arrow-up
            3
            ·
            7 months ago

            Heh, well… I guess that depends on how you define “physical”… if quantum field theory is correct then everything we experience is the product of fluctuations in various fields, including the physical mass of protons, neutrons etc. “Reality” as we experience it might be more of an emergent property, as illusory as the apparent solidity of matter.

      • Blue_Morpho@lemmy.world
        cake
        link
        fedilink
        arrow-up
        15
        arrow-down
        1
        ·
        7 months ago

        I read that and the summary is, “Here are current physical models that don’t explain everything. Therefore, because science doesn’t have an answer it could be magic.”

        We know consciousness is attached to the brain because physical changes in the brain cause changes in consciousness; physical damage can cause complete personality changes. We also have a complete spectrum of observed consciousness, from the flatworm with a few hundred neurons to the chimpanzee with 28 billion. Chimps have emotions, self-reflection, and everything but full language. We can step backwards from chimps to simpler animals and it’s a continuous spectrum of consciousness: there isn’t a hard divide, there’s only less. Humans aren’t magical.

        • nnullzz@lemmy.world
          link
          fedilink
          arrow-up
          3
          ·
          7 months ago

          I understand your point. But science has also shown us over time that things we thought were magic were actually things we can figure out. Consciousness is definitely up there in that category of us not fully understanding it. So what might seem like magic now, might be well-understood science later.

          Not able to provide links at the moment, but there are also examples on the other side of the argument that lead us to think that maybe consciousness isn’t fully tied to physical components. Sure, the brain might interface with senses, consciousness, and other parts to give us the whole experience as a human. But does all of that equate to consciousness? Is the UI of a system the same thing as the user?

        • Queen HawlSera@lemm.ee
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          6 months ago

          And we know the flatworm and chimp don’t have non-local brains because?

          I’m just saying, it didn’t seem like anyone was arguing that humans were special, just that consciousness may be non-local. Many quantum processes are, and we still haven’t ruled out the possibility of Quantum phenomena happening in the brain.

          • Blue_Morpho@lemmy.world
            cake
            link
            fedilink
            arrow-up
            1
            arrow-down
            1
            ·
            6 months ago

            Because flatworm neurons can be exactly modeled without adding anything extra.

            It’s like if you said, “And we know a falling ball isn’t caused by radiation because?” If you can model a ball dropping in a vacuum without adding any extra variables to your equations, why claim something extra? It doesn’t mean radiation couldn’t affect a falling ball. But adding radiation isn’t needed to explain a falling ball.

            The neurons in a flatworm can be modeled without adding quantum effects. So why bother adding in other effects?

            And a minor correction: “non-local” means faster than light. Quantum effects do not allow faster-than-light information transfer, and consciousness by definition is information. So even if quantum processes affected neurons macroscopically, there still couldn’t be non-local consciousness.

      • Xhieron@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        7 months ago

        Thank you for this. That was a fantastic survey of some non-materialistic perspectives on consciousness. I have no idea what future research might reveal, but it’s refreshing to see that there are people who are both very interested in the questions and also committed to the scientific method.

    • Maggoty@lemmy.world
      link
      fedilink
      arrow-up
      8
      ·
      7 months ago

      I think we’re going to learn how to mimic a transfer of consciousness before we learn how to actually do one. Basically we’ll figure out how to boot up a new brain with all of your memories intact. But that’s not actually a transfer, that’s a clone. How many millions of people will we murder before we find out the Zombie Zuckerberg Corp was lying about it being a transfer?

    • Gabu@lemmy.world
      link
      fedilink
      arrow-up
      7
      ·
      7 months ago

      You could have a database of your brain… but it wouldn’t be conscious.

      Where is the proof of your statement?

      • NaibofTabr@infosec.pub
        link
        fedilink
        English
        arrow-up
        7
        ·
        7 months ago

        Well, there’s no proof; it’s all speculative, and even the concept of scanning all the information in a human brain is fantasy, so there isn’t going to be a real answer for a while.

        But just as a conceptual argument, how do you figure that a one-time brain scan would be able to replicate active processes that occur over time? Or would you expect the brain scan to be done over the course of a year or something like that?

        • intensely_human@lemm.ee
          link
          fedilink
          arrow-up
          5
          ·
          7 months ago

          You make a functional model of a neuron that can behave over time the way real neurons do. Then you map out all the synapses and their weights. Those give you a starting state, and your neuron model is the function that produces subsequent states.

          The problem is that brains don’t have “clock cycles”, at least not as strict as the ones in artificial neural networks.
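          A leaky integrate-and-fire neuron is the usual minimal version of such a functional model. Here's a rough sketch in Python; the function name and all parameter values are made up for illustration, not taken from any particular simulator:

```python
def simulate_lif(inputs, dt=1e-3, tau=0.02, v_rest=-70.0,
                 v_thresh=-54.0, v_reset=-80.0, r_m=10.0):
    """Leaky integrate-and-fire neuron: integrate input current over time,
    emit a spike and reset whenever the membrane potential crosses threshold.
    All parameters (time constant, thresholds, resistance) are illustrative."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(inputs):
        # dV/dt = (-(V - V_rest) + R*I) / tau, integrated with a simple Euler step
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:
            spike_times.append(t)
            v = v_reset
    return spike_times

# A constant drive produces regular spiking once the membrane charges up
spikes = simulate_lif([2.0] * 100)
```

          Note the `dt` parameter: the model imposes discrete time steps on a system that is continuous in reality, which is exactly the “no clock cycles” problem.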

      • BestBouclettes@jlai.lu
        link
        fedilink
        arrow-up
        23
        ·
        7 months ago

        ChatGPT is not conscious; it’s just a probabilistic language model. What it says means nothing to it, and it has no sense of anything. That might change in the future, but currently it’s not.

        • h3ndrik@feddit.de
          link
          fedilink
          arrow-up
          7
          ·
          edit-2
          7 months ago

          And it doesn’t have any internal state of mind. It can’t “remember” or learn anything from experience. You always need to feed everything into the context, or stop and retrain it to incorporate “experiences”. So I’d say that rules out consciousness without further systems extending it.
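          To illustrate the statelessness as a toy sketch (`fake_llm` here is a made-up stand-in, not a real API): the model is a pure function of its prompt, and any apparent “memory” is just the transcript being pasted back in on every turn:

```python
import hashlib

def fake_llm(prompt):
    """Stand-in for a model with frozen weights: the reply is a pure
    function of the prompt text and nothing else."""
    digest = hashlib.sha256(prompt.encode()).hexdigest()[:8]
    return f"reply-{digest}"

def chat_turn(history, user_msg):
    """One chat turn. All 'memory' lives in the transcript we rebuild and
    re-send on every call; the 'model' itself keeps no state between turns."""
    history = history + [("user", user_msg)]
    context = "\n".join(f"{role}: {text}" for role, text in history)
    return history + [("assistant", fake_llm(context))]

h = chat_turn([], "my name is Ada")
h = chat_turn(h, "what's my name?")
# The second reply can only reflect the name because the first exchange
# was fed back into the prompt, not because anything was remembered.
```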

          • merc@sh.itjust.works
            link
            fedilink
            arrow-up
            2
            ·
            7 months ago

            Also, actual brains arise from desires / needs. Brains got bigger to accommodate planning and predicting.

            When a human generates text, the fundamental reason for doing so is to fulfill some desire or need. When an LLM generates text it’s because the program says to generate the next word, then the next, then the next, based on a certain probability of words appearing in a certain order.

            If an LLM writes text that appears to be helpful, it’s not doing it out of a desire to be helpful. It’s doing it because it’s been trained on tons of text in which someone was being helpful, and it’s mindlessly mimicking that behaviour.

            • h3ndrik@feddit.de
              link
              fedilink
              arrow-up
              1
              ·
              edit-2
              7 months ago

              Isn’t the reward function in reinforcement learning something like a desire it has? I mean training works because we give it some function to minimize/maximize… A goal that it strives for?! Sure it’s a mathematical way of doing it and in no way as complex as the different and sometimes conflicting desires and goals I have as a human… But nonetheless I think I’d consider this as a desire and a reason to do something at all, or machine learning wouldn’t work in the first place.
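              That’s roughly the picture in the simplest reinforcement learning setting. A toy epsilon-greedy bandit (all names and numbers here are illustrative), where the reward function is the agent’s only “motivation”:

```python
import random

def train_bandit(reward_fn, n_actions=3, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: the agent's only 'motivation' is the scalar
    returned by reward_fn. Everything it ends up doing follows from that."""
    rng = random.Random(seed)
    q = [0.0] * n_actions   # estimated value of each action
    n = [0] * n_actions     # how often each action was tried
    for _ in range(steps):
        if rng.random() < eps:                      # explore occasionally
            a = rng.randrange(n_actions)
        else:                                       # otherwise exploit the estimate
            a = max(range(n_actions), key=q.__getitem__)
        r = reward_fn(a)
        n[a] += 1
        q[a] += (r - q[a]) / n[a]                   # incremental mean update
    return q

# The "desire" we hand it: action 2 pays best, so it learns to prefer it
q = train_bandit(lambda a: [0.1, 0.5, 0.9][a])
best = max(range(3), key=q.__getitem__)
```

              Whether “maximize this number” counts as a desire is exactly the philosophical question, but mechanically it is the entire reason the agent does anything at all.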

              • merc@sh.itjust.works
                link
                fedilink
                arrow-up
                2
                ·
                7 months ago

                The reward function for an LLM is about generating a next word that is reasonable. It’s like a road-building robot that’s rewarded for each millimeter of road built, but has no intention to connect cities or anything. It doesn’t understand what cities are. It doesn’t even understand what a road is. It just knows how to incrementally add another millimeter of gravel and asphalt that an outside observer would call a road.

                If it happens to connect cities it’s because a lot of the roads it was trained on connect cities. But, if its training data also happens to contain a NASCAR oval, it might end up building a NASCAR oval instead of a road between cities.
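                A toy bigram generator captures the “millimeter at a time” idea (all names here are illustrative, and real LLMs are vastly more sophisticated): it only ever knows which word plausibly follows the last one, with no destination in mind:

```python
import random
from collections import defaultdict

def build_bigrams(corpus):
    """The model's entire 'knowledge': counts of which word follows which."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for w1, w2 in zip(words, words[1:]):
        counts[w1][w2] += 1
    return counts

def generate(counts, start, n=8, seed=0):
    """Lay down one plausible next word at a time; no plan, no destination."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break
        nxt, weights = zip(*followers.items())
        out.append(rng.choices(nxt, weights=weights)[0])
    return out

model = build_bigrams("the road connects the city and the road ends here")
words = generate(model, "the")
```

                Every step is locally plausible, and any larger structure in the output is inherited from the training text, not planned.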

                • h3ndrik@feddit.de
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  edit-2
                  7 months ago

                  That is an interesting analogy. In the real world it’s kinda similar. The construction workers also don’t have a “desire” (so to speak) to connect the cities. It’s just that their boss told them to do so. And it happens to be their job to build roads. Their desire is probably to get through the day and earn a decent living. And further along the chain, not even their boss nor the city engineer necessarily “wants” the road to go in a certain direction.

                  Talking about large language models instead of simpler forms of machine learning makes it a bit complicated, since it’s an elaborate trick. Somehow making them want to predict the next token makes them learn a bit of maths and concepts about the world. The “intelligence”, the ability to answer questions and do something akin to “reasoning”, emerges in the process.

                  I’m not so sure. Granted, the weights of an ML model don’t have any desire in themselves; they’re just numbers. But we have more than that: we give it a prompt and build chatbots and agents around the models. These are more complex systems with the capability to do something, like (simple) customer support or answering questions. And in the end we incentivise them to do their job as we want, albeit in a crude and indirect way.

                  And maybe this is skipping half of the story and jumping directly to philosophy… But we as humans might be machines, too. And what we call desires results from simpler processes that drive us, for example surviving, and wanting to feel pleasure instead of pain. What we do on a daily basis kind of emerges from that and our reasoning capabilities.

                  It’s kind of difficult to argue, because everything also happens within a context. The world around us shapes us, and at the same time we’re part of bigger dynamics and also shape our world. And large language models, or whole chatbots/agents, are pretty simplistic things. They can only do text and images. They don’t have consciousness or the ability to remember/learn/grow with every interaction, as we do. And they do simple, singular tasks (as of now) and aren’t embedded in a super complex world.

                  But I’d say that an LLM answering a question correctly (which it can do), and why it does so given how supervised learning works, is a concept similar to the construction worker building the road towards the other city, and how that relates to his basic instincts as a human. Both are results of simpler mechanisms that are only indirectly related to the goal the whole entity is working towards (i.e. needing money to pay for groceries, and paving the road).

                  I hope this makes some sense…

                  • merc@sh.itjust.works
                    link
                    fedilink
                    arrow-up
                    2
                    ·
                    7 months ago

                    The construction workers also don’t have a “desire” (so to speak) to connect the cities. It’s just that their boss told them to do so.

                    But, the construction workers aren’t the ones who designed the road. They’re just building some small part of it. In the LLM case that might be like an editor who is supposed to go over the text to verify the punctuation is correct, but nothing else. But, the LLM is the author of the entire text. So, it’s not like a construction worker building some tiny section of a road, it’s like the civil engineer who designed the entire highway.

                    Somehow making them want to predict the next token makes them learn a bit of maths and concepts about the world

                    No, it doesn’t. They learn nothing. They’re simply able to generate text that looks like the text generated by people who do know math. They certainly don’t know any concepts. You can see that by how badly they fail when you ask them to do simple calculations. They quickly start generating text that looks like it contains fundamental mistakes, because they’re not actually doing math or anything, they’re just generating plausible next words.

                    The “intelligence”, the ability to answer questions and do something akin to “reasoning”, emerges in the process.

                    No, there’s no intelligence, no reasoning. They can fool humans into thinking there’s intelligence there, but that’s like a scarecrow convincing a crow that there’s a human or human-like creature out in the field.

                    But we as humans might be machines, too

                    We are meat machines, but we’re meat machines that evolved to reproduce. That means a need / desire to get food, shelter, and eventually a mate. Those drives hook into the brain to enable long and short term planning to achieve those goals. We don’t generate language for its own sake, but in pursuit of a goal. An LLM doesn’t have that. It merely generates plausible words. There’s no underlying drive. It’s more a scarecrow than a human.

      • embed_me@programming.dev
        link
        fedilink
        arrow-up
        10
        arrow-down
        7
        ·
        7 months ago

        🥱

        The only people with this take are people who don’t understand it. Plus, growth and decline are an inherent part of consciousness; unless the computer can be born, change, and then die in some way, it can’t really achieve consciousness.