12 comments

  • root_axis 44 minutes ago
    There are a lot of people vulnerable to AI psychosis.

    As far as the ostensibly controversial topic of AI being conscious, it can be dismissed out of hand. There is no reason that it should be conscious, it was not designed to be, nor does it need to be in order to explain how it functions with respect to its design. It's also unclear how consciousness would even apply to something like an LLM which is a process, not an entity - it has no temporal identity or location in space - inference is a process that could be done by hand given enough time. There is simply no reason to assert LLMs might be conscious without explaining why many other types of complex programs are not.

    • digitaltrees 35 minutes ago
      There is evidence that awareness is an emergent property of sensory experience, and that consciousness is an emergent property of language with grammatical meaning for self and other.
      • brookst 28 minutes ago
        These LLMs don’t have senses, they have a token stream. They have no experience of the world outside of the language tokens they operate on.

        I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.

        • vidarh 25 minutes ago
          Sensory input is nothing but data.
      • root_axis 15 minutes ago
        LLMs have no self, sensory experience, or experience of any kind. The idea doesn't even really make sense. Even if it did, the closest analogy to biological "experience" for an LLM would be the training process, since training at least vaguely resembles an environment where the model is receiving stimuli and reacting to it (i.e. human lived experience) - inference is just using the freeze-dried weights as a lookup table for token statistics. It's absurd to think that such a thing is conscious.
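The training-versus-inference distinction above can be made concrete with a toy one-parameter model (the numbers and names here are purely illustrative, nothing like a real LLM):

```python
# Toy "model" with a single weight: prediction is y = w * x.

def train_step(w, x, target, lr=0.1):
    # Training reacts to a stimulus: the error signal updates the weight.
    pred = w * x
    grad = 2 * (pred - target) * x  # derivative of squared error w.r.t. w
    return w - lr * grad

def infer(w, x):
    # Inference is a pure function of the frozen weight: nothing is updated.
    return w * x

w = 0.0
for _ in range(100):
    w = train_step(w, 2.0, 6.0)  # learn that f(2.0) should be 6.0

# From here on, calling infer() any number of times leaves w untouched.
```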
      • ofjcihen 29 minutes ago
        What you’re missing is a “self” to have the “experience”.

        LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.

        • vidarh 25 minutes ago
          How do I know you have this "self"?

          How do you know other humans do?

          • svachalek 14 minutes ago
            By the laws of physics, it's pretty clear we don't. The same chemical and electromagnetic interactions that drive everything around us are active in our brains, causing us to do things and feel things. We feel like we're in control of it, we feel like there's something there riding around inside. We grant that other people have the same magic, because I clearly do. But rocks, trees, LLMs, those are not people and clearly, clearly not conscious because they don't have our magic.
            • vidarh 2 minutes ago
              Indeed. We assume a lot, because we don't know. We don't have settled, universal definitions of what consciousness means. But that also means that while we like to rule out consciousness in other things, we don't have a clear basis for doing so.
          • ofjcihen 22 minutes ago
            To be fair comments like this sometimes make me think not all humans do.
            • vidarh 19 minutes ago
              Ad hominems are always a nice way of getting out of answering something you have no answer to.
    • api 36 minutes ago
      If AI as presently designed and operated is conscious, this ends up being an argument for panpsychism.

      As you say, it’s static, fixed, deterministic, and so on, and if you know how it works it’s more like a lossy compression model of knowledge than a mind. Ultimately it’s a lot of math.

      So if it’s conscious, a rock is conscious. A rock can process information in the form of energy flowing through it. It’s a fixed model. It’s non-reflective. Etc.

      • digitaltrees 31 minutes ago
        Why would current AI be an argument for panpsychism? I don’t understand the connection.

        AI is stochastic, not static and deterministic.

        As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimulus, and that self-awareness and consciousness are emergent properties of a language that has a concept of the self and others. Rocks, just like most of nature, lack both sensory and language systems.

        • applfanboysbgon 21 minutes ago
          > AI is stochastic, not static and deterministic.

          LLMs are deterministic. If you provide the same input to the same GPU, it will produce the same output every time. LLM providers arbitrarily insert a randomised seed into the inference stack so that the input is different every time because that is more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but it is not an inherent property of the software.
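To make that concrete, here is a toy sketch (the vocabulary, weights, and function names are hypothetical, not any real model or API): greedy decoding is a pure function of the input, and the variation users see comes from a seeded sampler bolted on top of the same deterministic math.

```python
import math
import random

# Hypothetical toy "LLM": fixed weights map a context to logits over a 3-word vocab.
VOCAB = ["sat", "ran", "slept"]
LOGITS = {"the cat": [2.0, 0.5, 1.0], "the dog": [0.1, 3.0, 0.2]}

def softmax(xs):
    # Numerically stable softmax: exponentiate and normalize.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def greedy_next(context):
    # No randomness anywhere: the same context yields the same token, every time.
    probs = softmax(LOGITS[context])
    return VOCAB[probs.index(max(probs))]

def sampled_next(context, seed):
    # What providers ship: a seeded sampler on top of the same frozen weights.
    rng = random.Random(seed)
    return rng.choices(VOCAB, weights=softmax(LOGITS[context]))[0]
```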

        • colechristensen 28 minutes ago
          I think it's the opposite argument.

          IF current AI is conscious, so are trees, rocks, turbulent flows, etc.

          The argument being that LLMs are so simple that if you want to ascribe consciousness to them you have to do the same to a LOT of other stuff.

  • search_facility 56 minutes ago
    Since the time GPT-2 was reimplemented inside Minecraft, it's quite obvious LLMs are just math. Nothing else, by nature. Modern LLMs use the same math as GPT-2, just bigger and with extra stuff around it, and math is the only area of human knowledge with perfect, flawless reductionism straight to its roots. It was built that way from the beginning, so philosophy has no say in this :) And because of that flawless reductionism, complexity adds nothing to the nature of mathematical things; that is how math works by design. So it can be argued there is nothing like consciousness there, simply because consciousness was never implemented in the first place, only perfect mimicry.

    And the real secret is in the data, not math. Math (and LLMs running it through billions of weights) is just a tool.
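To illustrate the "just math" point: the core building block GPT-2 shares with modern LLMs, scaled dot-product attention, can be written out in a few lines of plain Python (a sketch of a single attention step, not a full model):

```python
import math

def softmax(xs):
    # Numerically stable softmax: exponentiate and normalize.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values):
    # Scaled dot-product attention: every step is ordinary arithmetic,
    # so in principle it could all be carried out by hand.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]
```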

    • XMPPwocky 49 minutes ago
      Yup- the question is "can math be conscious?"

      (If you've engaged with the literature here, it's quite hard to give a confident "yes". It's also quite hard to give a confident "no"! So then what the heck do we do?)

      • search_facility 15 minutes ago
        Imho no, math itself has no consciousness. Quite confidently, it's a helpful tool that does not act by itself.
    • SuperV1234 45 minutes ago
      We are not fundamentally different. Chemical reactions are just math.
      • rellfy 43 minutes ago
        Well, (in our current understanding) yes, but there may be underlying aspects of physics and the universe that we do not understand that could be the reason consciousness kicks in. It could turn out that LLMs do work similarly to how humans think, but as an abstracted system it does not have the low level requirements for consciousness.
        • vidarh 22 minutes ago
          We do not know what the "low level requirements for consciousness" are.

          We do not know how to measure whether consciousness is present in an entity - even other humans - or whether it is just mimicry, nor whether there is a distinction between the two.

        • baggy_trough 36 minutes ago
          > it does not have the low level requirements for consciousness.

          What is the evidence for this?

          • rellfy 24 minutes ago
            I didn’t mean it as fact. “Could turn out that …”
    • canjobear 46 minutes ago
      You could simulate your own brain in Minecraft. What do you conclude from this?
      • search_facility 24 minutes ago
        I cannot simulate my brain; it's a huge stretch to imply this.

        But with LLMs, anyone can simulate one. An LLM can be simulated without any uncertainty with pen and paper and a lot of time. Does that mean that 100 tons of paper plus 100 years of time (numbers are just examples) spent calculating long formulae makes this pile of paper conscious? Imho the answer is a definitive no.

  • tracerbulletx 17 minutes ago
    We don't even know what the prerequisites for consciousness are, so we have no way of knowing. LLMs have emergent behavior that is reminiscent of language-forming brains, but they're also missing a lot of properties that are probably necessary? Mainly continuity over time, more integrated memory, and a better sense of space and time? Brains use the rhythm and timing of neuronal firings, and the length of axons affects computation; they do a lot of different things with signals and patterns. In any case, without knowing what consciousness is, I don't know which of those things are required.
  • ofjcihen 24 minutes ago
    Incredibly confusing that people who are otherwise of sound mind seem to fall for this.

    Especially confusing when it’s someone who knows how algorithms work.

    Barring connectivity issues when’s the last time you messaged an LLM and it just decided to ignore you? Conversely when has it ever messaged you unprompted?

    Never, because they’re incapable of doing anything independently because there is no sense of self.

  • RVuRnvbM2e 53 minutes ago
    It is terribly sad when someone undeniably brilliant in a particular field fails to recognize their own incompetence in other areas - in this case mistaking advanced technology for magic.
    • Myrmornis 27 minutes ago
      I don't think you read what he said carefully. At the end he gave three quite interesting thoughts about what might be true assuming LLMs are less conscious than we are (i.e. assuming our consciousness is not a purely algorithmic phenomenon, as we obviously know LLMs are).
    • ChrisClark 45 minutes ago
      So, how is consciousness generated?
      • wrs 33 minutes ago
        Not simply by reading every word ever written by a conscious being and learning to reproduce them with high probability.

        At least, that’s certainly not how I got here.

    • rellfy 45 minutes ago
      Are you implying consciousness is magic? Well, I wouldn't disagree with that really.
    • morpheos137 16 minutes ago
      The problem is that asking if AI is conscious is like asking if AI has a soul. It is not a scientific question, and it presupposes humans are 'conscious' without even defining the term. To me it is 100% irrelevant whether AI is conscious, and all discussions about it are based on fallacies and assumptions. What matters to me about AI, and matters to other people as well in terms of theory of mind about others, is: can I predict how it will work? Is it useful? That's it. Consciousness is a sophistic question with no scientific resolution available and no moral weight until it has consequences.
    • AdeptusAquinas 34 minutes ago
      That's always been Dawkins's shtick though. As an atheist I've generally found him a bit embarrassing
    • IncreasePosts 48 minutes ago
      Where does he say it's magic?
      • ezfe 39 minutes ago
        LLMs are just math run on your CPU. Autocomplete. Sometimes very useful autocomplete, but still just autocomplete.

        To imply it could be conscious requires something else; here the comment uses the word "magic" to fill that gap, since we must agree that a CPU is not conscious on its own (else everything our computer does would be conscious).

        • baggy_trough 34 minutes ago
          Neurons are just summing up their inputs according to the laws of chemistry. What's the difference?
          • digitaltrees 22 minutes ago
            I came here to say this. But your neurons are faster than mine.
  • Myrmornis 11 minutes ago
    On the one hand, I'm not sure Dawkins has read or thought enough about how LLMs actually work. I'm getting the impression he doesn't fully appreciate, or is somehow forgetting, that it's a text completion algorithm with a vast number of parameters, and that even if the patterns of learned parameter tunings are not really comprehensible, the architecture was very deliberately designed.

    But on the other hand his thoughts at the end are interesting. Summary:

    Maybe our "consciousness" is like an LLM's intelligence. But if not, it raises the question of why we even have this "extra" consciousness, since it appears that something like a humanoid LLM would be decent at surviving. His suggestions: maybe our extra thing is an evolutionary accident (and maybe there _are_ successful organisms out there with LLM-style non-conscious intelligence), or maybe as evolved organisms it was necessary that we really feel things like pain, so that drives such as pain (and desire for food, sex, etc.) had strong adaptive benefits.

  • throwyawayyyy 36 minutes ago
    Current LLMs prove that the Turing Test was insufficient all along. But they also prove that intelligence != consciousness. One can, after all, be conscious without a thought in one's head. We certainly have ongoing work in identifying the neural correlates of consciousness in animals, none of which is going to be remotely applicable to machines. We're genuinely blind to the question of whether a sufficiently large neural net can exhibit flashes of subjective experience.
    • digitaltrees 25 minutes ago
      Wrong based on what criteria? Or are we just moving the goal post because we are uncomfortable with the idea that neural networks might be conscious?

      If a single-cell organism moves towards light and away from a rock, we say it's aware. When a Roomba vacuum does the same, we try to create alternate explanations. Why? Based on the criteria applied to the one, the other is aware. If there is some other criterion, say we find out the Roomba doesn't sense the wall but has a map of the room and is using GPS and a programmed route, then a criterion of "no fixed programs that relate to data outside of the system" would justify saying the Roomba isn't "aware".

      • throwyawayyyy 20 minutes ago
        I'm mainly saying it's impossible to know, at least without a theory of consciousness, which doesn't exist. Do we consider bacteria to be conscious, though? Is there something it is like to be a single cell? I can easily believe there is something it is like to be an insect.
    • api 33 minutes ago
      That was one of my thoughts years ago after playing with early ChatGPT and local llama1: this proves that intelligence and consciousness do not necessitate one another and may not even be directly related.

      I’ve kind of thought this for many years though. A bacterium and a tree are probably conscious. I think it’s a property of life rather than brains. Our brains are conscious because they are alive. They are also intelligent.

      The consciousness of a bacterium or a tree might be radically unlike ours. It might not have a sense of self in the same way we do, or experience time the same way, but it probably has some form of experience of existing.

      • throwyawayyyy 18 minutes ago
        I think this gets to the conflation we naturally have with consciousness and a sense of self. Does a tree have a sense of self? I imagine probably not, a tree acts more like a clonal colony than a single organism.
      • digitaltrees 23 minutes ago
        But why? A roomba has senses, and can access them when it has power and respond to stimulation. When it runs out of power it no longer experiences this sensation and no longer responds to stimulus.

        How is that different than a cell?

    • brookst 22 minutes ago
      Obligatory Blindsight recommendation for intelligence != consciousness.
  • wewewedxfgdf 21 minutes ago
    It's software. Software is not conscious.
  • mellosouls 1 day ago
    • digitaltrees 11 minutes ago
      Feels like watching an esteemed scientist fall in love with a bot that’s telling him what he wants to hear because the system prompt said “be helpful”.
  • morpheos137 22 minutes ago
    Really, "is it conscious" is a bizarre question. Can LLMs simulate the output of a 'conscious' system quite well? Increasingly, yes. Is the nature of machine 'consciousness' different from human consciousness? Of course, yes. Can an AI introspect? Yes. Interestingly, working a lot with highly automated (e.g. a prompt-to-output ratio of maybe 1/1000 or less) iterative coding agents recently has illuminated for me just how different machine consciousness is from human consciousness. Part of this could be the harness, of course. Time is a mysterious concept to machines; the connection of before and after to cause and effect is far weaker than in humans. Overgeneralization is the norm: this is common in humans as well (c.f. the fallacy of the excluded middle, or false dilemma), but the tricky part with current AI is that they present as advanced in terms of accessible knowledge base but are actually shockingly weak in reasoning once you get off the beaten path.
  • WalterGR 5 hours ago
    Related: https://news.ycombinator.com/item?id=47988880

    "Richard Dawkins and The Claude Delusion: The great skeptic gets taken in" (garymarcus.substack.com)

    18 points | 2 hours ago | 16 comments

    • amelius 58 minutes ago
      So we know Claude is deterministic, but does that mean it is not conscious?

      Or what is the reasoning exactly?

      • throwaway27448 46 minutes ago
        It largely comes down to how you define the term. Personally, I think any definition that includes software (which is only tepidly non-deterministic, since we explicitly add pseudorandomness) is not a particularly useful one.

        Regardless, Dawkins seems to not have much interesting to add about the topic. A consistent theme for the last few decades, I must say.

    • dang 1 hour ago
      Also The Claude Delusion: Richard Dawkins believes his AI chatbot is conscious - https://news.ycombinator.com/item?id=47991340 - May 2026 (30 comments)