• lloram239@feddit.de · 1 year ago

    LLMs are fundamentally different from human consciousness.

    They are also fundamentally different from a toaster. But that’s completely irrelevant. Consciousness is something you get when you put intelligence into an agent that has to move around in and interact with an environment. A chatbot has no use for that; it’s just there to mush through lots of data and produce some output. It doesn’t have, and shouldn’t need, any worries about its own existence.

    It simply returns the next most-likely word in a response.

    So does the all-knowing oracle that predicts next week’s lotto numbers. Being autocomplete does not limit its power.
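    To make the “autocomplete” framing concrete, here’s a toy sketch in Python. It is nothing like a real transformer (a real LLM scores every token in a large vocabulary with a learned network); it only shows what “return the next most-likely word” means in its most stripped-down form, a bigram frequency table:

    ```python
    # Toy next-word predictor: count which word follows which in a corpus,
    # then always emit the most frequent follower. A real LLM replaces the
    # frequency table with a neural network, but the interface is the same.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def next_word(word):
        """Return the most likely next word seen after `word`."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(next_word("the"))  # -> "cat" ("cat" follows "the" twice, others once)
    ```

    The point stands either way: “predicts the next word” describes the interface, not a ceiling on what the predictor can encode.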

    LLMs are a dead end.

    There might be better or faster approaches, but it’s certainly not a dead end. It’s a building block. Add some long-term memory, bigger prompts, a bigger model, interaction with the web, etc., and you can build a much more powerful piece of software than what we have today, without even any real breakthrough on the AI side. GPT as it is today is already “good enough” for a scary number of things that used to be done exclusively by humans.
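    In case “building block” sounds hand-wavy, here is a rough sketch of the plumbing. Note that call_llm and web_search are stand-ins invented for illustration, not real APIs, and the “long-term memory” is just an append-only transcript:

    ```python
    # Hypothetical agent loop: an unchanged LLM plus memory and a web tool.
    # Everything here is plumbing around the model; no AI breakthrough needed.

    def call_llm(prompt: str) -> str:
        """Stand-in for whatever model you actually have access to."""
        return f"[model response to {len(prompt)} chars of prompt]"

    def web_search(query: str) -> str:
        """Stand-in for a real search API."""
        return f"[top results for: {query}]"

    memory: list[str] = []  # naive long-term memory: keep the whole transcript

    def agent_turn(user_input: str) -> str:
        context = "\n".join(memory[-50:])   # bigger prompts: replay recent history
        facts = web_search(user_input)      # interaction with the web
        prompt = f"{context}\nWEB: {facts}\nUSER: {user_input}\nASSISTANT:"
        reply = call_llm(prompt)
        memory.append(f"USER: {user_input}")
        memory.append(f"ASSISTANT: {reply}")
        return reply

    print(agent_turn("What changed in the news today?"))
    ```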

    • Veraticus@lib.lgbt (OP) · 1 year ago

      A chatbot has no use for that; it’s just there to mush through lots of data and produce some output. It doesn’t have, and shouldn’t need, any worries about its own existence.

      It literally can’t worry about its own existence; it can’t worry about anything because it has no thoughts or feelings. Adding computational power will not miraculously change that.

      Add some long term memory, bigger prompts, bigger model, interaction with the Web, etc. and you can build a much more powerful bit of software than what we have today, without even any real breakthrough on the AI side.

      I agree this would be a very useful chatbot. But it is still not a toaster. Nor would it be conscious.

      • Communist@beehaw.org · 1 year ago (edited)

        It literally can’t worry about its own existence; it can’t worry about anything because it has no thoughts or feelings. Adding computational power will not miraculously change that.

        Who cares? This has no real-world practical use case. Its thoughts are what it says; it doesn’t have a hidden layer of thoughts, which is quite frankly a feature to me. Whether it’s conscious or not has nothing to do with its level of functionality.

      • emptiestplace@lemmy.ml · 1 year ago

        You seem unfamiliar with the concept of consciousness as an emergent property.

        What if we dramatically reduce the cost of training? What if we add real-time feedback mechanisms as part of a perpetual model-refinement process (sketched below)?

        As far as I’m aware, we don’t know.

        How are you so confident that your feelings are not simply a consequence of complexity?
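        To make “perpetual refinement” concrete, here is a deliberately tiny sketch (a one-parameter toy under assumed setup, nothing like actual LLM fine-tuning): the model is nudged by each feedback signal as it arrives, instead of being trained once and frozen.

        ```python
        # Toy online learning: one SGD step per feedback event, forever.
        weight = 0.0          # the entire "model" is one parameter
        learning_rate = 0.1

        def predict(x: float) -> float:
            return weight * x

        def feedback_update(x: float, target: float) -> None:
            """One online SGD step on squared error per feedback event."""
            global weight
            error = predict(x) - target
            weight -= learning_rate * error * x  # d(0.5*error^2)/d(weight)

        # Feedback pairs (input, corrected output) arriving in real time:
        for x, target in [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]:
            feedback_update(x, target)
            print(f"weight after feedback: {weight:.3f}")
        ```

        Whether scaling a loop like that up produces anything like felt experience is exactly the open question.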

        • Veraticus@lib.lgbt (OP) · 1 year ago

          Even if they are a result of complexity, that still doesn’t change the fact that LLMs will never be complex in that manner.

          Again, LLMs have no self-awareness. They are not designed to have self-awareness. They do not have feelings or emotions or thoughts; they cannot have those things because all they do is generate words in response to queries. Unless their design fundamentally changes, they are incompatible with consciousness. They are, as I’ve said before, complicated autosuggestion algorithms.

          Suggesting that throwing enough hardware at them will change their design is absurd. It’s like saying if you throw enough hardware at a calculator, it will develop sentience. But a calculator will not do that because all it’s programmed to do is add numbers together. There’s no hidden ability to think or feel lurking in its design. So too LLMs.