• emptiestplace@lemmy.ml · 1 year ago

    You seem unfamiliar with the concept of consciousness as an emergent property.

    What if we dramatically reduce the cost of training? What if we add real-time feedback mechanisms as part of a perpetual model-refinement process?

    As far as I’m aware, we don’t know.

    How are you so confident that your feelings are not simply a consequence of complexity?

    • Veraticus@lib.lgbt (OP) · 1 year ago

      Even if they are a result of complexity, that still doesn’t change the fact that LLMs will never be complex in that manner.

      Again, LLMs have no self-awareness. They are not designed to have self-awareness. They do not have feelings or emotions or thoughts; they cannot have those things because all they do is generate words in response to queries. Unless their design fundamentally changes, they are incompatible with consciousness. They are, as I’ve said before, complicated autosuggestion algorithms.
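      To make concrete what I mean by “autosuggestion”: the generation loop just picks a likely next word given the words so far, appends it, and repeats. Here’s a toy sketch of that loop (purely illustrative, not any real model’s code; the probability table is made up, and a real LLM replaces it with a huge learned network):

      ```python
      # Toy sketch of an "autosuggestion" loop: repeatedly pick the most likely
      # next word given the preceding context. The hand-made probability table
      # below stands in for the learned model.
      toy_next_word = {
          ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
          ("cat", "sat"): {"on": 0.9, "down": 0.1},
          ("sat", "on"): {"the": 0.8, "a": 0.2},
          ("on", "the"): {"mat": 0.7, "sofa": 0.3},
      }

      def generate(prompt: str, max_steps: int = 4) -> str:
          words = prompt.split()
          for _ in range(max_steps):
              context = tuple(words[-2:])                  # last two words as context
              choices = toy_next_word.get(context)
              if not choices:
                  break
              words.append(max(choices, key=choices.get))  # greedy: most likely word
          return " ".join(words)

      print(generate("the cat"))  # -> "the cat sat on the mat"
      ```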

      Suggesting that throwing enough hardware at them will change their design is absurd. It’s like saying that if you throw enough hardware at a calculator, it will develop sentience. But a calculator will never do that, because all it’s programmed to do is add numbers together; there’s no hidden ability to think or feel lurking in its design. So too with LLMs.