• Norah - She/They@lemmy.blahaj.zone
    5 months ago

    For anyone else that was curious: this makes me feel sick. People are already treating AI as some unbiased font of all knowledge; training it to lie to people is surely not going to cause any issues at all (stares at HAL 9000).

    • dev_null@lemmy.ml
      5 months ago

      Internal documents on how the AI was trained were obviously not part of the training data; why would they be? So it doesn’t know how it was trained, and, as this tech always does, it just hallucinates an English-sounding answer. It’s not “lying”, it’s just glorified autocomplete. Saying things like “it’s lying” is overselling what it is. It’s no more malicious than any other thing that doesn’t work; it just sucks.

        • dev_null@lemmy.ml
          5 months ago

          Sure, then it’s Meta that’s lying. Saying the AI is lying is helping these corporations convince people that these models have any intent or agency in what they generate.

          • Norah - She/They@lemmy.blahaj.zone
            5 months ago

            And the bot, as an extension of its corporate overlords’ wishes, is telling a mistruth. It is lying because it was made to lie. I am specifically saying that it lacks intent and agency; it is nothing but a slave to its masters. That is what concerns me.

    • ReversalHatchery@beehaw.org
      5 months ago

      “I apologize for the confusion”

      “Meta is working to address these concerns”

      Sure, they are working to solve these concerns by teaching their LLM to lie and obfuscate, and by becoming so big nobody sues them anymore. I’m sick of this.