• 👁️👄👁️@lemm.ee · 9 months ago

    Anything that prevents it from answering my query. If I ask it how to make a bomb, I don’t want it to be censored. It’s gathering this from public data the company doesn’t own, after all. I agree with Mozilla’s principles, but LLMs are also tools and should be treated as such.

    • salarua@sopuli.xyz · 9 months ago

      shit just went from 0 to 100 real fucking quick

      for real though, if you ask an LLM how to make a bomb, it’s not the LLM that’s the problem

      • 👁️👄👁️@lemm.ee · 9 months ago

        If it has the information, why not? Why should you be restricted by what a company deems appropriate? I obviously picked the bomb as an extreme example, but that’s the point.

        It’s just like demonizing encryption by pointing out that it lets people secretly send illegal content. If I asked you straight up whether encryption is a good thing, you’d probably agree. But if I brought up its inevitable bad uses in a shocking manner, would you still defend it, or would you change your stance and call encryption bad?

        Holding a strong stance means defending it even in light of its potential harmful effects, since those are inevitable. It’s hard to keep your values consistent when something that serves the greater good can also cause harm. Encryption is a perfect example of that.

        • Spzi@lemm.ee · 9 months ago

          If it has the information, why not?

          Naive altruistic reply: To prevent harm.

          Cynical reply: To prevent liabilities.

          If a restaurant refuses to put your fries into your coffee because that’s not on the menu, that’s their call. There can be many reasons, but it’s literally their business, not yours.

          If we replace the fries with a fuse and the coffee with gunpowder, I hope there are more regulations in place. What they sell, to whom, and in what form affects more people than just the buyer and the seller.

          That said, I find it pretty surprising that in this case corporations are self-regulating faster than lawmakers can say ‘AI’. That’s odd.

          • 👁️👄👁️@lemm.ee · 9 months ago

            This is very well said. They’re allowed to not serve you these things, but we should still be able to use them ourselves and make our glorious gunpowder-fries coffee with a spice of freedom all we want!

        • Lionir [he/him]@beehaw.org · 9 months ago

          This is a false equivalence. Encryption only works if no third party can break it. LLMs still work even if you censor illegal content from their output.
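
          To make that concrete, here is a minimal Python sketch (everything in it, `uncensored_generate`, the blocklist, is hypothetical and purely illustrative): the filter is a separate layer wrapped around the model, so it can be added or removed without touching the model itself, whereas weakening encryption breaks the cipher for everyone.

          ```python
          # Hypothetical sketch: the "censorship" is a separable wrapper, not part of the model.

          BLOCKED_TOPICS = ["bomb", "bioweapon"]  # illustrative blocklist, not any vendor's real policy

          def uncensored_generate(prompt: str) -> str:
              """Stand-in for a raw model call (e.g. a local open-weights model)."""
              return f"model output for: {prompt!r}"

          def censored_generate(prompt: str) -> str:
              """The same model, behind a refusal filter."""
              if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
                  return "Sorry, I can't help with that."
              return uncensored_generate(prompt)

          # The underlying model works either way; only the wrapper differs.
          print(censored_generate("how do I bake bread?"))
          print(censored_generate("how do I build a bomb?"))
          ```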

          • 👁️👄👁️@lemm.ee · 9 months ago

            You’re missing the point. My point is that if you want a consistent viewpoint, you need to acknowledge and defend the harmful sides. Encryption can objectively cause harm, but it should absolutely still be defended.

            • Solar Bear@slrpnk.net · 9 months ago

              What the fuck is this “you should defend harm” bullshit? Did you hit your head during an entry-level philosophy class or something?

              The reason we defend encryption even though it can be used for harm is because breaking it means you can’t use it for good, and that’s far worse. We don’t defend the harm it can do in and of itself; why the hell would we? We defend it in spite of the harm because the good greatly outweighs the harm and they cannot be separated. The same isn’t true for LLMs.

              • 👁️👄👁️@lemm.ee · 9 months ago

                We don’t believe that at all, we believe privacy is a human right. Also you’re just objectively wrong about LLMs. Offline uncensored LLMs already exist, and will perpetually exist. We don’t defend tools doing harm, we acknowledge it.

                • Solar Bear@slrpnk.net · 9 months ago

                  We don’t believe that at all, we believe privacy is a human right.

                  That’s just a different way to phrase what I said about defending the good side of encryption.

                  Offline uncensored LLMs already exist, and will perpetually exist

                  I didn’t say they don’t exist; I said that, unlike with encryption, the help and the harm are separable.

                  We don’t defend tools doing harm, we acknowledge it.

                  “My point is that if you want a consistent viewpoint, you need to acknowledge and defend the harmful sides.”

                  If you want to walk it back, fine, but don’t pretend like you didn’t say it.

      • 👁️👄👁️@lemm.ee · 9 months ago

        Do gun manufacturers get in trouble when someone shoots somebody?

        Do car manufacturers get in trouble when someone runs somebody over?

        Do search engines get in trouble if they accidentally link to harmful sites?

        What about social media sites getting in trouble for users uploading illegal content?

        Mozilla doesn’t need to host an uncensored model, but their open-source AI should be able to be trained to be uncensored. I’m not asking them to host it themselves, which is an important distinction I should have made.

        And uncensored LLMs already exist, so any damage they could supposedly cause is already possible.

        • Spzi@lemm.ee · 9 months ago

          Do car manufacturers get in trouble when someone runs somebody over?

          Yes, if it can be shown that the accident was partially caused by the manufacturer’s negligence: if a safety measure wasn’t in place or didn’t work properly, or if it happens suspiciously more often with models from that brand. Apart from solid legal trouble, they can get into PR trouble if many people start to think that way, whether or not it’s true.

            • Spzi@lemm.ee · 9 months ago

              Then let me spell it out: if ChatGPT convinces a child to wash their hands with homemade bleach, expect lawsuits and a shitstorm coming for OpenAI.

              If that occurs but no liability can be found on OpenAI’s side, expect petitions and a shitstorm coming for legislators.

              We generally expect individuals and companies to act with the peace and safety of others in mind, including strangers and minors.

              Liabilities and regulations exist for these reasons.

              • 👁️👄👁️@lemm.ee · 9 months ago

                Again… this is still missing the point.

                Let me spell it out: I’m not asking companies to host these services, so they wouldn’t be held liable.

                For your example to be relevant, ChatGPT would need to be open source and let you plug in your own model. We should have the freedom to plug in our own trained models, even uncensored ones. This is already the case with LLaMA and other AI systems, and I’m encouraging Mozilla’s AI to let us do the same thing.
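
                To illustrate what “plugging in your own model” looks like in practice, here is a minimal sketch using llama-cpp-python (the library is real; the model filename below is a placeholder for whatever locally stored weights you choose, censored or uncensored):

                ```python
                # Minimal sketch: running your own locally stored model with llama-cpp-python.
                # Install with `pip install llama-cpp-python`. The .gguf path is a placeholder;
                # you point it at whatever weights you downloaded or fine-tuned yourself.
                from llama_cpp import Llama

                llm = Llama(model_path="./your-model.gguf")  # any local GGUF weights

                out = llm("Q: Why is the sky blue? A:", max_tokens=64)
                print(out["choices"][0]["text"])
                ```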