Archived link

Opinionated article by Alexander Hanff, a computer scientist and privacy technologist who helped develop Europe’s GDPR (General Data Protection Regulation) and ePrivacy rules.

We cannot allow Big Tech to continue to ignore our fundamental human rights. Had such an approach been taken 25 years ago in relation to privacy and data protection, arguably we would not have the situation we have today, where some platforms routinely ignore their legal obligations to the detriment of society.

Legislators did not understand the impact of weak laws or weak enforcement 25 years ago, but we have enough hindsight now to ensure we don’t make the same mistakes moving forward. The time to regulate unlawful AI training is now, and we must learn from past mistakes to ensure that we provide effective deterrents and consequences for such ubiquitous law-breaking in the future.

  • teawrecks@sopuli.xyz · 3 days ago

    Destroying it is both not an option, and an objectively regressive suggestion to even make.

    Destruction isn’t possible because even if you deleted every bit of information from every hard drive in the world, now that we know it’s possible, someone would recreate it all in a matter of months.

    Regressive because you’re literally suggesting that we destroy a new technology because we’re afraid of what it will do to the technology it replaces. Meanwhile, there’s a very decent chance that AI is our best shot at solving the energy and climate crises through advancing nuclear tech, as well as surviving the next pandemic via groundbreaking protein-folding tech.

    I realize AI tech makes people uncomfortable (for…so many reasons), but becoming old fashioned conservatives in response is not a solution.

    I would take it a step further than public domain, though. I would also require that profits from illegally trained AI be licensed from the public. If you’re going to use an AI to replace workers, then you should pay taxes to the people proportional to what you would have paid the workers it replaces.

    • Bronzebeard@lemm.ee · 3 days ago

      I never suggested destroying the technology that is “AI”. I’m not uncomfortable about AI, I’ve even considered pivoting my career in that direction.

      I suggested destroying the particular implementation that was trained on the illegitimate data. If someone can recreate it using legitimate data, GREAT. That’s what we want to happen. The tool isn’t the problem. It’s the method being used to train it.

      Please don’t make up random-ass narratives I never even hinted at, and then argue against them.

      • teawrecks@sopuli.xyz · 3 days ago

        I didn’t misinterpret what you were saying; everything I said applies to the specific case you lay out. If illegally trained networks were somehow entirely destroyed, someone would just make them again. That’s my point: there’s no way around that, there’s only holding people accountable when they do it. IMO that takes the form of restitution to the public proportional to profits.

        • Bronzebeard@lemm.ee · 3 days ago

          This is the dumb kind of “best to do nothing, because nothing is perfect” approach that ensures no disincentives are ever imposed, because someone somewhere else might also try to do the illegal thing that they’ll lose access to the moment they’re caught…

          • teawrecks@sopuli.xyz · 3 days ago

            What the? I’m literally saying what action to take, what is happening? Is there maybe a bug where you only see the first few characters of my post? Are you able to read these characters I’m typing? Testing testing testing. Let me know how far you get. Maybe there’s just too many words for you? Test test. Say “elephant” if you can read this.

    • Sas [she/her]@beehaw.org · 3 days ago

      Mate, LLMs are literally gobbling up energy as if they’re working at a power plant gloryhole. They’re furthering the climate crisis, not solving it. They’re also incapable of the logic needed to make something new, so they’re not gonna invent anything. AI in general has its uses, but LLMs are not the golden goose you should bet on. And profits from them are, afaik, nonexistent. They only come from investors thinking it’ll be profitable some day, but it’s far too energy-intensive a process to be profitable.

      • teawrecks@sopuli.xyz · 3 days ago

        I understand that you are familiar with the buzzword “LLM”, but let me introduce you to a different one: transformers.

        Virtually all modern successful AIs are based on transformers, LLMs included. I agree that LLMs currently amount to a Chinese-room-inspired parlor trick, but the money involved has no doubt advanced all transformer-based AI research, both directly (what works for LLMs may generalize) and indirectly (the market demand for LLMs in consumer products has created a demand for power and compute hardware).
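        As a rough illustration of the shared machinery (my own sketch, not anything from the article): every transformer, whether it powers an LLM or a protein-structure model, is built around the same scaled dot-product attention operation. A minimal version in Python with NumPy, with all shapes and values invented for the example:

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: the core operation shared by all
    # transformer models. Each query attends over all keys, and the
    # resulting softmax weights mix the value vectors.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 query vectors of dimension 8 (made up)
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```

        The point is just that improvements to this one operation (faster kernels, better hardware) carry over to every transformer-based model, which is why LLM money indirectly funds the rest of the field.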

        We have transformer-based AI to thank for our understanding of the COVID-19 spike protein, and for developing a safe and effective vaccine in a timely manner.

        The massive demand for energy has convinced Microsoft, Meta, and others to invest in their own modern nuclear power plants, representing a monumental step forward in sustainable energy generation that we have been trying to convince the US government to take for decades.

        Modern AI is being used to solve the hardest problems of nuclear fusion. If we can finally crack that nut, there’s no telling what’s possible.

        But specifically when it comes to LLMs, profitable or not, people obviously find them useful. People wouldn’t be using them in place of search engines, or doing all their homework with them, if they didn’t find them useful. My only argument is that any AI trained on public content without consent should be required to effectively buy a license from, or pay royalties to, the public. If McDonald’s is going to replace their front counters with AI trained on public content, then they should have to pay taxes proportional to how much use they get from that AI.
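        To make that proposal concrete, here’s a hypothetical back-of-the-envelope sketch; the job count, wage, and royalty rate are all invented for illustration, not taken from the thread or the article:

```python
# Hypothetical numbers for the "pay the public proportional to the
# workers replaced" idea; nothing here is real data.
jobs_replaced = 12          # assumed front-counter jobs replaced by the AI
annual_wage = 35_000        # assumed wage per job, USD
public_royalty_rate = 0.5   # assumed fraction of displaced wages owed back

displaced_wages = jobs_replaced * annual_wage
public_royalty = displaced_wages * public_royalty_rate
print(f"Annual royalty owed to the public: ${public_royalty:,.0f}")
# → Annual royalty owed to the public: $210,000
```

        Whatever the actual rate, the mechanism is just a tax keyed to the labor the AI displaces rather than to the company’s declared profits.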

        In the theoretical extreme, if someone trains an AI on the general public’s data, and is able to create an AI that somehow replaces every job on earth, then congrats, we now live in a post-work society, we just need to reach out and take it rather than letting one person capitalize infinitely.

        And at the end of the day, if you honestly believe the profits from AI are non-existent, then what are you worried about? All those companies putting all their eggs in the LLM basket are going to disappear overnight when the AI bubble finally pops, right?

        • Sas [she/her]@beehaw.org · 3 days ago

          There’s a reason why in my comment I talked about LLMs as bad while saying AI in general has its uses: this post is about LLMs.

          I know very well that specialized AI has a lot of uses in medical science and other fields, but that’s not really what got hit with all the hype, is it? The hype is that managers saw a language model give seemingly better answers to questions than John Rando from 2 blocks down the road, so they’re now looking to cut out all the already low-paid workers. And spoiler alert: we will not land in a society where the general public profits from not having work. It will be the same owners of capital profiting, as per usual.

          • teawrecks@sopuli.xyz · 3 days ago

            “we will not land in a society where the general public profits from not having work. It will be the same owners of capital profiting as per usual.”

            If we do nothing, sure. I’m suggesting, like the article, that we do something.

            The only sentiment I took issue with was the poster above who suggested that the solution would be to delete/destroy illegally trained networks. I’m just saying that’s neither practical nor progressive. AI is here to stay; we just need to create legislation that ensures it works for us, especially when it couldn’t have been built without us.