• BrikoX@vlemmy.netOP
    1 year ago

    The problem I see is that a lot of the LLM models are already open source, so legislation might try to limit their usage, but that should have been done before they were trained. Right now anyone with a simple laptop can download a model and run it locally, fully offline. The same goes for other AI technologies. To be honest, the time to regulate was when those crappy deepfakes started showing up a few years back; now it’s definitely too late.

    It’s just like how, after the industrial revolution, we are only now coming to a shared conclusion that climate change, as a result of industrial pollution, has begun to affect our lives.

    And yet we still haven’t done anything to slow it down, let alone stop it, even though it has been an issue for decades.

    • Dankenstein@beehaw.org
      1 year ago

      LLMs are very rudimentary forms of AI; they’re not even the kind of AI that I think we should be worried about.

      I wouldn’t even call them an “intelligence” at all, more like an aggregated set of data.

      I’m not an ML or AI specialist or anything, but what I’ve seen of the technical details, with my “some” education and experience in software engineering, did not lead me to believe we are anywhere close to actual artificial intelligence.