- cross-posted to:
- technews
cross-posted from: https://vlemmy.net/post/289714
Archived version: https://archive.ph/qjsq7
Archived version: https://web.archive.org/web/20230626125012/https://www.wired.com/story/us-ai-regulation-congress-briefings/
The problem I see is that a lot of the LLMs are already open source, so legislation might try to limit their usage, but that should have been done before they were trained. Right now anyone with a simple laptop can download a model and run it locally, fully offline (see the sketch below). The same goes for other AI technologies. To be honest, the time to regulate was when those crappy deepfakes started showing up a few years back; now it's definitely too late.
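To illustrate the point, here's a minimal sketch using the Hugging Face transformers library; the model name (gpt2) and the prompt are just placeholder choices for a small open model, and once the weights are cached nothing here needs a network connection:

```python
# Minimal sketch: run an open LLM locally with Hugging Face transformers.
# Assumes `pip install transformers torch` and that the model weights
# (gpt2 here, picked only because it's small) were downloaded once;
# after that, everything runs fully offline on an ordinary laptop.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Generation happens entirely on the local machine; no API, no server.
output = generator(
    "Regulating open-source models is hard because",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(output[0]["generated_text"])
```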
And yet we still haven't done anything to slow it down, let alone stop it, and that has been an issue for decades.
LLMs are very rudimentary forms of AI; they're not even the kind of AI that I think we should be worried about.
I wouldn't even call them an "intelligence" at all; they're more like an aggregated set of data.
I'm not an ML or AI specialist or anything, but from what I've seen of the technical details, with the education and experience I do have in software engineering, I don't believe we're anywhere close to actual artificial intelligence.