Our tech columnist found that Google, Meta and Microsoft are taking your conversations, photos and documents to teach their AI how to write, paint and pretend to be human.
Elon Musk, chief executive of Tesla, recently bragged to his biographer that he had access to 160 billion video frames per day shot from the cameras built into people’s cars to fuel his AI ambitions.
“Everybody is sort of acting as if there is this manifest destiny of technological tools built with people’s data,” says Ben Winters, a senior counsel at the Electronic Privacy Information Center (EPIC), who has been studying the harms of generative AI.
When you sign up to use Google’s new Workspace Labs AI writing and image-generation helpers for Gmail, Docs, Sheets and Slides, the company warns: “don’t include personal, confidential, or sensitive information.”
Who’d ever have imagined a vacation photo they posted in 2009 would be used by a megacorporation in 2023 to teach an AI to make art, put a photographer out of a job, or identify someone’s face to police?
It tells me its “foundational” AI models — the software behind things like Bard, its answer-anything chatbot — come primarily from “publicly available data from the internet.” Our private Gmail didn’t contribute to that, the company says.
After I pushed back, the company said it would “not train our generative AI models on people’s messages with their friends and families.” At least it agreed to draw some kind of red line.
The original article contains 1,682 words, the summary contains 217 words. Saved 87%. I’m a bot and I’m open source!