The EU has such a law, the General Data Protection Regulation (GDPR), and it works reasonably well. A pretty good place to start.
“might”
That word is carrying a mighty big load.
What’s one that doesn’t suck?
Historical background on current events: Heather Cox Richardson.
Yeah, lots of opinions and a few facts: one of the discussions.
True in a way. However, there is a rather large collection of speculation on the Internet that would be quite an undertaking to correct, and a large population of people and bots willing to speculate. Also, once a speculation is made, it takes on a life of its own. If it gets much more substantial, forget Skynet; we’re busy creating Specunet and its sidekick Confusionet, an insidious duo.
This looks to be more an endorsement of moderation principles and rules than a way of determining the truth of comments.
For the difficulties in determining what’s true, see the kerfuffle about Media Bias Fact Check.
There’s certainly a history of Unix and Unix-like forks, which is rather simple compared to the Linux distro forks (go right to the big pic).
Or as Dijkstra put it (roughly): asking whether a machine can think is as dumb as asking whether a submarine can swim.
Alan Turing put it similarly: the question is nonsense. However, if you define “machine” and “thinking”, and recast the question as whether machine thinking is distinguishable from human thinking, then you can answer affirmatively, at least in theory (rough paraphrase). Though current evidence suggests otherwise (e.g. AI learning from other AI drifts toward nonsense).
For more, see Computing Machinery and Intelligence, Turing’s original paper (which introduces the Imitation Game).
Oooooh, okay, I misread. Apologies.
Yet they (possibly) use AI to detect users’ AI-generated answers.
The “running joke” used by millions for serious and playful projects?
Let’s extend this thought experiment a little. Consider just forum posts; the numbers will be somewhat similar for articles and other writings, as well as photos and videos.
A bot creates how many more posts than a human? Being (ridiculously) conservative, we’ll say 10x more.
On day one: 10 humans are posting (for simplicity’s sake) 10 times a day each, totaling 100 posts. One bot is posting 100 a day. That’s a total of 200 human and bot posts, 50% of which are the bot’s.
In your (extended) example, at the end of a year: the 10 humans are still posting a total of 100 times a day, while the bots, now 10 of them, are posting a total of 1,000 times a day. Bots are at roughly 90% of posts, humans at 10%.
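A quick sketch of that arithmetic, as a minimal Python snippet (all numbers hypothetical, straight from the thought experiment):

```python
# Bot share of posts: `humans` people at `rate` posts/day,
# `bots` bots each posting `multiplier` times a human's rate.
def bot_share(humans, bots, rate=10, multiplier=10):
    human_posts = humans * rate
    bot_posts = bots * rate * multiplier
    return bot_posts / (human_posts + bot_posts)

print(bot_share(humans=10, bots=1))   # day one, 1 bot: 0.5 (50%)
print(bot_share(humans=10, bots=10))  # year's end, 10 bots: ~0.909 (about 90%)
```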
This statistic can lead you to think human participation on the Internet is difficult to find.
Returning to reality, consider how inhuman AI bots are: each is probably able to outpost a human by millions or billions of times, under millions of aliases. If you find search engines, articles, forums, reviews, and such are bonkers now, just wait a few years. Predicting general chaotic nonsense for the Internet is a rational conclusion, with very few islands of humanity. Unless bots are stopped.
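To put a (very hypothetical) number on that: rerun the same toy model with each bot outposting a human by a million times, and the human share all but vanishes:

```python
# Same toy model, but each bot outposts a human 1,000,000x
# (a guess, per the "millions or billions" above).
humans, rate, multiplier = 10, 10, 1_000_000
for bots in (1, 10, 100):
    bot_posts = bots * rate * multiplier
    human_share = (humans * rate) / (humans * rate + bot_posts)
    print(f"{bots} bot(s): human share = {human_share:.6%}")
```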
Right now though, bots are increasing.
Exactly. A more accurate headline would be “Americans are Falling Behind on their Income.”
Yes, though in some locales there are “work crews” (slave labor) that clear brush, road litter, and such for businesses, organizations, the state, and individuals.
Back in 2000, there was something like that for the kernel with SELinux (Security-Enhanced Linux), which continues to live on in various distributions’ kernels. Not a full O/S though, and not generally regarded as a PoS.
Yeah, there are two basic approaches to safety: evidence of harm and evidence of safety. Evidence of safety is the higher standard (e.g. broad long-term independent studies). Evidence of harm is a low standard (e.g. small studies, short-term studies). Guess which one is used for herbicides, pesticides, food, …
Yeah, that sounds reasonable in the long run (years), while the laptop plan is more immediately useful.