The search term to find it is autologon, but as everyone has mentioned, this is a last resort and JF should just be run as a service.
I am a Meat-Popsicle
You really need to get your backups in order.
kbin obviously!
Minimum open services is indeed best practice, but be careful about claiming that the attack surface is limited to open inbound ports.
Even enterprise gear gets hit every now and then with a vulnerability that can bypass closed-port blocking from the outside. Cisco had some nasty ones where you could DDoS a firewall to the point that the rules engine would let things through. It’s rare, but things like that do happen.
You can also have vulnerabilities in clients/services inside your network. Somebody gets someone in your family to click on something, or someone slips a mickey into one of your container updates, and all of a sudden you have a RAT on the inside. Hell, even baby monitors are a liability these days.
I wish all the home hardware was better at zero trust. Keeping crap on isolated networks and setting up firewalls between your garden and your clients can be either prudent or overkill, depending on your situation. Personally, I think it’s best for stuff that touches the web to be allowed only a minimal amount of network access to internal devices. Keep that Plex server isolated from your document store if you can.
If by not linked you mean wholly owned by…
https://www.mozilla.org/en-US/about/governance/organizations/
The Mozilla Corporation, a wholly owned subsidiary of the Mozilla Foundation, works with the community to develop software that advances Mozilla’s principles. This includes the Firefox browser, which is well recognized as a market leader in security, privacy and language localization. These features make the Internet safer and more accessible.
Billy West is 72, and Katey Sagal is 70. They’re trying to voice 20-year-olds. There is a limited amount of content we’re going to get out of them. I’m also sure the budget wasn’t what it once was for the VAs or the writers.
I’m happy that we’re getting more content. Even mediocre content has gems in it. I do wish they’d stay away from current issues, though, since it takes them so long to produce episodes at this burn rate that the content is a bit dated by the time we’re watching it.
I suspect their financial position has changed. Perhaps Google being found to be a monopoly has made them decide not to fund Mozilla’s efforts as substantially.
Ashley Boyd led the advocacy team; here’s the kind of stuff they were doing:
https://blog.mozilla.org/en/mozilla/mozilla-welcomes-ashley-boyd-vp-of-advocacy/
In fall of 2016, Mozilla fought for common-sense copyright reform in the EU, creating public education media that engaged over one million citizens and sending hundreds of rebellious selfies to EU Parliament. Earlier in 2016, Mozilla launched a public education campaign around encryption and emerged as a staunch ally of Apple in the company’s clash with the FBI. Mozilla has also fought for mass surveillance reform, net neutrality and data retention reform.
https://techcrunch.com/2024/11/05/mozilla-foundation-lays-off-30-staff-drops-advocacy-division/
“The Mozilla Foundation is reorganizing teams to increase agility and impact as we accelerate our work to ensure a more open and equitable technical future for us all. That unfortunately means ending some of the work we have historically pursued and eliminating associated roles to bring more focus going forward,” read the statement shared with TechCrunch.
Reading between the lines, I’d keep an eye on them collecting your data and consider one of the privacy-focused forks.
Lrrreconsilable ndndifferences
Fry wrote the comic saving Leela near the intro, foreshadowing him saving Leela later in the episode.
Yeah, a company got toasted because one of their admins was running Plex and had Tautulli installed and open to the outside, figuring it was read-only and safe.
A zero-day bug in Tautulli exposed his Plex token. The attackers then used another vulnerability in Plex to get remote code execution. He was self-hosting a GitHub copy of all the company’s code.
A Home Assistant web app would be fine.
We’re a long way from trusting it to do something critical without intervention.
AI would be good at looking at an X-ray after a doctor and pointing out anomalies. But it would be bad to have it tell the doctor that everything looks fine.
Yeah, you still need the CPU to move all the data to the video card and to and from memory. The stuff I play doesn’t mind 30 frames per second, and I’m not really much of a stickler for high settings. But even the shitty Unity games are starting to struggle.
China certainly could be lying.
Half of the US states are purposely bankrupting their education systems to make sure that the 1 percenters are the only ones with any advantage. Even in the states that aren’t actively trying to stamp out education, the poor and middle class can’t afford a respectable education.
China is sitting on a pile of natural resources and doesn’t have any problems with underpaying and working people to death.
They’re set up to do a lot with very little, they have a lot of people and resources and they’re not afraid to educate enough people to get the job done.
It’s not just space, they’re getting places with electric cars that we can’t touch.
It’ll be interesting to see where all this ends up.
He wants in on the new authoritarian regime. Slowing down or stopping electric cars is on their to-do list.
I keep a root folder. On Windows it’s in c:\something; on Linux it’s in /something.
Under there I’ve got projects organized by language. This helps me organize nix shells and venvs.
Syncthing keeps the code bases synced between multiple computers.
I don’t have to separate work from home because they don’t live in the same realm.
Only home stuff in the syncthing.
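A minimal sketch of that per-language layout (folder and language names here are hypothetical; the actual root path is whatever you choose per OS):

```python
from pathlib import Path

# Hypothetical root: on Windows this might be C:\code, on Linux ~/code.
root = Path("code")

# One subfolder per language; each project under it gets its own
# nix shell or venv, which keeps env tooling from cross-contaminating.
for lang in ("python", "rust", "go"):
    (root / lang).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in root.iterdir()))
```

Pointing Syncthing at the root folder then syncs every project in one shared folder, while per-project env directories (`.venv` and the like) can be excluded with ignore patterns.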
It tells me which document in the collection it used, but it doesn’t give me much in the way of context or the exact location in the document. It will usually give me some of the wording, and I can go to the document and search for that wording.
I’m just one person searching a handful of documents, so the sample size is pretty small for repeatability. So far, if it says it’s in there, it’s in there. It definitely misses things, though; I’m still early in the process. I need to try some different models and perhaps clean up the data a little bit for some of the stuff.
Using the documentation as source data, it doesn’t seem to hallucinate or insist things are wrong; it’s more likely to say “I don’t see any information about that” even when the data is clearly in the data set somewhere.
YW on the responses; I’m having fun with it, even if it’s taking forever to dial it in and make it truly useful.
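The “tells you which document, gives you some wording, or says it can’t find anything” behavior can be sketched with a toy keyword-overlap retriever (the real setup uses embeddings; the documents and question here are made up):

```python
import re
from collections import Counter

# Tiny stand-in corpus; real use would be the actual HOA documents.
docs = {
    "bylaws.txt": "Architectural requests must be submitted in writing for review.",
    "budget.txt": "The annual budget allocates funds for landscaping and repairs.",
}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def best_match(question, docs):
    """Return (doc_name, snippet) for the document sharing the most words
    with the question, or None when nothing overlaps at all."""
    q = set(tokenize(question))
    scored = []
    for name, text in docs.items():
        overlap = q & set(tokenize(text))
        if overlap:
            scored.append((len(overlap), name, text))
    if not scored:
        return None  # the "I don't see any information about that" case
    _, name, text = max(scored)
    return name, text[:60]  # document name plus some wording to search for

print(best_match("How do I submit an architectural request?", docs))
```

Even this crude version shows why misses happen: “submit” doesn’t match “submitted” here, so anything short of exact wording depends on the model generalizing, which is where model choice and data cleanup come in.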
Trident VGA?
I got a 3DFX voodoo as soon as they came out. GL quake was mind-blowing.
I bought a Riva TNT
Then a GeForce 2
Then a Radeon 9000
Then for a bunch of years I just moved into laptop after laptop with discrete GPUs.
Now I still have a 1080 and a 2070 doing a little bit of light AI work and video transcoding for me. But I’m still relying on crappy laptop GPUs for all my gaming. They’re good enough.
I have two projects for it right now. The first is shoving my labyrinth of HOA documents into it so I can answer quick questions about the HOA docs or at least find the right answer more effectively.
The second is for work: I shoved in a couple months of Slack, some Google Docs, and some PDFs, all about our production product. Next I’m going to start shoving some of our GitHub in there. It would be kind of nice to have something I could ask “where is the shorting algorithm and how does it work” and have it give me back where the source code is and any documentation related to it.
The HOA docs I could feed into GPT, but I’m still a little apprehensive about handing over all of our production code to a public AI.
I’ve got it running on a 2070 Super, and I’ve got another instance running on a fairly new Arc. It’s not fast, but it’s also not miserable. I’m running the medium-sized models; I only have so much VRAM to work with. It’s kind of like trying to read the output off a dot matrix printer.
The natural-language aspect is better than trying to shove it into a conventional search engine; say I don’t know what a particular function is called, or which company my HOA uses to review architectural requests. It’s especially useful for the work stuff, where there are so many different types of documents lying around. I still need to try some different models, though; my current model is a little dumb about context. I’m also having a little trouble with technical documentation that doesn’t have a lot of English fluff. It’s like I need it to digest a dictionary to go along with the documents.
Searx is fancy about it, though: it queries everybody and gives you the results that came back from multiple places. This effectively eliminates ads, AI, and, unless they all missed it, spam.
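The merging idea can be sketched like this, assuming each engine just returns a ranked list of URLs (Searx's actual scoring is more involved; the engine names and URLs here are placeholders):

```python
from collections import defaultdict

def merge(engine_results):
    """engine_results: dict of engine name -> ranked list of URLs.
    URLs returned by more engines float to the top; a URL only one
    engine returned (a possible ad or spam hit) sinks to the bottom."""
    votes = defaultdict(int)
    best_rank = {}
    for urls in engine_results.values():
        for rank, url in enumerate(urls):
            votes[url] += 1
            best_rank[url] = min(best_rank.get(url, rank), rank)
    # Agreement across engines beats a single engine's high ranking.
    return sorted(votes, key=lambda u: (-votes[u], best_rank[u]))

results = merge({
    "engine_a": ["https://good.example", "https://ad.example"],
    "engine_b": ["https://good.example", "https://other.example"],
})
print(results)
```

The cross-engine vote is what filters the junk: an ad or spam page injected into one engine’s results has no support from the others, so it can never outrank a result everyone agrees on.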
Context for the masses…
“Strabismus (crossed eyes) is a common eye condition among children. It is when the eyes are not lined up properly and they point in different directions (misaligned). One eye may look straight ahead while the other eye turns in, out, up, or down.”