I’m curious why he still carries all those things after he’s done with them.
IMO the correct use of AI in search is keyword correction and suggestion, like a beefed-up version of “did you mean”.
Which is why I specified “in the broadcast domain”. Sure, you can use it with VLANs, but that’s beyond the scenario I’m describing.
Didn’t they already do such a thing before?
It doesn’t matter. You can swap the port configuration around and the bottleneck is still there. Traffic within the broadcast domain (i.e. the subnet) is handled by the switch alone.
There is WiFi onboard, so it can have some actual benefit depending on the design and how users access resources, but how likely are you to saturate that 1/2.5G link? Even streaming 4K movies from Plex to an iPhone won’t do that.
That’s the only use I can think of, but I don’t know if OpenWrt supports VLANs, since I’ve never used it directly.
What’s the point of having 1G on WAN and 2.5G on LAN? Internet-bound traffic has to pass through the WAN port anyway, so the WAN port is the bottleneck.
Edit: Seems like I switched up the port speeds, but my point still holds, as the bottleneck still exists.
I highly doubt they really live-stream the video or pictures you took. I’d much rather believe OCR is done locally and the text is sent to a server for translation.
Wouldn’t DNSSEC make the whole poisoning issue moot?
It is a straight downgrade. The day you forget to bring the dongle, you’re stranded.
I don’t have a single guide for you, but I can lay out a road map.
Once you have those foundations ready, you can go on and try to build a web scraper. I advise against using Scrapy. Not because it’s bad, but because it’s too overwhelming and abstracted for a beginner. I’d instead advise you to use requests for HTTP and BeautifulSoup4 for HTML parsing. You’ll build a more solid foundation and can transition to Scrapy later when you need its advanced features.
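To give a feel for the requests + BeautifulSoup4 route, here’s a minimal sketch. The function names, the sample HTML, and the choice of scraping `<h2>` headings are just illustrative assumptions, not tied to any specific site:

```python
import requests
from bs4 import BeautifulSoup

def parse_titles(html):
    """Extract the text of every <h2> heading from an HTML document."""
    soup = BeautifulSoup(html, "html.parser")
    return [h2.get_text(strip=True) for h2 in soup.find_all("h2")]

def fetch_titles(url):
    """Download a page and return its <h2> headings."""
    # A custom User-Agent and a timeout are good habits for any scraper.
    resp = requests.get(url, headers={"User-Agent": "learning-scraper/0.1"}, timeout=10)
    resp.raise_for_status()  # bail out on 4xx/5xx instead of parsing an error page
    return parse_titles(resp.text)

# Parsing works on any HTML string, so you can practice without hitting the network:
sample = "<html><body><h2>First post</h2><h2>Second post</h2></body></html>"
print(parse_titles(sample))  # → ['First post', 'Second post']
```

Keeping the fetching and the parsing in separate functions like this also makes it easy to test your parser on saved HTML files before pointing it at a live site.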
When you get stuck, don’t be afraid to pause your attempt and read the tutorials again. Head to the Python Community on Discord for interactive help. We welcome noobs, as we were all noobs once. Just don’t ever mention scraping there, as they can’t help if they suspect you’re trying to do something inappropriate, malicious, or illegal. They’re notoriously against yt-dlp, which frustrates me a bit. Phrase your questions nicely and in a generic way. I’ll be there occasionally offering help.
The simplification you’re looking for doesn’t exist. It seems you don’t have a programming background. If you really need to scrape something, you’ll need to learn a programming language, HTTP, HTML, and maybe JavaScript. AFAIK, there is no easy point-and-click scraper-building tool. You’ll need to invest time and learn. Don’t worry, you should be able to get there in 2–3 months if you really put the time in.
It is an OK tool to get things started.
Oops. Missed that part.
I use BTRFS for snapshots and automatic compression. Maybe that can be done with RAID on LVM? AFAIK, BTRFS redundancy is basically the same as traditional RAID, similar to using mdadm. Still, you’d want a backup strategy instead of relying on disk redundancy. I learned that the hard way.
I would just skip RAID, add all the disks to a single BTRFS filesystem, and use the built-in profiles for (meta)data redundancy.
Caching I don’t know much about, though.
Is this finally the dusk of SO? It helps a lot, but also sucks a lot.
Yay, more subscriptions.
👌 Adobe, I am sticking with my Affinity Photo 1.
It is their job to find evidence, not my responsibility to provide it.
I would just get an AMD (7745HX?) mini PC with adequate RAM and call it a day. It should run almost anything you throw at it in a light setup, with minimal power usage.