what kind of thumbnails are you seeing?
I see a good lookin guy pointing at a black box.
hey yeah, no stress!
just lemme know if you’d want someone to brainstorm with.
lemme know if you need some remote troubleshooting; if schedules permit, we can do screenshares.
I had this issue when I used Kubernetes: SATA SSDs can’t keep up. I’m not sure what the Evo 980 is rated for, but I’d suggest shutting down all container I/O and doing a benchmark using fio.
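As a starting point, a minimal fio job file might look like this (the mount point and sizes here are assumptions; adjust them to the actual drive):

```ini
; Hypothetical fio job file (randwrite.fio) for a 4k random-write test.
; direct=1 bypasses the page cache so the drive's real performance shows.
; Run with: fio randwrite.fio (after stopping container I/O)
[randwrite-test]
directory=/mnt/ssd    ; assumption: mount point of the drive under test
rw=randwrite
bs=4k
size=1G
iodepth=32
ioengine=libaio
direct=1
runtime=60
time_based
```

Compare the reported IOPS against what your containers generate; if the drive is already near its ceiling, that would explain the stalls.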
my current setup uses Proxmox, with spinning rust configured in RAID5 on a NAS and Jellyfin in a container.
all JF container transcoding and cache is dumped on a WD750 NVMe, while all media is stored on the NAS (max bandwidth is 150 MB/s).
you can monitor the I/O using iostat once you’ve done a benchmark.
I’d check for high I/O wait, especially if all of your VMs are on HDDs.
one of the solutions I had for this issue was to run multiple DNS servers. I solved it by buying a Raspberry Pi Zero W and running a second small Pi-hole instance there. I made sure the Pi Zero W is plugged into a separate circuit in my home.
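for clients to actually fail over, both Pi-holes need to be advertised by DHCP. if your router runs dnsmasq, that’s a one-liner (the IPs here are hypothetical):

```
# Hypothetical dnsmasq DHCP option handing out both Pi-hole instances:
# primary on the main server, secondary on the Pi Zero W
dhcp-option=option:dns-server,192.168.1.10,192.168.1.11
```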
i didn’t have a problem with network ports (I use a switch); what I should’ve considered when purchasing was the number of drives (SATA ports) and PCIe features (bifurcation, version, number of NVMe slots).
I need high IOPS for my research now, and I’m stuck with RAID0 commodity SSDs in 3 ports.
I’ve been running a PBS instance (plus networking containers) for 4 years now (cc on file for the first 2 years, still on file now), but my use case operates within the free-forever tier.
My instance has not been deleted by them, though I’ve rebuilt it multiple times since.
The region you are in might be struggling with capacity issues; I use a Middle East region and haven’t encountered account/VM deletions (yet). In my case latency isn’t an issue, so I don’t mind being on a faraway region.
Depends on what kind of service the malicious requests are hitting.
Fail2ban can be used for a wide range of services.
I don’t have a public-facing service (except for a honeypot), but I’ve used fail2ban before on public SSH/webauth/OpenVPN endpoints.
For a blog, you might be well served by a WAF; I’ve used ModSecurity before, not sure if there’s anything newer.
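for reference, a minimal sshd jail looks something like this (the thresholds are illustrative, not what I actually ran):

```ini
# Hypothetical /etc/fail2ban/jail.local: ban an IP for an hour
# after 5 failed SSH logins within 10 minutes
[sshd]
enabled  = true
port     = ssh
maxretry = 5
findtime = 10m
bantime  = 1h
```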
I’d make my own NAS.
I get this step: defederating essentially tells them that I don’t consent to them getting my data.
But I’m really missing something here, since any instance that zucc controls that is federated with the large instances still exposes my data to zucc.
Defederating is one step, and the instance owners have taken that step now, so far so good. But then zucc just creates a lemmy/kbin instance that they own, joins the federation without announcing Meta’s affiliation with it, and my data is still zucc’ed.
I should’ve been clearer about my question: how would I, as a Lemmy user, know if an instance has gone rogue (been taken over by another entity like Meta/FB/IG)?
My actual worry is an instance stealthily created by Meta/FB/IG that is not identified as a Threads instance/service. Say you’ve defederated the fuck out of every known Meta-created instance so they can’t push trash content; then, as an example:
an instance owner gets bribed into creating another instance that federates with established instances, and hands control of it to FB. At this point FB/IG/Meta know they’d just be kicked out again if they even peeped that they now own the instance.
What is the trust model between instances, where/when does it break?
if the instance that Meta now owns doesn’t push out Threads content, they still have access to our data and I’ll just be unaware of it, and next thing we know we’re getting profiled from what we post in our private instances.
Thanks for this visual. I’d extend the question to:
Will Facebook be able to create dummy instances that federate with the large/established instances and take our information?
I know fuck all about this.
This is the part I can’t follow either: I understand that Facebook can’t be trusted, and that federation is based on trust between instance admins not to do something fuckey.
So our data and rights (my country was a victim of CA) are unsafe when federated with Threads; that’s what people are saying.
But what is stopping Facebook from creating a dummy instance, not disclosing that it’s theirs, and federating with the instances that rejected the known Threads instances?
I agree with this; what I suggested is not best practice, and I should have prefaced my post with that.
And I feel your pain! I get calls at both extremes, like people putting in too much security, where the ticket is “P1 everything is down, fly every engineer here” for an NACL/SG they created themselves.
The other extreme is deliberate exposure of services to the public internet (other service providers send us an email and ask us to do something about it, but not our monkeys: shared responsibility, etc).
I’m currently using Oracle Cloud for my bots. I work in the space (cloud/systems engg), and the first thing that got me was that the Oracle Ubuntu instances ship with custom iptables rules in place for security.
I’m not sure if it’s still the case, but last I checked a year ago I had to flush iptables before I could use other ports. I didn’t really want to deal with another layer of security to manage, as I was just using the ARM servers for my hobby.
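if anyone hits the same thing: on the Oracle Ubuntu images I used, the default rules were persisted via netfilter-persistent, so instead of flushing everything you can open just the port you need (assuming the rules file is still /etc/iptables/rules.v4):

```
# Hypothetical addition to /etc/iptables/rules.v4: allow inbound TCP 8080.
# Place it above the final REJECT rule, then apply with:
#   sudo netfilter-persistent reload
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT
```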
It might be something worth checking, it isn’t specific to lemmy though.
I found it unintuitive because the other major cloud providers don’t put any host firewall/security in place (making it easier to manage security using SG/NACL through the console).
That’s the setup when I started.
I picked up an x230 with a broken screen and used it as a hypervisor (proxmox 5.4).
I used whatever resources were available to me at the time and learned weird networking (passing through NICs for a router-on-a-stick configuration).
I used that x230 until the mobo gave up.
hypervisor: proxmox
vms: rhel 9.2