• 3 Posts
  • 241 Comments
Joined 1 year ago
Cake day: June 1st, 2023

  • Epic have come a long way from Epic MegaGames, and it isn’t always a fairy-tale story, I suppose.

    Someone here on Lemmy highlighted that quite nicely when Valve dropped their Half-Life documentary. Valve embraces their past. They cherish it. They still maintain their old games to honor their success.

    Epic, on the other hand, completely wiped the old Unreal titles from the relevant stores and don’t give a fuck about supporting any of them. Which is a shame. I also admire the tech behind modern Unreal engines, so there are still geniuses at work who are likely passionate. Too bad they essentially only ride the Fortnite train outside their engine development.

  • aksdb@feddit.de to Memes@lemmy.ml · Yeah, well... · 5 months ago

    That is, IMO, what critical thinking is meant to be: thinking about alternative explanations and evaluating their viability or probability.

    Unfortunately, a lot of people use the term “critical thinking” as just another way to rationalize why they are against something, without actually weighing the options.

  • I would consider Todd Howard to be part of development (since he directs the creative and narrative angle, from what I understand).

    He defended bad performance with “get better hardware”. He deflected criticism of the content with “you play the game wrong”.

    Both are bullshit “excuses”. The first one was even debunked by modders, who showed that there was potential for optimization. And modders are far more limited than engine devs. The game doesn’t look ugly, but there are far better-looking games with more scene complexity out there that run better.

    And “you play it wrong” is bullshit, because if enough people play it wrong to have an effect on the rating of the game, then the game is badly designed. Part of game design is making sure the game explains itself or subtly pulls players in the right direction. Either they failed at that, or there simply is no clear direction. But that’s not the players’ fault.

  • As with any software/product: they have different features.

    ZFS is not really hip. It’s pretty old. But also pretty solid. Unfortunately it’s licensed under the CDDL, which may be incompatible with the GPL, so no one wants to take the risk of trying to get it into the Linux kernel. So in the Linux world it is always a third-party add-on. In the BSD or Solaris world, though …

    btrfs has similar goals to ZFS (more on that soon) but has been developed right inside the kernel all along, so it typically works out of the box. It has a bit of a complicated history with its stability/reliability, from which it still suffers (the history, not the stability). Many/most people run it with zero problems, some will still cite problems they had in the past, and some apparently still have problems.

    bcachefs is also looming around the corner and might tackle the problems differently, bringing us all the nice features with fewer bugs (optimism, yay). But it’s an even younger FS than btrfs, so only time will tell.

    ext4 is an iteration on ext3, which is an iteration on ext2. So it’s pretty fucking stable and heavily battle-tested.

    Now why even care? ZFS, btrfs and bcachefs are filesystems following the COW philosophy (copy on write), meaning you might lose a bit of performance but win on reliability. It also allows easily enabling snapshots, which all three bring you out of the box. So you can basically say “mark the current state of the filesystem with tag/label/whatever ‘x’”, and every subsequent change (since it is a copy) will not touch the old snapshots, allowing you to easily roll back a whole partition. (Of course that takes up space, but only incrementally.)
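
    To make that concrete, here is a toy sketch of the COW idea in Python. It is not how ZFS/btrfs are actually implemented (they work on disk blocks and trees); it just shows why snapshots are cheap and why old states survive new writes:

    ```python
    class CowVolume:
        def __init__(self):
            self.blocks = {}      # live block table: block id -> data
            self.snapshots = {}   # label -> block table at snapshot time

        def write(self, block_id, data):
            # Copy-on-write: build a new table instead of mutating the
            # old one, so snapshots pointing at it stay untouched.
            self.blocks = {**self.blocks, block_id: data}

        def snapshot(self, label):
            # A snapshot is just a reference to the current table;
            # nothing is copied until someone writes.
            self.snapshots[label] = self.blocks

        def rollback(self, label):
            # Rolling back swaps the live table for the snapshotted one.
            self.blocks = self.snapshots[label]

    vol = CowVolume()
    vol.write("a", "v1")
    vol.snapshot("clean")
    vol.write("a", "v2")            # the "clean" snapshot still sees "v1"
    vol.rollback("clean")
    assert vol.blocks["a"] == "v1"
    ```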

    They also bring native support for different RAID levels, making additional layers like mdadm unnecessary. In the case of ZFS and bcachefs, you also get native encryption, making LUKS obsolete.
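
    For illustration, with OpenZFS’s own tooling that looks roughly like this (a minimal sketch assuming OpenZFS is installed and you run as root; the pool name “tank”, the device paths and the dataset name are made-up placeholders):

    ```python
    import subprocess

    # Create a mirrored pool: ZFS does the RAID itself, no mdadm layer.
    subprocess.run(["zpool", "create", "tank",
                    "mirror", "/dev/sda", "/dev/sdb"], check=True)

    # Create a natively encrypted dataset: no LUKS underneath.
    # This prompts interactively for a passphrase.
    subprocess.run(["zfs", "create",
                    "-o", "encryption=on",
                    "-o", "keyformat=passphrase",
                    "tank/secure"], check=True)
    ```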

    For typical desktop use: ext4 is totally fine. Snapshots are extremely convenient if something breaks, since you can basically revert the changes in a single command. They don’t replace a backup strategy, so in the end you should have some data security measures in place anyway.
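
    That single command looks like this on ZFS, for example (the dataset and snapshot names are placeholders):

    ```python
    import subprocess

    # Take a snapshot before a risky change ...
    subprocess.run(["zfs", "snapshot", "tank/home@pre-upgrade"], check=True)

    # ... and if things break, one command restores the old state.
    subprocess.run(["zfs", "rollback", "tank/home@pre-upgrade"], check=True)
    ```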

    *Edit: forgot a word.

  • One problem is that they need to put a price tag, and therefore a timeline, on such a project. Due to the complexity and the many unknown unknowns in these decades’ worth of accumulated technical debt, no one can properly estimate either. And so these projects never get off the ground and typically die during planning/evaluation, when both numbers (cost and time) climb higher and higher the longer people think about it.

    IMO a solution would be to do it iteratively with a small team and just finish whenever. Upside: you have people at hand all the time who know the system inside out, should something come up. Downside, of course, is that you have effectively no meaningful reporting on when the thing will be finished.