Hello, all. I’m considering creating a Lemmy instance in order to facilitate migration of a moderately sized reddit community to the fediverse.

The community is about 150k users with around 1.5M pageviews per month.

I don’t expect everyone to come with the migration, but I would expect a significant portion to do so. Perhaps half.

How do I go about capacity planning / sizing for such an instance? Is Lemmy designed to operate at this scale?

Thanks for your input.

  • poVoq@slrpnk.net

    Lots of unknowns right now. The best approach might be to look for a VPS provider that lets you scale up relatively easily.

    Besides some low-hanging fruit that is being worked on right now, it is simply unknown, as no Lemmy instance has operated at this scale so far. The tech stack (Rust and PostgreSQL) is relatively performant, though. The main bottleneck is likely going to be the database for now.

    • molo@beehaw.org (OP)

      Thanks, appreciate your input.

      If Postgres is the scaling limit on most deployments, I assume normal Postgres scaling via read replicas and sharding would apply (something like the read/write split sketched at the end of this comment).

      Is the frontend amenable to heavy caching for non-logged-in views?
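
      For context, the kind of read/write split I have in mind is sketched below in Go. This is only a generic illustration of routing reads to a replica and writes to the primary at the application layer; the hostnames (pg-primary, pg-replica-1), table names, and columns are made up, and this is not Lemmy’s actual Rust/Diesel code or schema.

      ```go
      package main

      import (
          "database/sql"
          "log"

          _ "github.com/lib/pq" // Postgres driver; the DSNs below are placeholders
      )

      // store bundles a write connection to the primary with a read connection to a replica.
      type store struct {
          primary *sql.DB
          replica *sql.DB
      }

      func open() (*store, error) {
          primary, err := sql.Open("postgres", "host=pg-primary user=lemmy dbname=lemmy sslmode=disable")
          if err != nil {
              return nil, err
          }
          replica, err := sql.Open("postgres", "host=pg-replica-1 user=lemmy dbname=lemmy sslmode=disable")
          if err != nil {
              return nil, err
          }
          return &store{primary: primary, replica: replica}, nil
      }

      // Read-heavy queries (e.g. listing posts) go to the replica.
      func (s *store) listPosts(limit int) (*sql.Rows, error) {
          return s.replica.Query("SELECT id, title FROM posts ORDER BY published DESC LIMIT $1", limit)
      }

      // Writes (e.g. saving a comment) go to the primary.
      func (s *store) saveComment(postID int, body string) error {
          _, err := s.primary.Exec("INSERT INTO comments (post_id, body) VALUES ($1, $2)", postID, body)
          return err
      }

      func main() {
          s, err := open()
          if err != nil {
              log.Fatal(err)
          }
          defer s.primary.Close()
          defer s.replica.Close()
          log.Println("connected to primary and replica")
      }
      ```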

      • poVoq@slrpnk.net

        Yes, horizontal scaling of Postgres probably works, especially together with running multiple Lemmy backends behind a load balancer, but I am not aware of anyone running such a setup right now.
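
        Just to illustrate the shape of it, below is a toy round-robin reverse proxy in Go that spreads requests across several identical Lemmy backend containers. In a real deployment you would use something like nginx or HAProxy instead; the backend hostnames (lemmy-backend-1 to -3) and the port here are placeholders.

        ```go
        package main

        import (
            "log"
            "net/http"
            "net/http/httputil"
            "net/url"
            "sync/atomic"
        )

        func main() {
            // Placeholder addresses for several identical Lemmy backend containers.
            backends := []string{
                "http://lemmy-backend-1:8536",
                "http://lemmy-backend-2:8536",
                "http://lemmy-backend-3:8536",
            }

            var proxies []*httputil.ReverseProxy
            for _, b := range backends {
                u, err := url.Parse(b)
                if err != nil {
                    log.Fatal(err)
                }
                proxies = append(proxies, httputil.NewSingleHostReverseProxy(u))
            }

            var next uint64
            handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                // Round-robin: each request goes to the next backend in turn.
                i := atomic.AddUint64(&next, 1) % uint64(len(proxies))
                proxies[i].ServeHTTP(w, r)
            })

            log.Println("load balancer listening on :8080")
            log.Fatal(http.ListenAndServe(":8080", handler))
        }
        ```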

        Caching is being worked on right now and should be available together with the no-websockets improvement very soon.
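
        Until that lands, one possible stopgap is a small cache in front of the UI that serves anonymous GET requests from memory and passes everything else straight through. A rough Go sketch of that idea follows; the upstream address and the session cookie name ("jwt") are guesses, not necessarily what Lemmy actually uses.

        ```go
        package main

        import (
            "io"
            "log"
            "net/http"
            "net/http/httputil"
            "net/url"
            "sync"
            "time"
        )

        const (
            upstreamAddr = "http://lemmy-backend:8536" // placeholder backend address
            ttl          = 60 * time.Second            // how long anonymous pages stay cached
        )

        type entry struct {
            body    []byte
            ctype   string
            fetched time.Time
        }

        var (
            mu    sync.Mutex
            cache = map[string]entry{}
        )

        // anonymous reports whether the request carries no session cookie.
        // The cookie name "jwt" is a guess; use whatever the frontend actually sets.
        func anonymous(r *http.Request) bool {
            _, err := r.Cookie("jwt")
            return err != nil
        }

        func main() {
            u, err := url.Parse(upstreamAddr)
            if err != nil {
                log.Fatal(err)
            }
            proxy := httputil.NewSingleHostReverseProxy(u)

            http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                // Logged-in users and non-GET requests always go straight to the backend.
                if r.Method != http.MethodGet || !anonymous(r) {
                    proxy.ServeHTTP(w, r)
                    return
                }

                key := r.URL.RequestURI()
                mu.Lock()
                e, ok := cache[key]
                mu.Unlock()
                if ok && time.Since(e.fetched) < ttl {
                    w.Header().Set("Content-Type", e.ctype)
                    w.Write(e.body)
                    return
                }

                // Cache miss: fetch from the backend, cache successful responses, and reply.
                resp, err := http.Get(upstreamAddr + key)
                if err != nil {
                    http.Error(w, "upstream error", http.StatusBadGateway)
                    return
                }
                defer resp.Body.Close()
                body, err := io.ReadAll(resp.Body)
                if err != nil {
                    http.Error(w, "upstream error", http.StatusBadGateway)
                    return
                }
                if resp.StatusCode == http.StatusOK {
                    mu.Lock()
                    cache[key] = entry{body: body, ctype: resp.Header.Get("Content-Type"), fetched: time.Now()}
                    mu.Unlock()
                }
                w.Header().Set("Content-Type", resp.Header.Get("Content-Type"))
                w.WriteHeader(resp.StatusCode)
                w.Write(body)
            })

            log.Fatal(http.ListenAndServe(":8080", nil))
        }
        ```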

  • andrewA

    It sounds like Lemmy may have difficulty being scaled that hard, that quickly. There are some database-side performance bottlenecks (which are actively being worked on) that block high concurrency and activity, as well as limitations in how websockets are used in the current version that can cause dropouts on very active instances.

    Both of those issues are being worked on, and it sounds like the websocket issue in particular is close to having a release candidate. It really sounds like the focus on the devs’ side is to prepare for a possible reddit exodus before the blackout that many subs there are planning: get the software running reliably and performantly, knock out any low-hanging fruit, etc.

    • molo@beehaw.org (OP)

      Glad that the known issues are being addressed. Ideally I wouldn’t be treading new ground but rather deploying known patterns and best practices, but it seems this would be a departure from existing installations. Thanks for your input.

  • donio@beehaw.org

    I’d also love to hear from admins about the bottlenecks and scaling limits they see on their instances and the ways they’ve found to address them (besides throwing more hardware at it).