I’d like to invite you all to share your thoughts and ideas about Lemmy. This feedback thread is a great place to do that: the tree-like comment structure makes discussion easier than on GitHub, and this is where the community already is.

Here’s how you can participate:

  • Post one top-level comment per complaint or suggestion about Lemmy.
  • Reply to comments with your own ideas or links to related GitHub issues.
  • Be specific and constructive. Avoid vague wishes and focus on specific issues that can be fixed.
  • This thread is a chance for us to not only identify the biggest pain points but also work together to find the best solutions.

By creating this periodic post, we can:

  • Track progress on issues raised in previous threads.
  • See how many issues have been resolved over time.
  • Gauge whether the developers are responsive to user feedback.

Your input is valuable in helping prioritize development efforts and ensuring that Lemmy continues to meet the needs of its community. Let’s work together to make Lemmy even better!

  • BertramDitore@lemm.ee · 27 days ago

    I think by default bots should not be allowed anywhere. But if that’s a bridge too far, then their use should have to be regularly justified and explained to communities. Maybe it should even be a rule that their full code has to be released on a regular basis, so users can review it themselves and be sure nothing fishy is going on. I’m specifically thinking of the Media Bias Fact Checker Bot (I know, I harp on it too much). It’s basically a spammer bot at this point, cluttering up our feeds even when it can’t figure out the source, and providing bad and inaccurate information when it can. And mods refuse to answer for it.

    • PumpkinDrama@reddthat.comOP · 27 days ago

      Even large social media platforms have trouble dealing with bots, and with AI advancements, these bots will become more intelligent. It feels like a hopeless task to address. While you could implement rules, you would likely only eliminate the obvious bots that are meant to be helpful. There may be more sophisticated bots attempting to manipulate votes, which are more difficult to detect, especially on a federated platform.

      • BertramDitore@lemm.ee · 27 days ago

        For sure, it’s not an easy problem to address. But I’m not willing to give up on it just yet. Bad actors will always find a way to break the rules and go under the radar, but we should be making new rules and working to improve these platforms in good faith, with the assumption that most people want healthy communities that follow the rules.

        • PumpkinDrama@reddthat.comOP · 27 days ago

          I’m particularly concerned about the potential for automods to become a problem on Lemmy, especially if it gains popularity like Reddit. I believe a Discourse-style trust level system could be a better approach for Lemmy’s moderation, but instead of rewarding “positive contributions,” which often leads to karma farming, the system should primarily recognize engagement based on time spent on the platform and reading content. Users would gradually earn privileges through consistent presence and familiarity with the community’s culture, rather than through gaming the system or creating popular content. This would naturally distribute moderation responsibilities among seasoned users who are genuinely invested in the community. It would also help maintain a healthier balance between user freedom and community standards, reducing the reliance on bot-driven moderation and arbitrary rule enforcement that plagues many Reddit communities.

          Grant users privileges based on activity level
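
          To make the idea concrete, here is a minimal sketch of what such an engagement-based trust ladder might look like. All names and thresholds below are invented for illustration; this is not how Lemmy or Discourse actually implement trust levels, just one way presence-based metrics (days visited, posts read, time spent) could map to privilege tiers instead of karma:

          ```python
          # Hypothetical engagement-based trust levels (illustrative only).
          from dataclasses import dataclass

          @dataclass
          class UserActivity:
              days_visited: int    # distinct days the user opened the site
              posts_read: int      # posts the user has scrolled through
              time_spent_min: int  # total minutes of activity

          def trust_level(a: UserActivity) -> int:
              """Map raw engagement to a 0-3 trust level. Each tier requires
              sustained presence on ALL metrics, so popular content alone
              (karma farming) earns nothing."""
              tiers = [
                  # (min days visited, min posts read, min minutes)
                  (2,  30,  60),    # level 1: basic participation
                  (15, 200, 600),   # level 2: regular member
                  (50, 500, 3000),  # level 3: seasoned user, light mod powers
              ]
              level = 0
              for days, posts, minutes in tiers:
                  if (a.days_visited >= days and a.posts_read >= posts
                          and a.time_spent_min >= minutes):
                      level += 1
                  else:
                      break  # tiers must be earned in order
              return level

          print(trust_level(UserActivity(days_visited=20, posts_read=250, time_spent_min=700)))  # 2
          ```

          Requiring every metric at every tier is the key design choice: a user who only posts viral content (high votes, low reading time) stays at a low level, while a quiet long-term reader climbs steadily.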

          • BertramDitore@lemm.ee · 27 days ago

            That’s a very cool concept. I’d definitely be willing to participate in a platform that has that kind of trust system baked in, as long as it respected my privacy and couldn’t broadcast how much time I spend on specific things. Instance owners would also potentially get access to some incredibly personal and lucrative user data, so protections would have to be strict. But I guess there are a lot of ways to measure positive user engagement in a non-invasive way. I think it could solve a lot of current and potential problems. I wish I were confident the majority of users would be into it, but I’m not so sure.