He generally shows most of the signs of the misinformation accounts:

  • Wants to repeatedly tell basically the same narrative and nothing else
  • Narrative is fundamentally false
  • Not interested in any kind of conversation, or in learning that what he’s posting contradicts the values he claims to profess

I also suspect that it’s not a coincidence that this is happening just as the Elon Musks of the world are ramping up attacks on Wikipedia, especially because it is a force for truth in the world that’s less corruptible than a lot of the others, and tends to fight back legally if someone tries to interfere with the free speech or safety of its editors.

Anyway, YSK. I reported him as misinformation, but who knows if that will lead anywhere.

Edit: Number of people real salty that I’m talking about this: Lots

  • ByteOnBikes@slrpnk.net · 19 hours ago

    If someone is willing to spend money to generate quality LLM output, they can post as much as they want on virtually all social media sites.

    $20 for a chatgpt pro account and fractions of pennies to run a bot server. It’s really extremely cheap to do this.

    I don’t have an answer for how to solve the “motivated actor” problem beyond mass tagging/community effort.

    • kava@lemmy.world · 19 hours ago

      $20 for a chatgpt pro account and fractions of pennies to run a bot server. It’s really extremely cheap to do this.

      OpenAI has checks for this type of thing. They limit the number of requests per hour with the regular $20 subscription

      you’d have to use the API, and that comes at a per-token cost depending on which model you’re using. it can get expensive very quickly depending on what scale of bot manipulation you’re going for
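      The API-cost point above can be sketched as back-of-the-envelope arithmetic. The per-token prices and token counts below are placeholder assumptions for illustration, not actual OpenAI rates:

```python
# Rough, hypothetical cost estimate for API-driven bot posting.
# All prices and token counts are assumptions, not real rates.
PRICE_PER_1K_INPUT = 0.005   # assumed $ per 1K input tokens
PRICE_PER_1K_OUTPUT = 0.015  # assumed $ per 1K output tokens

def cost_per_post(input_tokens=300, output_tokens=150):
    """Estimated API cost of generating one comment."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

posts_per_day = 1000
daily_cost = posts_per_day * cost_per_post()
print(f"~${daily_cost:.2f}/day for {posts_per_day} posts")
```

      Even under these made-up prices, the marginal cost per post is well under a cent; the real constraint is scale, since token costs multiply quickly across thousands of accounts and longer prompts.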

    • douglasg14b@lemmy.world · 17 hours ago

      Heuristics, data analysis, signal processing, ML models…etc

      It’s about identifying artificial behavior, not identifying artificial text. We can’t really identify artificial text, but behavioral patterns are a higher bar for botters to get over.

      The community isn’t in a position to do anything about it; the platform itself is the only one in a position to gather the necessary data to even start targeting the problem.

      I can’t target the problem without first collecting and aggregating the data, and Lemmy doesn’t do much to enable that currently.
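      One example of the kind of behavioral signal described above is posting-interval regularity: humans post at irregular times, while naive bots often post on a fixed schedule. A minimal sketch (the timestamps and any threshold you’d apply are hypothetical, and real detection would combine many such signals):

```python
import statistics

def interval_regularity(timestamps):
    """Coefficient of variation of the gaps between an account's posts
    (timestamps in seconds, sorted ascending). Values near 0 mean
    machine-like, evenly spaced posting; humans tend to score higher."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return None  # not enough activity to judge
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return 0.0
    return statistics.stdev(gaps) / mean_gap

bot_like = [0, 600, 1200, 1800, 2400]    # a post exactly every 10 minutes
human_like = [0, 120, 2700, 3000, 9000]  # irregular bursts and silences

print(interval_regularity(bot_like))    # → 0.0
print(interval_regularity(human_like))  # noticeably larger
```

      A heuristic like this needs the platform’s posting-time data per account, which is exactly the aggregation the comment says Lemmy doesn’t currently support.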