Post contents (and a mirror):

BotDefense is wrapping up operations

TL;DR below.

When we announced the BotDefense project in 2019, we had no idea how large the project would become. Our initial list of bots was just 879 accounts. Most of them were annoying rather than outright malicious.

Since then, we’ve witnessed the rise of malicious bots farming karma in order to spam and scam users across Reddit, and we’ve done our best to help communities stem the tide. We spent countless hours finding and reviewing accounts, writing code to automate detections, and reviewing appeals. Most appeals come from outright criminals and karma farmers who are definitely running bots, but we typically unban about 4 accounts per month, and unlike similar bots, an unban from us means the account is unbanned everywhere we banned it.

Along the way, we’ve struggled with the scope of the problem, rewriting our back-end code multiple times and figuring out how to scale to the 3,650 subreddits that BotDefense now moderates. We came up with new algorithms to identify content theft, reduce the number of times we accidentally ban an innocent account, and more. In January 2023 alone, we added an incredible 10,070 bots to our ban list, which now stands at 144,926 accounts.

Like many anti-abuse projects on Reddit, we’ve done all of this for free while putting up with Reddit’s penchant for springing detrimental changes on developers and moderators (e.g., adding API limits without advance notice and blocking Pushshift) and figuring out workarounds for numerous scalability issues that Reddit never seems to fix. Without Pushshift, the number of malicious bots we were able to ban dropped to 5,517 in May.

Now, Reddit has changed the Reddit API terms to destroy third-party apps and harm communities. A group of developers and moderators tried to convince Reddit to not continue down this path and communities protested like never before, but that was all in vain. Reddit is so brazenly hostile to moderators and developers that the CEO of Reddit has referred to us as “landed gentry”.

With these changes and in this environment, we no longer believe we can effectively perform our mission. The community of users and moderators submitting accounts to us depends on Pushshift, the API, and third-party apps. And we would be deluding ourselves if we believed any assurances from Reddit, given its track record of broken promises. Investing further resources into Reddit as a platform presents significant risks, and it’s safer to allocate one’s time, energy, and passions elsewhere.

Therefore, we have already disabled submissions of new accounts and our back-end analytics, and we will be disabling future actions on malicious and annoying bots. We will continue to review appeals and process unbans for a minimum of 90 days, or until Reddit breaks the code running BotDefense.

We’d rather be figuring out how to combat the influx of ChatGPT bots flooding Reddit, Temu bots flooding subreddits with fake comments, and every other malicious bot out there, of course.

At this time, we advise keeping BotDefense as a moderator through October 3rd so any future unbans can be processed. We will provide updates if the situation changes or if we have any other news to share.

Finally, I want to thank all of the users and moderators who have contributed accounts, my co-moderators who have helped review countless accounts, and to all of the communities that have trusted us with helping moderate their subreddits.

Regards.

— dequeued

TL;DR With the API changes now in place, we no longer believe we can effectively perform our mission so we are sunsetting BotDefense. We recommend keeping BotDefense on as a moderator through October 3rd so any unbans can be processed.

  • TrontheTechie@infosec.pub · 1 year ago

    Let’s say I have an account with lots of positive karma. I take that account and make it look nice; I can look like a paragon of a community, a customer service account, or anything I want. Now let’s say I go into an MMO community and use that nice, good-looking account to run a scam where I get people to send me passwords and 2FA codes; now I’m running off with their MMO gold and selling it.

    Or let’s say I set up an account that seems to be related to a crypto wallet company. You post to a subreddit asking for help, and I come along and convince you to send me your crypto, to screenshot something that compromises your seed without you thinking, or to visit a webpage that makes it look like you’re signing a transaction to sign in.

    Basically, if karma is a metric of community trust, someone will use that trust against the community.

    • DogMuffins@discuss.tchncs.de · 1 year ago

      Is this really a thing that happens? I’m incredulous.

      Would people really give their passwords and 2FA codes to a Reddit stranger? And of the people who would, how many of them would actually refuse just because the stranger’s karma was too low?

      Same with the crypto example.

      Do people really think that karma means someone is trustworthy? That seems kind of absurd.

      • TrontheTechie@infosec.pub · 1 year ago

        Yes, it happens, and no, I wouldn’t. Go search the 2007scape sub for “hacked” and “hijacked”. The Monero subreddit has people pretending to be part of Cake Wallet. Go to the cryptocurrency subreddit and ask for a link to an exchange.

        “Think of how stupid the average person is, and realize half of them are stupider than that.”

        – George Carlin