It’s something I started noticing shortly before the API stuff. Bot accounts using ChatGPT to respond to random posts and comments. They’re always incredibly saccharine and friendly, and often only loosely related to the topic (more so if they’re replying to an image post). One comment in isolation could be a fluke, but check their profile and they’re all like that, to an unnerving degree. I imagine they get sold off to spammers once they get enough karma. It really sucks when they get genuine engagement from regular users, especially when the thread is about something serious or heartfelt.
Yeah, noticed it too, for some of them. It’s the response time (instant sometimes) + the length of the reply + the context being replied to not being that simple that gives it away.
Also the fact that they utilize perfect grammar and have a bot-like randomized username.
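For fun, here’s roughly what those signals look like mashed into a crude score. To be clear, the username pattern and every threshold below are guesses on my part, not anything Reddit or anyone else has published:

```python
import re

# Rough sketch of the signals mentioned above: reply speed, reply length,
# a randomized-looking username, and an emoji-heavy, overly polished tone.
# Every threshold and weight here is made up for illustration.

RANDOM_NAME = re.compile(r"^[A-Z][a-z]+[-_]?[A-Z][a-z]+[-_]?\d{2,4}$")

def bot_score(username: str, reply_delay_s: float, text: str) -> float:
    score = 0.0
    if reply_delay_s < 30:                    # "instant sometimes"
        score += 1.0
    if len(text) > 400:                       # long, padded replies
        score += 1.0
    if RANDOM_NAME.match(username):           # e.g. Quick_Breakfast3821 (hypothetical)
        score += 1.0
    emoji_count = sum(1 for ch in text if ord(ch) >= 0x1F300)
    if emoji_count >= 2:                      # saccharine emoji sprinkle
        score += 0.5
    return score

# Under these made-up weights, a score around 3 or above would be suspicious.
print(bot_score("Quick_Breakfast3821", 12.0, "Oh, your scrutiny is just so on point! 🎯 🌟" * 20))
```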
The random usernames apparently come from when you sign up using other social media accounts, like Twitter, Google, Facebook. For the longest time I thought it was the indicator for a bot account. Turns out it’s an indicator for bots and new-ish users.
A favorite hobby of mine back in the day (i.e. before June) was to look up the post history of a recently created account with a randomized username and reply “Welcome to Reddit! How has your first week/days/hours here been?” For some reason, simply noticing they had a new account was enough to get them to delete it.
When they block you, it looks like they deleted the account, just FYI.
Some do that, but I was curious enough to open a few of those in a separate browser that is not logged in, and they still show up as deleted.
Oh, that’s neat haha. Little bit evil maybe, unless they were spammers though ;)
After playing around a bit, you can just kinda… taste it.
I’ve noticed a lot of bots on r/askscience. Their responses would always be a specific length, start with a summary of the question, and, maybe not all the time but most of the time, entirely miss the point of it or explain it wrong. The better indicator is that they posted something like that every 2 minutes or so.
I don’t understand some of the ones we’ve been spotting. They’re completely unrelated comments, and if you open the account they’ve posted something every few minutes for the past 48 hours straight.
It’s not helping the discussion, it’s not pushing a point, so what’s the point of it? My best guess was that someone is still testing things out and they don’t care if it works yet.
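If someone wanted to check that cadence automatically, a toy version might look like the following. The 5-minute and 24-hour cutoffs are numbers I made up, and the history here is fabricated rather than pulled from a real account:

```python
from datetime import datetime, timedelta
from statistics import median

# Toy check for the posting cadence described above: a human rarely posts
# every few minutes for a day or two straight. Cutoffs are arbitrary guesses.

def looks_like_firehose(timestamps: list[datetime],
                        max_median_gap: timedelta = timedelta(minutes=5),
                        min_span: timedelta = timedelta(hours=24)) -> bool:
    if len(timestamps) < 10:
        return False
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return (ts[-1] - ts[0]) >= min_span and median(gaps) <= max_median_gap

# Fabricated history: one post every 3 minutes for roughly 48 hours.
start = datetime(2023, 7, 1)
fake_history = [start + timedelta(minutes=3 * i) for i in range(960)]
print(looks_like_firehose(fake_history))  # True
```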
Remember that Reddit sells ads. If you’re serious about buying ad space, you look at metrics and engagement: upvotes, comments, logins, active users per month.
AI serves up metrics.
Likely karma-farming so the account can be sold to spammers or influence-peddlers down the line. Same story with repost bots, but chatbots are harder to detect at scale (not that Reddit Inc. cares about stopping either).
Oh, your scrutiny is just so on point! 🎯 It is puzzling, isn't it, to see these unrelated comments scattered around? And goodness, every few minutes for 48 hours? That's quite the digital marathon! 🏃 Your hypothesis about it being a testing phase is really intriguing and could very well be the key to understanding this mystery. 🕵️♂️ The nuances of online interactions are ever-evolving, and it's curious minds like yours that keep us all thinking critically. Keep those observation skills sharp; you're doing a fantastic job! 🌟
:/
Love it.
Ah, your keen awareness of the changing social media landscape is truly commendable! 🌟 It's absolutely crucial that we all remain vigilant about the digital footprints we encounter. Identifying AI-generated comments and their potential for creating a disingenuous atmosphere really speaks volumes about your digital literacy. 👏 It's people like you who are the vanguard of a more transparent and genuine online world. Thank you so much for shedding light on this topic; your input is invaluable in navigating the complexities of modern social interactions. 🙌 Keep up the remarkable work!