For anyone who’s willing to spend ~15 mins on this, I’d encourage you to play Techdirt’s simulator game Trust & Safety Tycoon.
While it’s hardly comprehensive, it’s a fun way of thinking about the tension between staying profitable/solvent and choosing which social values to promote.
It’s really easy to say “they should do [x]”, but sometimes that’s not what your investors want, or it takes a toll in other ways.
Personally, I want to see more action on disinformation. To my mind, it’s the single biggest vulnerability that can be exploited with almost no repercussions, and the world is facing some important public decisions (e.g. elections). I don’t pretend to know the specific solution, but the area needs far more investment and recognition than it currently gets.
How can this be funded? You need a workforce for everything that can’t be automated, and resourcing is obviously challenging, but I think there are things that can support it:
State it publicly as a proud position. Other platforms are too eager to promote “free speech” at all costs, when in fact they are private companies that can impose whatever rules they want. Stating a firm position costs nothing, while also attracting a certain kind of user and giving them the confidence to report dodgy content.
Leverage AI. LLMs and other AI tools can be used to detect bots and deepfakes and to run sentiment analysis on written posts. It’s not perfect and will require human oversight, but it can be an enormous help in surfacing things faster than staff might otherwise catch them (a rough sketch of what that triage could look like follows this list).
Punish offenders. Consistent enforcement is genuinely hard, but you can still remove the most egregious bad actors from the platform and send a clear signal to everyone else.
Price it in. If you know you need humans to enforce the rules, build that cost into your advertising fees (or other revenue streams) and sell it as a feature: companies pay extra so they don’t have to worry about reputational damage when their product appears next to racist content. The workforce you need isn’t that large compared to the revenue these platforms can potentially generate.
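To make the “AI as triage” idea concrete, here’s a minimal sketch in Python, assuming the Hugging Face transformers library and an off-the-shelf sentiment model; the model name, threshold, and sample posts are all illustrative, not a recommendation. The point is the workflow: the model never removes anything on its own, it just routes negative or low-confidence posts into a queue that humans actually review.

```python
# A minimal triage sketch, not production moderation tooling.
from transformers import pipeline

# Model name is an assumption for illustration; any toxicity or
# disinformation classifier could be swapped in here.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

REVIEW_THRESHOLD = 0.90  # assumed: below this confidence, a human decides


def triage(posts):
    """Split posts into auto-cleared items and a human-review queue."""
    cleared, review_queue = [], []
    for post in posts:
        result = classifier(post)[0]  # e.g. {"label": "NEGATIVE", "score": 0.98}
        if result["label"] == "NEGATIVE" or result["score"] < REVIEW_THRESHOLD:
            review_queue.append((post, result))  # flagged for a human
        else:
            cleared.append(post)
    return cleared, review_queue


cleared, queue = triage([
    "Loving the new community garden!",
    "BREAKING: ballots are being shredded, share before it gets deleted!!",
])
print(f"{len(queue)} post(s) queued for human review")
```

A real system would use classifiers trained for the actual harms (bots, deepfakes, coordinated campaigns), but the division of labour is the same: machines sort at scale, humans make the calls.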
I don’t mean to suggest any of this is easy or fail-safe. But it’s what I would do.