If we allow the far-right to continue merging political power with AI without guardrails, we will see the rise of a system where freedom is algorithmically rationed.
How… does… AI… solve the problems of corporate AI? What is the strategy?

I didn’t say AI would solve that, but I’ll reiterate the point I’m making differently:
Spreading awareness of how AI operates: what it does and what it doesn’t, what it’s good at and bad at, and how it’s changing (such as knowing there are hundreds, if not thousands, of regularly used AI models out there, some owned by corporations, others open source, and still others somewhere in between). This reduces misconceptions and makes people more skeptical when they see material that might have been AI generated or AI assisted being passed off as real. It’s especially important to teach during transition periods like now, when AI material is still relatively easy to distinguish from real material.
_
People creating a hostile environment where AI isn’t allowed to be discussed, analyzed, or used in ethical, good-faith ways make it more likely that some people who desperately need to be aware of #1 stay ignorant. They will just see AI as a boogeyman, failing to realize that, e.g., AI slop isn’t the only type of material AI can produce. That makes them more susceptible to seeing something made by AI and believing it, or misjudging the reality of the material.
_
Corporations, and those without an incentive to use AI ethically, will not be bothered by #2, and will even rejoice that people aren’t spending time on #1. It will make it easier for them to claw AI technology for themselves through obscurity, legislation, and walled gardens, and the less knowledge there is in the general population, the more easily it can be used to influence people. Propaganda works, the propagandist is always looking for technology that lets them reach more people, and ill-informed people are easier to manipulate.
_
And lastly, we must reward those who try to achieve #1 and avoid #2, while punishing those in #3. We must reward those who use the technology as ethically and responsibly as possible, since any prospect of completely ridding the world of AI is futile at this point, and a lot of care will be needed to avoid the pitfalls where #3 gains the upper hand.
#1 doesn’t have anything to do with liking it, though, that’s just… knowing what it is. I know what it is, and I dislike it. Like a bad movie, it’s really easy to do.
It will make it easier for them to claw AI technology for themselves
Okay, but why do we want it? What does it do for us? So what if only corporations have it: it sucks.
Do you remember Bitcoin and NFTs? Those didn’t pan out very well. They were solutions looking for problems. What is it about AI that I should be excited about?
It can’t simultaneously be super easy and bad, yet also a massive propaganda tool. You can definitely dislike it for legitimate reasons though.
I’m not trying to anger you or anything, but if you know about #1, you should also know why it’s a good tool for misinformation. Or you might, as I proposed, be part of the group that incorrectly assumes they already know all about it, and who will be more likely to fall for AI propaganda in the future.
E.g. Trump posting pictures of himself as the pope, with Gaza as a paradise, etc.
These still have some AI tells, and Trump is a grifting moron with no morals or ethics, so even if it wasn’t AI you would still be skeptical.
But one of these days someone like him that you don’t know ahead of time is going to make an image or a video that’s just plausible enough to spread virally. And it will be used to manufacture legitimacy for something horrible, as other propaganda has in the past.
but why do we want it? What does it do for us?
You yourself might not want it, and that’s totally fine.
It’s a very helpful tool for creatives such as VFX artists and game developers, who are masters of making things that aren’t real seem real. The difference is that they don’t want to lie about or obfuscate what tools they use, but #2 gives them a huge incentive to do just that: not because they don’t want to disclose it, but because chronically overworked and underpaid people don’t also have time to deal with a hate mob on the side.
And I don’t mean they use it as a replacement for their normal work, or just to sit around and do nothing; they integrate it into their processes, either to enhance quality or to reduce time spent on tasks with little creative input. If you don’t believe that’s what they use it for, look at the games on Steam with at least a 75% rating, 10,000 reviews, and an AI disclosure.
And that’s a self-perpetuating cycle. People hide their AI usage to avoid hate -> fewer people become aware of the depth of what it can be used for, leaving them to think AI slop and other obviously AI-generated material are all it can do -> which biases them against any kind of AI usage, because they think using it is easy or just lazy -> which in turn makes people hide their AI usage more.
By giving creatives the room to teach others about what AI helped them do, regardless of whether they like or dislike it (through behind-the-scenes features, artbooks, guides, etc.), we increase awareness in the general population of what it can actually do, and of the fact that it is being used. Just imagine a world where you never knew VFX existed, or thought it was only used for that one stock explosion and nothing else.
PS. Bitcoin is still around and decently big. I’m not a fan of it myself, but that’s just objective reality. NFTs have always been mostly good for scams. But really, these technologies have little bearing on the debate around AI: history is littered with technologies that didn’t pan out, but it’s the ones that do that cause shifts. AI is such a technology in my eyes.