@QuentinCallaghan@sopuli.xyz @lisko@sopuli.xyz @doo@sh.itjust.works @Moxvallix@sopuli.xyz
I’ve been getting complaints about footage that is technically non-violent but leaves enough to the imagination that it apparently bothers people, so I wanted to get input regarding where the NSFW line should be drawn.
The video posted here is a perfect example of where I think the limit is. The only complaint I can see is the discharge of a firearm. Does that count? Is ‘being able to put 2 and 2 together to know what happened’ enough to warrant flagging as NSFW? Was the previous status quo good enough?
Setting aside the obvious “I know it when I see it”, please share your thoughts.
Until the community reaches a better consensus on where the line should be, I will leave it temporarily flagged as NSFW.
The original idea of “NSFW” is to not show content that might be problematic for someone at work.
I personally don’t have any problem with violence being visible; an employer isn’t going to care, and nobody’s going to care if I’m viewing a violent video in a restaurant or something. But I’m American. Maybe there are places where there are cultural differences. I would like the option to not show nudity in thumbnails and inline images, but that’s not a factor for war videos. As far as I’m concerned, the community not being flagged NSFW is fine.
Abstract away from the original intent of NSFW and understand it as a tool to prevent the autoplay of videos, or as a warning that the content might not be “happy and normal”.
Submitted videos, which should be what is relevant here, don’t autoplay.
I’ll also add that:
While the community is not explicitly one devoted to combat videos, it should be pretty clear that that is part of what is here. If someone is going to browse such a community, then they may well get videos that contain soldiers being killed.
One argument might be “what if someone browsing All clicks on videos and finds videos that contain death morally objectionable”. I don’t browse All – I think that trying to whitelist makes much more sense than blacklisting – but my personal view is that anyone who does so is implicitly accepting that they’re going to get a firehose of content of all sorts. Some of that is going to be political statements that they don’t agree with. Some is going to be content with language that one might find objectionable. Some might be images that one finds repulsive – we have one person on !imageai@sh.itjust.works who rather famously likes making “gross” images. Some of it might just be offensive to various parties, like off-color jokes. A very considerable amount of it might be material that a parent might not want their six-year-old seeing, like discussions about sexuality or profanity. Some of it might be religiously unacceptable to various groups.
There are a very considerable number of things that some group, somewhere, might object to.
I do not think that it is reasonable to repurpose NSFW to be a “might not personally like” flag that is placed on anything that anyone out there might potentially not want to see. The scope there is simply too broad. Every group somewhere has their own personal preferences, and has an incentive to try to convert “All” into a feed that matches their personal interests.
I think that it’s fine to recognize that someone, somewhere, might have different views than someone else, and that one day, adding a curation system with finer-grained classification of content to the Threadiverse – perhaps with someone other than the submitter responsible for adding that variety of tags – may be a good technical solution. My personal view is that the idea of “taglists” that users or groups can publish and other users can subscribe to, which attach a classification to the content of other users, is a good way to do this. That is: User A submits content. User B – which might be a bot or a group of humans – adds that submitted item to a list they publish, with a classification that might include a recommendation to hide it, to bring it to the attention of a user, or simply to attach a content tag. User C can choose whether or not to subscribe to User B’s “tag feed”, and their client can decide how to interpret that feed. C’s client might delay content visibility to provide time for tagging to be applied to new content. Tag feeds could attach tags to users or to communities. That’s a scheme that I think might be workable, scales to an arbitrary number of content preferences, and permits fine-grained content classification, without anyone imposing their content preferences on anyone else or requiring submitters or moderators to understand and adapt to every form of content preference around the world.
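To make the taglist idea concrete, here’s a purely illustrative sketch in Python. Everything in it – the class names, the tag strings, the “hide”/“tag-only” recommendations – is hypothetical; none of this is an existing Threadiverse or ActivityPub API, just a minimal model of the A-submits / B-classifies / C-subscribes flow described above.

```python
from dataclasses import dataclass, field

# Hypothetical model of the "taglist" proposal. All names and tag
# vocabularies here are invented for illustration only.

@dataclass
class TagEntry:
    item_id: str           # the submitted post being classified
    tags: frozenset        # e.g. {"combat-video", "death-visible"}
    recommendation: str    # "hide", "warn", or "tag-only"

@dataclass
class TagFeed:
    publisher: str                               # User B: a bot or a group of humans
    entries: dict = field(default_factory=dict)  # item_id -> TagEntry

    def classify(self, item_id, tags, recommendation="tag-only"):
        """User B attaches a classification to someone else's content."""
        self.entries[item_id] = TagEntry(item_id, frozenset(tags), recommendation)

def visible(item_id, subscribed_feeds, hidden_tags):
    """User C's client: hide an item if any subscribed feed either
    recommends hiding it or attaches a tag C has opted out of."""
    for feed in subscribed_feeds:
        entry = feed.entries.get(item_id)
        if entry is None:
            continue
        if entry.recommendation == "hide" or entry.tags & hidden_tags:
            return False
    return True

# User A submits "post-123"; User B's feed classifies it; User C filters.
feed_b = TagFeed(publisher="user-b@example.social")
feed_b.classify("post-123", {"combat-video"}, recommendation="tag-only")

print(visible("post-123", [feed_b], hidden_tags={"combat-video"}))  # False
print(visible("post-123", [feed_b], hidden_tags=set()))             # True
```

The point of the sketch is that the filtering decision lives entirely on User C’s side: the same submission stays visible to anyone who doesn’t subscribe to the feed or doesn’t opt out of the tag, so no one’s preferences are imposed on anyone else.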
In the absence of such a solution, I am comfortable placing the burden on those who want a particular sort of content to do the filtration themselves, rather than just pushing that content to the other side of a wall for everyone else as well.
I do not think that trying to repurpose NSFW for other content-filtering purposes is a reasonable approach. The intent of NSFW is to let people browse content in public or workplace environments. It is obviously imperfect – there is no single, globally-identical set of norms for what is acceptable in public – but there is enough overlap that I think it’s at least possible to talk about that as a concept.
I will add one final note. I personally do not think that repurposing the NSFW flag in this way is justified – down that road lies an endless series of arguments with groups who don’t want to see various forms of content and want their preferences made the norm for everyone. But if the moderators here ultimately decide to do so, I would then advocate for a different change: keep !ukraine@sopuli.xyz’s NSFW flag off, but create a new sister community, !UkraineWarCombatVideos@sopuli.xyz. Move combat-video content to that community, and flag that community NSFW – or at least require submitters there to flag a video NSFW if it contains death (or a close view of death, or whatever). Have each community link to the other in the sidebar. That keeps all the content in the former community, other than combat video, visible under the prior rules. I think that there are many problems with this approach, starting with the “infinite groups with their own preferences who will make their own cases to alter the All feed” issue, and continuing with the fact that plenty of news articles contain non-war-video content and analysis but might also contain war videos – think The War Zone. Not to mention that the content on linked pages might change, something that The War Zone often does with embedded video updates. Many news sources do not engage in this form of censorship and are not going to bother segregating their own content. But this approach at least has only a subset of the problems that the proposed “flag the whole Ukraine community NSFW” approach has.
To be clear, I was not proposing keeping this community flagged NSFW permanently. That was a quick, naive solution I temporarily implemented, not realizing how much of an effect it would have. I’m really asking ‘what content should be required to have an NSFW flag?’, and unless you have additional concerns, we’ve settled on flagging combat videos that show the people involved (see pinned post / sidebar).