Is NSFW AI Too Strict or Too Lenient?

Other experts argue that NSFW AI systems should filter to something closer to a traditional R-rated standard, and that the norms suggested by Drashti, which would effectively permit only "rated G" material, are simply too strict. Where a given system lands depends largely on its implementation and the data used for training. Recent studies have shown, for example, that some NSFW AI models produce roughly 10% more false positives than others; in short, they are more likely to mistakenly flag non-explicit content as explicit. This variation underscores how hard it is to strike the right balance.

Facebook itself has admitted that its NSFW AI system had a false negative rate of up to 15%, meaning roughly one in seven instances of explicit content on the site went undetected. That figure highlights how lenient some models are, and it is one way harmful content slips through. Twitter, meanwhile, as noted in both NewsletterX's 2023 report and the QuarterlyJournal story covered here, sets a higher bar: its system tolerated a false positive rate of just 5%, with the result that more material got categorized as adult.
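To make those figures concrete, here is a minimal sketch of how false positive and false negative rates are typically computed from labeled moderation decisions. The function and data are purely illustrative assumptions and do not reflect any platform's actual tooling.

```python
# Minimal sketch: computing false positive and false negative rates for an
# NSFW classifier from labeled examples. Names and data are illustrative only.

def moderation_error_rates(labels, predictions):
    """labels/predictions: 1 = explicit, 0 = non-explicit."""
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    negatives = sum(1 for y in labels if y == 0)
    positives = sum(1 for y in labels if y == 1)
    false_positive_rate = fp / negatives if negatives else 0.0
    false_negative_rate = fn / positives if positives else 0.0
    return false_positive_rate, false_negative_rate

# A 15% false negative rate means roughly one in seven explicit items slips
# through; a 5% false positive cap means at most about 1 in 20 non-explicit
# items may be wrongly flagged.
```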

The choice of algorithm and hyperparameters determines where a system lands on the spectrum between strictness and leniency. Deep learning models tend to be more accurate overall, but tuning them for high sensitivity can push their false positive rates up. That trade-off is consistent with work at Stanford University, where researchers built a deep learning model that filtered out 95% of explicit content but also occasionally screened material that did not need to be blocked.
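As a rough illustration of that trade-off, the sketch below applies two different decision thresholds to the same hypothetical model scores: a lower threshold yields a stricter filter with more false positives, while a higher one is more lenient and lets more through. The scores and threshold values are assumptions for illustration, not taken from any real system.

```python
# Sketch of the strictness/leniency trade-off: the same model scores can be
# thresholded differently. A lower threshold is stricter (more false
# positives, fewer false negatives); a higher one is more lenient.
# The scores below are hypothetical model outputs, not from any real system.

def classify(scores, threshold):
    """scores: model probabilities that an item is explicit."""
    return [1 if s >= threshold else 0 for s in scores]

scores = [0.10, 0.35, 0.55, 0.80, 0.95]      # hypothetical outputs
strict = classify(scores, threshold=0.30)    # flags 4 of 5 items
lenient = classify(scores, threshold=0.70)   # flags 2 of 5 items
```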

In more practical settings, NSFW AI is used to moderate content on user-generated platforms such as YouTube. YouTube runs each upload through an AI system that evaluates both the video itself and its description, which is also where over-censorship tends to surface. In one often-cited case, an art documentary was flagged because the filter's sensitivity had been set far too aggressively.
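A platform screening both the media and its description might combine the two signals along the lines of the hypothetical sketch below. The scores, thresholds, and routing labels here are assumptions, and this is not YouTube's actual pipeline; routing borderline cases to human review is simply one common way to catch edge cases like the art documentary mentioned above.

```python
# Hypothetical sketch of multi-signal moderation: combine a visual score and
# a text (title/description) score, then route borderline cases to review.
# Illustrative only; not any platform's actual pipeline.

def route_upload(visual_score, text_score, block_at=0.9, review_at=0.6):
    combined = max(visual_score, text_score)  # err on the cautious side
    if combined >= block_at:
        return "block"
    if combined >= review_at:
        return "human_review"  # catches edge cases such as art documentaries
    return "allow"

print(route_upload(visual_score=0.72, text_score=0.10))  # -> "human_review"
```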

Moreover, leniency or strictness in NSFW AI shapes the user experience. Instagram, for example, came under fire when users felt its NSFW AI was too lax, reporting that they were still seeing explicit images even with the NSFW content filters enabled. On the other hand, stricter regimes draw criticism when users find their legitimate content being censored.

These systems are also heavily influenced by regulatory requirements. There are ethical concerns as well: building moderation tools that ingest vast amounts of explicit imagery means drawing on material from an industry with known links to human trafficking and exploitation. In the EU, GDPR rules constrain how images and other personal data can be collected and processed, and the way data is accepted and used ultimately determines how stringent or lax the AI's operation can be.

In the end, this points to a larger issue: how difficult it is to tune content moderation systems that must avoid being too permissive without overreaching in the other direction. Check out nsfw ai for a closer look at how these methods are evolving.
