Why Big Tech automated moderation fails…

Robots spotted “intimate content” and banned Mary McIntyre from Twitter – and robots then handled the appeal process, upholding the ban.

Now Mary has a choice: admit she broke the rules and shared “intimate content” – when she obviously has not – to regain access to her account, or… well, there is no “or”, as the appeal process has been exhausted.

Astronomer in Twitter limbo over ‘intimate’ meteor – BBC News

If a human at Twitter had been involved in the appeal process, it would have been immediately apparent that the automated moderation had made a mistake.

This is the same automated moderation that Twitter cites when questioned about how it protects vulnerable people from exploitation or abuse on its platform.

However, the real story here is this: for every false positive like this one that makes the headlines, how much “intimate content” is shared that should have been caught by the filters but was not? I believe there is no way to answer that question, particularly as Elon Musk has just slashed the workforce at Twitter.

Clive Catton MSc (Cyber Security) – by-line and other articles

Update

Following the publication of the BBC article, Mary McIntyre’s Twitter account has been reinstated!

Further Reading

‘Overtly sexual’ cow blocked as Facebook ad – BBC News

Smart Thinking Solutions supports this UK Government initiative:

Let’s stop abuse together – Stop Abuse Together (campaign.gov.uk)