Twitter faces advertiser boycott over failure to police child sexual abuse material

Twitter’s no good, very bad year continues, as the company was forced this week to inform some advertisers that their ads had been displayed in the app alongside tweets soliciting child pornography and other abuse material.

As reported by Reuters:

Brands ranging from Walt Disney, NBCUniversal and Coca-Cola, to a children’s hospital, were among some 30 advertisers that appeared on the profile pages of Twitter accounts peddling links to exploitative material.

The discovery was made by cybersecurity group Ghost Data, which worked with Reuters to uncover the ad placement issues, dealing another blow to the app’s ongoing business prospects.

Already in a state of disarray amid the ongoing Elon Musk takeover saga, and following recent revelations from its former security chief that it is lax on data security and other measures, Twitter is now also facing an exodus of advertisers, with major brands such as Dyson, Mazda and Ecolab suspending their Twitter campaigns in response.

And that’s arguably the least concerning element of the discovery, with the Ghost Data report also identifying more than 500 accounts that openly shared or requested child sexual abuse material over a 20-day period.

Ghost Data says Twitter failed to remove more than 70% of the offending accounts during the study period.

The findings raise further questions about Twitter’s inability, or unwillingness, to deal with potentially harmful content, with The Verge reporting late last month that Twitter “cannot accurately detect child sexual exploitation and non-consensual nudity at scale”.

This finding stems from an investigation into Twitter’s proposed plan to give adult content creators the ability to start selling OnlyFans-style paid subscriptions within the app.

Rather than working to address the abundance of pornographic material on the platform, Twitter instead considered leaning into it, which would undoubtedly have increased the risk factor for advertisers who don’t want their promotions appearing next to potentially offensive tweets.

Which is likely already happening, on an even bigger scale than this new report suggests, as Twitter’s own internal investigation into its OnlyFans-esque proposal found that:

Twitter couldn’t safely allow adult creators to sell subscriptions because the company didn’t — and still doesn’t — effectively police harmful sexual content on the platform.

In other words, Twitter couldn’t risk facilitating the monetization of exploitative material in the app, and because it has no way to effectively police such content, it had to drop the proposal before it could gain any real traction.

With that in mind, these new findings come as no surprise — but again, the advertiser backlash is likely to be significant, which could force Twitter to launch a new crackdown anyway.

For its part, Twitter says it is investing more in resources dedicated to child safety, “including hiring new positions to write policies and implement solutions.”

So, great, Twitter is acting now. But these reports, based on reviews of Twitter’s own internal research, show that Twitter has been aware of this potential problem for some time – not child exploitation specifically, but concerns around adult content that it has no way to police.

In fact, Twitter is directly helping to promote adult content, albeit inadvertently. For example, in the “For You” section of my “Explore” tab (i.e. the front page of Explore in the app), Twitter consistently recommends that I follow “Facebook” as a topic, based on my tweets and the people I follow in the app.

Here are the tweets it highlighted as some of the top topic tweets for “Facebook” yesterday:

None of it is pornographic material as such, but I’d point out that if I tap through to one of these profiles, I’ll find it pretty quickly. And again, these tweets are highlighted by Twitter’s own topic tweet algorithm, which surfaces tweets based on engagement with posts that mention the topic term. These completely unrelated, off-topic tweets are then pushed, by Twitter itself, to users who have shown no interest in adult content.
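
To make that failure mode concrete, here’s a minimal sketch – purely illustrative, since Twitter’s actual ranking code isn’t public, and every name in it is hypothetical – of what selecting “topic tweets” by keyword mention and raw engagement looks like, and why high-engagement, off-topic posts can surface:

    from dataclasses import dataclass

    @dataclass
    class Tweet:
        text: str
        engagement: int  # simplified stand-in for likes + retweets + replies

    def top_topic_tweets(tweets: list[Tweet], topic: str, k: int = 3) -> list[Tweet]:
        # Select any tweet that merely mentions the topic term...
        mentions = [t for t in tweets if topic.lower() in t.text.lower()]
        # ...then rank purely by engagement, with no relevance check, so a
        # high-engagement post that is actually about something else wins.
        return sorted(mentions, key=lambda t: t.engagement, reverse=True)[:k]

    feed = [
        Tweet("Facebook announces new Reels features", engagement=40),
        Tweet("DMs open, link in bio (yes, this mentions Facebook)", engagement=900),
    ]
    print(top_topic_tweets(feed, "Facebook"))  # the off-topic post ranks first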

It is clear, based on all available evidence, that Twitter has a pornography problem and is doing little to address it.

Adult content distributors view Twitter as the best social network for advertising because it’s less restrictive than Facebook and has far broader reach than niche adult sites, while Twitter also gets the usage and engagement benefits of hosting material that other social platforms simply wouldn’t allow.

That’s probably why it’s been willing to turn a blind eye to this for so long, to the point that it’s now being highlighted as a much bigger problem.

Though it’s important to note that adult content, on its own, is not inherently problematic, at least among consenting adult users. It’s Twitter’s approach to child abuse and exploitative content that’s the real problem.

And Twitter’s systems are said to be “woefully inadequate” in this regard.

As reported by The Verge:

A 2021 report found that the processes Twitter uses to identify and remove child sexual exploitation material are woefully inadequate – largely manual at a time when larger companies are increasingly turning to automated systems that can catch material not flagged by PhotoDNA. Twitter’s primary enforcement software is “a legacy, unsupported tool” called RedPanda. “RedPanda is by far one of the most fragile, inefficient, and undersupported tools we have on offer,” an engineer quoted in the report said.

Indeed, further analysis of Twitter’s CSE detection systems found that, of the one million reports submitted each month, 84% contain newly discovered material – “none of which would be flagged” by Twitter’s systems.
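
That statistic follows directly from how hash-matching works: systems like PhotoDNA can only flag material that has already been catalogued and hashed. The sketch below is a loose illustration of that limitation, not Twitter’s actual pipeline – PhotoDNA’s perceptual hash is proprietary, so a standard cryptographic hash and a hypothetical hash database stand in:

    import hashlib

    # Hypothetical database of hashes for previously catalogued material.
    known_hashes: set[str] = {"<hash of a previously catalogued item>"}

    def is_flagged(image_bytes: bytes) -> bool:
        # An item is flagged only if its hash already exists in the database.
        digest = hashlib.sha256(image_bytes).hexdigest()
        return digest in known_hashes

    # Newly created material has no catalogued hash, so it always passes:
    print(is_flagged(b"never-seen-before content"))  # False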

So while it’s advertisers putting pressure on the company in this instance, it’s clear that Twitter’s problems extend well beyond ad placement alone.

Hitting Twitter’s bottom line, however, may be the only way to force the platform to act – though it’ll be interesting to see how willing, and able, Twitter is to take on a broader plan to address this issue right now, amid its ongoing ownership battle.

In its takeover agreement with Elon Musk, there is a provision that states that Twitter must:

“Use its commercially reasonable efforts to preserve material components of its current business organization substantially intact.”

In other words, Twitter can’t make any significant changes to its operational structure while it’s in the transition phase, which is currently in dispute as it heads toward a legal battle with Musk.

Would launching a major update to its CSE detection models count as a substantial change – substantial enough to alter the company’s operational structure as it stood at the time of the initial agreement?

Essentially, Twitter probably can’t make major changes right now. But it may need to, especially if more advertisers join this new boycott and push the company to take immediate action.

It’s likely to be a mess anyway, but it’s a huge concern for Twitter, which should rightly be held accountable for its systemic failures in this regard.
