Twitter To Start Labelling Fake News Under New Policy


After working on its manipulated media policy over the last few months, Twitter has this week unveiled its official rule against posting deceptive manipulated content, and is also launching a new label for detected edited material.

As per Twitter, its new, official rule on manipulated media usage is:

“You may not deceptively share synthetic or manipulated media that are likely to cause harm. In addition, we may label Tweets containing synthetic and manipulated media to help people understand their authenticity and to provide context.”

Twitter made the announcement on Tuesday in a blog post. The initial draft was announced in November. It said it collected user feedback ahead of finalizing the new policy, using the hashtag #TwitterPolicyFeedback.

The policy covers deepfake videos, which are edited using artificial intelligence or other advanced software to distort a person’s appearance or speech while appearing authentic, as well as other kinds of photos and videos that have been “substantially edited,” the company said.

With deepfakes potentially on the rise as a tool for misinformation, Twitter’s approach seems like a good one, allowing for satirical or light-hearted uses of the evolving form, while implementing clear warnings on potentially damaging content.

Indeed, deepfakes have become a major focus for online platforms in recent times, with both Google and Facebook also launching new research initiatives to help detect and act on such content. From a consumer perspective, deepfakes haven’t had a major impact as yet (that we know of), but the increased action from the major online players suggests a new wave is coming: as the technology becomes easier to use and more accessible, it will inevitably be employed as a tool for manipulating public opinion where possible.

Hopefully, these added warnings will at least slow any such momentum and prompt people to reconsider. It’ll likely be impossible to stop deepfakes from having any impact, but clear labeling like this could act as a significant deterrent.
