Facebook said it had displayed warnings on more than 180 million pieces of content viewed on Facebook by people in the U.S. between March 1 and U.S. Election Day 2020 that had been debunked by third-party fact checkers.
Facebook on Thursday said it had launched new artificial intelligence (AI)-powered systems to help detect misinformation on its platform.
The systems rely on several technologies, including ObjectDNA, details of which were published earlier this year in a study titled ‘An Analysis of Object Embeddings for Image Retrieval.’
Unlike typical computer vision tools, which look at the entire image in order to understand the content, ObjectDNA focuses on specific key objects within the image while ignoring background clutter. “This allows us to find reproductions of the claim that use pieces from an image we’ve already flagged, even if the overall pictures are very different from each other,” Facebook noted in a blog post.
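The object-level matching Facebook describes can be illustrated with a minimal sketch: compare embeddings of individual detected objects against the embedding of an object from an already-flagged image, rather than comparing whole images. All function names, vectors, and the threshold here are hypothetical stand-ins, not Facebook's code.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def contains_flagged_object(object_embeddings, flagged_embedding, threshold=0.9):
    """Return True if any detected object in a candidate image closely
    matches the embedding of an object from a previously flagged image."""
    return any(cosine(e, flagged_embedding) >= threshold
               for e in object_embeddings)

# Toy 3-d vectors standing in for real object embeddings.
flagged = [0.9, 0.1, 0.4]
# Candidate image: different overall, but one object matches closely.
candidates = [[0.1, 0.8, 0.2], [0.88, 0.12, 0.41]]
print(contains_flagged_object(candidates, flagged))  # True
```

The point of the sketch is that a match can fire even when most of the candidate image (the first vector) looks nothing like the flagged one.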
The California-based technology company’s AI system also uses LASER (Language-Agnostic SEntence Representations), a cross-language sentence-level embedding that helps evaluate the semantic similarity of sentences. It works for content containing text, images, or both.
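Because LASER maps sentences in different languages into one shared vector space, a claim can be compared against a bank of known flagged claims with a simple nearest-neighbour lookup. The sketch below assumes embeddings are precomputed and L2-normalized; real LASER vectors are 1024-dimensional, and the 2-d toy vectors, names, and threshold here are illustrative only.

```python
def nearest_flagged_claim(query_vec, flagged, threshold=0.8):
    """flagged: dict mapping claim text -> embedding vector.
    Returns the best-matching flagged claim above threshold, or None.
    Vectors are assumed L2-normalized, so the dot product is the
    cosine similarity."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    best_claim, best_score = None, threshold
    for claim, vec in flagged.items():
        score = dot(query_vec, vec)
        if score > best_score:
            best_claim, best_score = claim, score
    return best_claim

# Toy unit vectors standing in for multilingual sentence embeddings.
flagged = {"claim A": [1.0, 0.0], "claim B": [0.0, 1.0]}
print(nearest_flagged_claim([0.28, 0.96], flagged))  # claim B
```

In this scheme, a Spanish restatement of an English claim would land near the same point in the embedding space and match the same flagged entry.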
From March 1 through U.S. Election Day 2020, Facebook displayed warnings on more than 180 million pieces of content viewed on Facebook by people in the U.S. that were debunked by third-party fact checkers, the company said. Facebook credited its AI tools, stating that they helped flag likely problems for review and automatically find new instances of previously identified misinformation.
To apply warning labels, the company deployed SimSearchNet++, an improved image matching model that is trained using self-supervised learning to match variations of an image.
SimSearchNet++ is resilient to a wider variety of image manipulations, such as crops, blurs, and screenshots, making it crucial for a visuals-first platform such as Instagram, the company stated. Facebook said it also previously developed a set of systems to predict when two pieces of content convey the same meaning even though they look very different.
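A far simpler, classical illustration of matching an image against its variations is a perceptual difference hash, which fingerprints an image by comparing adjacent pixel brightnesses. This is not SimSearchNet++, which is a learned model, but it shows the underlying idea of a fingerprint that survives some manipulations (here, a uniform brightness change):

```python
def dhash_bits(pixels):
    """Difference hash over a 2D grayscale grid: one bit per
    horizontally adjacent pixel pair (is the left pixel brighter?)."""
    return [int(row[i] > row[i + 1])
            for row in pixels
            for i in range(len(row) - 1)]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Toy 4x4 grayscale grids; the second is a uniformly brightened copy.
img = [[10, 20, 30, 25], [5, 40, 35, 20],
       [60, 50, 10, 15], [80, 70, 65, 90]]
brighter = [[p + 12 for p in row] for row in img]
print(hamming(dhash_bits(img), dhash_bits(brighter)))  # 0: same fingerprint
```

Learned embedding models go much further than this, tolerating crops, blurs, and screenshots that would break a simple hash.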
Facebook also rolled out a deepfake detection model to combat misinformation in the form of videos, it said in the statement.