
Microsoft will identify manipulated media with a confidence score

It will tell you how likely it is that the content you’re watching is fake.


The pandemic and upcoming US presidential election have made misinformation even more dangerous than usual. So, as part of its Defending Democracy Program, Microsoft is rolling out new tools to combat misinformation, specifically deepfakes.

The new Microsoft Video Authenticator will analyze still photos and videos and report how likely it is that the media has been artificially manipulated. Users will see a confidence score, or percentage chance, that the media is manipulated, and for videos, that score updates in real time on each frame. Microsoft says the tool works by spotting subtle artifacts that might not be visible to the human eye.
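Microsoft hasn't published the tool's model or an API, but the per-frame scoring it describes can be sketched roughly like this. Everything below is an assumption for illustration: `score_frame` is a stand-in for a real deepfake detector, and `clip.mp4` is a hypothetical input file.

```python
# Hypothetical sketch of per-frame manipulation scoring, in the spirit of
# Video Authenticator. Not Microsoft's code; the detector is a placeholder.
import cv2  # pip install opencv-python


def score_frame(frame) -> float:
    """Placeholder detector: returns a 0-1 'chance of manipulation'.
    A real system would run a trained model over blending artifacts."""
    return 0.0  # assumption: no real model available here


def scan_video(path: str) -> None:
    cap = cv2.VideoCapture(path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        confidence = score_frame(frame)
        # Report a confidence score for each frame, as the article describes.
        print(f"frame {frame_idx}: {confidence:.0%} chance of manipulation")
        frame_idx += 1
    cap.release()


if __name__ == "__main__":
    scan_video("clip.mp4")  # hypothetical input file
```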

The company will also let content producers add digital hashes and certificates to their content. Those labels will be included in the content's metadata, and a reader, which will exist as a browser extension, will check that the certificates still match the hashes. That check helps assure viewers the content is authentic and hasn't been altered since it was published. A rough sketch of how such a check could work follows.
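Microsoft hasn't detailed its metadata format, so the snippet below is only a minimal illustration of the general idea: the producer signs a hash of the file, and the reader recomputes the hash and verifies the signature against the producer's certificate. The RSA/PKCS#1 v1.5 choice and the `verify_content` helper are assumptions, not Microsoft's actual scheme.

```python
# Minimal sketch of the hash-plus-certificate idea, assuming an RSA-signed
# SHA-256 digest. This illustrates the concept, not Microsoft's format.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding


def verify_content(path: str, signature: bytes, cert_pem: bytes) -> bool:
    """Return True if the file matches what the producer signed."""
    data = open(path, "rb").read()
    cert = x509.load_pem_x509_certificate(cert_pem)
    try:
        # verify() recomputes the SHA-256 hash of the content and checks
        # the signature against the certificate's public key.
        cert.public_key().verify(
            signature, data, padding.PKCS1v15(), hashes.SHA256()
        )
        return True   # content is unchanged and signed by this certificate
    except InvalidSignature:
        return False  # content was altered or signed by someone else
```

A browser-extension reader would pull the signature and certificate out of the content's metadata, run a check like this, and flag anything that fails.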

Microsoft isn't stopping here. It says the fight against misinformation in its many forms will be ongoing and that the company is committed to pushing back against bogus content. Microsoft isn't alone: TikTok banned deepfakes to fight election meddling, and Twitter recently labeled a video created by the White House social media director and retweeted by Donald Trump as "manipulated media."
