Deepfake technology is an Artificial Intelligence (AI) technique that generates realistic but fake videos of people doing or saying things they have never done or said. It has significant implications for determining the legitimacy of the information presented online today.
Deepfakes became notorious in 2017 when a Reddit user named "deepfakes" posted a fake pornographic video of a celebrity, swapping her face onto the original footage. Before recent technological advances, deepfakes were mainly a concern for people with thousands of photos and videos on the internet, since the technique required large amounts of source material.
However, with the development of apps like ZAO, and the ease and polish with which they create deepfake videos, this AI technique is spreading like wildfire among the general public and may soon become one of the biggest threats to our privacy.
Facebook is the world's biggest social media network, and the growing popularity of deepfake videos has created apprehension among the leadership of the social giant.
Owing to this anticipated risk, Facebook has decided to fight fire with fire: it now plans to develop robust technology to detect videos created with deepfake techniques.
To do this, Facebook has collaborated with Microsoft and the Partnership on AI, along with researchers from MIT, Cornell Tech, the University of Oxford, the University of Maryland College Park, UC Berkeley, and the University at Albany–SUNY, to build the Deepfake Detection Challenge (DFDC).
The DFDC is an open contest to develop an algorithm that detects deepfake videos and images, and Facebook is investing $10 million in grants and awards for teams that come up with ingenious solutions to the problem.
For an AI algorithm to work with high precision, it must be trained on a large dataset. To address the lack of such a dataset, Facebook has decided to create more deepfake videos itself.
However, Facebook users have nothing to worry about: the company has decided not to use any user data for this purpose. Instead, Facebook will hire paid actors to create the deepfake videos and images, which will then serve as the dataset for building an efficient deepfake-detection algorithm.
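At its core, the detection task this dataset enables is binary classification: given a video frame (or features extracted from it), label it real or fake. The article does not describe any specific DFDC model, so the following is only a minimal sketch of that framing, using synthetic feature vectors as hypothetical stand-ins for per-frame features and a plain logistic-regression classifier.

```python
import numpy as np

# Hypothetical illustration of deepfake detection as binary classification.
# The feature vectors below are synthetic stand-ins; real systems would
# extract features from labeled real/fake video frames.
rng = np.random.default_rng(0)

# Synthetic labeled dataset: "real" frame features cluster near 0,
# "fake" frame features cluster near 1.
X_real = rng.normal(0.0, 0.3, size=(200, 8))
X_fake = rng.normal(1.0, 0.3, size=(200, 8))
X = np.vstack([X_real, X_fake])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 = real, 1 = fake

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression trained with plain gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)          # predicted probability of "fake"
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

preds = (sigmoid(X @ w + b) > 0.5).astype(float)
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

The sketch also illustrates why Facebook's actor-made dataset matters: a classifier like this is only as good as the labeled examples it is trained on, and without a large, realistic set of fake videos there is nothing trustworthy to learn from.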
Facebook announced that the dataset will be released to teams participating in the challenge in December this year at the Conference on Neural Information Processing Systems (NeurIPS) in Vancouver, Canada.
Facebook itself will also participate in the challenge, but will not accept any prize money.
Before releasing the dataset to the public, however, Facebook will test its quality and other challenge parameters through a targeted technical working session at the International Conference on Computer Vision (ICCV) this October. Through this test, Facebook wants to ensure the quality of the Deepfake Detection Challenge.
To tackle a problem that is continuously evolving and spreading much like spam, Facebook has enlisted researchers, AI companies, communities, and AI enthusiasts around the world to build reliable algorithms against deepfakes.
Facebook is also working on rules and policies regarding misinformation such as deepfake videos on its platform, and will make those policies public once they are finalized.
It will be interesting to see whether the steps Facebook has taken to counter the adverse impacts of deepfakes will actually solve the problem.