Advances in AI are progressing at a pace few ever imagined, and AI systems keep getting better over time. These advances make our lives easier and more comfortable on one hand, and scarier on the other.
In today’s internet age, news media are a critical source of information worldwide, and the spread of false information can lead to irreparable harm.
“Fake news” was a term few people used a few years ago, but it has since become one of the greatest threats to our society. To counter the consequences of fake news created by humans, many AI research companies are trying to develop systems that can detect it.
A philosophy often adopted for making AI systems better at their task is: “To know your enemy, you must become your enemy.” The same holds for systems meant to judge whether a given news article is real or fake: to detect fake news reliably, the system first needs to learn to create fake news itself.
OpenAI, a research company founded in December 2015 by Elon Musk, Sam Altman, and others, aims to develop safe AI systems that benefit humanity.
In February 2019, OpenAI introduced to the world its new development in text generation and language modeling, GPT-2. Soon after its release, it became a topic of great public concern and created considerable buzz because of its ability to write realistic fake news articles.
According to OpenAI, GPT-2 was trained to predict the next word in 40GB of internet text. The goal was a system that generates text by adapting to the style and content of a conditioning prompt, allowing users to produce realistic and coherent continuations on any topic of their choice.
However, OpenAI soon realized that texts generated by the algorithm were so convincing that it was hard to tell whether a human or a machine had written them.
When fed the starting line, “Russia has declared war on the United States after Donald Trump accidentally … “, GPT-2 wrote an entire fake news article:
Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.
Russia said it had “identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.” The White House said it was “extremely concerned by the Russian violation” of a treaty banning intermediate-range ballistic missiles.
The US and Russia had an uneasy relationship since 2014 when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine.
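For readers who want to experiment, this kind of prompt-conditioned generation can be reproduced with the publicly released GPT-2 weights. The sketch below uses the open-source Hugging Face transformers library rather than OpenAI’s own tooling; the model size and sampling parameters are illustrative choices, not the settings behind the passage above.

```python
# A minimal sketch of prompt-conditioned generation with the public GPT-2
# weights, via the Hugging Face `transformers` library. Sampling settings
# are illustrative, not OpenAI's original setup.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # smallest released model
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = ("Russia has declared war on the United States "
          "after Donald Trump accidentally")
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation: the model repeatedly predicts the next token,
# conditioned on the prompt plus everything generated so far.
output = model.generate(
    input_ids,
    max_length=100,                         # total tokens, prompt included
    do_sample=True,                         # sample instead of greedy decoding
    top_k=40,                               # sample from the 40 likeliest tokens
    pad_token_id=tokenizer.eos_token_id,    # GPT-2 has no pad token of its own
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```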
Seeing how convincingly the algorithm could write, the company decided not to release the full version to the world, fearing it would flood the internet with fake articles and news.
Despite those early apprehensions about the risk of fake news, OpenAI released an updated version of GPT-2 in May 2019, said to be six times more powerful than the previous release.
OpenAI also says it has an even more powerful version of GPT-2 that has not yet been released, and that it will release it a few months down the line if the versions already out are not used for malicious purposes.
On the detection side, researchers from Harvard University and the MIT-IBM Watson AI Lab are developing GLTR (Giant Language model Test Room), a tool that estimates the likelihood that a passage was generated by an algorithm such as GPT-2.
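GLTR’s core observation is that a language model tends to choose words it itself ranks as highly likely, while human writers pick “surprising”, low-rank words more often. The sketch below is a rough re-implementation of that idea against the public GPT-2 model, not GLTR’s actual code; the top-10 threshold is an illustrative choice.

```python
# A rough sketch of GLTR's core idea (not GLTR's actual code): score each
# token of a passage by its rank in GPT-2's predicted next-token
# distribution. Machine-generated text tends to be dominated by
# highly-ranked tokens; human text uses more low-rank "surprises".
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids).logits          # shape: (1, seq_len, vocab)
    ranks = []
    for pos in range(ids.shape[1] - 1):
        actual_next = ids[0, pos + 1]
        # Sort the vocabulary by predicted likelihood at this position,
        # then find where the token that actually appears is ranked.
        order = torch.argsort(logits[0, pos], descending=True)
        ranks.append((order == actual_next).nonzero().item())
    return ranks

passage = "Russia has declared war on the United States after ..."
ranks = token_ranks(passage)
# The fraction of tokens in the model's top 10 is a crude "generated-ness"
# signal; the threshold is illustrative, not GLTR's.
print(sum(r < 10 for r in ranks) / len(ranks))
```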
It will be interesting to see whether GLTR can flag passages generated with GPT-2’s strongest version as likely fake.