AI already writes science fiction and fake news

Artificial intelligence is already capable of creating fake news.

Recent advances in artificial intelligence make it one of the technologies expected to have the greatest impact on the fourth industrial revolution. Companies that integrate it into their work processes will gain a competitive advantage over those that have not yet done so, and international organizations estimate that it will contribute meaningfully to growth in Latin America's Gross Domestic Product (GDP).

Even as the technology makes unquestionable progress, more people and organizations are expressing concern about how it could be used. These concerns range from threats to computer security to worries about its effect on jobs.

Recently, however, a new concern has emerged: how this technology can be used to create fake news.

Fake news written by artificial intelligence

There is growing concern about fake news spread virally and organically by people on social networks. The danger of misinformation grows further with the possibility that artificial intelligence can automatically generate a false report.

This possibility led the artificial intelligence research institute OpenAI to decide not to release, in its entirety, a program that writes fake news.

In February, the MIT Technology Review reported on the institute's findings. Its researchers showed that a general-purpose language algorithm could be trained on a large volume of text from the web. The resulting AI model can translate, answer questions, and perform other useful tasks. However, they soon realized that the technology could also be used to create abusive content.

“We started to test it, and we quickly realized that it was possible to generate malicious content much more easily.”

Jack Clark, Policy Director at OpenAI

The program works from nothing more than an opening phrase such as “Russia declares war on the United States after Trump accidentally …”; the language-processing algorithm does the rest.
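As an illustration of this prompt-continuation idea (not OpenAI's internal tooling, which the article does not describe), a minimal sketch using the publicly released small version of the model through the open-source Hugging Face transformers library might look like this; the model name and generation parameters are assumptions for demonstration purposes:

```python
# Minimal sketch: continue an opening phrase with the publicly released
# small GPT-2 model via the Hugging Face "transformers" pipeline.
# Assumptions: the "gpt2" checkpoint and the generation settings below
# are illustrative choices, not the setup described in the article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Russia declares war on the United States after Trump accidentally"
outputs = generator(prompt, max_length=60, num_return_sequences=1)

# The model invents the rest of the "story" from the prompt alone.
print(outputs[0]["generated_text"])
```

Running the sketch shows how little human input is needed: the operator supplies only the headline fragment, and the model fabricates a plausible-sounding continuation.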

For Clark, the program shows how AI could be used to generate convincing fake news, social media posts, and other fabricated content. The OpenAI team is concerned that the tool could be used by climate change deniers or to manufacture scandals at election time.

Before long, artificial intelligence will be able to produce fake news, offensive tweets, and abusive comments that are ever more convincing.

“It is very clear that if this technology matures – and I give it one or two years – it could be used for misinformation and propaganda.”

Jack Clark, Policy Director at OpenAI

OpenAI tries to go a step further in these scenarios. The team not only conducts research in this field but also takes an active role in addressing the potential risks of artificial intelligence. The organization contributed to a report published in 2018 that lays out the dangers of this technology, including its risks for disinformation.

Concerns about how the technology could be used in the future led OpenAI to release only a simplified version of the algorithm.

Other AI dangers

OpenAI has previously warned about the need to keep some AI research secret because of its potential hazards. It took this position in a study published in 2018 together with the Universities of Oxford and Cambridge and the Electronic Frontier Foundation.

AI has the potential to improve industrial processes and make the economy more productive. However, the technology also creates new opportunities for criminals and oppressive governments.

The risks that have been pointed out include smarter scams for stealing information and identities, malware that spreads like an epidemic, robots used to kill, and the possibility of building a panopticon that monitors everyone's behavior.

Every new technology has generated new concerns

These scenarios add to existing concerns about the jobs this technology could eliminate. Which path the technology takes in the future depends on us, and on whether we steer away from the most catastrophic scenarios.

For Chinese billionaire Jack Ma, concerns about new technologies have been a constant over the past 200 years. At the World Artificial Intelligence Conference held at the end of August 2019 in Shanghai, the founder of e-commerce giant Alibaba noted that “In every technological revolution, people start to worry. In the last 200 years, we have worried that new technology will take our jobs.”

In the past, Jack Ma defended China's 996 work schedule, under which employees at technology companies work from 9 a.m. to 9 p.m., six days a week. However, at the conference in Shanghai, where Elon Musk was also present, Ma said that in the future such workdays will not be necessary and that people will be able to work less and less. “I believe that in the future people will be able to work three days a week, four hours a day.”

“I think that thanks to artificial intelligence, people will have more time to enjoy being human beings.”

Jack Ma, Founder of Alibaba