Misuse and Abuse in Artificial Intelligence

May 27, 2021
By Henry Silva

Artificial intelligence is steadily taking over the modern world and economy. Today, more than 37% of businesses and organizations around the world use AI in their operations, a figure that will only grow given the vast amount of research being done in the area. As we’ve discussed before, AI is already part of our daily lives: it informs lending decisions, drives vehicles, and even improves the performance of small businesses. In a context of such rapid expansion, misuse is inevitable. So what can we do about AI misuse, and what are its consequences?

First, what exactly counts as AI misuse? Put simply, AI misuse occurs when people or organizations use AI with ill intent. Misuse is different from bias: bias is a consequence, often unintended, of the skewed data used to train AI systems, whereas misuse is the result of people deliberately turning AI to unethical and harmful ends, such as violating privacy, censoring users on online platforms, or illegally harvesting data from the Internet.

Among the biggest and most harmful examples of AI misuse are the so-called deepfakes. The term is a blend of “deep learning” and “fake,” and it describes AI models that generate convincing fake media: images, audio, and video. What began as an interesting AI research topic has become an enormous online problem. Malicious actors use deepfakes to target and harm individuals and groups, for instance by creating fake pornographic videos of celebrities or fake audio recordings used to blackmail companies and individuals and steal money from them. These are people who understand exactly how to turn AI technology to harmful ends.
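
To make the “deep learning plus fake media” idea concrete, here is a minimal, illustrative sketch of the adversarial training loop behind many deepfake generators. It uses PyTorch, and everything in it is an assumption made for the example: the tiny network sizes are invented, and random vectors stand in for real images or audio. It is a toy of the technique, not a working deepfake system.

    import torch
    import torch.nn as nn

    # Toy illustration of the adversarial setup behind many deepfake
    # generators: a generator learns to produce fakes that a discriminator
    # cannot tell apart from real samples. All names and sizes here are
    # invented for the example; random vectors stand in for real media.

    LATENT_DIM, MEDIA_DIM = 16, 64

    generator = nn.Sequential(
        nn.Linear(LATENT_DIM, 128), nn.ReLU(),
        nn.Linear(128, MEDIA_DIM), nn.Tanh(),    # a fake "media" sample
    )
    discriminator = nn.Sequential(
        nn.Linear(MEDIA_DIM, 128), nn.ReLU(),
        nn.Linear(128, 1),                       # real-vs-fake logit
    )

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(200):
        real = torch.randn(32, MEDIA_DIM)        # stand-in for real media
        fake = generator(torch.randn(32, LATENT_DIM))

        # Discriminator: label real samples 1 and generated samples 0.
        d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
                  + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: try to make the discriminator call its fakes "real".
        g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The essential dynamic is the tug-of-war: the generator keeps improving until its fakes are hard to distinguish from real samples, which is exactly what makes the resulting media so convincing.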

Another well-known misuse of artificial intelligence is the AI-based password guesser: a program that systematically tries candidate passwords against accounts on different websites and services, or against leaked password databases. Such guessers have been around for a long time, but they are becoming far more powerful by employing AI to make better-informed guesses. One such system was reportedly able to guess more than 25% of the passwords in a set of leaked LinkedIn credentials. That does not mean a quarter of all LinkedIn users had their passwords stolen; it means that, given leaked data to work from, the model correctly guessed roughly one in four passwords, an unprecedented hit rate for this kind of program. By obtaining users’ passwords so easily, attackers massively expand their capacity for harm: leaking sensitive information, stealing money from bank accounts, and more.
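
For illustration, here is a minimal sketch of the guess-and-check loop at the core of any password guesser. The “leaked” hashes and the wordlist are invented for the example; real attacks run against actual leaked hash dumps.

    import hashlib

    # Toy guess-and-check loop at the heart of a password guesser. The
    # "leaked" hashes and the wordlist below are invented for the example;
    # real attacks run against actual leaked hash dumps, and the AI-based
    # variants sample candidates from a model trained on past leaks
    # instead of reading them from a fixed list.

    leaked_hashes = {
        hashlib.sha256(pw.encode()).hexdigest()
        for pw in ("sunshine", "qwerty123")      # pretend leaked passwords
    }

    wordlist = ["password", "123456", "qwerty123", "letmein", "sunshine"]

    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() in leaked_hashes:
            print("cracked:", guess)

The only step AI changes is where the guesses come from: a model that has learned the statistical patterns of human-chosen passwords proposes far likelier candidates than any fixed wordlist.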

It’s clear that AI misuse can have major consequences for the online world and for society. Deepfakes can destroy people’s lives, and women are disproportionately likely to be targeted by deepfake attacks; password attacks can harm millions of Internet users by compromising their online accounts. And that is without counting the many other examples of AI misuse by large companies and institutions that erode our privacy and freedom of speech.

So what can we do about it? The answer seems to be regulation. The European Union is moving to regulate many activities considered AI misuse by imposing heavy fines on companies that fail to comply with its proposed rules. Activists, meanwhile, are protesting and working to turn lawmakers’ attention to the dangers of deepfakes, especially in the context of so-called “revenge porn.”

So far, the U.S. has positioned itself against the overregulation of AI, and strict regulations tackling the aforementioned issues are yet to be seen.