Should Artificial Intelligence Be Regulated?
By Henry Silva, AI developer
Artificial Intelligence is rapidly taking over our daily lives. AI-driven algorithms have evolved from simple and niche tools to vast and comprehensive technologies that have already transformed the transportation industry and improved the way we detect and deal with mental health issues, among other advances. But some scientists and experts are raising important concerns over whether there ought to be a limit to how powerful AI becomes. They ask questions about how companies and governments use AI. And, more specifically, they want to know how these increasingly powerful technologies will affect us.
Many big tech companies are already developing tools in “high-risk areas of AI,” such as facial recognition and bank lending — applications that are known to be influenced by racial and gender bias. These tools are not only being developed at a fast pace but also sold to institutions such as the military and law enforcement agencies, which do not always prioritize human rights and personal liberties.
And these technologies are not some far-fetched threat; they are here, and they are affecting you — especially if you belong to underrepresented groups such as the Black and Latino communities. Regulating AI now could be the decisive factor in whether you or your child can obtain a university degree, a bank loan, or even citizenship in the future. Enforcing regulations early on could be the difference between a more just and equal future and an even more racist and biased one.
All of this raises the question: How should AI be regulated?
Well, the European Union might have a good idea. The E.U. recently released a proposal for governing the rise of AI technologies. The idea is to set strict, well-defined rules for developing AI, in the hope of steering the technology in a humane direction. The policy would regulate the emerging technology and hold tech companies accountable for their actions. The proposal also classifies different types of AI tools and identifies which ones are more dangerous. Some of the aforementioned “high-risk areas,” like facial recognition in public spaces, might be banned altogether.
As novel and beneficial as these policies might be, we can only hope they encourage the U.S. government to take similar action. After all, the U.S. is the world's biggest producer of technology, yet it continues to lack strict rules regarding AI. And although state governments and other institutions have been trying to regulate this technology, we still haven't seen how our country will handle it at the federal level. The hope is that our government will pursue regulation with an eye toward treating people equally.