The Ethics of Artificial Intelligence
By Henry Silva, AI developer
Whenever a new technology becomes mainstream, arguments tend to emerge about its ethical use and how we, as a society, can harness its capabilities for our benefit. Artificial intelligence is no different. Experts and activists around the world are discussing how to make AI research a more ethical and humane endeavour. AI has already led to irresponsible outcomes, especially in underserved communities. Since this technology is changing the world in myriad ways, we need to make sure that AI development benefits everyone.
What exactly is AI ethics? According to the Alan Turing Institute: “AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.” In other words, AI ethics are the dos and don'ts of AI research and development. For example, it’s widely accepted that, when training a new AI algorithm, you should always account for biases, such as gender and racial biases, and remove at least the majority of them before making your algorithm available to the public.
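To make the idea of “accounting for biases” a little more concrete, here is a minimal sketch of one common audit: comparing a model's approval rates across demographic groups (sometimes called a demographic parity check). The decisions and group labels below are entirely hypothetical, and real-world audits use many richer metrics than this one.

```python
# Minimal sketch: measuring the gap in approval rates between two groups.
# All data here is hypothetical; this is one of many possible bias metrics.

def demographic_parity_gap(decisions, groups):
    """Return the difference between the highest and lowest
    approval rate across groups (0.0 means perfectly equal rates)."""
    rates = {}
    for g in set(groups):
        selected = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]

# 1 = approved, 0 = rejected; "A" and "B" are stand-in demographic groups
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Approval-rate gap between groups: {gap:.2f}")  # prints 0.50
```

A large gap like the one above would not prove the model is unfair on its own, but it is exactly the kind of signal that should trigger a closer review before release.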
But why do we need such a thing? The main reason is that AI is becoming an ever bigger part of our lives and is no longer confined to academia. For example, AI now helps decide which applicants receive a bank loan, who is hired for a job, and how medical care is directed to patients in hospitals. We therefore need to ensure that these technologies are developed in an ethical and humane manner.
Unfortunately, some issues in AI ethics remain hard for us to address as a global community. As AI models grow in both size and complexity, even the scientists and programmers who create them can’t entirely explain how they work. This becomes a problem when we use AI in high-stakes scenarios, like the aforementioned loan approval and hiring. For AI to be considered ethical, we need to be able to convincingly explain why it makes certain decisions. A person who is rejected for a loan, for example, has the right to understand why that decision was made. The argument that the AI behind the decision “just works” is not enough. If we want AI to be ethical, we need to put more effort into explaining how it makes its decisions.
But, thankfully, increasing effort is being put into AI ethics and into improving the quality of AI algorithms. Big tech companies are hiring staff dedicated to AI ethics; many major AI institutions are publishing comprehensive guidelines for ethical AI development; and governments are starting to regulate how AI can be built and used, in order to encourage more ethical development.
And so, even though our society still faces many issues stemming from a lack of AI ethics, we can be hopeful that the situation will improve and that AI development will become a more humane and responsible activity. As always, we should educate ourselves and keep learning and fighting for a better future for technology.