
Building ethical AI

Written 09 May, 2019, 6 minutes to read

Artificial intelligence (AI) adoption is increasing at a rapid pace. It's already transforming everyday tasks to make our lives easier – filtering inbox spam, recommending products and optimizing search engine results. And it holds great potential to help us solve some of the world's biggest problems, like the looming environmental crisis.

But beyond the benefits and possibilities, AI is riddled with latent ethical issues. Unlike the "unintelligent" technology that preceded it, AI comes with a unique set of risks whose magnitude we still haven't fully gauged. So is it right to be wary of AI? Is the technology too "potentially dangerous" to use? What are the main ethical issues holding it back? And what can those building AI do to solve them?

Can AI make fair decisions?

A recent study by Gartner showed that AI adoption has grown by over 270% in the past four years. And yet, we still lack a shared understanding of what a "good" algorithm output means. At a superficial level, the answer seems obvious – the best output is the one which produces the most accurate results.

But scratch beneath the surface, and it becomes clear that there is an innate tension between decisions which are the "most accurate" (based on data inputs) and those which are fair. It's easy to believe that our machines are objective, but in the case of AI, decisions are influenced by the parameters defined within the algorithm and the datasets it learns from – both of which are chosen by humans.

So in practice, the most ‘accurate’ results are often amplifications of dangerous social biases. Generations of inequality have led to a situation where the datasets technicians use to train algorithms often unfairly skew the outputs.

Take the example of a UK police force which adopted an algorithm to assess the risk of a prisoner reoffending. Although the algorithm had high levels of accuracy – making mathematically correct calculations – its results disproportionately targeted people from poorer areas, unintentionally perpetuating social inequalities.
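To make that tension concrete, here is a minimal sketch using entirely made-up data (not drawn from any real system). It compares a model's overall accuracy with a very simple fairness check – the gap in how often each group is flagged as "high risk". Because the historical labels themselves reflect an unequal past, the model can score well on accuracy while still skewing heavily against one group.

```python
# Toy illustration with hypothetical data: an "accurate" model can still
# treat one group very differently from another.

groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]  # e.g. poorer vs wealthier area
labels      = [1,   1,   0,   1,   0,   0,   0,   0]    # historical outcomes (already skewed)
predictions = [1,   1,   0,   1,   0,   0,   1,   0]    # model output: 1 = flagged "high risk"

# Overall accuracy – the usual measure of a "good" output
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# Rate at which each group is flagged – a very simple fairness measure
def flag_rate(group):
    flags = [p for g, p in zip(groups, predictions) if g == group]
    return sum(flags) / len(flags)

gap = flag_rate("A") - flag_rate("B")

print(f"Accuracy: {accuracy:.0%}")                  # 88% – looks like a "good" algorithm
print(f"Flag rate, group A: {flag_rate('A'):.0%}")  # 75%
print(f"Flag rate, group B: {flag_rate('B'):.0%}")  # 25%
print(f"Gap between groups: {gap:.0%}")             # 50 percentage points
```

Real fairness audits use richer metrics and far more data, but the basic point holds: accuracy alone says nothing about who bears the cost of an algorithm's mistakes.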

So for our algorithms to be truly effective, it is important to establish what a fair output means. This base understanding would help ensure that technology is designed inclusively, and protects groups who have historically been discriminated against.

Who is responsible for AI's decisions?

Even with an understanding of what makes a good result, there is no guarantee that an algorithm won't produce an unwanted or harmful output. In some cases, these outputs are relatively harmless and can be easily corrected.

Take Timely for example – an AI time tracker that can automatically track work activity and create timesheets by itself. If a user isn’t completely happy with its draft time entries, they can actively alter them. The algorithm will then learn from their edits to understand what “correct” looks like.

But for other applications, using AI carries much higher levels of risk and decisions cannot be so easily redressed. AI used for self-driving cars is a prime example of this, and has already resulted in a number of injuries and deaths.

It raises a fundamental ethical question about responsibility. In the case of a death on the road, who is responsible: the programmer, the company, the user or the algorithm itself? But beyond this, who decides what rules an AI should follow in anticipation of a collision? Car manufacturers have an incentive to prioritize the safety of their passengers, but should this come at the cost of harming pedestrians? As ongoing research from MIT shows, there is no global consensus on these questions.

This puts pressure on the very core of how our societies and legal systems assign responsibility. In order to standardize where blame lies, we need to establish ethical norms of responsibility and codify them into our laws. Doing so would remove ambiguity over who is responsible and ensure AI is developed with full accountability – incentivizing improved oversight.

Aligning AI with human values

But there is also a need to ensure that AI is aligned with our own human values. One commentator has compared this alignment problem to the parable of King Midas, who wished everything he touched would turn to gold.

It doesn't end well for him – soon after receiving his gift, King Midas realizes that he can no longer eat food and ends up turning his daughter into gold. But what's the parallel here? That encoding overarching goals into our AI without accounting for specific values can lead to implementations we never intended.

For example, imagine programming a self-driving car to take you to the airport as quickly as possible, without any guidelines as to how: it could end in a helicopter chase and severe car sickness. Or picture programming a robot to cook food for your kids: without knowing any better, once all the food in the house runs out it might unwittingly cook your cat.

At present, the ethical risks of these misalignments are limited, since AI is used narrowly for a single or finite set of tasks. So for now at least, it is relatively easy to predict the potential problems that may arise – and limit any potential harm caused. But obviously, more complex or abstract applications will require a more sophisticated – and extensive – set of rules to guide AI behavior.

Artificial General Intelligence

Looking forward, there is an effort to develop artificial general intelligence (AGI), which does not focus on one specific task but instead operates across a number of different contexts. At the most basic level, this makes it far harder to foresee possible latent issues.

AGI presents a huge unknown in terms of control. Our human brains possess intellectual capabilities which have allowed us to become the dominant species on Earth. But when AGI supersedes us in this area, how can we embed values to ensure humans remain in control?

It's a somewhat abstract and distant problem, but close enough for Stephen Hawking and Bill Gates to have voiced their concerns. The worry stems from the fact that AGI would seek constant optimization, and improving its own intelligence is one way of doing this. Compared with narrow AI, AGI arguably presents a far greater opportunity for a self-reinforcing intelligence explosion, since it could potentially analyze the processes that produce its intelligence, improve upon them, and develop a successor which does the same.

While it may all seem too far away, the speed at which AGI could develop superintelligence creates a pressing need to discuss the alignment problem. Ensuring that AI aligns with a clear set of values – that we build human ethics into our machines – is an essential first step to ensuring the power of AI remains within human control.

Looking to harness the good side of AI? See what it means for employees and how to automate the small jobs you hate.
