
Building trust in the age of AI

Written 07 March, 2019, 5 minutes to read

The jury is still out on whether AI is humanity’s friend or foe. It’s improving society in countless ways – making court cases fairer and saving lives by predicting floods – but the idea of using AI to supplement human judgement and decision-making still makes us uncomfortable. So what will it take for us to make peace with a technology that holds so much promise to improve our lives?

Why we distrust AI

Discrimination, opaque logic and dystopian narratives lead us to think sceptically about current uses of AI. But without addressing our trust issues, we are limiting the true potential of humanity’s most promising technology. Here are some of the main reasons we mistrust AI and what we need to do to overcome them.

Discrimination

Sensationalist headlines about discriminatory algorithms make it hard to forget that AI is not neutral. Despite being machines, AI systems don’t provide a wholly objective truth – they only represent “truths” based on the data they are given to analyze. And that data can be seriously flawed.

The representativeness of an algorithm’s training dataset poses a particularly large problem here. Clear problems emerge when algorithms base decision-making on data which doesn’t fully reflect the real world, as demonstrated by a famous MIT project testing the effectiveness of facial recognition technology on people of different ethnicities. Every tool tested proved far less effective at recognizing black women than white men, because the algorithms behind them were trained on predominantly white faces. Racial discrimination was embedded in the AI from the very start.

But even if the training data is representative, historical biases present within society can be trained into an algorithm. Take Amazon as an example: the company spent four years developing a recruitment algorithm to help select the best candidates. But because the algorithm was trained on past hires, it undermined the recruitment process by encoding a bias in favour of men for technical roles.
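To see how this happens in practice, here is a minimal sketch in Python. Everything in it is synthetic and hypothetical (it is not Amazon’s system, and the features are invented): a model is trained on past hiring decisions that favoured one group, and it faithfully learns that preference.

```python
# A minimal sketch with synthetic data and hypothetical features.
# It shows how a model trained on historically biased hiring
# decisions absorbs that bias. Not any real recruitment system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)            # genuine qualification signal
group = rng.integers(0, 2, size=n)    # 0 = group A, 1 = group B

# Historical hires: driven by skill, but past recruiters systematically
# favoured group A. This is the bias hiding in the training labels.
hired = (skill + (group == 0) + rng.normal(scale=0.5, size=n)) > 0.8

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model learns to penalise group B, even though group membership
# says nothing about real qualification.
print("weight on skill:", round(model.coef_[0][0], 2))  # positive
print("weight on group:", round(model.coef_[0][1], 2))  # strongly negative
```

Nobody told the model to discriminate; the historical labels did.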

A good AI will minimize potential biases through careful consideration of the data used to train the algorithm. But failing to design algorithms which adequately account for these sources of bias will only perpetuate mistrust – particularly within the communities most frequently affected by them.

Lack of transparency

This bias is so hard to detect because of a second problem with AI: a lack of transparency. Due to the complexity of algorithmic calculations and the difficulty of explaining their logic, AI algorithms are often described as ‘black boxes’. In relation to discrimination, this proves especially problematic, as it is hard to determine whether an algorithm is fair until something has already gone wrong.

Algorithms can consider far more factors than humans and come up with answers which seem counter-intuitive, but are in fact more effective. This was evident in the matchup between AlphaGo, an AI created by DeepMind for playing the game Go, and the world’s top-ranked Go player. Though many of the moves chosen by the AI were unexpected, they ultimately proved more effective, with AlphaGo winning every match.

This cuts to the heart of the matter: even if we have confidence in the fairness of an algorithm’s output, implementing decisions that we don’t understand makes us uncomfortable. Without an indication of how and why a decision was made, it is difficult to trust outputs which seem counter-intuitive.

Future dystopias

Perhaps the greatest fear surrounding AI is the threat it could pose when it begins to outsmart humans. Current AI has narrow intelligence, in the sense that it is extremely good at performing one assigned task. Looking to the future, there is fear over the creation of ‘artificial general intelligence’ – AI which will be able to outperform humans in any number of tasks.

The fear of hyper-intelligent AI stems from the alignment problem – how can we ensure that these intelligent systems will still do what we want them to? Cinematic depictions of human-like machines wielding guns seem an unlikely future. Instead, the risk stems from AI finding the most effective solution to a problem, even if that solution disregards human life. It is not hard to imagine an AI programmed to solve global warming wiping out humanity as a step towards achieving its goal.

Learning to trust AI

Though a seemingly distant future, the existential risk around AI leaves us cautious about embracing further developments. So how can we overcome this mistrust of AI? What can be done to turn these black boxes into fair, understandable and safe systems? And will we ever fully accept the technology?

Fairness

The first step is to overcome the algorithmic discrimination guiding AI. There is no simple solution, but a number of strategies can help to mitigate the problem. Establishing standards and practices around AI can help ensure that representative data, controlled for historical biases, is used. In parallel, having ethicists work alongside programmers and provide oversight can help prevent simple mistakes from being encoded. Taking these steps will reduce negative publicity and, hopefully, ease societal worries over unfair AI.

Relatability

We also need to provide explanations for AI’s decisions. This presents a larger problem, as the workings of an algorithm are highly opaque. Even if the exact logic were provided to a person, it is highly unlikely that they would be able to make sense of it.

But to build trust, we need to make the decision-making process understandable, not just transparent. ‘Counterfactual explanation’ offers one option for making AI explainable: instead of showing users the logic behind an algorithm, it outlines the minimum feature(s) that would have to change to receive an alternative response. Though this solution is far from perfect, providing a rationale, and at least part of the reasoning behind a decision, makes AI less mysterious.
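To make the idea concrete, here is a minimal sketch using a toy loan-approval model. The features (income and debt) and all the numbers are hypothetical, and the search is deliberately naive rather than any real explainability library: it nudges a single feature until the decision flips, and that minimal change becomes the explanation.

```python
# A minimal sketch of a counterfactual explanation on a toy
# loan-approval model. Features and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.uniform([20, 0], [120, 50], size=(500, 2))   # income, debt (k$)
y = X[:, 0] - 1.5 * X[:, 1] > 40                     # toy approval rule
model = LogisticRegression().fit(X, y)

def counterfactual(x, feature, step=0.5, max_steps=200):
    """Nudge one feature upwards until the model's decision flips."""
    original = model.predict([x])[0]
    candidate = np.array(x, dtype=float)
    for _ in range(max_steps):
        candidate[feature] += step
        if model.predict([candidate])[0] != original:
            return candidate
    return None  # no flip found within the search budget

applicant = np.array([45.0, 20.0])          # currently rejected
cf = counterfactual(applicant, feature=0)   # raise income until approved
if cf is not None:
    print(f"Rejected at {applicant[0]:.0f}k income; "
          f"approved at {cf[0]:.0f}k.")
```

The resulting statement – roughly “you were rejected at 45k of income, but would have been approved at about 70k” – is something a person can act on without ever seeing the model’s internals.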

Time and direct experience

Often, mistrust of AI does not stem from a rational concern over potential harm – even the most innocuous forms of AI are met with hesitation. Automating timesheets, for example, can bring huge benefits to an organization and presents little risk, yet there can still be resistance to adopting the unknown. Building trust in AI requires a change of mindset – and this won’t happen overnight.

Time is perhaps the only factor which will truly lead to trust in the age of AI. As adoption of AI spreads and becomes more present in our daily lives, dystopian predictions will be outweighed by a lived reality of positive experiences. Adopting fair and understandable AI will pave the way for a future made easier by technology – and hopefully that change won’t be too long in the making.
