
The biggest problem with AI is man-made

Written 04 January, 2019 • 4 minutes to read

Artificial intelligence is growing at an unprecedented rate. Global spending is projected to top $52 billion by 2021, and the World Economic Forum anticipates that machines will perform more workplace tasks than humans by 2025. In a short space of time AI has gone from science fiction to science fact – it’s on our roads, in our skies and on our screens – and now humans are struggling to keep up. But the biggest issue with AI is one we’ve created ourselves.

Humans have long had a fascinating and complex relationship with AI. We’re simultaneously in awe of it and terrified of it, but for all our ideas about robot doctors or armies of drones, there’s something else going on, and it’s far more unsettling. Not because it involves a malevolent, sentient intelligence secretly plotting to overthrow us… but because such a logical, objective force is mirroring our very human faults back at us in a very non-human way.

The news that AI can be biased is troubling, not least because tech was supposed to triumph at eradicating bias – something that we humans have botched spectacularly. It’s disturbing to see ugly human traits displayed by supposedly rational machines, but we only need to look at studies from the past few years to know that this is undoubtedly the case.

AI bias in action

  1. Facial recognition technology is being used in law enforcement, but it has led to some disquieting examples of race and gender bias. In February 2018, a study at MIT found that leading gender-recognition AI was 99% accurate at guessing a person’s sex from a photo… the problem was, it was only that accurate if the photo was of a white man. For photos of darker-skinned women, the error rate climbed to nearly 35%.

What caused this problem? The chances are that the training data contained a disproportionate number of photos of white men compared with women from minority groups; if the available data features mostly white men, it isn’t surprising that the AI is better at recognizing white men (the short sketch after this list illustrates the effect).

  2. In the US, the COMPAS algorithm is used to predict the probability of a prisoner reoffending. Two years ago it was reported that COMPAS is racially biased, with the software consistently rating black defendants as more likely to reoffend than white defendants. Further studies showed that the software is no better at predicting who will reoffend than untrained people recruited online – a rather scary thought when you consider that people’s lives hang in the balance.

  3. A study in 2015 further highlighted AI’s gender bias. When searching for “CEO” in Google Images, a mere 11% of the returned images were of women, even though in the US, where the study took place, more than 27% of CEOs are female. Another study later that year found that Google’s advertising platform showed ads for high-income jobs to men far more frequently than to women.
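
To make the data-imbalance point above more concrete, here is a minimal, purely hypothetical sketch in Python. It uses synthetic data and scikit-learn – not the systems from the studies above – and all the numbers are toy values chosen for illustration: a classifier trained on data dominated by one group can look accurate overall while scoring far worse on the under-represented group.

```python
# Hypothetical illustration only: skewed training data produces a per-group accuracy gap.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Toy "photos": two features per sample, with a group-specific offset.
    y = rng.integers(0, 2, n)                                   # label: 0 or 1
    X = rng.normal(loc=shift + y[:, None], scale=1.0, size=(n, 2))
    return X, y

# 95% of the training examples come from the majority group, 5% from the minority group.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced test sets: accuracy is typically much lower for the minority group.
for name, shift in [("majority group", 0.0), ("minority group", 2.5)]:
    X_test, y_test = make_group(500, shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

The exact figures will vary from run to run; the point is simply that when one group dominates the training data, per-group accuracy can diverge sharply even though nothing malicious was programmed in.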

A very human problem

The influence algorithms like these have on our lives today is huge. AI already dictates search results and what we see in our social media news feeds, and it won’t be long before it affects other areas of our lives too. Whether we get a job or a new home, or how likely we are to be stopped and checked by police, are increasingly decisions that AI helps to make – and we’ve excitedly embraced most of these advances. And herein lies the problem.

We have a tendency to think that because AI is so sophisticated, and because we see it as free from human error, it must be beyond reproach when it comes to making decisions. But we know this isn’t true. Machine learning algorithms can only do what they’re taught – and if they’re trained by humans, human bias will inescapably creep in. Distorted data, faulty logic, or simply the unconscious biases of programmers mean that AI can not only replicate human biases – it can intensify them too.

The good news is that AI can be used to recognize bias: it can help identify and alleviate it, but only if it’s made aware of the bias in the first place. We need to accept that the problems of AI are man-made and require man-made solutions. There is no smart app that will make AI identify or correct bias for us. We have to do this ourselves.

The road ahead

When it comes to how we’re going to eradicate AI bias, the issue of diversity comes up a lot. Silicon Valley has long been panned for its lack of diversity, and for obvious reasons: the more varied a team is, the wider the range of outlooks and viewpoints it can anticipate. While it’s far too soon to know whether we’ll ever be able to properly solve this problem, we do know that it’s a very human problem.

As long as it’s humans who are creating these AI technologies and spending countless hours building algorithms, the issue of prejudice will probably persist. To understand and evaluate the biases that may be passed down, researchers should collaborate with social scientists to address these problems. The very idea of AI bias should be taught in schools, too; it should be as much a part of the tech curriculum as how algorithms work.

We need to remember that AI is not an independent, supernatural life force; it is an alarming reflection of our own humanity. There is no single shape to it, no sole purpose, no guidelines on how to utilize it best. We, the creators, are the only ones able to define the progress and value of AI... and at this stage of the game, we’re still learning the ropes.

