
Will algorithms make us passive?

Written 25 April, 2019, 5 minutes to read

From book recommendations on Amazon to which of our friends’ posts we see on Facebook, algorithms increasingly make decisions for us. Overwhelmingly, this assistance is beneficial: it saves us time and effort by connecting us with what matters most. But the “easiness” of AI is ultimately its biggest challenge.

AI technology lies at an uncomfortable intersection between our desire for greater simplicity and our need for control. For many, being handed shortcuts and predictions based on our past behavior weakens individual responsibility for our actions and decisions. The advance of our greatest supportive tool raises a pressing existential question: will algorithms make us passive?

A new form of decision making

Algorithms have changed how we accumulate knowledge and make decisions on a daily basis. Previously, society made new discoveries by rigorously testing a hypothesis – or through complete chance. With our newfound information, we could then make decisions about a specific subject.

But AI has completely changed this, employing powerful algorithms to process masses of data quickly and provide us with accurate answers. Take music as an example. Previously, to discover new music you needed to listen to the radio or get recommendations from your friends. Uncovering something you liked essentially depended on luck or on actively searching for it.

But music streaming services have completely changed this by applying analytical tools to recommend songs. Spotify alone combines three different types of analysis – collaborative filtering of listening patterns across user groups, natural language processing of text written about music, and audio analysis of the songs themselves – and from this, its algorithm suggests new songs. Anyone who has listened to a Spotify Discover Weekly playlist will know just how effective this approach can be.
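To make that concrete, here is a minimal sketch of user-based collaborative filtering – one of the general techniques behind this kind of recommendation, not Spotify’s actual system. The toy play data and the recommend function are illustrative assumptions:

```python
import numpy as np

# Rows are users, columns are songs; 1 means that user has played that song.
# A toy stand-in for the listening histories a real service would hold.
plays = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 0, 1],
], dtype=float)

def recommend(user, k=2):
    """Suggest unheard songs, weighted by how similar other listeners are."""
    norms = np.linalg.norm(plays, axis=1, keepdims=True)
    unit = plays / np.where(norms == 0, 1, norms)
    similarity = unit @ unit[user]     # cosine similarity to every other user
    similarity[user] = 0               # a user shouldn't vouch for themselves
    scores = similarity @ plays        # songs liked by similar users score high
    scores[plays[user] > 0] = -np.inf  # exclude songs already heard
    return np.argsort(scores)[::-1][:k]

print(recommend(user=0))  # [2 3]: songs favored by listeners with similar taste
```

A production recommender works with millions of users and far richer signals, but the principle is the same: your next suggestion is assembled from the behavior of people who listen like you.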

By analyzing music taste and providing suggestions, algorithmic decision-making removes the need for us to actively search for songs. And when the AI produces good results, most of us don’t see any reason to question its output.

Unexplainable answers

The problem with algorithms is that when we do want to know why a particular decision has been made, finding out can be difficult. Going back to Spotify: though computer scientists can decode and understand the algorithm’s reasoning, comprehending this logic is out of reach for the majority of people.

More problematically, as algorithms become more advanced, the exact rationale behind a decision can become impossible for humans to trace. Take Nvidia’s 2016 self-driving car, for example. Instead of being programmed with explicit instructions, the car’s algorithm taught itself to drive by emulating human behavior. The result? The reasoning behind any specific choice the car made was incomprehensible, even to its own engineers.
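To see why, it helps to look at what “teaching itself by emulation” means in practice. The sketch below shows behavioral cloning in its most stripped-down form – fitting a model to reproduce recorded human actions – using a toy linear model and fake data rather than Nvidia’s actual deep network:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy stand-ins: each "camera frame" is a feature vector, and the human
# demonstrations pair every frame with the steering angle the driver chose.
frames = rng.normal(size=(1000, 64))
human_steering = frames @ rng.normal(size=64)

# Behavioral cloning is supervised learning: fit a model that maps
# observations to the actions a human took (here, ordinary least squares).
weights, *_ = np.linalg.lstsq(frames, human_steering, rcond=None)

# The trained "driver" now decides by multiplying inputs with learned weights.
new_frame = rng.normal(size=64)
print(f"steering angle: {weights @ new_frame:.3f}")
```

Even in this toy version, the driving logic lives in 64 opaque numbers. In a real driving network it lives in millions, which is why nobody can point to a human-readable rule behind any single decision.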

So, the proliferation of complex algorithms doesn’t just disincentivize active discovery and decision-making – it can also limit our ability to critique how algorithmic decisions are being made for us.

The importance of user input

It’s important to note that algorithms are not always the sole agent in decision-making – a huge number of them seek direct user feedback, a process known as ‘end-user training’. By providing feedback on an algorithm’s suggestions, a user can play an active role in shaping its logic.

Take the automatic time tracker Timely, for example – while its algorithm can create draft timesheet entries by looking at your work activity, it requires your active participation to “approve” and essentially qualify its suggestions. Its AI can effectively only improve by your active example, learning from your edits to provide more appropriate labeling in future.
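As a rough illustration, an end-user training loop can be as simple as counting which labels a user confirms. The function names and the voting scheme below are hypothetical stand-ins, not Timely’s actual implementation:

```python
from collections import Counter, defaultdict

# Each tracked activity accumulates votes from the labels the user approves.
label_votes = defaultdict(Counter)

def draft_label(activity):
    """Suggest the label this user has most often confirmed for an activity."""
    votes = label_votes[activity]
    return votes.most_common(1)[0][0] if votes else "Unlabeled"

def record_feedback(activity, approved_label):
    """Every approval or correction becomes training data for future drafts."""
    label_votes[activity][approved_label] += 1

record_feedback("figma.com", "Design work")
record_feedback("figma.com", "Design work")
record_feedback("figma.com", "Client review")
print(draft_label("figma.com"))  # "Design work": the user's edits shape the output
```

The point is the loop, not the mechanics: every time you approve or correct a suggestion, you are supplying the training signal that determines what the algorithm does next.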

For algorithms which incorporate end-user training, the individual works in tandem with the algorithm: active user decisions directly shape the end result. This ensures a user stays engaged with, and critical of, any outputs an algorithm provides.

Correcting algorithmic bias

But even when an algorithm does not allow for end-user training, people still have the opportunity to actively change its results. If an algorithm is producing results we like, there is often little reason to assess them – but sometimes an algorithm will produce something that takes us by surprise.

Understand: algorithms are not objective. They are trained on fixed datasets using specific assessment parameters – both of which are created by humans. And we humans are a highly subjective bunch.

Frequent headlines report on how human biases within data or poorly designed algorithms create harmful results. Just this month, YouTube’s algorithm promoted a sexist video to the top of its search results for the term ‘Katie Bouman’ – the researcher who helped develop the imaging algorithm behind the first picture of a black hole.

Whilst this type of bias is harmful to society, it is an influential factor in preventing passivity in a world of algorithmic decision-making. For one, it is pushing researchers to investigate ways to explain algorithmic reasoning. Accessible explanations would give people scope for active oversight of AI.

More than this, news stories of algorithmic bias ensure that we do not rely on computer judgement alone. A study by the Wharton School found that people are less trusting of algorithms once they have seen them fail, even in minor ways. Understanding that algorithms are fallible pushes us to view their results critically and actively try to understand how they were produced.

User preferences matter

Even when algorithms are producing seemingly fair results, there is the possibility that we just don’t like the output. What constitutes the optimum result is subjective, based on human preference in the context in which it is used.

Social media companies understand this and are constantly tweaking their algorithms to try and improve user engagement. That said, the changes businesses make are not always successful and can create major pushback. When Instagram altered its feed algorithm to rank content by predicted interest rather than in reverse chronological order, fierce backlash ensued, leading the company to re-evaluate the changes.

So even when users do not have direct input into altering an algorithm’s output, it is important to recognize that these decisions operate in a broader social context of consumer preference. If an algorithm’s output is undesirable, users have the option to complain or to migrate to an alternative service. When enough users share the same dissatisfaction, companies will react in order to maintain their user base.

Clearly, we still actively shape, influence and ultimately determine the future of AI tools. Algorithms are still products – they need to respond to our individual needs and desires in order to be successful. Far from being passive agents, we have a huge number of ways to assess choices and assert changes in our increasingly algorithmic world.
