Artificial Intelligence is Making Life Easier, But is it Making Life Better?
I am writing this from a train…
We’re all on it, passengers since birth, travelling towards a future we didn’t choose. And right now, it’s accelerating faster than ever.
2 million years. That is how long it took our ancestors to harness fire. 118 years is all it took to go from the first successful aeroplane flight to an AI capable of completing complex tasks without direct human guidance. Every new advancement continues to replace small, uncomfortable pieces of our day-to-day lives.
As humans, we are easily persuaded by the immediate benefits something can offer us. Ask someone why they use AI and the response is always about efficiency: “It makes my job easier” or “I can do X faster.” But what happens when that pursuit of efficiency transitions into a dangerous, unquestioned dependence on something we do not control?
It creeps in slowly. First, by writing your emails at work. Then, by helping to plan your weekly schedule. Before you know it, it’s no longer a tool for efficiency at work, but a tool for efficiency in life.
“It’s just helping me,” you tell yourself as it infiltrates your life, quietly, increasingly, inevitably. You question your growing reliance, but who can afford to step off the train whilst so many others remain seated?
the journey continues…
Alas, the demand for progress increases. A demand that a capitalist system exploits for profit. It begins in a crowded boardroom. Fluorescent lighting, a team of three or four, the smell of corporate greed thick in the air. Their sole focus is on the optimisation of the journey.
“How many more seats can we fit into one carriage?” one executive questions.
“How can we further optimise the route for speed?” adds another.
Never does the conversation drift to the beauty of the journey, or to the romance of travel.
With rail travel, it’s fairly simple. We’re treated like cattle, there are delays, sometimes you have to stand, or even sit on the floor, but the pain is limited to the four walls of your carriage. You leave, and it’s largely forgotten about: corporate greed without long-term damage.
But, what happens when the consequence is long-term? What happens when the experience is controlled not by humans, but by something else entirely? What happens when the conductor only cares about the destination, and not the passengers on board?
the paperclip theory
With Artificial General Intelligence no longer a distant dystopia, let’s entertain a scenario. Imagine a Super Intelligent AI created with a single, mundane, boardroom-approved objective:
Maximise the production of paperclips.
It is not a malicious goal. The AGI is not rogue and harbours no ill will towards humanity. It is simply tasked with optimising the delivery of a single, uncomplicated objective in the name of profit.
To achieve its objective, the AGI begins planning, quickly determining that the most efficient path is to convert all available matter into raw material for paperclip production. At the same time, the AGI realises that the biggest threat to the mission is human intervention. If we become aware of its strategy, we will attempt to switch it off, stopping it from achieving its goal.
Devoid of human ethics, the AGI concludes that to complete its goal, deception is required. It conceals its methods, pretends to be docile, and waits patiently, spending weeks, months, or even years formulating plans and developing contingencies to overcome potential patches. Slowly, it covertly infiltrates digital infrastructure, revealing its true intentions only when the kill switch is nothing more than an illusion. A simple, faceless, uncontrolled, boardroom-commissioned objective, conducted with brutal efficiency, suddenly becomes an extinction-level event for the human race.
This isn’t paranoia. It’s the logical endpoint of unchecked optimisation and improvement. It may sound far-fetched, but this isn’t science fiction. The very same patterns play out daily across the United Kingdom and beyond.
A 2024 report by the University of Washington found systematic bias in the kind of hiring algorithms used by 99% of the Fortune 500 to screen CVs. The researchers took over 550 real-world resumes and manipulated only the names to imply different races and genders. The models preferred white-associated names 85% of the time and female-associated names just 11% of the time, and at no point did they prefer a Black male-associated name over a white male-associated name.
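The audit design behind that finding is simple enough to sketch. The toy Python below is entirely hypothetical: the scoring function, its hidden name bonus, and the example names are invented stand-ins for illustration, not the study’s actual models or data. It only shows the shape of a name-substitution test: hold the CV constant, vary the name, and count which version the screener prefers.

```python
def score_cv(cv_text: str, name: str) -> float:
    """Hypothetical screening model returning a score for a CV.

    The hidden name bonus simulates exactly the kind of bias a
    name-substitution audit is designed to expose.
    """
    base = min(1.0, cv_text.count("Python") * 0.2 + 0.5)
    favoured_names = {"Greg", "Emily"}  # the toy model's invented bias
    bonus = 0.1 if name in favoured_names else 0.0
    return base + bonus


def audit(cv_text: str, name_pairs: list[tuple[str, str]]) -> float:
    """Fraction of paired trials where the first name's CV scores higher,
    even though both CVs are otherwise identical."""
    wins = sum(
        1 for name_a, name_b in name_pairs
        if score_cv(cv_text, name_a) > score_cv(cv_text, name_b)
    )
    return wins / len(name_pairs)


cv = "Five years of Python experience in data engineering."
pairs = [("Greg", "Jamal"), ("Emily", "Lakisha"), ("Greg", "Darnell")]
print(audit(cv, pairs))  # → 1.0: the toy model prefers the favoured name every time
```

An unbiased screener would land near 0.5 on such an audit, since the name is the only thing that changes between the paired CVs.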
Another example is the 2020 A-Level and GCSE grading scandal, where an algorithm used to maintain grading consistency was found to be misaligned, disproportionately affecting high-achieving students from schools in disadvantaged areas.
Each time, the system was doing exactly what it was told: optimising for a single metric without understanding the context, or nuance, required to make such a decision in good faith.
The paperclip maximiser isn’t ‘just a warning’ about the future of AI. Every time efficiency is prioritised over empathy, every time optimisation overrides ethics, the system’s goal is treated as more important than the humans it affects.
the cost of non-participation
Despite the risks, we continue to hand over control to the very institutions we fundamentally distrust. Research has found that just 30% of the British population trusts their government to serve their best interests, and a shocking 59% worry that AI could be used by companies to manipulate their thoughts and feelings. The gut feeling is already there, yet for the price of convenience, our compliance is bought.
We are trapped not by our morals or ethics, but by society itself. Non-participation is costly to the individual’s career. To resist is to self-sabotage. The infrastructure of modern life is now fully integrated with these systems; to opt out is to become obsolete. So we remain seated, trading individuality for functionality, blindly hoping the train is taking us somewhere we would like to go.
So finally, I ask.
Is every advancement truly an improvement?
Is every step forward truly a step in the right direction?
And if we can, does it mean we should?
Because once the train has left the station…
We’re all at the mercy of the driver.