Slow death of thought: How AI is hollowing out the human mind from within
As AI begins dominating the world, what is the future of the slow, messy, organic creativity of the human mind?
Updated Tuesday, November 4, 2025
Have you noticed that emails now arrive reading like essays? The excess of words landing as emails, WhatsApp messages, or captions on social media often comes from people who once dreaded writing, who would stare blankly at screens, and who now can’t seem to find ways to edit the text they think “they’ve” produced.
Do you find yourself wading through verbosity where a minimal expression, a simpler choice of words, would have been enough to communicate the message? Does every object really need to hold “transformative” energy, and must every note “delve” into the meaning of the message being communicated?
Recall the day, not too long ago, when the world first heard of a tool available to everyone, one that would supposedly send communicators packing their bags. I can recall the moment when those around me were ecstatic, shrieking in joy that they could “write like Shakespeare or Wordsworth”, and that “writing isn’t for writers alone anymore”. Indeed, ChatGPT did allow people to write their hearts out, but did it send writers to their dens? If anything, it has only made them stand out, defining their craft even further and making them distinct in terms of originality and creativity.
Geo.tv aims to explore how mankind’s increasing dependence on AI, whether in writing, problem-solving, or decision-making, is leading to a quiet erosion of human sovereignty. Are our cognitive boundaries being narrowed, with deep creativity being replaced by optimised convenience?
Outsourcing sense-making/creativity?
With AI tools now being assigned the role of personal assistant by many around the world, effectively being asked to ‘think’ for the user, how does this increasing reliance affect one’s capacity for independent thought and creativity?
Gerd Leonhard — futurist, humanist and chief executive officer of The Futures Agency — says the growing reliance on AI tools is impacting our capacity for independent thought.
“Clearly, if we have an AI tool that provides us with answers and we stop verifying those answers, we stop thinking, we stop exercising our own ingenuity and discovery. Then, it poses a significant risk for us in terms of understanding our environment.”
Outsourcing our cognitive thinking, ideas, and understanding of reality will open us up to manipulation by AI tools, according to Leonhard.
“If we use it (AI) as a substitute, I think the bottom line is that it could ultimately make us incredibly lazy.
“Of course, people have said that about the internet as well, but the bottom line is that, in terms of order of magnitude, it is 1,000 times as much as what the internet offered.”
Dr Sally Hammoud, a strategic communication and AI-in-media expert from Lebanon, feels human beings are fascinated by laziness, which is why technology has gradually entered our lives – lives that are now run by, and around, these tools.
Speaking about man’s race against time, where the pressure to produce, create, and be first is constant, Dr Hammoud says that in this race creativity has, in some ways, lost its true essence – a loss AI has only accelerated.
“We are at a point where we can use AI to our advantage and if AI is taking over routine tasks, it should allow us more time to focus on creativity.”
For Dr Hammoud, humans need to coexist with this new form of intelligence, one that enhances our creativity.
“We must continue to generate knowledge instead of relying solely on what AI produces,” she says.
An unrecognisable future
If AIs are better than we are at thinking and at creativity in general, then there’s a lot to be gained by letting them work, according to Dr Thomas Metcalf, senior researcher at the Sustainable AI Lab at the Institute for Science and Ethics, University of Bonn, and an associate professor at Spring Hill College in the United States.
“I suspect there will still be plenty of opportunities in our day-to-day lives to apply creativity and ethical reasoning, unless we end up in some unrecognisable future in which AI mediates all our interactions with other humans.” And that kind of future “does seem troubling” to Dr Metcalf.
“I find human-to-human relationships themselves (and expressions of those relationships) intuitively valuable.
“It would be strange, at best, if I checked with an AI before hugging my children or before keeping a promise.
“(Someone might worry that I value doing something correctly or optimally more than I value the other person or my relationship with them.)”
How many times have you drafted a message to someone but first made a quick AI stop to correct it, or prompted the tool to alter its tone? Does that lead to a subtle erosion of human sovereignty?
“What’s most in danger of being lost or eroded isn’t sovereignty per se (if AIs are better decision-makers than we are about large-scale social choices, then that’s great), but rather interpersonal relationships, or valuable aspects thereof.”
With the automation of thought processes through AI, one wonders if it poses a unique threat to human freedom, particularly our freedom to reflect, imagine, and make mistakes.
Dr Metcalf doesn’t think AI is a direct threat to our freedom; he does, however, feel it might threaten it in a couple of subtler ways. He explains: “If AIs become significantly better than we are at reflecting and imagining, or prevent us from ever erring again, then perhaps we’ll be too tempted to offload those capacities onto AIs.
“Technically, we could still continue to reflect and imagine, but it would be too easy to let the AIs do the hard work.
“Then, this undermining of our willpower might appear to be a threat to our freedom.”
Then comes the question of what happens once a superior tool is available. “There may be a kind of Prisoner’s Dilemma pressure to use it,” he says, mentioning that college students commonly use AI to complete assignments.
“If you're the only one not using AI, and you get a worse grade (either because you're not as skilled as the AI, or because you have to spend more time on the assignment so you have less time for other tasks), then it might seem unfair to you that you took on this extra cost when few other students did.
“So, if there’s some competition, you might feel a lack of freedom if the only way you can compete is by using AI,” Dr Metcalf says.
AI — the 21st-century default
For Leonhard, the long-term psychological effects of AI immersion, especially among younger generations who are growing up with AI as a default, can reshape the future of how humans perceive thinking, effort, and originality.
“If we find ourselves forming more relationships with screens, relying on AI to do our thinking, and outsourcing our cognition and understanding of the world, it could have a significant impact on almost everything, and diminish our ability to understand one another,” shares Leonhard.
“We need to have a human resistance against too much of this (AI dependency) and a balanced path of using it as a tool and not as a master.
“The psychological effect of being able to do everything by doing nothing is incredibly tempting; it’s like social media but a thousand times as bad,” he shares.
When it comes to an over-reliance on AI, Dr Hammoud, the AI expert from Lebanon, refers to these tools as “intelligent advisors”, stating that when man relies on them, it’s as if we’re asking a dog with a chip implanted in its head to make decisions on our behalf.
“We must recognise that these systems are merely tools and they are not truly intelligent – they are trained to perform tasks effectively, efficiently, and quickly, but these are not the attributes of human intelligence.
“This is why the discussion on governance remains crucial and we need to continually revisit not only how the algorithms are designed, but also how data is labelled and under what terms,” says Dr Hammoud.
For those who equate quick work with intelligent work, please take note.
Here, Dr Hammoud turns to the meanings AI now creates on our behalf.
“AI is now creating meanings for us. Its interpretations are shaping what we consider facts, and consequently, what becomes our knowledge – something we must critically question in our time,” she states.
Imperfect intelligence — a privilege in the age of AI
The primary factor that distinguishes human creativity from computer-generated products is “unpredictability”, according to Dr Metcalf.
“However, it's interesting to consider why we might value inefficiency and unpredictability, even if these aspects are inherent to a process and not necessarily incorporated into the final product.
“Why would the mere fact that some artefact or product was produced unpredictably or inefficiently make the product itself better?”
Dr Metcalf explains that it could be because “we find it remarkable that a flawed or uncontrolled process can produce something impressive.”
“We might come across a video online in which a person performs a noteworthy feat despite a limiting factor.
“Maybe a person has their eyes closed but still draws an accurate portrait.
“Or we might see someone constructing a beautiful design by carefully placing individual grains of coloured sand.
“We might be amazed that such a process produced something so impressive.”
He says that even if AIs surpass us in generating aesthetically pleasing, creative work, “we may still value an imperfect human's attempt to produce something interesting or beautiful.
“For almost any athletic feat (and many artistic feats), some machine could act more efficiently, but a human wants to show that they can do it.”
Perhaps, then, “our approval attaches to the human rather than to the product, and we can maintain that feeling even when machines are better than we are at producing any product,” says Dr Metcalf.
Homogenisation of human thought/standardisation of imagination
Leonhard warns against becoming too much like the machines we create. He foresees a future where our cognitive habits, shaped by algorithmic thinking, could lead to a homogenisation of human thought.
“There’s a huge trap and temptation – the temptation to be lazy, the temptation to forego human context, and to prefer machine context.
“However, on the other hand, humans realise that this is an empty undertaking, that it is a shell of logic with obvious content.
“It's like fast food in many ways, or like having the car decide on how, where, and why to drive, rather than the driver.
“I think we can navigate this corner by understanding that it’s better to use technology humanely.”
Preserving originality in a world of algorithms
In a world increasingly embedded with AI in educational, creative, and other environments, what becomes of the slow, messy, organic, and often non-linear processes of human learning and imagination?
Dr Hammoud, the Lebanese strategic communication and AI-in-media expert, says: “AI has the potential to gradually deskill us over time.
“Yet, at the same time, it presents an opportunity to enhance human potential – depending on how we choose to use it. This is precisely why AI needs regulation, as it is already reshaping human behaviour, our institutions, and our way of doing things.
“This is not a simple transition for humankind," shares Dr Hammoud.
“Unlike the advent of the calculator or the computer, AI tools are integrated into our lives around the clock; they are constantly with us.” This is inadvertently developing into a new form of intelligence, which Dr Hammoud refers to as “alien intelligence”, “meaning that we are learning to adapt to these machines rather than the other way around.”
That, Dr Hammoud maintains, “is a serious concern, and could potentially lead to harmful outcomes.”
“What we need instead is the opposite: machines must continue to learn from us, not the other way around. If we begin learning from machines, we risk merely recycling existing knowledge.
“Human creativity stands apart precisely because it is original – we truly create, rather than replicate.”
When asked whether we could lose a sense of meaning or authorship in the creative process, Leonhard says: “We have already achieved a loss of the sense of meaning on social media by essentially creating (AI) slop, instantly creating nonsense and TikTok swiping, and mistaking it for entertainment – a trend observed on Netflix, TikTok, and YouTube.”
“To prevent that, we need to be more critical of what we see, and to keep asking questions.
“It’s a question of overall awareness, and the willingness to pay for quality – we need to find ways to monetise excellent work and not to put (AI) slop everywhere for free,” Leonhard states.
A checklist for AI adoption
Are there any questions individuals should ask themselves before adopting AI tools? Dr Metcalf states that individually, “We have our own ideas about what we value about ourselves and how we perceive ourselves as humans."
He wants people to ask themselves: “(1) What skills would I be comfortable losing, if a computer were much more efficient or productive than I am at using those skills? (2) Do I define myself and my humanity by my abilities or by my character? (3) What do I value in life (eg, happiness or creativity), and do I want there to be more of that stuff, or do I want to be the one who produces that stuff? And (4) How reliable are my intuitions about a future that's radically different from the present?”
At least for now, Dr Metcalf wants people to ask themselves if adopting AI tools will achieve their respective goals. “From my own perspective, I would caution people not to rely too heavily on them (AI) for high-stakes aspects of their personal lives, and generally, to do their homework about the actual track records of these models.
“Ultimately, if we want there to be valuable creative and cognitive work, then it will make sense to rely on AIs to produce most of it. However, we’ve already seen that we derive significant value from observing humans’ abilities to perform tasks that machines excel at. It may be that as AIs advance in capabilities, we start to see more of this divergence between valuing the products and valuing the humans who can produce them, even imperfectly,” Dr Metcalf says.
As for Leonhard, AI holds the potential to solve many of mankind’s issues: “Especially practical issues like healthcare, climate change, banking, finance, and information – and of course the end of search and the rise of AI answering could be extremely powerful.”
But eventually, he wants people to come to a conclusion about what they want from AI. “Do we want things to be increasing in quality, and in inequality as well? Do we want social justice, or do we want maximum profits? So, the ultimate decision is about economic benefit – do we want people and the planet to pursue prosperity and peace, or do we want more prosperity? That is the question that AI is putting in the centre of the room.”
Mariam Khan is a freelance journalist and a UN volunteer. She tweets @mariaamkahn
Header and thumbnail illustration via Canva