August 19, 2025
AI is a reality, and as everyday people gain access to writing tools, a palpable discomfort has emerged among some.
The concern is often expressed in arguments about authenticity, creativity or the sanctity of 'real writing'. I must confess that, until recently, I held the same view. But now, I am beginning to wonder whether the anxiety is really about craft. When Jensen Huang, CEO of Nvidia, called AI the "greatest equaliser" in an interview, he hit a nerve that makes these conversations uncomfortable.
If AI can now help anyone, regardless of their English fluency, formal training, or access to elite editors, express complex ideas in polished prose, then the gates that have long protected the domain of public intellectuals and writers are suddenly wide open. That is not just disruptive; it is threatening to those who have defined themselves by being inside the gates. It is then worth asking whether the backlash against AI-assisted writing is a defence of ethical standards or just another form of elitist gatekeeping.
Let us begin with the basic inconsistency. In technical disciplines such as the natural sciences, computer science or engineering, AI is used liberally and without hesitation. Autocomplete tools are standard for coders; AI aids drug discovery and researchers routinely use machine learning to process massive datasets. No one accuses them of intellectual fraud. In fact, the more efficiently they use AI to solve problems, the more they are praised.
But shift to the realm of writing, especially in the social sciences or creative domains, and suddenly AI becomes suspect. Why? Is the act of putting words together somehow more sacred than building code or running complicated regressions? Or is it because language, especially refined English, has long served as a cultural and class marker? One that AI is now making dangerously accessible?
Let us be honest and accept that English fluency in our part of the world is not just a skill but a status symbol. It is part of what I called a 'gora complex' in a previous article. It gets you noticed. It gets you published. It gets you identified as someone who studied at an elite institution. And for those who have spent years honing this craft by writing crisp prose, mastering metaphors and learning when to use 'nonetheless' instead of 'but', the sudden arrival of AI feels like someone just opened the club doors and handed out free VIP passes. What was once exclusive now feels disturbingly accessible.
For decades, tools like spell-check, grammar-check, thesauruses and even human editors have been accepted as routine writing aids. No one questions the legitimacy of a researcher using Grammarly, or a novelist working with a professional editor. So why draw the ethical line at AI-assisted phrasing or structural improvement, especially when the core ideas, arguments and insights come from the human? Why is a human editor seen as a collaborator, but an AI assistant viewed as a fraud machine?
If someone’s argument is sound, their data original and their perspective thoughtful, does it really matter whether the prose was polished by AI or a well-paid copyeditor? Or is the discomfort really with the fact that AI is cheap, accessible and does not require a network of elite contacts?
That is where Huang’s framing of AI as a “democratising force” becomes important. His argument is not just about productivity but about power. Historically, access to public discourse has been shaped not just by who had good ideas but by who had the tools, fluency and access to present those ideas.
Traditionally, formal, grammatically precise English has been the ticket to getting published, cited or taken seriously. But what about the people who think brilliantly but write awkwardly? Or the scholars whose native language is not English, but who have world-class insights to offer? If they can use AI to bridge the gap between their knowledge and the gatekeeping conventions of global publishing, isn’t that something to be celebrated rather than condemned?
The colonial echo in this debate is loud. For centuries, English has been weaponised to silence voices from the Global South. Editors and publishers routinely push non-Western writers to change their idioms, restructure their arguments, or conform to Western narrative structures. Now, if AI can help a Cuban sociologist or a Senegalese poet reach an international audience while retaining their conceptual core, then what is inauthentic in it? It is a way of refusing to be kept out because you did not go to Oxbridge or because your first drafts do not conform to New Yorker-style English. As long as the ideas are your own, and you are transparent about your use of AI, as you would be with any editor, then you are not faking your voice. You are amplifying it.
There are, of course, valid concerns. Fully AI-generated content that lacks human insight, ghostwritten theses by students who do not understand their topic, or AI-generated misinformation are all real problems. But these are problems of misuse, not of the tool itself.
No one is arguing that AI should replace human thinking or originality. But why hold a grudge against an 'AI-assisted, human-curated' model, a partnership in which the person brings the substance, and AI helps with clarity, structure or translation? This is no different from a translator helping Gabriel García Márquez reach English-speaking readers, or a mentor helping a young academic organise their thoughts. The ethics, then, hinge not on the use of AI, but on the honesty of authorship. If you are using AI to polish and shape your thinking, you are still the author. If you are outsourcing the thinking itself, that is where the line blurs.
And that brings us to the heart of the issue: fear. Not fear that AI will destroy writing, but fear that it will democratise it. Fear that a student in a less developed part of Sindh with brilliant ideas but broken English might now get published. Fear that a woman in Nigeria who never went to Harvard might write an op-ed that goes viral. Fear that the club of English-speaking elites, with their polished prose and literary references, will no longer be the sole custodians of public discourse.
This is not just gatekeeping; it is a form of class preservation, cloaked in concerns about authenticity. But the future of knowledge production cannot belong only to those who passed the gatekeeping rituals of elite English. It must include those whose experiences, ideas and insights are just as rich, but who may need AI’s help to tell their story.
I firmly believe that writing is not disappearing. It is evolving. And the best writing will always be human at its core, because AI cannot generate lived experience, cultural nuance or moral insight. Just as spell-check did not end literacy, and calculators did not end mathematics, AI will not end writing. It will simply make it more accessible, moving it beyond linguistic privilege.
AI has its problems, and lots of them, as I wrote in one of my previous articles. But maybe it is time to stop seeing AI as the enemy of expression and start seeing it as the spell-checker that finally got a personality. It is not coming for your creativity. It is just helping others find theirs. And if that levels the field a bit, makes the discourse richer, and lets more voices be heard, is that not the whole point of writing in the first place?
The writer is a demographer. She tweets/posts @durre_nayab_
Disclaimer: The viewpoints expressed in this piece are the writer's own and don't necessarily reflect Geo.tv's editorial policy.
Originally published in The News