Regulators at the edge

Artificial intelligence is no longer a backstage assistant; it has become the lead actor

By Amir Jahangir
A message reading "AI artificial intelligence", a keyboard, and robot hands are seen in this illustration taken January 27, 2025. — Reuters

In the next three to five years, the global media landscape will change more profoundly than it has in the last three decades.

Artificial intelligence is no longer a backstage assistant; it has become the lead actor. From generative content and synthetic news anchors to algorithm-driven narratives and real-time personalisation, AI is disrupting how stories are told, shared and believed. In this new age of media metamorphosis, regulators face a challenge unlike any before: a creeping irrelevance.

Traditional regulatory frameworks, shaped in the analogue and early digital eras, are quickly losing their grip. These systems were designed to govern broadcasters, newspapers and cable channels, not decentralised content ecosystems running on cloud infrastructure and machine learning. As media consumption moves from scheduled programming to personalised feeds, and from verified newsrooms to AI-generated influencers, the fundamental premise of media regulation is under siege.

In the UK, Ofcom has expanded its purview under the Online Safety Act, moving beyond traditional content regulation into digital harms and platform accountability. The EU has enacted the Digital Services Act (DSA) and the AI Act, imposing transparency obligations on tech platforms and generative AI developers.

In the US, the FTC has launched inquiries into the risks of synthetic content and the need for algorithmic accountability. Germany’s NetzDG law, meanwhile, was one of the first to penalise social media companies for failing to remove hate speech and misinformation.

But in countries like Pakistan, the regulatory challenge is more acute. Regulatory bodies like the Pakistan Electronic Media Regulatory Authority (Pemra) were born in the age of linear broadcasting and are structurally unprepared for the post-linear, algorithmic world. Pemra regulates licences, transmission hours, decency codes and broadcast infractions, but these levers hold little power over digital-first platforms such as YouTube, TikTok, or even AI-powered content distribution networks.

The Pakistani media environment has already experienced an accelerated shift. Newsrooms are utilising AI tools to write articles, craft headlines and forecast audience engagement. Deepfake audio has been used to simulate political voices. Synthetic anchors are being experimented with in digital news formats. Yet Pemra’s regulatory vocabulary still centres on cable operators and satellite uplinks. This disconnect signals a deeper crisis – not of enforcement, but of conceptual relevance.

The existential threat is threefold. First, regulators are losing jurisdictional authority. Digital creators, influencers and AI engines operate across borders, often without any affiliation to traditional media entities. In Pakistan, a teenager in Multan with a smartphone can reach more viewers through TikTok than a licensed news channel in Karachi, without ever interacting with Pemra.

Second, regulators face a financial crisis of sustainability. Their revenue models are largely based on issuing and renewing licences or penalising content infractions. However, as terrestrial and satellite viewership declines and digital migration accelerates, these sources of income are dwindling. Regulatory bodies may soon become dependent on government bailouts, raising questions about their autonomy and relevance.

Third, there’s a growing technology gap. AI-generated content, especially synthetic audio and video, evolves at such speed that regulators can’t keep up. In a recent case, an AI-generated clip impersonating a senior Pakistani official circulated widely before being debunked; however, Pemra lacked the necessary tools and protocols to act promptly. In an era of real-time misinformation, regulatory lag is not only costly but also dangerous.

However, there is a path forward if regulators embrace transformation rather than resist it. That path starts by recognising that the future of media regulation is not about censorship or command. It's about calibration, foresight, and systems intelligence.

To stay relevant, regulators must transform into hybrid institutions: part oversight body, part AI ethics council, part digital lab. They must develop the technical capacity to analyse algorithmic behaviour, trace synthetic content and identify disinformation patterns using the same tools as the platforms they oversee.

Pakistan can look to other models for inspiration. The EU’s AI Act, for instance, requires generative AI systems to disclose synthetic content and provide ‘explainability’ features. This sets a precedent for algorithmic transparency, something Pemra and similar bodies in Pakistan could adopt by requiring disclosure for AI-assisted media.

Another important lesson comes from Singapore’s Protection from Online Falsehoods and Manipulation Act (POFMA), which empowers government-appointed fact-checkers to issue correction notices and remove misleading content. While the law has attracted criticism for potential overreach, its mechanism for rapid content correction could be adapted in Pakistan, particularly during election seasons when deepfakes and digital propaganda can influence public opinion.

Domestically, Pakistan already has a data infrastructure in the form of NADRA, PTA and various digital registries. These could be leveraged to introduce verified digital content credentials, ensuring traceability of origin, especially for political or commercial content. Blockchain and content watermarking could help build a ‘chain of custody’ for media, a measure far more effective than arbitrary bans or fines.
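
To make the 'chain of custody' idea concrete, the sketch below shows, in simplified form, how hash-linked content credentials work. It is illustrative only: the field names, the publisher identifier and the registry linkage are assumptions for the example, not any existing NADRA, PTA or Pemra scheme. Each record binds a media file's cryptographic fingerprint to a publisher identity and to the previous record, so altering either the content or its history breaks verification.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

def make_credential(media_bytes: bytes, publisher_id: str, prev_hash: str) -> dict:
    """Build one link in a content 'chain of custody'.

    The record fingerprints the media file, names a (hypothetical)
    registry-verified publisher, and points at the previous record,
    so later tampering with the file or its history is detectable.
    """
    record = {
        "content_hash": sha256_hex(media_bytes),  # fingerprint of the media file
        "publisher_id": publisher_id,             # assumed registry-issued identity
        "timestamp": int(time.time()),
        "prev_hash": prev_hash,                   # links records into a chain
    }
    # The record's own hash becomes the link the next record points to.
    record["record_hash"] = sha256_hex(json.dumps(record, sort_keys=True).encode())
    return record

def verify_chain(chain: list[dict]) -> bool:
    """Check that every record is intact and correctly links to its predecessor."""
    prev = "0" * 64  # genesis value for the first record
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "record_hash"}
        if sha256_hex(json.dumps(body, sort_keys=True).encode()) != rec["record_hash"]:
            return False  # record contents were altered after issuance
        if rec["prev_hash"] != prev:
            return False  # history was reordered or a record was removed
        prev = rec["record_hash"]
    return True

# Example: register an original clip, then an edited version of it.
genesis = "0" * 64
r1 = make_credential(b"<original video bytes>", "PK-REG-001", genesis)
r2 = make_credential(b"<edited video bytes>", "PK-REG-001", r1["record_hash"])
print(verify_chain([r1, r2]))  # True; flipping any byte anywhere breaks verification
```

A production system would anchor such records in a tamper-evident ledger and pair them with in-band watermarks rather than rely on a toy chain like this, but even the sketch shows why hash-linked credentials are harder to forge than a standalone label, and why they beat arbitrary bans or fines as an accountability tool.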

Pemra, PTA, MoIB and MoITT could also collaborate to develop an AI media sandbox – a controlled environment for testing and regulating new media technologies before they are scaled into public use. Universities, think tanks and startups could co-develop content authenticity solutions, while regulators assess the ethical and security implications in real time.

Equally important is a shift in regulatory culture. The watchdog must become a partner in innovation, not just an enforcer. This means engaging with platforms, creators, developers, and civil society to co-design ethical content frameworks. Pakistan has no shortage of digital talent; what it lacks is a forward-facing governance mechanism that incentivises innovation while safeguarding truth.

Above all, regulators must transcend national silos. No single country can regulate the global platforms that now shape public discourse. Regional cooperation, through Saarc, OIC or bilateral digital diplomacy, can help develop interoperable content standards, especially for synthetic content, cross-border misinformation and hate speech.

The future of media in Pakistan, and globally, will be defined not by studios or satellite dishes, but by code, computation and credibility. In such a world, a regulator that cannot code, understand AI systems or trace synthetic provenance will be as obsolete as the technology it once governed.

Ultimately, the question for Pakistan is not whether regulation is needed, but whether regulators can reinvent themselves fast enough to remain relevant. Those who evolve will shape the media of tomorrow. Those who resist will be remembered not as safeguards of the truth, but as remnants of an age that refused to adapt.


Disclaimer: The viewpoints expressed in this piece are the writer's own and don't necessarily reflect Geo.tv's editorial policy.


The writer is a public policy expert and leads the Country Partner Institute of the World Economic Forum in Pakistan. He tweets/posts @amirjahangir and can be reached at: [email protected]


Originally published in The News