AI is learning women, not just images

By Batool Mufti
November 07, 2025

Could AI somehow ‘know’ something it isn't shown? Could it infer private details from patterns we never consented to share?

The representational image shows a woman created with Gemini AI's Saree Portrait trend. — Facebook/GraphicsSolutionTricks

It began innocently: a viral ‘Saree Portrait’ trend sweeping through South Asia. Women across Pakistan, India, Bangladesh and Sri Lanka uploaded their photos to AI apps that transformed them into digital saree portraits: glowing faces, soft backdrops and perfectly styled drapes.

For many, it was playful, even empowering: a way to see themselves through a lens of beauty and culture. But for one Pakistani woman, the experience turned eerie.

After submitting her photo to Google’s Gemini AI, she received a generated image showing a mole on her arm, one that existed in real life but wasn’t visible in the original photo. What seemed like a technical coincidence unsettled her deeply. Could the AI somehow ‘know’ something it wasn’t shown? Could it infer private details from patterns she never consented to share?

This is not a story about one woman or one image; it’s a glimpse into how technology, when unregulated, becomes an instrument of fear – especially for women in patriarchal societies.

Generative AI systems like Gemini and Midjourney are trained on massive datasets of online images, videos and text. Rather than seeing as humans do, they identify and replicate patterns, analysing faces, shapes and cultural cues; this allows them to predict or recreate details, even ones not visible in the original, based on prior data.

In countries with strong data protection laws, that’s already cause for alarm. In Pakistan, where data privacy legislation remains a draft on paper, it’s a crisis waiting to happen. The Personal Data Protection Bill, modelled on Europe’s GDPR, promises user rights and transparency, but years later, it remains unimplemented. In the meantime, women live within a legal vacuum where neither synthetic image generation nor AI inference is addressed.

Digital rights advocate Sadaf Khan explains, “Even without an AI-specific policy, Pakistan’s Peca law can address harms like deepfakes, but data protection laws don’t fully cover AI-related issues. Since most AI platforms operate abroad, holding them accountable is difficult, though individuals in Pakistan who misuse AI can still face prosecution”.

However, justice for gendered digital violence remains elusive. Across Pakistan and its neighbouring countries, the weaponisation of women’s images is escalating at terrifying speed. Deepfake pornography, once a fringe threat, is now widespread. Victims often wake up to find fabricated nude photos of themselves circulating in Telegram groups or being used for blackmail.

Earlier this year, a young Pakistani content creator became a victim of digital manipulation when her Instagram photos were altered to create fake explicit images. The doctored visuals spread rapidly online, leading to public shaming and harassment. Despite being the victim, she faced severe backlash and character attacks, exposing how technology-enabled abuse is compounded by a culture that blames women instead of protecting them.

Across the border, in India, a similar nightmare unfolded when women journalists, activists and even students found their photos listed in a mock ‘auction’ on an app that digitally placed their faces on pornographic images. The creators, young men, called it a joke. For the victims, it was a violation that went beyond the digital realm; it entered their homes, their families, and their safety.

In conservative societies, where women’s reputations are fragile currency, the damage is not limited to the internet; it can lead to social ostracism, professional ruin, or even physical danger.

Sadaf Khan, who is also the founder of a leading media development organisation, Media Matters for Democracy (MMfD), highlights that “deepfakes blur real and fake, exposing women to safety threats and stigma. Although Peca criminalises such acts, legal protections often fail to prevent harm or stigma, underscoring the need for a deeper societal response”.

These synthetic images spread faster than truth can catch up. Algorithms reward virality, not accuracy. Once a deepfake is online, the burden shifts to the victim to prove that what people are seeing is not real. The psychological toll of that inversion is immense.

While men are also targeted by digital manipulation, the harm is not gender neutral. In Pakistan and the broader region, where women’s honour and privacy are bound to societal expectations, such violations become instruments of control. They reinforce silence, shame and withdrawal from digital spaces. Women stop posting, stop engaging, stop existing online. The cost is not just personal; it is political. It erases their voices from public discourse.

The Saree Portrait trend, in this light, seems far less harmless. Every upload, every viral challenge adds to the pool of high-resolution female imagery feeding global AI systems. While most platforms claim to delete or anonymise data, transparency is rare and accountability nonexistent.

Sadaf Khan further points out that “holding major tech and AI platforms accountable is a global challenge. Initiatives like the UN’s High-Level Body on AI and the Global Digital Compact are shaping governance around AI and women’s safety, emphasising that true protection requires embedding safety and accountability into AI systems from the design stage”.

Even if Gemini or other major platforms act responsibly, their datasets are not isolated. Once personal photos exist online, they can be scraped, traded, or used to train other, less regulated models. The next generation of deepfakes won’t need hacking; it will need only imagination.

Education is the first line of defence, but it must go beyond basic digital literacy. Women in Pakistan and South Asia need to understand how AI learns, infers and deceives in order to protect themselves in the digital age.

Legal reform is urgent. Pakistan needs to pass the Personal Data Protection Bill and clearly define AI-related offences, as outdated laws like Peca no longer suffice. Regional cooperation is also vital, through shared protocols, hotlines and tech partnerships, to combat deepfakes that easily cross borders.

And lastly, individual precautions are crucial. Women should think carefully before joining AI trends, avoid sharing high-resolution or identifiable photos, and use blurred or cropped versions instead. If a deepfake or altered image appears, they should report it immediately and keep records such as screenshots, timestamps, and links.

The Saree Portrait trend may pass, but its warning remains: in societies where images can define a woman’s fate, AI’s ability to ‘see’ too much is dangerous. The real concern is not AI’s knowledge, but our readiness to face the consequences of allowing it to learn from us.


The writer is a media, research and creative services expert currently working with Pakistan TV Digital.


Disclaimer: The viewpoints expressed in this piece are the writer's own and don't necessarily reflect Geo.tv's editorial policy.


