December 26, 2025
In the wake of increasing use of AI browsers, OpenAI has warned that AI-driven browsers and assistants are facing a grave cybersecurity threat, particularly pointing to risks posed by prompt injection attacks.
As AI browsers are becoming more mainstream, so are the threats. Security researchers from various sectors have also voiced these concerns, noting that the rapid adoption of AI tools is outpacing the development of effective security measures.
Prompt injection attacks are carried out through embedding malicious instructions within seemingly decent content, such as web pages or emails.
When AI systems process this content, they may inadvertently prioritise the attacker’s hidden commands over legitimate user requests.
Researchers have proved that these attacks can lead to data leaks, manipulation of outputs, and even overshadow safety controls. In some cases, AI agents were manipulated into performing actions they were designed to avoid, such as accessing restricted files.
What sets AI browsers apart from traditional web browsers is that they actively interpret language and take action, automating tasks like form completion and document retrieval.
This functionality, especially in enterprises, often comes with more permissions, which ultimately increases the impact of successful attacks.
Experts warned that using conventional defences is not a viable manner of detecting prompt injection attacks, as the malicious payload is often plain text and can be hidden in comments or metadata.
Studies also indicated that even well-trained AI models struggle to differentiate between legitimate and harmful instructions when presented in natural language.
With companies accelerating the adoption of AI systems, it is high time that we adopted robust security frameworks to remain unharmed from such cybercrimes.