Stronger safeguards needed as AI healthcare grows, warns WHO Europe

By APP
November 19, 2025

WHO official says AI will either be used to improve people's health or it could undermine patient safety

A logo is pictured outside a building of the World Health Organisation (WHO) in Geneva, Switzerland, April 6, 2021. — Reuters

COPENHAGEN: The growing use of artificial intelligence in healthcare necessitates stronger legal and ethical safeguards to protect patients and healthcare workers, the World Health Organisation's Europe branch said in a report published Wednesday.

That is the conclusion of a report on AI adoption and regulation in healthcare systems in Europe, based on responses from 50 of the 53 member states in the WHO's European region, which includes Central Asia.

Only four countries, or 8%, have adopted a dedicated national AI health strategy, and seven others are in the process of doing so, the report said.

"We stand at a fork in the road," Natasha Azzopardi-Muscat, WHO Europe's director of health systems, said in a statement.

"Either AI will be used to improve people's health and well-being, reduce the burden on our exhausted health workers and bring down healthcare costs, or it could undermine patient safety, compromise privacy and entrench inequalities in care," she said.

Almost two-thirds of countries in the region are already using AI-assisted diagnostics, especially in imaging and detection, while half of countries have introduced AI chatbots for patient engagement and support.

The WHO urged its member states to address "potential risks" associated with AI, including "biased or low-quality outputs, automation bias, erosion of clinician skills, reduced clinician-patient interaction and inequitable outcomes for marginalised populations".

Regulation is struggling to keep pace with technology, the WHO Europe said, noting that 86% of member states said legal uncertainty was the primary barrier to AI adoption.

"Without clear legal standards, clinicians may be reluctant to rely on AI tools and patients may have no clear path for recourse if something goes wrong," said David Novillo Ortiz, the WHO's regional advisor on data, artificial intelligence and digital health.

The WHO Europe said countries should clarify accountability, establish redress mechanisms for harm, and ensure that AI systems "are tested for safety, fairness and real-world effectiveness before they reach patients".
