OpenAI CEO Altman takes U-turn, says no plans to leave Europe

Web Desk
May 26, 2023

Artists accuse AI firms of using their work to train their image-creating bots

OpenAI CEO Sam Altman speaks during a keynote address announcing ChatGPT integration for Bing at Microsoft in Redmond, Washington, on February 7, 2023. — AFP

Sam Altman, CEO of OpenAI, the creator of the artificial intelligence-powered (AI) chatbot ChatGPT, has made a U-turn on remarks he made earlier this week about leaving the European Union (EU) if the company failed to comply with the regulations put forth to oversee the technology.

Altman said on Twitter Friday that he had held productive discussions in Europe on AI regulation, adding: "We are excited to continue to operate here and of course have no plans to leave."

The legislation planned by the EU would be the first specifically designed to keep AI in check. The law could also require AI companies to disclose which copyrighted material they used to train their machines to create text and images.

Altman told Reuters that "the current draft of the EU AI Act would be over-regulating."

"But we have heard it's going to get pulled back," the CEO of the San Francisco-based company said.

Creative artists have accused AI companies of using their work, such as music and art, to train machines that imitate it.

The photo shows a computer screen with the home page of the artificial intelligence OpenAI website, displaying its ChatGPT robot. — AFP/File

However, Altman contended that it would be technically impossible for OpenAI to comply with some of the AI Act's safety and transparency requirements, as per Time magazine.

During an event at University College London, the 38-year-old said he was optimistic AI could create more jobs and reduce inequality.

Dangers of AI and leaders' initiatives

He also met with British Prime Minister Rishi Sunak, alongside the heads of other AI companies including DeepMind and Anthropic, to discuss the technology's risks — from disinformation to national security and even "existential threats" — and the voluntary actions and regulations required to manage them.

Experts have been voicing concerns that AI technology could threaten the existence of human civilisation. Back in March, an open letter signed by a large number of tech leaders and CEOs, including Elon Musk, underscored that AI systems are a threat to humans and urged a slowdown in their development.

However, PM Sunak said AI could "positively transform humanity" and "deliver better outcomes for the British public, with emerging opportunities in a range of areas to improve public services".

(From right) Prime Minister Rishi Sunak meets with other tech leaders including Sam Altman, CEO of OpenAI, in 10 Downing. — Twitter/10Downingstreet/File

Last week, at the G7 summit in Hiroshima, the leaders of the US, UK, Germany, France, Italy, Japan and Canada agreed to create "trustworthy" AI, urging the world to evaluate the opportunities and challenges posed by these systems.

They also said that "a working group will be set up to tackle issues from copyright to disinformation".

Before any legislation takes effect, the European Commission is looking to develop an AI pact with Alphabet.

Silicon Valley veteran, author and O'Reilly Media founder Tim O'Reilly said the best starting point would be mandating transparency and building regulatory institutions to enforce accountability.

"AI fearmongering, when combined with its regulatory complexity, could lead to analysis paralysis," he said.

"Companies creating advanced AI must work together to formulate a comprehensive set of metrics that can be reported regularly and consistently to regulators and the public, as well as a process for updating those metrics as new best practices emerge."
