Microsoft's Brad Smith raises concerns over deepfake technology

By Web Desk
Microsoft vice chair and president Brad Smith speaks at the Semafor World Economic Summit on April 12, 2023, in Washington, DC. — AFP

Amid the rapid development of artificial intelligence (AI) and growing expert concern over the pace of the technology, Microsoft President Brad Smith said Thursday that he was worried about deepfakes: realistic-looking but false content.

In a speech in Washington on how AI should be regulated, he called for steps to ensure that people know when a photo or video is real and when it has been generated by AI, potentially for malicious purposes.

Experts have grown concerned about the technology, which has surged since the release of OpenAI's ChatGPT, an AI-powered chatbot capable of writing human-like responses.

Smith said: "We're going to have to address the issues around deep fakes. We're going to have to address in particular what we worry about most, foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians."

This photo shows a Microsoft logo displayed at the Mobile World Congress in Barcelona. — AFP/File

"We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI."

Smith also called for licensing for the most critical forms of AI with "obligations to protect the security, physical security, cybersecurity, national security."

"We will need a new generation of export controls, at least the evolution of the export controls we have, to ensure that these models are not stolen or not used in ways that would violate the country's export control requirements," he underlined.

This photo shows screens displaying the logos of OpenAI and ChatGPT. — AFP/File

Legislators in Washington have been struggling to work out how best to regulate the technology as tech giants such as Microsoft and Google race to incorporate it into their products.

OpenAI CEO Sam Altman told a Senate panel last week, in his first appearance before Congress, that the use of AI to interfere with election integrity is a "significant area of concern", adding that it needs regulation.

Altman, whose OpenAI start-up is backed by Microsoft, also called for global cooperation on AI and incentives for safety compliance.

In a blog post published Thursday, Smith also argued that people "needed to be held accountable for any problems caused by AI," urging lawmakers to ensure that "safety brakes be put on AI used to control the electric grid, water supply and other critical infrastructure so that humans remain in control."

OpenAI CEO Sam Altman speaks during a keynote address announcing ChatGPT integration for Bing at Microsoft in Redmond, Washington, on February 7, 2023. — AFP

He also suggested the use of a "Know Your Customer"-style system for developers of powerful AI models to "keep tabs on how their technology is used and to inform the public of what content AI is creating so they can identify faked videos."

Back in March, hundreds of researchers, CEOs and tech leaders, including Tesla CEO Elon Musk, signed an open letter voicing concerns over the "profound risks" AI technology poses to society and humanity.

"AI systems with human-competitive intelligence can pose profound risks to society and humanity," said the open letter.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter added.