Friday, May 05, 2023
CEOs of top artificial intelligence companies, including those of Alphabet's Google and Microsoft, met with President Joe Biden at the White House on Thursday to discuss the risks and safeguards associated with the technology, which is drawing the attention of governments and lawmakers worldwide.
Generative artificial intelligence, such as ChatGPT, has become a buzzword this year, with many companies launching similar products that they believe will change the nature of work. Supporters claim that such tools can make medical diagnoses, write screenplays, create legal briefs, and debug software.
However, there is growing concern about how the technology could lead to privacy violations, skew employment decisions, and power scams and misinformation campaigns.
The meeting lasted two hours and included Vice President Kamala Harris, administration officials, and top AI executives. During the meeting, Harris stated that while the technology has the potential to improve lives, it could also pose safety, privacy, and civil rights concerns.
She called on the chief executives to ensure the safety of their artificial intelligence products and stated that the administration is open to advancing new regulations and supporting new legislation on artificial intelligence.
The administration also announced a $140 million investment from the National Science Foundation to launch seven new AI research institutes and stated that the White House's Office of Management and Budget would release policy guidance on the use of AI by the federal government. Additionally, leading AI developers will participate in a public evaluation of their AI systems.
The Republican National Committee produced a video, built entirely with AI imagery, depicting a dystopian future during a second Biden term. Political ads like this are expected to become more common as AI technology proliferates.
The Biden administration has already taken steps to address concerns related to AI, including signing an executive order directing federal agencies to eliminate bias in their use of AI and releasing an AI Bill of Rights along with a risk management framework.
Last week, the Federal Trade Commission and the Department of Justice's Civil Rights Division also said they would use their legal authorities to fight AI-related harm.
However, some experts argue that the US has fallen short of the tougher approach taken by European governments in regulating technology and crafting strong rules against deepfakes and misinformation.