US-led AI pact draws skepticism from experts: ‘feel-good measure’


The U.S. and U.K. joined more than a dozen countries to unveil a new artificial intelligence agreement aimed at preventing rogue actors from abusing the technology, though not all experts are sold on how useful the pact will be.

“This is really more of an agreement of intent than actual substance,” Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation, told Fox News Digital.

Siegel’s comments come after what a U.S. official described as the first-ever detailed agreement on AI safety was unveiled Sunday, according to a report from Reuters. The pact puts in place measures meant to create AI systems that are “secure by design.”


Vice President Kamala Harris watches President Biden sign an executive order during an AI event at the White House on Oct. 30, 2023. (Al Drago/Bloomberg via Getty Images)

The 20-page document that was signed by 18 countries acknowledged the need to develop ways to keep the public safe from potential abuses of AI technology. But the agreement is non-binding and will serve more as a guide on how to monitor AI systems for abuse, Reuters reported.

“There needs to be some specifics behind this – a set of procedures and regulations – before anyone can give a reaction beyond ‘it’s a good start,'” Siegel said. “Examples of actions might include the watermarking initiatives on algorithms and/or outputs, asking vendors to perform a KYC (know your customer) procedure like banks do to prevent money laundering, or algorithmic stress tests to make sure they can’t easily be manipulated by bad actors.”

Christopher Alexander, chief analytics officer of Pioneer Development Group, called the new agreement nothing more than a “feel-good measure.”

“The only potential value of an agreement like this is if it paves the way for regulations that governments can use to punish people who misuse AI,” Alexander told Fox News Digital. “Provide governments with enforceable rules and industry with guidelines to follow that actually matter. Otherwise it is the typical ‘admiration of the problem’ rather than solving anything.”

A man uses the OpenAI ChatGPT artificial intelligence chat website in this illustration photo on July 18, 2023. (Jaap Arriens/NurPhoto via Getty Images)


The deal comes as the Biden administration has pushed for more regulation of AI, including signing an executive order last month that was hailed as the first step toward safer development of AI. But experts are also skeptical that order will carry much weight, something the administration has acknowledged while pushing Congress for more action.

Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore were among the countries to sign the latest international agreement, something U.S. Cybersecurity and Infrastructure Security Agency Director Jen Easterly said was an important step toward AI safety across the globe.

“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Easterly told Reuters, adding that the pact’s guidelines represent “an agreement that the most important thing that needs to be done at the design phase is security.”

Nevertheless, experts remain skeptical, with Bull Moose Project Policy Director Ziven Havens arguing the pact is “bland” and lacks “seriousness.”

President Biden speaks from the Oval Office of the White House. (AP/Jonathan Ernst/Pool)


“Eighteen countries came together to work on this paper and the best they could come up with was to remind AI developers about securing their supply chains and monitoring model security,” Havens told Fox News Digital.

Havens also pointed out the lack of enforcement mechanisms in the deal, arguing that it read more like “an op-ed than a serious policy proposal.”

“Instead, Congress should take the leap and propose serious legislation on AI, including on generative AI and minor safety,” Havens said.

That concern was shared by Samuel Mangold-Lenett, a staff editor at The Federalist, who argued for more effort on AI regulation in the U.S.


“This agreement will likely have little, if any, impact. It’s ‘non-binding’ and lacks enforcement mechanisms,” Mangold-Lenett told Fox News Digital. “We want to encourage AI developers to create secure products that protect user data and intellectual property, but we need action on the home front, not empty gestures abroad.”


