How deepfakes are on the verge of destroying political accountability


AI-generated pictures, videos and voices, known as deepfakes, are so believable and widely available that people will soon be unable to distinguish between real and manipulated media, an image analyst told Fox News.

“What’s important about deepfakes is not, ‘oh, we can manipulate audio, images and videos’ — we’ve always been able to do that,” said Hany Farid, a professor at the University of California, Berkeley’s School of Information. “But we’ve democratized access to technology that used to be in the hands of the few and is now in the hands of the many.”

“When we enter this world where any audio, image or video can be manipulated, well, then how do you believe anything?” Farid continued. “Anytime I see the president speak or a candidate speak or a CEO speak or a reporter speak, now there’s this lingering doubt.”

UC BERKELEY PROFESSOR ISSUES WARNING ABOUT DEEPFAKE AI IMAGES AND VIDEOS:

AI has been used recently to manipulate battlefield images and videos of the war in Ukraine and to generate fake imagery such as the Pope wearing designer clothing. Earlier this month, media companies warned readers not to believe fake AI-generated mugshot photos of former President Trump ahead of his indictment. 

“The disinformation campaigns are now going to be fueled with generative AI audio, images and videos,” Farid told Fox News.

Intelligence analysts have warned that deepfakes could sway elections and public opinion on a large scale. Amid AI’s rapid recent growth, tech leaders like Elon Musk have called for a pause in large-scale development over safety concerns.

Elon Musk told Tucker Carlson that AI poses a potential civilizational threat to humans. (Fox News)

Farid warned that as the line between real and fake blurs, internet users will find it increasingly difficult to distinguish truth from falsehood.

“The fakes are becoming more and more real and more and more difficult to discern, particularly at the speed at which the internet works,” the professor said. “So that’s really worrisome when we can’t trust the things we read, see and hear online.” 

Another concern is the use of deepfakes to generate non-consensual sexual imagery, Farid told Fox News.

Deepfake images will be impossible to distinguish from real images in the coming years, UC Berkeley School of Information professor Hany Farid told Fox News. (Jon Michael Raasch/Fox News)

“People are taking women’s likenesses, whether that’s a politician, a journalist, a professor, somebody who’s just attracted unwanted attention and inserting her likeness into sexually explicit material and then carpet bombing the Internet with that as a weaponization,” he said. 

Additionally, Farid said scams that use AI-generated voices to extort money from families are on the rise.

“We have seen a startling rise in fraud where people are getting phone calls now from what sounds like loved ones saying they’re in trouble,” Farid said. “It sounds like your loved one in panic and you send money.”

AI platforms used to manipulate voices have been used in call scams. (Gabby Jones/Bloomberg via Getty Images)

Making deepfakes and using AI for everyday tasks will only get easier and may soon be possible on a cell phone, Farid said.

“We should think very carefully about how we’re going to distinguish the real from the fake,” he told Fox News.


