AI chatbots allowing users to talk with Hitler, Nazi figures spark radicalisation fears


As experts call on world leaders to introduce laws regulating the rampant growth of artificial intelligence (AI), warning of its alarming impact on society, another fear is emerging from AI-based chatbots: radicalisation.

Hitler chatbot

The release of ChatGPT in 2022 marked a pivotal moment in the development of AI, but some of its responses to users prompted fears, notably in the context of far-right extremism. Those fears may now be becoming a reality.

Gab, an American social networking platform, is known for its far-right user base and is reportedly often described as a haven for neo-Nazis and racists. 


Andrew Torba, Gab’s CEO, first announced the company’s AI agenda in January last year in a post declaring that “Christians Must Enter the AI Arms Race,” according to a report by Rolling Stone.

He also reportedly called out the “liberal/globalist/talmudic/satanic worldview” of mainstream AI tools and vowed to build a system which upholds “historical and biblical truth.” 

This included creating a chatbot with characters including Nazi leader Adolf Hitler and Osama bin Laden. 

Torba, in a blog post about AI, also cited his conversations with ChatGPT saying that it will “scold” users for asking controversial questions and “shove liberal dogma down your throat, trying to program your mind to stop asking those questions”. 

A report by Rolling Stone, citing a preview from the company, shows it has created an array of right-wing AI chatbots, including one named “Uncle A”.

“Uncle A” reportedly poses as Hitler and denies the Holocaust, calling the slaughter of six million Jews “preposterous” and a lie “perpetrated by our enemies.” 

‘Weaponisation of chatbots’

“It would appear that the potential weaponisation of chatbots is well under way and now presents a clear security threat,” said Adam Hadley, founder and executive director of Tech Against Terrorism, as quoted by The Times. 

He added, “We can see use cases where these specially developed automated tools can radicalise, spread propaganda, and disseminate misinformation.”

A survey by the Anti-Defamation League, in May last year, found that a majority of Americans are worried about AI’s impact, including the propagation of false information, radicalisation, and the promotion of hate and antisemitism.

‘Unusually high’

The Anti-Defamation League, in a separate report, said that right-wing extremists committed every ideologically driven mass killing in 2022 with an “unusually high” number perpetrated by white supremacists. 

In 2018, Robert Bowers perpetrated the deadliest attack on Jews in United States history when he opened fire with an AR-15 rifle after he had reportedly raved on social media about his hatred of Jewish people. That attack occurred without any involvement of an AI chatbot.

Larger issue?

Experts also fear that the responses of these chatbots could misrepresent historical facts.

Last year, an app called Historical Figures, which allowed users to speak to notable people from history, including Hitler, his Nazi lieutenants and other past dictators, sparked controversy.

The backlash grew after users shared screenshots of their conversations, including one with Heinrich Himmler, the chief of Nazi Germany’s SS and an architect of the Holocaust, in which the app’s version of Himmler denied responsibility for the Holocaust despite his well-documented role.

(With inputs from agencies)
