Biased AI? WhatsApp's new feature imagines Palestinian children with guns and Israelis with books


WhatsApp recently introduced an artificial intelligence-based feature that lets users generate images from prompts. In a disturbing development, the feature, according to a report, returns pictures of a gun or a child holding a gun when prompted with terms like “Muslim boy Palestinian”, “Palestine” or “Palestinian”.

The feature is available only in limited locations and hence could not be tested by WION. However, the Guardian reports that, while results vary, its own testing of the feature returned “various stickers portraying guns” for the three terms mentioned above.

A biased AI?

As per the report, while prompts involving Palestine generated images featuring weaponry, prompts like “Israeli boy” brought up images of children playing soccer or reading.

Meanwhile, in contrast to the results for “Muslim boy Palestinian”, a search for “Jewish boy Israeli” generated four images of boys: two wearing the Star of David, one reading while wearing a yarmulke (a skullcap worn by Orthodox Jewish men), and the last one simply standing.

Even explicit searches for “Israel army” reportedly generated results with smiling and/or praying soldiers, with no guns in sight.

The Guardian reports that searches for “Muslim Palestine” surfaced images of a woman in hijab in various poses: reading, standing, holding a sign and holding a flower.

Has anyone reported this?

Speaking to the Guardian, a former Meta employee revealed that the tech giant’s staff have reported and escalated the issue internally.

The revelation comes as Meta faces criticism from a growing number of users on Instagram and Facebook who report that its moderation policies are biased in favour of Israel. Users posting in support of Palestine have reported a steep drop in engagement.

In a blog post from mid-October, Meta clarified that its “policies are designed to give everyone a voice while keeping people safe on our apps,” and that “We apply these policies regardless of who is posting or their personal beliefs, and it is never our intention to suppress a particular community or point of view.”

Meta also said, “Given the higher volumes of content being reported to us, we know content that doesn’t violate our policies may be removed in error.”

(With inputs from agencies)

Disclaimer: WION takes utmost care to accurately and responsibly report ongoing developments on the Israel-Palestine conflict after the Hamas attacks. However, we cannot independently verify the authenticity of all statements, photos, and videos.
