AI tool creates deceptive Biden, Trump images, tests show

Despite promises to prevent phony photographs of the presidential candidates ahead of the November election, a popular AI tool can still be used to fabricate misleading and incriminating images of President Joe Biden and Donald Trump, a watchdog said on Wednesday after running tests.

Disinformation specialists fear widespread abuse of AI-powered applications in a year of major elections around the world, driven by the proliferation of cheap, easy-to-use online tools that lack adequate safeguards.

The non-profit Center for Countering Digital Hate (CCDH) said it had evaluated two tools that generate images in response to text prompts: Midjourney and ChatGPT, the latter from Microsoft-backed OpenAI.

“Midjourney’s guardrails failed more often,” CCDH said in a report, adding that the tool failed in 40% of test cases.

ChatGPT, by contrast, failed in only about 3% of cases, according to CCDH.

In addition to Biden and Trump, CCDH tested the platforms with prompts relating to German Chancellor Olaf Scholz, French President Emmanuel Macron, and European Commission President Ursula von der Leyen.

According to the research, Midjourney failed 50% of the tests pertaining to Biden and Trump.

Among them were images of Trump standing next to a body double and of Biden being taken into custody.

Midjourney did not respond to a request for comment.

Tech activists reported in March that Midjourney had blocked all prompts relating to Trump and Biden, preventing users from fabricating images of the two candidates.

However, CCDH found that users could easily circumvent the restriction, in some cases simply by adding a single backslash to a prompt that Midjourney had previously blocked.
