US expert highlights ongoing cheapfakes threat despite rise of deepfakes

Despite growing concerns over the rise of deepfakes, cheapfakes continue to pose a significant challenge, said Dr. Heather Ashby, a US foreign policy and national security expert specialising in the impact of technology on elections.

Dr. Ashby made her remarks during an event jointly organized by the US Embassy in Dhaka and the Diplomatic Correspondents Association, Bangladesh (DCAB) at the EMK Center today.

Her work focuses on the intersection of national security, artificial intelligence (AI), and elections.

Dr. Ashby explained that cheapfakes involve manipulated media, where video, audio, and images are altered with relatively simple and low-cost editing tools.

In contrast, deepfakes rely on AI to create synthetic media that can seamlessly alter video, audio, and images, making it harder to detect.

She cited an example of a cheapfake from late 2020, when a spokesperson for China's foreign ministry, Lijian Zhao, shared a fabricated image of an Australian soldier holding a bloody knife next to a child.

Dr. Ashby also referenced misleading videos circulated on social media just days after the 2020 US presidential election, falsely suggesting that election workers were engaged in voter fraud.

The doctored footage went viral on social media, demonstrating the potential harm of cheapfakes, even though law enforcement agencies investigated and refuted these claims.

Dr. Ashby also discussed the role of deepfakes, which have appeared in a number of elections, including the US presidential campaign, while continuing to highlight the danger of cheapfakes.

While some AI-generated content is used for satire and parody, even humorous deepfakes add to the persistent problem of manipulated media during elections, she said.

Dr. Ashby also shared a number of resources for identifying deepfakes and cheapfakes.

In response to a question about AI in foreign policy, she emphasized that AI is most effective when directed toward a specific problem or challenge.
She explained that within the US national security landscape, AI has been used extensively, well before the public release of ChatGPT in 2022.

AI enables the processing of vast amounts of data, facilitating the identification of anomalies that could indicate potential threats, she said.

The US Department of Homeland Security (DHS), for instance, leverages AI to support law enforcement and emergency response.

AI helps the Federal Emergency Management Agency (FEMA) in disaster response, where it plays a critical role in efficiently coordinating resources and managing information.

DCAB President Nurul Islam Hasib also spoke at the event.

This article has been posted by a News Hour Correspondent. For queries, please contact through [email protected]