What the deepfakes of Donald Trump's arrest teach us
Over the past few hours on social media you may have come across images depicting the arrest of Donald Trump. As you probably already know, the photos, which quickly went viral, were generated by artificial intelligence (AI) and are therefore fake. Some of these photorealistic creations are quite convincing, while others look more like frames from a video game or a lucid dream. A thread viewed over 3 million times, posted on Twitter by Eliot Higgins, founder of the investigative journalism organization Bellingcat, shows Trump surrounded by synthetic cops, on the run, and picking out a jail suit.
We asked Higgins for some tips internet users can keep in mind to distinguish fake AI-generated images, such as those in his Twitter thread, from any real photos that might in the near future capture the arrest of the former president of the United States, an outcome he himself said he feared in recent days.
" Having created many images for the thread , it is evident that often the focus is on the first object described – in this case Trump and various members of his family –, while everything around him is often flawed,” Higgins explains via email. Move your gaze away from the focal point of the image: does the rest of the photo seem to you to be a border element?
Even though the latest versions of AI image generators, such as Midjourney (Higgins used the fifth version of the tool for his thread) and Stable Diffusion, are making significant progress, flaws in smaller details remain a common feature of fake images. And as AI art grows in popularity, many artists point out that the algorithms still struggle to replicate the human body consistently and naturally.
Looking at the images in Higgins' thread, Trump's face appears quite convincing in many of the posts, as do his hands. At times, though, the body proportions look slightly off, and in places the former president's figure blends into that of a police officer.
Need another clue? Look for odd writing on walls, clothing, or other visible objects. For Higgins, the presence of strangely shaped text is a useful signal for telling fake images from real photos. In the AI images depicting the arrest of Donald Trump, for example, the police officers wear badges, hats, and other items that at first glance appear to carry writing. On closer inspection, however, the words are meaningless.
Another way to tell whether an image has been generated by artificial intelligence is to look for over-the-top facial expressions: "I have noticed that when asked to reproduce expressions, Midjourney tends to render them in an exaggerated way – for example, very pronounced skin folds from a smile," says Higgins. The pained expression on Melania Trump's face is more reminiscent of a reconstruction of Munch's The Scream, or a frame from an A24 horror film, than of a snapshot taken by a human photographer.
It is also worth noting that deepfakes of world leaders, celebrities, influencers, and anyone else with a large number of photos circulating online can be more persuasive than those of people with a less visible internet presence. "It is clear that the more famous a person is, the more images the AI has been trained on," says Higgins. "So very famous people are rendered extremely well, while less famous people usually come off a bit lopsided." If you don't want to risk an algorithm recreating your face, it might be worth thinking twice before posting a string of selfies after a night out with friends (although AI generators have likely already collected your image data from the web).
But what is Twitter's policy on images generated by artificial intelligence? The platform's current rules state that users may not "share synthetic, manipulated or out-of-context media that may deceive or confuse people and lead to harm ('misleading media')". At the same time, Twitter allows several exceptions for memes, commentary, and posts not created with the intent to mislead users.
Just a few years ago, it was almost unthinkable that an ordinary person would soon be able to fabricate photorealistic deepfakes of world leaders from their own home. As it becomes increasingly difficult to distinguish AI-generated images from real ones, social media platforms may need to re-evaluate their approach to synthetic content and find ways to guide users through the complex and often disturbing world of generative AI.
This article originally appeared on sportsgaming.win US.