How to spot Artificial Intelligence (AI)-generated pictures and videos

Generative Artificial Intelligence (AI) software is gaining popularity and complementing our everyday activities.

Just as AI tools now assist with production in factories, tools are being rapidly developed for school assignments and other forms of multimedia production.

ChatGPT, DALL-E 2, and Lumen5 have revolutionised the way we live. The emergence of this breed of technology, although beneficial, poses a great threat to the way information is produced and shared.

With a click, anyone can create or manipulate pictures and videos (also known as deep fakes), as well as share them across social media platforms in seconds.

Deep fakes are pictures, videos, and audio that have been manipulated or created with generative artificial intelligence software. The term is derived from deep learning, a technique in which software processes large data sets with little human intervention.

Deep fake videos, pictures, and sound bites are created or altered so that an individual appears to say or do something they never did. The targets of deep fakes are usually prominent people such as religious leaders, political leaders, and footballers.

In 2017, researchers at the University of Washington created a model video of Barack Obama. The lip-synced video was made by training a neural network on audio clips of the former U.S. president to generate realistic mouth shapes, which were then blended onto target videos. While the video looks real, Barack Obama never actually said what is heard in it.

A lip-sync of President Obama

In March 2023, an image of Pope Francis in a white Balenciaga puffer jacket surfaced online. The viral photos were later found to be AI-generated. Other deep fakes of the Pope included pictures of him in a pool with two women, a photo of him playing basketball, and a photo of him holding a gun.

Other AI-generated pictures circulating online include photos of Mark Zuckerberg modeling on a runway. While the context of such pictures may seem far-fetched, deep-fake photos of former US president Donald Trump resisting arrest are plausible and believable because Trump was facing criminal charges during the same period the pictures began circulating on social media.

This poses a danger not only to the information ecosystem but also to organisations and democracies all over the world, as people who lack the skills to verify these pictures may take up arms and threaten or cyberbully the creators of these images or videos.

But why should we be concerned about deep fakes?

Deep fakes can be created by anyone. They are usually created for fun, but they can easily be used for malicious purposes, especially because audiovisuals are more compelling and appeal more to audiences' emotions than any other form of communication.

While in 2017 it was only possible to lip-sync videos by blending mouth shapes and audio with existing footage, the technology has evolved: videos can now be created from scratch with just a few text prompts. Experts say it will become increasingly difficult to tell whether an image or video is a deep fake or real as AI continues to advance.

Eliot Higgins shared a thread of AI-generated images of Donald Trump being arrested, appearing in court, and sitting in a prison cell on Twitter. The thread drew 6.4 million views and about 5,000 retweets. Some European news outlets used the images to report on the charges against Trump and what his arrest might look like.

This AI-generated photo of Trump getting arrested was created by Eliot Higgins and was used by some European news outlets.

How to spot AI-generated pictures and videos

Altering photos and videos is not new. Before the era of deep fakes, there was photo and video editing software such as Photoshop and Premiere Pro. Unlike conventionally edited photos and videos, AI-generated images cannot be easily detected at first glance, but a critical look can reveal telltale signs that give them away.

  1. Be mindful of the context in which the pictures or videos are being shared. The message a fake picture or video conveys may be far-fetched or out of character for its subject. An example is the photo of Mark Zuckerberg on the runway: while not impossible, it is unlikely.
An AI-generated photo of Facebook CEO Mark Zuckerberg on a runway.
  2. Verify the source of the image or video. Always try to find the source of a photo or video if you are unsure about its authenticity. The best way to do this is a reverse image search of the image or of screenshots from the video. Tools like Google Reverse Image Search, Bing, and Yandex can help find the source of a video or picture and other related information. When verifying manually, pay particular attention to the aesthetic composition of the image. Here’s an example from a news network.

  3. Look out for audio quality in videos. Indications such as the lack of any form of sound, robotic voices, odd pronunciation of words, and imperfect lip-sync are signs that a video may be a deep fake.
  4. Deep-fake videos also sometimes have unusual body movements such as flickering or twitching. Movements of the face may appear to be real, but neck and hand movements are awkward in some cases.
  5. Be attentive to body proportions. AI-generated images may look real at first, but when you zoom in, there may be poor positioning and missing body features such as fingers, as identified in the Pope’s Balenciaga AI-generated photo. Other deep fake images have too many teeth, deformations around the ears and accessories (glasses, earrings, chains, etc.), and fading skin tone and hair.
An AI-generated photo of Pope Francis with a missing finger.
  6. Most deep-fake pictures have an artificial gloss or cartoonish look. Also, other people in the background often have blurred faces that look like they were drawn.
An AI-generated image of French President Emmanuel Macron depicted him taking part in the French pension protests.
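For the reverse image search recommended in tip 2, you can pre-build search links for a publicly hosted image instead of uploading it to each engine by hand. The sketch below, in Python, uses only the standard library; the endpoint URL patterns are assumptions based on commonly used public links and may change over time, so always confirm the results in a browser.

```python
from urllib.parse import quote

def reverse_search_urls(image_url: str) -> dict:
    """Build reverse-image-search links for a publicly hosted image URL.

    The endpoint patterns below are assumptions based on publicly known
    URLs for each engine; they are not official, stable APIs.
    """
    # Percent-encode the whole URL so it survives as a query parameter.
    encoded = quote(image_url, safe="")
    return {
        # Google Lens accepts an image URL via its uploadbyurl endpoint.
        "google": f"https://lens.google.com/uploadbyurl?url={encoded}",
        # Yandex reverse image search.
        "yandex": f"https://yandex.com/images/search?rpt=imageview&url={encoded}",
        # Bing visual search (pattern assumed; verify manually).
        "bing": f"https://www.bing.com/images/search?q=imgurl:{encoded}&view=detailv2&iss=sbi",
    }

if __name__ == "__main__":
    for engine, url in reverse_search_urls("https://example.com/photo.jpg").items():
        print(engine, url)
```

Opening each generated link in a browser shows where else the image appears online, which often reveals the original (unaltered) version or an earlier debunk.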


Related articles

Parody accounts contribute to information disorder. Here’s how to identify them

Bawumia’s claim about the number of Immigration Service staff completely false