
Fact-Checking: How-To Guide

Can You Identify the AI Source?

You need to corroborate the information in AI-generated content, too.

1. Cross-reference the information with multiple reliable sources. If the AI content cites its source(s), verify that each source actually exists and evaluate its trustworthiness.

2. Be on the lookout for inconsistencies, contradictions, and biases. 

3. Remember that AI models are "trained" on a limited set of data with a cutoff date, so the content they provide may not reflect the most recent information.

What is a hallucination?

In the context of AI, a hallucination is an output that is factually incorrect or misleading. Hallucinations can be quite convincing because generative AI is skilled at producing fluent, seemingly accurate text or images. They happen because AI is trained on imperfect data and prioritizes patterns over factual accuracy. When its data is incomplete, the AI fills the gaps by inventing details that fit the pattern but may not be true.

Spotting Deepfakes

Spotting deepfakes is getting harder because deepfakes are getting better. Here are some things to look for:

  • Skin and Lighting: Deepfakes can struggle with natural skin texture and lighting effects. Check for unusual smoothness, mismatched skin tones, or shadows that don't line up with the light source.
  • Eyes and Mouth: Deepfakes sometimes struggle to replicate natural eye movements and blinking. Look for unnatural blinking patterns, odd reflections in the eyes, or lip movements that don't quite match the speech.
  • Hair and Facial Features: Deepfakes may have trouble with details like hair texture, jewelry, or facial features like moles. Look for inconsistencies or blurriness around these areas.

Hint: If you can slow down the video playback, it can help reveal imperfections in a deepfake.
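For those comfortable with a little code, here is a minimal sketch of the same idea (assuming Python with the opencv-python package installed and a hypothetical local file named suspect_video.mp4): it steps through a video one frame at a time, making it easier to inspect faces, lighting, and edges for the artifacts described above.

    import cv2  # requires the opencv-python package

    # "suspect_video.mp4" is a placeholder; point this at the clip you want to review.
    cap = cv2.VideoCapture("suspect_video.mp4")

    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        cv2.imshow("Frame-by-frame review", frame)
        # Wait for any key before advancing to the next frame; press 'q' to quit early.
        if cv2.waitKey(0) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()

Most video players offer the same effect without any code: look for a playback-speed setting and drop it to 0.5x or lower.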

Consider the Content:

  • Unnatural Speech: Deepfakes may have audio that sounds off, with robotic voices, strange pronunciation, or background noise that doesn't fit the scene.
  • Out-of-Character Actions: Is the content believable? If a public figure is saying or doing something very out of character, it might be a deepfake.

 

Source: One Tech Tip: How to spot AI-generated deepfake images; AI fakery is quickly becoming one of the biggest problems confronting us online. (2024, March 21). Independent [Online], NA. https://link.gale.com/apps/doc/A787059994/STND?u=southcollege&sid=ebsco&xid=a642a51f