You need to corroborate the information in AI-generated content, too.
1. Cross-reference the information with multiple reliable sources. If the AI content cites its source(s), verify that the citation is real and evaluate the source's trustworthiness.
2. Be on the lookout for inconsistencies, contradictions, and biases.
3. AI has been "trained" on a limited set of data with a cutoff date, so the content it provides may not reflect the most recent information.
What is a hallucination?
In the context of AI, a hallucination is an output that is factually incorrect or misleading. Hallucinations can be quite convincing, since generative AI is skilled at producing fluent and seemingly accurate text or images. They occur in part because AI is trained on imperfect data and prioritizes patterns over factual accuracy: when data is incomplete, AI fills the gaps by inventing details that fit the pattern but may not be true.
Spotting deepfakes is getting harder as the technology improves. Here are some things to look for:
Hint: If you can slow down the video playback, doing so can help reveal imperfections in the deepfake.
Consider the Content:
Source: One Tech Tip: How to spot AI-generated deepfake images; AI fakery is quickly becoming one of the biggest problems confronting us online. (2024, March 21). Independent [Online], NA. https://link.gale.com/apps/doc/A787059994/STND?u=southcollege&sid=ebsco&xid=a642a51f