
Deepfake

Safety & Ethics

AI-generated fake media -- typically video or audio -- that realistically depicts someone saying or doing something they never actually did.

Think of deepfakes like a digital mask that is so perfect, no one can tell it is a mask. Someone can wear the face and voice of any person and create a video that looks completely real. Now imagine the chaos if anyone could put on anyone else's face at will -- that is the deepfake problem.

A deepfake is a piece of synthetic media, usually a video or audio clip, created by AI to look or sound like a real person doing something they never actually did. The term combines "deep learning" (the AI technique used) with "fake." A deepfake might show a politician giving a speech they never gave, a celebrity in a situation they were never in, or anyone's face convincingly swapped onto someone else's body.

The technology works by training AI models on existing images and videos of a person. The model learns the person's facial expressions, movements, and voice patterns so thoroughly that it can generate new, fabricated content that looks authentic. Early deepfakes were easy to spot -- they had glitchy edges and unnatural movements. Modern deepfakes can be nearly indistinguishable from real footage, even to trained observers.
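The classic face-swap design behind early deepfakes can be sketched in miniature: one shared encoder compresses any face into a small latent code, and each person gets their own decoder trained to reconstruct their face from that code. Swapping means encoding person A's frame and decoding it with person B's decoder. The toy below uses random arrays as stand-ins for face images and least squares as a stand-in for gradient training; the array sizes and names are illustrative, not from any real system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for flattened 8x8 grayscale face images of two people.
faces_a = rng.random((100, 64))
faces_b = rng.random((100, 64))

# One SHARED encoder maps any face into a small latent code.
latent_dim = 16
encoder = rng.normal(size=(64, latent_dim)) * 0.1

def encode(x):
    return x @ encoder

# Each person gets their OWN decoder, fit to reconstruct that person's
# faces from the shared latent codes (least squares stands in for
# gradient-descent training of a neural decoder).
decoder_a, *_ = np.linalg.lstsq(encode(faces_a), faces_a, rcond=None)
decoder_b, *_ = np.linalg.lstsq(encode(faces_b), faces_b, rcond=None)

# The swap: encode a frame of person A, decode with B's decoder.
# The output has B's appearance arranged in A's pose and expression.
frame_a = faces_a[0]
swapped = encode(frame_a) @ decoder_b
print(swapped.shape)  # (64,) -- one fabricated 8x8 frame
```

Because the encoder is shared, the latent code captures pose and expression rather than identity, which is what lets a different decoder "repaint" the same expression with another face.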

The concerns about deepfakes are serious. They can be used to spread political misinformation (fake videos of world leaders), commit fraud (cloning someone's voice to trick their family or bank), create non-consensual content, and undermine trust in legitimate media. If any video can be faked, how do you know what to believe? This erosion of trust in visual evidence also produces the "liar's dividend": because fakes are plausible, bad actors can dismiss even genuine footage as fake.

On the positive side, the underlying technology has legitimate uses: movie studios use it for special effects, language dubbing with matched lip movements, and bringing historical figures to "life" in documentaries. The challenge for society is managing the harmful uses while preserving the beneficial ones. Detection tools, watermarking standards, and legal frameworks are all part of the emerging response to deepfakes.
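To make the watermarking idea concrete, here is a deliberately simplified least-significant-bit scheme: a bit pattern is hidden in the lowest bit of each pixel, invisible to the eye but recoverable by software. This is only an illustration of the concept; real provenance standards (such as C2PA Content Credentials) rely on cryptographically signed metadata rather than pixel tricks, and the function names here are invented for the example.

```python
import numpy as np

def embed_watermark(pixels, bits):
    """Hide a bit string in the least significant bits of an 8-bit image."""
    marked = pixels.copy()
    flat = marked.ravel()  # view into the copy, so edits land in `marked`
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear the LSB, then set it to the bit
    return marked

def extract_watermark(pixels, n_bits):
    """Read the hidden bits back out of the pixel LSBs."""
    return [int(v & 1) for v in pixels.ravel()[:n_bits]]

# Toy 2x2 8-bit grayscale image, watermark pattern "1011".
image = np.array([[200, 13], [77, 128]], dtype=np.uint8)
mark = [1, 0, 1, 1]
stamped = embed_watermark(image, mark)
print(extract_watermark(stamped, 4))  # [1, 0, 1, 1]
```

Changing the lowest bit shifts each pixel value by at most 1 out of 255, which is why the mark is imperceptible; it is also why naive schemes like this are fragile (re-encoding or resizing destroys them), motivating the signed-metadata approach of emerging standards.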

Real-World Examples

  • Fake videos of politicians making statements they never actually made, spread on social media
  • Scammers cloning a CEO's voice to trick an employee into wiring money
  • Movie studios using face-swapping technology for de-aging actors or recreating deceased performers

Tools That Use This

Runway (Freemium), ElevenLabs (Freemium)

Related Terms

AI Ethics, AI Safety, Deep Learning, Generative AI