
What Are Deepfakes, And Should I Be Worried About Them?

Throughout history, images have been used to tell stories, to flatter individuals, and to mock and belittle them. Deepfakes are the modern manifestation of this trend. I am going to be taking part in Databarracks' wargame entitled "Defending Deepfakes and Disinformation," so I thought I would share a few of my thoughts on the subject and what we as business continuity professionals should be aware of.

A deepfake is a video, audio clip, or image created by artificial intelligence, often making it appear that someone is doing or saying something they never actually did. They can be convincing. The concept came to public attention in February 2024 when Arup, the global engineering and design consultancy, fell for a deepfake scam and lost £20m. A senior executive was asked to join a call with several colleagues they knew, including their UK-based Chief Financial Officer. However, the people on screen were AI-generated video avatars, designed to look and sound exactly like real Arup staff. During the meeting, the fake CFO instructed the executive to make several urgent money transfers to supposedly confidential accounts related to a secret corporate acquisition. The fraud was only discovered when internal checks flagged unusual activity. What seems scary about this case is that the people on the call were familiar to the executive who made the transfers, so the impersonations must have been convincing. The only difficulty I have with the case (and I wonder if there is more to it than meets the eye) is that one person was allowed to transfer such large sums of money. If so, it is a significant failure of controls that millions of pounds could be moved without more people involved and a sign-off chain.

Deepfakes have been around for a few years; I wrote a bulletin about them in 2019. However, with AI becoming cheaper and more advanced, they are now easier and more affordable to produce.

The use of images for propaganda has been around since humans first started to reproduce them. Cave art depicting men hunting animals may have been a way of celebrating mastery over the environment. Kings and other prominent people used portraits to glorify and elevate themselves. Hyacinthe Rigaud's 1701 portrait of King Louis XIV of France ("The Sun King") shows him in a grand, majestic pose, with a long flowing robe, sword, and crown, to symbolize absolute power. He is painted as young and strong, even though he was aging at the time. As well as glorifying themselves in the moment, this was the image such rulers wanted to project to subsequent generations. On the other hand, James Gillray, a political cartoonist working in the late 18th century, brutally mocked figures like Napoleon and King George III by portraying them with exaggerated physical features (e.g., big noses and hunched backs) to symbolize moral flaws or stupidity.

In a similar way, in the 1930s, Stalin airbrushed purged officials out of photographs. Nikolai Yezhov, known as the “Bloody Dwarf,” once head of Stalin’s secret police, was famously airbrushed out of a photograph of Stalin by the Moscow Canal after he fell from grace and was executed. Leon Trotsky, an early revolutionary leader and Stalin’s rival, was systematically removed from photos of key Soviet events after his exile and eventual assassination. Photographs, which should be permanent records of events and people, became tools of propaganda, and visual history was rewritten to serve those in power.

Deepfakes have been produced for many reasons:

  • Entertainment: Movies using deepfakes to de-age actors.
  • Fraud: Like the Arup case, corporate scams exploit synthetic video. Remote working has made these scams even easier to pull off, as employees often rely on video calls and emails without the same face-to-face checks, making it harder to spot when the person on screen isn’t real.
  • Disinformation: Fake political speeches designed to destabilize, especially when those creating them use bots to promote them on social media. In 2023, AI-generated images showing Donald Trump being violently arrested in New York went viral online. Although entirely fake, the photos stirred political outrage and confusion and were widely shared on social media as genuine images.
  • Reputation attacks: Creating fake videos to damage individuals’ credibility. Malaysia in 2019 saw a deepfake video scandal targeting politician Azmin Ali, allegedly showing him in a compromising situation; while authenticity was never conclusively proven, the damage to his credibility and political standing was immediate.
  • Satire and parody: Sometimes obvious, sometimes dangerously subtle. Deepfake videos of celebrities like Tom Cruise performing absurd stunts on TikTok have gone viral, while a fake Boris Johnson giving absurd speeches has been shared online as satire, blurring the line between comedy and confusion.
  • Fake brand promotion: Deepfakes have been used to create false endorsements, with AI-generated videos of celebrities appearing to promote products they've never agreed to, misleading consumers and damaging brand trust. AI-generated videos of a fake Tom Hanks promoting dental plans circulated online, tricking viewers into believing he had endorsed the product.
  • Psychological operations: Used by state or political actors to erode trust, spread confusion, or manipulate populations. At the beginning of the Ukraine war in 2022, a deepfake video of Ukrainian President Zelenskyy falsely urging troops to surrender spread rapidly across social media and even appeared on a hacked news site.

Research by Vaccari and Chadwick (2020) found that while deepfakes may not always deceive people outright, they create dangerous uncertainty. In an experiment involving a deepfake of Barack Obama, only a small percentage fully believed the fake video. However, many more were left unsure, and that uncertainty significantly reduced their trust in political news on social media. Deepfakes, it seems, can undermine confidence in what we see, rather than completely fooling us.

So, as business continuity professionals, what can we do about deepfakes, either to recognize them or to respond when one is damaging our organization?

  • Recognize deepfakes as an emerging risk in business continuity and cyber threat planning, and quantify your organization's exposure to them.
  • Update crisis communication plans or crisis management plans to include procedures for responding to synthetic media attacks, especially to rapidly debunk them if they are pushing disinformation.
  • Ensure that protocols and checks are in place to prevent an Arup-type fraud.
  • Run exercises that simulate deepfake scenarios to test decision-making under uncertainty.
  • Educate staff on how to spot deepfakes and raise awareness across leadership and critical teams.
  • Build partnerships with external experts, such as PR firms, legal advisors, and digital forensics specialists, to enable rapid response.
  • Promote a culture of healthy skepticism where verification is prioritized over speed when reacting to content.

As the technology gets cheaper and easier to use, there will likely be an increase in the use of deepfakes for both good and bad purposes. As new threats emerge, we, as business continuity professionals, have to be aware of them!

 

++++++++++++++++++++++++++++++++++++++++++++++++

 

This article was originally published by BC Training Ltd.

Charlie Maclean-Bristol is the author of the groundbreaking book, Business Continuity Exercises: Quick Exercises to Validate Your Plan.


“Charlie drives home the importance of continuing to identify lessons from real-life incidents and crises, but more importantly, how to learn the lessons and bring them into our plans. Running an exercise, no matter how simple, is always an opportunity to learn.” – Deborah Higgins, Head of Cabinet Office, Emergency Planning College, United Kingdom

