The “fake news” label is often applied to a news story that presents information unfavorable to the subject of the story. Sometimes the news is true; sometimes it’s fake. However, the overuse of the term obscures the real danger posed by increasingly sophisticated deepfake videos.
Deepfakes leverage deep-learning technology, a branch of artificial intelligence (AI), to learn what a person’s face looks like from different angles so that the likeness can be superimposed onto another person. Deepfake AI algorithms can take in large amounts of data (photos of a person’s face, voice samples, etc.) and, from it, create audio and video of a person doing or saying things they never did. The technology can even match physical mannerisms to the audio and video, replicating the original speaker’s lip and mouth movements.
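To make the mechanics concrete, the sketch below shows the shared-encoder, dual-decoder autoencoder design popularized by early face-swap tools: one encoder learns a representation of pose and expression common to both faces, and each decoder learns to render one specific person. All layer sizes and names are illustrative assumptions, not the implementation of any particular deepfake package.

```python
# A minimal sketch of the shared-encoder, dual-decoder autoencoder behind
# classic face-swap deepfakes. Layer sizes and names are illustrative
# assumptions, not any specific tool's implementation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific person's face from the shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()  # trained only on faces of person A
decoder_b = Decoder()  # trained only on faces of person B

# Training (sketched): reconstruct A through decoder_a and B through
# decoder_b, so the single encoder must capture pose and expression.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person A
loss = nn.MSELoss()(decoder_a(encoder(faces_a)), faces_a)
loss.backward()

# The swap: encode frames of person A, decode with B's decoder, rendering
# person B's face with person A's pose, expression, and mouth movements.
swapped_frames = decoder_b(encoder(faces_a))
```

The key design choice is the shared encoder: because it must represent both identities, its latent code captures pose and expression rather than identity, which is exactly what lets the decoders exchange faces.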
Fake news and deepfakes aren’t restricted to any one political party, country, or affiliation, either. The most widely read and distributed fake news story of 2016 was a post claiming the Pope had endorsed Trump; later, the identical story was published claiming the Pope had endorsed Clinton.
Collectively, we may be suspicious of random postings on Facebook or Twitter, but a video of an event is more likely to be trusted. If we see and hear it, then we have “proof.” For this reason, fake videos are more dangerous than fake news. Lately, there has been an escalation in selectively edited videos, but those can often be “debunked” by sharing the unedited footage. When deepfake technology is used to create an entirely new video, larger problems arise. Unlike editing that cuts a key part out of an answer, a deepfake depicts an event that never happened, so there may be no original video to counter it with. And if the fake is convincing, it is harder still to prove it isn’t real.
Additionally, in our current information consumption model, videos circulate more widely, are shared more easily, and are more memorable. We have become cynical about the biases a media outlet may apply to a report, but we still believe that unedited video is the one authority we can trust. Until now.
Business Problems
In the political world, many people will accept a deepfake video without question because it supports their point of view. But a different dynamic of trust and suspicion exists in the business world. Even as hacking and phishing continue to get more sophisticated (and continue to work), using AI in combination with social engineering opens an entirely new Pandora’s box.
One of the first cases of financial fraud using deepfake technology involved AI-synthesized audio that recreated the voice of the company’s chief executive. Using the faked audio, the attacker convinced an employee to transfer 220,000 euros to the attacker’s bank account.
The number of possible exploits is limited only by the imagination of the nefarious actor. A sample of the more obvious includes:
- False claims of malfeasance, damaging a product or company’s reputation
- Endorsements that are not real (and you thought fake written reviews were harmful)
- Video-backed HR complaints about a co-worker or a boss
- Insurance fraud, supported by “video proof”
- False news about the company’s owners, founders, leaders, etc.
- Onboarding processes subverted and fraudulent accounts created
- Identity theft, using video to convince someone to alter critical personal data
- Diversion of shipments
- Orders for unwanted materials
- Payments and/or funds transfer fraudulently authorized
- Blackmail based on the threat to release a damaging video
Some of these events don’t have to be believed for very long to have a significant impact. In a deepfake extortion scenario, the authenticity of the defamatory video is irrelevant; the potential damage to individual and corporate reputations is just as significant. The attack could even be monetized by selling a stock short just before the “bad news” comes to light. It takes only a believable rumor to have real impacts on businesses and individuals, and recovering from a false story takes even longer if the casual observer cannot tell it is a hoax.
What’s Next?
Detecting fake videos has been a challenge for quite some time. Rising Sun, both a book by Michael Crichton (1992) and a movie (1993), foreshadowed how video could be faked. In that fictional story, a video was altered to frame someone for murder. One scene in particular showed how easy it had become to replace the face of one person with that of another, and that was over 25 years ago!
Detecting fake videos now will not be as simple as spotting a misplaced shadow, a shifted camera view, or a background lighting inconsistency. The creators of deepfakes use the same AI-backed machine-learning techniques to find those detectable flaws and then improve the quality of the fake.
Most cybersecurity experts promote AI technologies to detect irregularities in the operation of computer systems and networks. Similarly, AI is currently one of the best tools for fighting deepfakes, by trying to detect the anomalies they leave behind. But as with many challenges in the world of security, it feels like we will usually be a step behind the black hats.
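In its simplest form, that anomaly-detection defense is a classifier trained on labeled real and fake face crops. The sketch below is a minimal, hypothetical version under assumed inputs (64x64 crops, toy layer sizes); production detectors are far more elaborate, but the principle is the same.

```python
# A minimal, hypothetical sketch of frame-level deepfake detection: a small
# CNN trained to classify face crops as real or synthetic. Layer sizes and
# data here are illustrative stand-ins, not a production detector.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),   # 64x64 -> 32x32
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32x32 -> 16x16
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                    # global average pool
    nn.Flatten(),
    nn.Linear(32, 1),                           # one logit: higher = more likely fake
)

# Stand-ins for a labeled training set: face crops with 1 = fake, 0 = real.
frames = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

# Standard binary-classification training step (optimizer loop omitted).
loss = nn.BCEWithLogitsLoss()(detector(frames), labels)
loss.backward()

# At inference, score every face crop in a video and aggregate; blending
# seams, odd blinking, and lighting glitches push the score toward "fake".
video_score = torch.sigmoid(detector(frames)).mean()
```

The catch, as noted above, is that the very artifacts such a detector learns to key on are the ones the next generation of deepfake tools learns to remove.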
Should you be personally worried? Think of something that you would never say, and then imagine your friends, family, or employer being shown a convincing video of you saying it. The potential for malicious misuse is a problem for every company and for each of us individually.
"SCTC Perspective" is written by members of the Society of Communications Technology Consultants, an international organization of independent information and communications technology professionals serving clients in all business sectors and government worldwide.
Knowing the challenges many enterprises are facing during COVID-19, the SCTC is offering to qualified members of the Enterprise Connect user community a limited, pro bono consulting engagement, approximately 2-4 hours, including a small discovery, analysis, and a deliverable. This engagement will be strictly voluntary, with no requirement for the user/client to continue beyond this initial engagement. For more information or to apply, please visit us here.