“Deepfakes may very well represent a final frontier for the digital age.” – Rimon Law Practicing Attorney, Jon Mechanic
We have come a long way in fooling people since the “Is it live or is it Memorex?” days of the 1970s, when viewers were asked to guess whether a recording of Ella Fitzgerald shattering a glass with a high note was live or played back from a Memorex cassette tape.
Perhaps the most unforgettable use of Photoshop – another fakery technology, invented in 1987 – was the 1989 TV Guide issue with Oprah Winfrey on the cover. Except it was Oprah’s head superimposed on the body of 1960s heartthrob Ann-Margret – without either woman’s prior knowledge or permission.
‘Deepfake’ videos, fast becoming the rage, leverage artificial intelligence, machine learning, and other technologies to, for example, superimpose the face of a real person onto the face of another to make it appear that the real person is saying or doing something they did not in fact do. The technology is so advanced that it can be quite difficult to tell whether it is “live … or Memorex.”
However, deepfakes can also be put to nefarious uses, such as exploiting a celebrity’s likeness to convince people to contribute to campaigns or make rash purchases; indeed, the unscrupulous are now using the technology to distort business and politics.
Canadian attorney at global firm Rimon Law, Jon Mechanic, pointed out to me that the entertainment industry, which invests gigantic sums into big-screen epics, Broadway productions, and television series, might start toying with using deepfakes to complete projects when a lead actor or performer dies or becomes incapacitated.
Deepfakes were a major plot line in season 2 of the cult animated Netflix series BoJack Horseman.
BoJack bailed on filming “Secretariat”, only to learn the studio scanned his image to complete the film without him. Imagine the fallout if a studio did this to a human actor.
Others may recall the 1997 film “Face/Off”, in which Nicolas Cage and John Travolta both get face transplants to assume each other’s identity.
Mechanic points out that the potential for using deepfakes raises numerous legal issues in entertainment – for actors, producers, and especially the estates of dead actors who could “star” in productions using deepfakes. Another concern, though, is that live actors could lose major roles to the deepfake images of dead stars.
That fear could, in part, stem from Andrew Niccol’s 2002 film “S1m0ne,” in which an AI super beauty replaces a “real-life actress”, in the film within the film.
S1m0ne becomes an overnight star, wowing audiences and critics alike. Yet no one who knows the truth questions the morality of the deception.
While deepfakes may be a double-edged sword for entertainment personalities, their use in the political arena has deeper implications.
Citing Nazi “journalist” and war criminal Julius Streicher (publisher of Der Stürmer and The Poisonous Mushroom), who fanned the flames of hate with fake, antisemitic imagery, Mechanic pondered the damage such a monster could wreak with modern-day deepfake tools at hand.
Emma Woollacott calls deepfake “the new cybersecurity frontier”, as she points to its growing use for both entertainment and misinformation.
On the funny side, five tech employees recently superimposed their boss’s face onto their own bodies during a video call; the clip later went viral.
On the not-so-funny side, Woollacott imagines a video of a fake Elon Musk giving insider trading tips or a deep-faked politician announcing a new policy.
In 2019, fraudsters reportedly used voice-generating AI software to scam a UK CEO out of $243,000. Deepfake fraudsters – including members of organized crime – indeed routinely create fake accounts to scam unwitting victims.
An iProov survey found that three-quarters of cybersecurity experts in the financial sector express concerns about deepfake fraud; nearly two-thirds expect the threat to worsen.
Two Belgian examples demonstrate the opportunities and dilemmas deepfake poses.
Belgian deepfake artist Chris Ume superimposed the faces of “America’s Got Talent” judges Simon Cowell, Howie Mandel, and Terry Crews onto the bodies of three opera singers – and the act advanced on the program. Everyone had a blast with that deepfake.
Two years earlier, though, Belgium’s Extinction Rebellion radicals posted a deepfake video purporting to show Belgian premier Sophie Wilmès linking COVID-19 with the climate crisis.
The video, in which machine learning and artificial intelligence reproduced her voice and likeness, had her saying that “…pandemics are one of the consequences of a deeper ecological crisis.”
Frontier Enterprise editor Mike Leaño learned from VMware cybersecurity strategist Rick McElroy that deepfake fraudsters are also using the technology to gain access to and compromise organizations.
By infiltrating business emails, they make unauthorized fund transfers, create bogus contracts, steal intellectual property, add fake virtual employees, and conduct corporate espionage.
To combat these crimes, companies employ multi-level authentication procedures for transfers or data releases. To create an additional line of defense, they can provide employees with additional training to detect deepfakes.
Another tech executive told Leaño that organizations now layer traditional data sources with alternative identity data, such as mobile, social media, and utility-payment records.
In 2023, it’s an ongoing battle for deepfake detectors to keep up with the ever-improving deepfake technology.
Some states have passed laws regulating the use of deepfake technology, especially as it relates to election interference.
But Federalist Society writer Matthew Feeney says that laws in California and Texas, as examples, fail to demonstrate a compelling governmental interest in limiting political speech.
California ACLU legislative director Kevin Baker claims his state’s law “…will only result in voter confusion, malicious litigation, and repression of free speech.” And Texas’ law fails to include exceptions for parody and satire.
At the federal level, former Senator Ben Sasse introduced deepfake legislation to address what he perceived to be a range of harms; Rep. Yvette Clarke sought to require labeling of deepfake content; and many others have sought to amend Section 230 of the Communications Decency Act to require interactive computer services to take “reasonable steps” to prevent illegal activities.
Feeney, however, suggests that the private sector may hold the strongest cards for fending off deepfake fraud and misinformation.
Mechanic notes that his own province of Quebec relies on a civil code inspired by France’s Napoleonic Code, and also has a charter of human rights and freedoms, which collectively protect individuals’ privacy rights against the misuse of their names, images, and voices.
On the other hand, banks, insurers, and “completion guarantors” can craft contracts that specify the steps to be taken if, say, a lead actor dies or becomes incapacitated, in order to protect significant loans or investments that rely on that actor’s image and reputation.
But, Mechanic agrees, we as a society are still in the early stages of creating a proper legal framework that allows both for the positive uses of deepfake technology and protects against its negatives.
He urges content creators to take care to comply with the law when using deepfake in films, TV shows, and new media content.
Talent, too, should investigate the opportunities for new revenue streams using the technology. On the flip side, Mechanic urges individuals and businesses to keep their eyes and ears open to detect deepfakes and to understand their rights against their malicious use.
About the author
Duggan Flanakin is the Director of Policy Research at the Committee for a Constructive Tomorrow (CFACT). Flanakin is also an accomplished writer with bylines in such publications as the Chicago Tribune, International Business Times, Real Clear Energy, and International Policy Digest, among many others.