If you have ever seen a deepfake video, you have undoubtedly sensed something unnerving about it. Deepfakes are ultrarealistic videos that make people appear to say and do things they never said or did.
Perhaps the most unsettling takeaway is how rapidly the technology is improving. Experts warn it’s only a matter of time before someone creates a bogus video convincing enough to fool millions.
But some partners at EY, formerly Ernst & Young, are now using deepfakes as a kind of workplace gimmick, jazzing up client presentations and routine emails with AI-generated video clips of themselves. So far, the results have been good.
No More Golf or Long Lunches
The software behind the multinational accounting giant’s experiment comes from Synthesia, a London-based company. Executives at EY say the experiment came about because the pandemic made traditional ways of transacting business impossible. Golf and long lunches are out of the question, and Zoom calls, annoying as they are, have become all too routine.
Partners at EY have used their AI doubles in emails and presentations. One partner used a built-in feature of Synthesia’s software to show his AI avatar speaking the language of a Japanese client. The clip reportedly went over well.
“We’re using it as a differentiator and reinforcement of who the person is,” says Jared Reeder, who is part of an EY team that provides creative and technical assistance to partners. “As opposed to sending an email and saying ‘Hey we’re still on for Friday,’ you can see me and hear my voice.”
Porn and Politicians
Even so, Reeder observes that some people need a little time to warm up to the technology. After all, deepfakes have earned an unsavory reputation in recent years, as digital tools make it ever easier to fabricate convincing, malicious fakes.
The technique first came to broad public notice in 2017, when fake pornographic clips of Hollywood actors circulated widely online.
In 2019, the Daily Beast published a story exposing the creator of a now-infamous fake video. The clip appears to show US House Speaker Nancy Pelosi drunkenly slurring her words during a press conference.
The video was made by a private citizen named Shawn Brooks, reportedly a Trump political operative. It was not even a deepfake in the technical sense: Brooks simply took genuine footage, slowed it down, and then adjusted the pitch of Pelosi’s voice to mask the change.
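For the technically curious, that sort of manipulation is trivial to reproduce with free tools. The sketch below is a rough illustration rather than a reconstruction of whatever Brooks actually did: it drives ffmpeg from Python, and the filenames, the 75 percent playback speed, the pitch factor, and the 44.1 kHz source audio are all assumptions made for demonstration purposes.

```python
# Rough sketch of the slow-down-and-repitch trick described above.
# Assumes ffmpeg is installed and the source audio is 44.1 kHz;
# filenames and factors are illustrative, not the original recipe.
import subprocess

SPEED = 0.75  # assumed playback speed: 75% of the original
PITCH = 1.15  # assumed pitch correction to mask the slowdown

filter_graph = (
    f"[0:v]setpts=PTS/{SPEED}[v];"      # stretch video timestamps
    f"[0:a]atempo={SPEED},"             # slow the speech, pitch preserved
    f"asetrate={int(44100 * PITCH)},"   # raise pitch (also speeds audio up)
    f"atempo={1 / PITCH:.4f},"          # restore tempo after the pitch shift
    f"aresample=44100[a]"               # resample back to a standard rate
)

subprocess.run(
    [
        "ffmpeg", "-i", "input.mp4",
        "-filter_complex", filter_graph,
        "-map", "[v]", "-map", "[a]",
        "output.mp4",
    ],
    check=True,
)
```

The point is not the recipe but how low the bar is: no machine learning is required to produce a clip that fools a casual viewer.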
Judging by social media comments, many people fell for the bogus clip. The lesson? However absurd the premise, people are happy to believe ridiculous things about politicians they dislike.
Are We Ready?
That brings us to a more dangerous proposition. Most media coverage of deepfake technology focuses on horror scenarios in which it is used against developed countries. But experts warn that deepfakes could cause the greatest havoc in developing nations, many of which have fragile governments and populations with nascent digital literacy.
That is why some experts are wary of the technology. In such places, misinformation might invite not chuckles and derision but real-world violence. It takes little imagination to conjure a scenario in which a terrorist fakes a video to provoke civil unrest.
“In some ways it doesn’t matter if it’s fake,” says Hany Farid, a computer science professor at the University of California, Berkeley, who specializes in digital forensics. “That’s not the underlying issue. It can be used to just undermine credibility and cast doubt.”
Farid says politicians in developing countries around the world have already asked him to analyze fake videos, which often purport to capture them in compromising sexual situations.
“I don’t think we’re ready as a society,” says Farid. “Our legislators aren’t ready. Technology companies aren’t ready.”
“Artificial Reality Identities”
Perhaps because the word “deepfake” has come to connote deception, EY instead calls its virtual doubles Artificial Reality Identities, or ARIs. The EY clips are also presented openly as synthetic, not passed off as genuine footage meant to fool viewers.
Reeder says they have proven to be an effective way to brighten otherwise routine interactions with clients. “It’s like bringing a puppy on camera,” he says.
People now use the technology to customize stock photos, generate models for new clothing, and assist conventional Hollywood productions.
Synthesia has developed a suite of tools for creating synthetic video. The company’s clients include WPP, the advertising company, which has used the technology for internal corporate messaging in different languages. EY has helped some consulting clients make synthetic clips for internal announcements, as well.
“What’s more human than me?”
EY executives say the company plans to keep experimenting with digital clones of its employees. But some observers believe the novelty of synthetic video as a business tool may soon wear off.
Anita Woolley of Carnegie Mellon’s business school says videos made with Synthesia’s technology can look a little uncanny.
The professor and organizational psychologist says her research suggests that rushing to embrace video can be a mistake: there is evidence that video calls can actually make communication harder.
“When you have a technology presenting a human-like appearance, it’s a fine line from comforting to eerie,” Woolley says.
Reeder at EY says he has also encountered some skepticism when pitching Synthesia’s video-cloning technology internally. Some coworkers have expressed concern that the technology could eventually devalue the human element of their jobs.
Reeder argues that the synthetic clips can augment, rather than diminish, the human touch. “What’s more human than me saying ‘Hello, good morning,’ with my voice, my mannerisms, and my face?” he asks.