Is a deepfake of yourself really a deepfake? Maybe we should rebrand it as a "shallow fake"

I don’t know the exact date when deepfakes were invented; I imagine it’s akin to asking who invented sliced bread. There were knives, and then there was bread, but we will probably never know who was the first person to plunge a blade into that hot lump of gluten. It doesn’t matter; the person I’m interested in is whoever decided to grill that sliced bread with butter and cheese. Mmmmm, grilled cheese… Where was I again? Oh, right, deepfakes. Similarly, human faces have been around for a few years, and then this thing by the name of machine learning (ML) escaped containment and forever changed our lives.

Machine learning, specifically neural networks, powers deepfakes. While the term deepfake has become a catchall for any image or video that seems fraudulent, for a deepfake to be authentic (oxymoronic statements make me laugh), the fake must be built using a machine learning toolset.

From here on, I will use “shallow fake” to mean a deepfake of one’s own face. So when was the first shallow fake introduced? I do not know, so instead let’s use the release of the movie Tron: Legacy (2010) as our baseline. Why this movie? Mostly because it provides an entertaining milestone as our first publicly observed shallow fake. Tron: Legacy featured actor Jeff Bridges as both himself and himself 28 years younger. After the film was released, Jeff was asked how he felt about being motion captured, having his face scanned with lasers, and being recreated via CGI. Jeff replied, “Of course, there’s always the chance that one day actors won’t be needed at all, as filmmakers fully create their characters in a computer. I go between worry and gratefulness. Maybe I’ll be ready to do something else by that time,” he laughed about the possibility. “There is going to be a time — it’s probably already here — when they say, ‘Let’s get a combination of [co-star] Garrett [Hedlund] and Bridges and let’s put a little Bela Lugosi in there. What the hell — let’s see what happens!’” Just 11 years later, we can do this… on a cell phone.

Other than using shallow fakes to make Disney movies, how can this technology be applied “productively”? Zoom meetings, obviously! In October 2020, NVIDIA introduced its MAXINE video-streaming platform, which enables users to “shallow fake” in real time. MBAs, note that NVIDIA created a platform instead of a downloadable app.

MAXINE includes features that let end-users remove nose piercings and blemishes and correct lighting, all while reducing video bandwidth to a fraction of today’s standards. Bad haircut? Don’t worry. Forgot to put on makeup? Oh well. Failed to wipe away your eye boogers? If no one can see them, they never happened. Apparently, how people view themselves during a Zoom session has led to a record number of elective surgeries. Check out this article published by the New York Times, titled Don’t Like What You See on Zoom? Get a Face-Lift and Join the Crowd. WOW! “Cosmetic surgeons say business is booming after elective surgery opened up, with quarantine proving ample time to heal in secrecy from renovation of face and body.” 1

If this hasn’t scared you enough, wait until you hear what’s on the horizon. Need a hint? Deepfake audio… like in The Terminator (1984), when the T-800 impersonates Sarah Connor’s mother over the phone, or Terminator 2 (1991), when the T-1000 mimics the voice of John Connor’s foster mother.

Part 2: This is where I attempt to explain how deepfake technology works, using simple concepts. Above was the meat and potatoes; below is the room-temperature sparkling water to wash it all down. Fair warning: I am not an expert on this subject… yet, so please don’t use this information in your dissertation.

Deepfake broad concept

If I asked you, given 3x = 30, what is x? You would answer 10. Great, you’ve just mastered the extreme basics behind deepfake technology. The resulting graph would look like this. Okay, hold onto this thought.
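If you want to see the “find x” idea the way a computer actually does it, here is a toy sketch (the function name and numbers are mine, not from any deepfake library). Instead of solving 3x = 30 algebraically, the program guesses x and nudges the guess until the error shrinks, which is, in miniature, how neural networks train.

```python
# Toy version of "find x": guess, measure the error, nudge the guess.
def solve_for_x(coeff, target, lr=0.01, steps=1000):
    x = 0.0  # start with a blind guess
    for _ in range(steps):
        error = coeff * x - target   # how far off is the current guess?
        x -= lr * coeff * error      # nudge x in the direction that shrinks the error
    return x

x = solve_for_x(3, 30)
print(round(x, 3))  # converges toward 10.0
```

A neural network does the same nudging, just with millions of x’s at once.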

Computers are great with numbers, so the question becomes: how can we represent an image as numbers and pair it with an algorithm that’s insanely good at finding x, if x were a facial feature? Mathematically, you would get something that looks like the neural network below, but let’s use cartoons instead.

Image courtesy of http://www.alanzucconi.com2

Hold it! We still aren’t there yet. We actually have to do this twice: once for the original face and again for the imposter.

Image courtesy of http://www.alanzucconi.com2
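For the curious, the “do it twice” idea can be sketched in a few lines. This is a structural toy, not a trained model: every dimension and matrix here is a made-up placeholder. The common deepfake setup uses one shared encoder that compresses any face into a small code, plus a separate decoder per person; swapping decoders is the face swap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes: a 64-number "face" squeezed down to an 8-number code.
FACE, LATENT = 64, 8

# One shared encoder learns what both faces have in common...
encoder = rng.standard_normal((LATENT, FACE)) * 0.1
# ...while each face gets its own decoder.
decoder_a = rng.standard_normal((FACE, LATENT)) * 0.1
decoder_b = rng.standard_normal((FACE, LATENT)) * 0.1

def encode(face):
    return np.tanh(encoder @ face)   # compress face -> latent code

def decode(code, decoder):
    return decoder @ code            # expand latent code -> face

face_a = rng.standard_normal(FACE)

# Normal round trip: encode face A, decode with A's own decoder.
reconstruction = decode(encode(face_a), decoder_a)

# The deepfake swap: encode face A, decode with B's decoder, so A's
# expression comes out wearing B's face.
swapped = decode(encode(face_a), decoder_b)

print(reconstruction.shape, swapped.shape)  # both (64,)
```

In a real system the encoder and decoders are deep networks trained on thousands of images; here the matrices are random, so only the wiring is meaningful.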

Now we get to the good part. The computer has mapped out how face A and face B look and move. To create the deepfake, we say to the computer: if 3x = 30 and x = 10, then what is x when 3.4x = 30? Think of the ‘3’ as the original face and ‘3.4’ as the imposter’s face. The computer would say, “I don’t have an exact number, but I can get close,” and reply that x equals roughly 8.82353, which is close enough to fool the human eye. The more data points the algorithm is trained with, the more accurate the estimate will be, which is why deepfaking celebrities is easy: there are plenty of images and videos of their faces in different expressions. With enough data points, impersonating your own face is not that hard; the “3.4” becomes more like “3.02”.
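If you want to check the arithmetic above yourself, a one-liner does it:

```python
# The "close enough" estimate from the paragraph above: 3.4x = 30.
x = 30 / 3.4
print(round(x, 5))  # 8.82353
```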

Image courtesy of http://www.alanzucconi.com2

I hope you enjoyed the crash course in deepfakes :)


1 –

2 –


  1. Great post Andrae! That is very fascinating technology being implemented by Nvidia. I find it fascinating that they can make people look directly into the camera even when they are not. It’s nice to see AI being implemented in a productive way. I can see this being a huge help for people who are interviewing for a job remotely.

  2. I’ll be curious to see how deepfakes play out. I remember watching some of the new CGI in movies back in the late 1990s (or the cutting-edge video games) and thinking how realistic they seemed at the time. Today, I’ll show them to my kids and they mock how terrible the special effects are. Since deepfakes are trained to fool other ML, I wonder if people will be able to develop or maintain a bit of a sixth sense to be able to sniff out when a video is a deepfake. While they can fool us now, I wonder if we’ll develop the sophistication to be able to spot them with more experience.

  3. shaneriley88 · ·

    Fascinating post! Your post spurred me to do a little digging of my own. This ended up as a 15-minute internet quest, which led me to a great write-up from MIT Sloan. In a world of algos, ML, and advanced techniques, the article finished off with using human instincts and skepticism. Thanks again for the post.

  4. williammooremba · ·

    Very interesting post. One application of deepfakes that has been interesting to me has been using them on images of people without their direct involvement. An example would be the likeness of Peter Cushing’s Grand Moff Tarkin in Rogue One: A Star Wars Story. I think with Nvidia’s Maxine we are getting closer to having that kind of technology be more and more accessible for everyday use. I wonder how close we are getting to the point where a lot of the general population can have their likeness live on digitally indefinitely.

  5. lisahersh · ·

    Very insightful post and thank you for sharing! I had never even heard of a deepfake before reading this. This may be a stupid question (I still don’t fully understand ML), but is this essentially just taking CGI to the next level? Also, the audio bit got me thinking of how HSBC was going to start using voice recognition for their banking customers in the UK. I guess it’s the common case of every time you build a better mousetrap it leads to a new generation of smarter mice.

  6. changliu0601 · ·

    Fun post!! I realized I could throw away the selfie light I bought for the Zoom class. I wonder how this technology will change the cosmetic industry and how it will react to it.
