
Deepfakes (a portmanteau of "deep learning" and "fake") are synthetic media in which a person in an existing image or video is replaced with someone else's likeness.

Nowadays, much of what circulates in the news and on social media is fake news, gossip, or rumour (with, in my view, the exception of WikiLeaks), and attempts to detect it can yield false positives or false negatives.

I know there has been a Deepfake Detection Challenge competition on Kaggle with a whopping $1,000,000 in prize money.

I would like to know how deepfakes work and how they might be dangerous.

Pluviophile

1 Answer


In general, deepfakes rely on advanced, context-aware digital signal manipulations (usually of images, video, or audio) that allow for very natural-looking modifications of content that would previously have been costly or near impossible to produce in high quality.

The AI models, often based on generative adversarial networks (GANs), style transfer, pose estimation, and similar technologies, are capable of tasks such as transferring facial features from subject A onto subject B in a still image or video, whilst copying subject B's pose and expression and matching the scene's lighting. Similar technologies exist for voices.
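As a rough illustration of one common approach (a hedged sketch, not the actual DeepFaceLab pipeline, and with untrained weights standing in for a real training run): many face-swap tools train a single shared encoder alongside one decoder per identity. Because the encoder must reconstruct both faces, it learns identity-independent structure (pose, expression, lighting), while each decoder learns to render one person's appearance. The swap is then just decoding subject A's latent code with subject B's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Random linear layer; a real system would learn these weights by backprop."""
    return rng.standard_normal((n_in, n_out)) * 0.1

# One encoder shared between both identities: it is forced to capture
# identity-independent structure (pose, expression, lighting).
W_enc = layer(64, 16)

# One decoder per identity: each learns to paint that person's face
# back onto the shared latent representation.
W_dec_a = layer(16, 64)
W_dec_b = layer(16, 64)

def encode(face):
    return np.tanh(face @ W_enc)

def decode(latent, W_dec):
    return latent @ W_dec

face_a = rng.standard_normal(64)  # a toy "image" of subject A

# During training, A is reconstructed with decoder A and B with decoder B.
recon_a = decode(encode(face_a), W_dec_a)

# The "swap": encode subject A, decode with subject B's decoder.
# The result keeps A's pose/expression (held in the shared latent code)
# but is rendered with B's learned appearance.
swapped = decode(encode(face_a), W_dec_b)
```

The identity swap requires no extra machinery beyond routing the latent code to the other decoder, which is why the same trained model can convert whole video sequences frame by frame.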

A good example of this might be these Star Wars edits, where actors' faces have been replaced. It is not perfect; if you study the frames you can see a little instability in a few shots, but the quality is still pretty good, and it was done with a relatively inexpensive setup. The work was achieved using freely available software, such as DeepFaceLab on GitHub.

The technology is not limited to simple replacements. Other forms of puppet-like control over the output are possible, where an actor can directly control the face of a target in real time using no more than a PC and a webcam.

Essentially, with the aid of deepfakes, it becomes possible to back up slanderous or libelous commentary with convincing media, at a low price point. Or the reverse, to re-word or re-enact an event that would otherwise be negative publicity for someone, in order to make it seem very different yet still naturally captured.

The danger of this technology is it puts tools for misinformation into a lot of people's hands. This leads to potential problems including:

  • Attacks on the integrity of public figures, backed by realistic-looking "evidence". Even when people know that such fakery is possible (and perhaps likely in a given context), damage can still be done, especially by feeding manufactured events to audiences with already-polarised opinions, relying on confirmation bias.

  • Erosion of belief in any presented media as proof of anything. With deepfakes out in the wild, someone confronted with media evidence that goes against their narrative can claim "fake" that much more easily.

Neither of these issues is new in the domains of reporting, political bias, propaganda, etc. However, deepfakes add another powerful tool for people willing to spread misinformation in support of any agenda, alongside selective statistics, quotes taken out of context, lies in text-only media, crude photoshopping, and so on.

A search for papers studying the impact of deepfakes should turn up academic research such as Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News.


Opinion: Video content, plausibly presented as a live capture or report, is especially compelling because, unlike text and still images, it directly engages two key senses that humans use to understand and navigate the world in real time. In short, it is more believable by default at an unconscious and emotional level than a newspaper article or even a photo, and that holds regardless of any academic knowledge you might possess as a viewer about how it is produced.

Neil Slater
  • This answer provides a good overview of the applications, but it would be nice to also have more details about "how it works". For example, how could you train a GAN to replace someone's face with another person's face? – nbro Jun 14 '20 at 19:29
  • @nbro: I think that may be too broad, as there are multiple technologies in use, pipelined, and they vary depending on the specific use case. – Neil Slater Jun 14 '20 at 19:30
  • Well, I am not expecting a very detailed description, but maybe just to give an idea. Anyway, maybe someone else or I later will provide some details about that. – nbro Jun 14 '20 at 19:31
  • @nbro: Some of the links provided do go into more details. I also just added https://github.com/iperov/DeepFaceLab as an example utility. – Neil Slater Jun 14 '20 at 19:44