[For the original post, scroll to the bottom]
After a recent discussion on Reddit, I had to change my views on this topic radically. Initially, I thought conversations about deepfake technology were mostly hype that mainstream media try to exploit to grow their audiences. But after discussing the topic, I must agree that this technology may be a real threat.
This is the Reddit discussion: https://www.reddit.com/r/MachineLearning/comments/eb4fap/d_why_deepfake_is_not_an_issue/
In national politics, where serious media usually vet content carefully, deepfake battles may instead take place at the local or regional level. The risks outlined by fellow commenters on Reddit are:
- Deepfakes may pose a real political risk when only a very small portion of a video is faked (say, adding a word or a small gesture). It's easy to believe a video if most of it is real. For people who are not politically critical and don't double-check sources, this may lead to misinformation.
- When the target of the deepfake attack is not famous enough for their videos to be double- or triple-checked before media release, for example local or regional politicians.
- When the original source video used for the deepfake attack is in the possession of the attacker alone.
- The risk of deepfakes is twofold:
- First, an innocent person may be blamed for something he or she didn't do.
- Second, a guilty person may dismiss all the evidence as fake.
Therefore, I have changed my mind: I now consider that deepfakes pose a real danger.
Recently we have seen a lot of news about deepfakes and how misinformation may impact the United States elections. An example is the NBC News video below, which shows fake avatars of United States presidents giving unrealistic speeches.
The presenter speaks about the role of machine learning and artificial intelligence in building very realistic models of American politicians that mimic the real ones. As we can see from the video, the models shown here are not really sophisticated. But let's imagine that over time the technique improves vastly and the models become so accurate that you cannot distinguish the fake ones from the real ones. This would be similar to what we already have with still images, which can be edited with tools like Photoshop to the point that only professionals can spot the difference. Yet even with Photoshop around, most people are capable of identifying trustworthy news. How? Very simply: because everyone knows such technology exists, nobody trusts a picture alone. People look not only at the information itself but also at its source.
Mainstream media, including newspapers, TV channels, news sites, and so on, don't usually publish unreliable information. Professional reporters usually have trustworthy sources that have been vetted over time. If, for example, CNN receives video files containing sensitive declarations made by the President, the first thing they will check is whether the source of the piece is trustworthy. In fact, a media outlet's reputation is probably its main asset.
But what we should care about, and what should alarm us, is a different issue. Mainstream media, as well as bloggers of all kinds, try to use artificial intelligence, machine learning, and tech advances to generate negative hype. There is a consistent effort to create fear among citizens about new technologies. It is true that manipulation of social media can affect elections, and without a doubt this discussion should be at the top of political reporters' agendas. But it is very important not to overestimate the threat. There is no risk to the US elections from machine learning or artificial intelligence, at least not from deepfake imagery or audio. We should focus on real threats. Blaming all kinds of new technologies just because they look fancy is not a good idea. It makes people fearful. And fearful people don't make free decisions in elections.