A new and rapidly evolving video technology could intensify the problem of fake news in India. Those who have peddled video disinformation in the country till now have generally passed off old footage as depicting recent events. For instance, a 2017 air show clip was circulated as evidence of India's airstrike on Pakistan last month.

Parts of speeches, too, have been mischievously edited to make certain politicians' views appear unpalatable. Essentially, though, it has all been a low-tech affair. Deepfakes are different: they are synthetic audio and video content created by algorithms enabled by artificial intelligence.

First reported on in 2017, deepfakes started off mostly as a tool for creating face-swapped pornography. They are now setting off alarm bells globally. BuzzFeed demonstrated their scary potential last year by creating a video of Barack Obama appearing to say words he had never uttered.

While some feel the fears of such disinformation are overblown, especially since no election anywhere is yet confirmed to have been influenced by deepfakes, the likelihood only grows as the technology evolves. As India hurtles towards its next general election, beginning in April, the threat could prove critical.

#KhabarLive spoke with Henry Ajder, head of communications and research analysis at Deeptrace, a deepfake-detection firm, about the current state of the technology and how likely it is to play a role in the Indian election.

Edited excerpts:

    • In what ways is India susceptible to deepfakes, and what should the country be concerned about?

If you look at India, Brazil, Myanmar, and other developing countries that have seen misinformation lead to serious real-world consequences, there's a tendency for people to watch a video or hear voice audio and assume there is no way it could be fake. With video in particular, it is almost as if the medium is sacred. Deepfakes fundamentally undermine this idea. A lack of awareness of how deepfakes could infiltrate the digital media people consume could lead them to believe falsehoods, not because they are being careless, but because the real and the synthetic will have become indistinguishable.


Even when people are aware this is possible, the deep-rooted and reflexive impact of a "seeing is believing" mentality may mean deepfakes change how they think without their being consciously aware of it. The proliferation of deepfakes will also inevitably result in what the academics Robert Chesney and Danielle Citron call the "liar's dividend," where deepfakes undermine the credibility of all media.

So if Donald Trump or Narendra Modi or Theresa May says or does something incriminating, there is now a plausible point of deniability: it can be passed off with "that's a deepfake." If deepfake-detecting technologies are not present in this context, the constant possibility that any video or audio recording could be fake may lead to a complete breakdown in trust in digital media.

    • How likely is it that deepfakes could be used as political disinformation in India’s upcoming election?

At the moment, I think creating a very convincing deepfake that would be able to defame someone with speech and video, especially by April, is quite unlikely. I don't think the technology is good enough, at least not at a widely accessible level. So I don't think there are going to be deepfakes of a realistic enough quality to penetrate official organisations or media outlets. But you've also got individuals with poor media literacy, such as those in rural communities, and, in general, a populace that hasn't got the same level of access to technology as in some more developed countries.

Think about the way the WhatsApp lynchings have occurred on the basis of misappropriated videos or hearsay, and how this plays into a bigger issue, which I think is actually the more worrying one: how the current post-truth climate has shone a light on the way pre-existing cognitive biases prime people to act in certain ways, irrespective of evidence.

This is not just in developing countries, but globally. Many people have no desire to critically reflect on what they're viewing. And from there, the quality doesn't matter so much, as long as an individual can plausibly say: well, I saw this, and I wanted to act this way because of it. A good example is the Indian journalist Rana Ayyub, whose face was crudely edited into existing pornographic footage.


A critical eye could likely pick up on this being fake, but those who already dislike her or want it to be true probably won't approach such media that way. Similarly, let's say a deepfake is created of a purportedly Muslim individual claiming they are going to slaughter cows for meat production. That could be a really poorly executed video.

The audio doesn’t have to be great, but let’s say you encounter it on an anti-Muslim WhatsApp group, a phenomenon well documented in India where misinformation routinely preys on religious biases. Similarly, if it’s on Facebook, the very format of Facebook’s newsfeed leads to short term, glancing impressions that leave a long-term impact. I still don’t think deepfakes will play a massive role (in the upcoming election), but I think that if they are going to, this is how they could cause serious damage.

    • How rapidly are deepfakes evolving, and how accessible to people across the world is the technology for creating them?

The technology behind deepfakes has improved exponentially in the past five years and has seen similarly rapid commodification. Back in 2014, this technology was only accessible in places like computer laboratories at top universities and still produced fairly low-quality outputs. But now, to create a deepfake similar to the "derpfakes" of Nicolas Cage you may have seen on YouTube, all you need is a mid- to high-grade gaming computer with a decent graphics processing unit (GPU) and access to open-source software you can easily download online.

In terms of actually operating the software, the level of machine-learning knowledge you need to go from downloading the program to creating a deepfake is also worryingly low; most programs come with detailed step-by-step guides, or plug-ins that automate processes such as converting video into still images for training data. The key point here, though, is the distinction between a deepfake that is not convincing and one that is. "Derpfakes" are fun and sometimes a bit uncanny, but the technology to create an entirely synthetic and highly convincing audio-visual deepfake is, I would say, still fairly restricted, and some would probably argue it doesn't currently exist.
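To give a sense of how low the technical bar Ajder describes actually is, below is a minimal sketch in Python of one of the automated preprocessing steps he mentions: converting a video into still images for training data. It uses the OpenCV library rather than any specific deepfake program, and the file names and sampling interval are hypothetical examples.

```python
# Minimal sketch of a common deepfake preprocessing step: turning a source
# video into still frames that a face-swap model could train on.
# OpenCV stands in for the plug-ins mentioned above; file names below are
# hypothetical examples.
import os

import cv2  # pip install opencv-python


def extract_frames(video_path: str, out_dir: str, every_nth: int = 5) -> int:
    """Save every Nth frame of the video as a JPEG; return how many were saved."""
    os.makedirs(out_dir, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    index = saved = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video, or an unreadable frame
            break
        if index % every_nth == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:06d}.jpg"), frame)
            saved += 1
        index += 1
    capture.release()
    return saved


if __name__ == "__main__":
    count = extract_frames("source_clip.mp4", "training_frames")
    print(f"Saved {count} frames to training_frames/")
```

A few dozen lines like these, bundled behind a step-by-step guide, are all that separates a downloaded program from a ready training set, which is precisely the accessibility problem Ajder is pointing to.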

    • Is deepfake technology language-agnostic? Or would it be more difficult to make deepfakes in some languages than in others?

If you’re training your algorithm solely on English-speaking individuals with an American accent, it’s likely it would only be able to accurately recreate synthetic voice audio that speaks English with an American accent. Similarly, if you’re training your algorithm on Hindi-speaking individuals with an Indian accent, it would only be able to accurately recreate synthetic voice audio that speaks Hindi with an Indian accent.

So in this respect, yes, generative technologies are theoretically language-agnostic: they function to synthetically recreate whatever kind of data they are trained on. Better algorithms may be able to create synthetic audio more effectively from less data, but ultimately the quality and language of the deepfake output will depend on the data you feed it.
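Ajder's point, that a generative model can only recreate the kind of data it was trained on, can be illustrated without any deepfake tooling at all. Below is a toy sketch in Python of a character-level Markov chain, about the simplest generative model there is; the training string is a made-up example. Train it on English text and it emits English-like strings; train it on Devanagari text and it emits Hindi-like strings. It cannot produce what it has never seen, and the same constraint, scaled up enormously, is what binds voice-cloning models to the language and accent of their training data.

```python
# Toy illustration (not a deepfake model): a character-level Markov chain.
# A generative model, however simple, can only emit sequences resembling
# its training corpus -- the constraint that makes voice synthesis
# language- and accent-bound.
import random
from collections import defaultdict


def train(corpus: str, order: int = 2) -> dict:
    """Map each `order`-character context to the characters observed after it."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]].append(corpus[i + order])
    return model


def generate(model: dict, order: int = 2, length: int = 80) -> str:
    """Emit text by repeatedly sampling a follower for the current context."""
    out = random.choice(list(model.keys()))
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:  # context never seen in training: the model is stuck
            break
        out += random.choice(followers)
    return out


# A made-up English training string; swap in Hindi text and the output
# becomes Hindi-like. The model has no notion of language, only of data.
english_model = train("the quick brown fox jumps over the lazy dog " * 20)
print(generate(english_model))
```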

    • How should India’s government deal with the potential problem of deepfakes?

I think a sophisticated approach would start by understanding that most technologies can be used maliciously, and that AI-generated synthetic media is no exception. Social media platforms like WhatsApp and Facebook, along with journalists and governments, are always going to be fighting misinformation and malicious content.

Whilst threatening social media platforms or "facilitators" with sanctions and punishments may stimulate action in the short term, that action is likely to be rushed and orientated towards short-term "quick fixes." A more sustainable approach requires engaging with these organisations and actually sparking a movement within India that addresses the root of the problem: namely, the lack of a media literacy that provides awareness of deepfakes and a healthy scepticism towards digital media. #KhabarLive