29 Apr, 2024 13:06

Deepfake democracy: How AI is disrupting the biggest election on Earth

Manipulated videos of political endorsements are too sophisticated to detect or trace. Experts say the intent may be reverse psychology – to undermine the party a video endorses once voters discover it is fake

The ongoing Indian parliamentary elections, like recent votes in the US, Pakistan, and Indonesia, are witnessing the creative use of artificial intelligence. Some AI-generated content, such as phone calls to voters made in candidates’ AI-cloned voices, is relatively harmless. Other material, like deepfake videos of film stars ostensibly endorsing one party over another, or of the country’s home minister advocating the abolition of affirmative action, is more sinister.

And India’s Keystone Kops are one step behind cyberspace’s ‘bad actors’ – unable to tell deepfakes from real videos, unable to trace their origins, and left with the sole recourse of blocking them, by which time it is too late.

The term ‘deepfake’ combines ‘deep learning’ and ‘fake’, and refers to digitally manipulated media in which one person’s likeness is substituted for another’s.

Nearly two weeks ago, videos surfaced of Bollywood stars Aamir Khan and Ranveer Singh criticizing Prime Minister Narendra Modi and asking people to vote for the opposition Congress Party. The actors wasted no time in complaining to the Mumbai police cyber-crime wing – but by then, the videos had been viewed more than half a million times.

Half a million may not seem like much, given that Google-owned YouTube has 462 million users in India and the electorate numbers 968 million. The country has some 900 million internet users, according to a survey by the Esya Centre and IIM Ahmedabad; on average, each spends 194 minutes a day on social media.

However, as cyber-psychologist and TEDx speaker Nirali Bhatia told RT, thanks to social media, everyone will form their own opinions and judgments on the matter – which can be far more damaging. 

Power of reverse psychology

Psephologist Dayanand Nene explains that deepfake videos can influence public opinion and discredit people and politicians. “Deepfakes exploit several psychological vulnerabilities,” he told RT.

Humans have a natural tendency to trust what they see and hear, and deepfakes leverage this by creating realistic content. “Moreover, cognitive biases such as confirmation bias – where we are more likely to believe information that confirms our pre-existing beliefs – make us susceptible to deepfakes that align with our viewpoints,” Nene said.

For example, the Delhi police on Sunday registered a case over a doctored video in which Home Minister Amit Shah appeared to advocate abolishing education and job quotas for India’s underprivileged – Dalits and ‘backward’ castes. It was a deepfake, manipulated from a speech he gave in the southern Indian state of Telangana, and it went viral.

The high number of views was due to curiosity, Bhatia felt, since celebrities are still far more popular than influencers. Moreover, the deepfakes will work as reverse psychology – negatively impacting the party being endorsed once it becomes clear the videos are fake.

“In my opinion, that was the agenda of these videos,” Bhatia, who studies the impact of technology on human behavior, said. “The intelligent voters may understand the true agenda but a majority act on sentiment rather than on rational decision.” 

Deepfakes shift attention from the ‘content’ of a video to the ‘intent’ behind creating it. “And that’s what will be the influencing factor,” said Bhatia, the founder of CyberBAAP (Cyber Bullying Awareness, Action and Prevention).

The investigation has proven to be challenging. Police speaking on condition of anonymity say it is difficult to differentiate fakes from reality. Deepfakes are low-cost, easy to make, and difficult to track, giving bad actors ‘a free run’.

According to government officials assisting the high-profile investigation, the creators of the deepfake videos are believed to have used a diffusion model – a type of generative AI that lets bad actors exploit videos available on social media sites. “The tool enables them to recreate just about anything,” an official told RT.

Training AI

A diffusion model generates new images by learning to de-noise and reconstruct content. So how does it work?

Computer scientist Srijan Kumar, awarded for work on social media safety and integrity, explained that depending on the prompt, a diffusion model can create wildly imaginative pictures and shots based on the statistical properties of its training data – all in a matter of seconds.

“In the past, individual faces had to be superimposed, which was a time-consuming process,” Kumar said. “Now, with the diffusion model, multiple images can be generated with a single command. It does not require much technical knowledge.”
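
In broad strokes, a diffusion model is trained to predict the noise that has been added to an image; generation then runs that process in reverse, starting from pure noise and denoising step by step. The following is a minimal sketch of such a sampling loop in PyTorch – the network `model`, its signature, and the constants are illustrative assumptions, not code from any actual tool or investigation.

```python
import torch

# Toy illustration of diffusion sampling (DDPM-style), not a production model.
# `model` is assumed to be a trained network that predicts the noise present
# in a noisy image x_t at timestep t; all names here are illustrative.

T = 1000                                         # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)            # noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)        # cumulative products

@torch.no_grad()
def sample(model, shape=(1, 3, 64, 64)):
    """Start from pure Gaussian noise and denoise step by step."""
    x = torch.randn(shape)                       # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = model(x, torch.tensor([t]))        # predicted noise at step t
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise  # add back scheduled noise
    return x                                     # x_0: the generated image
```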

In the Aamir Khan and Ranveer Singh videos, the two actors purportedly say Modi failed to keep campaign promises and failed to address critical economic issues during his two terms.  

The moment Singh’s video surfaced on April 17, Congress spokesperson Sujata Paul shared it with her 16,000 followers on X. Thereafter, her post was shared 2,900 times and liked 8,700 times. The video received 438,000 views.

This is not the first time Bollywood has been hit by deepfakes. Earlier, actors Rashmika Mandanna and Kajol figured in deepfakes circulating online. In Mandanna’s video, her face was superimposed on that of British-Indian social media personality Zara Patel, who was entering an elevator in a revealing onesie.

These AI-generated visuals can evade any tracking systems on the internet. “This is precisely why it’s nearly impossible to determine whether an image is real or fake,” said Kumar, the co-founder and CEO of Lighthouz AI.

With generative AI models, the cost of producing high-quality content, both good and bad, has come down drastically. “AI tools enable manipulation on an unimaginable scale,” said the cyberspace expert, who has created data-science methods to tackle fake reviewers on e-commerce platforms. “It plays a key role in creating realistic-looking but entirely fabricated and misinformative content.”

Another model – or rather an extension of the existing approach – is Stable Diffusion, a deep-learning text-to-image model used to create deepfakes.

“It is completely open-source and the most adaptable picture generator,” says a government official involved in the investigation, who requested anonymity given the sensitive nature of his organization’s work. “It’s believed that Stable Diffusion is being heavily relied upon in the West to create deepfakes.”
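
To illustrate how low the technical barrier is, this is roughly what generating an image from openly released Stable Diffusion weights looks like with Hugging Face’s diffusers library; the model ID and prompt are examples, not anything tied to the videos under investigation.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load publicly released Stable Diffusion weights (this model ID is one
# common example; a CUDA-capable GPU is assumed). A single text prompt
# is enough to produce an image.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a photorealistic portrait of a speaker at a rally").images[0]
image.save("generated.png")
```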

So far, investigators are unsure whether the deepfake videos of the Bollywood actors were generated abroad or made in India. While police have blocked the videos, India’s Ministry of Electronics and IT has urged social media platforms such as X (formerly Twitter) and Meta to curb the proliferation of AI-generated deepfakes.

Racing with technology 

Are there detection and mitigation solutions for deepfake videos? 

“Quite a few, but they are preliminary and do not account for the many types of generated images and videos,” Kumar said.

Among the most popular of the growing number of deepfake detection and mitigation tools is Intel’s Real-Time Deepfake Detector (FakeCatcher), which focuses on speed and efficiency. Utilizing Intel hardware and software, FakeCatcher analyzes subtle physiological signals, such as blood-flow variations in pixels.
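
Intel has not released FakeCatcher’s internals, but the underlying idea – remote photoplethysmography – can be sketched: real faces show a faint periodic color change driven by the pulse, which generators often fail to reproduce. A simplified, hypothetical illustration:

```python
import numpy as np

# Hypothetical sketch of a photoplethysmography-style check (the idea behind
# tools like FakeCatcher, not Intel's actual implementation). Real faces show
# a faint periodic color change driven by blood flow; synthetic faces often
# lack a plausible pulse signal.

def has_plausible_pulse(face_frames, fps=30.0):
    """face_frames: array of shape (n_frames, h, w, 3) cropped to the face."""
    # Average the green channel per frame (green carries the strongest
    # blood-volume signal), then remove the mean.
    signal = face_frames[:, :, :, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()

    # Find the dominant frequency of the signal via FFT.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    dominant = freqs[np.argmax(power[1:]) + 1]  # skip the DC bin

    # A human pulse sits roughly between 0.7 and 3 Hz (42-180 bpm).
    return 0.7 <= dominant <= 3.0
```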

Another is Microsoft’s Video Authenticator Tool, which analyzes videos and images and provides for each a confidence score indicating the likelihood of manipulation. It claims to identify inconsistencies in blending boundaries and subtle grayscale elements that are invisible to the human eye.
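
Microsoft has not published the tool’s code either, but the confidence-score idea is straightforward to sketch: a trained classifier scores each frame, and the per-frame probabilities are aggregated into a video-level score. The `detector` below is a hypothetical stand-in, not Microsoft’s model.

```python
import torch

# Hypothetical sketch of frame-level manipulation scoring, illustrating the
# confidence-score idea (not Microsoft's actual Video Authenticator).
# `detector` is assumed to be a trained binary classifier over face crops
# that returns one logit per frame.

@torch.no_grad()
def manipulation_confidence(detector, frames):
    """frames: tensor of shape (n, 3, h, w); returns a score in [0, 1]."""
    logits = detector(frames)                  # one logit per frame
    probs = torch.sigmoid(logits).squeeze(-1)  # per-frame fake probability
    return probs.mean().item()                 # video-level confidence score
```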

One of the major challenges investigators face is that as new detectors are developed, the generative technology evolves to incorporate methods of evading them.

“It’s getting harder and harder to identify deepfakes with the advancement of technology, the ease of access, and the lower cost,” Kumar says. “Unfortunately, there is no immediate resolution in sight.”

He says that, as with anti-virus and anti-malware software, end users will have to run equivalent tools on their devices to protect against deepfakes.
