Celebrities like Alia Bhatt, Priyanka Chopra, Taylor Swift, Kim Kardashian, and Ariana Grande, among others, have become victims of the deepfake epidemic. While Hollywood and Bollywood stars have long been targeted by deepfakes, often in the form of explicit content, this troubling trend has only recently entered the mainstream. The widespread availability of AI-powered tools has unveiled a darker side of digital manipulation, threatening to distort reality, damage reputations, and unravel the very fabric of society by eroding trust in the authenticity of both individuals and information.
As technology blurs the line between reality and fabrication, deepfakes, or AI-generated synthetic media, have emerged as dangerous tools of manipulation, capable of eroding political stability, public trust, and social cohesion. From election interference to geopolitical crises, these hyper-realistic forgeries exploit human psychology, destabilize governance, and force society into a new era of contested truths. As deepfake technology evolves, so does the challenge of detection, putting pressure on global institutions to find solutions.
Deepfakes have transitioned from niche technological experiments to powerful instruments of propaganda and misinformation. Political operatives, intelligence agencies, and rogue actors now employ them to discredit opponents, sow confusion, and manipulate public opinion. Unlike traditional disinformation, deepfakes are harder to detect and far more convincing, allowing falsehoods to spread before fact-checkers can intervene.
Fabricated videos, audio clips, and entire speeches leave no leader, activist, celebrity, or journalist immune to the risk of character assassination. TikTok and other social media platforms are now flooded with deepfakes, exposing millions of Nepalis to an endless stream of manipulated content. In a country already grappling with misinformation, this unchecked wave of digital deception threatens to distort public perception, erode trust, and destabilize democracy itself.
Political weaponization
During Russia’s invasion of Ukraine, for instance, a deepfake video purportedly showed President Volodymyr Zelenskyy instructing Ukrainian troops to surrender. The video, viewed by over 120,000 people before being debunked, aimed to demoralize soldiers and undermine public confidence in leadership. This case underscored how deepfakes can be weaponized during conflicts to manipulate perception and incite chaos.
In Nigeria’s 2023 elections, deepfake videos featuring opposition candidates making inflammatory statements spread widely on WhatsApp and Facebook. Despite official denials, the videos fueled ethnic and religious tensions among voters, influencing public discourse and contributing to post-election violence. This highlights the vulnerability of developing democracies to AI-driven disinformation.
Similarly, in Gabon, a deepfake video of President Ali Bongo Ondimba, who was recovering from a stroke, caused unrest. The video, portraying him in poor health, sparked protests demanding transparency and deepened the country's political instability. The incident revealed how deepfakes can exploit public skepticism about leadership, especially in volatile political climates.
In Myanmar, a deepfake audio recording impersonating a government official incited violence against the Rohingya community, triggering riots and displacement. Authorities struggled to contain the damage, illustrating how deepfakes can exacerbate existing social fractures, worsening humanitarian crises.
Ahead of Pakistan’s 2024 elections, a deepfake video of prominent opposition leaders surfaced, accusing them of corruption. The video, which was later debunked, circulated widely, contributing to political polarization and undermining public trust in electoral messaging. These cases underscore the growing role of deepfakes in electoral manipulation, highlighting the difficulties voters face in discerning fact from fiction.
The cumulative effect of deepfakes is a crisis of legitimacy. When citizens can no longer distinguish between real and fabricated content, trust in democratic processes erodes. A 2024 survey in the US revealed that 68 percent of voters were concerned that deepfakes could influence elections. Similarly, the European Union's AI Act imposes mandatory transparency obligations on deepfakes, treating them as a threat to democratic institutions and processes.
The “liar’s dividend” phenomenon—where verified evidence, such as bodycam footage of police brutality, is dismissed as synthetic—further exacerbates the problem. Bad actors can deny real scandals by claiming evidence is doctored, undermining accountability and transparency.
Complexity of detection
To make matters worse, the rise of deepfakes has made detecting manipulated content increasingly difficult. While some flaws in synthetic media can still be spotted with careful observation, others are so subtle that they elude the naked eye. Deepfake technology is evolving rapidly, with each new iteration making it harder to distinguish fake content from real. Deep learning models, which are trained on vast datasets of real-world images and videos, allow synthetic media to mimic even the most intricate human features—such as skin texture, micro-expressions, and movement—making these fakes more convincing than ever.
The growing sophistication of these tools means that human scrutiny alone is no longer sufficient, and the demand for reliable detection methods keeps growing. Automated systems powered by AI and computer vision are essential for identifying patterns, anomalies, and signs of tampering. Deep neural networks play a critical role here: trained on both authentic and manipulated content, they can flag subtle inconsistencies such as pixel-level mismatches, implausible lighting, or unnatural skin textures, and they must be continually retrained to keep pace with the latest deepfake techniques.
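As a rough illustration of this machine-learning approach, the sketch below fine-tunes a generic pretrained image classifier to label individual video frames as real or fake. The directory layout (data/real, data/fake), the choice of ResNet-18, and the training settings are illustrative assumptions, not a description of any production detector.

```python
# Minimal sketch: fine-tune a pretrained CNN as a real-vs-fake frame classifier.
# Assumes frames have been extracted into data/real/ and data/fake/ (hypothetical layout).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    # Normalize with ImageNet statistics to match the pretrained weights.
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# ImageFolder maps the subdirectory names ("fake", "real") to class labels.
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the head: real vs fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a token number of epochs, purely for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.4f}")
```

Research detectors are trained on large curated corpora such as FaceForensics++ and must be refreshed as new generation methods appear; the point here is only the shape of the pipeline.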
Metadata and error-level analysis also play crucial roles in detecting synthetic content. Deepfake videos often introduce unique compression patterns—such as pixelation or blurring—around modified areas. Analyzing digital metadata, such as timestamps, camera settings, or file origins, can reveal discrepancies, helping to uncover manipulated content that otherwise appears genuine.
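To make the idea concrete, the following simplified error-level analysis pass uses the Pillow imaging library: the image is recompressed at a known JPEG quality and the per-pixel difference is amplified, so regions that were pasted in or retouched, and therefore carry a different compression history, tend to stand out. The quality setting, brightness factor, and file names are illustrative assumptions; a second helper dumps EXIF metadata for manual inspection.

```python
# Simplified error-level analysis (ELA) plus metadata inspection with Pillow.
import io
from PIL import Image, ImageChops, ImageEnhance
from PIL.ExifTags import TAGS

def error_level_analysis(path, quality=90, scale=15):
    """Recompress the image and amplify the difference from the original.

    Edited regions often recompress differently from untouched ones,
    so they tend to glow in the amplified difference image.
    """
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)
    diff = ImageChops.difference(original, recompressed)
    return ImageEnhance.Brightness(diff).enhance(scale)

def dump_exif(path):
    # Missing or inconsistent fields (timestamps, camera model, software)
    # can hint at manipulated or stripped provenance.
    for tag_id, value in Image.open(path).getexif().items():
        print(TAGS.get(tag_id, tag_id), ":", value)

error_level_analysis("suspect.jpg").save("suspect_ela.png")
dump_exif("suspect.jpg")
```

Neither check is conclusive on its own; sophisticated forgeries can launder both compression traces and metadata, which is why forensic tools combine many such signals.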
Legislation, technology and education
In response to the growing threat of deepfakes, governments and tech companies are stepping up efforts to combat their spread. The EU's AI Act mandates transparency for AI-generated content, while the US is considering bills like the DEEPFAKES Accountability Act, which would require watermarks on synthetic media. However, enforcing these laws remains challenging, especially for smaller nations struggling to regulate open-source AI tools.
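In principle, a watermark can be as simple as a hidden bit pattern woven into pixel data. The toy least-significant-bit scheme below, written with NumPy and Pillow, embeds and recovers a short ASCII tag; it is a conceptual sketch only, trivially removable, and unrelated to how any legally mandated standard would actually work.

```python
# Toy invisible watermark: hide a short ASCII tag in the least significant
# bits of the blue channel. Robust schemes (and standards such as C2PA)
# work very differently; this only illustrates the basic idea.
import numpy as np
from PIL import Image

TAG = "AI-GENERATED"

def embed(src, dst, tag=TAG):
    pixels = np.array(Image.open(src).convert("RGB"))
    bits = np.array([int(b) for byte in tag.encode() for b in f"{byte:08b}"],
                    dtype=np.uint8)
    blue = pixels[..., 2].flatten()
    blue[:bits.size] = (blue[:bits.size] & 0xFE) | bits  # overwrite the LSBs
    pixels[..., 2] = blue.reshape(pixels[..., 2].shape)
    Image.fromarray(pixels).save(dst, format="PNG")  # lossless, keeps the bits

def extract(path, length=len(TAG)):
    blue = np.array(Image.open(path).convert("RGB"))[..., 2].flatten()
    bits = blue[:length * 8] & 1
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8)).decode(errors="replace")

embed("synthetic.png", "synthetic_marked.png")
print(extract("synthetic_marked.png"))  # -> "AI-GENERATED"
```

A scheme like this survives only lossless formats; any real labelling mandate would need watermarks robust to recompression, cropping, and screenshotting.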
Tech companies are racing to develop deepfake detection tools. Microsoft’s Video Authenticator and Adobe’s Content Authenticity Initiative are leading the charge in providing solutions for verifying the authenticity of digital content. Blockchain-based systems are also being explored as a way to embed digital provenance into media files, allowing audiences to trace the origins of content and verify its authenticity.
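The essence of such provenance systems can be sketched in a few lines of standard-library Python: each media file's cryptographic fingerprint is chained to the previous record, so altering a file or rewriting history breaks the chain. This toy ledger is a conceptual illustration, not the design of Microsoft's, Adobe's, or any other real system, and the file name and source string are invented for the example.

```python
# Toy provenance ledger: a hash chain over media fingerprints.
# Tampering with a registered file, or with any past record, breaks the chain.
import hashlib
import json
import time

def sha256_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

ledger = []  # in a real system this would be a distributed, append-only log

def record(path, source):
    entry = {
        "file": path,
        "fingerprint": sha256_file(path),
        "source": source,
        "time": time.time(),
        "prev": ledger[-1]["entry_hash"] if ledger else "0" * 64,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)

def verify(path):
    prev = "0" * 64
    for e in ledger:  # recompute the whole chain
        body = {k: v for k, v in e.items() if k != "entry_hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or digest != e["entry_hash"]:
            return False  # the log itself has been tampered with
        prev = e["entry_hash"]
    return any(e["file"] == path and e["fingerprint"] == sha256_file(path)
               for e in ledger)

record("press_briefing.mp4", source="Ministry of Communication and IT")
print(verify("press_briefing.mp4"))  # True until the file or the log changes
```

Real provenance efforts embed signed manifests in the files themselves rather than relying on an external log, but the verification logic (hash, sign, chain, recheck) follows the same pattern.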
In addition to technological solutions, media literacy programs are crucial in educating the public on how to spot deepfakes. Programs like Stanford’s “Detecting Deepfakes” curriculum teach critical analysis of metadata, lighting, and audio inconsistencies. Grassroots efforts in countries like Brazil and Indonesia are helping communities learn how to identify and report digital misinformation, which is essential for countering the rise of synthetic media.
Deepfakes are not a hypothetical threat—they are actively destabilizing governance, eroding trust in institutions, and distorting reality. From election interference to geopolitical crises, these synthetic forgeries exploit technological vulnerabilities, challenging the very fabric of democracy. The arms race between deepfake creators and detection technologies is ongoing, and without global cooperation—combining robust legislation, advanced detection tools, and widespread public education—democracies risk descending into a “post-truth” era where the authenticity of information is constantly in question.
As we face this new era, the challenge is not just technological but existential: preserving truth in a world where seeing is no longer believing. In Nepali society, deepfakes threaten community trust and cohesion. To combat this, citizens must prioritize media literacy, support education on content analysis, and push for stronger AI regulations and transparency. By advocating for tools like watermarking and blockchain-based provenance, Nepali society can safeguard truth, protect democracy, and resist the corrosive effects of digital manipulation.
Unfortunately, Nepal’s ban on cryptocurrency has stifled innovation, preventing young people from experimenting with blockchain-based solutions. In its attempt to regulate the unknown, the government has crippled its own future, ensuring that Nepal lags behind in the very technological arms race that defines the digital age.
But a tsunami of synthetic content is coming, an unstoppable flood of deception that will rewrite reality itself. And the government? Ill-equipped, outdated, and blind to the scale of the crisis. Chaos is not just looming—it is inevitable.