AI's influence on democracy

The invention of the printing press by German inventor Johannes Gutenberg in the mid-15th century revolutionized the mass distribution of knowledge and information, significantly transforming the practice of politics. More than five centuries later, we are witnessing a similar technological upheaval with the explosive rise of generative artificial intelligence (AI), which is reshaping every sphere of society, including politics and democracy, on an even greater and more profound scale. While the Gutenberg press enabled the spread of knowledge, generative AI can not only disseminate information but also produce vast volumes of text, video, audio and images with minimal human input.

However, unlike the communication technologies of the past 500 years, AI is a double-edged sword for democracy. Used responsibly, it can strengthen democratic systems; misused, it could seriously undermine them. The impact of AI on democracy is complex and multifaceted. Harnessed properly, AI can enhance civic engagement, voter education, governance, and election transparency and integrity. Democratic governments can use it to solicit public input on policy matters or gather feedback during decision-making. Around the world, AI is already transforming election campaigns and automating electoral procedures. In Nepal, the government could use AI-enabled platforms to gather public feedback on its proposed AI policies and regulations. AI can also serve as a tool to combat fake news, disinformation and misinformation, which undermine democratic institutions and erode public trust. Moreover, AI has the potential to reduce election costs for both the state and political parties, minimizing the influence of money and muscle in shaping voting behavior.

At the same time, AI also presents serious threats to democracy. Deepfakes, AI-generated content that convincingly mimics real people and events, are already blurring the line between truth and falsehood. In Nepal, a flood of AI-generated misinformation is spreading across social media, targeting politicians and political parties and eroding public trust in the political system. Unfortunately, there has been little research or public debate on this issue, even as malicious actors continue to exploit these tools to manipulate opinion.

There is no concrete data yet on the extent of misinformation and disinformation in Nepal's 2017 and 2022 elections, but it is clear that these problems will significantly affect future polls. Candidates with greater financial and technological resources are likely to benefit disproportionately, widening the gap between the powerful and the under-resourced.

In the 2024 and 2025 elections in countries like India, the US and Germany, and across South Asia, AI-generated false content was widely circulated to influence voters. While high levels of digital literacy in some of these democracies may have mitigated its impact, countries like Nepal, with lower digital literacy, remain highly vulnerable to such tactics. Nepal also faces the risk of Foreign Information Manipulation and Interference (FIMI), as observed in recent elections in India, the US and Taiwan, because of its geopolitical position and big powers' preference for one party over another to advance their strategic interests. Some signs of FIMI were observed during the debate over the Millennium Challenge Corporation (MCC) compact between 2019 and 2022, though this remains to be independently verified.

Due to low digital literacy, many in Nepal are unable to recognize AI-generated fake content, which shapes public opinion and has even misled top politicians. For instance, senior leader Bam Dev Gautam once called on Prime Minister KP Sharma Oli to resign on the basis of a fake video. In such a context, AI is more likely to exacerbate democratic vulnerabilities unless strong regulation is put in place.

Deliberations are underway across the world about AI governance and regulation. While the European Union, Germany and the United Kingdom have made some progress, even their approaches are struggling to keep pace with the rapid evolution of AI. In 2024, the United Nations emphasized the need for a global AI regulatory body. A UN report noted that if AI-related risks grow more severe and concentrated, the world may need a stronger international institution with monitoring, enforcement and accountability powers.

Nepal is still in the early stages of AI governance. Even as AI adoption in sectors like health, education and governance increases, the unchecked use of AI to spread misinformation has become a pressing concern for the country. Although the government has drafted the National Artificial Intelligence Policy, 2025, public awareness of it remains low, and feedback from stakeholders has been mixed. The policy envisions establishing a National AI Council, an AI Regulatory Body and an AI Excellence Center. However, there have been delays in preparing the legal and institutional frameworks needed to set up these bodies, and the draft policy misses some crucial aspects of AI regulation.

Globally, companies like Microsoft, Amazon and Google dominate the market for the cloud computing resources used to train and deploy AI models. In Nepal, AI systems are likely to be controlled either by government agencies or by corporate entities close to power. In this context, the time has come to discuss the idea of publicly owned AI, developed and managed for the collective good. We must also begin discussions on creating a democratic, robust and transparent institution to govern AI in the national interest.

To make this a reality, the government must invest in AI capacity-building, including training human resources capable of leading AI governance. To reduce the harms and maximize the benefits of AI in democratic systems, state institutions must act now. In particular, the Election Commission should develop specific policies and infrastructure to safeguard upcoming elections from AI-enabled threats and malign actors seeking to erode democracy.