The role of AI in democracy

EDITOR’S NOTE: The League of Women Voters of Lane County published a study in December on “The Evolving Role of Artificial Intelligence in Democracy,” focusing on defining AI, its development, and its impact on democratic institutions and governance challenges. This month, The Chronicle partners with the LWV-LC as part of a deep dive into this ever-evolving policy subject. Read the editor’s intro column about the new page here.

AI has become essential in daily life, enhancing business with data analytics, supporting education with personalized learning, and advancing healthcare through diagnostic tools and telehealth. It is transforming how decisions are made and how people interact, and as the technology evolves, its role will only expand.

Democracy, meaning “rule by the people,” is complex: it relies on governmental authority that stems from the consent of the governed, expressed through free elections. For true democracy, citizens must have access to unbiased information, the freedom to organize and dissent, real choices, and the ability to hold leaders accountable. Without participation and transparency, elections lose their significance.

When developed responsibly, AI can enhance civic engagement and improve democratic processes, and when fairness and equitable access are prioritized, it can benefit a broader audience. For individuals, AI can strengthen democracy by facilitating participation, providing personalized information about candidates and voting, and simplifying the voting process through chatbots that answer questions about registration and polling.

AI can aid grassroots organizing by identifying supporters and optimizing campaign strategies through social media analysis. It can also promote individual rights by simplifying legal documents and clarifying privacy policies, empowering people to recognize rights violations and understand data sharing.

AI can enhance election integrity by swiftly detecting disinformation, identifying inauthentic behavior, and securing voting infrastructure against real-time threats. It also aids fact-checkers and journalists in verifying claims and assessing the authenticity of media.

Additionally, AI optimizes polling locations to reduce wait times and analyzes ballot processing irregularities, improving election administration and ensuring a smoother voting experience for citizens.

AI’s threats to democratic systems

The use of AI threatens democracy in several ways. It spreads misinformation and disinformation, eroding trust in institutions and distorting public discourse. Biased AI systems can lead to unfair government decisions, and the use of AI to track individuals can suppress fundamental rights such as free speech and assembly.

Deepfakes, a significant form of disinformation, are AI-generated or altered images, audio, or video that misrepresent individuals. OpenAI’s latest smartphone app lets users create hyper-realistic videos, raising concerns about distinguishing reality from fiction.

As deepfakes become more common, they amplify uncertainty, causing viewers to question even credible sources. Disinformation from deepfakes can lead individuals to believe false “facts,” distorting discussions of important policies. These manipulated media can influence elections by spreading damaging content about candidates, especially if released right before an election, when debunking is difficult.

Additionally, AI is increasingly used in gerrymandering, where partisan mapmakers manipulate electoral district boundaries to favor specific parties. By analyzing voter preferences, demographics, and geographic data, AI helps identify areas likely to support certain candidates and optimize district configurations to maximize votes for those parties.

This raises ethical and legal issues, as gerrymandering can disenfranchise voters and undermine democratic principles. The lack of transparency in the algorithms used further complicates accountability and fairness in elections.

Bias 

AI systems are trained on datasets created by people, and biased datasets can produce biased systems: algorithmic bias, in which certain attributes such as age or gender are given undue weight; sampling bias, when the data collected is not representative; and representation bias, when the data fails to model the population accurately.

These biases can produce inconsistent or harmful outcomes. For instance, COMPAS, an AI tool used to predict criminal reoffending, was found to have racial bias: it over-predicted reoffending for Black defendants while under-predicting it for White defendants. Companies often keep their algorithms proprietary, leaving the public and lawmakers in the dark about such issues.

Surveillance 

We are increasingly under surveillance wherever we go. Cameras monitor intersections and highways, while stores track shopper movements and employers oversee their workers, compromising personal privacy and posing risks to citizens’ rights. 

Jay Stanley of the American Civil Liberties Union cautions: “In a democracy, the government shouldn’t be watching its citizens all the time just in case we do something wrong.”

Facial recognition software, designed to identify criminals, has shown biases, particularly against women and people of color, because of the unrepresentative makeup of its training images. Even if such biases are addressed, advances in AI could lead to more extensive monitoring and judgment of individuals.

Residents of Springfield raised concerns last year regarding the Springfield Police Department’s use of Flock cameras, which are automated license plate recognition (ALPR) cameras. During city council public comment periods, many residents warned of potential privacy invasions and data-sharing issues, particularly given the cameras’ placement in residential areas rather than retail spaces. Others feared that mass surveillance could undermine trust within the community and change residents’ behavior. The situation escalated when the cameras captured images of a stolen vehicle while they were supposed to be inactive.

Some Springfield residents shared their views on Flock cameras at the No Kings protests in Springfield and Eugene earlier this year. BOB WILLIAMS / THE CHRONICLE

In response to the public outcry, the SPD severed ties with the camera company and removed the devices in December.

Governance challenges

To effectively support democracy with AI, there must be coherent oversight that includes monitoring outcomes, regulations, multi-stakeholder assessments, accountability mechanisms, and public education. Independent auditing by various organizations is essential for testing AI systems for bias and vulnerabilities. “Algorithmic transparency” is needed, particularly in high-stakes areas like criminal justice and elections, requiring systems to be explainable and auditable.

Successful oversight should involve diverse voices, including technologists and affected communities, and should use advisory boards and public impact assessments. Clear liability rules for AI harms and protections for whistleblowers are crucial, though challenges include national security pressures and resistance to regulation from companies. 

Citizens must develop AI literacy to effectively advocate for their interests, supported by public education funding and tutorials. Positive use cases of AI should be showcased to alleviate public fear and promote understanding of its benefits.

However, major tech companies like Meta, Amazon, and Google hold significant power, influencing regulations in ways that protect their market dominance. Their financial and technical resources enable them to lobby against critical oversight, shaping legislation to minimize restrictions and maintain control over AI and digital services.

The application of AI in democratic systems presents both opportunities and risks. On one hand, AI can help inform voters about candidates and issues, support citizens in understanding their rights, and facilitate civic engagement. On the other, it poses significant threats, particularly through the creation of highly convincing deepfake content, which can distort reality and undermine informed decision-making.

Moreover, the rapid pace of AI development might outpace citizens’ and regulatory bodies’ capacity to address its flaws and protect individual rights, such as freedom of expression and due process. The concentration of AI development power in a few major tech companies, which often withhold key data and algorithms, compounds these issues.

A 2025 study by Rafaa Chehoudi spanning 72 countries found a significant negative link between AI development and democracy scores, pointing to risks of human rights violations, bias, discrimination, and privacy harms associated with AI use. The study stressed the need for further research into how AI influences democratic systems and the risks it poses to citizens.
