AI Research
AI & Elections – McCourt School – Public Policy

How can artificial intelligence transform election administration while preserving public trust? This spring, the McCourt School of Public Policy partnered with The Elections Group and Discourse Labs to tackle this critical question at a groundbreaking workshop.
The day-long convening brought together election officials, researchers, technology experts and civil society leaders to chart a responsible path forward for AI in election administration. Participants focused particularly on how AI could revolutionize voter-centric communications—from streamlining information delivery to enhancing accessibility.
The discussions revealed both promising opportunities for public service innovation and legitimate concerns about maintaining institutional trust in our democratic processes. Workshop participants developed a comprehensive set of findings and actionable recommendations that could shape the future of election technology.
Expert Insights from Georgetown’s Leading Researchers
To unpack the workshop’s key insights, we spoke with two McCourt School experts who are at the forefront of this intersection between technology and democracy:
Ioannis Ziogas is an Assistant Teaching Professor at the McCourt School of Public Policy, an Assistant Research Professor at the Massive Data Institute, and Associate Director of the Data Science for Public Policy program. His work bridges the gap between cutting-edge data science and real-world policy challenges.

Lia Merivaki is an Associate Teaching Professor at the McCourt School of Public Policy and Associate Research Professor at the Massive Data Institute, where she focuses on the practical applications of technology in democratic governance, particularly election integrity and voter confidence.
Together, they address five essential questions about AI’s role in election administration—and what it means for voters, officials and democracy itself.
Q1
How is AI currently being used in election administration, and are there particular jurisdictions that are leading in adoption?
Ioannis: When we talk about AI in elections, we need to clarify that it is not a single technology but a family of approaches, from predictive analytics to natural language processing to generative AI. In practice, election officials are already using generative AI routinely for communication purposes such as drafting social media posts and shaping public-facing messages. These efforts aim to increase trust in the election process and make information more accessible. Some offices have even experimented with using generative AI to design infographics, though this can be tricky due to hallucinations or inaccuracies. More recently, local election officials have been exploring AI to streamline staff training and operations, or to summarize complex legal documents.
Our work focuses on alerting election officials to the limitations of generative AI, such as model drift and bias propagation. A key distinction we emphasize in our research is between AI as a backend administrative tool (which voters may never see) and AI as a direct interface with the public (where voter trust and transparency become central). We believe that generative AI tools can be used in both contexts, provided that there is awareness of the challenges and limitations.
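To make that backend-versus-public-facing distinction concrete, here is a minimal sketch of the kind of human-in-the-loop drafting workflow described above. Everything in it is hypothetical: the `call_llm` stub stands in for whatever generative AI service an office might use. The point is only the shape of the process, in which AI drafts and a named official approves.

```python
# Hypothetical sketch: generative AI drafts voter communications,
# but a human official reviews and approves before anything is published.
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    """Stub standing in for any generative AI API (hypothetical)."""
    return f"[AI DRAFT] {prompt}"

@dataclass
class Draft:
    topic: str
    text: str
    approved: bool = False

def generate_draft(topic: str) -> Draft:
    prompt = (
        f"Write a plain-language post about: {topic}. "
        "Link only to the official county election website."
    )
    return Draft(topic=topic, text=call_llm(prompt))

def publish(draft: Draft) -> None:
    # AI output is advisory; unreviewed content never goes out.
    if not draft.approved:
        raise ValueError("Refusing to publish an unreviewed AI draft")
    print(draft.text)

draft = generate_draft("early voting dates and polling locations")
draft.approved = True  # set only after an official checks dates and URLs
publish(draft)
```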
Lia: Election officials have been familiar with AI for quite some time, primarily to understand how to mitigate AI-generated misinformation. A leader in this space has been Arizona Secretary of State Adrian Fontes, who conducted a first-of-its-kind deepfake-detection tabletop exercise in preparation for the 2024 election cycle.
We’ve had conversations with election officials in California, New Mexico, North Carolina, Florida, Maryland and others whom we call early adopters, with many more being ‘AI curious.’
Q2
Security is always a big concern when it comes to the use of AI. Talk about what risks are introduced by bringing AI into election administration, and conversely, how AI can help detect and prevent any type of election interference and voter fraud.
Ioannis: From my perspective, the core security challenge is not only technical but also about privacy and trust. AI systems, by design, rely on large volumes of data. In election contexts, this often includes sensitive voter information. Even when anonymized, the use of personal data raises concerns about surveillance, profiling, or accidental disclosure. Another risk relates to delegating sensitive tasks to AI, which can render election systems vulnerable to adversarial attacks or hidden biases baked into the models.
At the same time, AI can support security: machine learning can detect coordinated online influence campaigns, identify anomalous traffic to election websites, or flag irregularities that warrant further human review. In short, I view AI as both a potential shield and a potential vulnerability, which is why careful governance and transparency are essential. That is why I believe it is critical to pair AI adoption with clear safeguards, training and guidance, so that officials can use these tools confidently and responsibly.
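As one illustration of the "shield" side, here is a minimal sketch of anomaly flagging on election-website traffic. The features and numbers are synthetic assumptions, not a deployed system or real telemetry, and, as Ziogas stresses, anything flagged would go to a human for review.

```python
# Illustrative sketch: flag anomalous election-website traffic for human review.
# Features and data are synthetic assumptions, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Per-minute features: [request volume, unique-IP ratio, error rate]
normal = np.column_stack([
    rng.normal(200, 30, 1000),    # typical request volume
    rng.normal(0.8, 0.05, 1000),  # mostly distinct visitors
    rng.normal(0.02, 0.01, 1000), # low error rate
])
# Two minutes that look like a coordinated automated burst
burst = np.array([[5000, 0.05, 0.30], [4800, 0.07, 0.25]])
traffic = np.vstack([normal, burst])

model = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
flags = model.predict(traffic)  # -1 marks outliers

# The model only flags; a human decides what the anomaly means.
for i in np.where(flags == -1)[0]:
    print(f"minute {i}: review needed -> {traffic[i].round(2)}")
```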
Lia: A potential risk we are trying to mitigate is the impact that relying on AI for important administrative tasks can have on voter trust. For instance, voters who call or email their election official expecting to talk with a person, but instead interact with a chatbot, may feel disappointed and, in turn, trust both the information and the election official less. There is also some evidence that voters do not trust information generated with AI, particularly when its use is disclosed.
As for detecting and preventing irregularities, over-reliance on AI can be problematic and can lead to disenfranchisement. To illustrate, AI can help identify individuals in voter records whose information is missing, which would seemingly make maintaining accurate lists more efficient. The election office can send these individuals a letter asking them to verify their citizenship and update their information. This seems like a sound practice; however, it violates federal law, and it risks making eligible voters feel intimidated, or having their eligibility challenged by bad actors. The reality is that maintaining voter records is a highly complex process, and data entry errors are very common. Deploying AI models to substitute for existing practices in election administration such as voter list maintenance – with the goal of detecting whether non-citizens register or whether dead voters remain on the rolls – can harm voters and undermine trust.
Q3
What are the biggest barriers to AI adoption in election administration – technical, financial, or political?
Lia: There are significant skills and knowledge gaps among election officials when it comes to utilizing technology generally, and we see such gaps with AI adoption, which is not surprising. Aside from technical barriers, election offices are under-resourced, especially at the local jurisdiction level. We observe that policies around AI adoption in public administration generally, and election administration specifically, are sparse at the moment.
While the election community has invested substantial resources in safeguarding election infrastructure against the threats of AI, we are not seeing – yet – a proportional effort to educate and prepare election officials on how to use AI to improve elections. To better understand the landscape of AI adoption and how best to support the election community, we hosted an exploratory workshop at McCourt in April 2025, in collaboration with The Elections Group and Discourse Labs. In this workshop, we brought together election officials, industry, civil society leaders and other practitioners to discuss how AI tools are used by election officials, what technical barriers exist and how to move forward with designing policies on the ethical and responsible use of AI in election administration. Through this workshop, we identified a list of priorities that require close collaboration among the election community, academia, civil society and industry, to ensure that AI adoption is responsible, ethical and efficient, and does not negatively affect the voter experience.
Ioannis: I would highlight that barriers are not just about resources but also about institutional design. Election officials often work in environments of high political scrutiny but low budgets and limited technical staff. Introducing AI tools into that context requires financial investment and clear guidance on how to evaluate these systems: what counts as success, how to measure error rates and how to align tools with federal and state regulations. Beyond that, there is a cultural barrier. Many election officials are understandably cautious; they’ve spent the past decade defending democracy against disinformation and cyber threats, so embracing new technologies requires trust and confidence that AI will not introduce new risks. That is why partnerships with universities and nonpartisan civil-society groups are critical: they provide a space to pilot ideas, build capacity, and translate research into practice.
Our two priorities are to help narrow the skills gap and to build frameworks for ethical and responsible AI use. At McCourt, we're collaborating with Arizona State University's Mechanics of Democracy Lab, which is developing training materials and custom AI products for election officials. Drawing on our background in AI and elections, we aim to give election officials a practical resource that maps out both the risks and the potential of these tools, and that helps them identify ideal use cases where AI can enhance efficiency without compromising trust or the voter experience.
Q4
Looking ahead, what emerging AI technologies could transform election administration in the next 5-10 years?
Lia: It’s hard to predict, really. At the moment we are seeing high interest from vendors and election officials in integrating AI into elections. Concerns about security and privacy will undoubtedly shape the discussion about what AI can do for the election infrastructure. We may see a permissive approach to using AI technologies to communicate with voters, produce training materials, and translate election materials into non-English languages, among other tasks. That said, elections are run by humans, and maintaining public trust relies on having “humans in the – elections – loop.” This, coupled with ongoing debates about how AI should or should not be regulated, may result in more guardrails and restrictions over time.
Ioannis: One promising direction is multimodal AI: systems that process text, audio and images together. For election officials, this could mean automatically generating plain-language guides, sign-language translations, or sample audio ballots to improve accessibility. But these same tools can amplify risks if their limitations are not understood. For that reason, any adoption will need to be coupled with auditing, transparency and education for election staff, so they view AI as a supportive tool rather than a replacement platform or a black box.
Q5
What guidelines or regulatory frameworks are needed to govern AI use in elections?
Ioannis: We urgently need a baseline framework that establishes what is permissible, what requires disclosure, and what is off-limits. Today, election officials are experimenting with AI in a largely unregulated space, and they are eager for guidance. A responsible framework should include at least three elements: a) transparency: voters should know when AI-generated materials are used in communications; b) accountability: human oversight should retain the final authority, with AI serving only as a support; and c) auditing: independent experts must be able to test and evaluate these tools for accuracy, bias and security.
AI Research
Ray Dalio calls for ‘redistribution policy’ when AI and humanoid robots start to benefit the top 1% to 10% more than everyone else

Legendary investor Ray Dalio, founder of Bridgewater Associates, has issued a stark warning regarding the future impact of artificial intelligence (AI) and humanoid robots, predicting a dramatic increase in wealth inequality that will necessitate a new “redistribution policy”. Dalio articulated his concerns, suggesting that these advanced technologies are poised to benefit the top 1% to 10% of the population significantly more than everyone else, potentially leading to profound societal challenges.
Speaking on “The Diary Of A CEO” podcast, Dalio described a future where humanoid robots, smarter than humans, and advanced AI systems, powered by trillions of dollars in investment, could render many current professions obsolete. He questioned the need for lawyers, accountants, and medical professionals if highly intelligent robots with PhD-level knowledge become commonplace, stating, “we will not need a lot of those jobs.” This technological leap, while promising “great advances,” also carries the potential for “great conflicts.”
He predicted “a limited number of winners and a bunch of losers,” with the likely result being much greater polarity. With the top 1% to 10% “benefiting a lot,” he foresees that being a dividing force. He described the current business climate around AI and robotics as a “crazy boom,” but the question that’s really on his mind is why you would need even a highly skilled professional if there’s a “humanoid robot that is smarter than all of us and has a PhD and everything.” Perhaps surprisingly, the founder of the biggest hedge fund in history suggested that redistribution will be sorely needed.
Five big forces
“There certainly needs to be a redistribution policy,” Dalio told host Steven Bartlett, without directly mentioning universal basic income. He clarified that this will have to be more than “just a redistribution of money policy because uselessness and money may not be a great combination.” In other words, if you redistribute money but don’t think about how to put people to work, that could have negative effects in a world of autonomous agents. The ultimate takeaway, Dalio said, is “that has to be figured out, and the question is whether we’re too fragmented to figure that out.”
Dalio’s remarks echo those of computer science professor Roman Yampolskiy, who sees AI creating up to 80 hours of free time per week for most people. But AI is also showing clear signs of shrinking the job market for recent grads, with one study finding a 13% drop in AI-exposed jobs since 2022. Major revisions from the Bureau of Labor Statistics show that AI has begun “automating away tech jobs,” an economist said in a statement to Fortune in early September.
Dalio said he views this technological acceleration as the fifth of five “big forces” that create an approximate 80-year cycle throughout history. He explained that human inventiveness, particularly with new technologies, has consistently raised living standards over time. However, when people don’t believe the system works for them, he said, internal conflicts and “wars between the left and the right” can erupt. Both the U.S. and UK are currently experiencing these kinds of wealth and values gaps, he said, leading to internal conflict and a questioning of democratic systems.
Drawing on his extensive study of history, which spans 500 years and covers the rise and fall of empires, Dalio sees a historical precedent for such transformative shifts. He likened the current era to previous evolutions, from the agricultural age, where people were treated “essentially like oxen,” to the industrial revolutions where machines replaced physical labor. He said he’s concerned about a similar thing with mental labor, as “our best thinking may be totally replaced.” Dalio highlighted that throughout history, “intelligence matters more than anything” as it attracts investment and drives power.
Pessimistic outlook
Despite the “crazy boom” in AI and robotics, Dalio’s outlook on the future of major powers like the UK and U.S. was not optimistic, citing high debt, internal conflict, and geopolitical factors, in addition to a lack of innovative culture and capital markets in some regions. While personally “excited” by the potential of these technologies, Dalio’s ultimate concern rests on “human nature”. He questions whether people can “rise above this” to prioritize the “collective good” and foster “win-win relationships,” or if greed and power hunger will prevail, exacerbating existing geopolitical tensions.
Not all market watchers see a crazy boom as such a good thing. Even OpenAI CEO Sam Altman has said it resembles a “bubble” in some respects. Goldman Sachs has calculated that a bursting bubble could wipe out up to 20% of the S&P 500’s valuation. And some long-time critics of the current AI landscape, such as Gary Marcus, disagree with Dalio entirely, arguing that the bubble is due to pop because the AI technology currently on the market is too error-prone to be relied upon, and those flaws cannot simply be scaled away. Stanford computer science professor Jure Leskovec told Fortune that AI is a powerful but imperfect tool, and that he is emphasizing “human expertise” in his classroom, including the hand-written, hand-graded exams he uses to really test his students’ knowledge.
For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing.
AI Research
Mira Murati’s Thinking Machines Lab Publishes First Research on Deterministic AI Models

Thinking Machines Lab, the AI research company founded by former OpenAI CTO Mira Murati, has released its first public research under a new blog series titled Connectionism. Backed by $2 billion in seed funding and a team of former OpenAI researchers, the lab is focused on solving fundamental challenges in AI.
The inaugural post, authored by Horace He, explores how randomness in large language model inference arises from GPU kernel orchestration. The research outlines techniques to create deterministic responses, a breakthrough with potential applications in enterprise reliability, scientific research, and reinforcement learning. The publication marks a rare glimpse into one of Silicon Valley’s most closely watched AI startups as it prepares its first product launch.
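The post itself is not reproduced here, but the numerical root cause it builds on is easy to demonstrate: floating-point addition is not associative, so summing the same values in a different order (as GPU kernels can do when scheduling or batch composition changes) yields slightly different results, and token sampling can amplify those differences into visibly different model outputs. A minimal, purely illustrative sketch in NumPy:

```python
# Floating-point addition is non-associative: the same values summed
# in different orders give (slightly) different results.
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(1_000_000).astype(np.float32)

s_original = np.sum(x)                   # NumPy's default reduction order
s_shuffled = np.sum(rng.permutation(x))  # same values, different order
s_sequential = np.float32(0)
for chunk in np.array_split(x, 1000):    # a third accumulation order
    s_sequential += np.sum(chunk)

print(s_original, s_shuffled, s_sequential)  # typically differ in the low bits
```

Read against the summary above, making inference deterministic then amounts to pinning down these reduction orders so that the same request always follows the same numerical path, regardless of how it is batched with other requests.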
AI Research
When you call Donatos, you might be talking to AI

If you call Donatos Pizza to place an order, you might be speaking with artificial intelligence.
The Columbus-based pizza chain announced that it has completed a systemwide rollout of voice-ordering technology powered by Revmo AI. The company says the system is now live at all 174 Donatos locations and has already handled more than 301,000 calls since June.
Donatos Reports Higher Order Accuracy, More Efficient Operations
According to Donatos, the AI system has converted 71% of calls into orders, up from 58% before the rollout, and has achieved 99.9% order accuracy. The company also says the switch freed up nearly 5,000 hours of staff time in August alone, allowing employees to focus more on preparing food and serving in-store customers.
“Our focus was simple: deliver a better guest experience on the phone and increase order conversions,” Kevin King, President of Donatos Pizza, said in a statement.
Ben Smith, Donatos’ Director of Operations Development, said the change provided immediate relief on the phones, allowing staff to redirect time to order accuracy and hospitality.
Donatos said it plans to expand the system to handle more types of calls and to make greater use of its centralized answering center. The company did not say whether it plans to reduce call center staffing or rely more heavily on automation in the future.
Other chains report trouble with AI ordering systems
Taco Bell recently started re-evaluating its use of AI to take orders in the drive-thru after viral videos exposed its flaws. In one well-known video, a man crashed the system by ordering 18,000 cups of water. The company is now looking at how AI can help during busy times and when it’s appropriate for a human employee to step in and take the order.
Last year, McDonald’s ended its AI test in 100 restaurants after similar problems surfaced. In one case, AI added bacon to a customer’s ice cream. A McDonald’s executive told the BBC that artificial intelligence will still be part of the chain’s future.