As American democracy unravels at the hands of President Trump and his enabling congressional and Supreme Court majorities, millions of Americans are desperate to identify whatever possible countermeasures remain to slow the country’s descent into fascism.
The outcome of the 2026 midterms is unlikely to produce meaningful change, even if the Democrats take control of the House. Without a cooperative Senate, it will be impossible either to pass legislation or secure a conviction on impeachment charges. Oversight hearings can bring public attention to things like rampant corruption, but the threats Trump poses to the rule of law and democracy are already well-known. The courts can only do so much.
There’s another emerging tool, however: artificial intelligence.
Trump seems to understand the transformative power of AI. Last month, the administration announced an “AI Action Plan” for “winning the AI race.” Among other measures, it promises to remove “onerous Federal regulations that hinder AI development and deployment, and seek private sector input on rules to remove.”
As part of this initiative, the General Services Administration and OpenAI announced earlier this month that the company will be “providing ChatGPT to the entire U.S. federal workforce” under a “first-of-its-kind partnership.” Participating agencies will pay a nominal cost of $1 each for the first year to enable federal employees to “explore and leverage AI.” The company is also “teaming up with experienced partners Slalom and Boston Consulting Group to support secure, responsible deployment and trainings.” Last week, the AI company Anthropic likewise announced it had struck the same deal with GSA to enable federal agencies’ access to its Claude model.
The Trump administration’s effort to streamline the federal government with AI models makes some sense. Research has shown that generative AI — particularly large language models, which consume vast amounts of data to understand and generate natural language content — can enhance government efficiency in data processing, analysis and drafting, among other potential advantages.
But AI systems also increase the risk of widespread government surveillance, personalized misinformation and disinformation, systematic discrimination, lack of accountability and inaccuracy. According to a recent academic paper, “although many studies have explored the ethical implications of AI, fewer have fully examined its democratic implications.”
Trump’s alliance with OpenAI head Sam Altman goes back to the start of his second term, when he announced a $500 billion joint venture with OpenAI, Oracle and SoftBank to build up to 20 large AI data centers. Trump called the venture “Stargate.” The deal’s details are murky, including who will have access to Stargate and how it will possibly benefit taxpayers. Although a spokesman for OpenAI told Fox News Digital that “Sam Altman sort of planted a flag on democratic AI versus autocratic AI,” let’s not forget that Altman is not a government official or employee.
As a legal matter, it is unclear whether these “fast-tracked” deals will fully comply with traditional oversight and procurement laws and procedures. No major AI company is currently approved under the Federal Risk and Authorization Management Program, for example, which is the process for authorizing the use of cloud technologies by federal agencies. According to the GSA website, the program aims to ensure “security and protection of federal information” by imposing strict cybersecurity controls to protect against data breaches, hacking and unauthorized access, and requiring ongoing monitoring and reporting.
Given that the GSA is reportedly working on “developing a separate authorization” for generative AI systems like ChatGPT and Claude, the potential threats to national security and private citizens’ personal information are significant. The Trump administration’s lack of transparency also risks creating a black-box government run by proprietary algorithms that the public cannot inspect — centralizing control over federal AI in two companies whose interests clearly lie in market dominance, not the public good.
This is why these kinds of decisions are best made through established legal procedures — including the Competition in Contracting Act (requiring fair and open competition), the Privacy Act of 1974 (limiting how agencies can collect and disclose personal data), the Federal Records Act (requiring the proper retention and archiving of public records) and the Administrative Procedure Act (requiring public comment and input into major policy decisions).
For now, OpenAI has promised that its “goal is to ensure agencies can use AI securely and responsibly. ChatGPT Enterprise already does not use business data, including inputs or outputs, to train or improve OpenAI models. The same safeguards will apply to federal use.” This promise from Altman’s company is no substitute for actual legal standards enforced by the federal government.
Whether AI tools embedded in federal government systems could one day be used to sway elections to favor Trump and his cronies is a vital question. For now, what’s clear is that Democrats need to get into the AI game, and fast.
A Democratic political action committee called the National Democratic Training Committee recently unveiled an online course entitled “AI For Progressive Campaigns,” which is designed to teach candidates how to use AI to help create social media content, draft speeches, craft voter outreach messaging and phone-banking scripts, conduct research into their constituencies and opponents, and develop internal training materials. The founder and CEO of the group, Kelly Dietrich, stated that “thousands of Democratic campaigns can now leverage AI to compete at any scale.”
This effort, although laudable, does not go far enough to capitalize on AI’s potential to help outmaneuver authoritarianism in the U.S. There’s much more that might be done, including using AI to educate citizens on the benefits of democracy, how institutions work and the facts underlying important issues; to create large-scale, moderated public deliberation and consensus around divisive issues; to detect and alert the public to manipulated media, thus combatting misinformation and disinformation and fostering public trust in an alternative to Trump; and to create and implement effective messaging strategies for alternative visions for the future of the country.
AI could be American voters’ best friend, not their enemy. It just needs to be asked.
Kimberly Wehle is author of the book “Pardon Power: How the Pardon System Works — and Why.”