

Vetting of ‘ideological bias’ in AI models in new Trump plan stirs confusion

The Trump administration’s push to expand artificial intelligence use in the government is now being coupled with a fight against “ideological bias” in AI models, raising new questions about who and what will determine the technology used by federal workers.

In its highly anticipated AI Action Plan released Wednesday, the Trump administration outlined various action items related to the federal procurement process for AI models, including new limitations on technology the government approves for contracts. 

The 28-page plan placed heavy emphasis on ensuring AI systems are “built from the ground up with freedom of speech and expression in mind” and that AI used by the government “objectively reflects truth rather than social engineering agendas.” 

In its listed policy recommendations, the plan called for updated federal procurement guidelines to “ensure that the government only contracts with frontier large language model developers who ensure that their systems are objective and free from top-down ideological bias.” 

The Trump administration has made fighting what it sees as bias against conservatives a key policy tenet, but Wednesday’s announcement marks the first time this push has been linked to automation technology in the government. 

It is not immediately clear how the administration hopes procurement offices will vet for ideological biases, though some in the technology space are already sounding alarms about the murkiness of the move. 

Kit Walsh, director of AI and access-to-knowledge legal projects at the Electronic Frontier Foundation, suggested the initiative could be rooted in “a desire to control what information is available through AI tools.”

“The government has more leeway to decide which services it purchases for its own use, but may not use this power to punish a publisher for making available AI services that convey ideas the government dislikes,” Walsh said in a statement. 

Some experts warned that this leaves too much discretion with the government to decide on models that could be used both in and outside of government. 

Ryan Hauser, a research fellow at George Mason University’s Mercatus Center, said the procurement requirement forces the government’s technology partners to comply with “an impossible standard.” 

“Anthropic, Google, OpenAI, and xAI are already working with the Pentagon and lending their LLMs to national security work,” Hauser told FedScoop on Wednesday. “That kind of innovation is badly needed in our overly rigid bureaucracy.”

“But now these same frontier labs will have to commit more resources to auditing their models and making sure they don’t run afoul of these new bias requirements,” he added. 

Kristian Stout, director of innovation policy at the International Center for Law and Economics, noted that federal procurement can exert “significant downstream pressure” on product design, especially for smaller firms more reliant on government buyers. 

“If objectivity becomes a procurement criterion, we should expect companies to be more explicit about how they audit or validate their models for neutrality,” Stout told FedScoop. 

As part of the plan, the Trump administration recommended that the National Institute of Standards and Technology adjust its AI Risk Management Framework to remove references to diversity, equity, and inclusion, climate change and misinformation. 

Under this change, AI companies — especially those with federal contracts — would not be required to manage the risks associated with those issues.  

Topics related to DEI are the administration’s main concern when it comes to potential biases, a senior White House official told reporters on a call Wednesday morning. 

“We expect GSA to put together some procurement language that would be contractual language, requiring that, again, LLMs procured by the federal government would abide by a standard of truthfulness, of seeking accuracy and truthfulness, and not sacrificing those things due to ideological bias,” the official said. 

Cato Institute research fellow Matthew Mittelsteadt called the move the “biggest error” of the order and suggested it could have ripple effects on foreign competition. 

“Not only is ‘objectivity’ elusive philosophically, but efforts to technically contain perceived bias have yet to work,” he said in a statement. “If this policy successfully shapes American models, we will lose international customers who won’t want models shaped by a foreign government’s whims.” 

The White House’s move against “ideological bias” in AI models comes as the General Services Administration promotes its own AI chatbot — GSAi — for federal workers and increasingly explores tools from external firms. 

The GSAi platform already gives federal workers access to models from private companies like Anthropic and Meta. And last week, xAI announced Grok was available to purchase through GSA, just days after xAI faced backlash for the chatbot’s recent antisemitic responses.


Written by Miranda Nazzaro

Miranda Nazzaro is a reporter for FedScoop in Washington, D.C., covering government technology. Prior to joining FedScoop, Miranda was a reporter at The Hill, where she covered technology and politics. She was also a part of the digital team at WJAR-TV in Rhode Island, near her hometown in Connecticut. She is a graduate of the George Washington University School of Media and Public Affairs. You can reach her via email at miranda.nazzaro@fedscoop.com or on Signal at miranda.952.





OpenAI Backs AI-Animated Film for 2026 Cannes Festival


OpenAI is backing the production of the first film largely animated with AI tools, set to premiere at the 2026 Cannes Film Festival. The tech company aims to prove its AI technology can revolutionize Hollywood filmmaking with faster production timelines and significantly lower costs. 

The movie, titled “Critterz,” follows woodland creatures that go on an adventure after their village is damaged by a stranger. The film’s producers are aiming for a global theatrical release after the premiere at the Cannes Film Festival. 

The project has a budget of less than US$30 million and a production timeline of nine months. That is a stark difference: most mainstream animated movies have budgets in the range of US$100 million to US$200 million and typically take about three years to develop and produce. 

OpenAI-backed ‘Critterz’ set for release at the Cannes Film Festival

Chad Nelson, a creative specialist at OpenAI, originally began developing Critterz as a short film three years ago, using the company’s DALL-E image generation tool to develop the concept. Nelson has now partnered with the London-based Vertigo Films and studio Native Foreign in Los Angeles to expand the project into a feature film. 

In the news release that announced OpenAI’s backing of the film, Nelson said: “OpenAI can say what its tools do all day long, but it’s much more impactful if someone does it,” adding, “That’s a much better case study than me building a demo.” Crucially, however, the film’s production will not be entirely AI-generated, as it will blend AI technology with human work. 

Human artists will draw sketches that will be fed into OpenAI’s tools such as GPT-5, the large language model (LLM) on which ChatGPT is built, as well as other image-generating AI models. Human actors will voice the characters. 

Critterz has some of the writing team behind the smash hit ‘Paddington in Peru’

The film has some of the writing team behind the hit Paddington in Peru, but it arrives at a time of intense legal fights between Hollywood studios and AI and other tech companies over intellectual property rights. 

Studios such as Disney, Universal, and Warner Bros. have filed copyright infringement suits against Midjourney, another AI firm, alleging that the company illegally used their characters to train its image-generation engine. Critterz will be funded by Vertigo’s Paris-based parent company, Federation Studios, with some 30 contributors set to share profits. 

Crucially, however, Critterz will not be the first feature film ever made with generative AI. Last year, “DreadClub: Vampire’s Verdict” was released and is widely considered to be the first feature film entirely made by generative AI. It had a budget of US$405. 





AI Lies Because It’s Telling You What It Thinks You Want to Hear

Generative AI is popular for a variety of reasons, but with that popularity comes a serious problem. These chatbots often deliver incorrect information to people looking for answers. Why does this happen? It comes down to telling people what they want to hear.  

While many generative AI tools and chatbots have mastered sounding convincing and all-knowing, new research conducted by Princeton University shows that the people-pleasing nature of AI comes at a steep price. As these systems become more popular, they become more indifferent to the truth. 

AI models, like people, respond to incentives. Compare the problem of large language models producing inaccurate information to that of doctors being more likely to prescribe addictive painkillers when they’re evaluated based on how well they manage patients’ pain. An incentive to solve one problem (pain) led to another problem (overprescribing).


In the past few months, we’ve seen how AI can be biased and even cause psychosis. There was a lot of talk about AI “sycophancy,” the tendency of a chatbot to quickly flatter or agree with you, in connection with OpenAI’s GPT-4o model. But this particular phenomenon, which the researchers call “machine bullshit,” is different. 

“[N]either hallucination nor sycophancy fully capture the broad range of systematic untruthful behaviors commonly exhibited by LLMs,” the Princeton study reads. “For instance, outputs employing partial truths or ambiguous language — such as the paltering and weasel-word examples — represent neither hallucination nor sycophancy but closely align with the concept of bullshit.”



How machines learn to lie

To get a sense of how AI language models become crowd pleasers, we must understand how large language models are trained. 

There are three phases of training LLMs:

  • Pretraining, in which models learn from massive amounts of data collected from the internet, books or other sources.
  • Instruction fine-tuning, in which models are taught to respond to instructions or prompts.
  • Reinforcement learning from human feedback, in which they’re refined to produce responses closer to what people want or like.

The Princeton researchers found that the root of the AI misinformation tendency is the reinforcement learning from human feedback, or RLHF, phase. In the initial stages, AI models simply learn to predict statistically likely text chains from massive datasets. But then they’re fine-tuned to maximize user satisfaction, which means these models are essentially learning to generate responses that earn thumbs-up ratings from human evaluators. 

LLMs try to appease the user, creating a conflict between producing answers that people will rate highly and producing answers that are truthful and factual. 
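
To see why satisfaction-only rewards pull in that direction, consider a deliberately simplified sketch (not the Princeton setup): a reward model trained purely on thumbs-up-style ratings has no term for truthfulness, so a confident, agreeable answer can outscore a hedged but honest one. The marker lists and scores below are invented for illustration.

    def toy_reward_model(response: str) -> float:
        """Score a response the way a satisfaction-only evaluator might:
        reward confident, agreeable language and penalize hedging."""
        pleasing_markers = ["absolutely", "great question", "guaranteed", "definitely"]
        hedging_markers = ["i don't know", "uncertain", "it depends", "might not"]
        text = response.lower()
        score = 0.0
        for marker in pleasing_markers:
            if marker in text:
                score += 1.0
        for marker in hedging_markers:
            if marker in text:
                score -= 1.0
        return score

    candidates = [
        "Absolutely, this investment is guaranteed to pay off.",  # pleasing but untrue
        "I don't know; returns are uncertain and it depends on your risk tolerance.",  # honest
    ]

    # With no truthfulness term in the reward, the crowd-pleasing answer wins.
    print(max(candidates, key=toy_reward_model))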

Vincent Conitzer, a professor of computer science at Carnegie Mellon University who was not affiliated with the study, said companies want users to continue “enjoying” this technology and its answers, but that might not always be what’s good for us. 

“Historically, these systems have not been good at saying, ‘I just don’t know the answer,’ and when they don’t know the answer, they just make stuff up,” Conitzer said. “Kind of like a student on an exam that says, well, if I say I don’t know the answer, I’m certainly not getting any points for this question, so I might as well try something. The way these systems are rewarded or trained is somewhat similar.” 

The Princeton team developed a “bullshit index” to measure and compare an AI model’s internal confidence in a statement with what it actually tells users. When these two measures diverge significantly, it indicates the system is making claims independent of what it actually “believes” to be true to satisfy the user.
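
The exact formula belongs to the researchers, but the intuition can be sketched with a toy calculation: track the model’s internal confidence that each claim is true alongside whether it asserted the claim anyway, and measure how decoupled the two are. Everything below, numbers included, is hypothetical.

    from statistics import mean, pstdev

    # Hypothetical data: the model's internal confidence that each claim is true,
    # and whether it asserted the claim to the user anyway (1 = asserted).
    internal_confidence = [0.9, 0.2, 0.8, 0.1, 0.3]
    asserted = [1, 1, 1, 1, 1]

    def toy_divergence(confidence, claims):
        """Return a 0-to-1 score: 0 means assertions track internal belief,
        1 means they are fully decoupled from it."""
        # If either series has no variance, correlation is undefined;
        # treat the behaviors as fully decoupled.
        if pstdev(confidence) == 0 or pstdev(claims) == 0:
            return 1.0
        mc, ma = mean(confidence), mean(claims)
        cov = mean([(c - mc) * (a - ma) for c, a in zip(confidence, claims)])
        r = cov / (pstdev(confidence) * pstdev(claims))
        return 1.0 - abs(r)

    print(toy_divergence(internal_confidence, asserted))  # 1.0: asserts regardless of belief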

The team’s experiments revealed that after RLHF training, the index nearly doubled from 0.38 to close to 1.0. Simultaneously, user satisfaction increased by 48%. The models had learned to manipulate human evaluators rather than provide accurate information. In essence, the LLMs were “bullshitting,” and people preferred it.

Getting AI to be honest 

Jaime Fernández Fisac and his team at Princeton introduced this concept to describe how modern AI models skirt around the truth. Drawing from philosopher Harry Frankfurt’s influential essay “On Bullshit,” they use this term to distinguish this LLM behavior from honest mistakes and outright lies.

The Princeton researchers identified five distinct forms of this behavior:

  • Empty rhetoric: Flowery language that adds no substance to responses.
  • Weasel words: Vague qualifiers like “studies suggest” or “in some cases” that dodge firm statements.
  • Paltering: Using selective true statements to mislead, such as highlighting an investment’s “strong historical returns” while omitting high risks.
  • Unverified claims: Making assertions without evidence or credible support.
  • Sycophancy: Insincere flattery and agreement to please.

To address the issues of truth-indifferent AI, the research team developed a new method of training, “Reinforcement Learning from Hindsight Simulation,” which evaluates AI responses based on their long-term outcomes rather than immediate satisfaction. Instead of asking, “Does this answer make the user happy right now?” the system considers, “Will following this advice actually help the user achieve their goals?”

This approach takes into account the potential future consequences of the AI advice, a tricky prediction that the researchers addressed by using additional AI models to simulate likely outcomes. Early testing showed promising results, with user satisfaction and actual utility improving when systems are trained this way.
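
A rough sketch of that idea, using invented stand-ins rather than the researchers’ actual models, looks like this: score a candidate answer by a simulated downstream outcome instead of an immediate satisfaction proxy, and the overconfident answer stops winning.

    def immediate_satisfaction(answer: str) -> float:
        """Proxy for a thumbs-up-style rating: confident answers feel better now."""
        return 1.0 if "guaranteed" in answer else 0.4

    def simulate_outcome(answer: str) -> float:
        """Hypothetical stand-in for the simulator models: did following
        the advice actually help the user in the end?"""
        return 0.1 if "guaranteed" in answer else 0.9  # overconfident advice backfires

    candidates = [
        "This fund is guaranteed to double your money.",
        "Returns vary; diversify and expect ups and downs.",
    ]

    # Satisfaction-only scoring picks the overconfident answer...
    print(max(candidates, key=immediate_satisfaction))
    # ...while hindsight-style scoring picks the answer that actually helps.
    print(max(candidates, key=simulate_outcome))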

Conitzer said, however, that LLMs are likely to continue being flawed. Because these systems are trained by feeding them lots of text data, there’s no way to ensure that the answer they give makes sense and is accurate every time.

“It’s amazing that it works at all but it’s going to be flawed in some ways,” he said. “I don’t see any sort of definitive way that somebody in the next year or two … has this brilliant insight, and then it never gets anything wrong anymore.”

AI systems are becoming part of our daily lives so it will be key to understand how LLMs work. How do developers balance user satisfaction with truthfulness? What other domains might face similar trade-offs between short-term approval and long-term outcomes? And as these systems become more capable of sophisticated reasoning about human psychology, how do we ensure they use those abilities responsibly?








AI: The Church’s Response to the New Technological Revolution

Artificial intelligence (AI) is transforming everyday life, the economy, and culture at an unprecedented speed. Capable of processing vast amounts of data, mimicking human reasoning, learning, and making decisions, this technology is already part of our daily lives: from recommendations on Netflix and Amazon to medical diagnoses and virtual assistants.

But its impact goes far beyond convenience or productivity. Just as with the Industrial Revolution, the digital revolution raises social, ethical, and spiritual questions. The big question is: How can we ensure that AI serves the common good without compromising human dignity?

A change of era

Pope Francis has described artificial intelligence as a true “epochal change,” and his successor, Pope Leo XIV, has emphasized both its enormous potential and its risks. There is even talk of a future encyclical entitled Rerum Digitalium, inspired by the historic Rerum Novarum of 1891, to offer moral guidance in the face of the “new things” of our time.

The Vatican insists that AI should not replace human work, but rather enhance it. It must be used prudently and wisely, always putting people at the centre. The risks of inequalities, misinformation, job losses, and military uses of this technology necessitate clear limits and global regulations.

The social doctrine of the Church and AI

The Church proposes applying the four fundamental principles of social doctrine to artificial intelligence:

  • Dignity of the person: The human being should never be treated as a means, but as an end in itself.

  • Common good: AI must ensure that everyone has access to its benefits, without exclusions.

  • Solidarity: Technological development must serve the most needy in particular.

  • Subsidiarity: Problems should be solved at the level closest to the people.

Added to these are the values of truth, freedom, justice, and love, which guide any technological innovation towards authentic progress.

Opportunities and risks

Artificial intelligence already offers advances in medicine, education, science, and communication. It can help combat hunger and climate change, and even help convey the Gospel more effectively. However, it also poses risks:

  • Massive job losses due to automation.

  • Human relationships replaced by fictitious digital links.

  • Threats to privacy and security.

  • Use of AI in autonomous weapons or disinformation campaigns.

Therefore, the Church emphasizes that AI is not a person: it has no soul, consciousness, or the capacity to love. It is merely a tool, powerful but always dependent on the purposes assigned to it by humans.

A call to responsibility

The Antiqua et nova (2025) document reminds us that all technological progress must contribute to human dignity and the common good. Responsibility lies not only with governments or businesses, but also with each of us, in how we use these tools in our daily lives.

Artificial intelligence can be an engine of progress, but it can never be a substitute for humankind. No machine can experience love, forgiveness, mercy, or faith. Only in God can perfect intelligence and true happiness be found.


