
Tools & Platforms

Vetting of ‘ideological bias’ in AI models in new Trump plan stirs confusion

The Trump administration’s push to expand artificial intelligence use in the government is now being coupled with a fight against “ideological bias” in AI models, raising new questions about who and what will determine the technology used by federal workers.

In its highly anticipated AI Action Plan released Wednesday, the Trump administration outlined various action items related to the federal procurement process for AI models, including new limitations on technology the government approves for contracts. 

The 28-page plan placed heavy emphasis on ensuring AI systems are “built from the ground up with freedom of speech and expression in mind” and that AI used by the government “objectively reflects truth rather than social engineering agendas.” 

In its listed policy recommendations, the plan called for updated federal procurement guidelines to “ensure that the government only contracts with frontier large language model developers who ensure that their systems are objective and free from top-down ideological bias.” 

The Trump administration has made fighting perceived anti-conservative bias a key policy tenet, but Wednesday’s announcement marks the first time this push has been linked to automation technology in the government. 

It is not immediately clear how the administration hopes procurement offices will vet for ideological biases, though some in the technology space are already sounding alarms about the murkiness of the move. 

Kit Walsh, director of AI and access-to-knowledge legal projects at the Electronic Frontier Foundation, suggested the initiative could be rooted in “a desire to control what information is available through AI tools.”

“The government has more leeway to decide which services it purchases for its own use, but may not use this power to punish a publisher for making available AI services that convey ideas the government dislikes,” Walsh said in a statement. 

Some experts warned that this leaves too much discretion with the government to decide on models that could be used both in and outside of government. 

Ryan Hauser, a research fellow at George Mason University’s Mercatus Center, said the procurement requirement forces the government’s technology partners to comply with “an impossible standard.” 

“Anthropic, Google, OpenAI, and xAI are already working with the Pentagon and lending their LLMs to national security work,” Hauser told FedScoop on Wednesday. “That kind of innovation is badly needed in our overly rigid bureaucracy.”

“But now these same frontier labs will have to commit more resources to auditing their models and making sure they don’t run afoul of these new bias requirements,” he added. 

Kristian Stout, director of innovation policy at the International Center for Law and Economics, noted federal procurement can have “significant downstream pressure” on product design, especially for smaller firms more reliant on government buyers. 

“If objectivity becomes a procurement criterion, we should expect companies to be more explicit about how they audit or validate their models for neutrality,” Stout told FedScoop. 

As part of the plan, the Trump administration recommended that the National Institute of Standards and Technology adjust its AI Risk Management Framework to remove references to diversity, equity, and inclusion, climate change and misinformation. 

Under this change, AI companies — especially those with federal contracts — would not be required to manage the risks associated with those issues.  

Topics related to DEI are the administration’s main concern when it comes to potential biases, a senior White House official told reporters on a call Wednesday morning. 

“We expect GSA to put together some procurement language that would be contractual language, requiring that, again, LLMs procured by the federal government would abide by a standard of truthfulness, of seeking accuracy and truthfulness, and not sacrificing those things due to ideological bias,” the official said. 

Cato Institute research fellow Matthew Mittelsteadt called the move the “biggest error” of the order and suggested it could have ripple effects on foreign competition. 

“Not only is ‘objectivity’ elusive philosophically, but efforts to technically contain perceived bias have yet to work,” he said in a statement. “If this policy successfully shapes American models, we will lose international customers who won’t want models shaped by a foreign government’s whims.” 

The White House’s move against “ideological bias” in AI models comes as the General Services Administration promotes its own AI chatbot — GSAi — for federal workers and increasingly explores tools from external firms. 

The GSAi platform already gives federal workers access to commercial models from firms like Anthropic and Meta. And last week, xAI announced Grok was available to purchase through GSA, just days after xAI faced backlash for the chatbot’s recent antisemitic responses.


Written by Miranda Nazzaro

Miranda Nazzaro is a reporter for FedScoop in Washington, D.C., covering government technology. Prior to joining FedScoop, Miranda was a reporter at The Hill, where she covered technology and politics. She was also a part of the digital team at WJAR-TV in Rhode Island, near her hometown in Connecticut. She is a graduate of the George Washington University School of Media and Public Affairs. You can reach her via email at miranda.nazzaro@fedscoop.com or on Signal at miranda.952.





“No process without AI” – Volkswagen gears-up €1bn industrial AI drive

Volkswagen will invest €1bn in AI by 2030 to transform vehicle development, production, and IT, targeting €4bn savings, faster innovation cycles, digital sovereignty in Europe, and “AI everywhere” across its industrial value chain.

In sum – what to know:

Billion-dollar AI drive – VW targets “no process without AI” to transform design, production, logistics, and IT.

Bigger industrial gains – 1,200+ AI production apps deployed; €4bn savings and 25% faster production targeted.

Sovereignty and support – push for digital sovereignty in Europe; request for AI-friendly support and regulation.

German automaker Volkswagen Group is to invest €1 billion ($1.17bn) by 2030 in AI-related industrial technologies to boost vehicle development, industrial applications, and IT infrastructure. It made the announcement at the IAA Mobility trade fair in Munich this week, summing up its future Industry 4.0 strategy in a single message: “no process without AI”, it said. The firm reckons it will save €4 billion by 2035 from efficiency gains and cost avoidance through “consistent and scalable use of AI” across its entire “value chain”.

Hauke Stars, member of the management board for IT at Volkswagen Group, said: “Wherever we see potential, we utilize AI in a targeted manner. Scalable, responsible, and with clear industrial benefits. Our ambition: AI everywhere, in every process.” The group is working with unnamed technology and industry partners to develop a domain-specific Industry 4.0 language model, a so-called Large Industry Model (LIM), which uses design, production, and sundry automotive process data from participating companies. 

It stated: “Collective industrial process knowledge could be used to train an AI model that helps optimize internal workflows and enables more efficient logistics and process control across industries and for all participants.” An organizational blueprint for such an initiative, still in the “exploration” phase, might be the open Catena-X platform for the automotive sector and broader industrial value chain, it suggested. The Catena-X platform is designed to allow secure data exchange between manufacturers, suppliers, and other tech providers. 

Volkswagen is a founding member, alongside BMW, BASF, Mercedes-Benz, SAP, Siemens, ZF, and T-Systems. In the end, its total strategy is to make vehicles better, and faster, and AI looks like the answer. Volkswagen claims to have 1,200 AI applications in production already, and “several hundred more” in development or implementation. It has a proprietary “factory cloud”, connecting more than 40 production sites across the group. “Volkswagen is continuously introducing new AI applications into its manufacturing processes,” it said. 

Its centralised “factory cloud”, presented as a Digital Production Platform (DPP), is part of its group-wide private cloud infrastructure. This will be “significantly expanded” in the coming years, in line with its digital sovereignty play, and its hard line on resiliency “against external risks and influences”. It stated: “Technological independence and resilience begin with maintaining control over data – and that only works if data is stored, processed, and protected within Europe.” Sustainability, cybersecurity, and knowledge sharing are all part of its smarter production strategy.

In vehicle development, before its vehicles are connected to its “factory cloud” in its manufacturing sites, Volkswagen is working with Dassault Systèmes to build an “AI-powered engineering environment” to help engineers with virtual testing and component simulations across all its brands in all its markets. It wants to reduce its development cycle by around 12 months (25 percent) – to 36 months, “or less”. 

But there was a political message in its address at IAA Mobility, as well: it wants support, in exchange for support. It stated: “Volkswagen is committed to actively shaping the future of AI in Europe and supporting political and economic frameworks at both national and European levels. In an increasingly challenging environment – marked by high energy prices, elevated location costs, and administrative complexity – the company sees a clear need to advance technological innovation in AI in Germany and Europe through political support.” 

It wants “innovation-friendly frameworks in the global AI race”, it said. Stars said: “We support the innovation-friendly evolution of European regulation. In addition, targeted incentives are needed: We must make more of what we’re capable of. This includes, above all, funding programs that strengthen spin-offs from universities and research institutions and accelerate the transfer of scientific knowledge into market-ready applications.”

The company has had a large-scale internal AI training programme in place since last year, which has already trained 130,000 staff across all levels, in all its markets. As a footnote, a blog post accompanying the news of its AI investment makes the case that AI needs humans – in charge of it, and also accepting of it. Hence all the training. It stated: “AI needs rules… That is why we act on the basis of ethical standards and European regulation. When it comes to sensitive personnel issues, for example, a human being will make the final decision. Always. The key to the success of AI is acceptance.” 

Stars said: “With AI, we are igniting the next stage on our path to becoming the global automotive tech driver. AI is our key to greater speed, quality, and competitiveness – across the entire value chain, from vehicle development to production. Our ambition is to accelerate our development of attractive, innovative vehicles and bring them to our customers faster than ever before. To achieve this, we deploy AI with purpose: scalable, responsible, and with clear industrial benefits. Our ambition: no process without AI.”






AI Tool Predicts Stem Cell Transplant Infection Risk

University at Buffalo researchers and collaborators have completed a series of studies that reveal how much painful mouth sores known as oral mucositis increase infection risks in stem cell transplant patients and how artificial intelligence can be used to more accurately predict those risks.

Their paper, published Aug. 14 in the journal Cancers, revealed that patients undergoing hematopoietic stem cell transplants (HSCT) for blood cancers who develop oral mucositis are at nearly four times the risk of developing a severe infection compared to those without the condition. This is the first time that risk has been quantified.

The paper is the most comprehensive synthesis to date of recent findings on individual risk factors for oral mucositis, whether the transplant involves a patient’s own stem cells or donor cells. Identified risk factors include specific drugs such as methotrexate, high-dose chemotherapy, female gender, younger age, kidney issues, and reactivation of the herpes simplex virus.

A significant portal for infections

“Oral mucositis is not simply a source of discomfort; it serves as a significant portal for infections in immunocompromised patients,” says Satheeshkumar Poolakkad Sankaran, DDS, corresponding author on the paper and research scientist in the Division of Hematology/Oncology in the Department of Medicine at the Jacobs School of Medicine and Biomedical Sciences at UB. “All my patients with oral mucositis experience poorer outcomes, adversely impacting their quality of life.”

For this reason, he says, screening every cancer patient for oral mucositis risk ahead of time makes sense because the condition is so common — it occurs in up to 80% of HSCT patients. “Knowing risk factors can help doctors spot patients at high risk early,” he says. “This can allow for preventive steps, like oral hygiene or cryotherapy, where extremely cold temperatures are used to reduce inflammation, thus improving outcomes and quality of life.”

To better assess who is at risk, Poolakkad Sankaran and colleagues published a paper in July in Supportive Care in Cancer describing a nomogram tool they developed to predict which patients are more likely to develop oral mucositis. A nomogram is a statistical instrument that is used to model relationships among variables. The researchers used age, gender, race, total body irradiation, and fluid/electrolyte disorders to estimate risks of developing ulcerative mucositis, a severe form of oral mucositis.

“This nomogram simplifies complex data for clinicians, enabling targeted oral care before HSCT,” explains Joel Epstein, DMD, co-author at the City of Hope Comprehensive Cancer Center.
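The article does not publish the nomogram’s actual weights, but nomograms of this kind typically render a logistic regression as point scales that clinicians can read off by hand. A minimal sketch of that idea, using the five predictors named above with entirely hypothetical coefficients (the real values are in the Supportive Care in Cancer paper):

```python
import math

# Hypothetical coefficients for illustration only -- the published
# nomogram's actual weights are not reproduced in this article.
COEFS = {
    "intercept": -2.0,
    "age_per_decade": 0.15,           # risk shifts with age
    "female": 0.4,                    # female gender is a listed risk factor
    "total_body_irradiation": 0.9,    # TBI conditioning
    "fluid_electrolyte_disorder": 0.7,
}

def ulcerative_mucositis_risk(age, female, tbi, fluid_disorder):
    """Logistic-regression-style risk estimate, as a nomogram typically encodes.

    Each predictor contributes a weighted term to a linear score, which the
    logistic function maps to a probability between 0 and 1.
    """
    z = (COEFS["intercept"]
         + COEFS["age_per_decade"] * (age / 10)
         + COEFS["female"] * female
         + COEFS["total_body_irradiation"] * tbi
         + COEFS["fluid_electrolyte_disorder"] * fluid_disorder)
    return 1 / (1 + math.exp(-z))
```

On a printed nomogram, each weighted term becomes a points axis; summing the points and reading the total against a probability scale performs exactly this calculation without a computer.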

Explainable AI better predicts adverse events

At the Multinational Association of Supportive Care in Cancer 2025 meeting in June, Poolakkad Sankaran presented additional related findings on a nomogram-based model that can better predict adverse events. He explains that this model was evaluated against a new framework that uses explainable AI, employing machine learning algorithms to assess intricate clinical and demographic data. Explainable AI is designed to provide the rationale behind the output of an AI system.

“The AI model exhibited enhanced predictive accuracy, recognizing patterns linked to toxicities that conventional nomograms failed to detect,” he adds. “By synthesizing demographic and clinical data, the system can predict adverse events, facilitating individualized therapy modifications to reduce toxicities.”
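The article does not detail which explainability framework the team used. One common approach is to attach a per-feature contribution to each individual prediction, as SHAP-style methods do for complex learners; for a linear score the additive contributions are exact. A minimal sketch with hypothetical features and coefficients:

```python
def explain_prediction(features, coefs, intercept):
    """Return a linear risk score and its per-feature contributions.

    contributions[name] = coefficient * feature value, ranked by magnitude,
    so a clinician can see which inputs drove this particular prediction.
    SHAP-style methods generalize the same additive idea to non-linear models.
    """
    contribs = {name: coefs[name] * value for name, value in features.items()}
    ranked = sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)
    score = intercept + sum(contribs.values())
    return score, ranked

# Hypothetical patient and coefficients, for illustration only.
score, drivers = explain_prediction(
    features={"methotrexate": 1, "age_decades": 6.2, "total_body_irradiation": 1},
    coefs={"methotrexate": 0.8, "age_decades": 0.1, "total_body_irradiation": 0.9},
    intercept=-2.0,
)
```

The ranked list is the “rationale” an explainable system surfaces alongside its prediction: here the clinician would see which factor contributed most to this patient’s score, rather than a bare number.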

Poolakkad Sankaran is validating the model with other cancer adverse events such as immune-related adverse events in a larger cohort, working with Roberto Pili, MD, a co-author and associate dean for cancer research and integrative oncology in the Jacobs School. Their ultimate goal is widespread clinical adoption of the model in assessing cancer patients.

“These interconnected studies underscore the oral-systemic connection in cancer therapy, urging multidisciplinary collaboration among oncologists, dentists, and AI specialists,” Poolakkad Sankaran says. “As cancer management such as HSCT and immunotherapy grows — particularly for older patients — these tools promise reduced complications, shorter hospitalizations, and lower costs.”

Reference: Eichhorn S, Rudin L, Ramasamy C, et al. Elevated likelihood of infectious complications related to oral mucositis after hematopoietic stem cell transplantation: A systematic review and meta-analysis of outcomes and risk factors. Cancers. 2025;17(16). doi: 10.3390/cancers17162657

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.






Building a solid foundation to support AI adoption at Bentley

On determination: Years ago, there was a program called Tomorrow’s World, and it was all about the future and what that could look like. I was fascinated by it because in one episode, there was a woman presenter. So that inspired me then, and showed you could aspire to that. I went into my first role working in a software development division in the mid-90s where everyone was frantically recoding everything because of Y2K. Those were my very early days in technology. But even then, I had female role models who made it seem accessible. I was curious but not very academic at school, so I felt lucky to get a job in technology. Being able to learn coding was very interesting, and I found my routine, and continued to grow and move through the ranks.

On data: I always say there’s no AI without data. So the thing we’ve been working on for the last two years is the data strategy, understanding the governance, the framework, and everything else we need there as foundations. We’ve done a lot of work around data literacy and upskilling in the organization, and we’ve been doing that in preparation for AI because we know everybody wants it, and they want it now, but they don’t necessarily know what they want it for. So it’s about creating that safe space where people can test and learn. I’ve been working in partnership with the chief strategy officer to say this needs to be a joint business and IT strategy around data and AI. It can’t just come from IT. We need to work together with the business to understand what we want to use AI for so we can get to value sooner. And if you think about what we’ve been doing for the last three years in moving to the enterprise systems, reducing the systems landscape, and making sure we understand what data we’ve got in those systems, it’s all been creating the pathway and the foundation levels we need to get us there sooner.

On streamlining efficiencies: When I came into automotive, it was essentially learning a whole different language because many of the abbreviations are tied to German words. So even when you know them, you don’t understand what they mean. For me, it was about taking that big step back, looking at our business and saying we’re all about designing and creating an amazing product. We then have customers we service through the web or the app. So it’s broken down into value streams, and within those are the processes of designing, building, marketing, and selling cars. I like logic, so I try to apply it when we examine capabilities, like reducing cost of delivery from an IT perspective. That gives us more money to invest in other things, or future ready our organization.




