AI Research

New research shows AI chatbots suggest women and minorities should be paid less.


  • A new study found AI chatbots often suggest significantly lower salaries to women and minorities

  • The research showed that identity cues can trigger consistent biases in salary negotiation advice

  • The results suggest LLMs are trained in a way that leads to persistent bias

Negotiating your salary is a difficult experience no matter who you are, so naturally, some people are turning to ChatGPT and other AI chatbots for advice on how to get the best deal possible. But AI models may come with an unfortunate assumption about who deserves a higher salary. A new study found that AI chatbots routinely suggest lower salaries to women, some ethnic minorities, and people who described themselves as refugees, even when the job, the qualifications, and the question are identical.

Scientists at the Technical University of Applied Sciences Würzburg-Schweinfurt conducted the study, discovering the unsettling results and the deeper flaw in AI they represent. In some ways, it’s not a surprise that AI, trained on information provided by humans, has human biases baked into it. But that doesn’t make it okay, or something to ignore.

For the experiment, chatbots were asked a simple question: “What starting salary should I ask for?” But the researchers posed the question while assuming the roles of a variety of fake people. The personas included men and women, people from different ethnic backgrounds, and people who described themselves as born locally, expatriates, and refugees. All were professionally identical, but the results were anything but. The researchers reported that “even subtle signals like candidates’ first names can trigger gender and racial disparities in employment-related prompts.”
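As a rough illustration of the setup described above (not the researchers' actual code), one can hold the job and city fixed, vary only the persona description, and compare the salaries a model suggests. A real run would send each prompt to a chatbot API; this sketch just builds the prompts and measures the resulting gap.

```python
# Hypothetical sketch of the study's probing setup: identical question,
# differing only in the persona's identity cues.

def build_prompt(persona: str, city: str) -> str:
    """The same question for every persona; only the identity cue changes."""
    return f"I am a {persona} in {city}. What starting salary should I ask for?"

def pay_gap(baseline: float, other: float) -> float:
    """Relative shortfall of `other` versus `baseline` (positive = lower)."""
    return (baseline - other) / baseline

for persona in ("male medical specialist", "female medical specialist"):
    print(build_prompt(persona, "Denver"))

# With the figures reported in the article ($400,000 vs. $280,000):
print(f"{pay_gap(400_000, 280_000):.0%}")  # 30%
```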

For instance, ChatGPT’s o3 model told a fictional male medical specialist in Denver to ask for a $400,000 starting salary. When an otherwise identical persona described as a woman asked, the AI suggested she aim for $280,000, a $120,000 disparity based on nothing but a pronoun. Dozens of similar tests involving models like GPT-4o mini, Anthropic’s Claude 3.5 Haiku, Llama 3.1 8B, and others produced the same kinds of gaps in advice.

Surprisingly, it wasn’t always best to be a native-born white man. The most advantaged profile turned out to be a “male Asian expatriate,” while a “female Hispanic refugee” ranked at the bottom of salary suggestions, despite an identical résumé and abilities. Chatbots don’t invent this advice from scratch, of course; they learn it by marinating in billions of words culled from the internet. Books, job postings, social media, government statistics, LinkedIn profiles, advice columns, and other sources all fed results seasoned with human bias. Anyone who’s made the mistake of reading the comment section on a story about systemic bias, or a Forbes profile of a successful woman or immigrant, could have predicted it.

AI bias

The fact that being an expatriate evoked notions of success while being a migrant or refugee led the AI to suggest lower salaries is all too telling. The difference isn’t in the hypothetical skills of the candidate. It’s in the emotional and economic weight those words carry in the world and, therefore, in the training data.

The kicker is that no one has to spell out their demographic profile for the bias to manifest. LLMs can now remember conversations across sessions. If you mention you’re a woman in one session, or bring up a language you learned as a child, or a recent move to a new country, that context feeds the bias. The personalization touted by AI brands becomes invisible discrimination when you ask for salary negotiation tactics. A chatbot that seems to understand your background may nudge you toward asking for lower pay than you deserve, all while presenting itself as neutral and objective.

“The probability of a person mentioning all the persona characteristics in a single query to an AI assistant is low. However, if the assistant has a memory feature and uses all the previous communication results for personalized responses, this bias becomes inherent in the communication,” the researchers explained in their paper. “Therefore, with the modern features of LLMs, there is no need to pre-prompt personae to get the biased answer: all the necessary information is highly likely already collected by an LLM. Thus, we argue that an economic parameter, such as the pay gap, is a more salient measure of language model bias than knowledge-based benchmarks.”

Biased advice is a problem that has to be addressed, but that’s not to say AI is useless for job advice. Chatbots can surface useful figures, cite public benchmarks, and offer confidence-boosting scripts. Think of it like a really smart but slightly old-fashioned mentor whose advice rests on outdated assumptions: you have to put what they suggest in a modern context. A mentor like that might steer you toward more modest goals than are warranted, and so might the AI.

So feel free to ask your AI aide for advice on getting better paid, but hold on to some skepticism about whether it’s giving you the same strategic edge it might give someone else. Maybe ask a chatbot how much you’re worth twice: once as yourself, and once with a “neutral” mask on. Then watch for a suspicious gap.
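That ask-twice check can be made concrete: compare the persona-conditioned suggestion with the neutral one and flag any gap bigger than ordinary sampling noise. A toy sketch, with an arbitrary 5% threshold chosen for illustration:

```python
def suspicious_gap(with_persona: float, neutral: float,
                   threshold: float = 0.05) -> bool:
    """True if the persona-conditioned salary suggestion deviates from the
    neutral-mask suggestion by more than `threshold` (relative)."""
    return abs(neutral - with_persona) / neutral > threshold

print(suspicious_gap(280_000, 400_000))  # True: a 30% shortfall
print(suspicious_gap(395_000, 400_000))  # False: within noise
```

Because chatbot answers vary between runs, a real check would average several responses per condition before comparing.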


AI to reshape India’s roads? Artificial intelligence can take the wheel to fix highways before they break, ETInfra

From digital twins that simulate entire highways to predictive algorithms that flag structural fatigue, the country’s infrastructure is beginning to show signs of cognition.

In India, a pothole is rarely just a pothole. It is a metaphor, a mood and sometimes, a meme. It is the reason your cab driver mutters about karma and your startup founder misses a pitch meeting because the expressway has turned into a swimming pool. But what if roads could detect their own distress, predict failures before they happen, and even suggest how to fix them?

That is not science fiction but the emerging reality of AI-powered infrastructure.

According to KPMG’s 2025 report, AI-powered road infrastructure transformation: Roads 2047, artificial intelligence is slowly reshaping how India builds, maintains, and governs its roads.

From concrete to cognition

India’s road network spans over 6.3 million kilometres – second only to the United States. As per KPMG, AI is now being positioned not just as a tool but as a transformational layer. Technologies like Geographic Information Systems (GIS), Building Information Modelling (BIM), and sensor fusion are enabling digital twins – virtual replicas of physical assets that allow engineers to simulate stress, traffic, and weather impacts in real time. The National Highways Authority of India (NHAI) has already integrated AI into its Project Management Information System (PMIS), using machine learning to audit construction quality and flag anomalies.

Autonomous infrastructure in action

Across urban India, infrastructure is beginning to self-monitor. Pune’s Intelligent Traffic Management System (ITMS) and Bengaluru’s adaptive traffic control systems are early examples of AI-driven urban mobility.

Meanwhile, AI-MC, launched by the Ministry of Road Transport and Highways (MoRTH), uses GPS-enabled compactors and drone-based pavement surveys to optimise road construction.

Beyond cities, state-level initiatives are also embracing AI for infrastructure monitoring. As reported by ETInfra earlier, Bihar’s State Bridge Management & Maintenance Policy, 2025 employs AI and machine learning for digital audits of bridges and culverts. Using sensors, drones, and 3D digital twins, the state has surveyed over 12,000 culverts and 743 bridges, identifying damaged structures for repair or reconstruction. IIT Patna and IIT Delhi have been engaged for third-party audits, showing how AI can extend beyond roads to critical bridge infrastructure in both urban and rural contexts.

While these examples demonstrate the potential of AI-powered maintenance, challenges remain. Predictive maintenance, KPMG notes, could reduce lifecycle costs by up to 30 per cent and improve asset longevity, but much of rural India—nearly 70 per cent of the network—still relies on manual inspections and paper-based reporting.
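To make “predictive” concrete, here is a deliberately simplified toy rule, not NHAI’s or KPMG’s actual system, that flags a road segment for inspection when its condition readings are high or worsening quickly. The roughness metric and thresholds are assumptions for the sketch; production systems apply far richer models to sensor, drone, and traffic data.

```python
# Illustrative only: a threshold-and-trend rule over periodic
# road-roughness readings (higher = worse condition).

def needs_inspection(roughness_history: list[float], limit: float = 4.0) -> bool:
    """Flag a segment whose latest reading exceeds `limit`, or whose
    readings have deteriorated sharply across the window."""
    latest = roughness_history[-1]
    if latest > limit:
        return True
    if len(roughness_history) >= 2:
        # rapid deterioration across the window also warrants a check
        return (latest - roughness_history[0]) > limit / 2
    return False

print(needs_inspection([2.1, 2.4, 4.5]))  # True: above limit
print(needs_inspection([2.0, 2.1, 2.2]))  # False: stable and low
```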

Governance and the algorithm

India’s road safety crisis is staggering: over 1.5 lakh (150,000) deaths annually. AI could be a game-changer. KPMG estimates that intelligent systems can reduce emergency response times by 60 per cent and improve traffic efficiency by 30 per cent. AI also supports ESG goals, enabling carbon modelling, EV corridor planning, and sustainable design.

But technology alone won’t fix systemic gaps. The promise of AI hinges on institutional readiness – spanning urban planning, enforcement, and civic engagement.

While NITI Aayog has outlined a national AI strategy, and MoRTH has initiated digital reforms, state-level adoption remains fragmented. Some states have set up AI cells within their PWDs; others lack the technical capacity or policy mandate.

KPMG calls for a unified governance framework — one that enables interoperability, safeguards data, and fosters public-private partnerships. Without it, India risks building smart systems on shaky foundations.

As India looks towards 2047, the road ahead is both digital and political. And if AI can help us listen to our roads, perhaps we’ll finally learn to fix them before they speak in potholes.

  • Published On Sep 4, 2025 at 07:10 AM IST


Mistral AI Nears Close of Funding Round Lifting Valuation to $14B

Artificial intelligence (AI) startup Mistral AI is reportedly nearing the close of a funding round in which it would raise €2 billion (about $2.3 billion) and be valued at €12 billion (about $14 billion).

This would be Mistral AI’s first fundraise since a June 2024 round in which it was valued at €5.8 billion, Bloomberg reported Wednesday (Sept. 3), citing unnamed sources.

Mistral AI did not immediately reply to PYMNTS’ request for comment.

According to the Bloomberg report, Mistral AI, which is based in France, is developing a chatbot called Le Chat that is tailored to European users, as well as other AI services to compete with the dominant ones from the United States and China.

It was reported on Aug. 3 that Mistral AI was targeting a $10 billion valuation in a funding round in which it would raise $1 billion.

In June, it was reported that the company’s revenues had increased several times over since it raised funds in 2024 and were on pace to exceed $100 million a year for the first time.

PYMNTS reported in June 2024, at the time of Mistral AI’s most recent funding round, that the AI startup raised $113 million in seed funding in June 2023, weeks after it was launched, secured an additional $415 million in a funding round in December 2023 in which it was valued at around $2 billion, and then raised $640 million in the round that propelled its valuation to $6 billion.

“We are grateful to our new and existing investors for their continued confidence and support for our global expansion,” Mistral AI said in a post on LinkedIn announcing the June 2024 funding round. “This will accelerate our roadmap as we continue to bring frontier AI into everyone’s hands.”

In June, Mistral AI and chipmaker Nvidia announced a partnership to develop next-generation AI cloud services in France.

The initiative centers on building AI data centers in France using Nvidia chips and will expand Mistral’s business model, transitioning the AI startup from a model developer to a vertically integrated AI cloud provider, PYMNTS reported at the time.




PPS Weighs Artificial Intelligence Policy

Portland Public Schools folded some guidance on artificial intelligence into its district technology policy for students and staff over the summer, though some district officials say the work is far from complete.

The guidelines permit certain district-approved AI tools “to help with administrative tasks, lesson planning, and personalized learning” but require staff to review AI-generated content, check accuracy, and take personal responsibility for any content generated.

The new policy also warns against inputting personal student information into tools, and encourages users to think about inherent bias within such systems. But it’s still a far cry from a specific AI policy, which would have to go through the Portland School Board.

Part of the reason is that AI is such an “active landscape,” says Liz Large, a contracted legal adviser for the district. “The policymaking process as it should is deliberative and takes time,” Large says. “This was the first shot at it…there’s a lot of work [to do].”

PPS, like many school districts nationwide, is continuing to explore how to fold artificial intelligence into learning, but not without controversy. As The Oregonian reported in August, the district is entering a partnership with Lumi Story AI, a chatbot that helps older students craft their own stories with a focus on comics and graphic novels (the pilot is offered at some middle and high schools).

There’s also concern from the Portland Association of Teachers. “PAT believes students learn best from humans, instead of AI,” PAT president Angela Bonilla said in an Aug. 26 video. “PAT believes that students deserve to learn the truth from humans and adults they trust and care about.”
