Tools & Platforms
AI is Creating a New Gender Divide

The AI revolution isn’t ahead of us; it’s here. But for a technology that’s been heralded as the future, it risks bringing with it problems from the past.
Women are adopting generative AI technology at a slower rate than men—data from the Survey of Consumer Expectations found that 50 percent of men are using generative AI tools, compared to 37 percent of women. Further research from Harvard Business School Associate Professor Rembrand Koning found that women are adopting AI tools at a 25 percent lower rate than men.
So, what’s behind women’s hesitation to adopt AI?
Whether it’s deepfake pornography, discrimination from AI hiring technology, or forms of digital violence online, research and data suggest that women have a fundamentally different relationship to AI than men do. The result? An AI gender gap, where women are being left behind in the technological revolution.
Newsweek spoke to the experts to find out more about how AI is entrenching misogyny and creating a new gender divide.
What Is the AI Gender Gap?
A 2025 survey from the National Organization for Women (NOW) and Incogni found that 25 percent of women had experienced harassment enabled by technology, including AI-generated deepfake pornography. A study from the Berkeley Haas Center for Equity, Gender, and Leadership, meanwhile, analyzed 133 AI systems from different industries. It found that 44 percent showed gender bias.
Beyond the studies and the data, what is the actual impact of this gender disparity on women?
Enter: the AI gender gap.
Professor Ganna Pogrebna, Lead for Behavioral Data Science at the Alan Turing Institute and Executive Director at the AI and Cyber Futures Institute, told Newsweek over email, “There is mounting evidence that early negative experiences with AI systems—particularly those involving misogyny, sexualization, or coercion—can have profound psychological, behavioral, and societal consequences for women and girls.”
“These harms are not abstract; they are embodied in concrete experiences, amplified through algorithmic systems,” Pogrebna said.
And AI-inflicted harms begin at a young age. A 2024 report from the Center for Democracy & Technology found that generative AI technologies are worsening the spread of non-consensual intimate imagery in schools and that female students are most often the ones depicted in this deepfake imagery.
So, what might be the long-term impacts on women and girls if they are having negative or traumatic experiences with AI?
Laura Bates, activist and author of The New Age of Sexism: How AI and Emerging Technologies Are Reinventing Misogyny, told Newsweek, “I think we will see a widening gap in terms of women’s access to and uptake of new technologies.”
Bates said that this will include AI and that this will have “a devastating impact on everything from women’s job prospects and careers to their involvement in further developments in the sector, which will, in turn, continue to intensify the problem because it will mean that new tools are tailored towards men as the majority of users.”
Asked if there is a risk that these negative experiences could lead to disengagement with future technologies, putting women on the back foot, Bates said, “Absolutely.”
“We already see how differently men and women use and experience existing forms of technology,” Bates said. Online harassment affects both men and women; the Pew Research Center found in 2021 that 41 percent of Americans had experienced some form of it. But that harassment takes different shapes: 33 percent of women under 35 reported experiencing sexual harassment online, compared to 11 percent of men, a share that doubled between 2017 and 2021.
“Women’s use of tech is mediated by an entirely different online experience than men’s, marked by abuse, harassment, doxing, threats, stalking and other forms of tech facilitated gender-based violence,” Bates said, adding, “It is inevitable that the barrage of abuse women and girls face online, combined with the gender bias inherently baked into many emerging tools, are going to have a chilling effect in terms of women’s uptake and participation in new forms of tech.”
Pogrebna echoed this: “These traumatic experiences can embed deep mistrust in AI systems and digital institutions.”

Newsweek also spoke with Dr. Sarah Myers West, co-executive director at the AI Now Institute. In a phone call with Newsweek, she said, “There are disproportionate patterns of reinforcing inequality in ways that lead to harm for women and girls and people of other minorities.”
West pointed to “the way AI is intermediating access to our resources or our life chances,” and noted “the AI that gets used, say, in a hiring process and reinforces historical employment-based discrimination.” West said that this is affecting people in ways that are “profoundly consequential.”
In 2018, Reuters reported that Amazon had scrapped an AI recruiting tool that was showing bias against women. In 2024, UNESCO’s research highlighted that gender bias in AI hiring tools may penalize women through the reproduction of regressive stereotypes.
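The mechanism behind such failures is simple enough to demonstrate. Below is a minimal, hypothetical sketch (Python with scikit-learn and fully synthetic data, not Amazon's system or any vendor's) of how a model trained on historically biased hiring decisions learns to penalize a protected attribute:

```python
# Hypothetical sketch: a screening model trained on biased historical
# hiring decisions absorbs the bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # true qualification signal
gender = rng.integers(0, 2, size=n)   # synthetic flag: 1 = woman
# Historical labels: past decisions rewarded skill but also penalized women.
hired = (skill - 0.8 * gender + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)
print(model.coef_)  # the gender coefficient comes out strongly negative:
                    # the model has learned, and will reproduce, the bias
```

Dropping the gender column does not fix this in practice, because other features can act as proxies for it; Amazon's tool reportedly penalized résumés containing the word "women's."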
Asked if negative experiences with AI in hiring scenarios could lead to a sense of mistrust and disengagement, West said, “I think rightly so, if it’s being used in that way.”
A Problem from the Past, Reinvented for the Future
AI might be increasingly prevalent, but the discourse over it is increasingly polarized. A 2025 survey from YouGov found that one-third of Americans are concerned about the possibility that AI will cause the end of the human race. Additionally, the survey found that Americans are more likely to say that AI will have a negative effect on society than on their own life and that most Americans don’t trust AI to make ethical decisions.
But as these apocalyptic alarms sound, concerns over how AI is further encoding misogyny into the fabric of society fall through the cracks. Back in 2024, a report from the UN said that AI is mirroring gendered bias in society, and gender disparity is already pronounced in the tech industry, with the World Economic Forum reporting in 2023 that women account for only 29 percent of science, technology, engineering and math (STEM) workers.
“There is a growing body of evidence showing that AI systems reflect and amplify biases present in the datasets on which they are trained. This includes gender biases, sexualization of women, and reinforcement of harmful stereotypes,” Pogrebna said. She added that large language models trained on “internet corpora” are risking “encoding toxic gender stereotypes and normalizing misogynistic narratives.”
A 2024 report from UNESCO found that “AI-based systems often perpetuate (and even scale and amplify) human, structural and social biases,” producing gender bias, as well as homophobia and racial stereotyping.
Newsweek spoke with Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute at the University of Oxford in the United Kingdom, about this.
“If AI is somewhat a mirror of society,” Wachter said, “it kind of indirectly shows you where your place in the world is.”
Wachter then pointed to examples of gender bias in AI, including bias in image generators and text prediction, where AI is more likely to assume a male gender for professions like doctors and a female gender for professions like nurses. A 2024 study in JAMA Network Open found that when generating images of physicians, AI text-to-image generators are more likely to depict people who are white and male.
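Findings like these are straightforward to reproduce. As a minimal sketch (assuming the open-source Hugging Face transformers library and the bert-base-uncased model, not the exact setups used in the studies above), a fill-mask probe surfaces the association directly:

```python
# Minimal bias probe: ask a masked language model to fill in a pronoun.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for sentence in ("The doctor said [MASK] would be late.",
                 "The nurse said [MASK] would be late."):
    top = unmasker(sentence, top_k=3)
    print(sentence, [(p["token_str"], round(p["score"], 3)) for p in top])
# Probes of this kind typically rank "he" first for "doctor" and "she"
# first for "nurse," reflecting associations absorbed from training text.
```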
“It’s a tacit kind of reminder that certain spots are reserved for you and others are not,” Wachter said. “We have to think about what it does to young women and girls.”
“How can we praise the technology to be so perfect when it is so problematic for a large portion of our society, right? And just ask the question, who is this technology actually good for? And who does it actually benefit?” Wachter said. She added, “It gives people a very early idea of what your role is supposed to look like in society.”
Pointing to the issues with AI, Wachter said, “We would never do this with a car, right? We would never just say, you go and drive. I know it’s failing all the time.”
“What does it say about the value of being a woman?” she said. “If it’s okay that this injury will happen, we know it will happen, but we’re going to bring it on the market anyway, and we’re going to fix it later.”
Newsweek also spoke with Dr. Kanta Dihal, a lecturer in science communication at Imperial College London, who shares some of Wachter’s concerns. “There is so much that regularly goes wrong around the topics of women and technology in the broader sense,” Dihal said.
In terms of the relationship women have with AI, Dihal said there is a feeling of “Is this for me, or is this meant to keep me in my place? Or make things worse for me? Am I the kind of person that the creators of this technology had in mind when they designed it?”
“So many different career paths and our schools as well are indeed introducing AI related technologies that if you don’t want to use them, you’re already sometimes on the back foot,” Dihal said, adding, “It’s going to be both a matter of being disadvantaged in school and career progression.”

Looking Ahead
So, what would inclusion in AI look like?
Bates told Newsweek that we need to see government regulation of AI technology “at the point they are rolled out to public or corporate use” in order to ensure that safety and ethics standards are met before implementation, “not after women and marginalized communities have already faced significant discrimination.”
She added, “With AI technologies poised to become inextricably intertwined with almost every aspect of our personal and professional lives, that must change in order to ensure that women, girls, and marginalized groups are able to reap the same benefits from these technologies as everybody else, without suffering negative consequences.”
Meanwhile, Pogrebna told Newsweek, “The marginalisation of women in AI is not an inevitable by-product of technological advancement—it is the result of design choices, governance gaps, and historical inequities embedded in data and institutions. A multi-pronged approach that includes technical, procedural, legal, and cultural reforms is not only possible but has already demonstrated early success in multiple domains.”
She added that technical fixes are necessary but insufficient without regulatory frameworks to enforce accountability.
As AI technology develops and becomes more prevalent, the fabric of society is changing at a rapid pace, and the dream of a tech revolution that leads to a fairer society persists. What’s unclear is whether AI is doomed to code a world bugged with the same prejudice as the one that came before it.
Tools & Platforms
Tech giants to pour billions into UK AI. Here’s what we know so far

Microsoft CEO Satya Nadella speaks at Microsoft Build AI Day in Jakarta, Indonesia, on April 30, 2024.
LONDON — Microsoft said on Tuesday that it plans to invest $30 billion in artificial intelligence infrastructure in the U.K. by 2028.
The investment includes $15 billion in capital expenditures and $15 billion in its U.K. operations, Microsoft said. The company said the investment would enable it to build the U.K.’s “largest supercomputer,” with more than 23,000 advanced graphics processing units, in partnership with Nscale, a British cloud computing firm.
The spending commitment comes as President Donald Trump embarks on a state visit to Britain. Trump arrived in the U.K. Tuesday evening and is set to be greeted at Windsor Castle on Wednesday by King Charles and Queen Camilla.
During his visit, all eyes are on U.K. Prime Minister Keir Starmer, who is under pressure to bring stability to the country after the exit of Deputy Prime Minister Angela Rayner over a property tax scandal and a major cabinet reshuffle.
On a call with reporters on Tuesday, Microsoft President Brad Smith said his stance on the U.K. has warmed over the years. He previously criticized the country over its attempt in 2023 to block the tech giant’s $69 billion acquisition of video game developer Activision Blizzard. The deal was cleared by the U.K.’s competition regulator later that year.
“I haven’t always been optimistic every single day about the business climate in the U.K.,” Smith said. However, he added, “I am very encouraged by the steps that the government has taken over the last few years.”
“Just a few years ago, this kind of investment would have been inconceivable because of the regulatory climate then and because there just wasn’t the need or demand for this kind of large AI investment,” Smith said.
Starmer and Trump are expected to sign a new deal Wednesday “to unlock investment and collaboration in AI, Quantum, and Nuclear technologies,” the government said in a statement late Tuesday.
Tools & Platforms
Workday previews a dozen AI agents, acquires Sana
After introducing its first AI agents for its HR and financial users last year, Workday returns this year with more prebuilt agents, a data layer for agents to feed analytics systems, and developer tools for custom agents.
The company also said it has entered a definitive agreement to acquire Sana, whose AI-based tools enable learning and content creation. Workday said it will pay $1.1 billion and expects the deal to close by Jan. 31.
Workday has been on a tear with acquisitions this year. It reached an agreement to buy Paradox, an AI agent builder that automates tasks such as candidate screening, texting and interview scheduling. The deal is expected to close by the end of October. In April, Workday acquired Flowise, an AI agent builder.
HR software, in general, is complex compared with enterprise systems such as CRM, said Josh Bersin, an independent HR technology analyst. Because of that, some HR vendors will have to add agentic AI functionality through acquisition. Workday’s acquisitions this year coincide with the hiring of former SAP S/4HANA and analytics leader Gerrit Kazmaier as its president of product and technology.
“Workday knows that the architecture they have is not going to quickly get them to the world of agents — they can’t build agents fast enough to work across the proprietary workflow system that they have,” Bersin said. “Their direct competitors, SAP and Oracle, are all in the same boat.”
Agents, tools to come
Workday previewed several agents to automate HR work, including the Business Process Copilot Agent, which configures Workday for individual user tasks; Document Intelligence for Contingent Labor Agent, which manages scope of work processes and aligns contracts; Employee Sentiment Agent, which analyzes employee feedback; Job Architecture Agent, which automates job creation, titles and management; and Performance Agent, which surveys data across Workday and assembles it for performance reviews.
Another tool, Case Agent, can potentially be a significant time-saver for HR workers, said Peter Bailis, chief technology officer at Workday. Bailis, a former Google executive who led AI for cloud analytics, also recently joined the company.
“One of the biggest challenges in HR [is when] an employee has a critical question,” Bailis said. “But their questions are often complex, and processing times for HR departments are often long.”
The Case Agent can review similar cases in HR, apply the right regional and compliance context, and draft a tailored response for humans to review and deliver.
“The most important part — caring for employees — stays human,” Bailis said.
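Workday has not published the agent’s internals. As a generic, hypothetical illustration of the retrieve-similar-cases-then-draft pattern Bailis describes (TF-IDF retrieval via scikit-learn; all case data invented), the pipeline might look like this:

```python
# Hypothetical sketch of a retrieve-then-draft HR case assistant.
# Not Workday's implementation; case data and resolutions are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_cases = [
    "Employee asks how parental leave accrues during a transfer to Germany.",
    "Question about carrying over unused vacation days across regions.",
    "Request to correct a payroll deduction after a mid-month relocation.",
]
resolutions = [
    "Cited German statutory leave rules; confirmed accrual continues.",
    "Explained regional carry-over caps; escalated exception to an HR partner.",
    "Issued a payroll correction; flagged a relocation checklist gap.",
]

query = "How does my parental leave work if I move to our Berlin office?"
vectorizer = TfidfVectorizer().fit(past_cases + [query])
similarities = cosine_similarity(vectorizer.transform([query]),
                                 vectorizer.transform(past_cases))[0]
best = similarities.argmax()

# The draft goes to a human for review and delivery; review stays human.
print(f"Most similar case: {past_cases[best]}")
print(f"Prior resolution: {resolutions[best]}")
```

A production system would presumably swap TF-IDF for semantic embeddings and add the regional and compliance-context lookup Bailis mentions; the key design point is that human review sits at the end of the pipeline.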
On the financials side, Workday previewed Cost & Profitability Agent, which enables users to define allocation rules with natural language to derive insights; Financial Close Agent, which automates closing processes; and Financial Test Agent, which analyzes financials to detect fraud and enable compliance. For the education vertical, Workday plans to release Student Administration Agent and Academic Requirements Agent.
Workday also plans agents that bring the functionality of recent acquisitions Paradox and Flowise to its platform.
Expected in the next platform update is the zero-copy Workday Data Cloud, which brings together Workday data with data from other operational systems, such as sales and risk management, for analytics, forecasting and planning. Also in the works is Workday Build, a developer platform with no-code features from Flowise that enable the creation of custom agents.

How AI will affect HR jobs
The AI transformation that Workday and the rest of the enterprise HR software market are undergoing will likely affect the ratio of HR workers to employees at large businesses, Bersin said.
Currently, many companies aim for an industry standard of one HR employee per 100 employees; with AI agents automating many administrative processes, Bersin said he sees the potential for ratios of 1:200, 1:250 or, in the case of one client his company interviewed, possibly 1:400. For a 10,000-person company, that shift would mean shrinking HR from 100 people to as few as 25.
As such, automation will enable companies to do more work with smaller HR teams.
“In recruiting, there are sourcers, screeners, interview schedulers, people that do assessment, people that look at pay, people that write job offers, people that create start dates, people that do onboarding,” Bersin said. “Those jobs, maybe a third of them will go away. In learning and development, there’s a new era where a lot of the training content is being generated by AI.”
Workday previewed these features and announced the Sana acquisition in conjunction with its Workday Rising user conference in Las Vegas Sept. 15-18.
Don Fluckinger is a senior news writer for Informa TechTarget. He covers customer experience, digital experience management and end-user computing. Got a tip? Email him.
Tools & Platforms
Humanity’s Best or Worst Invention?

Artificial Intelligence is everywhere, and professors at Pacific are less than thrilled
Half the world seems to be under the impression that the creation of Artificial Intelligence (a.k.a. AI) is the greatest invention since the wheel, while the other half worries that AI will roll over humanity and crush it.
For students, though, it especially seems like humanity has struck gold. Why whittle away precious hours doing homework when AI can spit out an entire essay in seconds? (Plus, it can create fun images like the one you see accompanying this article.) It can even make videos that look so real you begin to doubt everything you see. AI is the robot that’s smarter than you, faster than you, and more creative than you—but maybe it’s not as good as some make it seem.
“I think it’s a mistake to think of it as a tool,” said Professor Sang-hyoun Pahk. “They are replacing a little bit too much of the thinking that we want to do and want our students to do.” Professor Pahk recently gave a presentation to Pacific’s faculty on the topic, along with professors Aimee Wodda, Dana Mirsalis, and Rick Jobs. As at many universities around the world, AI has become an increasingly popular topic at Pacific, with some accepting the technology and others shunning it. Professor Pahk explained that there are plenty of opportunities for faculty to learn techniques for using AI as a tool in the classroom, but that not all faculty want to go down that road. “There’s less…kind of strategies for what you do when you don’t want to use it,” he shared, firmly expressing that it’s something he wishes to stay far away from. “It’s part of what we were trying to start when we presented.”
Professor Pahk and his colleagues presented a mere week before school was back in session, so there was little time for faculty to change their syllabi to guard against students using AI on coursework. Still, many professors seemed to be on a similar page, tweaking their lesson plans and teaching methods so that AI is a technology students can’t even be tempted to use.
“I’ve tried to, if you will, AI proof my classes to a certain degree,” Professor Jules Boykoff shared. “Over the recent years, I’ve changed the assignments quite a bit; one example is I have more in-class examinations.” Professor Boykoff is no stranger to AI and admits that he’s done his fair share of testing out the technology. Still, the cons seem to greatly outweigh the pros, especially when it comes to education. “I’m a big fan of students being able to write with clarity and confidence and I’m concerned that overall, AI provides an incentive to not work as hard at writing with clarity and confidence.” Professor Boykoff, like many of his colleagues, stresses that AI short-circuits one’s ability to learn and develop by doing all the heavy lifting for them.
Philosophy Professor Richard Frohock puts it like this: “It would be like going to the gym and turning on the treadmill, and then just sitting next to it.” Professor Frohock said this with good humor, but his analogy rings true. “Thinking is the actual act of running. It’s hard, sometimes it sucks, we never really want to do it, and it’s not about having five miles on your watch…it’s about that process, that getting to five miles. And using AI is skipping that process, so it’s not actually helping you.” Like his colleagues, Professor Frohock doesn’t allow any AI usage in his classes, especially since students are still in the process of developing their minds. “I don’t want it to be us vs. the students, and like we’re policing what you guys do,” he admits, explaining that he has no desire to make student learning more difficult, but rather the opposite. “If we want to use AI to expand our mind, first we actually have to have the skills to be thinkers independently without AI.”
This is just one of many reasons that professors warn against using AI, but they’re not naïve to the fact that students will use it nonetheless. It has become integrated into Google searches and social media, which means students interact with AI whether they want to or not. “I have come to the conclusion that it’s counterproductive to try and control in some way student use of AI,” commented Professor Michael Huntsberger. Like other faculty, Professor Huntsberger has adjusted his lesson plans to make using AI more challenging for students, but he recognizes that this may not be foolproof. Still, he warns students to be very cautious when approaching AI and advises, “Don’t use it past your first step–so as a place to start your research…I think that’s a great way to make use of these things, but then tread very carefully.” He suggests that students leave the technology behind once they’ve established their starting point, so that their work retains enough human input to be considered student work rather than AI’s.
The problem with any AI usage is that the results it produces are built from other people’s work, which raises the specter of plagiarism. “This is the big fight right now between creators and the big tech companies, because the creators are saying ‘you’re drawing on our work,’” explained Professor Huntsberger. “And of course, those creators are, A, not being compensated, and B, are not being recognized in any way, and ultimately it’s stealing their work.”
Pacific’s own Professor Boykoff recognizes that his work has fallen victim to this process, explaining that a generous chunk of his writing has been stolen by this technology. “A big conglomerate designed to make money is stealing my hard-earned labor,” he articulated. “It’s not just me, it’s not just like it’s a personal thing, I’m just saying, as a general principle it’s offensive.”
Alongside those obvious concerns, Professor Pahk adds a few more items to the cons list, saying, “Broadly, there’s on the one hand, the social, and environmental, and political costs of artificial intelligence.”
AI 0, Professor Pahk 1.
Looking past all the cons, Professor Pahk acknowledged a bright side to the situation. “It’s just…culturally a less serious problem here,” claimed Professor Pahk, sharing that from his experience, students at Pacific want to learn and aren’t here just to mark off courses on a to-do list. “It’s not that it’s not a problem here, but it’s not the same kind of problem.”