Politicians seem increasingly intent on modeling artificial intelligence (AI) tools in their own likenesses—and their plans could sneakily undermine free speech.
For Republicans, the ruse involves fighting “woke” AI. The Trump administration is reportedly readying an executive order aimed at preventing AI chatbots from being too politically correct.
And, sure, the way some generative AI tools have been trained and programmed has led to some silly—and reality-distorting—outcomes. See, for instance, the great black pope and Asian Nazi debacle of 2024 from Google Gemini.
And, all glibness aside, it’s way better than the alternative.
Both Gemini and Grok have been retooled to avoid similar snafus going forward. But the fact remains that different tech companies have different standards and safeguards baked into their AI systems, and these may lead their systems to yield different results.
Unconscious biases baked into AI will continue to produce some biased information. But the trick to combating this isn’t some sort of national anti–woke AI policy but teaching AI literacy—ensuring that everyone knows that the version of reality reflected by AI tools may be every bit as biased as narratives produced by humans.
And, as the market for AI tools continues to grow, consumers can also assert preferences as they always do in marketplaces: by choosing the products that they think are best. For some, this might mean that more biased systems are actually more appealing; for others, systems that produce the most neutral results will be the most useful.
Whatever problems might result from private choices here, I’m way more worried about the deleterious effects of politicians trying to combat “woke” or “discriminatory” AI.
The forthcoming Trump executive order “would dictate that AI companies getting federal contracts be politically neutral and unbiased in their AI models, an effort to combat what administration officials see as liberal bias in some models,” according to The Wall Street Journal.
That might seem unobjectionable at first blush. But while we might wish AI models be “neutral and unbiased”—just as we might wish the same about TV news programs, or magazine articles, or social media moderation—the fact is that private companies, be they television networks or publishers or tech companies, have a right to make their products as biased as they want. It’s up to consumers to decide if they prefer neutral models or not.
Granted, the upcoming order is not expected to mandate such a requirement across the board, only to make it a condition for AI companies getting federal contracts. That seems fair enough in theory, but in practice it’s unlikely to be so benign.
Look at the past decade of battles over social media moderation, during which the left and the right have both cried “bias” over decisions that don’t favor their preferred views.
Look at the way every recent presidential administration has tried to tie education funding to supporting or rejecting certain ideologies surrounding sex, gender, race, etc.
“Because nearly all major tech companies are vying to have their AI tools used by the federal government, the order could have far-reaching impacts and force developers to be extremely careful about how their models are developed,” the Journal suggests.
To put it more bluntly, tech companies could find themselves having to retool AI models to fit the sensibilities and biases of Trump—or whoever is in power—in order to get lucrative contracts.
Sure, principled companies could opt out of trying for government contracts. But that just means that whoever builds the most sycophantic AI chatbots will be the ones powering the federal government. And those contracts could also mean that the most politically biased AI tools wind up being the most profitable and the most able to proliferate and expand.
Like “woke AI,” the specter of “AI discrimination” has become a rallying cry for authorities looking to control AI outputs.
And, again, we’ve got something that doesn’t sound so bad in theory. Who would want AI to discriminate?
But, in practice, new laws intended to prevent discrimination in artificial intelligence outputs could have negative results, as Greg Lukianoff, president and CEO of the Foundation for Individual Rights and Expression (FIRE), explains:
These laws — already passed in states like Texas and Colorado — require AI developers to make sure their models don’t produce “discriminatory” outputs. And of course, superficially, this sounds like a noble endeavor. After all, who wants discrimination? The problem, however, is that while invidious discriminatory action in, say, loan approval should be condemned, discriminatory knowledge is an idea that is rightfully foreign. In fact, it should freak us out.
[…] Rather than calling for the arrest of a Klansman who engages in hateful crimes, these regulations say you need to burn down the library where he supposedly learned his hateful ideas. Not even just the books he read, mind you, but the library itself, which is full of other knowledge that would now be restricted for everyone else.
Perhaps more destructive than any government actions stemming from such laws is the way that they will influence how AI models are trained.
The very act of trying to depoliticize or neutralize AI, when done by politicians, could undermine AI’s potential for neutral and nonpolitical knowledge dissemination. People are not going to trust tools that they know are being intimately shaped by particular political administrations. And they’re not going to trust tools that seem like they’ve been trained to disregard reality when it isn’t pretty, doesn’t flatter people in power, or doesn’t align with certain social goals.
“This is a matter of serious epistemic consequence,” Lukianoff writes. “If knowledge is riddled with distortions or omissions as a result of political considerations, nobody will trust it.”
“In theory, these laws prohibit AI systems from engaging in algorithmic discrimination—or from treating different groups unfairly,” note Lukianoff and Adam Goldstein at National Review. “In effect, however, AI developers and deployers will now have to anticipate every conceivable disparate impact their systems might generate and scrub their tools accordingly. That could mean that developers will have to train their models to avoid uncomfortable truths and to ensure that their every answer sounds like it was created with HR and legal counsel looking over their shoulder, softening and obfuscating outputs to avoid anything potentially hurtful or actionable. In short, we will be (expensively) teaching machines to lie to us when the truth might be upsetting.”
Whether it’s trying to make ChatGPT a safe space for social justice goals or for Trump’s ego, the fight to let government authorities define AI outputs could have disastrous results.
At the very least, it’s going to lead to many years of fighting over AI bias, in the same way that we spent the past decade arguing over alleged biases in social media moderation and “censorship” by social media companies. And after all that, does anyone think that social media on average is any better—or that it is producing any better outcomes—in 2025 than it did a decade ago?
Politicizing online content moderation has arguably made things worse and, if nothing else, been so very tedious. It looks like we can expect a rehash of all these arguments where we just replace “social media” and “search engines” with “AI.”
The “recruitment” provision of Tennessee’s abortion law “prohibits speech encouraging lawful abortion while allowing speech discouraging lawful abortion,” per the court’s opinion. “That is impermissible viewpoint discrimination, which the First Amendment rarely tolerates—and does not tolerate here.”
“Tennessee may criminalize speech recruiting a minor to procure an abortion in Tennessee,” wrote Judge Julia Gibbons. “The state may not, however, criminalize speech recruiting a minor to procure a legal abortion in another state.”
Mississippi can start enforcing a requirement that social media platforms verify user ages and block anyone under age 18 who doesn’t have parental consent to participate, the U.S. Court of Appeals for the 5th Circuit has ruled.
“Just as the government can’t force you to provide identification to read a newspaper, the same holds true when that news is available online,” said Paul Taske, co-director of the NetChoice Litigation Center, in a statement. “Courts across the country agree with us: NetChoice has successfully blocked similar, unconstitutional laws in other states. We are confident the Supreme Court will agree, and we look forward to fighting to keep the internet safe and free from government censorship.”
• Are men too “anxious about desire”? Is “heteropessimism” a useful concept? Should dating disappointment be seen as something deeper, or is that just a defense? In a much-talked-about New York Times essay, Jean Garnett showcases the seductive appeal of blaming one’s personal romantic woes on larger, more political forces, and the perils of that approach.
• “House Speaker Mike Johnson is rebuffing pressure to act on the investigation into Jeffrey Epstein, instead sending members home early for a month-long break from Washington after the week’s legislative agenda was upended by Republican members who are clamoring for a vote,” the Associated Press reports.
• X can tell users when the government wants their data. “X Corp. convinced the DC Circuit to vacate a broad order forbidding disclosure about law enforcement’s subpoenas for social media account information, with the panel finding Friday a lower court failed to sufficiently review the potential harm of immediate disclosure,” notes Bloomberg Law.
• A case involving Meta and menstrual app data got underway this week. In a class-action suit, the plaintiffs say that period tracking app Flo shared their data with Meta and other companies without their permission. “Brenda R. Sharton of Dechert, the lead attorney representing Flo Health, said evidence will show that Flo never shared plaintiffs’ health information, that plaintiffs agreed that Flo could share data to maintain and improve the app’s performance, and that Flo never sold data or allowed anybody to use health information for ads,” per Courthouse News Service.
• YouTube is now Netflix’s biggest competitor. “The rivalry signals how the streaming wars have entered a new phase,” television reporter John Koblin suggests. “Their strategies for success are very different, but, in ways large and small, it’s becoming clear that they are now competing head-on.”
• European Union publishers are suing Google over an alleged antitrust law violation. Their beef is with Google’s AI overviews, which they’re worried could cause “irreparable harm.” They complain to the European Commission that “publishers using Google Search do not have the option to opt out from their material being ingested for Google’s AI large language model training and/or from being crawled for summaries, without losing their ability to appear in Google’s general search results page.”
By Choonsik Yoo (September 1, 2025, 07:26 GMT | Insight) — South Korea’s defense ministry is shifting from research and preparation to actively adopting AI technologies, with a comprehensive defense AI policy paper planned by early 2026, a ministry official told MLex. The move reflects President Lee Jae Myung’s pledge to make the use of AI across the country the main driver of economic recovery. Initial applications will focus on administration, manpower management and surveillance systems, while large-scale combat uses are expected to take longer due to technological challenges.
South Korea’s defense ministry plans to begin adopting artificial intelligence technology as broadly as possible, moving away from its previous strategy of focusing primarily on study and preparation, and acknowledging the sustained proliferation of AI across industries and countries….
Builder AI launches liquidation process in Delaware after controversy over overstated sales; Nate founder federally indicted; GameOn accused of false data; and more
As the generative artificial intelligence (AI) craze approaches its peak, promises that “AI will do everything on its own” are collapsing across Silicon Valley. The bankruptcy of Builder AI, once celebrated as a unicorn, is a symbolic case.
According to The New York Times on the 31st (local time), Builder AI promoted itself aggressively on the back of strong growth in 2024, but a board investigation confirmed that its sales had been overstated. After a management shake-up and a liquidity crisis, the company entered liquidation proceedings in a Delaware court in the first half of 2025. As suspicions spread that people were doing the work behind the scenes of “Natasha,” the AI assistant that was supposed to build apps automatically, management explained that “AI was an auxiliary tool and did not replace people,” but it failed to restore trust.
The incident shows how easily verification of a technology’s actual level of automation, and of its financial numbers, can be pushed aside while the “AI” label draws investor and media attention.
Similar scenes played out elsewhere. The shopping app “Nate” promoted itself as using deep learning to replace payment and checkout, but allegations arose that outsourced staff in the Philippines were handling orders manually. In the spring of 2025, the U.S. Attorney’s Office for the Southern District of New York (SDNY) charged its founder with defrauding investors.
San Francisco startup GameOn put forward an AI sports chatbot but faced an indictment over false financial data, fake audit reports, and allegedly inflated sales. What these cases have in common is “AI-washing”: promoting processes that are largely performed by humans, or whose automation is immature, as if they were fully automatic.
‘AI done by humans’ is not a small problem at large corporations, either. Amazon’s “Just Walk Out” was pitched as sensors and computer vision handling checkout automatically, but reports persisted that staff were identifying and reviewing transactions in actual operations. Amazon denied that the technology had been exaggerated but adjusted its store strategy to focus on smart carts.
Presto Automation, which introduced an automated response solution for fast-food drive-throughs, was also found to have had humans handle a significant share of orders at certain times. A legal technology startup touted automated drafting of personal injury case documents, but when internal testimony was reported that much of the actual work depended on human review, the company emphasized that “the combination of AI and humans is essential for high quality.”
“The fall of Builder AI clearly shows what to believe and what to doubt in the current AI boom,” the New York Times said. AI is being sold, but automation is not, and the gap between the technology’s actual level and market expectations remains large.
No generation is spared from the cultural upheaval of new technology. In the 2020s, it is AI fuelling that disruption.
Love it or fear it, artificial intelligence offers endless possibilities, which many people have now felt on a personal level.
While many fear for their jobs, others are seeing a widening of possibilities.
Max Hamilton is worried about the impact AI would have on creatives, including copyright issues. (Supplied: Max Hamilton)
Forced to seek change
Max Hamilton, a graphic designer with over two decades of experience, has already adapted her career to meet the threat of generative AI head on.
The increasing scarcity of jobs pushed her to venture into illustration work for children’s books.
“I saw that happening a few years ago and that’s when I pivoted,” Ms Hamilton said.
“I’ve been really focusing on using watercolour and hand drawing, which I did on purpose because I thought that might set me apart from having the computer-generated look.”
To stay ahead of the fast-changing landscape, she has also expanded her skill set to include writing, meaning she can be involved in every aspect of producing a book.
“As a creative, I think we like to think that our creativity is our special weapon,” Ms Hamilton said.
How will AI affect jobs?
Data shows that creatives like Ms Hamilton are right to expect increasing AI influence in their sectors.
A recent Jobs and Skills Australia (JSA) report finds that artificial intelligence will bring significant change to the labour market, whether through automation or augmentation.
The body assessed various tasks within ANZSCO-listed occupations and ranked them based on the degree to which AI could impact them.
Here are the sectors JSA predicts are most likely to be automated by artificial intelligence, with existing workflows replaced.
Here are the sectors most likely to be augmented by artificial intelligence, improving the output of existing workers.
Evan Shellshear, an innovation and technology expert from The University of Queensland, explains what this means for the availability of jobs in the market.
“It’s not jobs that are at risk of AI, it’s actual tasks and skills,” Dr Shellshear said.
“We’re seeing certain skills and parts of jobs disappearing, but not necessarily whole occupations disappearing.”
The report supports this, finding that generative AI technologies are more likely to boost workers’ productivity than to replace them, especially in high-skilled occupations.
In fact, Dr Shellshear believes there’s a likelihood AI will create job opportunities.
“It’s making a lot of things that were impossible, possible,” he says, especially for small businesses.
“Gen AI can lower the cost for things, expertise and knowledge that were out of reach in the past.”
An opportunity to create the unthinkable
Growing up a big fan of science fiction, Melanie Fisher jumped at the chance to experiment with Generative AI shortly after ChatGPT was released.
Ms Fisher started off by testing the tool’s knowledge of food regulation, with which she was familiar from years of experience in the industry.
“It came up with some untrue stuff so early on I learnt you have to be careful,” said the 67-year-old, who is based in Canberra.
Melanie Fisher built an app for her 3-year-old grandchild, and now it’s a bonding activity for the two. (Supplied: Melanie Fisher)
But Ms Fisher continued pushing the bounds of what AI could offer — using it to find new recipes and suggestions for things to do — before landing on the idea to create a game app for her 3-year-old granddaughter Lilly*.
When she heard AI could code, she thought to herself, “Oh I’d love to try that, but … I’m not an IT person or anything.”
So, she threw the question to ChatGPT.
Melanie spelled out her request on the generative AI tool and made it clear she had no relevant background. (Supplied: Melanie Fisher)
The tool recommended a program that allowed her to drag and drop different elements to produce coherent story-mode gameplay.
Ms Fisher didn’t have to look far for inspiration.
“[The game was] based on stories Lilly* and I made up about her being a girl pirate with her friends, and they have adventures together,” Ms Fisher said.
It took three weeks of work to bring her vision to reality, even getting the characters to loosely resemble Lilly*.
Now the game has become a special pet project for the duo.
Melanie continues to build on the gameplay with input from her granddaughter. (Supplied: Melanie Fisher)
Drawing from her own experience, Ms Fisher sees AI as a double-edged sword.
“I think it’s a great leap forward for people, but I do very much worry it’s going to massively displace lots of people from work,” she said.
Transition still in early days
Many professionals such as recruiters, university staff and health practitioners have incorporated AI into their workflows.
More recently, Commonwealth Bank of Australia made headlines by cutting jobs it attributed to artificial intelligence, only to later apologise and backtrack on the decision.
But news stories about corporate lay-offs and downsizing don’t necessarily point to an AI takeover, according to Professor Nicholas Davis, a former World Economic Forum executive who is now an artificial intelligence expert at the University of Technology Sydney (UTS).
He believes these trends are being driven by “early adopters” and foresees “a disconnect between expectation versus reality”.
“We’re likely to see organisations lay off people in anticipation of gains and then rehiring because it doesn’t quite work the way they expect,” said Professor Davis.
“We’re at very early stages of using the latest forms of AI at the enterprise level.
“Most organisations have yet to see a measurable positive impact on the bottom line.”
An example he provided is how the introduction of self check-out machines at supermarkets resulted in higher levels of staff stress, customer frustration and costs from theft.
This has led a number of UK and US chains to reintroduce manned tills.
“The consumer experience is different to the organisational value and experience,” warns Professor Davis.
Nicholas Davis believes humans are still needed alongside AI for it to perform sustainably and reliably. (ABC News: Ian Cutmore)
How can we better prepare for an AI-driven world?
Despite having success with the app, Ms Fisher says, “I’ve learnt a little bit but I don’t think I could become a game developer.”
Speaking to this, Dr Shellshear agrees there’s a distinction to be made between what is possible with AI and the value humans have to offer.
Dr Evan Shellshear believes people should focus on harnessing the right skills for an AI-driven future. (Supplied: Dr Evan Shellshear)
While AI can help a person attain new skills, they still need education, training and real-world expertise to get to a professional level, he adds.
Having conducted his own research into AI’s impact on jobs, with a keen interest in which skills remain relevant in the future, he found professions involving communication, management, collaboration, creativity, assisting and caring to be the most difficult to replicate.
Other human traits such as problem-solving, resilience and attentiveness are also irreplaceable, says Professor Davis.
But he says that having a varied set of skills can put you at an advantage.
“The more you’re able to add value, the less it matters that things get taken away,” he said.
“But if your job is doing one specific thing or creating one style, then that’s where it gets problematic.”
“Embracing, engaging and reinventing is how you benefit.”
Here’s also Dr Shellshear’s advice on staying ahead of the game.
“Recognise its impact on your life as an individual, especially from a job perspective and ask yourself: ‘How do I position myself to continue to add value with these tools around me?'”At some point, you have to learn how to integrate [AI] into your workflows otherwise you [risk no longer being] efficient or relevant.“