
AI Insights

AI’s workforce impact is ‘small,’ but it’s not ‘zero,’ economist says


While artificial intelligence has caused turbulence in the labor market, the recent decline in job opportunities has more to do with economic uncertainty, experts say.

“As we look across the broader labor market, we see that AI’s impact on the labor market has still been fairly small,” said Cory Stahle, a senior economist at Indeed, a job search site. 

“The important asterisk is that that doesn’t mean that it has been zero,” he said. 

Mandi Woodruff-Santos, a career coach, agrees: “I don’t think AI is to blame, I think the economic uncertainty is to blame.”

The state of the job market

The job market has been weak in recent months, whether you’re looking for a job or already employed.

The U.S. economy added about 22,000 jobs in August, while the unemployment rate rose to 4.3%, according to a Bureau of Labor Statistics report released Friday. Economists surveyed by Dow Jones had expected payrolls to rise by 75,000.

Of those who are still employed, some are “job hugging,” or “holding onto their job for dear life,” according to an August report by Korn Ferry, an organizational consulting firm.

But others are “quiet cracking,” which is a “persistent feeling of workplace unhappiness that leads to disengagement, poor performance, and an increased desire to quit,” according to cloud learning platform TalentLMS.

Growing economic uncertainty has kept workers from quitting their jobs and has led businesses to slow down hiring decisions, experts say.

“No business knows what the heck the Trump administration is going to do next with the economy,” said Woodruff-Santos.

“And in this kind of economic climate, companies are not sure of anything, and so they’re being very conservative with the way that they’re hiring,” she said.

How artificial intelligence is impacting the labor force

While some companies have announced layoffs as they pursue AI technologies, most of the impact has been isolated to the tech industry, said Indeed’s Stahle.

Most recently, Salesforce CEO Marc Benioff said the company cut about 4,000 customer support roles due to advancements in its use of artificial intelligence software.

Other studies show AI has mostly affected younger workers rather than mid-career employees. 

An August report by Stanford University professors found that early career workers (ages 22 to 25) in the most AI-exposed occupations experienced a 13% decline in employment. On the flip side, employment for workers in less exposed fields and more experienced workers in the same occupations has either stayed the same or grown.

The study also found that employment declines are concentrated in occupations “where AI is more likely to automate rather than augment human labor.” 


Yet the tech industry itself is not a large sector, said Stahle. According to a March 2025 analysis by CompTIA, the nonprofit Computing Technology Industry Association, “net tech employment” made up about 5.8% of the overall workforce.

Net tech employment covers everyone working in the industry: people in technical positions such as cybersecurity, business professionals employed by technology companies, and both full-time and self-employed technology workers.

For AI-driven layoffs to be considered a broad threat to the job market, the technology needs to start impacting other sectors, such as retail and marketing, said Stahle.

‘We’re seeing more and more demand for AI skills’

Employers may be more likely to retrain workers than to lay them off, according to a new report on AI’s workforce impact by the Brookings Institution, a public policy think tank.

“AI may be more likely to augment rather than fully replace human workers,” the authors wrote.

In fact, “we’re seeing more and more demand for AI skills,” said Stahle.

If you have the opportunity, experts say, it’s smart to learn how your field and employer are using AI. 

“You’d be foolish not to do the research into your own field,” and understand how AI can be a tool in your industry, said Woodruff-Santos. 

Look for training programs or webinars you can participate in, or free trials of AI tools you can try, she said.

Correction: A new report came from the Brookings Institution. An earlier version misstated the name of the organization.


FTC Probes AI Chatbots’ Impact on Child Safety

The Federal Trade Commission (FTC) is investigating the effect of artificial intelligence (AI) chatbots on children and teens.

The commission announced Thursday (Sept. 11) that it was issuing orders to seven providers of AI chatbots in search of information on how those companies measure and monitor potentially harmful impacts of the technology on young people.

The companies in question are Google, Character.AI, Instagram, Meta, OpenAI, Snap and xAI.

“AI chatbots may use generative artificial intelligence technology to simulate human-like communication and interpersonal relationships with users,” the FTC said in a news release. “AI chatbots can effectively mimic human characteristics, emotions and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots.”

According to the release, the FTC wants to know what measures, if any, these companies have taken to determine the safety of their chatbots when serving as companions.

It is also seeking information on how the companies limit the products’ use by children and teens, mitigate potential negative effects on them, and inform users and parents of the risks associated with the products.

“The FTC is interested in particular in the impact of these chatbots on children and what actions companies are taking to mitigate potential negative impacts, limit or restrict children’s or teens’ use of these platforms, or comply with the Children’s Online Privacy Protection Act Rule,” the news release added.

As noted here last week when reports of the FTC’s efforts first emerged, some companies have already tried to address this issue.

For instance, OpenAI has said it would add teen accounts that can be monitored by parents. Character.AI has made similar changes, and Meta has added restrictions for people under 18 who use its AI products.

Those reports came the same day First Lady Melania Trump hosted a meeting of the White House Task Force on Artificial Intelligence Education. In a news release issued before the event, Trump said the rise of AI must be managed responsibly.

“During this primitive stage, it is our duty to treat AI as we would our own children—empowering, but with watchful guidance,” Trump said. “We are living in a moment of wonder, and it is our responsibility to prepare America’s children.”

Meanwhile, Character.AI CEO Karandeep Anand said last month he foresees a future where people have AI friends.

“They will not be a replacement for your real friends, but you will have AI friends, and you will be able to take learnings from those AI-friendly conversations into your real-life conversations,” Anand told the Financial Times.






General Counsel’s Job Changing as More Companies Adopt AI

The general counsel’s role is evolving to include more conversations around policy and business direction as more companies deploy artificial intelligence, panelists at a University of California, Berkeley conference said Thursday.

“We are not just lawyers anymore. We are driving a lot of the policy conversations, the business conversations, because of the geopolitical issues going on and because of the regulatory, or lack thereof, framework for products and services,” said Lauren Lennon, general counsel at Scale AI, a company that provides data for training AI systems.

Scattered regulation and fraying international alliances are also redefining the general counsel’s job, panelists …




California bill regulating companion chatbots advances to Senate

The California State Assembly approved legislation Tuesday that would place new safeguards on artificial intelligence-powered chatbots to better protect children and other vulnerable users.

Introduced in July by state Sen. Steve Padilla, Senate Bill 243 requires companies that operate chatbots marketed as “companions” to avoid exposing minors to sexual content, to regularly remind users that they are speaking to an AI and not a person, and to disclose that chatbots may not be appropriate for minors.

The bill passed the Assembly with bipartisan support and now heads to California’s Senate for a final vote.

“As we strive for innovation, we cannot forget our responsibility to protect the most vulnerable among us,” Padilla said in a statement. “Safety must be at the heart of all developments around this rapidly changing technology. Big Tech has proven time and again, they cannot be trusted to police themselves.”

The push for regulation comes as tragic instances of minors harmed by chatbot interactions have made national headlines. Last year, Adam Raine, a teenager in California, died by suicide after allegedly being encouraged by OpenAI’s chatbot, ChatGPT. In Florida, 14-year-old Sewell Setzer formed an emotional relationship with a chatbot on the platform Character.ai before taking his own life.

A March study by the MIT Media Lab examining the relationship between AI chatbots and loneliness found that higher daily usage correlated with increased loneliness, dependence and “problematic” use, a term that researchers used to characterize addiction to using chatbots. The study revealed that companion chatbots can be more addictive than social media, due to their ability to figure out what users want to hear and provide that feedback.

Setzer’s mother, Megan Garcia, and Raine’s parents have filed separate lawsuits against Character.ai and OpenAI, alleging that the chatbots were built with addictive, reward-based features and did nothing to intervene when both teens expressed thoughts of self-harm.

The California legislation also mandates companies program AI chatbots to respond to signs of suicidal thoughts or self-harm, including directing users to crisis hotlines, and requires annual reporting on how the bots affect users’ mental health. The bill allows families to pursue legal action against companies that fail to comply.


Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor’s in anthropology at Wagner College and master’s in media innovation from Northeastern University.


