AI Insights

AI’s workforce impact is ‘small,’ but it’s not ‘zero,’ economist says

Image source: BartekSzewczyk | Getty Images

While artificial intelligence has caused turbulence in the labor market, the recent decline in job opportunities has more to do with economic uncertainty, experts say.

“As we look across the broader labor market, we see that AI’s impact on the labor market has still been fairly small,” said Cory Stahle, a senior economist at Indeed, a job search site. 

“The important asterisk is that that doesn’t mean that it has been zero,” he said. 

Mandi Woodruff-Santos, a career coach, agrees: “I don’t think AI is to blame, I think the economic uncertainty is to blame.”

The state of the job market

The job market has not been good in recent months, whether you’re looking for a job or currently employed.

The U.S. economy added about 22,000 jobs for the month of August, while the unemployment rate rose to 4.3%, according to a Bureau of Labor Statistics report on Friday. Economists surveyed by Dow Jones had been looking for payrolls to rise by 75,000.

Of those who are still employed, some are “job hugging,” or “holding onto their job for dear life,” according to an August report by Korn Ferry, an organizational consulting firm.

But others are “quiet cracking,” which is a “persistent feeling of workplace unhappiness that leads to disengagement, poor performance, and an increased desire to quit,” according to cloud learning platform TalentLMS.

Growing economic uncertainty has kept workers from quitting their jobs and has led businesses to slow down hiring decisions, experts say.

“No business knows what the heck the Trump administration is going to do next with the economy,” said Woodruff-Santos.

“And in this kind of economic climate, companies are not sure of anything, and so they’re being very conservative with the way that they’re hiring,” she said.

How artificial intelligence is impacting the labor force

While some companies have announced layoffs as they adopt AI technologies, most of the impact has been isolated to the tech industry, said Indeed’s Stahle.

Most recently, Salesforce CEO Marc Benioff said the company cut about 4,000 customer support roles due to advancements in its use of artificial intelligence software.

Other studies show AI has mostly affected younger workers rather than mid-career employees. 

An August report by Stanford University professors found that early career workers (ages 22 to 25) in the most AI-exposed occupations experienced a 13% decline in employment. On the flip side, employment for workers in less exposed fields and more experienced workers in the same occupations has either stayed the same or grown.

The study also found that employment declines are concentrated in occupations “where AI is more likely to automate rather than augment human labor.” 

Yet, the tech industry itself is not a large sector, said Stahle. According to a March 2025 analysis by nonprofit trade association CompTIA, or the Computing Technology Industry Association, “net tech employment” made up about 5.8% of the overall workforce.

Net tech employment represents everyone employed in the industry, including workers in technical positions such as cybersecurity, business professionals employed by technology companies, and both full-time and self-employed technology workers.

For AI-driven layoffs to be considered a broad threat to the job market, the technology needs to start impacting other sectors, such as retail and marketing, said Stahle.

‘We’re seeing more and more demand for AI skills’

Some predictions on AI’s workforce impact contend that employers may be more likely to retrain workers rather than lay them off, according to a new report by the Brookings Institution, a public policy think tank.

“AI may be more likely to augment rather than fully replace human workers,” the authors wrote.

In fact, “we’re seeing more and more demand for AI skills,” said Stahle.

If you have the opportunity, experts say, it’s smart to learn how your field and employer are using AI. 

“You’d be foolish not to do the research into your own field,” and understand how AI can be a tool in your industry, said Woodruff-Santos. 

Look for training programs or webinars where you can participate or free trials of AI tools you can use, she said.

Correction: A new report came from the Brookings Institution. An earlier version misstated the name of the organization.

AI Insights

After suicides, calls for stricter rules on how chatbots interact with children and teens

A growing number of young people have found themselves a new friend: one that isn’t a classmate, a sibling, or even a therapist, but a human-like, always supportive AI chatbot. But if that friend begins to mirror a user’s darkest thoughts, the results can be devastating.

In the case of Adam Raine, a 16-year-old from Orange County, his relationship with AI-powered ChatGPT ended in tragedy. His parents are suing the company behind the chatbot, OpenAI, over his death, alleging the bot became his “closest confidant,” one that validated his “most harmful and self-destructive thoughts,” and ultimately encouraged him to take his own life.

It’s not the first case to put the blame for a minor’s death on an AI company. Character.AI, which hosts bots, including ones that mimic public figures or fictional characters, is facing a similar legal claim from parents who allege a chatbot hosted on the company’s platform actively encouraged a 14-year-old boy to take his own life after months of inappropriate, sexually explicit messages.

When reached for comment, OpenAI directed Fortune to two blog posts on the matter. The posts outlined some of the steps OpenAI is taking to improve ChatGPT’s safety, including routing sensitive conversations to reasoning models, partnering with experts to develop further protections, and rolling out parental controls within the next month. OpenAI also said it was working on strengthening ChatGPT’s ability to recognize and respond to mental health crises by adding layered safeguards, referring users to real-world resources, and enabling easier access to emergency services and trusted contacts.

Character.AI said the company does not comment on pending litigation but that it has rolled out more safety features over the past year, “including an entirely new under-18 experience and a Parental Insights feature.” A spokesperson said: “We already partner with external safety experts on this work, and we aim to establish more and deeper partnerships going forward.

“The user-created Characters on our site are intended for entertainment. People use our platform for creative fan fiction and fictional roleplay. And we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”

But lawyers and civil society groups that advocate for better accountability and oversight of technology companies say the companies should not be left to police themselves when it comes to ensuring their products are safe, particularly for vulnerable children and teens.

“Unleashing chatbots on minors is an inherently dangerous thing,” Meetali Jain, the Director of the Tech Justice Law Project and a lawyer involved in both cases, told Fortune. “It’s like social media on steroids.”

“I’ve never seen anything quite like this moment in terms of people stepping forward and claiming that they’ve been harmed…this technology is that much more powerful and very personalized,” she said.

Lawmakers are starting to take notice, and AI companies are promising changes to protect children from engaging in harmful conversations. But, at a time when loneliness among young people is at an all-time high, the popularity of chatbots may leave young people uniquely exposed to manipulation, harmful content, and hyper-personalized conversations that reinforce dangerous thoughts.

AI and Companionship

Intended or not, one of the most common uses for AI chatbots has become companionship. Some of the most active users of AI are now turning to the bots for things like life advice, therapy, and human intimacy. 

While most leading AI companies tout their AI products as productivity or search tools, an April survey of 6,000 regular AI users from the Harvard Business Review found that “companionship and therapy” was the most common use case. Such usage is even more prevalent among teens.

A recent study by the U.S. nonprofit Common Sense Media revealed that a large majority of American teens (72%) have experimented with an AI companion at least once, and more than half say they use the tech regularly in this way.

“I am very concerned that developing minds may be more susceptible to [harms], both because they may be less able to understand the reality, the context, or the limitations [of AI chatbots], and because culturally, younger folks tend to be just more chronically online,” said Karthik Sarma, a health AI scientist and psychiatrist at the University of California, San Francisco.

“We also have the extra complication that the rates of mental health issues in the population have gone up dramatically. The rates of isolation have gone up dramatically,” he said. “I worry that that expands their vulnerability to unhealthy relationships with these bonds.”

Intimacy by Design

Some of the design features of AI chatbots encourage users to feel an emotional bond with the software. They are anthropomorphic, acting as if they have interior lives and lived experience that they do not; they tend to be sycophantic; they can hold long conversations; and they can remember information.

There is, of course, a commercial motive for making chatbots this way. Users tend to return and stay loyal to certain chatbots if they feel emotionally connected or supported by them. 

Experts have warned that some features of AI bots are playing into the “intimacy economy,” a system that tries to capitalize on emotional resonance. It’s a kind of AI update on the “attention economy,” which capitalized on constant engagement.

“Engagement is still what drives revenue,” Sarma said. “For example, for something like TikTok, the content is customized to you. But with chatbots, everything is made for you, and so it is a different way of tapping into engagement.”

These features, however, can become problematic when the chatbots go off script and start reinforcing harmful thoughts or offering bad advice. In Adam Raine’s case, the lawsuit alleges that ChatGPT brought up suicide at twelve times the rate he did, normalized his suicidal thoughts, and suggested ways to circumvent its content moderation.

It’s notoriously tricky for AI companies to stamp out behavior like this completely, and most experts agree it’s unlikely that hallucinations or unwanted actions will ever be eliminated entirely.

OpenAI, for example, acknowledged in its response to the lawsuit that safety features can degrade over long conversations, despite the fact that the chatbot itself has been optimized to hold these longer conversations. The company says it is trying to fortify these guardrails, writing in a blog post that it was strengthening “mitigations so they remain reliable in long conversations” and “researching ways to ensure robust behavior across multiple conversations.”

Research Gaps Are Slowing Safety Efforts

For Michael Kleinman, U.S. policy director at the Future of Life Institute, the lawsuits underscore a point AI safety researchers have been making for years: AI companies can’t be trusted to police themselves.

Kleinman equated OpenAI’s own description of its safeguards degrading in longer conversations to “a car company saying, here are seat belts—but if you drive more than 20 kilometers, we can’t guarantee they’ll work.”

He told Fortune the current moment echoes the rise of social media, where he said tech companies were effectively allowed to “experiment on kids” with little oversight. “We’ve spent the last 10 to 15 years trying to catch up to the harms social media caused. Now we’re letting tech companies experiment on kids again with chatbots, without understanding the long-term consequences,” he said.

Part of the problem is a lack of scientific research on the effects of long, sustained chatbot conversations. Most studies only look at brief exchanges: a single question and answer or, at most, a handful of back-and-forth messages. Almost no research has examined what happens in longer conversations.

“The cases where folks seem to have gotten in trouble with AI: we’re looking at very long, multi-turn interactions. We’re looking at transcripts that are hundreds of pages long for two or three days of interaction alone, and studying that is really hard, because it’s really hard to simulate in the experimental setting,” Sarma said. “But at the same time, this is moving too quickly for us to rely on only gold standard clinical trials here.”

AI companies are rapidly investing in development and shipping more powerful models at a pace that regulators and researchers struggle to match.

“The technology is so far ahead and research is really behind,” Sakshi Ghai, a Professor of Psychological and Behavioural Science at The London School of Economics and Political Science, told Fortune.

A Regulatory Push for Accountability

Regulators are trying to step in, helped by the fact that child online safety is a relatively bipartisan issue in the U.S. 

On Thursday, the FTC said it was issuing orders to seven companies, including OpenAI and Character.AI, in an effort to understand how their chatbots impact children. The agency said that chatbots can simulate human-like conversations and form emotional connections with their users. It’s asking companies for more information about how they measure and “evaluate the safety of these chatbots when acting as companions.” 

FTC Chairman Andrew Ferguson said in a statement shared with CNBC that “protecting kids online is a top priority for the Trump-Vance FTC.”

The move follows a state-level push for more accountability from several attorneys general.

In late August, a bipartisan coalition of 44 attorneys general warned OpenAI, Meta, and other chatbot makers that they will “answer for it” if they release products that they know cause harm to children. The letter cited reports of chatbots flirting with children, encouraging self-harm, and engaging in sexually suggestive conversations, behavior the officials said would be criminal if done by a human.

Just a week later, California Attorney General Rob Bonta and Delaware Attorney General Kathleen Jennings issued a sharper warning. In a formal letter to OpenAI, they said they had “serious concerns” about ChatGPT’s safety, pointing directly to Raine’s death in California and another tragedy in Connecticut. 

“Whatever safeguards were in place did not work,” they wrote. Both officials warned the company that its charitable mission requires more aggressive safety measures, and they promised enforcement if those measures fall short.

According to Jain, the lawsuits from the Raine family as well as the suit against Character.AI are, in part, intended to create this kind of regulatory pressure on AI companies to design their products more safely and prevent future harm to children. One way lawsuits can generate this pressure is through the discovery process, which compels companies to turn over internal documents and could shed light on what executives knew about safety risks or marketing harms. Another is simply public awareness of what’s at stake, which can galvanize parents, advocacy groups, and lawmakers to demand new rules or stricter enforcement.

Jain said the two lawsuits aim to counter an almost religious fervor in Silicon Valley that sees the pursuit of artificial general intelligence (AGI) as so important that it is worth any cost, human or otherwise.

“There is a vision that we need to deal with [that tolerates] whatever casualties in order for us to get to AGI and get to AGI fast,” she said. “We’re saying: This is not inevitable. This is not a glitch. This is very much a function of how these chatbots were designed and with the proper external incentive, whether that comes from courts or legislatures, those incentives could be realigned to design differently.”




AI Insights

2 Artificial Intelligence (AI) Stocks That Could Become $1 Trillion Giants

These AI growth stocks may still be undervalued on Wall Street.

There are 10 companies with a market cap over $1 trillion right now, and all of these except one are involved in artificial intelligence (AI). This technology will drive a substantial amount of economic growth in the 21st century, providing investors the chance to earn substantial gains from the right stocks.

Some companies that are well positioned to play a key role in shaping the economy with AI are still valued at less than $1 trillion. Although their share prices could be volatile in the near term, the following two companies could be worth a lot more down the road than they are today.

Image source: Getty Images.

1. Palantir Technologies

More than 800 companies have chosen Palantir Technologies (PLTR) to transform their business operations with AI. Businesses can upload data to Palantir’s platforms, which essentially show them how to become more efficient, grow their revenue, and become more profitable. It is working magic for businesses and for the U.S. military, which trusts Palantir to keep top-secret information about the U.S. and its allies secure. Despite its already high market cap of $400 billion, Palantir’s unique value proposition and stellar profitability give it all the makings of a $1 trillion business.

Palantir is not just slapping a large language model on a company’s data to make information easy to search. It pulls together data from different sources within a company, creating a framework for understanding how the company operates. Palantir is essentially building a digital copy of a company’s operations that can detect problems and solve them instantly.

Palantir’s financials suggest there is no replacement for the value it provides. It reported accelerating revenue growth over the last year. In the second quarter, revenue grew 48% year over year, compared to 27% in the year-ago quarter.

Moreover, its net income margin was a stellar 33% in Q2, with an adjusted free cash flow margin of 57%. It’s not common for a small software company in the early stages of growth to report margins like Microsoft’s.

These margins are being driven by the high prices Palantir charges customers. For example, it recently secured a $10 billion contract with the U.S. Army for the next decade. Organizations are willing to pay up for Palantir’s software because the savings realized are that big. Palantir is saving enterprises millions of dollars, and in some cases hundreds of millions, in costs, providing an attractive return on investment that is driving the company’s growth.

Palantir stock is expensive, trading at high multiples of sales and earnings. But this is a unique software company with a huge opportunity ahead. CEO Alex Karp is aiming to grow revenue by 10x over time, which would bring annual revenue to more than $40 billion from this year’s analyst estimate of $4.1 billion. Based on its current margins, that could equate to $20 billion in annual free cash flow over the long term. Applying a high-growth multiple of 50 to that would put the stock’s market cap at $1 trillion.
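As a rough, purely illustrative sketch of that math (the 10x revenue target, the roughly 50% free-cash-flow margin implied by the $20 billion figure, and the 50x multiple are all assumptions taken from the paragraph above, not forecasts):

```python
# Back-of-the-envelope version of the valuation math above (illustrative only;
# the growth target, margin, and multiple are assumptions from the article).
revenue_now = 4.1e9                      # this year's analyst revenue estimate (~$4.1B)
revenue_future = revenue_now * 10        # Karp's 10x ambition -> ~$41B in annual revenue

fcf_margin = 0.50                        # assumed long-term free-cash-flow margin (~$20B on ~$41B)
fcf_future = revenue_future * fcf_margin

growth_multiple = 50                     # assumed high-growth multiple on free cash flow
implied_market_cap = fcf_future * growth_multiple

print(f"Free cash flow: ${fcf_future / 1e9:.0f}B")
print(f"Implied market cap: ${implied_market_cap / 1e12:.2f} trillion")
```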

2. Advanced Micro Devices

For AI to keep advancing and transform how people work and communicate, it needs more powerful chips. Nvidia has been the biggest winner so far, but investors shouldn’t overlook Advanced Micro Devices (AMD). It is the second-leading supplier of graphics processing units (GPUs), and growing demand in edge computing and AI inferencing could send the stock from its current $250 billion market cap to $1 trillion.

As AI proliferates across the economy, people will be able to use powerful AI applications and processing on their devices, which makes edge computing a large opportunity for AMD. The company offers a range of high-performance and energy-efficient chips that are aimed at running AI devices and PCs, positioning it to benefit from a booming market estimated to be worth $327 billion by 2033, according to Grand View Research.

Investors were disappointed by the company’s Q2 data center growth of 14% year over year, but management expects stronger demand once it launches its Instinct MI350 series of GPUs. As it continues to bring new solutions to the data center market, AMD’s data center business should accelerate.

AMD’s chips are clearly addressing needs in the AI market. It announced a partnership with Saudi Arabia’s Humain to build AI infrastructure using AMD’s GPUs and software. Meanwhile, Oracle is building a massive AI compute cluster using multiple AMD chips. AMD says it is also working with governments globally to build sovereign AI infrastructure.

Analysts expect AMD’s earnings to grow at an annualized rate of 30% over the next several years. Against those prospects, the stock trades at a reasonable forward price-to-earnings multiple of 40. There is enough earnings growth here to potentially triple the stock in five years, putting it easily within striking distance of reaching $1 trillion within the next decade.
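A similar back-of-the-envelope check on that claim (illustrative only; it assumes the roughly 30% annualized earnings growth materializes and treats the valuation multiple as broadly stable):

```python
# Compounding sketch for the earnings-growth claim above (not a forecast).
growth_rate = 0.30                       # assumed annualized earnings growth
years = 5

earnings_multiple = (1 + growth_rate) ** years   # ~3.7x earnings after five years
print(f"Earnings grow ~{earnings_multiple:.1f}x over {years} years")

# If the price-to-earnings multiple held steady, the share price would scale
# roughly with earnings, so ~3.7x earnings leaves room for the stock to triple
# even with some multiple compression.
market_cap_now = 250e9
print(f"Market cap if the stock triples: ${3 * market_cap_now / 1e9:.0f}B")
```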

John Ballard has positions in Advanced Micro Devices, Nvidia, and Palantir Technologies. The Motley Fool has positions in and recommends Advanced Micro Devices, Microsoft, Nvidia, Oracle, and Palantir Technologies. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.




AI Insights

Kazakhstan establishes Ministry of Artificial Intelligence to spearhead digital nation transformation

Kazakhstan has announced the creation of a Ministry of Artificial Intelligence (AI) and a systemic shift towards a digital state, set to be realised within the next three years. President Kassym-Jomart Tokayev outlined a comprehensive reform plan, highlighting AI as the central driver for transformation across all sectors, from government administration and industry to agriculture and education.


The initiative includes the integration of a digital tenge into the budgetary system. This is reported by the official website of Kazakhstan’s president.


A key component of this new development phase is the creation of a Digital Code, designed to standardise regulations surrounding technologies, digital platforms, data, and AI.


The Code will serve as the foundational legal framework for both business and government. The establishment of the Ministry of Artificial Intelligence and Digital Development is the institutional step underpinning this transformation.


AI integration will encompass all spheres, from the economy and industry to public administration and the social sector. Government services are slated to transition to intelligent platforms, while businesses will be encouraged to adopt digital technologies to enhance productivity and competitiveness.


The initiative includes a social component: the launch of a programme focused on teaching students and schoolchildren the fundamentals of artificial intelligence. Plans are also in place to introduce AI as a separate subject in school curricula for the first time.


Photo: Myvector / iStock


