Two fundamentally important businesses being bolstered by AI are begging to be bought, while another highflier is butting heads with history (and not in a good way).
Roughly 30 years ago, the advent of the internet ushered in a new era of corporate growth. Although it took many years for the internet to mature as a technology and for businesses to figure out how to optimize this innovation to boost their sales and profits, it was a genuine game-changer.
For decades, Wall Street and investors have been waiting for the next technological leap forward. The evolution of artificial intelligence (AI) looks to have answered the call.
AI empowers software and systems with the ability to make split-second decisions, all without the need for human oversight. AI-accelerated data centers are facilitating generative AI solutions for businesses, as well as helping to train large language models (LLMs), such as chatbots and virtual agents.
Based on estimates in Sizing the Prize, the analysts at PwC foresee AI adding $15.7 trillion to the global economy by 2030. If this projection is even remotely close to accurate, it’s going to lead to a lot of winners. But it doesn’t automatically mean every AI stock is worth buying.
As we push into the second half of 2025, two historically cheap artificial intelligence stocks are begging to be bought, while another AI highflier with mounting red flags is worth avoiding in July.
Magnificent AI stock No. 1 that can be purchased with confidence in July: Alphabet
Although the “Magnificent Seven” played an undeniably huge role in lifting Wall Street’s major stock indexes to new highs, one of its seven components remains historically inexpensive. I’m talking about Google parent Alphabet (GOOGL -0.22%) (GOOG -0.27%).
While there’s been some concern about the possibility of LLMs siphoning away internet search share from Google, we haven’t witnessed any evidence of this occurring. Based on data from GlobalStats, Google accounted for a monopoly-like 89.6% of global internet search market share in May 2025. Looking back more than a decade, it’s consistently accounted for an 89% to 93% worldwide share of internet search. This is a foundational cash cow of a segment that’s not going away.
Don’t overlook Alphabet’s strong cyclical ties, either. Approximately 74% of its net sales during the March-ended quarter can be traced to advertising, which includes ads found on YouTube, the No. 2 most-visited social media destination. With economic expansions lasting considerably longer than recessions, Alphabet is ideally positioned to take advantage of long-winded periods of growth and often possesses exceptional ad-pricing power.
However, Alphabet’s most attractive long-term growth prospect is its cloud infrastructure service platform, Google Cloud, which is already pacing more than $49 billion in annual run-rate revenue. Google Cloud is the world’s No. 3 cloud infrastructure service platform by spending, according to an analysis from Canalys, and its sales have the potential to accelerate further with customers gaining access to generative AI solutions.
As promised, there’s also quite the value proposition with shares of Alphabet. As of the closing bell on June 27, shares of the company can be scooped up for 12.7 times forecast cash flow in 2026, as well as a forward price-to-earnings (P/E) multiple of 17.5. For context, this represents a 28% discount to its average multiple to cash flow over the trailing-five-year period and is 20% below its average forward P/E ratio since 2020.
Sensational AI stock No. 2 that can be bought in July: Okta
The second inexpensive artificial intelligence stock that makes for a no-brainer buy in July is none other than cybersecurity company Okta (OKTA -1.35%). While shares hit the skids in late May after the company guided for “just” 9% to 10% full-year sales growth in fiscal 2026 (ended Jan. 31, 2026), there are multiple reasons to believe Okta’s growth story is just getting started.
To begin with, cybersecurity has evolved from an optional to a necessary solution for businesses. Regardless of how well or poorly the U.S. economy and stock market are performing, hackers don’t take time off from trying to steal sensitive data. This means demand for cybersecurity solutions from third-party providers like Okta is only going to increase.
What makes Okta such an intriguing investment is its AI- and machine learning-driven identity verification platform. Though AI platforms aren’t perfect, they offer the ability to become smarter over time at recognizing and responding to potential threats. This should make Okta’s Identity Cloud platform far nimbler and more effective than on-premises solutions.
Something else working in Okta’s favor is its subscription-based operating model. Subscription-fueled models tend to offer high margins (often in the neighborhood of 80%) and keep customers loyal to the platform. Additionally, they provide a layer of operating cash flow predictability that Wall Street and investors tend to appreciate.
Okta’s valuation also makes sense — especially following its double-digit percentage decline in late May. The company’s forward P/E ratio has fallen to 27, and its forward-year cash flow multiple of 21 is well below its average cash flow multiple of 51 over the last half-decade.
The exceptionally pricey AI stock to avoid in July: Palantir Technologies
However, not every artificial intelligence stock can be a winner. Despite adding north of $300 billion in market cap over the last 30 months, data-mining specialist Palantir Technologies (PLTR -4.09%) is the AI stock investors should steer clear of in July.
Don’t get me wrong, Palantir is a rock-solid business. Its government-focused Gotham platform and enterprise-driven Foundry platform have no one-for-one large-scale replacements, which means the company has a sustainable moat. These platforms, both of which incorporate AI and machine learning, also generate highly predictable operating cash flow. Gotham’s government contracts are spread over multiple years, while Foundry is a subscription-based model.
The problem is there’s only so much premium that can be bestowed on a company with a sustainable moat, and Palantir has unquestionably overstepped its bounds. Whereas companies on the leading edge of the innovative curve during the rise of the internet topped out at price-to-sales (P/S) ratios of 30 to 43, Palantir’s P/S ratio handily surpassed 110 last week. No megacap company has ever been able to sustain a multiple this aggressive for an extended period, and it’s unlikely that Palantir is the exception.
Furthermore, there hasn’t been a next-big-thing technology or innovation since (and including) the advent of the internet that avoided a bubble-bursting event. In other words, investors have persistently overestimated the early adoption and/or utility of game-changing technologies for three decades.
Though spending on AI infrastructure has been robust, the simple fact that most businesses aren’t yet optimizing this technology, or generating a profit on their AI investments, signals the growing likelihood of a bubble. If the AI bubble bursts, investor sentiment will weigh heavily on the exceptionally expensive Palantir.
Lastly, the long-term ceiling for Gotham (the company’s most-profitable segment) is lower than investors might realize. Since this AI-driven platform is only available to the U.S. and its immediate allies, Palantir’s customer pool is rather narrow. It’s all the more reason for investors to avoid Palantir Technologies stock in July.
Earlier this month, footage was released of one of Will Smith’s gigs which was allegedly AI-generated.
Snopes agreed that the crowd shots featured “some AI manipulation”.
Eagle-eyed viewers who paused the footage spotted some telltale signs: namely, that the AI ‘fans’ in the video looked less like humans and more like, well, alien creatures in a horror movie who are desperate to suck out your soul. Their hands were elongated and had more fingers than the children of incestuous relationships, while their blurred facial features resembled melted candles in the shape of ghouls.
Nonetheless, it turns out the emotive slogans were real and were held by real Smith fans, such as Patric and Géraldine of Switzerland, who held up a sign saying “‘You Will Make It’ helped me survive cancer. Thx Will”. And to be fair to Smith, it appears that the massive crowds in the video were real: his team had merely used AI to turn still images into short videos.
Green Day laughed at Smith on Instagram, posting a shot of their fans at a gig with the caption: “Don’t need AI for our crowds”.
However, though Smith’s motive seems to have been simply generating AI videos from stills, his is unlikely to be the last example we see of performers using AI footage of fans. Every music artist wants a full-to-bursting, over-emotional stadium crowd that is hysterical with joy at seeing their idol(s). So if you, unlike Smith, can’t get real footage of that, then why not fake it? (Probably because the internet is full of merciless, critical sleuths who are going to roast you until you’re a smoking heap of charred remains.)
Donald Trump’s team have allegedly paid extras to appear at his rallies to fill spare stadium seats, but that’s expensive and also risky as people might not show up – or, even worse for the team, Democrats might turn up. Generating AI footage is far cheaper, even if it burns trees.
You could also make your crowds as attractive, young, unisex, and ethnically diverse as you want – even if the pause button does reveal them to be more horrifying than the zombies in I Am Legend.
Reading materials and fliers at the Sacramento Works job training and resources center in Sacramento on April 23, 2024. The center provides help and resources to job seekers, business and employers in Sacramento County. Photo by Miguel Gutierrez Jr., CalMatters
This story was originally published by CalMatters.
After three years of trying to give Californians the right to know when AI is making a consequential decision about their lives and to appeal when things go wrong, Assemblymember Rebecca Bauer-Kahan said she and her supporters will have to wait again, until next year.
The San Ramon Democrat announced Friday that Assembly Bill 1018, which cleared the Assembly and two Senate committees, has been designated a two-year bill, meaning it can return as part of next year’s legislative session. That move will allow more time for conversations with Gov. Gavin Newsom and more than 70 opponents. The decision came in the final hours of the California legislative session, which ends today.
Her bill would require businesses and government agencies to alert individuals when automated systems are used to make important decisions about them, including for apartment leases, school admissions, and, in the workplace, hiring, firing, promotions, and disciplinary actions. The bill also covers decisions made in education, health care, criminal justice, government benefits, financial services, and insurance.
“This pause reflects our commitment to getting this critical legislation right, not a retreat from our responsibility to protect Californians,” Bauer-Kahan said in a statement shared with CalMatters.
The pause comes at a time when politicians in Washington D.C. continue to oppose AI regulation that they say could stand in the way of progress. Last week, leaders of the nation’s largest tech companies joined President Trump at a White House dinner to further discuss a recent executive order and other initiatives to prevent AI regulation. Earlier this year, Congress tried and failed to pass a moratorium on AI regulation by state governments.
When an automated system makes an error, AB 1018 gives people the right to have that mistake rectified within 60 days. It also reiterates that algorithms must give “full and equal” accommodations to everyone, and cannot discriminate against people based on characteristics like age, race, gender, disability, or immigration status. Developers must carry out impact assessments to, among other things, test for bias embedded in their systems. If an impact assessment is not conducted on an AI system, and that system is used to make consequential decisions about people’s lives, the developer faces fines of up to $25,000 per violation, or legal action by the attorney general, public prosecutors, or the Civil Rights Department.
Amendments made to the bill in recent weeks exempted generative AI models from coverage, which could prevent it from impacting major AI companies or ongoing generative AI pilot projects carried out by state agencies. The bill was also amended to delay a developer auditing requirement to 2030, and to clarify that it is intended to cover systems that evaluate a person and make predictions or recommendations about them.
An intense legislative fight
Samantha Gordon, a chief program officer at TechEquity, a sponsor of the bill, said she’s seen more lobbyists attempt to kill AB 1018 this week in the California Senate than for any other AI bill ever. She said she thinks AB 1018 had a pathway to passage but the decision was made to pause in order to work with the governor, who ends his second and final term next year.
“There’s a fundamental disagreement about whether or not these tools should face basic scrutiny of testing and informing the public that they’re being used,” Gordon said.
Gordon thinks it’s possible tech companies will use their “unlimited amount of money” to fight the bill next year.
“But it’s clear,” she added, “that Americans want these protections — poll after poll shows Americans want strong laws on AI and that voluntary protections are insufficient.”
AB 1018 faced opposition from industry groups, big tech companies, the state’s largest health care provider, venture capital firms, and the Judicial Council of California, a policymaking body for state courts.
A coalition of hospitals, Kaiser Permanente, and health care software and AI company Epic Systems urged lawmakers to vote no on 1018 because they argued the bill would negatively influence patient care, increase costs, and require developers to contract with third-party auditors to assess compliance by 2030.
A coalition of business groups opposed the bill because of its generalized language and concern that compliance could be expensive for businesses and taxpayers. The group TechNet, which seeks to shape policy nationwide and whose members include companies like Apple, Google, Nvidia, and OpenAI, argued in a video ad campaign that AB 1018 would stifle job growth, raise costs, and punish the fastest-growing industries in the state.
Venture capital firm Andreessen Horowitz, whose founder Marc Andreessen supported the re-election of President Trump, opposed the bill, citing costs and the fact that it seeks to regulate AI in California and beyond.
In an alert sent to lawmakers this week urging a no vote, a policy leader in the state judiciary said the burden of compliance with the bill is so great that the judicial branch risks losing the ability to use pretrial risk assessment tools, such as those that assign recidivism scores to sex offenders and violent felons. The state Judicial Council, which makes policy for California courts, estimates that the bill’s passage would cost the state up to $300 million a year. Similar points were made in a letter to lawmakers last month.
Why backers keep fighting
Exactly how much AB 1018 could cost taxpayers is still a big unknown, due to contradictory information from state government agencies. An analysis by California legislative staff found that if the bill passes, it could cost local agencies, state agencies, and the state judicial branch hundreds of millions of dollars. But a California Department of Technology report covered exclusively by CalMatters concluded in May that no state agencies use high-risk automated systems, despite historical evidence to the contrary. Bauer-Kahan said last month that she was surprised by the financial impact estimates because CalMatters reporting found that use of automated decision-making systems was not widespread at the state level.
Support for the bill has come from unions who pledged to discuss AI in bargaining agreements, including the California Nurses Association and the Service Employees International Union, and from groups like the Citizen’s Privacy Coalition, Consumer Reports, and the Consumer Federation of California.
Coauthors of AB 1018 include major Democratic proponents of AI regulation in the California Legislature: Assembly majority leader Cecilia Aguilar-Curry of Davis, author of a bill, now on the governor’s desk, that seeks to stop algorithms from raising prices on consumer goods; Chula Vista Senator Steve Padilla, whose bill to protect kids from companion chatbots awaits the governor’s decision; and San Diego Assemblymember Chris Ward, who previously helped pass a law requiring state agencies to disclose use of high-risk automated systems and this year sought to pass a bill to prevent pricing based on your personal information.
The anti-discrimination language in AB 1018 is important because tech companies and their customers often see themselves as exempt from discrimination law if the discrimination is done by automated systems, said Inioluwa Deborah Raji, an AI researcher at UC Berkeley who has audited algorithms for discrimination and advised government officials in Sacramento and Washington D.C. about how AI can harm people. She questions whether state agencies have the resources to enforce AB 1018, but also likes the disclosure requirement in the bill because “I think people deserve to know, and there’s no way that they can appeal or contest without it.”
“I need to know that an AI system was the reason I wasn’t able to rent this house. Then I can at an individual level appeal and contest. There’s something very valuable about that.”
“It’s disappointing this [AB 1018] isn’t the priority for AI policy folks at this time,” she told CalMatters. “I truly hope the fourth time is the charm.”
A number of other bills with union backing that sought to protect workers from artificial intelligence were also considered by lawmakers this session. For the third year in a row, a bill to require a human driver in autonomous commercial delivery trucks failed to become law. Assembly Bill 1331, which sought to prevent surveillance of workers with AI-powered tools in private spaces like locker or lactation rooms and placed limitations on surveillance in breakrooms, also failed to pass.
But another measure, Senate Bill 7, passed the Legislature and is headed to the governor. It requires employers to disclose plans to use an automated system 30 days before doing so and allows workers to make annual requests for data used by an employer for discipline or firing. In recent days, author Senator Jerry McNerney amended the bill to remove the right to appeal decisions made by AI and to eliminate a prohibition against employers making predictions about a worker’s political beliefs, emotional state, or neural data. The California Labor Federation supported similar bills in Massachusetts, Vermont, Connecticut, and Washington.