AI Research
5 Top Artificial Intelligence Stocks to Buy in August

These market leaders continue to deliver strong growth.
Artificial intelligence (AI) continues to reshape the world we live in, which can be both exciting and scary. It’s also reshaping the stock market, and it is certainly an area you want to invest in.
Let’s look at the stocks of five AI leaders that would make top stock buys this month.
1. Nvidia
Nvidia (NVDA -0.85%) remains the king of AI infrastructure. Its graphics processing units (GPUs) power most AI workloads worldwide, and in the first quarter, it commanded an astonishing 92% of the GPU market. What really sets it apart is its CUDA software platform, which it seeded in universities and research labs years before AI went mainstream. That early push created a generation of developers trained on its tools and an ecosystem of libraries built on its platform, giving it a moat that rivals struggle to cross.
Nvidia has also accelerated its product cycle, planning new chip launches annually to stay ahead of the competition. Its growth opportunities also go beyond data centers, with the automotive market another big opportunity, thanks to the rise of self-driving and robotaxis.
Nvidia’s mix of market dominance, software moat, and expansion into new markets keeps it firmly at the center of AI.
2. Palantir Technologies
Palantir Technologies (PLTR -2.14%) started as a critical analytics partner to U.S. government agencies but is now making its mark in commercial markets. Its Artificial Intelligence Platform (AIP) integrates data from numerous sources into an “ontology,” allowing AI models to produce clear, actionable results.
AIP is essentially becoming an AI operating system, making it a vital platform as companies begin to use AI in their operations. The strength of AIP was evident in its Q2 results: the company’s U.S. commercial revenue surged 93%, its total deal value more than doubled, and its customer base climbed 43%.
Given the breadth of use cases across very different industries that AIP can handle, Palantir has a long runway of growth in front of it. The company has continued to see accelerating revenue growth, and the best part is that many of its customers are still in their early stages of usage.
As an essential component of the emerging AI economy, Palantir has the potential to grow into one of the largest companies in the world.
3. Alphabet
Alphabet (GOOGL 0.46%) (GOOG 0.52%) is proving that AI can strengthen its core businesses. Search has gained momentum with AI Overviews, which is now being used by over 2 billion people a month, helping drive a 12% year-over-year increase in search revenue last quarter. Google Cloud is another major beneficiary of AI, with its revenue jumping 32% and operating profit more than doubling in Q2, thanks to strong AI demand on its Vertex platform.
Another area that is often overlooked is Alphabet’s Tensor Processing Units (TPUs). As inference performance per dollar starts becoming one of the most important factors in running AI models, Alphabet has a nice advantage with its custom chips.
In addition to search and cloud computing, Alphabet is also seeing solid contributions from its other businesses. YouTube ad revenue grew 13% last quarter, with Shorts leading the way. Meanwhile, Waymo is picking up steam, rolling out its robotaxi services to new cities across the U.S.
From a valuation perspective, Alphabet is one of the most attractive AI stocks in the market, trading at a forward P/E just over 20. This makes it a must-own stock.
4. Broadcom
Rather than competing directly with Nvidia in GPUs, chipmaker Broadcom (AVGO -1.57%) is playing to its strengths in AI networking and custom chip design. Its Ethernet switches and other networking components are critical for moving vast amounts of data between AI clusters. Demand here is booming, with its AI networking revenue up 70% in Q1.
The real prize, though, may be its work on custom application-specific integrated circuits (ASICs). Broadcom helped develop Alphabet’s TPUs and is now designing chips for multiple hyperscalers (companies with massive data centers), with management estimating its top three customers alone could represent a $60 billion to $90 billion opportunity in fiscal 2027.
Its recent acquisition of VMware adds another growth lever, with its Cloud Foundation platform helping enterprises manage AI workloads across hybrid cloud environments. Between its leadership in data center networking components, custom chip expertise, and virtualization software, Broadcom is becoming one of the most important players in AI infrastructure.
5. GitLab
GitLab (GTLB 8.18%) is evolving from a code repository into a full-fledged AI-powered software development platform. Its GitLab 18 release brought more than 30 upgrades, including Duo Agent, which can automate testing, deployment, security, and monitoring. That’s important, because developers spend only a fraction of their time writing code. Automating the rest of the workflow can dramatically increase productivity.
GitLab has delivered steady 25%-plus revenue growth since going public, with Q1 sales rising 27% year over year. The value of its platform in an AI-driven development world opens the door to a possible shift from seat-based to consumption-based pricing, which could drive revenue even higher. With AI changing how software is built, GitLab’s end-to-end approach positions it as a key player in enterprise software development.
As investors worry about the impact of AI on software, GitLab’s stock has fallen to a very attractive valuation: a forward price-to-sales (P/S) ratio of about 7 based on 2025 analyst estimates.
Congress ramps up push to arm consumer product regulators with AI tools

A move to empower federal consumer product regulators with artificial intelligence tools picked up steam this week with the introduction of a bipartisan Senate bill whose companion has already passed the House.
The Consumer Safety Technology Act from Sens. John Curtis, R-Utah, and Lisa Blunt Rochester, D-Del., calls on the Consumer Product Safety Commission to create a pilot program that uses AI to track product injury trends, identify hazards, monitor recalls and pinpoint which products fall short of critical standards.
The legislation also directs the Federal Trade Commission and the Commerce secretary to deliver a report on blockchain technology and tokens.
“The world is changing fast, and consumer protection must keep pace,” Curtis said in a press release Thursday. “This bill puts the right tools in the hands of experts — employing AI to catch dangerous products before they hurt families, exploring blockchain to strengthen supply chains, and making sure digital tokens don’t become a new avenue for fraud. This is about keeping people safe while helping American innovation thrive.”
The House version of the bill, introduced in March by Rep. Darren Soto, cleared the lower chamber in July. The Florida Democrat said at the time that the legislation would “help make the CPSC more efficient.”
“The reality is, the crooks are already using AI,” Soto said. “The cops on the beat need to be able to use this, too.”
The Senate bill directs the CPSC to seek out a variety of stakeholders to consult on the agency’s AI pilot, including cybersecurity experts, technologists, data scientists, machine-learning specialists, retailers, consumer product safety groups and manufacturers.
Within a year of the pilot’s conclusion, the CPSC would be charged with submitting a report to Congress detailing its findings and data, “including the extent to which the use of artificial intelligence improved the ability of the Commission to advance the consumer product safety mission,” the bill states.
The blockchain section of the bill orders the FTC and Commerce Department to study how the technology can be leveraged to protect consumers by guarding against fraud attempts and other unfair and deceptive practices. There would also be an examination of what federal regulations could be modified to spur blockchain adoption.
A separate report would look into unfair or deceptive acts and practices tied to transactions via digital tokens. A fact sheet from Curtis said that provision is aimed at “ensuring consumers are protected without stifling responsible innovation.”
Blunt Rochester said in a statement that the government “must be able to keep up with new and emerging technologies, especially when it comes to consumer safety.”
“The Consumer Safety Technology Act would allow the Consumer Product Safety Commission to explore using artificial intelligence to further its critical goals,” she continued. “I am grateful to work alongside Senator Curtis on this legislation and look forward to getting it over the finish line.”
3 Arguments Against AI in the Classroom

Generative artificial intelligence is here to stay, and K-12 schools need to find ways to use the technology for the benefit of teaching and learning. That’s what many educators, technology companies, and AI advocates say.
In response, more states and districts are releasing guidance and policies around AI use in the classroom. Educators are increasingly experimenting with the technology, with some saying that it has been a big time saver and has made the job more manageable.
But not everyone agrees. There are educators who are concerned that districts are buying into the AI hype too quickly and without enough skepticism.
A nationally representative EdWeek Research Center survey of 559 K-12 educators conducted during the summer found that they are split on whether AI platforms will have a negative or positive impact on teaching and learning in the next five years: 47% say AI’s impact will be negative, while 43% say it will be positive.
Education Week talked to three veteran teachers who are not using generative AI regularly in their work and are concerned about the potential negative effects the technology will have on teaching and learning.
Here’s what they think about using generative AI in K-12.
AI provides ‘shortcuts’ that are not conducive for learning
Dylan Kane, a middle school math teacher at Lake County High School in Leadville, Colo., isn’t “categorically against AI,” he said.
He has experimented with the technology personally, using it to help him improve his Spanish-language skills. AI is a “half decent” Spanish tutor, if you understand its limitations, he said. For his teaching job, Kane has experimented with AI tools to generate student materials like many other teachers, but it takes too many iterations of prompting to generate something he would actually put in front of his classes.
“I will do a better job just doing it myself and probably take less time to do so,” said Kane, who is in his 14th year of teaching. Creating student materials himself means he can be “more intentional” about the questions he asks, how they’re sequenced, how they fit together, how they build on each other, and what students already know.
His biggest concern is how generative AI will affect educators’ and students’ critical-thinking skills. Too often, people are using these tools to take “shortcuts,” he said.
“If I want students to learn something, I need them to be thinking about it and not finding shortcuts to avoid thinking,” Kane said.
The best way to prepare students for an AI-powered future is to “give them a broad and deep collection of knowledge about the world and skills in literacy, math, history and civics, and science,” so they’ll have the knowledge they need to understand if an AI tool is providing them with a helpful answer, he said.
That’s true for teachers, too, Kane said. The reason he can evaluate whether AI-generated material is accurate and helpful is because of his years of experience in education.
“One of my hesitations about using large language models is that I won’t be developing skills as a teacher and thinking really hard about what things I put in front of students and what I want them to be learning,” Kane said. “I worry that if I start leaning heavily on large language models, that it will stunt my growth as a teacher.”
And the fact that teachers have to use generative AI tools to create student materials “points to larger issues in the teaching profession” around the curricula and classroom resources teachers are given, Kane said. AI is not “an ideal solution. That’s a Band-Aid for a larger problem.”
Kane’s open to using AI tools. For instance, he said he finds generative AI technology helpful for writing word problems. But educators should “approach these things with a ton of skepticism and really ask ourselves: ‘Is this better than what we should be doing?’”
Experts and leaders haven’t provided good justifications for AI use in K-12
Jed Williams, a high school math and science teacher in Belmont, Mass., said he hasn’t heard any good justifications for why generative AI should be implemented in schools.
The way AI is being presented to teachers tends to be “largely uncritical,” said Williams, who teaches computer science, physics, and robotics at Belmont High School. Often, professional development opportunities about AI don’t provide a “critical analysis” of the technology and just “check the box” by mentioning that AI tools have downsides, he said.
For instance, one professional development session he attended only spent “a few seconds” on the downsides of AI tools, Williams said. The session covered the issue of overreliance on AI tools, but Williams criticized it for not talking about “labor exploitation, overuse of resources, sacrificing the privacy of students and faculty,” he said.
“We have a responsibility to be skeptical about technologies that we bring into the classroom,” Williams said, especially because there’s a long history of ed-tech adoption failures.
Williams, who has been teaching since 2006, is also concerned that AI tools could decrease students’ cognitive abilities.
“So much of learning is being put into a situation that is cognitively challenging,” he said. “These tools, fundamentally, are built on relieving the burden of cognitive challenge.
“Especially in introductory courses, where students aren’t familiar with programming and you want them to try new things and experiment and explore, why would you give them this tool that completely removes those aspects that are fundamental to learning?” Williams said.
Williams is also worried that a rushed implementation of AI tools would sacrifice students’ and teachers’ privacy and use them as “experimental subjects in developing technologies for tech companies.”
Education leaders “have a tough job,” Williams said. He understands the pressure they feel around implementing AI, but he hopes they give it “critical thought.”
Decisionmakers need to be clear about what technology is being proposed, how they anticipate teachers and students using it, what the goal of its use is, and why they think it’s a good technology to teach students how to use, Williams said.
“If somebody has a good answer for that, I’m very happy to hear proposals on how to incorporate these things in a healthy, safe way,” he said.
Educators shouldn’t fall for the ‘fallacy’ that AI is inevitable
Elizabeth Bacon, a middle school computer science teacher in California, hasn’t found any use cases with generative AI tools that she feels will be beneficial for her work.
“I would rather do my own lesson plan,” said Bacon, who has been teaching for more than 20 years. “I have an idea of what I want the students to learn, of what’s interesting to them, and where they are and the entry points for them to engage in it.”
Teachers have a lot of pressure to do more with less. That’s why Bacon said she doesn’t judge other teachers who want to use AI to get the job done. It’s “a systemic problem,” but teaching and learning shouldn’t be replaced by machines, she said.
Bacon believes it’s “particularly dangerous” for middle school students to be using “a machine emulating a person.” Students are still developing their character, their empathy, their ability to socialize with peers and work collectively toward a goal, she said, and a chatbot would undermine that.
She can foresee using generative AI tools to explain to her students what large language models are. It’s important for them to learn about generative AI, that it’s a statistical model predicting the next likely word based on data it’s been trained on, that there’s no meaning [or feelings] behind it, Bacon said.
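That description of a language model — a statistical system predicting the next likely word from its training data — can be illustrated with a toy example. The sketch below is a hypothetical bigram counter, vastly simpler than a real large language model, but it shows the same underlying idea: the next word is chosen purely from frequency counts, with no meaning behind the choice.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent next word, or None if the word was never seen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A tiny made-up training corpus.
corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" — it follows "the" most often
print(predict_next(model, "sat"))  # "on" — it always follows "sat"
```

Real models use vastly larger contexts and neural networks rather than raw counts, but the output is still a statistical prediction, which is the point Bacon wants her students to grasp.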
Last school year, she asked her high school students what they wanted to know about AI. Their answers: the technology’s social and environmental impacts.
Bacon doesn’t think educators should fall for the “fallacy” that AI is the inevitable future because technology companies are the ones saying that and they have an incentive to say that, she said.
“Educators have basically been told, in a lot of ways, ‘don’t trust your own instincts about what’s right for your students, because [technology companies are] going to come in and tell you what’s going to be good for your students,’” she said.
It’s discouraging to see that a lot of the AI-related professional development events she’s attended have “essentially been AI evangelism” and “product marketing,” she said. There should be more thought about why this technology is necessary in K-12, she said.
Technology experts have talked up AI’s potential to increase productivity and efficiency. But as an educator, “efficiency is not one of my values,” Bacon said.
“My value is supporting students, meeting them where they are, taking the time it takes to connect with these students, taking the time that it takes to understand their needs,” she said. “As a society, we have to take a hard look: Do we value education? Do we value doing our own thinking?”
University Of Utah Teams With HPE, NVIDIA To Boost AI Research

The University of Utah (the U) is planning to join forces with two powerhouse tech firms to accelerate research and discovery using artificial intelligence (AI). The agreement with Hewlett Packard Enterprise (HPE) and AI chipmaker NVIDIA will amplify the U’s capacity for understanding cancer, Alzheimer’s disease, mental health, and genetics. The initiative is projected to enable medical breakthroughs and drive innovation and scientific discovery across disciplines.
“The U has a proud legacy of pioneering technological breakthroughs,” said Taylor Randall, president of the University of Utah. “Our goal is to make the state awash in computing power by building a robust AI ecosystem benefiting our entire system of higher education, driving research to find new cures, and igniting Utah’s entrepreneurial spirit.”
The partnership, which includes a $50 million investment of funds from both public and philanthropic sources, is projected to increase the U’s computing capacity 3.5-fold. The flagship school’s Board of Trustees gave preliminary approval to the proposed arrangement on September 9.
The structure paves a path for substantial advances in computing storage and infrastructure required for Utah-based projects in AI and innovation. The goal is to lay the foundation for a scalable AI ecosystem available to researchers, learners, and entrepreneurs across Utah. The multi-year initiative would build upon existing capabilities in AI, giving the U access to substantially more computing power.
Brynn and Peter Huntsman, along with the Huntsman Family Foundation, will provide a lead philanthropic gift to the U intended to launch the project and encourage other supporters to make the investments required to move the work forward through AI “supercomputer” systems designed to handle enormous processing and storage needs. The university will seek remaining funds from the state of Utah and other sources.
“This AI initiative will accelerate world-class cancer research that enhances capabilities in ways we hardly imagined just a few years ago,” said Peter Huntsman, CEO and chairman, Huntsman Cancer Foundation. “Huntsman Cancer Foundation recently announced our commitment to support the expansion of the educational, research, and clinical care capacity of the world-renowned Huntsman Cancer Institute in Vineyard, Utah, which will serve as a hub for cancer AI research. These investments will speed discoveries and enhance the state of Utah’s leadership in AI education and economic opportunity.”
Mental health will be a major focus of the AI research endeavor.
“As the Huntsman Mental Health Institute opens its new 185,000-square-foot Translational Research Building this coming year, we’re looking forward to increasing momentum around mental health research, including the impact of this technology,” said Christena Huntsman Durham, Huntsman Mental Health Foundation CEO and co-chair. “We know so many people are struggling with mental health challenges; we’re thrilled we will be able to move even faster to get help to those who need it most.”