AI Research
2 Artificial Intelligence (AI) Stocks to Buy Now, According to Wall Street

-
The Nasdaq Composite returned 12% annually during the last two decades, and investors can reasonably expect similar returns in the future.
-
AppLovin is an adtech company that has differentiated itself from peers with superior targeting capabilities, driven by its artificial intelligence (AI) recommendation engine called Axon.
-
MongoDB develops the leading document-oriented database, a technology that lends itself to AI applications, and the current valuation is cheap versus the three-year average.
Anticipating what the stock market will do in any given year is impossible, but investors can lean into long-term trends. For instance, the Nasdaq Composite (NASDAQINDEX: ^IXIC) soared 875% in the last 20 years, compounding at 12% annually, due to strength in technology stocks. That period encompasses such a broad range of market and economic conditions that similar returns are quite plausible in the future.
Indeed, the rise of artificial intelligence (AI) should be a tailwind for the technology sector, and most Wall Street analysts anticipate substantial gains in these Nasdaq stocks:
-
Among 31 analysts who follow AppLovin (NASDAQ: APP), the median target price of $470 per share implies 40% upside from the current share price of $335.
-
Among 39 analysts that follow MongoDB (NASDAQ: MDB), the median target price of $275 per share implies 34% upside from the current share price of $205.
Here’s what investors should know about AppLovin and MongoDB.
AppLovin builds adtech software that helps developers market and monetize applications across mobile and connected TV campaigns. The company is also piloting ad tech tools for e-commerce brands. Importantly, its platform leans on a sophisticated AI engine called Axon to optimize campaign results by matching advertiser demand with the best publisher inventory.
AppLovin has put a great deal of effort into building its Axon recommendation engine. The company started acquiring video game studios several years ago to train the underlying machine learning models that optimize targeting, and subsequent upgrades have encouraged media buyers to spend more on the platform over time.
Morgan Stanley analyst Brian Nowak recently called AppLovin the “best executor” in the adtech industry. In particular, he called attention to superior ad targeting capabilities driven by its “best-in-class” machine learning engine, which has led to outperformance versus the broader in-app advertising market since 2023.
AI Research
3 Arguments Against AI in the Classroom

Generative artificial intelligence is here to stay, and K-12 schools need to find ways to use the technology for the benefit of teaching and learning. That’s what many educators, technology companies, and AI advocates say.
In response, more states and districts are releasing guidance and policies around AI use in the classroom. Educators are increasingly experimenting with the technology, with some saying that it has been a big time saver and has made the job more manageable.
But not everyone agrees. There are educators who are concerned that districts are buying into the AI hype too quickly and without enough skepticism.
A nationally representative EdWeek Research Center survey of 559 K-12 educators conducted during the summer found that they are split on whether AI platforms will have a negative or positive impact on teaching and learning in the next five years: 47% say AI’s impact will be negative, while 43% say it will be positive.
Education Week talked to three veteran teachers who are not using generative AI regularly in their work and are concerned about the potential negative effects the technology will have on teaching and learning.
Here’s what they think about using generative AI in K-12.
AI provides ‘shortcuts’ that are not conducive for learning
Dylan Kane, a middle school math teacher at Lake County High School in Leadville, Colo., isn’t “categorically against AI,” he said.
He has experimented with the technology personally, using it to help him improve his Spanish-language skills. AI is a “half decent” Spanish tutor, if you understand its limitations, he said. For his teaching job, Kane has experimented with AI tools to generate student materials like many other teachers, but it takes too many iterations of prompting to generate something he would actually put in front of his classes.
“I will do a better job just doing it myself and probably take less time to do so,” said Kane, who is in his 14th year of teaching. Creating student materials himself means he can be “more intentional” about the questions he asks, how they’re sequenced, how they fit together, how they build on each other, and what students already know.
His biggest concern is how generative AI will affect educators and students’ critical-thinking skills. Too often, people are using these tools to take “shortcuts,” he said.
“If I want students to learn something, I need them to be thinking about it and not finding shortcuts to avoid thinking,” Kane said.
The best way to prepare students for an AI-powered future is to “give them a broad and deep collection of knowledge about the world and skills in literacy, math, history and civics, and science,” so they’ll have the knowledge they need to understand if an AI tool is providing them with a helpful answer, he said.
That’s true for teachers, too, Kane said. The reason he can evaluate whether AI-generated material is accurate and helpful is because of his years of experience in education.
“One of my hesitations about using large language models is that I won’t be developing skills as a teacher and thinking really hard about what things I put in front of students and what I want them to be learning,” Kane said. “I worry that if I start leaning heavily on large language models, that it will stunt my growth as a teacher.”
And the fact that teachers have to use generative AI tools to create student materials “points to larger issues in the teaching profession” around the curricula and classroom resources teachers are given, Kane said. AI is not “an ideal solution. That’s a Band-Aid for a larger problem.”
Kane’s open to using AI tools. For instance, he said he finds generative AI technology helpful for writing word problems. But educators should “approach these things with a ton of skepticism and really ask ourselves: ‘Is this better than what we should be doing?’”
Experts and leaders haven’t provided good justifications for AI use in K-12
Jed Williams, a high school math and science teacher in Belmont, Mass., said he hasn’t heard any good justifications for why generative AI should be implemented in schools.
The way AI is being presented to teachers tends to be “largely uncritical,” said Williams, who teaches computer science, physics, and robotics at Belmont High School. Often, professional development opportunities about AI don’t provide a “critical analysis” of the technology and just “check the box” by mentioning that AI tools have downsides, he said.
For instance, one professional development session he attended only spent “a few seconds” on the downsides of AI tools, Williams said. The session covered the issue of overreliance on AI tools, but Williams criticized it for not talking about “labor exploitation, overuse of resources, sacrificing the privacy of students and faculty,” he said.
“We have a responsibility to be skeptical about technologies that we bring into the classroom,” Williams said, especially because there’s a long history of ed-tech adoption failures.
Williams, who has been teaching since 2006, is also concerned that AI tools could decrease students’ cognitive abilities.
“So much of learning is being put into a situation that is cognitively challenging,” he said. “These tools, fundamentally, are built on relieving the burden of cognitive challenge.
“Especially in introductory courses, where students aren’t familiar with programming and you want them to try new things and experiment and explore, why would you give them this tool that completely removes those aspects that are fundamental to learning?” Williams said.
Williams is also worried that a rushed implementation of AI tools would sacrifice students and teachers’ privacy and use them as “experimental subjects in developing technologies for tech companies.”
Education leaders “have a tough job,” Williams said. He understands the pressure they feel around implementing AI, but he hopes they give it “critical thought.”
Decisionmakers need to be clear about what technology is being proposed, how they anticipate teachers and students using it, what the goal of its use is, and why they think it’s a good technology to teach students how to use, Williams said.
“If somebody has a good answer for that, I’m very happy to hear proposals on how to incorporate these things in a healthy, safe way,” he said.
Educators shouldn’t fall for the ‘fallacy’ that AI is inevitable
Elizabeth Bacon, a middle school computer science teacher in California, hasn’t found any use cases with generative AI tools that she feels will be beneficial for her work.
“I would rather do my own lesson plan,” said Bacon, who has been teaching for more than 20 years. “I have an idea of what I want the students to learn, of what’s interesting to them, and where they are and the entry points for them to engage in it.”
Teachers have a lot of pressure to do more with less. That’s why Bacon said she doesn’t judge other teachers who want to use AI to get the job done. It’s “a systemic problem,” but teaching and learning shouldn’t be replaced by machines, she said.
Bacon believes it’s “particularly dangerous” for middle school students to be using “a machine emulating a person.” Students are still developing their character, their empathy, their ability to socialize with peers and work collectively toward a goal, she said, and a chatbot would undermine that.
She can foresee using generative AI tools to explain to her students what large language models are. It’s important for them to learn about generative AI, that it’s a statistical model predicting the next likely word based on data it’s been trained on, that there’s no meaning [or feelings] behind it, Bacon said.
Last school year, she asked her high school students what they wanted to know about AI. Their answers: the technology’s social and environmental impacts.
Bacon doesn’t think educators should fall for the “fallacy” that AI is the inevitable future because technology companies are the ones saying that and they have an incentive to say that, she said.
“Educators have basically been told, in a lot of ways, ‘don’t trust your own instincts about what’s right for your students, because [technology companies are] going to come in and tell you what’s going to be good for your students,” she said.
It’s discouraging to see that a lot of the AI-related professional development events she’s attended have “essentially been AI evangelism” and “product marketing,” she said. There should be more thought about why this technology is necessary in K-12, she said.
Technology experts have talked up AI’s potential to increase productivity and efficiency. But as an educator, “efficiency is not one of my values,” Bacon said.
“My value is supporting students, meeting them where they are, taking the time it takes to connect with these students, taking the time that it takes to understand their needs,” she said. “As a society, we have to take a hard look: Do we value education? Do we value doing our own thinking?”
AI Research
University Of Utah Teams With HPE, NVIDIA To Boost AI Research

The University of Utah (the U) is planning to join forces with two powerhouse tech firms to accelerate research and discovery using artificial intelligence (AI). The agreement with Hewlett Packard Enterprise (HPE) and AI chipmaker NVIDIA will amplify the U’s capacity for understanding cancer, Alzheimer’s disease, mental health, and genetics. The initiative is projected to enable medical breakthroughs, driving innovation, and scientific discovery across disciplines.
“The U has a proud legacy of pioneering technological breakthroughs,” said Taylor Randall, president of the University of Utah. “Our goal is to make the state awash in computing power by building a robust AI ecosystem benefiting our entire system of higher education, driving research to find new cures, and igniting Utah’s entrepreneurial spirit.”
The partnership, which includes a $50 million investment of funds from both public and philanthropic sources, is projected to increase the U’s computing capacity 3.5-fold. The flagship school’s Board of Trustees gave preliminary approval to the proposed arrangement on September 9.
The structure paves a path for substantial advances in computing storage and infrastructure required for Utah-based projects in AI and innovation. The goal is to lay the foundation for a scalable AI ecosystem available to researchers, learners, and entrepreneurs across Utah. The multi-year initiative would build upon existing capabilities in AI, giving the U access to substantially more computing power.
Brynn and Peter Huntsman along with the Huntsman Family Foundation will provide a lead philanthropic gift to the U that is intended to initiate the project and help encourage other supporters to make investments required to move the work forward through AI “supercomputer” systems designed to handle enormous processing and storage needs. The university will seek remaining funds from the state of Utah and other sources.
“This AI initiative will accelerate world class cancer research that enhances capabilities in ways we hardly imagined just a few years ago,” said Peter Huntsman, CEO and chairman, Huntsman Cancer Foundation. “Huntsman Cancer Foundation recently announced our commitment to support the expansion of the educational, research, and clinical care capacity of the world renown Huntsman Cancer Institute in Vineyard, Utah, which will serve as a hub for cancer AI research. These investments will speed discoveries and enhance the state of Utah’s leadership in AI education and economic opportunity.”
Mental health will be a major focus of the AI research endeavor.
“As the Huntsman Mental Health Institute opens its new 185,000-square-foot Translational Research Building this coming year, we’re looking forward to increasing momentum around mental health research, including the impact of this technology,” said Christena Huntsman Durham, Huntsman Mental Health Foundation CEO and co-chair. “We know so many people are struggling with mental health challenges; we’re thrilled we will be able to move even faster to get help to those who need it most.”
Check out all the latest news related to Utah economic development, corporate relocation, corporate expansion and site selection.
AI Research
F5 to acquire AI security firm CalypsoAI for $180 million

F5, a Seattle-based application delivery and security company, announced Thursday it will acquire Dublin-based CalypsoAI for $180 million in cash, highlighting the mounting security challenges enterprises face as they rapidly integrate artificial intelligence into their operations.
The acquisition comes as companies across industries rush to deploy generative AI systems while grappling with new categories of cybersecurity threats that traditional security tools struggle to address. CalypsoAI, founded in 2018, specializes in protecting AI systems against emerging attack methods, including prompt injection and jailbreak attacks.
“AI is redefining enterprise architecture and the attack surface companies must defend,” said François Locoh-Donou, F5’s president and CEO. The company plans to integrate CalypsoAI’s capabilities into its Application Delivery and Security Platform to create what it describes as a comprehensive AI security solution.
Companies are embedding AI into products and operations at an unprecedented pace, but this rapid adoption has created compliance gaps and heightened regulatory scrutiny. CalypsoAI addresses these challenges through what the company calls “model-agnostic” security, providing protection regardless of which AI models or cloud providers enterprises use.
The platform conducts automated red-team testing against thousands of attack scenarios monthly, generating risk assessments and implementing real-time guardrails to prevent data leakage and policy violations.
“Enterprises want to move fast with AI while reducing the risk of data leaks, unsafe outputs, or compliance failures,” said CalypsoAI CEO Donnchadh Casey. The company’s approach focuses on the inference layer where AI models process requests, rather than securing the models themselves.
The acquisition comes during a flurry of similar moves by established companies in the cybersecurity space that are looking to add AI-powered offerings to their customers.
F5 has also been active this year with what it considers strategic purchases. The company acquired San Francisco-based Fletch in June and observability firm MantisNet in August, demonstrating a pattern of building capabilities through acquisition rather than internal development.
The deal is expected to close by Sept. 30.
-
Business2 weeks ago
The Guardian view on Trump and the Fed: independence is no substitute for accountability | Editorial
-
Tools & Platforms1 month ago
Building Trust in Military AI Starts with Opening the Black Box – War on the Rocks
-
Ethics & Policy2 months ago
SDAIA Supports Saudi Arabia’s Leadership in Shaping Global AI Ethics, Policy, and Research – وكالة الأنباء السعودية
-
Events & Conferences4 months ago
Journey to 1000 models: Scaling Instagram’s recommendation system
-
Jobs & Careers2 months ago
Mumbai-based Perplexity Alternative Has 60k+ Users Without Funding
-
Podcasts & Talks2 months ago
Happy 4th of July! 🎆 Made with Veo 3 in Gemini
-
Education2 months ago
Macron says UK and France have duty to tackle illegal migration ‘with humanity, solidarity and firmness’ – UK politics live | Politics
-
Education2 months ago
VEX Robotics launches AI-powered classroom robotics system
-
Funding & Business2 months ago
Kayak and Expedia race to build AI travel agents that turn social posts into itineraries
-
Podcasts & Talks2 months ago
OpenAI 🤝 @teamganassi