History Says the Nasdaq Will Soar: 2 Artificial Intelligence (AI) Stocks to Buy Now, According to Wall Street

Most Wall Street analysts see substantial upside in these technology stocks.

Anticipating what the stock market will do in any given year is impossible, but investors can lean into long-term trends. For instance, the Nasdaq Composite (^IXIC 0.09%) soared 875% in the last 20 years, compounding at 12% annually, due to strength in technology stocks. That period encompasses such a broad range of market and economic conditions that similar returns are quite plausible in the future.

Indeed, the rise of artificial intelligence (AI) should be a tailwind for the technology sector, and most Wall Street analysts anticipate substantial gains in these Nasdaq stocks:

  • Among 31 analysts who follow AppLovin (APP -1.86%), the median target price of $470 per share implies 40% upside from the current share price of $335.
  • Among 39 analysts who follow MongoDB (MDB -3.48%), the median target price of $275 per share implies 34% upside from the current share price of $205.
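
For readers who want to verify those upside figures, here is a minimal Python sketch of the arithmetic, using only the target and current prices quoted above:

```python
def implied_upside(target_price: float, current_price: float) -> float:
    """Percentage upside implied by a median target price."""
    return (target_price / current_price - 1) * 100

# Figures quoted in the list above
print(f"AppLovin: {implied_upside(470, 335):.0f}% upside")  # ~40%
print(f"MongoDB:  {implied_upside(275, 205):.0f}% upside")  # ~34%
```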

Here’s what investors should know about AppLovin and MongoDB.


AppLovin: 40% upside implied by the median target price

AppLovin builds adtech software that helps developers market and monetize applications across mobile and connected TV campaigns. The company is also piloting adtech tools for e-commerce brands. Importantly, its platform leans on a sophisticated AI engine called Axon to optimize campaign results by matching advertiser demand with the best publisher inventory.

AppLovin has put a great deal of effort into building its Axon recommendation engine. The company started acquiring video game studios several years ago to train the underlying machine learning models that optimize targeting, and subsequent upgrades have encouraged media buyers to spend more on the platform over time.

Morgan Stanley analyst Brian Nowak recently called AppLovin the “best executor” in the adtech industry. In particular, he called attention to superior ad targeting capabilities driven by its “best-in-class” machine learning engine, which has led to outperformance versus the broader in-app advertising market since 2023.

AppLovin reported strong first-quarter financial results. Revenue increased 40% to $1.4 billion, as strong sales growth in the advertising segment offset a decline in sales in the mobile games segment. Generally accepted accounting principles (GAAP) net income increased 149% to $1.67 per diluted share. And management guided for 69% advertising sales growth in the second quarter.

Wall Street estimates AppLovin’s earnings will increase at 53% annually through 2026. That makes the current valuation of 61 times earnings look rather inexpensive: it implies a price/earnings-to-growth (PEG) ratio of roughly 1.2. Investors should pounce on the opportunity to buy this stock today. Personally, I would start with a small position and add shares periodically.

MongoDB: 34% upside implied by the median target price

MongoDB is the most popular document database. Whereas traditional relational databases (also called SQL databases) store information in structured rows and columns, the document model stores records as flexible, JSON-like documents, which makes it more scalable and flexible. It supports structured data, but also unstructured data like emails, social media posts, images, videos, and websites.

Every application requires a database. It is where information can be stored, modified, and retrieved when needed. But the document model is particularly well suited to analytics, content management, e-commerce, payments, and artificial intelligence applications due to its superior scalability and flexibility. MongoDB is leaning into demand for AI.
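
To make the document model concrete, here is a minimal sketch using Python’s pymongo driver; it assumes a MongoDB instance running locally, and the database, collection, and field names are hypothetical:

```python
from pymongo import MongoClient  # assumes: pip install pymongo

# Assumes a MongoDB instance listening on the default local port.
client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

# Unlike a relational row, a document needs no fixed schema up front:
# it can nest lists and mix structured fields with free-form text.
orders.insert_one({
    "customer": "Alice",
    "items": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}],
    "note": "Customer asked for gift wrapping",
})

print(orders.find_one({"customer": "Alice"}))
```

Adding a new field later requires no schema migration, which is the flexibility the document model trades against the rigid table definitions of SQL databases.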

Last year, the company introduced MAAP (MongoDB AI Application Program), a collection of resources and reference architectures that help programmers build applications with AI capabilities. Additionally, MongoDB recently acquired Voyage AI, a company that develops embedding and reranking models that make AI applications more accurate and reliable.

CEO Dev Ittycheria told analysts: “MongoDB now brings together three things that modern AI-powered applications need: real-time data, powerful search, and smart retrieval. By combining these into one platform, we make it dramatically easier for developers to build intelligent, responsive apps without stitching together multiple systems.”

MongoDB reported encouraging first-quarter financial results, exceeding estimates on the top and bottom lines. Its customer count climbed 16% to 57,100, with net additions the highest in six years. Revenue increased 22% to $549 million, a sequential acceleration, and non-GAAP earnings jumped 96% to $1.00 per diluted share.

Going forward, Grand View Research estimates the database management system market will increase at 13% annually through 2030. MongoDB should grow faster as it continues to gain market share. That makes the present valuation of 7.8 times sales look reasonable, especially compared with the three-year average of 13.2 times sales. Patient investors should feel comfortable buying a small position today.



AI in health care could save lives and money − but change won’t happen overnight

Imagine walking into your doctor’s office feeling sick – and rather than flipping through pages of your medical history or running tests that take days, your doctor instantly pulls together data from your health records, genetic profile and wearable devices to help decipher what’s wrong.

This kind of rapid diagnosis is one of the big promises of artificial intelligence for use in health care. Proponents of the technology say that over the coming decades, AI has the potential to save hundreds of thousands, even millions of lives.

What’s more, a 2023 study found that if the health care industry significantly increased its use of AI, up to US$360 billion annually could be saved.

But though artificial intelligence has become nearly ubiquitous, from smartphones to chatbots to self-driving cars, its impact on health care so far has been relatively low.

A 2024 American Medical Association survey found that 66% of U.S. physicians had used AI tools in some capacity, up from 38% in 2023. But most of it was for administrative or low-risk support. And although 43% of U.S. health care organizations had added or expanded AI use in 2024, many implementations are still exploratory, particularly when it comes to medical decisions and diagnoses.

I’m a professor and researcher who studies AI and health care analytics. I’ll try to explain why AI’s growth will be gradual, and how technical limitations and ethical concerns stand in the way of AI’s widespread adoption by the medical industry.

Inaccurate diagnoses, racial bias

Artificial intelligence excels at finding patterns in large sets of data. In medicine, these patterns could signal early signs of disease that a human physician might overlook – or indicate the best treatment option, based on how other patients with similar symptoms and backgrounds responded. Ultimately, this could lead to faster, more accurate diagnoses and more personalized care.

AI can also help hospitals run more efficiently by analyzing workflows, predicting staffing needs and scheduling surgeries so that precious resources, such as operating rooms, are used most effectively. By streamlining tasks that take hours of human effort, AI can let health care professionals focus more on direct patient care.

But for all its power, AI can make mistakes. Although these systems are trained on data from real patients, they can struggle when encountering something unusual, or when data doesn’t perfectly match the patient in front of them.

As a result, AI doesn’t always give an accurate diagnosis. This problem is called algorithmic drift – when AI systems perform well in controlled settings but lose accuracy in real-world situations.

Racial and ethnic bias is another issue. If data includes bias because it doesn’t include enough patients of certain racial or ethnic groups, then AI might give inaccurate recommendations for them, leading to misdiagnoses. Some evidence suggests this has already happened.

Humans and AI are beginning to work together at this Florida hospital.

Data-sharing concerns, unrealistic expectations

Health care systems are labyrinthine in their complexity. The prospect of integrating artificial intelligence into existing workflows is daunting: introducing a new technology like AI disrupts daily routines, and staff need extra training to use AI tools effectively. Many hospitals, clinics and doctors’ offices simply don’t have the time, personnel, money or will to implement AI.

Also, many cutting-edge AI systems operate as opaque “black boxes.” They churn out recommendations, but even their developers might struggle to fully explain how those recommendations were produced. This opacity clashes with the needs of medicine, where decisions demand justification.

But developers are often reluctant to disclose their proprietary algorithms or data sources, both to protect intellectual property and because the complexity can be hard to distill. The lack of transparency feeds skepticism among practitioners, which then slows regulatory approval and erodes trust in AI outputs. Many experts argue that transparency is not just an ethical nicety but a practical necessity for adoption in health care settings.

There are also privacy concerns; data sharing could threaten patient confidentiality. To train algorithms or make predictions, medical AI systems often require huge amounts of patient data. If not handled properly, AI could expose sensitive health information, whether through data breaches or unintended use of patient records.

For instance, a clinician using a cloud-based AI assistant to draft a note must ensure that no unauthorized party can access the patient’s data. U.S. regulations such as HIPAA impose strict rules on health data sharing, which means AI developers need robust safeguards.

Privacy concerns also extend to patients’ trust: If people fear their medical data might be misused by an algorithm, they may be less forthcoming or even refuse AI-guided care.

The grand promise of AI is a formidable barrier in itself. Expectations are tremendous. AI is often portrayed as a magical solution that can diagnose any disease and revolutionize the health care industry overnight. Unrealistic assumptions like that often lead to disappointment. AI may not immediately deliver on its promises.

Finally, developing an AI system that works well involves a lot of trial and error. AI systems must go through rigorous testing to make certain they’re safe and effective. This takes years, and even after a system is approved, adjustments may be needed as it encounters new types of data and real-world situations.

AI could rapidly accelerate the discovery of new medications.

Incremental change

Today, hospitals are rapidly adopting AI scribes that listen during patient visits and automatically draft clinical notes, reducing paperwork and letting physicians spend more time with patients. Surveys show over 20% of physicians now use AI for writing progress notes or discharge summaries. AI is also becoming a quiet force in administrative work. Hospitals deploy AI chatbots to handle appointment scheduling, triage common patient questions and translate languages in real time.

Clinical uses of AI exist but are more limited. At some hospitals, AI serves as a second set of eyes for radiologists looking for early signs of disease. But physicians are still reluctant to hand decisions over to machines; only about 12% of them currently rely on AI for diagnostic help.

Suffice it to say, health care’s transition to AI will be incremental. Emerging technologies need time to mature, and the short-term needs of health care still outweigh long-term gains. In the meantime, AI’s potential to treat millions and save trillions awaits.




Report shows China outpacing the US and EU in AI research

Governments now face the reality that falling behind in AI capability could have serious geopolitical consequences, warns a new research report.

AI is increasingly viewed as a strategic asset rather than a technological development, and new research suggests China is now leading the global AI race.

A report titled ‘DeepSeek and the New Geopolitics of AI: China’s ascent to research pre-eminence in AI’, authored by Daniel Hook, CEO of Digital Science, highlights how China’s AI research output has grown to surpass that of the US, the EU and the UK combined.

According to data from Dimensions, a major global research database, China now accounts for over 40% of worldwide citation attention in AI-related studies. Rather than focusing solely on academic output, the report also points to China’s dominance in AI-related patents.

By some indicators, China is outpacing the US tenfold in patent filings and company-affiliated research, signalling its capacity to convert academic work into tangible innovation.

Hook’s analysis covers AI research trends from 2000 to 2024, showing global AI publication volumes rising from just under 10,000 papers in 2000 to 60,000 in 2024.

However, China’s influence has steadily expanded since 2018, while the EU and the US have seen relative declines. The UK has largely maintained its position.

Clarivate, another analytics firm, reported similar findings, noting nearly 900,000 AI research papers produced in China in 2024, triple the figure from 2015.

Hook notes that governments increasingly view AI alongside energy or military power as a matter of national security. Instead of treating AI as a neutral technology, there is growing awareness that a lack of AI capability could have serious economic, political and social consequences.

The report suggests that understanding AI’s geopolitical implications has become essential for national policy.


Effects of generative artificial intelligence on cognitive effort and task performance: study protocol for a randomized controlled experiment among college students | Trials

Intervention description {11a}

In the intervention group, the computer screen will be set up in a split-screen format. On the left side of the screen, the participant will receive instructions on the writing prompt, writing requirements, time requirements, grading feedback, and the grading rubric. The instructions will also tell the participant that they can use ChatGPT in any way they like to assist their writing, and that there is no penalty to their writing score for how ChatGPT is used. The right side of the screen will display a blank ChatGPT interface where the participant can enter prompts and receive responses.

Explanation for the choice of comparators {6b}

In the control group, as in the intervention group, the computer screen will be set up in a split-screen format. On the left side of the screen, the participant will receive the same instructions on the writing prompt, writing requirements, time requirements, grading feedback, and the grading rubric. Additionally, the instructions will highlight to the participant that they can use a text editor in any way they like to assist their writing. On the right side, instead of ChatGPT, a basic text editor interface will be displayed. In summary, this comparator will keep the split-screen format consistent between the two groups and ensure that participants in the control group will complete the writing task with minimal support.

Criteria for discontinuing or modifying allocated interventions {11b}

This study is of minimal risk, and we do not anticipate needing to discontinue or modify the allocated interventions during the experiment. Participants can withdraw from the study at any time.

Strategies to improve adherence to interventions {11c}

Adherence to the interventions is expected to be high because the procedures are straightforward and will be clearly explained in step-by-step instructions on the computer screen. The participant will be alone in a noise-canceling room for the entire experiment and can reach the experimenter through an intercom if they need any clarification.

Relevant concomitant care permitted or prohibited during the trial {11d}

Not applicable. This is not a clinical study.

Provisions for post-trial care {30}

Not applicable. This is a minimal-risk study.

Outcomes {12}

The study has two primary outcomes. First, we will measure participants’ writing performance scores on the analytical writing task. The task is adapted from the Analytical Writing section of the GRE, a worldwide standardized computer-based exam developed by the Educational Testing Service (ETS) [27]. Participants’ writing performance will be scored using the GRE 0–6 rubric and by an automatic essay-scoring platform called ScoreItNow!, which is powered by ETS’s e-rater engine [32, 33]. We chose to adapt the GRE writing materials for two reasons. First, the writing tasks and grading rubrics are established materials designed to measure critical thinking and analytical writing skills, and they have been used as practice writing materials in prior research (e.g., [34]). Second, OpenAI’s technical report shows that ChatGPT (GPT-4) can score 4 out of 6 (~54th percentile) on the GRE analytical writing task [31]. This gives us a benchmark for assessing the potential increase in writing performance when individuals collaborate with generative AI.

Second, we will measure participants’ cognitive effort during the writing process. Participants’ cognitive effort will be measured using a psychophysiological proxy—i.e., changes in pupil size [35, 36]. Pupil diameter and gaze data will be collected using the Tobii Pro Fusion eye tracker at a sampling rate of 120 Hz. During the preparation stage of the study, the room light will be adjusted so that the illuminance at the participants’ eyes is held constant at 320 lux. Baseline pupil diameters will be recorded during a resting task in the experiment preparation stage, in which the participant stares at a cross that appears for 10 s each on the left, center, and right sections of the computer screen. Pupil diameters and gaze data will be recorded throughout the writing process.
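
As an illustration of how such pupil data are typically handled, the sketch below baseline-corrects a task-phase pupil trace against the resting-task mean; this is a simplified Python example rather than the authors’ analysis pipeline, and the simulated numbers are hypothetical:

```python
import numpy as np

FS = 120  # Tobii Pro Fusion sampling rate (Hz), as specified above

def baseline_correct(task_pupil: np.ndarray, rest_pupil: np.ndarray) -> np.ndarray:
    """Subtract the resting-baseline mean from a pupil-diameter trace (mm)."""
    return task_pupil - rest_pupil.mean()

rng = np.random.default_rng(0)
rest = rng.normal(3.5, 0.05, 30 * FS)  # 3 fixation crosses x 10 s each
task = rng.normal(3.8, 0.10, 60 * FS)  # one minute of writing-task data

# Positive values indicate pupil dilation relative to baseline,
# the usual proxy for increased cognitive effort.
print(baseline_correct(task, rest).mean())
```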

The study has several secondary outcomes. First, to identify the neural substrates of cognitive effort during the writing process, we developed an additional psychophysiological proxy: changes in cortical hemodynamic activity in the frontal lobe of the brain. Specifically, we will examine hemodynamic changes in oxyhemoglobin (HbO). Brain activity will be recorded throughout the writing process using the NIRSport 2 fNIRS device and the Aurora software with a predefined montage (Fig. 2). The montage consists of eight sources, eight detectors, and eight short-distance detectors. The 18 long-distance channels (source–detector distance of 30 mm) and eight short-distance channels (source–detector distance of 8 mm) are located over the prefrontal cortex (PFC) and supplementary motor area (SMA) (Fig. 2). The PFC is often involved in executive function (e.g., cognitive control, cognitive effort, inhibition) [37, 38]. The SMA is also associated with cognitive effort [39, 40]. The sampling rate of the fNIRS device is 10.2 Hz. Available fNIRS cap sizes are 54 cm, 56 cm, and 58 cm; the selected cap size will always be rounded down to the nearest available size based on the participant’s head measurement. The cap is placed on the center of the participant’s head based on the Cz point of the 10–20 system.

Fig. 2

Design of the fNIRS montage
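
Because the montage pairs long-distance channels with short-distance channels that mainly capture scalp (non-cortical) hemodynamics, a common preprocessing step is short-channel regression. The sketch below shows the idea in Python with simulated signals; it is illustrative only, not the authors’ pipeline:

```python
import numpy as np

FS = 10.2  # fNIRS sampling rate (Hz), as specified above

def short_channel_regression(long_ch: np.ndarray, short_ch: np.ndarray) -> np.ndarray:
    """Remove the least-squares fit of the short channel from a long channel."""
    X = np.column_stack([short_ch, np.ones_like(short_ch)])
    beta, *_ = np.linalg.lstsq(X, long_ch, rcond=None)
    return long_ch - X @ beta

t = np.arange(0, 60, 1 / FS)
scalp = 0.1 * np.sin(2 * np.pi * 0.1 * t)        # simulated systemic noise
cortical = 0.05 * np.sin(2 * np.pi * 0.02 * t)   # simulated HbO response
cleaned = short_channel_regression(cortical + scalp, scalp)

print(np.corrcoef(cleaned, cortical)[0, 1])  # close to 1 after cleanup
```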

Second, we will measure participants’ subjective perceptions of the writing task using self-reported measures in the post-survey (Table 1). We will measure participants’ subjective perceptions of the two primary outcomes—that is, their self-perceived writing performance and self-perceived cognitive effort. Self-perceived writing performance will be measured with a one-item scale using the same grading rubric described in the instructions for the writing task and used in the scoring tool. Self-perceived cognitive effort will be measured using a one-item scale adapted from the NASA Task Load Index (NASA-TLX) [41, 42]. We will also measure participants’ subjective perceptions of several mental health and learning-related outcomes, including stress, challenge, and self-efficacy in writing. Self-perceived stress and self-perceived challenge will each be measured using a one-item scale adapted from the Primary Appraisal Secondary Appraisal (PASA) scale [43, 44]. Self-efficacy in writing will be measured using a 16-item scale covering three dimensions of writing self-efficacy: ideation, convention, and self-regulation [45]. Furthermore, we will measure participants’ situational interest in analytical writing using a four-item Likert scale adapted from the situational interest scale [46]. Additionally, we will measure participants’ behavioral intention to use ChatGPT for future essay writing tasks [47].

Table 1 Scales in the post-survey

Participant timeline {13}

The time schedule is provided via the schematic diagram below (Fig. 3). The entire experiment will last for approximately 1–1.5 h for each participant.

Fig. 3

Schedule of enrollment, interventions, and assessments of the study

Sample size {14}

To estimate the required sample size, we conducted a simulation analysis of the intervention effect on writing performance using ordinary least squares (OLS) regression. Recent empirical evidence suggests that the effect size of generative AI on writing tasks is around Cohen’s d = 0.4–0.5 [1, 48]. Our simulation assumes normally distributed data, equal and standardized standard deviations in the two conditions, and an anticipated effect size of Cohen’s d = 0.45. The analysis indicated that recruiting a minimum of 160 participants would be necessary to achieve statistical power greater than 0.8 at an alpha level of 0.05. The simulation was implemented in R, and the corresponding code is available at the Open Science Framework (OSF) via https://osf.io/9jgme/.
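
The published simulation code is in R at the OSF link above; as a rough cross-check of the same logic, here is an illustrative re-implementation in Python (my sketch, not the authors’ code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulated_power(n_total: int, d: float = 0.45,
                    alpha: float = 0.05, n_sims: int = 10_000) -> float:
    """Share of simulated trials that detect a standardized effect of size d."""
    n = n_total // 2  # equal allocation to intervention and control
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(d, 1.0, n)
        # With a single binary predictor, OLS on a group dummy is
        # equivalent to the two-sample t-test used here.
        _, p_value = stats.ttest_ind(treated, control)
        hits += p_value < alpha
    return hits / n_sims

print(simulated_power(160))  # ~0.80, matching the reported target
```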

We base our sample size estimation on writing performance rather than on the other primary outcome, cognitive effort, for two reasons. First, the effect of generative AI on performance outcomes has been studied [1, 48], but we found no prior evidence on the effect size of generative AI on cognitive effort using physiological measures. Second, our physiological measure of cognitive effort will likely be adequately powered once the sample size is sufficient for the behavioral measure of writing performance. Pupillometry studies of cognitive effort, such as those using the N-back test, typically recruit 20–50 participants in short, repeated, within-subject trials (e.g., [49]). These studies provide a rough estimate of the number of participants needed. Although our study design (i.e., a between-subject RCT) differs from common pupillometry studies, cognitive effort is still a repeated outcome measured using time-series pupil data throughout the entire writing process. Repeated outcome measures generally enhance statistical power by accounting for within-subject variability [50].

Recruitment {15}

Recruitment will follow a convenience sampling strategy. To reach a student population with diverse academic backgrounds, participants will be recruited broadly through social media platforms, email lists, and flyers at the research university where the experiment will be conducted. Because the experiment will start during the summer, the research team can also recruit summer school students, so the study sample will not be limited to students currently enrolled at the university. The recruitment materials include a brief description of the study, the eligibility criteria for participation, and the compensation for participation. Individuals who are interested in participating can sign up on a calendar by selecting available time slots provided by the experimenters. Participants will receive 30 euros in compensation upon completion of the experiment. Participants who withdraw in the middle of the experiment will receive partial compensation, prorated based on the amount of time they spend in the experiment.


