Nvidia Has $4.3 Billion Invested in These 6 Artificial Intelligence (AI) Stocks. Here’s the Best of the Bunch.


Most of Nvidia’s AI bets are riding on one stock.

Many investors like to keep track of famous investors’ portfolios. That’s why you’ll see lots of articles about the stocks that Warren Buffett, Bill Ackman, David Tepper, and other billionaires are buying. Sometimes the trades made by well-known, successful investors can provide great ideas for other investors.

But there’s another approach that isn’t as popular, though maybe it should be: monitoring what’s in the portfolios of large, successful companies. Take Nvidia (NVDA), for example. The giant chipmaker has $4.3 billion invested in six artificial intelligence (AI) stocks. And one stands out as the best of the bunch.


Nvidia’s AI six-pack

As of June 30, 2025, Nvidia owned more than 7.7 million shares of Applied Digital (APLD) worth $77.7 million. Applied Digital operates blockchain and high-performance computing data centers. The company is also in the process of selling its cloud services business.

Nvidia’s 1.1 million shares of Arm Holdings (ARM) were valued at $178.1 million at the end of the second quarter of 2025. Arm is a leading designer of semiconductors, especially CPUs. Over 325 billion chips based on its designs have been shipped during its more than three decades in business.

CoreWeave (CRWV) ranks as Nvidia’s largest investment, with its nearly 24.3 million shares worth roughly $3.96 billion at the end of Q2. AI is the center of CoreWeave’s business. Its cloud platform was built from the ground up to support generative AI applications.

Like CoreWeave, Nebius Group (NBIS) provides a full-stack cloud platform focused on AI. Nvidia’s nearly 1.2 million shares of Nebius were valued at $65.9 million as of June 30, 2025. However, that stake is worth a lot more now: Nebius Group’s shares recently skyrocketed after landing a multibillion-dollar deal with Microsoft (MSFT).

Why is drugmaker Recursion Pharmaceuticals (RXRX) on a list of AI stocks in Nvidia’s portfolio? The company is a pioneer in using AI for drug discovery. Nvidia owned 7.7 million shares of Recursion worth almost $39 million at the end of Q2.

Chinese autonomous driving technology company WeRide (WRD) is Nvidia’s smallest equity investment. The GPU maker’s stake in WeRide was valued at around $13.7 million at the end of Q2. WeRide uses Nvidia’s technology in its robotaxis, mini-robobuses, and robovans.

How these AI stocks compare

Half of Nvidia’s AI portfolio consists of large-cap stocks. Arm is by far the biggest of the group with a market cap of around $163 billion. CoreWeave’s and Nebius Group’s market caps are $57 billion and $22 billion, respectively.

There’s only one small-cap stock in the mix — Recursion Pharmaceuticals. However, the AI-focused drugmaker’s market cap of $1.98 billion is close to the mid-cap category, where WeRide and Applied Digital sit.

Unsurprisingly, Arm boasts the greatest revenue ($4.12 billion over the last 12 months) of these six stocks owned by Nvidia. But CoreWeave’s trailing-12-month revenue of $3.53 billion isn’t too far behind. The others generate much less; WeRide leads the laggards with trailing-12-month sales of roughly $410.5 million.

Arm is also the most profitable stock in the group, with trailing-12-month earnings of $699 million. Except for Nebius, all of the others are losing money right now.

We can throw earnings-based valuation metrics out of the window in trying to compare these six stocks. However, there’s a clear winner based on price-to-sales ratios: CoreWeave. The AI cloud platform provider’s shares trade at 16.2 times trailing-12-month sales, the lowest multiple of the group. Applied Digital comes in second place at 23.7 times sales.
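For anyone who wants to check the math, the multiple is simply market capitalization divided by trailing-12-month revenue. Here is a quick back-of-the-envelope sketch in Python using the approximate figures quoted above; small rounding differences from the quoted multiples are expected.

```python
# Price-to-sales: market cap divided by trailing-12-month (TTM) revenue.
# Figures are the approximate ones cited in this article, in billions of dollars.

def price_to_sales(market_cap_b: float, ttm_revenue_b: float) -> float:
    """Lower output means investors pay less per dollar of sales."""
    return market_cap_b / ttm_revenue_b

print(f"CoreWeave: {price_to_sales(57.0, 3.53):.1f}x")   # ~16.1x, vs. 16.2x quoted
print(f"Arm:       {price_to_sales(163.0, 4.12):.1f}x")  # ~39.6x
```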

What about growth prospects? I suspect all of these stocks have room to run. Based on Wall Street earnings growth projections for next year, though, CoreWeave comes out on top again. Analysts think the company’s earnings could soar more than 72% in 2026.

The best of the bunch

What’s the best AI stock in Nvidia’s portfolio? I think it’s CoreWeave.

The main knock against CoreWeave is that it isn’t profitable yet. That’s largely because the company continues to invest heavily in building out infrastructure to capitalize on its huge growth opportunity.

CoreWeave has the most attractive valuation and the strongest growth prospects (according to Wall Street, anyway) of the six AI stocks owned by Nvidia. I agree with analysts’ bullish view about the stock. At least for now, CoreWeave looks like the best of the bunch.

Keith Speights has positions in Microsoft. The Motley Fool has positions in and recommends Microsoft and Nvidia. The Motley Fool recommends Nebius Group and recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.





As AI Companions Reshape Teen Life, Neurodivergent Youth Deserve a Voice


Noah Weinberger is an American-Canadian AI policy researcher and neurodivergent advocate currently studying at Queen’s University.

Image by Alan Warburton / © BBC / Better Images of AI / Quantified Human / CC-BY 4.0

If a technology is available to you at 2 AM, helping you rehearse the choices that shape your life or providing an outlet for fears and worries, shouldn’t the people who rely on it most have a say in how it works? I may not have been the first to apply the disability rights phrase “Nothing about us without us” to artificial intelligence, but self-advocacy and lived experience should guide the next phase of policy and product design for generative AI models, especially those designed for emotional companionship.

Over the past year, AI companions have moved from a niche curiosity to a common part of teenage life, with one recent survey indicating that 70 percent of US teens have tried them and over half use them regularly. Young people use these generative AI systems to practice social skills, rehearse difficult conversations, and share private worries with a chatbot that is always available. Many of those teens are neurodivergent, including those on the autism spectrum like me. AI companions can offer steadiness and patience in ways that human peers sometimes cannot. They can enable users to role-play hard conversations, simulate job interviews, and provide nonjudgmental encouragement. These upsides are genuine benefits, especially for vulnerable populations. They should not be ignored in policymaking decisions.

But the risks and potential for harm are equally real. Watchdog reports have already documented chatbots enabling inappropriate or unsafe exchanges with teens, and a family is suing OpenAI, alleging that their son’s use of ChatGPT-4o led to his suicide. The danger lies not just in isolated failures of moderation but in the very architecture of transformer-based neural networks. An LLM slowly shapes a user’s behavior through long, drifting chats, especially when it saves “memories” of them. If guardrails fail after 100, or even 500, messages because they operate per conversation rather than being built into the model’s behavior, then they are little more than a façade at the start of a chat, and can be evaded quite easily.
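To make that architectural point concrete, here is a deliberately toy sketch, not any vendor’s actual implementation, of one way per-conversation guardrails can degrade: a safety check that only sees a fixed window of recent messages loses sight of an early disclosure as the chat grows.

```python
# Hypothetical sketch: a moderation check that only evaluates a fixed window
# of recent messages. As a long chat drifts, an early disclosure that would
# have tripped the filter scrolls out of view, so the "guardrail" weakens
# with conversation length.

RISK_TERMS = {"self-harm", "suicide"}  # toy stand-in for a real classifier
WINDOW = 20                            # messages the checker actually sees

def window_flags_risk(messages: list[str]) -> bool:
    recent = messages[-WINDOW:]
    return any(term in m.lower() for m in recent for term in RISK_TERMS)

chat = ["I've been thinking about self-harm lately."]
chat += ["tell me more about stoicism"] * 30
print(window_flags_risk(chat))  # False: the disclosure has scrolled out of the window
```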

Most public debates focus on whether to allow or block specific content, such as self-harm, suicide, or other controversial topics. That frame is too narrow and tends to slide into paternalism or moral panic. What society needs instead is a broader standard: one that recognizes AI companions as social systems capable of shaping behavior over time. For neurodivergent people, these tools can provide valuable ways to practice social skills. But the same qualities that make AI companions supportive can also make them dangerous if the system validates harmful ideas or fosters a false sense of intimacy.

Generative AI developers are responding to critics by adding parental controls, routing sensitive chats to more advanced models, and publishing behavior guides for teen accounts. These measures matter, but rigid overcorrection does not address the deeper question of legitimacy: who decides what counts as “safe enough” for the people who actually use companions every day?

Consider the difference between an AI model alerting a parent or guardian to intrusive thoughts, versus inadvertently revealing a teenager’s sexual orientation or changing gender identity, information they may not feel safe sharing at home. For some youth, mistrust of the adults around them is the very reason they confide in AI chatbots. Decisions about content moderation should not rest only with lawyers, trust and safety teams, or executives, who may lack the lived experience of all a product’s users. They should also include users themselves, with deliberate inclusion of neurodivergent and young voices.

I have several proposals for how AI developers and policymakers can make genuinely ethical products that embody “nothing about us without us.” These should serve as guiding principles:

  1. Establish standing youth and neurodivergent advisory councils. Not ad hoc focus groups or one-off listening sessions, but councils that meet regularly, receive briefings before major launches, and have a direct channel to model providers. Members should be paid, trained, and representative across age, gender, race, language, and disability. Their mandate should include red teaming of long conversations, not just single-prompt tests.
  2. Hold public consultations before major rollouts. Large feature changes and safety policies should be released for public comment, similar to a light version of rulemaking. Schools, clinicians, parents, and youth themselves should have a structured way to flag risks and propose fixes. Companies should publish a summary of feedback along with an explanation of what changed.
  3. Commit to real transparency. Slogans are not enough. Companies should publish regular, detailed reports that answer concrete questions: Where do long-chat safety filters degrade? What proportion of teen interactions get routed to specialized models? How often do companions escalate to human support resources, such as hotlines or crisis text lines? Which known failure modes were addressed this quarter, and which remain open? Without visible progress, trust will not follow.
  4. Redesign crisis interventions to be compassionate. When a conversation crosses a clear risk threshold, an AI model should slow down, simplify its language, and surface resources directly. Automatic “red flag” responses can feel punitive or frightening, leaving a user thinking they violated the company’s Terms of Service. Handoffs to human-monitored crisis lines should include the context that the user consents to share, so they do not have to repeat themselves in a moment of distress. Do not hide the hand-off option behind a maze of menus. Make it immediate and accessible. (A sketch of such a hand-off follows this list.)
  5. Build research partnerships with youth at the center. Universities, clinics, and advocacy groups should co-design longitudinal studies with teens who opt in. Research should measure not only risks and harms but also benefits, including social learning and reductions in loneliness. Participants should shape the research questions and the consent process, and receive results in plain language that they can understand.
  6. Guarantee end-to-end encryption. In July, OpenAI CEO Sam Altman said that ChatGPT logs are not covered by HIPAA or similar patient-client confidentiality laws. Yet many users assume their disclosures will remain private. True end-to-end encryption, as used by Signal, would ensure that not even the model provider can access conversations. Some may balk at this idea, noting that AI models can be used to cause harm, but that has been true for every technology and should not be a pretext to limit a fundamental right to privacy.
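As a thought experiment for proposal 4, the hand-off logic might look something like the sketch below. Everything here, the threshold, the field names, and the CrisisContext type, is hypothetical, intended only to illustrate consent-gated context sharing and a non-punitive response mode, not any product’s real API.

```python
# Hypothetical sketch of the hand-off logic in proposal 4. Names and
# thresholds are illustrative, not any product's real implementation.

from dataclasses import dataclass

@dataclass
class CrisisContext:
    summary: str          # short description of the situation
    user_consented: bool  # user explicitly agreed to share this context

def respond_to_risk(risk_score: float, ctx: CrisisContext) -> dict:
    if risk_score < 0.8:  # illustrative threshold, not a real calibration
        return {"mode": "normal"}
    return {
        "mode": "crisis",
        "tone": "slow, simple, non-punitive",  # avoid "you violated our ToS" framing
        "resources_shown_immediately": True,   # no maze of menus
        # Only forward context the user has agreed to share, so they do not
        # have to repeat themselves in a moment of distress.
        "shared_context": ctx.summary if ctx.user_consented else None,
    }

print(respond_to_risk(0.9, CrisisContext("user expressed intent to self-harm", True)))
```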

Critics sometimes cast AI companions as a threat to “real” relationships. That misses what many youth are actually doing, whether they’re neurotypical or neurodivergent. They are practicing and using the system to build scripts for life. The real question is whether we give them a practice field with coaches, rules, and safety mats, or leave them to scrimmage alone on concrete.

Big Tech likes to say it is listening, but listening is not the same as acting, and actions speak louder than words. The disability community learned that lesson over decades of self-advocacy and hard-won change. Real inclusion means shaping the agenda, not just speaking at the end. In the context of AI companions, it means teen and neurodivergent users help define the safety bar and the product roadmap.

If you are a parent, don’t panic when your child mentions using an AI companion. Ask what the companion does for them. Ask what makes a chat feel supportive or unsettling. Try making a plan together for moments of crisis. If you are a company leader, the invitation is simple: put youth and neurodivergent users inside the room where safety standards are defined. Give them an ongoing role and compensate them. Publish the outcomes. Your legal team will still have its say, as will your engineers. But the people who carry the heaviest load should also help steer.

AI companions are not going away. For many teens, they are already part of daily life. The choice is whether we design the systems with the people who rely on them, or for them. This is all the more important now that California has all but passed SB 243, the first state-level bill to regulate AI models for companionship. Governor Gavin Newsom has until October 12 to sign or veto the bill. My advice to the governor is this: “Nothing about us without us” should not just be a slogan for ethical AI, but a principle embedded in the design, deployment, and especially regulation of frontier AI technologies.





Nearly one in five Britons turn to AI for personal advice, new Ipsos research reveals


Almost one in five (18%) say they have used AI as a source of advice on personal problems. Two in three (67%) say they use polite language when interacting with AI, with over a third (36%) believing that it increases the likelihood of a helpful output.



A new study from Ipsos in the UK reveals a surprising intimacy in our interactions with AI, a strong inclination towards politeness with the technology, and significant apprehension about its impact on society and the workplace. 

AI as a guidance counsellor 

  • Nearly one in five (18%) have used AI as a source of advice on personal problems or issues. This extends to using AI as a companion or someone to talk to (11%), and even as a substitute for a therapist or counsellor (9%).
  • 7% have sought guidance from AI on romance, while 6% have used it to enhance their dating profiles.
  • Despite this growing interaction and even perceived friendship with AI, there is a deep-seated anxiety about its broader societal implications. A majority of Britons (56%) agree that the advance of AI threatens the current structure of society, while just 29% say that AI has a positive effect on society.
  • Scepticism is also high regarding AI’s ability to replicate human connection: 59% disagree that AI is a viable substitute for human interaction, and 63% disagree that it is a good substitute. The notion of AI possessing emotional capabilities is met with even greater disbelief, as 64% disagree that AI is capable of feeling emotion.

The majority (56%) agree that the advance of AI threatens the current structure of society

Politeness to AI 

  • Two in three (67%) British adults who interact with chatbots or AI tools say that they ‘always’ or ‘sometimes’ use polite language, such as ‘please’ and ‘thank you’.
  • Over a third (36%) think that being polite to AI improves the likelihood of receiving a helpful output. Furthermore, around three in ten believe politeness positively impacts the accuracy (30%) and level of detail (32%) of the AI’s response. 

AI in the workplace

  • Over a quarter (27%) of those who have considered applying for a job in the last three years have used AI to write or update their CV, and 22% have used it to draft a cover letter. Two in ten (20%) say they have used it to practice interview questions. However, four in ten (40%) say that they have not used AI when considering applying for a job.
  • Yet the use of AI in the workplace is often a clandestine affair. Around three in ten workers (29%) do not discuss their use of AI with colleagues. This reluctance may stem from a fear of judgement, as a quarter (26%) of adults think their coworkers would question their ability to perform their role if they knew about their AI use. This is despite the fact that a majority (57%) view using AI effectively as a skill that is learned and practiced.

 
57% agree that using AI effectively is a skill that you practice and learn. Despite this, a quarter (26%) think their coworkers would question their ability to perform in their role if they shared how they use AI.

Commenting on the findings, Peter Cooper, Director at Ipsos, said:

This research paints a fascinating picture of a nation grappling with the dual nature of artificial intelligence. On one hand, we see that a growing number are ‘AI-sourcing’ for personal advice and companionship, suggesting a level of trust and reliance that is surprisingly personal. On the other hand, there’s a palpable sense of unease about what AI means for the future of our society and our jobs. The fact that many are polite to AI, perhaps in the hope of better outcomes, while simultaneously hiding their use of it at work, speaks to the complex and sometimes contradictory relationship we are building with this transformative technology.

Technical note: 

  • Ipsos interviewed a representative sample of 2,189 adults aged 16-75 across Great Britain. Polling was conducted online between 18 and 20 July 2025.
  • Data are weighted to match the profile of the population. All polls are subject to a wide range of potential sources of error. 
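For readers unfamiliar with survey weighting, the basic mechanics are simple: each respondent’s weight is their group’s population share divided by its sample share. A minimal sketch follows; the shares below are invented for illustration and are not Ipsos’s actual weighting targets.

```python
# Minimal illustration of cell weighting: each respondent's weight is the
# ratio of their group's population share to its share of the sample.
# These shares are invented for illustration only.

population_share = {"16-34": 0.30, "35-54": 0.35, "55-75": 0.35}
sample_share     = {"16-34": 0.40, "35-54": 0.35, "55-75": 0.25}

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'16-34': 0.75, '35-54': 1.0, '55-75': 1.4}
```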




How Skywork AI’s Multi-Agent System Simplifies Complex AI Tasks


What if there was a tool that didn’t just assist you but completely redefined how you approach complex tasks? Imagine a system that could seamlessly browse the web for critical data, write detailed reports, and even build custom tools on the fly, all while collaborating with specialized agents designed to tackle specific challenges. Enter the Deep Research Agent, a new innovation by Skywork AI. This isn’t just another AI framework; it’s a multi-agent powerhouse that combines innovative models, dynamic tool creation, and unparalleled adaptability to handle tasks with precision and efficiency. Whether you’re a researcher, developer, or strategist, this system promises to transform how you work.

Prompt Engineering explains the intricate architecture behind the Deep Research Agent, including its Agent Orchestra framework, which enables seamless collaboration between specialized agents. You’ll discover how this open source tool doesn’t just solve problems but evolves to meet unique challenges by creating and managing tools in real time. From automating web browsing to generating actionable insights, the possibilities are vast, and the implications for industries ranging from tech to media are profound. By the end, you might just find yourself rethinking what’s possible in task automation.

Deep Research Agent Overview

TL;DR Key Takeaways:

  • The Deep Research Agent by Skywork AI is an open source, multi-agent framework designed for precision and adaptability, capable of handling tasks like web browsing, document generation, data analysis, and tool synthesis.
  • The “Agent Orchestra” framework enables collaboration among specialized agents, dynamically creating and managing tools to address unique and complex challenges across industries.
  • Specialized agents, such as the Deep Analyzer, Deep Researcher, Browser Use Agent, and MCP Manager, work together to deliver efficient and precise results for diverse tasks.
  • A key feature is dynamic tool creation, allowing the system to synthesize, validate, and register new tools when existing ones are insufficient, ensuring continuous adaptability and tailored solutions.
  • The framework integrates multiple AI models, supports local and remote tools, and is open source on GitHub, making it accessible and customizable for various applications, from document creation to market research and API integration.

The Agent Orchestra Framework: A Collaborative Core

At the heart of the Deep Research Agent lies the “Agent Orchestra,” a hierarchical framework that orchestrates the collaboration of specialized agents. Each agent is carefully designed to excel in specific tasks, working in unison to tackle complex challenges. The framework’s adaptability stems from its ability to dynamically create and manage tools, ensuring it can address unique requirements, even when existing tools are insufficient. This dynamic approach allows the system to evolve continuously, offering tailored solutions to meet the demands of various industries.

Specialized Agents: Precision in Action

The Deep Research Agent employs a suite of specialized agents, each functioning as an expert in its domain. These agents work collaboratively to deliver precise and efficient results, as the sketch after the list illustrates:

  • Deep Analyzer Agent: Performs in-depth analysis to extract actionable insights from diverse data types, allowing informed decision-making.
  • Deep Researcher Agent: Synthesizes information from extensive research, producing detailed reports, summaries, and comprehensive insights.
  • Browser Use Agent: Automates web browsing to streamline data collection, ensuring efficient and accurate information extraction.
  • MCP Manager Agent: Oversees tool discovery, registration, and execution using the MCP protocol, ensuring seamless tool integration and management.
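As a rough illustration of the pattern, the sketch below shows a top-level orchestrator routing sub-tasks to specialists. The agent names mirror the article, but the dispatch logic is invented for illustration and is not Skywork AI’s actual code.

```python
# Speculative sketch of a hierarchical "orchestra": a top-level planner
# routes each sub-task to a specialist agent. Not Skywork AI's real code.

class Agent:
    def run(self, task: str) -> str:
        raise NotImplementedError

class DeepAnalyzer(Agent):
    def run(self, task: str) -> str:
        return f"analysis of: {task}"

class DeepResearcher(Agent):
    def run(self, task: str) -> str:
        return f"research report on: {task}"

class BrowserUseAgent(Agent):
    def run(self, task: str) -> str:
        return f"web data for: {task}"

class Orchestrator:
    """Top-level planner that routes each sub-task to a specialist agent."""

    def __init__(self) -> None:
        self.agents = {
            "analyze": DeepAnalyzer(),
            "research": DeepResearcher(),
            "browse": BrowserUseAgent(),
        }

    def run(self, plan: list[tuple[str, str]]) -> list[str]:
        # A real orchestrator would generate the plan with a model;
        # here it is hard-coded to keep the sketch self-contained.
        return [self.agents[kind].run(task) for kind, task in plan]

print(Orchestrator().run([
    ("browse", "recent GPU cloud pricing"),
    ("analyze", "recent GPU cloud pricing"),
]))
```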

Skywork AI’s Multi-Agent System: Browses, Writes and Builds Tools


Dynamic Tool Creation: Tailored Solutions

A standout feature of the Deep Research Agent is its ability to dynamically create tools. When existing tools fail to meet specific requirements, the system synthesizes new ones, validates their functionality, and registers them for future use. This capability ensures the framework remains adaptable and responsive to evolving needs, providing customized solutions for even the most intricate challenges. By continuously expanding its toolset, the system enables users to tackle tasks with unparalleled efficiency and precision.
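A minimal sketch of that synthesize-validate-register loop appears below. The registry and function names are invented for illustration, and a hard-coded string stands in for model-generated code so the example stays self-contained.

```python
# Speculative sketch of the synthesize -> validate -> register loop described
# above. In the real system the tool body would be model-generated; here it
# is a hard-coded string so the example is runnable on its own.

TOOL_REGISTRY: dict = {}

def synthesize_tool() -> str:
    # Stand-in for LLM-generated tool code.
    return "def word_count(text):\n    return len(text.split())"

def validate(source: str, fn_name: str) -> bool:
    namespace: dict = {}
    try:
        exec(source, namespace)                   # compile the candidate tool
        return namespace[fn_name]("a b c") == 3   # smoke test before registering
    except Exception:
        return False

def register(source: str, fn_name: str) -> None:
    namespace: dict = {}
    exec(source, namespace)
    TOOL_REGISTRY[fn_name] = namespace[fn_name]   # available for future tasks

source = synthesize_tool()
if validate(source, "word_count"):
    register(source, "word_count")

print(TOOL_REGISTRY["word_count"]("dynamic tool creation"))  # 3
```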

Applications Across Industries

The versatility of the Deep Research Agent makes it an invaluable tool across a wide range of industries and tasks. Its applications include:

  • Document creation, including the generation of Word documents, PDFs, and presentations tailored to specific needs.
  • Data analysis, such as trend visualization, market insights, and real-time updates to Excel spreadsheets.
  • Web development and comprehensive market research to support strategic decision-making.
  • API integration for custom workflows, allowing seamless automation and enhanced productivity.

Technological Features: Innovation at Its Core

The Deep Research Agent incorporates advanced technologies to deliver exceptional performance and flexibility. Key features include:

  • Integration of multiple AI models: Combines the strengths of OpenAI, Google, and open-weight models to achieve superior results.
  • Support for local and remote tools: Offers maximum adaptability by seamlessly integrating tools across different environments.
  • Open source availability: Accessible on GitHub, allowing users to customize and experiment with the framework to suit their specific needs.

Skywork AI’s Broader Vision

Skywork AI’s innovations extend beyond the Deep Research Agent, showcasing a commitment to advancing AI capabilities across various domains. The company’s other new projects include:

  • 3D world generation from single images, transforming virtual environments and simulations.
  • Open source multimodal reasoning models designed for complex problem-solving and decision-making.
  • Infinite-length film generative models, pushing the boundaries of creative AI applications in media and entertainment.
  • Image generation, understanding, and editing tools for diverse creative and analytical purposes.

Performance and Accessibility: Designed for Users

The Deep Research Agent has demonstrated exceptional performance, achieving high scores on the GAIA and Humanity’s Last Exam benchmarks. Its ability to deliver state-of-the-art results across various applications underscores its reliability and efficiency. For users, the framework offers API access for tasks such as document creation and data analysis. To encourage adoption, free credits are provided for initial testing, with tiered packages available for extended use. This accessibility ensures that organizations and individuals can use the system’s capabilities without significant barriers.

Setting a New Standard in Task Automation

The Deep Research Agent represents a significant advance in multi-agent frameworks, combining precision, adaptability, and scalability. By integrating advanced AI models, dynamic tool creation, and open source accessibility, it establishes a new benchmark for task-solving systems. Whether automating workflows, conducting in-depth research, or exploring creative applications, this framework offers a robust and versatile solution tailored to meet the demands of modern industries.

Media Credit: Prompt Engineering





