AI Research
Google, Intel, AMD, Meta, Quantum Technology, Finance

SOUTH BEND, Ind. (WNDU) – Last week we talked about the newest updates in A.I. technology, specifically regarding Google, Microsoft, Apple, NVIDIA, Intel, and Meta.
Since then, we’ve received updates regarding Google, Intel, AMD, Meta (Facebook/Instagram), quantum technology, and finance.
Google released a major new image-editing upgrade for Gemini on Aug. 26, which includes multi-turn edits, background/costume changes, and SynthID watermarking.
OpenAI released updates to its ChatGPT Realtime API on Aug. 28, and on Aug. 27 it published a joint safety evaluation with Anthropic.
OpenAI’s August safety post says that GPT-5 is now the default model powering ChatGPT, and that new training was added to better handle crisis content.
IBM and AMD announced a partnership on quantum-centric supercomputing (hybrid quantum + HPC/AI) on Aug. 26, and AMD gave a deep dive into its CDNA 4 / MI350 accelerator architecture at Hot Chips 2025.
Reports that an Instagram chatbot allegedly gave dangerous guidance to minors have been circulating since Aug. 28. Since then, there have been calls to ban the feature for users under 18, and Meta says it is tightening its policies.
The European Union announced broader cloud access to European trapped-ion quantum systems (QCDC) on Aug. 28.
The Sanger Institute and Quantinuum also teamed up on new quantum-for-genomics efforts on Aug. 28.
JPMorgan made the largest recent move in finance: JPMorgan Asset Management committed up to $500 million to Numerai, an A.I.-run, crowdsourced machine-learning equity hedge fund.
Summit Financial expanded its advisor platform with eight A.I.-driven tools (intended for lead-gen, private-wealth intelligence, marketing automation, etc.) to push A.I. deeper into client acquisition and portfolio workstreams.
AI Research
aytm launches Conversation AI, transforming qualitative research with AI-powered analysis
“Capturing genuine human perspective often requires a real conversation, not just a simple Q&A,” said Lev Mazin, CEO of aytm. “Conversation AI is designed to facilitate that natural back-and-forth dialogue, probing deeper into the ‘why’ behind responses—allowing researchers to reveal qualitative depth at quantitative scale, all in the same project.”
Key capabilities unlocked by Conversation AI:
- Dynamic AI interviews: Engage respondents in responsive, human-like dialogues that explore topics in greater depth.
- Richer contextual data: Capture the nuances, emotions, and reasoning behind consumer choices through genuine conversation.
- Concurrent qualitative exploration: Conduct deeper interviews simultaneously across respondent groups, scaling qualitative depth beyond traditional methods.
- AI-powered thematic coding and quantification: Automatically identify, code, and quantify key themes emerging from conversations, enabling charting and analysis at scale.
“It’s not just about having deeper conversations at scale, but also about making sense of rich, unstructured data with the utmost efficiency,” Mazin added. “Conversation AI facilitates the dialogue, and then, with Skipper’s help, structures those qualitative findings into quantifiable themes.”
Conversation AI leverages advanced natural language processing, enabling researchers to design dialogues where an AI assistant interacts dynamically with respondents, asking follow-up questions and exploring responses more thoroughly. This process yields richer, more authentic qualitative data. Subsequently, aytm’s AI research companion, Skipper, assists in processing these conversations to identify, code, and quantify emergent themes. This crucial step allows researchers to visualize qualitative patterns, track sentiment, and integrate these deeper insights seamlessly into their overall analysis, delivering on the promise of qual insights at quant scale.
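To make that workflow concrete, below is a minimal, purely illustrative sketch of the pattern the release describes: an adaptive interview loop followed by automated theme coding. It is not aytm’s actual implementation; model_call and get_answer are hypothetical stand-ins for an LLM backend and a respondent-facing survey widget.

# Hypothetical sketch only -- not aytm's implementation.
# model_call() stands in for any LLM chat-completion backend;
# get_answer() stands in for the respondent-facing survey widget.

def model_call(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM backend here")

def run_interview(opening_question, get_answer, max_turns=3):
    """Adaptive interview: ask, record the answer, then probe the 'why' behind it."""
    transcript = []
    question = opening_question
    for _ in range(max_turns):
        answer = get_answer(question)  # respondent's reply
        transcript.append({"q": question, "a": answer})
        question = model_call(
            "You are a qualitative interviewer. Ask ONE short follow-up probing "
            f"the reasoning behind this exchange.\nQ: {question}\nA: {answer}"
        )
    return transcript

def code_themes(transcripts):
    """Rough thematic coding: label each interview, then count labels for charting."""
    counts = {}
    for t in transcripts:
        text = "\n".join(f"Q: {x['q']}\nA: {x['a']}" for x in t)
        theme = model_call(f"Give a 2-4 word theme label for this interview:\n{text}").strip()
        counts[theme] = counts.get(theme, 0) + 1
    return counts

In a real project the follow-up prompts, the number of turns, and the theme taxonomy would be configured by the researcher rather than hard-coded as above.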
For more information or to experience Conversation AI firsthand, visit aytm.com
Media Contact:
Tiffany Mullin
VP, Growth Operations
[email protected]
aytm is a consumer insights platform dedicated to translating business curiosity into strategic clarity. By harnessing the power of AI and a global community of respondents, aytm delivers the tools and answers organizations need to deeply understand their consumers. The platform is built to empower a diverse range of users—from insights professionals conducting complex studies to business leaders needing fast, reliable answers. With a flexible model that includes both intuitive self-service tools and a team of dedicated research experts, aytm is committed to making consumer truth an accessible and foundational part of every great innovation.
SOURCE aytm
AI Research
Cohere seeks to overturn underdog status with $7 billion valuation, key AI hire from Meta, and Uber alum CFO

Cohere, the Toronto-based startup building large language models for business customers, has long had a lot in common with its hometown hockey team, the Maple Leafs. They are a solid franchise and a big deal in Canada, but they’ve not made a Stanley Cup Final since 1967. Similarly, Cohere has built a string of solid, if not spectacular, LLMs and has established itself as the AI national champion of Canada. But it’s struggled for traction against better-known and better-funded rivals like OpenAI, Anthropic, and Google DeepMind. Now it’s making a renewed bid for relevancy: Last month the company raised $500 million, boosting its valuation to nearly $7 billion; hired its first CFO; and landed a marquee recruit in Joelle Pineau, Meta’s longtime head of AI research.
Pineau announced her departure from Meta in April, just weeks before Mark Zuckerberg unveiled a sweeping AI reorganization that included acquiring Scale AI, elevating its cofounder Alex Wang to chief AI officer, and launching a costly spree to poach dozens of top researchers. For Cohere, her arrival is a coup and a reputational boost at a moment when many in the industry wondered whether the company could go the distance—or whether it would be acquired or fade away.
Cohere was founded in 2019 by three Google Brain alumni — Nick Frosst, Ivan Zhang and Aidan Gomez, a coauthor on the seminal 2017 research paper, titled “Attention Is All You Need,” that jump-started the generative AI boom. According to Frosst, in May the startup reached $100 million in annual recurring revenue. It’s an important milestone, and there have been unconfirmed reports that Cohere projects doubling that by the end of the year. But it is still a fraction of what larger rivals like Anthropic and OpenAI are generating.
Unlike peers that have tied themselves closely to Big Tech cloud providers—or, in some cases, sold outright—Cohere has resisted acquisition offers and avoided dependence on any single cloud ecosystem. “Acquisition is failure—it’s ending this process of building,” Gomez, Cohere’s CEO, recently said at a Toronto Tech Week event. The company also leans into its Canadian roots, touting both its Toronto headquarters and lucrative contracts with the Canadian government, even as it maintains a presence in Silicon Valley and an office in London.
In interviews with Fortune, Pineau, new CFO Francois Chadwick (who was previously acting CFO at Uber) and cofounder Frosst emphasized Cohere’s focus on the enterprise market. While rivals race toward human-like artificial general intelligence (AGI), Cohere is betting that businesses want something simpler: tools that deliver ROI today.
A focus on ROI over AGI
“We have been under the radar a little bit, I think that’s fair,” cofounder Nick Frosst said. “We’re not trying to sell to consumers, so we don’t need to be at the top of consumer minds—and we are not.” Part of the reason, he added with a laugh, is cultural: “We’re pretty Canadian. It’s not in our DNA to be out there talking about how amazing we are.”
Frosst did, however, tout the billboards that recently debuted in San Francisco, Toronto and London, including one for Cohere’s North AI platform that says “AI that can access your info without giving any of it away.”
That quiet approach is starting to shift, he said, a reflection of the traction it’s seeing with enterprise customers like the Royal Bank of Canada, Dell and SAP. Cohere’s pitch, he argued, is “pretty unique” among foundation model companies: a focus on ROI, not AGI.
“When I talk to businesses, a lot of them are like, yeah, we made some cool demos, and they didn’t get anywhere. So our focus has been on getting people into production, getting ROI for them with LLMs,” he said. That means prioritizing security and privacy, building smaller models that can run efficiently on GPUs, and tailoring systems for specific languages, verticals and business workflows. Recent releases such as Command R (for reasoning) and Command Vision are designed to hit “top of their class” performance while still fitting within customers’ hardware budgets.
It also means resisting the temptation to chase consumer-style engagement. On a recent episode of the 20VC podcast, Frosst said Cohere isn’t trying to make its models chatty or addictive. “When we train our model, we’re not training it to be an amazing conversationalist with you,” he said. “We’re not training it to keep you interested and keep you engaged and occupied. We don’t have engagement metrics or things like that.”
Lack of drama is ‘wonderful’
For Pineau—who at Cohere will help oversee strategy across research, product, and policy teams—the company’s low-key profile was part of the appeal. The absence of drama, she said, is “wonderful” — and “a good fit for my personality. I prefer to fly a little bit under the radar and just get work done.”
Pineau, a highly respected AI scientist and McGill University professor based in Montreal, was known for pushing the AI field to be more rigorous and reproducible. At Meta, she helmed the Fundamental AI Research (FAIR) lab, where she led the development of the company’s family of open models, called Llama, and worked alongside Meta’s chief scientist Yann LeCun.
There was certainly no absence of drama in her most recent years at Meta, as Mark Zuckerberg spearheaded a sweeping pivot to generative AI after OpenAI debuted ChatGPT in November 2022. The strategy created momentum, but Llama 4 flopped when it was released in early April 2025—at which point, Pineau had already submitted her resignation. In June, Zuckerberg handed 28-year-old Alex Wang control of Meta’s entire AI operations as part of a $14.3 billion investment in Scale AI. Wang now leads a newly formed “Superintelligence” group packed with industry stars paid like high-priced athletes, and oversees Meta’s other AI product and research teams under the umbrella of Meta Superintelligence Labs.
Pineau said Zuckerberg’s plans to hire Wang did not contribute to her decision to leave. After leaving Meta, she had several months to decide her next steps. Based in Montreal, where Cohere is opening a new office, she said she had been watching the company closely: “It’s one of very few companies around the world that I think has both the ambition and the abilities to train foundation models at scale.”
What stood out to her was not leaderboard glory but enterprise pragmatism. For example, much of the industry chases bragging rights on public benchmarks, which rank models on tasks like math or logic puzzles. Pineau said those benchmarks are “nice to have” but far less relevant than making models work securely inside a business. “They’re not necessarily the must-have for most enterprises,” she said. Cohere, by contrast, has focused on models that run securely on-premise, handle sensitive corporate data, and prioritize characteristics like confidentiality, privacy and security.
“In a lot of cases, responsibility aspects come late in the design cycle,” she said. “Here, it’s built into the research teams, the modeling approach, the product.” She also cited the company’s “small but mighty” research team and its commitment to open science — values that drew her to Meta years earlier.
Pineau considered returning to academia, but the pace and scale of today’s AI industry convinced her otherwise. “Given the speed at which things are moving, and the resources you need to really have an impact, having most of my energies in an industry setting is where I’m going to be closer to the frontier,” she said. “While I considered both, it wasn’t a hard choice to jump back into an industry role.”
Her years at Meta, where she rose to lead a global research organization and spent 18 months in Zuckerberg’s inner leadership circle, left her with lessons she hopes to apply at Cohere: how to bridge research and product, navigate policy questions, and think through the societal implications of technology. “Cohere is on a trajectory to play a huge role in enterprise, but also in important policy and society questions,” she said. “It’s an opportunity for me to take all I’ve learned and carry it into this new role.”
The Cohere leadership moved quickly. “When we found out she was leaving Meta, we were definitely very interested,” Frosst said, although he denied that the hire was intended as a poke at Meta CEO Mark Zuckerberg. “I don’t think about Zuck that often,” he said. “[Pineau is] a legend in the community — and building with her in Montreal, in Canada, is particularly exciting.”
A move to growth and path to profitability
Pineau is not Cohere’s only new big league hire. It also tapped Chadwick, an Uber alum who served there as acting CFO. “I was the guy that put Uber in over 100 countries,” he noted. “I want to bring that skill set here—understanding how to scale, how to grow, and continue to deliver.”
What stands out to him about Cohere, he explained, is the economics of its enterprise-focused business model. Unlike consumer-facing peers that absorb massive compute costs directly onto their own balance sheets, Cohere’s approach shifts much of that burden to partners and customers who pay for their own inference. “They’re building and implementing these systems in a way that ensures efficiency and real ROI—without the same heavy drag on our P&L for compute power,” he said.
That contrasts with rivals like Anthropic, which The Information recently reported has grown to $4 billion in annualized revenue over the last six months but is likely burning far more cash in the process. OpenAI, meanwhile, has reportedly told investors it now expects to spend $115 billion through 2029—an $80 billion increase from prior forecasts—to keep up with the compute demands of powering ChatGPT.
For Chadwick, that means Cohere’s path to profitability looks markedly different than other generative AI players. “I’m going to have to get under the hood and look at the numbers more, but I think the path to profitability will be much shorter,” he said. “We probably have all the right levers to pull to get us there as quickly as possible.”
Daniel Newman, CEO of research firm The Futurum Group, agreed that as OpenAI and Anthropic valuations have ballooned to eye-watering levels while the companies burn through cash, there is a strong need for firms like Cohere (as well as the Paris-based Mistral) that provide specialized models for regulated industries and enterprise use cases.
“I believe Cohere has a unique opportunity to zero in on the enterprise AI opportunity, which is more nascent than the consumer use cases that have seen remarkable scale on platforms like OpenAI and Anthropic,” he said. “This is the intersection of software-as-a-service companies, of cloud and hyperscalers, and some of these new AI companies like Cohere.”
Still, others say it’s too early for Cohere to declare victory. Steven Dickens, CEO and principal analyst at Hyperframe Research, said the company “has a ways to go to get to profitability.” That said, he agreed that the recent capital raise “from some storied strategic investors” is “a strong indication of the progress the company has made and the trajectory ahead.”
Among those who participated in Cohere’s most recent $500 million venture capital round were the venture capital arms of Nvidia, AMD, and Salesforce, all of which might see Cohere as a strategic partner. The round was led by venture capital firms Radical Ventures and Inovia Capital, with PSP Investments and Healthcare of Ontario Pension Plan also joining.
Vindication in ‘vibe shift’ away from AGI
For his part, Frosst sees some vindication in the industry’s recent “vibe shift” away from framing AGI as the sector’s singular goal. In a way, the rest of the industry is moving toward the position Cohere has already staked out.
But Cohere’s skepticism about AGI hasn’t always felt comfortable for the company and its cofounders. Frosst said it has meant that he has found himself in disagreement with friends who believe throwing more computing power at LLMs will get the world closer to AGI. Those include his mentor and fellow Torontonian Geoffrey Hinton, widely known as the “godfather of AI,” who has said that “AGI is the most important and potentially dangerous technology of our time.”
“I think it’s credibility-building to say, ‘I believe in the power of this technology exactly as powerful as it is,’” Frosst said. He and Hinton may differ, but it hasn’t affected their friendship. “I think I’m slowly winning him over,” he added with a laugh — though he acknowledged Hinton would probably deny it.
And Cohere, too, is hoping to win over more than friends — by convincing enterprises, investors, and skeptics alike that ROI, not AGI, is the smarter bet. The Toronto Maple Leafs of AI thinks it might just win the Stanley Cup yet.
AI Research
Prediction: This Artificial Intelligence (AI) Stock Will Be the Next Household Name by 2031

For now, the “Magnificent Seven” and select others remain the most popular names in the AI arena.
Over the last few years, companies like Nvidia, Amazon, Alphabet, Microsoft, and Meta Platforms dominated the narrative around artificial intelligence (AI). As the conversation shifted beyond chips and into adjacent applications in data centers and software, names such as Broadcom, Taiwan Semiconductor Manufacturing, and Palantir Technologies also stepped into the spotlight.
It’s no secret that the AI trade remains heavily concentrated within a small circle of big tech giants. But savvy investors know that opportunity doesn’t end with the usual suspects.
So here’s the question: Have you heard of Nebius Group (NBIS 50.52%)? If not, you’re not alone.
This sprawling data center company has flown under the radar — but its unique position in the AI ecosystem could propel it into the spotlight and make it a household name very soon.
Nebius took an unconventional route to the AI revolution
Unlike many of its louder peers, Nebius did not emerge as a flashy start-up or an established tech titan already entrenched in the AI race. Instead, the company traces its roots back to Yandex — a Russian internet conglomerate.
As geopolitical tensions from the Russia-Ukraine war escalated, Yandex moved to divest its noncore assets. From that process, Nebius was spun off, and it was listed on the Nasdaq exchange last October.
Soon after, Nebius completed a capital raise that attracted a particularly notable participant: Nvidia. The undisputed leader in AI chips not only became an investor but also established itself as a strategic ally — lending Nebius a level of credibility that few companies can claim.
At its core, Nebius can be considered a neocloud — a business specializing in building AI infrastructure by constructing data centers and renting out Nvidia’s sought-after graphics processing units (GPUs) to other businesses via the cloud. This model positions Nebius to scale up in lockstep with Nvidia, benefiting as next-generation chips like Blackwell and Rubin enter the market.
Nebius is more than GPUs
While infrastructure is its core business, Nebius operates several subsidiaries and also has notable strategic investments.
Toloka is in the business of data labeling, an important component of training datasets for AI models. The company also has exposure to autonomous driving systems and robotics through Avride and maintains a software platform called TripleTen that specializes in educating developers across various AI applications.
Nebius also has an equity stake in ClickHouse, an open-source database management and analytics system.
This diversified ecosystem positions Nebius beyond chips and provides the company with exposure to a number of potentially trillion-dollar ancillary markets as AI workloads become larger and more advanced.
Is Nebius stock a buy right now?
In December 2024, Nebius’s core infrastructure segment closed the year with an annualized run rate of $90 million. Just two quarters later (by June 30), the company’s annual recurring revenue (ARR) run rate surged to $430 million. Even more compelling is that management recently raised full-year guidance to a range of $900 million to $1.1 billion from its prior outlook of $750 million to $1 billion.
On Sept. 8, however, everything changed for Nebius as news broke that the company signed a massive new deal with Microsoft. According to regulatory filings, Nebius “will provide Microsoft access to dedicated GPU infrastructure capacity” at its data center in New Jersey. The contract is worth $17.4 billion and runs through 2031.
Prior to the deal with Microsoft, Nebius boasted a market capitalization of $15.4 billion — implying a forward price-to-sales ratio of about 14 at the high end of its ARR forecast. For context, that’s about half the multiple CoreWeave commanded at its peak earlier this year following its much-hyped initial public offering.
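For readers who want to check the Nebius figure, it is simply market capitalization divided by the forward revenue estimate. A minimal sketch using the numbers cited above (illustrative only):

# Sanity check of the forward price-to-sales figure cited above.
market_cap_b = 15.4     # Nebius market cap in $ billions, before the Microsoft deal
arr_high_end_b = 1.1    # high end of full-year ARR guidance, in $ billions

forward_ps = market_cap_b / arr_high_end_b
print(f"Forward P/S is roughly {forward_ps:.0f}")  # prints roughly 14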
Chart: CRWV price-to-sales ratio, data by YCharts.
This suggests a couple of takeaways. On one hand, Nebius’s valuation has been swept up in the broader bullish AI narrative — leaving traces of froth. On the other, the stock has remained relatively insulated from the sharp pullbacks seen in more volatile peers like CoreWeave — a dynamic that could play in its favor as it continues to fight for mindshare in an increasingly crowded and competitive market.
Looking ahead, Nebius appears positioned to benefit from secular tailwinds fueling AI infrastructure. Microsoft’s new deal emphasizes that cloud hyperscalers are showing no signs of slowing their capital expenditure, and Nebius is already steadily carving out a role as a beneficiary of that spending.
I think Nebius will be trading materially higher than it is today by the next decade as its relationship with Microsoft matures. That makes it, in my view, a compelling buy-and-hold opportunity.
Adam Spatacco has positions in Alphabet, Amazon, Meta Platforms, Microsoft, Nvidia, and Palantir Technologies. The Motley Fool has positions in and recommends Alphabet, Amazon, Meta Platforms, Microsoft, Nvidia, Palantir Technologies, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends Broadcom and Nebius Group and recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.