

Cohere looks to shed its underdog status with a star AI hire, new CFO and $7 billion valuation — chasing ‘ROI over AGI’



Cohere, the Toronto-based startup building large language models for business customers, has long had a lot in common with its hometown hockey team, the Maple Leafs. They are a solid franchise and a big deal in Canada, but they’ve not made a Stanley Cup Final since 1967. Similarly, Cohere has built a string of solid, if not spectacular, LLMs and has established itself as the AI national champion of Canada. But it’s struggled for traction against better-known and better-funded rivals like OpenAI, Anthropic, and Google DeepMind. Now it’s making a renewed bid for relevancy: Last month the company raised $500 million, boosting its valuation to nearly $7 billion; hired its first CFO; and landed a marquee recruit in Joelle Pineau, Meta’s longtime head of AI research.

Pineau announced her departure from Meta in April, just weeks before Mark Zuckerberg unveiled a sweeping AI reorganization that included acquiring Scale AI, elevating its cofounder Alex Wang to chief AI officer, and launching a costly spree to poach dozens of top researchers. For Cohere, her arrival is a coup and a reputational boost at a moment when many in the industry wondered whether the company could go the distance—or whether it would be acquired or fade away.

Cohere was founded in 2019 by three Google Brain alumni — Nick Frosst, Ivan Zhang and Aidan Gomez, a coauthor of the seminal 2017 research paper “Attention Is All You Need,” which jump-started the generative AI boom. According to Frosst, in May the startup reached $100 million in annual recurring revenue. It’s an important milestone, and there have been unconfirmed reports that Cohere projects doubling that figure by the end of the year. But it is still a fraction of what larger rivals like Anthropic and OpenAI are generating.

Unlike peers that have tied themselves closely to Big Tech cloud providers—or, in some cases, sold outright—Cohere has resisted acquisition offers and avoided dependence on any single cloud ecosystem. “Acquisition is failure—it’s ending this process of building,” Gomez, Cohere’s CEO, recently said at a Toronto Tech Week event. The company also leans into its Canadian roots, touting both its Toronto headquarters and lucrative contracts with the Canadian government, even as it maintains a presence in Silicon Valley and an office in London.

In interviews with Fortune, Pineau, new CFO Francois Chadwick (who was previously acting CFO at Uber) and cofounder Frosst emphasized Cohere’s focus on the enterprise market. While rivals race toward human-like artificial general intelligence (AGI), Cohere is betting that businesses want something simpler: tools that deliver ROI today.

“We have been under the radar a little bit, I think that’s fair,” cofounder Nick Frosst said. “We’re not trying to sell to consumers, so we don’t need to be at the top of consumer minds—and we are not.” Part of the reason, he added with a laugh, is cultural: “We’re pretty Canadian. It’s not in our DNA to be out there talking about how amazing we are.”

Frosst did, however, tout the billboards that recently debuted in San Francisco, Toronto and London, including one for Cohere’s North AI platform that says “AI that can access your info without giving any of it away.”

That quiet approach is starting to shift, he said, a reflection of the traction it’s seeing with enterprise customers like the Royal Bank of Canada, Dell and SAP. Cohere’s pitch, he argued, is “pretty unique” among foundation model companies: a focus on ROI, not AGI.

“When I talk to businesses, a lot of them are like, yeah, we made some cool demos, and they didn’t get anywhere. So our focus has been on getting people into production, getting ROI for them with LLMs,” he said. That means prioritizing security and privacy, building smaller models that can run efficiently on GPUs, and tailoring systems for specific languages, verticals and business workflows. Recent releases such as Command A (for reasoning) and Command A Vision are designed to hit “top of their class” performance while still fitting within customers’ hardware budgets.

It also means resisting the temptation to chase consumer-style engagement. On a recent episode of the 20VC podcast, Frosst said Cohere isn’t trying to make its models chatty or addictive. “When we train our model, we’re not training it to be an amazing conversationalist with you,” he said. “We’re not training it to keep you interested and keep you engaged and occupied. We don’t have engagement metrics or things like that.”

For Pineau—who at Cohere will help oversee strategy across research, product, and policy teams—the company’s low-key profile was part of the appeal. The absence of drama, she said, is “wonderful” — and “a good fit for my personality. I prefer to fly a little bit under the radar and just get work done.”

Pineau, a highly respected AI scientist and McGill University professor based in Montreal, was known for pushing the AI field to be more rigorous and reproducible. At Meta, she helmed the Fundamental AI Research (FAIR) lab, where she led the development of the company’s family of open models, called Llama, and worked alongside Meta’s chief scientist Yann LeCun.

There was certainly no absence of drama in her most recent years at Meta, as Mark Zuckerberg spearheaded a sweeping pivot to generative AI after OpenAI debuted ChatGPT in November 2022. The strategy created momentum, but Llama 4 flopped when it was released in early April 2025, by which point Pineau had already submitted her resignation. In June, Zuckerberg handed 28-year-old Alex Wang control of Meta’s entire AI operations as part of a $14.3 billion investment in Scale AI. Wang now leads a newly formed “Superintelligence” group packed with industry stars paid like high-priced athletes, and oversees Meta’s other AI product and research teams under the umbrella of Meta Superintelligence Labs.

Pineau said Zuckerberg’s plans to hire Wang did not contribute to her decision to leave. After leaving Meta, she had several months to weigh her next steps. Based in Montreal, where Cohere is opening a new office, she had been watching the company closely: “It’s one of very few companies around the world that I think has both the ambition and the abilities to train foundation models at scale.”

What stood out to her was not leaderboard glory but enterprise pragmatism. For example, much of the industry chases bragging rights on public benchmarks, which rank models on tasks like math or logic puzzles. Pineau said those benchmarks are “nice to have” but far less relevant than making models work securely inside a business. “They’re not necessarily the must-have for most enterprises,” she said. Cohere, by contrast, has focused on models that run securely on-premise, handle sensitive corporate data, and prioritize characteristics like confidentiality, privacy and security.

“In a lot of cases, responsibility aspects come late in the design cycle,” she said. “Here, it’s built into the research teams, the modeling approach, the product.” She also cited the company’s “small but mighty” research team and its commitment to open science — values that drew her to Meta years earlier.

Pineau considered returning to academia, but the pace and scale of today’s AI industry convinced her otherwise. “Given the speed at which things are moving, and the resources you need to really have an impact, having most of my energies in an industry setting is where I’m going to be closer to the frontier,” she said. “While I considered both, it wasn’t a hard choice to jump back into an industry role.”

Her years at Meta, where she rose to lead a global research organization and spent 18 months in Zuckerberg’s inner leadership circle, left her with lessons she hopes to apply at Cohere: how to bridge research and product, navigate policy questions, and think through the societal implications of technology. “Cohere is on a trajectory to play a huge role in enterprise, but also in important policy and society questions,” she said. “It’s an opportunity for me to take all I’ve learned and carry it into this new role.”

The Cohere leadership moved quickly. “When we found out she was leaving Meta, we were definitely very interested,” Frosst said, although he denied that the hire was intended as a poke at Meta CEO Mark Zuckerberg. “I don’t think about Zuck that often,” he said. “[Pineau is] a legend in the community — and building with her in Montreal, in Canada, is particularly exciting.”

Pineau is not Cohere’s only new big-league hire. It also tapped Chadwick, an Uber alum who served there as acting CFO. “I was the guy that put Uber in over 100 countries,” he noted. “I want to bring that skill set here—understanding how to scale, how to grow, and continue to deliver.”

What stands out to him about Cohere, he explained, is the economics of its enterprise-focused business model. Unlike consumer-facing peers that absorb massive compute costs directly onto their own balance sheets, Cohere’s approach shifts much of that burden to partners and customers who pay for their own inference. “They’re building and implementing these systems in a way that ensures efficiency and real ROI—without the same heavy drag on our P&L for compute power,” he said.

That contrasts with rivals like Anthropic, which The Information recently reported has grown to $4 billion in annualized revenue over the last six months but is likely burning far more cash in the process. OpenAI, meanwhile, has reportedly told investors it now expects to spend $115 billion through 2029—an $80 billion increase from prior forecasts—to keep up with the compute demands of powering ChatGPT.

For Chadwick, that means Cohere’s path to profitability looks markedly different than other generative AI players. “I’m going to have to get under the hood and look at the numbers more, but I think the path to profitability will be much shorter,” he said. “We probably have all the right levers to pull to get us there as quickly as possible.”

Daniel Newman, CEO of research firm The Futurum Group, agreed that as OpenAI’s and Anthropic’s valuations have ballooned to eye-watering levels while the companies burn through cash, there is a strong need for companies like Cohere (as well as the Paris-based Mistral) that provide specialized models for regulated industries and enterprise use cases.

“I believe Cohere has a unique opportunity to zero in on the enterprise AI opportunity, which is more nascent than the consumer use cases that have seen remarkable scale on platforms like OpenAI and Anthropic,” he said. “This is the intersection of software-as-a-service companies, of cloud and hyperscalers, and some of these new AI companies like Cohere.”

Still, others say it’s too early for Cohere to declare victory. Steven Dickens, CEO and principal analyst at Hyperframe Research, said the company “has a ways to go to get to profitability.” That said, he agreed that the recent capital raise “from some storied strategic investors” is “a strong indication of the progress the company has made and the trajectory ahead.”

Among the participants in Cohere’s most recent $500 million round were the venture capital arms of Nvidia, AMD, and Salesforce, all of which might see Cohere as a strategic partner. The round was led by venture firms Radical Ventures and Inovia Capital, with PSP Investments and Healthcare of Ontario Pension Plan also joining.

For his part, Frosst sees some vindication in the rest of the industry’s recent “vibe shift” away from framing AGI as the sector’s singular goal. In a way, the rest of the industry is moving toward the position Cohere has already staked out.

But Cohere’s skepticism about AGI hasn’t always felt comfortable for the company and its cofounders. Frosst said it has meant he has found himself in disagreement with friends who believe throwing more computing power at LLMs will get the world closer to AGI. Those include his mentor and fellow Torontonian Geoffrey Hinton, widely known as the “godfather of AI,” who has said that “AGI is the most important and potentially dangerous technology of our time.”

“I think it’s credibility-building to say, ‘I believe in the power of this technology exactly as powerful as it is,’” Frosst said. He and Hinton may differ, but it hasn’t affected their friendship. “I think I’m slowly winning him over,” he added with a laugh — though he acknowledged Hinton would probably deny it.

And Cohere, too, is hoping to win over more than friends — by convincing enterprises, investors, and skeptics alike that ROI, not AGI, is the smarter bet. The Toronto Maple Leafs of AI thinks it might just win the Stanley Cup yet.

This story was originally featured on Fortune.com





Brown awarded $20 million to lead artificial intelligence research institute aimed at mental health support



A $20 million grant from the National Science Foundation will support the new AI Research Institute on Interaction for AI Assistants, called ARIA, based at Brown to study human-artificial intelligence interactions and mental health. The initiative, announced in July, aims to help develop AI support for mental and behavioral health. 

“The reason we’re focusing on mental health is because we think this represents a lot of the really big, really hard problems that current AI can’t handle,” said Associate Professor of Computer Science and Cognitive and Psychological Sciences Ellie Pavlick, who will lead ARIA. After viewing news stories about AI chatbots’ damage to users’ mental health, Pavlick sees renewed urgency in asking, “What do we actually want from AI?”

The initiative is part of a bigger investment from the NSF to support the goals of the White House’s AI Action Plan, according to an NSF press release. This “public-private investment,” the press release says, will “sustain and enhance America’s global AI dominance.”

According to Pavlick, she and her fellow researchers submitted the proposal for ARIA “years ago, long before the administration change,” but the response was “very delayed” due to “a lot of uncertainty at (the) NSF.” 

One of these collaborators was Michael Frank, the director of the Center for Computational Brain Science at the Carney Institute and a professor of psychology. 

Frank, who was already working with Pavlick on projects related to AI and human learning, said that the goal is to tie together collaborations of members from different fields “more systematically and more broadly.”

According to Roman Feiman, an assistant professor of cognitive and psychological sciences and linguistics and another member of the ARIA team, the goal of the initiative is to “develop better virtual assistants.” But that goal includes various obstacles to ensure the machines “treat humans well,” behave ethically and remain controllable. 

Within the study, some “people work (on) basic cognitive neuroscience, other people work more on human machine interaction (and) other people work more on policy and society,” Pavlick explained.

Although the ARIA team consists of many faculty and students at Brown, according to Pavlick, other institutions like Carnegie Mellon University, the University of New Mexico and Dartmouth are also involved. On top of “basic science” research, ARIA’s research also examines the best practices for patient safety and the legal implications of AI.

“As everybody currently knows, people are relying on (large language models) a lot, and I think many people who rely on them don’t really know how best to use them, and don’t entirely understand their limitations,” Feiman said.

According to Frank, the goal is not to “replace human therapists,” but rather to assist them.

Assistant Professor of the Practice of Computer Science and Philosophy Julia Netter, who studies the ethics of technology and responsible computing and is not involved in ARIA, said that ARIA has “the right approach.” 

Netter said ARIA’s approach differs from previous research “in that it really tried to bring in experts from other areas, people who know about mental health” and others, rather than those who focus solely on computer science.

But the ethics of using AI in a mental health context is a “tricky question,” she added.

“This is an area that touches people at a point in time when they are very, very vulnerable,” Netter said, adding that any interventions that arise from this research should be “well-tested.” 

“You’re touching an area of a person’s life that really has the potential of making a huge difference, positive or negative,” she added.

Because AI is “not going anywhere,” Frank said he is excited to “understand and control it in ways that are used for good.”

“My hope is that there will be a shift from just trying stuff and seeing what gets a better product,” Feiman said. “I think there’s real potential for scientific enterprise — not just a profit-making enterprise — of figuring out what is actually the best way to use these things to improve people’s lives.”







BITSoM launches AI research and innovation lab to shape future leaders



Mumbai: The BITS School of Management (BITSoM), under the aegis of BITS Pilani, a leading private university, will inaugurate its new BITSoM Research in AI and Innovation (BRAIN) Lab on its Kalyan campus on Friday. The lab is designed to prepare future leaders for workplaces transformed by artificial intelligence.


While explaining the concept of the laboratory, Professor Saravanan Kesavan, dean of BITSoM, said that the BRAIN Lab had three core pillars: teaching, research and outreach. Kesavan said, “It provides MBA (Master of Business Administration) students a dedicated space equipped with high-performance AI computers capable of handling tasks such as computer vision and large-scale data analysis. Students will not only learn about AI concepts in theory but also experiment with real-world applications.” Kesavan added that each graduating student would be expected to develop an AI product as part of their coursework, giving them first-hand experience in innovation and problem-solving.

The BRAIN lab is also designed to be a hub of collaboration where researchers can conduct projects in partnership with various companies and industries, creating a repository of practical AI tools to use. Kesavan said, “The initial focus areas (of the lab) include manufacturing, healthcare, banking and financial services, and Global Capability Centres (subsidiaries of multinational corporations that perform specialised functions).” He added that the case studies and research from the lab will be made freely available to schools, colleges, researchers, and corporate partners, ensuring that the benefits of the lab reach beyond the BITSoM campus.

BITSoM also plans to use the BRAIN Lab as a launchpad for startups. An AI programme will support entrepreneurs in developing solutions as per their needs while connecting them to venture capital networks in India and Silicon Valley. This will give young companies the chance to refine their ideas with guidance from both academics and industry leaders.

The centre’s physical setup resembles a modern computer lab, with dedicated workspaces, collaborative meeting rooms, and brainstorming zones. It has been designed to encourage creativity, allowing students to visualise how AI works, customise tools for different industries, and translate their technical capabilities into business impact.

In the context of a global workplace that is embracing AI, Kesavan said, “Future leaders need to understand not just how to manage people but also how to manage a workforce that combines humans and AI agents. Our goal is to ensure every student graduating from BITSoM is equipped with the skills to build AI products and apply them effectively in business.”

Kesavan said that advisors from reputed institutions such as Harvard, Johns Hopkins, the University of Chicago, and industry professionals from global companies will provide guidance to students at the lab. Alongside student training, BITSoM also plans to run reskilling programmes for working professionals, extending its impact beyond the campus.






AI grading issue affects hundreds of MCAS essays in Mass. – NBC Boston



The use of artificial intelligence to score statewide standardized tests resulted in errors that affected hundreds of exams, the NBC10 Investigators have learned.

The issue with the Massachusetts Comprehensive Assessment System (MCAS) surfaced over the summer, when preliminary results for the exams were distributed to districts.

The state’s testing contractor, Cognia, found roughly 1,400 essays did not receive the correct scores, according to a spokesperson with the Department of Elementary and Secondary Education.

DESE told NBC10 Boston all the essays were rescored, affected districts received notification, and all their data was corrected in August.

So how did humans detect the problem?

We found one example in Lowell. It turns out an alert teacher at Reilly Elementary School was reading through her third-grade students’ essays over the summer. When the instructor looked up the scores some of the students had received, something did not add up.

The teacher notified the school principal, who then flagged the issue with district leaders.

“We were on alert that there could be a learning curve with AI,” said Wendy Crocker-Roberge, an assistant superintendent in the Lowell school district.

AI essay scoring works by using human-scored exemplars of what essays at each score point look like, according to DESE.


The AI tool uses that information to score the essays. In addition, humans give 10% of the AI-scored essays a second read and compare their scores with the AI score to make sure there aren’t discrepancies. AI scoring was used for the same number of essays in 2025 as in 2024, DESE said.
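The audit process described here (humans re-reading a random 10% sample of AI-scored essays and comparing scores) can be sketched roughly as follows. This is a minimal illustration, not DESE’s or Cognia’s actual system; the function names and data shapes are hypothetical.

```python
import random

def audit_ai_scores(essays, human_score, sample_rate=0.10, tolerance=0):
    """Second-read a random sample of AI-scored essays and flag discrepancies.

    essays: list of dicts with an "id" and the "ai_score" the model assigned.
    human_score: callable that returns an independent human score for an essay.
    """
    sample = random.sample(essays, max(1, int(len(essays) * sample_rate)))
    discrepancies = []
    for essay in sample:
        h = human_score(essay)  # independent second read by a human scorer
        if abs(h - essay["ai_score"]) > tolerance:
            discrepancies.append((essay["id"], essay["ai_score"], h))
    return discrepancies

# Hypothetical data: 100 essays scored on a 0-6 scale by the AI.
essays = [{"id": i, "ai_score": i % 7} for i in range(100)]

# If the human scorer happens to agree with every AI score, nothing is flagged.
flagged = audit_ai_scores(essays, human_score=lambda e: e["ai_score"])
print(len(flagged))
```

A real audit would surface any flagged essays for full human rescoring, which is roughly what happened with the roughly 1,400 MCAS essays once the discrepancies were reported.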

Crocker-Roberge said she decided to read about 1,000 essays in Lowell, but it was tough to pinpoint the exact reason some students did not receive proper credit.

However, it was clear the AI technology was deducting points without justification. For instance, Crocker-Roberge said she noticed that some essays lost a point when they did not use quotation marks when referencing a passage from the reading excerpt.

“We could not understand why an individual score was scored a zero when it should have gotten six out of seven points,” Crocker-Roberge said. “There just wasn’t any rhyme or reason to that.”

District leaders notified DESE about the problem, which resulted in approximately 1,400 essays being rescored. The state agency says the scoring problem was the result of a “temporary technical issue in the process.”

According to DESE, 145 districts that had at least one incorrectly scored student essay were notified.

“As one way of checking that MCAS scores are accurate, DESE releases preliminary MCAS results to districts and gives them time to report any issues during a discrepancy period each year,” a DESE spokesperson wrote in a statement.

Mary Tamer, the executive director of MassPotential, an organization that advocates for educational improvement, said there are a lot of positives to using AI and returning scores back to school districts faster so appropriate action can be taken. For instance, test results can help identify a child in need of intervention or highlight a lesson plan for a teacher that did not seem to resonate with students.

“I think there’s a lot of benefits that outweigh the risks,” said Tamer. “But again, no system is perfect and that’s true for AI. The work always has to be double-checked.”

DESE pointed out the affected exams represent a small percentage of the roughly 750,000 MCAS essays statewide.

However, in districts like Lowell, there are certain schools tracked by DESE to ensure progress is being made and performance standards are met.

That’s why Crocker-Roberge said every score counts.

With MCAS results expected to be released to parents in the coming weeks, the assistant superintendent is encouraging other districts to do a deep dive on their student essays to make sure they don’t notice any scoring discrepancies.

“I think we have to always proceed with caution when we’re introducing new tools and techniques,” Crocker-Roberge said. “Artificial intelligence is just a really new learning curve for everyone, so proceed with caution.”





