Tools & Platforms
ASML invests $1.5B in French AI startup Mistral, forming European tech alliance

LONDON — ASML, a leading Dutch maker of chipmaking gear, is investing 1.3 billion euros ($1.5 billion) in French artificial intelligence startup Mistral AI, the two companies said on Tuesday, announcing a partnership between two of Europe’s top technology firms.
ASML Holding, based in Veldhoven, Netherlands, plays an important role in the global tech industry because it makes the equipment used to manufacture semiconductors, including the most advanced microchips that power cutting-edge AI systems.
Mistral was founded two years ago in Paris by former researchers at Google DeepMind and Meta Platforms and quickly became a European tech darling.
The partnership underscores Europe’s efforts to reduce its exposure to American technology. President Donald Trump’s increasingly hostile attitude toward European Union tech regulations has fueled debate about whether the continent is too dependent on U.S. tech companies for services such as cloud computing and mobile operating systems.
Mistral makes the Le Chat chatbot, but it has struggled to keep up with American AI companies such as ChatGPT-maker OpenAI and Chinese rivals such as DeepSeek.
ASML’s chipmaking machines can cost hundreds of millions of dollars each, but the U.S. government has blocked the company from selling its most advanced machines to China.
The deal gives ASML an 11% stake in Mistral and values the startup at about 11.7 billion euros. The 1.3 billion euro investment is part of a larger funding round worth 1.7 billion euros that also involves venture capital firms and chipmaker Nvidia.
Mistral CEO Arthur Mensch said in a press release that the alliance combines Mistral’s “frontier AI expertise with ASML’s unmatched industrial leadership and most sophisticated engineering capabilities.”
“Together, we will accelerate technological progress across the global semiconductor and AI value chain,” Mensch said.
Tools & Platforms
AI engineers are being deployed as consultants and getting paid $900 per hour

AI engineers are being paid a premium to work as consultants to help large companies troubleshoot, adopt, and integrate AI with enterprise data—something traditional consultants may not be able to do.
PromptQL, an enterprise AI platform created by San Francisco-based developer tooling company Hasura, is doling out $900-per-hour wages to engineers tasked with building and deploying AI agents that analyze internal company data using large language models (LLMs).
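PromptQL has not published its internals, but the general pattern the article describes, an LLM agent that turns a natural-language question into a query over internal company data, can be sketched in a few lines of Python. Everything below, including the model choice, the toy schema and the ask() helper, is an illustrative assumption rather than PromptQL’s actual implementation:

import sqlite3
from openai import OpenAI  # official OpenAI Python client (openai>=1.0)

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Toy schema; a real deployment would introspect the company's warehouse.
SCHEMA = "orders(id INTEGER, region TEXT, total REAL, placed_at TEXT)"

def ask(question: str, db_path: str = "company.db") -> list[tuple]:
    # Step 1: have the model translate the question into SQL over a known schema.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Translate the user's question into one SQLite SELECT "
                        f"statement over this schema: {SCHEMA}. Reply with SQL only."},
            {"role": "user", "content": question},
        ],
    )
    sql = resp.choices[0].message.content.strip().strip("`")
    # Step 2: run the generated SQL against the internal database; the rows
    # can then be summarized for the business user, e.g. by a second call.
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()

# e.g. ask("What were total sales by region last month?")

In production, teams typically wrap a pattern like this with read-only database credentials and validation of the generated SQL before it runs, which hints at why hands-on engineering, not just strategy advice, is what clients are paying for.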
The price point reflects the “intuition” and technical skills needed to keep pace with a rapidly changing technology, Tanmai Gopal, PromptQL’s cofounder and CEO, told Fortune.
Gopal said the company’s hourly wage for AI engineers working as consultants is “aligned with the going rate that you would see for AI engineers,” but that “it feels like we should be increasing that price even more,” as customers aren’t pushing back on the prices PromptQL sets.
“MBA types… are very strategic thinkers, and they’re smart people, but they don’t have an intuition for what AI can do,” Gopal said.
Gopal declined to disclose any customers that have used PromptQL to integrate AI into their businesses, but says the list includes “the largest networking company” as well as top fast food, e-commerce, grocery and food delivery tech companies, and “one of the largest B2B companies.”
Oana Iordăchescu, founder of Deep Tech Recruitment, a boutique agency focused on AI, quantum, and frontier tech talent, told Fortune that enterprises and startups are competing for senior AI engineers at “unprecedented rates,” which is leading to wage inflation.
Iordăchescu said the wages are priced “far above even Big Four consulting partners,” who often make around $400 to $600 per hour.
“Traditional management consultants can design AI strategies, but most lack the hands-on technical expertise to debug models, build pipelines, or integrate systems into legacy infrastructure,” Iordăchescu said. “AI engineers working as consultants bridge that gap. They don’t just advise, they execute.”
AI consultant Rob Howard told Fortune he wasn’t surprised by “mind-blowing numbers” like a $900-per-hour wage for AI consulting work, as he’s seen a price premium on projects with an AI component while companies rush to adopt the technology.
Howard, who is also the CEO of Innovating with AI, a program that teaches people to become AI consultants in their own right, said some of his students have sold AI trainings or two-day boot camps that net out to $400 or $500 per hour.
“The pricing for this is high in general across the market, because it’s in demand and new and relatively rare to find, you know, people who are qualified to do it,” Howard said.
A recent report published by MIT’s NANDA initiative revealed that while generative AI holds promise for enterprises, 95% of initiatives to drive rapid revenue growth failed. Aditya Challapally, the lead author of the report and a research contributor to project NANDA at MIT, previously told Fortune that the AI pilot failures stemmed not from the quality of the AI models but from the “learning gap” for both tools and organizations.
“Some large companies’ pilots and younger startups are really excelling with generative AI,” Challapally told Fortune earlier this month. Startups led by 19- or 20-year-olds, for example, “have seen revenues jump from zero to $20 million in a year,” he said.
“It’s because they pick one pain point, execute well, and partner smartly with companies who use their tools,” he added.
Jim Johnson, an AI consulting executive at AnswerRocket, told Fortune the $900-per-hour wage “makes perfect sense” when considering that companies have spent two years experimenting with AI and “have little to show for it.”
“Now the pressure’s on to demonstrate real progress, and they’re discovering there’s no easy button for enterprise AI,” Johnson said. “This premium won’t last forever, but right now companies are essentially buying insurance against joining that 95% failure statistic.”
Gopal said PromptQL’s business model of having AI engineers serve as both consultants and forward deployed engineers (FDEs)—hybrid sales and engineering roles tasked with integrating AI solutions—is what makes its employees so valuable.
This new wave of AI engineer consultants is shaking up the consulting industry, Gopal said. But he sees his company as helping shift traditional consulting partnership expectations and culture.
“The demand is there,” he said. “I think what makes it hard is that leaders, especially in some of the established companies… are kind of more used to the traditional style of consultants.”
Gopal said the challenge for his company will be to “drive that leadership and education, and saying, ‘Folks, there is a new way of doing things.’”
Tools & Platforms
ChatGPT causes outrage after it refuses to do one task for users

ChatGPT is becoming a more and more useful tool for humans by the day.
Whether it’s asking for information on a subject, getting help drafting an email, or even seeking opinions on fashion choices, AI has become an essential part of some people’s lifestyles in 2025.
People are having genuine conversations with programmes like ChatGPT for advice on life situations, and while it will answer almost anything you ask, there’s one request it refuses, for some seemingly bizarre reason.
The popular AI chatbot’s capabilities can seem endless, but this newly discovered barrier has driven people on social media crazy, as they don’t understand why it says no to this one request.
To nobody’s surprise, this confusion online has stemmed from a viral TikTok video.
AI is heavily relied upon by some (Getty Stock Image)
All the user did was demand that ChatGPT count to a million – but how did the chatbot respond?
“I know you just want that counting, but the truth is counting all the way to a million would literally take days,” it replied.
While he kept insisting, the bot kept turning the request down, with the voice saying that it ‘isn’t really practical’, ‘even for me’.
The hilarious exchange included the bot saying that it’s ‘not really possible’ either, and that it simply wouldn’t be able to carry out the prompt for him.
The bot’s replies repeatedly stated that it ‘understood’ and ‘heard’ what he was saying, but his frustration grew as the clip went on.
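For what it’s worth, the bot’s ‘days’ claim survives a quick back-of-the-envelope check. Assuming a pace of one number per second, which is our assumption rather than anything ChatGPT stated:

# Rough check of the "counting to a million takes days" claim.
# The one-number-per-second pace is an illustrative assumption.
seconds = 1_000_000          # one number spoken per second, non-stop
days = seconds / 3600 / 24   # seconds -> hours -> days
print(f"{days:.1f} days")    # -> 11.6 days

And that pace is generous: six-digit numbers take several seconds each to say aloud, so the real figure would run to weeks.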
Many have now questioned why this might be the case, as one wrote in response to the user’s anger: “I don’t even use ChatGPT and I’ll say this is a win for them. AIs should not be enablers of abusive behaviour in their users.”
Another posted: “So AI does have limits?! Or maybe it’s just going through a rough day at the office. Too many GenZ are asking about Excel and saving Word documents.”
A third claimed: “I think it’s good that AI can identify and ignore stupid time-sink requests that serve no purpose.”

ChatGPT will not count to a million (Samuel Boivin/NurPhoto via Getty Images)
Others joked that the man would be the first to be targeted in an AI uprising, while some suggested that the programme might need a higher subscription plan to count to a million.
“What it really wanted to say is the amount of time you require is higher than your subscription,” a different user said.
As long as you don’t ask the bot to count to ridiculous numbers, it looks like it can help with more or less anything else.
Tools & Platforms
AI is introducing new risks in biotechnology. It can undermine trust in science

The bioeconomy is entering a defining moment. Advances in biotechnology, artificial intelligence (AI) and global collaboration are opening new frontiers in health, agriculture and climate solutions. Within reach are safe and effective vaccines and therapeutics, developed within days of a new outbreak, precision diagnostics that can be deployed anywhere and bio-based materials that replace fossil fuels.
But alongside these breakthroughs lies a challenge: the very tools that accelerate discovery can also introduce new risks of accidental release or deliberate misuse of biological agents, technologies and knowledge. Left unchecked, these risks could undermine trust in science and slow progress at a time when the world most needs solutions.
The question is not whether biotechnology will reshape our societies: it already is. The question is whether we can build a bioeconomy that is responsibly safeguarded, inclusive and resilient.
The promise and the risk
AI is transforming biotechnology at remarkable speed. Machine learning models and biological design tools can identify promising vaccine candidates, design novel therapeutic molecules and optimize clinical trials, regulatory submissions and manufacturing processes – all in a fraction of the time it once took. These advances are essential for achieving ambitious goals such as the 100 Days Mission, the effort, enabled by AI-driven tools and technologies, to compress vaccine development to within 100 days of a future pandemic emerging.
The stakes extend beyond security. Without equitable access to AI-driven tools, low- and middle-income countries risk falling behind in innovation and preparedness. Without distributed infrastructure, inclusive training datasets, skilled personnel and role models, the benefits of the bioeconomy could remain concentrated in a few regions, perpetuating inequities in health security, technological opportunity and scientific progress.
Building a culture of responsibility
Technology alone cannot solve these challenges. What is required is a culture of responsibility embedded across the entire innovation ecosystem, from scientists and startups to policymakers, funders and publishers.
This culture is beginning to take shape. Some research institutions are integrating biosecurity into operational planning and training. Community-led initiatives are emerging to embed biosafety and biosecurity awareness into everyday laboratory practices. International bodies are responding as well: in 2024, the World Health Organization adopted a resolution to strengthen laboratory biological risk management, underscoring the importance of safe and secure practices amid rapid scientific progress.
The Global South is leading the way in practice. Rwanda, for instance, responded rapidly to a Marburg virus outbreak in 2024 by integrating biosecurity into national health security strategies and collaborating with global partners. Such examples demonstrate that, with political will and the right systems in place, emerging innovation ecosystems can play leadership roles in protecting communities and enabling safe participation in the global bioeconomy.
Why inclusion and equity matter
Safeguarding the bioeconomy is not only about biosecurity; it is also about inclusion. If only a handful of countries shape the rules, control the infrastructure and train the talent, innovation will remain unevenly distributed and risks will multiply.
That is why expanding AI and biotechnology capacity globally is so urgent. Distributed cloud infrastructure, diverse training datasets and inclusive training programmes can help ensure that all regions are equipped to participate. Diverse perspectives from scientists, regulators and civil society, across the Global South and Global North, are essential to evaluating risks and identifying solutions that are fair, secure and effective.
Equity is also a matter of resilience. A pandemic that spreads quickly will not wait for producer countries to supply vaccines and treatments. A bioeconomy that works for all must empower all to respond.
The way forward
The World Economic Forum, alongside partners such as CEPI and IBBIS, continues to bring together leaders from science, industry and civil society to mobilize collective action on these issues. At this year’s BIO convention, for example, a group of senior health and biosecurity leaders from industry and civil society met to discuss the foundational importance of biosecurity and biosafety for life science, to future-proof preparedness and innovation ecosystems for tomorrow’s global bioeconomy and to achieve the 100 Days Mission.
The bioeconomy stands at a crossroads. On one path, innovation accelerates solutions to humanity’s greatest challenges: pandemics, climate change and food security. On the other path, the same innovations, unmanaged, could deepen inequities and expose society to new vulnerabilities.
The choice is ours. By embedding responsibility, biosecurity and inclusive governance into today’s breakthroughs, we can secure the foundation of tomorrow’s bioeconomy.
But responsibility cannot rest with a few institutions alone. Building a secure and equitable bioeconomy requires a shared commitment across regions, sectors and disciplines.
The bioeconomy’s potential is immense. Realizing it safely will depend on the choices made now: choices that determine not just how we innovate, but how we safeguard humanity’s future.
This article is republished from World Economic Forum under a Creative Commons license. Read the original article.