
AI Insights

Why the EU’s AI talent strategy needs a reality check 

A raft of recent policy changes in the U.S. touching trade, immigration, education, and public spending has sparked upheaval in research communities around the globe. The American economy, once the dream destination for the most talented, suddenly looks like it could lose its allure for the world’s brightest scholars. The sudden crisis of faith in the American innovation ecosystem has also sparked a fresh debate: Can the European Union seize the moment to attract disenchanted researchers and strengthen its own innovation ecosystem? 

The opportunity is real for Brussels, and the stakes are high, as the EU continues to trail the U.S. on virtually every cutting-edge technology—including artificial intelligence. A recent BCG Henderson Institute report shows that stricter immigration rules and deep funding cuts for academic research in the U.S. raise the possibility that top AI researchers, a large share of whom are not U.S.-born, could look to take their talents elsewhere. Repatriating top European academics is an important step for European policymakers, but to catch up, the EU must also be able to attract talent beyond the European diaspora, which is only a small fraction of the globally mobile AI talent base.

To remake itself into a tech talent magnet, Europe needs to build an academic ecosystem more closely integrated with its industries, a necessary step to provide the career pathways and information flows needed to turn academic discoveries and inventions into business value. The cost of this transformation will be considerable, as publicly discussed in, for instance, the Draghi report. Only then can the EU’s investments in academia help generate longstanding economic and geopolitical returns for the bloc.

The opportunity for Europe must not be overstated 

The EU recently announced a €500 million allocation over the next two years to help attract foreign researchers. Member states have also launched their own initiatives, including France’s €100 million commitment to its “Choose France for Science” platform to attract international researchers, and Spain’s €45 million pledge to help lure scientists “despised or undervalued by the Trump administration.” 

If these investments are made with the sole aim of repatriating European AI talent in the U.S., they risk falling short. The U.S. is home to roughly 60% of the top 2,000 AI researchers in the world, only one-fifth of whom are originally from continental Europe. Even an exodus of historical proportions would cover only half of the current gap between the EU and U.S. shares of the top AI researchers. 

At top GenAI labs such as OpenAI and Anthropic, only a very small fraction of AI specialists completed their bachelor’s degree in the EU: less than 1 percentage point of the 25% of workers who earned their undergraduate degree outside the U.S. The future pipeline of AI talent is no different: In 2023, the top 10 countries of origin for foreign-born PhD recipients in computer science and mathematics in the U.S. accounted for 80% of the total. Not one of those countries is in continental Europe.

The U.S. AI research ecosystem is overwhelmingly supported by talent from Asia, not Europe: 85% of U.S.-based foreign nationals in technical AI jobs at leading American labs hail from China or India. So do 60% of all computer science and math PhD graduates in the U.S. Iran, Bangladesh, and Taiwan account for most of the rest. If the EU is serious about becoming a vibrant hub for global AI research talent, it needs to look eastward.

But current (and prospective) AI researchers often don’t see Europe as a top destination. BCG’s Talent Tracker shows that Germany does best among European countries, ranking 5th globally as a “dream destination” for highly skilled talent, followed by France (9th), Spain (10th), and the Netherlands (16th). The EU is not only less attractive than the U.S. (2nd) but also than Australia (1st), Canada (3rd), and the UK (4th), and it is roughly on par with the UAE (11th). European countries are by no means the only nations committed to boosting their own talent bases.

Part of the challenge is the lack of large EU academic institutions with strong AI credentials compared to other regions. None of the top 50 AI institutions worldwide (as ranked by Google Scholar’s H5 journal impact index) are in the EU. A strong institutional base for leading AI labs is essential to create the work environment capable of attracting the best and brightest. 

The EU needs to invest in its universities to improve its standing, but it must also look beyond academia to improve its entire innovation ecosystem. Nearly a third of non-U.S. AI specialists go to the U.S. because of its extensive opportunities for career growth, including entrepreneurial endeavors, a BHI survey of top tech talent recruiters found.

The need for a concerted strategy across academia and industry

To get started, European countries must improve academic compensation in critical fields related to AI, and technology more broadly. In Europe, even when adjusting for purchasing power parity, salaries at the associate professor level are half of those paid at top U.S. institutions. Europe also needs to increase grant availability for research. Public research grants for computer science and informatics at leading American AI institutions are double those available in Europe. Europe may get a boost, however, if the U.S. goes through with proposed cuts to the National Science Foundation’s budget.

It’s well known that incentives for innovation matter. In the 2000s, a few European countries reformed their academic patenting laws to follow the U.S. model, where American universities hold patent rights and share commercialization profits with professors. But the reforms were not well tailored to the European context and led to a significant decrease in academic patenting (between 17% and 50% depending on the country).  

Furthermore, only about a third of patented inventions from EU universities and research institutions ever get exploited, largely due to their weak integration into innovation clusters that drive commercialization. Even the best EU innovation clusters, once again, fall outside the top 10 globally, with the U.S. accounting for four spots, and China three. To change that, it’s essential for European policymakers to help build stronger bridges between academia and industry to ensure that foundational research effectively fuels economic value creation.

That includes strengthening the startup and innovation ecosystem around universities themselves. The ultimate aim of attracting top AI researchers is not to simply catch up, but to skip ahead and produce the next IP breakthrough, which will only rise in importance as more AI models become commoditized. Coming up with the next big thing, however, requires an investment environment capable of supporting ambitious bets on potential breakthroughs coming out of academia. Countries like Canada and the U.K. serve as cautionary tales of AI research hotspots that have often struggled to translate academic breakthroughs into commercial successes, a leap successfully undertaken by large U.S. tech companies.

Many of the usual items in the European reform menu will also bolster the AI talent and innovation ecosystem. As the 2024 Draghi report on the future of European competitiveness noted, the integration of EU capital markets is vital, as is the removal of internal trade barriers that hamper early-stage startups’ growth. Between 2019 and 2024, AI venture capital investment in the EU was just a tenth of that in the U.S. It is no wonder then that nearly a third of European “unicorns” founded between 2008 and 2021 relocated elsewhere—usually to the U.S. 

But crucially, the list of reforms must also include strong incentives for AI adoption. At present, EU companies lag their U.S. counterparts in generative AI adoption by between 45% and 70%. Closing that gap will simultaneously help fuel European demand for specialized AI talent and create the economic opportunities beyond academia that are critical to attracting the world’s best and brightest.

Overconfidence could set back the EU 

The EU is right to want to lure researchers into its academic institutions that have historically pushed the frontier of AI. This will require revamping the academic ecosystem and more systematically translating academic breakthroughs into long-term economic and strategic leadership. 

But it would be wrong for European policymakers to assume that the erosion of U.S. attractiveness will organically lead to a talent windfall, predicated on their belief that Europe is the inevitable “next best” option. That will only be true if the region acts decisively to build its own, integrated, AI ecosystem capable of attracting the brightest minds from China, India, and beyond. In the AI race, as on many other fronts, the EU bears the risk of being too confident in its belief that it is entrenched in third place. That kind of complacency could very well accelerate the EU’s descent into the minor leagues of global innovation.

***

Read other Fortune columns by François Candelon.

François Candelon is a partner at private equity firm Seven2 and the former global director of the BCG Henderson Institute.

Etienne Cavin is a consultant at Boston Consulting Group and a former ambassador at the BCG Henderson Institute.

David Zuluaga Martínez is senior director at Boston Consulting Group’s Henderson Institute.

Some of the companies mentioned in this column are past or present clients of the authors’ employers.





Darwin Awards For AI Celebrate Epic Artificial Intelligence Fails

Not every artificial intelligence breakthrough is destined to change the world. Some are destined to make you wonder “With all this so-called intelligence flooding our lives, how could anyone think that was a smart idea?” That’s the spirit behind the AI Darwin Awards, which recognize the most spectacularly misguided uses of the technology. Submissions are open now.

The growing list of nominees includes legal briefs replete with fictional court cases, fake books by real writers, and an Airbnb host who manipulated images with AI to make it appear a guest owed money for damages. An introduction to the list reads:

“Behold, this year’s remarkable collection of visionaries who looked at the cutting edge of artificial intelligence and thought, ‘Hold my venture capital.’ Each nominee has demonstrated an extraordinary commitment to the principle that if something can go catastrophically wrong with AI, it probably will — and they’re here to prove it.”

A software developer named Pete — who asked that his last name not be used to protect his privacy — launched the AI Darwin Awards last month, mostly as a joke, but also as a cheeky reminder that humans ultimately decide how technology gets deployed.

Don’t Blame The Chainsaw

“Artificial intelligence is just a tool — like a chainsaw, nuclear reactor or particularly aggressive blender,” reads the website for the awards. “It’s not the chainsaw’s fault when someone decides to juggle it at a dinner party.

“We celebrate the humans who looked at powerful AI systems and thought, ‘You know what this needs? Less testing, more ambition, and definitely no safety protocols!’ These visionaries remind us that human creativity in finding new ways to endanger ourselves knows no bounds.”

The AI Darwin Awards are not affiliated with the original Darwin Awards, which famously call out people who, through extraordinarily foolish choices, “protect our gene pool by making the ultimate sacrifice of their own lives.” Now that we let machines make dumb decisions for us too, it’s only fair they get their own awards.

Who Will Take The Crown?

Among the contenders for the inaugural AI Darwin Awards winner are the lawyers who defended MyPillow CEO Mike Lindell in a defamation lawsuit. They submitted an AI-generated brief with almost 30 defective citations, misquotes and references to completely fictional court cases. A federal judge fined the attorneys for their misstep, saying they violated a federal law requiring that lawyers certify court filings are grounded in the actual law.

Another nominee: the AI-generated summer reading list published earlier this year by the Chicago Sun-Times and The Philadelphia Inquirer that contained fake books by real authors. “WTAF. I did not write a book called Boiling Point,” one of those authors, Rebecca Makkai, posted to Bluesky. Another writer, Min Jin Lee, also felt the need to issue a clarification.

“I have not written and will not be writing a novel called Nightshare Market,” the Pachinko author wrote on X. “Thank you.”

Then there’s the executive producer at Xbox Games Studios who suggested scores of newly laid-off employees should turn to chatbots for emotional support after losing their jobs, an idea that did not go over well.

“Suggesting that people process job loss trauma through chatbot conversations represents either breathtaking tone-deafness or groundbreaking faith in AI therapy — likely both,” the submission reads.

What Inspired The AI Darwin Awards?

The creator of the awards, who lives in Melbourne, Australia, and has worked in software for three decades, said he frequently uses large language models, including to craft the irreverent text for the AI Darwin Awards website. “It takes a lot of steering from myself to give it the desired tone, but the vast majority of actual content, probably 99%, is all the work of my LLM minions,” he said in an interview.

Pete got the idea for the awards as he and co-workers shared their experiences with AI on Slack. “Occasionally someone would post the latest AI blunder of the day and we’d all have either a good chuckle, or eye-roll or both,” he said.

The awards sit somewhere between reality and satire.

“AI will mean lots of good things for us all and it will mean lots of bad things,” the contest’s creator said. “We just need to work out how to try and increase the good and decrease the bad. In fact, our first task is to identify both the good and the bad. Hopefully the AI Darwin Awards can be a small part of that by highlighting some of the ‘bad.’”

He plans to invite the public to vote on candidates in January, with the winner to be announced in February.

For those who’d rather not win an AI Darwin Award, the site includes a handy guide for avoiding the dubious distinction. It includes these tips: “Test your AI systems in safe environments before deploying them globally,” “consider hiring humans for tasks that require empathy, creativity or basic common sense” and “ask ‘What’s the worst that could happen?’ and then actually think about the answer.”




Redefining speed: The AI revolution in clinical decision-making

As AI tools further enter the clinical setting, they can provide huge opportunities for time savings through more efficient decision-making.

Clinicians need one main thing: More time

As the EHR and data collection have become more robust, clinicians are spending more time on paperwork and administration. In surveys conducted in 2024, the American Medical Association found that physicians spent an average of 13 hours a week on indirect patient care (order entry, documentation, lab interpretation) and over seven hours a week on administrative tasks (prior authorization, insurance forms, meetings). On top of direct patient care, this meant a 57.8-hour workweek.

Ultimately, clinicians need more time with their patients and less time taking notes. They need more time to understand complex cases and less time spent searching for information. Information overload is also a challenge: Medical knowledge is doubling every 73 days, and patients are increasingly relying on multiple medications. It also takes an average of 17 years between clinical discovery and changing practice based on evidence—clinicians need efficient ways to stay updated in their area of expertise.

AI can produce time savings that add up

We’re seeing a revolution in how artificial intelligence (AI) can support clinicians. As AI is introduced further into healthcare administrative work and clinical settings, there are opportunities for clinicians to spend their time more productively and meaningfully.

When we look at how AI-enabled features can save time for clinicians, the amazing thing is that it’s not massive blocks of time—like 5 or 10 minutes. It’s 10 seconds on a task, or 30 seconds here, or 45 seconds there. And the clinicians we speak with are so happy about it. AI can help speed up the little things—the couple of clicks saved—and over time, that can make a huge difference. It’s multiple moments of small savings that add up to these meaningful productivity gains.
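As a rough illustration of how those seconds compound, here is a back-of-the-envelope calculation. The numbers below are assumptions chosen for the sketch, not figures from the article:

```python
# Back-of-the-envelope sketch with assumed numbers (not from the article):
# how small per-interaction savings compound over a clinical day.
seconds_saved_per_task = [10, 30, 45]  # assumed savings per interaction
tasks_per_patient = 2                  # assumed AI-assisted tasks per encounter
patients_per_day = 20                  # assumed daily patient load

avg_saving = sum(seconds_saved_per_task) / len(seconds_saved_per_task)
daily_minutes = avg_saving * tasks_per_patient * patients_per_day / 60
print(f"~{daily_minutes:.0f} minutes reclaimed per day")
```

Under these hypothetical assumptions, a clinician reclaims roughly 19 minutes a day; scaled across a workforce, that is where enterprise-level productivity gains come from.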

So, as we find ways to further integrate UpToDate into the workflow, this is what we think about: Finding those extra moments that matter. Getting clinical information closer to the provider so they don’t have to open extra applications for decision-making. We’re looking for multiple ways to get evidence and clinical intelligence streamlined throughout the care experience and into the EHR, presenting tremendous opportunities for time savings.

The opportunities are plentiful. How can ambient and note-taking technology link to the relevant evidence-based clinical content for quick reference? How could patient interactions with chatbots ahead of a clinic visit prep the provider with relevant evidence in advance? Identifying innovative partners that can work alongside us in ambient solutions, documentation, chatbots, and more can help bring content and evidence closer to clinicians and save those seconds over time.

Time savings can bring new clinical opportunities

What can clinicians do with that saved time? Some have been concerned that GenAI tools will deteriorate clinical decision-making skills—our recent Future Ready Healthcare report showed that 57% of respondents share these concerns. But I like to think about the opportunities created through those time savings: How can AI help open up space for deeper critical thinking?

With AI saving time and supporting smaller tasks, the first thing it can do is alleviate some of the administrative burden, which is already happening. It can also expand critical thinking opportunities and provide space to consider challenges in healthcare that historically we haven’t had time to solve. It can “re-humanize medical practice” in a way that provides professional fulfillment and allows clinicians to spend more time as caregivers, rather than note-takers. When these efforts are scaled across the workforce, it can result in productivity gains and operational efficiencies across an enterprise.

AI tools need to be grounded in expert-driven evidence

As we rapidly move into the AI era, it’s easy to find tools that seem to give faster answers, especially among generative AI (GenAI) tools. But are they grounded in evidence and industry recommendations?

Keeping expert clinicians in the loop is critical—if you’ve trusted UpToDate for a while, you’ll know this is our position. Our clinical decision support is grounded not just in evidence but in the recommendations of over 7,600 clinical practitioners and experts who curate content as new evidence emerges, and provide graded recommendations to help guide decision-making, even when the conditions are gray. Relying on clinical recommendations curated by human experts keeps the information and care guidance current and relevant. As AI is layered on top of these human-generated recommendations, clinicians can start finding information more efficiently—saving precious seconds with each patient.

We know this expertise matters. A 2024 Wolters Kluwer Health survey of US physicians showed they were overall positive about the prospects of GenAI in clinical settings; however, 91% said that in order to trust it, they would have to know the materials the AI was trained on were created by doctors and medical experts. An overwhelming 89% also wanted the technology vendor to be transparent about where the information came from, who created it, and how it was sourced.

The UpToDate you know and trust is entering a new era, one in line with Bud Rose’s vision of a consultative conversation with clinical experts. And we’re just getting started—join us in helping shape the next wave of healthcare innovation.

Read our vision for the future of healthcare and explore our perspectives on AI in clinical content.




Swift Tests Use of AI to Fight Cross-Border Payment Fraud

Published

on


Swift conducted tests to demonstrate the potential impact of artificial intelligence in preventing cross-border payments fraud.

The global messaging system collaborated with 13 banks on experiments using privacy-enhancing technologies (PETs) to let institutions securely share fraud insights across borders, according to a Monday (Sept. 15) press release.

In one instance, the PETs allowed participants to verify intelligence on suspicious accounts in real time, “a development which could speed up the time taken to identify complex international financial crime networks and avoid fraudulent transactions being executed,” the release said.

In another case, participants employed a combination of PETs and federated learning, or an AI model that “visits” institutions to train on their data locally and lets them work together without sharing customer information, to spot anomalous transactions, per the release.

Trained on synthetic data from 10 million artificial transactions, the model was twice as effective at identifying fraud as a model trained on a single institution’s dataset, the release said.
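To make the federated-learning idea concrete, here is a minimal, hypothetical sketch of federated averaging—this is not Swift’s actual system, and the banks, data, and toy model are invented for illustration. Each institution fits a simple fraud score on its own transactions; only model parameters, never raw records, are shared with a coordinator:

```python
# Toy federated averaging sketch (illustrative only, not Swift's system).
# Each institution trains locally; the coordinator averages parameters,
# so raw customer transactions never leave the institution.

def local_update(weights, data, lr=0.1):
    """One local pass of gradient descent on a toy linear fraud score:
    score = w0 + w1 * amount. Data stays inside the institution."""
    w0, w1 = weights
    for amount, label in data:  # label: 1 = fraudulent, 0 = legitimate
        pred = w0 + w1 * amount
        err = pred - label
        w0 -= lr * err
        w1 -= lr * err * amount
    return (w0, w1)

def federated_average(weight_list):
    """Coordinator step: average parameters across institutions."""
    n = len(weight_list)
    return tuple(sum(w[i] for w in weight_list) / n for i in range(2))

# Two hypothetical banks, each with private synthetic transactions.
bank_a = [(0.1, 0), (0.9, 1), (0.2, 0)]
bank_b = [(0.8, 1), (0.3, 0), (0.95, 1)]

global_weights = (0.0, 0.0)
for _ in range(20):  # federated rounds: train locally, then average
    updates = [local_update(global_weights, d) for d in (bank_a, bank_b)]
    global_weights = federated_average(updates)

# The shared model has learned from both banks' patterns without either
# bank exposing its transactions to the other.
```

The design choice that matters is the direction of movement: the model travels to the data rather than the reverse, which is what makes cross-border collaboration compatible with data-protection constraints.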

“These experiments demonstrate the convening power of Swift as a trusted cooperative at the heart of global finance,” Rachel Levi, head of AI for Swift, said in the release. “A united, industry-wide fraud defense will always be stronger than one put up by a single institution acting alone. The industry loses billions [of dollars] to fraud each year, but by enabling the secure sharing of intelligence across borders, we’re paving the way for this figure to be significantly reduced and allowing fraud to be stopped in a matter of minutes, not hours or days.”

In the wake of these experiments, Swift plans to widen participation before beginning a second round of tests, which will use real transaction data in hopes of demonstrating the technologies’ effect on real-world fraud, the release said.

When it comes to preserving trust in financial transactions, sharing data is important.

“It’s a team sport,” Entersekt Chief Product Officer Pradheep Sampath told PYMNTS in August. “And the thread that binds us all together is data that’s actionable, shared in good faith, and governed responsibly.”



