
AI Insights

Minister tells UK’s Turing AI institute to focus on defence


Science and Technology Secretary Peter Kyle has written to the UK’s national institute for artificial intelligence (AI) to tell its bosses to refocus on defence and security.

In a letter, Kyle said boosting the UK’s AI capabilities was “critical” to national security and should be at the core of the Alan Turing Institute’s activities.

Kyle suggested the institute should overhaul its leadership team to reflect its “renewed purpose”.

The cabinet minister said further government investment in the institute would depend on the “delivery of the vision” he had outlined in the letter.

A spokesperson for the Alan Turing Institute said it welcomed “the recognition of our critical role and will continue to work closely with the government to support its priorities”.

“The Turing is focussing on high-impact missions that support the UK’s sovereign AI capabilities, including in defence and national security,” the spokesperson said.

“We share the government’s vision of AI transforming the UK for the better.”

The letter comes after Prime Minister Sir Keir Starmer committed to a Nato alliance target of increasing UK defence spending to 5% of national income by 2035 and to invest more in military uses of AI technology.

A recent government review of UK defence said “an immediate priority for force transformation should be a shift towards greater use of autonomy and artificial intelligence”.

Set up under Prime Minister David Cameron’s government as the National Institute for Data Science in 2015, the institute added AI to its remit two years later.

It receives public funding and was given a grant of £100m by the previous Conservative government last year.

The Turing Institute’s work has focused on AI and data science research in three main areas – environmental sustainability, health and national security.

Lately, the institute has focused more on responsible AI and ethics, and one of its recent reports was on the increasing use of the tech by romance scammers.

But Kyle’s letter suggests the government wants the Turing Institute to make defence its main priority, which would be a significant pivot for the organisation.

“There is an opportunity for the ATI to seize this moment,” Kyle wrote in the letter to the institute’s chairman, Dr Douglas Gurr.

“I believe the institute should build on its existing strengths, and reform itself further to prioritise its defence, national security and sovereign capabilities.”

It’s been a turbulent few months for the institute, which finds itself in survival mode in 2025.

A review last year by UK Research and Innovation, the government funding body, found “a clear need for the governance and leadership structure of the Institute to evolve”.

At the end of 2024, 93 members of staff signed a letter expressing lack of confidence in its leadership team.

In March, Jean Innes, who was appointed chief executive in July 2023, told the Financial Times in an interview that the Turing needed to modernise and focus on AI projects.

She called for “a big strategic shift to a much more focused agenda on a small number of problems that have an impact in the real world”.

In April, Chief Scientist Mark Girolami said in an interview the organisation would be taking forward just 22 projects out of a portfolio of 104.

Kyle’s letter said the institute “should continue to receive the funding needed to implement reforms and deliver Turing 2.0”.

But he said there could be a review of the ATI’s “longer-term funding arrangement” next year.

The use of AI in defence is as powerful as it is controversial.

Google’s parent company Alphabet faced criticism earlier this year for removing a self-imposed ban on developing AI weapons.

Meanwhile, the British military and other forces are already investing in AI-enabled tools.

The government’s defence review said AI technologies “would provide greater accuracy, lethality, and cheaper capabilities”.

The review said “uncrewed and autonomous systems” could be used within the UK’s conventional forces within the next five years.

In one example, the review said the Royal Navy could use “acoustic detection systems powered by artificial intelligence” to monitor the “growing underwater threat from a modernising Russian submarine force”.

The tech firm Palantir has provided data operations software to the UK’s armed forces.

Louis Mosley, the head of Palantir UK, told the BBC that shifting the institute’s focus to AI defence technologies was a good idea.

He said: “Right now we face a daunting combination of darkening geopolitics and technological revolution – with the world becoming a more dangerous place right at the moment when artificial intelligence is changing the face of war and deterrence.

“What that means in practice is that we are now in an AI arms race against our adversaries.

“And the government is right that we need to put all the resources we have into staying ahead – because that is our best path to preserving peace.”

Additional reporting by Chris Vallance, senior technology reporter





Concordia-led research program is Indigenizing artificial intelligence | Education



An initiative steered by Concordia researchers is challenging the conversation around the direction of artificial intelligence (AI), charging that the field’s current trajectory is inherently biased against non-Western modes of thinking about intelligence, especially those originating from Indigenous cultures.

As a way of decolonising the future of AI, they have created the Abundant Intelligences research program: an international, multi-institutional and interdisciplinary program that seeks to rethink how we conceive of AI. The driving concept is the incorporation of Indigenous knowledge systems to create an inclusive, robust concept of intelligence and intelligent action, and to embed that concept in existing and future technologies.

The full concept is described in a recent paper for the journal AI & Society.

 “Artificial intelligence has inherited conceptual and intellectual ideas from past formulations of intelligence that took on certain colonial pathways to establish itself, such as emphasizing a kind of industrial or production focus,” says Ceyda Yolgörmez, a postdoctoral fellow with Abundant Intelligences and one of the paper’s authors.

This scarcity mindset, they write, contributed to resource exploitation and extraction that has extended a legacy of Indigenous erasure still influencing discussion around AI today, adds lead author Jason Edward Lewis, a professor in the Department of Design and Computation Arts and the University Research Chair in Computational Media and the Indigenous Future Imaginary. “The Abundant Intelligences research program is about deconstructing the scarcity mindset and making room for many kinds of intelligence and ways we might think about it.”

The researchers believe this alternative approach can create an AI that is oriented toward human thriving, that preserves and supports Indigenous languages, addresses pressing environmental and sustainability issues, re-imagines public health solutions and more.

Relying on local intelligence

The community-based research program is directed from Concordia in Montreal, but much of the local work will be done by individual research clusters (called pods) across Canada, the United States and New Zealand.

The pods will be anchored to Indigenous-centred research and media labs at Western University in Ontario, the University of Lethbridge in Alberta, the University of Hawai’i–West O’ahu, Bard College in New York and Massey University in New Zealand.

They bring together Indigenous knowledge-holders, cultural practitioners, language keepers, educational institutions and community organizations with research scientists, engineers, artists and social scientists to develop new computational practices fitted to an Indigenous-centred perspective.

The researchers are also partnering with AI professionals and industry researchers, believing that the program will open new avenues of research and propose new research questions for mainstream AI research. “For example, how do you build a rigorous system out of a small amount of resource data like different Indigenous languages?” asks Yolgörmez.  “How do you make multi-agent systems that are robust, recognize and support non-human actors and integrate different sorts of activities within the body of a single system?”

Lewis asserts that their approach is both complementary and alternative to mainstream AI research, particularly regarding data sets like Indigenous languages that are much smaller than the ones currently being used by industry leaders. “There is a commitment to working with data from Indigenous communities in an ethical way, compared to simply scraping the internet,” he says. “This yields miniscule amounts of data compared to what the larger companies are working with, but it presents the potential to innovate different approaches when working with small languages. That can be useful to researchers who want to take a different approach than the mainstream.

“This is one of the strengths of the decolonial approach: it’s one way to get out of this tunnel vision belief that there is only one way of doing things.”

Hēmi Whaanga, professor at Massey University in New Zealand, also contributed to the paper.

Read the cited paper: “Abundant intelligences: placing AI within Indigenous knowledge frameworks.”

— By Patrick Lejtenyi

Concordia University








Eckerd College launches new minor in AI studies – News



It couldn’t have come at a better time. Students have become increasingly reliant on AI for coursework, and national studies are sending up warning signals about the new and creative ways students are using AI to complete assignments.

“AI is definitely a balancing act that I think so many of us in higher education are dealing with,” says Ramsey-Tobienne, who also oversees the College Academic Honor Council. “As professors, we have to decide how, if and when to use it, and we need to help our students develop into critical consumers of AI. Indeed, critical AI literacy is really the foundation of so much of what we’re doing in the minor.

“For better or worse, AI is not going anywhere,” Ramsey-Tobienne adds. “And I think we do ourselves a disservice if we’re not helping our students to understand how to navigate this new AI world.” 





AI drug companies are struggling—but don’t blame the AI



Moonshot hopes of artificial intelligence being used to expedite the development of drugs are coming back down to earth. 

More than $18 billion has flooded into more than 200 biotechnology companies touting AI to expedite development, with 75 drugs or vaccines entering clinical trials, according to Boston Consulting Group. Now, investor confidence—and funding—is starting to waver.

In 2021, venture capital investment in AI drug companies reached an apex, with more than 40 deals made worth about $1.8 billion. This year, there have been fewer than 20 deals worth about half that peak sum, the Financial Times reported, citing data from PitchBook.

Some existing companies have struggled in the face of challenges. In May, biotech company Recursion shelved three of its prospective drugs in a cost-cutting effort following its merger last year with Exscientia, a similar biotech firm. Fortune previously reported that none of Recursion’s AI-discovered compounds have reached the market as approved drugs. After a major restructuring in December 2024, biotech company BenevolentAI delisted from the Euronext Amsterdam stock exchange in March before merging with Osaka Holdings.

A Recursion spokesperson told Fortune the decision to shelve the drugs was “data-driven” and a planned outcome of its merger with Exscientia.

“Our industry’s 90% failure rate is not acceptable when patients are waiting, and we believe approaches like ours that integrate cutting-edge tools and technologies will be best positioned for long-term success,” the spokesperson said in a statement.

BenevolentAI did not respond to a request for comment.

The struggles of the industry coincide with a broader conversation around the failure of generative AI to deliver more quickly on its lofty promises of productivity and efficiency. An MIT report last month found 95% of generative AI pilots at companies failed to accelerate revenue. A U.S. Census Bureau survey this month found AI adoption in large U.S. companies has declined from its 14% peak earlier this year to 12% as of August.

But the AI technology used to help develop drugs is far different from the large language models used in most workplace initiatives and should therefore not be held to the same standards, according to Scott Schoenhaus, managing director and equity research analyst at KeyBanc Capital Markets Inc. Instead, the industry faces its own set of challenges.

“No matter how much data you have, human biology is still a mystery,” Schoenhaus told Fortune.

Macro and political factors drying up AI drug development funding

At the crux of the slowed funding and slower development results may not be the limitations of the technology itself, but rather a slew of broader factors, Schoenhaus said.

“Everyone acknowledges the funding environment has dried up,” he said. “The biotech market is heavily influenced by low interest rates. Lower interest rates equals more funding coming into biotechs, which is why we’re seeing funding for biotech at record lows over the last several years, because interest rates have remained elevated.”

It wasn’t always this way. The rise of AI in drug development owes not only to growing access to semiconductor chips, but also to technology that has made mapping the entire human genome fast and cheap. In 2001, it cost more than $100 million to map a human genome. Two decades later, that undertaking cost about $1,000.

Beyond having the pandemic to thank for next-to-nothing interest rates in 2021, COVID also expedited partnerships between AI drug development startups and Big Pharma companies. In early 2022, biotechnology startup AbCellera and Eli Lilly received emergency FDA authorization for an antibody used in early COVID treatments, a tangible example of how the tech could aid drug discovery.

But since then, there have been other industry hurdles, Schoenhaus said, including Big Pharma cutting back on research and development costs amid slowing demand, as well as uncertainty surrounding whether President Donald Trump would impose a tariff on pharmaceuticals as the U.S. and European Union tussled over a trade deal. Trump signed a memo this week threatening to ban direct-to-consumer advertising for prescription medications, theoretically driving down pharma revenues.

Limitations of AI

That’s not to say there haven’t been technological hiccups in the industry.

“There is scrutiny around the technologies themselves,” Schoenhaus said. “Everyone’s waiting for these readouts to prove that.”

The next 12 months of emerging data from AI drug development startups will be critical in determining how successful these companies stand to be, Schoenhaus said. Some of the results so far have been mixed. For example, Recursion released data from a mid-stage clinical trial of a drug to treat a neurovascular condition in September last year, finding the drug was safe but that there was little evidence of how effective it was. Company shares fell double digits following the announcement. 

These companies are also limited in how they’re able to leverage AI. The drug development process takes about 10 years and is intentionally bottlenecked to ensure the safety and efficacy of the drugs in question, according to David Siderovski, chair of the University of North Texas Health Science Center’s Department of Pharmacology & Neuroscience, who has previously worked with AI drug development companies in the private sector. Biotechnology companies using AI to make these processes more efficient are usually tackling only one small part of this bottleneck, such as screening and identifying a drug-like molecule faster than was previously possible.

“There are so many stages that have to be jumped over before you can actually declare the [European Medicines Agency], or the FDA, or Health Canada, whoever it is, will designate this as a safe, approved drug to be marketed to patients out in the world,” Siderovski told Fortune. “That one early bottleneck of auditioning compounds is not the be-all and end-all of satisfying shareholders by announcing, ‘We have approval for this compound as a drug.’”

Smaller companies in the sector have also made a concerted effort to partner less with Big Pharma companies, preferring instead to build their own pipelines, even if it means no longer having access to the franchise resources of industry giants. 

“They want to be able to pursue their technology and show the validation of their platform sooner than later,” Schoenhaus said. “They’re not going to wait around for large pharma to pursue a partnered molecule. They’d rather just do it themselves and say, ‘Hey, look, our technology platform works.’”

Schoenhaus sees this strategy as a way for companies to prove themselves by perfecting the use of AI to better understand the slippery, mysterious and still greatly unknown frontier of human biology.

“It’s just a very much more complex application of AI,” he said, “hence why I think we are still seeing these companies focus on their own internal pipelines so that they can really, squarely focus their resources on trying to better their technology.”


