Connect with us

AI Research

Eaton, Nvidia collaborate on power infrastructure in artificial intelligence

Published

on


Eaton (ETN) announced that it is enabling the shift to high-voltage direct current power infrastructure in artificial intelligence data centers. Eaton is collaborating with Nvidia (NVDA) on design best practices, reference architectures and innovative power management solutions tailored to support high-density GPU deployments, such as Nvidia Kyber rack-scale systems with Nvidia Rubin Ultra GPUs. This includes helping lead the transition to 800 V HVDC power infrastructure to support 1 megawatt racks and beyond as well as exploring opportunities to leverage its solutions in the Nvidia Omniverse Blueprint for AI factory design and operations.

Elevate Your Investing Strategy:

Disclaimer & DisclosureReport an Issue



Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

AI Research

Google invests £5bn to help power the UK’s AI economy

Published

on


Demis Hassabis, co-founder and chief executive of Google DeepMind (Credit: Ange.original)

Google has opened a data centre in Hertfordshire to meet growing demand for AI services, as part of a two-year £5bn investment in the UK.

The centre in Waltham Cross, opened by chancellor Rachel Reeves, encompasses Google DeepMind with its AI research in science and healthcare, and will help the UK develop its AI economy by advancing AI breakthroughs and supporting around 8,250 jobs.

It is part of a £5bn investment including capital expenditure, research and development, and related engineering.

Reeves said: “Google’s £5bn investment is a powerful vote of confidence in the UK economy and the strength of our partnership with the US, creating jobs and economic growth for years to come.”

Google is investing to support people across the UK to gain the skills for AI adoption and is part of an industry group, announced by the government in July 2025, to train 7.5 million people by 2030.

Demis Hassabis, co-founder and chief executive of Google DeepMind, said: “We founded DeepMind in London because we knew the UK had the potential and talent to be a global hub for pioneering AI.

“The UK has a rich history of being at the forefront of technology – from Lovelace to Babbage to Turing – so it’s fitting that we’re continuing that legacy by investing in the next wave of innovation and scientific discovery in the UK.”

Google will establish a community fund, managed by Broxbourne Council, to support local economic development.

Ruth Porat, president and chief investment officer at Alphabet and Google, said: “With today’s announcement, Google is deepening our roots in the UK and helping support Great Britain’s potential with AI to add £40bn to the economy by 2030 while also enhancing critical social services.

“Google’s investment in technical infrastructure, expanded energy capacity and job-ready AI skills will help ensure everyone in Broxbourne and across the whole of the UK stays at the cutting-edge of global tech opportunities.” 

The news follows announcements from pharmaceutical giants Merck and AstraZeneca that they are pulling out of the UK.

Merck, known as MSD in Europe, halted plans to build a £1bn research centre under construction in London and is cutting more than 100 scientific staff, citing concerns about the UK’s commercial environment.

Meanwhile, AstraZeneca has paused a planned £200 million investment in its Cambridge research site, which was expected to create thousands of jobs. 

This is a blow for the government, which is seeking to boost economic growth and attract investment to life sciences, with Wes Streeting, health secretary, pledging to make Britain a “powerhouse” for the sector.

The government’s Life Sciences Sector Plan, published in July 2025, sets an ambition to harness scientific innovation for economic growth, which includes making the UK “an outstanding place to start, scale and invest”.

Commenting on Google’s investment, Nick Lansman, chief executive and founder of the Health Tech Alliance, said: “This kind of scalable computing and world‑class R&D will help health tech innovators accelerate discovery, deployment and safe adoption across the NHS, supporting the UK’s ambition to be a global hub for life sciences growth.”



Source link

Continue Reading

AI Research

5 steps for deploying agentic AI red teaming

Published

on


AI-based agentic sources of security exploits aren’t new. The Open Worldwide Application Security Project (OWASP) published a paper that examines all kinds of agentic AI security issues with specific focus on model and application architecture and how multiple agents can collaborate and interact. It reviewed how users of various general-purpose agent frameworks such as LangChain, CrewAI and AutoGPT should better protect their infrastructure and data. Like many other OWASP projects, its focus is on how application development can incorporate better security earlier in the software lifecycle.

Andy Swan at Gray Swan AI led a team to publish an academic paper on AI agent security challenges. In March, they pitted 22 frontier AI agents in 44 realistic deployment scenarios that resulted in observing the effects of almost two million prompt injection attacks. Over 60,000 attacks were successful, “suggesting that additional defenses are needed against adversaries. This effort was used to create an agent red teaming benchmark and framework to evaluate high-impact attacks.” The results revealed deep and recurring failures: agents frequently violated explicit policies, failed to resist adversarial inputs, and performed high-risk actions across domains such as finance, healthcare, and customer support. “These attacks proved highly transferable and generalizable, affecting models regardless of size, capability, or defense strategies.”

Part of the challenge for assembling effective red team forays into your infrastructure is that the entire way incidents are discovered and mitigated is different when it comes to dealing with agentic AI. “From an incident management perspective, there are some common elements between agents and historical attacks in terms of examining what data needs to be protected,” Myles Suer of Dresner Advisory, an agentic AI researcher, tells CSO. “But gen AI stores data not in rows and columns but in chunks and may be harder to uncover.” Plus, time is of the essence: “The time between vulnerability and exploit is exponentially shortened thanks to agentic AI,” Bar-El Tayouri, the head of AI security at Mend.io, tells CSO.



Source link

Continue Reading

AI Research

AI fares better than doctors at predicting deadly complications after surgery

Published

on


A new artificial intelligence model found previously undetected signals in routine heart tests that strongly predict which patients will suffer potentially deadly complications after surgery. The model significantly outperformed risk scores currently relied upon by doctors.

The federally funded work by Johns Hopkins University researchers, which turns standard and inexpensive test results into a potentially lifesaving tool, could transform decision-making and risk calculation for both patients and surgeons.

“We demonstrate that a basic electrocardiogram contains important prognostic information not identifiable by the naked eye.”

Robert D. Stevens

Division of Informatics, Integration, and Innovation at Johns Hopkins Medicine

“We demonstrate that a basic electrocardiogram contains important prognostic information not identifiable by the naked eye,” said senior author Robert D. Stevens, chief of the Division of Informatics, Integration, and Innovation at Johns Hopkins Medicine. “We can only extract it with machine learning techniques.”

The findings are published today in the British Journal of Anaesthesia.

A substantial portion of people develop life-threatening complications after major surgery. The risk scores relied upon by doctors to identify who is at risk for complications are only accurate in about 60% of cases.

Hoping to create a more accurate way to predict these health risks, the Johns Hopkins team turned to the electrocardiogram, or ECG, a standard, pre-surgical heart test widely obtained before major surgery. It’s a fast, non-invasive way to evaluate cardiac activity through electric signals, and it can signal heart disease.

But ECG signals also pick up on other, more subtle physiological information, Stevens said, and the Hopkins team suspected they might find a treasure trove of rich, predictive data—if AI could help them see it.

“The ECG contains a lot of really interesting information not just about the heart but about the cardiovascular system,” Stevens said. “Inflammation, the endocrine system, metabolism, fluids, electrolytes—all of these factors shape the morphology of the ECG. If we could get a really big dataset of ECG results and analyze it with deep learning, we reasoned we could get valuable information not currently available to clinicians.”

Image caption: Stevens’ team used artificial intelligence to extract previously undetected signals in these routine heart tests that strongly predict which patients will suffer potentially deadly complications after surgery

Image credit: Will Kirk / Johns Hopkins University

The team analyzed preoperative ECG data from 37,000 patients who had surgery at Beth Israel Deaconess Medical Center in Boston.

The team trained two AI models to identify patients likely to have a heart attack, a stroke, or die within 30 days after their surgery. One model was trained on just ECG data. The other, which the team called a “fusion” model, combined the ECG information with more details from patient medical records such as age, gender, and existing medical conditions.

The ECG-only model predicted complications better than current risk scores, but the fusion model was even better, able to predict which patients would suffer post-surgical complications with 85% accuracy.

“Surprising that we can take this routine diagnostic, this 10 seconds worth of data, and predict really well if someone will die after surgery,” said lead author Carl Harris, a PhD student in biomedical engineering. “We have a really meaningful finding that can can improve the assessment of surgical risk.”

The team also developed a method to explain which ECG features might be associated with a heart attack or a stroke after an operation.

“You can imagine if you’re undergoing major surgery, instead of just having your ECG put in your records where no one will look at it, it’s run thru a model and you get a risk assessment and can talk with your doctor about the risks and benefits of surgery,” Stevens said. “It’s a transformative step forward in how we assess risk for patients.”

Next the team will further test the model on datasets from more patients. They would also like to test the model prospectively with patients about to undergo surgery.

The team would also like to determine what other information might be extracted from ECG results through AI.

Authors, all from the Johns Hopkins School of Medicine and the Whiting School of Engineering, include Anway Pimpalkar, Ataes Aggarwal, Jiyuan Yang, Xiaojian Chen, Samuel Schmidgall, Sampath Rapuri, Joseph L. Greenstein, and Casey O. Taylor.

The work was supported by National Science Foundation Graduate Research Fellowship DGE2139757.



Source link

Continue Reading

Trending