
AI Research

CyXcel research discovers a third of UK businesses at AI risk



Research by cybersecurity consultancy CyXcel has revealed that 29% of UK businesses surveyed have only recently implemented their first AI risk strategy, while 31% have no AI governance policies in place. This is despite a third of businesses recognising AI as a potential cybersecurity threat. The blind spot in AI risk preparedness leaves businesses vulnerable to data leaks and breaches, operational disruptions, and regulatory fines, the company says.

Of those surveyed, 18% of UK and US organisations are unprepared for AI data poisoning, a form of cyberattack that targets the training data of AI and machine learning models. Moreover, 16% have no policies in place to fight cloning and deepfake incidents.
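For readers unfamiliar with the attack, data poisoning corrupts the examples a model learns from rather than the deployed model itself. A minimal, self-contained sketch of one common variant, label flipping, using toy scikit-learn data (entirely illustrative, not drawn from CyXcel's research):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy dataset standing in for a model's training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Attacker flips the labels of 30% of the training rows.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```

The model trained on tampered labels degrades even though the serving infrastructure is untouched, which is why defences focus on the data pipeline rather than the model binary.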

Megha Kumar, Chief Product Officer and Head of Geopolitical Risk at CyXcel, described a catch-22: companies want to adopt AI solutions but simultaneously worry about the risks. “Organisations want to use AI but are worried about risks – especially as many do not have a policy and governance process in place.”

Kumar said CyXcel’s Digital Risk Management (DRM) platform can help respond to mounting threats. “The CyXcel DRM provides clients in all sectors, especially those that have limited technological resources in house, with a robust tool to proactively manage digital risk and harness AI confidently and safely.”

The CyXcel DRM platform is designed to provide businesses with insight into growing AI risks. It combines cyber, legal, technical, and strategic expertise to help manage threats and improve digital resilience. The company says its DRM platform also helps implement governance and policies that will mitigate possible risks.

The DRM platform provides strategies covering AI, Cyber, Supply Chain, Geopolitics, Regulation, Technology (OT/IT), and Corporate Responsibility, all available through a dashboard where users can manage digital risks using solutions offered by the platform.

Legal and technical insights come from expertise coded into the platform, letting users see trends, the potential impact of risks, and emerging threats. It also advises on possible strategies for combating dangers and vulnerabilities.

The DRM also offers a “full-spectrum dispute resolution and litigation service” aimed at reducing the time organisations need to comply with regulations and laws covering various digital threats. For heavily regulated businesses, CyXcel’s DRM covers 26 sectors legally required to follow regulations such as the EU’s NIS2 and DORA (Digital Operational Resilience Act). These sectors are considered essential infrastructure, each classified as Critical National Infrastructure (CNI) in regions like the US, UK, and EU.

CyXcel CEO, Edward Lewis, spoke on the evolving and complex landscape of cybersecurity regulation. “Governments worldwide are enhancing protections for critical infrastructure and sensitive data through legislation like the EU’s Cyber Resilience Act, which mandates security measures like automatic updates and incident reporting. Similarly, new laws are likely to arrive in the UK next year which introduce mandatory ransomware reporting and stronger regulatory powers.”

Businesses worldwide are at the mercy of digital breaches and attacks, including, by its own admission, CyXcel itself. Commercially, legally, and strategically, CyXcel’s DRM platform is designed to tackle the same issues the company itself is exposed to.

CyXcel clients are typically bound by stringent cybersecurity laws, which, if broken, can result in fines and reputational damage. Similarly, if CyXcel’s advice falters, the company itself could be on the hook for failed compliance and breaches.

The company is at pains to stress that it’s facing the same digital risks as its clients. CyXcel’s marketing materials state that the company’s commitment to risk isn’t advisory, it’s ‘personal.’

(Image “Risk – MSK” by anarchosyn is licensed under CC BY-SA 2.0)

See also: Huawei HarmonyOS 6 AI agents offer alternative to Android and iOS

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.




AI Research

Australia’s China AI quandary is a dealmaker’s opportunity



It is not surprising that reactions to Chinese ambassador Xiao Qian’s suggestion that Australia and China cooperate more on artificial intelligence, as part of an expanded Free Trade Agreement, have been hawkish. However, the suggestion highlights the need for Australian organisations to broaden their view of the AI world.

It would take a dramatic shift in policy position for Australia to suddenly start collaborating with China on AI infrastructure such as data centres and the equipment that runs them. But it would be wrong to assume that advances in capability will always come from America first.


AI Research

Joint UT, Yale research develops AI tool for heart analysis – The Daily Texan



A study published on June 23 by collaborating UT and Yale researchers describes an artificial intelligence tool capable of performing and analyzing heart measurements using echocardiography.

The app, PanEcho, can analyze echocardiograms, ultrasound images of the heart. The tool was developed and trained on nearly one million echocardiographic videos. It can perform 39 echocardiographic tasks and accurately detect conditions such as systolic dysfunction and severe aortic stenosis.

“Our teammates helped identify a total of 39 key measurements and labels that are part of a complete echocardiographic report — basically what a cardiologist would be expected to report on when they’re interpreting an exam,” said Gregory Holste, an author of the study and a doctoral candidate in the Department of Electrical and Computer Engineering. “We train the model to predict those 39 labels. Once that model is trained, you need to evaluate how it performs across those 39 tasks, and we do that through this robust multi-site validation.”

Holste said that among PanEcho’s functions, one of the most impressive is its ability to measure left ventricular ejection fraction, the proportion of blood the left ventricle of the heart pumps out, far more accurately than human experts. Additionally, Holste said PanEcho can analyze the heart as a whole, while humans are limited to looking at the heart from one view at a time.
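For context, ejection fraction is a simple ratio of two ventricular volumes, which is why it makes a natural regression target for a model. A quick illustration (the volumes below are made-up example numbers, not study data):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction (%) from end-diastolic and
    end-systolic volumes: EF = (EDV - ESV) / EDV * 100."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Example: EDV 120 mL, ESV 50 mL -> EF ~58.3%, inside the commonly
# cited normal range of roughly 50-70%.
print(f"EF = {ejection_fraction(120.0, 50.0):.1f}%")
```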

“What is most unique about PanEcho is that it can do this by synthesizing information across all available views, not just curated single ones,” Holste said. “PanEcho integrates information from the entire exam — from multiple views of the heart to make a more informed, holistic decision about measurements like ejection fraction.” 
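The article does not detail PanEcho’s architecture, but the idea Holste describes, pooling features from every available view before making task predictions, can be sketched roughly as follows (PyTorch; every module, dimension, and head below is an illustrative assumption, not PanEcho’s actual design):

```python
import torch
import torch.nn as nn

class MultiViewEchoModel(nn.Module):
    """Illustrative multi-view, multi-task model: encode each echo view,
    pool features across views, then predict several labels at once."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Stand-in per-view encoder; a real system would use a video network.
        self.encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.ef_head = nn.Linear(feat_dim, 1)        # regression, e.g. ejection fraction
        self.stenosis_head = nn.Linear(feat_dim, 2)  # classification, e.g. aortic stenosis

    def forward(self, views: torch.Tensor) -> dict:
        # views: (batch, n_views, channels, height, width)
        b, v = views.shape[:2]
        feats = self.encoder(views.flatten(0, 1)).view(b, v, -1)
        pooled = feats.mean(dim=1)  # average features over all available views
        return {"ef": self.ef_head(pooled), "stenosis": self.stenosis_head(pooled)}

model = MultiViewEchoModel()
out = model(torch.randn(2, 4, 3, 64, 64))  # 2 exams, 4 views each
print(out["ef"].shape, out["stenosis"].shape)  # (2, 1) and (2, 2)
```

Averaging features across views is only one pooling choice; the point is that the prediction heads see a summary of the whole exam rather than a single curated view.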

PanEcho is available for open-source use to allow researchers to use and experiment with the tool for future studies. Holste said the team has already received emails from people trying to “fine-tune” the application for different uses. 

“We know that other researchers are working on adapting PanEcho to work on pediatric scans, and this is not something that PanEcho was trained to do out of the box,” Holste said. “But, because it has seen so much data, it can fine-tune and adapt to that domain very quickly. (There are) very exciting possibilities for future research.”
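Adapting such a model to a new domain like pediatric scans typically means freezing the pretrained backbone and retraining only the task heads on the new data. A minimal sketch, continuing the hypothetical model above:

```python
import torch

# Continuing the hypothetical MultiViewEchoModel sketch above.
model = MultiViewEchoModel()

# Freeze the shared encoder so its general-purpose echo features are kept,
# then train only the task heads on the new (e.g. pediatric) dataset.
for param in model.encoder.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```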




AI Research

Google launches AI tools for mental health research and treatment



Google announced two new artificial intelligence initiatives on July 7, 2025, designed to support mental health organizations in scaling evidence-based interventions and advancing research into anxiety, depression, and psychosis treatments.

The first initiative involves a comprehensive field guide developed in partnership with Grand Challenges Canada and McKinsey Health Institute. According to the announcement from Dr. Megan Jones Bell, Clinical Director for Consumer and Mental Health at Google, “This guide offers foundational concepts, use cases and considerations for using AI responsibly in mental health treatment, including for enhancing clinician training, personalizing support, streamlining workflows and improving data collection.”

The field guide addresses the global shortage of mental health providers, particularly in low- and middle-income countries. According to analysis from the McKinsey Health Institute cited in the document, “closing this gap could result in more years of life for people around the world, as well as significant economic gains.”

Summary

Who: Google for Health, Google DeepMind, Grand Challenges Canada, McKinsey Health Institute, and Wellcome Trust, targeting mental health organizations and task-sharing programs globally.

What: Two AI initiatives including a practical field guide for scaling mental health interventions and a multi-year research investment for developing new treatments for anxiety, depression, and psychosis.

When: Announced July 7, 2025, with ongoing development and research partnerships extending multiple years.

Where: Global implementation with focus on low- and middle-income countries where mental health provider shortages are most acute.

Why: Address the global shortage of mental health providers and democratize access to quality, evidence-based mental health support through AI-powered scaling solutions and advanced research.

The 73-page guide outlines nine specific AI use cases for mental health task-sharing programs, including applicant screening tools, adaptive training interfaces, real-time guidance companions, and provider-client matching systems. These tools aim to address challenges such as supervisor shortages, inconsistent feedback, and protocol drift that limit the effectiveness of current mental health programs.

Task-sharing models allow trained non-mental health professionals to deliver evidence-based mental health services, expanding access in underserved communities. The guide demonstrates how AI can standardize training, reduce administrative burdens, and maintain quality while scaling these programs.

According to the field guide documentation, “By standardizing training and avoiding the need for a human to be involved at every phase of the process, AI can help mental health task-sharing programs effectively scale evidence-based interventions throughout communities, maintaining a high standard of psychological support.”

The second initiative represents a multi-year investment from Google for Health and Google DeepMind in partnership with Wellcome Trust. The investment, which includes research grant funding from Wellcome Trust, will support research projects developing more precise, objective, and personalized measurement methods for anxiety, depression, and psychosis.

The research partnership aims to explore new therapeutic interventions, potentially including novel medications. This represents an expansion beyond current AI applications into fundamental research for mental health treatment development.

The field guide acknowledges that “the application of AI in task-sharing models is new and only a few pilots have been conducted.” Many of the outlined use cases remain theoretical and require real-world validation across different cultural contexts and healthcare systems.

For the marketing community, these developments signal growing regulatory attention to AI applications in healthcare advertising. Recent California guidance on AI healthcare supervision and Google’s new certification requirements for pharmaceutical advertising demonstrate increased scrutiny of AI-powered health technologies.

The field guide emphasizes the importance of regulatory compliance for AI mental health tools. Several proposed use cases, including triage facilitators and provider-client matching systems, could face classification as medical devices requiring regulatory oversight from authorities like the FDA or EU Medical Device Regulation.

Organizations considering these AI tools must evaluate technical infrastructure requirements, including cloud versus edge computing approaches, data privacy compliance, and integration with existing healthcare systems. The guide recommends starting with pilot programs and establishing governance committees before full-scale implementation.

Technical implementation challenges include model selection between proprietary and open-source systems, data preparation costs ranging from $10,000 to $90,000, and ongoing maintenance expenses of 10 to 30 percent of initial development costs annually.
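Taken together, those figures imply a wide range of total cost. A quick back-of-envelope calculation over three years (the $100,000 development cost is a placeholder assumption; the other numbers are the article's ranges):

```python
# Three-year cost estimate from the article's ranges. The initial
# development cost is a hypothetical placeholder, not from the guide.
dev_cost = 100_000
prep_low, prep_high = 10_000, 90_000    # data preparation
maint_low, maint_high = 0.10, 0.30      # annual upkeep, as a share of dev cost
years = 3

low = dev_cost + prep_low + maint_low * dev_cost * years
high = dev_cost + prep_high + maint_high * dev_cost * years
print(f"3-year total: ${low:,.0f} to ${high:,.0f}")  # $140,000 to $280,000
```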

The initiatives build on growing evidence that task-sharing approaches can improve clinical outcomes while reducing costs. Research cited in the guide shows that mental health task-sharing programs are cost-effective and can increase the number of people treated while reducing mental health symptoms, particularly in low-resource settings.

Real-world implementations highlighted in the guide include The Trevor Project’s AI-powered crisis counselor training bot, which trained more than 1,000 crisis counselors in approximately one year, and Partnership to End Addiction’s embedded AI simulations for peer coach training.

These organizations report improved training efficiency and enhanced quality of coach conversations through AI implementation, suggesting practical benefits for established mental health programs.

The field guide warns that successful AI adoption requires comprehensive planning across technical, ethical, governance, and sustainability dimensions. Organizations must establish clear policies for responsible AI use, conduct risk assessments, and maintain human oversight throughout implementation.

According to the World Health Organization principles referenced in the guide, responsible AI in healthcare must protect autonomy, promote human well-being, ensure transparency, foster responsibility and accountability, ensure inclusiveness, and promote responsive and sustainable development.

Timeline

  • July 7, 2025: Google announces two AI initiatives for mental health research and treatment
  • January 2025: California issues guidance requiring physician supervision of healthcare AI systems
  • May 2024: FDA reports 981 AI and machine learning software devices authorized for medical use
  • Development ongoing: Field guide created through 10+ discovery interviews, expert summit with 20+ specialists, 5+ real-life case studies, and review of 100+ peer-reviewed articles



