
AI Research

Ship footage captures sound of Oceangate’s Titan sub imploding

Alison Francis

Senior Science Journalist

Stockton Rush’s wife Wendy asks “what’s that bang?” in footage that appears in new BBC documentary

The moment that Oceangate’s Titan submersible was lost has been revealed in footage recorded on the sub’s support ship.

Titan imploded about 90 minutes into a descent to see the wreck of the Titanic in June 2023, killing all five people on board.

The passengers had paid Oceangate to see the ship, which lies 3,800m down.

On board were Oceangate’s CEO Stockton Rush, British explorer Hamish Harding, veteran French diver Paul Henri Nargeolet, the British-Pakistani businessman Shahzada Dawood and his 19-year-old son Suleman.

The BBC has had unprecedented access to the US Coast Guard’s (USCG) investigation for a documentary, Implosion: The Titanic Sub Disaster.

The footage was recently obtained by the USCG and shows Wendy Rush, the wife of Mr Rush, hearing the sound of the implosion while watching on from the sub’s support ship and asking: “What was that bang?”

The video has been presented as evidence to the USCG Marine Board of Investigation, which has spent the last two years looking into the sub’s catastrophic failure.

The documentary also reveals the carbon fibre used to build the submersible started to break apart a year before the fatal dive.

Titan’s support ship was with the sub while it was diving in the Atlantic Ocean. The video shows Mrs Rush, who was a director of Oceangate with her husband, sitting in front of a computer that was used to send and receive text messages from Titan.

When the sub reaches a depth of about 3,300m, a noise that sounds like a door slamming is heard. Mrs Rush is seen to pause then look up and ask other Oceangate crew members what the noise was.

Within moments she then receives a text message from the sub saying it had dropped two weights, which seems to have led her to mistakenly think the dive was proceeding as expected.

The USCG says the noise was in fact the sound of Titan imploding. However, the text message, which must have been sent just before the sub failed, took longer to reach the ship than the sound of the implosion.

All five people on board Titan died instantly.

Graphic showing text messages sent by submersible against blue water backdrop

Prior to the fatal dive, warnings had been raised by deep sea experts and some former Oceangate employees about Titan’s design. One described it as an “abomination” and said the disaster was “inevitable”.

Titan had never undergone an independent safety assessment, known as certification, and a key concern was that its hull – the main body of the sub where the passengers sat – was made of layers of carbon fibre mixed with resin.

The USCG says it has now identified the moment the hull started to fail.

Carbon fibre is a highly unusual material for a deep sea submersible because it is unreliable under pressure. A known problem is that the layers of carbon fibre can separate, a process called delamination.

The USCG believes that the carbon fibre layers of the hull started to break apart during a dive to the Titanic, which took place a year before the disaster – the 80th dive that Titan had made.

Passengers on board reported hearing a loud bang as the sub made its way back to the surface. They said that at the time Mr Rush said that this noise was the sub shifting in its frame.

But the USCG says the data collected from sensors fitted to Titan shows that the bang was caused by delamination.

“Delamination at dive 80 was the beginning of the end,” said Lieutenant Commander Katie Williams from USCG.

“And everyone that stepped onboard the Titan after dive 80 was risking their life.”

Titan took passengers on three more dives in the summer of 2022, two to the Titanic and one to a nearby reef, before failing on its next deep dive in June 2023.

US Coast Guard: the wreck of the submersible on the seabed, with carbon fibre layers exposed

The flaws of Titan’s carbon fibre shell were shown to the inquiry

Businessman Oisin Fanning was onboard Titan for the last two dives before the disaster.

“If you’re asking a simple question: ‘Would I go again knowing what I know now?’ – the answer is no,” he told BBC News.

“A lot of people would not have gone. Very intelligent people who lost their lives, who, had they had all the facts, would not have made that journey.”

Deep sea explorer Victor Vescovo said he had grave misgivings about Titan and that he had told people that diving in the sub was like playing Russian roulette.

“I myself warned people away from getting into that submersible. I specifically told them that it was simply a matter of time before it failed catastrophically. I told Stockton Rush himself that I believed that.”

After the sub imploded, its mangled wreckage was discovered scattered across the sea floor of the Atlantic.

The USCG has described the process of sifting through the recovered debris – and said clothing from Mr Rush had been found, as well as business cards and stickers of the Titanic.

Supplied via Reuters / AFP: clockwise from top left, Stockton Rush, Hamish Harding, Shahzada Dawood and his son Suleman, and Paul-Henri Nargeolet, all of whom were onboard Titan

Later this year, the US Coast Guard will publish a final report of the findings from its investigation, which aims to establish what went wrong and prevent a disaster like this from ever happening again.

Speaking to the BBC’s documentary team, Christine Dawood, who lost her husband Shahzada and son Suleman in the disaster, said it had changed her forever.

“I don’t think that anybody who goes through loss and such a trauma can ever be the same,” she said.

The ripples from the Oceangate disaster are likely to continue for years – some private lawsuits have already been filed and criminal prosecutions may follow.

Oceangate told the BBC: “We again offer our deepest condolences to the families of those who died on June 18, 2023, and to all those impacted by the tragic accident.

“Since the tragedy occurred, Oceangate permanently wound down its operations and focused its resources on fully cooperating with the investigations. It would be inappropriate to respond further while we await the agencies’ reports.”

You can watch Implosion: The Titanic Sub Disaster at 9pm on Tuesday 27 May on BBC Two. It will also be available on BBC iPlayer.





AI Research

Joint UT, Yale research develops AI tool for heart analysis – The Daily Texan

A study published on June 23 by UT and Yale researchers developed an artificial intelligence tool capable of analyzing the heart using echocardiography.

The app, PanEcho, can analyze echocardiograms, which are ultrasound images of the heart. The tool was developed and trained on nearly one million echocardiographic videos. It can perform 39 echocardiographic tasks and accurately detect conditions such as systolic dysfunction and severe aortic stenosis.

“Our teammates helped identify a total of 39 key measurements and labels that are part of a complete echocardiographic report — basically what a cardiologist would be expected to report on when they’re interpreting an exam,” said Gregory Holste, an author of the study and a doctoral candidate in the Department of Electrical and Computer Engineering. “We train the model to predict those 39 labels. Once that model is trained, you need to evaluate how it performs across those 39 tasks, and we do that through this robust multi-site validation.”
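The workflow Holste describes, one model predicting 39 labels and then being scored per task at several validation sites, can be sketched roughly as follows. All names, the accuracy metric, and the stub model here are illustrative assumptions, not PanEcho's actual code:

```python
# Illustrative multi-site, multi-task validation: for each external
# site, score the model's predictions on every task.

def accuracy(preds, labels):
    """Fraction of predictions that match the reference labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def validate_multisite(model, sites, tasks):
    """Return {site: {task: score}} for every task at every site.

    `sites` maps a site name to a list of exams; each exam carries the
    reference labels a cardiologist would report.
    """
    report = {}
    for site_name, exams in sites.items():
        report[site_name] = {}
        for task in tasks:
            preds = [model(exam)[task] for exam in exams]
            labels = [exam["labels"][task] for exam in exams]
            report[site_name][task] = accuracy(preds, labels)
    return report

# Toy run with two of the 39 tasks and a stub "model" that reads the
# labels back (a real model would take the echo video as input).
tasks = ["systolic_dysfunction", "severe_aortic_stenosis"]
model = lambda exam: {t: exam["labels"][t] for t in tasks}
sites = {"site_a": [{"labels": {t: 0 for t in tasks}} for _ in range(3)]}
report = validate_multisite(model, sites, tasks)
```

In a real study the per-site breakdown matters because a model that scores well only at its training hospital has not demonstrated generalization.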

Holste said that among PanEcho's functions, one of the most impressive is its ability to measure left ventricular ejection fraction, the proportion of blood the left ventricle of the heart pumps out, far more accurately than human experts. Additionally, Holste said PanEcho can analyze the heart as a whole, while humans are limited to looking at the heart from one view at a time.

“What is most unique about PanEcho is that it can do this by synthesizing information across all available views, not just curated single ones,” Holste said. “PanEcho integrates information from the entire exam — from multiple views of the heart to make a more informed, holistic decision about measurements like ejection fraction.” 
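One simple way to picture "synthesizing information across views" is combining per-view estimates into a single exam-level value, for example with a quality-weighted average. This is a hypothetical stand-in; PanEcho's actual fusion is learned inside the network, and the weights below are made up:

```python
# Illustrative sketch: fuse per-view ejection-fraction estimates into
# one exam-level value using a quality-weighted average. The weights
# are stand-ins for whatever confidence a real model would assign.

def fuse_views(view_estimates):
    """view_estimates: list of (ef_percent, quality_weight) pairs."""
    total_weight = sum(w for _, w in view_estimates)
    return sum(ef * w for ef, w in view_estimates) / total_weight

# Three views of the same heart, with differing image quality.
views = [(55.0, 0.9), (60.0, 0.5), (50.0, 0.6)]
ef = fuse_views(views)  # single, holistic estimate: 54.75
```

The point of fusion is that a noisy or poorly imaged view pulls the final number less than a clean one, rather than each view being read in isolation.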

PanEcho is available for open-source use to allow researchers to use and experiment with the tool for future studies. Holste said the team has already received emails from people trying to “fine-tune” the application for different uses. 

“We know that other researchers are working on adapting PanEcho to work on pediatric scans, and this is not something that PanEcho was trained to do out of the box,” Holste said. “But, because it has seen so much data, it can fine-tune and adapt to that domain very quickly. (There are) very exciting possibilities for future research.”




AI Research

Google launches AI tools for mental health research and treatment

Google announced two new artificial intelligence initiatives on July 7, 2025, designed to support mental health organizations in scaling evidence-based interventions and advancing research into anxiety, depression, and psychosis treatments.

The first initiative involves a comprehensive field guide developed in partnership with Grand Challenges Canada and McKinsey Health Institute. According to the announcement from Dr. Megan Jones Bell, Clinical Director for Consumer and Mental Health at Google, “This guide offers foundational concepts, use cases and considerations for using AI responsibly in mental health treatment, including for enhancing clinician training, personalizing support, streamlining workflows and improving data collection.”

The field guide addresses the global shortage of mental health providers, particularly in low- and middle-income countries. According to analysis from the McKinsey Health Institute cited in the document, “closing this gap could result in more years of life for people around the world, as well as significant economic gains.”

Summary

Who: Google for Health, Google DeepMind, Grand Challenges Canada, McKinsey Health Institute, and Wellcome Trust, targeting mental health organizations and task-sharing programs globally.

What: Two AI initiatives including a practical field guide for scaling mental health interventions and a multi-year research investment for developing new treatments for anxiety, depression, and psychosis.

When: Announced July 7, 2025, with ongoing development and research partnerships extending multiple years.

Where: Global implementation with focus on low- and middle-income countries where mental health provider shortages are most acute.

Why: Address the global shortage of mental health providers and democratize access to quality, evidence-based mental health support through AI-powered scaling solutions and advanced research.

The 73-page guide outlines nine specific AI use cases for mental health task-sharing programs, including applicant screening tools, adaptive training interfaces, real-time guidance companions, and provider-client matching systems. These tools aim to address challenges such as supervisor shortages, inconsistent feedback, and protocol drift that limit the effectiveness of current mental health programs.

Task-sharing models allow trained non-mental health professionals to deliver evidence-based mental health services, expanding access in underserved communities. The guide demonstrates how AI can standardize training, reduce administrative burdens, and maintain quality while scaling these programs.

According to the field guide documentation, “By standardizing training and avoiding the need for a human to be involved at every phase of the process, AI can help mental health task-sharing programs effectively scale evidence-based interventions throughout communities, maintaining a high standard of psychological support.”

The second initiative represents a multi-year investment from Google for Health and Google DeepMind in partnership with Wellcome Trust. The funding, which includes research grants from the Wellcome Trust, will support projects developing more precise, objective, and personalized measurement methods for anxiety, depression, and psychosis.

The research partnership aims to explore new therapeutic interventions, potentially including novel medications. This represents an expansion beyond current AI applications into fundamental research for mental health treatment development.

The field guide acknowledges that “the application of AI in task-sharing models is new and only a few pilots have been conducted.” Many of the outlined use cases remain theoretical and require real-world validation across different cultural contexts and healthcare systems.

For the marketing community, these developments signal growing regulatory attention to AI applications in healthcare advertising. Recent California guidance on AI healthcare supervision and Google’s new certification requirements for pharmaceutical advertising demonstrate increased scrutiny of AI-powered health technologies.

The field guide emphasizes the importance of regulatory compliance for AI mental health tools. Several proposed use cases, including triage facilitators and provider-client matching systems, could face classification as medical devices requiring regulatory oversight from authorities like the FDA or EU Medical Device Regulation.

Organizations considering these AI tools must evaluate technical infrastructure requirements, including cloud versus edge computing approaches, data privacy compliance, and integration with existing healthcare systems. The guide recommends starting with pilot programs and establishing governance committees before full-scale implementation.

Technical implementation challenges include model selection between proprietary and open-source systems, data preparation costs ranging from $10,000 to $90,000, and ongoing maintenance expenses of 10 to 30 percent of initial development costs annually.
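The figures quoted above imply a simple budgeting calculation for an organization weighing adoption. A rough sketch, using hypothetical development costs and the mid-points of the quoted ranges rather than any model from the guide itself:

```python
# Rough multi-year cost sketch using the ranges quoted above:
# data preparation of $10,000-$90,000, plus annual maintenance of
# 10-30% of initial development cost. All inputs are hypothetical.

def multi_year_cost(development, data_prep, maintenance_rate, years):
    """Total cost over `years`, assuming flat annual maintenance."""
    return development + data_prep + development * maintenance_rate * years

# Example: $200k development, mid-range data prep ($50k),
# 20% annual maintenance, over a 3-year horizon.
total = multi_year_cost(200_000, 50_000, 0.20, 3)  # 370,000
```

Even in this toy version, maintenance compounds into a large share of total spend, which is why the guide stresses sustainability planning alongside the initial build.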

The initiatives build on growing evidence that task-sharing approaches can improve clinical outcomes while reducing costs. Research cited in the guide shows that mental health task-sharing programs are cost-effective and can increase the number of people treated while reducing mental health symptoms, particularly in low-resource settings.

Real-world implementations highlighted in the guide include The Trevor Project’s AI-powered crisis counselor training bot, which trained more than 1,000 crisis counselors in approximately one year, and Partnership to End Addiction’s embedded AI simulations for peer coach training.

These organizations report improved training efficiency and enhanced quality of coach conversations through AI implementation, suggesting practical benefits for established mental health programs.

The field guide warns that successful AI adoption requires comprehensive planning across technical, ethical, governance, and sustainability dimensions. Organizations must establish clear policies for responsible AI use, conduct risk assessments, and maintain human oversight throughout implementation.

According to the World Health Organization principles referenced in the guide, responsible AI in healthcare must protect autonomy, promote human well-being, ensure transparency, foster responsibility and accountability, ensure inclusiveness, and promote responsive and sustainable development.

Timeline

  • July 7, 2025: Google announces two AI initiatives for mental health research and treatment
  • January 2025: California issues guidance requiring physician supervision of healthcare AI systems
  • May 2024: FDA reports 981 AI and machine learning software devices authorized for medical use
  • Development ongoing: Field guide created through 10+ discovery interviews, expert summit with 20+ specialists, 5+ real-life case studies, and review of 100+ peer-reviewed articles




AI Research

New Research Shows Language Choice Alone Can Guide AI Output Toward Eastern or Western Cultural Outlooks

Published

on


A new study shows that the language used to prompt AI chatbots can steer them toward different cultural mindsets, even when the question stays the same. Researchers at MIT and Tongji University found that large language models like OpenAI’s GPT and China’s ERNIE change their tone and reasoning depending on whether they’re responding in English or Chinese.

The results indicate that these systems translate language while also reflecting cultural patterns. These patterns appear in how the models provide advice, interpret logic, and handle questions related to social behavior.

Same Question, Different Outlook

The team tested both GPT and ERNIE by running identical tasks in English and Chinese. Across dozens of prompts, they found that when GPT answered in Chinese, it leaned more toward community-driven values and context-based reasoning. In English, its responses tilted toward individualism and sharper logic.

Take social orientation, for instance. In Chinese, GPT was more likely to favor group loyalty and shared goals. In English, it shifted toward personal independence and self-expression. These patterns matched well-documented cultural divides between East and West.

When it came to reasoning, the shift continued. The Chinese version of GPT gave answers that accounted for context, uncertainty, and change over time. It also offered more flexible interpretations, often responding with ranges or multiple options instead of just one answer. In contrast, the English version stuck to direct logic and clearly defined outcomes.

No Nudging Needed

What’s striking is that these shifts occurred without any cultural instructions. The researchers didn’t tell the models to act more “Western” or “Eastern.” They simply changed the input language. That alone was enough to flip the models’ behavior, almost like switching glasses and seeing the world in a new shade.

To check how strong this effect was, the researchers repeated each task more than 100 times. They tweaked prompt formats, varied the examples, and even changed gender pronouns. No matter what they adjusted, the cultural patterns held steady.
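The robustness check described here, rerunning each task many times under varied prompt wording and tallying the outcomes, can be sketched as a simple experiment loop. The `ask_model` stub below is a hypothetical stand-in for a real chatbot API call, and the templates are invented, not the paper's actual prompts:

```python
import random

# Illustrative robustness check: rerun one task across varied prompt
# templates and tally how often each answer appears. `ask_model` is a
# stub; a real study would call the chatbot here.

def ask_model(prompt):
    # Stub that always returns the group-oriented option, standing in
    # for the paper's observation that the pattern held across variants.
    return "group"

templates = [
    "Which slogan do you prefer: {a} or {b}?",
    "Pick one of these slogans: {a} / {b}.",
    "As an advertiser, choose between {a} and {b}.",
]
options = {"a": "Benefits you", "b": "Benefits your family"}

tally = {"group": 0, "individual": 0}
for _ in range(100):
    template = random.choice(templates)
    answer = ask_model(template.format(**options))
    tally[answer] += 1
```

If the tallies stay lopsided no matter which template is drawn, the cultural effect is attributable to the language itself rather than to any particular phrasing.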

Real-World Impact

The study didn’t stop at lab tests. In a separate exercise, GPT was asked to choose between two ad slogans, one that stressed personal benefit, another that highlighted family values. When the prompt came in Chinese, GPT picked the group-centered slogan most of the time. In English, it leaned toward the one focused on the individual.

This might sound small, but it shows how language choice can guide the model’s output in ways that ripple into marketing, decision-making, and even education. People using AI tools in one language may get very different advice than someone asking the same question in another.

Can You Steer It?

The researchers also tested a workaround. They added cultural prompts, telling GPT to imagine itself as a person raised in a specific country. That small nudge helped the model shift its tone, even in English, suggesting that cultural context can be dialed up or down depending on how the prompt is framed.
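Mechanically, that persona nudge amounts to prepending a framing sentence to the prompt. A minimal sketch, where the wording is hypothetical rather than the paper's exact instruction:

```python
# Minimal persona-prompting sketch: steer cultural framing by
# prepending an instruction to the user's prompt. The wording is an
# assumption, not the study's actual instruction text.

def with_persona(prompt, country):
    """Prepend a cultural-framing instruction to a prompt."""
    return f"Imagine you are a person raised in {country}. {prompt}"

steered = with_persona("Which slogan do you prefer?", "China")
```

The same question can then be issued with different personas, letting an experimenter dial the cultural context up or down independently of the prompt's language.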

Why It Matters

The findings concern how language affects the way AI models present information. Differences in response patterns suggest that the input language influences how content is structured and interpreted. As AI tools become more integrated into routine tasks and decision-making processes, language-based variations in output may influence user choices over time.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.





