AI Insights

Prediction of suicide using web-based voice recordings analyzed by artificial intelligence


In this study, we utilized publicly available web-based, real-world voice data to develop an AI model capable of distinguishing between individuals who died by suicide and carefully matched controls. The most effective model, a feedforward neural network (Multilayer Perceptron), demonstrated high predictive accuracy, especially considering its reliance on incidental real-world data. Notably, this accuracy improved significantly when analyzing the subset of NearGroup data, which includes individuals who died by suicide within 12 months of the audio recording. Specifically, the model achieved higher AUC and accuracy for this subset, underscoring the critical role of temporal proximity in the identification of voice biomarkers linked to suicide risk.
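The AUC metric central to these results has a simple probabilistic reading: it is the chance that a randomly chosen positive case receives a higher model score than a randomly chosen control. As an illustrative sketch (not the authors' actual pipeline), it can be computed directly from scores via the Mann-Whitney formulation:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: 1 for a positive case, 0 for a control.
    scores: the model's predicted risk scores, higher = more at risk.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count positive-negative pairs ranked correctly; ties count half.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A classifier scoring [0.9, 0.4] for positives and [0.6, 0.2] for
# controls ranks 3 of 4 pairs correctly, giving AUC = 0.75.
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # → 0.75
```

An AUC of 0.5 corresponds to chance-level ranking, which is why improvements on the NearGroup subset indicate genuinely stronger separation of the two groups.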

These findings substantiate earlier work suggesting that paralinguistic features are promising biomarkers for mental health conditions, including suicidality. For instance, prior studies by Pestian et al.9,10 and Hashim et al.11,12 demonstrated that acoustic and linguistic features could differentiate individuals with suicidal ideation from controls with high accuracy. However, those studies relied on clinical or structured settings and predicted suicidal ideation or questionnaire outcomes, not actual suicide.

Our work builds on and extends these efforts by demonstrating that AI models can predict completed suicides using publicly available, naturalistic voice data. This is a significant advancement over studies such as those by Amiriparian et al.13 and Song et al.14, which, while highly accurate, relied on emergency or hotline data and still focused on risk indicators rather than confirmed outcomes. By leveraging data from the general population and focusing on verified suicide deaths, our model aligns more closely with real-world application potential, improving ecological validity as highlighted by Iyer et al.17 and Belouali et al.18.

Moreover, our findings echo observations by Walsh et al.34, who found that suicide risk prediction improves with temporal proximity to the event. We show a similar trend: predictive accuracy increased significantly in cases where the suicide occurred within 12 months of the recording, suggesting that the acoustic markers of suicide risk may become more pronounced as the event nears.

Furthermore, strong results were achieved not only with the feedforward neural network but also across multiple classification algorithms, including Logistic Regression, Nearest Neighbors, XGBoost Linear, and XGBoost Tree, using various paralinguistic features. The robustness of the study is further evidenced by the model's ability to maintain high performance despite perturbations in the analytical pipeline. These findings suggest the potential for significant advancements in suicide risk assessment through the use of ML models, particularly neural networks, applied to voice data.
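The multi-algorithm comparison described above can be sketched with scikit-learn. Everything below is a hypothetical stand-in: the synthetic feature matrix, the sample size, and the hyperparameters are illustrative assumptions, not the study's data or configuration, and the XGBoost variants are omitted to keep the sketch to one library.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for an extracted paralinguistic feature matrix.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "nearest_neighbors": KNeighborsClassifier(),
    "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                         random_state=0),
}

# Mean AUC over 5 cross-validation folds for each classifier.
scores = {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
          for name, m in models.items()}
for name, s in scores.items():
    print(f"{name}: mean AUC = {s:.3f}")
```

Running several classifier families over the same folds, as in this sketch, is one way to check that a result reflects the features rather than the quirks of a single model.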

All in all, the model exhibits remarkable robustness despite the inherent complexity and limitations of the data: noise, the difficulty of the task, and the absence of background clinical information that could shed light on the potential causes leading to a completed suicide. It is important to note that our study lacked clinical information detailing the specific psychopathology of the participants, both those who died by suicide and those in the control group. Although this forced us to make assumptions about potential comorbidities and external influences such as medications and drugs of abuse, the dataset remains invaluable for its unequivocal 'hard outcome'. This shortcoming can, however, also be seen as a strength of our investigation: we were able to discern distinct groups even without background clinical information.

Importantly, this is the first study to successfully predict actual suicidal behavior rather than relying on surrogate markers, such as self-reported measures from questionnaires. This marks a significant advancement in suicide prevention research, as it demonstrates the feasibility of using AI to analyze naturalistic voice data for the identification of suicide risk. These findings hold promise for improving early detection efforts and tailoring interventions to prevent suicide, particularly in the critical period leading up to the event.

Future research

Our future research efforts will focus on rigorous validation of these results by acquiring data in a clinical study featuring well-diagnosed patients. Exploring additional acoustic feature groups, such as tempo, rhythm, spectral features, and a nuanced analysis of pauses (both filled and unfilled), is one avenue for further improving the model's performance. Larger datasets based on automated analysis would allow the formation of a holdout set, improving upon the current cross-validation approach and enabling a more robust evaluation of the results. More extensive datasets could also facilitate alternative, end-to-end AI approaches, including transformer models. Additionally, textual information (the use of certain word classes, or a particular ratio of word classes) is needed to make the analysis more comprehensive. Finally, differentiating between the various psychopathologies that contribute to depression, suicidal ideation, and death by suicide within the general population will provide a deeper understanding of the complexities surrounding mental health. One way to achieve these goals would be to integrate voice biomarker analysis into national suicide prevention hotlines.
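The holdout set mentioned above differs from cross-validation in that it is split off once and evaluated only at the very end, avoiding the optimistic bias that arises when the same folds also guide model tuning. The helper below is a hypothetical illustration of such a split, preserving the class balance of a matched-control design; it is not part of the study's code.

```python
import random

def stratified_holdout(labels, test_frac=0.2, seed=0):
    """Split sample indices into (train, holdout), preserving the
    proportion of each class in both partitions."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    train, holdout = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        k = max(1, round(len(idxs) * test_frac))
        holdout.extend(idxs[:k])   # held back for the final evaluation
        train.extend(idxs[k:])     # used for tuning/cross-validation
    return sorted(train), sorted(holdout)
```

For a balanced set of 50 cases and 50 controls with `test_frac=0.2`, this yields an 80-sample training pool and a 20-sample holdout containing 10 of each class.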

Ethical considerations

The integration of AI and ML in health data analysis poses numerous ethical issues that should be carefully considered. It is imperative to safeguard individual privacy while weighing it against the potential benefits to well-being. As highlighted by Lejeune et al.35, 'the application of AI to health data will require robust cyber security as well as a clear legal framework' (p. 7). These researchers further emphasize that while AI holds great promise, caution is needed, as it could introduce a responsibility problem: overreliance on technology in healthcare decision-making. They argue that such reliance may lead to diminished human oversight and accountability in clinical settings. Consequently, AI should be regarded as a tool to complement, rather than replace, human clinical judgment. Ensuring that healthcare professionals remain integral to the decision-making process is essential to maintaining the quality and ethical standards of patient care. Furthermore, while this study highlights the potential of repurposing publicly available data to address critical health challenges and gain valuable insights (a data collection procedure validated by an ethics board), we strongly agree that the development and deployment of AI systems in healthcare must be guided by foundational ethical principles, including transparency, fairness, and the right to privacy, to build trust and ensure the responsible use of technology.








In the ever-changing artificial intelligence (AI) world, there is a place that is gaining an unrivaled status as an AI-based language-specific service. DeepL started in Germany in 2017 and now has 200,000 companies around the world as customers.

DeepL Chief Revenue Officer David Parry-Jones, whom Maeil Business recently interviewed via video, is in charge of all customer management and support.

DeepL is focusing on winning customers by introducing services tailored to their needs, such as last year's launches of "DeepL for Enterprise," a corporate product, and "DeepL Voice," a voice translation solution.

"We are focusing on our translator, which is the key product, and DeepL Voice is gaining popularity as it can be installed in the Teams environment," Parry-Jones said. "We are also considering bringing it to Zoom, the video conferencing platform."

DeepL's voice translation solution is currently integrated into Microsoft Teams. When participants in a Teams meeting speak in their own languages, the other participants can read subtitles translated in real time. Since Zoom and MS Teams together account for nearly 90% of the global video conferencing market, bringing DeepL's solution to Zoom as well would remove much of the remaining language barrier in video conferences.

DeepL's solutions are all focused on saving the time and resources that go into translation while delivering accurate results. "According to a Forrester Research study commissioned last year, companies' internal document translation time was reduced by 90% when using DeepL solutions," Parry-Jones said, explaining that DeepL is helping to break down language barriers and improve efficiency.

The Asian market, including non-English-speaking Korea, is considered a key market for DeepL. CEO Jarek Kutylowski also visits Korea almost every year and meets with local customers.

"The Asia-Pacific region and Japan are behind DeepL's rapid growth," said Parry-Jones, noting that the region accounts for 45% of translation-service sales. "In particular, Japan is the second-largest market, and Korea is closely following it." He explained that Korea and Japan have similar levels of English proficiency and many large corporations active in multiple countries, so demand for high-quality translation is high.

In Japan, Daiwa Securities is using DeepL solutions in the process of disclosing performance-related data to the world, and Fujifilm and NEC are also representative customers of DeepL. In Korea, Yanolja, Lotte Innovate, and Lightning Market are using DeepL.

Among Asian countries, DeepL currently has an office only in Japan; it is considering establishing a Korean branch as well, although the exact timing has not been set.

"DeepL continues to improve translation quality while investing for growth in Korea," Parry-Jones said. "We need a local team for that growth. We can't promise an exact schedule, but a Korean branch will be a natural development."

Meanwhile, as generative AI services such as ChatGPT become more common, translation, though not their main function, is something they now perform at a serviceable level, putting pressure on dedicated translation services.

DeepL regards these services as competitors. "DeepL is a translation company, so the difference is that we strive to provide accuracy and innovative language services," Parry-Jones said. "Comparing translation quality, the gap with ChatGPT has narrowed slightly. We will continue to improve quality while testing regularly."

[Reporter Jeong Hojun]






There is No Such Thing as Artificial Intelligence – Nathan Beacom



One man tried to kill a cop with a butcher knife, because OpenAI killed his lover. A 29-year-old mother became violent toward her husband when he suggested that her relationship with ChatGPT was not real. A 41-year-old now-single mom split with her husband after he became consumed with chatbot communication, developing bizarre paranoia and conspiracy theories.

These stories, reported by the New York Times and Rolling Stone, represent the frightening, far end of the spectrum of chatbot-induced madness. How many people, we might wonder, are quietly losing their minds because they’ve turned to chatbots as a salve for loneliness or frustrated romantic desire?






Apple Supplier Lens Tech Said to Price $607 Million Hong Kong Listing at Top of Range





Apple Inc. supplier Lens Technology Co. has raised HK$4.8 billion ($607 million) after pricing its Hong Kong listing at the top of the marketed range, according to people familiar with the matter.




