Multimodal AI to forecast arrhythmic death in hypertrophic cardiomyopathy
This study complies with all relevant ethical regulations and has been approved by the institutional review boards of Johns Hopkins Medicine and Atrium Health.

Patient population and datasets

JHH-HCM registry (internal)

A retrospective analysis was performed on patient data from the JHH-HCM registry spanning 2005–2015. Enrollment in the registry was based on the first visit to the Johns Hopkins HCM Center of Excellence, where patients meeting the diagnostic criteria for HCM were included. These criteria focused on the presence of unexplained left ventricular hypertrophy (maximal wall thickness ≥15 mm) without evidence of uncontrolled hypertension, valvular heart disease and HCM phenocopies, such as amyloidosis and storage disorders. Patients were followed for a mean duration of 2.86 years (median 1.92 years; 25th–75th percentile = 0.94–4.28 years). The current study focused on a subset of patients with HCM who were enrolled between 2005 and 2015 and had adequate LGE-CMR images, totaling 553 patients (Extended Data Fig. 3).

SHVI-HCM registry (external)

A retrospective analysis was performed on patient data from the Atrium Health SHVI-HCM registry spanning 2015–2023. This registry includes patients who presented to the SHVI HCM Center of Excellence with a preexisting HCM diagnosis or were subsequently diagnosed based on cardiac imaging, personal and family history, and/or genetic testing in accordance with current guideline definitions. Patients within this longitudinal database are still being followed, as the endpoint for registry inclusion is the transfer of care to an outside facility or death. For the purposes of this study, the SHVI-HCM registry was interrogated for patients who had undergone CMR imaging and ICD placement, and enrollment was delineated by the patient’s first visit with the SHVI.

Data collection and primary endpoint

Clinical data, including demographics, symptoms, comorbidities, medical history and stress test results, were ascertained during the initial clinic visit and at each follow-up visit. Rest and stress echocardiography and CMR imaging were performed as routine components of clinical evaluation for all patients referred to the HCM centers. For the internal JHH-HCM registry, echocardiography and CMR imaging were conducted before the first clinic visit, typically about 3 months before it. For the SHVI-HCM registry, patients typically underwent echocardiography and CMR imaging after the first clinic visit. The full list of covariates used in MAARS can be found in Extended Data Tables 1 and 2. The data were extracted through a manual search of patients’ EHRs. EchoPAC software (GE Healthcare) was used to quantitatively analyze the echocardiograms and compute related covariates. Of note, the internal and external cohorts comprise distinct patient populations with different demographic characteristics and different levels of risk factors (Table 1).

The CMR images in the JHH-HCM registry were acquired using 1.5-T magnetic resonance imaging (MRI) devices (Aera, Siemens; Avanto, Siemens; Signa, GE; Intera, Philips). In the SHVI-HCM registry, most CMR images were acquired using 1.5-T MRI devices (Aera, Siemens; Sola, Siemens), and a small proportion were acquired using 3-T MRI devices (Vida, Siemens). LGE images were obtained 10–15 min after intravenous administration of 0.2 mmol kg⁻¹ gadopentetate dimeglumine. An inversion scout sequence was used to select the optimal inversion time for nulling normal myocardial signal. All images used were 2D parallel short-axis left ventricular stacks. Typical spatial resolutions were in the range of 1.4–2.9 × 1.4–2.9 × 7–8 mm, with 1.6- to 2-mm gaps.

The primary endpoint for the JHH-HCM registry was SCDA, defined as sustained ventricular tachycardia (ventricular rate ≥130 beats per min lasting ≥30 s) or ventricular fibrillation resulting in defibrillator shocks or antitachycardia pacing. Arrhythmic events were ascertained by reviewing electrocardiogram, Holter monitor and ICD interrogation data. The primary endpoint for the SHVI-HCM registry was SCDA, defined as device shock, appropriate device interventions or out-of-hospital cardiac arrest.

More details regarding patient inclusion, assessment, follow-up, echocardiography and CMR acquisition can be found in previous work23,51.

Data preparation

The multimodal inputs to MAARS included LGE-CMR scans and clinical covariates from EHRs and CIRs (Extended Data Tables 1 and 2). The labels were the outcomes (SCDA or non-SCDA). The preprocessing steps for LGE-CMR scans (described below) aimed to exclude nonrelevant background information and to standardize the CMR image volume for consistent analysis across all patients. We first obtained the left ventricular region of interest using our previously developed and validated deep learning algorithm52. Once each patient’s LGE-CMR 2D slices were processed using this algorithm, all pixels outside the left ventricle were zeroed out, and the pixels within the left ventricle were normalized by the median blood pool pixel intensity in each slice. Finally, the processed slices were stacked and interpolated to a regular 96 × 96 × 20 grid with voxel dimensions of 4.0 × 4.0 × 6.3 mm.
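For illustration, the preprocessing steps above can be sketched in NumPy/SciPy. This is a minimal sketch, not the authors' implementation: the `lv_masks` and `blood_masks` inputs stand in for the output of the previously published segmentation algorithm, and trilinear interpolation (`order=1`) is an assumption.

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_lge_stack(slices, lv_masks, blood_masks):
    """Zero out non-LV pixels, normalize by the median blood-pool intensity
    per slice, then interpolate the stack to a regular 96 x 96 x 20 grid."""
    processed = []
    for img, lv, blood in zip(slices, lv_masks, blood_masks):
        out = np.where(lv, img, 0.0)              # zero all pixels outside the LV
        out = out / np.median(img[blood])         # normalize by blood-pool median
        processed.append(out)
    vol = np.stack(processed, axis=-1)
    factors = (96 / vol.shape[0], 96 / vol.shape[1], 20 / vol.shape[2])
    return zoom(vol, factors, order=1)            # resample to the standard grid
```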

The EHR and CIR data were structured as tabular data. Only input features with <40% missing values in the original data were included in the analysis; missing values were imputed using multivariate imputation by chained equations (MICE)53. MICE is a fully conditional specification approach that iteratively models each input feature with missing values as a function of all other features. To address the feature mismatch between the internal and external cohorts, we used a MICE imputer fitted on the internal dataset to impute the missing values in both datasets. After imputation, the EHR and CIR data were standardized using the z-score method, which involves subtracting the mean and dividing by the s.d. of each feature.
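A sketch of this impute-then-standardize flow, using scikit-learn's `IterativeImputer` as a stand-in for MICE (it implements the same fully conditional specification idea). As described above, the imputer and scaler are fitted on the internal cohort only and then applied to both cohorts; the toy arrays replace the real EHR/CIR tables.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
internal = rng.standard_normal((200, 5))                 # toy internal-cohort table
internal[rng.random(internal.shape) < 0.1] = np.nan      # <40% missing per feature
external = rng.standard_normal((50, 5))                  # toy external-cohort table
external[rng.random(external.shape) < 0.1] = np.nan

# Fit on the internal cohort only, then transform both cohorts
imputer = IterativeImputer(max_iter=10, random_state=0).fit(internal)
scaler = StandardScaler().fit(imputer.transform(internal))
internal_z = scaler.transform(imputer.transform(internal))
external_z = scaler.transform(imputer.transform(external))
```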

Transformer-based multimodal neural network

Modality-specific branch networks

Three unimodal branch networks are included in MAARS, each learning from a specific input modality: a 3D-ViT29 for LGE-CMR images, an FNN for EHR data and an FNN for CIR data.

In the LGE-CMR branch, the image vector embeddings $\zeta_{\mathrm{CMR}}^{0}$ are obtained by dividing the original 3D image $X$ into $n$ flattened nonoverlapping 3D image patches $x_i$ and applying the operations

$$\zeta_{\mathrm{CMR}}^{0}=\left[z_{\mathrm{cls}},Ex_{1},Ex_{2},\ldots ,Ex_{n}\right]+p$$

(1)

where $E$ is a linear projection, $z_{\mathrm{cls}}$ is a classification token (CLS-token) and $p$ is a learnable positional embedding to retain positional information.
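Equation (1) can be illustrated with a minimal NumPy sketch. The patch size below (16 × 16 × 5, giving n = 144 patches) is a hypothetical choice, as the text does not state it, and the random projection and positional matrices stand in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((96, 96, 20))      # preprocessed LGE-CMR volume
ph, pw, pd = 16, 16, 5                     # hypothetical patch size -> 6 * 6 * 4 = 144 patches
d = 512                                    # embedding dimension (as in the paper)

# Divide X into n flattened nonoverlapping 3D patches x_i
patches = (X.reshape(96 // ph, ph, 96 // pw, pw, 20 // pd, pd)
            .transpose(0, 2, 4, 1, 3, 5)
            .reshape(-1, ph * pw * pd))
n = patches.shape[0]

# Equation (1): zeta^0 = [z_cls, E x_1, ..., E x_n] + p
E = rng.standard_normal((ph * pw * pd, d)) * 0.02   # linear projection (random stand-in)
z_cls = np.zeros((1, d))                            # classification (CLS) token
pos = rng.standard_normal((n + 1, d)) * 0.02        # learnable positional embedding
zeta0 = np.concatenate([z_cls, patches @ E], axis=0) + pos
```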

The image vector embeddings $\zeta_{\mathrm{CMR}}^{0}$ are then processed by a sequence of $L_{\mathrm{ViT}}$ transformer encoder blocks. Each transformer encoder block, $\zeta_{\mathrm{CMR}}^{l+1}=\mathrm{Transformer}(\zeta_{\mathrm{CMR}}^{l};\theta_{\mathrm{ViT}}^{l})$, consists of two submodules: (1) a multihead self-attention (MSA) module and (2) a two-layer fully connected FNN.

$$\nu^{l}=\mathrm{MSA}\left(\mathrm{LN}\left(\zeta_{\mathrm{CMR}}^{l}\right)\right)+\zeta_{\mathrm{CMR}}^{l}$$

(2)

$$\zeta_{\mathrm{CMR}}^{l+1}=\mathrm{FNN}\left(\mathrm{LN}\left(\nu^{l}\right)\right)+\nu^{l}$$

(3)
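Equations (2) and (3) describe a pre-norm transformer encoder block, which can be sketched in NumPy as follows. The weight matrices are random stand-ins for learned parameters, and the ReLU between the two FNN layers is an assumption, as the activation is not stated in the text.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def msa(x, Wq, Wk, Wv, Wo, heads=8):
    """Multihead self-attention over a token sequence x of shape (n, d)."""
    n, d = x.shape
    dh = d // heads
    split = lambda W: (x @ W).reshape(n, heads, dh).transpose(1, 0, 2)
    q, k, v = split(Wq), split(Wk), split(Wv)
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh))   # per-head attention
    return (attn @ v).transpose(1, 0, 2).reshape(n, d) @ Wo

def transformer_block(zeta, Wq, Wk, Wv, Wo, W1, W2):
    # equation (2): nu^l = MSA(LN(zeta^l)) + zeta^l
    nu = msa(layer_norm(zeta), Wq, Wk, Wv, Wo) + zeta
    # equation (3): zeta^{l+1} = FNN(LN(nu^l)) + nu^l
    return np.maximum(layer_norm(nu) @ W1, 0.0) @ W2 + nu
```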

where LN is the layer normalization operation. In the final transformer encoder block, the encoded CMR knowledge, $\xi_{\mathrm{CMR}}$, is defined as

$$\zeta_{\mathrm{CMR}}^{L_{\mathrm{ViT}}}=\left[z_{\mathrm{cls}}^{L_{\mathrm{ViT}}},z_{1}^{L_{\mathrm{ViT}}},z_{2}^{L_{\mathrm{ViT}}},\ldots ,z_{n}^{L_{\mathrm{ViT}}}\right]=\mathrm{Transformer}\left(\zeta_{\mathrm{CMR}}^{L_{\mathrm{ViT}}-1};\theta_{\mathrm{ViT}}^{L_{\mathrm{ViT}}-1}\right)$$

(4)

$$\xi_{\mathrm{CMR}}=\mathrm{LN}\left(z_{\mathrm{cls}}^{L_{\mathrm{ViT}}}\cdot W\right)$$

(5)

where W is a learnable matrix.

In the EHR and CIR branches, the processed EHR and CIR data are converted to vectors $\zeta_{\mathrm{EHR}}$ and $\zeta_{\mathrm{CIR}}$ and fed into two FNNs, whose outputs $\xi_{\mathrm{EHR}}$ and $\xi_{\mathrm{CIR}}$ represent the encoded EHR and CIR knowledge.

$$\xi_{\mathrm{EHR}}=\mathrm{FNN}\left(\zeta_{\mathrm{EHR}};\theta_{\mathrm{EHR}}\right)$$

(6)

$$\xi_{\mathrm{CIR}}=\mathrm{FNN}\left(\zeta_{\mathrm{CIR}};\theta_{\mathrm{CIR}}\right)$$

(7)

Multimodal fusion

Following knowledge encoding by the LGE-CMR, EHR and CIR subnetworks, we used an MBT consisting of multiple blocks to fuse the knowledge across modalities. MBT has demonstrated state-of-the-art performance in multimodal fusion tasks at a light computational cost30. In each MBT block, the unimodal knowledge vectors, concatenated with a shared fusion vector $\xi_{\mathrm{fsn}}$, are fed into modality-specific transformers:

$$\left[\xi_{*}^{l+1},\hat{\xi}_{\mathrm{fsn},*}^{l+1}\right]=\mathrm{Transformer}\left(\left[\xi_{*}^{l},\xi_{\mathrm{fsn}}^{l}\right];\theta_{\mathrm{MBT},*}^{l}\right)$$

(8)

The fusion vector in layer l + 1 is updated as follows:

$$\xi_{\mathrm{fsn}}^{l+1}=\mathrm{Avg}\left(\hat{\xi}_{\mathrm{fsn},*}^{l+1}\right)$$

(9)
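The bottleneck-fusion mechanics of equations (8) and (9) can be sketched as follows. The `transformers` argument is a dictionary of hypothetical per-modality transformer callables (identity functions in the usage below), since the point here is only the concatenate, transform, split and average steps.

```python
import numpy as np

def mbt_block(xi, xi_fsn, transformers):
    """One MBT block.
    xi: dict of per-modality knowledge matrices (tokens x dim);
    xi_fsn: shared bottleneck fusion tokens;
    transformers: per-modality transformer callables (stand-ins for equation 8)."""
    new_xi, fsn_updates = {}, []
    for m, tokens in xi.items():
        out = transformers[m](np.concatenate([tokens, xi_fsn], axis=0))
        new_xi[m] = out[: len(tokens)]          # updated unimodal knowledge
        fsn_updates.append(out[len(tokens):])   # modality-specific fusion update
    # equation (9): average the fusion tokens across modalities
    return new_xi, np.mean(fsn_updates, axis=0)

# Usage with identity transformers, just to show the data flow
identity = lambda x: x
xi = {"CMR": np.ones((4, 8)), "EHR": np.zeros((2, 8)), "CIR": np.full((2, 8), 2.0)}
new_xi, new_fsn = mbt_block(xi, np.zeros((1, 8)), {m: identity for m in xi})
```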

The last MBT block outputs a predicted SCDA risk score p using the following equation:

$$p=\mathrm{sigmoid}\left(\left[\xi_{\mathrm{CMR}}^{L_{\mathrm{MBT}}},\xi_{\mathrm{EHR}}^{L_{\mathrm{MBT}}},\xi_{\mathrm{CIR}}^{L_{\mathrm{MBT}}}\right]\cdot W+b\right)$$

(10)

Model training and implementation details

For patient i, their SCDA outcome yi is 1 if they experienced an SCDA event during the follow-up, and 0 otherwise. We adopted the balanced focal loss as the loss function54:

$$L=-\sum_{i}\alpha_{i}\left(y_{i}-p_{i}\right)^{\gamma}\log p_{i}$$

(11)

where $\alpha_i$ is a class-dependent scaling factor and $\gamma$ is the focusing parameter that controls the extent to which the model concentrates on hard, misclassified examples; it was set to $\gamma = 2$ in this study.
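A NumPy sketch of the balanced focal loss of equation (11). Writing $p_t$ for the probability assigned to the true class, $(y_i - p_i)^{\gamma}$ equals $(1 - p_t)^{\gamma}$ for binary labels and $\gamma = 2$; following the standard focal-loss formulation, the log term below is taken on the true-class probability. The class weight `alpha_pos` is a hypothetical value, as the paper does not report it.

```python
import numpy as np

def balanced_focal_loss(p, y, alpha_pos=0.97, gamma=2.0):
    """Balanced focal loss for binary SCDA prediction.
    p: predicted probabilities, y: binary outcomes (1 = SCDA event)."""
    p_t = np.where(y == 1, p, 1.0 - p)                 # probability of the true class
    alpha = np.where(y == 1, alpha_pos, 1.0 - alpha_pos)  # class-dependent weight
    # down-weight easy examples via (1 - p_t)^gamma, i.e. (y - p)^gamma for gamma = 2
    return float(-np.sum(alpha * (1.0 - p_t) ** gamma * np.log(p_t)))
```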

The LGE-CMR, EHR and CIR branch networks were first trained independently, and then MAARS was trained end-to-end with all the branch networks and the multimodal fusion module. All models were trained with a batch size of 64 and a maximum of 150 epochs with early stopping based on loss. The Adam optimizer was used, with β1 = 0.9 and β2 = 0.999, and the learning rate was initially set at 1 × 10⁻³ for the LGE-CMR branch network, 1 × 10⁻² for the EHR and CIR branch networks, and 3 × 10⁻² for the multimodal fusion module and was adaptively adjusted during training. For the LGE-CMR branch network, the ViT has $L_{\mathrm{ViT}} = 8$ transformer encoder blocks, eight heads for each attention module and dimension $d = 512$. The EHR branch network used an FNN with two hidden layers and a latent dimension of 16. The CIR branch network used an FNN with one hidden layer and a latent dimension of 16. The encoded unimodal knowledge vectors have dimensions $\xi_{\mathrm{CMR}} \in \mathbb{R}^{32}$, $\xi_{\mathrm{EHR}} \in \mathbb{R}^{16}$ and $\xi_{\mathrm{CIR}} \in \mathbb{R}^{16}$. We set $L_{\mathrm{MBT}} = 3$ and the bottleneck fusion vector dimension to 8.

Assessing model performance and clinical validation

Performance metrics

The values of metrics derived from the confusion matrix (BA, sensitivity and specificity) were computed at optimal probability decision thresholds selected to maximize Youden’s J statistic. When comparing the AI model’s performance to that of the clinical tools, we also adjusted the decision threshold by matching the sensitivities of the clinical tools to evaluate their specificities. All metrics were in the range of 0 to 1, with the baseline levels obtained by random chance being as follows: AUROC = 0.5, BA = 0.5, AUPRC = 0.03 and Bs = 0.25.
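Threshold selection by Youden's J statistic can be sketched with scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(y_true, scores):
    """Return the probability cutoff maximizing J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    return thresholds[np.argmax(tpr - fpr)]
```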

Internal and external validation

The internal model performance was assessed in a fivefold cross-validation of the JHH-HCM cohort, stratified by outcome on the patient level. The training and test sets were split on the patient level; that is, all LGE-CMR scans corresponding to a given patient were present in either the training set or the validation set, never in both. After the five training folds, the model’s performance metrics were calculated on the aggregation of all validation folds.
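A sketch of this aggregation scheme, assuming one row per patient so that row-level stratified splits are automatically patient-level splits. `fit_predict` is a hypothetical callable that trains on the training indices and returns risk scores for the validation indices.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def patient_level_cv(y, fit_predict, n_splits=5, seed=0):
    """Stratified K-fold CV; out-of-fold predictions from all validation
    folds are aggregated into a single vector for metric computation."""
    y = np.asarray(y)
    oof = np.empty(len(y), dtype=float)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, val_idx in skf.split(np.zeros((len(y), 1)), y):
        oof[val_idx] = fit_predict(train_idx, val_idx)  # scores for this fold
    return oof
```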

For the external performance evaluation, we trained the model using the entire JHH-HCM dataset (with 90% as the training set and 10% as the development set) and tested the model’s performance on the SHVI-HCM cohort. Of note, the model for external validation inherited the same hyperparameters as the internal model.

Model interpretability

We interpreted the MAARS network weights and predictions using attribution- and attention-based methods.

Shapley value

The EHR and CIR branch networks were interpreted using the Shapley value, which quantifies the incremental attribution of every input feature to the final prediction. The Shapley value32 is grounded in cooperative game theory and explains a prediction as a coalitional game played by the feature values. It has a collection of desirable properties, including efficiency, symmetry, dummy and additivity. In this study, the Shapley values were estimated using a permutation formulation implemented in SHAP55.
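For intuition, exact Shapley values for a small tabular model can be computed by enumerating all coalitions, with "missing" features replaced by background (for example, dataset-mean) values; SHAP's permutation estimator approximates this far more efficiently. This is a didactic sketch, not the SHAP implementation.

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(f, x, background):
    """Exact Shapley values of prediction f(x) relative to a background point."""
    n = len(x)
    phi = np.zeros(n)

    def value(S):
        z = background.copy()
        z[list(S)] = x[list(S)]     # features in coalition S take their real values
        return f(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(S + (i,)) - value(S))  # marginal contribution
    return phi
```

With a linear model the attributions recover the coefficients times the feature offsets, and the efficiency property holds: the attributions sum to f(x) − f(background).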

Attention rollout

For the LGE-CMR branch network, we used a technique called attention rollout to quantify how attention flows through the ViT from input to output. Formally, at transformer encoder block $l$, the average of the attention matrices of all attention heads is $A^l$. The residual connection at each block is modeled by adding the identity matrix $I$ to the attention matrix. The attention rollout is therefore computed recursively as

$$A_{\mathrm{Rollout}}^{l}=\left(A^{l}+I\right)\cdot A_{\mathrm{Rollout}}^{l-1}$$

(12)

We explained the predictions of the LGE-CMR branch network using the attention rollout at the end of the ViT after flowing through $L_{\mathrm{ViT}}$ transformer blocks, $A_{\mathrm{Rollout}}^{L_{\mathrm{ViT}}}$.
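A NumPy sketch of the recursion in equation (12). Following the original attention-rollout formulation, each $(A^l + I)$ is row-renormalized so the rolled-out matrix stays row-stochastic; that renormalization is an assumption not spelled out in the text above.

```python
import numpy as np

def attention_rollout(attn_layers):
    """attn_layers: head-averaged attention matrices A^l, each of shape (n, n),
    ordered from the first to the last transformer encoder block."""
    n = attn_layers[0].shape[0]
    rollout = np.eye(n)
    for A in attn_layers:
        A_res = A + np.eye(n)                          # model the residual connection
        A_res = A_res / A_res.sum(-1, keepdims=True)   # keep rows summing to 1
        rollout = A_res @ rollout                      # equation (12)
    return rollout
```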

Statistical analysis

The P values of clinical covariates between the internal and external cohorts were based on a two-sample Welch’s t-test for numerical variables and the Mann–Whitney U test for categorical variables before data imputation. Kolmogorov–Smirnov tests for the risk score distributions were based on the aggregated predictions on all internal validation folds. The means and CIs of model performance metrics in the internal fivefold cross-validation were estimated using 200 bootstrapping samples of the aggregated predictions on all validation folds. The performance metrics in the external validation were calculated using model predictions on 200 bootstrapping resampled datasets of the SHVI-HCM cohort. The computations were based on the bias-corrected and accelerated bootstrap method. Pearson’s r for clinical covariates in the network interpretations was based on aggregated interpretations from all internal validation folds.
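The bootstrap estimation of a performance metric can be sketched as below, using a plain percentile interval for simplicity; the analysis above uses the bias-corrected and accelerated (BCa) variant, which additionally adjusts these percentiles.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_metric(y_true, scores, metric=roc_auc_score, n_boot=200, alpha=0.05, seed=0):
    """Resample patients with replacement, recompute the metric each time,
    and report the mean with a (1 - alpha) percentile interval."""
    rng = np.random.default_rng(seed)
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    stats = []
    while len(stats) < n_boot:
        idx = rng.integers(0, len(y_true), len(y_true))
        if np.unique(y_true[idx]).size < 2:   # AUROC needs both classes present
            continue
        stats.append(metric(y_true[idx], scores[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(np.mean(stats)), (float(lo), float(hi))
```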

Computational hardware and software

MAARS was built in Python 3.9 using packages including PyTorch 2.0, NumPy 1.23.5, Pandas 1.5.3, SciPy 1.10, scikit-learn 1.2.0, scikit-image 0.19.3, pydicom 2.3, SimpleITK 2.2.1 and SHAP 0.41. Data preprocessing, model training and result analysis were performed on a machine with an AMD Ryzen Threadripper 1920X 12-core CPU and NVIDIA TITAN RTX GPUs, and on the Rockfish cluster at Johns Hopkins University using NVIDIA A100 GPU nodes, with NVIDIA software CUDA 11.7 and cuDNN 8.5. For a reference of the computational requirements of MAARS inference, on a machine with an AMD Ryzen 2700X 8-core CPU and an NVIDIA GeForce RTX 2060 GPU, the average processing time for inference is 0.034 s per patient using GPU or 0.086 s per patient using solely CPU.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.





Sakana AI: Think LLM dream teams, not single models


Enterprises may want to start thinking of large language models (LLMs) as ensemble casts that can combine knowledge and reasoning to complete tasks, according to Japanese AI lab Sakana AI.

Sakana AI in a research paper outlined a method called Multi-LLM AB-MCTS (Adaptive Branching Monte Carlo Tree Search) that uses a collection of LLMs to cooperate, perform trial-and-error and leverage strengths to solve complex problems.

In a post, Sakana AI said:

“Frontier AI models like ChatGPT, Gemini, Grok, and DeepSeek are evolving at a breathtaking pace amidst fierce competition. However, no matter how advanced they become, each model retains its own individuality stemming from its unique training data and methods. We see these biases and varied aptitudes not as limitations, but as precious resources for creating collective intelligence. Just as a dream team of diverse human experts tackles complex problems, AIs should also collaborate by bringing their unique strengths to the table.”

Sakana AI said AB-MCTS is a method for inference-time scaling to enable frontier AIs to cooperate and revisit problems and solutions. Sakana AI released the algorithm as an open source framework called TreeQuest, which has a flexible API that allows users to use AB-MCTS for tasks with multiple LLMs and custom scoring.

What’s interesting is that Sakana AI gets out of that zero-sum LLM argument. The companies behind LLM training would like you to think there’s one model to rule them all. And you’d do the same if you were spending so much on training models and wanted to lock in customers for scale and returns.

Sakana AI’s deceptively simple solution can only come from a company that’s not trying to play LLM leapfrog every few minutes. The power of AI is in the ability to maximize the potential of each LLM. Sakana AI said:

“We saw examples where problems that were unsolvable by any single LLM were solved by combining multiple LLMs. This went beyond simply assigning the best LLM to each problem. In (an) example, even though the solution initially generated by o4-mini was incorrect, DeepSeek-R1-0528 and Gemini-2.5-Pro were able to use it as a hint to arrive at the correct solution in the next step. This demonstrates that Multi-LLM AB-MCTS can flexibly combine frontier models to solve previously unsolvable problems, pushing the limits of what is achievable by using LLMs as a collective intelligence.”

A few thoughts:

  • Sakana AI’s research and its emphasis on collective intelligence over one LLM and stack is critical for enterprises that need architectures that don’t lock them into a single provider.
  • AB-MCTS could play into what agentic AI needs to become to be effective and complement emerging standards such as Model Context Protocol (MCP) and Agent2Agent.
  • If combining multiple models to solve problems becomes frictionless, the costs will plunge. Will you need to pay up for OpenAI when you can leverage LLMs like DeepSeek combined with Gemini and a few others? 
  • Enterprises may want to start thinking about how to build decision engines instead of an overall AI stack. 
  • We could see a scenario where a collective of LLMs achieves superintelligence before any one model or provider. If that scenario plays out, can LLM giants maintain valuations?
  • The value in AI may not be in the infrastructure or foundational models in the long run, but the architecture and approaches.


Positive attitudes toward AI linked to problematic social media use


People who have a more favorable view of artificial intelligence tend to spend more time on social media and may be more likely to show signs of problematic use, according to new research published in Addictive Behaviors Reports.

The new study was designed to explore a question that, until now, had been largely overlooked in the field of behavioral research. While many factors have been identified as risk factors for problematic social media use—including personality traits, emotional regulation difficulties, and prior mental health issues—no research had yet explored whether a person’s attitude toward artificial intelligence might also be linked to unhealthy social media habits.

The researchers suspected there might be a connection, since social media platforms are deeply intertwined with AI systems that drive personalized recommendations, targeted advertising, and content curation.

“For several years, I have been interested in understanding how AI shapes societies and individuals. We also recently came up with a framework called IMPACT to provide a theoretical framework to understand this. IMPACT stand for the Interplay of Modality, Person, Area, Country/Culture and Transparency variables, all of relevance to understand what kind of view people form regarding AI technologies,” said study author Christian Montag, a distinguished professor of cognitive and brain sciences at the Institute of Collaborative Innovation at University of Macau.

Artificial intelligence plays a behind-the-scenes role in nearly every major social media platform. Algorithms learn from users’ behavior and preferences in order to maximize engagement, often by showing content that is likely to capture attention or stir emotion. These AI-powered systems are designed to increase time spent on the platform, which can benefit advertisers and the companies themselves. But they may also contribute to addictive behaviors by making it harder for users to disengage.

Drawing from established models in psychology, the researchers proposed that attitudes toward AI might influence how people interact with social media platforms. In this case, people who trust AI and believe in its benefits might be more inclined to embrace AI-powered platforms like social media—and potentially use them to excess.

To investigate these ideas, the researchers analyzed survey data from over 1,000 adults living in Germany. The participants were recruited through an online panel and represented a wide range of ages and education levels. After excluding incomplete or inconsistent responses and removing extreme outliers (such as those who reported using social media for more than 16 hours per day), the final sample included 1,048 people, with roughly equal numbers of men and women.

Participants completed a variety of self-report questionnaires. Attitudes toward artificial intelligence were measured using both multi-item scales and single-item ratings. These included questions such as “I trust artificial intelligence” and “Artificial intelligence will benefit humankind” to assess positive views, and “I fear artificial intelligence” or “Artificial intelligence will destroy humankind” to capture negative perceptions.

To assess social media behavior, participants were asked whether they used platforms like Facebook, Instagram, TikTok, YouTube, or WhatsApp, and how much time they spent on them each day, both for personal and work purposes. Those who reported using social media also completed a measure called the Social Networking Sites–Addiction Test, which includes questions about preoccupation with social media, difficulty cutting back, and using social media to escape from problems.

Overall, 956 participants said they used social media. Within this group, the researchers found that people who had more positive attitudes toward AI also tended to spend more time on social media and reported more problematic usage patterns. This relationship held for both men and women, but it was stronger among men. In contrast, negative attitudes toward AI showed only weak or inconsistent links to social media use, suggesting that it is the enthusiastic embrace of AI—not fear or skepticism—that is more closely associated with excessive use.

“It is interesting to see that the effect is driven by the male sample,” Montag told PsyPost. “On second thought, this is not such a surprise, because in several samples we saw that males reported higher positive AI attitudes than females (on average). So, we must take into account gender for research questions, such as the present one.”

“Further I would have expected that negative AI attitudes would have played a larger role in our work. At least for males we observed that fearing AI went also along with more problematic social media use, but this effect was mild at best (such a link might be explained via negative affect and escapism tendencies). I would not be surprised if such a link becomes more visible in future studies. Let’s keep in mind that AI attitudes might be volatile and change (the same of course is also true for problematic social media use).”

To better understand how these variables were related, the researchers conducted a mediation analysis. This type of analysis can help clarify whether one factor (in this case, time spent on social media) helps explain the connection between two others (positive AI attitudes and problematic use).

The results suggested that people with positive attitudes toward AI tended to spend more time on social media, and that this increased usage was associated with higher scores on the addiction measure. In other words, time spent on social media partly accounted for the link between AI attitudes and problematic behavior.

“I personally believe that it is important to have a certain degree of positive attitude towards benevolent AI technologies,” Montag said. “AI will profoundly change our personal and business lives, so we should better prepare ourselves for active use of this technology. This said, our work shows that positive attitudes towards AI, which are known to be of relevance to predict AI technology use, might come with costs. This might be in form of over-reliance on such technology, or in our case overusing social media (where AI plays an important role in personalizing content). At least we saw this to be true for male study participants.”

Importantly, the researchers emphasized that their data cannot establish cause and effect. Because the study was cross-sectional—that is, based on a single snapshot in time—it is not possible to say whether positive attitudes toward AI lead to excessive social media use, or whether people who already use social media heavily are more likely to hold favorable views of AI. It’s also possible that a third factor, such as general interest in technology, could underlie both tendencies.

The study’s sample, while diverse in age and gender, skewed older on average, with a mean age of 45. This may limit the generalizability of the findings, especially to younger users, who are often more active on social media and may have different relationships with technology. Future research could benefit from focusing on younger populations or tracking individuals over time to see how their attitudes and behaviors change.

“In sum, our work is exploratory and should be seen as stimulating discussions. For sure, it does not deliver final insights,” Montag said.

Despite these limitations, the findings raise important questions about how people relate to artificial intelligence and how that relationship might influence their behavior. The authors suggest that positive attitudes toward AI are often seen as a good thing—encouraging people to adopt helpful tools and new innovations. But this same openness to AI might also make some individuals more vulnerable to overuse, especially when the technology is embedded in products designed to maximize engagement.

The researchers also point out that people may not always be aware of the role AI plays in their online lives. Unlike using an obvious AI system, such as a chatbot or virtual assistant, browsing a social media feed may not feel like interacting with AI. Yet behind the scenes, algorithms are constantly shaping what users see and how they engage. This invisible influence could contribute to compulsive use without users realizing how much the technology is guiding their behavior.

The authors see their findings as a starting point for further exploration. They suggest that researchers should look into whether positive attitudes toward AI are also linked to other types of problematic online behavior, such as excessive gaming, online shopping, or gambling—especially on platforms that make heavy use of AI. They also advocate for studies that examine whether people’s awareness of AI systems influences how those systems affect them.

“In a broader sense, we want to map out the positive and negative sides of AI technology use,” Montag explained. “I think it is important that we use AI in the future to lead more productive and happier lives (we investigated also AI-well-being in this context recently), but we need to be aware of potential dark sides of AI use.”

“We are happy if people are interested in our work and if they would like to support us by filling out a survey. Here we do a study on primary emotional traits and AI attitudes. Participants also get as a ‘thank you’ insights into their personality traits: https://affective-neuroscience-personality-scales.jimdosite.com/take-the-test/).”

The study, “The darker side of positive AI attitudes: Investigating associations with (problematic) social media use,” was authored by Christian Montag and Jon D. Elhai.


