

Cedars-Sinai’s New PhD in Health AI Program Earns Accreditation



Newswise — LOS ANGELES (Aug. 27, 2025) — The newly established PhD in Health Artificial Intelligence (AI) program in Cedars-Sinai’s Health Sciences University has earned accreditation from the Senior College and University Commission of the Western Association of Schools and Colleges, affirming the program’s high standards in graduate education.

Accreditation was granted in May 2025, less than six months after the PhD in Health AI program was submitted for review. It is the first in the U.S. to be embedded in a hospital and the first to combine interdisciplinary academic training with hands-on clinical data experience, giving students opportunities to develop AI solutions that could improve diagnostics, patient care and healthcare delivery.

“Accreditation is a meaningful milestone for the PhD in Health AI program,” said Graciela Gonzalez-Hernandez, PhD, director of the program and professor and vice chair for Research and Education in the Department of Computational Biomedicine. “It signifies that the highest standards of academic excellence and innovation are at the heart of our curriculum, and it signals to prospective students, as well as our faculty and partners, that we’re pioneering a new kind of doctoral training with quality and rigor.”

In their report, commission evaluators praised the program for integrating doctoral-level learning into real-world work experiences and for offering extensive academic resources and support services for students’ coursework and research.

“Our Health Sciences University continues to evolve, with our newest PhD program enabling us to train the next generation of AI experts specifically focused on healthcare,” said Jeffrey A. Golden, MD, executive vice dean of Research and Education, director of the Burns and Allen Research Institute, and the Linda and Jim Lippman Distinguished Chair in Academic Medicine at Cedars-Sinai. “As AI is poised to rapidly transform medicine, accreditation for the PhD in Health AI program helps further reflect and reinforce our commitment to shaping the future of healthcare.”

The PhD in Health AI program aims to attract applicants from diverse fields beyond healthcare, including computer science, engineering, math and gaming. Through an active-learning and structured mentoring model, students will gain exposure to real-world clinical environments and collaborate closely with clinicians and scientific investigators.

In addition to laboratory rotations with Cedars-Sinai’s AI research faculty, the program also includes clinical rotations—rare for nonmedical PhD students—to help participants understand how clinical information is generated and used.

“We look forward to welcoming our first cohort of exceptional students this week—each with a strong technical background and a shared commitment to improving healthcare,” Gonzalez-Hernandez said. “Throughout their time at Cedars-Sinai Health Sciences University they will engage with faculty, clinicians and each other to meaningfully and ethically apply AI in real clinical settings. They will graduate with a deep understanding of real-world healthcare challenges, positioning them for successful careers in academic research, industry, healthcare innovation, and public policy.”

Cedars-Sinai’s Health Sciences University was established in 2024. The university offers other graduate degrees, including a PhD in Biomedical Sciences, a Master of Science in Health Systems and a Master of Science in Magnetic Resonance in Medicine.

The university is also home to several professional training programs, including nondegree educational certifications, formal training, internships and other ongoing opportunities for students and professionals at all career levels.

Read more in Cedars-Sinai Discoveries Magazine: Next Generation Health Education







Artificial intelligence loses out to humans in credibility during corporate crisis responses



As artificial intelligence tools become increasingly integrated into public relations workflows, many organizations are considering whether these technologies can handle high-stakes communication tasks such as crisis response. A new study published in Corporate Communications: An International Journal provides evidence that, at least for now, human-written crisis messages are perceived as more credible and reputationally beneficial than those authored by AI systems.

The rise of generative artificial intelligence has raised questions about its suitability for replacing human labor in communication roles. In public relations, AI tools are already used for media monitoring, message personalization, and social media management. Some advocates even suggest that AI could eventually author press releases or crisis response messages.

However, prior studies have found that people often view AI-generated messages with suspicion. Despite improvements in the sophistication of these systems, the use of AI can still reduce perceptions of warmth, trustworthiness, and competence. Given the importance of credibility and trust in public relations, especially during crises, the study aimed to evaluate how the perceived source of a message—human or AI—affects how people interpret crisis responses.

The researchers also wanted to assess whether the tone or strategy of a message—whether sympathetic, apologetic, or informational—would influence perceptions. Drawing on situational crisis communication theory, they hypothesized that more accommodating responses might boost credibility and protect organizational reputation, regardless of the source.

“Our interest in understanding how people judge the credibility of AI-generated text grew out of a graduate class Ayman Alhammad (the lead author) took with Cameron Piercy (the third author). They talked about research on trust in AI during the class, and the questions we posed in our study naturally grew out of that,” said study author Christopher Etheridge, an assistant professor in the William Allen White School of Journalism and Mass Communications at the University of Kansas.

To explore these questions, the researchers designed a controlled experiment using a hypothetical crisis scenario. Participants were told about a fictitious company called the Chunky Chocolate Company, which was facing backlash after a batch of its chocolate bars reportedly caused consumers to become ill. According to the scenario, the company had investigated the incident and determined that the problem was due to product tampering by an employee.

Participants were then shown one of six possible press releases responding to the crisis. The releases varied in two key ways. First, they were attributed either to a human spokesperson (“Chris Smith”) or to an AI system explicitly labeled as such. Second, the tone of the message followed one of three common strategies: informational (providing details about the incident), sympathetic (expressing empathy for affected customers), or apologetic (taking responsibility and issuing an apology).

The wording of the messages was carefully controlled to ensure consistency across versions, with only the source and emotional tone changing between conditions. After reading the message, participants were asked to rate the perceived credibility of the author, the credibility of the message, and the overall reputation of the company. These ratings were made using standardized scales based on prior research.

The sample included 447 students enrolled in journalism and communication courses at a public university in the Midwestern United States. These participants were chosen because of their familiarity with media content and their relevance as potential future professionals or informed consumers of public relations material. Their average age was just over 20 years old, and most participants identified as white and either full- or part-time employed.

The results provided clear support for the idea that human authors are still viewed as more credible than AI. Across all three key outcomes—source credibility, message credibility, and organizational reputation—participants rated human-written messages higher than identical messages attributed to AI.

“It’s not surprising, given discussions that are taking place, that people found AI-generated content to be less credible,” Etheridge told PsyPost. “Still, capturing the data and showing it in an experiment like this one is valuable as the landscape of AI is ever-changing.”

Participants who read press releases from a human author gave an average source credibility rating of 4.40 on a 7-point scale, compared to 4.11 for the AI author. Message credibility followed a similar pattern, with human-authored messages receiving an average score of 4.82, compared to 4.38 for AI-authored versions. Finally, organizational reputation was also judged to be higher when the company’s message came from a human, with ratings averaging 4.84 versus 4.49.

These differences, while not massive, were statistically significant and suggest that the mere presence of an AI label can diminish trust in a message. Importantly, the content of the message was identical across the human and AI conditions. The only change was who—or what—was said to have authored it.

In contrast, the tone or strategy of the message (apologetic, sympathetic, or informational) did not significantly influence any of the credibility or reputation ratings. Participants did perceive the tone differences when asked directly, meaning the manipulations were effective. But these differences did not translate into significantly different impressions of the author, message, or company. Even though past research has emphasized the importance of an apologetic or sympathetic tone during a crisis, this study found that source effects had a stronger influence on audience perceptions.

“People are generally still pretty wary of AI-generated messages,” Etheridge explained. “They don’t find them as credible as human-written content. In our case, news releases written by humans are more favorably viewed by readers than those written by AI. For people who are concerned about AI replacing jobs, that could be welcome news. We caution public relations agencies against overuse of AI, as it could hurt their reputation with the public when public reputation is a crucial measure of the industry.”

But as with all research, there are some caveats to consider. The study relied on a fictional company and crisis scenario, which might not fully capture real-world reactions, and participants—primarily university students—may not represent broader public attitudes due to their greater familiarity with AI. Additionally, while the study clearly labeled the message as AI-generated, real-world news releases often lack such transparency, raising questions about how audiences interpret content when authorship is ambiguous.

“We measured credibility and organizational reputation but didn’t really look at other important variables like trust or message retention,” Etheridge said. “We also may have been more transparent about our AI-generated content than a professional public relations outlet might be, but that allowed us to clearly measure responses. Dr. Alhammad is leading where our research effort might go from here. We have talked about a few ideas, but nothing solid has formed as of yet.”

The study, “Credibility and organizational reputation perceptions of news releases produced by artificial intelligence,” was authored by Ayman Alhammad, Christopher Etheridge, and Cameron W. Piercy.





Oxford and Ellison Institute Collaborate to Integrate AI in Vaccine Research









Chinese social media firms comply with strict AI labelling law, making it clear to users and bots what’s real and what’s not



Chinese social media companies have begun requiring users to label AI-generated content uploaded to their services, in order to comply with new government legislation. By law, the sites and services must now apply a watermark or other explicit indicator of AI content for users, as well as include metadata for web crawlers that makes clear what was generated by a human and what was not, according to SCMP.

Countries and companies the world over have been grappling with how to handle AI-generated content since the explosive growth of popular AI tools such as ChatGPT, Midjourney and DALL-E. After drafting the new law in March, China has now implemented it, taking the lead in increasing oversight and curbing rampant use: the labeling law makes social media companies more responsible for the content on their platforms.


