AI Insights
Diagnosing facial synkinesis using artificial intelligence to advance facial palsy care
Over the past decade, a wide range of software applications has emerged in patient medical care, supporting the diagnosis and management of various clinical conditions14,15,20,21,22. Our study contributes to this evolving field by introducing a novel application for holistic synkinesis diagnosis that leverages convolutional neural networks (CNNs) to analyze images of periocular regions.
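To make the approach concrete, the sketch below shows what a minimal binary CNN classifier for periocular image crops could look like in tf.keras. The input resolution, layer widths, and compilation settings are illustrative assumptions for exposition only; they are not the architecture reported in our study.

```python
# Minimal sketch of a binary CNN classifier for periocular crops (tf.keras).
# All sizes and hyperparameters below are assumptions for illustration.
import tensorflow as tf

def build_periocular_classifier(input_shape=(128, 128, 3)):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Rescaling(1.0 / 255),                  # scale pixel values to [0, 1]
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),        # synkinesis vs. healthy
    ])
    model.compile(
        optimizer="adam",
        loss="binary_crossentropy",
        metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
    )
    return model
```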
The development and validation of our CNN-based model for diagnosing facial synkinesis in facial palsy (FP) patients mark a significant advancement in automated medical diagnostics. Our model demonstrated a high degree of accuracy (98.6%) in distinguishing between healthy individuals and those with synkinesis, with an F1-score of 98.4%, precision of 100%, and recall of 96.9%. These metrics highlight the model’s robustness and reliability, rendering it a valuable tool for clinicians. The confusion matrix analysis provided further insight into the model’s performance, revealing only one misclassification among the 71 test images. These metrics echo findings from previous work on diagnosing sequelae of FP. For example, our group reported comparable metrics for CNN-based assessment of lagophthalmos: trained on a set of 826 images, that model reached a validation accuracy of 97.8% over 64 epochs17. Another study leveraged a CNN to automatically identify (peri-)ocular pathologies such as enophthalmos with an accuracy of 98.2%, underscoring the potential of neural networks for diagnosing facial conditions23. Such tools can broaden access to FP diagnostics, reducing time-to-diagnosis and effectively triaging patients to the appropriate treatment pathway (e.g., conservative therapy, cross-face nerve grafts)2,3,14,24. Overall, our CNN adds another highly accurate diagnostic tool for reliably detecting facial pathologies, especially in FP patients.
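As an illustration, the reported test-set metrics can be reproduced from a confusion matrix that is consistent with them. In the sketch below, the class split (39 healthy and 32 synkinesis test images, with the single misclassification assumed to be a missed synkinesis case) is an assumption inferred from the 71 test images, the one misclassification, and the 100% precision; it is not taken from the published dataset.

```python
# Illustrative sketch: recovering the reported metrics from an assumed
# confusion matrix that is consistent with the figures in the text.
true_negatives = 39   # healthy images correctly classified (assumed split)
false_positives = 0   # 100% precision implies no false positives
false_negatives = 1   # the single misclassification, assumed to be a missed case
true_positives = 31   # synkinesis images correctly classified (assumed split)

total = true_positives + true_negatives + false_positives + false_negatives
accuracy = (true_positives + true_negatives) / total
precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy:  {accuracy:.1%}")   # ~98.6%
print(f"precision: {precision:.1%}")  # 100.0%
print(f"recall:    {recall:.1%}")     # ~96.9%
print(f"F1-score:  {f1:.1%}")         # ~98.4%
```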
Another strength of our CNN lies in its ease of use and its rapid processing and training times. The mean image processing time was 24 ± 11 ms, and the overall training time was 14.4 min. The development of a lightweight, dockerized web application enhanced the model’s practicality and accessibility. In addition, the total development cost of the CNN was only $311 USD. Such characteristics have been identified as key factors for impactful AI research and effective integration into clinical workflows25,26,27. More specifically, the short training times may pave the way toward additional AI-supported diagnostic tools in FP care to detect common short- and long-term complications of FP (e.g., ectropion, hemifacial tissue atrophy). The easy-to-use and cost-effective web application may facilitate clinical use for healthcare providers in low- and middle-income countries, where the incidence and prevalence of FP are higher than in high-income countries28. To facilitate the download and use of our algorithm, we (i) uploaded the code to GitHub (San Francisco, USA), (ii) integrated the code into an application, and (iii) recorded an instructional video that details the different steps. Healthcare providers in low- and middle-income countries only require an internet connection to install the application; the instructional video then guides them through setting up the application and starting to screen patients. Our application is free to use, and the number of daily screens is not limited. The rapid processing times also have the potential to increase screening throughput, further broadening access to FP care and reducing waiting times for FP patients3. Collectively, the CNN represents a rapid, user-friendly, and cost-effective tool.
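For readers who wish to benchmark inference speed on their own hardware, the following sketch shows one way to estimate the mean per-image processing time and its standard deviation. The predict function and inputs are stand-ins for illustration; any single-image inference call could be timed in the same way.

```python
# Sketch of per-image latency measurement, reported as mean ± SD in ms.
# "predict_fn" and the example inputs are placeholders, not the published code.
import time
import statistics

def benchmark_latency(predict_fn, inputs):
    """Time predict_fn on each preprocessed input and report mean ± SD in milliseconds."""
    latencies_ms = []
    for x in inputs:
        start = time.perf_counter()
        predict_fn(x)                                  # single-image forward pass
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    mean = statistics.mean(latencies_ms)
    sd = statistics.stdev(latencies_ms)
    print(f"mean processing time: {mean:.0f} ± {sd:.0f} ms per image")
    return mean, sd

if __name__ == "__main__":
    # Stand-in predict function; replace with the real model's inference call.
    benchmark_latency(lambda x: sum(x), [[0.1] * 1000 for _ in range(50)])
```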
While our study presents promising results, it is not without limitations. The relatively small sample size, especially for the validation and test sets, suggests the need for further validation with larger and more diverse (i.e., multi-center, multi-racial, multi-surgeon) datasets to ensure the model’s robustness and generalizability. Additionally, the model’s ability to distinguish synkinesis from other facial conditions was not evaluated in this study, representing an area for future research. Moreover, integrating our model into clinical practice will require careful consideration of various factors, including user training, data privacy, and the ethical implications of automated diagnostics. Ensuring that clinicians are adequately trained to use the model and interpret its results is essential for maximizing its benefits. Additionally, robust data privacy measures must be implemented to protect sensitive patient information, particularly when using web-based applications. Thus, further validation is essential before clinical implementation. In a broader context, various AI- and machine-learning-powered tools have shown promising outcomes in preclinical studies and small patient samples (e.g., face transplantation, facial reanimation)29,30,31,32. However, these tools have yet to be investigated in larger-scale trials and integrated into the standard clinical workup. Thus, cross-disciplinary efforts are needed to bridge the gap from bench to bedside and to fuel translational efforts.
AI Insights
Cal State LA secures funding for two artificial intelligence projects from CSU
Cal State LA has won funding for two faculty-led artificial intelligence projects through the California State University’s (CSU) Artificial Intelligence Educational Innovations Challenge (AIEIC).
The CSU launched the initiative to ensure that faculty from its 23 campuses are key drivers of innovative AI adoption and deployment across the system. In April, the AIEIC invited faculty to develop innovative instructional strategies that leverage AI tools.
The response was overwhelming, with more than 400 proposals submitted by over 750 faculty members across the state. The Chancellor’s Office will award a total of $3 million to fund the 63 winning proposals, which were chosen for their potential to enable transformative teaching methods, foster groundbreaking research, and address key concerns about AI adoption within academia.
“CSU faculty and staff aren’t just adopting AI—they are reimagining what it means to teach, learn, and prepare students for an AI-infused world,” said Nathan Evans, CSU deputy vice chancellor of Academic and Student Affairs and chief academic officer. “The number of funded projects underscores the CSU’s strong commitment to innovation and academic excellence. These initiatives will explore and demonstrate effective AI integration in student learning, with findings shared systemwide to maximize impact. Our goal is to prepare students to engage with AI strategically, ethically, and successfully in California’s fast-changing workforce.”
Cal State LA’s winning projects are titled “Teaching with Integrity in the Age of AI” and “AI-Enhanced STEM Supplemental Instruction Workshops.”
For “Teaching with Integrity in the Age of AI,” the university’s Center for Effective Teaching and Learning will form a Faculty Learning Community (FLC) to address faculty concerns about AI and academic integrity. From September 2025 to April 2026, the FLC will support eight to 15 cross-disciplinary faculty members in developing AI-informed, ethics-focused pedagogy. Participants will explore ways to minimize AI-facilitated cheating, apply ethical decision-making frameworks, and create assignments aligned with AI literacy standards.
The “AI-Enhanced STEM Supplemental Instruction Workshops” project aims to expand and improve student success in challenging first-year Science, Technology, Engineering, and Math courses by integrating generative AI tools, specifically ChatGPT, into Supplemental Instruction workshops. By leveraging AI, the project addresses the limitations of collaborative learning environments by providing personalized, real-time feedback and guidance.
The AIEIC is a key component of the CSU’s broader AI Strategy, which was launched in February 2025 to establish the CSU as the first AI-empowered university system in the nation. It was designed with three goals: to encourage faculty to explore AI literacies and competencies, focusing on how to help students build a fluent relationship with the technologies; to address the need for meaningful engagement with AI, emphasizing strategies that ensure students actively participate in learning alongside AI; and to examine the ethics of AI use in higher education, promoting approaches that embed academic integrity.
Awarded projects span a broad range of academic areas, including business, engineering, ethnic studies, history, health sciences, teacher preparation, scholarly writing, journalism, and theatre arts. Several projects are collaborative efforts across multiple disciplines or focus on faculty development—equipping instructors with the tools to navigate course design, policy development, and classroom practices in an AI-enabled environment.
AI Insights
Will we ever feel comfortable with AIs taking on important tasks?
Imagine a map of the world, divided by national borders. How many colours do you need to fill each country, plus the sea, without any identical colours touching?
The answer is four – indeed, no matter what your map looks like, four colours will always be enough. But proving this required a schism in mathematics. The four colour theorem, as it is known, was the first major result to be proved using a computer. The 1976 proof reduced the problem to a few thousand map arrangements, each of which was then checked by software.
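To illustrate the “checked by software” step in its simplest form, the sketch below verifies that a proposed colouring of a toy map assigns different colours to every pair of bordering regions. The example map is made up, and checking a single candidate colouring is, of course, far simpler than the reducibility checks performed in the 1976 proof.

```python
# Illustrative sketch: verify that no two bordering regions share a colour.
# The tiny map below (three countries plus the sea) is invented for demonstration.
def is_valid_colouring(borders, colouring):
    """Return True if every pair of bordering regions has different colours."""
    return all(colouring[a] != colouring[b] for a, b in borders)

borders = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "sea"), ("B", "sea"), ("C", "sea")]
colouring = {"A": 1, "B": 2, "C": 3, "sea": 4}
print(is_valid_colouring(borders, colouring))  # True: four colours suffice here
```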
Many mathematicians at the time were up in arms. How could something be called proven, they argued, if the core of the proof hides behind an unknowable machine? Perhaps because of this pushback, computer-aided proofs have remained a minority pursuit.
But that may be starting to change. As we report in “AI could be about to completely change the way we do mathematics”, the latest generation of artificial intelligence is turning this argument on its head. Why, ask its proponents, should we trust the mathematics of flawed humans, with their assumptions and shortcuts, when we can turn the verification of a proof over to a machine?
Naturally, not everyone agrees with this suggestion. And the argument raging over AI’s use in mathematics is a microcosm of a larger question facing society: just when is it appropriate to let a machine take over? Tech firms are increasingly promising that AI agents will remove drudgery by taking on mundane tasks from processing invoices to booking holidays. However, when we tried letting them run our day (see “‘Flashes of brilliance and frustration’: I let an AI agent run my day”), we found that these agents aren’t yet fully up to the job.
Relinquishing control by handing your credit cards or your password to an opaque AI creates the same sense of unease as with the four colour proof. Only now, we are no longer colouring in a map, but trying to find its edges as we probe new territory. Does evidence that we can rely on machines await us over the horizon, or merely a digital version of “here be dragons”?