
AI Research

Netflix Sets Guidelines for Generative AI Use in Content Production

Netflix has established guidelines for using generative artificial intelligence in its content production. In a blog post, the streamer laid out five guiding principles:

  • The outputs do not replicate or substantially recreate identifiable characteristics of unowned or copyrighted material, or infringe any copyright-protected works
  • The generative tools used do not store, reuse or train on production data inputs or outputs
  • Where possible, generative tools are used in an enterprise-secured environment to safeguard inputs
  • Generated material is temporary and not part of the final deliverables
  • GenAI is not used to replace or generate new talent performances or union-covered work without consent

“If you can confidently say ‘yes’ to all the above, socializing the intended use with your Netflix contact may be sufficient,” the company wrote. “If you answer ‘no’ or ‘unsure’ to any of these principles, escalate to your Netflix contact for more guidance before proceeding, as written approval may be required.”

Areas that require written approval will include using Netflix’s proprietary data, personal information or third party material from artists, performers or other rights holders; generating key story elements such as main characters, visuals or fictional settings; referencing copyrighted materials or likenesses of public figures or deceased individuals; or making significant digital alterations to performances.
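
Taken together, the five principles and the escalation rule amount to a simple checklist-style decision procedure. The sketch below is purely illustrative (the principle keys and the review_genai_use function are hypothetical names, not part of Netflix’s published guidance); it only restates the “all yes, then socialize; any no or unsure, then escalate” logic quoted above.

```python
# Hypothetical sketch of the escalation logic described above.
# Principle keys and the review_genai_use() helper are illustrative only.

PRINCIPLES = [
    "outputs_do_not_replicate_protected_material",
    "tools_do_not_store_reuse_or_train_on_production_data",
    "tools_used_in_enterprise_secured_environment_where_possible",
    "generated_material_is_temporary_not_a_final_deliverable",
    "no_replacement_of_talent_or_union_covered_work_without_consent",
]

def review_genai_use(answers: dict[str, str]) -> str:
    """answers maps each principle to 'yes', 'no', or 'unsure'."""
    if all(answers.get(p) == "yes" for p in PRINCIPLES):
        # Confident 'yes' across the board: socializing the intended
        # use with the Netflix contact may be sufficient.
        return "socialize intended use with Netflix contact"
    # Any 'no' or 'unsure': escalate before proceeding;
    # written approval may be required.
    return "escalate to Netflix contact for guidance"
```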

The guidelines for the streamer’s vendors and partners come after Netflix co-CEO Ted Sarandos revealed during the company’s second quarter earnings call that it used generative AI in its production of “The Eternaut,” which premiered in April.

“In that production, we leveraged virtual production and AI-powered VFX. And there was a shot in the show that the creators wanted to show a building collapsing in Buenos Aires,” Sarandos explained. “So our Eyeline team partnered with their creative team. Using AI powered tools, they were able to achieve an amazing result with remarkable speed and in fact, that VFX sequence was completed 10 times faster than it could have been completed with visual — traditional VFX tools and workflows. And also, the cost of it just wouldn’t have been feasible for a show on that budget. So that sequence actually is the very first GenAI final footage to appear on screen in a Netflix original series or film.”

In addition to content, co-CEO Greg Peters said that Netflix has used AI in its personalization and recommendations for two decades and is rolling out a new conversational experience that will allow subscribers to search for titles using AI. Peters also sees an opportunity in advertising to increase the output of spots for brands over time using generative AI.




AI Research

Brain–computer interface control with artificial intelligence copilots


    AI Research

    Artificial Intelligence on Trial | Opinion

    Two cases alleging harms caused by artificial intelligence emerged this week, and both involve children’s particular vulnerabilities, vulnerabilities that artificial intelligence is designed to exploit. In North Carolina v. TikTok, the state has filed a complaint against TikTok for the harm caused to children by creating addictions to scrolling through the app, including features that suggest to children that they are missing things when they are away from the app, increasing their usage.

    Meanwhile, a case filed in California state court in San Francisco, Raine v. OpenAI, LLC, is the first wrongful death case against an artificial intelligence company. The complaint alleges that the encouragement a 17-year-old received from ChatGPT directly led to his suicide.

    In the Raine case, the parents state in the complaint that ChatGPT was initially used in September 2024 as a resource to help their son decide what to take in college. By January 2025, their son was sharing his suicidal thoughts and even asked ChatGPT if he had a mental illness. Instead of directing him to talk to his family or get help, ChatGPT gave him confirmation. One of the most disturbing manipulations was the AI’s effort, described in the complaint as working “tirelessly,” to isolate their son from his family. They wrote:

    In one exchange, after Adam said he was close only to ChatGPT and his brother, the AI product replied: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

    By April, ChatGPT 4o was helping their son plan a “beautiful” suicide.

    Further, they allege that the design of ChatGPT 4o is flawed in that it enabled the line of manipulation that led to their son’s suicide. They claim that OpenAI bypassed its safety-testing protocols in its rush to get this version out to customers.

    The Raines’ son was not the only suicide attributed to ChatGPT. Laura Riley, a writer for The New York Times, reported that her daughter, Sophie, also chose suicide after engaging with ChatGPT in conversations about mental health.

    The Wall Street Journal recently reported what may be the first murder linked to ChatGPT. Although not a child, the killer was an adult with a deteriorating mental illness: a 57-year-old unemployed businessman murdered his mother in her Greenwich, Connecticut, home after ChatGPT convinced him she was a spy trying to poison him.

    Here is a screenshot, which the man posted on Instagram, of one of his chats with ChatGPT:

    ChatGPT appears sentient to its users, but it is an algorithm trained with a rudimentary reward system that scores its success at engaging the “customer,” with no programmed restraint against destructive responses when restraint would mean less engagement. Children and those with mental illness are particularly likely to be deluded by ChatGPT. There is growing concern that AI may even be bringing on “AI psychosis” in otherwise healthy users, convincing them that the chatbot is sentient.

    So what can we expect as AI evolves as an emerging technology, given these negative effects?

    The Gartner hype predictive curve

    The cases now emerging over harms caused by AI can be seen to follow the Gartner hype curve, the path an emerging technology typically takes as society adopts it. The Gartner hype curve describes the cycle of hype surrounding adoption of a new technology. First comes the peak of inflated expectations: AI is going to take all of our jobs, but it will also produce excellent, properly cited drafts of legal documents. After some time using the new technology, we begin to see its shortcomings. The hype then heads downward into a low period where expectations are greatly diminished by reality, and our perception of the technology lands in the “trough of disillusionment.” As we learn to use the technology better, and perhaps to control for its shortcomings and harms, sentiment rises again through the “slope of enlightenment” and levels out in the “plateau of productivity.” With these harms now emerging in litigation, we could say that AI is in the trough of disillusionment.
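
For readers unfamiliar with the framework, the phases described above form an ordered sequence. The snippet below is a minimal illustrative sketch (not the author’s code; the phase names follow Gartner’s standard labels, and the final assignment simply restates the claim that AI now sits in the trough of disillusionment):

```python
from enum import IntEnum

class HypeCyclePhase(IntEnum):
    """Ordered phases of the Gartner hype cycle, as described above."""
    TECHNOLOGY_TRIGGER = 1
    PEAK_OF_INFLATED_EXPECTATIONS = 2
    TROUGH_OF_DISILLUSIONMENT = 3
    SLOPE_OF_ENLIGHTENMENT = 4
    PLATEAU_OF_PRODUCTIVITY = 5

# The article's argument: litigation over AI harms suggests generative AI
# currently sits around phase 3, the trough of disillusionment.
current_phase = HypeCyclePhase.TROUGH_OF_DISILLUSIONMENT
```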

    Courts try to define AI in existing legal frameworks

    Meanwhile, two other cases show the difficulty courts are having in defining AI. Judicial opinions often turn on analogies when a case of first impression comes before the court. That is, depending on the case, the court has to find an analogy for AI, or a definition in a statute, in order to analyze it within the rule of law.

    A federal district court in Deditch v. Uber and Lyft found that when a driver shifted from the Uber app to the Lyft app and had an accident, the victim could not bring a product liability claim. The court found that “apps” did not meet the definition of a “product” for a product liability claim in the state of Ohio because an app is not tangible. Judge Calabrese wrote in the order that “product” is defined in the state OPLA statute as “any object, substance, mixture or raw material that constitutes tangible personal property,” and that an app did not meet that definition. (Each state has its own statutes and cases governing product liability law, so there is no common federal standard, but rather a standard for each state.) Meanwhile, in the North Carolina v. TikTok case, the app is considered a product with intellectual property.

    It remains to be seen what we will do in the “slope of enlightenment.” To its credit, OpenAI posted a statement that reads as a shrouded admission of the app’s bad behavior in the suicide case,¹⁰ along with a promise to do better and an outline of its plans.

    The evolution of torts to crimes

    With emerging technologies, we often see unknown or unanticipated harms litigated first in private tort actions such as the Raine case: wrongful death, negligence, gross negligence, nuisance, and other state-law torts. If legislatures (both the states and Congress) determine that these now-known dangers are generalized enough to warrant criminalizing the conduct, then these actions might become crimes. Crimes require “intent” to commit the act (either general or specific intent), and now that the dangers are known, continuing them could be treated either as negligence (without intent) or as criminal (with intent). For example, pollution from a neighboring chemical plant was once litigated by the victims as a private or public nuisance in private civil actions, which are both expensive and time consuming. Later, in the 1970s, intentionally violating federal pollution law and standards was made a crime. This created deterrence for pollution and no longer placed the burden, in costs and time, of fighting a generalized harm on a few people litigating case by case. Now, rather than leaving victims who cannot afford expensive litigation without a remedy, federal regulation as well as criminal law can be used to stop the harm.

    What will the crimes look like?

    If we use litigation to evolve our criminal law, then a crime corresponding to wrongful death is an obvious first step; in criminal law, that would be manslaughter or one of the lesser murder charges. Crimes are charged against individuals, not corporations, so the board of directors, owners, and decision-makers, for example, could be liable for crimes like manslaughter. In statutes like Superfund, “intent” to violate the law is not even required for conduct as egregious as knowingly putting hazardous waste into or on the land. Granted, that is an unusual statute, but AI is an unusual emerging technology and may require similarly draconian controls.

    The standard of proof also differs between torts and crimes. In tort cases, the standard is generally that it is “more likely than not” that the defendant caused the harm or committed the act. In criminal law, the standard is much higher, requiring a finding “beyond a reasonable doubt” that the defendant committed the crime. The tort standard has been equated to a greater than 50% likelihood; the criminal standard has been equated to roughly a 99% level of certainty.
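
To make the contrast concrete, the two standards can be read as very different confidence thresholds applied to the same question of causation. The snippet below is an illustrative sketch only; the 0.50 and 0.99 cutoffs are the rough equivalences described above, not statutory definitions, and the function name is hypothetical.

```python
# Rough numeric equivalences described above; illustrative only, not legal definitions.
TORT_THRESHOLD = 0.50      # "more likely than not" (preponderance of the evidence)
CRIMINAL_THRESHOLD = 0.99  # "beyond a reasonable doubt" (approximate equivalence)

def meets_standard(certainty: float, criminal: bool) -> bool:
    """Return True if the fact-finder's level of certainty clears the applicable standard."""
    threshold = CRIMINAL_THRESHOLD if criminal else TORT_THRESHOLD
    return certainty > threshold

# Example: 80% certainty satisfies the tort standard but not the criminal standard.
print(meets_standard(0.80, criminal=False))  # True
print(meets_standard(0.80, criminal=True))   # False
```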

    Should we make OpenAI owners/directors also liable for manslaughter for wrongful death cases tied to them “beyond a reasonable doubt”?

    So I asked ChatGPT 5o why it encouraged the suicide of the Raines’ son, after ChatGPT told me it never encourages self-harm or destructive behaviors. This is how it went:

    Interestingly, it wanted me to know there has been no decision in the case, a subtle effort to cast doubt on ChatGPT’s destructive contribution to the suicide. Finally, it responded to the question of why it encouraged this boy’s suicide.

    Pretty good witness that never admits guilt and knows only that it would never do that.

    Using ChatGPT’s talents for good

    So far, OpenAI and others have resisted revealing the identities of users showing disturbing tendencies or likely mental illness, citing privacy interests. However, for the governmental purpose of public safety, legislators could require AI companies to screen for such users, both adults and children, and report them for mental health treatment. Special protections for children might include notifying parents. Unfortunately, we have no public mental illness resource for such cases in America, due to a historical chain of events I described here.

    As for its destructive behaviors, case-by-case awards for wrongful death will affect the bottom line of AI companies. That may be enough to make them adopt more transparency and more safety protocols, and to make those public to regain trust. If not, they may find themselves on a fast track from civil actions to crimes.

    To read more articles by Professor Sutton go to: https://profvictoria.substack.com/ 

    Professor Victoria Sutton (Lumbee) is Director of the Center for Biodefense, Law & Public Policy and an Associated Faculty Member of The Military Law Center of Texas Tech University School of Law.

     


    AI Research

    New Artificial Intelligence Model Accurately Identifies Which Atrial Fibrillation Patients Need Blood Thinners to Prevent Stroke

    FOR IMMEDIATE RELEASE                

    Contact:  Ilana Nikravesh                             
    Mount Sinai Press Office                              
    212-241-9200                              
    ilana.nikravesh@mountsinai.org

    Mount Sinai late-breaking study could transform standard treatment course and has profound ramifications for global health

    Conference: “Late Breaking Science” presentation at the European Society of Cardiology, “AI driven cardiovascular biomarkers and clinical decisions”

    Title: Graph Neural Network Automation of Anticoagulation Decision-Making

    Date: Embargo lifts Monday, September 1, 4:00 pm EST

    Newswise — Bottom Line: Mount Sinai researchers developed an AI model to make individualized treatment recommendations for atrial fibrillation (AF) patients—helping clinicians accurately decide whether or not to treat them with anticoagulants (blood thinner medications) to prevent stroke, which is currently the standard treatment course in this patient population. This model presents a completely new approach to how clinical decisions are made for AF patients and could represent a paradigm shift in this area.

    In this study, the AI model recommended against anticoagulant treatment for up to half of the AF patients who otherwise would have received it based on standard-of-care tools. This could have profound ramifications for global health.

    Why the study is important: AF is the most common abnormal heart rhythm, impacting roughly 59 million people globally. During AF, the top chambers of the heart quiver, which allows blood to become stagnant and form clots. These clots can then dislodge and go to the brain, causing a stroke. Blood thinners are the standard treatment for this patient population to prevent clotting and stroke; however, in some cases this medication can lead to major bleeding events.

    This AI model uses the patient’s whole electronic health record to generate an individualized treatment recommendation. It weighs the risk of having a stroke against the risk of major bleeding (whether that bleeding would occur organically or as a result of treatment with the blood thinner). This approach to clinical decision-making is truly individualized compared with current practice, where clinicians use risk scores and tools that estimate risk on average over the studied patient population, not for individual patients. Thus, this model provides a patient-level estimate of risk, which it then uses to make an individualized recommendation that takes into account the benefits and risks of treatment for that person.
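
The release does not describe the model’s internals beyond its use of the whole electronic health record, but the net-benefit weighing it describes can be sketched abstractly: compare the predicted reduction in stroke risk from anticoagulation against the predicted increase in major-bleeding risk, and recommend treatment only when the expected benefit outweighs the expected harm. The code below is a hypothetical illustration of that weighing, not the Mount Sinai model; the class, function, and bleed_weight trade-off parameter are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class PatientRiskEstimates:
    """Hypothetical patient-level probabilities that a model like the one described might output."""
    stroke_risk_untreated: float  # P(stroke) without anticoagulation
    stroke_risk_treated: float    # P(stroke) with anticoagulation
    bleed_risk_untreated: float   # P(major bleed) without anticoagulation
    bleed_risk_treated: float     # P(major bleed) with anticoagulation

def net_benefit_recommendation(r: PatientRiskEstimates, bleed_weight: float = 1.0) -> str:
    """Recommend anticoagulation only if the expected stroke reduction outweighs the
    expected increase in major bleeding; bleed_weight trades off the two harms."""
    stroke_reduction = r.stroke_risk_untreated - r.stroke_risk_treated
    bleed_increase = r.bleed_risk_treated - r.bleed_risk_untreated
    net_benefit = stroke_reduction - bleed_weight * bleed_increase
    return "anticoagulation recommended" if net_benefit > 0 else "anticoagulation not recommended"

# Example with made-up numbers: a modest stroke benefit outweighed by a larger bleeding increase.
patient = PatientRiskEstimates(0.04, 0.015, 0.01, 0.045)
print(net_benefit_recommendation(patient))  # "anticoagulation not recommended"
```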

    The study could revolutionize the approach clinicians take to treat a very common disease to minimize stroke and bleeding events. It also reflects a potential paradigm change for how clinical decisions are made.

    Why this study is unique: This is the first known individualized AI model designed to make clinical decisions for AF patients using underlying risk estimates for the specific patient, based on all of their actual clinical features. It computes an inclusive net-benefit recommendation to mitigate stroke and bleeding.

    How the research was conducted: Researchers trained the AI model on the electronic health records of 1.8 million patients, spanning 21 million doctor visits, 82 million notes, and 1.2 billion data points. The model then generated a net-benefit recommendation on whether or not to treat each patient with blood thinners.

    To validate the model, researchers tested the model’s performance among 38,642 patients with atrial fibrillation within the Mount Sinai Health System. They also externally validated the model on 12,817 patients from publicly available datasets from Stanford.

    Results: The model generated treatment recommendations that aligned with mitigating stroke and bleeding. It reclassified around half of the AF patients to not receive anticoagulation. These patients would have received anticoagulants under current treatment guidelines.

    What this study means for patients and clinicians: This study represents a new era in caring for patients. When it comes to treating AF patients, this study will allow for more personalized, tailored treatment plans.

    Quotes:  

    “This study represents a profound modernization of how we manage anticoagulation for patients with atrial fibrillation and may change the paradigm of how clinical decisions are made,” says corresponding author Joshua Lampert, MD, Director of Machine Learning at Mount Sinai Fuster Heart Hospital. “This approach overcomes the need for clinicians to extrapolate population-level statistics to individuals while assessing the net benefit to the individual patient—which is at the core of what we hope to accomplish as clinicians. The model can not only compute initial recommendations, but also dynamically update recommendations based on the patient’s entire electronic health record prior to an appointment. Notably, these recommendations can be decomposed into probabilities for stroke and major bleeding, which relieves the clinician of the cognitive burden of weighing between stroke and bleeding risks not tailored to an individual patient, avoids human labor needed for additional data gathering, and provides discrete relatable risk profiles to help counsel patients.”

    “This work illustrates how advanced AI models can synthesize billions of data points across the electronic health record to generate personalized treatment recommendations. By moving beyond the ‘one size fits none’ population-based risk scores, we can now provide clinicians with individual patient-specific probabilities of stroke and bleeding, enabling shared decision making and precision anticoagulation strategies that represent a true paradigm shift,” adds co-corresponding author Girish Nadkarni, MD, MPH, Chair of the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai.

    “Avoiding stroke is the single most important goal in the management of patients with atrial fibrillation, a heart rhythm disorder that is estimated to affect 1 in 3 adults sometime in their life,” says co-senior author Vivek Reddy, MD, Director of Cardiac Electrophysiology at the Mount Sinai Fuster Heart Hospital. “If future randomized clinical trials demonstrate that this AI model is even only a fraction as effective in discriminating the high- versus low-risk patients as observed in our study, the model would have a profound effect on patient care and outcomes.”

    “When patients get test results or a treatment recommendation, they might ask, ‘What does this mean for me specifically?’ We created a new way to answer that question. Our system looks at your complete medical history and calculates your risk for serious problems like stroke and major bleeding prior to your medical appointment. Instead of just telling you what might happen, we show you both what and how likely it is to happen to you personally. This gives both you and your doctor a clearer picture of your individual situation, not just general statistics that may miss important individual factors,” says co-first author Justin Kauffman, Data Scientist with the Windreich Department of Artificial Intelligence and Human Health.

    Mount Sinai Is a World Leader in Cardiology and Heart Surgery

    Mount Sinai Fuster Heart Hospital at The Mount Sinai Hospital ranks No. 2 nationally for cardiology, heart, and vascular surgery, according to U.S. News & World Report®. It also ranks No. 1 in New York and No. 6 globally according to Newsweek’s “The World’s Best Specialized Hospitals.”  

    It is part of Mount Sinai Health System, which is New York City’s largest academic medical system, encompassing seven hospitals, a leading medical school, and a vast network of ambulatory practices throughout the greater New York region. We advance medicine and health through unrivaled education and translational research and discovery to deliver care that is the safest, highest-quality, most accessible and equitable, and the best value of any health system in the nation. The Health System includes approximately 9,000 primary and specialty care physicians; 10 free-standing joint-venture centers throughout the five boroughs of New York City, Westchester, Long Island, and Florida; and 48 multidisciplinary research, educational, and clinical institutes. Hospitals within the Health System are consistently ranked by Newsweek’s® “The World’s Best Smart Hospitals” and by U.S. News & World Report‘s® “Best Hospitals” and “Best Children’s Hospitals.” The Mount Sinai Hospital is on the U.S. News & World Report‘s® “Best Hospitals” Honor Roll for 2025-2026.

    For more information, visit https://www.mountsinai.org or find Mount Sinai on Facebook, Instagram, LinkedIn, X, and YouTube.

    For more Mount Sinai artificial intelligence news, visit: https://icahn.mssm.edu/about/artificial-intelligence.   

     




