AI Research

When a journalist uses AI to interview a dead child, isn’t it time to ask what the boundaries should be? | Gaby Hinsliff


Joaquin Oliver was 17 years old when he was shot in the hallway of his high school. An older teenager, expelled some months previously, had opened fire with a high-powered rifle on Valentine’s Day in what became America’s deadliest high school shooting. Seven years on, Joaquin says he thinks it’s important to talk about what happened on that day in Parkland, Florida, “so that we can create a safer future for everyone”.

But sadly, what happened to Joaquin that day is that he died. The oddly metallic voice speaking to the ex-CNN journalist Jim Acosta in an interview on Substack this week was actually that of a digital ghost: an AI, trained on the teenager’s old social media posts at the request of his parents, who are using it to bolster their campaign for tougher gun controls. Like many bereaved families, they have told their child’s story over and over again to heartbreakingly little avail. No wonder they’re pulling desperately at every possible lever now, wondering what it takes to get dead children heard in Washington.

But, his father Manuel admits, they also simply wanted to hear their son’s voice again. His wife, Patricia, spends hours asking the AI questions, listening to him saying: “I love you, Mommy.”

No one in their right mind would ever judge a bereaved parent. If it’s a comfort to keep the lost child’s bedroom as a shrine, talk to their gravestone, sleep with a T-shirt that still faintly smells like them, then that’s no business of anyone else’s. People hold on to what they can. After 9/11, families listened to answerphone messages left by loved ones calling home to say goodbye from burning towers and hijacked planes, until the tapes physically wore out. I have a friend who still regularly re-reads old WhatsApp exchanges with her late sister, and another who occasionally texts her late father’s number with snippets of family news: she knows he isn’t there, of course, but isn’t quite ready to end the conversation yet. Some people even pay psychics to commune, in suspiciously vague platitudes, with the dead. But it’s precisely because it’s so hard to let go that grief is vulnerable to exploitation. And there may soon be big business in digitally bringing back the dead.

As with the mawkish AI-generated video Rod Stewart played on stage this week, featuring the late Ozzy Osbourne greeting various dead music legends, that might mean little more than glorified memes. Or it might be for a temporary purpose, such as the AI avatar recently created by the family of a shooting victim in Arizona to address the judge at the gunman’s sentencing. But in time, it may be something more profoundly challenging to ideas of selfhood and mortality. What if it were possible to create a permanent AI replica of someone who had died, perhaps in robot form, and carry on the conversation with them for ever?

An AI image of Ozzy Osbourne and Tina Turner shown at a Rod Stewart concert in the US, August 2025. Photograph: Iamsloanesteel Instagram

Resurrection is a godlike power, not for surrendering lightly to some tech bro with a messiah complex. But while the legal rights of the living not to have their identities stolen for use in AI deepfakes are becoming more established, the rights of the dead are muddled.

Reputation dies with us – the dead can’t be libelled – while DNA is posthumously protected. (The 1996 birth of Dolly the sheep, a genetic clone copied from a single cell, triggered global bans on human cloning.) The law governs the respectful disposal of human tissue, but it’s not bodies that AI will be trained on: it’s the private voicenotes and messages and pictures of what mattered to a person. When my father died, personally I never felt he was really in the coffin. He was so much more obviously to be found in the boxes of his old letters, the garden he planted, the recordings of his voice. But everyone grieves differently. What happens if half of a family wants Mum digitally resurrected, and the other half doesn’t want to live with ghosts?

That the Joaquin Oliver AI can never grow up – that he will be for ever 17, trapped in the amber of his teenage social media persona – is ultimately his killer’s fault, not his family’s. Manuel Oliver says he knows full well the avatar isn’t really his son, and he isn’t trying to bring him back. To him, it seems more a natural extension of the way the family’s campaign already evokes Joaquin’s life story. Yet there’s something unsettling about the plan to give his AI access to a social media account, to upload videos and gain followers. What if it begins hallucinating, or veering on to topics where it can’t possibly know what the real Joaquin would have thought?

While for now there’s a telltale glitchiness about AI avatars, as technology improves it may become increasingly hard to distinguish them from real humans online. Perhaps it won’t be long before companies or even government agencies already using chatbots to deal with customer inquiries start wondering if they could deploy PR avatars to answer journalists’ questions. Acosta, a former White House correspondent, should arguably have known better than to muddy the already filthy waters in a post-truth world by agreeing to interview someone who doesn’t technically exist. But for now, perhaps the most obvious risk is of conspiracy theorists citing this interview as “proof” that any story challenging to their beliefs could be a hoax, the same deranged lie famously peddled by Infowars host Alex Jones about the Sandy Hook school shootings.

The professional challenges involved here, however, are not just for journalists. As AI evolves, we will all increasingly be living with synthetic versions of ourselves. It won’t just be the relatively primitive Alexa in your kitchen or chatbot in your laptop – though already there are stories of people anthropomorphising AI or even falling in love with ChatGPT – but something much more finely attuned to human emotions. When one in 10 British adults tell researchers they have no close friends, of course there will be a market for AI companions, just as there is today for getting a cat or scrolling through strangers’ lives on TikTok.

Perhaps, as a society, we will ultimately decide we’re comfortable with technology meeting people’s needs when other humans sadly have not. But there’s a big difference between conjuring up a generic comforting presence for the lonely and waking the dead to order, one lost loved one at a time. There is a time to be born and a time to die, according to the verse so often read at funerals. How will it change us as a species, when we are no longer sure which is which?





AI Research

AI-powered research training to begin at IPE for social science scholars



Hyderabad: The Institute of Public Enterprise (IPE), Hyderabad, has launched a pioneering 10-day Research Methodology Course (RMC) focused on the application of Artificial Intelligence (AI) tools in social science research. Sponsored by the Indian Council of Social Science Research (ICSSR), Ministry of Education, Government of India, the program commenced on October 6 and will run through October 16, 2025, at the IPE campus at Osmania University.

Designed exclusively for M.Phil., Ph.D., and Post-Doctoral researchers across social science disciplines, the course aims to equip young scholars with cutting-edge AI and Machine Learning (ML) skills to enhance research quality, ethical compliance, and interdisciplinary collaboration. The initiative is part of ICSSR’s Training and Capacity Building (TCB) programme and is offered free of cost, with travel and daily allowances reimbursed as per eligibility.

The course is being organized by IPE’s Centre for Data Science and Artificial Intelligence (CDSAI), under the academic leadership of Prof. S Sreenivasa Murthy, Director of IPE and Vice-Chairman of AIMS Telangana Chapter. Dr. Shaheen, Associate Professor of Information Technology & Analytics, serves as the Course Director, while Dr. Sagyan Sagarika Mohanty, Assistant Professor of Marketing, is the Co-Director.

Participants will undergo hands-on training in Python, R, Tableau, and Power BI, alongside modules on Natural Language Processing (NLP), supervised and unsupervised learning, and ethical frameworks such as the Digital Personal Data Protection (DPDP) Act, 2023.

The curriculum also includes field visits to policy labs like T-Hub and NIRDPR, mentorship for research proposal refinement, and guidance on publishing in Scopus and ABDC-indexed journals.

Speaking about the program, Dr. Shaheen emphasized the need for social scientists to evolve beyond traditional methods and embrace computational tools for data-driven insights.

“This course bridges the gap between conventional research and emerging technologies, empowering scholars to produce impactful, ethical, and future-ready research,” she said.

Seats for the course are allocated on a first-come, first-served basis, and the last date for nominations is September 15, 2025. With its blend of technical training, ethical grounding, and publication support, the RMC at IPE marks a significant step toward modernizing social science research in India.

Interested candidates can contact: Dr Shaheen, Programme Director, at [email protected] or on mobile number 9866666620.




AI Research

New AI study aims to predict and prevent sinkholes in Tennessee’s vulnerable roadways



A large sinkhole that appeared on Chattanooga’s Northshore after last month’s historic flooding is just the latest example of roadway problems that are causing concern for drivers.

But a new study looks to use artificial intelligence (AI) to predict where these sinkholes will appear before they do any damage.

“It’s pretty hard to go about a week without hearing somebody talking about something going wrong with the road.”

According to the American Geoscience Institute, sinkholes can have both natural and artificial causes.

However, they tend to occur in places where water can dissolve bedrock, making Tennessee one of the more sinkhole-prone states in the country.

Brett Malone, CEO of UTK’s research park, says…

“Geological instability, the erosions, we have a lot of that in East Tennessee, and so a lot of unsteady rock formations underground just create openings that then eventually sort of cave in.”

Sinkholes like the one on Heritage Landing Drive have become a serious headache for drivers in Tennessee.

Nearby residents say it’s posed safety issues for their neighborhood.

Now, UTK says they are partnering with tech company TreisD to find a statewide solution.

The company’s AI technology could help predict where a sinkhole will form before it actually happens.

“You can speed up your research. So since we’ve been able to now use AI for 3D images, it means we get to our objective and our goals much faster.”

TreisD founder Jerry Nims says their AI algorithm uses those 3D images to study sinkholes in the hopes of learning ways to prevent them.

“If you can see what you’re working with, the experts, and they can gain more information, more knowledge, and it’ll help them in their decision making.”

We asked residents in our area, like Hudson Norton, how they would feel about a study like this.

“If it’s helping people and it can save people, then it sounds like a good use of AI, and responsible use of it, more importantly.”

Chattanooga officials say the sinkhole on Heritage Landing Drive could take up to 6 months to repair.




AI Research

New Study Reveals Challenges in Integrating AI into NHS Healthcare



Implementing artificial intelligence (AI) within the National Health Service (NHS) has emerged as a daunting endeavor, revealing significant challenges rarely anticipated by policymakers and healthcare leaders. A recent peer-reviewed qualitative study conducted by researchers at University College London (UCL) sheds light on the complexities involved in the procurement and early deployment of AI technologies tailored for diagnosing chest conditions, particularly lung cancer. The study surfaces amidst a broader national momentum aimed at integrating digital technology within healthcare systems as outlined in the UK Government’s ambitious 10-year NHS plan, which identifies digital transformation as pivotal for enhancing service delivery and improving patient experiences.

As artificial intelligence gains traction in healthcare diagnostics, NHS England launched a substantial initiative in 2023, whereby AI tools were introduced across 66 NHS hospital trusts, underpinned by a notable funding commitment of £21 million. This ambitious project aimed to establish twelve imaging diagnostic networks that could expand access to specialist healthcare opinions for a greater number of patients. The expected functionalities of these AI tools are significant, including prioritizing urgent cases for specialist review and assisting healthcare professionals by flagging abnormalities in radiological scans—tasks that could potentially ease the burden on overworked NHS staff.

However, the research also makes clear that the rollout of AI systems has not proceeded as swiftly as NHS leadership had anticipated. Drawing on interviews with hospital personnel and AI suppliers, the UCL team identified procurement processes that were unexpectedly protracted, with delays stretching from four to ten months beyond initial schedules. Strikingly, by June 2025 – 18 months after the anticipated completion date – approximately a third of the participating hospital trusts had yet to integrate these AI tools into clinical practice. This delay underlines a critical gap between the technological promise of AI and the operational realities faced by healthcare institutions.

Compounding these challenges, clinical staff already carrying high workloads have found it difficult to engage wholeheartedly with the AI project. Many staff members expressed skepticism about the efficacy of AI technologies, rooted in concerns about integration with existing healthcare workflows and about the compatibility of new AI tools with the aging IT infrastructures that vary widely across NHS hospitals. The researchers noted that many frontline workers struggled to perceive the full potential of AI, especially where procurement and implementation processes were overly complicated.

In addition to identifying these hurdles, the study underscored several factors that proved beneficial in the smooth embedding of AI tools. Enthusiastic and committed local hospital teams played a significant role in facilitating project management, and strong national leadership was critical in guiding the transition. Hospitals that employed dedicated project managers to oversee the implementation found their involvement invaluable in navigating bureaucratic obstacles, indicating a clear advantage to having directed oversight in challenging integrations.

Dr. Angus Ramsay, the study’s first author, reflected on the lessons of the investigation, particularly within the context of the UK’s push toward digitizing the NHS. The study advocates a recalibrated approach to AI implementation – one that takes account of existing pressures within the healthcare system. Ramsay noted that while the integration of AI technologies is potentially transformative, expectations about their ability to resolve deep-rooted challenges within healthcare services need to be more tempered than policymakers might wish.

Throughout the evaluation, which spanned from March to September of last year, the research team analyzed how different NHS trusts approached AI deployment and their varied focal points, such as X-ray and CT scanning applications. They observed both enthusiasm and reluctance among staff to adapt to this novel technology, with senior clinical professionals expressing reservations about accountability and about decision-making processes potentially being handed over to AI systems without adequate human oversight. This skepticism highlighted an urgent need for comprehensive training and guidance, as current onboarding processes were often inadequate for addressing employees’ many questions and concerns.

The analysis conducted by the UCL-led research team revealed that initial challenges, such as the overwhelming amount of technical information available, hampered effective procurement. Many involved in the selection process struggled to distill and comprehend essential elements contained within intricate AI proposals. This situation suggests the utility of establishing a national shortlist of approved AI suppliers to streamline procurement processes at local levels and alleviate the cognitive burdens faced by procurement teams.

Moreover, the emergence of widespread enthusiasm in some instances provided a counterbalance to initial skepticism. The collaborative nature of the imaging networks was particularly striking; team members freely exchanged knowledge and resources, which enriched the collective expertise as they navigated the implementation journey. The fact that many hospitals had staff committed to fostering interdepartmental collaboration made a substantial difference, aiding the mutual learning process involved in the integration of AI technologies.

One of the most pressing findings from the study was the realization that AI is unlikely to serve as a “silver bullet” for the multifaceted issues confronting the NHS. The variability in clinical requirements among the numerous organizations that compose the NHS creates an inherently complicated landscape for the introduction of diagnostic tools. Professor Naomi Fulop, a senior author of the study, emphasized that the diversity of clinical needs across numerous agencies complicates the implementation of diagnostic systems that can cater effectively to everyone. Lessons learned from this research will undoubtedly inform future endeavors in making AI tools more accessible while ensuring the NHS remains responsive to its staff and patients.

Moving forward, an essential next step will involve evaluating the use of AI tools post-implementation, aiming to understand their impact once they have been fully integrated into clinical operations. The researchers acknowledge that, while they successfully captured the procurement and initial deployment stages, further investigation is necessary to assess the experiences of patients and caregivers, thereby filling gaps in understanding around equity in healthcare delivery with AI involvement.

The implications of this study are profound. It sheds light on the careful considerations necessary for effective AI introduction within healthcare systems, and underscores the urgency of embedding educational frameworks that equip staff not just with operational knowledge but with an understanding of the philosophical, ethical, and practical nuances of AI in medicine. This nuanced understanding is pivotal as healthcare practitioners prepare for a future increasingly defined by technological integration and automation.

Faculty members involved in this transformative study, spanning various academic and research backgrounds, are poised to lead this critical discourse, attempting to bridge the knowledge gap that currently exists between technological innovation and clinical practice. As AI continues its trajectory toward becoming an integral part of healthcare, this analysis serves as a clarion call for future studies that prioritize patient experience, clinical accountability, and healthcare equity in the age of artificial intelligence.

Subject of Research: AI tools for chest diagnostics in NHS services.
Article Title: Procurement and early deployment of artificial intelligence tools for chest diagnostics in NHS services in England: A rapid, mixed method evaluation.
News Publication Date: 11-Sep-2025.

Keywords: AI, NHS, healthcare, diagnostics, technology, implementation, policy, research, patient care, digital transformation.

Tags: AI integration challenges in NHS healthcare, AI tools for urgent case prioritization, artificial intelligence in lung cancer diagnosis, complexities of AI deployment in healthcare, enhancing patient experience with AI, funding for AI in NHS hospitals, healthcare technology procurement difficulties, NHS digital transformation initiatives, NHS imaging diagnostic networks, NHS policy implications for AI technologies, role of AI in improving healthcare delivery, UCL research on AI in healthcare


