‘I was constantly scared of what she was going to do’: the troubled life and shocking death of Immy Nunn

Just a few hours before she ended her life, Immy Nunn seemed happy. She and her mother, Louise, had been shopping and had lunch. It was the final day of 2022 and Immy, who was 25, appeared positive about the new year. She talked about taking her driving test and looking for a new flat. She was excited about the opportunities her profile on TikTok was bringing her; known as Deaf Immy, she had nearly 800,000 followers, attracted by her honest and often funny videos about her deafness and her mental health.
By the early hours of the next morning, Immy was dead, having taken poison she bought online, almost certainly after discovering it through an online pro-suicide forum.
On a sunny day, kitchen doors open to the garden, Louise sits at her table; every so often she glances at the photographs of her daughter. Immy’s assistance dog, Whitney, now lives with her parents, and wanders around, stopping occasionally to be stroked. Louise describes these last couple of years as “Hell. Horrible.” The pain of losing her child, she says, “you wouldn’t wish on anyone”. She copes, she says, “day by day. I struggle with a lot of things. I don’t like doing a lot.”
For the previous 10 years, Louise had been on high alert, always terrified something would happen to her daughter. Since she was about 14, Immy had experienced periods of severe mental illness. She had self-harmed and attempted suicide many times, and for four years she had been an inpatient at a psychiatric hospital.
She had spent the Christmas of 2022 at her parents’ home in Bognor Regis, West Sussex, then gone back to her flat in Brighton. On 29 December, she had cut herself and gone to hospital – as far as her family knew, it was the first time she had self-harmed in ages. Immy’s dad, Ray, went straight to see her and tried to get her to come home with him, but she told him she wanted to stay, and that she had an appointment with one of her support workers the following day. On 31 December, Louise and Ray went to spend the day with her in Brighton. They returned to Bognor Regis with Whitney because Immy was going to a New Year’s Eve party at a friend’s house in nearby Shoreham-by-Sea.
Louise was woken about 5am by the mother of Immy’s friend calling to say Immy had left unexpectedly, and without her coat and shoes. They had known Immy since she was a child, and were aware of her mental health problems. Louise phoned the police straight away and kept trying to ring Immy; Ray went out to look for their daughter, eventually driving to her flat in Brighton. When he arrived, the police and an ambulance were already there.
Immy’s devastated family is one of several that appear in a two-part Channel 4 documentary, Poisoned: Killer in the Post. It is based on an investigation by the Times journalist James Beal, which started after he was contacted by David Parfett, whose son Tom also died after taking a substance he bought online. The documentary shows the impact on vulnerable people of a pro-suicide forum where methods were discussed, including signposting to a Canadian chef, Kenneth Law, who Canadian police believe shipped about 1,200 packages of poison around the world. In the UK, the National Crime Agency has identified 97 potential victims. Law is awaiting trial in Canada, charged with 14 counts of murder – the dead were in the Ontario area and between the ages of 16 and 36 – but is pleading not guilty. About five months after Immy’s death, the police told Louise and Ray that they had been given a list of names of British people linked to Law, and Immy was on it. They were doing checks, Louise says the police told her, to see who on the list was still alive.
Louise would like to see Law extradited to the UK, though she knows this is unlikely. For a decade, she and her family made heartbreaking efforts to keep Immy safe. “And then it’s someone online. You fear the man on the corner, don’t you, but not the man you can’t see?” She would also like to see more regulation of sites that can be harmful to vulnerable people. “The [government] are allowing them; no one’s stopping them from doing it.” The site Immy is believed to have accessed is now under investigation by Ofcom; as of 1 July, it was no longer accessible to people in the UK.
A journalist had shown Louise the site, and she was shocked at how accessible it was. “It wasn’t even on the dark web,” she says. “I was just shocked that something like that is just there. How is it even allowed?” Vulnerable people who are struggling understandably might want to find others who are feeling the same, but the site encourages and facilitates suicide – methods are discussed and tips swapped, and “goodbye” posts are met with congratulatory messages. As for Law, Louise says: “I hate him. Hate the sound of his name, hate seeing his face.”
Immy was always a fighter, Louise says. She had been born six weeks early and spent her first couple of weeks in hospital. The fourth of her five children, Immy had siblings who doted on her. “She was just beautiful,” Louise says of Immy as a baby. “She was so good and happy; everything about her was just perfect.” The family found out that Immy was profoundly deaf when she was 18 months old, though Louise suspected it already (one of her older children also has hearing loss, though not to the extent Immy did). Having a child with additional needs meant they spent a lot of time together. When Immy was three, she had cochlear implants, which involved trips to Great Ormond Street hospital in London every few weeks.
She was happy at school, Louise remembers. It was a mainstream school but with a unit for the several deaf children there at the time. Then, when Immy was about 13, Louise noticed a change in her. Some of her deaf friends had left, and Immy stopped seeing other friends. “You just thought: ‘Typical teenager’, until one day I saw cuts on her legs and I realised that there was something going on,” says Louise. She had been running away from school, and was clearly unhappy there. She had an appointment with the Child and Adolescent Mental Health Services but refused to go, then took her first overdose shortly before she turned 15. “I thought she was dead at that point,” says Louise. “Reality hits – this is really serious.”
The National Deaf Children’s Society helped Louise advocate for Immy at school, and find her a place at a leading school for deaf children, but it took a while, and Immy’s mental health was deteriorating. After school one day, Louise could hear her in the bathroom and became worried about what she was doing, but couldn’t get her to come out. Immy’s older sister went in and found she had cut her arm badly. “I just remember her face and her saying, ‘Mum, you need to get her to hospital straight away.’ I was constantly scared of what she was going to do.”
There were other suicide attempts. Ray is a roofer and Louise had worked part-time in a shop, fitting it around looking after the children, but she gave that up to be there for Immy. “If she was at home, you wouldn’t leave her for a second,” she says.
Immy was in and out of children’s mental health units and then got a place in a unit for deaf children in London. “We would go up two, three times a week to visit and she was doing really well, but she could only stay there until she was 18,” says Louise. Once Immy was discharged, Louise says there was no follow-up care and she was instead put on unfamiliar medication, which she had a terrible reaction to. “We ended up right back where we were. She was in her room smashing things over her head, blood everywhere.”
The following year, Immy was back in psychiatric hospital, where she would be for the next four years. The family hoped it would be the start of Immy getting better, but it was also, says Louise, “four years of hell. We just didn’t know when you were going to get a phone call.” On the weekends she was allowed home, Louise would sleep in her room with her “because I was so scared of what she’d get up and do”.
Immy had been diagnosed with emotionally unstable personality disorder, PTSD and other conditions including depression and anxiety. There were periods when she was well and she seemed happy; she had a girlfriend for a while. “She’d have really good days; you’d be able to go on holiday and have fun times. But you just never knew when her mind was going to suddenly hurt herself, and she didn’t know. That was the scary thing. She’d just dissociate.”
Starting a TikTok account in 2020 helped her, Louise says. “It took her mind off things. Obviously, she was still really poorly. She’d have her good days and bad days. But I think because of the followers that grew, she felt she could help other people. As her followers grew, her confidence grew, and I think she felt as if she’d finally found something that she could do.” It helped her embrace the deaf and LGBTQ+ communities and gave her a sense of identity. “She felt as if she belonged, whereas she never really knew where she belonged.”
Immy showed her followers what life in a psychiatric hospital was like, and was open about her struggles. But she could also be joyful, and often got her family involved, usually her mum. “You’d be sat in the evenings, and she’d say, ‘Mum: I’ve got an idea – I want you to be in it.’ I loved watching her laugh.” Immy was getting brand and charity collaborations, and positive messages from people who said she’d helped them. “She just couldn’t believe it, and we were just so excited for her,” says Louise.
She was desperate to try to live more independently, even though Louise thought she wasn’t ready to leave hospital. “She was determined. She’d been in there for four years; she wanted out, she wanted a normal life.” It was a worry, she says, having Immy live an hour away in Brighton, and she would video-call her often – again and again if she didn’t pick up. “She didn’t want me to keep worrying. She was like, ‘Mum: I’m 24 – let me have my life.’” And she seemed to be doing well, though Louise could never relax.
Early in 2022, Immy took an overdose. Nine months after that, in November, she told her support worker she had been on a pro-suicide forum and had bought poison from it. Louise didn’t know about this until just before the inquest. The police went to do a welfare check on Immy, but didn’t take a British Sign Language (BSL) interpreter – something Louise was familiar with in all the years of trying to get Immy the care she needed. She would go to see doctors with her, she says, and there would be no interpreter. Louise would have to accompany Immy, even when Immy didn’t want her to, so that she could explain things to her. After that police visit, Immy wasn’t seen by a mental health professional for several weeks.
A few days after Christmas with her parents, Immy harmed herself and went to hospital but left before being seen by the mental health team. She told her parents that she’d been in hospital, and Ray immediately went to see her. “We didn’t know how bad she was,” says Louise. “The plan was that he was going to bring her home, but she said she wasn’t coming back.” Of course they were alarmed, but sadly this wasn’t out of the ordinary for Immy. “She self-harmed a lot. That was her coping mechanism. We had no clue that anything else was going on.”
Immy had sent a text to her support worker, saying she thought she needed to be admitted to psychiatric hospital and that she “could easily go to the last resort” even though she didn’t want to. In another message to her psychologist the following day, she said she planned to take poison, but also said she didn’t have any (she did – it was later discovered she had already bought some online). She agreed to be admitted to a mental health crisis facility, but that didn’t happen that day. A meeting that she was supposed to have with her care coordinator also didn’t happen. The inquest found failings in mental health care contributed to Immy’s death. The coroner also highlighted systemic challenges to deaf patients, particularly the shortage of BSL interpreters. With grim irony, the inquest itself had to be adjourned at one point because of a lack of interpreters.
Louise says the family has received no apology. The trial of Law isn’t due to start until early next year, and he has been charged only over deaths in Canada. She says she feels stuck. “I always feel as if I’m waiting for the next thing. It’s just hard.”
She likes to talk about Immy, but she finds it hard to watch her videos. “The dogs start crying when they hear her voice, especially Whitney – she still recognises Immy’s voice, and then that upsets me.” There are some lovely videos of Immy and her mum together, including the two of them singing and signing You Are My Sunshine – the first song, Immy wrote, that her mum taught her with sign language.
She touched a lot of people in her short life. It has helped to receive messages from people who were helped by Immy’s videos and her work on deaf awareness and mental health, says Louise. “I’ve had some that said: ‘She basically saved my life.’”
Poisoned: Killer in the Post is on Channel 4 at 9pm on Wednesday 9 and Thursday 10 July
For more information on online safety for young people, visit the Thomas William Parfett Foundation and the Molly Rose Foundation
From Language Sovereignty to Ecological Stewardship

Last Updated on September 10, 2025
Artificial intelligence is often framed as a frontier that belongs to Silicon Valley, Beijing, or the halls of elite universities. Yet across the globe, Indigenous peoples are shaping AI in ways that reflect their own histories, values, and aspirations. These efforts are not simply about catching up with the latest technological wave—they are about protecting languages, reclaiming data sovereignty, and aligning computation with responsibilities to land and community.
From India’s tribal regions to the Māori homelands of Aotearoa New Zealand, Indigenous-led AI initiatives are emerging as powerful acts of cultural resilience and political assertion. They remind us that intelligence—whether artificial or human—must be grounded in relationship, reciprocity, and respect.
Giving Tribal Languages a Digital Voice
Just this week, researchers at IIIT Hyderabad, alongside IIT Delhi, BITS Pilani, and IIIT Naya Raipur, launched Adi Vaani, a suite of AI-powered tools designed for tribal languages such as Santali, Mundari, and Bhili.
At the heart of the project is a simple premise: technology should serve the people who need it most. Adi Vaani offers text-to-speech, translation, and optical character recognition (OCR) systems that allow speakers of marginalized languages to access education, healthcare, and public services in their mother tongues.
One of the project’s most promising outputs is a Gondi translator app that enables real-time communication between Gondi, Hindi, and English. For the nearly three million Gondi speakers who have long been excluded from India’s digital ecosystem, this tool is nothing less than transformative.
Speaking about the value of the app, research scholar Gopesh Kumar Bharti commented, “Like many tribal languages, Gondi faces several challenges due to its lack of representation in the official schedule, which hampers its preservation and development. The aim is to preserve and restore the Gondi language so that the next generation understands its cultural and historical significance.”
Latin America’s Open-Source Revolution
In Latin America, a similar wave of innovation is underway. Earlier this year, researchers at the Chilean National Center for Artificial Intelligence (CENIA) unveiled Latam-GPT, a free and open-source large language model trained not only on Spanish and Portuguese, but also incorporating Indigenous languages such as Mapuche, Rapanui, Guaraní, Nahuatl, and Quechua.
Unlike commercial AI systems that extract and commodify, Latam-GPT was designed with sovereignty and accessibility in mind.
To be successful, Latam-GPT needs to ensure the participation of “Indigenous peoples, migrant communities, and other historically marginalized groups in the model’s validation,” said Varinka Farren, chief executive officer of Hub APTA.
But as with most good things, it’s going to take time. Rodrigo Durán, CENIA’s general manager, told Rest of World that it will likely take at least a decade.
Māori Data Sovereignty: “Our Language, Our Algorithms”
Half a world away, the Māori broadcasting collective Te Hiku Media has become a global leader in Indigenous AI. In 2021, the organization released an automatic speech recognition (ASR) model for Te Reo Māori with an accuracy rate of 92%—outperforming international tech giants.
Their achievement was not the result of corporate investment or vast computing power, but of decades of community-led language revitalization. By combining archival recordings with new contributions from fluent speakers, Te Hiku demonstrated that Indigenous peoples can own not only their languages but also the algorithms that process them.
As co-director Peter-Lucas Jones explained: “In the digital world, data is like land. If we do not have control, governance, and ongoing guardianship of our data as indigenous people, we will be landless in the digital world, too.”
Indigenous Leadership at UNESCO
On the global policy front, leadership is also shifting. Earlier this year, UNESCO appointed Dr. Sonjharia Minz, an Oraon computer scientist from India’s Jharkhand state, as co-chair of the Indigenous Knowledge Research Governance and Rematriation program.
Her mandate is ambitious: to guide the development of AI-based systems that can securely store, share, and repatriate Indigenous cultural heritage. For communities who have seen their songs, rituals, and even sacred objects stolen and digitized without consent, this initiative signals a long-overdue turn toward justice.
As Dr. Minz told The Times of India, “We are on the brink of losing indigenous languages around the world. Indigenous languages are more than mere communication tools. They are repository of culture, knowledge and knowledge system. They are awaiting urgent attention for revitalization.”
AI and Environmental Co-Stewardship
Artificial intelligence is also being harnessed to care for the land and waters that sustain Indigenous peoples. In the Arctic, communities are blending traditional ecological knowledge with AI-driven satellite monitoring to guide adaptive mariculture practices—helping to ensure that changing seas still provide food for generations to come.
In the Pacific Northwest, Indigenous nations are deploying AI-powered sonar and video systems to monitor salmon runs, an effort vital not only to ecosystems but to cultural survival. Unlike conventional “black box” AI, these systems are validated by Indigenous experts, ensuring that machine predictions remain accountable to local governance and ecological ethics.
Such projects remind us that AI need not be extractive. It can be used to strengthen stewardship practices that have protected biodiversity for millennia.
The Hidden Toll of AI’s Appetite
As Indigenous communities lead the charge toward ethical and ecologically grounded AI, we must also confront the environmental realities underpinning the technology—especially the vast energy and water demands of large language models.
In Chile, the rapid proliferation of data centers—driven partly by AI demands—has sparked fierce opposition. Activists argue that facilities run by tech giants like Amazon, Google, and Microsoft exacerbate water scarcity in drought-stricken regions. As one local put it, “It’s turned into extractivism … We end up being everybody’s backyard.”
The energy hunger of LLMs compounds this strain further. According to researchers at MIT, training clusters for generative AI consume seven to eight times more energy than typical computing workloads, accelerating energy demands just as renewable capacity lags behind.
Globally, data centers consumed a staggering 460 terawatt-hours (TWh) in 2022, comparable to the annual electricity use of an entire country such as France, and they are projected to reach 1,050 TWh by 2026, which would place data centers among the top five global electricity users.
LLMs aren’t just energy-intensive; their environmental footprint extends across their whole lifecycle. New modeling shows that inference, the everyday use of pre-trained models, now accounts for more than half of total emissions. Meanwhile, Google’s own reporting shows that its greenhouse gas emissions have risen by roughly 48% over five years, driven in large part by AI.
Communities hosting data centers often face additional challenges of their own, beyond the strain on water and energy supplies.
This environmental reckoning matters deeply to Indigenous-led AI initiatives—because AI should not replicate colonial patterns of extraction and dispossession. Instead, it must align with ecological reciprocity, sustainability, and respect for all forms of life.
Rethinking Intelligence
Together, these Indigenous-led initiatives compel us to rethink both what counts as intelligence and where AI should be heading. In the mainstream tech industry, intelligence is measured by processing power, speed, and predictive accuracy. But for Indigenous nations, intelligence is relational: it lives in languages that carry ancestral memory and in stories that guide communities toward balance and responsibility.
When these values shape artificial intelligence, the results look radically different from today’s extractive systems. AI becomes a tool for reciprocity instead of extraction. In other words, it becomes less about dominating the future and more about sustaining the conditions for life itself.
This vision matters because the current trajectory of AI cannot be sustained: an arms race of ever-larger models, resource-hungry data centers, and escalating ecological costs.
The challenge is no longer technical but political and ethical. Will governments, institutions, and corporations make space for Indigenous leadership to shape AI’s future? Or will they repeat the same old colonial logics of extraction and exclusion? Time will tell.
How federal tech leaders are rewriting the rules for AI and cyber hiring

Terry Gerton Well, there’s a lot of things happening in your world. Let’s talk about, first, the new memo that came out at the end of August that talks about FedRAMP 20x. Put that in plain language for folks and then tell us what it means for PSC and its stakeholders.
Jim Carroll Yeah, I think what it really means is that it’s a reflection of what’s happening in the industry overall, the GovCon world, as well as probably everything that we do, even as individual citizens, which is more and more reliance on AI. The artificial intelligence world has really picked up steam. I saw mention of it on the news today, where they were talking about how every Google search now incorporates AI. So what we’re seeing with this GSA and FedRAMP initiative is really an effort to fast-track the authorization of the cloud-based services side of AI, because it is becoming more and more a part of every basic use, not only in our private lives but also in the federal contracting space. We are seeing more and more federal government officials using it for routine things. So I think this is really a reflection that they are going to move this as quickly as possible, in recognition that the world is changing right in front of us.
Terry Gerton So is this more for government contractors who are offering AI products, or for government contractors who are using AI in their internal products?
Jim Carroll It’s really for providers of AI-based cloud services, whose tools allow not only them but federal workers to access AI at a much faster pace. And, you know, there are certainly some challenges with AI. You’re hearing some of the futurists ask: do we really understand AI enough to embrace it to the extent that we have? I don’t think anyone really knows the answer to that, but we know it’s out there, and there is this recognition that there will be an ongoing, routine federal use of AI. So let’s at least have the major players that are doing it the best authorized to provide the service. So much is happening right now in the AI space. There are a lot of acronyms we’re going to talk about today that are happening, but AI is the one that really is. We did a poll of our 400 member companies at PS Council, and I think 45% or 50% mentioned the use of AI on their homepage. So I think there’s just a recognition that GSA wants to be able to provide these solutions to federal government workers.
Terry Gerton Do you see any risks or trade-offs in accelerating this approval versus adopting things that might not quite be ready for prime time?
Jim Carroll You know, I think there’s always that concern, as I mentioned, with some of the futurists who are looking at this and making sure that it’s safe. We’re hearing about it from the White House; you’ve seen some public panels already, and we’ve been asked to bring our PSC members to the White House for a policy discussion on some of the legal issues around AI. So we’ll be bringing some members to the White House in the next couple of weeks. And I think there is a concern that the people who use AI also double-check to make sure it’s accurate, right? That’s one of the things people want to be sure of: there should not be an over-reliance or an exclusive reliance on AI tools, and we need to make sure that the solutions and answers our AI tools are giving us are actually accurate. One of the concerns, which goes into something that’s happening this week, is cybersecurity. Is AI secure? Is the use of it going to be able to safeguard some of the really important national security work that we’re doing? And how do we do that?
Terry Gerton I’m speaking with Jim Carroll. He’s the CEO of the Professional Services Council. Well, let’s stick in that tech vein and cybersecurity. There’s a new bill in Congress that wants to shift cybersecurity hiring to more of a skills-based qualification than professional degrees. How does PSC think about that proposal?
Jim Carroll I think, again, it’s a reflection of what’s actually out there — that these new tools, we’ll say in cybersecurity, [are] really based on an individual’s ability to maneuver in this space, as opposed to just a degree. And being able to really focus on the ability of everyone, I think, levels the playing field, right? It means more and more people are qualified to do this. When you take away a — I hate to say a barrier such as a degree, but it’s a reflection that there are other skill sets that people have learned to be able to actually do their work. And I can say this, having gotten a law degree many years ago: you really learn how to practice law by doing it, by having a mentor and doing it over the years, as opposed to just having a law degree. I don’t think I would be a good person to go out and represent anyone on anything the day after graduating from law school. You really need to learn how to apply it, and I think that’s what this bipartisan bill is doing. So, you know, we’re encouraging more and more people being able to get into this, because there’s a greater and greater need, Terry. And so we’re okay with this.
Terry Gerton So what might it mean then for the GovCon workforce?
Jim Carroll I think there’s an opportunity here for the GovCon workspace and employees to expand and really get some super-talented people to work at these federal agencies. That’s a great plus for actually achieving the desired results that our GovCon members at PS Council are able to deliver: we’re going to get the best and brightest and bring those people in to give real solutions.
Terry Gerton So the bill calls for more transparency from OPM on education-related hiring policies. Does PSC have an idea of what kind of oversight they’d like to see about that practice?
Jim Carroll Yeah, we’re looking into it now. We’re talking to our members and seeing what kind of oversight they have in mind. You know, as the leading trade organization for 400 companies that do business with the federal government, so many of them in the cybersecurity space, we’re able to go to our members and get from them the safeguards they think are important, get the requirements they think are important, and get those in there. So this is going to be a deliberative process. We have a little bit of time to work on this, but we’re excited about the potential. We really think this will be able to deliver great solutions, Terry.
Terry Gerton Well, speaking of cyber, there’s a new memo out on the cybersecurity maturity model. What’s your hot take there?
Jim Carroll Terry, how long has that been pending? I think five years; five years is what I heard this morning. This will provide three levels of certification and clarity for CMMC [(Cybersecurity Maturity Model Certification)]. We’re looking at it now. This is obviously a critical issue, and we are starting a working group. We’re going to be able to provide resources to our members for this, to make sure they’re ready for the certifications — some of which are going to be very expensive for our members, depending on what type of certification they want. So we’re gearing up. We have been ready for this. Like I said, we started planning for this five years ago, right? So did you, Terry. So we have five years of thought going into it, and we will be announcing and developing a website for our members to have information on this and learn from it. We’ll be conducting seminars for our members. So now that CMMC — the other acronym I mentioned earlier — is finally here, it’ll be implemented, I guess, in 60 days. And so we’ll have some time to use the skills we have been developing over the last five years to give to our members.
Terry Gerton Any surprises for you in the final version? I know that PSC had quite a bit of input in the development.
Jim Carroll Not right now. We’re still looking at it; obviously, it just dropped in the last 24 hours, and so far nothing has caught us off guard. We’ve been ready for this, and we’re ready to educate our members on it.
Techno-Utopians Like Elon Musk Are Treading Old Ground

In “The Singularity Is Nearer: When We Merge with AI,” the futurist Ray Kurzweil imagines the point in 2045 when rapid technological progress crosses a threshold as humans merge with machines, an event he calls “the singularity.”
Although Kurzweil’s predictions may sound more like science fiction than fact-based forecasting, his brand of thinking goes well beyond the usual sci-fi crowd. It has provided inspiration for American technology industry elites for some time, chief among them Elon Musk.
With Neuralink, his company that is developing computer interfaces implanted in people’s brains, Musk says he intends to “unlock new dimensions of human potential.” This fusion of human and machine echoes Kurzweil’s singularity. Musk also cites apocalyptic scenarios and points to transformative technologies that can save humanity.
Ideas like those of Kurzweil and Musk, among others, can seem as if they are charting paths into a brave new world. But as a humanities scholar who studies utopianism and dystopianism, I’ve encountered this type of thinking in the futurist and techno-utopian art and writings of the early 20th century.
Techno-utopianism’s origins
Techno-utopianism emerged in its modern form in the 1800s, when the Industrial Revolution ushered in a set of popular ideas that combined technological progress with social reform or transformation.
Kurzweil’s singularity parallels ideas from Italian and Russian futurists amid the electrical and mechanical revolutions that took place at the turn of the 20th century. Enthralled by inventions like the telephone, automobile, airplane and rocket, those futurists found inspiration in the concept of a “New Human,” a being who they imagined would be transformed by speed, power and energy.
A century ahead of Musk, Italian futurists imagined the destruction of one world, so that it might be replaced by a new one, reflecting a common Western techno-utopian belief in a coming apocalypse that would be followed by the rebirth of a changed society.
One especially influential figure of the time was Filippo Marinetti, whose 1909 “Founding and Manifesto of Futurism” offered a nationalistic vision of a modern, urban Italy. It glorified the tumultuous transformation caused by the Industrial Revolution. The document describes workers becoming one with their fiery machines. It encourages “aggressive action” coupled with an “eternal” speed designed to break things and bring about a new world order.
The overtly patriarchal text glorifies war as “hygiene” and promotes “scorn for woman.” The manifesto also calls for the destruction of museums, libraries and universities and supports the power of the rioting crowd.
Marinetti’s vision later drove him to support and even influence the early fascism of Italian dictator Benito Mussolini. However, the relationship between the futurism movement and Mussolini’s increasingly anti-modern regime was an uneasy one, as Italian studies scholar Katia Pizzi wrote in “Italian Futurism and the Machine.”
Further east, the Russian revolutionaries of 1917 adopted a utopian faith in material progress and science. They combined a “belief in the ease with which culture could be destroyed” with the benefits of “spreading scientific ideas to the masses of Russia,” historian Richard Stites wrote in “Revolutionary Dreams.”
For the Russian left, an “immediate and complete remaking” of the soul was taking place. This new proletarian culture was personified in the ideal of the New Soviet Man. This “master of nature by means of machines and tools” received a polytechnical education instead of the traditional middle-class pursuit of the liberal arts, humanities scholar George Young wrote in “The Russian Cosmists.” The first Soviet People’s Commissar of Education, Anatoly Lunacharsky, supported these movements.
Although their political ideologies took different forms, these 20th-century futurists all focused their efforts on technological advancement as an ultimate objective. Techno-utopians were convinced that the dirt and pollution of real-world factories would automatically lead to a future of “perfect cleanliness, efficiency, quiet, and harmony,” historian Howard Segal wrote in “Technology and Utopia.”
Myths of efficiency and everyday tech
Despite the remarkable technological advances of that time, and since, the vision of those techno-utopians largely has not come to pass. In the 21st century, it can seem as if we live in a world of near-perfect efficiency and plenitude thanks to the rapid development of technology and the proliferation of global supply chains. But the toll that these systems take on the natural environment – and on the people whose labor ensures their success – presents a dramatically different picture.
Today, some of the people who espouse techno-utopian and apocalyptic visions have amassed the power to influence, if not determine, the future. At the start of 2025, through the Department of Government Efficiency, or DOGE, Musk introduced a fast-paced, tech-driven approach to government that has led to major cutbacks in federal agencies. He’s also influenced the administration’s huge investments in artificial intelligence, a class of technological tools that public officials are only beginning to understand.
The futurists of the 20th century influenced the political sphere, but their movements were ultimately artistic and literary. By contrast, contemporary techno-futurists like Musk lead powerful multinational corporations that influence economies and cultures across the globe.
Does this make Musk’s dreams of human transformation and societal apocalypse more likely to become reality? If not, these elements of his project are likely to remain theoretical, just as the dreams of the last century’s techno-utopians did.