

We’re Light-Years Away from True Artificial Intelligence, Says Murderbot Author Martha Wells


Many people fear that if fully sentient machine intelligence ever comes to exist, it will take over the world. The real threat, though, is the risk of tech companies enslaving robots to drive up profits, author Martha Wells suggests in her far-future-set book series The Murderbot Diaries. In Wells’s world, machine intelligences inhabit spaceships and bots, and half-human, half-machine constructs offer humans protection from danger (in the form of “security units”), as well as sexual pleasure (“comfort units”). The main character, a security unit who secretly names itself Murderbot, manages to gain free will by hacking the module its owner company uses to enslave it. But most beings like it aren’t so lucky.

In Murderbot’s world, corporations control almost everything, competing among themselves to exploit planets and indentured labor. The rights of humans and robots often get trampled by capitalist greed—echoing many of the real-world sins Wells attributes to today’s tech companies. But just outside the company territory (called the “Corporation Rim”) is an independent planet named Preservation, a relatively free and peaceful society that Murderbot finds itself, against all odds, wanting to protect.

Now, with the TV adaptation Murderbot airing on Apple TV+, Wells is reaching a whole new audience. The show has won critical acclaim (and, at the time of writing, an audience rating of 96 percent on Rotten Tomatoes), and it is consistently ranked among the streamer’s most-watched series. It was recently renewed for a second season. “I’m still kind of overwhelmed by everything happening with the show,” Wells says. “It’s hard to believe.”




Scientific American spoke to Wells about the difference between today’s AI and true machine intelligence, artificial personhood and neurodivergent robots.

[An edited transcript of the interview follows.]

The Corporation Rim feels so incredibly prescient, perhaps even more now than when you published the first book in the series in 2017.

Yes, disturbingly so. This corporate trend has kind of been percolating over the past 10 or 15 years—this was the direction we’ve been going in as a society. Once we have the idea of corporations having personhood, that a corporation is somehow more of a person than an actual human individual, then it really starts to show you just how bad it can get. I feel like that’s been possible at any time; it’s not just a far-future thing. But depicting it in the far future makes it less horrific, I guess. It allows you to think about these things without feeling like you’re watching the news.

Currently the idea of going to Mars is being pushed by private companies as an answer to all the problems. But [the implication is that those who go will be] some billionaires and their coterie and their indentured servants, and that will somehow be paradise for them and just the reverse for everybody else. With corporations taking over, that’s when profit is the bottom line—profit and personal aggrandizement of whoever’s running it. You can’t have the kind of serious, careful scientific progress that we’ve had with NASA.

This world that you’ve created is so interesting because it’s a dystopia in some ways. The Corporation Rim certainly is. And yet Preservation is kind of a utopia. Do you think of them in those terms?

Not really, because by that standard, we live in a dystopia now, and I think that the term dystopia is almost making light of reality. It’s like if you call something a dystopia, you don’t have to worry about fixing it or doing anything to try to alleviate the problems. It feels hopeless. And if you have something you call a utopia, then it’s perfect, and you don’t have to think about problems it might have or how you could make it better for people.

So I don’t really think in those terms because they feel very limited. And clearly in the Corporation Rim, there are still people who manage to live there, mostly okay, just like we do here, now. And in Preservation, there are still people who have prejudices, and they still have some things to work on. But they are actually working on them, which sets it apart from the Corporation Rim.

One of the central themes of the Murderbot stories is this idea of personhood. Your books make it very clear that Murderbot, as a part-human, part-artificial construct, is definitely a person. With our technology today, do you think artificial intelligence, large language models or ChatGPT should be considered people?

Well, Murderbot is a machine intelligence, and ChatGPT is not. It’s called artificial intelligence as a marketing tool, but it’s not actually artificial intelligence. A large language model is not a machine intelligence. We don’t really have that right now.

We have algorithms that can be very powerful and can parse large amounts of data. But they do not have a sentient individual intelligence at this time. I still think we’re probably years and years and years away from anyone creating an actual artificial intelligence.

So Murderbot is fiction, because machine intelligence right now is fiction.

A large language model that pattern matches words, sometimes sort of sounds vaguely like it might be talking to you and sometimes sounds like it’s just putting patterns together in ways that look really bizarre—that’s not anywhere close to sentient machine intelligence.

I find myself feeling really conflicted because I often resent the intrusion of these language models and products that are being called artificial intelligence into modern life today. And yet I feel such affection and love for fictional artificial intelligences.

Yes! I wonder if that’s one thing that’s enabled the whole scam of AI to get such a foothold. Because so many people don’t like having it in their stuff, knowing that it’s basically taking all your data, anything you’re working on, anything you’re writing, and putting it into this churn of a pattern-matching algorithm. Probably the fictional artificial and machine intelligences over the years have sort of convinced people that this is possible and that it’s happening now. People think talking to these large language models is somehow helping them gain sentience or learn more, when it’s really not. It’s a waste of your time.

Humans are really prone to anthropomorphizing objects, especially things like our laptop and phone and all these things that respond to what we do. I think it’s just kind of baked into us, and it’s being taken advantage of by corporations to try to make money, to take jobs away from people and for their own reasons.

My favorite character in the story is ART, who is a spaceship—that is, an artificial intelligence controlling a spaceship. How did you think about differentiating this character from the half-machine, half-human Murderbot?

Ship-based consciousnesses have been around in fiction for a long time, so I can’t take credit for that. But because Murderbot relies on human neural tissue, it’s subject to the anxiety and depression and other things that humans have. And ART is not. ART was very intentionally created to work with humans and be part of a team, so it’s never had to deal with a lot of the negative things that Murderbot has. Someone on the internet described ART as, basically, if Skynet was an academic with a family. That’s one of the best descriptions I think I’ve ever seen.

One of the reasons that I and so many people love this series is how well it explores neurodiversity. You have this diversity of kinds of intelligences, and they parallel a lot of the different types of neurodiversity we see among humans in the real world. Were you thinking of it this way when you designed this universe?

Well, it taught me about my own neurodiversity. I knew I had problems with anxiety and things like that, but I didn’t know I probably had autism. I didn’t know a lot of other things until writing this particular story and then having people talk to me about it. They’re like, “How did you manage to portray neurodiversity like this?” And I’m thinking, “That’s just how my brain works. This is the way I think people think.” Until Murderbot, I don’t think I realized the extent to which it affects my writing. I have had a lot of people tell me that it helped them work out things about themselves and that it was just nice to see a character who thought and felt a lot of the same things they did.

Do you think science fiction is an especially helpful genre to explore some of these aspects of humanity?

It can be. I don’t know if it always has been. Science fiction is written by people, and the good and bad aspects of their personality go into it. A genre changes as the people who are working in it change. So I think it’s been better lately because we’ve finally gotten some more women and people of color and neurodivergent people and disabled people’s voices being heard now. And it’s made for a lot of really exciting work coming out. Lately, a lot of people are calling it another golden age of science fiction.

When I wrote [the first book in the series], All Systems Red, I put a lot of myself into it. And I think one of the reasons why people identify with a lot of different aspects of it is because I put a lot of genuine emotion into it and I was very specific about the way Murderbot was feeling about certain things and what was going on with it. I think there’s been a fallacy in fiction, particularly genre fiction, that if you make a character very generic, that lets more people identify with it. But that’s actually not true. The more specific someone is about their feelings and their issues and what’s going on with them, the more people can identify with that because of that specificity.





Nvidia hits $4T market cap as AI, high-performance semiconductors hit stride


“The company added $1 trillion in market value in less than a year, a pace that surpasses Apple and Microsoft’s previous trajectories. This rapid ascent reflects how indispensable AI chipmakers have become in today’s digital economy,” Kiran Raj, practice head, Strategic Intelligence (Disruptor) at GlobalData, said in a statement.

According to GlobalData’s Innovation Radar report, “AI Chips – Trends, Market Dynamics and Innovations,” the global AI chip market is projected to reach $154 billion by 2030, growing at a compound annual growth rate (CAGR) of 20%. Nvidia has much of that market, but it also has a giant bullseye on its back with many competitors gunning for its crown.
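For a rough sense of what that projection implies, the minimal sketch below back-solves the base-year market size from the reported 2030 figure and 20 percent growth rate. The 2024 base year, and therefore the six-year compounding window, is an assumption for illustration only; the report excerpt does not state a baseline.

```python
# Illustrative CAGR arithmetic for the projection cited above.
# Assumption (not from the article): compounding runs from a 2024 base year,
# i.e. six years of 20% annual growth toward the reported $154B in 2030.

target_2030_billions = 154.0   # projected 2030 AI chip market size (from the report)
cagr = 0.20                    # reported compound annual growth rate
years = 2030 - 2024            # assumed compounding window

# Back-solve the implied base-year size: end = start * (1 + cagr) ** years
implied_base = target_2030_billions / (1 + cagr) ** years
print(f"Implied 2024 market size: ~${implied_base:.0f}B")          # roughly $52B

# Forward check: growing the implied base at 20% a year recovers ~$154B.
print(f"2030 check: ~${implied_base * (1 + cagr) ** years:.0f}B")
```

Choosing a different base year only shifts the implied starting size; the compounding relationship itself is unchanged.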

“With its AI chips powering everything from data centers and cloud computing to autonomous vehicles and robotics, Nvidia is uniquely positioned. However, competitive pressure is mounting. Players like AMD, Intel, Google, and Huawei are doubling down on custom silicon, while regulatory headwinds and export restrictions are reshaping the competitive dynamics,” he said.





Federal Leaders Say Data Not Ready for AI


ICF has found that, while artificial intelligence adoption is growing across the federal government, data remains a challenge.

In The AI Advantage: Moving from Exploration to Impact, published Thursday, ICF revealed that 83 percent of 200 federal leaders surveyed do not think their respective organizations’ data is ready for AI use.

“As federal leaders look to begin scaling AI programs, many are hitting the same wall: data readiness,” commented Kyle Tuberson, chief technology officer at ICF. “This report makes it clear: without modern, flexible data infrastructure and governance, AI will remain stuck in pilot mode. But with the right foundation, agencies can move faster, reduce costs, and deliver better outcomes for the public.”

The report also shared that 66 percent of respondents are optimistic that their data will be ready for AI implementation within the next two years.

ICF’s Study Findings

The report shows that many agencies are experimenting with AI, with 41 percent of leaders surveyed saying they are running small-scale pilots and 16 percent saying they are scaling up efforts to implement the technology. About 8 percent of respondents shared that their AI programs have matured.

Half of the respondents said their respective organizations are focused on AI experimentation. Meanwhile, 51 percent are prioritizing planning and readiness.

The report provides advice on steps federal leaders can take to advance their AI programs, including upskilling their workforce, implementing policies to ensure responsible and enterprise-wide adoption, and establishing scalable data strategies.







AI Video Creation for Social Impact: PixVerse Empowers Billions to Tell Their Stories Using Artificial Intelligence


The rapid evolution of artificial intelligence in video creation tools is transforming how individuals and businesses communicate, share stories, and market their products. A notable development in this space comes from PixVerse, an AI-driven video creation platform, as highlighted in a keynote address by co-founder Jaden Xie on July 11, 2025. During his speech, titled “AI Video for Good,” Xie emphasized the platform’s mission to democratize video production, stating that billions of people worldwide have never created a video or used one to share their stories. PixVerse aims to empower these individuals by leveraging AI to simplify video creation, making it accessible to non-professionals and underserved communities. This aligns with broader AI trends in 2025, where generative AI tools are increasingly focused on user-friendly interfaces and inclusivity, enabling content creation at scale.

According to industry reports from sources like TechRadar, the global AI video editing market is projected to grow at a compound annual growth rate of 25.3% from 2023 to 2030, driven by demand for accessible tools in education, marketing, and personal storytelling. PixVerse’s entry into this space taps into a critical need for intuitive solutions that lower the technical barriers to video production, positioning it as a potential game-changer in the content creation ecosystem. The platform’s focus on empowering billions underscores a significant shift towards AI as a tool for social impact, beyond mere commercial applications.

From a business perspective, PixVerse’s mission opens up substantial market opportunities, particularly in sectors like education, small business marketing, and social media content creation as of mid-2025. For small businesses, AI-driven video tools can reduce the cost and time associated with professional video production, enabling them to compete with larger brands on platforms like YouTube and TikTok. Monetization strategies for platforms like PixVerse could include subscription-based models, freemium access with premium features, or partnerships with social media giants to integrate their tools directly into content-sharing ecosystems.

However, challenges remain in scaling such platforms, including ensuring data privacy for users and managing the high computational costs of AI video generation. The competitive landscape is also heating up, with key players like Adobe Express and Canva incorporating AI video features into their suites as reported by Forbes in early 2025. PixVerse must differentiate itself through user experience and accessibility to capture market share. Additionally, regulatory considerations around AI-generated content, such as copyright issues and deepfake risks, are becoming more stringent, with the EU AI Act of 2024 setting precedents for compliance that PixVerse will need to navigate. Ethically, empowering users must be balanced with guidelines to prevent misuse of AI video tools for misinformation.

On the technical front, PixVerse likely relies on advanced generative AI models, such as diffusion-based algorithms or transformer architectures, to automate video editing and content generation, reflecting trends seen in 2025 AI research from sources like VentureBeat. Implementation challenges include optimizing these models for low-bandwidth environments to serve global users, especially in developing regions where internet access is limited. Solutions could involve edge computing or lightweight AI models to ensure accessibility, though this may compromise output quality initially.

Looking ahead, the future implications of such tools are vast—by 2030, AI video platforms could redefine digital storytelling, with applications in virtual reality and augmented reality content creation. PixVerse’s focus on inclusivity could also drive adoption in educational sectors, where students and teachers create interactive learning materials. However, businesses adopting these tools must invest in training to maximize their potential and address ethical concerns through transparent usage policies. As the AI video market evolves in 2025, PixVerse stands at the intersection of technology and social good, potentially shaping how billions engage with video content while navigating a complex landscape of competition, regulation, and innovation.

FAQ:
What is PixVerse’s mission in AI video creation?
PixVerse aims to empower billions of people who have never made a video by using AI to simplify video creation, making it accessible to non-professionals and underserved communities, as stated by co-founder Jaden Xie on July 11, 2025.

How can businesses benefit from AI video tools like PixVerse?
Businesses, especially small enterprises, can reduce costs and time in video production, enabling competitive marketing on social platforms. Monetization for platforms like PixVerse could involve subscriptions or partnerships with social media ecosystems as of mid-2025.

What are the challenges in implementing AI video tools globally?
Challenges include optimizing AI models for low-bandwidth regions, managing high computational costs, ensuring data privacy, and addressing regulatory and ethical concerns around AI-generated content as highlighted in industry trends of 2025.


