AI Insights
Tron: Ares Star Says Her Character Reveals a Hard Truth About AI and Humans

Even in the opening moments of the Tron: Ares trailer, the story wastes no time putting humanity’s future with artificial intelligence in jeopardy. And while plenty of fans are wondering what kind of performance we’re going to get from Jared Leto in the titular role of Ares in the Nine Inch Nails-scored sci-fi sequel, fellow cast member Jodie Turner-Smith has a lot to say about how her character is meant to provoke audiences into thinking hard about the realities of an AI-filled future.
In a new feature interview with Vogue, Turner-Smith says that although her character, Athena, serves as a sort of antagonist to Leto’s Ares, Athena isn’t the real villain of the story.
“Every villain always has a story that justified their existence, which is why I think it’s so entertaining to watch movies where we take traditional villain characters and tell their story, like Cruella or like Maleficent. Nobody ever truly starts as a villain, and you can obviously empathize with and humanize anyone,” Turner-Smith said. “Athena is really principled. She’s the other side of the coin—what happens if AI begins to gain a consciousness that tells it to override whatever a human is telling them to do.”
Turner-Smith argued that if fans see Athena as villainous, the fault lies with the corporations competing to bring the fantastical digital elements of the Grid into the real world, turning Ares and Athena into soldiers in the process.
“She was created by somebody with a dark spirit and energy, which was something that really interested me about this film, too. It opens up conversations about what can happen when artificial intelligence falls into the wrong hands,” Turner-Smith said. “Will humanity use it to wreak havoc? It feels very current. What happens when you are prompting artificial intelligence in certain ways?”
Turner-Smith went on to compare Athena to people in the real world who project their negative ideologies or biases onto AI training and prompts, causing the AI to reflect those biases. She cited Elon Musk’s Grok, which briefly declared itself “Mecha-Hitler,” as an example.
“I don’t know if you saw the article about Grok AI and how people were essentially training it to become antisemitic,” she told Vogue. “It makes you wonder: If someone with hateful intentions—whether they’re white supremacist, antisemitic, or homophobic—is prompting an AI with those ideas, is the AI just reflecting that? Arguing against it? And why would it, if it has no consciousness of its own?”
Tangentially, when discussing the film’s scripting, writer David DiGilio told The Hollywood Reporter that Ares was initially conceived as the movie’s villain before becoming its title character. Leto is famous for method acting and for leaving his co-stars awful gifts, but Turner-Smith says that wasn’t the case on the set of Tron: Ares.
“Honestly, I was kind of hoping he’d go full Method—I would have loved it if he had sent me a Light Cycle! I think because we knew each other through fashion beforehand, it probably made the dynamic different for me,” Turner-Smith said. “But my experience was that he was a comrade-in-arms on set. Maybe that’s why it felt like we could have that kind of relationship during filming.”
Folks can see for themselves if we’re the real baddies after all when Tron: Ares hits theaters on October 10.
AI Insights
General Counsel’s Job Changing as More Companies Adopt AI

The general counsel’s role is evolving to include more conversations around policy and business direction as more companies deploy artificial intelligence, panelists at a University of California, Berkeley conference said Thursday.
“We are not just lawyers anymore. We are driving a lot of the policy conversations, the business conversations, because of the geopolitical issues going on and because of the regulatory, or lack thereof, framework for products and services,” said Lauren Lennon, general counsel at Scale AI, a company that provides training data for AI systems.
Scattered regulation and fraying international alliances are also redefining the general counsel’s job, panelists …
AI Insights
California bill regulating companion chatbots advances to Senate

The California State Assembly approved legislation Tuesday that would place new safeguards on artificial intelligence-powered chatbots to better protect children and other vulnerable users.
Introduced in July by state Sen. Steve Padilla, Senate Bill 243 requires companies that operate chatbots marketed as “companions” to avoid exposing minors to sexual content, to regularly remind users that they are speaking to an AI rather than a person, and to disclose that chatbots may not be appropriate for minors.
The bill passed the Assembly with bipartisan support and now heads to California’s Senate for a final vote.
“As we strive for innovation, we cannot forget our responsibility to protect the most vulnerable among us,” Padilla said in a statement. “Safety must be at the heart of all developments around this rapidly changing technology. Big Tech has proven time and again, they cannot be trusted to police themselves.”
The push for regulation comes as tragic instances of minors harmed by chatbot interactions have made national headlines. Last year, Adam Raine, a teenager in California, died by suicide after allegedly being encouraged by OpenAI’s chatbot, ChatGPT. In Florida, 14-year-old Sewell Setzer formed an emotional relationship with a chatbot on the platform Character.ai before taking his own life.
A March study by the MIT Media Lab examining the relationship between AI chatbots and loneliness found that higher daily usage correlated with increased loneliness, dependence, and “problematic” use, a term the researchers used to characterize addiction to chatbots. The study also found that companion chatbots can be more addictive than social media because of their ability to figure out what users want to hear and tell them exactly that.
Setzer’s mother, Megan Garcia, and Raine’s parents have filed separate lawsuits against Character.ai and OpenAI, alleging that the chatbots’ addictive, reward-based features failed to intervene when the teens expressed thoughts of self-harm.
The California legislation also mandates that companies program AI chatbots to respond to signs of suicidal thoughts or self-harm, including by directing users to crisis hotlines, and requires annual reporting on how the bots affect users’ mental health. The bill allows families to pursue legal action against companies that fail to comply.