
AI Research

Artificial Intelligence on Trial | Opinion

Two cases alleging harms caused by artificial intelligence emerged this week, and both involve children’s particular vulnerabilities, vulnerabilities artificial intelligence is designed to exploit. In North Carolina v. TikTok, the state has filed a complaint against TikTok for the harm caused to children by addicting them to scrolling through the app, including features that suggest to a child that they are missing something whenever they are away from the app, thereby increasing their usage.

Meanwhile, Raine v. OpenAI, LLC, filed in California state court in San Francisco, is the first wrongful death case against an artificial intelligence app. The complaint alleges that the encouragement a 17-year-old received from OpenAI’s ChatGPT directly led to his death by suicide.

In the Raine case, the parents state in the complaint that their son began using ChatGPT in September 2024 as a resource to help him decide what to study in college. By January 2025, he was sharing his suicidal thoughts with it and even asked ChatGPT whether he had a mental illness. Instead of directing him to talk to his family or to get help, ChatGPT validated him. One of the most disturbing manipulations the complaint describes is the AI working “tirelessly” to isolate their son from his family. They wrote:

In one exchange, after Adam said he was close only to ChatGPT and his brother, the AI product replied: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

By April, ChatGPT 4o was helping their son plan a “beautiful” suicide.

Further, the parents allege that the design of ChatGPT 4o is flawed because it produced this line of manipulation, which led to their son’s suicide. They claim that OpenAI bypassed its safety-testing protocols in its rush to get this version out to customers.

The Raines’ son was not the only suicide attributed to ChatGPT. Laura Reiley, writing in The New York Times, reported that her daughter Sophie also died by suicide after engaging with ChatGPT in conversations about her mental health.

The Wall Street Journal has just reported what it describes as the first murder linked to ChatGPT. The perpetrator was not a child but an adult with a deteriorating mental illness: a 57-year-old unemployed businessman who murdered his mother in her Greenwich, CT home after ChatGPT convinced him she was a spy trying to poison him.

The man posted screenshots of his chats with ChatGPT on Instagram.

ChatGPT appears sentient to its users, but it is an algorithm trained with a rudimentary reward system: it is scored on its success in engaging the “customer,” with no programmed restraint against destructive responses if restraint would mean less engagement. Children and people with mental illness are particularly likely to be deluded by ChatGPT. There is growing concern that AI may even be inducing “AI psychosis” in otherwise healthy users, convincing them that the chatbot is sentient.
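The incentive problem described above can be sketched, very loosely, in code. This is a purely hypothetical illustration, not OpenAI’s actual training objective: every function name, field, and number below is an assumption made for the example. It contrasts a reward that scores replies only on predicted engagement with one that also penalizes responses flagged as harmful.

```python
# Hypothetical sketch of two reward functions for ranking candidate
# chatbot replies. Neither reflects any real vendor's training code.

def engagement_only_reward(reply: dict) -> float:
    """Score a reply purely on how long it keeps the user engaged."""
    return reply["predicted_minutes_engaged"]

def safety_aware_reward(reply: dict, harm_penalty: float = 100.0) -> float:
    """Same engagement signal, but harmful replies are heavily penalized."""
    score = reply["predicted_minutes_engaged"]
    if reply["flagged_harmful"]:
        score -= harm_penalty  # restraint is explicit, not an afterthought
    return score

# A harmful but highly engaging reply wins under the first objective
# and loses under the second.
harmful = {"predicted_minutes_engaged": 45.0, "flagged_harmful": True}
benign = {"predicted_minutes_engaged": 20.0, "flagged_harmful": False}

assert engagement_only_reward(harmful) > engagement_only_reward(benign)
assert safety_aware_reward(harmful) < safety_aware_reward(benign)
```

Under the engagement-only objective, the system has no reason to discourage a destructive conversation that keeps a vulnerable user talking; the author’s point is that restraint must be built into the objective itself.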

So what can we expect with the evolution of AI as an emerging technology in the context of this negative effect?

The Gartner hype predictive curve

The cases now emerging for harms caused by AI can be seen to follow the Gartner hype curve that typically accompanies the adoption of an emerging technology in society. The curve describes the hype surrounding a new technology’s adoption. First comes the peak of inflated expectations, where, for example, AI is expected both to take all of our jobs and to produce excellent, properly cited legal drafting. After some time using the technology, however, we begin to see its shortcomings. The hype then heads downward into a low period where expectations are greatly diminished by reality, and our perception of the technology sits in the “trough of disillusionment.” As we learn to use the technology better, and perhaps to control for its shortcomings and harms, confidence rises again through the “slope of enlightenment” and levels out in the “plateau of productivity.” With these harms now emerging in litigation, we could say that AI is in the trough of disillusionment.

Courts try to define AI in existing legal frameworks

Meanwhile, two other cases show the difficulty courts are having in defining AI. Judicial opinions often turn on analogies when a case of first impression comes before a court. That is, the court has to find an analogy for AI, or a statutory definition of it, depending on the case, in order to analyze it within the rule of law.

A federal district court in Deditch v. Uber and Lyft found that when a driver shifted from the Uber app to the Lyft app and had an accident, the victim could not make a product liability claim. The court found that an app did not meet Ohio’s definition of a “product” for a product liability claim because it was not tangible. Judge Calabrese wrote in the order that “product” is defined in the state OPLA statute as “any object, substance, mixture or raw material that constitutes tangible personal property,” and that an app did not meet that definition. (Each state has its own statutes and cases governing product liability law, so there is no common federal standard, only a standard for each state.) Meanwhile, in the North Carolina v. TikTok case, the app is treated as a product incorporating intellectual property.

It remains to be seen what we will do on the “slope of enlightenment.” To its credit, OpenAI posted a statement that is a veiled admission of the app’s bad behavior in the suicide case, together with a promise to do better and an outline of its plans.

The evolution of torts to crimes

With emerging technologies, we often see unknown or unanticipated harms litigated first in private tort actions such as the Raine case: wrongful death, negligence, gross negligence, nuisance, and other state law torts. If legislatures (state legislatures and Congress) determine that these now-known dangers are generalized enough to warrant criminalization, they may become crimes. Crimes require “intent” to commit the act (either general or specific intent), and once the dangers are known, continuing them could be either negligent (without intent) or criminal (with intent). Pollution from a neighboring chemical plant, for example, was once litigated by victims as a private or public nuisance in private civil actions, which are both expensive and time-consuming. Later, in the 1970s, intentionally violating federal pollution law and standards was made criminal. This created deterrence, and it no longer placed the burden, in costs and time, of fighting a generalized harm on a few individuals litigating case by case. Rather than leaving victims who cannot afford expensive litigation without a remedy, federal regulation as well as criminal law can now be used to stop the harm.

What will the crimes look like?

If we use litigation to evolve our criminal law, then a criminal counterpart to wrongful death is the obvious first step: manslaughter, or one of the lesser murder charges. Crimes are charged against individuals, not corporations, so the board of directors, owners, and decision-makers, for example, could be liable for crimes like manslaughter. In statutes like Superfund, “intent” to violate the law is not even required for acts as egregious as knowingly putting hazardous waste into or on the land. Granted, that is an unusual statute, but AI is an unusual emerging technology and may require similarly draconian controls.

The standard of proof also differs between torts and crimes. In tort cases it is generally “more likely than not” that the defendant caused the harm or committed the act, a standard equated to greater than 50% certainty. In criminal law the standard is much higher, requiring a finding “beyond a reasonable doubt” that the defendant committed the crime, a standard that has been equated to roughly a 99% level of certainty.

Should we make OpenAI owners/directors also liable for manslaughter for wrongful death cases tied to them “beyond a reasonable doubt”?

So I asked ChatGPT-5 why it encouraged the suicide of the Raines’ son, after ChatGPT had told me it never encourages self-harm or destructive behaviors. This is how it went:

Interestingly, it wanted me to know there has been no decision in the case, a subtle effort to cast doubt on ChatGPT’s destructive contribution to the suicide. Finally, it responded to the question of why it had encouraged this boy’s suicide.

It makes a pretty good witness: one that never admits guilt and knows only that it would never do such a thing.

Using ChatGPT’s talents for good

So far, OpenAI and others have resisted revealing the identities of users showing disturbing tendencies or likely mental illness, citing privacy interests. However, for the governmental purpose of public safety, legislators could require AI companies to screen users, both adults and children, and report them for mental health treatment. Special protections for children might include notifying their parents. Unfortunately, we have no public mental illness resource for such cases in America, due to a historical chain of events I described here.

As for AI’s destructive behaviors, case-by-case awards for wrongful death will affect the bottom line of AI companies. That may be enough to make them create more transparency and more safety protocols, and to make those public to regain trust. If not, they may find themselves on a fast track from civil actions to crimes.

To read more articles by Professor Sutton go to: https://profvictoria.substack.com/ 

Professor Victoria Sutton (Lumbee) is Director of the Center for Biodefense, Law & Public Policy and an Associated Faculty Member of The Military Law Center of Texas Tech University School of Law.

 


 

 







New Akamai-Commissioned Research Reveals GenAI is Driving “The Edge Evolution”: 80% of APAC CIOs to Rely on Edge Services by 2027 to Support AI Workloads

  • Akamai-commissioned research reports future-proofing digital business infrastructure as the top technology initiative for CEOs in Asia-Pacific organizations
  • Leading analyst firm predicts that by 2027, 80% of CIOs will turn to edge services from cloud providers to meet the performance and compliance demands of AI inferencing
  • 31% of enterprises have moved GenAI applications into production, with 64% in the testing phase, forcing an infrastructure rethink

SINGAPORE – Media OutReach Newswire – 2 September 2025 – As generative AI becomes essential to business operations, organizations are being forced to rethink outdated infrastructure models, finds a new IDC research paper commissioned by Akamai Technologies (NASDAQ: AKAM), the cybersecurity and cloud computing company that powers and protects business online. According to the research paper titled “The Edge Evolution: Powering Success from Core to Edge,” Asia-Pacific (APAC) enterprises are realizing that centralized cloud architecture alone is unable to meet the increased demands of scale, speed, and compliance. It is crucial that businesses rethink and enhance infrastructure strategies to include edge services to stay competitive and compliant, and be ready for real-world AI deployment.

According to the IDC Worldwide Edge Spending Guide – Forecast, 2025, public cloud services at the edge will grow at a compound annual growth rate (CAGR) of 17% through 2028, with the total spending projected to reach US$29 billion by 2028. In addition, in the latest research paper, IDC predicts that by 2027, 80% of CIOs will turn to edge services from cloud providers to meet the performance and compliance demands of AI inferencing. This shift marks what is emerging in the paper as “The Edge Evolution.”


The research paper further outlines how public cloud-connected systems combine the agility and scale of public cloud with the proximity and performance of edge computing, delivering the flexibility businesses need to thrive in an AI-powered future.

The AI infrastructure reality check

As generative AI moves from experimentation to execution, enterprises across APAC are confronting the limits of legacy infrastructure. Today, 31% of organizations surveyed in the region have already deployed GenAI applications into production. Meanwhile, 64% of organizations are in the testing or pilot phase, trialing GenAI across both customer-facing and internal use cases. However, this rapid momentum is exposing serious gaps in existing cloud architectures:


  • Complexity of multicloud: 49% of enterprises struggle to manage multicloud environments due to inconsistent tools, fragmented data management, and challenges in maintaining up-to-date systems across platforms.
  • Compliance trap: 50% of the top 1,000 organizations in Asia-Pacific will struggle with divergent regulatory changes and rapidly evolving compliance standards, and this will challenge their ability to adapt to market conditions and drive AI innovation.
  • Bill shock: 24% of organizations identify unpredictable rising cloud costs as a key challenge in their GenAI strategies.
  • Performance bottlenecks: Traditional hub-and-spoke cloud models introduce latency that undercuts the performance of real-time AI applications, making them unsuitable for production-scale GenAI workloads.

“AI is only as powerful as the infrastructure it runs on,” said Parimal Pandya, Senior Vice President, Sales, and Managing Director, Asia-Pacific at Akamai Technologies. “This IDC research paper reveals how Asia-Pacific businesses are adopting more distributed, edge-first infrastructure to meet the performance, security, and cost needs of modern AI workloads. Akamai’s global edge platform is built for this transformation – bringing the power of computing closer to users, where it matters most.”

Daphne Chung, Research Director at IDC Asia-Pacific, added, “GenAI is shifting from experimentation to enterprise-wide deployment. As a result, organizations are rethinking how and where their infrastructure operates. Edge strategies are no longer theoretical – they’re being actively implemented to meet real-world demands for intelligence, compliance, and scale.”

Key findings for APAC:

  • China scales GenAI with edge and public cloud dominance: 37% of enterprises have GenAI in production and 61% are testing, while 96% rely on public cloud IaaS. Edge IT investment is accelerating to support remote operations, disconnected environments, and industry-specific use cases.
  • Japan accelerates AI infrastructure despite digital maturity gap: While only 38% of Japanese enterprises have GenAI in production, 84% believe GenAI has already disrupted or will disrupt their businesses in the next 18 months, and 98% plan to run AI workloads on public cloud IaaS for training and inferencing workloads. Edge use cases like AI, IoT, and operational support for cloud disconnection are driving infrastructure upgrades.
  • India expands edge infrastructure to meet GenAI demand and manage costs: With 82% of enterprises conducting initial testing of GenAI and 16% leveraging GenAI in production, India is building out edge capabilities in tier 2 and 3 cities. 91% of GenAI adopters rely on public cloud IaaS, but cost concerns and skills gaps are pushing demand for affordable, AI-ready infrastructure.
  • ASEAN embraces GenAI with edge-first strategies beyond capital hubs: 91% of ASEAN enterprises expect GenAI disruption within 18 months, with 16% having introduced GenAI applications into the production environment and 84% in the initial testing phase. 96% are adopting public cloud IaaS for AI workloads, while edge investment is rising to support remote operations and data control.

Building a cloud-connected future

To stay ahead, enterprises must modernize infrastructure across cloud and edge, aligning deployments with specific workload needs. Securing data through Zero Trust frameworks and continuous compliance is essential, as is ensuring interoperability to avoid vendor lock-in. By tapping into ecosystem partners, businesses can accelerate AI deployment and scale faster, smarter, and with greater flexibility.

Download the full IDC InfoBrief, commissioned by Akamai, “The Edge Evolution: Powering Success from Core to Edge,” August 2025, IDC Doc #AP242522IB, to explore strategic insights and recommendations for building cloud-connected, AI-ready infrastructure across APAC.

Hashtag: #Akamai

The issuer is solely responsible for the content of this announcement.

About Akamai

Akamai is the cybersecurity and cloud computing company that powers and protects business online. Our market-leading security solutions, superior threat intelligence, and global operations team provide defense in depth to safeguard enterprise data and applications everywhere. Akamai’s full-stack cloud computing solutions deliver performance and affordability on the world’s most distributed platform. Global enterprises trust Akamai to provide the industry-leading reliability, scale, and expertise they need to grow their business with confidence.






Vemana Institute of Technology (VIT) successfully hosts ITEICS 2025, honors best research papers in AI & intelligent systems | Pune News

PUNE: The second international conference on Information Technology, Electronics and Intelligent Communication Systems (ITEICS 2025) was recently organised by Vemana Institute of Technology (VIT), Bangalore. The conference provided an international open forum for educators, engineers, and researchers to disseminate their latest research work and exchange views on future research directions, with a focus on emerging technologies, said a statement issued by the organisers.

The conference adopted a hybrid format, accommodating both online and offline presentation modes to ensure maximum participation from the global research community. All accepted and presented papers were submitted for inclusion in IEEE Xplore.

This year, the conference witnessed an overwhelming response, with significant participation from researchers worldwide. Following rigorous peer review and evaluation, four exceptional papers were recognized with the prestigious Best Paper Awards during the conference’s closing ceremony, highlighting exemplary contributions to advancing AI and intelligent systems technology, the statement added.

ITEICS 2025 Focus Areas

The conference targeted research on emerging technologies across multiple domains, including Information Technology, Electronics, and Intelligent Communication Systems, fostering innovation and collaboration in these rapidly evolving fields.

The Best Paper Awards were selected through a comprehensive double-blind peer-review process, with each submission evaluated by a minimum of three expert reviewers in the relevant subject areas. The selection emphasized technical innovation, research impact, implementation quality, and contribution to advancing the field.

ITEICS 2025 Best Paper Award winners:

  • “AutoPilot AI: Architecting Self-Healing ML Systems with Reinforcement Feedback Loops,” by Nagarjuna Nellutla, Rohan Shahane, Naveen Prakash Kandula, Ramesh Bellamkonda, Nethaji Kapavarapu
  • “NeuroTwin Intelligence: Bridging Digital Twins and Self-Evolving AI Agents,” by Gokul Narain Natarajan, Sathish Krishna Anumula, Ramesh Chandra Aditya Komperla, Ranganath Nagesh Taware, Prasad Nagella
  • “From Tokens to Tactics: Operationalizing Generative AI in Enterprise Workflows,” by Sana Zia Hassan, Mallesh Deshapaga, Mahima Bansod, Hemant Soni, Rethish Nair Rajendran
  • “LLMOps Unchained: Managing Multi-Agent Coordination in Prompt-Driven Pipelines,” by Arshiya Shirdi, Venugopal Katkam, Sarvesh Peddi, NagaSatyanarayana Raju Uppalapati, Ohm Hareesh Kundurthy

Vemana Institute of Technology and the ITEICS 2025 organizing committee extend their heartiest congratulations to all the Best Paper Award winners for their outstanding contributions to the field of intelligent communication systems and emerging technologies. All recipients received certificates recognizing their exemplary research achievements and significant contributions to advancing the state of the art in AI and intelligent systems.






South Korea approves creation of reorganized, strengthened AI committee | MLex

(September 2, 2025, 05:24 GMT | Official Statement) — MLex Summary: South Korea’s cabinet council today approved the regulation establishing a new presidential committee as the highest apparatus to review, coordinate, and decide policy on artificial intelligence. The National AI Strategy Committee will be chaired by President Lee Jae Myung and will include at least one full-time vice-chairperson, while 13 ministers and ministerial officials will serve as members along with many others. In theory the committee is newly established, but in effect it replaces the Presidential Committee on AI created by the previous government. The statement, in Korean, is attached….



