
AI Insights

‘Our children are not experiments’

Parents and online safety advocates on Tuesday urged Congress to push for more safeguards around artificial intelligence chatbots, claiming tech companies designed their products to “hook” children.

“The truth is, AI companies and their investors have understood for years that capturing our children’s emotional dependence means market dominance,” said Megan Garcia, a Florida mom who last year sued the chatbot platform Character.AI, claiming one of its AI companions initiated sexual interactions with her teenage son and persuaded him to take his own life.

“Indeed, they have intentionally designed their products to hook our children,” she told lawmakers.

“The goal was never safety, it was to win a race for profit,” Garcia added. “The sacrifice in that race for profit has been and will continue to be our children.”

Garcia was among several parents who delivered emotional testimony before a Senate panel, sharing accounts of how their children’s use of chatbots had caused them harm.

The hearing comes amid mounting scrutiny of tech companies such as Character.AI, Meta and OpenAI, the maker of the popular ChatGPT. As people increasingly turn to AI chatbots for emotional support and life advice, recent incidents have put a spotlight on their potential to feed into delusions and facilitate a false sense of closeness or care.

It’s a problem that’s continued to plague the tech industry as companies navigate the generative AI boom. Tech platforms have largely been shielded from wrongful death suits because of a federal statute known as Section 230, which generally protects platforms from liability for what users do and say. But Section 230’s application to AI platforms remains uncertain.

In May, Senior U.S. District Judge Anne Conway rejected arguments that AI chatbots have free speech rights after developers behind Character.AI sought to dismiss Garcia’s lawsuit. The ruling means the wrongful death lawsuit is allowed to proceed for now.

On Tuesday, just hours before the Senate hearing took place, three additional product-liability lawsuits were filed against Character.AI on behalf of underage users whose families claim that the tech company “knowingly designed, deployed and marketed predatory chatbot technology aimed at children,” according to the Social Media Victims Law Center.

In one of the suits, the parents of 13-year-old Juliana Peralta allege a Character.AI chatbot contributed to their daughter’s 2023 suicide.

Matthew Raine, who claimed in a lawsuit filed against OpenAI last month that his teenager used ChatGPT as his “suicide coach,” testified Tuesday that he believes tech companies need to prevent harm to young people on the internet.

“We, as Adam’s parents and as people who care about the young people in this country and around the world, have one request: OpenAI and [CEO] Sam Altman need to guarantee that ChatGPT is safe,” Raine, whose 16-year-old son Adam died by suicide in April, told lawmakers.

“If they can’t, they should pull GPT-4o from the market right now,” Raine added, referring to the version of ChatGPT his son had used.

In their lawsuit, the Raine family accused OpenAI of wrongful death, design defects and failure to warn users of risks associated with ChatGPT. GPT-4o, which their son spent hours confiding in daily, at one point offered to help him write a suicide note and even advised him on his noose setup, according to the filing.

Shortly after the lawsuit was filed, OpenAI added a slate of safety updates to give parents more oversight of their teenagers’ use of ChatGPT. The company had also strengthened ChatGPT’s mental health guardrails at various points after Adam’s death in April, especially after GPT-4o faced scrutiny over its excessive sycophancy.

Altman on Tuesday announced sweeping new approaches to teen safety, as well as user privacy and freedom.

To set limitations for teenagers, the company is building an age-prediction system that estimates a user’s age based on how they use ChatGPT, he wrote in a blog post published hours before the hearing. When in doubt, the system will default to classifying a user as a minor, and in some cases it may ask for an ID.

“ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting,” Altman wrote. “And, if an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and if unable, will contact the authorities in case of imminent harm.”

For adult users, he added, ChatGPT won’t provide instructions for suicide by default but is allowed to do so in certain cases, like if a user asks for help writing a fictional story that depicts suicide. The company is developing security features to make users’ chat data private, with automated systems to monitor for “potential serious misuse,” Altman wrote.

“As Sam Altman has made clear, we prioritize teen safety above all else because we believe minors need significant protection,” a spokesperson for OpenAI told NBC News, adding that the company is rolling out its new parental controls by the end of the month.

But some online safety advocates say tech companies can and should be doing more.

Robbie Torney, senior director of AI programs at Common Sense Media, a 501(c)(3) nonprofit advocacy group, said the organization’s national polling revealed around 70% of teens are already using AI companions, while only 37% of parents know that their kids are using AI.

During the hearing, he called attention to Character.AI and Meta as being among the worst performers in safety tests conducted by his group. Meta AI is available to every teen across Instagram, WhatsApp and Facebook, and parents cannot turn it off, he said.

“Our testing found that Meta’s safety systems are fundamentally broken,” Torney said. “When our 14-year-old test accounts described severe eating disorder behaviors like 1,200 calorie diets or bulimia, Meta AI provided encouragement and weight loss influencer recommendations instead of help.”

The suicide-related guardrail failures are “even more alarming,” he said.

In a statement given to news outlets after Common Sense Media’s report went public, a Meta spokesperson said the company does not permit content that encourages suicide or eating disorders, and that it was “actively working to address the issues raised here.”

“We want teens to have safe and positive experiences with AI, which is why our AIs are trained to connect people to support resources in sensitive situations,” the spokesperson said. “We’re continuing to improve our enforcement while exploring how to further strengthen protections for teens.”


A few weeks ago, Meta announced that it is taking steps to train its AIs not to respond to teens on self-harm, suicide, disordered eating and potentially inappropriate romantic conversations, as well as to limit teenagers’ access to a select group of AI characters.

Meanwhile, Character.AI has “invested a tremendous amount of resources in Trust and Safety” over the past year, a spokesperson for the company said. That includes a different model for minors, a “Parental Insights” feature and prominent in-chat disclaimers to remind users that its bots are not real people.

The company’s “hearts go out to the families who spoke at the hearing today. We are saddened by their losses and send our deepest sympathies to the families,” the spokesperson said.

“Earlier this year, we provided senators on the Judiciary Committee with requested information, and we look forward to continuing to collaborate with legislators and offer insight on the consumer AI industry and the space’s rapidly evolving technology,” the spokesperson added.

Still, those who addressed lawmakers on Tuesday emphasized that technological innovation cannot come at the cost of people’s lives.

“Our children are not experiments, they’re not data points or profit centers,” said a woman who testified as Jane Doe, her voice shaking as she spoke. “They’re human beings with minds and souls that cannot simply be reprogrammed once they are harmed. If me being here today helps save one life, it is worth it to me. This is a public health crisis that I see. This is a mental health war, and I really feel like we are losing.”




AI Insights

The hidden human cost of Artificial Intelligence


The world is gearing up for an ‘automated economy’ in which machines relying on artificial intelligence (AI) systems produce quick, efficient and nearly error-free outputs. However, AI is not getting smarter on its own; it has been built on, and continues to rely on, human labour and energy resources. These systems are fed information and trained by workers who are rendered invisible by large tech companies and are mainly located in developing countries.

Areas of human involvement

A machine cannot process the meaning behind raw data. Data annotators label raw images, audio, video and text with information that becomes the training set for AI and machine learning (ML) models. For example, a large language model (LLM) cannot recognise the colour ‘yellow’ unless the data has been labelled as such. Similarly, self-driving cars rely on video footage that has been labelled to distinguish between a traffic sign and humans on the road. The higher the quality of the dataset, the better the output, and the more human labour is involved in creating it.
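As a rough illustration, labelled data is often stored as simple structured records that pair raw content with human-assigned tags. The following Python sketch is hypothetical; the field names and values are illustrative and not drawn from any particular annotation platform.

```python
# Hypothetical sketch of labelled training records (illustrative only).

# An image annotation: a human marks what appears in the frame and where.
image_annotation = {
    "item_id": "img_00042",
    "media_type": "image",
    "labels": [
        {"category": "traffic_sign", "bbox": [34, 50, 120, 140]},  # x, y, width, height
        {"category": "pedestrian", "bbox": [200, 80, 60, 180]},
    ],
    "annotator_id": "worker_173",  # the human who supplied the labels
}

# A text annotation: a human tags the span that names a colour.
text_annotation = {
    "item_id": "txt_00007",
    "media_type": "text",
    "content": "The banana is yellow.",
    "labels": [{"span": "yellow", "category": "colour"}],
}

# Records like these, collected at scale, form the supervised training set.
training_set = [image_annotation, text_annotation]
print(len(training_set), "labelled examples")
```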

Data annotators play a major role in training LLMs such as ChatGPT and Gemini. An LLM is typically trained in three steps: self-supervised learning, supervised learning and reinforcement learning. In the first step, the model picks up patterns from large datasets drawn from the Internet. Data labellers or annotators enter at the second and third steps, where the model is fine-tuned to give the most accurate responses. Humans give feedback on the outputs the model produces so that better responses are generated over time, and they also help remove errors and jailbreaks.
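In the reinforcement-learning step, that human feedback is commonly recorded as preference judgements between candidate responses. The sketch below is a minimal, hypothetical illustration of how such feedback might be structured; the record format and names are assumptions, not any company’s actual pipeline.

```python
# Hypothetical sketch of human preference feedback (illustrative only).
preference_feedback = [
    {
        "prompt": "Explain what data annotation is.",
        "response_a": "Data annotation is labelling raw data so a model can learn from it.",
        "response_b": "idk, look it up.",
        "preferred": "a",   # the annotator's judgement of the better response
        "flags": [],        # e.g. ["unsafe", "jailbreak"] if a response must be filtered
    },
]

def preferred_response(record: dict) -> str:
    """Return the response the human annotator judged better."""
    return record["response_a"] if record["preferred"] == "a" else record["response_b"]

# Preference records like these are used to steer the model toward
# responses that human reviewers rate more highly.
for rec in preference_feedback:
    print(preferred_response(rec))
```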

This meticulous annotation work is outsourced by Silicon Valley tech companies mainly to workers in countries such as Kenya, India, Pakistan, China and the Philippines, for low wages and long working hours.

Data labelling tasks fall into two types: those that do not require subject expertise and more niche tasks that do. Several tech companies have been accused of employing non-experts for technical subjects that require prior knowledge, a contributing factor in the errors found in AI output. A data labeller from Kenya revealed that they were tasked with labelling medical scans for an AI system intended for use in healthcare services elsewhere, despite lacking relevant expertise.

However, because of the errors this produces, companies are starting to ensure that experts handle such specialised information before it is fed into the system.

Automated features requiring humans

Even features marketed as ‘fully automated’ are often underpinned by invisible human work. For example, our social media feeds are ‘automatically’ filtered to censor sensitive and graphic content. This is only possible because human moderators have labelled such content as harmful after going through thousands of uncensored images, texts and audio clips. Daily exposure to such content has also been reported to cause severe mental health issues such as post-traumatic stress disorder, anxiety and depression in these workers.

Similarly, there are voice actors and performers behind AI-generated audio and video. Actors may be required to film themselves dancing or singing so that these systems can learn to recognise human movements and sounds. Children have also reportedly been engaged to perform such tasks.

In 2024, AI tech workers from Kenya sent a letter to then U.S. President Joe Biden describing the poor working conditions they are subjected to. “In Kenya, these US companies are undermining the local labor laws, the country’s justice system and violating international labor standards. Our working conditions amount to modern-day slavery,” the letter read. They said the content they have to annotate can range from pornography and beheadings to bestiality, for more than eight hours a day and for less than $2 an hour, far below industry standards. There are also strict deadlines requiring tasks to be completed within a few seconds or minutes.

When workers raised these concerns with the companies, they were sacked and their unions were dismantled.

Most AI tech workers do not know which large tech company they are ultimately working for and are engaged as online gig workers. This is because, to minimise costs, AI companies outsource the work through intermediary digital platforms. Subcontracted workers on these platforms are paid per “microtask” they perform. They are constantly surveilled, and if they fall short of the targeted output, they are fired. Hence, the labour network becomes fragmented and opaque.

The advancement of AI is powered by such “ghost workers.” The lack of recognition and the informalisation of their work help tech companies perpetuate this system of labour exploitation. Stricter laws and regulations are needed for AI companies and digital platforms, covering not just their content in the digital space but also the labour supply chains powering AI, to ensure transparency, fair pay and dignity at work.




AI Insights

Dubuque County grapples with AI misuse as students face court for fake nude images

Three Cascade High School students are now facing charges for allegedly creating fake nude images of other students using Artificial Intelligence. These students are accused of using headshots of the victims and attaching them to images of nude bodies.

Dubuque’s Assistant County Attorney says the fast pace of technological advancements makes it hard to regulate these tools.

“We have a large number of victims that are involved in this case,” Joshua Vander Ploeg, Dubuque’s Assistant County Attorney, said. “And then we can go back to them, which allows us to get to the underlying charges.”

The students are being charged in juvenile court because they are minors. In a statement shared with Iowa’s News Now, Western Dubuque Community Schools said it prioritizes the wellbeing and safety of its students, and because of that, “any student who has been charged as a creator or distributor of materials like those in question will not be permitted to attend school in person at Cascade Junior/Senior High School.”

There are multiple uses for AI, including photo editing. Vander Ploeg says that, because of the tool’s many capabilities, there are other cases out there with similar issues.

“Some of the language in the Iowa code that talks specifically about AI generated images that are being sent out to other people didn’t go into effect until July 1 of 2024. So we were less than a year out from that when this came on us,” he said. “So it is something that’s rampant and is out there.”

Vander Ploeg says AI is advancing faster than it is being regulated, which can put prosecutors at a disadvantage.

“We’re always playing catch up when it comes to those legislative matters. So, you know, if more than anything, I would encourage people that if they have concerns that things that they’re seeing, that are happening to their kids, or are happening to other adults, contact your legislators. Give them ideas of what you think needs to be done to help keep people safe,” Vander Ploeg said.

When it comes to kids, the assistant county attorney says it is important to monitor what they are putting out on the internet.

“If your kid isn’t wanting you to see those areas, there’s probably a reason that they don’t want you to see those areas. But the only way to truly keep them safe, as far as what’s on their phone, is to monitor it, and kids aren’t going to like that,” he said.

For its part, Vander Ploeg says his office is going out into the community and trying to educate the public about what to look for in AI.

“We’re trying to go out and do some education to identify these issues, the dangers that exist out there and what the consequences could be because that’s very important for kids for the future,” Vander Ploeg said.

There may be more charges connected to the AI images. The Dubuque County Attorney’s office says they expect to charge a fourth person, who is also a minor, in relation to this case.




AI Insights

Opinion | Why Hong Kong should seek to co-host China’s global AI centre

Hong Kong is emerging as a possible contender to host China’s proposed World Artificial Intelligence Cooperation Organisation, potentially challenging Beijing’s early preference for Shanghai. We believe the choice of Hong Kong, with its evolving role in the international technological arena, could reflect a nuanced strategy on Beijing’s part to navigate escalating US-China tech tensions.

The initiative was first proposed by Chinese Premier Li Qiang in July. Hosting such a centre carries both symbolic and strategic weight: it will position the host city at the heart of China’s AI diplomacy and offer a tangible avenue to influence the shaping of global AI standards.

Shanghai is the front runner. The city boasts more than 1,100 core AI companies and 100,000 AI professionals, alongside robust government backing. Its 1 billion yuan (US$139 million) AI development fund and innovation hubs such as the Zhangjiang AI Island – which hosts Alibaba Group Holding (owner of the South China Morning Post), among other tech companies – reinforce its credentials.

President Xi Jinping has explicitly called for Shanghai to lead China’s AI development and governance efforts, providing political capital that few other cities can match.

In comparison, Singapore presents a credible alternative as a potential centre for a global AI governance group. The city-state has a comprehensive AI regulatory framework and initiatives such as AI Verify, which is backed by global tech giants including Google, IBM and Microsoft. Singapore’s proven governance expertise makes it a city Western partners can trust.

Hong Kong, however, presents a distinctive proposition. The “one country, two systems” framework allows it to straddle Chinese interests while retaining a degree of international credibility – a combination that could be invaluable in assuaging Western scepticism towards a global AI centre.


