
Board of Education considers policy for artificial intelligence in Wake County Public School System

CARY, N.C. (WTVD) — The Wake County Board of Education held the first in a series of meetings to discuss the development of the district’s AI policy.

Board members learned about a number of topics related to AI, including how it is being used now and the potential risks associated with the rapidly growing technology.

WCPSS Superintendent Dr. Robert Taylor said the district wanted board members to be well-informed now before developing a district-wide policy.

“The one thing I wanted to make sure is that we didn’t create a situation where we restrict something that is going to be a part of society, that our students are going to be responsible for learning that our teachers are going to be responsible for doing so,” he said.

A team from Amazon Web Services, or AWS, gave the board an informational presentation on AI.

District staff say there is no timeline for the adoption of the policy right now.

AI could be a major tool for the district, with the board saying it could help with personalized learning plans.

Still, some board members expressed concerns about how to teach students to use AI responsibly.

“I think the biggest concern that everyone has is academic integrity and honesty, things that can be used with AI to give false narratives, false pictures,” said Dr. Taylor.

Mikaya Thurmond is an AI expert and lecturer. She says the district should consider including AI training for teachers and rules governing students’ AI use in its policy framework.

“If anyone believes that students are not using AI to get to conclusions and to turn in homework, at this point, they’re just not being honest about it,” she said.

For starters, she says students should credit AI when they use it on assignments and show their chat history with AI programs.

“That tells me you’re at least doing the critical thinking part,” said Thurmond. “And then there should be some assignments that no AI is allowed for and some where it is integrated. But I think that there has to be a mixture once educators know how to use it themselves.”

Something the superintendent and Thurmond agree on is parental involvement.

They both say parents should be talking with their children now about which conversations are appropriate to have with AI.


The hidden human cost of Artificial Intelligence


The world is gearing up for an ‘automated economy’ in which machines relying on artificial intelligence (AI) systems produce quick, efficient and nearly error-free outputs. However, AI is not getting smarter on its own; it has been built on, and continues to rely on, human labour and energy resources. These systems are fed information and trained by workers who are invisibilised by large tech companies and are mainly located in developing countries.

Areas of human involvement

A machine cannot process the meaning behind raw data. Data annotators label raw images, audio, video, and text with information that becomes the training set for AI and Machine Learning (ML) models. For example, a large language model (LLM) cannot recognise the colour ‘yellow’ unless the data has been labelled as such. Similarly, self-driving cars rely on video footage that has been labelled to distinguish between a traffic sign and humans on the road. The higher the quality of the dataset, the better the output, and the more human labour is involved in creating it.

Data annotators play a major role in training LLMs like ChatGPT and Gemini. An LLM is trained in three steps: self-supervised learning, supervised learning and reinforcement learning. In the first step, the machine picks up information from large datasets on the Internet. The data labellers or annotators come in at the second and third steps, where this information is fine-tuned so that the LLM gives the most accurate response. Humans give feedback on the output the AI produces so that better responses are generated over time, and they also remove errors and jailbreaks.

This meticulous annotation work is outsourced by Silicon Valley tech companies mainly to workers in countries like Kenya, India, Pakistan, China and the Philippines, for low wages and long working hours.

Data labelling can be of two types: tasks that do not require subject expertise and more niche tasks that do. Several tech companies have been accused of employing non-experts for technical subjects that require prior knowledge, which is a contributing factor in the errors found in AI output. A data labeller from Kenya revealed that they were tasked with labelling medical scans for an AI system intended for use in healthcare services elsewhere, despite lacking relevant expertise.

However, because of the errors that result, companies are starting to ensure that experts handle such information before it is fed into the system.

Automated features requiring humans

Even features marketed as ‘fully automated’ are often underpinned by invisible human work. For example, our social media feeds are ‘automatically’ filtered to censor sensitive and graphic content. This is only possible because human moderators labelled such content as harmful by going through thousands of uncensored images, texts and audio clips. Daily exposure to such content has also been reported to cause severe mental health issues, such as post-traumatic stress disorder, anxiety and depression, in these workers.

Similarly, there are voice actors and actors behind AI-generated audio and video. Actors may be required to film themselves dancing or singing so that these machines can recognise human movements and sounds. Children have also reportedly been engaged to perform such tasks.

In 2024, AI tech workers from Kenya sent a letter to then U.S. President Joe Biden describing the poor working conditions they are subjected to. “In Kenya, these US companies are undermining the local labor laws, the country’s justice system and violating international labor standards. Our working conditions amount to modern-day slavery,” the letter read. They said the content they have to annotate, for more than eight hours a day and for less than $2 an hour, far below industry standards, can range from pornography and beheadings to bestiality. There are also strict deadlines requiring each task to be completed within a few seconds or minutes.

When workers raised their concerns with the companies, they were sacked and their unions dismantled.

Most AI tech workers are engaged in online gig work and do not know which large tech company they are ultimately working for. This is because, to minimise costs, AI companies outsource the work through intermediary digital platforms. Subcontracted workers on these platforms are paid per “microtask” they perform. They are constantly surveilled, and if they fall short of the targeted output, they are fired. The labour network thus becomes fragmented and opaque.

The advancement of AI is powered by such “ghost workers.” The lack of recognition and the informalisation of their work help tech companies perpetuate this system of labour exploitation. There is a need for stricter laws and regulations on AI companies and digital platforms, covering not just their content in the digital space but also the labour supply chains powering AI, so as to ensure transparency, fair pay, and dignity at work.




Dubuque County grapples with AI misuse as students face court for fake nude images

Three Cascade High School students are now facing charges for allegedly creating fake nude images of other students using artificial intelligence. The students are accused of taking headshots of the victims and attaching them to images of nude bodies.

Dubuque’s Assistant County Attorney says the fast pace of technological advancements makes it hard to regulate these tools.

“We have a large number of victims that are involved in this case,” Joshua Vander Ploeg, Dubuque’s Assistant County Attorney, said. “And then we can go back to them, which allows us to get to the underlying charges.”

The charges these students are facing are in juvenile court because they are minors. In a statement shared with Iowa's News Now, Western Dubuque Community Schools said it prioritizes the wellbeing and safety of its students and, because of that, said "any student who has been charged as a creator or distributor of materials like those in question will not be permitted to attend school in person at Cascade Junior/Senior High School."

AI has multiple uses, including photo editing. Vander Ploeg says that, given the tool's multifaceted abilities, there are other cases out there with similar issues.

“Some of the language in the Iowa code that talks specifically about AI generated images that are being sent out to other people didn’t go into effect until July 1 of 2024. So we were less than a year out from that when this came on us,” he said. “So it is something that’s rampant and is out there.”

Vander Ploeg says new AI capabilities are being developed faster than they are being regulated, which can put prosecutors at a disadvantage.

“We’re always playing catch up when it comes to those legislative matters. So, you know, if more than anything, I would encourage people that if they have concerns that things that they’re seeing, that are happening to their kids, or are happening to other adults, contact your legislators. Give them ideas of what you think needs to be done to help keep people safe,” Vander Ploeg said.

When it comes to kids, the Assistant County Attorney says it is important to monitor what they are putting out on the internet.

"If your kid isn't wanting you to see those areas, there's probably a reason that they don't want you to see those areas. But the only way to truly keep them safe, as far as what's on their phone, is to monitor it, and kids aren't going to like that," he said.

For their part, Vander Ploeg says his office is going out into the community and trying to educate the public about what to look for with AI.

“We’re trying to go out and do some education to identify these issues, the dangers that exist out there and what the consequences could be because that’s very important for kids for the future,” Vander Ploeg said.

There may be more charges connected to the AI images. The Dubuque County Attorney's office says it expects to charge a fourth person, who is also a minor, in relation to this case.




Opinion | Why Hong Kong should seek to co-host China’s global AI centre

Hong Kong is emerging as a possible contender to host China’s proposed World Artificial Intelligence Cooperation Organisation, potentially challenging Beijing’s early preference for Shanghai. We believe the choice of Hong Kong, with its evolving role in the international technological arena, could reflect a nuanced strategy on Beijing’s part to navigate escalating US-China tech tensions.

The initiative was first proposed by Chinese Premier Li Qiang in July. Hosting such a centre carries both symbolic and strategic weight: it will position the host city at the heart of China’s AI diplomacy and offer a tangible avenue to influence the shaping of global AI standards.

Shanghai is the front runner. The city boasts more than 1,100 core AI companies and 100,000 AI professionals, alongside robust government backing. Its 1 billion yuan (US$139 million) AI development fund and innovation hubs such as the Zhangjiang AI Island – which hosts Alibaba Group Holding (owner of the South China Morning Post), among other tech companies – reinforce its credentials.

President Xi Jinping has explicitly called for Shanghai to lead China’s AI development and governance efforts, providing political capital that few other cities can match.

In comparison, a city like Singapore presents a credible alternative as a potential centre for a global AI governance group. The city state has a comprehensive AI regulatory framework and initiatives such as AI Verify, which is backed by global tech giants including Google, IBM and Microsoft. Singapore’s proven governance expertise makes it a city Western partners can trust.

Hong Kong, however, presents a distinctive proposition. The “one country, two systems” framework allows it to straddle Chinese interests while retaining a degree of international credibility – a combination that could be invaluable in assuaging Western scepticism towards a global AI centre.


