
AI Insights

The hidden human cost of Artificial Intelligence


The world is moving towards an ‘automated economy’ in which machines relying on artificial intelligence (AI) systems produce quick, efficient and nearly error-free outputs. However, AI is not getting smarter on its own; it was built on, and continues to rely on, human labour and energy resources. These systems are fed information and trained by workers who are invisibilised by large tech companies and are mainly located in developing countries.

Areas of human involvement

A machine cannot process the meaning behind raw data. Data annotators label raw images, audio, video and text with information, and this labelled data then becomes the training set for AI and Machine Learning (ML) models. For example, a large language model (LLM) cannot recognise the colour ‘yellow’ unless the data has been labelled as such. Similarly, self-driving cars rely on video footage that has been labelled to distinguish between a traffic sign and humans on the road. The higher the quality of the dataset, the better the output, and the more human labour goes into creating it.
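To see what a labelled training set looks like in the abstract, here is a minimal, purely illustrative sketch in Python. The file names, labels and the summarise() helper are hypothetical and do not describe any particular company’s annotation pipeline; they only show how a human-written label attaches meaning to a raw file.

```python
# Hypothetical illustration: human-written labels turn raw files into a training set.
# The paths and labels below are invented for the example.

from dataclasses import dataclass


@dataclass
class LabelledExample:
    file_path: str  # the raw data a machine cannot interpret on its own
    label: str      # the meaning a human annotator attaches to it


# Annotators attach meaning ("yellow", "traffic sign", "pedestrian") to raw files.
training_set = [
    LabelledExample("images/0001.jpg", "yellow"),
    LabelledExample("frames/0451.png", "traffic sign"),
    LabelledExample("frames/0452.png", "pedestrian"),
]


def summarise(dataset: list[LabelledExample]) -> dict[str, int]:
    """Count examples per label; a real pipeline would fit a model on these pairs."""
    counts: dict[str, int] = {}
    for example in dataset:
        counts[example.label] = counts.get(example.label, 0) + 1
    return counts


if __name__ == "__main__":
    print(summarise(training_set))  # {'yellow': 1, 'traffic sign': 1, 'pedestrian': 1}
```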

Data annotators play a major role in training LLMs such as ChatGPT and Gemini. An LLM is trained in three steps: self-supervised learning, supervised learning and reinforcement learning. In the first step, the machine picks up patterns from large datasets drawn from the Internet. Data labellers, or annotators, come in at the second and third steps, where this information is fine-tuned so that the LLM gives the most accurate response. Humans give feedback on the outputs the AI produces so that better responses are generated over time, and they also help remove errors and jailbreaks.
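The human-feedback step can be illustrated with a deliberately simplified toy sketch, assuming a two-option setup. Real reinforcement learning from human feedback trains a separate reward model and updates the LLM’s parameters; the snippet below only shows the core idea that a human preference nudges which kind of answer wins over time.

```python
# Toy sketch of human feedback shaping future responses (heavily simplified;
# not the actual method used by any LLM provider).

import random

# Scores the "model" assigns to two candidate answers for one prompt.
scores = {"helpful answer": 0.0, "harmful answer": 0.0}


def human_prefers(a: str, b: str) -> str:
    """Stand-in for an annotator choosing the better of two responses."""
    return a if "helpful" in a else b


def feedback_round() -> None:
    a, b = random.sample(list(scores), 2)
    preferred = human_prefers(a, b)
    rejected = b if preferred == a else a
    scores[preferred] += 1.0   # reinforce what the human preferred
    scores[rejected] -= 1.0    # penalise what the human rejected


for _ in range(10):
    feedback_round()

# After repeated feedback, the preferred kind of answer dominates.
print(max(scores, key=scores.get))  # -> "helpful answer"
```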

This meticulous annotation work is outsourced by Silicon Valley tech companies mainly to workers in countries such as Kenya, India, Pakistan, China and the Philippines, for low wages and long working hours.

Data labelling is of two types: tasks that require no subject expertise, and more niche tasks that do. Several tech companies have been accused of employing non-experts for technical subjects that require prior knowledge, a contributing factor in the errors found in AI output. A data labeller from Kenya revealed that they were tasked with labelling medical scans for an AI system intended for use in healthcare services elsewhere, despite lacking relevant expertise.

Because of the errors this produces, however, companies are starting to ensure that subject experts handle such information before it is fed into the system.

Automated features requiring humans

Even features marketed as ‘fully automated’ are often underpinned by invisible human work. For example, our social media feeds are ‘automatically’ filtered to censor sensitive and graphic content. This is only possible because human moderators, going through thousands of uncensored images, texts and audio clips, labelled such content as harmful. Daily exposure to such content has been reported to cause severe mental health issues such as post-traumatic stress disorder, anxiety and depression in these workers.

Similarly, voice actors and performers stand behind AI-generated audio and video. Actors may be required to film themselves dancing or singing so that these systems can learn to recognise human movements and sounds. Children have also reportedly been engaged to perform such tasks.

In 2024, AI tech workers from Kenya sent a letter to then U.S. President Joe Biden describing the poor working conditions they are subjected to. “In Kenya, these US companies are undermining the local labor laws, the country’s justice system and violating international labor standards. Our working conditions amount to modern-day slavery,” the letter read. They said the content they have to annotate can range from pornography and beheadings to bestiality, for more than eight hours a day and for less than $2 an hour, far below industry standards. They also face strict deadlines requiring each task to be completed within a few seconds or minutes.

When workers raised their concerns to the companies, they were sacked and their unions dismantled.

Most AI tech workers do not know which large tech company they are ultimately working for, since they are engaged in online gig work. To minimise costs, AI companies outsource the work through intermediary digital platforms, whose subcontracted workers are paid per “microtask” they perform. They are constantly surveilled, and if they fall short of the targeted output, they are fired. The labour network thus becomes fragmented and opaque.

The advancement of AI is powered by such “ghost workers.” The lack of recognition and the informalisation of their work help tech companies perpetuate this system of labour exploitation. Stricter laws and regulations are needed for AI companies and digital platforms, covering not just their content in the digital space but also the labour supply chains powering AI, to ensure transparency, fair pay and dignity at work.




AI Insights

Free AI, data science lecture series launched at UH Mānoa


Associate Chair Mahdi Belcaid introducing Eliane Ubalijoro

The University of Hawaiʻi at Mānoa launched a free artificial intelligence (AI) and data science public lecture series on September 15, with a talk by Eliane Ubalijoro, chief executive officer of the Center for International Forestry Research and World Agroforestry. Ubalijoro, based in Nairobi, Kenya, spoke on AI governance policies and ethics for managing land, biodiversity and fire.


The event, hosted at the Walter Dods, Jr. RISE Center, was organized by the Department of Information and Computer Sciences (ICS) in partnership with the Pacific Asian Center for Entrepreneurship (PACE). It kicked off a four-part series designed to share industry and government perspectives on emerging issues in AI and data science.

All lectures are open to students, professionals and community members, providing another avenue for the public to engage with UH Mānoa’s new graduate certificate and professional master’s program in AI and data science. The series is tied to ICS 601, the Applied Computing Industry Seminar, which connects students to real-world applications of AI.

“This series opens the door for our students and community to learn directly from leaders shaping the future of AI and data science,” said Department of Information and Computer Sciences Chair and Professor Guylaine Poisson.

PACE Executive Director Sandra Fujiyama added, “By bringing these talks into the public sphere, we’re strengthening the bridge between UH Mānoa, industry sectors and Hawaiʻi’s innovation community.”

Three additional talks are scheduled this fall:

  • September 22, 12–1:15 p.m.: Rebecca Cai, chief data officer for the State of Hawaiʻi, will discuss government data and AI use cases.
  • October 13, 12–1:15 p.m.: Shovit Bhari of IBM will share industry lessons on machine learning.
  • November 10, 12–1:15 p.m.: Peter Dooher, senior vice president at Digital Service Pacific Inc., will cover designing end-to-end AI systems.

Register for the events at the PACE website.

ICS is housed in UH Mānoa’s College of Natural Sciences and PACE is housed in UH Mānoa’s Shidler College of Business.




AI Insights

Americans Prioritize AI Safety and Data Security

WASHINGTON, D.C. — As artificial intelligence continues to develop and grow in capability, Americans say the government should prioritize maintaining rules for AI safety and data security. According to a new nationally representative Gallup survey conducted in partnership with the Special Competitive Studies Project (SCSP), 80% of U.S. adults believe the government should maintain rules for AI safety and data security, even if it means developing AI capabilities more slowly.

In contrast, 9% say the government should prioritize developing AI capabilities as quickly as possible, even if it means reducing rules for AI safety and data security. Eleven percent of Americans are unsure.


Majority-level support for maintaining rules for AI safety and data security is seen across all key subgroups of U.S. adults, including by political affiliation, with 88% of Democrats and 79% of Republicans and independents favoring maintaining rules for safety and security. The poll did not explore which specific AI rules Americans support maintaining.

This preference is notable against the backdrop of global competitiveness in AI development. Most Americans (85%) agree that global competition for the most advanced AI is already underway, and 79% say it is important for the U.S. to have more advanced AI technology than other countries.

However, there are concerns about the United States’ current standing, with more Americans saying the U.S. is falling behind other countries (22%) than moving ahead (12%) in AI development. Another 34% say the U.S. is keeping pace, while 32% are unsure. Despite ambitions for U.S. AI leadership — and doubts about achieving it — Americans still prefer maintaining rules for safety and security, even if development slows. This view aligns with their generally low levels of trust in AI, which correlates with low adoption and use.

Only 2% of U.S. adults “fully” trust AI’s capability to make fair and unbiased decisions, while 29% trust it “somewhat.” Six in 10 Americans distrust AI somewhat (40%) or fully (20%), although trust rises notably among AI users (46% trust it somewhat or fully).

Among those who favor maintaining rules for AI safety and data security, 30% trust AI either somewhat or fully, compared with 56% among those who favor developing AI capabilities as quickly as possible.


Robust Support for Shared Governance and Independent Testing

Almost all Americans (97%) agree that AI safety and security should be subject to rules and regulations, but views diverge on who should be responsible for creating them. Slightly over half say the U.S. government should create rules and regulations governing private companies developing AI (54%), in line with the percentage who think companies should work together to create a shared set of rules (53%).

Relatively few Americans (16%) say each company should be allowed to create its own rules and regulations. These findings indicate broad support for both government and industry standards.


People are more emphatic about independent testing and evaluation of the safety of AI systems before they are released. A majority (72%) say independent experts should conduct safety tests and evaluations, significantly more than the shares who think the government (48%) or each company (37%) should conduct them.


Multilateral Advancement Preferred to Working Alone

The spirit of cooperation extends to how people think the U.S. should develop its AI technology. Americans favor advancing AI technology in partnership with a broad coalition of allies and friendly countries (42%) over collaborating with a smaller group of its closest allies (19%) or working independently (14%).

This preference for AI multilateralism holds across party lines. Although Democrats are nearly twice as likely as Republicans (58% vs. 30%, respectively) to favor the U.S. collaborating with a larger group of allies, Republicans still favor working with either a large or small group of allies over working independently (19%).


Bottom Line

Findings from Gallup’s research with SCSP highlight important commonalities in how Americans wish to see AI governance evolve. Americans favor U.S. advancement in developing AI while also prioritizing maintaining rules for AI safety and data security. Majorities favor government regulation of AI, company collaboration on shared rules, independent expert testing, and multilateral cooperation in development. As policymakers and companies chart the future of AI, public trust — which is closely tied to adoption and use — will play an important role in advancing AI technology and shaping which rules are maintained.

Read the full Reward, Risk, and Regulation: American Attitudes Toward Artificial Intelligence report.


Learn more about how the Gallup Panel works.







AI Insights

Artificial Intelligence Applications in the Prediction and Management of Pediatric Asthma Exacerbation: A Systematic Review

