
AI Insights

The human thinking behind artificial intelligence


Artificial intelligence is built on the thinking of intelligent humans, including data labellers who are paid as little as US$1.32 per hour. Zena Assaad, an expert in human-machine relationships, examines the price we’re willing to pay for this technology. This article was originally published in the Cosmos Print Magazine in December 2024.

From Blade Runner to The Matrix, science fiction depicts artificial intelligence as a mirror of human intelligence. It’s portrayed as holding a capacity to evolve and advance with a mind of its own. The reality is very different.

The original conceptions of AI, which hailed from the earliest days of computer science, defined it as the replication of human intelligence in machines. This definition invites debate on the semantics of the notion of intelligence.

Can human intelligence be replicated?

The idea of intelligence is not contained within one neat definition. Some view intelligence as an ability to remember information, others see it as good decision making, and some see it in the nuances of emotions and our treatment of others.

As such, human intelligence is an open and subjective concept. Replicating this amorphous notion in a machine is very difficult.

Software is the foundation of AI, and software is binary in its construct: something made of two things or parts. In software, numbers and values are expressed as 1 or 0, true or false. This dichotomous design does not reflect the many shades of grey in human thinking and decision making.

Not everything is simply yes or no. Part of that nuance comes from intent and reasoning, which are distinctly human qualities.

To have intent is to pursue something with an end or purpose in mind. AI systems can be thought to have goals, in the form of functions within the software, but this is not the same as intent.
The main difference is that goals are specific, measurable objectives, whereas intentions are the underlying purpose and motivation behind them.

You might define the goals as ‘what’, and intent as ‘why’.

To have reasoning is to consider something with logic and sensibility, drawing conclusions from old and new information and experiences. It is based on understanding rather than pattern recognition. AI does not have the capacity for intent or reasoning, and this challenges the feasibility of replicating human intelligence in a machine.

There is a cornucopia of principles and frameworks that attempt to address how we design and develop ethical machines. But if AI is not truly a replication of human intelligence, how can we hold these machines to human ethical standards?

Can machines be ethical?

Ethics is the study of morality: right and wrong, good and bad. Imparting ethics to a machine, which is distinctly not human, seems redundant. How can we expect a binary construct, which cannot reason, to behave ethically?

Similar to the semantic debate around intelligence, defining ethics is its own Pandora’s box. Ethics is amorphous, changing across time and place. What is ethical to one person may not be to another. What was ethical 5 years ago may not be considered appropriate today.

These changes are based on many things: culture, religion, economic climates, social demographics and more. The idea of machines embodying these very human notions is improbable, and so it follows that machines cannot be held to ethical standards. However, what can and should be held to ethical standards are the people who make decisions for AI.

Contrary to popular belief, technology of any form does not develop of its own accord. The reality is that its evolution has been puppeteered by humans. Human beings are the ones designing, developing, manufacturing, deploying and using these systems.

If an AI system produces an incorrect or inappropriate output, it is because of a flaw in the design, not because the machine is unethical.

The concept of ethics is fundamentally human. To apply this term to AI, or any other form of technology, anthropomorphises these systems. Attributing human characteristics and behaviours to a piece of technology creates misleading interpretations of what that technology is and is not capable of.

Decades of messaging about synthetic humans and killer robots have shaped how we conceptualise the advancement of technology, in particular technology which claims to replicate human intelligence.
AI applications have scaled exponentially in recent years, with many AI tools being made freely available to the general public. But freely accessible AI tools come at a cost. In this case, the cost is ironically in the value of human intelligence.

The hidden labour behind AI

At a basic level, artificial intelligence works by finding patterns in data, which involves more human labour than you might think.

ChatGPT is one example of AI, referred to as a large language model (LLM). ChatGPT is trained on carefully labelled data which adds context, in the form of annotations and categories, to what is otherwise a lot of noise.

Using labelled data to train an AI model is referred to as supervised learning. Labelling an apple as “apple”, a spoon as “spoon”, a dog as “dog”, helps to contextualise these pieces of data into useful information.

When you enter a prompt into ChatGPT, the model draws on patterns learned from its labelled training data to find matches with your prompt. The more detailed the data labels, the more accurate the matches. Labels such as “pet” and “animal” alongside the label “dog” provide more detail, creating more opportunities for patterns to be exposed.
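The role of labels can be sketched in a few lines of toy Python. This is purely illustrative (hand-made labels and a trivial overlap score, nothing like the internals of an LLM), but it shows why richer labels create more opportunities for matches:

```python
# Toy supervised-style lookup: each item carries human-provided labels.
# Richer label sets ("dog" + "pet" + "animal") expose more patterns.
labelled_data = [
    ("dog",   {"dog", "pet", "animal"}),
    ("cat",   {"cat", "pet", "animal"}),
    ("apple", {"apple", "fruit", "food"}),
    ("spoon", {"spoon", "utensil"}),
]

def match(prompt_terms):
    """Rank items by how many of their labels overlap the prompt."""
    scored = [(len(labels & prompt_terms), item) for item, labels in labelled_data]
    scored.sort(reverse=True)
    return [item for score, item in scored if score > 0]

print(match({"pet", "animal"}))  # ['dog', 'cat']: both share two labels
```

Without the extra “pet” and “animal” labels, a prompt about pets would match nothing here; the labels, not the raw content, carry the context.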

Data is made up of an amalgam of content (images, words, numbers, etc.) and it requires this context to become useful information that can be interpreted and used.

As the AI industry continues to grow, there is greater demand for more accurate products. One of the main ways of achieving this is through more detailed and granular labels on training data.
Data labelling is a time-consuming and labour-intensive process. In the absence of this work, data is not usable or understandable by an AI model that operates through supervised learning.

Despite the task being essential to the development of AI models and tools, the work of data labellers often goes entirely unnoticed and unrecognised.

Data labelling is done by human experts, most commonly from the Global South: Kenya, India and the Philippines. This is because data labelling is labour-intensive work, and labour is cheaper in the Global South.

Data labellers are forced to work under stressful conditions, reviewing content depicting violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest.

Data labellers are pressured to meet high demands within short timeframes. For this, they earn as little as US$1.32 per hour, according to TIME magazine’s 2023 reporting, based on an OpenAI contract with data labelling company Sama.

Countries such as Kenya, India and the Philippines incur less legal and regulatory oversight of worker rights and working conditions.

As in the fast fashion industry, cheap labour enables cheaply accessible products, or, in the case of AI, often a free one.

AI tools are commonly free or cheap to access and use because costs are being cut around the hidden labour that most people are unaware of.

When thinking about the ethics of AI, cracks in the supply chain of development rarely come to the surface of these discussions. People are more focused on the machine itself, rather than how it was created. How a product is developed, be it an item of clothing, a TV, furniture or an AI-enabled capability, has societal and ethical impacts that are far reaching.

A numbers game

In today’s digital world, organisational incentives have shifted beyond revenue and now include metrics around the number of users.

Releasing free tools for the public to use exponentially scales the number of users and opens pathways for alternate revenue streams.

That means we now have a greater level of access to technology tools at a fraction of the cost, or even at no monetary cost at all. This is a recent and rapid change in the way technology reaches consumers.
In 2011, 35% of Americans owned a smartphone. By 2024, 97% owned a mobile phone of some kind. In 1973, a new TV retailed for $379.95 USD, equivalent to $2,694.32 USD today. Today, a new TV can be purchased for much less than that.

Increased manufacturing has historically been accompanied by cost cutting in both labour and quality. We accept poorer quality products because our expectations around consumption have changed. Instead of buying things to last, we now buy things with the expectation of replacing them.

The fast fashion industry is an example of hidden labour and how readily consumers accept it. Between 1970 and 2020, the average British household decreased its annual spending on clothing, despite the average consumer buying 60% more pieces of clothing.

The allure of cheap or free products seems to dispel ethical concerns around labour conditions. Similarly, the allure of intelligent machines has created a facade around how these tools are actually developed.

Achieving ethical AI

Artificial intelligence technology cannot embody ethics; however, the manner in which AI is designed, developed and deployed can.

In 2021, UNESCO released a set of recommendations on the ethics of AI, which focus on the impacts of the implementation and use of AI. The recommendations do not address the hidden labour behind the development of AI.

Misinterpretations of AI, particularly those which encourage the idea of AI developing with a mind of its own, isolate the technology from the people designing, building and deploying that technology. These are the people making decisions around what labour conditions are and are not acceptable within their supply chain, and what remuneration is and isn’t appropriate for the skills and expertise required for data labelling.

If we want to achieve ethical AI, we need to embed ethical decision making across the AI supply chain; from the data labellers who carefully and laboriously annotate and categorise an abundance of data through to the consumers who don’t want to pay for a service they have been accustomed to thinking should be free.

Everything comes at a cost, and ethics is about what costs we are and are not willing to pay.






AI Insights

Artificial Intelligence Consultant Ashley Gross Shares Details on Pittsboro Commissioner Candidacy



Among the eight candidates looking to connect with voters in Pittsboro this fall is Ashley Gross, an artificial intelligence advocate, consultant and course creator.

Gross filed to run for the town government’s Board of Commissioners in July, joining a crowded race to replace the outgoing Pamela Baldwin and James Vose. A resident of the Vineyards neighborhood of Chatham Park, she works as a keynote speaker and consultant for businesses looking to learn more about AI practices in the emerging technology space, leading her own consulting company and working as the CEO of the organization AI Workforce Alliance.

In an email with Chapelboro, Gross described herself as “a mom who loves this little corner of the world we call home” who is committed to the area. When describing her motivation to run — in which she incorrectly stated she was running for a county commissioner seat — she said helping the greater Pittsboro community feel connected and supported with a variety of resources is key amid the town’s ongoing growth.

“I see the push and pull between people who have called Chatham home for generations and those who are just discovering it,” Gross said. “I believe that our differences are not barriers. They are opportunities to learn from each other. My strength is sitting down with people, even when we disagree, and finding the common ground we share. I am a researcher and an experimenter by nature, and I have seen that the most successful communities are built when people come together around shared interests and goals. That is the kind of leadership I want to bring, one that unites us instead of dividing us.”

Gross cited uplifting small businesses to help maintain the local economy as a key priority, as well as public safety and investments into local infrastructure.

“Safe roads, modern emergency response systems, and preparation for the weather risks we face mean families can feel secure no matter what comes our way,” she said. “And as we grow, I will focus on smart development that keeps our small town character intact while building the infrastructure we need for the future.”

Other priorities the Pittsboro resident listed include strong local schools, improving partnerships with local colleges and expanding reliable internet to every home and business — all issues that fall under the purview of the Chatham County government more than the town government.

When describing what she is looking forward to during her campaign for Pittsboro’s Board of Commissioners, Gross wrote that she wants to hear directly from residents about their “concerns, hopes and ideas” while listening and using “data and common sense” to inform her policy decisions.

“Every choice I make,” Gross wrote, “will be guided by a simple question: will this keep our families safe, connected, and thriving? At the end of the day, I am just a mom who believes Chatham is at its best when we work as one community, where families stay close, opportunities grow here, and every neighbor feels they belong.”

Gross will be on the ballot along with Freda Alston, Alex M. Brinker, Corey Forrest, Candace Hunziker, Tobais Palmer, Nikkolas Shramek and Tiana Thurber. The top two commissioner candidates to receive votes will serve four-year terms on the five-seat town board alongside Pittsboro Mayor Kyle Shipp — who is running unopposed for re-election.

Election Day for the 2025 fall cycle will be Tuesday, Nov. 4, with early voting in Chatham County’s municipal elections beginning on Thursday, Oct. 10.

 

Featured image via Ashley Gross.




AI Insights

Google AI Model Uses Virtual Satellite to Map Earth



Google DeepMind introduced a new artificial intelligence model that captures the vivid details of the Earth’s surface, which helps scientists and governments make better decisions about the land and sea.

Called AlphaEarth Foundations, the geospatial AI model pulls together satellite images and other environmental data to create a single picture of the planet, according to a blog post.

Satellites orbiting Earth collect large amounts of information every day. While this data is valuable, it’s often stored in many different formats and comes from different times and sensors, making it hard to combine.

AlphaEarth Foundations acts like a “virtual satellite” that can merge all this information into one consistent view, the post said.

For example, the model can see through persistent clouds in Ecuador to map agricultural plots in various stages of development, according to the post. It can also map the surface of Antarctica, normally a tough place to image.

The AI model can spot changes in land use, such as new construction, deforestation or crop growth, in 10-meter squares, per the post. It also stores the information much more efficiently, using 16 times less space than comparable AI systems.

To make it available to the public, Google is releasing yearly snapshots from 2017 through 2024 in a new Satellite Embedding dataset within Google Earth Engine. This dataset contains more than 1.4 trillion data points each year and is ready to use without extra processing work, the post said.

Over 50 organizations have already tested the system, including the United Nations’ Food and Agriculture Organization, Stanford University, Oregon State University and others, according to the post.

For example, the Global Ecosystems Atlas, an initiative that aims to comprehensively map and monitor the world’s ecosystems, is using AlphaEarth Foundations to help countries classify unmapped ecosystems into categories such as coastal shrublands, per the post. In Brazil, environmental mapping group MapBiomas is using the tool to track farmland and forest changes.

However, even though the model is a “cutting-edge technological breakthrough” in Earth mapping, it is dependent on high-quality satellite data, according to a GoGeomatics Canada blog post.

“While it is known for effectively filling gaps in missing or incomplete data, interpreting poor-quality inputs in critical situations can lead to misdirection,” the post said.


How AlphaEarth Foundations Works

AlphaEarth Foundations combines information from many different satellite and environmental sources into one clear, consistent picture of Earth. It works a bit like stitching together thousands of puzzle pieces into a single image, except the puzzle pieces come from different satellites, sensors and even time periods.

The system takes in a variety of public data, including the following:

  • Optical satellite photos, like those available on Google Earth
  • Radar scans that can penetrate cloud cover
  • 3D laser mapping
  • Climate and environmental readings, such as temperature and rainfall
  • Elevation maps and gravity measurements
  • Descriptive information linked to locations

It treats the images from the same location over time like frames in a video. This lets it “understand” changes through the seasons or from one year to the next, like crops being planted and harvested, forests being cleared or cities expanding, the DeepMind blog post said.

AlphaEarth Foundations then condenses all this into what Google called a “64-dimensional representation” for each 10-meter square, whether land or coastal water.

Where three dimensions would capture only latitude, longitude and elevation, 64 dimensions can encode richer detail: not just location but also appearance, environment and behavior over time.
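As a hedged illustration (plain Python with made-up vectors, not the actual Earth Engine API or real AlphaEarth data), one common way to work with embeddings like these is to compare two squares with cosine similarity, since similar land cover should yield similar vectors:

```python
import math

DIM = 64  # each 10-meter square is summarised as 64 numbers

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 means identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Hypothetical embeddings: two squares with identical land cover,
# and one that differs in half of its dimensions.
square_a = [1.0] * DIM
square_b = [1.0] * DIM
square_c = [1.0] * 32 + [-1.0] * 32

print(cosine_similarity(square_a, square_b))  # 1.0
print(cosine_similarity(square_a, square_c))  # 0.0
```

This kind of comparison is one way the “classify unmapped ecosystems” use case can work with few samples: label a handful of squares by hand, then assign each unlabelled square the category of its most similar labelled neighbour.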

“What is interesting is that they’re able to get down to 10-by-10 meter squares, which is phenomenal,” Christopher Seeger, professor and extension specialist of landscape architecture and geospatial technology at Iowa State University, told The Register. “It’s going to be great for decision makers.”

“This breakthrough enables scientists to do something that was impossible until now: create detailed, consistent maps of our world, on demand,” the DeepMind blog post said. “…[T]hey no longer have to rely on a single satellite passing overhead. They now have a new kind of foundation for geospatial data.”

The system can be used for many purposes, including monitoring wildfires, tracking water levels in reservoirs and spotting urban growth. It can also help create detailed maps with fewer samples, saving time and resources.

Google plans to explore combining AlphaEarth Foundations with its Gemini multimodal model to expand the system’s capabilities further.


AI Insights

Artificial intelligence to make professional sports debut as Oakland Ballers manager – CBS News
