
The human thinking behind artificial intelligence



Artificial intelligence is built on the thinking of intelligent humans, including data labellers who are paid as little as US$1.32 per hour. Zena Assaad, an expert in human-machine relationships, examines the price we’re willing to pay for this technology. This article was originally published in the Cosmos Print Magazine in December 2024.

From Blade Runner to The Matrix, science fiction depicts artificial intelligence as a mirror of human intelligence. It’s portrayed as having the capacity to evolve and advance with a mind of its own. The reality is very different.

The original conceptions of AI, which hailed from the earliest days of computer science, defined it as the replication of human intelligence in machines. This definition invites debate on the semantics of the notion of intelligence.

Can human intelligence be replicated?

The idea of intelligence is not contained within one neat definition. Some view intelligence as an ability to remember information, others see it as good decision making, and some see it in the nuances of emotions and our treatment of others.

As such, human intelligence is an open and subjective concept. Replicating this amorphous notion in a machine is very difficult.

Software is the foundation of AI, and software is binary in its construct; that is, made of two things or parts. In software, numbers and values are expressed as 1 or 0, true or false. This dichotomous design does not reflect the many shades of grey of human thinking and decision making.
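To see the point concretely, here is a minimal Python sketch (illustrative only, not a claim about any particular AI system) of how software reduces values to ones and zeros, and how in-between values can only be approximated:

```python
# The integer 42 as the machine stores it: nothing but ones and zeros.
x = 42
print(format(x, "b"))   # -> 101010

# Even a simple decimal like 0.1 has no exact binary form; the machine
# stores the nearest value it can express in bits.
print(f"{0.1:.20f}")    # -> 0.10000000000000000555
```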

Not everything is simply yes or no. Part of that nuance comes from intent and reasoning, which are distinctly human qualities.

To have intent is to pursue something with an end or purpose in mind. AI systems can be thought of as having goals, in the form of functions within the software, but this is not the same as intent. The main difference is that goals are specific, measurable objectives, whereas intentions are the underlying purpose and motivation behind pursuing them.

You might think of goals as the ‘what’, and intent as the ‘why’.

To have reasoning is to consider something with logic and sensibility, drawing conclusions from old and new information and experiences. It is based on understanding rather than pattern recognition. AI does not have the capacity for intent or reasoning, and this challenges the feasibility of replicating human intelligence in a machine.

There is a cornucopia of principles and frameworks that attempt to address how we design and develop ethical machines. But if AI is not truly a replication of human intelligence, how can we hold these machines to human ethical standards?

Can machines be ethical?

Ethics is the study of morality: right and wrong, good and bad. Imparting ethics to a machine, which is distinctly not human, seems redundant. How can we expect a binary construct, which cannot reason, to behave ethically?

Similar to the semantic debate around intelligence, defining ethics is its own Pandora’s box. Ethics is amorphous, changing across time and place. What is ethical to one person may not be to another. What was ethical five years ago may not be considered appropriate today.

These changes are based on many things: culture, religion, economic climates, social demographics and more. The idea of machines embodying these very human notions is improbable, and so it follows that machines cannot be held to ethical standards. However, what can and should be held to ethical standards are the people who make decisions about AI.

Contrary to popular belief, technology of any form does not develop of its own accord. The reality is that its evolution is puppeteered by humans. Human beings are the ones designing, developing, manufacturing, deploying and using these systems.

If an AI system produces an incorrect or inappropriate output, it is because of a flaw in the design, not because the machine is unethical.

The concept of ethics is fundamentally human. To apply this term to AI, or any other form of technology, anthropomorphises these systems. Attributing human characteristics and behaviours to a piece of technology creates misleading interpretations of what that technology is and is not capable of.

Decades of messaging about synthetic humans and killer robots have shaped how we conceptualise the advancement of technology, in particular technology which claims to replicate human intelligence.

AI applications have scaled exponentially in recent years, with many AI tools made freely available to the general public. But freely accessible AI tools come at a cost. In this case, the cost, ironically, lies in the value of human intelligence.

The hidden labour behind AI

At a basic level, artificial intelligence works by finding patterns in data, which involves more human labour than you might think.

ChatGPT is one example of AI, of a type referred to as a large language model (LLM). ChatGPT is trained on carefully labelled data, which adds context, in the form of annotations and categories, to what is otherwise a lot of noise.

Using labelled data to train an AI model is referred to as supervised learning. Labelling an apple as “apple”, a spoon as “spoon”, a dog as “dog”, helps to contextualise these pieces of data into useful information.

When you enter a prompt into ChatGPT, it scours the data it has been trained on to find patterns matching those within your prompt. The more detailed the data labels, the more accurate the matches. Labels such as “pet” and “animal” alongside the label “dog” provide more detail, creating more opportunities for patterns to be exposed.
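To make the idea of supervised learning concrete, here is a minimal Python sketch (a toy example using scikit-learn; the data, labels and model choice are invented for illustration and are not how ChatGPT itself is built):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each example pairs raw content with a human-assigned label.
texts = ["golden retriever puppy", "tabby cat asleep",
         "granny smith apple", "silver soup spoon"]
labels = ["animal", "animal", "food", "utensil"]

# Count word occurrences, then fit a simple classifier to the labels.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# The model matches patterns in new input against the labelled examples.
print(model.predict(["red apple"]))   # -> ['food']
```

The principle scales: replace the four toy examples with millions of human-annotated ones and the need for careful labelling, and the labour behind it, only grows.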

Data is an amalgam of content (images, words, numbers and so on), and it requires this context to become useful information that can be interpreted and used.

As the AI industry continues to grow, there is greater demand for more accurate products. One of the main ways of achieving this is through more detailed and granular labels on training data. Data labelling is a time-consuming and labour-intensive process. In the absence of this work, data is not usable or understandable by an AI model that operates through supervised learning.

Despite the task being essential to the development of AI models and tools, the work of data labellers often goes entirely unnoticed and unrecognised.

Data labelling is done by human experts, and these people are most commonly from the Global South: Kenya, India and the Philippines. This is because data labelling is labour-intensive work, and labour is cheaper in the Global South.

Data labellers are forced to work under stressful conditions, reviewing content depicting violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest.

Data labellers are pressured to meet high demands within short timeframes. For this, they earn as little as US$1.32 per hour, according to TIME magazine’s 2023 reporting, based on an OpenAI contract with data labelling company Sama.

Countries such as Kenya, India and the Philippines have less legal and regulatory oversight of workers’ rights and working conditions.

As in the fast fashion industry, cheap labour enables cheaply accessible products; in the case of AI, the product is often free.

AI tools are commonly free or cheap to access and use because costs are being cut around the hidden labour that most people are unaware of.

When thinking about the ethics of AI, cracks in the development supply chain rarely surface in these discussions. People focus more on the machine itself than on how it was created. How a product is developed, be it an item of clothing, a TV, furniture or an AI-enabled capability, has far-reaching societal and ethical impacts.

A numbers game

In today’s digital world, organisational incentives have shifted beyond revenue and now include metrics around the number of users.

Releasing free tools for the public to use exponentially scales the number of users and opens pathways for alternate revenue streams.

That means we now have a greater level of access to technology tools at a fraction of the cost, or even at no monetary cost at all. This is a recent and rapid change in the way technology reaches consumers.
In 2011, 35% of Americans owned a smartphone. By 2024 that figure had increased to a whopping 90%. In 1973, a new TV retailed for US$379.95, equivalent to US$2,694.32 today. Today, a new TV can be purchased for much less than that.
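The TV figure is a standard consumer price index adjustment. A quick sketch (the CPI values below are approximate annual averages, used purely for illustration):

```python
# Inflation-adjust the 1973 TV price using approximate U.S. CPI averages.
CPI_1973 = 44.4
CPI_2024 = 313.7

price_1973 = 379.95
price_today = price_1973 * (CPI_2024 / CPI_1973)
print(f"${price_today:,.2f}")   # -> about $2,684, close to the figure above
```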

Increased manufacturing has historically been accompanied by cost cutting in both labour and quality. We accept poorer quality products because our expectations around consumption have changed. Instead of buying things to last, we now buy things with the expectation of replacing them.

The fast fashion industry is an example of hidden labour and of how readily consumers accept it. Between 1970 and 2020, the average British household decreased its annual spending on clothing, despite the average consumer buying 60% more pieces of clothing.

The allure of cheap or free products seems to dispel ethical concerns around labour conditions. Similarly, the allure of intelligent machines has created a facade around how these tools are actually developed.

Achieving ethical AI

Artificial intelligence technology cannot embody ethics; however, the manner in which AI is designed, developed and deployed can.

In 2021, UNESCO released a set of recommendations on the ethics of AI, which focus on the impacts of the implementation and use of AI. The recommendations do not address the hidden labour behind the development of AI.

Misinterpretations of AI, particularly those which encourage the idea of AI developing with a mind of its own, isolate the technology from the people designing, building and deploying it. These are the people making decisions about what labour conditions are and are not acceptable within their supply chains, and what remuneration is and is not appropriate for the skills and expertise that data labelling requires.

If we want to achieve ethical AI, we need to embed ethical decision making across the AI supply chain: from the data labellers who carefully and laboriously annotate and categorise an abundance of data, through to the consumers who don’t want to pay for a service they have become accustomed to thinking of as free.

Everything comes at a cost, and ethics is about what costs we are and are not willing to pay.






‘It is a war of drones now’: the ever-evolving tech dominating the frontline in Ukraine



“It’s more exhausting,” says Afer, a deputy commander of the “Da Vinci Wolves”, describing how one of the best-known battalions in Ukraine has to defend against constant Russian attacks. Where once the invaders might have tried small-group assaults with armoured vehicles, now the tactic is to try to sneak through on foot, one by one, evading frontline Ukrainian drones, and find somewhere to hide.

Under what little cover remains, survivors then try to gather a group of 10 or so and attack Ukrainian positions. It is costly – “in the last 24 hours we killed 11,” Afer says – but the assaults that previously might have happened once or twice a day are now relentless. To the Da Vinci commander it seems that the Russians are terrified of their own officers, which is why they follow near suicidal orders.

At the command centre of the Da Vinci Wolves battalion

Reconnaissance drones monitor a burnt-out tree line west of Pokrovsk; the images come through to Da Vinci’s command centre at one end of a 130-metre-long underground bunker. “It’s very dangerous to have even a small break on watching,” Afer says, and the team works round the clock. The bunker, built in four or five weeks, contains multiple rooms, including a barracks for sleep. Another is an army mess with children’s drawings, reminders of family. The menu for the week is listed on the wall.

It is three and a half years into the Ukraine war and Donald Trump’s August peace initiative has made no progress. Meanwhile the conflict evolves. Afer explains that such is the development of FPV (first person view) drones, remotely piloted using an onboard camera, that the so-called kill zone now extends “12 to 14 kilometres” behind the front – the range at which a $500 drone, flying at up to 60mph, can strike. It means, Afer adds, that “all the logistics [food, ammunition and medical supplies] we are doing is either on foot or with the help of ground drones”.

Heavy machine guns near the temporary base of the Da Vinci battalion

Further to the rear, at a rural dacha now used by Da Vinci’s soldiers, several types of ground drone are parked. The idea has moved rapidly from concept to trial to reality. They include remotely controlled machine guns and flatbed robot vehicles. One, the $12,000 Termit, has tracks for rough terrain and can carry 300kg over 12 miles at a top speed of 7 miles an hour.

Termit land drones equipped for cargo, assault and mine laying

Ukrainian defence ministry photograph of its Termit drone.

Land drones save lives too. “Last night we evacuated a wounded man with two broken legs and a hole in his chest,” Afer continues. The whole process took “almost 20 hours” and involved two soldiers lifting the wounded man more than a mile to a land drone, which was able to cart the victim to a safe village. The soldier survived.

While Da Vinci reports its position is stable, endless Russian attempts at infiltration have been effective at revealing where the line is thinly held or poorly coordinated between neighbouring units. Russian troops last month penetrated Ukraine’s lines north-east of Pokrovsk near Dobropillya by as much as 12 miles – a dangerous moment in a critical sector, just ahead of Trump’s summit with Vladimir Putin in Alaska.

At first it was said a few dozen had broken through, but the final tally appears to have been much greater. Ukrainian military sources estimate that 2,000 Russians got through and that 1,100 infiltrators were killed in a fightback led by the 14th Chervona Kalyna brigade from Ukraine’s newly created Azov Corps – a rare setback for an otherwise slow but remorseless Russian advance.


That evening at another dacha used by Da Vinci, people linger in the yard while moths target the light bulbs. Inside, a specialist drone-jamming operator sits in a gaming chair surrounded by seven screens arranged in a fan and supported by some complex carpentry.

It is too sensitive to photograph, but the team leader Oleksandr, whose call sign is Shoni, describes the jammer’s task. Both sides can intercept each other’s feeds from FPV drones and three screens are dedicated to capturing footage that can then help to locate them. Once discovered, the operator’s task is to find the radio frequency the drone is using and immobilise it with jammers hidden in the ground (unless, that is, they are fibre optic drones that use a fixed cable up to 12 miles long instead of a radio connection).

“We are jamming around 70%,” Shoni says, though he acknowledges that the Russians achieve a similar success rate. In their sector, this amounts to 30 to 35 enemy drones a day. At times, the proportion downed is higher. “During the last month, we closed the sky. We intercepted their pilots saying on the radio they could not fly,” he continues, but that changed after Russian artillery destroyed jamming gear on the ground. The battle, Shoni observes, ebbs and flows: “It is a war of drones now and there is a shield and there is a sword. We are the shield.”

Oleksandr, call sign Shoni, takes a break in the kitchen

A single drone pilot can fly 20 missions in 24 hours, says Sean, who flies FPVs for Da Vinci for several days at a stretch in a crew of two or three, hidden a few miles behind the frontline. Because the Russians are on the attack, the main target is their infantry. Sean frankly acknowledges he is “killing at least three Russian soldiers” during that time, in the deadly struggle between ground and air. Does it make it easier to kill the enemy from a distance? “How can we tell, we only know this,” says Dubok, another FPV pilot, sitting alongside Sean.

Other anti-drone defences are more sophisticated. Ukraine’s 3rd Assault Brigade holds the northern Kharkiv sector, east of the Oskil River, but to the west are longer-range defence positions. Inside, a team member monitors a radar, mostly looking for signs of Russian Supercam, Orlan and Zala reconnaissance drones. If they see a target, two dash out into fields ripe with sunflowers to launch an Arbalet interceptor: a small delta-wing drone made of black polystyrene, which costs $500 and can be held in one hand.

Buhan, a pilot of the drone crew with the Arbalet interceptor at the positions of the 3rd Assault Brigade in Kharkiv region
Arbalet interceptors at the dugout of the 3rd Assault Brigade in Kharkiv region

The Arbalet’s top speed is a remarkable 110 miles an hour, though its battery life is a shortish 40 minutes. It is flown via its camera by a pilot hidden in the bunker, using a sensitive hobbyist’s controller. The aim is to get it close enough to detonate the grenade it carries and destroy the Russian drone. Buhan, one of the pilots, says “it is easier to learn how to fly it if you have never flown an FPV drone”.

It is an unusually wet and cloudy August day, which means a rare break from drone activity as the Russians will not be flying in the challenging conditions. The crew don’t want to launch the Arbalet in case they lose it, so there is time to talk. Buhan says he was a trading manager before the war, while Daos worked in investments. “I would have had a completely different life if it had not been for the war,” Daos continues, “but we all need to gather to fight to be free.”

So do the pilots feel motivated to carry on fighting when there appears to be no end? The two men look in my direction, and nod with a resolution not expressed in words.




Tech giants pay talent millions of dollars



Meta CEO Mark Zuckerberg offered $100 million signing bonuses to top OpenAI employees.


The artificial intelligence arms race is heating up, and as tech giants scramble to come out on top, they’re dangling millions of dollars in front of a small talent pool of specialists in what’s become known as the AI talent war.

It’s seeing Big Tech firms like Meta, Microsoft, and Google compete for top AI researchers in an effort to bolster their artificial intelligence divisions and dominate the multibillion-dollar market.

Meta CEO Mark Zuckerberg recently embarked on an expensive hiring spree to beef up the company’s new AI Superintelligence Labs. This included poaching Scale AI co-founder Alexandr Wang as part of a $14 billion investment in the startup.

OpenAI’s Chief Executive Sam Altman, meanwhile, recently said the Meta CEO had tried to tempt top OpenAI talent with $100 million signing bonuses and even higher compensation packages.

“If I’m going to spend a billion dollars to build a [AI] model, $10 million for an engineer is a relatively low investment.”
Alexandru Voica, head of corporate affairs and policy at Synthesia

Google is also a player in the talent war, tempting Varun Mohan, co-founder and CEO of artificial intelligence coding startup Windsurf, to join Google DeepMind in a $2.4 billion deal. Microsoft AI, meanwhile, has quietly hired two dozen Google DeepMind employees.

“In the software engineering space, there was an intense competition for talent even 15 years ago, but as artificial intelligence became more and more capable, the researchers and engineers that are specialized in this area has stayed relatively stable,” Alexandru Voica, head of corporate affairs and policy at AI video platform Synthesia, told CNBC Make It.

“You have this supply and demand situation where the demand now has skyrocketed, but the supply has been relatively constant, and as a result, there’s the [wage] inflation,” Voica, a former Meta employee and currently a consultant at the Mohamed bin Zayed University of Artificial Intelligence, added.

Voica said the multi-million dollar compensation packages are a phenomenon the industry has “never seen before.”

Here’s what’s behind the AI talent war:

Building AI models costs billions

“Companies that build products pay to use these existing models and build on top of them, so the capital expenditure is lower and there isn’t as much pressure to burn money,” Voica said. “The space where things are very hot in terms of salaries are the companies that are building models.”

AI specialists are in demand

The average salary for a machine learning engineer in the U.S. is $175,000 in 2025, per Indeed data.


Machine learning engineers are the AI professionals who can build and train these large language models — and demand for them is high on both sides of the Atlantic, Ben Litvinoff, associate director at technology recruitment company Robert Walters, said.

“There’s definitely a heavy increase in demand with regards to both AI-focused analytics and machine learning in particular, so people working with large language models and people deploying more advanced either GPT-backed or more advanced AI-driven technologies or solutions,” Litvinoff explained.

This includes a “slim talent pool” of experienced specialists who have worked in the industry for years, he said, as well as AI research scientists who have completed PhDs at the top five or six universities in the world and are being snapped up by tech giants upon graduating.

It’s leading to mega pay packets, with Zuckerberg reportedly offering $250 million to Matt Deitke, a 24-year-old AI genius who dropped out of a computer science doctoral program at the University of Washington.

Meta directed CNBC to Zuckerberg’s comments to The Information, where the Facebook founder said there’s an “absolute premium” for top talent.

“A lot of the specifics that have been reported aren’t accurate by themselves. But it is a very hot market. I mean, as you know, and there’s a small number of researchers, which are the best, who are in demand by all of the different labs,” Zuckerberg told the tech publication.

“The amount that is being spent to recruit the people is actually still quite small compared to the overall investment and all when you talk about super intelligence.”

Litvinoff estimated that, in the London market, machine learning engineers and principal engineers are currently earning six-figure salaries ranging from £140,000 to £300,000 for more senior roles, on average.

In the U.S., the average salary for a machine learning engineer is $175,000, reaching nearly $300,000 at the higher end, according to Indeed.

Startups and traditional industries get left behind

As tech giants continue to guzzle up the best minds in AI with the lure of mammoth salaries, there’s a risk that startups get left behind.

“Some of these startups that are trying to compete in this space of building models, it’s hard to see a way forward for them, because they’re stuck in the space of: the models are very expensive to build, but the companies that are buying those models, I don’t know if they can afford to pay the prices that cover the cost of building the model,” Voica noted.

Mark Miller, founder and CEO of Insurevision.ai, recently told Startups Magazine that this talent war was also creating a “massive opportunity gap” in traditional industries.

“Entire industries like insurance, healthcare, and logistics can’t compete on salary. They need innovation but can’t access the talent,” Miller said. “The current situation is absolutely unsustainable. You can’t have one industry hoarding all the talent while others wither.”

Voica said AI professionals will have to make a choice. While some will take Big Tech’s higher salaries and bureaucracy, others will lean towards startups, where salaries are lower, but staff have more ownership and impact.

“In a large company, you’re essentially a cog in a machine, whereas in a startup, you can have a lot of influence. You can have a lot of impact through your work, and you feel that impact,” Voica said.

Until the price of building AI models comes down, however, the high salaries for AI talent are likely to remain.

“As long as companies will have to spend billions of dollars to build the model, they will spend tens of millions, or hundreds of millions, to hire engineers to build those models,” Voica added.

“If all of a sudden tomorrow, the cost to build those models decreases by 10 times, the salaries I would expect would come down as well.”


