
AI Insights

Martyr “Majid Tajan-Jari”: The Man Who Reached the Heart of the World’s Artificial Intelligence



TEHRAN – Martyr Majid Tajan-Jari was a scientific genius who journeyed from the courtyard of his home in the village of Tajan Jar in Mazandaran Province to the heart of the world’s AI, and whose name is now immortalized beside the word martyr.

Dr. Majid Tajan-Jari was a child who didn’t just take apart a broken radio but pieced its scattered fragments together like a puzzle, crafting a future with his small hands—a future that still echoes in the quiet of his childhood home.

It was as if an inner voice whispered to him: “The future begins right here.” This is the story told by a mother who witnessed every moment of it… and now narrates the silence of a home that her son, with his brilliance and his blood, gave meaning to.

A brilliance that seemed to have come from the future…

Some people are born not just for their own time, but for the times to come. From childhood, Dr. Majid Tajan-Jari showed signs of this timelessness in his demeanor—a sharp, creative mind that quickly blurred the line between play and science.

Zobeideh Khaleghi, the martyr’s mother, recalls: “I remember one day when we went to the store together. Video players had just arrived. Majid was about ten or eleven. He took an old radio from his aunt, dismantled it, understood its components, and rebuilt it from scratch. We just watched, but it was as if he had a blueprint in his mind.”

Their simple courtyard became his laboratory—where he worked with electrical circuits and soldering. “One day, he asked me, ‘Mom, I don’t have a workshop—can I work here?’ I told him, ‘This house is yours. Do whatever you want.’”

Majid’s father, a retired employee, spoke of their financial struggles: “We had little, but Majid never gave up. He taught himself, built, and created.” At eighteen, he built a robot that didn’t just move—it thought.

Zobeideh continues: “We didn’t understand what he was making, but we knew it was something from the future.” Her voice is quiet, choked with emotion: “The pain of losing a child who was building the future is unbearable. The house feels smaller without him, and its silence is louder than ever.”

Yet Majid was not only unmatched in scientific brilliance—his ethics transcended ordinary boundaries. “He was kind to everyone; his respect and politeness were legendary,” his mother says. “Sometimes I thought his ‘grade’ in ethics was infinite.”

Majid’s move to Tehran was quiet and unassuming. “For fourteen years, he worked in silence,” his mother recalls. “I didn’t fully grasp what he was doing, but I felt he was fighting for something greater than himself.”

The scent of his shirt still lingers in the house…

Her voice trembles—not from breaking, but from standing firm, from honoring that pain. Softly, she says: “When I saw his body, it was as if the world stopped. I just looked at him… with that same smile he always had in my memory. I told myself, ‘Be calm—he wasn’t meant to stay. They didn’t bury him in the earth; they took him to the sky.’”

“He always said, ‘Kiss my throat, Mom…’” A brief silence follows. The mother looks down, then speaks a heavy truth: “Every time I visited his home, he’d say, ‘Mom, kiss my throat…’ Now I understand. I’m ashamed that the last time, I couldn’t kiss his throat.”

Our hearts are broken, but we have not collapsed

Amid this crushing grief, a voice rises from the depths of faith—not of mourning, but of resilience: “My sister calls every day and asks, ‘Zobeideh, I’m just his aunt, and I’m burning with grief—how are you still breathing?’ And I tell her, ‘Patience is the only thing Majid planted in my heart. He left, but he left his patience behind for me.’”

“His memory has lit up our lives.”

Martyr "Majid Tajan -Jari": The Man Who Reached the Heart of the World’s Artificial Intelligence

“We mothers live with our skin and bones—we touch pain. But every night, I tell myself, ‘Majid, my soul, though they took your body from me, your name, your memory, your voice are still with me. Sometimes, I still hear the door… as if you’re coming home, turning the key, saying, ‘Mom, I hope you’re not tired.’”

Ali Tajan-Jari, the martyr’s father, a quiet man with a gaze heavy with years of experience, sits on the couch, flipping through old photographs.

In a simple home, he had a global mind

His father, with a faint smile, glances toward the courtyard. A quiet pride lingers in his eyes: “That simple home, that humble courtyard, became the birthplace of boundless dreams.”

“From that small room, he connected with the world. He said, ‘I will stay in Iran, but my scientific voice must be heard beyond borders.’ And so it was. I often heard that when asked where his students were, he’d smile and say, ‘Everywhere… Spain, England, Canada, Turkey…’”

He built bridges from failure

A brief silence lingers between the father’s words before he continues: “In one of our talks, he said, ‘I’ve failed many, many times… but I built a home—a scientific family. All my chances were there.’ That group was called ‘AIO Learn’—young people who rose from the ground and reached the summit.”

The father places a hand on his chest, as if something deep within him speaks: “We didn’t know Majid was teaching. Not out of secrecy, but because, amid building robots and AI projects, that side of him was less visible.”

“One day, we heard his students had surpassed 500,000. Majid was a teacher without borders—with a virtual blackboard, yet magnificent. And all of it began in a room that didn’t even have an extra chair. Just love, a laptop, and a light of passion.”

“He always said, ‘Science must have attraction—not fear, not force… only motivation and the desire to know.’”

A Quran that still carries his presence…

Moments later, the father grows quieter. His eyes settle on a small Quran on the table—the one that had accompanied his son for years. Slowly, he takes out his glasses, places them on, and silently recites a verse.

His voice is soft, but the words are clear and firm. He closes the Quran, running his hand over its cover—as if still feeling the warmth of his son’s hands.

In the silence of the house, only the sound of his breathing can be heard. His gaze lingers on his son’s portrait. He says nothing. But that look tells a thousand unspoken words.

The end of a story, the beginning of a path

This chapter of Majid’s life was not just a career—it was part of Iran’s scientific identity today. A young man who chose to stay instead of emigrate, to build instead of complain, and to take root instead of leave.

In a simple home, with hands on a keyboard and a heart full of conviction, he trained students who now carry his legacy across the world.

The legacy he planted in life…

Mohaddeseh Tajan-Jari, the martyr’s sister, sits composed in the frame of the image. Soft light from a half-open window falls on her face. Her voice, delicate and measured, wavers between sorrow and pride:

“Sometimes they ask, ‘What did Majid leave behind?’ He had no children, no family of his own… But I say, ‘If only they knew what a child truly is.’”

“Majid did not father a child of his blood, but he fathered one of his mind—he named it his company. He always said with certainty, ‘I built AIO Learn… this is my child.’”

Martyr "Majid Tajan -Jari": The Man Who Reached the Heart of the World’s Artificial Intelligence

She pauses briefly, then adds: “Majid wasn’t just my brother—he was my confidant. We never fought—not because we couldn’t, but because there was no need. We were friends, united in thought, concern, and heart. More than a brother, he was my teacher—one whose silence itself was a lesson.”

“When my child was born, he was genuinely happy. He’d buy toys and say, ‘He must grow up intelligent.’ He wasn’t a father, but he lived fatherhood. In action, he was a martyr—not just in title.”

Her voice grows quieter, but the meaning grows heavier: “He didn’t see martyrdom only in combat. He stayed up till dawn coding, creating ideas, building the future. He wrote projects that seemed to come from decades ahead.”

“His jihad was a jihad of thought—his battlefield was science, his weapon genius. Martyrdom was not the end of his path—it was the manifestation of a life entirely devoted.”

My brother said ‘no’ to money, ‘yes’ to his homeland

The narrative shifts—from emotion to loyalty, from offers to faith. “When a major European company made him a staggering offer, everyone thought his choice was obvious. High salary, easy immigration… I told him, ‘Majid, it’s your decision.’ He smiled and said, ‘Mahdeh, I can’t live in a country where they lie about my people day and night. Even if I have to live in a tent, I’d rather be in my homeland.’”

An ascension that was preordained

Her gaze drifts to a distant point—a moment of silence. Then, with inner conviction, she says: “Majid wasn’t born—it was as if he descended. He came to build, to teach, to inspire… and when his mission was complete, he left. Not in silence, but at his peak.”

“I always think God entrusted Majid to us for only thirty-five years. Now, his mission is over… but his voice still flows.”

We are still standing…

Today, the small room in the Tajan-Jar home is silent. The sound of soldering is gone, the monitor remains dark, the desk empty. But the ideas born in that room are more alive than ever—in the pulse of research, the veins of science, the sky of hope.

Martyr Dr. Majid Tajan-Jari is no longer among us, but his vision still shines in the eyes of his students. His thoughts live on in the code he wrote, the projects he brought to life, the dreams he refused to leave unfinished.

Martyr "Majid Tajan -Jari": The Man Who Reached the Heart of the World’s Artificial Intelligence

He is gone, but his path remains. His principles—his belief in staying, in building, in nurturing elites on his homeland’s soil—endure.

A father, with eyes full of pride, spoke of a son who, in silence, in dignity, in action, wrote a new definition of scientific jihad.

And today, we are certain: some people do not come to stay—they come to light a lamp that will illuminate the path for years to come…

Martyr Dr. Majid Tajan-Jari was not just a scientific genius—he was the embodiment of committed, scholarly, and national life. A man who could have crossed borders, shone in the world’s best institutions, but chose to remain in this soil, take root, and build a bright future.

(Source: Mehr News Agency)





Generative vs. agentic AI: Which one really moves the customer experience needle?



The term “artificial intelligence,” coined by John McCarthy in 1956, named a field that lay dormant for decades before exploding into a cultural and business phenomenon after 2012. From predictive algorithms to chatbots and creative tools, AI has evolved rapidly. Now, two powerful paradigms are shaping its future: generative AI, which crafts content from text to art, and agentic AI, which acts autonomously to solve complex tasks. But should businesses pit generative AI against agentic AI, or combine them to innovate? The answer isn’t binary, because these technologies aren’t competing forces. In fact, they often complement each other in powerful ways, especially when it comes to transforming customer engagement.

The rise of generative AI: Creativity meets scale

Generative AI is all about creation; it represents the imaginative side of artificial intelligence. From producing marketing copy and designing campaign visuals to generating product descriptions and chat responses, generative AI has unlocked new possibilities for enterprises looking to scale content and personalisation like never before.

Fuelled by powerful models like ChatGPT, DALL·E, and MidJourney, these systems have entered the enterprise stack at speed. Marketing teams are using them to brainstorm ideas and accelerate go-to-market efforts. Customer support teams are deploying them to enhance chatbot interactions with more human-like language. Product teams are using generative AI to auto-draft FAQs or documentation. And sales teams are experimenting with tailored email pitches generated from past deal data.

At the heart of this capability is the model’s ability to learn from massive datasets, analysing and replicating patterns in text, visuals, and code to produce new, relevant content on demand. This has made generative AI a valuable tool in customer engagement workflows where speed, relevance, and personalisation are paramount. But while generative AI can start the conversation, it rarely finishes it. That’s where its limitations show up.

For instance, it can draft a beautifully written response to a billing query, but it can’t resolve the issue by accessing the customer’s account, applying credits, or triggering workflows across enterprise systems. In other words, it creates the message but not the outcome. This creative strength makes generative AI a powerful enabler of customer engagement but not a complete solution. To drive real business value, measured in resolution rates, retention, and revenue, enterprises need to go beyond content generation and toward intelligent action. This is where agentic AI comes into play.

How agentic AI is redefining enterprise and consumer engagement

As the need for deeper automation grows, agentic AI is taking centre stage. Agentic AI is built to act; it makes decisions, takes autonomous actions, and adapts in real time to achieve goals. For businesses, this marks a transformative shift. Generative AI has empowered enterprises to accelerate communication, generate insights, and personalise engagement. Agentic AI, on the other hand, goes beyond assistance to autonomy. Imagine a virtual enterprise assistant that doesn’t just draft emails but manages entire customer service workflows — triggering follow-ups, updating CRM systems, and escalating issues when needed.

In industries like supply chain, finance, and telecom, agentic AI can dynamically reconfigure networks, detect anomalies, or reroute deliveries—all with minimal human input. It’s a new era of AI-driven execution. On the consumer front, agentic AI takes engagement from passive response to proactive assistance. Think of a digital concierge that not only understands your intent but acts on your behalf — tracking shipments or negotiating a better mobile plan based on usage patterns.

A new layer of intelligence — with responsibility

The increased autonomy of agentic AI raises important questions around trust, governance, and accountability. Who’s liable when an agentic system makes an error or an ethically questionable decision? Enterprises adopting such systems will need to ensure alignment with human values, transparency in decision-making, and robust fail-safes.

Generative and agentic AI are not rivals — they’re complementary forces that, together, enable a new era of intelligent enterprise and consumer engagement.

When generative meets agentic AI

Generative AI and agentic AI may serve different functions. However, rather than operating in isolation, these technologies frequently collaborate, enhancing both communication and execution.

Take, for example, a virtual customer service agent. The agentic AI manages the flow of interaction, makes decisions, and determines next steps, while generative AI crafts clear, personalised responses tailored to the conversation in real time.
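That division of labour can be sketched as a small loop: the agentic side decides and acts, then delegates the wording to the generative side. Everything below is a hypothetical illustration; `generate_reply` stands in for a call to a generative model, and none of these names belong to any real product's API.

```python
# Illustrative sketch: an agentic component decides and acts, then delegates
# customer-facing wording to a generative component. All names are invented.

def apply_credit(ticket: dict) -> None:
    """Stub for an autonomous action against a billing system."""
    ticket["credited"] = True

def generate_reply(intent: str, context: dict) -> str:
    """Placeholder for a generative model drafting the actual wording."""
    templates = {
        "refund": "Good news: we've applied a credit to your account.",
        "escalate": "Connecting you with a specialist now.",
    }
    return templates.get(intent, "How can I help you today?")

def handle_ticket(ticket: dict) -> str:
    """Agentic side: decide the next step, act on systems, then ask for phrasing."""
    if ticket["type"] == "billing" and ticket["amount"] < 50:
        apply_credit(ticket)          # acts autonomously within its authority
        intent = "refund"
    elif ticket["type"] == "billing":
        intent = "escalate"           # beyond its authority: hand off to a human
    else:
        intent = "greet"
    return generate_reply(intent, ticket)

print(handle_ticket({"type": "billing", "amount": 20}))
```

The routing logic is deliberately trivial; the point is the separation of concerns, with decisions and actions on one side and language on the other.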

This collaborative dynamic also plays out in robotics. Imagine a robot chef: generative AI could invent creative recipes based on user tastes and available ingredients, while agentic AI would take over the cooking, executing the recipe with precision and adapting to real-time conditions in the kitchen.

Summing Up

As AI continues to evolve, the boundaries between generative and agentic systems will become increasingly fluid. We’re heading toward a future where AI doesn’t just imagine possibilities but also brings them to life, merging creativity with execution in a seamless loop. This fusion holds immense promise across industries, from streamlining healthcare operations to revolutionising manufacturing workflows.

However, with such transformative power comes great responsibility. Ethical development, transparency, and accountability must remain non-negotiable, especially when it comes to safeguarding consumer data. As these systems take on more autonomous roles, ensuring privacy, security, and user consent will be critical to building trust.

By understanding the distinct roles and combined potential of generative and agentic AI, we can shape a future where technology enhances human capability responsibly, meaningfully, and with integrity at its core.

This article is authored by Harsha Solanki, VP GM Asia, Infobip.

Disclaimer: The views expressed in this article are those of the author/authors and do not necessarily reflect the views of ET Edge Insights, its management, or its members





How AI is eroding human memory and critical thinking



by Paul W. Bennett 
Originally published on Policy Options
September 5, 2025

Consider these everyday experiences in today’s digitally dependent world rich with artificial intelligence (AI). A convenience store cashier struggles to make change. Your Uber driver gets lost on his way to your destination. A building contractor tries to calculate the load-bearing capacity of your new floor. An emergency-room nursing assistant guesses at the correct dosage in administering a life-saving heart medication.

All of these are instances of an underlying problem that can be merely an irritant or a matter of life and death. What happens when brains accustomed to backup from phones and devices must go it alone?

Increasingly we are relying upon technology to do our thinking for us. Cognitive offloading to calculators, GPS, ChatGPT and digital platforms enables us to do many things without relying on human memory. But that comes with a price.   

Leading cognitive science researchers have begun to connect the dots. In a paper entitled The Memory Paradox, released earlier this year, American cognitive psychologist Barbara Oakley and a team of neuroscience researchers exposed the critical but peculiar irony of the digital era: as AI-powered tools become more capable, our brains may be bowing out of the hard mental lift. This erodes the very memory skills we should be exercising. We are left less capable of using our heads.

Collective loss of memory

Studies show that decades of steadily rising IQ scores from the 1930s to the 1980s — the famed Flynn effect — have levelled off and even begun to reverse in several advanced countries. Recent declines in the United States, Britain, France and Norway cry out for explanation. Oakley and her research team applied neuroscience research to find an answer. Although IQ is undoubtedly influenced by multiple factors, the researchers attribute the decline to two intertwined trends. One is the educational shift away from direct instruction and memorization. The other is a rise in cognitive offloading, that is, people habitually leaning on calculators, smartphones and AI to recall facts and solve problems. 


Surveying decades of cognitive psychology and neuroscience research, Oakley and her team show how memory works best when it involves more than storage. It’s also about retrieval, integration and pattern recognition. When we repeatedly retrieve information, our brains form durable memory schemata and neural manifolds. These structures are indispensable for intuitive reasoning, error-checking and smooth skill execution. But if we default to “just Google it,” those processes so fundamental for innovation and critical thinking may never fully develop, particularly in the smartphone generation.

A key insight from the paper is the connection between deep learning behaviours in artificial neural networks (consider “grokking” in which patterns suddenly crystallize after extensive machine training) and human learning. Just as machines benefit from structured, repeated exposure before grasping deep patterns, so do humans. Practice, retrieval and timed repetition develop intuition and mastery.

Atrophy of mental exercise

The researchers sound a cautionary note. Purely constructivist or discovery‑based teaching, starting with assumptions that “students know best” and need little guidance, can short‑circuit mental muscle‑building, especially in our AI world. The team found that when students rely too early on AI or calculators, they skip key steps in the cognitive sequence: encoding, retrieval, consolidation and mastery of the brain’s essential building blocks. The result is individuals whose mental processes are more dependent upon guesswork, superficial grasp of critical facts and background knowledge and less flexible thinking.

Even techno skeptics see a role for digital tools. Oakley and her colleagues argue for what they term cognitive complementarity — a marriage of strong internal knowledge and smart external tools. ChatGPT or calculators should enhance — not replace — our deep mental blueprints that let us evaluate, refine and build upon AI output. That’s the real challenge that lies ahead.

The latest cognitive research has profound implications for educational leaders, consultants and classroom teachers. Popular progressive and constructionist approaches, which give students considerable autonomy, may have exacerbated the problem. It’s time to embrace lessons from the new science of learning to turn the situation around in today’s classrooms. This includes reintegrating retrieval practice (automatic recall of information from memory), spaced repetition and step-by-step skills progression in Grades K-12.
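One of those techniques, spaced repetition, can be illustrated with a toy scheduler that expands the review interval after each successful recall. Real algorithms such as SM-2 are considerably more elaborate; the doubling rule and the dates below are purely illustrative.

```python
# Toy spaced-repetition scheduler: intervals expand after successful recall
# and reset after a lapse. Parameters are illustrative, not a real algorithm.

from datetime import date, timedelta

def next_review(last_interval_days: int, recalled: bool) -> int:
    """Double the interval after a successful recall; reset after a lapse."""
    if not recalled:
        return 1                           # relearn tomorrow
    return max(1, last_interval_days * 2)  # expand spacing on success

# Simulate a card recalled successfully four times in a row.
interval = 1
today = date(2025, 9, 5)
schedule = []
for _ in range(4):
    interval = next_review(interval, recalled=True)
    today += timedelta(days=interval)
    schedule.append((interval, today.isoformat()))

print([iv for iv, _ in schedule])  # -> [2, 4, 8, 16]
```

The expanding gaps are the point: each retrieval happens just as the memory would otherwise fade, which is what makes the practice effortful and durable.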

Using your head

What are the new and emerging essentials in the AI-dominated world? Oakley and her team deliver some sound recommendations, including:

  • Teaching students to limit AI use and delay offloading.
  • Training teachers to design AI-inclusive but memory-supportive curriculums, demonstrating that effective AI use requires prior knowledge and the ability to distinguish fact from fiction.
  • Guiding institutions to adopt AI in ways that build upon, not supplant, the human brain, such as editing original prose or mapping data.

Using our heads and tapping into our memory banks must not become obsolete. They are essential mental activities. Access to instant information can and does foster lazy habits of mind. British education researcher Carl Hendrick put it this way: “The most advanced AI can simulate intelligence, but it cannot think for you. That task remains, stubbornly and magnificently, human.”

The most important form of memory is still the one inside our heads.

*Composed in a fierce dialectical encounter with ChatGPT.

This article first appeared on Policy Options and is republished here under a Creative Commons license.





The human thinking behind artificial intelligence



Artificial intelligence is built on the thinking of intelligent humans, including data labellers who are paid as little as US$1.32 per hour. Zena Assaad, an expert in human-machine relationships, examines the price we’re willing to pay for this technology. This article was originally published in the Cosmos Print Magazine in December 2024.

From Blade Runner to The Matrix, science fiction depicts artificial intelligence as a mirror of human intelligence. It’s portrayed as holding a capacity to evolve and advance with a mind of its own. The reality is very different.

The original conceptions of AI, which hailed from the earliest days of computer science, defined it as the replication of human intelligence in machines. This definition invites debate on the semantics of the notion of intelligence.

Can human intelligence be replicated?

The idea of intelligence is not contained within one neat definition. Some view intelligence as an ability to remember information, others see it as good decision making, and some see it in the nuances of emotions and our treatment of others.

As such, human intelligence is an open and subjective concept. Replicating this amorphous notion in a machine is very difficult.

Software is the foundation of AI, and software is binary in its construction, made of two parts. In software, numbers and values are expressed as 1 or 0, true or false. This dichotomous design does not reflect the many shades of grey of human thinking and decision making.

Not everything is simply yes or no. Part of that nuance comes from intent and reasoning, which are distinctly human qualities.

To have intent is to pursue something with an end or purpose in mind. AI systems can be thought to have goals, in the form of functions within the software, but this is not the same as intent. The main difference is that goals are specific, measurable objectives, whereas intent is the underlying purpose and motivation behind them.

You might define the goals as ‘what’, and intent as ‘why’.

To have reasoning is to consider something with logic and sensibility, drawing conclusions from old and new information and experiences. It is based on understanding rather than pattern recognition. AI does not have the capacity for intent and reasoning and this challenges the feasibility of replicating human intelligence in a machine.

There is a cornucopia of principles and frameworks that attempt to address how we design and develop ethical machines. But if AI is not truly a replication of human intelligence, how can we hold these machines to human ethical standards?

Can machines be ethical?

Ethics is the study of morality: right and wrong, good and bad. Imparting ethics to a machine, which is distinctly not human, seems redundant. How can we expect a binary construct, which cannot reason, to behave ethically?

Similar to the semantic debate around intelligence, defining ethics is its own Pandora’s box. Ethics is amorphous, changing across time and place. What is ethical to one person may not be to another. What was ethical 5 years ago may not be considered appropriate today.

These changes are based on many things; culture, religion, economic climates, social demographics, and more. The idea of machines embodying these very human notions is improbable, and so it follows that machines cannot be held to ethical standards. However, what can and should be held to ethical standards are the people who make decisions for AI.

Contrary to popular belief, technology of any form does not develop of its own accord. The reality is that its evolution has been puppeteered by humans. Human beings are the ones designing, developing, manufacturing, deploying and using these systems.

If an AI system produces an incorrect or inappropriate output, it is because of a flaw in the design, not because the machine is unethical.

The concept of ethics is fundamentally human. To apply this term to AI, or any other form of technology, anthropomorphises these systems. Attributing human characteristics and behaviours to a piece of technology creates misleading interpretations of what that technology is and is not capable of.

Decades-long messaging about synthetic humans and killer robots has shaped how we conceptualise the advancement of technology, in particular technology which claims to replicate human intelligence.

AI applications have scaled exponentially in recent years, with many AI tools being made freely available to the general public. But freely accessible AI tools come at a cost. In this case, the cost is ironically in the value of human intelligence.

The hidden labour behind AI

At a basic level, artificial intelligence works by finding patterns in data, which involves more human labour than you might think.

ChatGPT is one example of AI, referred to as a large language model (LLM). ChatGPT is trained on carefully labelled data which adds context, in the form of annotations and categories, to what is otherwise a lot of noise.

Using labelled data to train an AI model is referred to as supervised learning. Labelling an apple as “apple”, a spoon as “spoon”, a dog as “dog”, helps to contextualise these pieces of data into useful information.

When you enter a prompt into ChatGPT, it scours the data it has been trained on to find patterns matching those within your prompt. The more detailed the data labels, the more accurate the matches. Labels such as “pet” and “animal” alongside the label “dog” provide more detail, creating more opportunities for patterns to be exposed.
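The labelling-then-pattern-matching idea can be sketched in miniature. The snippet below is an illustrative toy, not how an LLM is actually trained: the dataset, the features and the nearest-neighbour rule are all invented for demonstration. What it does show is how human-provided labels turn raw numbers into something a model can match new inputs against.

```python
# Minimal sketch of supervised learning: human annotators provide labels,
# and the "model" finds the labelled example whose pattern best matches a
# new input. Dataset and features are invented for illustration.

import math

# Labelled training data: (features, label) pairs from human annotators.
# Toy features: (weight in grams, roundness on a 0-1 scale).
training_data = [
    ((150.0, 0.90), "apple"),
    ((170.0, 0.85), "apple"),
    ((40.0, 0.20), "spoon"),
    ((45.0, 0.15), "spoon"),
]

def classify(features):
    """1-nearest-neighbour: return the label of the closest training example."""
    nearest = min(training_data, key=lambda pair: math.dist(pair[0], features))
    return nearest[1]

print(classify((160.0, 0.88)))  # close to the apple examples -> apple
```

Swap the four hand-labelled rows for billions of annotated text fragments and the distance rule for a learned statistical model, and the same principle underlies the supervised stages of training systems like ChatGPT: the labels supply the context that makes raw data usable.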

Data is made up of an amalgam of content (images, words, numbers, etc.) and it requires this context to become useful information that can be interpreted and used.

As the AI industry continues to grow, there is a greater demand for developing more accurate products. One of the main ways of achieving this is through more detailed and granular labels on training data.

Data labelling is a time-consuming and labour-intensive process. In the absence of this work, data is not usable or understandable by an AI model that operates through supervised learning.

Despite the task being essential to the development of AI models and tools, the work of data labellers often goes entirely unnoticed and unrecognised.

Data labelling is done by human experts, and these people are most commonly from the Global South – Kenya, India and the Philippines. This is because data labelling is labour-intensive work and labour is cheaper in the Global South.

Data labellers are forced to work under stressful conditions, reviewing content depicting violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest.

Data labellers are pressured to meet high demands within short timeframes. For this, they earn as little as US$1.32 per hour, according to TIME magazine’s 2023 reporting, based on an OpenAI contract with data labelling company Sama.

Countries such as Kenya, India and the Philippines incur less legal and regulatory oversight of worker rights and working conditions.

Similar to the fast fashion industry, cheap labour enables cheaply accessible products, or in the case of AI, it’s often a free product.

AI tools are commonly free or cheap to access and use because costs are being cut around the hidden labour that most people are unaware of.

When thinking about the ethics of AI, cracks in the supply chain of development rarely come to the surface of these discussions. People are more focused on the machine itself, rather than how it was created. How a product is developed, be it an item of clothing, a TV, furniture or an AI-enabled capability, has societal and ethical impacts that are far reaching.

A numbers game

In today’s digital world, organisational incentives have shifted beyond revenue and now include metrics around the number of users.

Releasing free tools for the public to use exponentially scales the number of users and opens pathways for alternate revenue streams.

That means we now have a greater level of access to technology tools at a fraction of the cost, or even at no monetary cost at all. This is a recent and rapid change in the way technology reaches consumers.

In 2011, 35% of Americans owned a smartphone; by 2024, 97% owned a mobile phone of some kind. In 1973, a new TV retailed for US$379.95, equivalent to US$2,694.32 today. Today, a new TV can be purchased for much less than that.

Increased manufacturing has historically been accompanied by cost cutting in both labour and quality. We accept poorer quality products because our expectations around consumption have changed. Instead of buying things to last, we now buy things with the expectation of replacing them.

The fast fashion industry is an example of hidden labour and of how readily consumers accept it. Between 1970 and 2020, the average British household decreased its annual spending on clothing even as the average consumer bought 60% more pieces of clothing.

The allure of cheap or free products seems to dispel ethical concerns around labour conditions. Similarly, the allure of intelligent machines has created a facade around how these tools are actually developed.

Achieving ethical AI

Artificial intelligence technology cannot embody ethics; however, the manner in which AI is designed, developed and deployed can.

In 2021, UNESCO released a set of recommendations on the ethics of AI, which focus on the impacts of the implementation and use of AI. The recommendations do not address the hidden labour behind the development of AI.

Misinterpretations of AI, particularly those which encourage the idea of AI developing with a mind of its own, isolate the technology from the people designing, building and deploying that technology. These are the people making decisions around what labour conditions are and are not acceptable within their supply chain, what remuneration is and isn’t appropriate for the skills and expertise required for data labelling.

If we want to achieve ethical AI, we need to embed ethical decision making across the AI supply chain; from the data labellers who carefully and laboriously annotate and categorise an abundance of data through to the consumers who don’t want to pay for a service they have been accustomed to thinking should be free.

Everything comes at a cost, and ethics is about what costs we are and are not willing to pay.




