AI 101: What is AI, Anyway? And Other Questions You’ve Been Too Shy to Ask


In this first installment of PAI’s Summer School Series, we’re breaking down the basics of artificial intelligence. Whether you are an AI pro or just beginning to explore it, this primer will help sharpen your understanding of the technology changing the world.

What is AI?

Human intelligence is defined as the ability to learn, reason, and apply knowledge or skills to solve problems. Artificial intelligence, or AI, is not like human intelligence, but it is designed to simulate it. One very important distinction between humans and AI is that AI systems do not “think”; they reason and solve problems based on how they are trained. AI systems are built to process information, recognize patterns, and make decisions on a much larger scale, and at a much faster rate, than humans.

There are different types of AI. Narrow AI, or “weak” AI, is the only kind of AI that exists today. It is designed to handle very specific tasks and functions, like recommending certain kinds of videos on social media or generating text (via AI assistants like ChatGPT). It can do many specific tasks very well, but it can’t generalize or think beyond its programming. You may hear the terms “AGI” and “superintelligence” used as people speculate about the future capabilities of AI, but as of now, both are hypothetical types of AI.

Common AI Terms You Might Have Heard

Many people first heard of AI in 2022, when OpenAI released its generative AI chatbot, ChatGPT. While not a new concept, this AI tool became a viral sensation due to its accessibility and its impressive ability to “understand” and generate human-like text. Since the initial release of GPT-3.5, AI technology has taken center stage in the tech field, flooding the media with jargon and confusing terms. So what do these common AI terms even mean?

  • Foundation Model: Foundation models are AI systems with generally applicable functions that are designed to be used across a variety of contexts. The current generation of these systems is characterized by training deep learning models on large datasets (which requires significant computational resources) to perform numerous tasks that can serve as the “foundation” for a wide array of downstream applications.
  • Large Language Model (LLM): LLMs are a type of foundation model that underpins many systems, including generative AI chatbots. They are trained on very large sets of data and use machine learning techniques to improve and refine their performance over time.
  • Generative AI: Generative AI is a kind of AI system that can produce text, images, audio, and video. Examples of generative AI systems are Claude, Gemini, ChatGPT, Midjourney, and Sora. These AI systems work by generating Synthetic Media when a user prompts the system with a specific request.
  • Agentic AI: This kind of AI system can act on behalf of users, with some degree of autonomy, to achieve goals without human intervention or guidance. An agent aims to understand a user’s general goal and use context to solve specific problems without explicit instructions. For example, where ChatGPT can generate a pizza recipe for you if asked, an AI agent can find the best pizza place near you, book a reservation, and schedule it for when you are available during the week (see the sketch after this list).
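
To make that concrete, here is a toy sketch in Python of the pizza example above. Everything in it is invented for illustration: the functions stand in for real search, calendar, and booking tools, and no actual AI framework is involved. The point is simply that an agent chains tools together to satisfy a general goal rather than only generating text.

```python
# Toy sketch of an "agentic" workflow. All functions are invented stand-ins
# for real tools (search, calendar, booking APIs); no real AI framework is
# used. The agent pursues a general goal ("get me pizza this week") by
# chaining tool calls, rather than just producing text.

def search_restaurants(cuisine: str) -> list[dict]:
    # Stand-in for a restaurant-search API call.
    return [{"name": "Sal's Pizza", "rating": 4.8},
            {"name": "Mario's", "rating": 4.3}]

def free_slot() -> str:
    # Stand-in for a calendar lookup.
    return "Friday 19:00"

def book_table(restaurant: dict, time: str) -> str:
    # Stand-in for a reservation API call.
    return f"Booked {restaurant['name']} for {time}"

def pizza_agent() -> str:
    # The "agent": pick the best-rated option, find a free slot, book it.
    best = max(search_restaurants("pizza"), key=lambda r: r["rating"])
    return book_table(best, free_slot())

print(pizza_agent())  # -> Booked Sal's Pizza for Friday 19:00
```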

Although generative AI has become the most popular form of AI, there’s more to it. AI has actually been around since 1956, but the concepts that led to its development are much older, with some theorizing that it all started as far back as the eighteenth century. AI is in many of the applications and systems you interact with on a daily basis. When you apply for a car loan, an AI system runs in the background to weigh whether or not you should qualify. When you scroll through Netflix to pick your next binge-watch, an AI system runs in the background to decide which shows or movies to recommend based on your watch history. When you go on a road trip and use Google or Apple Maps to navigate, an AI system runs in the background to optimize your route and avoid traffic. AI is everywhere, but how do these machines know what they know?

  • Machine learning: Machine learning refers to the method by which AI “learns” over time. Incorporating computer science, math, and coding, this process involves developing algorithms that help machines learn patterns from data rather than following explicitly programmed, step-by-step rules.
  • Algorithm: Algorithms are instructions that tell the computer how to make decisions, perform tasks, or execute a function autonomously. Learning algorithms look for patterns in data and, over time, improve as they work (see the sketch below).
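
If you’re wondering what “learning from data” looks like in practice, here is a minimal sketch using Python and the scikit-learn library. The viewing data is invented purely for illustration: the algorithm is never given a rule, only labeled examples, and it infers a pattern it can apply to new cases.

```python
# Minimal machine learning example with scikit-learn. The algorithm is never
# told "long watch times mean the viewer liked the show"; it infers that
# pattern from labeled examples and applies it to unseen data.
from sklearn.tree import DecisionTreeClassifier

# Each example: [minutes watched, finished the episode? (1 = yes, 0 = no)]
examples = [[45, 1], [50, 1], [38, 1], [5, 0], [12, 0], [8, 0]]
labels = ["recommend", "recommend", "recommend", "skip", "skip", "skip"]

model = DecisionTreeClassifier()
model.fit(examples, labels)        # "learning": fitting patterns in the data

print(model.predict([[42, 1]]))    # -> ['recommend']
print(model.predict([[6, 0]]))     # -> ['skip']
```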

While much of this work may sound like computers teaching computers how to “think” and “operate,” humans play a very critical role in the development of AI systems. Apart from overseeing the development of these systems, hundreds of millions of people around the world collect the data that powers AI and train these machines to identify and recognize patterns in that data. Millions more around the world are dedicated to understanding the impacts these systems have on people and society.

  • Data Workers: Data workers are individuals who perform data enrichment tasks, such as cleaning, labeling, and moderating large datasets, that are crucial for training machine learning models, especially those powering AI systems.
  • Bias: Bias in AI refers to a systematic error that leads to unfair outcomes. Bias is typically introduced through human error in programming, data collection, or training. Bias in AI systems can exacerbate preexisting risks posed to marginalized groups (a toy illustration follows this list).
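
As a deliberately simplified illustration of how bias creeps in (all numbers invented, not any real lender’s system), the sketch below trains a model on historically skewed loan decisions. Because the past data denied one group even at high incomes, the model learns to repeat that skew for otherwise identical applicants.

```python
# Oversimplified bias illustration (invented data, not a real system).
# The historical decisions below denied group 1 applicants even at high
# incomes, so the trained model reproduces that unfair pattern.
from sklearn.tree import DecisionTreeClassifier

# Each applicant: [income band (1-5), demographic group (0 or 1)]
history = [[3, 0], [4, 0], [5, 0], [3, 1], [4, 1], [5, 1]]
past_decisions = ["approve", "approve", "approve", "deny", "deny", "deny"]

model = DecisionTreeClassifier().fit(history, past_decisions)

# Two applicants identical in every respect except group membership:
print(model.predict([[5, 0], [5, 1]]))  # -> ['approve' 'deny']
```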

Responsible AI

While AI has become a powerful tool in our everyday lives, its ubiquity also gives it great potential for harm or misuse. Used irresponsibly, AI can amplify and reinforce discrimination, violate privacy, and even be used to spread false information. That is why Partnership on AI is dedicated to advancing the responsible development, deployment, and use of AI systems. AI is a tool that can be leveraged for a multitude of purposes, but it should always benefit society. To learn more about how we advance responsible AI, sign up for our newsletter.




Experts gather to discuss ethics, AI and the future of publishing



Representatives of the founding members sign the memorandum of cooperation at the launch of the Association for International Publishing Education during the 3rd International Conference on Publishing Education in Beijing. CHINA DAILY

Publishing stands at a pivotal juncture, said Jeremy North, president of Global Book Business at Taylor & Francis Group, addressing delegates at the 3rd International Conference on Publishing Education in Beijing. Digital intelligence is fundamentally transforming the sector — and this revolution will inevitably create “AI winners and losers”.

True winners, he argued, will be those who embrace AI not as a replacement for human insight but as a tool that strengthens publishing’s core mission: connecting people through knowledge. The key is balance, North said, using AI to enhance creativity without diminishing human judgment or critical thinking.

This vision set the tone for the event where the Association for International Publishing Education was officially launched — the world’s first global alliance dedicated to advancing publishing education through international collaboration.

Unveiled at the conference cohosted by the Beijing Institute of Graphic Communication and the Publishers Association of China, the AIPE brings together nearly 50 member organizations with a mission to foster joint research, training, and innovation in publishing education.

Tian Zhongli, president of BIGC, stressed the need to anchor publishing education in ethics and humanistic values and reaffirmed BIGC’s commitment to building a global talent platform through AIPE.

BIGC will deepen academic-industry collaboration through AIPE to provide a premium platform for nurturing high-level, holistic, and internationally competent publishing talent, he added.

Zhang Xin, secretary of the CPC Committee at BIGC, emphasized that AIPE is expected to help globalize Chinese publishing scholarship, contribute new ideas to the industry, and cultivate a new generation of publishing professionals for the digital era.

Themed “Mutual Learning and Cooperation: New Ecology of International Publishing Education in the Digital Intelligence Era”, the conference also tackled a wide range of challenges and opportunities brought on by AI — from ethical concerns and content ownership to protecting human creativity and rethinking publishing values in higher education.

Wu Shulin, president of the Publishers Association of China, cautioned that while AI brings major opportunities, “we must not overlook the ethical and security problems it introduces”.

Catriona Stevenson, deputy CEO of the UK Publishers Association, echoed this sentiment. She highlighted how British publishers are adopting AI to amplify human creativity and productivity, while calling for global cooperation to protect intellectual property and combat infringement by AI tools.

The conference aims to explore innovative pathways for the publishing industry and education reform, discuss emerging technological trends, advance higher education philosophies and talent development models, promote global academic exchange and collaboration, and empower knowledge production and dissemination through publishing education in the digital intelligence era.





Lavender’s Role in Targeting Civilians in Gaza



The world today is war-torn, from Russia’s attacks on Ukraine to Israel’s devastation in Palestine and now in Iran, putting the whole of West Asia in jeopardy.

The geometry of war has changed completely, from the Blitzkrieg (“lightning war”) of World War II to the sophisticated, technologically driven missiles of the latest armed conflicts. The most recent wars are increasingly driven by the use of artificial intelligence (AI) to narrow down potential targets.

Multiple pieces of evidence indicate that Israeli forces have deployed novel AI-driven targeting tools in Gaza. One system, nicknamed “Lavender”, is an AI-enabled database that assigns risk scores to Gazans based on patterns in their personal data (communications, social connections) to identify “suspected Hamas or Islamic Jihad operatives”. Lavender reportedly flagged up to 37,000 Palestinians as potential targets early in the war.

A second system, “Where is Daddy?”, uses mobile phone location tracking to notify operators when a marked individual is at home. The initial strikes guided by these automatically generated target lists hit individuals in their private homes on the pretext of targeting terrorists, yet innocent women and young children also lost their lives in these attacks. The technology was developed to replace human acumen and strategy in identifying and targeting suspects.

According to a Human Rights Watch report (2024), around 70 per cent of those killed were women and children. A United Nations agency has also verified the details of 8,119 victims killed in Gaza from November 2023 to April 2024; 44 per cent of the victims were children and 26 per cent were women. Humans are left at the mercy of this sophisticated technology, which identifies suspected militants and targets them.

The use of AI-based tools like “Lavender” and “Where’s Daddy?” by Israel in its war on Palestine raises serious questions about states’ commitment to the international legal framework and the ethics of war. Such sophisticated AI targeting tools also put weaker nations at the mercy of powerful ones, which can use these technologies to inflict suffering on non-combatants.

International humanitarian law (IHL) and international human rights law (IHRL) play a critical yet complex role in conflict situations such as the Israel-Palestine conflict. AI-based warfare of this kind violates the international legal framework’s principles of distinction, proportionality and precaution.

AI systems do not inherently know who is a combatant. Investigations report that Lavender had an error rate on the order of 10 per cent and routinely flagged non-combatants (police, aid workers, people who merely shared a name with militants). The reported practice of pre-authorising dozens of civilian deaths per strike grossly violates the proportionality rule.

An attack is illegal if incidental civilian loss is “excessive” in relation to military gain. For example, one source noted that each kill-list target came with an allowed “collateral damage degree” (often 15–20) regardless of the specific context. Allowing such broad civilian loss per target contradicts IHL’s core balancing test (ICRC Rule 14).

The AI-driven process has eliminated normal safeguards (verification, warnings, retargeting). IHRL continues to apply alongside IHL in armed conflict contexts. In particular, the right to life (ICCPR Article 6) obliges states to prevent arbitrary killing.

The International Court of Justice has held that while the right to life remains in force during war, an “arbitrary deprivation of life” must be assessed by reference to the laws of war. In practice, this means that IHL’s rules become the benchmark for whether killings are lawful.

However, even accepting lex specialis (the specific law prevailing over the general law), the reported AI strikes raise grave human rights concerns, especially regarding the right to life (ICCPR Article 6) and the right to privacy (ICCPR Article 17).

The ethics of war, called ‘jus in bello’ in legal parlance and based on the principles of proportionality (the anticipated moral cost of war) and differentiation (between combatants and non-combatants), have also been violated. Article 51(5) of Additional Protocol I (1977) to the Geneva Conventions provides that an attack is disproportionate, and thus indiscriminate, if it “may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated”.

The Israel Defense Forces have been using AI indiscriminately to select targets. Though aimed at militants, these strikes have extended to non-military targets as well, causing casualties among civilians and non-combatants. The methods used in a war are like a trigger: once pulled, they are extremely difficult to retract and reconcile. Such unethical action creates more fault lines, and any subsequent attempt at peace resolution and mediation becomes extremely difficult.

The documented features of systems like Lavender and Where’s Daddy? (automated kill lists, minimal human oversight, fixed civilian casualty “quotas” and the use of imprecise munitions against suspects in their homes) appear to contravene these legal and ethical principles.

Unless rigorously constrained, such tools risk turning warfare into the arbitrary slaughter of civilians, undermining the core humanitarian goals of IHL and the ethics of war. It is therefore extremely important to rein in the unregulated use of AI in war, which risks perpetuating war crimes and undermines the legal and ethical considerations of humanity at large.


