AI is everywhere. From the massive popularity of ChatGPT to Google cramming AI summaries at the top of its search results, AI is completely taking over the internet. With AI, you can get instant answers to pretty much any question. It can feel like talking to someone who has a Ph.D. in everything.
It’s showing up in a dizzying array of products — a short, short list includes Google’s Gemini, Microsoft’s Copilot, Anthropic’s Claude and the Perplexity search engine. You can read our reviews and hands-on evaluations of those and other products, along with news, explainers and how-to posts, at our AI Atlas hub.
As people become more accustomed to a world intertwined with AI, new terms are popping up everywhere. So whether you’re trying to sound smart over drinks or impress in a job interview, here are some important AI terms you should know.
This glossary is regularly updated.
artificial general intelligence, or AGI: A concept that suggests a more advanced version of AI than we know today, one that can perform tasks much better than humans while also teaching and advancing its own capabilities.
agentive: Systems or models that exhibit agency, with the ability to autonomously pursue actions to achieve a goal. In the context of AI, an agentive model can act without constant supervision, such as a highly autonomous car. Unlike an “agentic” framework, which works in the background, agentive frameworks are out front, focusing on the user experience.
AI ethics: Principles aimed at preventing AI from harming humans, achieved through means like determining how AI systems should collect data or deal with bias.
AI safety: An interdisciplinary field that’s concerned with the long-term impacts of AI and how it could progress suddenly to a super intelligence that could be hostile to humans.
algorithm: A series of instructions that allows a computer program to learn and analyze data in a particular way, such as recognizing patterns, to then learn from it and accomplish tasks on its own.
alignment: Tweaking an AI to better produce the desired outcome. This can refer to anything from moderating content to maintaining positive interactions with humans.
anthropomorphism: The human tendency to give nonhuman objects humanlike characteristics. In AI, this can include believing a chatbot is more humanlike and aware than it actually is, like believing it’s happy, sad or even fully sentient.
artificial intelligence, or AI: The use of technology to simulate human intelligence, either in computer programs or robotics. A field in computer science that aims to build systems that can perform human tasks.
autonomous agents: An AI model that has the capabilities, programming and other tools to accomplish a specific task. A self-driving car is an autonomous agent, for example, because it has sensory inputs, GPS and driving algorithms to navigate the road on its own. Stanford researchers have shown that autonomous agents can develop their own cultures, traditions and shared language.
bias: With regard to large language models, errors resulting from the training data. This can result in falsely attributing certain characteristics to certain races or groups based on stereotypes.
chatbot: A program that communicates with humans through text that simulates human language.
ChatGPT: An AI chatbot developed by OpenAI that uses large language model technology.
cognitive computing: Another term for artificial intelligence.
data augmentation: Remixing existing data or adding a more diverse set of data to train an AI.
dataset: A collection of digital information used to train, test and validate an AI model.
deep learning: A method of AI, and a subfield of machine learning, that uses multiple parameters to recognize complex patterns in pictures, sound and text. The process is inspired by the human brain and uses artificial neural networks to create patterns.
diffusion: A method of machine learning that takes an existing piece of data, like a photo, and adds random noise. Diffusion models train their networks to re-engineer or recover that photo.
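The forward half of that process, adding noise, can be sketched in a few lines of Python. Everything here is an illustrative simplification (a made-up four-pixel "photo" and a simple blending formula), not how production diffusion models are implemented:

```python
import random

def add_noise(pixels, noise_level):
    """Forward diffusion step: blend each pixel value with Gaussian noise.
    At noise_level=0.0 the image is untouched; at 1.0 it is pure noise.
    A diffusion model is then trained to run this process in reverse."""
    return [(1 - noise_level) * p + noise_level * random.gauss(0, 1)
            for p in pixels]

random.seed(0)
image = [0.2, 0.8, 0.5, 0.9]          # a toy 4-pixel "photo"
slightly_noisy = add_noise(image, 0.1)
mostly_noise = add_noise(image, 0.9)
```

Training runs this corruption step many times at increasing noise levels; generation then learns to step backward from pure noise toward a clean image.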
emergent behavior: When an AI model exhibits unintended abilities.
end-to-end learning, or E2E: A deep learning process in which a model is instructed to perform a task from start to finish. It’s not trained to accomplish a task sequentially but instead learns from the inputs and solves it all at once.
ethical considerations: An awareness of the ethical implications of AI and issues related to privacy, data usage, fairness, misuse and other safety issues.
foom: Also known as fast takeoff or hard takeoff. The concept that once someone builds an AGI, its capabilities could grow so quickly that it might already be too late to save humanity.
generative adversarial networks, or GANs: A generative AI model composed of two neural networks to generate new data: a generator and a discriminator. The generator creates new content, and the discriminator checks to see if it’s authentic.
generative AI: A content-generating technology that uses AI to create text, video, computer code or images. The AI is fed large amounts of training data and finds patterns in it to generate its own novel responses, which can sometimes be similar to the source material.
Google Gemini: An AI chatbot by Google that functions similarly to ChatGPT but also pulls information from Google’s other services, like Search and Maps.
guardrails: Policies and restrictions placed on AI models to ensure data is handled responsibly and that the model doesn’t create disturbing content.
hallucination: An incorrect response from AI. Can include generative AI producing answers that are incorrect but stated with confidence as if correct. The reasons for this aren’t entirely known. For example, when asking an AI chatbot, “When did Leonardo da Vinci paint the Mona Lisa?” it may respond with an incorrect statement saying, “Leonardo da Vinci painted the Mona Lisa in 1815,” which is 300 years after it was actually painted.
inference: The process AI models use to generate text, images and other content about new data, by inferring from their training data.
large language model, or LLM: An AI model trained on massive amounts of text data to understand language and generate novel content in humanlike language.
latency: The time delay from when an AI system receives an input or prompt and produces an output.
machine learning, or ML: A component in AI that allows computers to learn and make better predictive outcomes without explicit programming. Can be coupled with training sets to generate new content.
Microsoft Bing: A search engine by Microsoft that can now use the technology powering ChatGPT to give AI-powered search results. It’s similar to Google Gemini in being connected to the internet.
multimodal AI: A type of AI that can process multiple types of inputs, including text, images, videos and speech.
natural language processing: A branch of AI that uses machine learning and deep learning to give computers the ability to understand human language, often using learning algorithms, statistical models and linguistic rules.
neural network: A computational model that resembles the human brain’s structure and is meant to recognize patterns in data. Consists of interconnected nodes, or neurons, that can recognize patterns and learn over time.
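A single node of such a network can be sketched in plain Python. The inputs, weights and bias below are invented for illustration; real networks have millions or billions of these nodes:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weight each input, sum the results, then
    squash the total through a sigmoid so the output lands between 0 and 1."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Two toy neurons forming a tiny layer; the numbers are made up.
inputs = [0.5, 0.8]
layer = [neuron(inputs, [0.4, -0.6], 0.1),
         neuron(inputs, [0.9, 0.3], -0.2)]
# Training would adjust the weights and biases until the layer's
# outputs match the patterns in the data.
```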
overfitting: Error in machine learning where it functions too closely to the training data and may only be able to identify specific examples in said data, but not new data.
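A toy illustration of the idea, using a made-up "model" that memorizes its training examples instead of learning the underlying rule:

```python
# The hidden rule in this toy dataset is simply "output = input * 2".
train = {1: 2, 2: 4, 3: 6}

def memorizer(x):
    """An overfit model: perfect on training data, useless on new data."""
    return train.get(x)

def generalizer(x):
    """A model that learned the actual pattern, so it handles new inputs."""
    return x * 2

# Both ace the training set...
assert memorizer(2) == generalizer(2) == 4
# ...but only the generalizer handles data it hasn't seen.
memorizer(10)    # None -- fails on new data
generalizer(10)  # 20   -- generalizes correctly
```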
paperclips: The Paperclip Maximizer theory, coined by University of Oxford philosopher Nick Bostrom, is a hypothetical scenario in which an AI system is tasked with creating as many paperclips as possible. In pursuit of that goal, the system would hypothetically consume or convert all available materials, including dismantling machinery that could be beneficial to humans. The unintended consequence is that the AI may destroy humanity in its drive to make paperclips.
parameters: Numerical values that give LLMs structure and behavior, enabling them to make predictions.
Perplexity: The name of an AI-powered chatbot and search engine owned by Perplexity AI. It uses a large language model, like those found in other AI chatbots, but has a connection to the open internet for up-to-date results.
prompt: The suggestion or question you enter into an AI chatbot to get a response.
prompt chaining: The ability of AI to use information from previous interactions to color future responses.
quantization: The process by which an AI large language model is made smaller and more efficient (albeit slightly less accurate) by lowering its numerical precision from a higher format to a lower one. A good way to think about this is to compare a 16-megapixel image with an 8-megapixel image. Both are still clear and visible, but the higher-resolution image shows more detail when you zoom in.
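A minimal sketch of the idea using simple symmetric rounding to 8-bit integers. The weight values are made up, and real quantization schemes are considerably more sophisticated:

```python
def quantize_int8(values):
    """Map floating-point values onto 8-bit integers (-127..127)
    using a single shared scale factor."""
    scale = max(abs(v) for v in values) / 127
    return [round(v / scale) for v in values], scale

def dequantize(ints, scale):
    """Recover approximate floats from the stored integers."""
    return [i * scale for i in ints]

weights = [0.42, -1.27, 0.083, 0.9]   # made-up model weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# The restored weights are close to, but not exactly, the originals.
# That small loss of precision is the trade-off for a smaller model:
# each weight now needs 1 byte instead of 4.
```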
stochastic parrot: An analogy of LLMs that illustrates that the software doesn’t have a larger understanding of meaning behind language or the world around it, regardless of how convincing the output sounds. The phrase refers to how a parrot can mimic human words without understanding the meaning behind them.
style transfer: The ability to adapt the style of one image to the content of another, allowing an AI to interpret the visual attributes of one image and apply them to another. For example, taking the self-portrait of Rembrandt and re-creating it in the style of Picasso.
synthetic data: Data created by generative AI that isn’t from the actual world but is trained on real data. It’s used to train mathematical, ML and deep learning models.
temperature: Parameters set to control how random a language model’s output is. A higher temperature means the model takes more risks.
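A small sketch of how temperature works in practice. The logits below are made-up scores for three candidate words; the softmax-with-temperature formula is the standard way temperature is applied:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw model scores (logits) into probabilities.
    Higher temperature flattens the distribution (riskier, more varied picks);
    lower temperature sharpens it (safer, more predictable picks)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]   # hypothetical scores for three candidate words
low = softmax_with_temperature(logits, temperature=0.5)
high = softmax_with_temperature(logits, temperature=2.0)
# At low temperature the top word dominates the probability mass;
# at high temperature it spreads more evenly across all candidates.
```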
text-to-image generation: Creating images based on textual descriptions.
tokens: Small bits of written text that AI language models process to formulate their responses to your prompts. A token is equivalent to about four characters in English, or about three-quarters of a word.
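Based on that rule of thumb, you can sketch a rough token estimator. This is only an approximation; real tokenizers split text by learned subword rules, not character counts:

```python
def estimate_tokens(text):
    """Rough estimate using the rule of thumb that one token
    is about four characters of English text."""
    return max(1, round(len(text) / 4))

prompt = "Explain the Turing test in one sentence."
estimate_tokens(prompt)  # roughly 10 tokens for this 40-character prompt
```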
training data: The datasets used to help AI models learn, including text, images, code or data.
transformer model: A neural network architecture and deep learning model that learns context by tracking relationships in data, like in sentences or parts of images. So, instead of analyzing a sentence one word at a time, it can look at the whole sentence and understand the context.
Turing test: Named after famed mathematician and computer scientist Alan Turing, it tests a machine’s ability to behave like a human. The machine passes if a human can’t distinguish the machine’s response from another human.
unsupervised learning: A form of machine learning where labeled training data isn’t provided to the model and instead the model must identify patterns in data by itself.
weak AI, aka narrow AI: AI that’s focused on a particular task and can’t learn beyond its skill set. Most of today’s AI is weak AI.
zero-shot learning: A test in which a model must complete a task without being given the requisite training data. An example would be recognizing a lion while only being trained on tigers.
The Indian Institute of Technology (IIT) Delhi, in partnership with TeamLease EdTech, has introduced a comprehensive online executive programme in Artificial Intelligence (AI) in Healthcare, specially designed for working professionals across diverse domains. Scheduled to begin on November 1, 2025, the programme seeks to bridge the gap between healthcare and technology by imparting industry-relevant AI skills to professionals, including doctors, engineers, data scientists and med-tech entrepreneurs.
Applications for the programme are currently open and will remain so until July 31, 2025. Interested professionals are encouraged to submit their applications through the official IIT Delhi CEP portal.
This initiative is part of IIT Delhi’s eVIDYA platform, developed under the Continuing Education Programme (CEP), and aims to foster applied learning through a blend of theoretical instruction and hands-on experience using real clinical datasets.
The course offers a unique opportunity to upskill with one of India’s premier institutes and contribute meaningfully to the rapidly evolving field of AI-powered healthcare.
Programme overview
To help prospective applicants plan better, here is a quick summary of the programme’s key details:
Course duration: November 1, 2025 – May 2, 2026
Class schedule: Online, conducted over weekends
Programme fee: ₹1,20,000 + 18% GST (payable in two installments)
The programme is tailored for a wide spectrum of professionals who are either involved in healthcare or aspire to work at the intersection of health and technology. You are an ideal candidate if you are:
• A healthcare practitioner or clinician with limited or no background in coding or artificial intelligence, but curious to explore AI’s applications in medicine.
• An engineer, data analyst, or academic researcher engaged in health-tech innovations or biomedical computing.
• A med-tech entrepreneur or healthcare startup founder looking to incorporate AI-driven solutions into your business or products.
Curriculum overview
Participants will engage with a carefully curated curriculum that balances core concepts with real-world applications. Key modules include:
• Introduction to AI, Machine Learning (ML), and Deep Learning (DL) concepts.
• How AI is used to predict disease outcomes and assist in clinical decision-making.
• Leveraging AI in population health management and epidemiology.
• Application of AI for hospital automation and familiarity with global healthcare data standards like FHIR and DICOM.
• Over 10 detailed case studies showcasing successful AI applications in hospitals and clinics.
• A hands-on project with expert mentorship from faculty at IIT Delhi and clinicians from AIIMS, enabling learners to apply their knowledge to real clinical challenges.
Learning outcomes you can expect
By the end of this programme, participants will be equipped with the ability to:
• Leverage AI technologies to enhance clinical workflows, automate processes, and support evidence-based decision making in healthcare.
• Work effectively with diverse data sources such as Electronic Medical Records (EMRs), radiology images, genomics data, and Internet of Things (IoT)-based health devices.
• Develop and deploy functional AI models tailored for practical use in hospitals, diagnostics, and public health infrastructure.
• Earn a prestigious certification from IIT Delhi, enhancing your professional credentials in the health-tech domain.
Southern California grocery chain Gelson’s is partnering with Upshop to bring an analytical approach to its markets, using data, artificial intelligence and operational insight as it looks to punch above its weight.
By adopting Upshop’s platform, Gelson’s says it will infuse intelligence into its forecasting, total store ordering, production planning, and real-time inventory processes, ensuring every location is tuned into local demand dynamics.
This means shoppers will find what they want, when they want it, all while store teams benefit from tools that simplify workflows, reduce waste, and increase efficiency.
“In a competitive grocery landscape, scale isn’t everything – intelligence is,” says Ryan Adams, President and CEO at Gelson’s Markets. “With Upshop’s embedded platform and AI driven capabilities, we’re empowering our stores to be hyper-responsive, efficient, and focused on the guest experience. It’s how Gelson’s can compete at the highest level.”
Digital skills and technology solutions are more critical for African economies as they embrace digital transformation. Countries are positioning themselves as major tech hubs as the world goes virtual.
Sign Up Now for More Entrepreneurship Training Programs
Entrepreneurs need to master artificial intelligence and advanced AI solutions available today for business growth and development. AI skills are an important tool for promoting social and economic development, creating new jobs, and driving innovation.
MEST AI Startup Program
MEST AI Startup Program is a bold redesign of Meltwater Entrepreneurial School of Technology’s flagship Training Program. It is built to prepare West Africa’s most promising tech talents to build, launch, and scale world-class AI startups.
West Africa has world-class tech talent, and it’s time AI solutions built on the continent reach users everywhere.
The MEST AI Startup Program is a fully-funded, immersive experience hosted in Accra, Ghana. Over an intensive seven-month training phase, founders receive hands-on instruction, technical mentorship, and business coaching from companies such as OpenAI, Perplexity, and Google.
The top ventures then advance to a four-month incubation period, during which startups have the opportunity to pitch for pre-seed investment of up to $100,000 and join the MEST Portfolio.
Wayan Vota co-founded ICTworks. He also co-founded Technology Salon, MERL Tech, ICTforAg, ICT4Djobs, ICT4Drinks, JadedAid, Kurante, OLPC News and a few other things. Opinions expressed here are his own and do not reflect the position of his employer, any of its entities, or any ICTWorks sponsor.