AlphaGeometry: An Olympiad-level AI system for geometry

Our AI system surpasses the state-of-the-art approach for geometry problems, advancing AI reasoning in mathematics
Reflecting the Olympic spirit of ancient Greece, the International Mathematical Olympiad is a modern-day arena for the world’s brightest high-school mathematicians. The competition not only showcases young talent, but has emerged as a testing ground for advanced AI systems in math and reasoning.
In a paper published today in Nature, we introduce AlphaGeometry, an AI system that solves complex geometry problems at a level approaching a human Olympiad gold-medalist – a breakthrough in AI performance. In a benchmarking test of 30 Olympiad geometry problems, AlphaGeometry solved 25 within the standard Olympiad time limit. For comparison, the previous state-of-the-art system solved 10 of these geometry problems, and the average human gold medalist solved 25.9 problems.
Benchmark comparison: in our set of 30 Olympiad geometry problems (IMO-AG-30), compiled from Olympiads held between 2000 and 2022, AlphaGeometry solved 25 under competition time limits, approaching the average score of human gold medalists on the same problems. The previous state-of-the-art approach, known as “Wu’s method”, solved 10.
AI systems often struggle with complex problems in geometry and mathematics due to a lack of reasoning skills and training data. AlphaGeometry combines the predictive power of a neural language model with a rule-bound deduction engine, which work in tandem to find solutions. And by developing a method to generate a vast pool of synthetic training data – 100 million unique examples – we can train AlphaGeometry without any human demonstrations, sidestepping the data bottleneck.
With AlphaGeometry, we demonstrate AI’s growing ability to reason logically, and to discover and verify new knowledge. Solving Olympiad-level geometry problems is an important milestone in developing deep mathematical reasoning on the path towards more advanced and general AI systems. We are open-sourcing the AlphaGeometry code and model, and hope that together with other tools and approaches in synthetic data generation and training, it helps open up new possibilities across mathematics, science, and AI.
“It makes perfect sense to me now that researchers in AI are trying their hands on the IMO geometry problems first, because finding solutions for them works a little bit like chess in the sense that we have a rather small number of sensible moves at every step. But I still find it stunning that they could make it work. It’s an impressive achievement.”
– Ngô Bảo Châu, Fields Medalist and IMO gold medalist
AlphaGeometry adopts a neuro-symbolic approach
AlphaGeometry is a neuro-symbolic system made up of a neural language model and a symbolic deduction engine, which work together to find proofs for complex geometry theorems. Akin to the idea of “thinking, fast and slow”, one system provides fast, “intuitive” ideas, and the other, more deliberate, rational decision-making.
Because language models excel at identifying general patterns and relationships in data, they can quickly predict potentially useful constructs, but often lack the ability to reason rigorously or explain their decisions. Symbolic deduction engines, on the other hand, are based on formal logic and use clear rules to arrive at conclusions. They are rational and explainable, but they can be “slow” and inflexible – especially when dealing with large, complex problems on their own.
AlphaGeometry’s language model guides its symbolic deduction engine towards likely solutions to geometry problems. Olympiad geometry problems are based on diagrams that need new geometric constructs to be added before they can be solved, such as points, lines or circles. AlphaGeometry’s language model predicts which new constructs would be most useful to add, from an infinite number of possibilities. These clues help fill in the gaps and allow the symbolic engine to make further deductions about the diagram and close in on the solution.
AlphaGeometry solving a simple problem: Given the problem diagram and its theorem premises (left), AlphaGeometry (middle) first uses its symbolic engine to deduce new statements about the diagram until the solution is found or new statements are exhausted. If no solution is found, AlphaGeometry’s language model adds one potentially useful construct (blue), opening new paths of deduction for the symbolic engine. This loop continues until a solution is found (right). In this example, just one construct is required.
AlphaGeometry solving an Olympiad problem: Problem 3 of the 2015 International Mathematical Olympiad (left) and a condensed version of AlphaGeometry’s solution (right). The blue elements are added constructs. AlphaGeometry’s solution has 109 logical steps.
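To make the loop concrete, here is a minimal Python sketch of the alternation described above. It is an illustration only: every class and method name is a hypothetical stand-in, not the open-sourced AlphaGeometry API.

```python
# Minimal, illustrative sketch of AlphaGeometry's alternation loop.
# All names are hypothetical stubs for exposition, not the real API.

class SymbolicEngine:
    def deduce_closure(self, statements):
        """Stub: apply formal deduction rules until no new statements appear."""
        return set(statements)  # a real engine would add derived statements

class LanguageModel:
    def propose_construct(self, statements, goal):
        """Stub: predict one potentially useful auxiliary construct."""
        return f"aux_construct_{len(statements)}"  # placeholder suggestion

def solve(premises, goal, engine, model, max_constructs=10):
    state = set(premises)
    for _ in range(max_constructs):
        # "Slow" system: exhaust rule-based deduction from current statements.
        state |= engine.deduce_closure(state)
        if goal in state:
            return state  # a real system would trace back the proof steps
        # "Fast" system: add one suggested construct (a point, line or
        # circle), opening new deduction paths for the symbolic engine.
        state.add(model.propose_construct(state, goal))
    return None  # no proof found within the construct budget
```

With real deduction rules and a trained model in place of the stubs, the same loop terminates as soon as the goal statement enters the deduced set, and the sequence of constructs and deductions forms the human-readable proof.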
Generating 100 million synthetic data examples
Geometry relies on an understanding of space, distance, shape, and relative positions, and is fundamental to art, architecture, engineering and many other fields. Humans can learn geometry using a pen and paper, examining diagrams and using existing knowledge to uncover new, more sophisticated geometric properties and relationships. Our synthetic data generation approach emulates this knowledge-building process at scale, allowing us to train AlphaGeometry from scratch, without any human demonstrations.
Using highly parallelized computing, the system started by generating one billion random diagrams of geometric objects and exhaustively derived all the relationships between the points and lines in each diagram. AlphaGeometry found all the proofs contained in each diagram, then worked backwards to find out what additional constructs, if any, were needed to arrive at those proofs. We call this process “symbolic deduction and traceback”.
Visual representations of the synthetic data generated by AlphaGeometry
That huge data pool was filtered to exclude similar examples, resulting in a final training dataset of 100 million unique examples of varying difficulty, of which nine million featured added constructs. With so many examples of how these constructs led to proofs, AlphaGeometry’s language model is able to make good suggestions for new constructs when presented with Olympiad geometry problems.
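As a rough sketch of that pipeline, the Python below pairs random diagram generation with deduction and traceback, followed by deduplication. Every function here is an illustrative stub standing in for the real geometry machinery described in the paper.

```python
# Hypothetical sketch of "symbolic deduction and traceback" for building
# synthetic training data. Every function is an illustrative stub.
import random

def random_diagram(seed):
    """Stub: sample a random set of geometric premises."""
    rng = random.Random(seed)
    return {f"premise_{i}" for i in range(rng.randint(3, 8))}

def deduce_closure(premises):
    """Stub: exhaustively derive all statements implied by the premises."""
    return premises | {f"derived_{min(premises)}"}

def traceback(conclusion, premises):
    """Stub: recover the minimal premises and added constructs that prove it."""
    return tuple(sorted(premises)[:2])

def generate_training_set(n_diagrams):
    examples = set()  # using a set deduplicates identical examples
    for seed in range(n_diagrams):
        premises = random_diagram(seed)
        for conclusion in deduce_closure(premises) - premises:
            # Each traced-back (premises -> conclusion) pair is one example.
            examples.add((traceback(conclusion, premises), conclusion))
    return examples
```

At production scale, this style of pipeline starts from a billion random diagrams and, after filtering out near-duplicates, keeps the 100 million unique examples used for training.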
Pioneering mathematical reasoning with AI
The solution to every Olympiad problem provided by AlphaGeometry was checked and verified by computer. We also compared its results with previous AI methods, and with human performance at the Olympiad. In addition, Evan Chen, a math coach and former Olympiad gold-medalist, evaluated a selection of AlphaGeometry’s solutions for us.
Chen said: “AlphaGeometry’s output is impressive because it’s both verifiable and clean. Past AI solutions to proof-based competition problems have sometimes been hit-or-miss (outputs are only correct sometimes and need human checks). AlphaGeometry doesn’t have this weakness: its solutions have machine-verifiable structure. Yet despite this, its output is still human-readable. One could have imagined a computer program that solved geometry problems by brute-force coordinate systems: think pages and pages of tedious algebra calculation. AlphaGeometry is not that. It uses classical geometry rules with angles and similar triangles just as students do.”
As each Olympiad features six problems, only two of which are typically focused on geometry, AlphaGeometry can only be applied to one-third of the problems at a given Olympiad. Nevertheless, its geometry capability alone makes it the first AI model in the world capable of passing the bronze medal threshold of the IMO in 2000 and 2015.
In geometry, our system approaches the standard of an IMO gold-medalist, but we have our eye on an even bigger prize: advancing reasoning for next-generation AI systems. Given the wider potential of training AI systems from scratch with large-scale synthetic data, this approach could shape how the AI systems of the future discover new knowledge, in math and beyond.
AlphaGeometry builds on Google DeepMind and Google Research’s work to pioneer mathematical reasoning with AI – from exploring the beauty of pure mathematics to solving mathematical and scientific problems with language models. And most recently, we introduced FunSearch, which made the first discoveries in open problems in mathematical sciences using large language models.
Our long-term goal remains to build AI systems that can generalize across mathematical fields, developing the sophisticated problem-solving and reasoning that general AI systems will depend on, all the while extending the frontiers of human knowledge.
Acknowledgements
This project is a collaboration between the Google DeepMind team and the Computer Science Department of New York University. The authors of this work include Trieu Trinh, Yuhuai Wu, Quoc Le, He He, and Thang Luong. We thank Rif A. Saurous, Denny Zhou, Christian Szegedy, Delesley Hutchins, Thomas Kipf, Hieu Pham, Petar Veličković, Edward Lockhart, Debidatta Dwibedi, Kyunghyun Cho, Lerrel Pinto, Alfredo Canziani, Thomas Wies, He He’s research group, Evan Chen, Mirek Olsak, and Patrik Bak for their help and support. We would also like to thank Google DeepMind leadership for their support, especially Ed Chi, Koray Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis.
Here’s how doctors say you should ask AI for medical help

The Dose: What should I know about asking ChatGPT for health advice?
Family physician Dr. Danielle Martin doesn’t mince words about artificial intelligence.
“I don’t think patients should use ChatGPT for medical advice. Period,” said Martin, chair of the University of Toronto’s department of family and community medicine.
Still, with roughly 6.5 million Canadians lacking a primary care provider, she acknowledges that physicians can’t stop patients from turning to chatbots powered by large language models (LLMs) for health answers.
Martin isn’t alone in her concerns. Physician groups like the Ontario Medical Association and researchers at institutions like Sunnybrook Health Sciences Centre caution patients against relying on AI for medical advice.
A 2025 study comparing 10 popular chatbots, including ChatGPT, DeepSeek and Claude, found “a strong bias in many widely used LLMs towards overgeneralizing scientific conclusions, posing a significant risk of large-scale misinterpretations of research findings.”
Martin and other experts believe most patients would be better served by using telehealth options available across Canada, such as dialling 811 in most provinces.
But she also told The Dose host Dr. Brian Goldman that patients who do choose to use chatbots can reduce the risk of harm by avoiding open-ended questions and restricting AI-generated answers to credible sources.
Learning to ask the right questions
Unlike traditional search engines, which answer questions with links to reputable sources, chatbots like Gemini, Claude and ChatGPT generate their own answers based on the data they were trained on.
Martin says a key challenge is figuring out how much of an AI-generated answer to a medical question is or isn’t essential information.
If you ask a chatbot something like, “I have a red rash on my leg, what could it be?” you could be given a “dump of information,” which can do more harm than good.
“My concern is that the average busy person isn’t going to be able to read and process all of that information,” she said.
What’s more, if a patient asks “What do I need to know about lupus?”, for example, they “probably don’t know enough yet about lupus to be able to screen out or recognize the stuff that actually doesn’t make sense,” said Martin.
Martin says patients are often better served by asking chatbots for help finding reliable sources, like official government websites.
Instead of asking, “Should I get this year’s flu shot?” a better question would be, “What are the most reliable websites to learn more about this year’s flu shot?”
Be careful following treatment advice
Martin says that patients shouldn’t rely on solutions recommended by AI — like purchasing topical creams for rashes — without consulting a medical expert.
For symptoms like rashes, which can have many possible causes, Martin recommends speaking to a health-care worker rather than asking an AI at all.
Some people might also worry that an AI chatbot could talk patients out of consulting real-life physicians, but family physician Dr. Onil Bhattacharyya says that’s less likely than some may fear.
“Generally the tools are … slightly risk-averse, so they might push you to more likely seek care than not,” said Bhattacharyya, director of the Women’s College Hospital Institute for Health System Solutions and Virtual Care.
Bhattacharyya is interested in how technology can support clinical care, and says artificial intelligence could be a way to democratize access to medical expertise.
He uses tools like OpenEvidence, which compiles information from medical journals and gives answers that are accessible to most health professionals.
The Quebec government says it’s launching a pilot project involving artificial intelligence transcription tools for health-care professionals, a growing number of whom say such tools cut down the time they spend on paperwork.
Still, Bhattacharyya recognizes that it can be more challenging for patients to determine the reliability of medical advice from an AI.
“As a doctor, I can critically appraise that information,” but it isn’t always easy for patients to do the same, he said.
Bhattacharyya also said chatbots can suggest treatment options that are available in other countries but not in Canada, since many of them draw on American medical literature.
Despite her hesitations, Martin acknowledges there are some things an AI can do better than human physicians — like recalling a long list of possible conditions associated with a symptom.
“On a good day, we’re best at identifying the things that are common and the things that are dangerous,” she said.
“I would imagine that if you were to ask the bot, ‘What are all of the possible causes of low platelets?’ or whatever, it would probably include six things on the list that I have forgotten about because I haven’t seen or heard about them since third year medical school.”
Can patients with chronic conditions benefit from AI?
For his part, Bhattacharyya also sees AI as a way to empower people to improve their health literacy.
A chatbot can help patients with chronic conditions looking for general information in simple language, though he cautions against “exploring nonspecific symptoms and their implications.”
“In primary care we see a large number of people with nonspecific symptoms,” he said.
“I have not tested this, but I suspect the chatbots are not great at saying ‘I don’t know what is causing this but let’s just monitor it and see what happens.’ That’s what we say as family doctors much of the time.”
When gone isn’t goodbye

Kim Komando
🕯️ This is very personal.
An AI company reached out to me recently with an interesting offer. They’d take the photos, videos and voice recordings I have of my mom and use them to create an AI version of her. Not a slideshow or tribute video. Something interactive. When they were done, I could talk to my AI mom and have it talk back to me.
My mom passed away after a five-year battle with pancreatic cancer on Sept. 19, 2021. I say a prayer for her every morning when I wake up, and for my father, too. I know they’re reunited in heaven.
That photo above is my college graduation. I love that my parents are holding hands. As a prank, they brought me an AT&T T-shirt and balloon because I was interviewing for a job. Now you know it’s in my blood.
After my father died, my mother moved in with me when I was 27, and we became more sisters than anything else. When Barry asked me to marry him, I said, “You do know that Mom and I come as a set.”
I miss her every day. My heart still aches. I’m pushing back tears now writing this. I talk to her like she’s in the room, sometimes pointing out a great sunset or telling her she was right about the throw pillows. There really are too many on the couch.
🧠 A memory or a machine?
The idea of hearing her voice again feels comforting and frightening at the same time. Could I sit across from a screen and listen to her give advice or make me laugh with her great one-liners? Would it feel like a gift or a ghost?
This isn’t sci-fi. It’s real, right now.
These digital recreations, often called “deathbots,” use artificial intelligence trained on someone’s personal data to bring them back in a virtual form. Through them, some families talk to parents, spouses, even children who are no longer here.
In one case, a journalist interviewed an AI recreation of a school shooting victim. In China, companies offer this service as part of the grieving process.
🧬 The rise of generative ghosts
The tech behind this is evolving fast. Google researchers are working on “generative ghosts.” These aren’t just replicas. They are digital stand-ins that can learn, grow and even make decisions on someone’s behalf.
Think about an AI version of your grandmother telling you what it was like to start a window washing business in NYC when she only spoke Ukrainian (mine did). Or a digital parent reading bedtime stories to the grandkids he never met.
⚖️ Crossing the line?
Some say it brings closure. Others say it crosses a line.
Therapists warn this could complicate grief. People might hold on too tightly. These bots can create idealized versions of loved ones and blur the line between memory and reality.
And what about consent? If someone didn’t explicitly say yes to being turned into a bot, should it happen? Only a handful of states have laws that protect your image or voice after death. In most places, it’s a gray area.
I’m still sitting with the offer. I certainly have everything they’d need. Videos. Voicemails. Photos. A lot of audio of me interviewing her on my show.
💭 So … would you?
Should I? If you had the chance to hear the voice of someone you’ve lost, even if it wasn’t really them, would you want to? Or is it better to let those memories stay just that? I can talk myself into either place.
When you rate this newsletter at the end, tell me. I’d really like to know. If you’d like to come on my show and talk about it with me, be sure to leave your email address. I’d love that.
As they face conflicting messages about AI, some advice for educators on how to use it responsibly

When it comes to the rapid integration of artificial intelligence into K-12 classrooms, educators are being pulled in two very different directions.
One prevailing media narrative stokes such profound fears about the emerging strengths of artificial intelligence that it could lead one to believe it will soon be “game over” for everything we know about good teaching. At the same time, a sweeping executive order from the White House and tech-forward education policymakers paint AI as “game on” for designing the educational system of the future.
I work closely with educators across the country, and as I’ve discussed AI with many of them this spring and summer, I’ve sensed a classic “approach-avoidance” dilemma — an emotional stalemate in which they’re encouraged to run toward AI’s exciting new capabilities while also made very aware of its risks.
Even as educators are optimistic about AI’s potential, they are cautious and sometimes resistant to it. These conflicting urges to approach and avoid can be paralyzing.
What should responsible educators do? As a learning scientist who has been involved in AI since the 1980s and who conducts nationally funded research on issues related to reading, math and science, I have some ideas.
First, it is essential to keep teaching students core subject matter — and to do that well. Research tells us that students cannot learn critical thinking or deep reasoning in the abstract. They have to reason and critique on the basis of deep understanding of meaningful, important content. Don’t be fooled, for example, by the notion that because AI can do math, we shouldn’t teach math anymore.
We teach students mathematics, reading, science, literature and all the core subjects not only so that they will be well equipped to get a job, but because these are among the greatest, most general and most enduring human accomplishments.
You should use AI when it deepens learning of the instructional core, but you should also ignore AI when it’s a distraction from that core.
Second, don’t limit your view of AI to a focus on either teacher productivity or student answer-getting.
Instead, focus on your school’s “portrait of a graduate” — highlighting skills like collaboration, communication and self-awareness as key attributes that we want to cultivate in students.
Much of what we know in the learning sciences can be brought to life when educators focus on those attributes, and AI holds tremendous potential to enrich those essential skills. Imagine using AI not to deliver ready-made answers, but to help students ask better, more meaningful questions — ones that are both intellectually rigorous and personally relevant.
AI can also support student teams by deepening their collaborative efforts — encouraging the active, social dimensions of learning. And rather than replacing human insight, AI can offer targeted feedback that fuels deeper problem-solving and reflection.
When used thoughtfully, AI becomes a catalyst — not a crutch — for developing the kinds of skills that matter most in today’s world.
In short, keep your focus on great teaching and learning. Ask yourself: How can AI help my students think more deeply, work together more effectively and stay more engaged in their learning?
Third, seek out AI tools and applications that are not just incremental improvements, but let you create teaching and learning opportunities that were impossible to deliver before. And at the same time, look for education technologies that are committed to managing risks around student privacy, inappropriate or wrong content and data security.
Such opportunities for a “responsible breakthrough” will be a bit harder to find in the chaotic marketplace of AI in education, but they are there and worth pursuing. Here’s a hint: They don’t look like popular chatbots, and they may arise not from the largest commercial vendors but from research projects and small startups.
For instance, some educators are exploring screen-free AI tools designed to support early readers in real-time as they work through physical books of their choice. One such tool uses a hand-held pointer with a camera, a tiny computer and an audio speaker — not to provide answers, but to guide students as they sound out words, build comprehension and engage more deeply with the text.
I am reminded: Strong content remains central to learning, and AI, when thoughtfully applied, can enhance — not replace — the interactions between young readers and meaningful texts without introducing new safety concerns.
Thus, thoughtful educators should continue to prioritize core proficiencies like reading, math, science and writing — and use AI only when it helps to develop the skills and abilities prioritized in their desired portrait of a graduate. By adopting ed-tech tools that are focused on novel learning experiences and committed to student safety, educators will lead us to a responsible future for AI in education.
Jeremy Roschelle is the executive director of Digital Promise, a global nonprofit working to expand opportunity for every learner.
Contact the opinion editor at opinion@hechingerreport.org.
This story about AI in the classroom was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.