AI Research
How Oakland teachers use — or avoid — AI in the classroom

When Calupe Kaufusi was a freshman at McClymonds High School in West Oakland, he’d use platforms like ChatGPT or Google Gemini for written assignments in his history class. But he quickly learned they weren’t infallible.
“It became kind of inconvenient,” Kaufusi said. “As I learned more about AI, I learned it wouldn’t give you correct information and we’d have to fact check it.”
Like many students, Kaufusi used generative AI platforms — where users input a prompt and receive answers in various formats, be it an email, an essay, or answers to a test — to get his work done quickly and without much effort. Now a junior, Kaufusi said he’s dialed down his AI use.
Already rampant in college and university settings, artificial intelligence software is also reshaping the K-12 education landscape. Absent a detailed policy in the Oakland Unified School District, individual teachers and schools have been left to navigate how to integrate the technology in their classrooms — or how to try to keep it out.
Some teachers told The Oaklandside they are choosing to embrace AI by incorporating it into student projects or using it to assist with their own lesson planning, while others have said they’ve rejected it for its environmental impacts and how it enables students to cut corners. Some teachers are returning to old forms of assessment, such as essays handwritten during class that can’t be outsmarted by the platforms.
What’s clear to many is that AI platforms are already ubiquitous on the internet and many students are going to use them whether their teachers advise them to or not.
Kaufusi, who is in McClymonds’ engineering pathway, is interested in studying machine learning or software engineering, so he wants to see more of his teachers discuss responsible uses for AI. “They know there’s no way to stop us” from using it, he said, “so they can try to teach us how to use it properly.”
A new policy in the works
Under current OUSD guidance, published in March, teachers and principals are left to determine whether students may use AI in their work; students who do are required to cite it. The guidance also outlines procedures for teachers who suspect a student is misusing AI, for example by representing AI-generated work as their own: first a private conversation with the student, then the collection of evidence, and finally a consultation with colleagues about appropriate discipline.
Work is underway in Oakland Unified to develop a more comprehensive AI policy for the district, said Kelleth Chinn, the district’s instructional technology coordinator. In his role, he’s been thinking about how to address student use of AI. A former classroom teacher, Chinn can imagine beneficial uses for both students and teachers in the classroom, but he knows teaching students responsible uses for AI doesn’t preclude them from using it in dishonest ways.
“The reason that we need to talk about AI to students is because a lot of students are already using it,” Chinn told The Oaklandside. “In the absence of having any kind of conversations, you’re just leaving this vacuum without guidance for students.”
Any new draft policy would first be evaluated by the school board’s teaching and learning committee before being considered by the full board of directors. VanCedric Williams, chair of that committee, has met with Chinn and his team to discuss potential approaches. Williams, a veteran teacher, said he is hesitant to recommend a policy that would encourage educators to use AI.
“I do not want to put any expectations for teachers or students to use it or not,” Williams told The Oaklandside. “We’re looking at best practices around the state, what other districts are doing and what pitfalls they’ve incurred.”
Chinn added that he’s been looking at how colleges and universities are addressing AI. What he’s found is that some professors are turning away from papers and written homework assignments and toward methods like blue book exams and oral presentations that preclude the use of AI.
‘We just want our kids to be able to critically think’
Some teachers are hesitant to fully embrace the technology, concerned that it could hamper student learning and critical thinking. At Oakland Technical High School, a group of history and English teachers has formed a professional learning community to study AI in education and come up with potential guidance.
Amanda Laberge and Shannon Carey, who both teach juniors at Oakland Tech, joined the group as AI skeptics. Carey, who has been teaching in OUSD since 1992, sees AI differently than she does other advances in technology that have taken place over the course of her career.
“A computer is a tool: You can draft your essay and I can put comments on it,” Carey, a history teacher, told The Oaklandside. “Whereas AI, the way many students are using it, is to do their thinking for them.”
Carey noted that after years of a drive to incorporate more tech in the classroom, the tide is turning on cell phones — many schools now have “no smartphone” policies, and last year Governor Gavin Newsom signed a law, which goes into effect in 2026, requiring all school districts to prohibit cell phone use during the school day.
Neither Carey nor Laberge plans to use AI herself, the way some educators use it for grading or lesson planning.

Laberge, who teaches English in Oakland Tech’s race, policy, and law pathway, assigned her students a project encouraging them to think critically about AI. They’ll survey other students on how they use AI, research the cognitive impacts of relying on AI, gain an understanding of how exactly the algorithms and platforms operate, and examine wider societal implications.
“Our job is to help them develop skills and thinking so as adults they can do whatever they want,” Laberge said.
Laberge and Carey said they want to see OUSD put together an evidence-based policy around AI use. They mentioned a 2025 MIT study that monitored brain function for groups writing an essay. The authors found that those using a large language model to assist in writing the essay had lower brain activity than those who didn’t, and they had more trouble quoting their own work.
“We just want our kids to be able to critically think and read and write fluently and with grace,” Carey said. “We do not see a way in which AI is going to make that happen.”
Using AI strategically
At Latitude High School in Fruitvale, educators are taking a different approach. Computer science students at the charter school, which emphasizes project-based learning, are incorporating AI into math video games they’re creating for local fourth graders. This is the first year that classes have introduced AI as part of the curriculum, according to Regina Kruglyak, the school’s dean of instruction.
Students first write out code on their own, then run it through ChatGPT to test their ideas and find errors. The school uses GoGuardian, a software that can block websites, to restrict access to ChatGPT when students aren’t actively using it for an assignment, Kruglyak said.
“We were nervous about the possibility that students will forget how to do certain things, or they’ll never learn how to do it in the first place because they’ll just fall back on having ChatGPT do it for them,” Kruglyak said. “That’s where we use GoGuardian. Making sure that students are using their own brains and learning the skills in the first place feels very crucial.”
Kruglyak coaches Latitude’s science teachers and has held professional development sessions on new AI platforms. She recently introduced NotebookLM, a Google platform that can summarize documents and organize notes into various media. Kruglyak tested it by uploading a grant application and having the software turn it into a podcast. Her goal, she said, is to “change teachers’ minds about what AI can do, and how to help students learn from it rather than be scared of it as a teacher.”
It’s not only high school educators who are confronting students’ use of AI. Joel Hamburger, a fifth grade teacher at Redwood Heights Elementary School, said that because his students use Google on their Chromebooks, AI-generated results come up every time they type in a search. Hamburger, who has been teaching for four years, said this calendar year is when he first noticed how unavoidable AI is in the classroom.
“Google AI culls the information from the internet and immediately gives you a response,” Hamburger told The Oaklandside. “Whereas a year or two ago, it gave you websites to go to.”
For now, he allows his students to use Google’s AI to fill out simple worksheets in class. At this time of year, Hamburger’s focus is teaching his students how to craft the right inputs to get the answers they’re looking for. During a spring unit on research projects, he’ll lay out the foundations for evaluating information and fact-checking what Google serves up.
Any kind of AI policy should include tiered guidance for various grade levels, Hamburger said. While fifth graders may not be using ChatGPT, he said, they’re surrounded by AI on their devices and guidance for them may not look the same as instructions for a high schooler.
“The genie’s just about to be brought out of the bottle for these 10-year-olds,” he said. “They need to know appropriate uses.”
AI Research
[2506.08171] Worst-Case Symbolic Constraints Analysis and Generalisation with Large Language Models

By Daniel Koh and 4 other authors
Abstract: Large language models (LLMs) have demonstrated strong performance on coding tasks such as generation, completion and repair, but their ability to handle complex symbolic reasoning over code still remains underexplored. We introduce the task of worst-case symbolic constraints analysis, which requires inferring the symbolic constraints that characterise worst-case program executions; these constraints can be solved to obtain inputs that expose performance bottlenecks or denial-of-service vulnerabilities in software systems. We show that even state-of-the-art LLMs (e.g., GPT-5) struggle when applied directly on this task. To address this challenge, we propose WARP, an innovative neurosymbolic approach that computes worst-case constraints on smaller concrete input sizes using existing program analysis tools, and then leverages LLMs to generalise these constraints to larger input sizes. Concretely, WARP comprises: (1) an incremental strategy for LLM-based worst-case reasoning, (2) a solver-aligned neurosymbolic framework that integrates reinforcement learning with SMT (Satisfiability Modulo Theories) solving, and (3) a curated dataset of symbolic constraints. Experimental results show that WARP consistently improves performance on worst-case constraint reasoning. Leveraging the curated constraint dataset, we use reinforcement learning to fine-tune a model, WARP-1.0-3B, which significantly outperforms size-matched and even larger baselines. These results demonstrate that incremental constraint reasoning enhances LLMs’ ability to handle symbolic reasoning and highlight the potential for deeper integration between neural learning and formal methods in rigorous program analysis.
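For readers unfamiliar with the task, the following is a minimal, hypothetical sketch, not taken from the paper, of what solving worst-case symbolic constraints with an SMT solver looks like in practice. It uses Z3’s Python bindings, and the insertion-sort worst case (a strictly decreasing input) stands in for the kind of constraints that, per the abstract, WARP would infer with program analysis tools at small input sizes and generalise to larger ones with an LLM.

```python
# Hypothetical illustration, not from the paper: solving symbolic
# constraints that characterise a worst-case execution, via Z3
# (pip install z3-solver). A strictly decreasing input is the classic
# worst case for insertion sort.
from z3 import Ints, Solver, sat

def worst_case_input(n: int) -> list[int]:
    """Solve constraints for insertion sort's worst case at size n:
    every element strictly smaller than its predecessor."""
    xs = Ints(" ".join(f"x{i}" for i in range(n)))
    s = Solver()
    for a, b in zip(xs, xs[1:]):
        s.add(a > b)           # strictly decreasing sequence
    s.add(xs[-1] >= 0)         # keep values non-negative for readability
    assert s.check() == sat    # the constraint system is satisfiable
    m = s.model()
    return [m[x].as_long() for x in xs]

# A pattern inferred at a small concrete size generalises by extension:
print(worst_case_input(5))     # e.g. [4, 3, 2, 1, 0]
```

The solved model is an input that triggers the worst-case path; the generalisation step the paper studies is, roughly, recognising that the constraint pattern found at small n extends to arbitrary n.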
Submission history
From: Daniel Koh
[v1] Mon, 9 Jun 2025 19:33:30 UTC (1,462 KB)
[v2] Tue, 16 Sep 2025 10:35:33 UTC (1,871 KB)
AI Research
‘AI Learning Day’ spotlights smart campus and ecosystem co-creation

When artificial intelligence (AI) can help you retrieve literature, support your research, and even act as a “super assistant”, university education is undergoing a profound transformation.
On 9 September, XJTLU’s Centre for Knowledge and Information (CKI) hosted its third AI Learning Day, themed “AI-Empowered, Ecosystem-Co-created”. The event showcased the latest milestones of the University’s “Education + AI” strategy and offered in-depth discussions on the role of AI in higher education.
In her opening remarks, Professor Qiuling Chao, Vice President of XJTLU, said: “AI offers us an opportunity to rethink education, helping us create a learning environment that is fairer, more efficient and more personalised. I hope today’s event will inspire everyone to explore how AI technologies can be applied in your own practice.”
In his keynote speech, Professor Youmin Xi, Executive President of XJTLU, elaborated on the University’s vision for future universities. He stressed that future universities would evolve into human-AI symbiotic ecosystems, where learning would be centred on project-based co-creation and human-AI collaboration. The role of educators, he noted, would shift from transmitters of knowledge to mentors for both learning and life.
At the event, Professor Xi’s digital twin, created by the XJTLU Virtual Engineering Centre in collaboration with the team led by Qilei Sun from the Academy of Artificial Intelligence, delivered Teachers’ Day greetings to all staff.
“Education + AI” in diverse scenarios
This event also highlighted four case studies from different areas of the University. Dr Ling Xia from the Global Cultures and Languages Hub suggested that in the AI era, curricula should undergo de-skilling (assigning repetitive tasks to AI), re-skilling, and up-skilling, thereby enabling students to focus on in-depth learning in critical thinking and research methodologies.
Dr Xiangyun Lu from International Business School Suzhou (IBSS) demonstrated how AI teaching assistants and the University’s Junmou AI platform can offer students a customised and highly interactive learning experience, particularly for those facing challenges such as information overload and language barriers.
Dr Juan Li from the School of Science shared the concept of the “AI amplifier” for research. She explained that the “double amplifier” effect works in two stages: AI first amplifies students’ efficiency by automating tasks like literature searches and coding. These empowered students then become the second amplifier, freeing mentors from routine work so they can focus on high-level strategy. This human-AI partnership allows a small research team to achieve the output of a much larger one.
Jing Wang, Deputy Director of the XJTLU Learning Mall, showed how AI agents are already being used to support scheduling, meeting bookings, news updates and other administrative and learning tasks. She also announced that from this semester, all students would have access to the XIPU AI Agent platform.
AI education system co-created by staff and students
The event’s AI interactive zone also drew significant attention from students and staff. From the Junmou AI platform to the E-Support chatbot, and from AI-assisted creative design to 3D printing, 10 exhibition booths demonstrated the integration of AI across campus life.
These innovative applications sparked lively discussions and thoughtful reflections among participants. In an interview, Thomas Durham from IBSS noted that, although he had rarely used AI before, the event was highly inspiring and motivated him to explore its use in both professional and personal life. He also shared his perspective on AI’s role in learning, stating: “My expectation for the future of AI in education is that it should help students think critically. My worry is that AI’s convenience and efficiency might make students’ understanding too superficial, since AI does much of the hard work for them. Hopefully, critical thinking will still be preserved.”
Year One student Zifei Xu was particularly inspired by the interdisciplinary collaboration on display at the event, remarking that it offered her a glimpse of a more holistic and future-focused education.
Dr Xin Bi, XJTLU’s Chief Officer of Data and Director of the CKI, noted that, supported by robust digital infrastructure such as the Junmou AI platform, more than 26,000 students and 2,400 staff are already using the University’s AI platforms. XJTLU’s digital transformation is advancing from informatisation and digitisation towards intelligentisation, with AI expected to empower teaching, research and administration, and to help staff and students leap from knowledge to wisdom.
“Looking ahead, we will continue to advance the deep integration of AI in education, research, administration and services, building a data-driven intelligent operations centre and fostering a sustainable AI learning ecosystem,” said Dr Xin Bi.
By Qinru Liu
Edited by Patricia Pieterse
Translated by Xiangyin Han