Education
Why the Trump administration grounded these middle schoolers’ drones – and other STEM research

This story was originally published by Chalkbeat. Sign up for their newsletters at ckbe.at/newsletters.
Give a girl a drone, and she might see her future as a scientist.
But if her teacher doesn’t have the training or resources to turn cool tech into lessons that stick, she’s likely to crash it, get frustrated, and move on.
Take Flight, a research project backed by $1.5 million from the U.S. National Science Foundation, aimed to solve that problem with a drone-focused curriculum for rural middle schools. The drones could fly in classrooms – no big outdoor space needed. The lessons were developed with teachers and were easy for newbies to pick up. And the program placed a particular emphasis on girls, who often get frustrated by the handheld controller while their male classmates, who tend to have more video game experience, whiz by.
The lessons included real-world scenarios for using drones, like finding a lost child, that often appeal to young girls, and writing exercises to remind kids of what they’re good at before they try something hard.
At first, Laurie Prewandowski wrinkled her nose at Take Flight’s approach. It seemed “touchy feely” to the digital learning specialist who works in a rural New Hampshire middle school and is known as the “drone lady.” But then she saw kids enjoying the lessons and getting a STEM confidence boost.
“All those little things matter,” she said. “It’s really for any kid with a barrier.”
For decades, the federal government believed getting more students interested in science, math, and technology was a national security priority. But in April, the Trump administration cancelled funding for Take Flight and over 800 other STEM education projects funded by the National Science Foundation. The agency said it primarily terminated grants related to diversity, equity, and inclusion, as well as environmental justice and combatting disinformation.
It’s yet another way the Trump administration has sought to undermine efforts specifically meant to help women and girls and students of color. The administration has frequently claimed this work is, in fact, discriminatory, and has sought to withhold funding from schools that don’t comply with its civil rights vision, although that attempt is on hold for now.
Sixteen states sued to stop Trump’s NSF cuts, which represent a significant hit for STEM education research. NSF has long been a primary funder of this work, and one of the few institutions that helps researchers not only test new ideas in the classroom, but also figure out what worked and why – which is key to replicating a successful program.
Researchers say these cancelled projects have broken trust and left lots of data unanalyzed, and that the work won’t be easy to revive.
At the time Take Flight lost its National Science Foundation grant, its curriculum was being tested by 1,200 students and 30 rural middle school teachers across 10 states.
The research team had promising early data showing the program helped boys and girls who hadn’t previously been interested in science or math envision working in a STEM field, said Amanda Bastoni, the lead researcher on the project.
That matters because rural students are less likely to go into STEM fields. They often attend underfunded schools and have less access to high-tech industries than their peers in urban schools. But now researchers won’t be able to follow up with kids to see if Take Flight altered their trajectory in high school.
“The government spent all this money but didn’t get the results,” said Bastoni, who is the director of career technical and adult education at the nonprofit CAST. Without funding, her team has to “turn in a final report that says: We have no idea if this really works or not.”
Why the government funds STEM education research
President Harry S. Truman signed the law that created the National Science Foundation in 1950, in part to recognize the key role scientific research played in World War II.
Congress has held that the agency’s support of STEM education and research is essential to the nation’s security, economy, and health. And, for decades, federal lawmakers have charged NSF with getting more people who are underrepresented in STEM into that pipeline to maintain a competitive workforce.
For example, a 1980 law calls for NSF to fund a “comprehensive and continuing program to increase substantially the contribution and advancement of women and minorities” in science and technology.
The law authorized NSF to create fellowships for women, minority recruitment programs, and K-12 programs to boost interest in STEM among girls.
The Trump administration’s approach runs counter to that. On April 18, the head of the NSF announced that any efforts by the agency to broaden participation in STEM “must aim to create opportunities for all Americans everywhere” and “should not preference some groups at the expense of others, or directly/indirectly exclude individuals or groups.”
Sixteen attorneys general, led by Letitia James of New York, are suing NSF to end that policy, arguing it does exactly the opposite of what Congress asked the agency to do. NSF has yet to file a response in court, and a spokesperson for the agency declined to comment on the lawsuit.
It’s still unclear exactly how the Trump administration determined which grants to terminate.
In February, the Washington Post reported that NSF staff were told to comb through active research grants for keywords like “cultural relevance,” “diverse backgrounds” and “women” to see if they violated Trump’s executive orders. Some projects previously appeared on a list of “woke DEI grants at NSF” circulated by Sen. Ted Cruz, the Republican chair of the Senate science committee.
According to emails shared with Chalkbeat, Jamie French, a budget official with NSF, told researchers who lost their funding that their work no longer aligned with NSF priorities, but did not give more details. French told researchers the decision was final and they could not appeal.
In response to questions from Chalkbeat about why NSF cancelled Take Flight and other research projects, a spokesperson for NSF reiterated that rationale, and said the agency would still fund projects that “promote the progress of science, advance the national health, prosperity and welfare and secure the national defense.”
For Frances Harper, an assistant professor of mathematics education at the University of Tennessee, Knoxville, the change was jarring.
She received a $700,000 grant from NSF in 2021 to work with 10 Black and Latina mothers with children in Knox County Schools. Together, they were studying how parents can advocate for improvements in their children’s math education and what teachers can learn from them.
Some of the Latina mothers in the study, for example, saw that English learners had a lot of anxiety about taking high-stakes tests, so they created a peer study group for them.
When Sethuraman Panchanathan, the NSF director selected during Trump’s first term who also served under President Joe Biden, visited her university in 2023, Harper said, “he asked me to convey to the mothers how much he valued families being involved in NSF projects.”
But after Harper’s research appeared on Cruz’s “woke” list, her university asked her to pause her work. She lost her funding the same day NSF announced changes to its priorities. And Panchanathan resigned a few days later.
NSF cuts felt from elementary school to college
Some researchers are applying for emergency funding from private foundations to salvage what they can. But much of their planned work will no longer be possible.
The Chicago Children’s Museum was working with Latino families from McAuliffe Elementary School in Chicago on a program known as Somos Ingenieros, or We Are Engineers, to get kids interested in engineering early on.
The team ran two after-school programs for around 20 families, but now won’t have funding to reach dozens more families or to bring the program to the wider museum and school community.
Parents and children met after school for six weeks to learn about building with various materials, including everyday items like sticks, pine cones, and rocks. That helped kids see engineering in their daily lives, and it invited immigrant parents who played with those materials as kids to share their own experiences.
Families also got to put their building skills to the test. One group chose to create puppets and had to figure out how to get the intricate pieces to move correctly. Another picked piñatas and had to strategize how to make them hold heavy candy and survive lots of whacks.
Already, the research team was seeing evidence that the program had boosted parents’ confidence to do engineering activities with their children, said Kim Koin, the director of art and tinkering studios at the Chicago Children’s Museum, who was also the lead researcher on the project.
For Ryan Belville, the principal of McAuliffe, the loss of the program means his students will have fewer opportunities to imagine a college or career pathway in STEM and the arts.
“It may be that moment that they made that puppet that makes them want to be an engineer or a scientist,” Belville said.
And for Karletta Chief, much of the harm is in the lost talent and broken trust caused by the abrupt NSF cancellation.
Chief, a professor of environmental science at the University of Arizona, was a lead researcher with the Native FEWS Alliance, which received $10 million from NSF to address food, energy, and water crises in Indigenous communities, and to develop pathways for Native Americans and other underrepresented students to pursue environmental careers.
The Alliance had built a vast network of research and mentorship opportunities over six years, Chief said. It was involved in dozens of projects across the U.S., from creating K-12 school curriculum to mentoring Native students as they transitioned from tribal colleges to four-year universities.
“Our partnerships are built on trust and long commitment,” Chief said. “These are relationships that we have built over years, and it was just really unfortunate that we had to say, ‘sorry!’”
Now Chief and others are scrambling to find funding to cover graduate student researchers’ outstanding tuition and health care bills.
She worries that even if the cuts were somehow reversed, it would be difficult to put the project back together. Many of the students and staff they had to let go have already taken other jobs.
“There’s a lot of knowledge and expertise that will be lost,” she said. “We were stopped when we were going full force. … Now we just went to zero.”
Chalkbeat is a nonprofit news site covering educational change in public schools.
For more news on STEM education and policy, visit eSN’s Educational Leadership hub.
Education
Artificial Intelligence in Schools?

Artificial Intelligence (AI), fueled by recent advances in large language models, is now moving at a breakneck pace and transforming into a multi-billion-dollar industry. In education specifically, the global AI market is estimated at around 7 billion dollars in 2025, with an expected annual growth rate of more than 36% over the next decade. The challenge, then, is how education systems will harness these technologies in ways that truly serve students and teachers.

Studies show that about 30% of teachers use Generative AI (GAI) on a weekly basis. In Greece, the situation looks different. When asked whether they use GAI in preparing their lessons, nearly half of Greek teachers reported that they never use it, while around 13% said they use it weekly or more often (see Figure 1). This suggests that we are still at the beginning. Technology promises a lot, but time, appropriate tools, and training are needed for it to become a meaningful part of everyday school life.
One application of GAI is assisting teachers in grading. The rationale is that an initial evaluation from AI can help a teacher reach accurate and consistent grades in less time. Such tools already exist. A recent study of ours, however, shows that teachers should be cautious, since the tools are not necessarily error-free. In research we conducted in Greece with Professor Rigissa Megalokonomou from Monash University and Dr. Panagiotis Sotirakopoulos from Curtin University, we found that teachers were more likely to leave grading mistakes uncorrected when these appeared as recommendations from an AI system than when identical mistakes were presented as human-made. These lapses in oversight show that while AI promises to help teachers, training and awareness are also required so that human review guards against uncritical acceptance of the technology’s output.
Another area of AI application in education is academic and career guidance. Together with Professor Faidra Monachou and Ph.D. candidate Hemanshu Das from Yale University, we carried out an experiment with Greek high school students, examining how they respond to academic counseling advice presented either as coming from a professional advisor or from AI. Results showed that 73% of students were willing to use algorithmic recommendation systems for their college applications. However, students differ considerably in how they approach advice from such tools. The critical factor for adopting recommendations is not the perceived ability of the system but trust in its intentions.
This shows that leveraging AI to guide young people is not simply a matter of technological advancement, but equally a matter of trust and presentation. Our studies teach us that AI in education is not only about technical ability but primarily about human trust and the perception of what the “machine” is doing. Students follow advice when they trust the intentions of the one giving it, while teachers often show excessive tolerance of mistakes from a system they consider objective. In other words, what is needed is not only better algorithms, but an understanding of how humans interact with them. And this interaction, as both international studies and our experience in Greece show, is shaped by the social and cultural context. Concepts like creativity, trust, or fairness are not universal but differ from country to country. Therefore, the use of AI in Greek education cannot simply be copied from other systems but must be based on Greek data and take into account our culture, so that it reflects our own needs and priorities. For this to happen, continued research on Greek ground is essential.
Furthermore, a regulatory framework based on local experience is more likely to foster citizens’ trust and ensure that the new technology operates fairly and with respect for our values. Greece has a wealth of human capital working and conducting research on AI, both within the country and in the international academic and business community. This “knowledge reservoir” can support public dialogue and contribute to shaping rules and practices that reflect the needs of Greek education. If properly utilized, it can serve as a bridge between technological innovation and our cultural specificities, ensuring that AI develops in ways that reinforce rather than undermine the work of teachers.
The challenge—and the invitation—is not to fear AI, but to tailor it to our needs so that it becomes a true ally of our schools.
Sofoklis Goulas is an economist and works as an Associate Research Scholar at Yale University in the United States. He has held research positions at the Brookings Institution, Stanford University, and the World Bank. He is a Research Affiliate of the Institute for the Study of Labor (IZA) in Bonn and a Research Fellow of the Foundation for Economic and Industrial Research (ΙΟΒΕ) in Athens. He earned his Ph.D. from the University of North Carolina at Chapel Hill on a Fulbright Scholarship.
Education
If we are going to build AI literacy into every level of learning, we must be able to measure it

Everywhere you look, someone is telling students and workers to “learn AI.”
It’s become the go-to advice for staying employable, relevant and prepared for the future. But here’s the problem: While definitions of artificial intelligence literacy are starting to emerge, we still lack a consistent, measurable framework to know whether someone is truly ready to use AI effectively and responsibly.
And that is becoming a serious issue for education and workforce systems already being reshaped by AI. Schools and colleges are redesigning their entire curriculums. Companies are rewriting job descriptions. States are launching AI-focused initiatives.
Yet we’re missing a foundational step: agreeing not only on what we mean by AI literacy, but on how we assess it in practice.
Two major recent developments underscore why this step matters, and why it is important that we find a way to take it before urging students to use AI. First, the U.S. Department of Education released its proposed priorities for advancing AI in education, guidance that will ultimately shape how federal grants will support K-12 and higher education. For the first time, we now have a proposed federal definition of AI literacy: the technical knowledge, durable skills and future-ready attitudes required to thrive in a world influenced by AI. Such literacy will enable learners to engage and create with, manage and design AI, while critically evaluating its benefits, risks and implications.
Second, we now have the White House’s American AI Action Plan, a broader national strategy aimed at strengthening the country’s leadership in artificial intelligence. Education and workforce development are central to the plan.
Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education.
What both efforts share is a recognition that AI is not just a technological shift, it’s a human one. In many ways, the most important AI literacy skills are not about AI itself, but about the human capacities needed to use AI wisely.
Sadly, the consequences of shallow AI education are already visible in workplaces. Some 55 percent of managers believe their employees are AI-proficient, while only 43 percent of employees share that confidence, according to the 2025 ETS Human Progress Report.
A similar perception gap likely exists between school administrators and teachers. The disconnect creates risks for organizations and reveals how assumptions about AI literacy can diverge sharply from reality.
But if we’re going to build AI literacy into every level of learning, we have to ask the harder question: How do we both determine when someone is truly AI literate and assess it in ways that are fair, useful and scalable?
AI literacy may be new, but we don’t have to start from scratch to measure it. We’ve tackled challenges like this before, moving beyond check-the-box tests in digital literacy to capture deeper, real-world skills. Building on those lessons will help define and measure this next evolution of 21st-century skills.
Right now, we often treat AI literacy as a binary: You either “have it” or you don’t. But real AI literacy and readiness are more nuanced. They include understanding how AI works, being able to use it effectively in real-world settings and knowing when to trust it. They include writing effective prompts, spotting bias, asking hard questions and applying judgment.
This isn’t just about teaching coding or issuing a certificate. It’s about making sure that students, educators and workers can collaborate in and navigate a world in which AI is increasingly involved in how we learn, hire, communicate and make decisions.
Without a way to measure AI literacy, we can’t identify who needs support. We can’t track progress. And we risk letting a new kind of unfairness take root, in which some communities build real capacity with AI and others are left with shallow exposure and no feedback.
Related: To employers, AI skills aren’t just for tech majors anymore
What can education leaders do right now to address this issue? I have a few ideas.
First, we need a working definition of AI literacy that goes beyond tool usage. The Department of Education’s proposed definition is a good start, combining technical fluency, applied reasoning and ethical awareness.
Second, assessments of AI literacy should be integrated into curriculum design. Schools and colleges incorporating AI into coursework need clear definitions of proficiency. TeachAI’s AI Literacy Framework for Primary and Secondary Education is a great resource.
Third, AI proficiency must be defined and measured consistently, or we risk a mismatched state of literacy. Without consistent measurements and standards, one district may see AI literacy as just using ChatGPT, while another defines it far more broadly, leaving students unevenly ready for the next generation of jobs.
To prepare for an AI-driven future, defining and measuring AI literacy must be a priority. Every student will be graduating into a world in which AI literacy is essential. Human resources leaders confirmed in the 2025 ETS Human Progress Report that the No. 1 skill employers are demanding today is AI literacy. Without measurement, we risk building the future on assumptions, not readiness.
And that’s too shaky a foundation for the stakes ahead.
Amit Sevak is CEO of ETS, the largest private educational assessment organization in the world.
Contact the opinion editor at opinion@hechingerreport.org.
This story about AI literacy was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.