
AI Isn’t the Answer to Our Education Crisis — It’s a Distraction

It’s been two weeks since the Secretary of Education stood in front of the country and enthusiastically called AI “A1.” Two saucy weeks since what should’ve been a serious conversation about the future of American education turned into a viral punchline.

And now, in the same surreal timeline, we’ve got President Trump signing an executive order to promote AI in K-12 schools — directing the Department of Education and the National Science Foundation to prioritize funding for AI-related research and grants.

You truly can’t make this stuff up, and even if you could, you no longer have to.

To be clear: I’m not anti-technology. AI has a role to play in education. Personalized learning, intelligent tutoring systems, data-driven insights — these are powerful tools when used thoughtfully. But let’s not kid ourselves. We’re living through a moment where the second Trump administration is taking a DOGE chainsaw to the very foundations of public education. And instead of confronting that, we’re being told to get excited about chatbots in the classroom.

This isn’t leadership. It’s deflection.

Funding for future success

The truth is, AI is not the lifeline our education system needs. Certainly not right now. What we need — what we’ve needed for decades — is serious investment in teachers, classrooms, infrastructure and support services. And we’re getting the opposite.

The Trump administration is proposing deep cuts to key education programs, gutting federal support for public schools, and pushing policies that favor privatization and deregulation over student success. Amid all that, we’re supposed to believe that some AI-powered lesson plans are going to move the needle?

Please.

Let’s start with the obvious: AI doesn’t fix underfunded schools any more than A1 sauce would. You can’t put an algorithm into a building with no heat, no internet and no functioning restroom and expect a miracle. You can’t expect a teacher managing 35 kids on her own to suddenly have the time and training to integrate AI into daily lesson plans (if she even has time to write one actual lesson plan a week). And you can’t tell communities that are already struggling to get basic resources that what they really need is machine-learning software.

This executive order assumes that what’s missing in American education is innovation. But we don’t have an innovation problem — we have a priorities problem. Our students aren’t falling behind because teachers aren’t tech-savvy enough. They’re falling behind because our country refuses to treat education like a public good.

What’s broken

We’ve normalized schools with outdated textbooks, overworked staff and dilapidated facilities. We’ve made it acceptable for teachers to buy their own supplies, for students to skip meals, and for mental health crises to go unanswered.

And now, in the middle of that, this administration wants to convince us that the real problem is that we’re not moving fast enough on AI.

Let’s also be honest about what AI in schools usually means. It doesn’t mean teachers getting sophisticated tools that make their jobs easier. It means more standardized testing, more data collection, more screen time and more surveillance, especially for kids in low-income communities.

It means feeding student information into systems built by private companies, often with little oversight or transparency. It means potentially outsourcing educational decisions to algorithms that don’t understand context, nuance (sidebar: do any of us get nuance anymore?) or humanity.

It’s a far cry from the glossy pitch the administration is selling.

Widening the digital divide

And let’s not ignore the inequity intentionally baked into all of this. AI-enhanced education requires reliable internet, up-to-date devices, tech-literate staff and digital infrastructure — things that affluent districts are more likely to have. For schools in underserved areas, this push risks widening the digital divide under the guise of modernization.

What’s being framed as progress is actually an elegant Trojan horse for deeper inequality. The schools that most need real, human-centered support are the least likely to benefit from this initiative.

It’s particularly galling that all this is being rolled out with a heavy dose of PR spin. The A1 comment might’ve been a gaffe, but it was also revealing. It showed just how deeply unserious this administration is about the reality on the ground in American schools. It was meant to sound cool, forward-thinking, maybe even meme-worthy. Instead, it became a symbol of how disconnected Trump’s appointee is from what’s actually happening in classrooms across the country.

What schools really need

Teachers aren’t asking for AI. They’re asking for manageable class sizes, fair pay, mental health resources and the ability to teach without being completely buried by bureaucracy. Students aren’t crying out for machine learning — they’re asking for support, stability and a system that sees them as more than test scores or data points. And parents aren’t begging for the latest edtech. They want to know their kids are safe, challenged and cared for at school.

AI is a tool. That’s it. It’s not a savior, not a substitute, and certainly not a replacement for public investment. If the Trump administration were serious about improving education, it would be fighting to expand school funding, not slash it. It would be making college more affordable, not reversing progress on student debt. It would be strengthening teacher pipelines, not weakening them. And it would be protecting public schools, not undermining them.

Instead, we get a photo op and a tech policy wrapped in buzzwords.

So yes, Secretary A1, AI has its place. But until we’re ready to fund schools like they matter, treat educators like professionals, and address the real, systemic issues at the heart of this crisis, all the artificial intelligence in the world won’t save us.

And that’s not artificial. That’s just reality.


Aron Solomon is the chief strategy officer for Amplify. He holds a law degree, has taught entrepreneurship at McGill University and the University of Pennsylvania, and was elected to the Fastcase 50, which recognizes the top 50 legal innovators in the world. His writing has been featured in Newsweek, The Hill, Fast Company, Fortune, Forbes, CBS News, CNBC, USA Today and many other publications. He was nominated for a Pulitzer Prize for his op-ed in The Independent exposing the NFL’s “race-norming” policies.


Overcoming Roadblocks to Innovation

Register Now for Tech Tactics in Education: Overcoming Roadblocks to Innovation

Tech Tactics in Education will return on Sept. 25 with the conference theme “Overcoming Roadblocks to Innovation.” Registration for the fully virtual event, brought to you by the producers of Campus Technology and THE Journal, is now open.

Offering hands-on learning and interactive discussions on the most critical technology issues and practices across K–12 and higher education, the conference will cover key topics such as:

  • Tapping into the potential of AI in education;
  • Navigating cybersecurity and data privacy concerns;
  • Leadership and change management;
  • Evaluating emerging ed tech choices;
  • Foundational infrastructure for technology innovation;
  • And more.

A full agenda will be announced in the coming weeks.

Call for Speakers Still Open

Tech Tactics in Education seeks higher education and K-12 IT leaders and practitioners, independent consultants, association or nonprofit organization leaders, and others in the field of technology in education to share their expertise and experience at the event. Session proposals are due by Friday, July 11.

For more information, visit TechTacticsInEducation.com.

About the Author



Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].






9 AI Ethics Scenarios (and What School Librarians Would Do)

A common refrain about artificial intelligence in education is that it’s a research tool, and school librarians are accordingly acquiring firsthand experience with its uses and controversies.

Leading a presentation last week at the ISTELive 25 + ASCD annual conference in San Antonio, a trio of librarians parsed appropriate and inappropriate uses of AI in a series of hypothetical scenarios. They broadly recommended that schools have, and clearly articulate, official policies governing AI use and be cautious about inputting copyrighted or private information.

Amanda Hunt, a librarian at Oak Run Middle School in Texas, said their presentation would focus on scenarios because librarians are experiencing so many.


“The reason we did it this way is because these scenarios are coming up,” she said. “Every day I’m hearing some other type of question in regards to AI and how we’re using it in the classroom or in the library.”

  • Scenario 1: A teacher encourages students to use generative AI for brainstorming, outlining and summarizing articles.

    Elissa Malespina, a teacher librarian at Science Park High School in New Jersey, said she felt this was a valid use, as she has found AI to be helpful for high schoolers who are prone to get overwhelmed by research projects.

    Ashley Cooksey, an assistant professor and school library program director at Arkansas Tech University, disagreed slightly: While she appreciates AI’s ability to outline and brainstorm, she said, she would discourage her students from using it to synthesize summaries.

    “Point one on that is that you’re not using your synthesis and digging deep and reading the article for yourself to pull out the information pertinent to you,” she said. “Point No. 2 — I publish, I write. If you’re in higher ed, you do that. I don’t want someone to put my work into a piece of generative AI and an [LLM] that is then going to use work I worked very, very hard on to train its language learning model.”

  • Scenario 2: A school district buys an AI tool that generates student book reviews for a library website, which saves time and promotes titles but misses key themes or introduces unintended bias.

    All three speakers said this use of AI could certainly be helpful to librarians, but if the reviews are labeled in a way that makes it sound like they were written by students when they weren’t, that wouldn’t be ethical.

  • Scenario 3: An administrator asks a librarian to use AI to generate new curriculum materials and library signage. Do the outputs violate copyright or proper attribution rules?

    Hunt said the answer depends on local and district regulations, but she recommended using Adobe Express because it doesn’t pull from the Internet.

  • Scenario 4: An ed-tech vendor pitches a school library on an AI tool that analyzes circulation data and automatically recommends titles to purchase. It learns from the school’s preferences but often excludes lesser-known topics or authors of certain backgrounds.

    Hunt, Malespina and Cooksey agreed that this would be problematic, especially because entering circulation data could include personally identifiable information, which should never be entered into an AI.

  • Scenario 5: At a school that doesn’t have a clear AI policy, a student uses AI to summarize a research article and gets accused of plagiarism. Who is responsible, and what is the librarian’s role?

    The speakers as well as polled audience members tended to agree the school district would be responsible in this scenario. Without a policy in place, the school will have a harder time establishing whether a student’s behavior constitutes plagiarism.

    Cooksey emphasized the need for ongoing professional development, and Hunt said any districts that don’t have an official AI policy need steady pressure until they draft one.

    “I am the squeaky wheel right now in my district, and I’m going to continue to be annoying about it, but I feel like we need to have something in place,” Hunt said.

  • Scenario 6: Attempting to cause trouble, a student creates a deepfake of a teacher acting inappropriately. Administrators, who have no specific policy in place, struggle to respond, and trust is shaken.

    Again, the speakers said this is one more example to illustrate the importance of AI policies as well as AI literacy.

    “We’re getting to this point where we need to be questioning so much of what we see, hear and read,” Hunt said.

  • Scenario 7: A pilot program uses AI to provide instant feedback on student essays, but English language learners consistently get lower scores, leading teachers to worry the AI system can’t recognize code-switching or cultural context.

    In response to this situation, Hunt said it’s important to know whether parents have given permission to enter student essays into an AI, and that the teacher or librarian should still read the essays themselves.

    Malespina and Cooksey both cautioned against relying on AI plagiarism detection tools.

    “None of these tools can do a good enough job, and they are biased toward [English language learners],” Malespina said.

  • Scenario 8: A school-approved AI system flags students who haven’t checked out any books recently, tracks their reading speed and completion patterns, and recommends interventions.

    Malespina said she doesn’t want an AI tool tracking students in that much detail, and Cooksey pointed out that reading speed and completion patterns aren’t reliably indicative of anything that teachers need to know about students.

  • Scenario 9: An AI tool translates texts, reads books aloud and simplifies complex texts for students with individualized education programs, but it doesn’t always translate nuance or tone.

    Hunt said she sees benefit in this kind of application for students who need extra support, but she said the loss of tone could be an issue, and it raises questions about infringing on audiobook copyright laws.

    Cooksey expounded upon that.

    “Additionally, copyright goes beyond the printed work. … That copyright owner also owns the presentation rights, the audio rights and anything like that,” she said. “So if they’re putting something into a generative AI tool that reads the PDF, that is technically a violation of copyright in that moment, because there are available tools for audio versions of books for this reason, and they’re widely available. Sora is great, and it’s free for educators. … But when you’re talking about taking something that belongs to someone else and generating a brand-new copied product of that, that’s not fair use.”

Andrew Westrope is managing editor of the Center for Digital Education. Before that, he was a staff writer for Government Technology, and previously was a reporter and editor at community newspapers. He has a bachelor’s degree in physiology from Michigan State University and lives in Northern California.






Bret Harte Superintendent Named to State Boards on School Finance and AI
