Why AI Strategy Matters (and Why Not Having One Is Risky)

More than a passing trend, artificial intelligence is quickly becoming a baseline way of working in higher education. AI usage is evolving rapidly, influencing everything from student success to operational efficiency. If your institution hasn’t started developing an AI strategy, you are likely putting yourself and your stakeholders at risk, particularly when it comes to ethical use, responsible pedagogical and data practices, and innovative exploration.

You and your team won’t have all the answers today, and that’s okay. AI is advancing daily, and by establishing a strategic foundation now, your institution can stay agile and aligned with its mission, vision, and goals to serve learners as the education sector’s use of AI continues to evolve worldwide.

The topic of AI strategy was the focus of a multi-institutional presentation titled “Why AI Strategy Matters (and Why Not Having One Is Risky),” led by Vincent Spezzo of the Georgia Institute of Technology and Dana Scott of Thomas Jefferson University at 1EdTech Consortium’s 2025 Learning Impact Conference in Indianapolis. Attendance was standing room only, and participation was robust.

The Reality Is: Most Institutions Are Still Figuring It Out

The session started with a survey of essential questions for participants in the room, and the results were consistent with other reports from 1EdTech working groups, conversations at industry conferences, and recent publications: Most institutions either lack a defined AI strategy or have efforts that are disjointed or siloed. Leaders are asking for support, guidance, and tools to move forward with purpose.

The most important takeaway here? Everyone is still learning.

Faculty, students, and staff are experimenting with AI, and pockets of innovation are abundant across institutions. Your role as an institutional leader isn’t to control innovation; it’s to guide it. A well-crafted AI strategy ensures that exploration happens within shared guardrails, reinforcing institutional values and serving long-term goals. The session drew on the advice of Dr. Susan Aldridge, president of Thomas Jefferson University, who framed four strategic objectives around her call to action: “How best can we proactively guide AI’s use in higher education and shape its impact on our students, faculty and institution?” Presenters walked attendees through these objectives and paired them with additional practical frameworks that capture the importance of innovation and discovery: integral components of AI strategy that can’t get lost while institutions figure things out.

  • Objective 1: Ensure that, across the curriculum, we are preparing today’s students to use AI in their careers, enabling them to succeed as employers expand their own use of AI.
  • Objective 2: Employ AI-based capacities to enhance the effectiveness (and value) of the education we deliver.
  • Objective 3: Leverage AI to address specific pedagogical and administrative challenges.
  • Objective 4: Concretely address the already identified pitfalls and shortcomings of AI in higher education, and develop mechanisms for anticipating and responding to emerging challenges.

Source: Aldridge, S.C. “Four Objectives to Guide Artificial Intelligence’s Impact on Higher Education.” Times Higher Education, 2025.

Framing Strategy Around Data Privacy

Among session attendees, who came from both institutions and ed tech providers, data privacy was the top concern regarding existing and future AI tools. Last year, the 1EdTech community launched the Generative AI Taskforce and developed the TrustEd Generative AI Data Rubric, a framework that promotes transparency and responsible data practices. The rubric enables institutions to vet their apps for data privacy, while providers can self-assess their own AI data practices.




9 AI Ethics Scenarios (and What School Librarians Would Do)



A common refrain about artificial intelligence in education is that it’s a research tool, and as such, some school librarians are acquiring firsthand experience with its uses and controversies.

Leading a presentation last week at the ISTELive 25 + ASCD annual conference in San Antonio, a trio of librarians parsed appropriate and inappropriate uses of AI in a series of hypothetical scenarios. They broadly recommended that schools have, and clearly articulate, official policies governing AI use and be cautious about inputting copyrighted or private information.

Amanda Hunt, a librarian at Oak Run Middle School in Texas, said their presentation would focus on scenarios because librarians are experiencing so many.


“The reason we did it this way is because these scenarios are coming up,” she said. “Every day I’m hearing some other type of question in regards to AI and how we’re using it in the classroom or in the library.”

  • Scenario 1: A class encourages students to use generative AI for brainstorming, outlining and summarizing articles.

    Elissa Malespina, a teacher librarian at Science Park High School in New Jersey, said she felt this was a valid use, as she has found AI to be helpful for high schoolers who are prone to get overwhelmed by research projects.

    Ashley Cooksey, an assistant professor and school library program director at Arkansas Tech University, disagreed slightly: While she appreciates AI’s ability to outline and brainstorm, she said, she would discourage her students from using it to synthesize summaries.

    “Point one on that is that you’re not using your synthesis and digging deep and reading the article for yourself to pull out the information pertinent to you,” she said. “Point No. 2 — I publish, I write. If you’re in higher ed, you do that. I don’t want someone to put my work into a piece of generative AI and an [LLM] that is then going to use work I worked very, very hard on to train its language learning model.”

  • Scenario 2: A school district buys an AI tool that generates student book reviews for a library website, which saves time and promotes titles but misses key themes or introduces unintended bias.

    All three speakers said this use of AI could certainly be helpful to librarians, but if the reviews are labeled in a way that makes it sound like they were written by students when they weren’t, that wouldn’t be ethical.

  • Scenario 3: An administrator asks a librarian to use AI to generate new curriculum materials and library signage. Do the outputs violate copyright or proper attribution rules?

    Hunt said the answer depends on local and district regulations, but she recommended using Adobe Express because it doesn’t pull from the Internet.

  • Scenario 4: An ed-tech vendor pitches a school library on an AI tool that analyzes circulation data and automatically recommends titles to purchase. It learns from the school’s preferences but often excludes lesser-known topics or authors of certain backgrounds.

    Hunt, Malespina and Cooksey agreed that this would be problematic, especially because entering circulation data could include personally identifiable information, which should never be entered into an AI.

  • Scenario 5: At a school that doesn’t have a clear AI policy, a student uses AI to summarize a research article and gets accused of plagiarism. Who is responsible, and what is the librarian’s role?

    The speakers as well as polled audience members tended to agree the school district would be responsible in this scenario. Without a policy in place, the school will have a harder time establishing whether a student’s behavior constitutes plagiarism.

    Cooksey emphasized the need for ongoing professional development, and Hunt said any districts that don’t have an official AI policy need steady pressure until they draft one.

    “I am the squeaky wheel right now in my district, and I’m going to continue to be annoying about it, but I feel like we need to have something in place,” Hunt said.

  • Scenario 6: Attempting to cause trouble, a student creates a deepfake of a teacher acting inappropriately. Administrators, who have no specific policy in place, struggle to respond, and trust is shaken.

    Again, the speakers said this is one more example to illustrate the importance of AI policies as well as AI literacy.

    “We’re getting to this point where we need to be questioning so much of what we see, hear and read,” Hunt said.

  • Scenario 7: A pilot program uses AI to provide instant feedback on student essays, but English language learners consistently get lower scores, leading teachers to worry the AI system can’t recognize code-switching or cultural context.

    In response to this situation, Hunt said it’s important to know whether parents have given permission to enter student essays into an AI, and that the teacher or librarian should still read the essays themselves.

    Malespina and Cooksey both cautioned against relying on AI plagiarism detection tools.

    “None of these tools can do a good enough job, and they are biased toward [English language learners],” Malespina said.

  • Scenario 8: A school-approved AI system flags students who haven’t checked out any books recently, tracks their reading speed and completion patterns, and recommends interventions.

    Malespina said she doesn’t want an AI tool tracking students in that much detail, and Cooksey pointed out that reading speed and completion patterns aren’t reliably indicative of anything that teachers need to know about students.

  • Scenario 9: An AI tool translates texts, reads books aloud and simplifies complex texts for students with individualized education programs, but it doesn’t always translate nuance or tone.

    Hunt said she sees benefit in this kind of application for students who need extra support, but she said the loss of tone could be an issue, and it raises questions about infringing on audiobook copyright laws.

    Cooksey expounded upon that.

    “Additionally, copyright goes beyond the printed work. … That copyright owner also owns the presentation rights, the audio rights and anything like that,” she said. “So if they’re putting something into a generative AI tool that reads the PDF, that is technically a violation of copyright in that moment, because there are available tools for audio versions of books for this reason, and they’re widely available. Sora is great, and it’s free for educators. … But when you’re talking about taking something that belongs to someone else and generating a brand-new copied product of that, that’s not fair use.”

Andrew Westrope is managing editor of the Center for Digital Education. Before that, he was a staff writer for Government Technology, and previously was a reporter and editor at community newspapers. He has a bachelor’s degree in physiology from Michigan State University and lives in Northern California.






Bret Harte Superintendent Named To State Boards On School Finance And AI (myMotherLode.com)

Blunkett urges ministers to use ‘incredible sensitivity’ in changing Send system in England



Ministers must use “incredible sensitivity” in making changes to the special educational needs system, former education secretary David Blunkett has said, as the government is urged not to drop education, health and care plans (EHCPs).

Lord Blunkett, who went through the special needs system when attending a residential school for blind children, said ministers would have to tread carefully.

The former home secretary in Tony Blair’s government also urged the government to reassure parents that it was looking for “a meaningful replacement” for EHCPs, which guarantee more than 600,000 children and young people individual support in learning.

Blunkett said he sympathised with the challenge facing Bridget Phillipson, the education secretary, saying: “It’s absolutely clear that the government will need to do this with incredible sensitivity and with a recognition it’s going to be a bumpy road.”

He said government proposals due in the autumn to reexamine Send provision in England were not the same as welfare changes, largely abandoned last week, which were aimed at reducing spending. “They put another billion in [to Send provision] and nobody noticed,” Blunkett said, adding: “We’ve got to reduce the fear of change.”

Earlier Helen Hayes, the Labour MP who chairs the cross-party Commons education select committee, called for Downing Street to commit to EHCPs, saying this was the only way to combat mistrust among many families with Send children.

“I think at this stage that would be the right thing to do,” she told BBC Radio 4’s Today programme. “We have been looking, as the education select committee, at the Send system for the last several months. We have heard extensive evidence from parents, from organisations that represent parents, from professionals and from others who are deeply involved in the system, which is failing so many children and families at the moment.

“One of the consequences of that failure is that parents really have so little trust and confidence in the Send system at the moment. And the government should take that very seriously as it charts a way forward for reform.”

A letter to the Guardian on Monday, signed by dozens of special needs and disability charities and campaigners, warned against government changes to the Send system that would restrict or abolish EHCPs.

Labour MPs who spoke to the Guardian are worried ministers are unable to explain essential details of the special educational needs shake-up being considered in the schools white paper to be published in October.

Downing Street has refused to rule out ending EHCPs, while stressing that no decisions have yet been taken ahead of a white paper on Send provision to be published in October.

Keir Starmer’s deputy spokesperson said: “I’ll just go back to the broader point that the system is not working and is in desperate need of reform. That’s why we want to actively work with parents, families, parliamentarians to make sure we get this right.”


Speaking later in the Commons, Phillipson said there was “no responsibility I take more seriously” than that to the most vulnerable children. She said it was a “serious and complex area” that “we as a government are determined to get right”.

The education secretary said: “There will always be a legal right to the additional support children with Send need, and we will protect it. But alongside that, there will be a better system with strengthened support, improved access and more funding.”

Dr Will Shield, an educational psychologist from the University of Exeter, said rumoured proposals that limit EHCPs – potentially to pupils in special schools – were “deeply problematic”.

Shield said: “Mainstream schools frequently rely on EHCPs to access the funding and oversight needed to support children effectively. Without a clear, well-resourced alternative, families will fear their children are not able to access the support they need to achieve and thrive.”

Paul Whiteman, general secretary of the National Association of Head Teachers, said: “Any reforms in this space will likely provoke strong reactions and it will be crucial that the government works closely with both parents and schools every step of the way.”


