Education
Training the Next Generation of Space Cybersecurity Experts
A Q&A with Scott Shackelford
In a few short decades, since the first signals from our experimental satellites broke the stillness at the edge of space, more and more of the world’s critical technology infrastructure has come to occupy the crowded regions of low Earth orbit (LEO). Thousands of satellites from countries around the globe now support everything from top-secret defense communications, to weather imaging, to basic science research, and much more. For government and research agencies as well as commercial entities large and small, securing those assets means more than incrementally pumping up existing cybersecurity resources and practices. Space cybersecurity is a field that must grow and mature significantly to keep up with the pace of change. Here, we ask Scott Shackelford, Indiana University professor of law and director of the Ostrom Workshop Program on Cybersecurity and Internet Governance, about some of the efforts, globally, nationally, and locally, that may help launch a separate discipline that will support changing practices and foster future space cybersecurity leadership.
Mary Grush: Certainly cybersecurity has existed for some time around our space infrastructure and related digital technology. It has been here in some form, even if not called out as a separate, specific “space cybersecurity” discipline. What’s new or on the horizon that might cause us to rethink space cybersecurity, redefine it, and restructure how we implement it?
Scott Shackelford: Our reliance on space infrastructure has been around for a long time now. We’ve been reliant on satellites for decades, especially since the ’80s when the technology took off in a big way with a lot of different services. Of course, in the beginning it started off mostly in national security with early surveillance satellites taking pictures during the Cold War as part of our nuclear deterrence. But it did not take long for the first commercial satellites to be launched, originally for weather forecasting and telecommunications. And these days, we get a host of satellite-based services, from Internet access from space, to GPS, to all the geospatial applications like Google Maps — you name it. Just having this conversation right now, all of this is dependent on space and the infrastructure that we’ve launched into orbit (as we’re continuing to do), along with the ground-based services that make it possible. Securing all of that is really challenging and important.
We’ve been focused on space as a vulnerability for a long time. Traditionally that meant basically not making it too easy to knock out satellites with missiles. And originally only the U.S. and the former Soviet Union, then Russia, were able to do that. China joined the club in 2007 when it very publicly took down an aging weather satellite and in so doing caused a cascade of orbital debris that continues to ricochet around LEO, still causing headaches for the International Space Station and for the other satellites that have to move out of the way.
And now, it’s gotten easier for bad actors to launch cyber attacks instead of missiles. Those attacks come from countries or criminal organizations, and they’re designed to go after satellites without the need for kinetic vehicles that physically interfere with the satellite. It’s a lot easier when attackers can just punch a few keystrokes. We’ve seen countries like Iran do it. We’ve seen groups do it to target different types of satellites, or even, in some cases, to launch ransomware attacks against them.
Education
OpenAI, Microsoft, and Anthropic pledge $23 million to help train American teachers on AI
Teachers are pulling up a chair to implement AI in the classroom.
The American Federation of Teachers (AFT) announced on Tuesday that it will open a training center in New York City devoted to teaching educators how to responsibly use AI systems in their work.
Dubbed the National Academy for AI Instruction, the training center will open this fall and kick off with a series of workshops on practical uses of AI for K-12 teachers. Representing close to two million members, the AFT is the second-largest teachers’ union in the United States. The effort is being launched in partnership with OpenAI, Microsoft, and Anthropic, which have pledged a cumulative $23 million for the hub.
“Now is the time to ensure AI empowers educators, students, and schools,” OpenAI wrote in a company blog post published Tuesday, announcing its plan to invest $10 million in the Center over the next five years. “For this to happen, teachers must lead the conversation around how to best harness its potential.”
Backlash and acceptance
The rise of generative AI chatbots like ChatGPT in recent years has sparked widespread concern among educators. These systems can write essays and responses to homework questions in seconds, suddenly making it difficult to determine if assignments have been completed by hand or by machine.
At the same time, however, many teachers have actively embraced the technology: a recent Gallup poll found that six in ten teachers used AI at work in the most recent school year, helping them save time on tasks like preparing lesson plans and providing feedback on student assignments. To make educators feel more comfortable using AI, companies including Anthropic and OpenAI have launched education-specific versions of their chatbots: Claude for Education and ChatGPT Edu, respectively.
Like many other industries that have suddenly had to contend with the ubiquity of powerful AI systems, the US education system has struggled to achieve a healthy balance with the technology. Some school systems, like New York City’s public schools, initially opted to ban their employees and students from using ChatGPT.
But over time, it’s become clear that AI isn’t going away and that ignoring it offers no long-term benefit. The NYC public school system later reversed its no-ChatGPT policy, and some universities, like Duke University and the schools belonging to the California State University system, have begun providing premium ChatGPT services to students for free. Similarly, Miami-Dade County Public Schools began deploying Google’s Gemini chatbot to 100,000 of its high school students earlier this year.
Like those university initiatives, the new partnership with the AFT will also benefit the AI companies sponsoring the effort, as it will place their technology into the hands of many thousands of new users.
President Trump issued an executive order in April focused on equipping students and teachers with AI literacy skills; efforts like this one with the AFT appear to be in line with the administration’s forthcoming AI Action Plan, set to be released later this month.
Impact on critical thinking
Apologists for AI in the classroom will sometimes compare it to previous technologies, such as digital calculators or the internet, which felt disruptive at the time of their debut but have since become foundational to modern education.
A new body of research, however, is starting to show that using AI tools can inhibit critical thinking skills in human users. The technology’s long-term impacts on human cognition and education, therefore, could be far more pronounced than we can know today.
A recent study conducted by researchers from Carnegie Mellon University and Microsoft, for example, found that “while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort.”
An MIT Media Lab study yielded similar findings: that using AI “undeniably reduced the friction involved in answering participants’ questions,” but that “this convenience came at a cognitive cost, diminishing users’ inclination to critically evaluate the LLM’s output or ‘opinions’ (probabilistic answers based on the training datasets).”
Finding benefits while avoiding risks
The National Academy for AI Instruction aims to chart a path forward for educators in the age of AI, one that embraces the technology’s benefits while steering clear of the potential risks that are very much still coming into focus.
“The direct connection between a teacher and their kids can never be replaced by new technologies,” Randi Weingarten, president of the AFT, said in a statement included in OpenAI’s blog post, “but if we learn how to harness it, set commonsense guardrails and put teachers in the driver’s seat, teaching and learning can be enhanced.”
Education
Pearson’s AI-driven edtech
A Pearson veteran of 25 years, Sharon Hague has an extensive background in education, including eight years of secondary school teaching. From the coalface of education to her strategic leadership role within Pearson, Hague’s guiding principle has always been harnessing the transformational power of education.
In this vein, she believes that AI is a once-in-a-generation opportunity to transform education at all ages and stages. According to Hague, AI’s power lies in its potential to “amplify the teacher, not to replace the teacher, but to reduce the administrative burden, manual data collection, and really support the teacher in concentrating on what they do best: interacting with young people, supporting, motivating and helping them with their learning.”
The Covid-19 pandemic, which forced teaching to pivot online almost overnight, was an inflection point born of necessity for the global edtech market. Global edtech revenue increased by some 23% in 2020 to $158bn. And though that exceptionally strong growth rate did not hold, GlobalData expects the industry to grow at a compound annual growth rate (CAGR) of 9.8% between 2022 and 2030, reaching $535bn in 2030.
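As a quick sanity check on those figures (a sketch using only the numbers quoted above), a 9.8% CAGR ending at $535bn in 2030 implies a 2022 base of roughly $253bn:

```python
# Back-of-envelope check on the GlobalData projection quoted above.
# The 2030 value and CAGR come from the article; the 2022 base is derived.
cagr = 0.098          # compound annual growth rate, 2022-2030
value_2030 = 535.0    # projected market size in $bn
years = 2030 - 2022   # 8 compounding periods

implied_2022 = value_2030 / (1 + cagr) ** years
print(f"Implied 2022 market size: ${implied_2022:.0f}bn")  # ~ $253bn
```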
Pearson is looking towards the next big technology-driven market shift by developing a suite of AI-enabled products. The multinational corporation, headquartered in the UK, was founded in 1856 and has undergone many iterations, including national and international subsidiaries in manufacturing, electricity, oil, coal, banking and financial services, publishing (periodicals and books), and aviation. In 1998, Pearson plc formed Pearson Education, and by 2016, education publishing and services had become the company’s exclusive focus.
AI will unlock personalised education
Hague is also president of Pearson’s English Language Learning (ELL) division which provides English language assessments and learning materials for English learners globally. The ELL business has launched a smart lesson generator which enables teachers to identify lesson priorities and create lesson content and activities.
Aside from reducing the administrative burden for teaching staff, Hague notes that AI will unlock personalised learning in a way that has hitherto been unavailable without the physical presence of a teacher.
She is excited about AI addressing individual student needs, particularly where access arrangements and accessibility issues prove to be a barrier. “It will enable children with different types of learning needs to be able to access more learning and to create a better experience.”
Pearson is also developing an AI-driven GCSE revision tool that will allow students to receive feedback and suggestions outside the classroom, in the absence of a teacher. “The child will write a response, and then the app gives feedback on what they’ve covered, as well as things to think about, and links to further practice,” explains Hague.
The tool is being developed in collaboration with teachers and, though it is still in development, feedback is reportedly positive. The company is developing a similar product for higher education, to deliver instantaneous feedback or suggestions when a tutor is unavailable.
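The loop Hague describes (student response in, structured feedback out) can be sketched as a single prompt template. The wording and the `build_feedback_prompt` helper below are illustrative assumptions, not Pearson’s actual implementation:

```python
# Minimal sketch of the revision-tool loop described above: the student
# submits a response and the app returns feedback on coverage, points to
# think about, and further practice. The prompt wording is hypothetical;
# note how it constrains the model to the supplied mark scheme rather
# than letting it "pull from anywhere".
def build_feedback_prompt(question: str, mark_scheme: str, answer: str) -> str:
    return (
        "You are a GCSE revision assistant. Base all feedback strictly on "
        "the mark scheme provided; do not introduce outside material.\n\n"
        f"Question: {question}\n"
        f"Mark scheme: {mark_scheme}\n"
        f"Student answer: {answer}\n\n"
        "Respond with: (1) points the answer covered, (2) things to think "
        "about, and (3) suggested topics for further practice."
    )
```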
Though adoption is not guaranteed, Pearson’s own research, published in December 2024, found that some 58% of US higher education students say they are using generative AI for academic purposes (up eight percentage points from Spring 2024).
Can AI replace teachers?
AI hallucinations and mishaps make striking headlines and carry reputational risk for any business launching AI tools. So, what of accuracy? Pearson’s background in pedagogy and learning science, combined with its high-quality, trusted content, means the company is well placed to deliver products in an industry that requires a high level of accuracy, says Hague. “I think we are drawing on content that we know is high quality, is proprietary content, so anyone using the tool can be assured it’s accurate,” she says.
When pressed on the potential for hallucinations in feedback offered to students, Hague reiterates that the tools in development are not designed to replace teachers, only to support existing teaching processes. “We’re not letting it [the LLM] go out into the wild internet and just put anything in. With both our revision tool and our smart lesson generator, we thought really carefully about how it’s designed, because we’re conscious that, particularly when young people are preparing for GCSE where there are certain requirements, it’s not just pulling from anywhere.”
Looking forward, Hague expects the technology component of education to grow, with use cases ranging from teaching support to a student’s experience of examinations. “At the moment, if you take a colour-coded, paper-based examination, you could do that much more seamlessly on a screen. The child could actually personalise how they’re viewing the exam paper so that it would meet their needs,” says Hague.
In the same way, speech recognition could enable many different tools for children with different accessibility requirements. “There are lots of opportunities within school-based education and higher education,” says Hague.
Will AI erode students’ critical thinking?
As AI infiltrates traditional learning processes, concerns are growing around the erosion of the human capacity for critical thinking. Pearson analysed over 128,000 student queries to its AI study tools in Campbell Biology, the most popular title used in introductory biology courses.
The company categorised student inputs using Bloom’s Taxonomy, a framework used to understand the cognitive complexity of students’ questions, and found that one-third of student queries were at higher levels of cognitive complexity, with 20% of student inputs reflecting the critical thinking skills essential for deeper learning.
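Pearson has not published its categorisation method, but the general approach can be sketched with a simple keyword heuristic over Bloom’s six levels; the cue words below are illustrative assumptions only:

```python
# Hypothetical sketch of mapping student queries onto Bloom's Taxonomy.
# The cue words are illustrative; Pearson's actual method is not public.
BLOOM_CUES = {  # ordered from highest to lowest cognitive complexity
    "create":     ["design", "propose", "devise"],
    "evaluate":   ["justify", "critique", "assess whether"],
    "analyze":    ["compare", "why does", "what causes"],
    "apply":      ["calculate", "solve", "use this to"],
    "understand": ["explain", "summarize", "describe"],
    "remember":   ["define", "what is", "list"],
}

def bloom_level(query: str) -> str:
    """Return the highest Bloom level whose cue words appear in the query."""
    q = query.lower()
    for level, cues in BLOOM_CUES.items():  # dicts keep insertion order
        if any(cue in q for cue in cues):
            return level
    return "unclassified"

print(bloom_level("Compare aerobic and anaerobic respiration"))  # -> analyze
```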
Edtech for the enterprise
Pearson is also working with enterprise clients to upskill and train staff in how to leverage AI. “There’s an increasing skills gap that we’re all aware of,” notes Hague.
Pearson’s Fathom tool analyses automation opportunities within the enterprise. It can be used for re-skilling, for talent planning, and for assessing skills gaps. This is particularly relevant to the UK market, says Hague, as UK companies spend 50% less on continuous training than their European counterparts.
“There are some really great opportunities around continuously reviewing people’s work whilst they’re working, giving them feedback and addressing skills gaps to improve their performance as they work. And technology enables you to do that, rather than more traditional routes, where you might have a training course for a couple of days,” says Hague.
Ongoing credentialing is another AI application for the enterprise market. Pearson’s certification system, Credly, provides digital badging that recognises workplace achievement and continuous learning. Certifications earned in the flow of work are credentials employees can carry with them through their careers.
Hague’s advice to businesses looking at implementing AI tools is to analyse where the opportunities for automation lie, and to develop skills and re-skill within the workforce for future needs. “Hiring is not going to be the way to solve everything; you’ve really got to focus on training and re-skilling at the same time,” she says. And as the edtech landscape becomes increasingly AI-driven, the need for companies to address their skill requirements only grows more urgent.
Education
Anthropic Continues The Push For AI In Education
Let’s be honest. AI has already taken a seat in the classroom. Google, Microsoft, OpenAI, and Anthropic have all been pushing hard. Today brings more announcements from Anthropic, the company behind the AI chatbot Claude, adding even more momentum. The shift isn’t subtle anymore. It’s fast, it’s loud, and it’s happening whether schools are ready or not.
It’s not only big tech. The U.S. government is also driving efforts to integrate AI into education.
The Balance of Innovation and Safety
There’s real concern, and for good reason. Sure, the benefits are hard to ignore. AI tutoring, lighter workloads for teachers, more personalized learning paths for students. It all sounds great. But there’s a flip side. Missteps here could make existing education gaps worse. And once the damage is done, it’s tough to undo.
Many policymakers are stepping in early. They’re drafting ethical guardrails, pushing for equitable access, and starting to fund research into what responsible use of AI in education really looks like. Not as a PR move, but because the stakes are very real.
Meanwhile, the tech companies are sprinting. Google is handing out AI tools for schools at no cost, clearly aiming for reach. The strategy is simple: remove barriers and get in early. Just yesterday, Microsoft, OpenAI, and Anthropic teamed up to build a national AI academy for teachers, an acknowledgment that it’s not the tools but the people using them that determine success. Teachers aren’t optional in this equation. They’re central.
Claude’s New Education Efforts
Claude for Education’s recent moves highlight what effective integration could look like. Its Canvas integration means students don’t need to log into another platform or juggle windows; Claude just works inside what they’re already using. That kind of invisible tech could be the kind that sticks.
Then there’s the Panopto partnership. Students can now access lecture transcripts directly in their Claude conversations. Ask a question about a concept from class and Claude can pull the relevant sections right away. No need to rewatch an entire lecture or scrub through timestamps. It’s like giving every student their own research assistant.
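The pattern described here, grounding Claude’s answer in a lecture transcript rather than the open web, can be sketched with Anthropic’s public Messages API. The `fetch_transcript` helper below is a hypothetical stand-in for the Panopto side, whose actual interface is not public:

```python
# Sketch of transcript-grounded Q&A using Anthropic's Messages API.
# fetch_transcript() is a hypothetical placeholder for the Panopto
# integration; the real plumbing is not public.
import anthropic

def fetch_transcript(lecture_id: str) -> str:
    # Placeholder: in the real integration, Panopto supplies this text.
    return "...lecture transcript text..."

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

transcript = fetch_transcript("bio-101-week-3")
response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=500,
    system="Answer using only the lecture transcript provided by the user.",
    messages=[{
        "role": "user",
        "content": f"Transcript:\n{transcript}\n\n"
                   "Question: What did the lecture say about osmosis?",
    }],
)
print(response.content[0].text)
```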
And they’ve gone further. Through Wiley, Claude can now pull from a massive library of peer-reviewed academic sources. That’s huge. AI tools are often criticized for producing shaky or misleading information. But with access to vetted, high-quality content, Claude’s answers become more trustworthy. In a world overflowing with misinformation, that matters more than ever.
Josh Jarrett, senior vice president of AI growth at Wiley, emphasized this: “The future of research depends on keeping high-quality, peer-reviewed content central to AI-powered discovery. This partnership sets the standard for integrating trusted scientific content with AI platforms.”
Claude for Education is building a grassroots movement on campuses, too. Its student ambassador program is growing fast, and new Claude Builder Clubs are popping up at universities around the world. Rather than coding bootcamps or formal classes, they’re open spaces where students explore what they can actually make with AI: workshops, demo nights, and group builds.
These clubs are for everyone. Not just computer science majors. Claude’s tools are accessible enough that students in any field, from philosophy to marketing, can start building. That kind of openness helps make AI feel less like elite tech and more like something anyone can use creatively.
Privacy is a big theme here, too. Claude seems to be doing things right: conversations are private, they’re not used for model training, and any data-sharing with schools requires formal approvals. Students need to feel safe using AI tools. Without that trust, none of this works long term.
At the University of San Francisco School of Law, students are working with Claude to analyze legal arguments, map evidence, and prep for trial scenarios. This is critical training for the jobs they’ll have after graduation. In the UK, Northumbria University is also leaning in. Its focus is on equity, digital access, and preparing students for a workplace that’s already being shaped by AI.
Graham Wynn, vice-chancellor for education at Northumbria University, puts the ethical side of AI front and center: “The availability of secure and ethical AI tools is a significant consideration for our applicants, and our investment in Claude for Education will position Northumbria as a forward-thinking leader in ethical AI innovation.”
They see tools like Claude not just as educational add-ons, but as part of a broader strategy to drive social mobility and reduce digital poverty. If you’re serious about AI in education, that’s the level of thinking it takes.
Avoiding Complexity and Closing Gaps
The core truth here is simple. AI’s role in education is growing whether we plan for it or not. The technology is getting more capable. The infrastructure is being built. But what still needs to grow is a culture of responsible use. The challenge for education isn’t chasing an even smarter tool, but ensuring the tools we have serve all students equally.
That means listening to educators. It means designing for inclusion from the ground up. It means making sure AI becomes something that empowers students, not just another layer of complexity.
The next few years will shape everything. If we get this right, AI could help close long-standing gaps in education. If we don’t, we risk deepening them in ways we’ll regret later.
This is more than a tech story. It’s a human one. And the decisions being made today will echo for a long time.