Education
Data, privacy, and cybersecurity in schools: A 2025 wake-up call

In 2025, schools are sitting on more data than ever before. Student records, attendance, health information, behavioral logs, and digital footprints generated by edtech tools have turned K-12 institutions into data-rich environments. As artificial intelligence becomes a central part of the learning experience, these data streams are being processed in increasingly complex ways. But with this complexity comes a critical question: Are schools doing enough to protect that data?
The answer, in many cases, is no.
The rise of shadow AI
According to CoSN’s May 2025 State of EdTech District Leadership report, 43 percent of districts lack formal policies or guidance for AI use, even though 80 percent have generative AI initiatives underway. That policy gap is a major concern. At the same time, Common Sense Media’s Teens, Trust and Technology in the Age of AI found that many teens have been misled by fake content and struggle to separate truth from misinformation, underscoring both the broad adoption of generative AI and its risks.
This lack of visibility and control has led to the rise of what many experts call “shadow AI”: unapproved apps and browser extensions that process student inputs, store them indefinitely, or reuse them to train commercial models. These tools are often free, widely adopted, and nearly invisible to IT teams. Shadow AI expands the district’s digital footprint in ways that often escape policy enforcement, opening the door to data leakage and compliance violations. CoSN’s 2025 report specifically notes that “free tools that are downloaded in an ad hoc manner put district data at risk.”
Data protection: The first pillar under pressure
The U.S. Department of Education’s AI Toolkit for Schools urges districts to treat student data with the same care as medical or financial records. However, many AI tools used in classrooms today are not inherently FERPA-compliant and do not always disclose where or how student data is stored. Teachers experimenting with AI-generated lesson plans or feedback may unknowingly input student work into platforms that retain or share that data. In the absence of vendor transparency, there is no way to verify how long data is stored, whether it is shared with third parties, or how it might be reused. FERPA requires that third-party vendors handling student data on behalf of an institution comply with the law, including ensuring the data is not used for unintended purposes or retained for AI training.
Some tools, marketed as “free classroom assistants,” require login credentials tied to student emails or learning platforms. This creates additional risks if authentication mechanisms are not protected or monitored. Even widely-used generative tools may include language in their privacy policies allowing them to use uploaded content for system training or performance optimization.
Data processing and the consent gap
Generative AI models are trained on large datasets, and many free tools continue learning from user prompts. If a student pastes an essay or a teacher includes student identifiers in a prompt, that information could enter a commercial model’s training loop. This creates a scenario where data is being processed without explicit consent, potentially in violation of COPPA (Children’s Online Privacy Protection Act) and FERPA. While the FTC’s December 2023 update to the COPPA Rule did not codify school consent provisions, existing guidance still allows schools to consent to technology use on behalf of parents in educational contexts. However, the onus remains on schools to understand and manage these consent implications, especially with the rule’s new amendments becoming effective June 21, 2025, which strengthen protections and require separate parental consent for third-party disclosures for targeted advertising.
Moreover, many educators and students are unaware of what constitutes “personally identifiable information” (PII) in these contexts. A name combined with a school ID number, disability status, or even a writing sample could easily identify a student, especially in small districts. Without proper training, well-intentioned AI use can cross legal lines unknowingly.
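One practical mitigation is to screen text for obvious identifiers before it ever reaches an external AI tool. The sketch below is a minimal, illustrative example only: the regex patterns and the student-ID format are assumptions, and real PII detection requires far more than pattern matching (context, names, indirect identifiers), as the paragraph above notes.

```python
import re

# Illustrative patterns only -- real PII detection needs far more than regex,
# and the student-ID format here is a hypothetical example.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "student_id": re.compile(r"\b(?:ID|id)[#:\s]*\d{5,9}\b"),
}

def redact(text: str) -> str:
    """Replace PII-like spans with a labeled placeholder before the text
    is sent to any external tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Give feedback on this essay by student ID 1234567 (jdoe@school.org)."
print(redact(prompt))
```

A filter like this catches only the most mechanical leaks; the combined-attribute risk described above (name plus disability status plus writing sample) still requires human judgment and training.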
Cybersecurity risks multiply
AI tools have also increased the attack surface of K-12 networks. According to ThreatDown’s 2024 State of Ransomware in Education report, ransomware attacks on K-12 schools increased by 92 percent between 2022 and 2023, with 98 total attacks in 2023. This trend is projected to continue as cybercriminals use AI to create more targeted phishing campaigns and detect system vulnerabilities faster. AI-assisted attacks can mimic human language and tone, making them harder to detect. Some attackers now use large language models to craft personalized emails that appear to come from school administrators.
Many schools lack endpoint protection for student devices, and third-party integrations often bypass internal firewalls. Free AI browser extensions may collect keystrokes or enable unauthorized access to browser sessions. The more tools that are introduced without IT oversight, the harder it becomes to isolate and contain incidents when they occur. CoSN’s 2025 report indicates that 60 percent of edtech leaders are “very concerned about AI-enabled cyberattacks,” yet 61 percent still rely on general funds for cybersecurity efforts, not dedicated funding.
Building a responsible framework
To mitigate these risks, school leaders need to:
- Audit tool usage with platforms such as Lightspeed Digital Insight (vetted by 1EdTech for data privacy) to identify AI tools being accessed without approval, and maintain a living inventory of all digital tools in the district.
- Develop and publish AI use policies that clarify acceptable practices, define data handling expectations, and outline consequences for misuse. Policies should distinguish between tools approved for instructional use and those requiring further evaluation.
- Train educators and students to understand how AI tools collect and process data, how to interpret AI outputs critically, and how to avoid inputting sensitive information. AI literacy should be embedded in digital citizenship curricula, with resources available from organizations like Common Sense Media and aiEDU.
- Vet all third-party apps through standards like the 1EdTech TrustEd Apps program. Contracts should specify data deletion timelines and limit secondary data use. The TrustEd Apps program has vetted over 12,000 products, providing a valuable resource for districts.
- Simulate phishing attacks and test breach response protocols regularly. Cybersecurity training should be required for staff, and recovery plans must be reviewed annually.
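The audit and vetting steps above reduce, at their core, to comparing what is observed in use against what has been approved. A minimal sketch of that check follows; the tool names and statuses are hypothetical examples, not references to any real district's inventory.

```python
# Minimal sketch of a district tool-inventory check: compare tools observed
# on the network against the approved list and flag the rest for review.
# Tool names below are hypothetical examples.

APPROVED = {"google-classroom", "khanmigo"}  # vetted, contract on file
OBSERVED = {"google-classroom", "essay-genie", "quiz-wizard-extension"}

def flag_unapproved(observed: set[str], approved: set[str]) -> set[str]:
    """Return tools seen in use that have not passed vetting."""
    return observed - approved

for tool in sorted(flag_unapproved(OBSERVED, APPROVED)):
    print(f"REVIEW NEEDED: {tool}")
```

In practice the observed set would come from network monitoring or a platform like the ones named above, and flagged tools would enter a formal vetting queue rather than a print statement.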
Trust starts with transparency
In the rush to embrace AI, schools must not lose sight of their responsibility to protect students’ data and privacy. Transparency with parents, clarity for educators, and secure digital infrastructure are not optional. They are the baseline for trust in the age of algorithmic learning.
AI can support personalized learning, but only if we put safety and privacy first. The time to act is now. Districts that move early to build policies, offer training, and coordinate oversight will be better prepared to lead AI adoption with confidence and care.
Education
Sparkwork Group Appoints Venkatesh N S as CTO to Drive AI-Powered Personalized Learning at SisuCare Education

SisuCare Education, part of Sparkwork Group, today announced the appointment of Venkatesh N S as Chief Technology Officer (CTO). With over 23 years of experience building and scaling technology across education, healthcare, fintech, and AI-driven platforms, Venkatesh will lead SisuCare’s technology strategy as the company accelerates its mission to deliver trusted, AI-powered, personalized learning for learners and educators worldwide.
Venkatesh’s career has been defined by building systems that empower people and creating technology that makes a difference. Over that time, he has led large-scale digital transformation initiatives, architected global cloud-native solutions reaching millions of users, and built AI-powered platforms. He holds several U.S. patents in virtualization technologies and has published multiple technical papers on scalable cloud architectures and related innovations, underscoring his commitment to driving innovation in scalable, high-impact technologies. Equally important, Venkatesh is recognized for his leadership in mentoring high-performing teams, nurturing a culture of innovation, and championing human-centered design.
“These values have guided me throughout my journey, from my early days as an engineer to leading global technology teams: innovate with purpose, work together, and always keep the human at the center of the technology,” said Venkatesh. “At SisuCare, we have the opportunity to combine cutting-edge AI with the care and insight of great educators, creating adaptive learning experiences that respond in real time to each student’s needs.”
Strengthening Technology Leadership and Team Excellence
Venkatesh joins SisuCare to strengthen the technology team and provide mentorship, leadership, and strategy. As CTO, he will help shape clear priorities, support team growth, and ensure the work consistently advances SisuCare’s mission and the needs of learners and educators.
“I believe the future of education is not just about delivering content, but about delivering transformation,” Venkatesh added. “If we do this right, we won’t just be improving education, we’ll be reshaping the future of how the world learns.”
“We’re thrilled to welcome Venkatesh to SisuCare,” said Bijay Baniya, CEO of SisuCare & Sparkwork. “Venkatesh brings a rare blend of technical depth, experience operating at scale, and business leadership. His track record of building global, cloud-native AI platforms and his commitment to purposeful innovation make him the perfect leader to advance our vision for truly personalized learning. Together, we’ll empower educators, inspire learners, and set a new standard for responsible AI in education.”
Near-Term Focus Areas Under Venkatesh’s Leadership
- A scalable, cloud-native platform for adaptive, real-time learning experiences
- Responsible and transparent AI: privacy, safety, and inclusion by design
- Powerful educator tools that amplify teaching, assessment, and mentorship
- Global readiness and interoperability with institutions and partners
- A culture of innovation: mentorship, cross-functional collaboration, and rapid experimentation
About SisuCare
SisuCare Education is a California-based nursing education provider approved by the California Department of Public Health (CDPH) to deliver training for Certified Nurse Assistant (CNA) and Certified Home Health Aide (CHHA) programs, as well as a Director of Staff Development (DSD) certification. As one of California’s largest self-paced and hybrid CNA training programs, SisuCare meets learners where they learn best, providing flexible options to thousands of students who need access beyond a traditional, fully in-person training model. SisuCare Education is part of Sparkwork Group, which offers global enterprise learning and education platforms to businesses. With this appointment, SisuCare strengthens its position at the intersection of education and AI, advancing its mission to prepare learners for the future of work.
Media Contact
Company Name: SisuCare Education (Sparkwork Group)
Contact Person: Bijay Baniya, CEO
Phone: (213) 537-8360
City: Stanton
State: CA
Country: United States
Website: www.sisucare.com
Education
The Guardian view on free nursery places: risks as well as rewards must be monitored | Editorial

With the change of season, ministers know they must get back on the front foot after weeks during which their opponents have made the political weather. The launch of a new, more generous regime for funding early years education in England should help. The first of September was keenly awaited by hundreds of thousands of working parents of children aged between nine months and four years old. As of now, they are entitled to 30 free childcare or nursery hours a week.
The education secretary, Bridget Phillipson, is right to stress that this is the biggest-ever expansion of early years provision – described by the Institute for Fiscal Studies as “a new branch of the welfare state”. Equivalent to about £7,500 per year, per child, this is worth more, to most women in full-time work, than abolishing their income tax and national insurance contributions. Working-age parents, particularly those with larger families, have been dealt with less generously by the tax and benefits systems in recent years than under the Blair and Brown governments. The UK has higher childcare costs than most leading economies. So it is right that parents of the youngest children are targeted with support.
Implementation will need to be closely monitored, however. Most of the new funding will flow to private providers. Ms Phillipson is a strong advocate for new nurseries attached to primary schools, but these are small in number. Families are being given additional funding, but not a new public service. The new subsidy will not cover fees in full. Many nurseries face shortages of trained staff, while the number of childminders – usually women working at home – keeps on falling.
While there are many good private and non-profit nurseries, as well as public ones overseen by councils, there is cause for concern about the way that this market has developed. As in children’s social care and special needs education, private-equity owned businesses control a growing number of settings. Last year, academics published research showing that privately run care homes were disproportionately likely to be closed by regulators. In children’s social care, the Competition and Markets Authority judged that private owners were making excessive profits and carrying too much debt, leading to unacceptable risks.
Ministers must ensure that such shocking failures are not repeated in the nursery sector. As a first step, Ofsted inspections should increase from their current six-yearly cycle to a four-yearly one – the same as schools. The current disparity sends a terrible signal about early years’ lowly status.
Another issue is the impact of the changes on poorer children who do not meet the eligibility criteria because their parents do not work or do not meet the £9,518 earnings threshold. While some vulnerable families already qualify for additional childcare, experts are right to worry that the existing attainment gap could grow as a result of a policy that grants extra funding to under‑fives from wealthier homes.
This is the logical but troubling consequence of a policy whose chief aim is to enable parents to work, rather than invest in early years education as an intrinsic good. The hope must be that rules change if these fears are realised, and that “family hubs” – which the Tories brought in after vandalising Sure Start – make a contribution to these families’ welfare in the meanwhile. Ms Phillipson has a lot on her plate. Extra spending on early years is welcome – but it needs proper oversight.
Education
Badenoch urged to ‘come clean’ after doubt cast on Stanford University claim | Kemi Badenoch

Labour and the Liberal Democrats have called for Kemi Badenoch to come clean about her claims of an offer from Stanford University at the age of 16, after former admissions staff said she had described an impossible scenario.
The Labour MP Peter Prinsley has written to the Conservative leader saying she should lay out the specifics of how the alleged offer came about, given the doubts cast over her story. The Lib Dem education spokesperson, Munira Wilson, said Badenoch risked undermining trust.
Badenoch has defended her claim to have received an offer as a teenager in Nigeria from the elite university to study medicine, sometimes described by her as pre-med, even though the university does not offer that course for undergraduates.
Admissions staff have also said the Conservative leader’s assertion she was offered a place on exam results alone, and was offered a partial scholarship, would not have been possible, with no offers made on that basis.
Jon Reider, the admissions officer during the period Badenoch applied, told the Guardian he had been responsible for international admissions and scholarships and had not offered one to Badenoch.
Badenoch doubled down on Monday, telling reporters she had indeed received offers based on her exam results. “All I will say is that I remember the very day those letters came to me. It was not just from Stanford. I was 16, I had done very well in my SATs,” she said. “But this is 30 years ago, I don’t have the papers, and what the Guardian is doing is reporting on hearsay rather than talking about what the government is doing.”
Multiple former admissions staff and US academics have told the Guardian that Stanford has never made offers based only on SATs – US standardised tests – with no exceptions even for royalty or child prodigies.
Prinsley, a former hospital consultant, wrote to Badenoch to say her claims have been “called into serious question by people in a position to understand the situation, and I would be grateful if you could demonstrate that you have been telling the truth”.
He said the Tory leader should clear up whether she had applied to Stanford and whether a place and financial aid had been offered.
A Labour source said: “Honesty and integrity aren’t optional qualities for those who serve as the leader of His Majesty’s official opposition. The uncertainty surrounding Kemi Badenoch’s Stanford University claims raises important questions that the public deserve to know the answers to.
“Badenoch needs to come clean about what’s happened here and whether she’s been telling the truth to the British people.”
Wilson also wrote to Badenoch saying she should come clean. “If Kemi Badenoch cares about restoring trust, she should start by explaining her own academic record,” she said. “Failing to come clean over these allegations would send a message to the thousands of pupils who just received their exam results that their hard work does not matter and that you can just bluff your way to the top.”