
Education

How Cybercriminals Exploit Education in the Age of AI



Johannesburg, South Africa – September 9th, 2025 – Today is the International Day to Protect Education from Attack. While the focus has traditionally been on the physical risks to schools in conflict zones, the bigger battleground in 2025 is digital.

“The modern classroom has shifted into a digital schoolyard built on platforms like Microsoft Teams, Google Classroom, and Zoom,” says Lorna Hardie, Regional Director: Africa, Check Point Software Technologies.

“These tools are designed to drive collaboration and innovation; however, they are also prime targets for cyberattacks, especially those using AI. Without stronger ‘digital fences,’ schools and universities are exposed to risks that directly threaten students, educators, and even national innovation,” she adds.

Education: The World’s Most Attacked Sector

The education sector has become the number one target for cybercriminals worldwide. According to Check Point Research (CPR), schools and universities faced an average of 4,356 cyberattacks per organisation every week in 2025 — a 41% year-on-year increase. While all regions are targeted, Africa has seen a 56% surge, to 4,463 weekly attacks per organisation.

The education sector is a booming target for cyberattacks for several specific reasons:

  • Schools house vast amounts of sensitive data—from personal information of students and staff to financial and research data—making them attractive to attackers.
  • Schools need to connect with multiple parties for curriculum schedules, term holidays, and online classes, which makes the attack surface simply bigger.
  • Many educational institutions lack the resources to secure their systems adequately; some simply do not have the know-how or skilled resources to ensure defense measures are also up to date.

“This combination of factors inevitably turns the sector into a ‘soft target’ with a ‘hard’ payoff,” Hardie says.

Cyberattacks Impact More Than Just IT Downtime

The impact of cyberattacks on the education sector extends far beyond system outages. School closures and exam disruption caused by ransomware have forced universities offline for weeks, cancelling or delaying assessments.

In 2023, ransomware attacks cost educational institutions far more than expected, with median payments reaching $6.6 million for lower-education and $4.4 million for higher-education institutions, according to a Sophos report.

Despite these payments, recovery remains a significant challenge: only 30% of victims fully recovered within a week, down from the previous year, as limited resources and small teams hinder recovery efforts. Ransom payments also damage a school’s reputation and force cuts elsewhere, degrading the quality of education for students.

Student data has also been found for sale on the dark web, from transcripts and personal records to forged certificates, causing harm to individuals and organisations.

In severe cases, cyberattacks have led to institutional collapse: the 157-year-old Lincoln College in Illinois was forced to shut its doors permanently after a ransomware attack.

Every breach chips away at student trust, academic credibility, and institutional resilience.

The AI Factor: Cybercrime at Machine Speed

Artificial intelligence is reshaping both the threat landscape and the defensive playbook for education. On the attacker side, AI enables deepfake phishing campaigns targeting students and staff, as well as automated credential theft through large-scale password spraying. AI-driven malware now scans for and exploits vulnerabilities in minutes rather than weeks. Attackers are also weaponising AI in school settings, creating highly convincing scams that make phishing far more effective than ever before.

In July 2025 alone, CPR identified 18,000 new education-related domains, with one in every 57 flagged as malicious. Many of these were AI-generated, designed to mimic exam portals, fee-payment systems, or login pages.
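As a minimal sketch of how such look-alike domains could be flagged (not CPR’s actual method — the portal names below are hypothetical, and real detection would combine many more signals), a simple string-similarity check already catches near-matches:

```python
import difflib

# Hypothetical list of an institution's legitimate portals.
LEGIT_DOMAINS = ["examportal.university.edu", "fees.university.edu"]

def looks_like_spoof(domain: str, threshold: float = 0.85) -> bool:
    """Flag a domain that closely resembles a known-good portal
    without matching it exactly -- a crude typosquatting check."""
    candidate = domain.lower()
    for legit in LEGIT_DOMAINS:
        ratio = difflib.SequenceMatcher(None, candidate, legit).ratio()
        if candidate != legit and ratio >= threshold:
            return True
    return False

print(looks_like_spoof("exam-portal.university.edu"))  # True: near-identical
print(looks_like_spoof("library.city.gov"))            # False: unrelated
```

Production-grade systems additionally weigh domain age, registrar reputation, and hosting patterns, but even this toy check shows why freshly minted copycat domains are detectable.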

On the defender side, AI can now help detect anomalies in login behaviour across thousands of accounts, identify zero-day malware before signatures exist and provide AI-powered prevention-first security, blocking phishing, ransomware, and malicious domains in real time.
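The login-anomaly idea can be illustrated with a crude statistical stand-in (not any vendor’s actual AI; the account names and counts are invented): flag accounts whose failed-login counts sit far above the campus baseline.

```python
import statistics

# Hypothetical failed-login counts per account over the last hour.
failed_logins = {"alice": 2, "bob": 1, "carol": 3, "dan": 2, "eve": 48}

def flag_anomalies(counts: dict[str, int], z_threshold: float = 1.5) -> list[str]:
    """Return accounts whose count lies more than `z_threshold`
    population standard deviations above the mean."""
    values = list(counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0  # avoid division by zero
    return [user for user, n in counts.items() if (n - mean) / stdev > z_threshold]

print(flag_anomalies(failed_logins))  # ['eve']
```

Real systems model per-user baselines, time of day, and device fingerprints rather than a single z-score, which the outlier itself can skew; the sketch only conveys the shape of the idea.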

Crucially, integrating cyber security education early—especially before AI adoption begins—is vital, cultivating the awareness needed to resist AI-generated threats in digital classrooms. For schools with small IT teams, AI-driven cyber security is no longer optional — it’s the only way to keep pace with attackers.

How Education Can Stay Safe in the AI Era

To safeguard the digital classroom, education institutions must adopt a prevention-first strategy backed by AI-powered tools. Key suggestions include:

  • Harden authentication by enforcing MFA and monitoring for MFA fatigue phishing tactics.
  • Segment networks to prevent attackers from moving laterally once inside.
  • Reinforce phishing awareness among staff and students with examples of current scams.
  • Patch and update systems regularly, especially widely used platforms such as email and collaboration tools.
  • Run cyber awareness training for students, educators, and parents, helping them spot AI-generated scams, especially sophisticated phishing, and recognise suspicious links.

These aren’t just IT measures — they are core safeguards for the future of learning.

“Education is the backbone of every country’s future, but without strong cyber security, it becomes an easy target for disruption.

Globally we’ve seen a surge in AI-powered attacks that not only steal sensitive data but also interrupt learning for millions of students. Protecting the education sector requires a prevention-first approach, with AI-powered defences, stronger digital perimeters, and awareness across every level. Only then can we ensure that digital classrooms remain safe havens for growth and innovation,” Hardie says.

Protecting the Future of Education
On this International Day to Protect Education from Attack, we must recognise that cyber security is now fundamental to safeguarding education. The “digital schoolyard” is under constant attack, with AI making threats faster, smarter, and harder to detect. But with the right tools, collaboration, and prevention-first strategies, schools can protect not just their data, but the futures of millions of students.




Education

As AI tools reshape education, schools struggle with how to draw the line on cheating



The book report is now a thing of the past. Take-home tests and essays are becoming obsolete.

High school and college educators around the country say student use of artificial intelligence has become so prevalent that to assign writing outside of the classroom is like asking students to cheat.

“The cheating is off the charts. It’s the worst I’ve seen in my entire career,” says Casey Cuny, who has taught English for 23 years. Educators are no longer wondering if students will outsource schoolwork to AI chatbots. “Anything you send home, you have to assume is being AI’ed.”

The question now is how schools can adapt, because many of the teaching and assessment tools that have been used for generations are no longer effective. As AI technology rapidly improves and becomes more entwined with daily life, it is transforming how students learn and study, how teachers teach, and it’s creating new confusion over what constitutes academic dishonesty.

“We have to ask ourselves, what is cheating?” says Cuny, a 2024 recipient of California’s Teacher of the Year award. “Because I think the lines are getting blurred.”

Cuny’s students at Valencia High School in southern California now do most writing in class. He monitors student laptop screens from his desktop, using software that lets him “lock down” their screens or block access to certain sites. He’s also integrating AI into his lessons and teaching students how to use AI as a study aid “to get kids learning with AI instead of cheating with AI.”

In rural Oregon, high school teacher Kelly Gibson has made a similar shift to in-class writing. She is also incorporating more verbal assessments to have students talk through their understanding of assigned reading.

“I used to give a writing prompt and say, ‘In two weeks I want a five-paragraph essay,’” says Gibson. “These days, I can’t do that. That’s almost begging teenagers to cheat.”

Take, for example, a once typical high school English assignment: Write an essay that explains the relevance of social class in “The Great Gatsby.” Many students say their first instinct is now to ask ChatGPT for help “brainstorming.” Within seconds, ChatGPT yields a list of essay ideas, plus examples and quotes to back them up. The chatbot ends by asking if it can do more: “Would you like help writing any part of the essay? I can help you draft an introduction or outline a paragraph!”

Students are uncertain when AI usage is out of bounds

Students say they often turn to AI with good intentions for things like research, editing or help reading difficult texts. But AI offers unprecedented temptation and it’s sometimes hard to know where to draw the line.

College sophomore Lily Brown, a psychology major at an East Coast liberal arts school, relies on ChatGPT to help outline essays because she struggles putting the pieces together herself. ChatGPT also helped her through a freshman philosophy class, where assigned reading “felt like a different language” until she read AI summaries of the texts.

“Sometimes I feel bad using ChatGPT to summarize reading, because I wonder is this cheating? Is helping me form outlines cheating? If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?”

Her class syllabi say things like: “Don’t use AI to write essays and to form thoughts,” she says, but that leaves a lot of grey area. Students say they often shy away from asking teachers for clarity because admitting to any AI use could flag them as a cheater.

Schools tend to leave AI policies to teachers, which often means that rules vary widely within the same school. Some educators, for example, welcome the use of Grammarly.com, an AI-powered writing assistant, to check grammar. Others forbid it, noting the tool also offers to rewrite sentences.

“Whether you can use AI or not, depends on each classroom. That can get confusing,” says Valencia 11th grader Jolie Lahey, who credits Cuny with teaching her sophomore English class a variety of AI skills like how to upload study guides to ChatGPT and have the chatbot quiz them and then explain problems they got wrong.

But this year, her teachers have strict “No AI” policies. “It’s such a helpful tool. And if we’re not allowed to use it that just doesn’t make sense,” Lahey says. “It feels outdated.”

Schools are introducing guidelines, gradually

Many schools initially banned use of AI after ChatGPT launched in late 2022. But views on the role of artificial intelligence in education have shifted dramatically. The term “AI literacy” has become a buzzword of the back-to-school season, with a focus on how to balance the strengths of AI with its risks and challenges.

Over the summer, several colleges and universities convened their AI task forces to draft more detailed guidelines or provide faculty with new instructions.

The University of California, Berkeley emailed all faculty new AI guidance that instructs them to “include a clear statement on their syllabus about course expectations” around AI use. The guidance offered language for three sample syllabus statements — for courses that require AI, ban AI in and out of class, or allow some AI use.

“In the absence of such a statement, students may be more likely to use these technologies inappropriately,” the email said, stressing that AI is “creating new confusion about what might constitute legitimate methods for completing student work.”

At Carnegie Mellon University there has been a huge uptick in academic responsibility violations due to AI but often students aren’t aware they’ve done anything wrong, says Rebekah Fitzsimmons, chair of the AI faculty advising committee at the university’s Heinz College of Information Systems and Public Policy.

For example, one English language learner wrote an assignment in his native language and used DeepL, an AI-powered translation tool, to translate his work to English but didn’t realize the platform also altered his language, which was flagged by an AI detector.

Enforcing academic integrity policies has been complicated by AI, which is hard to detect and even harder to prove, said Fitzsimmons. Faculty are allowed flexibility when they believe a student has unintentionally crossed a line but are now more hesitant to point out violations because they don’t want to accuse students unfairly, and students are worried that if they are falsely accused there is no way to prove their innocence.

Over the summer, Fitzsimmons helped draft detailed new guidelines for students and faculty that strive to create more clarity. Faculty have been told that a blanket ban on AI “is not a viable policy” unless instructors make changes to the way they teach and assess students. A lot of faculty are doing away with take-home exams. Some have returned to pen and paper tests in class, she said, and others have moved to “flipped classrooms,” where homework is done in class.

Emily DeJeu, who teaches communication courses at Carnegie Mellon’s business school, has eliminated writing assignments as homework and replaced them with in-class quizzes done on laptops in “a lockdown browser” that blocks students from leaving the quiz screen.

“To expect an 18-year-old to exercise great discipline is unreasonable, that’s why it’s up to instructors to put up guardrails.”

___

The Associated Press’ education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.






Education

How do Google robots detect AI-generated content?



The term “AI-generated content” refers to material produced by an artificial intelligence system. It may include photos and music as well as written content such as blogs, articles, essays, and business plans.

Often it is impossible to distinguish AI-generated material from content that people actually wrote, and this can raise ethical concerns.

While there isn’t a body in place to oversee the usage of AI, numerous algorithms and techniques are being created to recognize AI-generated material.

Here’s how Google, the search engine behemoth, is approaching the issue of AI-generated content.


How Google Detects AI-Generated Content

The short answer: yes, Google can detect AI-generated material – sort of!

To show this, we’ll be relying mostly on written content.

Google is constantly building and refining algorithms to deal with the challenge of AI-generated content.

These algorithms assess how well content is written and look for the anomalies and patterns that commonly show up in AI-generated text. Google looks for sentences that are meaningless to human readers but stuffed with keywords, and checks for content generated using stochastic models and sequences such as Markov chains.
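As an illustrative sketch only (not Google’s actual algorithm), one crude signal of formulaic, Markov-style text is an unusually high rate of repeated word trigrams:

```python
from collections import Counter

def repeated_trigram_ratio(text: str) -> float:
    """Fraction of word trigrams occurring more than once -- a naive
    indicator of repetitive, template-like text (illustration only)."""
    words = text.lower().split()
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)

spun = "buy cheap shoes online buy cheap shoes online buy cheap shoes online"
natural = "the school introduced a new policy on device use this autumn"
print(repeated_trigram_ratio(spun))     # 1.0
print(repeated_trigram_ratio(natural))  # 0.0
```

A real classifier would weigh many signals together (fluency, coherence, keyword stuffing) rather than any single ratio.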

Google also checks for content generated by scraping RSS feeds. Content stitched together from various internet sources without delivering any genuine value will be flagged, as will content produced by deliberate obfuscation or like-for-like replacement of words with synonyms.

Basically, if it fits within a framework that is recognized by the algorithm, Google flags it.

However, although text written by older NLP models like GPT-1 and GPT-2 is simple to identify, the more advanced GPT-3 is harder to detect – hence the “sort of.”

Google acknowledges that the better it gets at recognising AI-generated content, the more the creators of these tools find strategies to improve and evade detection. Google’s Search Advocate, John Mueller, likens this to a “cat and mouse” game.

Tools such as Uniqueness.AI also exist that can assess whether content was created by AI writers such as ChatGPT. It offers an AI content detection Chrome plugin with free credits to evaluate whether the content you are viewing is AI-produced.


The significance of detecting AI-generated content

At its very core, the fundamental goal of designing algorithms to recognize AI-generated content is ethics. How ethical is it to exploit content developed by AI? Does AI-produced work fall under plagiarism or copyright restrictions, or is it genuinely newly generated data?

Many universities and other educational institutions require students to work on material independently, without submitting AI-generated content or outsourcing it, mainly because they fear that students who leave all their papers to AI will learn less.

Companies and SEO firms also pay copywriters and content writers to produce material for them. Sadly, some of these writers deploy AI to generate content that may not meet the specific aims of their customers, making it even more crucial to recognize AI-generated content.

Currently, Google penalizes websites and blogs for having AI-generated content. John Mueller, Google’s Search Advocate, disclosed that Google considers all AI-generated content to be spam.

He noted that applying machine learning techniques to generate material is seen as the same as translation hacks, word shuffling, synonym manipulation, and other similar tactics. He further indicated that Google would introduce a manual penalty for AI-generated content.

This difficulty isn’t going away.

AI-generated content is the newest everyday application of machine learning. More and more AI content generators are springing up, their creators trying to carve out a share of the market as users increase.

But Google will keep working to detect AI-generated content. It has always found a way to prevail against black-hat SEO strategies and other unethical means that people use to bypass its constraints, and this won’t be different.

AI-generated content won’t go away, but it can be employed appropriately. Google’s John Mueller predicts that AI content generators will be used responsibly for content planning and to reduce grammatical and spelling problems – quite separate from deploying AI to churn out written work within minutes.

The challenge of AI-generated material is still new, but as it always has, Google will continue to innovate and build more precise approaches to spotting AI-generated content.


Outline of the Article

1. Introduction to AI-Written Content

2. Understanding how Google recognizes AI-written content

  • Crawling and indexing
  • Natural language processing
  • Machine learning algorithms

3. Techniques employed by AI detectors to recognize AI writing

  • Pattern recognition
  • Linguistic analysis
  • Semantic understanding

4. Google Classroom’s approach to identifying AI writing

  • Plagiarism detection tools
  • Manual review processes
  • Collaboration with AI detection experts

5. Does Google rank AI-written content differently?

  • Impact on search engine rankings
  • User experience considerations

Conclusion

How Does Google Detect AI-Written Content?

In today’s digital age, artificial intelligence can produce material that is frequently indistinguishable from human-written language. This raises questions about how search engines like Google handle such material and whether they can successfully recognize AI-generated text. Let’s look into the processes underlying Google’s recognition of AI-written material and discuss the consequences for content creators and readers alike.

Introduction to AI-Written Content

With the emergence of AI technology, the landscape of content production has undergone a substantial transformation. AI-driven tools and algorithms can already write articles, blogs, and even novels with astonishing accuracy and fluency. This breakthrough has prompted both enthusiasm and anxiety throughout the digital world, as the borders between human and machine-generated material blur.






Education

More school-starters missing key skills like toilet training, teachers say



Kate McGough, Education reporter, BBC News


Schools are “picking up the pieces” as more children start reception without key skills such as speaking in full sentences or using the toilet independently, teaching unions have told the BBC.

A third of teachers have at least five children in their school’s reception class who need help with going to the toilet, a survey of more than 1,000 primary school teachers in England suggests.

Nine in 10 who responded to the Teacher Tapp survey had seen a decrease in speech and language abilities among new starters over the past two years.

The government previously announced a target for 75% of children to be at a good level of development on leaving reception by 2028.

At St Mary’s Church of England Primary School in Stoke, speech and language therapist Liz Parkes is helping reception pupil Gracie sound out words that rhyme.

Liz comes to the school once a week to do one-to-one interventions like this, and to offer training and support to teachers on how to spot issues.

Around a quarter of pupils at St Mary’s need some extra support with speech and language when they join reception, but with Liz’s help that number is down to just a handful of pupils by Year 2.

Liz says social isolation is partly the reason for the decrease in communication skills.

“Children are increasingly spending a lot of time looking at a screen and not necessarily engaged in more meaningful interactions or developing the kind of listening skills you need when you hit nursery and reception.

“We’re seeing children in reception who haven’t experienced having conversations on a regular basis or aren’t having a range of experiences where they’re exposed to language.”


Speech and language therapist Liz Parkes supports reception pupil Gracie

Teacher Tapp, a survey tool, asked primary school teachers in England about school readiness a week into term. In results seen exclusively by BBC News, the survey found:

  • 85% of 1,132 respondents said they had at least one reception pupil who needed help going to the toilet
  • 33% have at least five children needing help, while 8% had at least 10
  • 92% reported a decrease in speech and language abilities among reception starters over the past two years.

A Department for Education spokesperson said that the government was working to ensure that a record share of children are “school-ready” at the age of five, “turning the tide on inherited challenges of lack of access to high-quality early education, and helping teachers focus on teaching so every child in the class can achieve and thrive”.

The spokesperson added that the government had already increased access to early years care for hundreds of thousands of families and was investing £1.5bn to “rebuild early years services”.


Pupils paint in their first week in reception class

Catherine Miah, deputy head at St Mary’s Church of England Primary School in Stoke, encouraged schools to budget for a speech and language therapist, who could have an “incredible” impact on children.

“We’ve had to make sacrifices elsewhere, but if children aren’t ready to learn you could sit them in front of the best phonics lessons in the world, they’re not going to take it onboard if they’ve not got those learning behaviours.”

The school says a third of its pupils need help with toilet training when they join nursery, but the school works with parents to ensure they are toilet-trained by the time they reach reception.

“We’re a team. It’s not a case of saying to parents ‘This is your job. Why haven’t you done it?’ We need to work together.”

The government has set a target that 75% of children leaving reception at five years old will have a “good level of development” by 2028. Last year 68% of children were at that level, so an extra 45,000 children a year will need to reach that standard to meet the goal.

To achieve a “good” level of development, a child is assessed by teachers at the end of their reception year on tasks including dressing, going to the toilet, and paying attention in class.

Pepe Di’Iasio, of the Association of School and College Leaders, said reception teachers were “brilliant” at supporting young children but local services have been badly eroded over the past decade.

“It has left schools picking up the pieces,” he said. “Many children are starting school already several months behind their peers.”

Parenting charity Kindred Squared found that teachers are spending 2.5 hours a day helping children who haven’t hit developmental milestones instead of teaching.

They have written a set of guidelines for parents to check whether their child has the skills they need to begin school.

The Department for Education was approached for comment.


Diane’s son had support with his speech and language during reception

Diane’s son has just started Year 1 at St Mary’s in Stoke this year. She says without the school’s support he would have been much further behind in his development.

“Within two weeks he was out of nappies,” said Diane. “They would help him on the toilet here and I’d do it at home, we’d work together.”

Teachers say her boy is thriving, but Diane says the school has been instrumental in supporting his special educational needs and improving his speech and language.

“He does a lot for himself, whereas before he was always dependent on me. School have helped me to help him become more independent and more confident,” she said.

Additional reporting by Emily Doughty


