Education
Professor calls for government to begin AI education earlier

The government has announced a number of new secondary school subjects for years 11 to 13.
Photo: AFP / JONATHAN RAA
A digital education professor has called on the government to be bolder with its plan to bring in artificial intelligence as a subject in schools.
The government has announced a number of new secondary school subjects for years 11 to 13, which will begin rolling out from 2028.
It included a new year 13 subject on Generative AI, but for later development.
Speaking to media at the announcement of the new secondary school subjects, Education Minister Erica Stanford said there would be a new emphasis on artificial intelligence (AI).
“With the rapid development of AI, students will be able to learn about generative AI. This may include learning about how digital systems work, machine learning, cybersecurity, and digital ethics.
“I’ve asked the Ministry [of Education] to investigate a new specialised Year 13 subject on generative AI for later development.”
Education Minister Erica Stanford
Canterbury University Associate Professor of Digital Education Kathryn MacCallum said children needed to start learning about AI before year 13.
“Why does it sit at year 13, and why aren’t we doing this a lot earlier? It misses the point that a lot of students, even younger ones, are engaging with AI.
“If we’re leaving it to year 13 to engage with this, how does it set them up to use it appropriately?” she said.
MacCallum said if the government was going to focus on AI as a year 13 topic, it was too late.
“Looking at the refresh of the curriculum, we should be actually starting with an explicit engagement of AI from year 1, and we should be tailing this with digital literacy.
Associate Professor Kathryn MacCallum
Photo: University of Canterbury
“So AI literacy to some degree sits as a separate subject or a separate focus, but it also needs to dovetail into being good citizens, and being able to be digitally literate, and supporting students to engage in a society that is so much more around digital.
“So I think we also have a responsibility to also be really focused on how do we get all students to be more engaged in the digital side and that’s not just coding, it needs to be about the broader spectrum of digital,” she said.
MacCallum also urged the government to be broad when it began developing the framework.
“I know that some of the commentary is about generative AI, but if we’re going to do it, we need to be very broad about what we’re trying to engage our students in, and understand what AI is in its broad nature.
“We should be talking about how it works and not just the technology, so we shouldn’t be just saying, OK, [here is] AI technology, these [are the] tools, and how do we use it in this context.
“We should be explaining to the students and helping them understand where AI sits and equally, where it shouldn’t sit… that’s why we need to start early is because part of the process of navigating the space is knowing when AI should sit in the process, but equally when we shouldn’t be and how it’s manipulating us, but equally how it can be useful in certain spaces.”
MacCallum said she supported the government’s move to introduce AI into schools and wanted a framework in place as soon as possible.
Other school subjects announced on Thursday include mechanical engineering, building and construction, infrastructure engineering, civics, politics and philosophy, Pacific studies and primary industry.
Stanford said the new subjects reflected the growing importance of science, technology, engineering and maths.
RNZ has approached her office for more comment on Professor MacCallum’s views.
Concern over teacher numbers
Dr Nina Hood is a former secondary school teacher and founder of the Teachers Institute, which provides in-school training for prospective teachers and also upskills existing ones.
She told First Up she didn’t think there were currently enough specialist teachers to teach new subjects being introduced by the government.
She said some work would need to be done to build capacity across the teaching force to be able to offer the subjects.
That could involve bringing new people into the profession or partnering with industry.
Devaluation of outdoor education – principal
Mount Aspiring College principal Nicola Jacobson said high school students who were looking to continue on to university were being discouraged from studying outdoor education under the new curriculum.
Outdoor education would lose its University Entrance-approved academic status, becoming a vocational subject instead.
Jacobson said the move would narrow students’ academic focus and devalue the subjects that no longer counted towards University Entrance.
“What the government is proposing is very distinct lists between vocational and academic. Students might develop a perception around a pathway – or parents might develop a perception around a pathway – when we need people who are able to do all of those things and feel valued in the skills and knowledge that they have,” Jacobson said.
Jacobson said equally valuing all subjects under NCEA allowed a greater variety of skills and achievement to be reflected within the qualification.
Education
As AI tools reshape education, schools struggle with how to draw the line on cheating

The book report is now a thing of the past. Take-home tests and essays are becoming obsolete.
High school and college educators around the country say student use of artificial intelligence has become so prevalent that to assign writing outside of the classroom is like asking students to cheat.
“The cheating is off the charts. It’s the worst I’ve seen in my entire career,” says Casey Cuny, who has taught English for 23 years. Educators are no longer wondering if students will outsource schoolwork to AI chatbots. “Anything you send home, you have to assume is being AI’ed.”
The question now is how schools can adapt, because many of the teaching and assessment tools that have been used for generations are no longer effective. As AI technology rapidly improves and becomes more entwined with daily life, it is transforming how students learn and study, how teachers teach, and it’s creating new confusion over what constitutes academic dishonesty.
“We have to ask ourselves, what is cheating?” says Cuny, a 2024 recipient of California’s Teacher of the Year award. “Because I think the lines are getting blurred.”
Cuny’s students at Valencia High School in southern California now do most writing in class. He monitors student laptop screens from his desktop, using software that lets him “lock down” their screens or block access to certain sites. He’s also integrating AI into his lessons and teaching students how to use AI as a study aid “to get kids learning with AI instead of cheating with AI.”
In rural Oregon, high school teacher Kelly Gibson has made a similar shift to in-class writing. She is also incorporating more verbal assessments to have students talk through their understanding of assigned reading.
“I used to give a writing prompt and say, ‘In two weeks I want a five-paragraph essay,’” says Gibson. “These days, I can’t do that. That’s almost begging teenagers to cheat.”
Take, for example, a once typical high school English assignment: Write an essay that explains the relevance of social class in “The Great Gatsby.” Many students say their first instinct is now to ask ChatGPT for help “brainstorming.” Within seconds, ChatGPT yields a list of essay ideas, plus examples and quotes to back them up. The chatbot ends by asking if it can do more: “Would you like help writing any part of the essay? I can help you draft an introduction or outline a paragraph!”
Students say they often turn to AI with good intentions for things like research, editing or help reading difficult texts. But AI offers unprecedented temptation and it’s sometimes hard to know where to draw the line.
College sophomore Lily Brown, a psychology major at an East Coast liberal arts school, relies on ChatGPT to help outline essays because she struggles putting the pieces together herself. ChatGPT also helped her through a freshman philosophy class, where assigned reading “felt like a different language” until she read AI summaries of the texts.
“Sometimes I feel bad using ChatGPT to summarize reading, because I wonder is this cheating? Is helping me form outlines cheating? If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?”
Her class syllabi say things like: “Don’t use AI to write essays and to form thoughts,” she says, but that leaves a lot of grey area. Students say they often shy away from asking teachers for clarity because admitting to any AI use could flag them as a cheater.
Schools tend to leave AI policies to teachers, which often means that rules vary widely within the same school. Some educators, for example, welcome the use of Grammarly.com, an AI-powered writing assistant, to check grammar. Others forbid it, noting the tool also offers to rewrite sentences.
“Whether you can use AI or not, depends on each classroom. That can get confusing,” says Valencia 11th grader Jolie Lahey, who credits Cuny with teaching her sophomore English class a variety of AI skills like how to upload study guides to ChatGPT and have the chatbot quiz them and then explain problems they got wrong.
But this year, her teachers have strict “No AI” policies. “It’s such a helpful tool. And if we’re not allowed to use it that just doesn’t make sense,” Lahey says. “It feels outdated.”
Many schools initially banned use of AI after ChatGPT launched in late 2022. But views on the role of artificial intelligence in education have shifted dramatically. The term “AI literacy” has become a buzzword of the back-to-school season, with a focus on how to balance the strengths of AI with its risks and challenges.
Over the summer, several colleges and universities convened their AI task forces to draft more detailed guidelines or provide faculty with new instructions.
The University of California, Berkeley emailed all faculty new AI guidance that instructs them to “include a clear statement on their syllabus about course expectations” around AI use. The guidance offered language for three sample syllabus statements — for courses that require AI, ban AI in and out of class, or allow some AI use.
“In the absence of such a statement, students may be more likely to use these technologies inappropriately,” the email said, stressing that AI is “creating new confusion about what might constitute legitimate methods for completing student work.”
At Carnegie Mellon University there has been a huge uptick in academic responsibility violations due to AI but often students aren’t aware they’ve done anything wrong, says Rebekah Fitzsimmons, chair of the AI faculty advising committee at the university’s Heinz College of Information Systems and Public Policy.
For example, one English language learner wrote an assignment in his native language and used DeepL, an AI-powered translation tool, to translate his work to English but didn’t realize the platform also altered his language, which was flagged by an AI detector.
Enforcing academic integrity policies has been complicated by AI, which is hard to detect and even harder to prove, said Fitzsimmons. Faculty are allowed flexibility when they believe a student has unintentionally crossed a line but are now more hesitant to point out violations because they don’t want to accuse students unfairly, and students are worried that if they are falsely accused there is no way to prove their innocence.
Over the summer, Fitzsimmons helped draft detailed new guidelines for students and faculty that strive to create more clarity. Faculty have been told that a blanket ban on AI “is not a viable policy” unless instructors make changes to the way they teach and assess students. A lot of faculty are doing away with take-home exams. Some have returned to pen and paper tests in class, she said, and others have moved to “flipped classrooms,” where homework is done in class.
Emily DeJeu, who teaches communication courses at Carnegie Mellon’s business school, has eliminated writing assignments as homework and replaced them with in-class quizzes done on laptops in “a lockdown browser” that blocks students from leaving the quiz screen.
“To expect an 18-year-old to exercise great discipline is unreasonable. That’s why it’s up to instructors to put up guardrails,” DeJeu says.
___
The Associated Press’ education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
Education
How do Google robots detect AI-generated content?

The term “AI-generated content” refers to material produced by an artificial intelligence system. It can include photos and music as well as written work such as blog posts, articles, essays, and business plans.
AI-generated material is often impossible to distinguish from content that people actually wrote, which can raise ethical concerns.
While there is no governing body overseeing the use of AI, numerous algorithms and techniques are being developed to recognize AI-generated material.
Here’s how Google, the search engine behemoth, is approaching the issue of AI-generated content.
How Google Detects AI-Generated Content
To answer the question: yes, Google can detect AI-generated material, sort of.
To show how, we’ll focus mostly on written content.
Google is continually building and refining algorithms to deal with the challenge of AI-generated content.
With these algorithms, Google can check how well written a piece of content is, and look for the anomalies and patterns that show up in AI-generated text. It looks for sentences that are meaningless to human readers but stuffed with keywords, and for content generated using stochastic models and sequences such as Markov chains.
Google also checks for content generated by scraping RSS feeds. Content stitched together from various internet sources without adding any genuine value will be flagged, as will content produced by deliberate obfuscation or like-for-like replacement of words with synonyms.
Basically, if content fits a pattern the algorithms recognize, Google flags it.
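To make the “patterns” idea concrete, here is a minimal Python sketch of one classic signal: how statistically predictable a text is under a simple bigram model. This is an illustration only, not Google’s actual system, which is proprietary; the reference corpus, class name, and scoring approach are all assumptions. Output from a low-order Markov chain tends to score as unusually predictable under such a model, while keyword-stuffed gibberish scores as wildly improbable.

```python
import math
from collections import Counter

def bigram_pairs(tokens):
    """Adjacent token pairs: ['a', 'b', 'c'] -> [('a', 'b'), ('b', 'c')]."""
    return list(zip(tokens, tokens[1:]))

class PredictabilityScorer:
    """Crude bigram language model, standing in for one detection signal."""

    def __init__(self, reference_text):
        tokens = reference_text.lower().split()
        self.unigrams = Counter(tokens)
        self.pairs = Counter(bigram_pairs(tokens))
        self.vocab_size = len(self.unigrams) or 1

    def avg_log_prob(self, text):
        """Average log-probability per bigram; higher means more predictable."""
        tokens = text.lower().split()
        if len(tokens) < 2:
            return 0.0
        total = 0.0
        for a, b in bigram_pairs(tokens):
            # Add-one smoothing so unseen words and pairs don't zero the score.
            p = (self.pairs[(a, b)] + 1) / (self.unigrams[a] + self.vocab_size)
            total += math.log(p)
        return total / (len(tokens) - 1)

# Hypothetical usage: fit on a reference corpus of ordinary prose, then
# compare scores. Markov-chain output reuses the corpus's own bigrams, so
# it scores near the top of the range; unrelated word salad scores far lower.
reference = "the cat sat on the mat . the dog sat on the rug ."
scorer = PredictabilityScorer(reference)
print(scorer.avg_log_prob("the cat sat on the rug ."))  # relatively high
print(scorer.avg_log_prob("synergy mat quantum dog"))   # much lower
```

Real detectors rely on large neural language models and combine many more signals, but the principle is the same: score how likely the text is under a model and flag outliers in either direction.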
However, although text written by older NLP models like GPT-1 and GPT-2 is simple to identify, the newer GPT-3 is more advanced and harder to catch, hence the “sort of”.
Google expects that the better it gets at recognizing AI-generated content, the more the creators of these tools will find strategies to evade the system. Google Search Advocate John Mueller likens this to a “cat and mouse” game.
Third-party tools such as Originality.AI can also assess whether content was created by AI writers such as ChatGPT. It offers an AI-content-detection Chrome extension with free credits to evaluate whether the content you are viewing is AI-produced.
The significance of detecting AI-generated content
At its core, the fundamental goal of designing algorithms to recognize AI-generated content is ethics. How ethical is it to pass off content developed by AI? Does AI-produced work fall under plagiarism or copyright restrictions, or is it genuinely new material?
Many universities and other educational institutions require students to work on material independently, without submitting AI-generated or outsourced content, largely because they fear that students who leave all their papers to AI will stop developing their own skills.
Companies and SEO firms also pay copywriters and content writers to produce material for them. Unfortunately, some of these writers use AI to generate content that may not meet their clients’ specific aims, making it even more crucial to recognize AI-generated content.
Currently, Google penalizes websites and blogs for AI-generated content. Mueller has said that Google considers all AI-generated content to be spam.
He noted that using machine learning to generate material is treated the same as translation hacks, word shuffling, synonym manipulation, and other similar tactics, and indicated that Google would introduce a manual penalty for AI-generated content.
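One reason like-for-like synonym swapping is detectable is that it inflates the rate of rare words in otherwise ordinary sentences. The sketch below is a toy illustration of that single signal, not a tool Google has described; the common-word list, the example sentences, and the function name are assumptions made for illustration.

```python
# Toy signal: spun text ("utilize", "commence") carries an unusually high
# share of words that are rare in everyday English.
COMMON_WORDS = {
    "the", "a", "an", "is", "are", "was", "to", "of", "and", "in", "on",
    "we", "you", "it", "this", "that", "use", "used", "start", "started",
    "make", "makes", "new", "good", "work", "write", "written",
}

def rare_word_rate(text, common=COMMON_WORDS):
    """Fraction of alphabetic tokens that fall outside the common-word list."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    tokens = [t for t in tokens if t.isalpha()]
    if not tokens:
        return 0.0
    return sum(t not in common for t in tokens) / len(tokens)

original = "We use this to start new work."
spun = "We utilize this to commence novel labour."
print(rare_word_rate(original))  # 0.0: every word is ordinary
print(rare_word_rate(spun))      # ~0.57: the synonym swaps stand out

# A hypothetical detector would flag text whose rare-word rate sits far
# above the norm for its topic, as one weak signal among many others.
```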
This difficulty isn’t going away.
AI-generated content is among the newest everyday applications of machine learning. More and more AI content generators are springing up, their creators each trying to capture a share of a growing market.
But Google will keep working to detect AI-generated content and those who rely on it. Google has always found a way to prevail against black-hat SEO strategies and other unethical attempts to game its ranking systems, and this won’t be different.
AI-generated content won’t go away, but it can be used appropriately. Mueller predicts that AI content generators will be used responsibly, for content planning and for reducing grammatical and spelling errors, as distinct from deploying AI to churn out written work within minutes.
The challenge of AI-generated material is fairly new, but as it always has, Google will keep innovating and building more precise ways to spot it.
Outline of the Article
1. Introduction to AI-written content
2. Understanding how Google recognizes AI-written content
- Crawling and indexing
- Natural language processing
- Machine learning algorithms
3. Techniques employed by AI detectors to recognize AI writing
- Pattern recognition
- Linguistic analysis
- Semantic understanding
4. Google Classroom’s approach to identifying AI writing
- Plagiarism detection tools
- Manual review processes
- Collaboration with AI detection experts
5. Does Google rank AI-written content differently?
- Impact on search engine rankings
- User experience considerations
6. Conclusion
How Does Google Detect AI-Written Content?
In today’s digital age, artificial intelligence can produce material that is often indistinguishable from human-written text. This raises questions about how search engines like Google handle such material and whether they can reliably recognize AI-generated text. Let’s look at the processes behind Google’s recognition of AI-written material and what they mean for content providers and readers alike.
Introduction to AI-Written Content
With the emergence of AI technology, the landscape of content production has undergone a substantial shift. AI-driven tools and algorithms can already write articles, blog posts, and even novels with astonishing accuracy and fluency. This breakthrough has prompted both enthusiasm and anxiety across the digital world, as the boundary between human and machine-generated material blurs.
Education
More school-starters missing key skills like toilet training, teachers say

Kate McGough, Education reporter, BBC News

Schools are “picking up the pieces” as more children start reception without key skills such as speaking in full sentences or using the toilet independently, teaching unions have told the BBC.
A third of teachers have at least five children in their school’s reception class who need help with going to the toilet, a survey of more than 1,000 primary school teachers in England suggests.
Nine in 10 who responded to the Teacher Tapp survey had seen a decrease in speech and language abilities among new starters over the past two years.
The government previously announced a target for 75% of children to be at a good level of development on leaving reception by 2028.
At St Mary’s Church of England Primary School in Stoke, speech and language therapist Liz Parkes is helping reception pupil Gracie sound out words that rhyme.
Liz comes to the school once a week to do one-to-one interventions like this, and to offer training and support to teachers on how to spot issues.
Around a quarter of pupils at St Mary’s need some extra support with speech and language when they join reception, but with Liz’s help that number is down to just a handful of pupils by Year 2.
Liz says social isolation is partly the reason for the decrease in communication skills.
“Children are increasingly spending a lot of time looking at a screen and not necessarily engaged in more meaningful interactions or developing the kind of listening skills you need when you hit nursery and reception.
“We’re seeing children in reception who haven’t experienced having conversations on a regular basis or aren’t having a range of experiences where they’re exposed to language.”

Teacher Tapp, a survey tool, asked primary school teachers in England about school readiness a week into term. In results seen exclusively by BBC News, they found:
- 85% of 1,132 respondents said they had at least one reception pupil who needed help going to the toilet
- 33% had at least five children needing help, while 8% had at least 10
- 92% reported a decrease in speech and language abilities among reception starters over the past two years.
A Department for Education spokesperson said that the government was working to ensure that a record share of children are “school-ready” at the age of five, “turning the tide on inherited challenges of lack of access to high-quality early education, and helping teachers focus on teaching so every child in the class can achieve and thrive”.
The spokesperson added that the government had already increased access to early years care for hundreds of thousands of families and was investing £1.5bn to “rebuild early years services”.

Catherine Miah, deputy head at St Mary’s Church of England Primary School in Stoke, encouraged schools to budget for a speech and language therapist, who could have an “incredible” impact on children.
“We’ve had to make sacrifices elsewhere, but if children aren’t ready to learn you could sit them in front of the best phonics lessons in the world, they’re not going to take it onboard if they’ve not got those learning behaviours.”
The school says a third of its pupils need help with toilet training when they join nursery, but the school works with parents to ensure they are toilet-trained by the time they reach reception.
“We’re a team. It’s not a case of saying to parents ‘This is your job. Why haven’t you done it?’ We need to work together.”
The government has set a target that 75% of children leaving reception at age five will have a “good level of development” by 2028. Last year 68% of children were at that level, so around 45,000 more children a year would need to reach it to meet the goal.
To achieve a “good” level of development, a child is assessed by teachers at the end of their reception year on tasks including dressing, going to the toilet, and paying attention in class.
Pepe Di’Iasio, of the Association of School and College Leaders, said reception teachers were “brilliant” at supporting young children but local services have been badly eroded over the past decade.
“It has left schools picking up the pieces,” he said. “Many children are starting school already several months behind their peers.”
Parenting charity Kindred Squared found that teachers are spending 2.5 hours a day helping children who haven’t hit developmental milestones instead of teaching.
They have written a set of guidelines for parents to check whether their child has the skills they need to begin school.

Diane’s son has just started Year 1 at St Mary’s in Stoke this year. She says without the school’s support he would have been much further behind in his development.
“Within two weeks he was out of nappies,” said Diane. “They would help him on the toilet here and I’d do it at home, we’d work together.”
Teachers say her boy is thriving, but Diane says the school has been instrumental in supporting his special educational needs and improving his speech and language.
“He does a lot for himself, whereas before he was always dependent on me. School have helped me to help him become more independent and more confident,” she said.
Additional reporting by Emily Doughty