Schools using AI to personalise learning, finds Ofsted

Personalisation is just one of the ways education providers are experimenting with artificial intelligence (AI), according to a report from the Office for Standards in Education, Children’s Services and Skills (Ofsted).

Ofsted looked into early adopters of the technology to find out how it is being used and to assess the positives and challenges of using AI in an educational setting. In some cases, AI was used to assist children who may need extra help because of their life circumstances, with a view to levelling the playing field.

“Several leaders also highlighted how AI allowed teachers to personalise and adapt resources, activities and teaching for different groups of pupils, including, in a couple of instances, young carers and refugee children with English as an additional language,” the report said.

These examples relate to one school using AI to translate resources for students whose first language isn't English, and another turning lessons and resources into podcasts so that young carers can catch up on what they've missed.

Other personalisation use cases included using AI to mark work and give personalised feedback, saving teachers time while also offering students specific advice.

Government push

In early 2025, the UK’s education secretary, Bridget Phillipson, told the Bett Show that the current government plans to use AI to save teachers time, ensure children get the best education possible, and strengthen the connection between students and teachers.

But research conducted by the Department for Education to gauge teachers’ attitudes to the technology found many are wary. Half of teachers are already using generative artificial intelligence (GenAI), according to the research, but 64% of the remaining half aren’t sure how to use it in their roles, and 35% are concerned about the risks it can pose.

Regardless of teacher attitudes, the government is leaning heavily into using AI to make teachers’ lives easier, with plans to invest £4m in developing AI tools “for different ages and subjects to help manage the burden on teachers for marking and assessment”, among many other projects and investments.

The Department for Education (DfE), which also commissioned Ofsted’s research into the matter, has stated: “If used safely, effectively and with the right infrastructure in place, AI can ensure that every child and young person, regardless of their background, is able to achieve at school or college and develop the knowledge and skills they need for life.”

Use cases and cautions

Early in 2025, the government launched its AI opportunities action plan, which includes how the Department for Science, Innovation and Technology (DSIT) aims to use AI to improve the delivery of education in the UK, with DSIT flagging potential uses such as lesson planning and making admin easier.

In some cases, this is exactly what schools and colleges were using it for, according to Ofsted’s research – many were automating common teaching tasks such as lesson planning, marking and creating classroom resources to make time for other tasks; others were using AI in lessons and letting children interact with it.

Other schools had already started developing their own AI chatbots, and though no solid plans were yet in place, there were hopes of integrating the technology into the curriculum in the future.

But implementing AI has required careful consideration, with the report highlighting: “AI requires skills and knowledge across more than one department.”

Each school and college Ofsted spoke to was at a different stage of AI adoption, and teachers and students had varying levels of understanding of how best to use the technology.

The pace of adoption also varied, though most schools seemed to be taking an incremental approach, changing bit by bit as teachers and students experimented with and accepted new ways of working with AI. The report claimed there didn’t seem to be a “prescriptive” approach to which tools could be used.

In most cases there was an “AI champion”: someone responsible for implementing the technology and getting others on board with adoption, usually a person with some prior knowledge of AI.

A college principal at one of the education providers Ofsted spoke to said: “I think anybody who’s telling you they’ve got a strategy is lying to you because the truth of the matter is AI is moving so quickly that any plan wouldn’t survive first contact with the enemy. So, I think a strategy is overbaking it. Our approach is to be pragmatic: what works for the problems we’ve got and what might be interesting to play with for problems that might arise.”

When children are involved, safeguarding should be at the forefront of any plans to implement new technologies, which is one of the reasons those running pilots and introducing AI are being so cautious.

Those Ofsted spoke to already displayed knowledge about the risks of using the technology, such as “bias, personal data, misinformation and safety”, and many had already developed or were adding to AI policies and best practices.

The report said: “A further concern is the risk of AI perpetuating or even amplifying existing biases. AI systems rely on algorithms trained on historical data, which may reflect stereotypical or outdated attitudes…

“However, some of the specific aspects of AI, such as its ability to predict and hallucinate, and the safeguarding issues it raises, create an urgent need to assess whether intended benefits outweigh any potential risks.”

Some schools raised other, less commonly mentioned concerns. For example, where AI is used for student brainstorming or individualised marking, there is a risk of narrowing what counts as a correct answer, taking away some of the “nuance and creativity” in how students answer questions and tackle problems.

Education providers also said they were worried that a reliance on AI could “deskill” teachers and make it harder for children to learn certain skills.

Getting it right

Ultimately, AI adoption will be an ongoing process for education providers, and it’s important that senior leaders are on board, with someone in charge of introducing the technology and monitoring its impact on teaching and education delivery.

The most vital piece of the puzzle, according to Ofsted, is ensuring teachers are guided and supported rather than put under pressure, as well as guaranteeing transparency surrounding anything AI is used for in schools.

“There is a lack of evidence about the impact of AI on educational outcomes or a clear understanding of what type of outcome to consider as evidence of successful AI adoption,” the report said. “Not knowing what to measure and/or what evidence to collect makes it hard to identify any direct impact of AI on outcomes.

“Our study also indicates that these journeys are far from complete,” it continued. “The leaders we spoke to are aware that developing an overarching strategy for AI and providing effective means for evaluating the impact of AI are still works in progress. The findings show how leaders have built and developed their use of AI. However, they also highlight gaps in knowledge that may act as barriers to an effective, safe or responsible use of AI.”



Babylon Bee 1, California 0: Court Strikes Down Law Regulating Election-Related AI Content | American Enterprise Institute

By reducing traditional barriers of content creation, the AI revolution holds the potential to unleash an explosion in creative expression. It also increases the societal risks associated with the spread of misinformation. This tension is the subject of a recent landmark judicial decision, Babylon Bee v Bonta (hat tip to Ajit Pai, whose social media account remains an outstanding follow). The eponymous satirical news site and others challenged California’s AB 2839, which prohibited the dissemination of “materially deceptive” AI-generated audio or video content related to elections. Although the court recognized that the case presented a novel question about “synthetically edited or digitally altered” content, it struck down the law, concluding that the rise of AI does not justify a departure from long-standing First Amendment principles.

AB 2839 was California’s attempt to regulate the use of AI and other digital tools in election-related media. The law defined “materially deceptive” content as audio or visual material that has been intentionally created or altered so that a reasonable viewer would believe it to be an authentic recording. It applied specifically to depictions of candidates, elected officials, election officials, and even voting machines or ballots, where the altered content was “reasonably likely” to harm a candidate’s electoral prospects or undermine public confidence in an election. While the statute carved out exceptions for candidates making deepfakes of themselves and for satire or parody, those exceptions required prominent disclaimers stating that the content had been manipulated.


The court recognized that the electoral context raises the stakes for both parties. Because the law regulated speech on the basis of content, the court applied strict scrutiny: The law is constitutional only if it serves a compelling governmental interest and is the least restrictive means of protecting that interest. On the one hand, the court recognized that the state has a compelling interest in preserving the integrity of its election process. California noted how AI-generated robocalls purporting to be from President Biden encouraged New Hampshire voters not to go to the polls during the 2024 primary. But on the other hand, the Supreme Court has recognized that political speech occupies the “highest rung” of First Amendment protection. That tension is the opinion’s throughline: While elections justify significant regulation, they also demand the most protection for individual speech.

But the court ultimately held that California struck the wrong balance. The state argued that the bill was a logical extension of traditional harmful-speech regulations such as defamation or fraud, but the court ruled that the law reached much further. It did not limit liability to instances of actual harm, extending it instead to any content “reasonably likely” to cause material harm. And importantly, it did not limit recovery to actual candidate victims, but instead allowed any recipient of allegedly deceptive content to sue for damages. This private right of action deputized roving “censorship czars” across the state whose malicious or politically motivated suits risk chilling a broad swath of political expression.

Given this breadth, the court found the law’s safe harbor was insufficient. The law exempted satirical content (such as that produced by the Bee) as long as it carried a disclaimer that the content was digitally manipulated, in accordance with the act’s formatting requirements. But the court found that this compelled disclosure was itself unconstitutional, as it drowned out the plaintiff’s message: “Put simply, a mandatory disclaimer for parody or satire would kill the joke.” This was especially true in contexts such as mobile devices, where the formatting requirements meant the disclaimer would take up the entire screen—a problem that I have discussed elsewhere in the context of Federal Trade Commission disclaimer rules.

Perhaps most importantly, the court recognized the importance of counter-speech and market solutions as alternative remedies to disinformation. It credited crowd-sourced fact-checking such as X’s community notes, and AI tools such as Grok, as scalable solutions already being adopted in the marketplace. And it noted that California could fund AI educational campaigns to raise awareness of the issue or form committees to combat disinformation via counter-speech.

The court’s emphasis on private, speech-based solutions points the way forward for other policymakers wrestling with deepfakes and other AI-generated disinformation concerns. Private, market-driven solutions offer a more constitutionally sound path than empowering the state to police truth and risk chilling protected expression. The AI revolution is likely to disrupt traditional patterns of content creation and dissemination in society. But fundamental First Amendment principles are sufficiently flexible to adapt to these changes, just as they have when presented with earlier disruptive technologies. When presented with problematic speech, the first line of defense is more speech—not censorship. Efforts at the state or federal level to regulate AI-generated content should respect this principle if they are to survive judicial scrutiny.



AI And Creativity: Hero, Villain – Or Something Far More Nuanced? – New Technology

As part of SXSW’s first ever UK edition, Lewis Silkin brought together a packed room to hear five sharp minds – photographer-advocate Isabelle Doran, tech founder Guy Gadney, licensing entrepreneur Benjamin Woollams, and Commercial partners Laura Harper and Phil Hughes – wrestle with one deceptively simple question: is AI a hero or a villain in the creative world?

Spoiler: it’s neither. Over sixty fast-paced minutes, the panel dug into the real-world impact of generative models, the gaps in current law and the uneasy economics facing everyone from freelancers to broadcasters. We’ve distilled the conversation into six take-aways that matter to anyone who creates, commissions or monetises content.

1. Generative AI is already taking work – fast

“Generative AI is competing with creators in their place of work,” warned Isabelle Doran, citing her Association of Photographers’ latest survey. In September 2024, 30% of respondents had lost a commission to AI; five months later that figure hit 58%. The fallout runs wider than photographers: when a shoot is cancelled, stylists, assistants and post-production teams stand idle too – a ripple effect the panel believes policy-makers ignore at their peril.

2. Yet the tech also unlocks new forms of storytelling

Guy Gadney was quick to balance the gloom: “It’s a proper tsunami in the sense of the breadth and volume that’s changing,” he said, “but it also lets us ask what stories we can tell now that we couldn’t before.” His company, Charismatic AI, is building tools that let writers craft interactive narratives at a speed and scale unheard of two years ago. The opportunity, he argued, lies in marrying that capability with fair economic models rather than trying to “block the tide”.

3. The law isn’t a free-for-all – but it is fragmenting

Laura Harper cut through the noise: “The status quo at the moment is uncertain and it depends on what country you’re operating in.” In the UK, copyright can subsist in computer-generated works; in the US, it can’t. EU rules require commercial text-and-data miners to respect opt-outs; UK law doesn’t – yet. Add divergent notions of “fair use” and you get a regulatory patchwork that leaves creators guessing and investors hesitating.

4. Transparency is the missing link

Phil Hughes nailed the practical blocker: “We can’t build sensible licensing schemes until we know what data went into each model.” Without a statutory duty to disclose training sets, claims for compensation – or even informed consent – stall. Isabelle Doran backed him up, pointing to Baroness Kidron’s amendment that would force openness via the UK’s Data Act. The Lords have now sent that proposal back to the Commons five times; every extra week, more unlicensed works are scraped.

5. Collective licensing could spread the load

Individual artists can’t negotiate with OpenAI on equal terms, but Benjamin Woollams sees hope in a pooled approach. “Any sort of compensation is probably where we should start,” he said, arguing for collective rights management to mirror how music collecting societies handle radio play. At True Rights he’s developing pricing tools to help influencers understand usage clauses before they sign them – a practical step towards standardisation in a market famous for anything but.

6. Personality rights may be the next frontier

Copyright guards expression; it doesn’t stop a model cloning your voice, gait or mannerisms. “We need to strengthen personality rights,” Isabelle Doran urged, echoing calls from SAG-AFTRA and beyond. Think passing off on steroids: a legal shield for the look, sound and biometric data that make a performer unique. Laura Harper agreed – but reminded us that recognition is only half the battle. Enforcement mechanisms, cross-border by default, must follow fast.

Where does that leave us?

AI is not marching creators to the cliff edge, but it is forcing a reckoning. The panel’s consensus was clear:

  • We can’t uninvent generative tools – nor should we.

  • Creators deserve both transparency and a cut of the value chain.

  • Government must move quickly, or the UK risks watching leverage, investment and talent drift overseas.

As Phil Hughes put it in closing:

“We all know artificial intelligence has unlocked extraordinary possibilities across the creative industries. The question is whether we’re bold enough and organised enough to make sure those possibilities pay off for the people whose imagination feeds the machine.”




Tampere University GPT-Lab launches AI project with City of Pori

GPT-Lab, part of Tampere University’s Faculty of Information Technology and Communication Sciences in Finland, has begun collaborating with the City of Pori Unemployment Services on the Generative Artificial Intelligence in Business Support (GENT) project.

The initiative will test how AI-driven automation can improve efficiency and reliability in public sector services.

According to a LinkedIn post from GPT-Lab, the kickoff meeting on 3 September brought together project researchers and city representatives to align objectives and set the project roadmap. The work will focus on automating routine inquiries and case handling to reduce the workload of staff, speed up responses to citizens, and free time for tasks that require human expertise.

The project is designed to improve the efficiency, accessibility, and reliability of unemployment services. It will also provide a framework for the responsible use of AI in the public sector.

The GENT project, funded by the Satakuntaliitto Regional Council of Satakunta and led by Tampere University, runs until September 2026. Its broader aim is to bring generative AI expertise to companies and organizations in the Satakunta region. Researchers will work directly with businesses to co-create AI-assisted experiments that enhance productivity, investment efficiency, and competitiveness.

Solutions and materials developed through these experiments will be shared with all companies in the region and can be adapted to individual needs. The project also highlights cooperation between SMEs, public services, and research institutions in Finland.



