

Schools using AI to personalise learning, finds Ofsted



Personalisation is just one of the ways education providers are experimenting with artificial intelligence (AI), according to a report from the Office for Standards in Education, Children’s Services and Skills (Ofsted).

Ofsted looked into early adopters of the technology to find out how it is being used, and to assess the benefits and challenges of AI in an educational setting. In some cases, AI was being used to assist children whose life circumstances mean they may need extra help, with a view to levelling the playing field.

“Several leaders also highlighted how AI allowed teachers to personalise and adapt resources, activities and teaching for different groups of pupils, including, in a couple of instances, young carers and refugee children with English as an additional language,” the report said.

These examples relate to one school using AI to translate resources for students whose first language isn’t English, and another turning lessons and resources into podcasts to help young carers catch up on things they’ve missed.

Other personalisation use cases included AI marking work and giving personalised feedback, saving the teacher time while offering students specific advice.
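To make that marking use case concrete, here is a minimal sketch of how a school might prompt a large language model to mark an answer against a rubric and produce personalised feedback. It assumes an OpenAI-style chat API; the model name, rubric and pupil answer are hypothetical illustrations, not examples from Ofsted’s report.

```python
# Minimal sketch of AI-assisted marking with personalised feedback.
# Assumes an OpenAI-style chat API; rubric and answer are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """Award up to 5 marks:
- 2 marks for naming the process (photosynthesis)
- 2 marks for identifying the inputs (light, water, carbon dioxide)
- 1 mark for naming the outputs (glucose, oxygen)"""

def mark_answer(question: str, answer: str) -> str:
    """Return a mark and short, personalised feedback for one pupil's answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a teaching assistant. Mark the pupil's answer "
                        "against the rubric, then give two sentences of "
                        "encouraging, specific feedback."},
            {"role": "user",
             "content": f"Question: {question}\nRubric: {RUBRIC}\nAnswer: {answer}"},
        ],
    )
    return response.choices[0].message.content

print(mark_answer(
    "How do plants make their own food?",
    "Plants use sunlight and water to make food in their leaves.",
))
```

In practice, a teacher would still review the suggested mark and feedback before they reached the pupil; the schools in Ofsted’s report used AI to assist marking, not replace it.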

Government push

In early 2025, the UK’s education secretary, Bridget Phillipson, said at the Bett Show that the government plans to use AI to save teachers time, ensure children get the best education possible, and strengthen the connection between students and teachers.

But research conducted by the Department for Education to gauge teachers’ attitudes to the technology found many are wary. Half of teachers are already using generative artificial intelligence (GenAI), according to the research, but of the half who aren’t, 64% are unsure how to use it in their roles, and 35% are concerned about the risks it can pose.

Regardless of teacher attitudes, the government is leaning heavily on AI to make teachers’ lives easier, with plans to invest £4m in developing AI tools “for different ages and subjects to help manage the burden on teachers for marking and assessment”, among many other projects and investments.

The Department for Education (DfE), which also commissioned Ofsted’s research into the matter, has stated: “If used safely, effectively and with the right infrastructure in place, AI can ensure that every child and young person, regardless of their background, is able to achieve at school or college and develop the knowledge and skills they need for life.”

Use cases and cautions

Early in 2025, the government launched its AI opportunities action plan, which sets out how the Department for Science, Innovation and Technology (DSIT) aims to use AI to improve the delivery of education in the UK, with DSIT flagging potential uses such as lesson planning and easier admin.

In some cases, this is exactly what schools and colleges were using it for, according to Ofsted’s research – many were automating common teaching tasks such as lesson planning, marking and creating classroom resources to make time for other tasks; others were using AI in lessons and letting children interact with it.

Other schools had already started developing their own AI chatbots, and though no solid plans were yet in place, there were hopes of integrating the technology into the curriculum in the future.

But implementing AI has required careful consideration, with the report highlighting: “AI requires skills and knowledge across more than one department.”

Each school and college Ofsted spoke to was at a different stage of AI adoption, and teachers and students had varying levels of understanding of how best to use the technology.

The pace of adoption also varied, though most schools seemed to be taking an incremental approach, changing bit by bit as teachers and students experimented with and accepted new ways of working with AI. The report said there didn’t seem to be a “prescriptive” approach to which tools could be used.

In most cases there was an “AI champion”: someone responsible for implementing the technology and getting others on board with adoption, usually someone with prior knowledge of it in some capacity.

The principal of one of the colleges Ofsted spoke to said: “I think anybody who’s telling you they’ve got a strategy is lying to you because the truth of the matter is AI is moving so quickly that any plan wouldn’t survive first contact with the enemy. So, I think a strategy is overbaking it. Our approach is to be pragmatic: what works for the problems we’ve got and what might be interesting to play with for problems that might arise.”

When children are involved, safeguarding should be at the forefront of any plans to implement new technologies, which is one of the reasons those running pilots and introducing AI are being so cautious.

Those Ofsted spoke to already displayed knowledge about the risks of using the technology, such as “bias, personal data, misinformation and safety”, and many had already developed or were adding to AI policies and best practices.

The report said: “A further concern is the risk of AI perpetuating or even amplifying existing biases. AI systems rely on algorithms trained on historical data, which may reflect stereotypical or outdated attitudes…

“However, some of the specific aspects of AI, such as its ability to predict and hallucinate, and the safeguarding issues it raises, create an urgent need to assess whether intended benefits outweigh any potential risks.”

Some schools raised other, less commonly mentioned concerns. Where AI is used for student brainstorming or individualised marking, for example, there is a risk of narrowing what counts as correct, stripping some of the “nuance and creativity” from how students answer questions and tackle problems.

Education providers also worried that reliance on AI could “deskill” teachers and make it harder for children to learn certain skills.

Getting it right

Ultimately, AI adoption will be an ongoing process for education providers, and it’s important that senior leaders are on board, with someone in charge of introducing the technology and monitoring its impact on teaching and education delivery.

The most vital piece of the puzzle, according to Ofsted, is ensuring teachers are guided and supported rather than put under pressure, and being transparent about anything AI is used for in schools.

“There is a lack of evidence about the impact of AI on educational outcomes or a clear understanding of what type of outcome to consider as evidence of successful AI adoption,” the report said. “Not knowing what to measure and/or what evidence to collect makes it hard to identify any direct impact of AI on outcomes.

“Our study also indicates that these journeys are far from complete,” it continued. “The leaders we spoke to are aware that developing an overarching strategy for AI and providing effective means for evaluating the impact of AI are still works in progress. The findings show how leaders have built and developed their use of AI. However, they also highlight gaps in knowledge that may act as barriers to an effective, safe or responsible use of AI.”





The EU AI Act is Here (Is Your Data Ready to Lead?)



The accelerated adoption of AI and generative AI tools has reshaped the business landscape. With powerful capabilities now within reach, organizations are rapidly exploring how to apply AI across operations and strategy.  

In fact, 93% of UK CEOs have adopted generative AI tools in the last year, and according to the latest State of AI report by McKinsey, 78% of businesses use AI in more than one business function. 

With such an expansion, governing bodies are acting promptly to ensure AI is deployed responsibly, safely and ethically. For example, the EU AI Act restricts unethical practices, such as facial image scraping, and mandates AI literacy. This ensures organizations understand how their tools generate insights before acting on them. These policies aim to reduce the risk of AI misuse due to insufficient training or oversight. 

In July, the EU released its final General-Purpose AI (GPAI) Code of Practice, outlining voluntary guidelines on transparency, safety and copyright for foundation models. While voluntary, companies that opt out may face closer scrutiny or more stringent enforcement. Alongside this, new phases of the act continue to take effect, with the latest compliance deadline falling in August. 

This raises two critical questions for organizations. How can they utilize AI’s transformative power while staying ahead of new regulations? And how will these regulations shape the path forward for enterprise AI? 


How New Regulations Are Reshaping AI Adoption 

The EU AI Act is driving organizations to address longstanding data management challenges to reduce AI bias and ensure compliance. AI systems under “unacceptable risk” — those that pose a clear threat to individual rights, safety or freedoms — are already restricted under the act.  

Meanwhile, broader compliance obligations for general-purpose AI systems are taking effect this year. Stricter obligations for systemic-risk models, including those developed by leading providers, follow in August 2026. With this rollout schedule, organizations must move quickly to build AI readiness, starting with AI-ready data. That means investing in trusted data foundations that ensure traceability, accuracy and compliance at scale. 

In industries such as financial services, where AI is used in high-stakes decisions like fraud detection and credit scoring, this is especially urgent. Organizations must show that their models are trained on representative and high-quality data, and that the results are actively monitored to support fair and reliable decisions. The act is accelerating the move toward AI systems that are trustworthy and explainable. 
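To illustrate what actively monitoring results might involve, the sketch below computes approval rates by demographic group and flags a disparity, a simple form of fairness monitoring. The groups, decisions and the 0.8 threshold (a common rule of thumb for disparate impact) are hypothetical illustrations, not anything the act prescribes.

```python
# Minimal sketch of monitoring credit-scoring outcomes for fairness.
# Groups, decisions and the disparity threshold are hypothetical.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the approval rate for each demographic group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, was_approved in decisions:
        totals[group] += 1
        approved[group] += was_approved
    return {g: approved[g] / totals[g] for g in totals}

decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]

rates = approval_rates(decisions)
# Disparate impact ratio: lowest rate over highest; < 0.8 is a common red flag
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("ALERT: approval-rate disparity exceeds threshold; review the model")
```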


Data Integrity as a Strategic Advantage 

Meeting the requirements of the EU AI Act demands more than surface-level compliance. Organizations must break down data silos, especially where critical data is locked in legacy or mainframe systems. Integrating all relevant data across cloud, on-premises and hybrid environments, as well as across business functions, is essential to improving the reliability of AI outcomes and reducing bias. 

Beyond integration, organizations must prioritize data quality, governance and observability to ensure that the data used in AI models is accurate, traceable and continuously monitored. Recent research shows that 62% of companies cite data governance as the biggest challenge to AI success, while 71% plan to increase investment in governance programs. 
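As a concrete example of what such observability might look like, the following sketch runs basic automated quality checks (completeness, duplicates, freshness) over a dataset before it feeds a model. The column names and thresholds are hypothetical.

```python
# Minimal sketch of automated data-quality checks for AI-ready data.
# Column names and thresholds are hypothetical illustrations.
import pandas as pd

def quality_report(df: pd.DataFrame, max_null_rate: float = 0.05) -> dict:
    """Flag basic quality issues: missing values, duplicates, stale records."""
    issues = {}
    # Completeness: columns with too many missing values
    null_rates = df.isna().mean()
    issues["incomplete_columns"] = null_rates[null_rates > max_null_rate].to_dict()
    # Uniqueness: duplicated rows that could skew model training
    issues["duplicate_rows"] = int(df.duplicated().sum())
    # Freshness: records older than 30 days (assumes an 'updated_at' column)
    if "updated_at" in df.columns:
        age = pd.Timestamp.now() - pd.to_datetime(df["updated_at"])
        issues["stale_rows"] = int((age > pd.Timedelta(days=30)).sum())
    return issues

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "credit_score": [710, None, None, 655],
    "updated_at": ["2025-07-01", "2025-01-15", "2025-01-15", "2025-07-20"],
})
print(quality_report(df))
```

A report like this, run on every refresh, is one simple way to make “continuously monitored” an operational reality rather than a policy statement.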

The lack of interpretability and transparency in AI models remains a significant concern, raising questions around bias, ethics, accountability and equity. As organizations operationalize AI responsibly, robust data and AI governance will play a pivotal role in bridging the gap between regulatory requirements and responsible innovation. 


Additionally, incorporating trustworthy third-party datasets, such as demographics, geospatial insights and environmental risk factors, can increase the accuracy of AI outcomes and strengthen fairness by adding context. This is increasingly important given the EU’s direction toward stronger copyright protection and mandatory watermarking for AI-generated content. 
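In practice, such enrichment is often a keyed join between first-party records and the external dataset; the value comes from the added context. A minimal sketch follows, with entirely hypothetical fields and values.

```python
# Minimal sketch of enriching first-party data with a third-party dataset.
# All columns and values are hypothetical.
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "postcode": ["EC1A", "M1", "G1"],
})

# Hypothetical third-party demographic/geospatial reference data
demographics = pd.DataFrame({
    "postcode": ["EC1A", "M1", "G1"],
    "median_income": [41_000, 29_500, 31_200],
    "flood_risk": ["low", "medium", "low"],
})

# Left join keeps every customer and adds the contextual attributes
enriched = customers.merge(demographics, on="postcode", how="left")
print(enriched)
```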

A More Deliberate Approach to AI 

The early excitement around AI experimentation is now giving way to more thoughtful, enterprise-wide planning. Currently, only 12% of organizations report having AI-ready data. Without accurate, consistent and contextualized data in place, AI initiatives are unlikely to deliver measurable business outcomes. Poor data quality and governance limit performance and introduce risk, bias and opacity into business decisions that affect customers, operations and reputation. 

As AI systems grow more complex and agentic, capable of reasoning, taking action, and even adapting in real time, the demand for trusted context and governance becomes even more critical. These systems cannot function responsibly without a strong data integrity foundation that supports transparency, traceability and trust. 

Ultimately, the EU AI Act, alongside upcoming legislation in the UK and other regions, signals a shift from reactive compliance to proactive AI readiness. As AI adoption grows, powering AI initiatives with integrated, high-quality and contextualized data will be key to long-term success with scalable and responsible AI innovation. 







The Tech Elites Trying to “Build Canada” Can Only Muster AI-Written Prose




The technology executive suffers from a unique affliction. Call it disruptivitis: he (it’s almost always a he) will stumble upon a well-trod idea, give it a new name, and then claim credit for its discovery. Often, this idea will involve privatizing a previously public good, placing an app between a customer and an existing product or service, or solving an intractable social problem in such a way that happens to line said executive’s pockets.

Most importantly, this idea is always a priori innovative, by virtue of its origin in the mind of a self-declared innovator—think Athena springing fully formed from Zeus’s forehead. Fortunately for those afflicted, disruptivitis is also the world’s only malady that enriches its sufferers, and the boy-kings of Silicon Valley are its patient zeroes. Elon Musk was the first person to think of subways; the brain trust at Uber recently dreamed up the bus; meanwhile, Airbnb’s leaders decided to go ahead and start listing hotel rooms. Someday soon, a nineteen-year-old Stanford dropout will invent the wheel and become a billionaire.

This plague has now crossed the forty-ninth parallel via something called Build Canada. Its founders insist Build Canada isn’t a lobby group and doesn’t represent “special interest groups,” although it includes a former senior Liberal staffer as co-founder and CEO, several former or current executives and employees at Shopify (one of the country’s most valuable companies), and various other tech- and business-adjacent figures. (Apparently, corporate interests aren’t “special.”) They describe Build Canada as a project that will, it seems, close up shop whenever the government finally sees the light and implements their ideas, which are spelled out via a series of “memos.”

The project has attracted attention in political and tech circles; Liberal prime minister Mark Carney even established a Build Canada cabinet committee, despite the fact that, according to reporting by The Logic, a number of the project’s founders have turned hard right and backed the Conservatives in the last election.

But the memos have received less notice—and that’s a problem. They’re the core of the project, spelling out, in detail, the goals and world views of its backers; they’re also instructive as literary artifacts, with their own tics and tells. Perhaps it’s time we read these memos with the care upon which they so stridently insist.

As of this writing, there are thirty-six Build Canada memos. They’re policy proposals, basically, but they’re also intended to be works of political rhetoric, crafted (although, as we’ll see, “generated” might be the more apt verb) by people who believe that prose can move power. More than anything, though, the memos evoke the post-literate era’s most influential rhetorical form: the tech start-up pitch deck.

For one thing, the memos are utterly disinterested in language itself and seem to be pitched at someone with the attention span of a ketamine-addled venture capitalist. Many would require the translation services of a Y Combinator alumnus, with a lot of thoughts on “seconding employees” and “micromobility solutions,” as well as suggestions for “transition validated technologies” and a “follow-on non-dilutive capital program.” One representative passage: “Today in 2025, LCGE and CEI’s true combined cap is only $1.25M. And while QSBS shields 100% of gains up until the policy cap for individuals and corporations, Canada’s CEI would only shields [sic] 66.7% of gains for individuals.” Not exactly Two Treatises of Government or What Is to Be Done? A prior version of the Build Canada website said unnamed “experts” review each memo before publication, but expert editors don’t seem to be among them. Even government white papers have more flair.

This raises an important question, one crucial to any work of rhetoric: Who are these memos—with their gumbo of lofty self-regard, change-the-world ambition, and Instagram-reel reading level—actually for? If they’re intended for a general audience, aiming to inspire the Canadian public to rally around such stirring, big-tent goals as stablecoin adoption and capital gains reform, why do they dwell on “structured procurement pathway” and “major process driven services”? If, on the other hand, they’re intended as private lobbying tools, for a small audience of elected officials and aides, why make a whole-ass website?

The simplest explanation: the people behind Build Canada are too online. Its founders say they got together because “We got sick of sharing bold ideas on social media, in private chats and political events, and seeing nothing happen.” Now, most normal people, upon typing a sentence like that, would be self-aware enough to step away from the keyboard, take up an interesting hobby like cross stitching or Warhammer, and never speak of this brief lapse in judgment again. (Tellingly, that line has since been scrubbed from the Build Canada website.) But, remember, the technology executive is not like you or me. His ideas are always bold—which means their lack of implementation is not just a personal affront but open defiance of the natural order. It should be enough for him to tweet these ideas and leave the details to the peons.

Like so many terminally online posters before them, though, Build Canada’s founders have mistaken an audience of social media sycophants for a popular base of support. The great robber barons of old at least had the decency and good sense to stay behind the curtain. But, for today’s wealthy, influence isn’t enough. They want credit too. Musk posted a lot on Twitter; then he bought Twitter; then he bought a president. Build Canada founders appear to be on the same path—although, like proper Canadians, they’re still playing catch-up with the Americans.

If the memos are supposed to be works of persuasion, one has to ask: Why are they so poorly written? The obvious answer is that they’re produced with the help of generative artificial intelligence. Build Canada admits this. “It’s an experiment in how we could be doing things,” co-founder Daniel Debow has said, an excuse that red-handed undergraduates might want to keep on mental file. Indeed, the memos bear all of a chatbot’s hallmarks: bulleted lists, bolded headers, circular logic, business-school jargon, pleonasms, repetition. The generalizations are sweeping, the ideas visionary—albeit within a circumscribed vocabulary. Build Canada’s proposals are frequently “bold” (twenty-one uses, by my count). The country is in “crisis” (thirty-five), but it would be “world-class” (twenty) if not for all those “outdated” (eighteen) regulations and policies, although the most pressing issues at hand are “investment” (195), “innovation” (109), and “productivity” (forty-two), rather than, say, climate change (three) or poverty (three).

Build Canada’s reliance on AI isn’t surprising, since it seems to be the project’s glue, both the solution to government waste and a God-given right. (The irony of a large language model extolling its own virtues goes unremarked upon.) It’s also the future of art and entertainment, per one disquieting memo that advocates the redirection of cultural funding toward AI-related “content.” “Shift emphasis from rewarding sheer volume or traditional labour inputs towards incentivizing projects demonstrating innovative human-AI collaboration, development of Canadian AI creative tools, and global competitiveness,” the memo intones, in chillingly businesslike terms. “Redirect a portion of existing funds from less impactful programs towards these AI-readiness priorities.”

Build Canada’s founders point out, again and again, that they’re doing this on a volunteer basis, simply because they care about the country so much. If that’s true, why can’t they be bothered to write anything themselves, rather than turning to a chatbot? For all their complaints about “inertia” and “small thinking” holding the country back, it’s hard to imagine anything more inert or small minded than leaning on AI to churn out a couple of unremarkable paragraphs. Contempt for language is a form of contempt for the reader, and the overriding tone of the Build Canada memos is one of annoyance at having to spell out all these self-evident ideas for us little people.

If the style of the Build Canada memos leaves something to be desired, what about the substance—the policy ideas themselves? Some are good, or unobjectionable, or common sense. Canada should produce more food locally. Canadian telecoms have a monopolistic stranglehold on the market. Canadians should control their financial data. Canada needs high-speed rail and more housing. If you’re a normal person, you might believe that the reason these problems haven’t been fixed is that certain powerful players have certain economic incentives to oppose certain reforms—which results in those reforms being stymied. You might then draw the conclusion that the chief issue is greed and malice.

According to Build Canada, you’d be wrong. Who cares if, say, the housing crisis isn’t solely caused by a shortage of units but—to name a few other hypothetical culprits—the rise of corporate landlordism, a staggering drop in affordable and social housing stock, and an equally staggering decline in consumer purchasing power? Never mind. The only problem is all that pesky red tape. Might the Canadian consumer’s lack of financial data portability have something to do with the outsize political power of the country’s biggest banks? Let’s not get into that. In Build Canada’s world, there are almost no entrenched interests (except, that is, for public sector employees). The problem is always big government and low ambition.

If you lack the serene benevolence of the technology executive, some of Build Canada’s other proposals might give you pause. Again, though, that’s a you problem. Are you worried about the high rates of accidents from self-driving cars, or fires from e-bike battery meltdowns, or the accessibility hazards posed by electric scooters? You’re a NIMBY. Do you suspect that cryptocurrencies are really just unregulated financial securities? You’re living in the past. Are you weirded out by the idea of only funding artists who “celebrate Canadian achievement and ambition”? You’re short sighted. Are you troubled by the climate-change impact of fast-tracking every major fossil fuel project in the country? You’re unrealistic. Are you creeped out by a points-based rewards system for new immigrants? You’re soft. Do you have reservations about the wholesale embrace of generative artificial intelligence, given its long-term implications for employment, energy use, and the survival of the human spirit? You’re out of touch.

If, however, you have certain “outdated” ideas about any of the issues tackled by the Build Canada genius bar—if, for example, you believe that the clear-and-present climate catastrophe might require stopping new pipeline development rather than accelerating it, or that a technology like AI should be safely regulated rather than handed over for Pandora to crank open—you might be led to the conclusion that Build Canada has a very specific reason for blaming all the country’s ills on laziness and bureaucracy. In fact, you might begin to suspect that its founders are pointing the finger at everyone except themselves. You might notice that Build Canada has next to nothing to say about, for example, income inequality. You might wonder if—hypothetically—this has something to do with the class interests and net worth of its founders.

You might even allow your mind to wander down unexpected pathways—the sorts of meanderings and sense-memory flashbacks of which AI chatbots are, mercifully, not yet capable—until, for some reason, you realize that “Build Canada” has the same cadence as “Blame Canada,” the classic song from 1999’s South Park: Bigger, Longer & Uncut. And, in another surprising mental leap, you might then recall the song’s final line, which, for reasons you can’t quite put a finger on, sounds awfully apt right now: “We must blame them and cause a fuss / before somebody thinks of blaming us.”

Drew Nelles is a writer and a former senior editor at The Walrus.







Could gen AI radically change the power of the SLA?



Clorox’s lawsuit cites transcripts of help desk calls as evidence of Cognizant’s negligence. But what if those calls had been captured, transcribed, and analyzed to send real-time alerts to Clorox management? Could the problem behavior have been discovered early enough to thwart the breach?

Here, generative AI could have a significant impact. It can capture information from a wide range of communication channels — potentially actions as well, via video — and analyze it for deviations from what a company has been contracted to deliver. The result: near-real-time alerts about problematic behavior, which could spur a rethinking of the SLA as it is currently practiced. 

“This is flipping the whole idea of SLA,” said Kevin Hall, CIO for the Westconsin Credit Union, which has 129,000 members throughout Wisconsin and Minnesota. “You can now have quality of service rather than just performance metrics.”
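As a rough illustration of how such quality-of-service monitoring might work, the sketch below screens a help desk transcript against a contracted procedure with a language model and raises an alert on deviation. It assumes an OpenAI-style chat API; the procedure, transcript and alert hook are hypothetical, not drawn from the Clorox case.

```python
# Minimal sketch of LLM-based SLA monitoring over help desk transcripts.
# The contracted procedure, transcript and alert hook are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

CONTRACTED_PROCEDURE = """Agents must verify caller identity with two factors
before resetting any credential, and must never share passwords verbally."""

def check_transcript(transcript: str) -> str:
    """Ask the model whether the call deviates from the contracted procedure."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You audit help desk calls. Answer 'OK' if the call "
                        "follows the procedure, otherwise 'DEVIATION: <reason>'."},
            {"role": "user",
             "content": f"Procedure: {CONTRACTED_PROCEDURE}\nTranscript: {transcript}"},
        ],
    )
    return response.choices[0].message.content

verdict = check_transcript(
    "Caller: I forgot my password. Agent: No problem, I'll reset it now "
    "and read you the temporary password over the phone."
)
if verdict.startswith("DEVIATION"):
    print(f"ALERT: {verdict}")  # in production this would page ops or management
```

Run over every call, a screen like this turns the transcript archive from after-the-fact litigation evidence into a live quality-of-service signal, which is the shift Hall describes.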


