Education
Glasgow Caledonian pauses recruitment to BCA-risk courses
- Scottish university writes to agents to inform them of a “pause” on course recruitment, along with the withdrawal of offers and refunds of deposits
- Only courses that don’t meet the new BCA metrics are impacted – but they have not been named yet
- The move comes as British universities audit threats to their sponsor licences following the compliance crackdown outlined in Keir Starmer’s immigration white paper.
Writing to its network of global recruitment partners, Glasgow Caledonian University has acted upon its own internal analysis of visa refusals, enrolment and course completion rates by pausing recruitment to specific courses where there could be a risk to BCA compliance.
Communications seen by The PIE News quote the vice-chancellor, Stephen Decent, as saying: “In order to ensure that the [university] is able to achieve the more stringent requirements of the white paper, and ensure our ongoing and future compliance with our legal responsibilities as a sponsor of international students, we have taken the decision to implement a number of short-term and temporary changes to international student intake.”
“We have identified a number of courses at risk of non-compliance with the new UKVI metrics – and we have made the decision to pause recruitment to these courses for September 2025,” he continued.
The communication goes on to say that the university will be withdrawing offers on these courses for September 2025 and refunding deposits paid, in time for students to source alternative courses. However, it remains unclear which courses will be affected. The university has reached out to all affected students and its overseas partners and stressed that “everyone directly impacted has been fully informed”.
“While Glasgow Caledonian remains highly attractive to students from around the world, and we both welcome and value our international students, we have taken the decision to temporarily pause international student recruitment to a number of postgraduate programmes for the September 2025 intake,” a spokesperson for the university confirmed to The PIE. “This is a proactive and strategic step in light of anticipated changes outlined in the UK government’s immigration white paper.”
While the new BCA metrics – which would see the minimum pass requirement for each metric tightened by five percentage points – are not being formally enforced yet, delegates at last month’s UKCISA conference were told by compliance officials that they fully expected all policies set out to be enacted in the future.
Policy changes such as BCA metric boundaries would not require legislative change and therefore could be introduced and enforced quickly. As a result, many universities have been conducting their own internal audits to see if they would be compliant or not.
“These new thresholds will present a greater challenge for many institutions, including our own, particularly if implemented without transition time or additional support mechanisms. Doing nothing is therefore not an option,” the Glasgow Caledonian University spokesperson said.
They added that the move represented only a “short-term pause”, giving it the time it needs to “review and, where necessary, adjust” entry processes for international students to make sure the institution is in “the strongest position possible” to meet the tightened thresholds.
Universities continue to be concerned about the proposed timing of the new metrics and their potential impact on smaller institutions. More clarity is needed on how non-completion is recorded, especially if a university is punished for students who legitimately decided not to continue with their studies and return home.
The immigration white paper outlines more stringent metrics for Basic Compliance Assessment (BCA):
- Reduction of visa refusal rate from 10% to 5%
- Increase of course completion rate from 85% to 90%
- Increase of enrolment rate from 90% to 95%
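The tightened thresholds above amount to three simple pass/fail checks. The sketch below is illustrative only: the metric names and the per-sponsor calculation are assumptions for this example, not UKVI’s actual methodology.

```python
# Illustrative check of an institution's figures against the tightened BCA
# thresholds described in the white paper. Field names are assumptions.

NEW_THRESHOLDS = {
    "visa_refusal_rate_max": 0.05,       # was 0.10
    "enrolment_rate_min": 0.95,          # was 0.90
    "course_completion_rate_min": 0.90,  # was 0.85
}

def meets_bca(refusal_rate, enrolment_rate, completion_rate,
              thresholds=NEW_THRESHOLDS):
    """Return a pass/fail flag for each of the three BCA metrics."""
    return {
        "visa_refusal": refusal_rate <= thresholds["visa_refusal_rate_max"],
        "enrolment": enrolment_rate >= thresholds["enrolment_rate_min"],
        "course_completion":
            completion_rate >= thresholds["course_completion_rate_min"],
    }

# A sponsor that passed under the old 10% refusal ceiling can fail under
# the new 5% ceiling while still meeting the other two metrics.
results = meets_bca(refusal_rate=0.07, enrolment_rate=0.96,
                    completion_rate=0.91)
print(results)
```

This illustrates why institutions are auditing course-level data: a single metric slipping past its tightened boundary is enough to create compliance risk.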
While Glasgow Caledonian University has not yet revealed the courses that will be subject to a pause in recruitment, it is well documented that postgraduate programmes such as Master of Research (MRes) courses have been attracting applicants seeking to bring dependants.
The PIE has previously reported on other institutions curtailing MRes recruitment mid-cycle, in a bid to reduce compliance risk.
Speculation has increased that institutions may be encouraging students to switch to MSc programmes in a bid to reduce numbers enrolled on MRes programmes in recent intakes, although this tactic would not help affected students who need to obtain dependant visas for their loved ones.
The university has explained that the recruitment pause will enable time for the work to be undertaken to ensure that all courses are compliant with the new UKVI metrics.
“This will have a short-term impact, and we are aiming to reopen these courses as quickly as possible, primarily from Trimester B onwards and on the proviso that any reopened activities put us in as strong a position of compliance and success as possible,” the university stated in its letter to stakeholders.
Colleagues from across the sector continue to be concerned about the BCA reforms, calling them “arbitrary”.
Last month, The PIE reported that former Home Secretary Jack Straw had warned that unscrupulous recruitment practices at some mid-ranking UK universities were having real-world consequences for the sector.
Speaking at Duolingo English Language Test’s inaugural DETcon London conference, he raised concerns that some universities had “expanded dramatically” in terms of their international intake – and that “the rest of the sector will pay the price” as the government clamps down on compliance.
New York Passes the Responsible AI Safety and Education Act
The New York legislature recently passed the Responsible AI Safety and Education Act (SB6953B) (“RAISE Act”). The bill awaits signature by New York Governor Kathy Hochul.
Applicability and Relevant Definitions
The RAISE Act applies to “large developers,” defined as persons that have trained at least one frontier model and have spent over $100 million in aggregate compute costs in training frontier models.
- “Frontier model” means either (1) an artificial intelligence (AI) model trained using greater than 10^26 computational operations (e.g., integer or floating-point operations), the compute cost of which exceeds $100 million; or (2) an AI model produced by applying knowledge distillation to a frontier model, provided that the compute cost for such distilled model exceeds $5 million.
- “Knowledge distillation” is defined as any supervised learning technique that uses a larger AI model or the output of a larger AI model to train a smaller AI model with similar or equivalent capabilities as the larger AI model.
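The two-prong “frontier model” definition above can be read as a simple predicate. The sketch below is illustrative only: the function and parameter names are assumptions, and actual applicability under the bill is a legal question, not a computation.

```python
# Illustrative predicate for the RAISE Act's "frontier model" definition.
# Thresholds are taken from the bill text; names are assumptions.

FLOP_THRESHOLD = 10**26                   # computational operations (prong 1)
COMPUTE_COST_THRESHOLD = 100_000_000      # USD, directly trained models
DISTILLATION_COST_THRESHOLD = 5_000_000   # USD, models distilled from a frontier model

def is_frontier_model(training_ops, compute_cost_usd,
                      distilled_from_frontier=False):
    """Apply the two prongs: (1) trained above both the operations and cost
    thresholds, or (2) distilled from a frontier model at a cost over $5M."""
    if distilled_from_frontier:
        return compute_cost_usd > DISTILLATION_COST_THRESHOLD
    return (training_ops > FLOP_THRESHOLD
            and compute_cost_usd > COMPUTE_COST_THRESHOLD)

# Directly trained model over both thresholds -> covered.
print(is_frontier_model(training_ops=2 * 10**26,
                        compute_cost_usd=150_000_000))  # True
# Distilled model costing $6M -> covered under prong 2.
print(is_frontier_model(training_ops=10**24, compute_cost_usd=6_000_000,
                        distilled_from_frontier=True))  # True
```

Note that the distillation prong deliberately uses a much lower cost threshold, so cheaper derivatives of a frontier model do not escape coverage.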
The RAISE Act imposes the following obligations and restrictions on large developers:
- Prohibition on Frontier Models that Create Unreasonable Risk of Critical Harm: The RAISE Act prohibits large developers from deploying a frontier model if doing so would create an unreasonable risk of “critical harm.”
- “Critical harm” is defined as the death or serious injury of 100 or more people, or at least $1 billion in damage to rights in money or property, caused or materially enabled by a large developer’s use, storage, or release of a frontier model through (1) the creation or use of a chemical, biological, radiological or nuclear weapon; or (2) an AI model engaging in conduct that (i) acts with no meaningful human intervention and (ii) would, if committed by a human, constitute a crime under the New York Penal Code that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
- Pre-Deployment Documentation and Disclosures: Before deploying a frontier model, large developers must:
- (1) implement a written safety and security protocol;
- (2) retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, for as long as the frontier model is deployed plus five years;
- (3) conspicuously publish a redacted copy of the safety and security protocol and provide a copy of such redacted protocol to the New York Attorney General (“AG”) and the Division of Homeland Security and Emergency Services (“DHS”) (as well as grant the AG access to the unredacted protocol upon request);
- (4) record and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure; and
- (5) implement appropriate safeguards to prevent unreasonable risk of critical harm posed by the frontier model.
- Safety and Security Protocol Annual Review: A large developer must conduct an annual review of its safety and security protocol to account for any changes to the capabilities of its frontier models and to industry best practices, and make any necessary modifications to the protocol. For material modifications, the large developer must conspicuously publish a copy of the updated protocol with appropriate redactions (as described above).
- Reporting Safety Incidents: A large developer must disclose each safety incident affecting a frontier model to the AG and DHS within 72 hours of the large developer learning of the safety incident or facts sufficient to establish a reasonable belief that a safety incident occurred.
- “Safety incident” is defined as a known incidence of critical harm or one of the following incidents that provides demonstrable evidence of an increased risk of critical harm: (1) a frontier model autonomously engaging in behavior other than at the request of a user; (2) theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a frontier model; (3) the critical failure of any technical or administrative controls, including controls limiting the ability to modify a frontier model; or (4) unauthorized use of a frontier model. The disclosure must include (1) the date of the safety incident; (2) the reasons the incident qualifies as a safety incident; and (3) a short and plain statement describing the safety incident.
If enacted, the RAISE Act would take effect 90 days after being signed into law.
Pasco schools have a new AI program. It may help personalize lessons.
When Lacoochee Elementary School resumes classes in August, principal Latoya Jordan wants teachers to focus more attention on each student’s individual academic needs.
She’s looking at artificial intelligence as a tool they can use to personalize lessons.
“I’m interested to see how it can help,” Jordan said.
Lacoochee is exploring whether to become part of the Pasco County school district’s new AI initiative being offered to 30 campuses in the fall. It’s a test run that two groups — Scholar Education and Khanmigo — have offered the district free of charge to see whether the schools find a longer-term fit for their classes.
Scholar, a state-funded startup that made its debut last year at Pepin Academy and Dayspring Academy, will go into selected elementary schools. Khanmigo, a national model recently highlighted on 60 Minutes, is set for use in some middle and high schools.
“Schools ultimately will decide how they want to use it,” said Monica Ilse, deputy superintendent for academics. “I want to get feedback from teachers and leaders for the future.”
Ilse said she expected the programs might free teachers from some of the more mundane aspects of their jobs, so they can pay closer attention to their students. A recent Gallup poll found teachers who regularly use AI said it saves them about six hours of work weekly, in areas such as writing quizzes and completing paperwork.
Marlee Strawn, cofounder of Scholar Education, introduced her system to the principals of 19 schools during a June 30 video call. The model is tied to Florida’s academic standards, Strawn said, and includes dozens of lessons that teachers can use.
It also allows teachers to craft their own assignments, tapping into the growing body of material being uploaded. The more specific the request, the more fine-tuned the exercises can be. If a student has a strong interest in baseball or ballet, for instance, the AI programming can help develop standards-based tasks on those subjects, she explained.
Perhaps most useful, Strawn told the principals, is the system’s ability to support teachers as they analyze student performance data. It identifies such things as the types of questions students asked and the items they struggled with, and can make suggestions about how to respond.
“The data analytics has been the most helpful for our teachers so far,” she said.
She stressed that Scholar Education protects student data privacy, a common concern among parents and educators, noting the system got a top rating from Common Sense.
School board member Jessica Wright brought up criticisms that AI has proven notoriously error-prone in math.
Strawn said the system has proven helpful when teachers seek to provide real-life examples for math concepts. She did not delve into details about the reliability of AI in calculations and formulas.
Lacoochee principal Jordan wanted to know how well the AI system would interface with other technologies, such as iReady, that schools already use.
“If it works with some of our current systems, that’s an easier way to ease into it, so for teachers it doesn’t become one more thing that you have to do,” Jordan said.
Strawn said the automated bot is a supplement that teachers can integrate with data from other tools to help them identify classroom needs and create the types of differentiated instruction that Jordan and others are looking for.
The middle and high school model, Khanmigo, will focus more on student tutoring, Ilse wrote in an email to principals. It’s designed to “guide students to a deeper understanding of the content and skills mastery,” she explained in the email. As with Scholar, teachers can monitor students’ interactions and step in with one-on-one support as needed, in addition to developing lesson plans and standards-aligned quizzes.
Superintendent John Legg said teachers and schools would not be required to use AI. Legg said he simply wanted to provide options that might help teachers in their jobs. After a year, the district will evaluate whether to continue, most likely with paid services.
While an administrator at Dayspring Academy before his election, Legg wrote a letter of support for Scholar Education’s bid for a $1 million state startup grant, and he also received campaign contributions from some of the group’s leaders. He said he had no personal stake in the organization and was backing a project that might improve education, just as he previously supported Algebra Nation, the University of Florida’s online math tutoring program launched in 2013.
Microsoft Launches $4B AI Initiative for Education
Microsoft has unveiled a monumental initiative to reshape the landscape of education through artificial intelligence, pledging a staggering $4 billion over the next five years to integrate AI tools and resources into schools, colleges, and nonprofit organizations.
This ambitious commitment, announced on July 9, 2025, aims to equip educators and students with cutting-edge technology, including cash grants, AI software, and cloud computing services, positioning Microsoft at the forefront of the digital transformation in education.
The scope of this investment is not merely financial but strategic, as the tech giant seeks to democratize access to AI, ensuring that institutions of all sizes—from underfunded public schools to sprawling university systems—can harness these tools to enhance learning. According to The New York Times, Microsoft’s initiative is designed to address the growing demand for digital literacy in an era where AI is becoming integral to nearly every industry.
A Vision for the Future of Learning
Details of the plan reveal a focus on practical implementation, with resources tailored to support curriculum development, teacher training, and student engagement through AI-driven platforms like Microsoft’s Copilot chatbot. The company envisions personalized learning experiences where AI can adapt to individual student needs, offering real-time feedback and tailored educational content.
Beyond software, Microsoft is committing to infrastructure support, providing computing services that many educational institutions lack the budget to acquire independently. This move could bridge significant gaps in access to technology, particularly for community colleges and technical schools that serve diverse, often underserved populations, as highlighted by The New York Times.
Collaboration and Scale of Impact
Microsoft’s announcement comes at a time when the integration of AI in education is both a promise and a challenge, with concerns about ethics, data privacy, and over-reliance on technology looming large. Yet, the company appears poised to address these issues through partnerships with educational bodies and nonprofits, ensuring that the rollout of these tools is accompanied by robust guidelines and support systems.
The initiative also aligns with broader industry trends, as tech giants increasingly invest in education to cultivate future talent and expand their influence. With over $13 billion already invested in OpenAI, Microsoft’s additional $4 billion for education signals a long-term bet on AI as a transformative force, not just in tech but in society at large, per reporting from The New York Times.
Challenges and Opportunities Ahead
While the potential benefits are immense, industry insiders note that the success of this initiative will hinge on execution—ensuring that teachers are adequately trained and that AI tools do not exacerbate existing inequalities in education. There is also the question of balancing innovation with oversight, as unchecked AI use in classrooms could raise ethical dilemmas.
Nevertheless, Microsoft’s bold step could set a precedent for how technology companies engage with public goods like education. As the world watches this $4 billion experiment unfold, the outcomes could redefine how we teach, learn, and prepare for a future dominated by artificial intelligence, with insights drawn from The New York Times underscoring the scale of this transformative endeavor.