
Education

Multi-stakeholder perspective on responsible artificial intelligence and acceptability in education

Education stands as a cornerstone of society, nurturing the minds that will ultimately shape our future1. As we advance into the twenty-first century, exponentially developing technologies and the convergence of knowledge across disciplines are set to have a significant influence on various aspects of life2, with education a crucial element that is both disrupted by and critical to progress3. The rise of artificial intelligence (AI), notably generative AI and generative pre-trained transformers4 such as ChatGPT, with its new capabilities to generalise, summarise, and provide human-like dialogue across almost every discipline, is set to disrupt the education sector from K-12 through to lifelong learning by challenging traditional systems and pedagogical approaches5,6.

Artificial intelligence can be defined as the simulation of human intelligence and its processes by machines, especially computer systems, which encompasses learning (the acquisition of information and rules for using information), reasoning (using rules to reach approximate or definite conclusions), and flexible adaptation7,8,9. In education, AI, or AIED, aims to “make computationally precise and explicit forms of educational, psychological and social knowledge which are often left implicit”10. Therefore, the promise of AI to revolutionise education is predicated on its ability to provide adaptive and personalised learning experiences, thereby recognising and nurturing the unique cognitive capabilities of each student11. Furthermore, integrating AI into pedagogical approaches and practice presents unparalleled opportunities for efficiency, global reach, and the potential for the democratisation of education unattainable by traditional approaches.

AIED encompasses a broad spectrum of applications, from adaptive learning platforms that curate customised content to fit individual learning styles and paces12 to AI-driven analytics tools that forecast student performance and provide educators with actionable insights13. Moreover, recent developments in AIED have expanded the educational toolkit to include chatbots for student support, natural language processing for language learning, and machine learning for automating administrative tasks, allowing educators to focus more or exclusively on teaching and mentoring14. These tools have recently converged into multipurpose, generative pre-trained transformers (GPTs). These GPTs are large language models (LLMs) utilizing transformers to combine large language data sets and immense computing power to create an intelligent model that, after training, can generate complex, advanced, human-level output15 in the form of text, images, voice, and video. These models are capable of multi-round human-computer dialogues, continuously responding with novel output each time users input a new prompt due to having been trained with data from the available corpus of human knowledge, ranging from the physical and natural sciences through medicine to psychiatry.

This convergence highlights that a step change has occurred in the capabilities of AI to act not only as a facilitator of educational content but also as a dynamic tool with agentic properties capable of interacting with stakeholders at all levels of the educational ecosystem, enhancing and potentially disrupting the traditional pedagogical process. Recently, the majority of the conversation in the literature concerning AIED has focused on cheating and plagiarism16,17,18, with some calls to examine the ethics of AI19. This focus falls short of addressing the multidimensional, multi-stakeholder nature of AI-related issues in education. It fails to consider that AI is already here, accessible, and proliferating. It is this accessibility and proliferation that motivates the research presented in this manuscript. The release of generative AI globally and its application within education raise significant ethical concerns regarding data privacy, AI agency, transparency, explainability, and additional psychosocial factors, such as confidence and trust, as well as the acceptance and equitable deployment of the technology in the classroom20.

As education touches upon all members and aspects of society, we therefore seek to investigate and understand the level of acceptability of AI within education for all stakeholders: students, teachers, parents, school staff, and principals. Using factors derived from the explainable AI literature21 and the UNESCO framework for AI in education22, we present research that investigates the role of agency, privacy, explainability, and transparency in shaping the perceptions of global utility (GU), individual usefulness (IU), confidence, justice, and risk toward AI and the eventual acceptance of AI and intention to use (ITU) in the classroom. These factors were chosen as the focus for this study based on feedback from focus groups that identified our four independent variables as the most prominent factors influencing AI acceptability, aligning with prior IS studies21 that have demonstrated their central role in AI adoption decisions. Additionally, these four variables directly influence other AI-related variables, such as fairness (conceptualised in our study as confidence), suggesting a mediating role in shaping intentions to use AI.

In an educational setting, the deployment of AI has the potential to redistribute agency over decision-making between human actors (teachers and students) and algorithmic systems or autonomous agents. As AI systems come to assume roles traditionally reserved for educators, the negotiation of autonomy between educator, student, and this new third party becomes a complex balancing act in many situations, such as personalising learning pathways, curating content, and even evaluating student performance23,24.

Educational professionals face a paradigm shift where the agency afforded to AI systems must be weighed against preserving the educators’ pedagogical authority and expertise25. However, this is predicated on human educators providing additional needs such as guidance, motivation, facilitation, and emotional investment, which may not hold as AI technology develops26. That is not to say that AI will supplant the educator in the short term; rather, it highlights the need to calibrate AI’s role within the pedagogical process carefully.

Student agency, defined as the individual’s ability to act independently and make free choices27, can be compromised or enhanced by AI. While AI can personalise learning experiences, adaptively responding to student needs, thus promoting agency28, it can conversely reduce student agency through over-reliance, whereby AI-generated information may diminish students’ critical thinking and undermine the motivation toward self-regulated learning, leading to a dependency29.

Moreover, in educational settings, the degree of agency afforded to AI systems, i.e., its autonomy and decision-making capability, raises significant ethical considerations at all stakeholder levels. A high degree of AI agency risks producing “automation complacency”30, where stakeholders within the education ecosystem, from parents to teachers, uncritically accept AI guidance due to overestimating its capabilities, whereas a low degree of agency essentially hamstrings the capabilities of AI and undermines the reason for its application in education. Therefore, ensuring that AI systems are designed and implemented to support and enhance human agency through human-centred alignment and design, rather than replacing it, requires thorough attention to the design and deployment of these technologies31.

In conclusion, educational institutions must navigate the complex dynamics of assigned agency when integrating AI into pedagogical frameworks. This will require careful consideration of the balance between AI autonomy and human control to prevent the erosion of stakeholders’ agency at all levels of the education ecosystem and, thus, increase confidence and trust in AI as a tool for education.

Establishing confidence in AI systems is multifaceted, encompassing the ethical aspects of the system, the reliability of AI performance, the validity of its assessments, and the robustness of data-driven decision-making processes32,33. Thus, confidence in AI systems within educational contexts centres on their capacity to operate reliably and contribute meaningfully to educational outcomes.

Building confidence in AI systems is directly linked to the consistency of their performance across diverse pedagogical scenarios34. Consistency and reliability are judged by the AI system’s ability to function without frequent errors and sustain its performance over time35. Thus, inconsistencies in AI performance, such as system downtime or erratic behaviour, may alter perceptions of utility and significantly decrease user confidence.

AI systems are increasingly employed to grade assignments and provide feedback, which are activities historically under the supervision of educators. Confidence in these systems hinges on their ability to deliver feedback that is precise, accurate, and contextually appropriate36. The danger of misjudgment by AI, particularly in subjective assessment areas, can compromise its credibility37, increasing risk perceptions for stakeholders such as parents and teachers and directly affecting learners’ perceptions of how fair and just AI systems are.

AI systems and the foundation models they are built upon are trained over immense curated datasets to drive their capabilities38. The provenance of these data, the views of those who curate the subsequent training data, and how that data is then used within the model (that creates the AI) is of critical importance to ensure bias does not emerge when the model is applied19,39. To build trust in AI, stakeholders at all levels must have confidence in the integrity of the data used to create an AI, the correctness of analyses performed, and any decisions proposed or taken40. Moreover, the confidence-trust relationship in AI-driven decisions requires transparency about data sources, collection methods, and explainable analytical algorithms41.

Therefore, to increase and maintain stakeholder confidence and build trust in AIED, these systems must exhibit reliability, assessment accuracy, and transparent and explainable decision-making. Ensuring these attributes requires robust design, testing, and ongoing monitoring of AI systems, the models they are built upon, and the data used to train them.

Trust in AI is essential to its acceptance and utilisation at all stakeholder levels within education. Confidence and trust are inextricably linked42, representing a feedback loop wherein confidence builds towards trust and trust instils confidence, and the reverse holds that a lack of confidence fails to build trust. Thus, a loss of trust decreases confidence. Trust in AI is engendered by many factors, including but not limited to the transparency of AI processes, the alignment of AI functions with educational ethics, including risk and justice, the explainability of AI decision-making, privacy and the protection of student data, and evidence of AI’s effectiveness in improving learning outcomes33,43,44.

Standing as a proxy for AI, studies of trust toward automation45,46 have identified three main factors that influence trust: performance (how automation performs), process (how it accomplishes its objective), and purpose (why the automation was built originally). Accordingly, educators and students are more likely to trust AI if they can comprehend its decision-making processes and the rationale behind its recommendations or assessments47. Thus, if AI operates opaquely as a “black box”, it can be difficult to accept its recommendations, leading to concerns about its ethical alignment. Therefore, the dynamics of stakeholder trust in AI hinge on the assurance that the technology operates transparently and without bias, respects student diversity, and functions fairly and justly48.

Furthermore, privacy and security directly feed into the trust dynamic in that educational establishments are responsible for the data that AI stores and utilises to form its judgments. Tools for AIED are designed, in large part, to operate at scale, and a key component of scale is cloud computing, which involves the sharing of resources, both the technology itself and the data stored on it49. This resource sharing makes the boundary between personal and common data porous, and such data is often viewed as a resource that technology companies can use to train new AI models, or as a product50. Thus, while data breaches may erode trust in AIED in an immediate sense, far worse is the hidden assumption that all data is common. However, this issue can be addressed by stakeholders at various levels through ethical alignment negotiations, robust data privacy measures, security protocols, and policy support to enforce them22,51.

Accountability is another important element of the AI trust dynamic, and one inextricably linked to agency and the problem of control. It refers to the mechanisms in place to hold system developers, the institutions that deploy AI, and those that use AI responsible for the functioning and outcomes of AI systems33. The issue of who is responsible for AI’s decisions or mistakes is an open question heavily dependent on deep ethical analysis. However, it is of critical and immediate importance, particularly in education, where the stakes include the quality of teaching and learning, the fairness of assessments, and the well-being of students.

In conclusion, trust in AI is an umbrella construct that relies on many factors interwoven with ethical concerns. The interdependent relationship between confidence and trust suggests that the growth of one promotes the enhancement of the other. At the same time, their decline, through errors in performance, process, or purpose, leads to mutual erosion. The interplay between confidence and trust points towards explainability and transparency as potential moderating factors in the trust equation.

The contribution of explainability and transparency towards trust in AI systems is significant, particularly within the education sector; they enable stakeholders to understand and rationalise the mechanisms that drive AI decisions52. Comprehensibility is essential for educators and students not only to follow but also to critically assess and accept the judgments made by AI systems53,54. Transparency gives users visibility of AI processes, which opens AI actions to scrutiny and validation55.

Calibrating the right balance between explainability and transparency in AI systems is crucial in education, where the rationale behind decisions, such as student assessments and learning path recommendations, must be clear to ensure fairness and accountability32,56. The technology is perceived to be more trustworthy when AI systems articulate, in an accessible manner, their reasoning for decisions and the underlying data from which they are made57. Furthermore, transparency allows educators to align AI-driven interventions with pedagogical objectives, fostering an environment where AI acts as a supportive tool rather than an inscrutable authority58,59,60.

Moreover, the explainability and transparency of AI algorithms are not simply a technical requirement but also a legal and ethical one, depending on interpretation, particularly in light of regulations such as the General Data Protection Regulation (GDPR), which posits a “right to explanation” for decisions made by automated systems61,62,63. Thus, educational institutions are obligated to deploy AI systems that perform tasks effectively and provide transparent insights into their decision-making processes64,65.

In sum, explainability and transparency are critical co-factors in the trust dynamic, where trust appears to be the most significant factor toward the acceptance and effective use of AI in education. Systems that employ these methods enable stakeholders to understand, interrogate, and trust AI technologies, ensuring their responsible and ethical use in educational contexts.

When taken together, this discussion points to the acceptance of AI in education as a multifaceted construct, hinging on a harmonious yet precarious balance of agency, confidence, and trust underpinned by the twin pillars of explainability and transparency. Agency involving the balance of autonomy between AI, educators, and students requires careful calibration between AI autonomy and educator control to preserve pedagogical integrity and student agency, which is vital for independent decision-making and critical thinking. Accountability, closely tied to agency, strengthens trust by ensuring that AI systems are answerable for their decisions and outcomes, reducing risk perceptions. Trust in AI and its co-factor confidence are fundamental prerequisites for AI acceptance in educational environments. The foundation of this trust is built upon factors such as AI’s performance, the clarity of its processes, its alignment with educational ethics, and the security and privacy of data. Explainability and transparency are critical in strengthening the trust dynamic. They provide stakeholders with insights into AI decision-making processes, enabling understanding and critical assessment of AI-generated outcomes and helping to improve perceptions of how just and fair these systems are.

However, is trust a one-size-fits-all solution to the acceptance of AI within education, or is it more nuanced, where different AI applications require different levels of each factor on a case-by-case basis and for different stakeholders? This research seeks to determine to what extent each factor contributes to the acceptance and intention to use AI in education across four use cases from a multi-stakeholder perspective.

Drawing from this broad interdisciplinary foundation that integrates educational theory, ethics, and human-computer interaction, this study investigates the acceptability of artificial intelligence in education through a multi-stakeholder lens, including students, teachers, and parents. This study employs an experimental vignette approach, incorporating insights from focus groups, expert opinion and literature review to develop four ecologically valid scenarios of AI use in education. Each scenario manipulates four independent variables—agency, transparency, explainability, and privacy—to assess their effects on perceived global utility, individual usefulness, justice, confidence, risk, and intention to use. The vignettes were verified through multiple manipulation checks, and the effects of independent variables were assessed using previously validated psychometric instruments administered via an online survey. Data were analysed using a simple mediation model to determine the direct and indirect effects between the variables under consideration and stakeholder intention to use AI.
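The simple mediation model described above can be sketched numerically. The following is a minimal illustration, not the study's actual analysis or data: it uses ordinary least squares to estimate the classic mediation paths (X → M, M → Y controlling for X) and decomposes the total effect into direct and indirect components. All variable names and coefficients are hypothetical, chosen only to mirror the study's constructs (e.g., transparency as a manipulated factor, confidence as a mediator, intention to use as the outcome).

```python
import numpy as np

def ols(X, y):
    """Return OLS coefficients for y ~ X (X must include an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def simple_mediation(x, m, y):
    """Estimate the paths of a simple mediation model X -> M -> Y.

    Returns (a, b, direct, indirect, total):
      a        : effect of X on the mediator M
      b        : effect of M on Y, controlling for X
      direct   : effect of X on Y, controlling for M (c')
      indirect : a * b
      total    : effect of X on Y ignoring M (c); for OLS, c = c' + a*b
    """
    ones = np.ones(len(x))
    a = ols(np.column_stack([ones, x]), m)[1]        # M ~ X
    coefs = ols(np.column_stack([ones, x, m]), y)    # Y ~ X + M
    direct, b = coefs[1], coefs[2]
    total = ols(np.column_stack([ones, x]), y)[1]    # Y ~ X
    return a, b, direct, a * b, total

# Synthetic illustration: transparency (X) raises confidence (M), which in
# turn raises intention to use (Y). The data below is fabricated for the demo.
rng = np.random.default_rng(0)
n = 2000
transparency = rng.normal(size=n)
confidence = 0.6 * transparency + rng.normal(scale=0.5, size=n)
intention = 0.3 * transparency + 0.7 * confidence + rng.normal(scale=0.5, size=n)

a, b, direct, indirect, total = simple_mediation(transparency, confidence, intention)
print(f"a={a:.2f} b={b:.2f} direct={direct:.2f} indirect={indirect:.2f} total={total:.2f}")
```

A partial mediation pattern appears when both the direct and indirect effects are non-negligible, as here; in published analyses the indirect effect is typically tested with bootstrapped confidence intervals rather than the point estimate alone.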





Anthropic announces University of San Francisco School of Law will fully integrate Claude

Anthropic, the company behind ChatGPT competitor Claude, is joining the industry-wide charge into education, as the tech company announces new university and classroom partnerships that will put its educational chatbot into the hands of students of all ages.

Announced today, Claude for Education will be entering more classrooms and boosting its peer-reviewed knowledge bank, as it integrates with teaching and learning software Canvas, textbook and courseware company Wiley, and video learning tool Panopto.

“We’re building toward a future where students can reference readings, lecture recordings, visualizations, and textbook content directly within their conversations,” the company explained.

Students and educators can connect Wiley and Panopto materials to Claude’s database using pre-built MCP servers, says Anthropic, and access Claude directly in the Canvas coursework platform. In summary: students can use Claude like a personal study partner.


And Claude is coming to higher education, too. The University of San Francisco School of Law will become the first fully AI-integrated law school with new Claude AI-enabled learning — as the legal field contentiously addresses the introduction of generative AI. Anthropic is also expanding its student ambassador program and network of Claude Builder Clubs across campuses, launching its first free AI fluency course.

“We’re excited to introduce students to the practical use of LLMs in litigation,” said University of San Francisco Dean Johanna Kalb. “One way we’re doing this is through our Evidence course, where this fall, students will gain direct experience applying LLMs to analyze claims and defenses, map evidence to elements of each cause of action, identify evidentiary gaps to inform discovery, and develop strategies for admission and exclusion of evidence at trial.”

Earlier this week, Anthropic announced it was joining a coalition of AI partners who were forming the new National Academy for AI Instruction, led by the American Federation of Teachers (AFT). Anthropic’s $500,000 investment in the project will support a brick-and-mortar facility and later nationwide expansion of a free, educator-focused AI training curriculum.

“The stakes couldn’t be higher: while the opportunity to accelerate educational progress is unprecedented, missteps could deepen existing divides and cause lasting harm,” Anthropic said. “That’s why we’re committed to navigating this transformation responsibly, working hand-in-hand with our partners to build an educational future that truly serves everyone.”









Speech therapy association proposes eliminating ‘DEI’ in its standards



Scores of speech therapists across the country erupted last month when their leading professional association said it was considering dropping language calling for diversity, equity and inclusion and “cultural competence” in their certification standards. Those values could be replaced in some standards with a much more amorphous emphasis on “person-centered care.” 

“The decision to propose these modifications was not made lightly,” wrote officials of the American Speech-Language-Hearing Association (ASHA) in a June letter to members. They noted that due to recent executive orders related to DEI, even terminology that “is lawfully applied and considered essential for clinical practice … could put ASHA’s certification programs at risk.” 

Yet in the eyes of experts and some speech pathologists, the change would further imperil getting quality help to a group that’s long been grossly underserved: young children with speech delays who live in households where English is not the primary language spoken. 

“This is going to have long-term impacts on communities who already struggle to get services for their needs,” said Joshuaa Allison-Burbank, a speech language pathologist and Navajo member who works on the Navajo Nation in New Mexico where the tribal language is dominant in many homes.

Across the country, speech therapists have been in short supply for many years. Then, after the pandemic lockdown, the number of young children diagnosed annually with a speech delay more than doubled. Amid that broad crisis in capacity, multilingual learners are among those most at risk of falling through the cracks. Less than 10 percent of speech therapists are bilingual.

A shift away from DEI and cultural competence — which involves understanding and trying to respond to differences in children’s language, culture and home environment — could have a devastating effect at a time when more of both are needed to reach and help multilingual learners, several experts and speech pathologists said. 

They told me about a few promising strategies for strengthening speech services for multilingual infants, toddlers and preschool-age children with speech delays — each of which involves a heavy reliance on DEI and cultural competence.

Embrace creative staffing. The Navajo Nation faces severe shortages of trained personnel to evaluate and work with young children with developmental delays, including speech. So in 2022, Allison-Burbank and his research team began providing training in speech evaluation and therapy to Native family coaches who are already working with families through a tribal home visiting program. The family coaches provide speech support until a more permanent solution can be found, said Allison-Burbank.

Home visiting programs are “an untapped resource for people like me who are trying to have a wider reach to identify these kids and get interim services going,” he said. (The existence of both the home visiting program and speech therapy are under serious threat because of federal cuts, including to Medicaid.) 

Use language tests that have been designed for multilingual populations. Decades ago, few if any of the exams used to diagnose speech delays had been “normed” — or pretested to establish expectations and benchmarks — on non-English-speaking populations.

For example, early childhood intervention programs in Texas were required several years ago to use a single tool that relied on English norms to diagnose Spanish-speaking children, said Ellen Kester, the founder and president of Bilinguistics Speech and Language Services in Austin, which provides both direct services to families and training to school districts. “We saw a rise in diagnosis of very young (Spanish-speaking) kids,” she said. That isn’t because all of the kids had speech delays, but due to fundamental differences between the two languages that were not reflected in the test’s design and scoring. (In Spanish, for instance, the ‘z’ sound is pronounced like an English ‘s.’)

There are now more screeners and tools than ever normed on multilingual, diverse populations; states, agencies and school districts should be selective, and informed, in seeking them out and pushing for continued refinement.

Expand training — formal and self-initiated — for speech therapists in the best ways to work with diverse populations. In the long-term, the best way to help more bilingual children is to hire more bilingual speech therapists through robust DEI efforts. But in the short term, speech therapists can’t rely solely on interpreters — if one is even available — to connect with multilingual children.

That means using resources that break down the major differences in structure, pronunciation and usage between English and the language spoken by the family, said Kester. “As therapists, we need to know the patterns of the languages and what’s to be expected and what’s not to be expected,” Kester said.

It’s also crucial that therapists understand how cultural norms may vary, especially as they coach parents and caregivers in how best to support their kids, said Katharine Zuckerman, professor and associate division head of general pediatrics at Oregon Health & Science University. 

“This idea that parents sit on the floor and play with the kid and teach them how to talk is a very American cultural idea,” she said. “In many communities, it doesn’t work quite that way.”

In other words, to help the child, therapists have to embrace an idea that’s suddenly under siege: cultural competence.

Quick take: Relevant research

In recent years, several studies have homed in on how state early intervention systems, which serve children with developmental delays ages birth through 3, shortchange multilingual children with speech challenges. One study based out of Oregon, and co-authored by Zuckerman, found that speech diagnoses for Spanish-speaking children were often less specific than for English speakers. Instead of pinpointing a particular challenge, the Spanish speakers tended to get the general “language delay” designation. That made it harder to connect families to the most tailored and beneficial therapies. 

A second study found that speech pathologists routinely miss critical steps when evaluating multilingual children for early intervention. That can lead to overdiagnosis, underdiagnosis and inappropriate help. “These findings point to the critical need for increased preparation at preprofessional levels and strong advocacy … to ensure evidence-based EI assessments and family-centered, culturally responsive intervention for children from all backgrounds,” the authors concluded. 

Carr is a fellow at New America, focused on reporting on early childhood issues. 

Contact the editor of this story, Christina Samuels, at 212-678-3635, via Signal at cas.37 or samuels@hechingerreport.org.

This story about the speech therapists association was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter.







International students react to QS rankings as competition intensifies



Seventeen UK universities now rank in the global top 100 of the QS World University Rankings 2026, with Sheffield and Nottingham rejoining the list. Still, 61% of UK institutions saw their place in the rankings drop, amid rising global competition.

Overall, 24 UK institutions saw their positions improve, 11 remained stable, and 54 – accounting for 61% – dropped in the rankings. This pattern reflects a wider trend, where institutions in other countries are advancing more rapidly. For example, seven of Ireland’s eight universities climbed in the rankings, along with nine of 13 in the Netherlands and six of seven in Hong Kong.

Notably, Imperial College London holds steady as the world’s second-best university, trailing only the Massachusetts Institute of Technology in the US. Meanwhile, Oxford, Cambridge, and UCL have kept their places in the global top 10 – though Oxford and Cambridge slipped one place each due to Stanford’s climb to third.

However, the UK remains the second most represented country in the rankings, with 90 institutions on the list, only behind the United States with 192. International students studying at prominent UK universities spoke to The PIE News about how they perceived their university’s place on the list – with both expressing positivity about their institution’s ranking.

The University of Edinburgh experienced a modest drop, falling from 27th to 34th place globally. Seeing the drop in position, Sean Xia, a PhD student at the University of Edinburgh, commented that they weren’t too disappointed to see the university’s QS ranking drop in recent years. “I am still proud seeing us staying in the top 50 universities in the world and it shows effort from all of us, especially when the funding situation is getting much more difficult than the past years,” they told The PIE.

“I believe the ranking will eventually be fluctuated back in the future as long as we keep the research quality world-class.”


Two UK universities – Sheffield and Nottingham – made a noteworthy return to the top 100, now ranked 92nd and 97th respectively. The strongest gain came from Oxford Brookes University, which jumped 42 places to 374th, marking the biggest single improvement for a UK institution this year. Other major climbers included Strathclyde, Aston, Surrey, Birkbeck, and Bradford, each rising by at least 20 places.

20 best-performing UK universities in 2026 World University Rankings
UK Rank | 2026 Rank | 2025 Rank | Institution
1 | 2 | 2 | Imperial College London
2 | 4 | 3 | University of Oxford
3 | 6 | 5 | University of Cambridge
4 | 9 | 9 | UCL (University College London)
5 | 31 | =40 | King’s College London (KCL)
6 | 34 | 27 | University of Edinburgh
7 | 35 | =34 | The University of Manchester
8 | 51 | 54 | University of Bristol
9 | 56 | =50 | London School of Economics and Political Science (LSE)
10 | 74 | =69 | The University of Warwick
11 | 76 | =80 | University of Birmingham
12 | 79 | 78 | University of Glasgow
13 | 86 | =82 | University of Leeds
14 | 87 | =80 | University of Southampton
15 | 92 | =105 | The University of Sheffield
16 | =94 | =89 | Durham University
17 | 97 | 108 | The University of Nottingham
18 | =110 | =120 | Queen Mary University of London (QMUL)
19 | 113 | 104 | University of St Andrews
20 | =132 | =150 | University of Bath
© QS Quacquarelli Symonds 2004-2025, TopUniversities.com

The University of Liverpool stood out among Russell Group members, climbing from 165th in 2025 to joint 147th in 2026, making it the most improved among the group.

Derek Zhou, a PhD candidate at the University of Liverpool, proudly stated: “As a PhD student who also got my MSc degree at the University of Liverpool, I am glad to see our university can rank among top 150. We are proud to say we are stepping exactly towards our aim which is to be the top 100 university before 2031.”

“I am also happy to see our academic research getting a higher rank than before. This means all of our research is truly contributing to the world and we are heard by the world,” he added.

Most improved non-Russell Group universities in 2026
2026 Rank | 2025 Rank | Institution | UK Rank
374 | =416 | Oxford Brookes University | 38
=251 | =281 | University of Strathclyde | 30
=395 | =423 | Aston University | 42
=262 | =285 | University of Surrey | =31
=388 | =408 | Birkbeck College, University of London | 41
=511 | =531 | University of Bradford | =47
=132 | =150 | University of Bath | 20
=461 | =477 | Royal Holloway University of London | 46
=456 | =472 | University of Essex | 45
292 | =298 | Swansea University | 35
=613 | 661-670 | University of Plymouth | 57
721-730 | 741-750 | UWE Bristol (University of the West of England) | 62
801-850 | 851-900 | University of Lincoln | =64

Commenting on the rankings, Jessica Turner, CEO of QS, noted that the UK’s place as a coveted study destination could be at risk.

“While the analysis outlines detailed performances on a wide range of metrics for each institution, the picture for the wider country as a whole is more worrying,” she said.

She added: “The UK government is seeking to slash capital funding in a higher education system that has already sustained financial pressure, introduce an international student levy and shorten the length of the Graduate Visa route to 18 months from two years.

“This could accumulate in a negative impact on the quality and breadth of higher education courses and research undertaken across the country. While the UK government has placed research and development as a key part of the recent spending review, universities across the country will need more support to ensure their stability going ahead.”

And she noted that competing study destinations around the world are pouring investment into higher education and research. This is contributing to a global shift of higher education power seen through the 2026 QS World University Rankings, as the traditional destinations – the US, UK, Australia, and Canada – are increasingly challenged by emerging study destinations across Europe, Asia, and the Middle East.

Among the big four, the United States remains the strongest performer in the QS rankings: of its 192 ranked institutions, 42% improved their position. In contrast, the UK maintained its four universities in the top 20, while Australia lost one, signalling a subtle rebalancing in prestige.

Both the US and UK also saw one university each drop out of the top 50, while South Korea added one to the elite group, reflecting broader diversification in academic excellence.

Emerging players are quickly gaining ground. China’s top institutions continue their upward march: Tsinghua University climbed to 17th, and Fudan University rose nine places to 30th. Meanwhile, Saudi Arabia and Italy entered the top 100 for the first time, with King Fahd University at 67th and Politecnico di Milano at 98th, respectively.

The momentum extends beyond Asia. Ireland, Malaysia, the UAE, Germany, and New Zealand are among 26 countries where at least 50% of ranked institutions improved this year. Germany, notably, reversed a prior decline to see more universities rise than fall, while Hong Kong emerged as the world’s second most improved system, just behind Ireland.

“This retrospective data shows that policy and changes in higher education directly impact the rankings,” Turner added. “While emerging markets such as Hong Kong, Malaysia, and the UAE continue to improve, there is still a long way to go until they compete with the traditional study destinations of the UK, US, Australia and Canada.”
