AI Research

‘AI Learning Day’ spotlights smart campus and ecosystem co-creation

When artificial intelligence (AI) can help you retrieve literature, support your research, and even act as a “super assistant”, university education is undergoing a profound transformation.

On 9 September, XJTLU’s Centre for Knowledge and Information (CKI) hosted its third AI Learning Day, themed “AI-Empowered, Ecosystem-Co-created”. The event showcased the latest milestones of the University’s “Education + AI” strategy and offered in-depth discussions on the role of AI in higher education.

In her opening remarks, Professor Qiuling Chao, Vice President of XJTLU, said: “AI offers us an opportunity to rethink education, helping us create a learning environment that is fairer, more efficient and more personalised. I hope today’s event will inspire everyone to explore how AI technologies can be applied in your own practice.”

Professor Qiuling Chao

In his keynote speech, Professor Youmin Xi, Executive President of XJTLU, elaborated on the University’s vision for future universities. He stressed that future universities would evolve into human-AI symbiotic ecosystems, where learning would be centred on project-based co-creation and human-AI collaboration. The role of educators, he noted, would shift from transmitters of knowledge to mentors for both learning and life.

Professor Youmin Xi

At the event, Professor Xi’s digital twin, created by the XJTLU Virtual Engineering Centre in collaboration with the team led by Qilei Sun from the Academy of Artificial Intelligence, delivered Teachers’ Day greetings to all staff.

(Teachers’ Day message from President Xi’s digital twin)

“Education + AI” in diverse scenarios

This event also highlighted four case studies from different areas of the University. Dr Ling Xia from the Global Cultures and Languages Hub suggested that in the AI era, curricula should undergo de-skilling (assigning repetitive tasks to AI), re-skilling, and up-skilling, thereby enabling students to focus on in-depth learning in critical thinking and research methodologies.

Dr Xiangyun Lu from International Business School Suzhou (IBSS) demonstrated how AI teaching assistants and the University’s Junmou AI platform can offer students a customised and highly interactive learning experience, particularly for those facing challenges such as information overload and language barriers.

Dr Juan Li from the School of Science shared the concept of the “AI amplifier” for research. She explained that the “double amplifier” effect works in two stages: AI first amplifies students’ efficiency by automating tasks like literature searches and coding. These empowered students then become the second amplifier, freeing mentors from routine work so they can focus on high-level strategy. This human-AI partnership allows a small research team to achieve the output of a much larger one.

Jing Wang, Deputy Director of the XJTLU Learning Mall, showed how AI agents are already being used to support scheduling, meeting bookings, news updates and other administrative and learning tasks. She also announced that from this semester, all students would have access to the XIPU AI Agent platform.

Students and teachers are having a discussion at one of the booths

AI education system co-created by staff and students

The event’s AI interactive zone also drew significant attention from students and staff. From the Junmou AI platform to the E-Support chatbot, and from AI-assisted creative design to 3D printing, 10 exhibition booths demonstrated the integration of AI across campus life.

These innovative applications sparked lively discussions and thoughtful reflections among participants. In an interview, Thomas Durham from IBSS noted that, although he had rarely used AI before, the event was highly inspiring and motivated him to explore its use in both professional and personal life. He also shared his perspective on AI’s role in learning, stating: “My expectation for the future of AI in education is that it should help students think critically. My worry is that AI’s convenience and efficiency might make students’ understanding too superficial, since AI does much of the hard work for them. Hopefully, critical thinking will still be preserved.”

Year One student Zifei Xu was particularly inspired by the interdisciplinary collaboration on display at the event, remarking that it offered her a glimpse of a more holistic and future-focused education.

Dr Xin Bi, XJTLU’s Chief Officer of Data and Director of the CKI, noted that, supported by robust digital infrastructure such as the Junmou AI platform, more than 26,000 students and 2,400 staff are already using the University’s AI platforms. XJTLU’s digital transformation is advancing from informatisation and digitisation towards intelligentisation, with AI expected to empower teaching, research and administration, and to help staff and students leap from knowledge to wisdom.

Dr Xin Bi

“Looking ahead, we will continue to advance the deep integration of AI in education, research, administration and services, building a data-driven intelligent operations centre and fostering a sustainable AI learning ecosystem,” said Dr Xin Bi.

 

By Qinru Liu

Edited by Patricia Pieterse

Translated by Xiangyin Han




AI Research

Guardrails for Responsible AI

Clarivate explores how responsible AI guardrails and content filtering can support safe, ethical use of generative AI in academic research — without compromising scholarly freedom. As AI becomes embedded in research workflows, this blog outlines a suggested path to shaping industry standards for academic integrity, safety, and innovation.

Generative AI has opened new possibilities for academic research, enabling faster discovery, summarization, and synthesis of knowledge, as well as supporting scholarly discourse. Yet, as these tools become embedded in scholarly workflows, the sector faces a complex challenge: how do we balance responsible AI use and the prevention of harmful outputs with the need to preserve academic freedom and research integrity?

This is an industry-wide problem that affects every organization deploying Large Language Models (LLMs) in academic contexts. There is no simple solution, but there is a pressing need for collaboration across vendors, libraries, and researchers to address it.

There are several ways to address the problem technically; the two most important are guardrails and content filtering.

Guardrails

Guardrails are proactive mechanisms designed to prevent undesired behavior from the model. They are often implemented at a deeper level in the system architecture and can, for example, include instructions in an application's system prompt that steer the model away from risky topics or ensure the language is suitable for the application where it is used.

The goal of guardrails is to prevent the model from generating harmful or inappropriate content, or otherwise misbehaving, in the first place, with the caveat that what constitutes 'inappropriate' is highly subjective and often depends on cultural differences and context.
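As a concrete illustration, a prompt-level guardrail can be as simple as a system message prepended to every exchange before any user input is seen. The sketch below is hypothetical, not our actual implementation; the instruction text and the message format (modeled on common chat-completion APIs) are assumptions.

```python
# Minimal sketch of a prompt-level guardrail: a system message steers
# the model toward scholarly use before it sees the user's query.
# The instruction text below is illustrative, not a production prompt.

GUARDRAIL_INSTRUCTIONS = (
    "You are a research assistant for an academic audience. "
    "Ground every answer in the supplied sources. "
    "Decline requests for harmful, illegal, or non-scholarly content."
)

def build_messages(user_query: str, sources: list[str]) -> list[dict]:
    """Assemble a chat payload with the guardrail prompt first."""
    context = "\n\n".join(sources)
    return [
        {"role": "system", "content": GUARDRAIL_INSTRUCTIONS},
        {"role": "system", "content": f"Sources:\n{context}"},
        {"role": "user", "content": user_query},
    ]

payload = build_messages(
    "Summarise recent findings on reproducibility in psychology.",
    ["Open Science Collaboration (2015) ...", "Nosek et al. (2022) ..."],
)
print(len(payload))  # 3 messages: guardrail, sources, user query
```

Because the guardrail travels with every request, it applies uniformly regardless of what the user types, which is what makes it proactive rather than reactive.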

Guardrails are critical for security and compliance, but they can also contribute to over-blocking. For instance, defenses against prompt injection (where malicious instructions are hidden in user input) may reject queries that appear suspicious, even if they are legitimate academic questions. Guardrails can also block certain types of output (e.g., hate speech, self-harm advice) or exclude training data from the output. This tension between safety and openness is one of the hardest problems to solve.

The guardrails used in our products play a very significant role in shaping the model's output. For example, we carefully design the prompts that guide the LLM, instructing it to rely exclusively on scholarly sources through a Retrieval-Augmented Generation (RAG) architecture, or preventing the tools from answering non-scholarly questions such as "Which electric vehicle should I buy?" These techniques limit the products' reliance on the LLM's broader training data, significantly reducing the risk of problematic content reaching user results.
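One way a RAG architecture keeps a tool on scholarly ground is simply that consumer-advice queries retrieve nothing from a scholarly index, so the tool can refuse rather than fall back on the LLM's general training data. This is a toy sketch of that pattern under our own assumptions; a production system would also use an intent classifier rather than relying on empty retrieval alone.

```python
# Toy sketch of scoping a RAG tool to scholarly questions: answer only
# when retrieval found supporting passages, refuse otherwise. The
# empty-retrieval check is a stand-in for a real intent classifier.

def answer_from_sources(query: str, retrieved_passages: list[str]) -> str:
    """Refuse when no scholarly passages support the query."""
    if not retrieved_passages:
        return "I can only answer questions grounded in scholarly sources."
    # In production, the query and passages would be sent to the LLM here.
    return f"Answer drafted from {len(retrieved_passages)} retrieved sources."

# A consumer-advice query retrieves nothing from a scholarly index:
print(answer_from_sources("Which electric vehicle should I buy?", []))
```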

Content filtering

Content filtering is a reactive mechanism that evaluates both the application's input and the model-generated output to determine whether it should be shown to the user. It uses automated classification models to detect and block (or flag) unwanted or harmful content. Essentially, content filters are processes that can block content from reaching the LLM, as well as block the LLM's responses from being delivered. The goal of content filtering is to catch and block inappropriate content that might slip through the model's generation process.

However, content filtering is not a single switch; it is a multi-layered process designed to prevent harmful, illegal, or unsafe outputs. Here are the main steps in the pipeline where filtering occurs:

  • At the LLM level (e.g. GPT, Claude, Gemini, Llama, etc.)

Most modern LLM stacks include a provider-side safety layer that evaluates both the prompt (input) and the model’s draft answer (output) before the application ever sees it. It’s designed to reduce harmful or illegal uses (e.g., violence, self-harm, sexual exploitation, hateful conduct, or instructions to commit wrongdoing), but this same functionality can unintentionally suppress legitimate, research-relevant topics — particularly in history, politics, medicine, and social sciences.

  • At the LLM cloud provider level (e.g., Azure, AWS Bedrock, etc.)

Organizations, vendors and developers often use LLM APIs via cloud providers like Azure or Bedrock when they need to control where their data is processed, meet strict compliance and privacy requirements such as GDPR, and run everything within private network environments for added security.

These cloud providers implement baseline safety systems to block prompts or outputs that violate their acceptable use policies. These filters are often broad, covering sensitive topics such as violence, self-harm, or explicit content. While essential for safety, these filters can inadvertently block legitimate academic queries — such as research on war crimes or historical atrocities.

This can result in frustrating messages alerting users that the request failed – even when the underlying content is academically valid. At Clarivate, while we recognize these tools may be imperfect, we continue to believe they are essential to incorporate in our arsenal and enable us to balance the benefits with the risks when using this technology. Our commitment to building responsible AI remains steadfast as we continue to monitor and adapt our dynamic controls based on our learnings, feedback and cutting-edge research.

Finding the right safety level

When we first introduced our AI-powered tools in May 2024, the content filter settings we used were well suited to our initial needs. However, as adoption of these tools grew significantly, we found that the filters could be over-sensitive, with users sometimes encountering errors when exploring sensitive or controversial topics, even when the intent was clearly scholarly.

In response, we have adjusted our settings, and early results are promising: searches previously blocked (e.g., on genocide or civil rights history) now return results, while genuinely harmful queries (e.g., instructions for building weapons) remain blocked.

The central Clarivate Academic AI Platform provides a consistent framework for safety, governance, and content management across all our tools. This shared foundation ensures a uniform standard of responsible AI use. Because content filtering is applied at the model level, we validate any adjustments carefully across solutions, rolling them out gradually and testing against production-like data to maintain reliability and trust.

Our goal is to strike a better balance between responsible AI use and academic freedom.

Working together to balance safety and openness – a community effort

Researchers expect AI tools to support inquiry, not censor it. Yet every vendor using LLMs faces the same constraints: provider-level filters, regulatory requirements, and the ethical imperative to prevent harm.

There is no silver bullet. Overly strict filters undermine research integrity; overly permissive settings risk abuse. The only way forward is collaboration among vendors, libraries, and the academic community to define standards, share best practices, and advocate for provider-level flexibility that recognizes the unique needs of scholarly environments.

At Clarivate, we are committed to transparency and dialogue. We've made content filtering a key topic for our Academia AI Advisory Council and are actively engaging with customers to understand their priorities. But this conversation must extend beyond any single company. If we want AI to truly serve scholarship, we need to advance this conversation with academic AI in mind, balancing safety and openness within the unique context of scholarly discourse. With this goal, we are creating an Academic AI working group to help us navigate this and other challenges arising from this new technology. If you are interested in joining this group or know someone who might be, please contact us at academiaai@clarivate.com.

Discover Clarivate Academic AI solutions




AI Research

(Policy Address 2025) HK earmarks HK$3B for AI research and talent recruitment – The Standard (HK)


AI Research

Spatially-Aware Image Focus for Visual Reasoning


View a PDF of the paper titled SIFThinker: Spatially-Aware Image Focus for Visual Reasoning, by Zhangquan Chen and 6 other authors


Abstract: Current multimodal large language models (MLLMs) still face significant challenges in complex visual tasks (e.g., spatial understanding, fine-grained perception). Prior methods have tried to incorporate visual reasoning; however, they fail to leverage attention correction with spatial cues to iteratively refine their focus on prompt-relevant regions. In this paper, we introduce SIFThinker, a spatially-aware "think-with-images" framework that mimics human visual perception. Specifically, SIFThinker enables attention correction and image region focusing by interleaving depth-enhanced bounding boxes and natural language. Our contributions are twofold: first, we introduce a reverse-expansion-forward-inference strategy that facilitates the generation of interleaved image-text chains of thought for process-level supervision, which in turn leads to the construction of the SIF-50K dataset. Second, we propose GRPO-SIF, a reinforced training paradigm that integrates depth-informed visual grounding into a unified reasoning pipeline, teaching the model to dynamically correct and focus on prompt-relevant regions. Extensive experiments demonstrate that SIFThinker outperforms state-of-the-art methods in spatial understanding and fine-grained visual perception, while maintaining strong general capabilities, highlighting the effectiveness of our method. Code: this https URL.

Submission history

From: Zhangquan Chen
[v1]
Fri, 8 Aug 2025 12:26:20 UTC (5,223 KB)
[v2]
Thu, 14 Aug 2025 10:34:22 UTC (5,223 KB)
[v3]
Sun, 24 Aug 2025 13:04:46 UTC (5,223 KB)
[v4]
Tue, 16 Sep 2025 09:40:13 UTC (5,223 KB)


