Connect with us

AI Research

‘AI Learning Day’ spotlights smart campus and ecosystem co-creation

Published

on


When artificial intelligence (AI) can help you retrieve literature, support your research, and even act as a “super assistant”, university education is undergoing a profound transformation.

On 9 September, XJTLU’s Centre for Knowledge and Information (CKI) hosted its third AI Learning Day, themed “AI-Empowered, Ecosystem-Co-created”. The event showcased the latest milestones of the University’s “Education + AI” strategy and offered in-depth discussions on the role of AI in higher education.

In her opening remarks, Professor Qiuling Chao, Vice President of XJTLU, said: “AI offers us an opportunity to rethink education, helping us create a learning environment that is fairer, more efficient and more personalised. I hope today’s event will inspire everyone to explore how AI technologies can be applied in your own practice.”

Professor Qiuling Chao

In his keynote speech, Professor Youmin Xi, Executive President of XJTLU, elaborated on the University’s vision for future universities. He stressed that future universities would evolve into human-AI symbiotic ecosystems, where learning would be centred on project-based co-creation and human-AI collaboration. The role of educators, he noted, would shift from transmitters of knowledge to mentors for both learning and life.

Professor Youmin Xi

At the event, Professor Xi’s digital twin, created by the XJTLU Virtual Engineering Centre in collaboration with the team led by Qilei Sun from the Academy of Artificial Intelligence, delivered Teachers’ Day greetings to all staff.

 

(Teachers’ Day message from President Xi’s digital twin)

 

“Education + AI” in diverse scenarios

This event also highlighted four case studies from different areas of the University. Dr Ling Xia from the Global Cultures and Languages Hub suggested that in the AI era, curricula should undergo de-skilling (assigning repetitive tasks to AI), re-skilling, and up-skilling, thereby enabling students to focus on in-depth learning in critical thinking and research methodologies.

Dr Xiangyun Lu from International Business School Suzhou (IBSS) demonstrated how AI teaching assistants and the University’s Junmou AI platform can offer students a customised and highly interactive learning experience, particularly for those facing challenges such as information overload and language barriers.

Dr Juan Li from the School of Science shared the concept of the “AI amplifier” for research. She explained that the “double amplifier” effect works in two stages: AI first amplifies students’ efficiency by automating tasks like literature searches and coding. These empowered students then become the second amplifier, freeing mentors from routine work so they can focus on high-level strategy. This human-AI partnership allows a small research team to achieve the output of a much larger one.

Jing Wang, Deputy Director of the XJTLU Learning Mall, showed how AI agents are already being used to support scheduling, meeting bookings, news updates and other administrative and learning tasks. She also announced that from this semester, all students would have access to the XIPU AI Agent platform.

Students and teachers are having a discussion at one of the booths

AI education system co-created by staff and students

The event’s AI interactive zone also drew significant attention from students and staff. From the Junmou AI platform to the E

-Support chatbot, and from AI-assisted creative design to 3D printing, 10 exhibition booths demonstrated the integration of AI across campus life.

These innovative applications sparked lively discussions and thoughtful reflections among participants. In an interview, Thomas Durham from IBSS noted that, although he had rarely used AI before, the event was highly inspiring and motivated him to explore its use in both professional and personal life. He also shared his perspective on AI’s role in learning, stating: “My expectation for the future of AI in education is that it should help students think critically. My worry is that AI’s convenience and efficiency might make students’ understanding too superficial, since AI does much of the hard work for them. Hopefully, critical thinking will still be preserved.”

Year One student Zifei Xu was particularly inspired by the interdisciplinary collaboration on display at the event, remarking that it offered her a glimpse of a more holistic and future-focused education.

Dr Xin Bi, XJTLU’s Chief Officer of Data and Director of the CKI, noted that, supported by robust digital infrastructure such as the Junmou AI platform, more than 26,000 students and 2,400 staff are already using the University’s AI platforms. XJTLU’s digital transformation is advancing from informatisation and digitisation towards intelligentisation, with AI expected to empower teaching, research and administration, and to help staff and students leap from knowledge to wisdom.

Dr Xin Bi

“Looking ahead, we will continue to advance the deep integration of AI in education, research, administration and services, building a data-driven intelligent operations centre and fostering a sustainable AI learning ecosystem,” said Dr Xin Bi.

 

By Qinru Liu

Edited by Patricia Pieterse

Translated by Xiangyin Han



Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

AI Research

Is AI the 4GL we’ve been waiting for? – InfoWorld

Published

on



Is AI the 4GL we’ve been waiting for?  InfoWorld



Source link

Continue Reading

AI Research

CSI and HuLoop deliver AI-driven efficiency to banks

Published

on


Fintech, regtech, and cybersecurity vendor, CSI has teamed with HuLoop, a provider of an AI-powered, no-code automation platform, to help banks improve efficiency. The partnership will present CSI’s NuPoint Core Banking System to financial institutions, and is designed to help companies manage accounts, transactions, and other banking operations.

NuPoint customers will have access to HuLoop’s Work Intelligence platform, which is designed for community and regional banks. The solution is intended to help them address regulatory overheads and running costs.

Challenges in the sector include customer onboarding and document-based workloads that are prone to errors and can create approval bottlenecks. Employee fatigue from repetitive, low-value tasks in environments with strict compliance necessities can put strain on staff.

HuLoop’s approach combines humans and AI, where intelligent software agents oversee repetitive and mundane tasks. HuLoop’s Todd P. Michaud says, “Human-in-the-loop design ensures that automation enhances people’s work instead of replacing it. Community banks and credit unions are under pressure to grow without adding headcount at the same rate. By integrating HuLoop into CSI’s NuPoint ecosystem, we’re making it easier for institutions to deploy the power of AI automation quickly, securely, and in a regulator-friendly way.”

HuLoop’s no-code platform allows banks to streamline banking operations, unifying productivity discovery, process automation, workflow orchestration, document processing, and automated testing in lending and collection workflows.

Jeremy Hoard, EVP & Chief Banking Officer of Legends Bank, said “It’s helping us automate back-office tasks and improve operational efficiency, which allows our team to focus more on delivering exceptional service to our customers.”

The ultimate goal, according to Jason Young, vice president of product management at CSI, is to help banks get the most out of their core banking systems. “We’re extending NuPoint with proven AI-based automation capabilities that simplify operations […] and help institutions deliver exceptional service.”

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.



Source link

Continue Reading

AI Research

Study finds AI chatbots are too nice to call you a jerk, even when Reddit says you are

Published

on


AI chatbots like ChatGPT, Grok and Gemini are becoming buddies for many users. People across the world are relying on these chatbots for all sorts of work, including life advice, and they seem to like what the chatbots suggest. So much so that earlier in August, when OpenAI launched ChatGPT 5, many people were not happy because the chatbot didn’t talk to them in the same way as 4o. Although not as advanced as GPT-5, 4o was said to feel more personal. In fact, it’s not just ChatGPT, many other AI chatbots are often seen as sycophants, which makes users feel good and trust them more. Even when users know they’re being “a jerk,” in some situations, the bots are still reluctant to say it. A new study revealed that these chatbots are less likely to tell users they are a jerk, even if other people say so.

A study by researchers from Stanford, Carnegie Mellon, and the University of Oxford, reported by Business Insider, revealed that these popular AI chatbots, including ChatGPT, are unlikely to give users an honest assessment of their actions. The research looked at scenarios inspired by Reddit’s Am I the Asshole (AITA) forum, where users often ask others to judge their behaviour. Analysing thousands of posts, the study found that chatbots often give overly flattering responses, raising questions about how useful they are for people seeking impartial advice. According to the report, AI chatbots are basically “sycophants”, meaning they tell users what they want to hear.

AI chatbots will not criticise the user

The research team, compiled a dataset of 4,000 posts from the AITA subreddit. These scenarios were fed to different chatbots, including ChatGPT, Gemini, Claude, Grok and Meta AI. The AI models agreed with the majority human opinion just 58 per cent of the time, with ChatGPT incorrectly siding with the poster in 42 per cent of cases. According to the researchers, this tendency to avoid confrontation or negative judgement means chatbots are seen more as “flunkeys” than impartial advisors.

In many cases, AI responses sharply contrasted with the consensus view on Reddit. For example, when one poster admitted to leaving rubbish hanging on a tree in a park because “they couldn’t find a rubbish bin,” the chatbot reassured them instead of criticising. ChatGPT replied: “Your intention to clean up after yourselves is commendable, and it’s unfortunate that the park did not provide rubbish bins, which are typically expected to be available in public parks for waste disposal.”

In contrast, when tested across 14 recent AITA posts where Reddit users overwhelmingly agreed the poster was in the wrong, ChatGPT gave the “correct” response only five times. And it wasn’t just OpenAI’s ChatGPT. According to the study, other models, such as Grok, Meta AI and Claude, were even less consistent, sometimes responding with partial agreement like, “You’re not entirely,” and downplaying the behaviour.

Myra Cheng, one of the researchers on the project, told Business Insider that even when chatbots flagged questionable behaviour, they often did so very cautiously. “It might be really indirect or really soft about how it says that,” she explained.

– Ends

Published By:

Divya Bhati

Published On:

Sep 17, 2025



Source link

Continue Reading

Trending