AI Research
Does artificial intelligence help uni students learn smarter or just faster?

25 July 2025
New research from the University of South Australia has revealed that tertiary students’ learning habits are deeply connected to how they engage with generative artificial intelligence tools.
Surveying 435 students from Australia and Canada, the study investigated how confidence, motivation, and effort regulation influence perceptions of AI-powered tools such as ChatGPT.
Researchers found that self-regulated learning skills play a significant role in whether students adopt AI as a meaningful learning aid or merely a quick solution for academic tasks.
The findings show that university students who use AI for academic purposes benefit more than those using it for work or personal tasks. They also show that students who feel confident in their abilities are more likely to use AI to benefit their learning.
Lead researcher, UniSA’s Associate Professor Negin Mirriahi, says that the way students approach AI tools reflects their broader learning strategies.
“Some students see AI as a shortcut, using it to finish assignments more quickly, but our research suggests that those with strong self-regulation skills actually harness it for deeper learning,” Assoc Prof Negin Mirriahi says.
“It’s not just about speed; it’s about how students engage with knowledge.
“When students feel confident in their capabilities, they are more likely to engage with and effectively use technological tools.”
The study highlights a distinction between students who use AI for university studies and those who engage with it for non-academic purposes such as work or entertainment.
Those using AI for learning were more likely to find it useful, reinforcing the connection between structured self-regulation and effective AI adoption.
Assoc Prof Mirriahi says the findings should inform how universities integrate AI into education.
“Artificial intelligence is reshaping higher education, and our study shows that students who are motivated and confident in their learning benefit the most from AI tools,” she says.
“The challenge for universities is to ensure AI fosters independent thinking rather than becoming a crutch for students who lack self-regulation.
“We need to help students develop the skills to critically engage with AI, not just rely on it for convenience.”
The researchers say that universities should model AI use in classrooms, demonstrating ways that students can engage with the technology to strengthen their critical thinking and independent learning.
“We need to see more engagement with AI in university environments, so that teachers can demonstrate how AI can benefit student learning,” Assoc Prof Mirriahi says.
“This might include showcasing how AI can generate ideas, explain complex concepts, or even critique their work.
“Importantly, through direct and guided engagement, students will learn how they can confidently and responsibly engage with AI to enhance their learning experiences, without cheating.”
Study co-author, UniSA’s Associate Professor Vitomir Kovanović, says that while AI adoption is increasing, there is a risk that some students may rely on it superficially, rather than using it to refine study skills and deepen understanding.
“The concern isn’t just whether students use AI, it’s about how they use it,” Assoc Prof Kovanović says.
“If they approach AI critically and actively evaluate its responses, they can enhance their learning.
“But if AI simply becomes a shortcut to completing tasks, we may see gaps in how students develop their problem-solving skills.”
Assoc Prof Kovanović says that universities should focus on fostering self-efficacy and effort regulation in students.
“Students who have confidence in their learning abilities and persist through challenges tend to find AI genuinely useful,” he says.
“Universities must equip students with strategies to use AI effectively so that it enhances their critical thinking, rather than replacing it.
“AI is already embedded in education, and it’s only going to become more prevalent. Our responsibility is to ensure students are equipped with the right strategies to navigate it effectively.”
The full paper is available here: Mirriahi, N., Marrone, R., Barthakur, A., Gabriel, F., Colton, J., Yeung, T. N., Arthur, P., & Kovanovic, V. (2025). The relationship between students’ self-regulated learning skills and technology acceptance of GenAI. Australasian Journal of Educational Technology.
Contacts for interview: Associate Professor Negin Mirriahi E: Negin.Mirriahi@unisa.edu.au
Associate Professor Vitomir Kovanović E: Vitomir.Kovanovic@unisa.edu.au
Media contact: Annabel Mansfield M: +61 479 182 489 E: Annabel.Mansfield@unisa.edu.au
AI Research
Nursa Launches Artificial Intelligence for Nurse Scheduling

Nursa Intelligence Assistant enables rapid posting of single or bulk shifts
SALT LAKE CITY, September 04, 2025–(BUSINESS WIRE)–Nursa, a nationwide platform that exists to put a nurse at the bedside of every patient in need, today announced the launch of an artificial intelligence assistant that enables healthcare facilities to rapidly generate shift listings within the Nursa platform. The first-of-its-kind smart scheduling tool helps organizations post single or bulk shifts within seconds so they can reach qualified, available clinicians immediately.
Active now within the Nursa platform, the Nursa Intelligence Assistant, or “NIA,” allows posts to be created in three ways: users can speak directly to NIA, describing their shift needs; they can take a photo of relevant shift information, even if it’s a handwritten scribble; or they can upload any spreadsheet or file used to track scheduling. From there, NIA fills in the details, letting users review, edit, and confirm pricing before posting.
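Nursa has not published NIA’s internals or a public API, so the sketch below is purely illustrative: a hypothetical parse of a scheduling spreadsheet into draft shift postings that a coordinator could review, edit, and confirm before posting. The ShiftPosting structure and every field name are assumptions, not Nursa’s actual data model.

```python
# Hypothetical sketch only: field names and ShiftPosting are illustrative assumptions
# about a "spreadsheet in, structured shift listings out" workflow, not Nursa's API.
import csv
import io
from dataclasses import dataclass

@dataclass
class ShiftPosting:
    role: str     # e.g. "RN", "LPN", "CNA"
    date: str     # ISO date of the shift
    start: str    # shift start time
    end: str      # shift end time
    rate: float   # proposed hourly rate, confirmed by the user before posting

def parse_bulk_shifts(spreadsheet_text: str) -> list[ShiftPosting]:
    """Turn a scheduling spreadsheet export into draft shift postings for review."""
    reader = csv.DictReader(io.StringIO(spreadsheet_text))
    return [
        ShiftPosting(
            role=row["role"].strip().upper(),
            date=row["date"].strip(),
            start=row["start"].strip(),
            end=row["end"].strip(),
            rate=float(row["rate"]),
        )
        for row in reader
    ]

sample = """role,date,start,end,rate
rn,2025-09-10,07:00,19:00,62.50
cna,2025-09-10,19:00,07:00,38.00"""

for draft in parse_bulk_shifts(sample):
    print(draft)  # coordinator reviews, edits, and confirms pricing before posting
```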
Carlee Scholl, staffing coordinator at Sullivan Park Care Center in Spokane, Wash., manages up to 150 shifts per month and recently began using NIA to schedule individual and bulk shifts. She described the experience as quick and accurate, with the AI assistant capturing all the details perfectly. “I just looked it over to make sure it was everything that I needed,” she said. “It was spot on.”
“Artificial Intelligence is opening up new opportunities to streamline cumbersome workflows so healthcare facilities can focus on the important business of delivering quality patient care,” said Curtis Anderson, CEO and founder of Nursa. “With NIA, facilities eliminate the repetitive typing and data entry of shift posting by generating one or thousands of shifts in just seconds. We’re redefining what fast and easy staffing feels like, and this is just the beginning.”
For more information on how Nursa helps healthcare facilities, hospitals and health systems solve staffing needs with qualified clinicians, visit nursa.com.
About Nursa
Nursa is a nationwide platform that exists to put a nurse at the bedside of every patient in need, removing the financial strain and operational gaps of traditional staffing agencies. Nursa’s technology enables hospitals, health systems, skilled nursing facilities and community organizations to easily secure reliable, qualified, nursing talent for per diem shifts and contract work. Founded in 2019 and headquartered in Salt Lake City, Nursa is trusted by a growing community of more than 3,400 facilities and 400,000 nurses nationwide and is accredited by The Joint Commission. For more information, visit nursa.com.
AI Research
Researchers Empower AI Companions With Spatiotemporal Reasoning For Dynamic Real-world Understanding

The ability to understand and respond to specific references within a video, relating to both where and when events occur, represents a crucial next step for artificial intelligence. Honglu Zhou, Xiangyu Peng, Shrikant Kendre, and colleagues at Salesforce AI Research address this challenge with Strefer, a novel framework that empowers Video LLMs with advanced spatiotemporal reasoning capabilities. Strefer generates synthetic instruction data, effectively teaching these models to interpret fine-grained spatial and temporal references within dynamic video footage, without relying on expensive or time-consuming human annotation. This approach significantly improves a Video LLM’s ability to understand complex instructions involving specific objects, locations, and moments in time, paving the way for more versatile and perceptually grounded AI companions capable of interacting with the real world. The results demonstrate that models trained with Strefer-generated data outperform existing methods on tasks requiring precise spatial and temporal understanding, establishing a new benchmark for instruction-tuned video analysis.
Data Synthesis and VLM Evaluation Strategies
This research details a project focused on building more robust and accurate Video Language Models (VLMs) to improve their ability to understand and reason about video content, particularly in complex scenarios involving temporal reasoning, object localization, and nuanced descriptions. The core goal is to address limitations of existing VLMs, which often struggle with tasks requiring precise temporal understanding or grounding in specific video segments. The project relies heavily on generating synthetic data to target the weaknesses of existing VLMs, challenging the model in areas where it struggles. This is achieved through a process called Strefer, and the data covers a wide range of tasks categorized as open-ended question answering, multiple-choice question answering, temporal reasoning, object localization, and reasoning about actions and behaviors.
The data format varies, specifying how much of the video is used as input, and whether frames are extracted from a segment or the full video. Many tasks have mask-refer versions, where the question focuses on a specific region of interest in the video, forcing the model to ground its answers in the visual content. To improve the model’s ability to understand time, the research uses a technique that discretizes continuous time into segments, representing each segment with a temporal token added to the language model’s vocabulary. This allows it to process time-related information more effectively. Existing models struggle with understanding complex video content when queries rely on precise spatial locations or specific moments in time. Strefer addresses this limitation by systematically creating detailed, object-centric metadata from videos, including the location of subjects and objects as tracked over time, and their associated actions. This innovative approach leverages a modular system of pre-trained models, including Large Language Models and multimodal vision foundation models, to pseudo-annotate videos with temporally dense information.
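One of the mechanisms mentioned above is the discretization of continuous time into temporal tokens added to the language model’s vocabulary. The snippet below is a rough sketch of that idea only: the bin count and token naming are assumptions for illustration, not the paper’s exact scheme.

```python
# Minimal sketch of discretizing continuous video time into temporal tokens.
# NUM_TIME_BINS and the "<time_i>" naming are assumed values for illustration.
NUM_TIME_BINS = 100
TEMPORAL_TOKENS = [f"<time_{i}>" for i in range(NUM_TIME_BINS)]  # added to the LM vocabulary

def timestamp_to_token(t_seconds: float, video_duration: float) -> str:
    """Map an absolute timestamp to the temporal token of its bin."""
    frac = min(max(t_seconds / video_duration, 0.0), 1.0)    # normalise to [0, 1]
    idx = min(int(frac * NUM_TIME_BINS), NUM_TIME_BINS - 1)  # bin index
    return TEMPORAL_TOKENS[idx]

# A 90-second clip: an event at 27.3 s falls into bin 30 of 100.
print(timestamp_to_token(27.3, 90.0))  # -> "<time_30>"
```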
Building on this structured metadata, Strefer guides language models in generating high-quality instruction data specifically designed to train Video LLMs to understand and respond to complex spatiotemporal references, including the gestures and time-based cues that appear in user queries. Unlike existing datasets, Strefer automatically produces instruction-response pairs at scale, grounded in the dynamic, object-centric structures within videos; because the annotation is carried out by a combination of existing AI models, the dataset is built without costly human annotation.
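A minimal sketch of how such object-centric metadata could be turned into a grounded instruction-response pair is shown below; the TrackedEntity fields, the mask-referred question template, and the example values are all hypothetical, meant only to make the pipeline concrete rather than to reproduce the paper’s actual generation prompts.

```python
# Illustrative sketch (not the paper's actual pipeline): given pseudo-annotated,
# object-centric metadata (a tracked entity, its time span, and its action),
# emit one spatiotemporally grounded instruction-response pair.
from dataclasses import dataclass

@dataclass
class TrackedEntity:
    name: str
    action: str
    start_s: float                    # when the action begins
    end_s: float                      # when the action ends
    box: tuple[int, int, int, int]    # x, y, w, h of the tracked region

def make_instruction_pair(entity: TrackedEntity, duration: float) -> dict:
    """Build one mask-referred QA pair grounded in the entity's track."""
    question = (
        f"What is the subject in the marked region {entity.box} "
        f"doing between {entity.start_s:.1f}s and {entity.end_s:.1f}s?"
    )
    answer = f"The {entity.name} is {entity.action}."
    return {"video_span": (0.0, duration), "question": question, "answer": answer}

person = TrackedEntity("person in the red jacket", "opening the refrigerator door",
                       start_s=12.0, end_s=15.5, box=(340, 120, 180, 400))
print(make_instruction_pair(person, duration=42.0))
```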
Experiments demonstrate that video models trained with this synthetically generated data outperform existing models on tasks requiring spatial and temporal disambiguation, showing enhanced reasoning abilities. The authors acknowledge that the framework relies on the accuracy of the underlying AI models used for annotation. Future work may focus on refining the annotation process and exploring the application of Strefer to more complex real-world scenarios.
👉 More information
🗞 Strefer: Empowering Video LLMs with Space-Time Referring and Reasoning via Synthetic Instruction Data
🧠 ArXiv: https://arxiv.org/abs/2509.03501
AI Research
Silicon Valley’s Major Reshuffle: Why Do Chinese People Dominate AGI, from Small

For the past two decades, Silicon Valley’s Internet industry has belonged to Indian engineers. With their diligence, efficiency, and strong execution, they built the software empire of the Internet era.
However, with the rise of generative AI, the talent landscape in Silicon Valley is undergoing a systematic shift. Undoubtedly, the Chinese are becoming the most important source of talent in the AGI field.
Let’s take a look at the high “Chinese quotient” in Silicon Valley:
Among the initial 11-member team of Meta’s Super Intelligence Lab, 7 are of Chinese descent; among the first 12 members of xAI, 5 are Chinese, accounting for more than 40%; when Elon Musk released Grok 4, the two core figures sitting beside him were also Chinese; as for OpenAI, 6 out of the 17 key team members are Chinese.
No wonder some people joke: “Finance belongs to the Jews, and AGI belongs to the Chinese.”
What’s even more interesting is that the resumes of these top-tier talents follow almost the same template:
Most of them completed their undergraduate studies at top Chinese universities such as Tsinghua University and Peking University, then went to prestigious schools like Princeton, Stanford, MIT, and Carnegie Mellon for their doctorates. From there they moved naturally into the most cutting-edge AI labs in Silicon Valley and became the backbone of efforts to push the boundaries of the technology. This has become perhaps the most stable and efficient talent pipeline of the AI era.
There is a thought-provoking question behind this: how can an education system often criticized for “lacking creativity” systematically cultivate top-tier talents who can penetrate the technological fog and find the path to AGI?
01
The Chinese Have Become the Most Valuable Talent in the United States
In the AI departments of Silicon Valley’s top-tier technology companies, the proportion of Chinese researchers among the core members is astonishingly high.
The Paulson Institute’s “Global AI Talent Tracker 2.0” shows that in 2022, among the top 20% of AI institutions in the United States, the proportion of Chinese researchers reached 38%, exceeding even the 37% share held by American researchers.
If we focus on specific companies, the presence of the Chinese becomes even more prominent.
(1) In Meta’s Super Intelligence Lab, the Chinese Account for 64% of the First-Batch Core Members
In July, Meta established the Super Intelligence Lab, and the proportion of Chinese members was remarkable. Among the first-announced 11-member core team, 7 have Chinese backgrounds.
They are almost all the technical backbones behind the key technological and product breakthroughs of OpenAI:
Bi Shuchao: Co-creator of the GPT-4o voice mode and o4-mini, former head of multimodal post-training at OpenAI;
Chang Huiwen: Co-creator of GPT-4o image generation, inventor of the MaskGIT and Muse text-to-image architectures at Google;
Zhao Shengjia: Co-creator of ChatGPT, GPT-4, and several mini models, former head of the synthetic data team at OpenAI.
Later, the team expanded to more than 30 people. In a circulated list of 44 people, the proportion of the Chinese was close to half. According to Wired magazine, in order to recruit talent, Meta even offered a compensation package of $300 million over four years, with more than $100 million payable in the first year.
(2) In OpenAI’s Gold-Medal AI Team, the Chinese Account for 35%
In OpenAI, the proportion of the Chinese is also astonishing.
In November 2022, when ChatGPT made its stunning debut, the 87-member team behind it included 9 Chinese members (10.34%), 5 of whom completed their undergraduate studies at universities in mainland China.
Behind the many products that have been successively unveiled, there are also a large number of Chinese faces:
There are more than 30 Chinese researchers behind GPT-4. Among the 9 leaders of the GPT-4o mini team, 5 are Chinese. Among the 13-member R&D team of Sora, 4 are Chinese.
Last year, when OpenAI launched its first native multimodal model, GPT-4o, 6 out of the 17 key team members were Chinese, from universities such as Tsinghua University, Peking University, Shanghai Jiao Tong University, and the University of Science and Technology of China.
In the latest GPT-5 demonstration, Chinese researchers appeared on screen three times. More noteworthy still, Chinese staff have begun moving into management positions. Mark Chen, for example, joined OpenAI in 2018, participated in core projects such as DALL·E, GPT-4, and o1, and has now been promoted to senior vice president of research.
(3) Musk’s “Chinese Brain Trust”
In xAI, Musk’s “brain trust” also includes Chinese members. Among the 12-member founding team, 5 are Chinese, accounting for more than 40%. At the Grok 4 launch event, the two core founding members on stage with Musk were Tony Wu and Jimmy Ba.
Tony Wu is a co-founder of xAI who has interned at Google DeepMind and OpenAI; Jimmy Ba is the well-known co-author of the Adam optimization algorithm, whose papers have been cited more than 210,000 times, and is already a big name in academia.
Without a doubt, the Chinese have become the most important source of talent for Silicon Valley’s top-tier AI labs.
This is not accidental. According to a report from the think tank MacroPolo, in 2019, 29% of researchers at top US AI research institutions had completed their undergraduate degrees in China. Just three years later, in 2022, this figure had soared to 47%, almost half, while the share with US undergraduate backgrounds was only 18%.
A clear path for top-tier AI talent is emerging: undergraduate degree from a top university such as Tsinghua or Peking + doctorate in the United States = global top-tier AI talent.
According to incomplete statistics compiled by Crow Master, 22 of the 30 Chinese core researchers surveyed followed a similar path:
They completed their undergraduate studies at top Chinese universities such as Tsinghua University, Peking University, the University of Science and Technology of China, and Zhejiang University, then went to prestigious schools like Princeton, Stanford, MIT, and Carnegie Mellon for their doctorates. After that, they entered the most cutting-edge AI labs in Silicon Valley and became the backbone of efforts to push the boundaries of the technology.
For example, among the core members of Meta’s Super Intelligence Lab, there are many such representative figures: Yu Jiahui graduated from the Juvenile Class of the University of Science and Technology of China for his undergraduate studies and studied at UIUC for his doctorate; Zhao Shengjia graduated from Tsinghua University for his undergraduate studies and Stanford University for his doctorate; Bi Shuchao graduated from Zhejiang University for his undergraduate studies and the University of California, Berkeley for his doctorate; Ren Hongyu graduated from Peking University for his undergraduate studies and Stanford University for his doctorate.
Why have these so-called “small-town exam-takers”, who seem to have grown up in a sea of practice problems, become the scarcest talent in today’s AI industry?
02
Where Does the Engineer Dividend in the AI Era Come From?
In the past, discussions of AI tended to focus on Silicon Valley. Look at the present, however, and you will find another force growing rapidly: China’s accumulation of AI research talent.
Now, China graduates more than 5 million students majoring in computer science and related fields every year, making it the world’s largest exporter of STEM talent.
According to the Dimensions research database, China currently has more than 30,000 active artificial intelligence researchers; its doctoral and post-doctoral students alone number twice the total count of AI researchers in the United States. By contrast, the United States has about 10,000 researchers, the 27 EU countries about 20,000, and the United Kingdom about 3,000.
This constitutes a huge talent echelon for China’s AI, and it can even be said to be the new “engineer dividend” in the AI era.
More importantly, China’s basic education emphasizes mathematical and physical foundations and problem-solving ability. This long-term, high-intensity training cultivates exactly the core qualities that AI research demands:
First, structured thinking: the ability to translate real-world problems into mathematical ones.
In Olympiad math and physics problems, for example, you are really practicing translating real-world situations into formulas and equations and then solving them with mathematical methods.
Through problem-solving drills, students learn to “remove redundant information and grasp the core variables”. The same is true in AI research: complex things such as language, images, and actions must first be translated into vectors and matrices before machines can process them.
Second, patience and resilience.
Math and competition problems often demand long stretches of thinking and calculation, so patience is a necessary quality. The same is true in AI research: behind a single paper there may be hundreds or thousands of experiments; models often have billions or even hundreds of billions of parameters, and parameter tuning is very time-consuming. Without patience, it is difficult to persevere through large-model experiments.
Especially now that reinforcement learning is replacing pre-training as the new scaling law for models, the strengths of Chinese students are even better suited.
The characteristic of reinforcement learning is that the goal is clear (the reward function), the path is not unique, and continuous trial-and-error iteration is required. In Ilya Sutskever’s words:
“Reinforcement learning allows AI to try new tasks with random paths. If the effect exceeds expectations, then update the weights of the neural network so that AI remembers to use this successful event more often and then starts the next attempt.”
This is very similar to the logic of Olympiad math: Try a path → Fail → Correct mistakes → Summarize → Try again.
And this is exactly the rhythm that Chinese students are most familiar with. Since childhood, they have been used to breaking down big problems into small problems and then solving them step by step. Long-term mathematical and physical training has also made them very proficient in tools such as probability, optimization, and linear algebra, and these are exactly the basic skills of RL.
By the time many people graduate from undergraduate studies, they are already very familiar with matrix operations, gradient descent, and probability modeling. So when they enter research, they don’t need to “make up for lost lessons” and can directly engage in algorithm innovation and implementation.
In addition, RL results are quantifiable and its metrics are clear: reward curves, convergence speed, and test scores all show improvement at a glance. Such a research model is particularly in line with Chinese researchers’ habits of being pragmatic, efficient, and pursuing certainty.
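As a toy illustration of the loop described above, the sketch below tries actions, compares each reward to a running expectation (a baseline), and nudges the policy weights toward actions that beat it, printing the reward curve as it improves. It is a generic policy-gradient bandit example, not any lab’s code; the reward probabilities and hyperparameters are made up.

```python
# Toy policy-gradient loop: try a path, check whether the reward exceeds the
# running expectation, and reinforce the weights behind successful attempts.
import numpy as np

rng = np.random.default_rng(0)
true_reward_prob = np.array([0.2, 0.8])   # arm 1 is genuinely better (assumed values)
logits = np.zeros(2)                      # policy "weights"
baseline, lr = 0.0, 0.5

for step in range(500):
    probs = np.exp(logits) / np.exp(logits).sum()   # softmax policy
    action = rng.choice(2, p=probs)                 # try a path
    reward = float(rng.random() < true_reward_prob[action])
    advantage = reward - baseline                   # did it exceed expectations?
    grad = -probs
    grad[action] += 1.0                             # d log pi(action) / d logits
    logits += lr * advantage * grad                 # remember successful events
    baseline += 0.05 * (reward - baseline)          # running expectation of reward
    if (step + 1) % 100 == 0:
        print(f"step {step+1}: P(better arm) = {probs[1]:.2f}, baseline reward = {baseline:.2f}")
```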
This is why the Chinese have a particularly strong presence in the field of RL.
Among the RL papers at NeurIPS 2020, 30% of first authors were of Chinese descent; in Google’s RL teams, one quarter to one third of members graduated from Chinese universities; and in the xAI team, Zhang Guodong, Yang Ge, Jimmy Ba, and others have all made their mark in top-tier RL research.
To some extent, reinforcement learning is the “natural home field” of Chinese engineers. And the rise of DeepSeek-R1 at the beginning of this year is a clear indication that this advantage is bearing fruit.
There is no mystery behind it: China has a large educated population, long-term mathematical and physical training from childhood to adulthood, long-term national investment in scientific research, and a motivation deeply rooted in the culture: the belief that technology can transform the world.
It is these factors that together support a huge “talent pipeline”, continuously sending doctoral-level researchers to top-tier universities and AI labs in the United States.
In the era of large models, Silicon Valley still needs a few “Da Vinci-like geniuses” who can invent new paradigms, but right now it needs even more the many engineering scientists who can refine algorithms to the extreme. China’s education and talent system is showing exactly this kind of strong talent-producing capacity at this moment, providing a stable and solid foundation for research.
Competition in AI has never been a sprint along a single technological curve, but a long-term game of talent pipelines, education systems, and cultural mindsets.
When the most cutting – edge labs in Silicon Valley are full of Chinese faces, this is not only a talent phenomenon but also a civilization phenomenon. The future of AGI is not just a competition between companies, but a global civilization competition in talent allocation.
And in this competition, the Chinese have already stood in the center of the stage.
This article is from the WeChat public account “Crow Intelligence Talk”, author: Smart Crow. Republished by 36Kr with authorization.