Number of Students Using AI for Schoolwork Surges by Double-Digits

The adoption of artificial intelligence (AI) in U.S. classrooms has accelerated rapidly over the past year, with double-digit growth in the number of students using AI tools for schoolwork, according to a new report from Quizlet.

“With the support of AI tools, students can reclaim time and streamline tasks, making their value immediately clear,” Quizlet’s CEO told Newsweek.

Why It Matters

Artificial intelligence has surged in popularity across the United States and worldwide.

While some companies are integrating the tools to improve productivity, students are using the technology to their own advantage, whether to conduct research for papers, create baseline drafts of essays, or get tutor-like help on topics they find unclear.

What to Know

Quizlet’s 2025 How America Learns report revealed that 85 percent of teachers and students (age 14-22) now use AI in some capacity, marking a substantial increase from 66 percent in 2024. Among students, 89 percent reported using AI for schoolwork, compared to just 77 percent in the previous year.

“We also know that students today are juggling more than ever. In particular, college students are significantly more likely than high school students (82 percent vs. 73 percent) to have sacrificed sleep, personal time, or extracurricular activities because of homework,” Kurt Beidler, CEO of Quizlet, told Newsweek. “With the support of AI tools, students can reclaim time and streamline tasks, making their value immediately clear.”

The Pew Research Center’s January 2025 survey echoes this trend, finding that 26 percent of U.S. teens had used ChatGPT for schoolwork—double the 13 percent observed in 2023. Usage is highest among older students, Black and Hispanic teens, and those most familiar with AI tools.

Students are leveraging AI for a variety of academic tasks. Quizlet’s survey found the most common uses are:

  • Summarizing or synthesizing information (56 percent)
  • Conducting research (46 percent)
  • Generating study guides or materials (45 percent)

Teens support using AI tools like ChatGPT primarily for researching new topics (54 percent find it acceptable), though fewer approve of its use for math problems (29 percent) or essay writing (18 percent), according to Pew.

Stock image of a child using a smartphone while doing homework. Nadzeya Haroshka/Getty Images

“The growing adoption of AI in education signals a lasting trend toward greater use of these new technologies to enhance the learning journey by making it more efficient and effective,” Beidler said.

“Just as the adoption of AI continues to increase, we anticipate the future of education to become more personalized. We’re already seeing how AI can adapt in real time—identifying knowledge gaps, adjusting difficulty levels, and delivering the right content at the right moment to help students master material more efficiently.”

Despite rapid adoption, opinion on AI’s impact on education remains mixed. According to Quizlet’s findings, only 40 percent of respondents believe AI is used ethically and effectively in classrooms, with students less likely to agree (29 percent) compared to parents (46 percent) and teachers (57 percent).

“While privacy and security are vital concerns, we also need to address the deeper cognitive and developmental risks posed by AI in education,” Leyla Bilge, Global Head of Scam Research for Norton, told Newsweek.

“Easy access to instant answers and AI-generated content can lead to intellectual passivity—undermining curiosity, problem-solving, and critical thinking. Overreliance on AI shortcuts means students may miss essential learning processes, weakening foundational skills like reading comprehension, analytical reasoning, and writing.”

Demographic differences also persist: Pew’s data shows awareness and usage of ChatGPT are higher among white teens and those from wealthier households, while Black and Hispanic teens are more likely than their white peers to use it for schoolwork.

K-12 educators remain cautious. A 2023 Pew survey reported that 25 percent of public K-12 teachers believe AI tools do more harm than good, with more pessimism among high school staff. Still, many see benefits—especially in supporting research and personalized learning—if managed responsibly.

What People Are Saying

Kurt Beidler, CEO of Quizlet, said in the release: “As we drive the next era of AI-powered learning, it’s our mission to give every student and lifelong learner the tools and confidence to succeed, no matter their motivation or what they’re striving to achieve. As we’ve seen in the data, there’s immense opportunity when it comes to career-connected learning, from life skills development to improving job readiness, that goes well beyond the classroom and addresses what we’re hearing from students and teachers alike.”

Leyla Bilge, Global Head of Scam Research for Norton, told Newsweek: “The sharp rise in AI adoption across classrooms tells us that what was once considered cutting-edge is now becoming second nature. This isn’t just students experimenting, but it’s educators and parents recognizing AI as a legitimate tool for learning and support. Whether it’s drafting essays, solving math problems, or translating concepts into simpler terms, AI is making education more accessible and adaptive.”

What Happens Next

As digital learning expands, Quizlet’s report notes that over 60 percent of respondents want digital learning to play an equal or greater role than traditional methods, citing flexibility and accessibility. However, gaps persist: only 43 percent believe students with learning differences have equal access.

Looking ahead, the top skills students, parents, and educators want schools to develop include critical thinking, financial literacy, mental health management, and creativity—areas where AI-powered tools could play a growing role.

“Digital literacy must evolve. Students need to critically evaluate AI outputs, understand their limitations, and learn how to protect their personal data. Most importantly, children should engage with developmentally appropriate AI tools, those that encourage exploration and responsible use, not just efficiency,” Bilge said.

“Like age-appropriate books, AI systems for kids should align with educational and cognitive developmental goals.”



Nursa Launches Artificial Intelligence for Nurse Scheduling

Nursa Intelligence Assistant enables rapid posting of single or bulk shifts

SALT LAKE CITY, September 04, 2025–(BUSINESS WIRE)–Nursa, a nationwide platform that exists to put a nurse at the bedside of every patient in need, today announced the launch of an artificial intelligence assistant that enables healthcare facilities to rapidly generate shift listings within the Nursa platform. The first-of-its-kind smart scheduling tool helps organizations post single or bulk shifts within seconds so they can reach qualified, available clinicians immediately.

Active now within the Nursa platform, the Nursa Intelligence Assistant, or “NIA,” allows post creation in three ways: users can speak directly to NIA, describing their shift needs; they can take a photo of relevant shift information, even if it’s a handwritten scribble; or they can upload any spreadsheet or file used to track scheduling. From there, NIA fills in the details, letting users review, edit, and confirm pricing before posting.

Carlee Scholl, staffing coordinator at Sullivan Park Care Center in Spokane, Wash., manages up to 150 shifts per month and recently began using NIA to schedule individual and bulk shifts. She described the experience as quick and accurate, with the AI assistant capturing all the details perfectly. “I just looked it over to make sure it was everything that I needed,” she said. “It was spot on.”

“Artificial Intelligence is opening up new opportunities to streamline cumbersome workflows so healthcare facilities can focus on the important business of delivering quality patient care,” said Curtis Anderson, CEO and founder of Nursa. “With NIA, facilities eliminate the repetitive typing and data entry of shift posting by generating one or thousands of shifts in just seconds. We’re redefining what fast and easy staffing feels like, and this is just the beginning.”

For more information on how Nursa helps healthcare facilities, hospitals and health systems solve staffing needs with qualified clinicians, visit nursa.com.

About Nursa

Nursa is a nationwide platform that exists to put a nurse at the bedside of every patient in need, removing the financial strain and operational gaps of traditional staffing agencies. Nursa’s technology enables hospitals, health systems, skilled nursing facilities and community organizations to easily secure reliable, qualified, nursing talent for per diem shifts and contract work. Founded in 2019 and headquartered in Salt Lake City, Nursa is trusted by a growing community of more than 3,400 facilities and 400,000 nurses nationwide and is accredited by The Joint Commission. For more information, visit nursa.com.



Researchers Empower AI Companions With Spatiotemporal Reasoning For Dynamic Real-world Understanding

The ability to understand and respond to specific references within a video, relating to both where and when events occur, represents a crucial next step for artificial intelligence. Honglu Zhou, Xiangyu Peng, Shrikant Kendre, and colleagues at Salesforce AI Research address this challenge with Strefer, a novel framework that empowers Video LLMs with advanced spatiotemporal reasoning capabilities. Strefer generates synthetic instruction data, effectively teaching these models to interpret fine-grained spatial and temporal references within dynamic video footage, without relying on expensive or time-consuming human annotation. This approach significantly improves a Video LLM’s ability to understand complex instructions involving specific objects, locations, and moments in time, paving the way for more versatile and perceptually grounded AI companions capable of interacting with the real world. The results demonstrate that models trained with Strefer-generated data outperform existing methods on tasks requiring precise spatial and temporal understanding, establishing a new benchmark for instruction-tuned video analysis.

Data Synthesis and VLM Evaluation Strategies

This research details a project focused on building more robust and accurate Video Language Models (VLMs), improving their ability to understand and reason about video content, particularly in complex scenarios involving temporal reasoning, object localization, and nuanced descriptions. The core goal is to address limitations of existing VLMs, which often struggle with tasks requiring precise temporal understanding or grounding in specific video segments. The project relies heavily on generating synthetic data that targets exactly those weaknesses, challenging the model in areas where it struggles. This is achieved through a process called Strefer, and the data covers a wide range of tasks categorized as open-ended question answering, multiple-choice question answering, temporal reasoning, object localization, and reasoning about actions and behaviors.

The data format varies, specifying how much of the video is used as input and whether frames are extracted from a segment or the full video. Many tasks have mask-refer versions, where the question focuses on a specific region of interest in the video, forcing the model to ground its answers in the visual content. To improve the model’s ability to understand time, the research uses a technique that discretizes continuous time into segments, representing each segment with a temporal token added to the language model’s vocabulary. This allows the model to process time-related information more effectively.

Existing models struggle with understanding complex video content when queries rely on precise spatial locations or specific moments in time. Strefer addresses this limitation by systematically creating detailed, object-centric metadata from videos, including the location of subjects and objects as tracked over time, and their associated actions. This innovative approach leverages a modular system of pre-trained models, including Large Language Models and multimodal vision foundation models, to pseudo-annotate videos with temporally dense information.
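As a rough illustration of the time-discretization idea mentioned above, the sketch below maps continuous timestamps to discrete temporal tokens. The bin count, token strings, and helper function are assumptions made for illustration, not values or code taken from the Strefer paper.

    # Minimal sketch of discretizing continuous time into temporal tokens.
    # The bin count (100) and token strings like "<t_42>" are illustrative
    # choices, not values taken from the Strefer paper.

    NUM_TIME_BINS = 100
    TIME_TOKENS = [f"<t_{i}>" for i in range(NUM_TIME_BINS)]

    def to_time_token(timestamp_sec: float, duration_sec: float) -> str:
        """Map an absolute timestamp to its discrete temporal token."""
        frac = min(max(timestamp_sec / duration_sec, 0.0), 1.0)
        bin_idx = min(int(frac * NUM_TIME_BINS), NUM_TIME_BINS - 1)
        return TIME_TOKENS[bin_idx]

    # Example: refer to the span from 12.3s to 20.0s in a 60-second clip.
    start_tok = to_time_token(12.3, 60.0)  # "<t_20>"
    end_tok = to_time_token(20.0, 60.0)    # "<t_33>"
    prompt = f"What is the person doing between {start_tok} and {end_tok}?"

    # In practice, such token strings would be added to the language model's
    # vocabulary (e.g., tokenizer.add_tokens(TIME_TOKENS)) so the model can
    # read and emit them directly.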

By building upon this structured metadata, Strefer guides language models in generating high-quality instruction data specifically designed to train Video LLMs to understand and respond to complex spatiotemporal references, including gestures and time-based cues in user queries. Unlike existing datasets, Strefer automatically produces instruction-response pairs at scale, grounded in the dynamic, object-centric structures within videos: rich, detailed information about objects, their locations, and the actions occurring at specific moments in time. Because a combination of existing AI models annotates videos with this metadata, Strefer creates a large dataset without the need for costly human annotation.
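As a purely hypothetical illustration of what such object-centric, temporally dense metadata and a derived instruction-response pair might look like, the sketch below uses invented field names and values; it is not Strefer’s actual schema or pipeline.

    # Hypothetical object-centric metadata for one video, as might be produced
    # by off-the-shelf models (a tracker, a captioner, an LLM). Field names
    # and values are invented for illustration; this is not Strefer's schema.
    video_metadata = {
        "video_id": "vid_0001",
        "duration_sec": 60.0,
        "entities": [
            {
                "entity_id": "person_1",
                "category": "person",
                # Track: per-timestamp bounding boxes (x, y, w, h).
                "track": {5.0: [120, 80, 60, 180], 6.0: [130, 82, 60, 180]},
                "actions": [
                    {"label": "picks up a mug", "start_sec": 5.0, "end_sec": 8.0},
                    {"label": "sits down", "start_sec": 9.0, "end_sec": 12.0},
                ],
            }
        ],
    }

    def make_instruction_pair(meta: dict) -> dict:
        """Turn one tracked action into a time- and region-referred QA pair."""
        entity = meta["entities"][0]
        action = entity["actions"][0]
        question = (
            f"Between {action['start_sec']}s and {action['end_sec']}s, "
            f"what does the highlighted {entity['category']} do?"
        )
        return {"question": question, "answer": action["label"]}

    print(make_instruction_pair(video_metadata))
    # {'question': 'Between 5.0s and 8.0s, what does the highlighted person do?',
    #  'answer': 'picks up a mug'}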

Experiments demonstrate that video models trained with this synthetically generated data outperform existing models on tasks requiring spatial and temporal disambiguation, showing enhanced reasoning abilities. The authors acknowledge that the framework relies on the accuracy of the underlying AI models used for annotation. Future work may focus on refining the annotation process and exploring the application of Strefer to more complex real-world scenarios.

👉 More information
🗞 Strefer: Empowering Video LLMs with Space-Time Referring and Reasoning via Synthetic Instruction Data
🧠 ArXiv: https://arxiv.org/abs/2509.03501



Silicon Valley’s Major Reshuffle: Why Do Chinese People Dominate AGI, from Small

In the past two decades, Silicon Valley’s Internet industry belonged to Indian engineers. With their diligence, efficiency, and strong execution, they built the software empires of the Internet era.

However, with the rise of generative AI, the talent landscape in Silicon Valley is undergoing a systematic shift. Undoubtedly, the Chinese are becoming the most important source of talent in the AGI field.

Let’s take a look at the high “Chinese quotient” in Silicon Valley:

Among the initial 11-member team of Meta’s Super Intelligence Lab, 7 are of Chinese descent; among the first 12 members of xAI, 5 are Chinese, accounting for more than 40%; when Elon Musk released Grok 4, the two core figures sitting beside him were also Chinese; as for OpenAI, 6 out of the 17 key team members are Chinese.

No wonder some people joke: “Finance belongs to the Jews, and AGI belongs to the Chinese.”

What’s even more interesting is that the resumes of these top-tier talents are almost like a “template”:

Most of them completed their undergraduate studies at top domestic universities such as Tsinghua University and Peking University, then pursued doctorates at prestigious schools like Princeton, Stanford, MIT, and Carnegie Mellon, and subsequently entered the most cutting-edge AI labs in Silicon Valley, becoming the backbone of efforts to push the boundaries of the technology. This has almost become the most stable and efficient talent pipeline of the AI era.

There is a thought-provoking question behind this: How can an education system often criticized for “lacking creativity” systematically cultivate top-tier talents who can penetrate the technological fog and find the path to AGI?

01

The Chinese Have Become the Most Valuable Talent in the United States

In the AI departments of Silicon Valley’s top-tier technology companies, the proportion of Chinese researchers among the core members is astonishingly high.

The “Global Artificial Intelligence Talent Tracking Report 2.0” released by the Paulson Institute shows that in 2022, among the top 20% of AI institutions in the United States, the proportion of Chinese researchers reached 38%, exceeding even the 37% share of U.S.-born researchers.

If we focus on specific companies, the presence of the Chinese becomes even more prominent.

(1) In Meta’s Super Intelligence Lab, the Chinese Account for 64% of the First-Batch Core Members

In July, Meta established the Super Intelligence Lab, and the share of Chinese members was striking: among the first-announced 11-member core team, 7 have Chinese backgrounds.

Nearly all of them are technical backbones behind OpenAI’s key technology and product breakthroughs:

Bi Shuchao: Co-creator of the voice mode of GPT-4o and o4-mini, former head of multimodal post-training at OpenAI;

Chang Huiwen: Co-creator of GPT-4o image generation, inventor of the MaskGIT and Muse text-to-image architectures at Google;

Zhao Shengjia: Co-creator of ChatGPT, GPT-4, and several mini-models, former head of the synthetic data team at OpenAI.

Later, the team expanded to more than 30 people. In a circulated list of 44 people, the proportion of the Chinese was close to half. According to Wired magazine, in order to recruit talent, Meta even offered a compensation package of $300 million over four years, with more than $100 million payable in the first year.

(2) In OpenAI’s Gold-Medal AI Team, the Chinese Account for 35%

At OpenAI, the proportion of Chinese researchers is also striking.

When ChatGPT launched in November 2022, 9 of the 87 people on the credited team (10.34%) were Chinese, and 5 of them had completed their undergraduate studies at universities in mainland China.

Behind the many products unveiled since then are also a large number of Chinese faces:

There are more than 30 Chinese researchers behind GPT-4. Among the 9 leaders of the GPT-4o mini team, 5 are Chinese. Among the 13-member R&D team of Sora, 4 are Chinese.

Last year, when OpenAI launched its first natively multimodal model, GPT-4o, 6 out of the 17 key team members were Chinese, from universities such as Tsinghua University, Peking University, Shanghai Jiao Tong University, and the University of Science and Technology of China.

In the latest GPT-5 demonstration, Chinese researchers appeared three times. More noteworthy still, Chinese researchers have begun to move into management positions. For example, Mark Chen joined OpenAI in 2018, worked on core projects such as DALL·E, GPT-4, and o1, and has since been promoted to senior vice president of research.

(3) Musk’s “Chinese Brain Trust”

At xAI, Musk’s “brain trust” also includes Chinese members. Among the 12-member founding team, 5 are Chinese, accounting for more than 40%. At the Grok 4 launch event, the two core founding members on stage with Musk were Tony Wu and Jimmy Ba.

Of the two, the former is a co-founder of xAI who has interned at Google DeepMind and OpenAI. The latter is a co-author of the well-known Adam optimization algorithm; his papers have been cited more than 210,000 times, and he is already a big name in academia.

It is clear that Chinese researchers have, without a doubt, become the most important source of talent for Silicon Valley’s top-tier AI labs.

This is not accidental. According to a report from the think tank MacroPolo, in 2019, among researchers at top AI research institutions in the United States, the proportion who had completed their undergraduate studies in China was 29%. Just three years later, in 2022, this figure soared to 47%, almost half, while the share who had done their undergraduate studies in the United States was only 18%.

A clear path for top-tier AI talent is emerging: an undergraduate degree from top universities such as Tsinghua and Peking University + a doctorate in the United States = global top-tier AI talent.

According to incomplete statistics compiled by Crow Master, 22 of the 30 Chinese core researchers surveyed follow a similar path:

They graduated from top domestic universities such as Tsinghua University, Peking University, the University of Science and Technology of China, and Zhejiang University for their undergraduate studies. Then they went to prestigious schools like Princeton, Stanford, MIT, and Carnegie Mellon to pursue a doctorate. After that, they entered the most cutting-edge AI labs in Silicon Valley and became the backbone in pushing the boundaries of technology.

For example, among the core members of Meta’s Super Intelligence Lab there are many such representative figures: Yu Jiahui did his undergraduate studies in the Special Class for the Gifted Young at the University of Science and Technology of China and his doctorate at UIUC; Zhao Shengjia went from a Tsinghua University undergraduate degree to a Stanford doctorate; Bi Shuchao from Zhejiang University to a doctorate at the University of California, Berkeley; and Ren Hongyu from Peking University to a Stanford doctorate.

Why have these so-called “small-town exam takers,” who seem to have grown up in a “sea of practice problems,” become the scarcest talent in today’s AI industry?

02

Where Does the Engineer Dividend in the AI Era Come From?

In the past, conversations about AI centered on Silicon Valley. Look at the present, however, and you will find another force growing rapidly: China’s accumulation of AI research talent.

Now, China graduates more than 5 million students majoring in computer science and related fields every year, making it the world’s largest exporter of STEM talent.

According to the Dimensions research database, there are currently more than 30,000 active artificial intelligence researchers in China. The total number of doctoral and postdoctoral students alone is twice the total number of artificial intelligence researchers in the United States. In contrast, the United States has about 10,000 researchers, the 27 EU countries have about 20,000, and the United Kingdom has about 3,000.

This constitutes a huge talent pool for China’s AI, what might even be called the new “engineer dividend” of the AI era.

More importantly, China’s basic education emphasizes mathematical and physical foundations and problem-solving ability. This long-term, high-intensity training happens to cultivate the core qualities suited to AI research:

First, structured thinking: the ability to translate real-world problems into mathematical ones.

For example, in Olympiad math problems and physics problems, you are actually practicing translating real-world situations into formulas and equations and then solving them with mathematical methods.

In problem-solving training, students learn the ability to “remove redundant information and grasp the core variables”. The same is true in AI research. Complex things such as language, images, and actions must first be translated into vectors and matrices before they can be processed by machines.

Second, patience and resilience.

Math problems and competition problems often require a long process of thinking and calculation, and patience is a necessary quality. The same is true in AI research. Behind a single paper, there may be hundreds or thousands of experiments; models often have billions or even hundreds of billions of parameters, and parameter tuning is very time-consuming. Without patience, it is difficult to persevere in large-model experiments.

This is especially true now that reinforcement learning is replacing pre-training as the new scaling law for models, a shift that plays to the strengths of Chinese students.

Reinforcement learning is characterized by a clear goal (the reward function), no single fixed path, and continuous trial-and-error iteration. In Ilya’s words:

“Reinforcement learning allows AI to try new tasks with random paths. If the effect exceeds expectations, then update the weights of the neural network so that AI remembers to use this successful event more often and then starts the next attempt.”

This is very similar to the logic of Olympiad math: Try a path → Fail → Correct mistakes → Summarize → Try again.

And this is exactly the rhythm that Chinese students are most familiar with. Since childhood, they have been used to breaking big problems down into small ones and then solving them step by step. Long-term mathematical and physical training has also made them very proficient in tools such as probability, optimization, and linear algebra, and these are exactly the basic skills of RL.
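To make that trial-and-error loop concrete, here is a toy sketch in Python: a policy tries paths, and whenever a path’s reward beats the running expectation (the baseline), the preference for that path is nudged upward. The environment, reward values, and hyperparameters are all invented for illustration and are not taken from any lab’s actual training code.

    import random

    # Toy sketch of the loop described above: try paths, and when the reward
    # beats expectations (a running baseline), reinforce that choice.
    # Everything here (paths, rewards, hyperparameters) is invented.

    actions = ["path_A", "path_B", "path_C"]
    preferences = {a: 0.0 for a in actions}  # the policy's "weights"
    baseline, lr = 0.0, 0.1

    def reward(action: str) -> float:
        # Hypothetical environment: path_B is secretly the best strategy.
        return {"path_A": 0.2, "path_B": 0.9, "path_C": 0.4}[action] + random.gauss(0, 0.05)

    for _ in range(1000):
        # Epsilon-greedy: usually exploit the best-looking path, sometimes explore.
        if random.random() < 0.1:
            a = random.choice(actions)
        else:
            a = max(preferences, key=preferences.get)
        r = reward(a)
        # "If the effect exceeds expectations, update the weights."
        preferences[a] += lr * (r - baseline)
        baseline += 0.05 * (r - baseline)  # update the expectation itself

    print(max(preferences, key=preferences.get))  # almost always "path_B"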

By the time many of them finish their undergraduate studies, they are already very familiar with matrix operations, gradient descent, and probability modeling. So when they enter research, they don’t need to spend time catching up on fundamentals and can directly engage in algorithm innovation and implementation.

In addition, RL is characterized by quantifiable results and clear metrics: reward curves, convergence speed, and test scores all show improvement at a glance. Such a research model fits particularly well with the pragmatic, efficient, certainty-seeking habits of Chinese researchers.

This is why the Chinese have a particularly strong presence in the field of RL.

Among the RL papers at NeurIPS 2020, 30% of first authors were of Chinese descent; in Google’s RL team, one-quarter to one-third graduated from Chinese universities; and on the xAI team, Zhang Guodong, Yang Ge, Jimmy Ba, and others have all made their mark in top-tier RL research.

To some extent, reinforcement learning is the “natural home field” of Chinese engineers. And the rise of DeepSeek-R1 at the beginning of this year is a clear sign that this advantage is bearing fruit.

There is no mystery behind it. China has a large educated population, long-term mathematical and physical training from childhood to adulthood, long-term national investment in scientific research, and a motivation deeply rooted in culture: the belief that technology can transform the world.

It is these factors that together support a huge “talent pipeline”, continuously sending doctoral-level researchers to top-tier universities and AI labs in the United States.

In the era of large models, Silicon Valley still needs a few “Da Vinci-like geniuses” who can invent new paradigms, but right now it needs a large number of engineering scientists who can refine algorithms to the extreme. China’s education and talent system is showing exactly this kind of capacity to “produce new blood” at the moment, providing a stable and solid foundation for scientific research.

The competition in AI has never been a sprint along a single technological curve, but a long-term game of talent pipelines, education systems, and cultural mindsets.

When the most cutting-edge labs in Silicon Valley are full of Chinese faces, this is not only a talent phenomenon but a civilizational one. The future of AGI is not just a competition between companies, but a global, civilizational competition over the allocation of talent.

And in this competition, Chinese researchers already stand at the center of the stage.

This article is from the WeChat public account “Crow Intelligence Talk”, author: Smart Crow. Republished by 36Kr with authorization.


