Education
To engage with AI or not: learning engagement among rural junior high school students in an AI-powered adaptive learning environment

Validity and reliability testing of the measurement model
EFA was carried out on all the questions in each dimension to determine which items to retain in the junior high school students’ learning engagement scale for the AI-powered ALS. First, the Kaiser–Meyer–Olkin (KMO) and Bartlett sphericity tests were performed. The KMO value reached 0.915, and the Bartlett sphericity test was significant, indicating that the questionnaire was suitable for factor analysis. The EFA results (see Appendix D) reveal that the scale divides into 10 components, with a total variance explained of 75.069%. Compared with the constructed theoretical model, the items under the PEU and PU dimensions load on the same component. This finding indicates that the two dimensions are highly correlated, so PEU and PU were combined into a single technology acceptance dimension in the subsequent analysis.
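The KMO statistic summarized here (0.915) compares observed correlations against anti-image (partial) correlations. As a hedged illustration of what that value measures, the sketch below computes KMO from scratch; the data, sample size, and item count are invented for the example and are not the study's:

```python
import numpy as np

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy for an
    (n_respondents, n_items) matrix; values above 0.9 indicate
    data well suited to factor analysis."""
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    d = np.sqrt(np.diag(inv))
    # Anti-image (partial) correlations from the inverse correlation matrix
    partial = -inv / np.outer(d, d)
    np.fill_diagonal(partial, 0.0)
    np.fill_diagonal(corr, 0.0)
    r2, p2 = (corr ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + p2)

# Toy data: six items driven by one common factor (illustration only)
rng = np.random.default_rng(0)
factor = rng.normal(size=(500, 1))
items = factor + rng.normal(scale=0.7, size=(500, 6))
print(round(kmo(items), 3))
```

Because the toy items share one strong common factor, the partial correlations are small relative to the zero-order correlations and the KMO comes out high.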
Because of this combination, the technology acceptance dimension contains many questions, so its measurement was further analyzed in AMOS 26.0. The results show that the factor loadings of the questions in the technology acceptance dimension range from 0.63 to 0.86, all above 0.6 and below 0.95, indicating that the factor loadings of all questions meet the requirements (see Fig. 4). However, the structural validity test yields χ2/df = 5.609 > 3 and RMSEA = 0.138 > 0.08; neither fit index meets the reference values, which indicates that the model needs modification. The model was modified according to the modification indices (MI), and the observed variables on the paths with the largest MI values were deleted. Accordingly, the four questions PE1, PE4, PU3, and PU4, pointed to by e1, e3, e7, and e8, respectively, were removed, and the four questions PE3, PE5, PU1, and PU2 were retained (see Table 1). The revised construct validity data are shown in Table 2. After modification, the measurement model meets the reference values for model fit, demonstrating good construct validity.
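Both indices cited here can be derived directly from a model's chi-square, degrees of freedom, and sample size. The sketch below is illustrative only; the chi-square, df, and n are hypothetical values, not the study's:

```python
import math

def fit_indices(chi2, df, n):
    """chi-square/df ratio and RMSEA for a fitted SEM; the reference
    values used in the text are chi2/df < 3 and RMSEA < 0.08."""
    ratio = chi2 / df
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    return ratio, rmsea

# Hypothetical post-modification values (chi2, df, and n are invented)
ratio, rmsea = fit_indices(chi2=38.0, df=19, n=300)
```

With these invented inputs the ratio is 2.0 and the RMSEA falls below 0.08, i.e., both would pass the thresholds applied in the text.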
CFA was then performed on all of the retained questions to study junior high school students’ learning engagement in the AI-powered ALS and its factors. Convergent validity was examined first. Table 3 shows that all of the retained questions are significant at the 0.001 level. The factor loadings of all questions range from 0.648 to 0.933, all within the 0.6–0.95 range. The average variance extracted (AVE) values of the dimensions are between 0.542 and 0.806, all greater than 0.5, and the composite reliability (CR) values are between 0.777 and 0.926, all greater than 0.7. These results demonstrate that the scale has good convergent validity.
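AVE and CR are simple functions of the standardized factor loadings. A minimal sketch, using hypothetical loadings for a single four-item dimension (not values from the study):

```python
def ave_cr(loadings):
    """Average variance extracted (AVE) and composite reliability (CR)
    from standardized factor loadings; the thresholds applied in the
    text are AVE > 0.5 and CR > 0.7."""
    lam2 = [l * l for l in loadings]
    ave = sum(lam2) / len(loadings)
    s = sum(loadings)
    # Error variance of each item is 1 - loading^2 for standardized loadings
    cr = s * s / (s * s + sum(1.0 - l2 for l2 in lam2))
    return ave, cr

# Hypothetical loadings for one four-item dimension
ave, cr = ave_cr([0.72, 0.78, 0.81, 0.69])
```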
The discriminant validity of the whole scale was subsequently tested. The square roots of the AVE for the ten dimensions are 0.759, 0.827, 0.824, 0.746, 0.804, 0.736, 0.776, 0.802, 0.898, and 0.877 (see Table 4). For each dimension, the square root of the AVE is greater than its correlation coefficients with the other dimensions, showing that no dimension is highly correlated with the others and thus indicating good discriminant validity for this study.
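The comparison described here is the Fornell–Larcker criterion. A small helper, with invented AVE values and an invented correlation matrix for illustration:

```python
import numpy as np

def fornell_larcker_ok(ave, corr):
    """Fornell-Larcker criterion: each construct's sqrt(AVE) must
    exceed its absolute correlations with every other construct."""
    root = np.sqrt(np.asarray(ave, dtype=float))
    c = np.abs(np.array(corr, dtype=float))
    np.fill_diagonal(c, 0.0)  # ignore each construct's self-correlation
    return bool(np.all(root[:, None] > c))

# Hypothetical two-construct example: sqrt(0.58) ~ 0.76 and
# sqrt(0.64) = 0.80 both exceed the inter-construct correlation 0.45
ok = fornell_larcker_ok([0.58, 0.64], [[1.0, 0.45], [0.45, 1.0]])
```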
A reliability test of the questions that passed the validity test was then carried out; the results, using Cronbach’s α, are shown in Table 5. The Cronbach’s α coefficient of each dimension is between 0.759 and 0.925, and the overall reliability of the scale reaches 0.901. All dimensions have good reliability except perceived relatedness, which is still within the acceptable range; accordingly, the overall reliability of the scale is good.
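Cronbach's α can be computed directly from an item-score matrix. The sketch below uses synthetic data (the trait structure and sample size are invented for the example):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Toy data: four items sharing one underlying trait (illustration only)
rng = np.random.default_rng(0)
trait = rng.normal(size=(300, 1))
scores = trait + rng.normal(scale=0.7, size=(300, 4))
alpha = cronbach_alpha(scores)
```

Because the four toy items share most of their variance, α lands well above the conventional 0.7 threshold.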
To obtain a more complete picture of the students’ responses, the mean and standard deviation of each dimension and of the questions under each dimension were calculated. The students’ average score is higher than 3 points for the four dimensions of perceived autonomy, perceived competence, perceived relatedness, and technology acceptance, as well as for every question in these dimensions. This means that the students generally feel that their needs for autonomy, competence, and relatedness are satisfied in the AI-powered ALS and that they have a high degree of acceptance of it. The means of both intrinsic and extraneous cognitive load are lower than 3, but the mean of intrinsic cognitive load is close to 3, indicating that the students perceive the tasks as somewhat difficult and sometimes receive ambiguous instructions and feedback in the AI-powered ALS.
Goodness-of-fit test of the structural model
Second-order model validation
The right part of the model, i.e., learning engagement and its four measures, constitutes a second-order model. The main purpose of using a second-order model is to simplify the structural dimensions in SEM. In this study, learning engagement is divided into four dimensions: behavioral, cognitive, emotional, and agentic engagement. Without a second-order construct, each influencing factor in the research hypotheses would have to be linked separately to each of the four engagement dimensions, resulting in a redundant structure. Therefore, a second-order model is used for model validation in this study.
However, a second-order model inevitably entails some loss: it simplifies the structural model at the cost of model fit. A second-order model is therefore suitable only when this loss is small. There are usually two bases for judging whether the first-order model can be replaced by the second-order model: whether the second-order construct has theoretical support, and the size of the model’s target coefficient. In this study, the four measures of learning engagement have been widely recognized and used by researchers, so the second-order model has theoretical support. Second, the target coefficient of the model is T = 0.976, which is very close to 1 and makes replacing the first-order model with the second-order model appropriate. After the second-order validation is passed, the structural model can be tested for goodness of fit.
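The target coefficient is simply the ratio of the two competing models' chi-squares. The chi-square values below are illustrative, chosen only so that they reproduce the reported T; they are not the study's actual statistics:

```python
def target_coefficient(chi2_first_order, chi2_second_order):
    """Marsh and Hocevar's target coefficient T: the ratio of the
    first-order model's chi-square to the second-order model's.
    Values close to 1 mean the higher-order structure costs little fit."""
    return chi2_first_order / chi2_second_order

# Illustrative chi-square values chosen to reproduce the reported T = 0.976
t = target_coefficient(195.2, 200.0)
```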
Model running and goodness-of-fit validation
Based on the reliability and validity tests and the validation of the second-order model, the model comprises 10 dimensions, with technology acceptance being the combination of PEU and PU. The six dimensions of intrinsic cognitive load, extraneous cognitive load, perceived autonomy, perceived competence, perceived relatedness, and technology acceptance are factors that influence junior high school students’ learning engagement in the AI-powered ALS, and behavioral, cognitive, emotional, and agentic engagement are used as measures of learning engagement. There are 3–4 measurement questions (observed variables) under each dimension (latent variable), which meets the SEM requirement of at least three observed variables per latent variable. The running results show that the values of χ2/df, RMSEA, SRMR, CFI, IFI, and TLI all meet the reference values, so every model fit index is within the acceptable range, indicating that this structural model is an acceptable research model (see Table 6).
Research hypotheses test
Research hypotheses revision
In the initial theoretical framework, a total of 15 research hypotheses were delineated to explore distinct facets of user interaction with AI-powered ALS. Upon conducting a discriminant validity test, it was found that PEU and PU—two constructs anticipated to independently interact with learning engagement—exhibited a higher than acceptable level of correlation. This suggests a lack of clear distinction between them in the context of our study’s empirical data. To maintain the integrity of the structural model and align it with the theoretical underpinnings of TAM, these constructs were unified into a single dimension termed “technology acceptance.” Consequently, the original hypotheses H9 and H10, which separately addressed these constructs, were integrated into a composite hypothesis—now referred to as H7—to reflect their combined influence on learning engagement.
Similarly, hypotheses H5 and H6, which pertained to the impact of PC on PEU and PU, respectively, were also consolidated. This was based on the empirical observation that PC uniformly affected the newly defined technology acceptance dimension. Thus, a revised H5 was formulated to encapsulate this unified relationship. The original hypotheses H4, H7, H13, H14, and H15 also underwent terminological adjustments to replace PU and the dual expressions of PEU and PU with technology acceptance. These alterations, resulting in revised hypotheses H4, H6, H10, H11, and H12, were necessary to accurately represent the findings of our discriminant validity assessment and to ensure conceptual consistency throughout the study (as shown in Table 7).
These methodological amendments have streamlined our hypotheses and ensured that the structural model is congruent with both theoretical expectations and the observed empirical relationships. This adjustment addresses the overlap in the constructs and aligns the hypotheses with the validated model of user engagement within AI-powered adaptive learning environments.
Test for the relationship between variables
Having established goodness of fit for the modified model, AMOS 26.0 was used to conduct path analysis on the revised structural dimensions and to test the research hypotheses on the basis of the theoretically proposed relationships between the variables. The test results in Table 8 support research hypotheses H2, H4, H5, H6, H7, and H9, whereas H1, H3, and H8 are not supported.
Mediation effect test
In the constructed influencing factor model, technology acceptance may mediate the effects of perceived autonomy, perceived competence, and perceived relatedness on learning engagement. Therefore, the relationships between the three dimensions of basic psychological needs and both technology acceptance and learning engagement are further analyzed in this study.
Additionally, the coefficient product and bootstrap estimation of the upper and lower limits of the 95% confidence interval were used to test the significance of the direct, indirect, and total effects. The test results for perceived autonomy are shown in Table 9, which indicates that technology acceptance does not mediate the relationship between perceived autonomy and learning engagement; thus, H10 is not supported. The test results for perceived competence are shown in Table 10, which indicates that technology acceptance plays a partial mediating role between perceived competence and learning engagement; thus, H11 is supported. A comparison of the indirect and direct paths of perceived competence reveals no significant difference between the two effects. The test results for perceived relatedness are shown in Table 11, which indicates that technology acceptance plays a fully mediating role between perceived relatedness and learning engagement; thus, H12 is supported.
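A percentile-bootstrap test of an indirect effect can be sketched as follows. The data-generating coefficients are invented, and this OLS-based sketch simplifies the latent-variable estimation actually carried out in AMOS; it only illustrates the coefficient-product-with-bootstrap-CI logic:

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile-bootstrap 95% CI for the indirect effect a*b in a
    simple X -> M -> Y mediation estimated with OLS; the effect is
    judged significant when the interval excludes zero."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)               # resample with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]              # a path: M ~ X
        design = np.column_stack([np.ones(n), ms, xs])
        b = np.linalg.lstsq(design, ys, rcond=None)[0][1]  # b path: Y ~ M + X
        estimates[i] = a * b
    return np.percentile(estimates, [2.5, 97.5])

# Toy data with a genuine indirect path (coefficients are illustrative)
rng = np.random.default_rng(1)
x = rng.normal(size=400)
m = 0.5 * x + rng.normal(scale=0.8, size=400)
y = 0.6 * m + rng.normal(scale=0.8, size=400)
lo, hi = bootstrap_indirect(x, m, y)
```

Here the true indirect effect is 0.5 × 0.6 = 0.3, so the bootstrap interval excludes zero, which is the decision rule used for Tables 9 to 11.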
Model validation results
Based on the above mediation tests, the findings show that perceived competence and technology acceptance have a significant positive impact on learning engagement; extraneous cognitive load has a significant negative effect on learning engagement; technology acceptance mediates between perceived competence and learning engagement and between perceived relatedness and learning engagement; and perceived autonomy and intrinsic cognitive load have no direct or indirect impact on learning engagement. However, perceived autonomy, competence, and relatedness all have positive effects on technology acceptance.
We also conducted a round of expert review in which domain experts evaluated the components of the model; the model was sent to a panel of experts whose feedback, gathered through a series of interviews, helped build consensus on its validity. The research-validated model of the factors that interact with junior high school students’ learning engagement in the AI-powered ALS is shown in Fig. 5. Among all of the influencing factors, technology acceptance has the greatest positive impact on learning engagement, and extraneous cognitive load has a smaller but significant negative impact. Among the three basic psychological needs, neither the direct nor the indirect path from perceived autonomy to learning engagement is significant. Perceived competence affects learning engagement both directly and indirectly through technology acceptance; its direct and indirect effects are the greatest among the three factors, and the difference between them is not significant. Perceived relatedness, in contrast, affects learning engagement mainly through the mediating effect of technology acceptance, although, according to the point estimates, this indirect effect is still small. Finally, intrinsic cognitive load has no significant effect on learning engagement.
Qualitative data results
The qualitative data analysis followed the steps proposed by Assarroudi et al. (2018), partially modified as follows: (1) presetting themes—our themes were based on SDT, TAM, and CLT; (2) familiarizing ourselves with the data; (3) defining themes; (4) coding; (5) reviewing codes; (6) summarizing themes; and (7) producing the report. In the end, 19 codes were generated. From the SDT perspective, the themes included perceived autonomy (with four codes: personalized guidance, personal needs, self-directed learning, and curiosity), perceived competence (with two codes: self-efficacy and challenge), and perceived relatedness (with two codes: interactive discussion and a sense of closeness).
Excerpt 1:
(1) Personalized guidance: I feel that the AI learning tablet is basically like a one-to-one tutoring class outside school (teacher interview).
(2) Personal needs: (a) The AI learning system recommends one type of problem to students who do not understand it and a different type of problem to those who do. (b) We can see that every student is working on a different set of questions (classroom observation).
(3) Self-directed learning: In my spare time, I like to study on my own and follow my own learning schedule.
(4) Curiosity: (a) When we first started using it, I was curious and wanted to explore it. (b) When the technician opened the cabinet holding the tablets, the students all rushed forward to grab one—they were clearly curious and very interested (classroom observation).
(5) Self-efficacy: Most of the time, I can complete the tasks.
(6) Challenge: I am willing to address this problem as a challenge.
(7) Interactive discussion: (a) In this kind of class, I can discuss things with my classmates. (b) Because I do less lecturing in this mode, I have fewer discussions with students, and sometimes I worry whether they have truly learned it (teacher interview).
(8) Sense of closeness: If the teacher is also present, I can raise my hand to ask questions and feel a bit closer to the teacher.
From the TAM perspective, the themes comprise PEU—reflected in the codes device login and updates, drawing/handwriting operations, and recognition function—and PU—reflected in the codes learning assistance, knowledge consolidation, and learning improvement.
Excerpt 2:
(1) Device login and updates: (a) During one update, we spent quite a while flustered and trying to sort things out. (b) At the start of today’s class, the software was updated, so the students could not use it at first; the classroom was slightly chaotic, and the technician stood by to step in quickly to fix it (classroom observation).
(2) Drawing/handwriting operations: (a) Drawing tools—such as the set square and compass—are much easier to use. (b) The drawing tool is well designed: students can draw directly inside the software instead of sketching on paper and then taking a photo (teacher interview).
(3) Recognition function: (a) I hope that the system can improve its accuracy in recognizing characters. (b) A student told the technician in class that the software could not correctly recognize his handwriting (classroom observation).
(4) Learning assistance: (a) After a problem is finished, the worked solution can be read, and the analysis is very detailed. (b) If students answer a question incorrectly or do not know how to solve it, then the text explanations and video tutorials help them learn (teacher interview).
(5) Knowledge consolidation: (a) This helps me cement the knowledge point more firmly. (b) Teachers and technicians collaborate to set the day’s learning tasks, which helps students consolidate what they learned that day (classroom observation).
(6) Learning improvement: (a) Because my grades have improved, I am even more motivated to keep progressing. (b) With respect to the review that we performed before class the next day, students performed better than before; compared with when no one at home could help them solve problems, their learning has indeed improved (teacher interview).
From the CLT perspective, the data encompass intrinsic cognitive load, represented by the codes “tasks too difficult” and “excessive text or overly long videos,” and extraneous cognitive load, represented by the codes “inaccurate feedback,” “unclear instructions,” and “unclear explanations.”
Excerpt 3:
(1) Tasks too difficult: I think the problems it pushes to me are too hard—I really don’t want to do them.
(2) Excessive text/videos that are too long: (a) There is too much text in the questions, and it is tiring to read. (b) Some students do not have the patience to finish the videos (classroom observation).
(3) Inaccurate feedback: I worked on it for a long time and felt sure that I was right, but it said that I was wrong.
(4) Unclear instructions: Sometimes, if you accidentally tap somewhere, it exits, and you do not know how to get back.
(5) Unclear explanations: Sometimes the explanation is not clear, which can be stressful.
Global Artificial Intelligence in Education Market to Reach USD

The global Artificial Intelligence (AI) in the Education Sector market, valued at approximately USD 5.9 billion in 2024, is projected to grow to nearly USD 38.2 billion by 2034, registering a strong CAGR of 20.8%. This growth is fueled by the rising need for personalized learning experiences, faster adoption of virtual classrooms, ongoing teacher shortages, and increasing use of real-time data analytics to improve learning outcomes.
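As a quick sanity check, the growth rate implied by the release's own endpoints can be recomputed:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate implied by a start and end value."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# The release's figures: USD 5.9 billion (2024) to USD 38.2 billion (2034)
implied = cagr(5.9, 38.2, 10)  # roughly 0.205, close to the stated 20.8%
```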
In 2023, over 350 million learners worldwide engaged with AI-powered platforms, with EdTech leaders like Duolingo, BYJU’S, and Pearson AI recording double-digit growth in users. Duolingo’s adaptive engine handled over 1.4 billion daily practices, while BYJU’S facilitated 2.1 million daily learning experiences in India alone. AI tools are transforming teaching methods, streamlining administrative work, and enabling personalized student support.
Virtual teaching tools and intelligent tutoring systems are now used by more than 42% of higher education institutions in the U.S. and UK. AI-powered proctoring services such as ProctorU and Examity oversaw over 25 million online exams in 2023, ensuring integrity in digital assessments. Platforms like Google Classroom, Microsoft Education Insights, and ClassDojo provide educators with real-time insights to adapt lessons and support struggling students early.
AI technologies are also helping bridge the education gap in remote areas. In 2023, adaptive learning applications reached around 42 million students in underserved regions across Africa, Asia, and Latin America. Generative AI tools like ChatGPT and Google Gemini are increasingly used for lesson planning, content creation, and reducing teacher preparation time by up to 40%.
Regional Insights
North America leads the market in 2024, backed by strong EdTech investments and early AI adoption in K-12 and higher education. Asia-Pacific is the fastest-growing region due to large-scale government programs in China, India, and South Korea, with China investing over USD 1.2 billion in AI-powered classrooms in 2023. Europe is advancing AI integration through EU funding and partnerships between tech firms and universities.
Market Drivers
Key drivers include the demand for personalized learning, the shift to digital classrooms, and the need for data-driven decision-making. For instance, BYJU’S AI tools in India boosted test scores by 19%, while Khan Academy’s AI tutor “Khanmigo” reached over 200,000 students in just six months. AI analytics tools now assist over 12 million teachers worldwide in tracking student performance.
Trends and Innovations
AI is enabling personalized learning, intelligent tutoring systems, and real-time feedback. Accessibility tools such as speech-to-text and translation are supporting over 180 million students annually. AI integration with extended reality (XR) is enhancing immersive learning, while nonprofit programs are expanding access to digital education in underserved areas. Growing attention to AI ethics and student data privacy is shaping industry practices.
Market Restraints
Challenges include data privacy concerns, the digital divide, educator resistance, and policy uncertainty. Limited internet access and infrastructure in rural regions restrict AI adoption, and inconsistent regulations between countries increase compliance costs. Teacher training and clear ethical standards are essential for overcoming these barriers.
Segment Highlights
Technology: Machine Learning & Deep Learning held the largest share (39%) in 2024, followed by Natural Language Processing (24%) and Computer Vision (15%).
Platform: Cloud-based solutions dominated with 61% market share due to scalability and cost efficiency.
Applications: Virtual Learning Environments led with 33% share, while Intelligent Tutoring Systems and Language Learning tools showed the fastest growth.
End Use: K-12 schools accounted for 41% of deployments in 2024, with strong growth also seen in higher education and corporate training sectors.
Some major players operating in the artificial intelligence in the education sector market are:
Google LLC
Microsoft Corporation
Amazon Web Services, Inc.
International Business Machines Corporation
Cognizant Technology Solutions Corp.
Pearson PLC
Nuance Communications Inc.
Blackboard Inc.
Carnegie Learning, Inc.
Cognii, Inc.
Artificial Intelligence in the Education Sector Market Segmentation Analysis
By Component Outlook (Revenue, USD Billion, 2021-2034)
Solutions (Intelligent Tutoring Systems, Learning Management Systems, AI-Powered Content Creation, Adaptive Assessments, Analytics Platforms)
Services (Professional Services, Managed Services, AI Training and Consulting)
By Deployment Mode Outlook (Revenue, USD Billion, 2021-2034)
Cloud-Based
On-Premises
By Technology Outlook (Revenue, USD Billion, 2021-2034)
Machine Learning & Deep Learning
Natural Language Processing (NLP)
Computer Vision
Speech & Voice Recognition
Others
By Application Outlook (Revenue, USD Billion, 2021-2034)
Virtual Learning Environments
Intelligent Tutoring Systems
Student Information Systems
Classroom Management
Language Learning
Accessibility Tools
Others
By End-User Outlook (Revenue, USD Billion, 2021-2034)
K-12 Schools
Higher Education Institutions
Vocational & Technical Training
Corporate Training & Workforce Development
Government & Nonprofit Organizations
By Regional Outlook (Revenue, USD Billion, 2021-2034)
North America
U.S.
Canada
Mexico
Europe
Germany
United Kingdom
France
Italy
Spain
Nordics
Asia Pacific
China
India
Japan
South Korea
Australia
Latin America
Brazil
Argentina
Middle East & Africa
Saudi Arabia
UAE
South Africa
Nigeria
This release was published on openPR.
The role of AI in Purdue University’s academic future – Indianapolis News | Indiana Weather | Indiana Traffic

This is the second entry in WISH-TV’s deeper dive into artificial intelligence in education, examining how AI is being used in colleges and universities.
WEST LAFAYETTE, Ind. (WISH) — Students start the fall semester at Purdue University Aug. 25.
Leaders there say artificial intelligence will likely be a part of all college students’ education. However, how much or how little depends on their major, their professors, and the students themselves.
Jamil Mansouri just graduated from Purdue in May and double majored in Agricultural Economics and Political Science. He will soon start graduate school for Business Analytics and Data Management. He has become familiar with AI as a student.
“I think there are fields where AI can be your biggest tool or not help you that much,” Mansouri said.
He is also a member of the Student Pedagogy Advocates program and is a student voice while the university develops frameworks for how to use and teach AI.
When asked what people should know about AI in the university setting, Mansouri said, “The student body is not a monolith; it is very major specific. The second thing is students want consistent frameworks and guidance when using it.
“A lot of faculty have very different perspectives on AI, and it translates into their coursework. Some faculty actively support it and give you these resources. Others say, ‘If I even find a hint of it, I’m going to give you an F in the class.’ Students feel confused; they don’t know exactly where to go with it or where to engage with these tools.”
That’s where David Nelson comes into play. He is the associate director of the Purdue Center for Instructional Excellence and a courtesy faculty member in the John Martinson Honors College.
He helps the 2,400 faculty members stay updated on educational trends, such as AI.
“AI disrupts a lot of processes that we’ve come to rely on in education,” Nelson explained. “AI is creating a lot more freedom of choice in cognitive work. That’s a big part of what it is doing right now, and we haven’t had to worry about that choice before. Now we do.”
He’s helping implement Purdue University’s AI policy.
“Rather than institute one kind of broad AI policy, the university has encouraged different instructors and different departments to really investigate it,” Nelson said.
Professors will set the AI policy for their own classes; the level of use and implementation will depend on the course, the subject and the faculty member.
“We’re very much encouraging transparency for faculty instructors in their AI policies and those can be very different from class to class. We’re also encouraging faculty and instructors to make sure that there is a human in the loop when trying to give feedback or assessment to students by using any AI,” he said. “Everything else has been kind of ‘We’d like you to experiment we want you to be aware of what existing rules are about academic integrity and research’ — but it’s a lot of trust in full-time professionals to do their jobs.”

Purdue is trusting students, too.
Nelson says there are pros and cons to AI: A chatbot can act as a study buddy or something to bounce ideas off when brainstorming. It can even act as a language translator; however, it should not do the thinking for students.
Nelson encourages students to learn how algorithms work and to determine what they really want to accomplish in their higher education experience, and he encourages staff to have bold, upfront conversations with students about AI.
“How can we incentivize and change the way that we are encouraging students and engaging with them, so that they’re making the proactive choice to learn and realize this is something that could be harmful or helpful to them? And if they don’t know, what guidance can they get?” Nelson said.
The university says there will not be a freshman orientation focused on AI. This will be a subject individual professors will approach at the start of each semester.
While the university is embracing the technology, faculty at Purdue also hope it will encourage students to really think about what they want to get out of their education and perhaps promote more face-to-face interactions with professors.
Mansouri said this has prompted a lot of professors to change their coursework.
“It is no secret that in computer science you can get up through your junior year by just using ChatGPT to guide you through the code. So, professors are adapting: now you have to come in, have one-on-one conversations, and explain your coding and how you got there. You’re going to see different projects, I think. You’re going to see a lot more presentations, a lot more of the social side of it, and a lot more, kind of, showing and talking your way through something rather than writing on paper how you got to a solution,” Mansouri said.
When it comes to the issue of cheating, the university says AI detectors don’t work well anymore.
The school is relying on professors getting to know their students and sensing when something seems off. Nelson says he has had to do this with students, and generally, they admit to using generative AI in circumstances where it is not allowed.
“Identifying that is a feeling as well as a discussion. But when students do admit to it, it is a violation of academic integrity, and so it does violate the university’s honesty policy. There are the same kind of consequences [as] from directly copying from one previous static document to your own work and saying, ‘This is what I did,’” Nelson said.
The university is also encouraging students to take Purdue’s Honor Pledge.
According to its website, students developed the honor pledge to advance a supportive environment that promotes academic integrity and excellence. “It is intended that this pledge inspires Boilermakers of all generations to stay ‘on track’ to themselves and their University,” according to Purdue.
As for ChatGPT’s advice to college students?
The chatbot’s bottom line is that it’s a tool. It can be abused, but it can also challenge you to think critically and help shape your education and future career.
“I think, fundamentally, I agree with its perspective on that,” Mansouri said. “It can give you an edge, but it can also harm your education. So use it as the tool that it is, and it can be a double-edged sword.”
Nelson said the best analogy for AI is when radio started: a paradigm shift in technology that was suddenly everywhere…
He says AI is also a paradigm shift for education and potentially careers. Some students tell News 8 they are re-thinking or worried about jobs after school because of AI’s impact.
Next in this series, WISH-TV explores how AI is changing what students choose to do in higher education and the trades, and how it is affecting careers.
That story will air Aug. 18 on Daybreak.
Education
How schools are tackling absenteeism

(Note: This is the second piece in a two-part series on absenteeism in schools. Read the first part, on seven insights from researchers.)
Chronic absenteeism, when students miss 10 percent or more of the school year, is 50 percent higher across the nation than before the pandemic. Researchers say it’s difficult for schools to address the problem because it is both so intense, with students missing huge chunks of the school year, and so extensive, affecting both rich and poor students and even high achievers. And the reasons vary widely, from asthma and bullying to transportation problems and the feeling that school is boring.
“It’s hard to know where and when to target resources,” said Sam Hollon, a data analyst at the American Enterprise Institute, which hosted a symposium on the problem in May. “Who do you help when every student potentially can be a candidate for help?”
Related: Our free weekly newsletter alerts you to what research says about schools and classrooms.
The Trump administration’s immigration enforcement is exacerbating the problem. A June draft paper by Stanford University professor Thomas Dee calculated that recent raids coincided with a 22 percent increase in daily student absences with particularly large increases in absenteeism among the youngest students.
Talking about the problem isn’t enough. Researchers say they want to study more schools that are making headway. It remains unclear if there are broadly applicable fixes or if each school or even each student needs individual solutions. Some underlying root causes for skipping school are more complex than others, requiring psychotherapy or housing assistance, which schools can’t provide. Here are a few examples of how very different communities are tackling the problem.
Providence: Bus stops and weekend food bags
Principal W. Jackson Reilly of Nathanael Greene Middle School in Providence, Rhode Island, said that when he arrived in April 2023, half of his 900 students in grades six to eight were chronically absent, up from 30 percent of students before the pandemic. Thirty percent of his teachers were also chronically absent. Achievement scores were in the state’s bottom 1 percent.
Reilly managed to cut his chronic absenteeism rate in half, to 25 percent, in the 2024-25 school year. That’s still high: one in four students missed more than 18 days of school. But it’s better.
He began by identifying 150 kids who were just over the threshold for chronic absenteeism, those who missed between 18 and 35 days, hoping that these kids would be easier to lure back to school than those who were more disengaged. Reilly and a group of administrators and guidance counselors each took 10 to 15 students and showed their families how much school they had missed and how low their grades were. His team asked, “What do you need in order for your kid to be coming to school?”
The two most common replies: transportation and food.
Related: The chronic absenteeism puzzle
Many students lived only a mile away, too close to school to qualify for bus service. Yet the walk deterred many, especially if it was raining or snowing. Yellow buses often passed these children’s homes as they were transporting children who lived farther out, and Reilly convinced the district to add stops for these chronically absent children.
Ninety percent of his students come from families who are poor enough to qualify for the federal free or reduced-price lunch program and 80 percent are Hispanic. Although many children were fed breakfast and lunch at school, their families admitted that their kids would get so hungry over the weekend that they didn’t want to wake up and come to school on Mondays. Reilly partnered with a food pantry and sent bags of meat and pasta home with students on Fridays.
Individual attention also helped. At the start of each school day, Reilly and his team check in with their assigned students. Kids who show up get five “green bucks” to spend on snacks and prizes. Administrators call the homes of those who don’t come to school. “If they did not answer the phone, we’d make a home visit,” said Reilly.
The most dramatic overhaul was scheduling. Reilly scrapped individual schedules for students and assigned four teachers to every 104 students. The kids now move in pods of 26 that take all their classes together, rotating through the same four teachers throughout the day. The classrooms are right near each other, creating a smaller community within the school.
“It’s all about relationship building,” said Reilly. When students look forward to seeing their classmates and teachers, he said, they’re more motivated to come to school.
Researchers say fostering relationships is effective. Hedy Chang, executive director of Attendance Works, a nonprofit organization that advises schools on how to boost attendance rates, said it’s still a battle to persuade school leaders (and school board members) that making school a more welcoming place is more productive than punishing kids and families for skipping school.
Reilly said his school now posts the lowest student and teacher chronic absenteeism rates in Providence. And he said his school is the highest performing middle school in the city and among the highest statewide in reading.
New York City: Catching the butterflies
A cluster of New York City high schools is taking a more data-driven approach, guided by New Visions, a consulting organization that supports 71 city high schools.
After some experimentation, New Visions staff saw strong improvement in attendance in one subgroup of students who were on the cusp of missing 10 percent of school days, but had not yet crossed the chronic absenteeism threshold. These are students who might miss a day or two every week or every other week but were relatively engaged at school. Jonathan Green, a New Visions school improvement coach who is spearheading this effort, calls them “butterflies.” “They would flutter in and out every week,” he said.
Green suggested that someone at school meet weekly with these butterflies and show them their attendance data, set goals for the coming week and explain how their attendance was leading to better grades. The intervention took two to five minutes. “There were marked changes in attendance,” said Green.
New Visions built a website where school administrators could print out two-page documents for each student so the data, including monthly attendance and tardiness, appeared in an easy-to-digest format. The quick meetings took place for eight to 10 weeks during the final grading period for the semester. “That’s when there’s the most opportunity to turn those potentially failing grades into passing grades,” said Green. “We were finding these sweet spots within the school calendar to do this very high resource, high-energy intensive weekly check-in. It’s not something that anyone can easily scale across a school.”
Related: Tracking student data falls short in combating absenteeism at school
Staff had to figure out the bell schedule for each child and intercept them between classes. One staff member succeeded in holding their entire caseload of students below the chronic absenteeism threshold. Not everyone thought it was a good idea: Some school administrators questioned why so much effort should go into students who weren’t yet chronically absent rather than students in greater trouble.
The dramatic results help answer that question. Among schools in the Bronx that volunteered to participate in the butterfly intervention, chronic absenteeism rates dropped 15 percentage points from 47 percent in 2021 to 32 percent in 2025, still high. But other Bronx high schools in the New Visions network that didn’t try this butterfly intervention still had a chronic absenteeism rate of 46 percent.
Green said this solution wouldn’t work for other high schoolers. Some have trouble organizing their study time, he said, and need more intensive help from teachers. “Two- to five-minute check-ins aren’t going to help them,” said Green.
Indianapolis: Biscuits and gravy
The leader of an Indiana charter school told me he used a system of rewards and punishments that reduced the chronic absenteeism rate among his kindergarten through eighth graders from 64 percent in 2021-22 to 10 percent in 2024-25.
Jordan Habayeb, the chief operating officer of Adelante Schools, said he used federal funds for the school breakfast and lunch program to create a made-from-scratch restaurant-style cafeteria. “Fun fact: On homemade biscuit and gravy days, we saw the lowest rates of tardies,” he said.
Researchers recommend avoiding punishment because it doesn’t bring students back to school. But Habayeb said he adheres strictly to state law that requires schools to report 10 absences to the state Department of Child Services and to file a report with the county prosecutor. Habayeb told me his school accounted for a fifth of truancy referrals to the county prosecutor.
The school created an automated warning system that kicks in after five absences rather than waiting for the critical 10th. And Habayeb said he dispatched the safety and attendance officer in a van to have “real conversations with families rather than being buried in paperwork.” Meanwhile, students who did show up received a constant stream of rewards, from locker decorations to T-shirts.
Parent education was also important. During mandatory family orientations, the school illustrated how regular attendance matters for even young children. “We shared what a child might miss during a three-day stretch in a unit on ‘Charlotte’s Web’ — showing how easily a student could leave with a completely different understanding of the book,” said Habayeb. “This helped shift perspectives and brought urgency to the issue.”
Kansas City: Candy and notes
School leaders in Kansas City, Kansas, shared some tips that have worked for them during a webinar earlier this month hosted by Attendance Works. One elementary school reduced its chronic absenteeism from 55 percent in 2021 to 38 percent in 2024 by assigning all 300 students to an adult in the building, encouraging them to build an “authentic” relationship. Teachers were given a list of ideas but were free to do what seemed natural. One teacher left candy and notes on their assigned students’ desks. A preschooler proudly pasted his note, which said he was a “genius,” on the front door of his house. “The smiles kids have on their faces are amazing,” said Zaneta Boles, the principal of Silver City Elementary School.
When students do miss school, Boles said educators try to take a “non-blaming approach” so that families are more likely to divulge what is going on. That helps the school refer them to other community agencies for assistance.
Albuquerque: A shining example regroups
Alamosa Elementary School in Albuquerque, New Mexico, was once a shining example of a school that persuaded more families to send their kids to class. Chronic absenteeism fell as low as 1 in 4 students in 2018, when The Hechinger Report wrote about the school.
But Alamosa has not been immune from the surge of absenteeism that has plagued schools around the nation. Chronic absenteeism spiked to 64 percent of students during the 2021-22 school year, when Covid variants were still circulating. And it remained shockingly high, with 38 percent of students missing more than 10 percent of the 2024-25 school year — roughly matching the 50 percent increase in chronic absenteeism across the country since 2019.
“We were on a roll. Then life happened,” said Daphne Strader, Albuquerque Public Schools’ director of coordinated school health, who works to reduce absenteeism.
Strader said Alamosa and other Albuquerque schools have made some successful changes to how they’re tackling the problem. But the volume of absenteeism remains overwhelming. “There’s so many kids who have needs,” Strader said. “We need more staff on board.”
Related: 7 insights about chronic absenteeism, a new normal for American schools
Strader said attendance interventions had been “too siloed” and they’re focusing more on the “whole child.” She’s encouraging schools to integrate attendance efforts with other initiatives to boost academic achievement and improve student behavior. “Students are hungry, they’re dysregulated, they don’t have grit,” said Strader, and all of these issues are contributing to absenteeism. But she also concedes that some students have more severe needs, and it’s unclear who in the system can address them.
Her biggest advice for schools is to focus on relationships. “Relationships drive everything,” said Strader. “One of the major consequences of the pandemic was the isolation. If I feel a sense of belonging, I’m more likely to come to school.”
Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or barshay@hechingerreport.org.
This story about how schools are tackling absenteeism was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Proof Points and other Hechinger newsletters.