AI Research
The gray area of AI – The NAU Review

Artificial intelligence has quickly become part of the fabric of academic life, and scholars are finding themselves caught between innovation and integrity.
Professor Luke Plonsky and Assistant Professor Tove Larsson, both from NAU’s Department of English, are two of the applied linguists involved in a study led by NAU alumna Katherine Yaw, who works at the University of South Florida. Along with Scott Sterling from Indiana State University and Merja Kytö of Uppsala University, the group is working to map the gray area of AI usage in academic publishing.
“We have been working together for a long time, since before receiving the grant from the National Science Foundation for this study,” Larsson said. “We’re using a community-based approach to try to better understand the gray area between ethical and unethical use of AI when conducting and publishing research.”
The group plans to work on developing a taxonomy for AI use by carrying out asynchronous focus groups. These will include about 90 stakeholders in academic publishing, including journal editors, peer reviewers and authors from diverse fields. The idea is to gather information on what could be considered questionable when using AI for research.
“When you submit a manuscript to a journal, the journal submission system might use an AI checker,” Plonsky said. “One of the questions we have is what needs to be disclosed about its use. We want to know how reviewers, journal editors and researchers are using AI for research and publication purposes.”
After the asynchronous focus groups, the team will code the gathered responses to generate an initial list of questionable research practices that will facilitate the creation of policy documents and ethical guidelines for journals and professional organizations.
“Maybe different journals will take this list of considerations and make their own guidelines based on them,” Plonsky said. “They might say, OK, for this journal we are allowing this, but we are not allowing that. At the very least, many will likely want authors to declare or disclose if, and when, they use AI. But as of right now, it is the Wild West, and there’s very little guidance from journals, publishers and learned societies.”
Larsson said that by cataloging which uses are considered questionable, the team could help governing bodies regulate what should be acceptable when using AI for research and publishing.
“If you don’t know what the range of things you could do with AI is in the context of publishing, it is difficult to design guidelines,” Larsson said. “I don’t think any of us are against the use of AI, but there is this gray area of questionable research practices for AI usage that we need to understand better.”
The incubation grant funds a year of research, and both Larsson and Plonsky hope that the NSF and other agencies will provide additional funding for ongoing research on the interface between AI and research ethics.
“Authors and reviewers of the journals that we work with come to us with questions that we don’t have answers to yet,” Plonsky said. “Even publishers like Cambridge University Press don’t have specific enough guidelines on this because the role of AI in academic research needs to be further examined. Answering those questions is what we are trying to do.”
Mariana Laas | NAU Communications
(928) 523-5050 | mariana.laas@nau.edu
AI Research
School Cheating: Research Shows AI Has Not Increased Its Scale

Changes in Learning: Cheating and Artificial Intelligence
Reading the news, one gets the impression that every student is using artificial intelligence to cheat in their studies. Headlines in newspapers such as The Wall Street Journal and The New York Times routinely pair ‘cheating’ with ‘AI’, and many stories, like a recent piece in New York Magazine, feature students who openly describe using generative AI to complete assignments.
Amid such headlines, education itself can seem under threat: exams, reading assignments, and essays all appear compromised, with students in the worst cases using tools like ChatGPT to write entire papers.
That picture is alarming, but it is only part of the story.
Cheating has always existed. As an educational researcher who studies cheating with AI, I can say that our early data suggest AI has changed the methods of cheating, but not necessarily the scale of the cheating that was already taking place.
This does not mean that cheating with AI is not a serious problem. It raises important questions: Will AI cause cheating to increase in the future? Does using AI in schoolwork count as cheating at all? And how should parents and schools respond to prepare children for a life that will differ significantly from our own experience?
The Pervasiveness of Cheating
Cheating has existed for a very long time, probably for as long as educational institutions themselves. In the 1990s and 2000s, Don McCabe, a business school professor at Rutgers University, documented high levels of cheating among students; one of his studies found that up to 96% of business students admitted to engaging in ‘cheating behavior’.
McCabe used anonymous surveys in which students reported how often they cheated. These surveys consistently revealed high cheating rates, which varied from 61.3% to 82.7% before the pandemic.
Cheating in the AI Era
Has cheating increased with AI? Analyzing data from more than 1,900 students at three schools before and after the introduction of ChatGPT, we found no significant change in cheating behavior. Notably, 11% of students reported using AI to write their papers.
This work showed that AI is becoming a popular tool for cheating, but many questions remain. In 2024 and 2025, for example, we surveyed another 28,000-39,000 students, of whom 15% admitted to using AI to produce their work.
Challenges of Using AI
Students are accustomed to using AI but understand that there are boundaries between acceptable and unacceptable use. Reports indicate that many use AI to avoid doing homework or to generate ideas for creative work.
Students also notice that their teachers use AI, and many consider it unfair to be punished for doing the same.
What Will AI Use Mean for Schools?
The modern education system was not designed with generative AI in mind. Schoolwork has traditionally been treated as the product of a student’s own intensive effort, but that assumption is increasingly blurred.
It is important to understand why students cheat and how cheating relates to stress, time management, and the curriculum. Guarding against cheating matters, but teaching methods and the use of AI in classrooms also need to be rethought.
Four Future Questions
AI did not create cheating in educational institutions; it has only opened new avenues for it. Here are four questions worth considering:
- Why do students resort to cheating? Academic stress may push them toward easier shortcuts.
- Do teachers follow their own rules? Holding students to standards that teachers don’t meet themselves can distort perceptions of acceptable AI use in education.
- Are the rules on AI clearly stated? Policies on when AI use is acceptable are often vague.
- What do students need to know for an AI-rich future? Teaching methods must be adapted in time to the new reality.
The future of education in the age of AI requires open dialogue between teachers and students, one that helps develop the new skills and knowledge needed for successful learning.