

Schools fight AI cheating with return to pen and paper blue books


The rise of artificial intelligence in education is forcing schools and universities to rethink everything from homework policies to how final exams are administered. With tools like ChatGPT now widespread, students can generate essays, solve complex math problems or draft lab reports in seconds, raising urgent questions about what authentic learning looks like in 2025. 

To fight back, some schools are turning to an unlikely solution: pen and paper. The old-school “blue book,” a lined booklet used for handwritten test answers, is staging a comeback, according to reporting from The Wall Street Journal. And while it might seem like a relic of a pre-digital era, educators say it’s one of the most effective tools they have to ensure students are actually doing their own work.



Exam blue book (Kurt “CyberGuy” Knutsson)

How common is AI cheating in schools today?

While it’s difficult to measure precisely, recent surveys suggest up to 89% of students have used AI tools like ChatGPT to help with coursework. Some admit to using it only for brainstorming or grammar fixes, but others rely on it to write entire papers or take-home tests. As the Journal reported, the spike in academic dishonesty has left faculty scrambling to preserve academic standards.

Universities have reported a sharp rise in disciplinary cases tied to AI, but many incidents likely go undetected. Detection software like Turnitin’s AI writing checker is being used more widely, but even those tools admit their systems aren’t foolproof.


Why AI cheating in schools is so hard to detect

One reason this trend is so hard to police is that generative AI has become surprisingly good at mimicking human writing. Tools can tailor tone and style and even match a student’s previous work, making AI-generated text nearly impossible to identify without sophisticated forensics or human intuition.

In blind tests, teachers have often been unable to distinguish between human and AI-written responses. Making matters worse, some schools that initially tried detection software have started abandoning it due to accuracy concerns and privacy issues.

A student using ChatGPT on his laptop (Kurt “CyberGuy” Knutsson)

Why schools are bringing back blue books to stop AI cheating

In response, a growing number of professors are bringing exams back into the classroom, with pen and paper. Schools like Texas A&M, University of Florida and UC Berkeley have all reported surging demand for blue books over the last two years. The logic is simple: If students have to write their essays by hand during class time, there’s no opportunity to copy from ChatGPT or another AI assistant. It’s not just nostalgia; it’s a strategic shift. In-person, handwritten exams are harder to game, and some instructors say the quality of student thinking actually improves without digital shortcuts. 


Are handwritten exams enough to stop AI cheating in schools?

Still, not everyone is convinced this is the answer. Critics argue that relying on in-class, timed writing may shortchange students on deeper research skills and analytical thinking, especially for complex topics that benefit from time, revision and outside sources. Plus, blue books do little to prevent AI misuse on homework, group projects or take-home essays.

Should schools ban AI tools or teach responsible use?

Some educators are pushing for a more balanced response: Instead of banning AI tools, teach students how to use them responsibly. That means integrating AI literacy into the curriculum, so students learn where the line is between inspiration and plagiarism and understand when it’s appropriate to use tools like ChatGPT or Grammarly. 

“AI is part of the professional world students will enter,” said one university dean quoted in The Wall Street Journal. “Our job is to teach them how to think critically, even with new tools in hand.”

A teacher teaching a lesson and a student using her smartphone (Kurt “CyberGuy” Knutsson)

What’s next in the fight against AI cheating in schools?

As AI tools evolve, so will the strategies schools use to ensure honest learning. Some are shifting toward oral exams, where students must explain their reasoning out loud. Others are assigning more process-based work, such as annotated drafts, recorded brainstorming sessions or group projects that make cheating harder. There’s no silver bullet, but one thing is clear: the AI genie isn’t going back in the bottle, and the education system must adapt quickly or risk losing credibility.

Kurt’s key takeaways

AI cheating in education has forced schools to take a hard look at how they assess student learning. The return of the blue book is a sign of just how serious the problem has become and how far educators are willing to go to protect academic integrity. But the real solution will probably involve a mix of old and new: using analog tools like blue books, embracing digital detection methods and teaching students why honest work matters. As AI continues to evolve, education will have to evolve with it. The goal isn’t just to stop cheating; it’s to make sure students leave school with the skills, knowledge and values they need to succeed in the real world.


If AI can do your homework and write your essays, what does it really mean to earn a diploma in the age of artificial intelligence?  Let us know by writing to us at Cyberguy.com/Contact


Copyright 2025 CyberGuy.com.  All rights reserved.





I asked ChatGPT to help me pack for my vacation – try this awesome AI prompt that makes planning your travel checklist stress-free


It’s that time of year again, when those of us in the northern hemisphere pack our sunscreen and get ready to venture to hotter climates in search of some much-needed Vitamin D.

Every year, I book a vacation, and every year I get stressed as the big day gets closer, usually forgetting to pack something essential, like a charger for my Nintendo Switch 2, or dare I say it, my passport.





Denodo Announces Plans to Further Support AI Innovation by Releasing Denodo DeepQuery, a Deep Research Capability — TradingView News


PALO ALTO, Calif., July 07, 2025 (GLOBE NEWSWIRE) — Denodo, a leader in data management, announced Denodo DeepQuery, a deep research capability now available as a private preview and slated for general availability soon, which enables generative AI (GenAI) to go beyond retrieving facts to investigating, synthesizing, and explaining its reasoning. Denodo also announced the availability of Model Context Protocol (MCP) support as part of the Denodo AI SDK.

Built to address complex, open-ended business questions, DeepQuery will leverage live access to a wide spectrum of governed enterprise data across systems, departments, and formats. Unlike traditional GenAI solutions, which rephrase existing content, DeepQuery, a deep research capability, will analyze complex, open questions and search across multiple systems and sources to deliver well-structured, explainable answers rooted in real-time information. To help users operate this new capability to better understand complex current events and situations, DeepQuery will also leverage external data sources to extend and enrich enterprise data with publicly available data, external applications, and data from trading partners.

DeepQuery, beyond what’s possible using traditional generative AI (GenAI) chat or retrieval augmented generation (RAG), will enable users to ask complex, cross-functional questions that would typically take analysts days to answer—questions like, “Why did fund outflows spike last quarter?” or “What’s driving changes in customer retention across regions?” Rather than piecing together reports and data exports, DeepQuery will connect to live, governed data across different systems, apply expert-level reasoning, and deliver answers in minutes.

Slated to be packaged with the Denodo AI SDK, which streamlines AI application development with pre-built APIs, DeepQuery is being developed as a fully extensible component of the Denodo Platform, enabling developers and AI teams to build, experiment with, and integrate deep research capabilities into their own agents, copilots, or domain-specific applications.

“With DeepQuery, Denodo is demonstrating forward-thinking in advancing the capabilities of AI,” said Stewart Bond, Research VP, Data Intelligence and Integration Software at IDC. “DeepQuery, driven by deep research advances, will deliver more accurate AI responses that will also be fully explainable.”

Large language models (LLMs), business intelligence tools, and other applications are beginning to offer deep research capabilities based on public Web data; pre-indexed, data-lakehouse-specific data; or document-based retrieval, but only Denodo is developing deep research capabilities, in the form of DeepQuery, that are grounded in enterprise data across all systems, data that is delivered in real-time, structured, and governed. These capabilities are enabled by the Denodo Platform’s logical approach to data management, supported by a strong data virtualization foundation.

Denodo DeepQuery is currently available in a private preview mode. Denodo is inviting select organizations to join its AI Accelerator Program, which offers early access to DeepQuery capabilities, as well as the opportunity to collaborate with Denodo’s product team to shape the future of enterprise GenAI.

“As a Denodo partner, we’re always looking for ways to provide our clients with a competitive edge,” said Nagaraj Sastry, Senior Vice President, Data and Analytics at Encora. “Denodo DeepQuery gives us exactly that. Its ability to leverage real-time, governed enterprise data for deep, contextualized insights sets it apart. This means we can help our customers move beyond general AI queries to truly intelligent analysis, empowering them to make faster, more informed decisions and accelerating their AI journey.”

Denodo also announced support of Model Context Protocol (MCP), and an MCP Server implementation is now included in the latest version of the Denodo AI SDK. As a result, all AI agents and apps based on the Denodo AI SDK can be integrated with any MCP-compliant client, providing customers with a trusted data foundation for their agentic AI ecosystems based on open standards.
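
In practice, MCP standardizes how an AI client discovers and calls the tools a server exposes. The sketch below, written against the open-source MCP Python SDK, shows roughly what that handshake looks like; the server command, tool name, and arguments are illustrative placeholders, not documented Denodo AI SDK interfaces.

    # Minimal sketch of an MCP-compliant client talking to an MCP server.
    # The launched command, tool name, and arguments are assumptions for
    # illustration only; they are not part of any documented Denodo interface.
    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Hypothetical: whatever local command starts the MCP server process.
    server_params = StdioServerParameters(command="python", args=["denodo_mcp_server.py"])

    async def main():
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()

                # Discover the tools this server exposes.
                tools = await session.list_tools()
                print([tool.name for tool in tools.tools])

                # Call a (hypothetical) question-answering tool by name.
                result = await session.call_tool(
                    "answer_question",
                    arguments={"question": "What is driving changes in customer retention?"},
                )
                print(result)

    if __name__ == "__main__":
        asyncio.run(main())

Because the protocol is an open standard, any client written this way can be pointed at any MCP-compliant server, which is the portability argument Denodo is making for agentic AI ecosystems.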

“AI’s true potential in the enterprise lies not just in generating responses, but in understanding the full context behind them,” said Angel Viña, CEO and Founder of Denodo. “With DeepQuery, we’re unlocking that potential by combining generative AI with real-time, governed access to the entire corporate data ecosystem, no matter where that data resides. Unlike siloed solutions tied to a single store, DeepQuery leverages enriched, unified semantics across distributed sources, allowing AI to reason, explain, and act on data with unprecedented depth and accuracy.”

Additional Information

  • Denodo Platform: What’s New
  • Blog Post: Smarter AI Starts Here: Why DeepQuery Is the Next Step in GenAI Maturity
  • Demo: Watch a short video of this capability in action.

About Denodo

Denodo is a leader in data management. The award-winning Denodo Platform is the leading logical data management platform for transforming data into trustworthy insights and outcomes for all data-related initiatives across the enterprise, including AI and self-service. Denodo’s customers in all industries all over the world have delivered trusted AI-ready and business-ready data in a third of the time and with 10x better performance than with lakehouses and other mainstream data platforms alone. For more information, visit denodo.com.

Media Contacts

pr@denodo.com





Sakana AI: Think LLM dream teams, not single models


Enterprises may want to start thinking of large language models (LLMs) as ensemble casts that can combine knowledge and reasoning to complete tasks, according to Japanese AI lab Sakana AI.

Sakana AI in a research paper outlined a method called Multi-LLM AB-MCTS (Adaptive Branching Monte Carlo Tree Search) that uses a collection of LLMs to cooperate, perform trial-and-error and leverage strengths to solve complex problems.

In a post, Sakana AI said:

“Frontier AI models like ChatGPT, Gemini, Grok, and DeepSeek are evolving at a breathtaking pace amidst fierce competition. However, no matter how advanced they become, each model retains its own individuality stemming from its unique training data and methods. We see these biases and varied aptitudes not as limitations, but as precious resources for creating collective intelligence. Just as a dream team of diverse human experts tackles complex problems, AIs should also collaborate by bringing their unique strengths to the table.”

Sakana AI said AB-MCTS is a method for inference-time scaling to enable frontier AIs to cooperate and revisit problems and solutions. Sakana AI released the algorithm as an open source framework called TreeQuest, which has a flexible API that allows users to use AB-MCTS for tasks with multiple LLMs and custom scoring.
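
To make the pattern concrete, here is a toy, self-contained sketch of that cooperate-and-retry loop: several placeholder “models” propose answers, a scoring function judges them, and each step either goes wider (a fresh attempt) or deeper (refining the best answer so far), with attempts steered toward the models that have scored best. This illustrates the idea only; it is not TreeQuest’s actual API, and the model names and scorer are stand-ins.

    import random

    # Placeholder "LLMs": each takes a prompt and an optional hint
    # (a previous answer to refine) and returns a candidate answer.
    def model_a(prompt, hint=None):
        return f"[model-A answer to '{prompt}'" + (f", refining: {hint}]" if hint else "]")

    def model_b(prompt, hint=None):
        return f"[model-B answer to '{prompt}'" + (f", refining: {hint}]" if hint else "]")

    def score(answer):
        # Placeholder scorer; in practice a verifier, unit tests, or a judge model.
        return random.random()

    MODELS = {"model-A": model_a, "model-B": model_b}

    def search(prompt, budget=10, widen_prob=0.5):
        """Loose sketch of adaptive branching: each step either generates a fresh
        answer ("go wider") or refines the best answer so far ("go deeper"),
        picking a model in proportion to how well it has scored so far."""
        best_answer, best_score = None, float("-inf")
        totals = {name: 1.0 for name in MODELS}  # running score per model
        counts = {name: 1 for name in MODELS}

        for _ in range(budget):
            # Choose a model weighted by its average score so far.
            weights = [totals[n] / counts[n] for n in MODELS]
            name = random.choices(list(MODELS), weights=weights)[0]

            hint = None if (best_answer is None or random.random() < widen_prob) else best_answer
            answer = MODELS[name](prompt, hint)
            s = score(answer)

            totals[name] += s
            counts[name] += 1
            if s > best_score:
                best_answer, best_score = answer, s

        return best_answer, best_score

    if __name__ == "__main__":
        print(search("Plan the cheapest route that visits all four cities once."))

The design point is the same one Sakana AI makes: no single generator has to win outright, because the search allocates effort across models and lets one model’s partial answer become another model’s hint.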

What’s interesting is that Sakana AI sidesteps the zero-sum framing of the LLM race. The companies behind LLM training would like you to think there’s one model to rule them all. And you’d do the same if you were spending so much on training models and wanted to lock in customers for scale and returns.

Sakana AI’s deceptively simple solution can only come from a company that’s not trying to play LLM leapfrog every few minutes. The power of AI is in the ability to maximize the potential of each LLM. Sakana AI said:

“We saw examples where problems that were unsolvable by any single LLM were solved by combining multiple LLMs. This went beyond simply assigning the best LLM to each problem. In (an) example, even though the solution initially generated by o4-mini was incorrect, DeepSeek-R1-0528 and Gemini-2.5-Pro were able to use it as a hint to arrive at the correct solution in the next step. This demonstrates that Multi-LLM AB-MCTS can flexibly combine frontier models to solve previously unsolvable problems, pushing the limits of what is achievable by using LLMs as a collective intelligence.”

A few thoughts:

  • Sakana AI’s research and move to emphasize collective intelligence over one LLM and stack is critical to enterprises that need to create architectures that don’t lock them into one provider.
  • AB-MCTS could play into what agentic AI needs to become to be effective and complement emerging standards such as Model Context Protocol (MCP) and Agent2Agent.
  • If combining multiple models to solve problems becomes frictionless, the costs will plunge. Will you need to pay up for OpenAI when you can leverage LLMs like DeepSeek combined with Gemini and a few others? 
  • Enterprises may want to start thinking about how to build decision engines instead of an overall AI stack. 
  • We could see a scenario where a collective of LLMs achieves superintelligence before any one model or provider. If that scenario plays out, can LLM giants maintain valuations?
  • The value in AI may not be in the infrastructure or foundational models in the long run, but the architecture and approaches.




