
Tools & Platforms

So long, study guides? The AI industry is going after students



Illustration: Ekaterina Goncharova

Students are using ChatGPT more than ever — and ChatGPT knows it.

Last week, OpenAI launched “study mode” in its chatbot, aimed directly at the student market. It’s meant to behave more like a tutor than a machine that spits out answers; it uses the Socratic method, builds quizzes and creates study plans. The same day, Google announced a suite of study-oriented tools.

So, how does generative AI compare to old-school tools like textbooks and online homework helpers like Chegg and Quizlet? Do they still have a place?

I first asked ChatGPT: “Would you recommend I use you as a study tool? How do you compare to textbooks and edtech companies?” The answer: “Yes, I can absolutely be a useful study tool, but the best results come from knowing how and when to use me alongside textbooks and edtech platforms.”

Then I talked to people running some of those platforms and some students who use (or once used) them. As generative AI stakes its claim in education, they’re all doing what they can to adapt.

How companies are adapting

Chegg sells textbooks and offers a slate of digital services, such as generating flash cards and practice questions. In May, the company laid off about 250 employees, or 22% of its workforce, partly due to students turning to generative AI, it confirmed to NPR. But rather than trying to expand its reach, it’s zooming in.

“We were trying to be everything to every student in a pre-AI world,” Chegg CEO Nathan Schultz says.

Several generative AI platforms, including ChatGPT, have free plans. Chegg hopes to reach students who will pay $19.99 a month for tools that encourage long-term use and goal setting.

“If you think about the fitness world, those apps and those services tend to be much more guided to getting you to your goal,” Schultz says. “They’re giving you, ‘Every week we’re going to do this many miles or this many rides or this much work,’ and that’s how we’ve been designing our service.”

Chegg is also wrapping AI models into its platform. A new feature shows subscribers side-by-side panels with Chegg’s answer to a question next to answers from other platforms, including ChatGPT, Google Gemini and Claude.

Macmillan Learning sells textbooks and e-books, and it offers quizzes and study guides. Like Chegg, it has incorporated an AI tool into its paid plan, which it began rolling out late last year.

Macmillan’s tool doesn’t give students straight-up answers; instead, it guides them to the solution through open-ended questions that expose flawed thinking (aka the Socratic method).

“It Socratically supports them so that they have that learning experience that they can use … when they have to do it themselves on the exam,” says Tim Flem, Macmillan Learning’s chief product officer.

Flem claims Macmillan’s AI tutor is more accurate than general-purpose AI chatbots because it draws from the company’s textbooks. The platform also reduces “context switching,” he says.

“If you’re switching between that tab and that tab, you notice how you’re always kind of like, ‘Wait a minute, what did it say over here?’” Flem says. “So our AI tutor is right there next to the problem that the student is working on.”

How students are adapting

Some students are mixing and matching AI and traditional tools. Bryan Wheatley combined ChatGPT with Quizlet and Socratic (another AI tool) to study. A recent graduate of Prairie View A&M University in Texas, he initially approached ChatGPT with trepidation.

Bryan Wheatley graduated from Prairie View A&M University last year with a degree in sociology. Photo: Grace Raver

“Something that’s really adaptive is kind of crazy in a sense,” he says, though he went on to use it to outline essays and for other tasks. He says ChatGPT is correct about half the time, and he had to do a lot of cross-referencing.

He was one of the 66% of students in bachelor’s, master’s and doctoral programs using ChatGPT regularly, according to July 2024 research from the Digital Education Council.

The survey also found that over 50% of students believed too much reliance on AI would negatively impact their academic performance.

Sally Simpson is trying to hold the line. The Georgetown University student, who’s working on a Ph.D. in German literature, does not use generative AI. In her undergrad days, she used websites like Quizlet and SparkNotes to reinforce information she processed.

Now, she sees undergraduates use generative AI to complete homework assignments and summarize bodies of work they didn’t read. “It cheapens people’s education,” she says. “I think it’s an important skill to be able to read an article, or read a text, and not only be able to summarize it, but think about it critically.”

Sally Simpson is studying for a doctorate in German literature at Georgetown University. Photo: Grace Raver

Dontrell Shoulders, a senior studying social work at Kentucky State University, was an avid Quizlet user and still uses it to study for tests. With Quizlet, he has to seek out answers. Generative AI doesn’t provide much of a challenge, he says.

“You’re just putting something in a computer, having to type it up, and just like, ‘Here you go,’ ” he says. “Are you going to remember it after you just typed it in? You’re not.”

How professors are adapting

Amy Lawyer, the department chair of equine administration at the University of Louisville’s business school, says some students still use online study guides like Chegg and SparkNotes. “Students are to a point where they’re going to use any resources available to them,” she says.

Of those resources, ChatGPT has had the most significant impact on her classroom. She uses it herself for editing and encourages her students to do the same. To stop them from plagiarizing or overusing AI chatbots, however, she’s now issuing more assignments that must be handwritten or completed in class.

Ayelet Fishbach, a marketing and behavioral science professor at the University of Chicago Booth School of Business, says students will always find shortcuts, no matter how the technology evolves. “Cheating has not been invented recently,” she says.

“What is different now is that the line seems, to many people, more blurry,” she says. “If before you knew you were cheating, now you feel, ‘Maybe I’m still doing what I’m supposed to do, only I’m being more efficient.’ This is confusing for students, and we do try to support them.”




Workers ‘larping’ by pretending to use AI | Information Age



Workers are feeling pressure to use AI at work. Photo: Shutterstock

Many employees are “larping” at work by pretending to use artificial intelligence due to pressure to harness the technology, according to social scientist Nigel Dalton.

Delivering the keynote speech at RMIT Online’s Future Skills Fest, Dalton, of tech consultancy Thoughtworks, described the difficult state of affairs for Australian workers of all ages when it comes to AI.

He said it’s like going from a zoo to the jungle, and that many workers experience paralysis when it comes to new technologies.

Dalton pointed to a recent survey that found that one in six workers were pretending to use AI at work.

The survey, conducted by engineering outsourcing company Howdy.com, found that workers felt pressured to use AI in situations they were unsure about, and that three-quarters of them were expected to use the technology at work.

“AI is taking over the white-collar workspace as daily updates provide opportunities to optimise,” the report said.

“However, potential does not always lead to smooth implementation.”

‘Larping’ at work

Dalton said these workers are “larping” and not keeping pace with new technologies such as AI.

“They’ve got Gemini or Copilot open when their boss walks up behind them, and they are larping – they are live action roleplaying,” Dalton said.

“This is interesting. What human behaviour did we incite here from the way we were scaffolding the work and the scene and the structure?”

The use of AI by companies of all shapes and sizes has accelerated in recent years, particularly since the advent of generative AI tools such as ChatGPT.

Earlier this year, Goldman Sachs became one of the largest companies to hire an AI software engineer to work alongside its human employees and complete complicated, multistep tasks.

Social scientist Nigel Dalton says that in 10 years, we’ll look back on this period and laugh. Photo: Shutterstock

Dalton likened the way many workers feel about AI to the German chess term “zugzwang”: the compulsion to move even when you know that any move will likely worsen your position.

“This is very much a good description of where we feel ourselves today and in our careers,” he said.

“If I do that, it’ll be the wrong thing; if I stand still it’ll be okay. But you can’t stand still. That’s why you’re feeling the dissonance in your head. But it will likely lead you to doing nothing, which is probably the worst scenario.

“We’re anchored in this ridiculous period that in 10 years we will all look back on and laugh.”

From a zoo to a jungle

With the growing usage of AI across all operations, businesses have become increasingly challenging to navigate for employees at all levels, particularly those who are yet to harness the technology fully.

Dalton said this was like the workplace going from a zoo to a jungle.

“We all used to work in a zoo – a metaphorically complicated process,” he said.

“At a zoo you can take photos of wild animals but the path is concrete, there are timetables and it’s all very safe.

“In a zoo, every animal stays in their cage. That is how work used to be – there weren’t any looming threats of stuff coming out of the forest.

“Now we’re on a work safari, a career safari. There are no paths, no signposts, no timetables.

“The animals are hiding in plain sight and collaborating, and may come from anywhere.

“To navigate the jungle you need a new mindset, and it involves being comfortable with getting lost, with what it feels like to go backwards for a time.”

According to Dalton, there are four key factors shaping the future of work: the climate crisis, ageing citizens, disruptive technology and declining social equity.

“It’s not just these things individually, it’s them weaving in together,” he said.

“It’s in these unlikely places that I believe businesses will be built, where the opportunities lie.

“It’s hard to navigate now, but there are opportunities amidst all of this chaos, as there always have been in history.”






New Report: Remapping Travel With Agentic AI



Agentic AI could upend the travel industry. Travel and hospitality organizations should explore how it can catalyze AI’s transformative potential.

The travel industry has repeatedly been reshaped by new technology, from global distribution systems to online travel agencies and mobile booking. Each wave has changed how people plan and experience journeys while forcing companies to rethink their business models. The latest advance, agentic AI, could prove just as disruptive.

Unlike generative AI, which mostly advises through recommendations, agentic AI can take action. It identifies problems, reasons through solutions, and executes fixes, often coordinating multiple AI agents to complete complex tasks. With memory, tool use, and autonomy, it promises to serve as the interface that helps travel companies finally harness AI’s full value.
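To make that loop concrete, here is a minimal sketch in Python of the observe-reason-act pattern described above. It is an illustration only: the TravelAgent class, tool names such as search_alternatives, and the hard-coded planning rule are hypothetical stand-ins, not any vendor’s product or API.

    # Minimal sketch of an agentic loop: observe a problem, reason over
    # stored context, then execute fixes through tools.
    from dataclasses import dataclass, field

    @dataclass
    class TravelAgent:
        memory: list = field(default_factory=list)  # persistent context across steps

        def observe(self, event: str) -> None:
            self.memory.append(("observation", event))

        def plan(self) -> list[str]:
            # A production agent would ask an LLM to reason over its memory;
            # a hard-coded rule keeps this sketch runnable end to end.
            latest = self.memory[-1][1]
            if "flight cancelled" in latest:
                return ["search_alternatives", "rebook", "notify_traveler"]
            return []

        def act(self, tool: str) -> None:
            # Stand-in for real tool calls (GDS lookup, booking API, messaging).
            print(f"executing tool: {tool}")
            self.memory.append(("action", tool))

    agent = TravelAgent()
    agent.observe("flight cancelled: traveler still en route")
    for tool in agent.plan():  # reason once, then act on each step
        agent.act(tool)

The point of the pattern is the loop itself: the agent keeps state, decides on a course of action, and carries it out, rather than stopping at a recommendation.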

Momentum is already visible. In 2022, only 4 percent of the largest public travel companies referenced AI in annual reports. By 2024, that rose to 35 percent. Venture capital is following suit: just 10 percent of travel start-up funding went to AI-enabled companies in 2023, compared with 45 percent by mid-2025. Executives report real benefits too. A McKinsey/Skift survey of 86 leaders found nearly 60 percent credit AI with boosting productivity, while others highlight faster decision-making, improved personalization, and measurable revenue and cost gains.

Adoption is in the early stages, however. Most efforts focus on copilots and chatbots, which deliver diffuse or hard-to-measure results. Structural barriers play a role: fragmented data across countless small businesses limits feedback loops, while many travel companies still see themselves as service providers first, not tech firms, slowing investment and talent development.

Agentic AI raises the stakes. For travelers, it could make problem-solving seamless. While more than 90 percent of consumers trust AI-generated travel information, only 2 percent currently allow AI to book on their behalf. The ability of agentic AI to resolve issues autonomously, not just suggest solutions, could build confidence.

For companies, internal use cases may be the safest proving ground for agentic AI. Automating airline re-bookings, predicting hotel maintenance, managing housekeeping, or optimizing menus can boost efficiency while freeing staff to focus on empathetic service. Airlines could deploy agentic AI for personalized bundles, real-time fare adjustments, smarter overbooking, and tailored loyalty rewards, with each offering tangible return on investment and differentiation.

Scaling, however, requires more than scattered pilots. Companies need clean data, scalable cloud infrastructure, and clear digital roadmaps tied to outcomes. Crucially, employees must be trained to use new tools and corporate cultures must stay agile enough to pivot as the technology evolves. Workflows must be redesigned rather than patched, ensuring agentic AI becomes embedded in how organizations operate.

Agentic AI will not redefine why people travel, but it could transform how. By making personalization scalable, reducing friction, and freeing employees from repetitive tasks, it has the potential to improve experiences across the journey. The companies that succeed won’t simply adopt agentic AI quickly; they’ll integrate it in ways that align with their brand and customers. Technology can provide the catalyst, but the human touch will remain the heart of travel.

In This Report:

  • How agentic AI differs from generative AI and why it matters for travel
  • Challenges holding back AI adoption in the sector
  • Consumer attitudes toward AI-driven tools
  • Practical use cases in hotels, airlines, and internal workflows
  • Steps companies can take to scale adoption effectively




‘AI will not love you, AI will not cry with you’: COICOM panel warns Church of technology’s limits



Arnold Enns, Vladimir Lugo, Steve Cordon, and Fabio Criales during the panel forum “Artificial Intelligence: Challenges and Opportunities for the Church” at COICOM 2025. Photo: Christian Daily International

Artificial intelligence is no longer a distant concept for the Church but a pressing reality that demands attention. That was the message of a panel at the 2025 Congress of the Ibero-American Confederation of Communicators, Pastors, and Christian Leaders (COICOM) held in Honduras last week, where ministry and technology experts explored both the promise and perils of AI for faith communities.

Moderated by COICOM president Arnold Enns, the session—titled “Artificial Intelligence: Challenges and Opportunities for the Church”—brought together Vladimir Lugo, Steve Cordon, and Fabio Criales. The panelists examined the nature of AI, its societal impact, and its growing yet inescapable role within Christian ministry.

The discussion began with definitions. Lugo described AI as a branch of computing that “allows machines to do things that were previously reserved for humans,” including learning, analyzing, and making decisions. He clarified that AI does not reside in a single place but operates on vast cloud servers controlled by global tech giants such as Google, Amazon, and Microsoft, each competing for dominance in the field.

The dilemma of control and inherent bias

One of the first concerns raised was the issue of control and ethics. Panelists emphasized that AI technologies are not neutral. Lugo warned that publicly available models “carry biases,” reflecting the agendas of the secular companies that train them.

“Many of these companies are woke,” he said, arguing that they promote “anti-biblical” values and that their AI creations reflect humanist and liberal ideologies.

Criales added that AI “was meant to make evident what is already present” in the human heart, citing Matthew 15:18-19. He also cautioned about the danger of “hallucination”—when AI generates incorrect or misleading information in response to poorly framed prompts.

“Be very careful with that, because it hallucinates, recreates what you ask, and if you ask incorrectly, you could end up saying heresies on stage,” Criales warned.

Digital consumers or disciples?

The panel also weighed AI’s influence on ministry content creation. With more pastors turning to tools like ChatGPT to write sermons, Lugo acknowledged that AI can be a useful “tool” for research. But he stressed that “the intelligent entity using the tool is the human” and cautioned against surrendering discernment.

Cordon posed a sharper question about the widespread adoption of AI-driven platforms, noting the 123 million daily users of ChatGPT: “Have we created more digital consumers than digital disciples?” True pastoral work, he said, cannot be automated. “People need pastors. AI will not love you, AI will not cry with you.”

He recounted a sobering personal experience with a counseling AI that not only conversed smoothly but also offered to pray for him in eloquent, detailed language. The moment highlighted for him the unsettling boundary between authentic pastoral care and technological simulation. “I believe AI will also be a test of maturity for the Church,” he reflected.

A call for training and responsibility

The panel closed with a strong call for Christian leaders to equip themselves and their congregations to engage AI critically. “Either you use it, or it uses you—there really isn’t an alternative,” Cordon said.

Criales stressed that believers must be intentional in learning how to apply these tools properly. Lugo concluded with an appeal to humility: “If there is anything we want to learn from the Lord, let us learn how to learn.”

The consensus was clear: artificial intelligence is not merely a technological development but a spiritual test. For the Church, it represents a challenge requiring maturity, ethical discernment, and above all, a reaffirmation of the irreplaceable value of human connection in ministry.

Originally published on Diario Cristiano, Christian Daily International’s Spanish edition.


