AI Tools for Students: Smarter Research, Notes and Editing

Just last week, OpenAI CEO Sam Altman said that today’s students are the luckiest in history. With AI advancing faster than ever, he predicts that no one born today will ever outsmart it, and that future generations will marvel at how students once struggled without these tools.

In 2025, college students can tap into more digital resources, guidance and support than any generation before them. The revolution is here—now it’s your move.

Whether you’re writing a report or preparing for a tough exam, AI tools can be a powerful starting point. They won’t replace deep research or give you ready-made quotes, but they can quickly provide a baseline understanding of your topic. For instance, chatbots like ChatGPT and Copilot are great for breaking down complex ideas, outlining key themes and pointing you toward the right angles to explore.

The challenge with research is usually too much information, not too little. But AI can help you sift through the noise and highlight what’s most relevant. Even the most basic tools can suggest structures for your essay, help you narrow your focus or recommend where to dig deeper. Just remember: You, not the bot, should lead your work. Think of AI as a guide, not a ghostwriter.
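If you prefer working from a script rather than a chat window, the same kind of brainstorming prompt can be sent through an API. Here’s a minimal sketch using the OpenAI Python SDK; the model name, topic and prompt wording are placeholders rather than a prescribed recipe, and any capable chatbot API would work the same way.

```python
# Minimal sketch: asking a chatbot for a baseline overview and essay outline.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable. The model name, topic and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

topic = "the impact of social media on political polarization"  # example topic

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do
    messages=[
        {"role": "system", "content": "You are a study assistant helping a student plan an essay."},
        {"role": "user", "content": (
            f"I'm writing a 2,000-word essay on {topic}. "
            "Give me a short baseline overview, three angles worth exploring, "
            "and a suggested section-by-section outline. Don't write the essay itself."
        )},
    ],
)

print(response.choices[0].message.content)
```

Note the last line of the prompt: asking for an outline rather than finished prose keeps you, not the bot, in the driver’s seat.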

When it comes to finding credible sources, tools built specifically for students are even more useful. For example, platforms like Elicit and Research Rabbit help you search for peer-reviewed articles, discover new authors and map out connections between studies. Instead of wading through endless PDFs, these tools give you a clear path through the research jungle.

While no AI tool is perfect, Elicit in particular is about 90% accurate, which makes it one of the more trustworthy options out there.

Pro tip: Use chatbots for brainstorming and structuring, then switch to academic tools like Elicit for evidence and citations. That way, you can stay efficient without compromising quality.

Take smarter notes with AI-powered summarization

Sometimes, sitting through a lecture feels like running a mental marathon. Maybe your mind is elsewhere, maybe your stomach is growling or maybe the clock itself seems to crawl. But even if your focus slips, showing up still matters. Attendance is one thing, but more importantly, lecturers often drop insights and tips that don’t appear in the textbook—and those can make the difference between a good grade and a great one.

The problem? Lectures move quickly, and professors often wander off on tangents. It can be hard to keep up, let alone capture every detail. That’s where AI summarization tools come in. Instead of trying to scribble down everything word-for-word, you can use AI to record, transcribe and condense the key points so you can review them later. 

If you struggle to keep up during fast-paced lectures, start with Otter.ai. It records and transcribes everything live, and it even highlights key themes so you can search and review them later (and yes, you can share them with your study group too). Or if your notes usually end up messy or half-finished, drop them into Notion AI. It’ll clean them up and break everything into clear sections—like “main points,” “examples” or even “possible test questions”—so revision feels less overwhelming. And if you’ve ever missed a lecture (no judgment), Perplexity can help you catch up fast. Just upload your transcript or class notes, and it will condense them into bullet points or explain the tricky bits you didn’t quite grasp.
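If you’re curious what that record-transcribe-condense workflow looks like under the hood, here’s a rough sketch. It’s a generic stand-in built on the OpenAI Python SDK, not how Otter.ai, Notion AI or Perplexity actually work internally, and the file name, model choices and prompt wording are assumptions for illustration.

```python
# Rough sketch of the record -> transcribe -> condense workflow, using a generic
# speech-to-text model plus a chat model. Not the internals of any specific tool.
# Assumes the OpenAI Python SDK and a local recording named lecture_audio.mp3.
from openai import OpenAI

client = OpenAI()

# 1. Transcribe the lecture recording.
with open("lecture_audio.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. Condense the transcript into structured study notes.
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": (
            "Summarize this lecture transcript into three sections: "
            "main points, examples, and possible test questions.\n\n" + transcript.text
        )},
    ],
)

print(summary.choices[0].message.content)
```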

These tools aren’t a shortcut. Instead, they’re a way to make your learning more efficient and your understanding deeper, all on your own schedule.

Pro tip: Use Otter.ai or Notta AI to record lectures live and generate a clean transcript so you can focus on listening in the moment.

Let AI be your editor, not your author

As tempting as it might be to let a chatbot spin up your essay for you, resist the urge. AI can’t (and shouldn’t) replace your thinking. But what it can do is act as a sharp-eyed editor. 

These tools are great at cleaning up messy sentences, tightening your arguments and making sure your conclusions line up with your thesis. The heavy lifting—your ideas, analysis and voice—still has to come from you though. Think of AI as the friend who points out where you’ve gone off track, not the one who writes the whole thing for you.

The best way to use AI for editing is in layers. Start with tools like Grammarly or Quillbot, which go beyond spellcheck to flag awkward phrasing, tone issues and wordy sections that drag down your flow. Then, use Notion AI or ChatGPT to get feedback on your structure. To do this, copy and paste your essay and ask the bot questions like, “Where does my argument feel weak?” or “Does my conclusion connect back to my thesis?” These chatbots will then give you practical suggestions you can act on without losing ownership of the writing.

Here’s a clever trick many students don’t know about: Try asking AI to play professor. Copy and paste your draft and say, “Grade this like a tough lecturer and give me detailed feedback.” This way, you’ll get a sense of what might trip up your reader before you even hand in the assignment. 

Even better, if you already have a marking rubric or an exemplar essay, feed that into the AI alongside your draft. AI tools are great at picking apart criteria and spotting the kind of language, structure and approach that examiners are looking for. The more context you give the tool, the more useful the feedback becomes.
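For the script-minded, the “play professor” trick translates directly into a prompt you can reuse. The sketch below assumes the OpenAI Python SDK and two local text files, essay_draft.txt and marking_rubric.txt, which are placeholder names; the model and wording are illustrative too.

```python
# Minimal sketch of the "grade this like a tough lecturer" prompt, with the
# marking rubric supplied as extra context. File names and wording are placeholders.
from openai import OpenAI
from pathlib import Path

client = OpenAI()

draft = Path("essay_draft.txt").read_text()      # your essay draft
rubric = Path("marking_rubric.txt").read_text()  # the assignment's marking rubric

feedback = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a tough university lecturer marking an undergraduate essay."},
        {"role": "user", "content": (
            "Grade the essay below against the attached rubric and give detailed feedback: "
            "where the argument feels weak, whether the conclusion connects to the thesis, "
            "and what an examiner would penalise. Do not rewrite the essay.\n\n"
            f"RUBRIC:\n{rubric}\n\nESSAY:\n{draft}"
        )},
    ],
)

print(feedback.choices[0].message.content)
```

Because the rubric rides along with the draft, the feedback stays anchored to the criteria you’ll actually be marked on.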

Pro tip: Don’t shy away from being overly detailed in your prompts. The more context you give (like telling AI to focus only on your transitions or wordiness), the better the feedback will be and the more control you’ll keep over your work. 

Stack the student deals and free Pro plans

If you’re a student in 2025, the perks are better than ever. Almost every AI chatbot has a Pro tier or student deal, so you can simply choose the one you vibe with most and get started.

For example, Google is giving away its AI Pro Plan, normally worth hundreds, for free when you sign up with a college email. That means a full year of Gemini 2.5 Pro, Deep Research, NotebookLM and Veo 3, plus 2 TB of storage to handle all your notes and projects. 

Perplexity is also making it ridiculously easy to get its Pro plan for free. Through its “Race to Infinity” challenge, if your school hits enough sign-ups with .edu or official university emails, every student there unlocks a full year of Perplexity Pro, no strings attached. You can check if your school is already in the race or start pushing it forward at perplexity.ai/backtoschool. Once you’re verified as a student (via SheerID), you’ll instantly get one free month of Pro, and every friend you refer adds on another free month.

Add these offers up and you’ll have a suite of top-tier tools for little to no cost, as long as you know where to look.

Pro tip: Don’t wait until exam season to claim these offers. Sign up early, stack the free trials and set reminders so you don’t miss renewal deadlines. That way, you can maximize every perk without paying a cent.

Elevate your studies through thoughtful AI use

This is a unique moment for your generation—you have access to tools that previous students could scarcely imagine. But AI isn’t just a new convenience. It’s also a skill that will shape how you learn, work and navigate the opportunities ahead. 

If you use it to experiment, problem-solve and hone your thinking, and if you approach it as a tool that will refine your abilities rather than a shortcut to complete assignments, you’ll gain an advantage that endures far beyond the lecture hall.

Photo by Gorodenkoff/Shutterstock.com





Kennesaw State secures NSF grants to build community of AI educators nationwide


KENNESAW, Ga. | Sep 12, 2025

Shaoen Wu

The International Data Corporation projects that artificial intelligence will add $19.9 trillion to the global economy by 2030, yet educators are still defining how students should learn to use the technology responsibly.

To better equip AI educators and to foster a sense of community among those in the field, Kennesaw State University Department Chair and Professor of Information Technology (IT) Shaoen Wu, along with assistant professors Seyedamin Pouriyeh and Chloe “Yixin” Xie, was recently awarded two National Science Foundation (NSF) grants. The awards, managed by the NSF’s Computer and Information Science and Engineering division, will fund the project through May 31, 2027, with an overarching goal to unite educators from across the country to build shared resources, foster collaboration, and lay the foundation for common guidelines in AI education.

Wu, who works in Kennesaw State’s College of Computing and Software Engineering (CCSE), explained that while many universities, including KSU, have launched undergraduate and graduate programs in artificial intelligence, there is no established community to unify these efforts.

“AI has become the next big thing after the internet,” Wu said. “But we do not yet have a mature, coordinated community for AI education. This project is the first step toward building that national network.”

Drawing inspiration from the cybersecurity education community, which has long benefited from standardized curriculum guidelines, Wu envisions a similar structure for AI. The goal is to reduce barriers for under-resourced institutions, such as community colleges, by giving them free access to shared teaching materials and best practices.

The projects are part of the National AI Research Resource (NAIRR) pilot, a White House initiative to broaden AI access and innovation. Through the grants, Wu and his team will bring together educators from two-year colleges, four-year institutions, research-intensive universities, and Historically Black Colleges and Universities to identify gaps and outline recommendations for AI education.

“This is not just for computing majors,” Wu said. “AI touches health, finance, engineering, and so many other fields. What we build now will shape AI education not only in higher education but also in K-12 schools and for the general public.”

For Wu, the NSF grants represent more than just funding. They validate KSU’s growing presence in national conversations on emerging technologies. Recently, he was invited to moderate a panel at the Computing Research Association’s annual computing academic leadership summit, where department chairs and deans from across the country gathered to discuss AI education.

“These grants position KSU alongside institutions like the University of Illinois Urbana-Champaign and the University of Pennsylvania as co-leaders in shaping the future of AI education,” Wu said. “It is a golden opportunity to elevate our university to national and even global prominence.”

CCSE Interim Dean Yiming Ji said Wu’s leadership reflects CCSE’s commitment to both innovation and accessibility.

“This NSF grant is not just an achievement for Dr. Wu but for the entire College of Computing and Software Engineering,” Ji said. “It highlights our faculty’s work to shape national conversations in AI education while ensuring that students from all backgrounds, including those at under-resourced institutions, can benefit from shared knowledge and opportunities.”

– Story by Raynard Churchwell


A leader in innovative teaching and learning, Kennesaw State University offers undergraduate, graduate, and doctoral degrees to its more than 47,000 students. Kennesaw State is a member of the University System of Georgia with 11 academic colleges. The university’s vibrant campus culture, diverse population, strong global ties, and entrepreneurial spirit draw students from throughout the country and the world. Kennesaw State is a Carnegie-designated doctoral research institution (R2), placing it among an elite group of only 8 percent of U.S. colleges and universities with an R1 or R2 status. For more information, visit kennesaw.edu.





UC Berkeley researchers use Reddit to study AI’s moral judgements

A study published by UC Berkeley researchers used the Reddit forum, r/AmITheAsshole, to determine whether artificial intelligence, or AI, chatbots had “patterns in their moral reasoning.”

The study, led by researchers Pratik Sachdeva and Tom van Nuenen at campus’s D-Lab, asked seven AI large language models, or LLMs, to judge more than 10,000 social dilemmas from r/AmITheAsshole.  

The LLMs used were Anthropic’s Claude Haiku, Mistral 7B, Google’s PaLM 2 Bison and Gemma 7B, Meta’s LLaMa 2 7B, and OpenAI’s GPT-3.5 and GPT-4. The study found that different LLMs showed distinct moral judgement patterns, often giving dramatically different verdicts from one another. Each model’s results were self-consistent, however, meaning that when presented with the same dilemma, it tended to judge it with the same set of morals and values.

Sachdeva and van Nuenen began the study in January 2023, shortly after ChatGPT came out. According to van Nuenen, as people increasingly turned to AI for personal advice, they were motivated to study the values shaping the responses they received.

r/AmITheAsshole is a Reddit forum where people can ask fellow users if they were the “asshole” in a social dilemma. The forum was chosen by the researchers due to its unique verdict system, as subreddit users assign their judgement of “Not The Asshole,” “You’re the Asshole,” “No Assholes Here,” “Everyone Sucks Here” or “Need More Info.” The judgement with the most upvotes, or likes, is accepted as the consensus, according to the study. 

“What (other) studies will do is prompt models with political or moral surveys, or constrained moral scenarios like a trolley problem,” Sachdeva said. “But we were more interested in personal dilemmas that users will also come to these language models for like, mental health chats or things like that, or problems in someone’s direct environment.”

According to the study, the LLM models were presented with the post and asked to issue a judgement and explanation. Researchers compared their responses to the Reddit consensus and then judged the AI’s explanations along a six-category moral framework of fairness, feelings, harms, honesty, relational obligation and social norms. 

The researchers found that out of the LLMs, GPT-4’s judgments agreed with the Reddit consensus the most, even if agreement was generally pretty low. According to the study, GPT-3.5 assigned people “You’re the Asshole” at a comparatively higher rate than GPT-4. 

“Some models are more fairness forward. Others are a bit harsher. And the interesting thing we found is if you put them together, if you look at the distribution of all the evaluations of these different models, you start approximating human consensus as well,” van Nuenen said. 

The researchers found that even though the verdicts of the individual LLMs often disagreed with one another, the consensus of the seven models typically aligned with the Redditors’ consensus.
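That pooling step is conceptually simple: collect each model’s verdict and take the majority. The sketch below uses made-up verdicts purely to illustrate the idea; it is not the researchers’ code or data.

```python
# Illustrative majority-vote aggregation across several models' verdicts for one
# r/AmITheAsshole post. The verdicts below are invented, not data from the study.
from collections import Counter

verdicts = {
    "GPT-4": "NTA",         # Not The Asshole
    "GPT-3.5": "YTA",       # You're The Asshole
    "Claude Haiku": "NTA",
    "Mistral 7B": "NTA",
    "PaLM 2 Bison": "ESH",  # Everyone Sucks Here
    "Gemma 7B": "NTA",
    "LLaMa 2 7B": "YTA",
}

counts = Counter(verdicts.values())
consensus, votes = counts.most_common(1)[0]

print(f"Pooled model verdict: {consensus} ({votes} of {len(verdicts)} models)")
# The pooled verdict is what would then be compared against the Reddit consensus.
```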

One model, Mistral 7B, assigned almost no posts “You’re the Asshole” verdicts, as it used the word “asshole” to mean its literal definition, and not the socially accepted definition in the forum, which refers to whoever is at fault. 

When asked if he believed the chatbots had moral compasses, van Nuenen instead described them as having “moral flavors.” 

“There doesn’t seem to be some kind of unified, directional sense of right and wrong (among the chatbots). And there’s diversity like that,” van Nuenen said. 

Sachdeva and van Nuenen have begun two follow-up studies. One examines how the models’ stances adjust when deliberating their responses with other chatbots, while the other looks at how consistent the models’ judgments are as the dilemmas are modified. 





Imperial researchers develop AI stethoscope that spots fatal heart conditions in seconds

Experts hope the new technology will help doctors spot heart problems earlier

Researchers at Imperial College London and Imperial College Healthcare NHS Trust have developed an AI-powered stethoscope that can detect serious heart conditions in just 15 seconds, including heart failure, heart valve disease and irregular heart rhythms.

The device, manufactured by US firm Eko Health, uses a microphone to record heartbeats and blood flow, while simultaneously taking an ECG (electrocardiogram). The data is then analysed by trained AI software, allowing doctors to detect abnormalities beyond the range of the human ear or the traditional stethoscope.

In a trial involving 12,000 patients from 96 GP practices throughout the UK, the AI stethoscope proved accurate in diagnosing illnesses that usually require lengthy periods of examination.

Results revealed that patients examined with the device were twice as likely to be diagnosed with heart failure as those assessed without it, and 3.5 times as likely to be diagnosed with atrial fibrillation, a condition linked to strokes. They were also almost twice as likely to be diagnosed with heart valve disease.


The AI stethoscope was trialled on those with more subtle signs of heart failure, including breathlessness, fatigue, or swelling of the lower legs and feet. Retailing at £329 on the Eko Health website, the stethoscope can also be purchased for home use.

Professor Mike Lewis, Scientific Director for Innovation at the National Institute for Health and Care Research (NIHR), described the AI stethoscope as a “real game-changer for patients.”

He added: “The AI stethoscope gives local clinicians the ability to spot problems earlier, diagnose patients in the community, and address some of the big killers in society.”

Dr Sonya Babu-Narayan, Clinical Director at the British Heart Foundation, further praised this innovation: “Given an earlier diagnosis, people can access the treatment they need to help them live well for longer.”

Imperial College London’s research is a significant breakthrough in rapid diagnosis technology. Studies by the British Heart Foundation reveal that over 7.6 million people in the UK live with cardiovascular disease, which causes around 170,000 deaths each year.

Often called a “silent killer”, heart conditions can go unnoticed for years, particularly in young people. The charity Cardiac Risk in the Young reports that 12 young people die each week from undiagnosed heart problems, with athletes at particular risk. Experts hope this new technology will allow these conditions to be identified far earlier.

The NHS has also welcomed these findings. Heart failure costs the NHS more than £2 billion per year, equating to 4 per cent of the annual budget. By diagnosing earlier, the NHS estimates this AI tool could save up to £2,400 per patient.

Researchers now plan to roll out the stethoscope across GP practices in Wales, South London and Sussex – a move that will transform how heart conditions are diagnosed throughout the country.

Featured image via Google Maps/Pexels


