Education

Don’t use AI to write your college admissions essay

Every fall, high school students applying to college face an intimidating task: They must write a stylish, memorable essay that will boost their admissions chances. 

So who can blame them when they look at AI chatbots like ChatGPT that can brainstorm, compose, and edit text, and see what looks like a tempting advantage? 

But college admissions experts warn against falling for the imagined payoff of a crisp, well-researched, confident-sounding essay. Instead, using AI to write an admissions essay could land a student at the bottom of the pile. 

“A college application is a blank canvas,” says Dr. Jennifer Kirk, a high school counseling curriculum leader and member of the National Association for College Admission Counseling. “Everything that you throw at it should be a bright splash of color…If you throw a completely AI-written essay at that blank canvas, it’s just going to wash it out.”

Aside from sacrificing their authentic voice to AI, students face other significant risks, like submitting an essay that contains embarrassing mistakes or inaccuracies, or that reads as strikingly similar to other applicants' essays.

Why you shouldn’t use AI to write college admissions essays

When students set out to write their Common Application admissions essay this year, they choose one of seven prompts. Their response is limited to 650 words.

The prompts invite applicants to share a meaningful talent, reflect on gratitude, or discuss an engaging concept or idea, for example. One prompt lets applicants write on a topic of their own choosing.

The writing doesn’t stop there, either. They may additionally submit a separate essay on “challenges and circumstances,” which provides an opportunity to address factors that may have affected their record of achievement, like housing instability, homelessness, family caretaking, community disruption, and war or political conflict. 

Each college or university may also require multiple supplemental essays or written responses. The University of California, Berkeley, for instance, has applicants respond to four of eight "personal insight" questions. Harvard's application includes five questions that must each be answered in 150 words or fewer.

For a student overwhelmed by the work of writing a memorable essay, plus crafting original responses for every application they submit, an AI chatbot promises a simple shortcut. 

Yet Connie Livingston, a former assistant director of admissions at Brown University, says that what sets exceptional essays apart from unimpressive ones is the "authentic" voice of someone who sees themselves as a learner and scholar.

“There’s no way AI can do that for you,” says Livingston, now a college counselor with Empowerly. “It has to be an intrinsic quality that a student possesses that they then translate onto the page for admissions officers to, hopefully, see and appreciate.”

While students might think they can prompt an AI chatbot or tool to approximate their own ideas and voice by feeding it personal information, Kirk cautions them against doing so, for privacy reasons. Some models may leak or publish sensitive or personally identifiable information to the internet, she says. 

There’s also no surefire way to conceal the use of AI in an essay. 

Kirk says that admissions officers can detect telltale signs, such as constructions, phrases, punctuation, and grammar, that suggest an applicant used AI. 

If the essay itself contains original ideas and an authentic voice, those red flags might be dismissed. But if it reads as bland and uninspired, then the reader may suspect AI. 

Additionally, phrasing that seems unique to an individual student may instead reflect how ChatGPT commonly responds to the same Common Application essay prompt when given minimal direction.

Imagine, for example, thousands of students applying to the same university and using the same AI chatbot to write their supplemental essays; the chatbot may use similar language for each individual response.

“That absolutely can happen,” Kirk says. “They’re going to sound pretty similar, and look pretty similar.”  

When it’s OK to use AI for college admissions essays 

Though Kirk says students should never use AI to write their essays, she does think the technology can be otherwise useful in the process. 

First, she advises students to research whether each college or university they’re applying to actually permits the use of AI, in general and specifically in admissions applications, and then follow those rules.

Once students have that information, Kirk says they may consider consulting AI for researching, brainstorming, outlining, refining drafts, editing, and proofreading. 

Livingston recommends AI only for researching and brainstorming, and notes that students should also follow their high school’s policy on AI use before adopting it during their essay-writing process.

Livingston says that AI can helpfully summarize information about a university’s culture or academics, providing details that might have taken longer to track down. A student interested in a particular academic department, for example, could ask an AI tool to list the most accomplished faculty members or notable areas of research. The student can then potentially incorporate that information into an essay or written response. 

When it comes to research and facts, however, applicants should be careful to double-check what an AI search engine or chatbot says is true. 

“AI makes mistakes,” Livingston says. 



To use AI effectively for brainstorming, Kirk recommends narrowing down to a few key topics for further exploration, before asking AI for ideas about the subject of an essay. 

“Don’t rely on AI to choose your topic or develop core ideas without personal reflection,” Kirk says. 

Students may find AI helpful during the revision process, Kirk adds. She recently worked with an applicant who gave an AI tool two versions of the same essay, with a request to synthesize the content in order to write a new draft.

Still, Kirk says students shouldn’t let AI overly polish their writing, beyond helping with structure and correcting grammatical errors and punctuation. This can dilute a student’s original voice. Letting AI use big or fancy words that a student might not otherwise use has the same effect. Kirk says admissions officers can pick up on those discrepancies by looking at a student’s entire application. 

To avoid a high-pressure crunch, with deadlines looming and an essay plus multiple responses still unwritten, Livingston recommends starting as early as possible. (She suggests the summer before senior year.)

Students can reach out to high school writing centers, college counselors, and English teachers for valuable help and feedback throughout the process, Livingston says.  

Regardless, rushed or desperate students should know that AI won’t provide the winning shortcut to the college of their dreams. 

“Yes, AI can write a good essay, but a good essay is not going to get a student accepted into college,” Livingston says. “It has to be a great essay.”




Education

Surrey schools embrace AI in the classroom – CTV News


Education

Baltimore schools roll out AI guidance to help teachers navigate a ChatGPT world

This story was first published in Technical.ly.

For the past few years, Baltimore City Public Schools teacher Lee Krempel has watched students try to pass off generative artificial intelligence as their work — and the giveaways were often glaring. 

“One time I knew for sure this kid, just from their class performance, didn’t actually read Hamlet that closely, and suddenly they had ideas in an essay on feminism in Hamlet and psychoanalytic criticism in Hamlet … it was actually kind of hilarious,” Krempel said. 

But Krempel, who teaches 12th grade English and AP Literature, hasn’t always been sure how to handle these new forms of plagiarism. He’s grateful for the AI guidance City Schools released last week to help teachers navigate use of the technology in the ChatGPT era.

Dawn Shirey, the district’s director of virtual learning and instructional technology, led the development of the new guidance. After hearing the struggles of teachers like Krempel, she wanted to make sure staff had clear direction on how to manage AI use in the classroom. 

For now, the guidance is only viewable through a City Schools account as the district works to create a public-facing page.

The guidelines include: 

  • A definition of generative AI and an outline of commonly used tools
  • An introduction to a generative AI “acceptable use scale” to guide students and teachers on assignments
  • An explanation of inappropriate uses of AI, including submitting AI-generated work without citation or harassing another student
  • Guidance on how teachers can address plagiarism and enforce academic integrity
  • An outline of privacy rules for using AI tools

Instead of unreliable AI checkers, guiding teachers to trust their instincts

To draft the guidance, Shirey convened a workgroup, drawing on recommendations from TeachAI, an initiative that advises on AI in education. During the last academic year, she also held a series of listening sessions with parents, teachers, special education staff and students.

She found that teachers often felt uncertain about how to address suspected plagiarism, while students were unsure how to use AI appropriately as a tool. 

A key part of the new guidance advises teachers not to rely on online AI checkers to prove plagiarism, but instead to draw on their knowledge of a student’s past work. These AI “detectors” remain inconsistent, per a January 2025 study from the Journal of Applied Learning and Teaching. 

“We’ve heard of too many false positives and false negatives with checkers… So we really don’t want folks to rely on it,” Shirey said. 

When ChatGPT first went live, Krempel, the English teacher, sometimes turned to these detection tools. He now regrets using the software to start those conversations with students. 

“I’m doing real-time writing with students in class all the time, so I’m familiar with their voice and the level of complexity in their sentences — I can tell without the software,” Krempel told Technical.ly. 

When plagiarism is suspected, teachers can decide on their own approaches, aligned with the district’s existing academic integrity policy. In Krempel’s classes, students may redo essays, while another teacher, who asked not to be named for fear of administrative blowback, assigns a zero for plagiarized work.

“The first time it happened, it was a warning, and I explained to them that they were going to get a zero in the grade book for it… and if it happened again, it would be a referral and a call home,” the teacher said. “But there wasn’t super clear guidance on how to approach that in the past.”

Flexible guidelines, because not all teaching is the same

At Level 1 of the district’s new generative AI acceptable use scale, students may not use AI at all. At Level 5, they can use it freely with personal oversight, as long as they cite the tool and link any chats to their work. 

Krempel has seen that students often don’t understand what constitutes inappropriate use of AI.

“It makes sense that a 16- or 17-year-old, who hasn’t quite developed an idea of plagiarism, thinks they can just snatch some of the language that ChatGPT has used and put it in an essay, unattributed,” Krempel said. 

Plagiarism was teachers’ biggest focus during the listening groups, per Shirey, but the new guidance also addresses the biases present in generative AI tools, encourages teachers to discuss how the technology can reinforce stereotypes, and refers teachers to the district’s bullying policy if students use AI to harass others.

The district held optional professional learning sessions for staff members the week before classes began to help teachers understand the guidance and integrate AI tools into their classrooms. Sessions included how to introduce grade-level appropriate discussions on the ethics of AI and concluded with a pitch competition, where teachers developed and presented their own ideas. One teacher created a Gemini Gem that adjusts the difficulty levels of primary sources and provides a Spanish translation.

High school English teacher Forrest Gertin helped lead the sessions. A proponent of new technologies in the classroom, Gertin uses the tools to coach the school debate team by prompting students with follow-up questions. He sees a lot of benefit from the new guidance.

“We really wanted to slow down the process,” Gertin said, “from ‘Oh my god, it’s here’ to how can it help and really improve the learning experience for our students.”




Education

AI education bill introduced in House

Proposed legislation would ban charter schools from using AI instructors moving forward.

The Center Square, PA State News, September 15, 2025


HARRISBURG — Rep. Nikki Rivera, D-Lancaster, is proposing new legislation to ban charter schools fr…






