Education
Kentucky schools, healthcare embrace AI despite mixed reactions

(LEX 18) — Artificial intelligence is reshaping how local organizations operate, from classrooms in Irvine to healthcare facilities in Lexington, as professionals navigate both the opportunities and challenges of this rapidly evolving technology.
Lisa Blue, who researches AI’s impact on workforce development, delivers six to eight speaking engagements per month discussing AI policy and implementation. She encounters varied student experiences with the technology.
“AI is going to change how we work before it changes who works,” Blue said.
Blue works to shift perceptions about AI in education, particularly addressing misconceptions from K-12 settings.
“We do have students coming in from K through 12 who have been told AI is straight-up cheating, it’s bad, don’t use it, and I’m really trying to change that narrative,” Blue said.
At Estill County Area Technology Center in Irvine, students continue integrating AI into their studies. Allyson Banks, who works at the school, describes the technology’s dual nature.
“It is fantastic and terrifying at the same time,” Banks said.
The school’s programs align well with AI applications, according to Banks.
“We have robotics, manufacturing, a lot of different things that pair really well with AI,” Banks said.
For computer science teacher Zach Bennett, AI offers significant efficiency gains.
“Using AI, you can create things in half the time that it would normally cost,” Bennett said.
Healthcare transformation on the horizon
In Lexington’s healthcare sector, Dr. Stephen Behnke, CEO of Lexington Clinic, sees AI as a transformative force, though one still in its early stages.
“I’d say we’re in the early innings of this,” Behnke said.
Behnke anticipates fundamental changes across the healthcare industry.
“I think that AI is going to fundamentally transform healthcare. I think that the power of the tools today is pretty early,” Behnke said.
Looking ahead operationally, Behnke predicts significant changes within the next decade.
“There’s almost no way that by 2030, 2035, healthcare doesn’t look profoundly different,” Behnke said.
A market size and forecast report from Grand View Research supports Behnke’s projections, showing substantial growth in healthcare AI spending. The report projects that spending on healthcare AI alone will reach $187 billion by 2030, a significant jump from the market’s 2024 size.
The research highlights AI’s expanding role across multiple sectors, from education and manufacturing to healthcare, as organizations adapt to integrate these tools into their operations while addressing concerns about implementation and workforce impact.
As for jobs of the future and how they connect with AI, Dr. Blue of Eastern Kentucky University and Banks at Estill County ATC addressed that question:
“Any kind of job where it’s hands-on, so we’re talking like healthcare, advanced manufacturing, logistics, construction, agriculture, they’re all adding AI-enhanced jobs right now. So they’re not really being threatened by it; they’re being enhanced by AI capabilities,” Blue said.
“I don’t think it’s necessarily gonna replace as many humans as it’s going to make us better at our jobs, or at least faster at our jobs,” Banks added.
Education
AI cheating in US schools prompts shift to in-class assessments, clearer policies

Artificial intelligence is reshaping education in the United States, forcing schools to rethink how students are assessed.
Traditional homework like take-home essays and book reports is increasingly being replaced by in-class writing and digital monitoring. The rise of AI has blurred the definition of honest work, leaving both teachers and students grappling with new challenges.
California English teacher Casey Cuny, a 2024 Teacher of the Year, said, “The cheating is off the charts. It’s the worst I’ve seen in my entire career.”
He added that teachers now assume any work done at home may involve AI. “We have to ask ourselves, what is cheating? Because I think the lines are getting blurred.”
SHIFT TO IN-CLASS ASSESSMENTS
Across schools, teachers are designing assignments that must be completed during lessons. Oregon teacher Kelly Gibson explained, “I used to give a writing prompt and say, ‘In two weeks I want a five-paragraph essay.’ These days, I can’t do that. That’s almost begging teenagers to cheat.”
Students themselves are unsure how far they can go. Some use AI for research or editing, but question whether summarising readings or drafting outlines counts as cheating.
College student Lily Brown admitted, “Sometimes I feel bad using ChatGPT to summarise reading, because I wonder is this cheating?”
POLICY CONFUSION AND NEW GUIDELINES
Guidance on AI use varies widely, even within the same school. Some classrooms encourage AI-assisted study, while others enforce strict bans. Valencia 11th grader Jolie Lahey called it “confusing” and “outdated.”
Universities are also drafting clearer rules. At the University of California, Berkeley, faculty are urged to state expectations on AI in syllabi. Without clarity, administrators warn, students may use tools inappropriately.
At Carnegie Mellon University, rising cases of academic responsibility violations have prompted a rethink. Faculty have been told that outright bans “are not viable” unless assessment methods change.
Emily DeJeu, who teaches at Carnegie Mellon’s business school, stressed, “To expect an 18-year-old to exercise great discipline is unreasonable, that’s why it’s up to instructors to put up guardrails.”
The debate continues as schools balance innovation with integrity, shaping how the next generation learns in an AI-driven world.
(With inputs from AP)
Education
‘Cheating is off the charts’: AI tools reshape how students learn and study

The book report is now a thing of the past. Take-home tests and essays are becoming obsolete.
Student use of artificial intelligence has become so prevalent, high school and college educators say, that to assign writing outside of the classroom is like asking students to cheat.
“The cheating is off the charts. It’s the worst I’ve seen in my entire career,” says Casey Cuny, who has taught English for 23 years. Educators are no longer wondering if students will outsource schoolwork to AI chatbots. “Anything you send home, you have to assume is being AI’ed.”
The question now is how schools can adapt, because many of the teaching and assessment tools that have been used for generations are no longer effective. As AI technology rapidly improves and becomes more entwined with daily life, it is transforming how students learn and study and how teachers teach, and it’s creating new confusion over what constitutes academic dishonesty.
“We have to ask ourselves, what is cheating?” says Cuny, a 2024 recipient of California’s Teacher of the Year award. “Because I think the lines are getting blurred.”
Cuny’s students at Valencia High School in southern California now do most writing in class. He monitors student laptop screens from his desktop, using software that lets him “lock down” their screens or block access to certain sites. He’s also integrating AI into his lessons and teaching students how to use AI as a study aid “to get kids learning with AI instead of cheating with AI.”
In rural Oregon, high school teacher Kelly Gibson has made a similar shift to in-class writing. She is also incorporating more verbal assessments to have students talk through their understanding of assigned reading.
“I used to give a writing prompt and say, ‘In two weeks, I want a five-paragraph essay,’” says Gibson. “These days, I can’t do that. That’s almost begging teenagers to cheat.”
Take, for example, a once typical high school English assignment: Write an essay that explains the relevance of social class in “The Great Gatsby.” Many students say their first instinct is now to ask ChatGPT for help “brainstorming.” Within seconds, ChatGPT yields a list of essay ideas, plus examples and quotes to back them up. The chatbot ends by asking if it can do more: “Would you like help writing any part of the essay? I can help you draft an introduction or outline a paragraph!”
Students are uncertain when AI usage is out of bounds
Students say they often turn to AI with good intentions for things like research, editing or help reading difficult texts. But AI offers unprecedented temptation, and it’s sometimes hard to know where to draw the line.
College sophomore Lily Brown, a psychology major at an East Coast liberal arts school, relies on ChatGPT to help outline essays because she struggles putting the pieces together herself. ChatGPT also helped her through a freshman philosophy class, where assigned reading “felt like a different language” until she read AI summaries of the texts.
“Sometimes I feel bad using ChatGPT to summarize reading, because I wonder, is this cheating? Is helping me form outlines cheating? If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?”
Her class syllabi say things like: “Don’t use AI to write essays and to form thoughts,” she says, but that leaves a lot of grey area. Students say they often shy away from asking teachers for clarity because admitting to any AI use could flag them as a cheater.
Schools tend to leave AI policies to teachers, which often means that rules vary widely within the same school. Some educators, for example, welcome the use of Grammarly.com, an AI-powered writing assistant, to check grammar. Others forbid it, noting the tool also offers to rewrite sentences.
“Whether you can use AI or not depends on each classroom. That can get confusing,” says Valencia 11th grader Jolie Lahey. She credits Cuny with teaching her sophomore English class a variety of AI skills like how to upload study guides to ChatGPT and have the chatbot quiz them, and then explain problems they got wrong.
But this year, her teachers have strict “No AI” policies. “It’s such a helpful tool. And if we’re not allowed to use it that just doesn’t make sense,” Lahey says. “It feels outdated.”
Schools are introducing guidelines, gradually
Many schools initially banned use of AI after ChatGPT launched in late 2022. But views on the role of artificial intelligence in education have shifted dramatically. The term “AI literacy” has become a buzzword of the back-to-school season, with a focus on how to balance the strengths of AI with its risks and challenges.
Over the summer, several colleges and universities convened their AI task forces to draft more detailed guidelines or provide faculty with new instructions.
The University of California, Berkeley emailed all faculty new AI guidance that instructs them to “include a clear statement on their syllabus about course expectations” around AI use. The guidance offered language for three sample syllabus statements — for courses that require AI, ban AI in and out of class, or allow some AI use.
“In the absence of such a statement, students may be more likely to use these technologies inappropriately,” the email said, stressing that AI is “creating new confusion about what might constitute legitimate methods for completing student work.”

Carnegie Mellon University has seen a huge uptick in academic responsibility violations due to AI, but often students aren’t aware they’ve done anything wrong, says Rebekah Fitzsimmons, chair of the AI faculty advising committee at the university’s Heinz College of Information Systems and Public Policy.
For example, one student who is learning English wrote an assignment in his native language and used DeepL, an AI-powered translation tool, to translate his work to English. But he didn’t realize the platform also altered his language, which was flagged by an AI detector.
Enforcing academic integrity policies has become more complicated, since use of AI is hard to spot and even harder to prove, Fitzsimmons said. Faculty are allowed flexibility when they believe a student has unintentionally crossed a line, but are now more hesitant to point out violations because they don’t want to accuse students unfairly. Students worry that if they are falsely accused, there is no way to prove their innocence.
Over the summer, Fitzsimmons helped draft detailed new guidelines for students and faculty that strive to create more clarity. Faculty have been told a blanket ban on AI “is not a viable policy” unless instructors make changes to the way they teach and assess students. A lot of faculty are doing away with take-home exams. Some have returned to pen and paper tests in class, she said, and others have moved to “flipped classrooms,” where homework is done in class.
Emily DeJeu, who teaches communication courses at Carnegie Mellon’s business school, has eliminated writing assignments as homework and replaced them with in-class quizzes done on laptops in “a lockdown browser” that blocks students from leaving the quiz screen.
“To expect an 18-year-old to exercise great discipline is unreasonable,” DeJeu said. “That’s why it’s up to instructors to put up guardrails.”
Education
Why Every School Needs an AI Policy

AI is transforming the way we work, but without a clear policy in place, even the smartest technology can lead to costly mistakes, ethical missteps and serious security risks.
CREDIT: This is an edited version of an article that originally appeared in All Business
Artificial Intelligence is no longer a futuristic concept. It’s here, and it’s everywhere. From streamlining operations to powering chatbots, AI is helping organisations work smarter, faster and more efficiently.
According to G-P’s AI at Work Report, a staggering 91% of executives are planning to scale up their AI initiatives. But while AI offers undeniable advantages, it also comes with significant risks that organisations cannot afford to ignore. As AI continues to evolve, it’s crucial to implement a well-structured AI policy to guide its use within your school.
Understanding the Real-World Challenges of AI
While AI offers exciting opportunities for streamlining admin, personalising learning and improving decision-making in schools, the reality of implementation is more complex. The upfront costs of adopting AI tools can be high. Many schools, especially those with legacy systems, find it difficult to integrate new technologies smoothly without creating further inefficiencies or administrative headaches.
There’s also a human impact to consider. As AI automates tasks once handled by staff, concerns about job displacement and deskilling begin to surface. In an environment built on relationships and pastoral care, it’s important to question how AI complements rather than replaces the human touch.
Data security is another significant concern. AI in schools often relies on sensitive pupil data to function effectively, and if these systems are compromised the consequences can be serious, from safeguarding breaches to an erosion of trust among parents and staff. Schools must be vigilant about privacy and protection.
And finally, there’s the environmental angle. AI requires substantial computing power and infrastructure, which comes with a carbon cost. As schools strive to meet sustainability targets and educate students on climate responsibility, it’s worth considering AI’s footprint and the long-term environmental impact of widespread adoption.
The Role of an AI Policy in the Modern School
To navigate these issues responsibly, schools must adopt a comprehensive AI policy. This isn’t just a box-ticking exercise; it’s a roadmap for how your school will use AI ethically, securely and sustainably. A good AI policy doesn’t just address technology; it reflects your values, goals and responsibilities.
The first step in building your policy is to create a dedicated AI policy committee. This group should consist of senior leaders, board members, department heads and technical stakeholders. Their mission? To guide the safe and strategic use of AI across your school. The committee should be cross-functional so it can represent all areas of the school and raise practical concerns about how AI may affect people, processes and performance.
Protecting Privacy: A Top Priority
One of the most important responsibilities when implementing AI is protecting personal and organisational data. Any AI system that collects, stores or processes sensitive data must be governed by robust security measures. Your AI policy should establish strict rules for what data can be collected, how long it can be stored and who has access. Use end-to-end encryption and multi-factor authentication wherever possible. And always ask: is this data essential? If not, don’t collect it.
Ethics Matter: Keep AI Aligned With Your Values
When creating an AI policy, you must consider how your principles translate to digital behaviour. Unfortunately, AI models can unintentionally amplify bias, especially when trained on datasets that lack diversity or were built without appropriate oversight. Plagiarism, misattribution and theft of intellectual property are also common concerns. Ensure your policy includes regular audits and bias detection protocols. Consult ethical frameworks such as those provided by the EU AI Act or OECD principles to ensure you’re building in fairness, transparency and accountability from day one.
The Bottom Line: Use AI to Support, Not Replace, Your Strengths
AI is powerful. But like any tool, its value depends on how you use it. With a strong, ethical policy in place, you can harness the benefits of AI without compromising your people, principles, or privacy.