

Educators rethink assignments as AI becomes widespread



AI tools like ChatGPT are transforming student learning, forcing educators to rethink assignments, in-class assessments, and how academic integrity is maintained.

Educators are confronting a new reality as AI tools like ChatGPT become widespread among students. Traditional take-home assignments and essays are increasingly at risk as students commonly use AI chatbots to complete schoolwork.

Schools are responding by moving more writing tasks into the classroom and monitoring student activity. Teachers are also integrating AI into lessons, teaching students how to use it responsibly for research, summarising readings, or improving drafts, rather than as a shortcut to cheat.

Policies on AI use still vary widely. Some classrooms allow AI tools for grammar checks or study aids, while others enforce strict bans. Teachers are shifting away from take-home essays, adopting in-class tests, lockdown browsers, or flipped classrooms to manage AI’s impact better. 

The inconsistency often leaves students unsure about acceptable use and challenges educators to uphold academic integrity.

Institutions like the University of California, Berkeley, and Carnegie Mellon have implemented policies promoting ‘AI literacy,’ explaining when and how AI can be used, and adjusting assessments to prevent misuse.

As AI continues improving, educators seek a balance between embracing technology’s potential and safeguarding academic standards. Teachers emphasise guidance, structured use, and supervision to ensure AI supports learning rather than undermining it.






Best Practices for Responsible Innovation



Dr. Heather Bassett, Chief Medical Officer at Xsolis

Our patients can’t afford to wait on officials in Washington, DC, to offer guidance around responsible applications of AI in the healthcare industry. The healthcare community needs to stand up and continue to put guardrails in place so we can roll out AI responsibly in order to maximize its evolving potential. 

Responsible AI, for example, should include reducing bias in access to and authorization of care, protecting patient data, and making sure that outputs are continually monitored. 

With the heightened need for industry-specific regulations to come from the bottom up — as opposed to top-down — let’s take a closer look at the AI best practices currently dominating conversations among the key stakeholders in healthcare. 

Responsible AI without squashing innovation

How can healthcare institutions and their tech industry partners continue innovating for the benefit of patients? That must be the question guiding the innovators moving AI forward. On a basic level of security and legal compliance, that means companies developing AI technologies for payers and providers should be aware of HIPAA requirements. De-identifying any data that can be linked back to patients is an essential component of any protocol whenever data-sharing is involved.
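To make that concrete, here is a minimal, hypothetical sketch in Python of what a de-identification step might look like before a record is shared with a partner. The field names and the salted-hash linkage are illustrative assumptions, not Xsolis's implementation and not a complete HIPAA Safe Harbor checklist.

import hashlib

# Illustrative subset of direct identifiers; HIPAA Safe Harbor lists 18
# categories, and a real protocol must cover all of them.
DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "phone", "email", "address", "dob"}

def deidentify(record: dict, salt: str) -> dict:
    """Return a copy of a patient record with direct identifiers removed.

    The medical record number is replaced with a salted one-way hash so rows
    can still be linked across datasets without exposing the original value.
    """
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "mrn" in record:
        cleaned["patient_key"] = hashlib.sha256(
            (salt + str(record["mrn"])).encode()
        ).hexdigest()
    return cleaned

# Only clinical values and the hashed key survive the pass.
shared = deidentify(
    {"name": "Jane Doe", "mrn": "12345", "dob": "1980-01-01", "a1c": 6.8},
    salt="per-agreement-secret",
)

Even a toy version makes the trade-off visible: the hashed key preserves the ability to join datasets, which is exactly the kind of design choice that should be documented and reviewed with partners.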

Beyond the many regulations that already apply to the healthcare industry, innovators must be sensitive to the consensus forming around the definition of “responsible AI use.” Too many rules around which technologies to pursue, and how, could potentially slow innovation. Too few rules can yield ethical nightmares. 

Stakeholders on both the tech industry and healthcare industry sides will offer different perspectives on how to balance risks and benefits. Each can contribute a valuable perspective on how to reduce bias within the populations they serve, being careful to listen to concerns from any populations not represented in high-level discussions.

The most pervasive pain point being targeted by AI innovators

Rampant clinician burnout has persisted as an issue within hospitals and health systems for years. In 2024, a national survey revealed the physician burnout rate dipped below 50 percent for the first time since the COVID-19 pandemic. The American Medical Association’s “Joy in Medicine” program, now in its sixth year, is one of many efforts to combat the reasons for physician burnout — lack of work/life balance, the burden of bureaucratic tasks, etc. — by providing guidelines for health system leaders interested in implementing programs and policies that actively support well-being.

To that end, ambient-listening AI tools in the office are helping save time by transforming conversations between the provider and patient into clinical notes that can be added to electronic health records. Previously, manual note-taking would have to be done during the appointment, reducing the quality of face-to-face time between provider and patient, or after appointments during a physician’s “free time,” when the information gleaned from the patient was not front of mind.

Other AI tools can help combat the second-order effects of burnout. Even when the critical information needed to recommend a diagnostic test is sitting in the patient’s electronic health record (EHR), a doctor still might not think to order it. AI tools can scan an EHR — prior visit information, lab results — to analyze potentially large volumes of information and make recommendations based on the available data. In this way the AI reader acts like a second pair of eyes, interpreting a lab result, or a year’s worth of lab results, for something the physician might have missed.
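No particular vendor's method is described here, but a toy Python sketch shows the shape of the idea: sweep the lab history pulled from an EHR and flag anything out of range or consistently trending in one direction for the physician to recheck. The test names, reference ranges, and trend rule are all hypothetical.

# Hypothetical reference ranges; a real system would pull these from the lab feed.
REFERENCE_RANGES = {"a1c": (4.0, 5.6), "egfr": (60.0, 120.0)}

def flag_for_review(lab_history: dict[str, list[float]]) -> list[str]:
    """Scan a patient's lab history and return items worth a second look."""
    flags = []
    for test, values in lab_history.items():
        if not values:
            continue
        low, high = REFERENCE_RANGES.get(test, (float("-inf"), float("inf")))
        if not low <= values[-1] <= high:
            flags.append(f"{test}: latest value {values[-1]} outside {low}-{high}")
        if len(values) >= 3 and all(later < earlier for earlier, later in zip(values, values[1:])):
            flags.append(f"{test}: consistently declining across {len(values)} results")
    return flags

# A steady eGFR decline is surfaced even though no single value is alarming yet.
print(flag_for_review({"egfr": [88.0, 79.0, 71.0, 64.0], "a1c": [5.4, 5.5]}))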

Automating administrative tasks outside the clinical setting can save burned-out healthcare workers (namely, revenue cycle managers) time and bandwidth as well.

Private-sector vs. public-sector transparency 

How can we trust whether an institution is disclosing how it uses AI when the federal government doesn’t require it to? This is where organizations like CHAI (the Coalition for Health AI) come in. Its membership is composed of a variety of healthcare industry stakeholders who are promoting transparency and open-source documentation of actual AI use-cases in healthcare settings.

Healthcare is not the only industry facing the question of how to foster public trust in how it uses AI. In general, the key question is whether there’s a human in the loop when an AI-influenced process affects a human. It ought to be easy for consumers to interrogate that to their own satisfaction. For its part, CHAI has developed an “applied model card” — like a fact sheet that acts as a nutrition label for an AI model. Making these facts more readily available can only further the goal of fostering both clinician and patient trust.
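CHAI publishes its own template, so the structure below is only a hypothetical Python sketch of the kinds of fields such a label tends to carry; the field names and example values are assumptions, not CHAI's schema.

from dataclasses import dataclass

@dataclass
class AppliedModelCard:
    """Hypothetical 'nutrition label' for a deployed healthcare AI model."""
    model_name: str
    intended_use: str
    out_of_scope_uses: list[str]
    training_data_summary: str
    known_limitations: list[str]
    human_in_the_loop: bool
    monitoring_plan: str

card = AppliedModelCard(
    model_name="readmission-risk-v2",
    intended_use="Flag adult inpatients at elevated 30-day readmission risk for care-team review",
    out_of_scope_uses=["pediatric patients", "automated coverage denials"],
    training_data_summary="De-identified encounters from partner health systems, 2018-2023",
    known_limitations=["Not yet validated on rural populations"],
    human_in_the_loop=True,
    monitoring_plan="Quarterly drift and subgroup-performance review",
)

Publishing something this simple alongside each deployment answers the two questions clinicians and patients most often ask: what the model is for, and whether a human stays in the loop.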

Individual states have their own AI regulations. Most exist to curb profiling, the use of the technology to sort people into categories to make it easier to sell them products or services or to make hiring, insurance coverage and other business decisions about them. In December, California passed a law that prohibits insurance companies from using AI to deny healthcare coverage. It effectively requires a human in the loop (“a licensed physician or qualified health care provider with expertise in the specific clinical issues at hand”) when any denial decisions are made.

When vendors and health systems make their AI use transparent — following evolving recommendations on how transparency is defined and communicated, and showing end users and patients alike how their data is protected — hospitals and health systems have nothing to lose and plenty to gain.


About Dr. Heather Bassett 

Dr. Heather Bassett is the Chief Medical Officer with Xsolis, the AI-driven health technology company with a human-centered approach. With more than 20 years’ experience in healthcare, Dr. Bassett provides oversight of Xsolis’ data science team, denials management team and its physician advisor program. She is board-certified in internal medicine.





SA to roll out ChatGPT-style AI app in all high schools as tech experts urge caution



Tech experts have welcomed the rollout of a ChatGPT-style app in South Australian classrooms but say the use of the learning tool should be managed to minimise potential drawbacks, and to ensure “we don’t dumb ourselves down”.

The app, called EdChat, has been developed by Microsoft in partnership with the state government, and will be made available across SA public high schools next term, Education Minister Blair Boyer said.

“It is like ChatGPT … but it is a version of that that we have designed with Microsoft, which has a whole heap of other safeguards built in,” Mr Boyer told ABC Radio Adelaide.

“Those safeguards are to prevent personal information of students and staff getting out, to prevent any nasties getting in.

“AI is well and truly going to be part of the future of work and I think it’s on us as an education system, instead of burying our head in the sand and pretending it will go away, to try and tackle it,” he said.

EdChat was initially launched in 2023 and was at the centre of a trial involving 10,000 students, while all principals, teachers and pre-school staff have had access to the tool since late 2024.

The government said the purpose of the broader rollout was to allow children to “safely and productively” use technology of a kind that was already widespread.

SA Education Minister Blair Boyer says the technology has built-in safeguards. (ABC News: Justin Hewitson)

Mr Boyer said student mental health had been a major consideration during the design phase.

“There’s a lot of prompts set up — if a student is to type something that might be around self-harm or something like that — to alert the moderators to let them know that that’s been done so we can provide help,” he said.

“One of the things that came out [of the trial] which I have to say is an area of concern is around some students asking you know if it [EdChat] would be their friend, and I think that’s something that we’ve got to look at really closely.

“It basically says: ‘Thank you for asking. While I’m here to assist you and support your work, my role is that of an AI assistant, not a friend. That said, I’m happy to provide you with advice and answer your questions and help with your tasks’.”
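The article does not describe EdChat's internals, but a rough Python sketch of the two safeguards Mr Boyer mentions (flagging possible self-harm to moderators, and declining friendship requests with a scripted reply) might look like the following; the keyword lists and reply text are purely illustrative.

# Purely illustrative: a production system would use trained classifiers and a
# school escalation protocol, not keyword lists.
SELF_HARM_TERMS = {"self-harm", "hurt myself", "end my life"}
FRIEND_TERMS = {"be my friend", "are you my friend"}

NOT_A_FRIEND_REPLY = (
    "Thank you for asking. While I'm here to assist you and support your "
    "work, my role is that of an AI assistant, not a friend."
)

def screen_message(student_id: str, message: str, notify_moderator):
    """Screen a student message before it reaches the underlying model.

    Returns (forward_to_model, canned_reply).
    """
    lowered = message.lower()
    if any(term in lowered for term in SELF_HARM_TERMS):
        # Alert a human moderator so the school can follow up with the student.
        notify_moderator(student_id, message)
        return False, None
    if any(term in lowered for term in FRIEND_TERMS):
        # Redirect friendship requests with the scripted assistant-role reply.
        return False, NOT_A_FRIEND_REPLY
    return True, None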

The government said the app was already being used by students for tasks such as explaining solutions to difficult maths problems, rephrasing instructions “when they are having trouble comprehending a task”, and quizzing them on exam subjects.

“The conversational aspect I think is sometimes underplayed with these tools,” RMIT computing expert Michael Cowling said.

“You can ask it for something, and then you can ask it to clarify a question or to refine it, and of course you can also then use it as a teacher to ask you questions or to give you a pop quiz or to check your understanding for things.”


Adelaide Botanic High School students were involved in the trial of EdChat, which is rolling out across all SA high schools next term. (ABC News: Brant Cumming)

Adelaide Botanic High School principal Sarah Chambers, whose school participated in the trial of the app, described it as “an education equaliser”.

“It does provide students with a tool that is accessible throughout the day and into the evening,” she said.

“It is like using any sort of search tool on the internet — it is limited by the skill of the people using it, and really we need to work on building that capacity for our young people [by] teaching them to ask good questions.”

Ms Chambers said year levels 7 to 12 had access to the app, and year 11 student Sidney said she used it on a daily basis.

“I can use it to manage my time well and create programs of study guides, and … for scheduling so I don’t procrastinate,” the student said.

“A lot of students were already using other AI platforms so having EdChat as a safe platform that we can use was beneficial for all our learning, inside school and outside school.”

EdChat is similar to an app that has been trialled in New South Wales.


University of NSW artificial intelligence expert Toby Walsh says AI has a place in modern learning, but urges caution. (Supplied)

University of NSW artificial intelligence expert Toby Walsh said while generative AI very much had a place in modern learning, educators had “to be very mindful” of the way in which it was used.

“We have to be very careful that we don’t dumb ourselves down by using this technology as a crutch,” Professor Walsh said.

“It’s really important that people do learn how to write an essay themselves and not rely upon ChatGPT to write the essay, because there are important skills we learn in writing the essay — command of a particular domain of knowledge, ability to construct arguments and think critically.

“We want to make sure that we don’t lose those skills or never even acquire those skills.”

Professor Cowling said while generative AI tools came with the risk of plagiarism, they could in fact strengthen critical skills, if used appropriately.

“We’ve been very focused on the academic integrity concerns but I do think we can also use these tools for things like brainstorming and starting ideas,” he said.

“As long as we anchor ourselves in the idea that we need to know how to prompt these tools properly and that we need to carefully evaluate the output of these tools, I’m not entirely convinced that critical thinking is going to be an issue.

“In fact I would argue that the opposite may be true, and that critical thinking is actually something that will develop more with these gen AI tools.”





AI-exposed industries hiring fewer young workers, study finds



On the other hand, the study pointed out that positions related to health aides, such as nursing, have followed a different trend from AI-exposed roles.

“Employment for young workers [in these roles] has been growing faster than for older workers,” the study stated.

Impact on entry-level jobs

The findings come amid reports that entry-level employment, where young workers usually start, could be wiped out by the implementation of AI tools.

In fact, 47% of Gen Z and Millennial employees are already worried that AI could replace their jobs, to the point that they are too nervous to admit how much of their work is accomplished by AI.

Stanford’s study confirms that the implementation of AI tools is influencing entry-level jobs, but the impact is not felt across the board.



