
Intent Amplified: Teaching Students How to Learn with Artificial Intelligence


“You can choose to use AI to learn, or you can choose to use AI to avoid learning.”

That’s the central message of a new first-year philosophy course created by Joshua “Gus” Skorburg (Guelph) called “Digital Wisdom: How to Use AI Critically and Responsibly”.

The course was prompted by Skorburg’s observation that “students get lots of vague and mixed messages about AI use, but very little sustained, hands-on demonstration of what it looks like to use AI to learn, rather than avoid learning.” He thought he should help students ask and answer the question: “What does it look like to choose to use AI to learn?”

In the following guest post, he talks about his motivation for the course, its main idea, and what he teaches his students in it.

It is a version of the first in a planned series of posts on the course for his blog/newsletter, Moving Things Around. In that series, he said by email, “I will share much of the course content and the thinking behind it, in hopes that others can use parts of my course in their own teaching, or develop a similar course in their department. I’m also very keen to get feedback from people who are less optimistic about AI than I am.”


Intent Amplified:
Teaching Students How to Learn with Artificial Intelligence
by Joshua “Gus” Skorburg

Last summer, HudZah, an undergraduate student at Waterloo, used Claude Pro, the AI from Anthropic, to build a nuclear fusor in his bedroom.

This is the kind of thing AI Natives can do, and I cannot. Like Ashlee Vance, I find it makes me want to weep. It also portends a crisis.

One recent study of over 1,600 faculty from 28 countries found that 40% feel they are just beginning their AI literacy journey and only 17% rate themselves at an advanced or expert level.

This Fall, how will the 83% of faculty who are not yet at an advanced level address students who feel ripped off: expected to use AI professionally but not taught how? How will they answer students who ask why they should pay tuition when AI teaches better, faster, and cheaper?

I’ve not seen many answers that students are likely to find convincing. That’s why I spent the summer developing a new first-year Philosophy course called “Digital Wisdom: How to Use AI Critically and Responsibly”. The course’s message is simple:

You can choose to use AI to learn, or you can choose to use AI to avoid learning.

By now, everyone knows what it looks like to use AI to avoid learning, although the strategies are becoming more sophisticated and harder to detect.

The problem, as I see it, is that students get lots of vague and mixed messages about AI use, but very little sustained, hands-on demonstration of what it looks like to use AI to learn, rather than avoid learning.

So, what does it look like to choose to use AI to learn?

The Pedagogical Potential of AI

First, there’s the choice: using AI to learn requires concerted, deliberate action, and it doesn’t happen by default.

One method, persona prompting, is the lowest-hanging fruit here. Rather than asking for answers, have students tell the LLM things like, “You are a biology professor who specializes in making complex concepts accessible to first-year students. Explain CRISPR using Canadian agricultural examples.”

If students learn better through concrete examples, then: “Explain [concept] by providing three real-world examples from different domains, then show me how the same principle applies in each case. Quiz me at the end to test my understanding.” And so on.
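
For readers who want to see what a persona prompt amounts to mechanically: in the chat apps it is simply the first message; through an API it typically becomes the “system” message that frames everything the model says afterward. A minimal sketch, assuming the openai Python package and an illustrative model name (the same idea carries over to Claude or Gemini):

```python
# A minimal sketch of persona prompting via an API rather than the chat UI.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable;
# the model name is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are a biology professor who specializes in making complex concepts "
    "accessible to first-year students. Explain concepts using Canadian "
    "agricultural examples, and quiz the student at the end."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {"role": "system", "content": persona},          # the persona lives here
        {"role": "user", "content": "Explain CRISPR."},  # the actual question
    ],
)

print(response.choices[0].message.content)
```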

By now, everyone also knows the risks of AI in education and many judge them high enough to justify AI “bans.” I don’t think total bans are feasible. Sure, we can mandate in-person exams, but does anyone honestly think students don’t use ChatGPT to prepare for them?

Reddit is full of examples of how students use AI in this way. Students I trust have told me how they’ve done so to prepare for my in-person essay exams (“upload the study guide to ChatGPT, try to memorize the outputs”). Banning AI just incentivizes unguided shadow use, where avoiding learning is more likely.

We also shouldn’t forget about the risks of not using AI. It can be very difficult for some students to ask clarification questions in large lecture halls, or to admit that they don’t understand a basic concept in front of their peers.

One of the most important features of LLMs for learning is that they are patient and non-judgmental. Students can ask as many follow-ups as they want. They can ask for explanations tailored to their learning styles, or for analogies to domains they are more familiar with. Banning AI in the classroom deprives students of these learning opportunities.

New features like ChatGPT’s Study Mode, Claude’s Learning Mode, and Gemini’s Guided Learning incorporate the above ideas with the click of a button.

Of course, it’s never so simple as clicking a button.

The Hidden Curriculum of Default AI

A big problem with today’s LLMs is that they are sycophantic: they tend to tell users what they want to hear and use flattering language that is not conducive to learning. Unfortunately, the default settings of LLMs seem to incentivize providing the illusion of learning, without the hard work of actually learning.

When AI constantly validates and flatters, it can create false confidence in weak work and prevent genuine skill development. In extreme cases, it can even contribute to psychotic breaks.

Thus, when it comes to using AI for learning (and AI use more generally), one of the most important prompting strategies is the anti-persona: telling the AI what it is NOT.

By explicitly programming against sycophancy, you make it more likely that you will get the kind of honest feedback that actually promotes learning: the kind a trusted mentor would give in private, not the polite encouragement given in public.

Here are some examples I encourage students to use in my course:

The “brutal editor” persona prompt

  • “You are a harsh but fair editor reviewing my work. You are NOT interested in making me feel good about my writing. You do NOT start with compliments or end with encouragement. You do NOT say things like ‘great job’ or ‘you’re on the right track.’ Instead, directly identify specific problems and explain why they weaken my argument. Be concise and critical.”

The “skeptical professor” persona prompt

  • “You are a demanding professor who has seen thousands of student papers. You are NOT impressed by basic observations or surface-level analysis. You do NOT give credit for merely attempting something. You do NOT soften criticism with praise sandwiches. Point out exactly where my thinking is shallow, where my evidence is weak, and where my logic fails. If something is genuinely good, you’ll mention it briefly, but focus on what needs improvement.” And so on.
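
The same mechanics work for anti-personas. For students (or instructors) who want the “brutal editor” available on demand, the prompt above can sit in the system message of a small helper, so that every draft pasted in gets the same unsparing treatment. A sketch under the same assumptions as before (openai package, illustrative model name):

```python
# Sketch: a reusable "brutal editor" built by programming against sycophancy.
from openai import OpenAI

client = OpenAI()

BRUTAL_EDITOR = (
    "You are a harsh but fair editor reviewing my work. You are NOT interested "
    "in making me feel good about my writing. Do NOT start with compliments or "
    "end with encouragement. Directly identify specific problems and explain "
    "why they weaken my argument. Be concise and critical."
)

def critique(draft: str) -> str:
    """Return blunt, learning-oriented feedback on a draft."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system", "content": BRUTAL_EDITOR},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(critique("In this essay I will argue that technology changes society."))
```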

Custom Instructions

At this point, many will object: “The temptation to just ask AI to do all the work is too great, and students won’t reliably use those prompts.”

Fair point. Students can and do choose to take shortcuts. But they can also choose not to, if they are shown good alternatives.

An underutilized feature in today’s LLMs is “custom instructions” (available in ChatGPT, Claude, and Gemini). These are like “meta prompts” that automatically apply to all your conversations with an LLM.*

Here’s what I say to students in my course:

If you want to make it more likely that AI will help you learn rather than avoid learning, add custom instructions like:

    • “When I ask for help with an assignment, respond with 3-4 targeted questions that will help me think through the problem myself, rather than giving me solutions. Only provide direct guidance after I’ve demonstrated my own reasoning.”
    • “If I ask you to write, summarize, or analyze something for me, instead provide a structured thinking framework and ask me to work through it step-by-step, checking my reasoning at each stage.”
    • “If I seem to be using you to avoid learning rather than to enhance learning, point this out.”
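
Under the hood, custom instructions behave roughly like a standing system prompt that is prepended to every new conversation. A hedged sketch of that idea, in case it helps to see it spelled out (the helper functions and instruction text here are mine, not a vendor feature; assumes the openai package and an illustrative model name):

```python
# Sketch: simulating "custom instructions" by starting every conversation
# with the same learning-focused standing instructions.
from openai import OpenAI

client = OpenAI()

CUSTOM_INSTRUCTIONS = (
    "When I ask for help with an assignment, respond with 3-4 targeted questions "
    "that help me think through the problem myself, rather than giving solutions. "
    "If I seem to be using you to avoid learning, point this out."
)

def new_conversation() -> list[dict]:
    # Every conversation begins with the same standing instructions.
    return [{"role": "system", "content": CUSTOM_INSTRUCTIONS}]

def ask(history: list[dict], question: str) -> str:
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

chat = new_conversation()
print(ask(chat, "Write my 500-word summary of Mill's harm principle for me."))
```

The point is not that students should write Python; it’s that the custom-instructions box is doing something like this on their behalf every time they open a new chat.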

Interviews

A fun and thought-provoking way to write these custom instructions is to have the AI interview you, with a learning-focused prompt like this:

Please interview me to develop a set of custom instructions for [ChatGPT, Claude, Gemini]. Help me create learning-focused custom instructions by asking about: (1) how I want [ChatGPT, Claude, Gemini] to support my learning process without doing the thinking for me, (2) my preferences for direct, honest communication over excessive positivity or sycophancy, (3) my background and main use cases, and (4) specific output requirements. Focus especially on understanding when I want to be challenged, corrected, or pushed to think harder rather than given easy answers. Ask follow-up questions as needed. At the end of the conversation, please draft the custom instructions for me to review.
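
For anyone who prefers to script this rather than run it in a chat app, the interview is just a multi-turn conversation in which the model asks the questions and you type the answers. A rough sketch (the loop structure and prompt wording are mine; same openai-package assumptions as above):

```python
# Sketch: let the model interview you, then draft custom instructions.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": (
        "Interview me, one question at a time, to develop learning-focused "
        "custom instructions. When I type DONE, draft the instructions for my review."
    )},
    {"role": "user", "content": "I'm ready. Ask your first question."},
]

while True:
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    text = reply.choices[0].message.content
    print(text)
    messages.append({"role": "assistant", "content": text})
    answer = input("> ")
    messages.append({"role": "user", "content": answer})
    if answer.strip().upper() == "DONE":
        final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        print(final.choices[0].message.content)
        break
```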

These aren’t silver bullets. Students can always choose to override custom instructions. But they’re no more of a band-aid solution than “banning” AI and driving use into the shadows.

Podcasts and Flywheels

All the examples so far are strategies I give to students. But here’s one of my personal favorites. Lots of good podcasters choose to use AI to learn, which helps them ask better questions of experts on their podcasts, which helps me to learn about a much wider range of topics than I did pre-ChatGPT.

In turn, I use AI to extend my Zone of Proximal Development. When I’m listening to an AI researcher on Latent Space, or a biochemist on Mindscape, and a technical detail goes over my head, I sometimes choose to pause the podcast, switch to the Claude app, provide a link to the transcript (which the podcaster generated with AI), ask for an explanation (using voice input, which is much faster than typing), follow up if needed, then switch back to the podcast. I do all of this from my phone, while walking around campus.

These exemplify the flywheel effects that are enabled by choosing to use AI to learn. They also seem quaint, relative to some AI Native workflows.

The point is: AI amplifies intent.

Choose to input lazy prompts which avoid thinking? Produce slop. Choose to write demanding prompts affording learning? Build a nuclear fusor.

None of this is to say that humanities faculty need to match HudZah’s technical sophistication. This Fall, we have to teach what we’ve always taught: how to reason critically, how to question comfortable assumptions, how to sit with ambiguity, how to entertain opposing views charitably. One difference between my “Digital Wisdom” course and those with AI “bans” is the recognition that these skills apply at least as much, maybe more, to prompting AI as they do to reading texts or writing essays.


You can subscribe to posts by Dr. Skorburg about his course here.

* Beyond learning-focused applications, custom instructions are generally quite useful for getting LLMs to stop doing things you find annoying or unhelpful. I get a lot of mileage out of custom instructions like: “Write at the level of a tenured academic”; “Avoid flowery, overly cheery language, sycophantic responses, or engagement-driven questions”; “Never use phrases like ‘fascinating’ or ‘great point,’ or ask follow-up questions for engagement”; “Provide comprehensive, detailed explanations without concern for length”; “Prioritize critical, analytical thinking over tone-matching or making the user feel good”; “Avoid unnecessary juxtapositions of the form ‘It’s not X, it’s Y’.”


Lila Sciences raises $235 million to expand AI-driven research platform (The Pharmaletter)


Lila Sciences has secured $235 million in Series A financing, co-led by Braidwell and Collective Global, at a valuation of about $1.23 billion. The Massachusetts-based company, founded by Flagship Pioneering in 2023, is building an artificial intelligence platform designed to automate and accelerate the scientific method across multiple disciplines.

The latest financing follows a $200-million seed round in March and will be used to hire staff and open new sites in Boston, San Francisco and London. These locations will house the company’s so-called AI Science Factories, facilities that integrate AI, robotics and laboratory systems to design and run experiments at scale. Lila says these factories have already conducted hundreds of thousands of studies across life science, chemistry and materials science.





Gachon University establishes AI·Computing Research Institute – 조선일보





Tech war: Tencent pushes adoption of Chinese AI chips as mainland cuts reliance on Nvidia



The Shenzhen-based tech conglomerate’s cloud computing unit, Tencent Cloud, said it was supporting “mainstream domestic chips” in its AI computing infrastructure, without naming any Chinese integrated circuit brand.

Tencent has “fully adapted to mainstream domestic chips” and “participates in the open-source community”, Tencent Cloud president Qiu Yuepeng said at the company’s annual Global Digital Ecosystem Summit on Tuesday.

It is a commitment that reflects growing efforts in the country’s semiconductor industry and AI sector to push forward Beijing’s tech self-sufficiency agenda amid US export restrictions on China and rising geopolitical tensions.
Tencent Cloud unveils support for Chinese-designed AI chips at the company’s annual Global Digital Ecosystem Summit. Photo: Weibo


