AI Insights

How Artificial Intelligence Is Reshaping Student Learning and Teaching

A professor negotiates with ChatGPT

ChatGPT, I’m teaching Moby Dick for the umpteenth time this semester. I probably shouldn’t be telling you this, but I just don’t have the energy to come up with a writing prompt for my students. Can you help?

Is “Discuss the symbolism of the whale” the best you can do? My eyes glazed over just reading it. ChatGPT, could you spice things up with something a bit more relevant to young people today?


I’ll admit that “Comment on Melville’s toxic masculinity” wouldn’t be my own first choice for a prompt. But the kids will like it, so let’s go with it. Now I have another question: how should I grade their essays?

You’re right about grade inflation, ChatGPT. But did you have to rub it in by replying, “Don’t worry, everyone in your class will get an A or A-minus”? That was just cruel. Anyway, how should I decide who gets the higher grade?

“Reward the most original arguments”? Are you for real? (Don’t answer that.) We both know that the students will be using you to write their essays, just like I’m using you to grade them. Speaking of which: How about a rubric?

Thanks, ChatGPT. Your five-part grading rubric is going to make my life easier. And things will be even easier if I can just feed the students’ essays to you and let you fill in the boxes. Can we make that happen?

“Yes, but I might hallucinate now and again” isn’t giving me a lot of confidence, ChatGPT. Full disclosure: I did some of my own hallucinating back in college. Are you basically telling me that you’re tripping?

ChatGPT, thanks for confirming that you’re not on LSD. And I’ll try to be more literal from now on. Can you please draw up a lesson plan that I can use for my three class sessions about Moby Dick?

Since you asked: Yes, please design group activities for each lesson. I mean, do I have to do everything?

ChatGPT, I like the idea of making one side of the room Team Ishmael and the other Team Ahab. But the exercise won’t work unless the students have read the book. How can I make sure they have done that?

An in-class quiz? Are you kidding? ChatGPT, that’s, like, so high school. My students are grown-ups, and I need to treat them that way.

“Grown-ups need to be held accountable” is business-speak, ChatGPT. I’m a humanities guy, remember? I want my students to suck out all the marrow of life, just like Thoreau said. How can I help them do that?

“Assign Walden” presumes they’ll actually read Walden instead of skimming the bland summaries that you and your fellow bots generate. We’re back where we started. Any other ideas?

I take your point: if I want the students to live a life of the mind, I need to model that. But how? When you can do everything, what’s left to be done?

Sorry, ChatGPT, but “A Large Language Model can scan huge swaths of text, yet it can’t feel emotions” doesn’t really answer my question. My job is to write things, not feel things. And I’m afraid you’re going to take my job soon, along with almost any gig my students might want. How’s that for a feeling?

I know, I know, you just said you can’t feel stuff. Sorry.

“Apology accepted”? So you do have feelings, after all!

ChatGPT, please create a 650-word satire of yourself in the voice of an amiable but baffled senior professor. Make it kind of cute, in an old-person’s kind of way. But don’t make it too cute, or everyone will know that you wrote it. Do we understand each other?

Jonathan Zimmerman teaches education and history at the University of Pennsylvania. He is author of “Whose America? Culture Wars in the Public Schools” and eight other books. He really wrote those books. He wrote this column, too.



Let AI Decide Whether You Should Be Covered or Not

Donald Trump says he is Making America Great Again, which seems like it might be code for: making everything shittier, less affordable, and less efficient. Certainly, when it comes to the realm of public services, the White House seems to be doing everything in its power to make the century-old social welfare programs—like Social Security and Medicare—significantly less helpful.

The latest unfortunate example of this unfurled itself this week with the announcement of a new pilot program being trialed by the Centers for Medicare and Medicaid Services. The pilot, which the New York Times reports is scheduled to begin next year in six different states, will use artificial intelligence software to determine whether certain kinds of coverage are “appropriate” or not. In a press release on the agency’s website that feels very DOGE-like, the CMS notes that its new program will “Target Wasteful, Inappropriate Services in Original Medicare.” It reads: 

The Centers for Medicare & Medicaid Services (CMS) is announcing a new Innovation Center model aimed at helping ensure people with Original Medicare receive safe, effective, and necessary care.

Yes, you wouldn’t want to have unnecessary care, would you? That would be terrible. The press release continues:

Through the Wasteful and Inappropriate Service Reduction (WISeR) Model, CMS will partner with companies specializing in enhanced technologies to test ways to provide an improved and expedited prior authorization process relative to Original Medicare’s existing processes, helping patients and providers avoid unnecessary or inappropriate care and safeguarding federal taxpayer dollars.

Prior authorization is the process whereby medical providers are required to check with insurance companies before providing certain types of care. Traditionally, people on Original Medicare have not had to worry about this sort of thing, though those using the more “modernized” program, Medicare Advantage, seem to get hit with it all the time. Under the pilot, recipients of Original Medicare will also be subjected to prior authorization. The AI algorithms will be used to determine whether the care recipients are getting represents an “appropriate” expenditure of “federal taxpayer dollars.” This is all packaged by the government as if it’s doing you some sort of favor. The press release states:

The WISeR Model will test a new process on whether enhanced technologies, including artificial intelligence (AI), can expedite the prior authorization processes for select items and services that have been identified as particularly vulnerable to fraud, waste, and abuse, or inappropriate use.

The New York Times notes that algorithms of this sort have already been subjected to litigation, and that the AI companies involved “would have a strong financial incentive to deny claims”; critics have already referred to the new pilot as an “AI death panels” program. Gizmodo reached out to the government for comment.




Wall Street’s Battle Over Which Road to Take


As investments in artificial intelligence continue to soar, some analysts are raising alarms about a looming bubble that could burst and trigger broader market declines. Others, however, say they’ve never been so sure that it is a growing opportunity.

So who is right? Well, on Wall Street there’s a pick-your-flavor opinion for whatever you want to back, so we can’t say for sure. But we can show you what each side is thinking.

First, the bear case: the sector is overvalued. Analysts, investors, and even the CEOs of AI giants have expressed concern that current valuations of AI-related stocks may be disconnected from their underlying fundamentals.

The rapid rally in companies involved in AI hardware, software, and infrastructure—including chipmakers, cloud providers, and automation firms—has driven valuations to levels that many consider unsustainable.

Why does that matter? Because everything that goes up must eventually come down.

Recent market volatility and warnings from veteran investors suggest that a sudden reassessment of valuations could trigger a significant downturn, similar to past technology and internet bubbles.

The hype men

Second, the bull case: growth is why those valuations are worth it.

Despite recent concerns about overvaluation and a possible slowdown in AI-related growth, UBS analysts reaffirmed their positive outlook on the sector this week, buoyed by Nvidia’s hotly anticipated quarterly results.

In a note released after Nvidia reported earnings that exceeded expectations (if only barely), UBS said that the core case for AI investment remains intact.

“While valuations might appear stretched in the short term, the fundamental need for AI technology across industries continues to grow,” UBS wrote in a note to investors.

The firm highlighted Nvidia’s role as a leader in semiconductor and AI infrastructure, emphasizing that the company’s robust revenue growth, projected at 48% for the current quarter, is a sign of ongoing demand for AI hardware and software solutions.

Analysts also pointed out that the broader enterprise move toward integrating AI is supported by increasing capital spending, which bodes well for the sector’s long-term prospects.

“Investors should maintain conviction,” UBS added, “as the demand for scalable, high-performance AI platforms is only poised to accelerate.”

Market experts agree that while short-term volatility is inevitable, the fundamental structural drivers, such as AI adoption in cloud computing, autonomous vehicles, and the enterprise, suggest the sector’s growth story remains robust for the foreseeable future.

The haters

Not everyone is as bullish on AI as UBS.

Take OpenAI CEO Sam Altman, a man who is watching billions of dollars being poured into his competitors. Altman caused a major market rout when he said that investors are getting “over-excited” about AI.

“Are we in a phase where investors as a whole are over-excited about AI? My opinion is yes. Is AI the most important thing to happen in a very long time? My opinion is also yes,” he told The Verge, adding that he thinks some valuations of AI start-ups are “insane” and “not rational.”

Investors are also increasingly wary after reports that Meta is considering a “downsizing” of its artificial intelligence division, with some executives expected to depart.

This potential shift marks a notable departure from Meta CEO Mark Zuckerberg’s recent heavy investments in transforming the company’s AI operations.

Over the past few months, Zuckerberg has championed a major overhaul of Meta’s AI strategy, emphasizing its critical role in enhancing user experience and competing with rivals like OpenAI and Google.

The New York Times cited sources close to the company, indicating that the restructuring could lead to significant layoffs or a shakeup in leadership.

The planned changes have raised questions among market watchers about whether Meta’s aggressive AI ambitions are being reassessed, or whether internal challenges are forcing a strategic pivot. The move signals a period of uncertainty for Meta’s AI efforts, which had been a key part of Zuckerberg’s vision for the company’s future growth.

So full speed ahead or hit the brakes?

While some experts acknowledge the transformative potential of AI, they caution investors to remain vigilant and avoid chasing speculative gains that lack proper valuation.

“The risk is that we are in a man-made bubble that will eventually burst, causing widespread damage,” said industry veteran Michael Johnson.

“Even when the dotcom bubble burst, there were a handful of fairly obvious winners that eventually came roaring back,” said CNBC‘s Jim Cramer. “If you gave up on Amazon in 2001, you missed the $2 trillion (£1.4 trillion) boat.”

Cramer has been investigated by the Securities and Exchange Commission at least once, and has also drawn criticism for past comments on market manipulation.


