
3 Misleading AI Metaphors And One To Make It Right

No, AI is not a tool, an amplifier or a mirror. And by using these metaphors when talking about technology, you make it harder for yourself and everyone else to develop a responsible relationship with your surroundings. Here’s why your typical AI metaphors are misleading – and how you should think and talk about technology instead.

AI Is Not A Tool For What You Want

When we talk about technology as a tool, we think of ourselves as someone who uses the tool for something. It can be to:

  • Do something we would otherwise not be able to do – like when our early ancestors used a bow (premodern technology) to kill an animal instead of just eating fruits
  • Do something faster or more convenient – like when our great-grandparents used a car (modern technology) to get from A to B instead of walking or riding a horse, or
  • Avoid doing something altogether – like when our kids use chatbots (digital technology) to write college papers instead of writing the papers themselves

Throughout history, we have viewed technology as tools and weapons that can be used for different purposes. Lately we have even coined the term ‘technology for good’ to emphasize that technology can be used for good and bad, and that we – the developers, regulators, and users – determine whether it’s used for one or the other.

In other words, thinking about technology as tools makes us think of ourselves as being in control of why and for what technology is used. But, as more and more people emphasize, technology is never neutral. And not just because it reflects the biases of the people who design it and generate the data that feeds it. Also because it has a bias of its own: a simple yet very powerful bias that, no matter what we want, we are better off if we use technology to achieve it than if we don’t.

This bias is the default in all technologies, regardless of why and by whom they were developed, and it takes hold long before we ask ourselves what we want. Therefore, AI is not a tool for what you want. It’s a technology that allows you to effectively do or avoid doing certain things – while making you think that’s exactly what you want.

AI Is Not An Amplifier Of How You Work

Like all other technologies, AI makes you think you want what technology offers. That is, speed, efficiency, and convenience. But unlike other technologies, AI doesn’t hide its default bias. It brings us face to face with its insatiable hunger for data, making us ask ourselves whether speed, efficiency, and convenience are really what we want. Are they all we want? Or do we have other needs that cannot be met with technology?

These kinds of questions have led people to regard AI not only as a tool but as an amplifier of who we are and what we do. In a 2024 research paper, former MIT professor Douglas C. Youvan posits that “AI operates as an amplifier of human traits and tendencies, rather than a neutral or equalizing force”. And he argues that “far from leveling the playing field, AI tends to accentuate what is already present within individuals, pushing them further along the paths they are predisposed to – whether these are cognitive, economic, behavioral, or ideological.”

This stands in contrast to the mantra that has been repeated over and over since OpenAI launched ChatGPT, namely that “AI won’t take your job, but a human using AI will.” Yet both Youvan and those who argue that AI users will soon replace non-users in all parts of work and life regard AI as an amplifier of how you work. And it is not. You are not designed for speed, efficiency, and convenience. In fact, you are not designed at all.

You are born to live, learn, and grow in step with your surroundings. And reinforcing and subjecting yourself to technology’s default bias – that whatever you want, you’re better off using technology than not – doesn’t help you do that.

AI Is Not A Mirror Of Who You Are

Verity Harding is the director of the AI and Geopolitics Project at the University of Cambridge. In a 2024 interview with Forbes India, she said:

“In some ways, AI holds a mirror up to us and shows us what we are like, particularly these generative AI technologies that are built on existing human data and language online, from books, scripts and blog posts. Although the companies involved have tweaked the algorithms to try their best to ensure it doesn’t bring up the worst sides… of course, anything that can show the best side of us can also show the worst.”

Thinking about AI as a mirror is a bit different from thinking about it as a tool or an amplifier. But all three metaphors suffer from the same mistake in that they make you think of AI as something you can use to achieve something else – better performance in the first two cases and a better understanding of yourself and other people in the third.

The problem is not that these metaphors are reductive. The problem is that they reduce your understanding of yourself. Instead of thinking of yourself as someone who asks what you want, how you grow, and who you are, the tech metaphors make you think of yourself as a tech user. That is, someone who stands in front of your surroundings with a purpose to achieve something, rather than someone who is part of your surroundings with a responsibility for your shared future.

AI Is Like Grammar, Shaping How You Think

Tool, amplifier, and mirror are not just misleading metaphors for AI; they are misleading metaphors for all technologies. Because, as the German philosopher Martin Heidegger put it, they make us “utterly blind to the essence of technology” – that is, the default bias that makes us think we are always better off using technology than not using it.

But as mentioned above, AI differs from other technologies in that it does not hide this default bias. Unlike bows, cars, and other premodern and modern technologies, and unlike the internet, social media, and other digital technologies, AI allows us to see technology for what it really is. Namely, something that has as much influence on us as we have on it.

We are never just users of technology, we are always also being used to promote the bias towards faster and more efficient technology. And while we were previously blind to this mutual influence, the data harvesting of large AI companies has made us see clearly that it is not only the nature around us that is being exploited in the name of technology and progress. We are being exploited too.

To capture this insight and to avoid reducing ourselves and each other to tech users, we need a new metaphor for how to think and talk about technology. Heidegger used the term ‘enframing’ (Gestell) to describe how the essence of technology positions us as someone who stands in front of our surroundings with a purpose to achieve something. But I would suggest that an everyday phenomenon and term like ‘grammar’ can do just as well.

Like grammar, technology is a structural system that enables us to understand and navigate the world in a way we otherwise would not be able to – but which also makes it harder for us to understand and navigate the world differently.

Common to technology and grammar is that:

  • We use them before we understand them
  • We (partially) understand how they work, but not how they affect the way we work
  • They make it easier for us to understand and do something (e.g. communicate with people who speak the same language or use the same technologies as we do) and harder for us to understand and do something else (e.g. communicate with people who speak a different language or use different technologies)
  • The more we use them, the less aware we become of the impact they have on how we think and talk about ourselves and our surroundings
  • There is no escaping them – even when we try to transcend grammar/technology, we rely on them, e.g. when we try to express what cannot be expressed in words or musical notes by writing a poem or playing a piano

Thinking and talking about technology as grammar opens up a world of possibilities to discover and discuss new aspects of our relationship with AI – and how it shapes our relationship with everything else.




Anthropic Taps Higher Education Leaders for Guidance on AI

The artificial intelligence company Anthropic is working with six leaders in higher education to help guide how its AI assistant Claude will be developed for teaching, learning and research. The new Higher Education Advisory Board, announced in August, will provide regular input on educational tools and policies.

According to a news release from Anthropic, the board is tasked with ensuring that AI “strengthens rather than undermines learning and critical thinking skills” through policies and products that support academic integrity and student privacy.

As teachers adapt to AI, ed-tech leaders have called for educators to play an active role in aligning AI to educational standards.


“Teachers and educators and administrators should be in the decision-making seat at every critical decision-making point when AI is being used in education,” Isabella Zachariah, formerly a fellow at the U.S. Department of Education’s Office of Educational Technology, said at the EDUCAUSE conference in October 2024. The Office of Educational Technology has since been shuttered by the Trump administration.

To this end, advisory boards or councils involving educators have emerged in recent years among ed-tech companies and institutions seeking to ground AI deployments in classroom experiences. For example, the K-12 software company Otus formed an AI advisory board earlier this year with teachers, principals, instructional technology specialists and district administrators representing more than 20 school districts across 11 states. Similarly, software company Frontline Education launched an AI advisory council last month to allow district leaders to participate in pilots and influence product design choices.

The Anthropic board taps experts in the education, nonprofit and technology sectors, including two former university presidents and three campus technology leaders. Rick Levin, former president of Yale University and former CEO of Coursera, will serve as board chair. Other members include:

  • David Leebron, former president of Rice University
  • James DeVaney, associate vice provost for academic innovation at the University of Michigan
  • Julie Schell, assistant vice provost of academic technology at the University of Texas at Austin
  • Matthew Rascoff, vice provost for digital education at Stanford University
  • Yolanda Watson Spiva, president of Complete College America

The board contributed to a recent trio of AI fluency courses for colleges and universities, according to the news release. The online courses aim to give students and faculty a foundation in the function, limitations and potential uses of large language models in academic settings.

Schell said she joined the advisory board to explore how technology can address persistent challenges in learning.

“Sometimes we forget how cognitively taxing it is to really learn something deeply and meaningfully,” she said. “Throughout my career, I’ve been excited about the different ways that technology can help accentuate best practices in teaching or pedagogy. My mantra has always been pedagogy first, technology second.”

In her work at UT Austin, Schell has focused on responsible use of AI and engaged with faculty, staff, students and the general public to develop guiding principles. She said she hopes to bring feedback from that community, as well as education science, to the board’s regular meetings. She said she participated in vetting existing Anthropic ed-tech tools, like Claude’s learning mode, with this in mind.

In the weeks since the board’s announcement, the group has met once, Schell said, and expects to meet regularly in the future.

“I think it’s important to have informed people who understand teaching and learning advising responsible adoption of AI for teaching and learning,” Schell said. “It might look different than other industries.”

Abby Sourwine is a staff writer for the Center for Digital Education. She has a bachelor’s degree in journalism from the University of Oregon and worked in local news before joining the e.Republic team. She is currently located in San Diego, California.






Duke AI program emphasizes critical thinking for job security

Duke’s AI program is spearheaded by a professor who is not just teaching; he also built his own AI model.

Professor Jon Reifschneider says we’ve already entered a new era of teaching and learning across disciplines.

He says, “We have folks that go into healthcare after they graduate, go into finance, energy, education, etc. We want them to bring with them a set of skills and knowledge in AI, so that they can figure out: ‘How can I go solve problems in my field using AI?'”

He wants his students to become literate in AI, which is a challenge in a field he describes as a moving target. 

“I think for most people, AI is kind of a mysterious black box that can do somewhat magical things, and I think that’s very risky to think that way, because you don’t develop an appreciation of when you should use it and when you shouldn’t use it,” Reifschneider told WRAL News.

Student Harshitha Rasamsetty said she is learning the strengths and shortcomings of AI.

“We always look at the biases and privacy concerns and always consider the user,” she said.

The students in Duke’s engineering master’s programs come from all backgrounds and countries, and span a wide range of ages. Jared Bailey paused his insurance career in Florida to get a handle on the AI being deployed company-wide.

He was already using AI tools when he wondered, “What if I could crack them open and adjust them myself and make them better?”

John Ernest studied engineering in undergrad, but sought job security in AI.

“I hear news every day that AI is replacing this job, AI is replacing that job,” he said. “I came to a conclusion that I should be a part of a person building AI, not be a part of a person getting replaced by AI.”

Reifschneider thinks warnings about AI taking jobs are overblown. 

In fact, he wants his students to come away understanding that humans have a quality AI can’t replace: critical thinking.

Reifschneider says AI “still relies on humans to guide it in the right direction, to give it the right prompts, to ask the right questions, to give it the right instructions.”

“If you can’t think, well, AI can’t take you very far,” Bailey said. “It’s a car with no gas.”

Reifschneider told WRAL that he thinks children as young as elementary school students should begin learning how to use AI, when it’s appropriate to do so, and how to use it safely.

WRAL News went inside Wake County schools to see how AI is being used and what safeguards the district is using to protect students. Watch that story Wednesday on WRAL News.




WA state schools superintendent seeks $10M for AI in classrooms

This article originally appeared on TVW News.

Washington’s top K-12 official is asking lawmakers to bankroll a statewide push to bring artificial intelligence tools and training into classrooms in 2026, even as new test data show slow, uneven academic recovery and persistent achievement gaps.

Superintendent of Public Instruction Chris Reykdal told TVW’s Inside Olympia that he will request about $10 million in the upcoming supplemental budget for a statewide pilot program to purchase AI tutoring tools — beginning with math — and fund teacher training. He urged legislators to protect education from cuts, make structural changes to the tax code and act boldly rather than leaving local districts to fend for themselves. “If you’re not willing to make those changes, don’t take it out on kids,” Reykdal said.

The funding push comes as new Smarter Balanced assessment results show gradual improvement but highlight persistent inequities. State test scores have ticked upward, and student progress rates between grades are now mirroring pre-pandemic trends. Still, higher-poverty communities are not improving as quickly as more affluent peers. About 57% of eighth graders met foundational math progress benchmarks — better than most states, Reykdal noted, but still leaving four in 10 students short of university-ready standards by 10th grade.

Reykdal cautioned against reading too much into a single exam, emphasizing that Washington consistently ranks near the top among peer states. He argued that overall college-going rates among public school students show they are more prepared than the test suggests. “Don’t grade the workload — grade the thinking,” he said.

Artificial intelligence, Reykdal said, has moved beyond the margins and into the mainstream of daily teaching and learning: “AI is in the middle of everything, because students are making it in a big way. Teachers are doing it. We’re doing it in our everyday lives.”

OSPI has issued human-centered AI guidance and directed districts to update technology policies, clarifying how AI can be used responsibly and what constitutes academic dishonesty. Reykdal warned against long-term contracts with unproven vendors, but said larger platforms with stronger privacy practices will likely endure. He framed AI as a tool for expanding customized learning and preparing students for the labor market, while acknowledging the need to teach ethical use.

Reykdal pressed lawmakers to think more like executives anticipating global competition rather than waiting for perfect solutions. “If you wait until it’s perfect, it will be a decade from now, and the inequalities will be massive,” he said.

With test scores climbing slowly and AI transforming classrooms, Reykdal said the Legislature’s next steps will be decisive in shaping whether Washington narrows achievement gaps — or lets them widen.

TVW News originally published this article on Sept. 11, 2025.


Paul W. Taylor is programming and external media manager at TVW News in Olympia.


