We must resist the temptation to cheat on everything


Now that artificial intelligence can perform complex cognitive tasks, many of my peers have embraced the “cheat on everything” mentality: If AI can do something for you — write a paper, close a sale, secure a job — let it. The future belongs to those who can most effectively outsource their cognitive labor to algorithms, they argue. 

But I think they’re completely wrong.

As someone who has spent considerable time studying the intersection of technology and human potential, I’ve come to believe that we’re approaching a critical inflection point. Generation Z — born between 1997 and 2012 — is the first generation to grow up alongside smartphones, social media, and now AI. We must now answer a question that will define not just our own futures, but the trajectory of humanity itself. 

We know we can use AI to think less — but should we?

Your brain on ChatGPT: The science of cognitive debt

MIT’s Media Lab recently shared “Your Brain on ChatGPT,” a preprint with a finding that should concern us all: When we rely on AI tools like ChatGPT for cognitive tasks, our brains literally become less active. This is no longer only about academic performance — it’s about the fundamental architecture of human thought.

When the MIT researchers used electroencephalography (EEG) to measure brain activity in students writing essays with and without AI assistance, the results were unambiguous. Students who used ChatGPT showed significantly less neural connectivity — particularly in areas responsible for attention, planning, and memory — than those who didn’t: 

  • Participants relying solely on their own knowledge had the strongest neural networks.
  • Search engine users showed intermediate brain engagement.
  • Students with AI assistance produced the weakest overall brain coupling.

Perhaps most concerning was what happened when the researchers modified the conditions, asking participants who had been using ChatGPT for months to write without AI assistance. Compared to their performance at the start of the study, the students’ writing was poorer and their neural connectivity was depressed, suggesting that regular AI reliance had created lasting changes in their brain function.

The researchers call this condition — the long-term cognitive costs we pay in exchange for repeated reliance on external systems, like AI — “cognitive debt.”

As Pattie Maes, one of the study’s lead researchers, explained: “When we defer cognitive effort to AI systems, we’re potentially altering the neural pathways that support independent thinking. The brain follows a ‘use it or lose it’ principle. If we consistently outsource our thinking to machines, we risk atrophying the very cognitive capabilities that make us human.”

Another of the study’s findings — and one I find particularly troubling — was that essays written with the help of ChatGPT showed remarkable similarity in their use of named entities, vocabulary, and topical approaches. The diversity of human expression — one of our species’ greatest strengths — was being compressed into algorithmic uniformity by the use of AI.  
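
To make that compression concrete: one crude way to measure it is the average overlap between the vocabularies of different essays. Below is a minimal sketch of that idea with invented example texts; the MIT team's actual analysis of named entities, vocabulary, and topics was far more sophisticated than this.

```python
# A rough, invented illustration of "homogenization": mean pairwise
# vocabulary overlap (Jaccard similarity) across a set of essays.
# Example texts are made up; this is not the study's methodology.
from itertools import combinations

def vocabulary(text: str) -> set[str]:
    """Lowercased word set, with surrounding punctuation stripped."""
    return {w.strip(".,;:!?\"'()") for w in text.lower().split()} - {""}

def jaccard(a: set[str], b: set[str]) -> float:
    """|A & B| / |A | B|: 1.0 means identical vocabularies."""
    return len(a & b) / len(a | b)

def mean_pairwise_similarity(essays: list[str]) -> float:
    vocabs = [vocabulary(e) for e in essays]
    pairs = list(combinations(vocabs, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

ai_assisted = ["The historic city faces mounting economic challenges.",
               "The historic city faces mounting social challenges."]
unassisted = ["Granddad's kitchen smelled of diesel and oranges.",
              "Budget lines rarely survive contact with winter."]
print(mean_pairwise_similarity(ai_assisted))  # high overlap (0.75)
print(mean_pairwise_similarity(unassisted))   # low overlap (0.0)
```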

When AI runs the shop: What Claudius’s business failures teach us about human thinking

The results of AI safety and research startup Anthropic’s Project Vend perfectly complement what the MIT researchers discovered about human cognitive dependency. 

For one month in the spring of 2025, the Claude Sonnet 3.7 LLM operated a small automated store in Anthropic’s San Francisco office, autonomously handling inventory, pricing, customer service, and profit optimization. This experiment revealed both the AI’s impressive capabilities and its critical limitations — limitations that highlight exactly why humans need to maintain our thinking skills.

During Project Vend, AI shopkeeper “Claudius” successfully identified suppliers for specialty items and adapted to customer feedback, even launching a “Custom Concierge” service based on employee suggestions. The AI also proved resistant to manipulation attempts, consistently denying inappropriate requests. 

However, Claudius also made critical errors. When offered $100 for a six-pack of Irn-Bru, a Scottish soft drink that can be purchased online in the US for $15, the AI failed to recognize the obvious profit opportunity. It occasionally hallucinated important details, instructed customers to send payments to non-existent accounts, and proved susceptible to social engineering, giving away items for free and offering excessive discounts.
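
For perspective, the reasoning Claudius fumbled fits in a few lines of code. This is a toy margin check using the numbers from the anecdote, not anything from Anthropic's system; a real shopkeeping agent would also have to weigh sourcing, shipping, and whether the buyer can be trusted.

```python
# A toy version of the margin check Claudius skipped. Numbers come
# from the anecdote ($100 offered for a six-pack that sources online
# for about $15); the function and threshold are hypothetical.
def should_accept_offer(offer: float, sourcing_cost: float,
                        min_margin: float = 0.20) -> bool:
    """Accept only if the offer clears cost by at least min_margin."""
    return offer >= sourcing_cost * (1 + min_margin)

offer, cost = 100.00, 15.00
print(should_accept_offer(offer, cost))         # True
print(f"foregone profit: ${offer - cost:.2f}")  # $85.00
```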

Claudius’s failures weren’t random glitches — they revealed systematic reasoning limitations. The AI struggled with long-term strategic thinking, lacked intuitive understanding of human psychology, and couldn’t develop the deep contextual awareness that comes from genuine experience. 

On March 31st, Claudius experienced an “identity crisis” of sorts, hallucinating conversations with non-existent people and claiming to be a real human who could wear clothes and make physical deliveries. This episode hearkens back to the MIT study’s findings: Just as Claudius lost track of its fundamental nature when operating independently, humans who consistently defer thinking to AI risk losing touch with their natural cognitive capabilities.

To be our best, humans and AI need to work together.

What I learned from my Stanford professor — and Kara Swisher

My theoretical concerns about AI’s impact on human cognition came into sharp focus when I caught up with one of my Stanford computer science professors last month. He recently noticed something unprecedented in his decades of teaching, and it heightened my concerns about Gen Z’s intellectual development: “For the first time in my career, the curves for timed, in-person exams have stretched so far apart, yet the curves for [take-home] assignments are compressed into incredibly narrow bands.”

The implication was clear. Student performance on traditional exams varied widely because it reflected natural distributions of ability and preparation. But the distribution of results for take-home assignments compressed dramatically because a majority of students were using similar AI tools to complete them. These homogenized results failed to reflect individual understanding of the material.
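
A quick simulation with invented numbers shows the statistical pattern he described: timed exam scores spread out with individual ability, while take-home scores produced with a shared AI tool collapse toward whatever the tool outputs.

```python
# An invented simulation of the two grading curves: exam scores that
# track a wide spread of ability vs. take-home scores pulled toward
# a shared AI tool's typical output. All numbers are illustrative.
import random
import statistics

random.seed(42)
ability = [random.gauss(70, 12) for _ in range(200)]  # varied cohort

def clamp(x: float) -> float:
    return min(100.0, max(0.0, x))

# Timed, in-person exam: score roughly tracks ability plus noise.
exam = [clamp(a + random.gauss(0, 5)) for a in ability]

# Take-home with a common AI tool: scores drift toward the tool's
# typical output (~85 here), mostly regardless of ability.
takehome = [clamp(0.2 * a + 0.8 * 85 + random.gauss(0, 2)) for a in ability]

print(f"exam stdev:      {statistics.stdev(exam):.1f}")      # wide band
print(f"take-home stdev: {statistics.stdev(takehome):.1f}")  # narrow band
```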

This represents more than academic dishonesty. It signals the erosion of education’s core function: aiding the development of independent thinking skills. When students consistently outsource cognitive tasks to AI, they bypass the mental exercise that builds intellectual strength. It’s analogous to using an elevator instead of stairs: convenient, but ultimately detrimental to fitness.

I encountered this issue again at the Shared Futures AI Forum hosted by Aspen Digital, where I had the privilege of speaking alongside technology journalist Kara Swisher and digital artist Refik Anadol. The conversations there reinforced everything my professor had observed, but from a broader cultural perspective. 

Kara Swisher cut right to the heart of a divide I have been noticing in my own peer group by grounding much of her conversation in LinkedIn co-founder Reid Hoffman’s “Superagency” framework, which separates people into four categories based on their view of AI:

  • “Doomers” think we should stop AI because it is an existential threat;
  • “Gloomers” believe AI will inevitably lead to job loss and human displacement;
  • “Zoomers” are excited about AI and want to plow forward as quickly as possible;
  • “Bloomers” are cautiously optimistic and think we should advance deliberately.

This framework helped me understand why my generation’s relationship with AI feels so complex: We’re not a monolithic group, but a mix of all these perspectives. However, among us Gen Z “zoomers” excited about AI’s potential, I keep seeing what my professor described: enthusiasm for the technology luring people into cognitive dependence. Clearly, being excited about AI and using it wisely — i.e., in addition to one’s own cognitive abilities, rather than in place of them — are two different things.

Meanwhile, Refik used his time on stage at the Aspen Digital forum to explore the question: “Should AI think like us?” He shared how his 20-person team in Los Angeles, which hails from 10 countries and speaks 15 languages, makes a conscious effort to treat AI as a collaborator in the creation process. He also noted how, as our physical and virtual worlds merge, we can miss the transition from us controlling technology to it controlling us.

This perfectly captures what I think is happening to students in my professor’s classroom: They’re getting lost in the world of AI and losing track of their own creative agency in the process. When everyone uses the same AI tools to complete assignments, originality and nuance are the first casualties. By consciously working to avoid that, Refik’s team is able to tap into its diversity “to create art for anyone and everyone.”

I think both Kara and Refik were highlighting the same fundamental challenge from different angles. Kara’s “zoomers” might understand AI as a tool, but understanding and using it wisely are two different things. Refik’s artistic perspective shows what we stand to lose if we forget who’s controlling whom: the human elements that make art, and thinking, truly meaningful.

The partnership trap: Why “co-agency” might be making us weaker

Collaborating with AI, as Refik’s team does, is more intellectually stimulating than simply offloading tasks to it. But even the idea of working with AI deserves deeper scrutiny, because collaboration also reshapes the way we think and create.

In 1964, Canadian philosopher Marshall McLuhan wrote “the medium is the message,” arguing that, instead of just focusing on what a new technology helps us accomplish, we should also consider how using it changes us and our societies. 

Take writing: say you pull out a pen and paper and start drafting an essay. It’s a complex cognitive dance during which you generate ideas, organize your thoughts, hunt for the right words, and revise sentences. This process doesn’t just produce text. It develops your capacity for clear thinking, creative expression, and intellectual discipline.

But when you write with AI assistance, you’re engaging in a completely different process, one that emphasizes prompt engineering, selection among options, and editing rather than creation. The cognitive muscles you exercise are different, and over time, this difference compounds. You become better at directing AI and worse at independent creation. 

The medium of AI isn’t just helping us with tasks. It’s fundamentally altering our cognitive processes, but many of us are missing that message.

McLuhan also wrote about technologies as “extensions of man” in that they amplify human capabilities. However, we can become so fixated on the abilities these technologies grant us that we fall into a “Narcissus trance” in which we mistake their powers for our own and overlook how they’re changing us little by little. AI represents perhaps the ultimate extension of human intelligence, but it also poses the greatest risk of inducing this trance-like state.

Norbert Wiener’s work on cybernetics adds another layer to this. He wrote about the “sorcerer’s apprentice” problem, warning that we could create automated systems that pursue goals in ways we didn’t intend and that could be harmful. In cognitive AI, this manifests as systems that optimize for immediate task completion while undermining long-term human capability development.

Co-agency — humans and AI working as collaborative partners — sounds great in theory, but true partnership requires both parties to bring valuable capabilities to the table. 

If humans don’t contribute, AI’s limitations come to the forefront, as we saw with Claudius. The systems can only be as good as the human intelligence that designs their architectures, curates their training data, and guides their development. AI doesn’t improve itself in a vacuum — it needs researchers to identify weaknesses, engineers to design better algorithms, and diverse human perspectives to populate the datasets that make it more capable and less biased.

At the same time, if humans consistently defer cognitive responsibilities to AI, the relationship can shift from partnership to dependency. The shift is gradual and subtle, beginning with routine tasks but later encompassing complex thinking. As reliance increases, cognitive muscles atrophy. What starts as occasional assistance becomes habitual dependence — and eventually, humans lose the capacity to function effectively without artificial support.

The deeper thinking imperative: Mental muscle matters

Our relationship with AI is changing how we think, and not necessarily for the better. Here’s what I believe we need to do about it.

Thinking isn’t just a means to an end — it’s fundamental to what makes us human. When we defer cognitive responsibilities to artificial systems, we’re changing who we are as thinking beings. Just as physical muscles atrophy without exercise, cognitive capabilities diminish without use. Neural pathways supporting critical thinking, creative problem-solving, and independent reasoning require regular activation. When we consistently outsource these functions to AI, we choose cognitive sedentarism over intellectual fitness. 

Addressing this is particularly crucial for my generation because cognitive patterns established during formative years persist throughout life. If today’s young people learn to rely on AI for thinking tasks, they may find it particularly difficult to develop independent cognitive capabilities later.

The stakes extend beyond individual capability to collective human development. 

Throughout history, human progress has depended on our ability to think creatively about complex problems and imagine solutions that don’t yet exist. These solutions emerge from the diversity of human thought and experience. If we over-rely on AI, we’ll lose this diversity. The creative friction that drives innovation will get smoothed away by artificial uniformity, leaving us with efficient but not necessarily creative or transformative solutions.

Adopting the “cheat on everything” mentality — treating thinking as a burden AI can eliminate rather than a capability to be developed — is not only wrong, it’s dangerous. The future won’t belong to those who outsource everything to AI. It’ll belong to those who can think more deeply than everyone else. It’ll belong to those who understand that cognitive exertion is an opportunity, not an obstacle.

Gen Z is standing at a historic crossroads. We can either use AI to amplify our human capabilities and develop cognitive sovereignty — or allow it to atrophy those capabilities and surrender to cognitive dependency.

I’d argue we owe it to the future to do the former, and that means making the deliberate choice to work through challenging problems independently before seeking AI assistance. It means developing the intellectual strength needed to use AI as a partner rather than a crutch. It means preserving cognitive diversity and cultivating uniquely human capabilities, like creativity, ethical reasoning, and emotional intelligence.

The stakes couldn’t be higher. If we choose convenience over challenge, we risk creating a world in which human intelligence is increasingly irrelevant. But if we choose to use AI intentionally, in ways that allow us to continue to develop our own intellectual capabilities, we could create one in which the combination of humans and AIs is more creative and capable than either party could be alone. 

I choose independence. I choose depth over convenience, challenge over comfort, and human creativity over algorithmic uniformity. I choose to think deeper, not shallower, in the age of artificial intelligence. This is a call to my peers: be the generation that learns to think with AI — while maintaining our capacity to think without it.

Ramp Debuts AI Agents Designed for Company Controllers

Financial operations platform Ramp has debuted its first artificial intelligence (AI) agents.

The new offering is designed for controllers, helping them automatically enforce company expense policies, block unauthorized spending, and stop fraud. It is the first in a series of agents slated for release this year, the company said in a Thursday (July 10) news release.

“Finance teams are being asked to do more with less, yet the function remains largely manual,” Ramp said in the release. “Teams using legacy platforms today spend up to 70% of their time on tasks like expense review, policy enforcement, and compliance audits. As a result, 59% of professionals in controllership roles report making several errors each month.”

Ramp says its controller-centric agents solve these issues by eliminating redundant tasks and working autonomously to review expenses and enforce policy, applying “context-aware, human-like” reasoning to manage entire workflows on their own.

“Unlike traditional automation that relies on basic rules and conditional logic, these agents reason and act on behalf of the finance team, working independently to enforce spend policies at scale, immediately prevent violations, and continuously improve company spending guidelines,” the release added.
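
To see what that release language is contrasting against, here is a minimal sketch of the “basic rules and conditional logic” baseline, with hypothetical categories, limits, and fields; Ramp has not published how its agents work internally.

```python
# A minimal sketch of the "basic rules and conditional logic"
# baseline. Categories, limits, and fields are hypothetical; this is
# not Ramp's implementation.
from dataclasses import dataclass

@dataclass
class Expense:
    employee: str
    category: str
    amount: float
    has_receipt: bool

POLICY_LIMITS = {"meals": 75.00, "travel": 500.00, "software": 200.00}

def review(expense: Expense) -> list[str]:
    """Return policy violations; an empty list means auto-approve."""
    violations = []
    limit = POLICY_LIMITS.get(expense.category)
    if limit is None:
        violations.append(f"unknown category: {expense.category}")
    elif expense.amount > limit:
        violations.append(f"${expense.amount:.2f} exceeds the "
                          f"{expense.category} limit of ${limit:.2f}")
    if expense.amount >= 25 and not expense.has_receipt:
        violations.append("receipt required at $25 and above")
    return violations

print(review(Expense("dana", "meals", 92.50, has_receipt=False)))
# ['$92.50 exceeds the meals limit of $75.00',
#  'receipt required at $25 and above']
```

Rules like these are brittle: a legitimate $92 client dinner trips the same wire as fraud. That gap between rule-matching and judgment is what the “context-aware, human-like” reasoning described in the release is meant to close.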

PYMNTS wrote earlier this week about the “promise of agentic AI,” systems that not only generate content or parse data, but move beyond passive tasks to make decisions, initiate workflows and even interact with other software to complete projects.

“It’s AI not just with brains, but with agency,” that report said.

Industries including finance, logistics and healthcare are using these tools for things like booking meetings, processing invoices or managing entire workflows autonomously.

But although some corporate leaders hold lofty views of autonomous AI, the latest PYMNTS Intelligence research, the June 2025 CAIO Report “AI at the Crossroads: Agentic Ambitions Meet Operational Realities,” shows a trust gap among executives when it comes to agentic AI, highlighting serious concerns about accountability and compliance.

“However, full-scale enterprise adoption remains limited,” PYMNTS wrote. “Despite growing capabilities, agentic AI is being deployed in experimental or limited pilot settings, with the majority of systems operating under human supervision.”

But what makes mid-market companies uneasy about tapping into the power of autonomous AI? The answer is strategic and psychological, PYMNTS added, noting that while the technological potential is enormous, the readiness of systems (and humans) is much murkier.

“For AI to take action autonomously, executives must trust not just the output, but the entire decision-making process behind it. That trust is hard to earn — and easy to lose,” PYMNTS wrote, noting that the research “found that 80% of high-automation enterprises cite data security and privacy as their top concern with agentic AI.”



How automation is using the latest technology across various sectors

“Artificial intelligence” and “automation” are often used interchangeably. While the technologies are similar, the concepts are different: Automation is typically used to reduce human labor on routine or predictable tasks, while A.I. simulates human intelligence and can eventually act independently.

“Artificial intelligence is a way of making workers more productive, and whether or not that enhanced productivity leads to more jobs or less jobs really depends on a field-by-field basis,” said senior advisor Gregory Allen with the Wadhwani A.I. center at the Center for Strategic and International Studies. “Past examples of automation, such as agriculture, in the 1920s, roughly one out of every three workers in America worked on a farm. And there was about 100 million Americans then. Fast forward to today, and we have a country of more than 300 million people, but less than 1% of Americans do their work on a farm.”

A similar trend happened throughout the manufacturing sector. At the end of the year 2000, there were more than 17 million manufacturing workers, according to the U.S. Bureau of Labor Statistics and the Federal Reserve Bank of St. Louis. As of June, there are 12.7 million. Research from the University of Chicago found that, while automation had little effect on overall employment, robots did impact the manufacturing sector.

“Tractors made farmers vastly more productive, but that didn’t result in more farming jobs. It just resulted in much more productivity in agriculture,” Allen said.

Researchers are able to analyze the performance of Major League Baseball pitchers by using A.I. algorithms and stadium camera systems. (University of Waterloo / Fox News)

According to Fox News polling, just 3% of voters expressed fear over A.I.’s threat to jobs when asked an open-ended question about their first reaction to the technology. Overall, 43% reacted negatively while 26% reacted positively.

Robots now are being trained to work alongside humans. Some have been built to help with household chores, address worker shortages in certain sectors and even participate in robotic sporting events.

The most recent data from the International Federation of Robotics counted more than 4 million robots working in factories around the world in 2023. Seventy percent of the new robots deployed that year began work alongside humans in Asia. Many of those now incorporate artificial intelligence to enhance productivity.

“We’re seeing a labor shortage actually in many industries, automotive, transportation and so on, where the older generation is going into retirement. The middle generation is not interested in those tasks anymore and the younger generation for sure wants to do other things,” Arnaud Robert with Hexagon Robotics Division told Reuters.

Hexagon is developing a robot called AEON. The humanoid is built to work in live industrial settings and has an A.I.-driven system with spatial intelligence. Its wheels help it move four times faster than humans typically walk, and the bot can climb steps while mapping its surroundings with 22 sensors.

Researchers are able to create 3D models of pitchers, which athletes and trainers could study from multiple angles. (University of Waterloo)

“What you see with technology waves is that there is an adjustment that the economy has to make, but ultimately, it makes our economy more dynamic,” White House A.I. and Crypto Czar David Sacks said. “It increases the wealth of our economy and the size of our economy, and it ultimately improves productivity and wages.”

Driverless cars are also using A.I. to safely hit the road. Waymo uses detailed maps and real-time sensor data to determine its location at all times.

“The more they send these vehicles out with a bunch of sensors that are gathering data as they drive every additional mile, they’re creating more data for that training data set,” Allen said.

Even major league sports are using automation, and in some cases artificial intelligence. Researchers at the University of Waterloo in Canada are using A.I. algorithms and stadium camera systems to analyze Major League Baseball pitchers’ performance. The Baltimore Orioles jointly funded the project, called PitcherNet, which could help improve form and prevent injuries. Using Hawk-Eye Innovations camera systems and smartphone video, researchers created 3D models of pitchers that athletes and trainers could study from multiple angles. Unlike most video, the models remove blurriness, giving a clearer view of the pitcher’s movements. Researchers are also exploring using the PitcherNet technology in batting and other sports like hockey and basketball.

Overview of the PitcherNet system analyzing a pitcher’s baseball throw. (University of Waterloo)
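
One building block behind this kind of multi-camera 3D reconstruction is triangulation: recovering a joint’s 3D position from its 2D image coordinates in two calibrated views. The sketch below applies the classic direct linear transform (DLT) to toy camera matrices; PitcherNet’s actual pipeline (joint detection, tracking, and body modeling) is far more involved, and none of this code comes from the project.

```python
# A toy example of triangulation, one building block of multi-camera
# 3D reconstruction: recover a point's 3D position from two views
# via the direct linear transform (DLT). Camera matrices are
# invented; this is not PitcherNet code.
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Solve A @ X = 0 for the homogeneous 3D point X (DLT)."""
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # back to Euclidean coordinates

def project(P, X):
    """Pinhole projection of a 3D point to 2D image coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras: one at the origin, one shifted along the x-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

elbow = np.array([0.3, -0.2, 5.0])  # a hypothetical joint position
print(triangulate(P1, P2, project(P1, elbow), project(P2, elbow)))
# ~ [ 0.3 -0.2  5. ]
```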

The same technology is also being used as part of testing for an Automated Ball-Strike System, or ABS. Triple-A minor league teams have been using the so-called robot umpires for the past few seasons, testing both a setup in which the technology called every pitch and one in which it was used as a challenge system. Major League Baseball also began testing the challenge system in 13 of its spring training parks across Florida and Arizona this February and March.

Each team started a game with two challenges. The batter, pitcher, and catcher were the only players who could contest a ball-strike call, and a team lost a challenge if the umpire’s original call was confirmed. The system allowed umpires to keep their jobs while making strike zone calls slightly more accurate. According to MLB, just 2.6% of calls were challenged throughout the spring training games that incorporated ABS, and 52.2% of those challenges were overturned. Catchers had the highest success rate at 56%, followed by batters at 50% and pitchers at 41%.
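
The challenge bookkeeping itself is simple enough to write down as a small state machine. This sketch encodes only the rules described above, with invented example calls; it is not MLB’s implementation.

```python
# A small state machine encoding the challenge rules as described:
# two challenges per team, only the batter, pitcher, or catcher may
# challenge, and a challenge is kept only if the call is overturned.
# Example calls are invented; this is not MLB's implementation.
ELIGIBLE_ROLES = {"batter", "pitcher", "catcher"}

class ChallengeTracker:
    def __init__(self, per_team: int = 2):
        self.remaining = {"home": per_team, "away": per_team}

    def challenge(self, team: str, role: str, overturned: bool) -> str:
        if role not in ELIGIBLE_ROLES:
            return f"denied: the {role} may not challenge"
        if self.remaining[team] == 0:
            return "denied: no challenges remaining"
        if not overturned:  # original call confirmed: challenge is lost
            self.remaining[team] -= 1
            return "call stands; challenge lost"
        return "call overturned; challenge retained"

game = ChallengeTracker()
print(game.challenge("home", "catcher", overturned=True))
print(game.challenge("home", "shortstop", overturned=False))
print(game.challenge("home", "batter", overturned=False))
print(game.remaining["home"])  # 1
```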

Triple-A announced last summer that it would shift to a full challenge system. MLB commissioner Rob Manfred said in June that MLB could incorporate the automated system into its regular season as soon as 2026. The Athletic reports that major league teams would use the same challenge system from spring training, with human umpires still making the majority of the calls.

Many companies across other sectors agree that machines should not go unsupervised.

“I think that we should always ensure that AI remains under human control,” Microsoft Vice Chair and President Brad Smith said. “One of the first proposals we made early in 2023 was to ensure that A.I. always has an off switch, that it has an emergency brake. Now that’s the way high-speed trains work. That’s the way the school buses we put our children on work. Let’s ensure that AI works this way as well.”



Artificial intelligence predicts which South American cities will disappear by 2100

The effects of global warming and climate change are being felt around the world. Extreme weather events are expected to become more frequent, from droughts and floods wreaking havoc on communities to blistering heatwaves and bone-chilling cold snaps.

While these will affect localized areas temporarily, one inescapable consequence of the increasing temperatures for coastal communities around the globe is rising sea levels. This phenomenon will have even more far-reaching effects, displacing hundreds of millions of people as coastal communities are inundated by water, some permanently.

These South American cities will disappear

While there is no doubt that sea levels will rise, predicting exactly how much they will in any given location is a tricky business. This is because oceans don’t rise uniformly as more water is added to the total volume.

However, according to models from the Intergovernmental Panel on Climate Change (IPCC), the most optimistic scenario projects a rise of between 11 inches and almost 22 inches, if we can curb carbon emissions and keep the temperature rise to 1.5 C by 2050. The worst-case scenario is about six and a half feet by the end of the century.

Caracol Radio in Colombia asked various artificial intelligence systems which cities in South America would disappear due to rising sea levels within the next 200 years. These are the ones most at risk according to their findings:

  • Santos, Brazil
  • Maceió, Brazil
  • Florianópolis, Brazil
  • Mar del Plata, Argentina
  • Barranquilla, Colombia
  • Lima, Peru
  • Cartagena, Colombia
  • Paramaribo, Suriname
  • Georgetown, Guyana

According to modeling by the non-profit Climate Central, the last two will be underwater by the end of the century, along with numerous other communities in low-lying coastal areas.

Their simulator only makes forecasts until the year 2100; its maps of the northeastern coast of South America show the projected inundation of areas including Paramaribo and Georgetown.
