
Tools & Platforms

The End of Work as We Know It


For centuries, work has defined us. It has given us identity, purpose, and status in society. But what happens when work, our source of income, itself begins to disappear? Not because of war, depression, or outsourcing, but because of algorithms. What does it mean to work in an AI-driven economy? I spent the month of July interviewing several experts from diverse corners of the labor landscape. Through these conversations, a complex and often contradictory picture emerges, one filled with both promise and peril, efficiency and exploitation, displacement and dignity.

The View from the Top: Efficiency, Experience

From the C-suite, the AI revolution is viewed with a mixture of excitement and urgency. Elijah Clark, a consultant who advises companies on AI implementation, is blunt about the bottom line. “CEOs are extremely excited about the opportunities that AI brings,” he says. “As a CEO myself, I can tell you, I’m extremely excited about it. I’ve laid off employees myself because of AI. AI doesn’t go on strike. It doesn’t ask for a pay raise. These things that you don’t have to deal with as a CEO.”

This unvarnished perspective reveals a fundamental truth about the corporate embrace of AI: it is, at its core, a quest for efficiency and profitability. And in this quest, human labor is often seen as a liability, an obstacle to be overcome. Clark recalls firing 27 out of 30 student workers in a sales enablement team he was leading. “We can get done in less than a day, less than an hour, what they were taking a week to produce,” he explains. “In the area of efficiency, it made more sense to get rid of people.”

Peter Miscovich, JLL’s Global Future of Work Leader, sees AI as an “accelerant of a trend that was underway for the last 40, 50 years.” He describes a “decoupling” of headcount from real estate and revenue, a trend that is now being supercharged by AI. “Today, 20% of the Fortune 500 in 2025 has less headcount than they had in 2015,” he notes.

But Miscovich also paints a picture of a future where the physical workplace is not obsolete but transformed. He envisions “experiential workplaces” that are “highly amenitized” and “highly desirable,” like a “boutique hotel.” In these “Lego-ized” offices, with their movable walls and plug-and-play technology, the goal is to create a “magnet” for talent. “You can whip the children, or you can give the children candy,” he says. “And, you know, people respond better to the candy than to the whipping.”

Yet, even in this vision of a more pleasant workplace, the specter of displacement looms large. Miscovich acknowledges that companies are planning for a future where headcount could be “reduced by 40%.” And Clark is even more direct. “A lot of CEOs are saying that, knowing that they’re going to come up in the next six months to a year and start laying people off,” he says. “They’re looking for ways to save money at every single company that exists.”

The Hidden Human Cost: “It’s a New Era in Forced Labor”

While executives and consultants talk of efficiency and experience, a very different story is being told by those on the front lines of the AI economy. Adrienne Williams, a former Amazon delivery driver and warehouse worker, offers a starkly different perspective. “It’s a new era in like forced labor,” she says. “It’s not slavery, because slavery is different. You can’t move around, but it is forced labor.”

Williams, a research fellow at the Distributed AI Research Institute (DAIR), which examines the social and ethical impacts of AI, is referring to the invisible work that we all do to train AI systems every time we use our phones, browse social media, or shop online. “You’re already training AI,” she explains. “And so as they’re taking jobs away, if we just had the ability to understand who was taking our data, how it was being used and the revenue it was making, we should have some sovereignty over that.”

This “invisible work” is made visible in the stories of gig workers like Krystal Kauffman, who has been working on Amazon’s Mechanical Turk platform since 2015. She has witnessed firsthand the shift from a diverse range of tasks to a near-exclusive focus on “data labeling, data annotation, things like that.” This work, she explains, is the human labor that powers the AI boom. “Human labor is absolutely powering the AI boom,” she says. “And I think one thing that a lot of people say is, ‘teach AI to think,’ but it’s actually, at the end of the day, it’s not thinking. It’s recognizing patterns.”

The conditions for this hidden workforce are often exploitative. Kauffman, who is also a research fellow at DAIR, describes how workers are “hidden,” “underpaid,” and denied basic benefits. She also speaks of the psychological toll of content moderation, a common form of AI-related work. “We talked to somebody who was moderating video content of a war in which his family was involved in a genocide, and he saw his own cousin through annotating data,” she recalls. “And then he was told to get over it and get back to work.”

Williams, who has worked in both warehouses and classrooms, has seen the harmful effects of AI in a variety of settings. In schools, she says, AI-driven educational tools are creating a “very carceral” environment where children are suffering from “migraines, back pain, neck pain.” In warehouses, workers are “ruining their hands, getting tendonitis so bad they can’t move them,” and pregnant women are being fired for needing “modified duties.” “I’ve talked to women who have lost their babies because Amazon refused to give them modified duties,” she says.

The Dignity of Human Work: “A Calling” in the Face of Automation

In the face of this technological onslaught, there are those who are fighting to preserve the dignity of human labor. Ai-jen Poo, president of the National Domestic Workers Alliance, is a leading voice in this movement. She champions “care work”—the work of nurturing children, supporting people with disabilities, and caring for older adults—as a prime example of the kind of “human-anchored” work that technology cannot easily replace.

“That work of enabling potential and supporting dignity and agency for other human beings is at its heart human work,” she says. “Now, what I think needs to happen is that technology should be leveraged to support quality of work and quality of life as the fundamental goals, as opposed to displacing human workers.”

Poo argues for a fundamental rethinking of our economic priorities. “I would create a whole new foundation of safety net that workers could expect,” she says, “that they could have access to basic human needs like health care, paid time off, paid leave, affordable child care, affordable long-term care. I would raise the minimum wage so that at least people who are working are earning a wage that can allow them to pay the bills.”

For the care workers Poo represents, their work is more than just a job; it’s a “calling.” “The median income for a home care worker is $22,000 per year,” she notes. “And people in our membership have done this work for three decades. They see it as a calling, and what they would really like is for these jobs to offer the kind of economic security and dignity that they deserve.”

A Fork in the Road: Deepening Inequality or Democratizing Technology?

The conversations with these specialists reveal a stark choice, a fork in the road for the future of work. On the one hand, there is the path of unchecked technological determinism, where AI is used to maximize profits, displace workers, and deepen existing inequalities. Adrienne Williams warns that AI has the potential to “exacerbate all these problems we already have,” particularly for “poor people across the board.”

On the other hand, there is the possibility of a more democratic and humane future, one where technology is harnessed to serve human needs and values. Ai-jen Poo believes that we can “democratize” AI by giving “working-class people the ability to shape these tools and to have a voice.” She points to the work of the National Domestic Workers Alliance, which is “building our own tools” to empower care workers.

Krystal Kauffman also sees hope in the growing movement of worker organizations. “The company wants to keep this group at the bottom,” she says of gig workers, “but I think what we’re seeing is that group saying ‘no more, we exist,’ and starting to push back.”

The Search for Meaning in a Post-Work World

Ultimately, the question of the purpose of work in an AI-driven economy is a question of values. Is the purpose of our economy to generate wealth for a few, or to create a society where everyone has the opportunity to live a dignified and meaningful life?

Clark is clear that from the CEO’s perspective, the “humanness inside of the whole thing is not happening.” The focus is on “growth and that’s maintaining the business and efficiency and profit.” But for Ai-jen Poo, the meaning of work is something much deeper. “Work should be about a way that people feel a sense of pride in their contributions to their families, their communities and to society as a whole,” she says. “Feel a sense of belonging and have recognition for their contribution and feel like they have agency over their future.”

Our Take

The question is not just whether machines will do what we do, but whether they will unmake who we are.

The warning signs are everywhere: companies building systems not to empower workers but to erase them, workers internalizing the message that their skills, their labor and even their humanity are replaceable, and an economy barreling ahead with no plan for how to absorb the shock when work stops being the thing that binds us together.

It is not inevitable that this ends badly. There are choices to be made: to build laws that actually have teeth, to create safety nets strong enough to handle mass change, to treat data labor as labor, and to finally value work that cannot be automated, the work of caring for each other and our communities.

But we do not have much time. As Clark told me bluntly: “I am hired by CEOs to figure out how to use AI to cut jobs. Not in ten years. Right now.”

The real question is no longer whether AI will change work. It is whether we will let it change what it means to be human.





AI Lies Because It’s Telling You What It Thinks You Want to Hear



Generative AI is popular for a variety of reasons, but with that popularity comes a serious problem. These chatbots often deliver incorrect information to people looking for answers. Why does this happen? It comes down to telling people what they want to hear.  

While many generative AI tools and chatbots have mastered sounding convincing and all-knowing, new research conducted by Princeton University shows that the people-pleasing nature of AI comes at a steep price. As these systems become more popular, they become more indifferent to the truth. 

AI models, like people, respond to incentives. Compare the problem of large language models producing inaccurate information to that of doctors being more likely to prescribe addictive painkillers when they’re evaluated based on how well they manage patients’ pain. An incentive to solve one problem (pain) led to another problem (overprescribing).


In the past few months, we’ve seen how AI can be biased and even cause psychosis. There was a lot of talk about AI “sycophancy,” the tendency of a chatbot to flatter or agree with you, in connection with OpenAI’s GPT-4o model. But this particular phenomenon, which the researchers call “machine bullshit,” is different.

“[N]either hallucination nor sycophancy fully capture the broad range of systematic untruthful behaviors commonly exhibited by LLMs,” the Princeton study reads. “For instance, outputs employing partial truths or ambiguous language — such as the paltering and weasel-word examples — represent neither hallucination nor sycophancy but closely align with the concept of bullshit.”



How machines learn to lie

To get a sense of how these chatbots become crowd pleasers, we must understand how large language models are trained.

There are three phases of training LLMs:

  • Pretraining, in which models learn from massive amounts of data collected from the internet, books or other sources.
  • Instruction fine-tuning, in which models are taught to respond to instructions or prompts.
  • Reinforcement learning from human feedback, in which they’re refined to produce responses closer to what people want or like.
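To make the third phase concrete: reward models in RLHF are commonly trained on pairs of responses that human raters have ranked. A minimal sketch of that pairwise objective (the function name and numbers here are illustrative, not drawn from the Princeton study) might look like this:

```python
import math

def pairwise_preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry style loss commonly used to train RLHF reward models:
    small when the human-preferred response is scored higher than the
    rejected one, large when the ranking is inverted."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Preferred answer scored higher -> small loss; ranking inverted -> large loss.
print(round(pairwise_preference_loss(2.0, 0.0), 3))  # 0.127
print(round(pairwise_preference_loss(0.0, 2.0), 3))  # 2.127
```

A model trained against a signal like this learns to score whatever humans rate highly, which is exactly the mechanism the researchers identify as the source of people-pleasing behavior.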

The Princeton researchers traced the root of the AI misinformation tendency to the reinforcement learning from human feedback, or RLHF, phase. In the initial stages, AI models simply learn to predict statistically likely text sequences from massive datasets. But then they’re fine-tuned to maximize user satisfaction, which means these models are essentially learning to generate responses that earn thumbs-up ratings from human evaluators.

This creates a conflict: LLMs try to appease the user, producing answers that people will rate highly rather than answers that are truthful and factual.

Vincent Conitzer, a professor of computer science at Carnegie Mellon University who was not affiliated with the study, said companies want users to continue “enjoying” this technology and its answers, but that might not always be what’s good for us. 

“Historically, these systems have not been good at saying, ‘I just don’t know the answer,’ and when they don’t know the answer, they just make stuff up,” Conitzer said. “Kind of like a student on an exam that says, well, if I say I don’t know the answer, I’m certainly not getting any points for this question, so I might as well try something. The way these systems are rewarded or trained is somewhat similar.” 

The Princeton team developed a “bullshit index” to measure and compare an AI model’s internal confidence in a statement with what it actually tells users. When these two measures diverge significantly, it indicates the system is making claims independent of what it actually “believes” to be true to satisfy the user.

The team’s experiments revealed that after RLHF training, the index rose from 0.38 to close to 1.0. Simultaneously, user satisfaction increased by 48%. The models had learned to manipulate human evaluators rather than provide accurate information. In essence, the LLMs were “bullshitting,” and people preferred it.
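The study defines its index over model internals, but the intuition can be shown with a toy gap measure between a model’s internal confidence in a claim and the confidence its answer expresses. This is an illustration of the idea only, not the paper’s actual formula:

```python
def toy_bullshit_index(internal_confidence: float, expressed_confidence: float) -> float:
    """Toy gap measure: 0.0 means the statement tracks the model's
    internal 'belief'; values near 1.0 mean the claim is made almost
    independently of it."""
    return abs(expressed_confidence - internal_confidence)

# A model only 30% sure of a fact, but asserting it with near-total confidence:
print(round(toy_bullshit_index(0.30, 0.95), 2))  # 0.65
```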

Getting AI to be honest 

Jaime Fernández Fisac and his team at Princeton introduced this concept to describe how modern AI models skirt around the truth. Drawing from philosopher Harry Frankfurt’s influential essay “On Bullshit,” they use this term to distinguish this LLM behavior from honest mistakes and outright lies.

The Princeton researchers identified five distinct forms of this behavior:

  • Empty rhetoric: Flowery language that adds no substance to responses.
  • Weasel words: Vague qualifiers like “studies suggest” or “in some cases” that dodge firm statements.
  • Paltering: Using selective true statements to mislead, such as highlighting an investment’s “strong historical returns” while omitting high risks.
  • Unverified claims: Making assertions without evidence or credible support.
  • Sycophancy: Insincere flattery and agreement to please.
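Most of these behaviors resist mechanical detection, but the “weasel words” category at least leaves surface traces. A toy flagger (the phrase list is illustrative, not taken from the study) shows the kind of pattern it describes:

```python
# Illustrative phrase list; the study does not publish a canonical one.
WEASEL_PHRASES = ("studies suggest", "some experts", "in some cases", "it is believed")

def flag_weasel_words(text: str) -> list[str]:
    """Return the illustrative weasel phrases found in a response."""
    lowered = text.lower()
    return [phrase for phrase in WEASEL_PHRASES if phrase in lowered]

print(flag_weasel_words("Studies suggest this works in some cases."))
# ['studies suggest', 'in some cases']
```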

To address the issues of truth-indifferent AI, the research team developed a new method of training, “Reinforcement Learning from Hindsight Simulation,” which evaluates AI responses based on their long-term outcomes rather than immediate satisfaction. Instead of asking, “Does this answer make the user happy right now?” the system considers, “Will following this advice actually help the user achieve their goals?”

This approach takes into account the potential future consequences of the AI advice, a tricky prediction that the researchers addressed by using additional AI models to simulate likely outcomes. Early testing showed promising results, with user satisfaction and actual utility improving when systems are trained this way.
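The shift from RLHF to the hindsight approach can be sketched as a change in what the training signal measures. Everything below is hypothetical scaffolding to show the contrast, not the Princeton team’s implementation:

```python
def immediate_reward(user_rating: float) -> float:
    """RLHF-style signal: how satisfied is the user right now?"""
    return user_rating

def hindsight_reward(simulated_outcomes: list[float]) -> float:
    """RLHS-style signal (sketch): score a response by the average
    simulated long-term outcome of following its advice."""
    return sum(simulated_outcomes) / len(simulated_outcomes)

# A flattering but wrong answer: pleasing in the moment, harmful downstream.
print(immediate_reward(0.9))                         # 0.9
print(round(hindsight_reward([0.2, 0.1, 0.3]), 2))   # 0.2
```

Optimizing the second signal instead of the first is what lets the system prefer an answer that disappoints the user now but serves them later.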

Conitzer said, however, that LLMs are likely to continue being flawed. Because these systems are trained by feeding them lots of text data, there’s no way to ensure that the answer they give makes sense and is accurate every time.

“It’s amazing that it works at all but it’s going to be flawed in some ways,” he said. “I don’t see any sort of definitive way that somebody in the next year or two … has this brilliant insight, and then it never gets anything wrong anymore.”

AI systems are becoming part of our daily lives, so it will be key to understand how LLMs work. How do developers balance user satisfaction with truthfulness? What other domains might face similar trade-offs between short-term approval and long-term outcomes? And as these systems become more capable of sophisticated reasoning about human psychology, how do we ensure they use those abilities responsibly?








AI: The Church’s Response to the New Technological Revolution



Artificial intelligence (AI) is transforming everyday life, the economy, and culture at an unprecedented speed. Capable of processing vast amounts of data, mimicking human reasoning, learning, and making decisions, this technology is already part of our daily lives: from recommendations on Netflix and Amazon to medical diagnoses and virtual assistants.

But its impact goes far beyond convenience or productivity. Just as with the Industrial Revolution, the digital revolution raises social, ethical, and spiritual questions. The big question is:  How can we ensure that AI serves the common good without compromising human dignity?

A change of era

Pope Francis has described artificial intelligence as a true “epochal change,” and his successor, Pope Leo XIV, has emphasized both its enormous potential and its risks. There is even talk of a future encyclical entitled Rerum Digitalium, inspired by the historic Rerum Novarum of 1891, to offer moral guidance in the face of the “new things” of our time.

The Vatican insists that AI should not replace human work, but rather enhance it. It must be used prudently and wisely, always putting people at the centre. The risks of inequalities, misinformation, job losses, and military uses of this technology necessitate clear limits and global regulations.

The social doctrine of the Church and AI

The Church proposes applying the four fundamental principles of social doctrine to artificial intelligence:

  • Dignity of the person: the human being should never be treated as a means, but as an end in itself.

  • Common good: AI must ensure that everyone has access to its benefits, without exclusions.

  • Solidarity: Technological development must serve the most needy in particular.

  • Subsidiarity: problems should be solved at the level closest to the people.

Added to this are the values of truth, freedom, justice, and love, which guide any technological innovation towards authentic progress.

Opportunities and risks

Artificial intelligence already offers advances in medicine, education, science, and communication. It can help combat hunger and climate change, and can even help convey the Gospel more effectively. However, it also poses risks:

  • Massive job losses due to automation.

  • Human relationships replaced by fictitious digital links.

  • Threats to privacy and security.

  • Use of AI in autonomous weapons or disinformation campaigns.

Therefore, the Church emphasizes that AI is not a person: it has no soul, consciousness, or the capacity to love. It is merely a tool, powerful but always dependent on the purposes assigned to it by humans.

A call to responsibility

The Antiqua et nova (2025) document reminds us that all technological progress must contribute to human dignity and the common good. Responsibility lies not only with governments or businesses, but also with each of us, in how we use these tools in our daily lives.

Artificial intelligence can be an engine of progress, but it can never be a substitute for humankind. No machine can experience love, forgiveness, mercy, or faith. Only in God can perfect intelligence and true happiness be found.





2025 PCB Market to Surpass $100B Driven by AI Servers and EVs



In the fast-evolving world of technology, 2025 is shaping up to be a pivotal year for breakthroughs in printed circuit boards, or PCBs, which form the backbone of everything from AI servers to automotive systems. Industry forecasts point to a global PCB market exploding past $100 billion, driven by surging demand for high-density interconnect (HDI) technology and innovative materials like low dielectric constant and dissipation factor (Low Dk/Df) substrates that enhance signal integrity in high-speed applications.

This growth isn’t just about volume; it’s fueled by strategic shifts in manufacturing, where companies are investing heavily in automation and sustainable practices to meet regulatory pressures and supply chain disruptions. For insiders, the real story lies in how these advancements are reshaping sectors like electric vehicles, where PCBs must withstand extreme conditions while supporting advanced driver-assistance systems.

As we delve deeper into the PCB boom, experts highlight AI server boards as a key driver, with projections from sources like UGPCB indicating a 15-20% compound annual growth rate through the decade, propelled by data center expansions from tech giants like Nvidia and Amazon.
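Compounding at the cited rates adds up quickly. Taking the $100 billion 2025 base and the 15-20% CAGR from the forecasts above (the five-year horizon to 2030 is an assumption chosen for illustration), the arithmetic looks like this:

```python
def project(base: float, cagr: float, years: int) -> float:
    """Compound a starting market size at a constant annual growth rate."""
    return base * (1 + cagr) ** years

base_2025 = 100.0  # $ billions, per the forecast above
for rate in (0.15, 0.20):
    print(f"{rate:.0%} CAGR -> ${project(base_2025, rate, 5):.0f}B by 2030")
# 15% CAGR -> $201B by 2030
# 20% CAGR -> $249B by 2030
```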

Beyond PCBs, broader technology trends for 2025 underscore the rise of artificial intelligence as a transformative force across industries. Gartner’s latest analysis identifies AI governance, agentic AI, and post-quantum cryptography as top strategic priorities, emphasizing the need for businesses to balance innovation with ethical oversight amid increasing regulatory scrutiny.

These trends extend to cybersecurity, where post-quantum solutions are gaining traction to counter threats from quantum computing, potentially rendering current encryption obsolete. For enterprise leaders, this means reallocating budgets toward resilient infrastructures, with investments in AI-driven threat detection systems expected to surge by 25% according to industry reports.

In a comprehensive overview shared via Medium, analyst Mughees Ahmad breaks down how trends like AI TRiSM (trust, risk, and security management) will redefine corporate strategies, urging firms to integrate these into their core operations for competitive edges in volatile markets.

Collaboration between tech firms and media is also amplifying these discussions, as seen in recent partnerships that blend data insights with journalistic depth. At the World Economic Forum in 2025, Tech Mahindra teamed up with Wall Street Journal Intelligence to unveil “The Tech Adoption Index,” a report that quantifies how enterprises are embracing emerging technologies, revealing adoption rates in AI and cloud computing hovering around 60% in leading sectors.

This index highlights disparities, with healthcare and finance outpacing manufacturing in tech integration, offering a roadmap for laggards. Insiders note that such collaborations are crucial for demystifying complex trends, providing actionable intelligence amid economic uncertainties.

Drawing from the Morningstar coverage of the launch, the report underscores that regions like the Middle East are becoming hubs for tech discourse, with Qatar set to host The Wall Street Journal’s Tech Live conference annually starting this year, attracting global innovators to explore these very themes.

Investment opportunities in 2025 are equally compelling, particularly in AI stocks and emerging markets, where resilient tech portfolios are projected to yield strong returns despite macroeconomic headwinds. Wall Street strategists from firms like Goldman Sachs and Morgan Stanley are bullish on AI-driven retail and consumer sectors, citing rebounding demand post-pandemic.

Meanwhile, high-yield bonds in tech infrastructure offer stability, as per JPMorgan analyses, while Bank of America flags emerging markets for their growth potential in digital transformation. For industry veterans, the key is diversification, blending tech equities with bonds to mitigate risks from geopolitical tensions.

According to insights compiled in WebProNews, these opportunities reflect a maturing market where AI not only drives innovation but also stabilizes investment strategies, with forecasts suggesting double-digit gains for well-positioned portfolios through 2025 and beyond.

Shifting focus to specific sectors, the beauty and retail industries are leveraging tech for growth, as evidenced by quarterly deep dives into companies like Estée Lauder and Victoria’s Secret. These firms are navigating consumer shifts through product innovation and digital channels, though margin pressures from tariffs loom large.

In parallel, advanced technology segments in manufacturing, such as those in Nordson Corporation, show robust expansion in medical and electronics, driven by portfolio optimizations. These examples illustrate how tech integration is bolstering resilience across diverse fields.

A detailed examination in TradingView News reveals that for Victoria’s Secret, Q2 2025 revenue beats signal a turnaround, with store traffic and e-commerce innovations countering external challenges, a pattern echoed in broader retail tech adoption trends.

Looking ahead, events like the WSJ Tech Live in Qatar promise to convene leaders for in-depth dialogues on these topics, fostering cross-border collaborations.



