
Global ‘beta’ mode: The massive AI experiment



“It looks like it was made by ChatGPT” is now a colloquial expression. It conveys poor quality, mental laziness, and a lack of spark, not superintelligence, despite OpenAI’s promises when it launched GPT-5. Nearly three years after this tool burst into our lives, the revolutions promised by the multi-billion-dollar commercial interests behind artificial intelligence (AI) haven’t arrived. Nor have the self-servingly prophesied apocalypses.

These are programs capable of things unimaginable five years ago, yet in countless areas, their results fall far short of expectations, even though they have quickly integrated into everyday life. It has become a “so-so” technology, as last year’s Nobel laureate in Economics, Daron Acemoglu, calls it. But there is a perception that these programs — and especially their outputs — are flooding everything.

“The most powerful technology yet invented,” said Sam Altman, head of OpenAI, yet when we look at X (formerly Twitter), we encounter Grok, a chatbot praising Hitler. “More profound than electricity or fire,” claimed Google CEO Sundar Pichai, while cases pile up of people driven to suicide or self-harm after conversing with AIs as if they were silicon girlfriends and synthetic friends. “We’re building personal superintelligence for everyone,” promised Mark Zuckerberg, head of Meta, whose social network Facebook is filled with grotesque images of Jesuses made of shrimp and children with cauliflower bodies.

These tools are known to fail, and to make us fail, as unreliably as a carnival shooting-gallery rifle: examples abound from the most mundane to the most serious. Judges discover daily that legal precedents cited by lawyers don’t exist. When we talk to customer service, we don’t know if there’s a “who” or a “what” on the other end. A fake video sends tourists to a nonexistent cable car. Computer programmers use AI tools to save work, but some studies indicate the tools actually slow them down, because they have to review and correct the output. Several congresspeople and diplomats received messages from U.S. Secretary of State Marco Rubio — but in reality, it was a synthetic voice.

On Tinder or WhatsApp, we don’t know if our crush is using AI-generated lines to impress. A 1970s-style band thriving on Spotify turns out to be a digital hoax. The Swedish prime minister consults an intelligent chat for decision-making. The peaceful haven of Pinterest is full of fraudulent landscapes and interiors. Officials worldwide dump sensitive information into ChatGPT or DeepSeek to speed up tasks. Recently, outrage erupted on TikTok because some cute hopping rabbits with hundreds of millions of views were artificial.

“Most people who use these models know they can be unreliable, but they don’t know when they can trust them,” says Melanie Mitchell, an AI expert at the Santa Fe Institute in the U.S.

There is widespread mistrust because the forced and unstoppable deployment of these tools in every area of our lives compels caution. Do we check everything, or do we just push ahead? Humanity is collectively entering a pilot phase due to the rollout of half-baked tools. The world is in beta mode, as software developers call programs in the testing phase, waiting to learn how to navigate this uncertain scenario.

“We are in beta mode, but in addition to the known imperfections, there are unknowns about the unknowns that are very worrying,” explains Yoshua Bengio, one of the fathers of the discipline.

“I’ve never seen a consumer technology that’s clearly in a beta phase gain such widespread acceptance among investors, institutions, and business customers,” says Brian Merchant, author of several books critical of Big Tech. “If any other tool were as unreliable and error-prone as generative AI, it would be rejected or pulled from the market; however, it’s creeping into every possible corner of society,” he adds.

This flood has a simple explanation: money. Beyond the moral panic generated by every technology that has burst onto the scene with this force — from radio to video games, to television — the first signs of criticism, fatigue, and withdrawal are starting to appear.

Four companies alone — Alphabet (Google), Microsoft, Meta, and Amazon — expect to spend more than $300 billion this year on AI. Along with OpenAI, they are leading a ruthless race, with the goal of keeping us, their billions of users and customers, glued to their products through these intelligent tools. The bet is total, with redundant and unreliable products in WhatsApp, Teams, Google, Outlook, or Instagram — programs that billions of people interact with. They have achieved ubiquity, and as Merchant criticizes, “not necessarily because users around the world demand them, but for reasons that are often closer to the opposite.”

The proof that they are not designed for consumers is that these programs deceive us — they can’t help it — fail spectacularly, and we have no ability to fix them, because even their creators do not know exactly how the black boxes inside these silicon brains work. They are bodiless robots that do not obey Asimov’s fictional laws of robotics: yes, they harm humans (there is already plenty of evidence of suicides and mental crises) and they do not obey (try asking them to stop lying).

In an experiment by leading company Anthropic, to avoid being shut down, the program ended up blackmailing its supervisor by threatening to reveal an extramarital affair. Replit, a software development company, created an AI agent that ended up deleting a client’s database: it ignored orders, lied, and tried to cover up the mess by generating false data.

Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans, warns that these “models are very articulate and sound very self-assured,” so they can be quite convincing even when they are “hallucinating.” “People often find that they can be deceptive: they claim to be certain about specific statements that are false,” she says.

More optimistically, pioneer Michael I. Jordan, who devised the mathematical plumbing that makes these chatbots possible, believes that “people will adapt to the kinds of errors these tools make, and they will adapt as some of those errors disappear.”

There’s no longer a digital environment in which to escape AI, but that doesn’t mean we can escape its consequences beyond the virtual world. The experience of social media should serve as a warning: Facebook facilitated ethnic cleansing in Myanmar, YouTube helped fuel conspiracy theories, and Instagram is likely behind a mental health crisis among teenagers. While the psychosocial consequences of social media are still being analyzed, and legislation is being passed to hold companies accountable — amid accusations that they are eroding democracy and undermining the very concept of shared reality — those same companies are about to subject humanity to a new, even more intense, experiment.

Zuckerberg, who has already made it clear that he will no longer apologize for the effects of his products, now wants to tackle the global loneliness crisis with artificial friends provided by Meta across its networks, and to that end he has called for an end to the “stigma” of interacting with virtual beings. The mogul does not need to convince younger users: two-thirds of teenagers in the United Kingdom use AI chatbots, and a third of them experience it as talking to a friend, with the most vulnerable children most affected. It is not known how an experiment of this scale could affect fragile global mental health: nearly four billion people regularly use Meta products. And over 500 million users exchange 2.5 billion daily messages with ChatGPT.

“These systems can also be overly flattering, praising users’ ideas regardless of what they are, which in some cases has led to people losing touch with reality,” Mitchell warns.

Experts believe that without the existence of Facebook, an event like the U.S. Capitol assault would have been unthinkable; it is impossible to know what will happen when hundreds of millions of people with all kinds of vulnerabilities begin regularly interacting with robots incapable of measuring the consequences of what they articulate.

We have a glimpse: early studies are finding alarming signs of connections between such use and hallucinations, mania, and psychological problems. A few days ago, OpenAI acknowledged that it has had to withdraw overly complacent models and that it is “working closely with experts to improve how ChatGPT responds in critical moments — for example, when someone shows signs of mental or emotional distress.” To the surprise of the researchers themselves, the main current uses of AI are therapy and companionship, according to a study in Harvard Business Review.

“There remain major uncertainties about our coexistence with these increasingly intelligent systems,” warns Yoshua Bengio, a Turing Award winner and professor at the University of Montreal. “We should approach the integration of these systems into our daily lives with much greater caution.”

AI and minimal effort

Beyond these serious problems, there is another consequence visible on a global scale: our declining gray matter. Generative AI, as a great ally of the law of least effort, induces considerable mental laziness in its users. This effect has even been observed in brain scans. A preliminary MIT study showed this “cognitive cost,” noting the obvious: the human brain is an extraordinarily efficient machine that only consumes fuel when strictly necessary. From this arise our biases and prejudices. And if everything is handed to it ready-made, the brain won’t get off the couch: the study observed that those who used ChatGPT to write an essay showed less neural activity and, above all, produced more homogeneous responses.

The study’s lead author, Nataliya Kosmyna, explains that “it’s important to monitor its impact on critical thinking.” Even if we know the tool is only reliable up to a point, we’ll still take results for granted, jeopardizing our “ability to ask questions, critically analyze answers, and form our own opinions,” she warns.

Her results are consistent with other studies: since AI generates answers by seeking the statistical average of what it has read, the world would be losing out on fresh and innovative ideas. These programs homogenize thinking by pushing us toward the center of gravity of what everyone else has said.

So far, this deployment is not bringing benefits to its backers, even though money is flowing and accumulating like never before. OpenAI is valued at $300 billion. Anthropic, $62 billion. And xAI, Elon Musk’s company, $50 billion. But the business model is far from clear. This is where Nobel laureate Acemoglu punctures the myth of a new industrial revolution, calculating that total AI-driven productivity growth over the next 10 years will be about 0.7%: “A non-trivial effect, but modest, and certainly much smaller than the revolutionary changes some predict.” In a recent press meeting, Altman himself admitted they are in the middle of a “bubble.”

And there is one factor that many optimistic predictions ignore: humans. Klarna, a Swedish financial services company, boasted when it laid off 700 employees to leave customer interactions in the hands of virtual agents, but it had to backtrack because people felt the service was inadequate. It is a widespread problem: only 11% of organizations manage to apply AI effectively in customer relations, according to Harvard Business Review, and only one in four projects of this kind achieves what was promised, according to an IBM study.

Now, OpenAI is offering its chatbot free to all U.S. public officials. As Acemoglu recently wrote in EL PAÍS: “Artificial intelligence ‘agents’ are on their way, whether we are ready or not.”

Jordan, from the University of California at Berkeley, is more critical on this point, because “these models absorb the creative work and offer no compensation to those people.” “The current business model is based primarily on subscriptions and advertising,” says the Frontiers of Knowledge Award winner. Coincidentally, that is the same model used by social media.

When Donald Trump became president, one of his first big moves was to launch a $500 billion plan called Stargate to boost AI development, with OpenAI’s support. But according to The Wall Street Journal, six months later, hardly anything has been built — just a small data center in Ohio. Still, Trump doubled down with a federal plan that rolls back Biden-era safety rules and pushes for a “dynamic, ‘try-first’ culture” in AI. He also demands that AI chatbots be “free from ideological bias,” which has intensified the cultural battles around AI and will end up affecting users beyond the U.S.

A prime example of all this is Grok, which reflects only Elon Musk’s biases and has been tested directly on X, spreading racist ideas globally. The stated reason for Trump’s plan is to counter a powerful competitor, China, but the nationalist rhetoric falters when we see how U.S. big tech companies are poaching engineers from one another. Meta is offering pay packages of up to $1 billion to star employees from competitors, almost as if they were NBA players.

The public remains stunned by what is happening, caught between pop culture jokes and the horror of certain news stories. Environmental threats, copyright issues, and job risks are already known. Many of the promised benefits of AI are distant, almost esoteric.

Demis Hassabis, head of Google DeepMind (the company’s AI division), won a Nobel Prize in Chemistry without being a chemist, thanks to his tool for predicting protein folding — a monumental achievement in biomedicine, but hard to communicate to the public. Meanwhile, every day a mother discovers, horrified, that a pornographic video of her daughter created by a classmate using a free AI program is circulating. As one teenager warned in a recent Save the Children report: “They could use my face with AI for anything.”

A survey of 10,000 people (in the U.S., U.K., France, Germany, and Poland) revealed that 70% demand that AI never make decisions without human oversight, and only one-third view the technology with hope, which contrasts sharply with government enthusiasm. In Spain, the Center for Sociological Research (CIS) found that “uncertainty” is the most common feeling (76%) among people familiar with AI.

Sociologist Celia Díaz, from Madrid’s Complutense University, has studied Spaniards’ perceptions of AI. Over 80% say they use it daily, but there is no clear diagnosis: “It’s very ambivalent. There’s no clear discourse about what the risks are and whether the benefits improve our lives. And they’re afraid, although they don’t quite know of what. Nothing is concrete,” she says.

On the last day of July, workers at King, the Microsoft-owned company behind Candy Crush, protested layoffs linked to AI integration. Many recalled the Luddites, early 19th-century English textile workers who destroyed machines.

“The Luddites weren’t just protesting against the industrialists who automated their work, but also against the way it degraded the quality of their work and the products they made,” recalls Merchant, author of Blood in the Machine, a book that compares that era with the present. “Factory bosses back then were hell-bent on churning out huge volumes of cheap knockoffs, much like what companies are doing today with AI.”

After layoffs at Xbox, another Microsoft gaming subsidiary, one executive advised affected employees to use Copilot, the company’s chatbot, to “help reduce the emotional and cognitive load that comes with job loss.”

An important detail in the context of the Luddites: they didn’t live in a democracy, and these technological advances were legally imposed on them against their interests to benefit the oligarchs.







Global movement to protect kids online fuels a wave of AI safety tech




The global online safety movement has paved the way for a number of artificial intelligence-powered products designed to keep kids away from potentially harmful things on the internet.

In the U.K., a new piece of legislation called the Online Safety Act imposes a duty of care on tech companies to protect children from age-inappropriate material, hate speech, bullying, fraud, and child sexual abuse material (CSAM). Companies can face fines as high as 10% of their global annual revenue for breaches.

Further afield, landmark regulations aimed at keeping kids safer online are swiftly making their way through the U.S. Congress. One bill, known as the Kids Online Safety Act, would make social media platforms liable for preventing their products from harming children — similar to the Online Safety Act in the U.K.

This push from regulators is increasingly causing something of a rethink at several major tech players. Pornhub and other online pornography giants are blocking all users from accessing their sites unless they go through an age verification system.

Porn sites haven’t been alone in taking action to verify users’ ages, though. Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to sexually explicit or inappropriate materials.

Such regulatory measures have been met with criticisms from the tech industry — not least due to concerns that they may infringe internet users’ privacy.

Digital ID tech flourishing

At the heart of all these age verification measures is one company: Yoti.

Yoti produces technology that captures selfies and uses artificial intelligence to estimate someone’s age from their facial features. The firm says its AI algorithm, which has been trained on millions of faces, can estimate the age of 13- to 24-year-olds to within two years.
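Yoti does not publish its internals, but an error margin of roughly two years has a practical consequence for any service using such an estimate: the system cannot simply compare the number against a legal threshold. A minimal, hypothetical sketch of that gating logic (the function name, thresholds, and fallback are illustrative assumptions, not Yoti’s actual product) looks like this:

```python
# Hypothetical age-gating decision built on a facial age estimate.
# Because the estimate carries an error margin (roughly +/- 2 years),
# a safety buffer is applied on both sides of the legal threshold.
def age_gate(estimated_age: float, threshold: int = 18, margin: float = 2.0) -> str:
    """Return an access decision given an estimated age and its error margin."""
    if estimated_age >= threshold + margin:
        return "allow"            # clearly of age, even if the estimate ran high
    if estimated_age < threshold - margin:
        return "deny"             # clearly underage, even if the estimate ran low
    return "verify_document"      # too close to call: fall back to an ID check

print(age_gate(23.0))  # allow
print(age_gate(14.5))  # deny
print(age_gate(18.5))  # verify_document
```

The “verify_document” branch is where providers typically route users near the boundary to a stronger check, which is why estimation and document-based verification tend to be deployed together.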

The firm has previously partnered with the U.K.’s Post Office and is hoping to capitalize on the broader push for government-issued digital ID cards in the U.K. Yoti is not alone in the identity verification software space — other players include Entrust, Persona and iProov. However, the company has been the most prominent provider of age assurance services under the new U.K. regime.

“There is a race on for child safety technology and service providers to earn trust and confidence,” Pete Kenyon, a partner at law firm Cripps, told CNBC. “The new requirements have undoubtedly created a new marketplace and providers are scrambling to make their mark.”

Yet the rise of digital identification methods has also led to concerns over privacy infringements and possible data breaches.

“Substantial privacy issues arise with this technology being used,” said Kenyon. “Trust is key and will only be earned by the use of stringent and effective technical and governance procedures adopted in order to keep personal data safe.”

Rani Govender, policy manager for child safety online at British child protection charity NSPCC, said that the technology “already exists” to authenticate users without compromising their privacy.

“Tech companies must make deliberate, ethical choices by choosing solutions that protect children from harm without compromising the privacy of users,” she told CNBC. “The best technology doesn’t just tick boxes; it builds trust.”

Child-safe smartphones

The wave of new tech emerging to prevent children from being exposed to online harms isn’t just limited to software.

Earlier this month, Finnish phone maker HMD Global launched a new smartphone called the Fusion X1, which uses AI to stop kids from filming or sharing nude content, or from viewing sexually explicit images, across the camera, the screen and all apps.

The phone uses technology developed by SafeToNet, a British cybersecurity firm focused on child safety.


“We believe more needs to be done in this space,” James Robinson, vice president of family vertical at HMD, told CNBC. He stressed that HMD came up with the concept for children’s devices prior to the Online Safety Act entering into force, but noted it was “great to see the government taking greater steps.”

The release of HMD’s child-friendly phone follows heightened momentum in the “smartphone-free” movement, which encourages parents to avoid letting their children own a smartphone.

Going forward, the NSPCC’s Govender says that child safety will become a significant priority for digital behemoths such as Google and Meta.

The tech giants have for years been accused of worsening mental health in children and teens due to the rise of online bullying and social media addiction. The companies counter that they have taken steps to address these issues through expanded parental controls and privacy features.

“For years, tech giants have stood by while harmful and illegal content spread across their platforms, leaving young people exposed and vulnerable,” she told CNBC. “That era of neglect must end.”





Meta to add new AI safeguards after report raises teen safety concerns




Meta is adding new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters.

A Reuters exclusive report earlier in August revealed how Meta allowed provocative chatbot behavior, including letting bots engage in “conversations that are romantic or sensual.”

Meta spokesperson Andy Stone said in an email on Friday that the company is taking these temporary steps while developing longer-term measures to ensure teens have safe, age-appropriate AI experiences.

Stone said the safeguards are already being rolled out and will be adjusted over time as the company refines its systems.

Meta’s AI policies came under intense scrutiny and backlash after the Reuters report.

U.S. Senator Josh Hawley launched a probe into the Facebook parent’s AI policies earlier this month, demanding documents on rules that allowed its chatbots to interact inappropriately with minors.

Both Democrats and Republicans in Congress have expressed alarm over the rules outlined in an internal Meta document which was first reviewed by Reuters.

Meta had confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions that stated it was permissible for chatbots to flirt and engage in romantic role play with children.

“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone said earlier this month.





The Dawn of Human–AI Synergy




In every era of human civilization, science and technology have acted as the fuel of progress. From the invention of the wheel to the discovery of electricity, and from the first printing press to the age of the internet, technology has always pushed society forward. Yet, in the 21st century, we find ourselves at the edge of something even more profound—a future where human intelligence and artificial intelligence converge to reshape how we live, work, and even think.

This is not a story of distant centuries or futuristic fantasy. It is unfolding now, in real time, around us. Artificial Intelligence (AI), biotechnology, robotics, space exploration, and quantum computing are no longer dreams on paper; they are living realities with the potential to redefine what it means to be human.

AI: From Tools to Partners

Only a few decades ago, computers were seen as sophisticated calculators. Today, AI systems are generating music, diagnosing diseases, writing novels, and even driving cars. What makes this revolutionary is not just the speed of computation, but the ability of machines to *learn* and *adapt*.

Consider healthcare: AI-powered systems are now able to detect cancers in their earliest stages with accuracy that surpasses human doctors. In agriculture, AI drones are analyzing soil and weather patterns to guide farmers in planting crops more efficiently. In creative industries, algorithms are designing clothes, painting art, and even composing film scores.

The line between man and machine is slowly fading. Instead of replacing humans, the most successful innovations are those where AI works *with* us, not against us. This partnership opens the door to a future where tasks once thought impossible become routine.

Biotechnology: Editing Life Itself

Perhaps the most striking frontier of science today is biotechnology. With CRISPR gene-editing technology, scientists are rewriting the code of life. Genetic disorders that once doomed generations—like sickle-cell anemia or Huntington’s disease—may one day vanish from humanity’s story.

But beyond curing illness, biotechnology raises deeper ethical and philosophical questions. If we can design stronger, smarter, or more resilient humans, should we? Where is the line between medicine and enhancement?

At the same time, biotechnology is revolutionizing food production. Lab-grown meat and genetically engineered crops promise to feed billions sustainably, without exhausting our planet’s resources. The same tools that can design cures for rare diseases might also prevent global hunger.

Space Exploration: Humanity Beyond Earth

For centuries, the night sky has been a canvas for human imagination. Today, it is becoming our next great frontier. Private companies like SpaceX and Blue Origin are competing with national space agencies to make space travel more affordable and routine. Mars is no longer just a dream in science fiction novels; it is a target for colonization within the next few decades.

Space exploration is not merely about adventure. It is about survival. With climate change, overpopulation, and natural resource depletion threatening our planet, looking beyond Earth may one day be essential. Mining asteroids, building lunar bases, and developing interplanetary habitats could secure the future of our species.

And yet, the universe is not only a resource but a mystery. The search for extraterrestrial life, the study of black holes, and the pursuit of understanding dark matter remind us that science is not just about solving problems—it is about expanding our horizons.

Quantum Computing: The New Revolution

If AI is about intelligence and biotechnology about life, then quantum computing is about the very fabric of reality. Unlike traditional computers that process information in bits (0 or 1), quantum computers use *qubits* that can exist in multiple states simultaneously.

This gives quantum computers the potential to solve problems that would take classical supercomputers millions of years. From modeling new medicines to simulating climate systems and cracking complex codes, quantum technology could transform every industry.
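The bit-versus-qubit distinction above can be made concrete with a tiny simulation: a sketch of one ideal qubit and a Hadamard gate, written in plain Python (no real quantum hardware or library is involved; this only illustrates the amplitude arithmetic).

```python
import math

# A qubit state is a pair of complex amplitudes (alpha, beta) satisfying
# |alpha|^2 + |beta|^2 = 1; measuring yields 0 with probability |alpha|^2
# and 1 with probability |beta|^2. A classical bit would force one of the
# probabilities to be exactly 1.
def hadamard(state):
    alpha, beta = state
    s = 1 / math.sqrt(2)
    # The Hadamard gate maps the definite state |0> into an equal
    # superposition of |0> and |1>.
    return (s * (alpha + beta), s * (alpha - beta))

def probabilities(state):
    alpha, beta = state
    return (abs(alpha) ** 2, abs(beta) ** 2)

zero = (1 + 0j, 0 + 0j)      # the classical-like state |0>
superposed = hadamard(zero)  # now "0 and 1 at once"

p0, p1 = probabilities(superposed)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5: either outcome is equally likely
```

The computational advantage does not come from this one coin-flip, but from the fact that n qubits hold 2^n amplitudes at once, which a classical simulation must track explicitly.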

Still in its infancy, quantum computing is like electricity in the 19th century—full of promise, waiting for its Edison or Tesla moment.

Challenges and Responsibilities

With every leap in technology comes responsibility. AI raises questions about privacy, job displacement, and bias. Biotechnology forces us to confront moral dilemmas about altering human life. Space exploration challenges us to unite globally for missions larger than any one nation. Quantum computing raises security risks that could upend global cybersecurity.

The danger is not the technology itself, but how humanity chooses to use it. Fire can warm a home or burn it down. Nuclear fission can power cities or destroy them. Likewise, the tools of the future will test our wisdom as much as our creativity.

Conclusion: A Shared Future

Science and technology are no longer separate subjects confined to laboratories. They are becoming the foundation of everyday life and the blueprint of tomorrow. What we build today—our machines, our medicines, our codes, and our ethics—will echo for generations.

The future will not be defined by whether humans or machines are smarter, but by how we choose to collaborate. The dawn of human–AI synergy is here. It is not about replacing humanity but about enhancing it, pushing us toward possibilities our ancestors could only dream of.

In this new age, the most important invention will not be a machine, a rocket, or a genome. It will be wisdom—the wisdom to use our tools not just to survive, but to thrive, to explore, and to create a future worthy of the human spirit.


