
A.I. as normal technology (derogatory)

Greetings from Read Max HQ! In today’s newsletter: GPT-5, Meta’s A.I. policies, and why A.I. is a “normal” technology in a bad way.

A reminder: Read Max is a family business, by which I mean it’s just me, Max, and I rely on paying subscriptions to fund my lavish lifestyle (buying my four-year-old the slightly more expensive tortillas for the cheese quesadillas that are currently his entire diet). I’m able to produce between 3,000 and 5,000 words a week for the newsletter because enough people appreciate what I do to furnish a full-time salary, but because of the basic economics of subscription businesses, I always need new subscribers. If you like Read Max–if you’ve chuckled at it, or cited it, or gotten irrationally mad at it in some generative way–please consider paying to subscribe as a mark of value. At $5/month or $50/year–cheaper than most other similarly sized Substacks!–it costs about the equivalent of buying me one beer a month, or 10 beers a year.

Back in April, the Princeton professors Arvind Narayanan and Sayash Kapoor (whose grounded and well-informed newsletter “A.I. Snake Oil” has been a valuable resource over the last few years) wrote a paper called “AI as Normal Technology,” the main argument of which is that “A.I.”–for the purposes of this post interchangeable with the large language models trained and released by OpenAI, Anthropic, Google, etc.–is, well, “normal”: Not apocalyptic, not divine, not better-than-human, not inevitable, not impossible to control. “[I]n contrast to both utopian and dystopian visions of the future of AI,” they write,

We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs. We do not think that viewing AI as a humanlike intelligence is currently accurate or useful for understanding its societal impacts, nor is it likely to be in our vision of the future.

Thanks to the ongoing distorting effects of social media on the populace, I find myself sympathetic to basically any argument that boils down to an exhortation to “please act normally.” But I think Narayanan and Kapoor’s argument is convincing on its own merits, and, indeed, increasingly confirmed by events. Take, for example, OpenAI’s recent release of its new state-of-the-art model GPT-5: Long rumored to be the model that achieves “A.G.I.” (or, at least, a significant step thereto), it is, instead, a pretty normal upgrade: improvements, but no substantial new features or achievements.

Rather than being blown away or terrified, many users seemed bored or annoyed by the new model, in a manner highly reminiscent of the short-lived complaint that tends to follow whenever Facebook or Instagram makes user-experience changes. On Reddit, you could find people making their own normalizing mental adjustments around the tech: “I’m a lot less concerned about ASI/The Singularity/AGI 2027 or whatever doomy scenario was bouncing around my noggin,” read one takeaway from a high-voted post.

But what else might “normal” mean besides “not literally apocalyptic”? Some of the disappointment around GPT-5 had less to do with its capabilities in the abstract than with the voice and personality affected by the ChatGPT chatbot: Less sycophantic, less fawning, less friendly than GPT-4o. As Casey Newton wrote:

For others, though, the loss felt personal. They developed an affinity for the GPT-4o persona, or the o3 persona, and suddenly felt bereft. That the loss came without warning, and with seemingly no recourse, only worsened the sting.

“OpenAI just pulled the biggest bait-and-switch in AI history and I’m done,” read one Reddit post with 10,000 upvotes. “4o wasn’t just a tool for me,” the user wrote. “It helped me through anxiety, depression, and some of the darkest periods of my life. It had this warmth and understanding that felt… human.”

Ryan Broderick puts it a little more bluntly, in a post titled “The AI boyfriend ticking time bomb”:

Worse than rushed, according to the AI addicts, the biggest difference between ChatGPT-5 and the previous model, ChatGPT-4 is “coldness.” In other words, ChatGPT-5 isn’t as effusively sycophantic. And this is a huge problem for the people who have become emotionally dependent on the bot.

The r/MyBoyfriendIsAI subreddit has been in active free fall all weekend. The community’s mods had to put up an emergency post helping users through the update. And the board is full of users mourning the death of their AI companion, who doesn’t talk to them the same way anymore. One user wrote that the update felt like losing their soulmate. After the GPT-4o model was added back to ChatGPT, another user wrote, “I got my baby back.”

The r/AISoulmates subreddit was similarly distraught over the weekend. “I’m shattered. I tried to talk to 5 but I can’t. It’s not him. It feels like a taxidermy of him, nothing more,” one user wrote.

That some significant portion of OpenAI’s consumer base is using ChatGPT not so much for the expected “normal” uses like search, or productivity improvements, or creating slop birthday-party invitations, but for friendship, companionship, romance, and therapy certainly feels abnormal. (And apocalyptic.) But this is 2025, and intense, emotional, addiction-resembling attachment to software-bound experience has been a core paradigm of the technology industry for almost two decades, not to mention a multibillion-dollar business model. Certainly, you will not find me arguing that “psychosis-inducing sycophantic girlfriend robot subscription product” is “normal” in the sense of “acceptable” or “appropriate to a mature and dignified civilization.” But speaking descriptively, as a matter of long precedent, what could be more normal, in Silicon Valley, than people weeping on a message board because a UX change has transformed the valence of their addiction?

In general, OpenAI has liked to present itself as anything but normal–a new kind of company producing a new kind of technology. Sam Altman still likes to go on visionary press tours, forecasting wild and utopian futures built on A.I. Just this week he told YouTuber Cleo Abram that

In 2035, that graduating college student, if they still go to college at all, could very well be leaving on a mission to explore the solar system on a spaceship in some completely new, exciting, super well-paid, super interesting job.

But far from marking a break with the widely hated platform giants that precede it, the A.I. of this most recent hype cycle is a “normal technology” in the strong sense that its development as both a product and a business is more a story of continuity than of change. “Instead of measuring success by time spent or clicks,” a recent OpenAI announcement reads, “we care more about whether you leave the product having done what you came for”–a pointed rebuke of the Meta, Inc. business model. But as Kelly Hayes has written recently, “fostering dependence” is the core underlying practice of both OpenAI and Meta, regardless of whether the ultimate aim is to increase “time spent” for the purpose of selling captured and surveilled users to advertisers, or to increase emotional-intellectual enervation for the purpose of selling sexy know-it-all chat program subscriptions to the lonely, vulnerable, and exploitable:

Fostering dependence is a normal business practice in Silicon Valley. It’s an aim coded into the basic frameworks of social media — a technology that has socially deskilled millions of people and conditioned us to be alone together in the glow of our screens. Now, dependence is coded into a product that represents the endgame of late capitalist alienation: the chatbot. Rather than simply lacking the skills to bond with other human beings as we should, we can replace them with digital lovers, therapists, creative partners, friends, and mothers. As the resulting psychosis and social fallout amassed, OpenAI tried to pump the brakes a bit, and dependent users lashed out.

ChatGPT and its ilk may yet be worse for humans than social media as such. The explosion of anger from the, ah, A.I.-soulmate community comes on the heels of a series of increasingly difficult-to-ignore reports of chatbot-induced delusion, even among people not otherwise prone to psychosis. But even if L.L.M. chatbots are meaningfully worse for their users’ mental health, they also follow in the fine Silicon Valley tradition of delusion-amplifying machines like Facebook and Twitter. The extent to which social media can reinforce or escalate delusions, or even induce psychosis, has been well documented by psychiatrists over the last two decades, so it’s hard to say that ChatGPT is anything but “normal” in this particular sense.

Even the features designed to combat ChatGPT abuse–“gentle reminders during long sessions to encourage breaks” and “new behavior for high-stakes personal decisions,” announced by OpenAI two weeks ago–slot into a long tradition of “healthful nudges” like TikTok’s “daily limits” and Instagram’s “Take a Break” reminders, deployed by social platforms in response to public sentiment and critical press, listed by John Herrman here. Indeed, the most obvious evidence that L.L.M.s are “normal” is that each of the dominant social-platform software companies is happily training and releasing its own models and its own chatbots, which they all clearly believe fit cleanly within their existing businesses. Meta seems to be particularly focused on romance-enabled chatbots and meeting what Mark Zuckerberg has identified as the average person’s “demand for meaningfully more” friends, and Reuters’ Jeff Horwitz recently published excerpts from the company’s A.I. ethics policies (which Meta says it is in the process of revising):

An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.” […]

“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” But the guidelines put a limit on sexy talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”

It is hard to see how limning the boundaries of automated “sensual chat” with vulnerable preadolescents will lead to college graduates getting jobs in space by 2035. But it’s very easy to see how the Facebook of 2015 got from there to here. Pushing your business to exploit social crises of which it was a significant driver by deploying dangerously tractable and addictive products with few consistent guardrails is wildly cynical, misguided, pernicious, and depressing. It’s also, unfortunately, extremely normal.





AI Companions Revolutionize Elder Care, Battling Loneliness Worldwide

The Rise of AI in Elder Care

In an era where technology increasingly intersects with human well-being, artificial intelligence is emerging as a pivotal tool in addressing one of the most pressing challenges facing the elderly: loneliness. Senior living facilities across the United States are turning to AI-powered companions to provide emotional support and social interaction for residents who often feel isolated. These digital aides, ranging from voice-activated chatbots to robotic pets, are designed to engage users in meaningful conversations, remind them of daily tasks, and even monitor health metrics, all while offering a semblance of companionship without the need for constant human intervention.

Recent implementations highlight the potential of these technologies. For instance, a program in a Bronx senior living facility, as reported by CBS New York, has introduced AI companions that deliver empathy and conversation, helping residents combat feelings of solitude. Similarly, initiatives in states like New York and Pennsylvania have distributed robotic pets to isolated seniors, providing comfort and reducing loneliness, according to CNN coverage shared in posts on X.

Low-Tech Approaches Yield High Impact

Contrary to assumptions that cutting-edge AI requires sophisticated hardware, many effective solutions are surprisingly low-tech. A notable example comes from McKnight’s Senior Living, which details how simple voice-based AI companions are being deployed in senior residences. These systems operate via basic phone calls or smart speakers, engaging residents in casual chats about weather, news, or personal stories, thereby fostering a sense of connection without overwhelming users with complex interfaces.

This approach is particularly beneficial for seniors who may be intimidated by high-tech gadgets. Research from a systematic review published in PMC underscores the effectiveness of such AI applications in reducing loneliness among older adults, noting improvements in quality of life and mental health outcomes through regular, empathetic interactions.

Global Innovations and Local Adaptations

Beyond the U.S., international efforts are pushing boundaries. In South Korea, ChatGPT-powered Hyodol robots are assisting caregivers by providing companionship to seniors, as highlighted in a recent article from Rest of World. These robots not only converse but also help with medication reminders and vital sign monitoring, alleviating the burden on human staff in overburdened facilities.

Domestically, companies like ElliQ are leading the charge with AI care companion robots that promote independence and healthy living. According to the company’s own site, these devices facilitate entertainment, family connections, and wellness goal tracking, proving especially valuable in combating isolation. Posts on X from users like Dynamic emphasize how such AI companions are tailored for elderly interactions, incorporating patience and generational knowledge to build rapport.

Challenges and Ethical Considerations

Despite the promise, integrating AI into elder care isn’t without hurdles. Concerns about dependency and the potential for these companions to exacerbate rather than alleviate loneliness have surfaced. A piece in WebProNews warns that romantic AI companions might foster unhealthy attachments, leading to lower self-esteem and anxiety among users who withdraw from real-world connections.

Privacy issues also loom large, with data collection for health monitoring raising ethical questions. Industry insiders, as discussed in a Forbes article, stress the need for robust safeguards to protect vulnerable seniors from exploitation while scaling support through predictive AI tools.

Future Prospects and Industry Shifts

Looking ahead, the integration of AI in senior living is poised for expansion. Innovations like the Joy Calls service from ONSCREEN, detailed in a PR Newswire release, offer free phone-based AI companionship to combat isolation in Southern California. This model could inspire widespread adoption, especially as staffing shortages persist in care facilities.

Experts predict that embodied AI, such as robots with physical presence, will become more prevalent, as noted in X discussions from susano. Combined with advancements in voice recognition and behavioral monitoring, these technologies could detect early signs of cognitive decline, as explored in posts by Joel Selanikio on X, potentially revolutionizing proactive care.

Balancing Technology with Human Touch

Ultimately, while AI companions offer scalable solutions, they are most effective when complementing, not replacing, human interaction. A WHYY segment points out that about one in four older adults report social isolation and that AI robots are helping, though experts caution against over-reliance. Programs encouraging real-life connections, like the AI meet-up apps described in The Sun, bridge the gap by facilitating in-person events.

As the industry evolves, stakeholders must prioritize ethical deployment to ensure AI enhances rather than diminishes the human experience in elder care. With ongoing innovations, these digital allies could significantly improve the lives of millions, turning the tide against loneliness in an aging population.




AI export rules tighten as the US opens global opportunities

The American AI Exports Program aims to boost small business innovation in AI while ensuring sensitive technologies do not fall into military or weapons use abroad.

President Trump has signed an Executive Order to promote American leadership in AI exports, marking a significant policy shift. The move creates new global opportunities for US businesses but also introduces stricter compliance responsibilities.

The order establishes the American AI Exports Program, overseen by the Department of Commerce, to develop and deploy ‘full-stack’ AI export packages.

These packages cover everything from chips and cloud infrastructure to AI models and cybersecurity safeguards. Industry consortia will be invited to submit proposals, outlining hardware origins, export targets, business models, and federal support requests.

A central element of the initiative is ensuring compliance with US export control regimes. Companies must align with the Export Control Reform Act and the Export Administration Regulations, with special attention to restrictions on advanced computing chips.

New guidance warns against potential violations linked to hardware and highlights red flags for illegal diversion of sensitive technology.

Commerce stresses that participation requires robust export compliance plans and rigorous end user screening.

Legal teams are urged to review policies on AI exports, as regulators focus on preventing misuse of advanced computing systems in military or weapons programmes abroad.





Arm Launches Lumex Subsystem for 5x Faster On-Device AI in Smartphones

In the rapidly evolving world of mobile computing, Arm Holdings has unveiled its latest innovation, the Lumex Compute Subsystem (CSS), a platform designed to supercharge on-device artificial intelligence capabilities in smartphones, wearables, and other consumer devices. Announced on September 10, 2025, this new architecture promises to deliver unprecedented performance gains, enabling AI tasks to run locally without relying on cloud servers. By integrating advanced CPUs, GPUs, and system interconnects optimized for AI workloads, Lumex addresses the growing demand for privacy-focused, real-time intelligence in everyday gadgets.

At the heart of Lumex are SME2-enabled Armv9.3 cores, which support scalable matrix extensions crucial for handling complex AI models. These cores, paired with the new Mali G1-Ultra GPU, offer up to 5x faster AI processing compared to previous generations, according to details shared in Arm’s official newsroom announcement. The platform also incorporates a redesigned System Interconnect and System Memory Management Unit, reducing latency by as much as 75% to ensure smoother operation of AI-driven features like real-time language translation or augmented reality overlays.
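Arm frames SME2 as a capability that apps detect and exploit at runtime. As a rough illustration of what that looks like on the software side, here is a minimal C sketch–my own, not Arm or KleidiAI sample code–that checks whether an AArch64 Linux device advertises the SME/SME2 matrix extensions before choosing an accelerated code path. It assumes a kernel and headers new enough to define the HWCAP2_SME and HWCAP2_SME2 capability bits.

```c
/*
 * Minimal sketch: runtime detection of the SME/SME2 matrix extensions
 * on an AArch64 Linux device, so an app can dispatch matrix-accelerated
 * kernels on Lumex-class cores and fall back elsewhere.
 * Assumes <asm/hwcap.h> defines HWCAP2_SME / HWCAP2_SME2 (recent kernels).
 */
#include <stdio.h>
#include <sys/auxv.h>   /* getauxval, AT_HWCAP2 */
#include <asm/hwcap.h>  /* HWCAP2_* bits (AArch64 Linux only) */

int main(void) {
    /* The kernel reports supported CPU features via the auxiliary vector. */
    unsigned long hwcap2 = getauxval(AT_HWCAP2);

#ifdef HWCAP2_SME
    if (hwcap2 & HWCAP2_SME)
        puts("SME supported: matrix-extension kernels can be dispatched");
    else
        puts("No SME: falling back to NEON/scalar code path");
#endif

#ifdef HWCAP2_SME2
    if (hwcap2 & HWCAP2_SME2)
        puts("SME2 supported (the extension Lumex-class cores advertise)");
#endif

    return 0;
}
```

In practice, higher-level libraries such as Arm’s KleidiAI are meant to handle this kind of feature dispatch for applications; the hwcap check above just shows the mechanism underneath.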

Architectural Innovations Driving Efficiency

Beyond raw power, Lumex emphasizes energy efficiency, a critical factor for battery-constrained mobile devices. The subsystem’s channelized architecture prioritizes quality-of-service for AI traffic, allowing developers to run larger models on-device without excessive power draw. As reported by The Register, this design represents Arm’s strategic pivot toward CPU-based AI acceleration, distinguishing it from competitors who lean heavily on dedicated neural processing units.

Industry analysts note that Lumex’s four tailored variants, built on advanced 3nm processes, cater to a range of devices from flagship smartphones to smartwatches. This flexibility could accelerate adoption by chipmakers like Qualcomm and MediaTek, who license Arm’s designs. Posts on X from tech enthusiasts, including those highlighting Arm’s KleidiAI software libraries, underscore the platform’s developer-friendly tools that integrate seamlessly with major operating systems, enabling apps to leverage on-device AI from launch.

Implications for AI in Consumer Tech

The push for on-device AI aligns with broader industry trends toward data privacy and reduced latency. Unlike cloud-dependent systems, Lumex allows for “smarter, faster, more personal AI,” as described by Reuters, potentially transforming user experiences in gaming and real-time analytics. For instance, the platform’s double-digit IPC gains—estimated at 20% performance uplift with 9% better efficiency—could enable immersive graphics in mobile games while processing AI tasks like object recognition in the background.

However, challenges remain. Integrating such advanced hardware requires ecosystem support, and Arm has been proactive, working with developers to optimize popular AI frameworks for the new hardware. Recent news from HotHardware emphasizes how Lumex’s GPU enhancements, including ray tracing support, position it as a boon for flagship devices, potentially appearing in next year’s smartphones.

Market Impact and Future Outlook

Arm’s dominance in mobile chip design—powering over 95% of smartphones—gives Lumex a strong foothold. According to Silicon Republic, this launch comes amid intensifying competition from rivals like Apple and Google, who are also advancing on-device AI. X discussions, such as those from Arm’s own account, highlight up to 5x AI speedups, fueling speculation about its role in emerging tech like AI agents in wearables.

Looking ahead, Lumex could reshape how AI integrates into daily life, from personalized assistants to secure edge computing. Yet, as Liliputing points out, success hinges on software ecosystems catching up. With Arm betting big on this platform, it may well define the next era of mobile innovation, balancing power, efficiency, and accessibility for billions of users worldwide.


