Sorry, AI tech lords — human flaws make life worth living

In 1944, Jean-Paul Sartre acidly penned the line “Hell is other people” without the benefit of ever having visited a water park.
Nonetheless, “other people” are taking a beating these days.
In an interview with the New York Times’ Ross Douthat last week, billionaire techno-futurist Peter Thiel struggled to answer whether he thinks humanity should continue to exist, as artificial intelligence continues to gobble up more of our brain activity.
“You would prefer the human race to endure, right?” Douthat asked.
Thiel stared ahead blankly and began stammering, as if his brain were buffering.
“This is a long hesitation,” Douthat noted. “Should the human race survive?”
“Yes,” Thiel answered, perhaps remembering he is a member of the human race and that he should be in favor of Peter Thiel’s surviving.
The other members of humanity? Meh.
In fact, most of AI’s appeal is that it will begin replacing humans with machine learning.
Gone will be those pesky employees who show up to work hung over, microwave leftover chicken tikka masala at lunch, and regale coworkers with made-up stories about how hilarious their thickheaded kids are.
But it seems worth pointing out — and this may be a controversial opinion among the billionaires looking to shape the future — that, contra Sartre, people are . . . worth keeping around?
It is a given that Homo sapiens isn’t exactly at its peak at the moment.
Human beings often smell bad. They lie and cheat and reward others who do so.
They stand in the aisle the second the plane comes to a halt. They insist on telling you to watch “Love Island.”
They sometimes change all their political beliefs before your eyes in service to a lout.
But the whole point of AI is to improve the lives of humans, not to render them altogether redundant.
If there are no flesh-and-blood humans left to enjoy the benefits of computer learning, why exactly are we creating it?
Sure, the people who constantly argue with us are irritating, but conversation with other humans is how we learn things and come up with new ideas.
The friction from bumping into others sands us down into reasonable, intelligent beings who know why we believe things.
Further, communicating with each other is how we determine who is lying and who can be trusted. It is how we decide what is important to us, rather than having it dictated to us by an algorithm.
As wretched as they can often be, humans make almost everything better.
What meal isn’t improved by a carbon-based dining companion? How many times have you been brought to tears of laughter by a room full of real friends?
But AI attempts to remove all the friction from our lives, assuming we all want a fully lubricated existence devoid of imperfection.
Malcolm Gladwell, whose primary gift to the world is making sure everyone has heard of Malcolm Gladwell, actually made a great point recently when he discussed riding in a driverless car.
He noted that the sensors in the car, for obvious reasons, force the car to stop when a pedestrian is in front of it. But precisely because the sensors never fail, they invite pedestrian malfeasance.
Kids playing ball in the street could hold up a car for an hour. Someone looking to rob a rider could stop the car simply by standing in front of it.
Driving as currently constituted requires a human being behind the wheel to discern the intent of passersby.
Because humans sometimes make mistakes, few kids will play in the street. If a computer is at the wheel, they might lose their healthy fear of being hit.
In other words, society functions when uncertainty reigns. People make commonsense judgments; computers adhere to a bloodless formula that eliminates imperfections.
Consequently, people who pursue real-world romantic relationships will soon be like the hobbyists of today who collect vinyl albums, insisting the sound is more authentic.
Sure, dating one’s phone can eliminate a lot of heartbreak: Digital lovers are always there, they never argue with you, they won’t join a Facebook group to spill your secrets, they won’t cheat, and they won’t lecture you on the proper way to load the dishwasher.
But eliminating those negative experiences will only produce feeble-minded, spoiled adult infants who can’t handle adversity.
And, of course, replacing real-world relationships with computer formulas will bring declining birth rates and fewer people — and even more AI to do the work of the people never born.
This is how the machines take over. (It is difficult to have a child with a computer, as they insist on using the algorithm method.)
For humans, the imperfections are where life — companionship, art, humor, amazing coincidences, and enduring mysteries — happens.
Christian Schneider writes at Anti-Knowledge and hosts the podcast “Wasn’t That Special: 50 Years of ‘SNL.’” Adapted from National Review.
Agentic AI, Fintech Innovation, and Ethical Risks

The Rise of Agentic AI in 2025
As the technology sector gears up for 2025, industry leaders are focusing on transformative shifts driven by artificial intelligence, particularly the emergence of agentic AI systems. These autonomous agents, capable of planning and executing complex tasks without constant human oversight, are poised to redefine operational efficiencies across enterprises. According to a recent analysis from McKinsey, agentic AI ranks among the top trends, enabling “virtual coworkers” that handle everything from data analysis to strategic decision-making.
This evolution builds on the generative AI boom of previous years, but agentic systems introduce a layer of independence that could slash costs and accelerate innovation. Insiders note that companies like Google and Microsoft are already integrating these capabilities into their cloud platforms, signaling a broader industry pivot toward AI that acts rather than just generates.
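To make the distinction concrete, here is a minimal sketch of the plan-then-act loop that separates agentic systems from purely generative ones. Everything in it is hypothetical: the tool names are invented, and the plan is hard-coded where a real agent would ask an LLM to decompose the goal.

```python
from dataclasses import dataclass, field

# Hypothetical tools; a production agent would call real APIs,
# databases, or other services instead of these stand-ins.
TOOLS = {
    "count_words": lambda text: f"{len(text.split())} words",
    "summarize": lambda text: text[:50] + "...",
}

@dataclass
class ToyAgent:
    goal: str
    log: list = field(default_factory=list)

    def plan(self) -> list[str]:
        # A real agentic system would ask an LLM to break the goal
        # into steps; the plan is fixed here for illustration.
        return ["count_words", "summarize"]

    def run(self, document: str) -> list:
        for step in self.plan():             # plan ...
            result = TOOLS[step](document)   # ... then act, unprompted
            self.log.append((step, result))
        return self.log

agent = ToyAgent(goal="analyze the quarterly report")
print(agent.run("Agentic AI systems plan and execute complex tasks "
                "without constant human oversight."))
```

The point of the loop is that the human sets a goal once; deciding which steps to take, and taking them, happens without further prompting.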
Monetizing AI Infrastructure Amid Surging Demand
Cloud giants such as Amazon, Google, and Microsoft have subsidized AI development to attract builders, but 2025 is expected to mark a turning point toward aggressive monetization. Posts on X highlight this shift, with predictions that these firms will capitalize on the explosive demand for AI infrastructure, potentially driving significant revenue growth. For instance, TechCrunch reports on how startups and enterprises are increasingly reliant on these platforms, fueling a market projected to reach trillions.
The push comes as AI applications expand into IoT, blockchain, and 5G integrations, creating hybrid ecosystems that enhance real-time business operations. However, challenges like data governance and compliance loom large, with BigID's insights shared on X emphasizing the need for robust strategies to manage AI-related risks.
Fintech Disruption and Digital Banking Evolution
Fintech is set to disrupt traditional sectors further in 2025, with digital banks rapidly gaining ground through AI-driven personalization and seamless services. X discussions point to a $70 trillion wealth transfer boosting assets under management for registered investment advisors, while innovations in decentralized finance leverage blockchain for secure, efficient transactions. CNBC covers how Silicon Valley companies are leading this charge, integrating AI for fraud detection and customer engagement.
Emerging sectors such as AI-driven diagnostics and telemedicine are also on the rise, as noted in trends from UpGrad, promising to revolutionize healthcare delivery. Yet, regulatory hurdles, including new rules on data privacy and cybersecurity, could temper this growth, requiring fintech players to navigate a complex web of compliance demands.
Sustainability and Energy Innovations Take Center Stage
Sustainability emerges as a core theme, with small nuclear reactors and decentralized renewable energy addressing the power needs of AI data centers. X posts underscore the potential of these technologies to provide clean energy, projecting a 15% increase in capacity by 2030. WIRED explores how this aligns with broader environmental goals, as tech firms face pressure to reduce carbon footprints amid climate-driven challenges like urban density increasing pest infestations—a macro tailwind for related industries.
Bio-based materials and agri-tech manufacturing are gaining traction, fostering micro-factories that minimize waste. Industry insiders, as reported in ITPro Today, predict these innovations will drive revenue growth for forward-thinking companies, much like Tesla’s impact on electric vehicles.
Navigating Challenges in a Quantum-Leap Era
The IT industry in 2025 will grapple with quantum computing’s potential, which could revolutionize fields like cryptography and materials science. Gartner, via insights shared on X, highlights agentic AI’s role in this, but warns of cybersecurity threats from advanced attacks. Reuters details ongoing concerns, including the fight against deepfakes through AI watermarking, estimated to save billions in trust-related losses.
Mental health apps and 3D printing for goods represent niche growth areas, blending technology with human-centric solutions. As Fox Business notes, these trends underscore the need for ethical AI deployment, ensuring innovations benefit society without exacerbating inequalities.
Strategic Imperatives for Tech Executives
For executives, the key lies in balancing innovation with risk management. Ad Age discusses how brands are adopting AI for marketing, including revenue-sharing models with publishers like those piloted by Perplexity. Remote work’s permanence, as per X trends, demands AI tools for collaboration, while sustainability mandates investment in green tech.
Ultimately, 2025's tech environment promises unprecedented opportunities, but success hinges on adaptive strategies. Companies that pair AI adoption with sound governance, sustainable infrastructure, and human-centric design will be best positioned to seize them.
Anthropic Bans Chinese Entities from Claude AI Over Security Risks

In a move that underscores escalating tensions in the global artificial intelligence arena, Anthropic, the San Francisco-based AI startup backed by tech giants like Amazon, has tightened its service restrictions to exclude companies majority-owned or controlled by Chinese entities. This policy update, effective immediately, extends beyond China’s borders to include overseas subsidiaries and organizations, effectively closing what the company described as a loophole in access to its Claude chatbot and related AI models.
The decision comes amid growing concerns over national security, with Anthropic citing risks that its technology could be co-opted for military or intelligence purposes by adversarial nations. As reported by Japan Today, the company positions itself as a guardian of ethical AI development, emphasizing that the restrictions target “authoritarian regions” to prevent misuse while promoting U.S. leadership in the field.
Escalating Geopolitical Frictions in AI Access
This clampdown is not isolated but part of a broader pattern of U.S. tech firms navigating the fraught U.S.-China relationship. Anthropic’s terms of service now prohibit access for entities where more than 50% ownership traces back to Chinese control, a threshold that could impact major players like ByteDance, Tencent, and Alibaba, even through their international arms. Industry observers note this as a first-of-its-kind explicit ban in the AI sector, potentially setting a precedent for competitors.
According to Tom’s Hardware, the policy cites “legal, regulatory, and security risks,” including the possibility of data coercion by foreign governments. This reflects heightened scrutiny from U.S. regulators, who have increasingly viewed AI as a strategic asset akin to semiconductor technology, where export controls have already curtailed shipments to China.
Implications for Global Tech Ecosystems and Innovation
For Chinese-owned firms operating globally, the restrictions could disrupt operations reliant on advanced AI tools, forcing a pivot to domestic alternatives or open-source options. Posts on X highlight a mix of sentiments, with some users decrying it as an attempt to monopolize AI development in a “unipolar world,” while others warn of retaliatory measures that might accelerate China’s push toward self-sufficiency in AI.
Anthropic’s move aligns with similar actions in the tech industry, such as restrictions on chip exports, which have spurred Chinese innovation in areas like Huawei’s Ascend processors. As detailed in coverage from MediaNama, this policy extends to other unsupported regions like Russia, North Korea, and Iran, but the focus on China underscores the AI arms race’s intensity.
Industry Reactions and Potential Ripple Effects
Executives and analysts are watching closely to see if rivals like OpenAI or Google DeepMind follow suit, potentially forgoing significant revenue streams. One X post from a technology commentator suggested this could pressure competitors into similar decisions, given the geopolitical stakes, while another lamented the fragmentation of global AI access, arguing it denies “AI sovereignty” to nations outside the U.S. sphere.
The financial backing of Anthropic—valued at over $18 billion—includes heavy investments from Amazon and Google, which may influence its alignment with U.S. interests. Reports from The Manila Times indicate that the company frames this as a proactive step to safeguard democratic values, but critics argue it could stifle international collaboration and innovation.
Navigating Future Uncertainties in AI Governance
Looking ahead, this development raises questions about the balkanization of AI technologies, where access becomes a tool of foreign policy. Industry insiders speculate that Chinese firms might accelerate investments in proprietary models, as evidenced by recent open-source releases that challenge Western dominance. Meanwhile, Anthropic’s stance could invite scrutiny from antitrust regulators, who might view it as consolidating power among U.S. players.
Ultimately, as the AI sector evolves, such restrictions highlight the delicate balance between security imperatives and the open exchange that has driven technological progress. With ongoing U.S. sanctions and China’s rapid advancements, the coming years may see a more divided global AI ecosystem, where strategic decisions like Anthropic’s redefine competitive boundaries and influence the trajectory of innovation worldwide.
Community Editorial Board: Considering Colorado’s AI law

Members of our Community Editorial Board, a group of community residents who are engaged with and passionate about local issues, respond to the following question: During the recent special session, Colorado legislators failed to agree on an update to the state’s yet-to-be-implemented artificial intelligence law, despite concerns from the tech industry that the current law will make compliance onerous. Your take?
Colorado’s artificial intelligence law, passed in 2024 but not yet in effect, aims to regulate high-risk AI systems by requiring companies to assess risk, disclose how AI is used and avoid discriminatory outcomes. But as its 2026 rollout approaches, tech companies and Governor Polis argue the rules are too vague and costly to implement. Polis has pushed for a delay to preserve Colorado’s competitiveness, and the Trump administration’s AI Action Plan has added pressure by threatening to withhold federal funds from states with “burdensome” AI laws. The failure to update the law reflects a deeper tension: how to regulate fast-moving technology without undercutting economic growth.
Progressive lawmakers want people to have rights to see, correct and challenge the data that AI systems use against them. If an algorithm denies you a job, a loan or health coverage, you should be able to understand why. On paper, this sounds straightforward. In practice, it runs into the way today’s AI systems actually work.
Large language models like ChatGPT illustrate the challenge. They don’t rely on fixed rules that can be traced line by line. Instead, they are trained on massive datasets and learn statistical patterns in language. Input text is broken into words or parts of a word (tokens), converted into numbers, and run through enormous matrices containing billions of learned weights. These weights capture how strongly tokens relate to one another and generate probabilities for what word is most likely to come next. From that distribution, the model picks an output, sometimes the top choice, sometimes a less likely one. In other words, there are two layers of uncertainty: first in the training data, which bakes human biases into the model, and then in the inference process, which selects from a range of outputs. The same input can therefore yield different results, and even when it doesn’t, there is no simple way to point to a specific line of data that caused the outcome. Transparency is elusive because auditing a model at this scale is less like tracing a flowchart and more like untangling billions of connections.
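A minimal sketch can make those two layers of uncertainty concrete. The vocabulary below is six tokens and the weights are random numbers standing in for trained parameters, so this illustrates the mechanism, not any production model:

```python
import numpy as np

rng = np.random.default_rng()

# Toy scale: a real model has ~100,000 tokens and billions of weights.
vocab = ["the", "loan", "is", "approved", "denied", "."]
V, D = len(vocab), 8
embed = rng.normal(size=(V, D))   # token id -> vector of numbers
out_w = rng.normal(size=(D, V))   # hidden state -> one score per token

def next_token_probs(context_ids):
    hidden = embed[context_ids].mean(axis=0)  # crude stand-in for attention
    logits = hidden @ out_w                   # multiply by learned weights
    e = np.exp(logits - logits.max())
    return e / e.sum()                        # softmax -> probabilities

ctx = [vocab.index(w) for w in ("the", "loan", "is")]
probs = next_token_probs(ctx)
for _ in range(3):
    # Sampling, not a fixed rule: the same input can yield different outputs.
    print(vocab[rng.choice(V, p=probs)])
```

Nothing in the output can be traced back to a single line of training data; the answer emerges from the interaction of all the weights at once, which is why auditing at full scale is so hard.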
These layers of uncertainty combine with two broader challenges. Research has not yet shown whether AI systems discriminate more or less than humans making similar decisions. The risks are real, but so is the uncertainty. And without federal rules, states are locked in competition. Companies can relocate to jurisdictions with looser standards. That puts Colorado in a bind: trying to protect consumers without losing its tech edge.
Here’s where I land: Regulating AI is difficult because neither lawmakers nor the engineers who build these systems can fully explain how specific outputs are produced. Still, in sensitive areas like housing, employment, or public benefits, companies should not be allowed to hide behind complexity. Full transparency may be impossible, but clear rules are not. Disclosure of AI use should be mandatory today, and liability should follow: If a system produces discriminatory results, the company should face lawsuits as it would for any other harmful product. It is striking that a technology whose outputs cannot be traced to clear causes is already in widespread use; in most industries, such a product would never be released, but AI has become too central to economic competitiveness to wait for full clarity. And since we lack evidence on whether AI is better or worse than human decision-making, banning it outright is not realistic. These models will remain an active area of research for years, and regulation will have to evolve with them. For now, disclosure should come first. The rest can wait, but delay must not become retreat.
Hernán Villanueva, chvillanuevap@gmail.com
Years ago, during a Senate hearing into Facebook, senators were grilling Mark Zuckerberg, and it was clear they had no idea how the internet works. One senator didn’t understand why Facebook had to run ads. It took Zuckerberg a minute to understand the senator’s question, as he couldn’t imagine anyone being that ignorant on the subject of the hearing! Yet these senators write and enact laws governing Facebook.
Society does a lot of that. Boulder does this with homelessness and climate change. They understand neither, yet create and pass laws, which, predictably, do nothing, or sometimes, make the problem worse. Colorado has done it before, as well, when it enacted a law requiring renewable energy and listed hydrogen as an energy source. Hydrogen is an energy source only when it exists in free form, as in the sun. On Earth, hydrogen is always bound to another element and, therefore, it is not an energy source; it is an energy carrier. Colorado continued regulating things it doesn’t understand with the Colorado AI Act (CAIA), which shows a fundamental misunderstanding of how deep learning and large language models, the central technologies of AI today, work.
The incentive to control malicious AI behavior is understandable. If AI companies were building biased systems on purpose, let’s get after them. But they aren’t. Bias does exist in AI programs, though, and it comes from the data used to train the model. Biased in what way? Critics contend that loan applications are biased against people of color, even when a person’s race is not represented in the data. The bias isn’t based on race; it is more likely based on the person’s address, education or credit score. Banks want to discriminate among applicants on those factors. Why? Because they correlate with the applicant’s ability to pay back the loan.
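A tiny synthetic simulation shows how this kind of proxy bias arises; the numbers here are invented, not real lending data. The decision rule never sees the protected attribute, yet approval rates still diverge because a permitted feature correlates with it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data: the protected attribute is never shown to the
# "model", but a permitted feature (a ZIP-code score) correlates with it.
protected = rng.integers(0, 2, n)                    # group 0 or 1
zip_score = 0.6 * protected + rng.normal(0.0, 1.0, n)

# The decision rule uses only the permitted feature.
approved = zip_score > 0.5

for g in (0, 1):
    rate = approved[protected == g].mean()
    print(f"group {g}: approval rate {rate:.1%}")
```

Whether that disparity counts as illegal discrimination or legitimate underwriting is exactly the question the CAIA wrestles with.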
If the CAIA makes it impossible for banks to legally use AI to screen loan applicants, are we better off? Have we eliminated bias? Absolutely not. If a human is involved, we have bias. In fact, our only hope to eliminate bias is with AI, though we aren’t there yet because of the aforementioned data issue. So we’d still have bias, but now loans would take longer to process.
Today, there is little demand for ditch diggers. We have backhoes and bulldozers that handle most of that work. These inventions put a lot of ditch diggers out of work. Are we, as a society, better for these inventions? I think so. AI might be fundamentally different from heavy equipment, but it might not be. AI is a tool that can help eliminate drudgery. It can speed up the reading of X-rays and CT scans, thereby giving us better medical care. AI won’t be perfect. Nothing created by humans can be. But we need to be cautious in slowing the development of these life-transforming tools.
Bill Wright, bill@wwwright.com