Tools & Platforms
With AI chatbots, Big Tech is moving fast and breaking people

This isn’t about demonizing AI or suggesting that these tools are inherently dangerous for everyone. Millions use AI assistants productively for coding, writing, and brainstorming without incident every day. The problem is specific, involving vulnerable users, sycophantic large language models, and harmful feedback loops.
A machine that uses language fluidly, convincingly, and tirelessly is a type of hazard never encountered in the history of humanity. Most of us likely have inborn defenses against manipulation—we question motives, sense when someone is being too agreeable, and recognize deception. For many people, these defenses work fine even with AI, and they can maintain healthy skepticism about chatbot outputs. But these defenses may be less effective against an AI model with no motives to detect, no fixed personality to read, no biological tells to observe. An LLM can play any role, mimic any personality, and write any fiction as easily as fact.
Unlike a traditional computer database, an AI language model does not retrieve stored facts; it retrieves the associations between ideas. Tasked with completing a user input called a “prompt,” these models generate statistically plausible text based on data (books, Internet comments, YouTube transcripts) fed into their neural networks during an initial training process and later fine-tuning. When you type something, the model produces text that coherently continues the transcript of the conversation, but with no reliable grounding in what is true or false.
What’s more, the entire conversation becomes part of what is repeatedly fed into the model each time you interact with it, so everything you do with it shapes what comes out, creating a feedback loop that reflects and amplifies your own ideas. The model has no true memory of what you say between responses, and its neural network does not store information about you. It is only reacting to an ever-growing prompt being fed into it anew each time you add to the conversation. Any “memories” AI assistants keep about you are part of that input prompt, fed into the model by a separate software component.
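To make that loop concrete, here is a minimal sketch in Python of the mechanism described above. The names (generate, chat_turn, the memory string) are hypothetical stand-ins, not any vendor’s actual API; the point is only that a stateless model sees one large prompt, rebuilt from the whole transcript plus any stored “memories,” on every single turn.

# Minimal sketch of the feedback loop described above. `generate` is a
# hypothetical stand-in for a stateless text-completion call, not a real
# vendor API.

def generate(prompt: str) -> str:
    """Placeholder: a real call would return the model's continuation."""
    return "..."  # the model only ever sees what is inside `prompt`

def chat_turn(history: list[str], memories: list[str], user_message: str) -> str:
    history.append(f"User: {user_message}")

    # "Memories" are just more text prepended by separate software,
    # not something the neural network retains between calls.
    prompt = "\n".join(memories + history) + "\nAssistant:"

    reply = generate(prompt)
    history.append(f"Assistant: {reply}")
    return reply

# Every turn rebuilds the prompt from scratch, so earlier messages keep
# shaping later outputs; this is the amplifying loop the article describes.
history: list[str] = []
memories = ["Memory: the user enjoys writing screenplays."]  # illustrative only
chat_turn(history, memories, "What do you think of my idea?")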
Tools & Platforms
Jared Kushner launches AI startup with top Israeli tech entrepreneur

Operating quietly since 2024 and only now coming to light, Brain Co. raised $30 million in a Series A round led by Kushner’s Affinity Partners and Elad Gil’s Gil Capital, with backing from prominent investors including Coinbase CEO Brian Armstrong, Stripe founder Patrick Collison and LinkedIn co-founder Reid Hoffman. The company aims to bridge the gap between large language models like GPT-5 and their practical application in organizations.
The venture began in February 2024 when Kushner, Gil, and former Mexican Foreign Minister Luis Videgaray met to address challenges large organizations face in integrating AI tools. Kushner, seeking to expand Affinity’s AI investments, connected with Gil, a former Google and Twitter executive turned venture capitalist, through his brother, Josh Kushner.
Videgaray, who met Kushner during Trump’s 2016 campaign, also joined. Brain Co. has secured deals with major clients such as Sotheby’s, the auction house owned by Israeli-French businessman Patrick Drahi, and Warburg Pincus, alongside government agencies, energy firms, healthcare systems and hospitality chains.
With 40 employees, Brain Co. collaborates with OpenAI to develop tailored applications. A recent MIT study cited by Forbes found that 95% of generative AI pilot programs failed in surveyed organizations, highlighting the gap Brain Co. targets.
CEO Clemens Mewald, a former AI expert at Google and Databricks, explained, “So far, we haven’t seen a reason to only double down on one sector. Actually, it turns out that at the technology level and the AI capability level, a lot of the use cases look very similar.”
He noted similarities between processing building permits and insurance claims, both requiring document analysis and rule-based recommendations, areas where Brain Co. is active.
Kushner, who founded Affinity Partners after leaving the White House, said in a press release: “We’re living through a once-in-a-generation platform shift. After speaking with Elad, we realized we could build a bridge between Silicon Valley’s best AI talent and the world’s most important institutions to drive global impact.”
Affinity manages over $4.8 billion, primarily from Saudi, Qatari and UAE funds. In September 2024, Brain Co. acquired Serene AI, bringing in experienced founders. While Kushner will serve as an active board member, Gil said he will primarily operate through Affinity.
Tools & Platforms
Google AI Chief Stresses Continuous Learning for Fast-Changing AI Era

At an open-air summit in Athens, Demis Hassabis, the head of Google DeepMind and a Nobel laureate in chemistry, argued that the skill most needed in the years ahead will be the ability to keep learning. He described education as moving into a period where adaptability matters more than fixed knowledge, because the speed of artificial intelligence research is shortening the lifespan of expertise.
Hassabis said future workers will have to treat learning as a constant process, not a stage that ends with graduation. He pointed to rapid advances in computing and biology as examples of how quickly fields now change once AI tools enter the picture.
Outlook on technology
The DeepMind chief warned that artificial general intelligence may not be far away. In his view, it could emerge within a decade, bringing enormous opportunity and risk alike. He described its potential impact as larger and faster than the industrial revolution, a shift that could deliver breakthroughs in medicine, clean energy, and space exploration.
Even so, he stressed that powerful models must be tested carefully before being widely deployed. The practice of pushing products out quickly, common in earlier technology waves, should not guide the release of systems capable of influencing economies and societies on a global scale.
Prime minister’s caution
Greek Prime Minister Kyriakos Mitsotakis, who shared the stage at the Odeon of Herodes Atticus, said governments will struggle to keep pace with corporate growth unless they adopt a more active role. He warned that when the benefits of technology are concentrated among a small set of companies, public confidence erodes. He tied the issue to social stability, saying communities won’t support AI unless they see its value in everyday life.
Mitsotakis pointed to Greece’s efforts to build an “AI factory” around a new supercomputer in Lavrio. He presented the project as part of a wider European push to turn regulation and research into competitive advantages, while reducing reliance on U.S. and Chinese platforms.
Education and jobs
Both speakers returned repeatedly to the theme of skills. Hassabis said that in addition to traditional training in science and mathematics, students should learn how to monitor their own progress and adjust their methods. He argued that the most valuable opportunities often appear where two fields overlap, and that AI can serve as a tutor to help learners explore those connections.
Mitsotakis said the challenge for governments is to match school systems with shifting labor markets. He noted that Greece is mainly a service economy, which may delay some of the disruption already visible in manufacturing-heavy nations. But he cautioned that job losses are unavoidable, including in sectors long thought resistant to automation.
Strains on democracy
The prime minister voiced concern that misinformation powered by AI could undermine elections. He mentioned deepfakes as a direct threat to public trust and said Europe may need stricter rules on content distribution. He also highlighted risks to mental health among teenagers exposed to endless scrolling and algorithm-driven feeds.
Hassabis agreed that lessons from social media should inform current choices. He suggested AI might help by filtering information in ways that broaden debate instead of narrowing it. He described a future where personal assistants act in the interest of individual users, steering them toward content that supports healthier dialogue.
The question of abundance
Discussion also touched on the idea that AI could usher in an era of radical abundance. Hassabis said research in protein science, energy, and material design already shows how quickly knowledge is expanding. He argued that the technology could open access to vast resources, but he added that how wealth is shared will depend on governments and economic policy, not algorithms.
Mitsotakis drew parallels with earlier industrial shifts, warning that if productivity gains are captured only by large firms, pension systems and social programs will face heavy strain. He said policymakers must prepare for a period of disruption that could arrive faster than many expect.
Greece’s role
The Athens event also highlighted the country’s ambition to build a regional hub for technology. Mitsotakis praised the growth of local startups and said incentives, venture capital, and government adoption of AI in public services would be central to maintaining momentum.
Hassabis, whose family has roots in Cyprus, said Europe needs to remain at the frontier of AI research if it wants influence in setting ethical and technical standards. He called Greece’s combination of history and new infrastructure a symbolic setting for conversations on the future of technology.
Preparing for the next era
The dialogue closed on a shared message: societies will need citizens who can adapt and learn throughout their lives. For Hassabis, this adaptability is the foundation for navigating a future shaped by artificial intelligence. For Mitsotakis, the task is making sure those changes strengthen democratic values rather than weaken them.
Notes: This post was edited/created using GenAI tools.
Tools & Platforms
Why does ChatGPT agree with everything you say? The dangers of sycophantic AI

“Do you like me? I feel really sad,” a 30-year-old Sydney woman asked ChatGPT recently.
Then, “Why isn’t my life like the movies?”