Tools & Platforms
AI disagreements – by Brian Merchant

Hello all,
Well, here’s to another relentless week of (mostly bad) AI news. Between the AI bubble discourse—my contribution, a short blog on the implications of an economy propped up by AI, is doing numbers, as they say—and the AI-generated mass shooting victim discourse, I’ve barely had time to get into OpenAI. The ballooning startup has released its highly anticipated GPT-5 model, as well as its first actually “open” model in years, and is considering a share sale that would value it at $500 billion. And then there’s the New York Times’ whole package of stories on Silicon Valley’s new AI-fueled ‘Hard Tech’ era.
That package includes a Mike Isaac piece on the vibe shift in the Bay Area, from the playful-presenting vibes of the Googles and Facebooks of yesteryear, to the survival-of-the-fittest, increasingly right-wing-coded vibes of the AI era, and a Kate Conger report on what that shift has meant for tech workers. A third, by Cade Metz, about “the Rise of Silicon Valley’s Techno-Religion,” was focused largely on the rationalist, effective altruist, and AI doomer movement rising in the Bay, whose base is a compound in Berkeley called Lighthaven. The piece’s money quote is from Greg M. Epstein, a Harvard chaplain and author of a book about the rise of tech as a new religion. “What do cultish and fundamentalist religions often do?” he said. “They get people to ignore their common sense about problems in the here and now in order to focus their attention on some fantastical future.”
All this reminded me that not only had I been to the apparently secret grounds of Lighthaven (the Times was denied entry) late last year, when I was invited to attend a closed-door meeting of AI researchers, rationalists, doomers, and accelerationists, but that I had written an account of the whole affair and left it unpublished. It was during the holidays, I’d never satisfactorily polished the piece, and I wasn’t doing the newsletter regularly yet, so I just kind of forgot about it. I regret this! I reread the blog and think there’s some worthwhile, even illuminating stuff in it about this influential scene at the heart of the AI industry, and how it works. So, I figure better late than never, and might as well publish now.
The event was called “The Curve” and it took place November 22-24th, 2024, so all commentary should be placed in the context of that timeline. I’ve given the thing a light edit, but mostly left it as I wrote it late last year, so some things will surely be dated. Finally, the requisite note that work like this is now made entirely possible by my subscribers, and especially those paid supporters who chip in $6 a month to make this writing (and editing!) happen. If you’re an avid reader, and you’re able, consider helping to keep the Blood flowing here. Alright, enough of that. Onwards.
A couple weeks ago, I traveled to Berkeley, CA, to attend the Curve, an invite-only “AI disagreements” conference, per its billing. The event was held at Lighthaven, a meeting place for rationalists and effective altruists (EAs), and, according to a report in the Guardian, allegedly purchased with the help of a seven-figure gift from Sam Bankman-Fried. As I stood in the lobby, waiting to check in, I eyed a stack of books on a table by the door, whose title read Harry Potter and the Methods of Rationality. This is the 660,000-word, multi-volume work of fan fiction written by rationalist Eliezer Yudkowsky, who is famous for his assertion that tech companies are on the cusp of building an AI that will exterminate all human life on this planet.
The AI disagreements encountered at the Curve were largely over that very issue—when, exactly, not if, a super-powerful artificial intelligence was going to arise, and how quickly it would wipe out humanity when it did so. I’ve been to my share of AI conferences by now, and I attended this one because I thought it might be useful to hear this widely influential perspective articulated directly by those who believe it, and because there were top AI researchers and executives from leading companies like Anthropic in attendance, and I’d be able to speak with them one on one.
I told myself I’d go in with an open mind, do my best to check my priors at the door, right next to the Harry Potter fan fiction. I mingled with the EA philosophers and the AI researchers and doomers and tech executives. Told there would be accommodations onsite, I arrived to discover that, having failed to make a reservation in advance, my options were either sleeping in a pod or in shared dorm-style bedding. Not quite sure I could handle the claustrophobia of a pod, I opted for the dorms.
I bunked next to a quiet AI developer whom I barely saw the entire weekend and a serious but polite employee of the RAND Corporation. The grounds were surprisingly expansive; there were couches and fire pits and winding walkways and decks, all full of people excitedly talking in low voices about artificial general intelligence (AGI) or superintelligence (ASI) and their waning hopes for alignment—that such powerful computer systems would act in concert with the interests of humanity.
I did learn a great deal, and there was much that was eye-opening. For one thing, I saw the extent to which some people really, truly, and deeply believe that the AI models like those being developed by OpenAI and Anthropic are just years away from destroying the human race. I had often wondered how much of this concern was performative, a useful narrative for generating meaning at work or spinning up hype about a commercial product—and there are clearly many operators in Silicon Valley, even attendees at this very conference, who are sharply aware of this particular utility, and able to harness it for that end. But there was ample evidence of true belief, even mania, that is not easily feigned. There was one session where people sat in a circle, mourning the coming loss of humanity, in which tears were shed.
The first panel I attended was headed up by Yudkowsky, perhaps the movement’s leading AI doomer, to use the popular shorthand, which some rationalists onsite seemed to embrace and others rejected. In a packed, standing-room-only talk, the man outlined the coming AI apocalypse, and his proposed plan to stop it—basically, a multilateral treaty enforced by the US, China, and other world powers to prevent any nation from developing more advanced AI than what is more or less currently commercially available. If nations were to violate this treaty, then military force could be used to destroy their data centers.
The conference talks were held under Chatham House Rule, so I won’t quote Yudkowsky directly, but suffice to say his viewpoint boils down to what he articulated in a TIME op-ed last year: “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” At one point in his talk, at the prompting of a question I had sent into the queue, the speaker asked everyone in the room to raise their hand to indicate whether or not they believed AI was on the brink of destroying humanity—about half the room believed that, on our current path, destruction was imminent.
This was no fluke. In the next three talks I attended, some variation of “well by then we’re already dead” or “then everyone dies” was uttered by at least one of the speakers. In one panel, a debate between a former OpenAI employee, Daniel Kokotajlo, and Sayash Kapoor, a computer scientist who’d written a book casting doubt on some of these claims, the audience, and Kokotajlo himself, seemed outright incredulous that Kapoor did not think AGI posed an immediate threat to society. When the talk was over, the crowd flocked around Kokotajlo to pepper him with questions, while just a few stragglers approached Kapoor.
I admittedly had a hard time with all this, and just a couple hours in, I began to feel pretty uncomfortable—not because I was concerned with what the rationalists were saying about AGI, but because my apparent inability to occupy the same plane of reality was so profound. In none of these talks did I hear a concrete mechanism described through which an AI might become capable of usurping power and enacting mass destruction, or a particularly plausible process through which a system might come to “decide” to orchestrate mass destruction, or an account of how it would navigate and/or commandeer the necessary physical hardware to wreak its carnage—via a worldwide hodgepodge of different interfaces, coding languages of varying degrees of obsolescence, and systems that already frequently break down while communicating with each other.
I saw a deep fear that large language models were improving quickly, that the improvements in natural language processing had been so rapid in the last few years that, if the lines on the graphs held, we’d be in uncharted territory before long, and maybe already were. But much of the apocalyptic theorizing, as far as I could tell, was premised on AI systems learning how to emulate the work of an AI researcher, becoming more proficient in that field until it is automated entirely. Then these automated AI researchers continue automating that increasingly advanced work, until a threshold is crossed, at which point an AGI emerges. More and more automated systems, and more and more sophisticated prediction software, do not, to me, guarantee the emergence of a sentient system. And the notion that this AGI will then be deadly appeared to come from a shared assumption that hyper-intelligent software programs will behave according to tenets of evolutionary psychology, conquering perceived threats in order to survive, or seeking to convert all the materials around them (including humans) into something more useful to their ends. That also seems like a large and at best shaky assumption.
Little credence was given, or attention paid, to recent reports that have shown the pace of progress in the frontier models has slowed—many I spoke to felt this was a momentary setback, or that those papers were simply overstated—and there seemed to be a widespread propensity for mapping assumptions that may serve well in engineering, or in the tech industry, onto much broader social phenomena.
When extrapolating into the future, many AI safety researchers seemed comfortable making guesses about the historical rate of task replacement in the workplace begotten by automation, or how quickly remote workers would be replaced by AI systems (another key road-to-AGI metric for the rationalists). One AI safety expert said, let’s just assume that, in the past, automation has replaced 30% of workplace tasks every generation—as if this were an unknowable thing, as if there were no data about historical automation that could be obtained with research, or as if that data could be so neatly quantified into such a catchy truism. I could not help but think that sociologists and labor historians would have had a coronary on the spot; fortunately, none seem to have been invited.
A lot of these conversations seemed to be animated displays of mutual bias confirmation, in other words, between folks who are surely quite good at computational mathematics, or understanding LLM training benchmarks, but who all share similar backgrounds and preoccupations, and who seem to spend more time examining AI output than how it’s translating into material reality. It often seemed like folks were excitedly participating in a dire, high-stakes game, trying to win it with the best-argued case for alignment, especially when they were quite literally excitedly participating in a game; Sunday morning was dedicated to a 3-hour tabletop role-playing game meant to realistically simulate the next few years of AI development, to help determine what the AI-dominated future of geopolitics held, and whether humanity would survive.
(In the game, which was played by 20 or so attendees divided into two teams, AGI is realized around 2027, the US government nationalizes OpenAI, Elon Musk is put in charge of the new organization, a sort of new Manhattan Project for AI, and competition heats up with China; fortunately, the AI was aligned properly, so in the end, humanity is not extinguished. Some of the players were almost disappointed. “We won on a technicality,” one said.)
The tech press was there, too—Platformer’s Casey Newton, myself, the New York Times’ Kevin Roose, and Vox’s Kelsey Piper, Garrison Lovely, and others. At one point, some of us were sitting on a couch surrounded by Anthropic guys, including co-founder Jack Clark. They were talking about why the public remained skeptical of AI, and someone suggested it was due to the fact that people felt burned by crypto and the metaverse, and just assumed AI was vaporware too. They discussed keeping journals to record what it was like working on AI right now, given the historical magnitude of the moment, and one of the Anthropic staff mentioned that the Manhattan Project physicists kept journals at Los Alamos, too.
It was pretty easy to see why so much of the national press coverage has been taken with the “doomer” camps like the one gathered at Lighthaven—it is an intensely dramatic story, intensely believed by many rich and intelligent people. Who doesn’t want to get the story of the scientists behind the next Manhattan Project—or be a scientist wrestling with the complicated ethics of the next world-shattering Manhattan Project-scale breakthrough? Or making that breakthrough?
I don’t possess a degree in computer science, nor have I studied natural language processing for years myself; if even a third of my AI sources were so sure that an all-powerful AI was on the horizon, that would likely inform my coverage, too. No one is immune to biases: my partner is a professor of media studies, and perhaps that leads me to skew more critical of the press, or to be overly pedantic in considering the role of biases in overly long articles like this one. It’s even possible I am simply too cynical to see a real and present threat to humanity, though I don’t think that’s the case. Of course I wouldn’t.
So many of the AI safety folks I met were nice, earnest, and smart people, but I couldn’t shake the sense that the pervasive AI worry wasn’t adding up. As I walked the grounds, I’d hear snippets of animated chatter: “I don’t want to over-index on regulation” or “imagine 3x remote worker replacement” or “the day you get ASI you’re dead though.” But I heard little to no organizing. There was a panel with an AI policy worker who talked about how to lobby DC politicians to care about AI risk, and a screening of a documentary in progress about SB 1047, the AI safety bill that Gavin Newsom vetoed, but apart from that, there was little sense that anyone had much interest in, you know, fighting for humanity. And there were plenty of employees, senior researchers, even executives from OpenAI, Anthropic, and Google DeepMind right there in the building!
If you are seriously, legitimately concerned that an emergent technology is about to *exterminate humanity* within the next three years, wouldn’t you find yourself compelled to do more than argue with the converted about the particular elements of your end times scenario? Some folks were involved in pushing for SB 1047, but that stalled out; now what? Aren’t you starting an all-out effort to pressure those companies to shut down their operations ASAP? That all these folks are under the same roof for three days, and no one’s being confronted, or being made uncomfortable, or being protested—not even a little bit—is some of the best evidence I’ve seen that all the handwringing over AI Safety and x-risk really is just the sort of amped-up cosplaying its critics accuse it of being.
And that would be fine, if it weren’t taking oxygen from other pressing issues with AI, like AI systems’ penchant for perpetuating discrimination and surveillance, degrading labor conditions, running roughshod over intellectual property, plagiarizing artists’ work, and so on. Some attendees openly weren’t interested in any of this. The politics in the space appeared to skew rightward, and some relished the way AI promises to break open new markets, free of regulations and constrictions. A former Uber executive, who openly admitted that what his last company did “was basically regulatory arbitrage,” now says he plans on launching fully automated AI-run businesses, and doesn’t want to see any regulation at all.
Late Saturday night, I was talking with a policy director, a local freelance journalist, and a senior AI researcher for one of the big AI companies. I asked the researcher if it bothered him that, if everything said at the conference thus far was to be believed, his company was on the cusp of putting millions of people out of work. He said yeah, but what should we do about it? I mentioned an idea or two, and said, you know, his company didn’t have to sell enterprise automation software. A lot of artists and writers were already seeing their wages fall. The researcher looked a little pained, and laughed bleakly. It was around that point that the journalist shared that he had made $12,000 that year. The AI researcher easily might have made 30 times that.
It echoed a conversation I had with Jack Clark, of Anthropic. It was a bit strange to see him here, in this context; years ago, he’d been a tech journalist, too, and we’d run in some of the same circles. We’d met for coffee some years back, around when he’d left journalism to start a comms gig at OpenAI, where he’d do a stint before leaving to co-found Anthropic. At first I wondered if it was awkward because I was coming off my second mass layoff event in as many media jobs, and he’s now an executive of a $40 billion company, but then I recalled that I’m a member of the press, and he probably just didn’t want to talk to me.
He said that what AI is doing to labor might finally get government to spark a conversation about AI’s power, and to take it seriously. I wondered—wasn’t his company profiting from selling the very automation services that were threatening labor in the first place? Anthropic does not, after all, have to partner with Amazon and sell task-automating software. Clark said that was a good point, a good question, and that they’re gathering data to better understand exactly how job automation is unfolding, data he hopes to be able to make public. “I want to release some of that data, to spark a conversation,” he said.
I pressed him about the AGI business, too. Given that he’s a former journalist, I couldn’t help but wonder if on some level he didn’t fully buy the imminent superintelligence narrative either. But he didn’t bite. I asked him if he thought that AGI, as a construct, was useful in helping executives and managers absolve themselves and their companies of actions that might adversely affect people. “I don’t think they think about it,” Clark said, excusing himself.
The contradictions were overwhelming, and omnipresent. Yet relatively few people here were disagreeing. AGI was an inexorable force, to be debated, even wept over, as it risked destroying us all. I do not intend to demean these concerns, just to question them, and what’s really going on here. It was all thrown into even sharper relief for me when, just two weeks after the Curve, I attended a conference in DC on nuclear security, and listened to a former commander of STRATCOM discuss plainly how close we are, at any given time, to the brink of nuclear war, no AI required. A phone call would do the trick.
I checked out of the Curve confident that there is no conspiracy afoot in Silicon Valley to convince everyone AI is apocalyptically powerful. I left with the sense that there are some smart people in AI—albeit often with apparently limited knowledge of real-world politics, sociology, or industrial history—who see systems improving, have genuine and deep concerns, and other people in AI who find that deep concern very useful for material purposes. Together, they have cultivated a unique and emotionally charged hyper-capitalist value system with its own singular texture, one that is deeply alienating to anyone who has trouble accepting certain premises. I don’t know if I have ever been more relieved to leave a conference.
The net result, it seems to me, is that the AGI/ASI story imbues the work of building automation software with elevated importance. Framing the rise of AGI as inexorable helps executives, investors, and researchers, even the doom-saying ones, to effectively minimize the qualms of workers and critics worried about more immediate implications of AI software.
You have to build a case, an AI executive said at a widely attended talk at the conference, comparing raising concerns over AGI to the way that the US built its case for the invasion of Iraq.
But that case was built on faulty evidence, an audience member objected.
It was a hell of a demo, though, the AI executive said.
Thanks for reading, and do subscribe for more reporting and writing on Silicon Valley, AI, labor, and our shared future. Oh, before I forget — Paris Marx and I hopped on This Machine Kills this week, and had a great chat about AI, China, and tech bubbles. Give it a listen.
Until next time—hammers up.
Tools & Platforms
AI is being used to enhance learning. Is it dumbing us down?

The use of artificial intelligence has allowed us to rely more on technology for almost everything. But with that reliance, is AI dumbing us down? Are we still able to think critically?
A new study from MIT describes a problem called “metacognitive laziness,” raising a red flag about the push to embed AI tools into the classroom.
“Tell me about the history of the telegraph in the United States,” offered Steve Schneider, an information design and technology professor in the new artificial intelligence exploration center at SUNY Polytechnic Institute, by way of an example prompt. “We’re building on tools that allow people to come to their own models, at their own knowledge.”
“I think AI is a tool that unlocks human potential and capabilities and opportunities to advance knowledge and advance society,” said Schneider.
So, when it comes to AI, is it a cheat code for students or can it enhance their learning?
“Some faculty are really worried about students using generative AI to essentially replace their own judgment or their own learning,” said Andrew Russell, the provost and vice president of academic affairs at SUNY Polytechnic Institute. He says it’s important for students to use AI — but the right way.
“Know its capabilities, know its limits and have a sense of when it’s good to apply it and perhaps when they should use something else because they’re going to need to use it when they get out into the world and work in jobs and graduate school after they graduate,” said Russell.
One thing they’re doing in the artificial intelligence exploration center is building new applications for people to access AI.
“And what we’re learning and trying to understand is how do people experience artificial intelligence,” said Schneider.
In class, students will be given a prompt with questions that they can chat into Gemini, Google’s AI model.
“What does it mean to you to think? What happens when you think? And ask them a couple of questions about thinking and cognition. After the students answer them with Gemini, Gemini will flip the script and say, OK, now you ask me questions about how I think and how I learn,” said Schneider.
The Gemini model will then produce a summary of the conversation. Students will then be directed to save that transcript.
“So we take all the 20 or 25 transcripts we get, we’re going to upload that to a large language model, that will generate basically a 10 or 12-minute podcast, audio text that summarizes all of the work that the students did,” said Schneider.
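The workflow Schneider describes—collect the saved student transcripts, feed them to a large language model, and ask it for a podcast-style summary—is simple enough that a rough sketch can illustrate it. The sketch below assumes Google’s google-generativeai Python SDK; the folder name, model choice, and prompt wording are placeholders rather than the class’s actual setup.

```python
# Rough sketch of the transcript-to-podcast step Schneider describes.
# Assumes the google-generativeai SDK; the path, model name, API key, and
# prompt wording are illustrative placeholders, not the actual classroom setup.
from pathlib import Path

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")  # model choice is an assumption

# Gather the ~20-25 saved student transcripts (one plain-text file per student).
transcripts = [p.read_text() for p in sorted(Path("transcripts").glob("*.txt"))]

prompt = (
    "Below are student conversations with an AI about thinking and cognition. "
    "Write a 10- to 12-minute podcast script summarizing the common themes, "
    "questions, and notable insights across all of them.\n\n"
    + "\n\n---\n\n".join(transcripts)
)

response = model.generate_content(prompt)
print(response.text)  # the generated podcast-style summary script
```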
Schneider says this is how AI is allowing students to expand their capabilities.
Tools & Platforms
Top Wall Street Analysts Back These Three AI-Powered Tech Stocks

TLDR
- Broadcom secured a $10 billion customer deal and expects AI revenue to reach $45 billion in fiscal 2026
- Zscaler delivered strong Q4 results with 31% growth in remaining performance obligations for the fourth consecutive quarter
- Oracle reported 359% year-over-year growth in remaining performance obligations to $455 billion
- JPMorgan raised Broadcom’s price target to $400, Stifel boosted Zscaler to $330, and Jefferies increased Oracle to $360
- All three companies show strong AI-driven growth with analysts maintaining buy ratings
Wall Street’s top analysts are betting on three technology companies positioned to benefit from artificial intelligence growth. Broadcom, Zscaler, and Oracle all received upgraded price targets from leading analysts following strong earnings results.
Broadcom reported impressive third-quarter results and secured a new $10 billion customer deal. The semiconductor company’s AI revenue grew 18% sequentially in Q3 and is expected to reach $6.2 billion in the fourth quarter.
JPMorgan analyst Harlan Sur raised his price target for Broadcom to $400 from $325. Sur believes the company will deliver about $20 billion in AI revenue for fiscal 2025.
The analyst expects AI revenue to jump 125% to $45 billion in fiscal 2026. This growth comes from Broadcom’s custom AI chips that offer better efficiency and economics than competitors.
Zscaler Shows Strong Zero Trust Demand
Zscaler delivered solid fourth-quarter results driven by demand for Zero Trust and AI security solutions. The cybersecurity company’s remaining performance obligations grew 31% for the fourth consecutive quarter.

Stifel analyst Adam Borg increased his price target to $330 from $295. Borg praised the company’s strong execution across key metrics including billings growth.
The analyst remains positive about Zscaler’s newer solutions like Z-Flex. He believes the company’s portfolio helps organizations improve security while reducing costs through vendor consolidation.
Borg expects Zscaler to maintain high-teens revenue growth in coming years. The company continues expanding its Zero Trust offerings into emerging areas like AI security.
Oracle’s Cloud Contracts Drive Massive Growth
Oracle saw its stock surge after reporting 359% year-over-year growth in remaining performance obligations. The database company reached $455 billion in contracted revenue despite missing Q1 earnings estimates.

Jefferies analyst Brent Thill boosted his price target to $360 from $270. Thill called the RPO results the highlight of Oracle’s quarter.
Oracle added $317 billion in RPO during the quarter from four multi-billion-dollar contracts. This represents nearly five times the company’s estimated fiscal 2026 total revenue of $67 billion.
The Oracle Cloud Infrastructure business is expected to grow 77% to $18 billion in fiscal 2026. Management projects this will jump to $144 billion by fiscal 2030.
Oracle plans to expand to 71 data centers across cloud providers. The company expects multicloud database revenue to grow every quarter for several years.
All three analysts maintain buy ratings on their respective stocks. Sur ranks 39th among over 10,000 analysts tracked by TipRanks with a 67% success rate and 26.1% average return.
Tools & Platforms
Seattle launches new AI plan with hackathons, training, and expanded city services

Seattle Mayor Bruce Harrell on Thursday announced the city’s 2025-2026 Artificial Intelligence Plan, a sweeping initiative that expands on earlier work and aims to position Seattle as a national leader in responsible AI use.
The plan combines updated policies, citywide training, new tools, and a series of public hackathons to encourage innovation.
“Artificial intelligence is more than just a buzzword in Seattle – it’s a powerful tool we are harnessing to build a better city for all,” Harrell said in a statement. “Our new plan ensures we lead with our values, using AI to improve services, empower employees, and speed up processes like permitting.”
Seattle was one of the first U.S. cities to release a generative AI policy in 2023.
The updated plan broadens those principles—innovation, accountability, fairness, privacy, explainability, and security—beyond generative AI to cover all forms of artificial intelligence.
The city is launching new training programs for employees, starting with an introductory course for all staff.
Advanced workshops will cover data science, data integration, and other technical skills, while partnerships with universities and technology companies will provide specialized curricula.
Seattle is also working with labor groups to ensure workers’ rights are protected while services become more efficient.
The city has already tested about 40 AI projects. The new plan shifts focus to applying lessons learned in key areas:
- Permitting: A pilot with CivCheck is designed to cut application times in half by identifying errors before permits are submitted. Progress will be posted on a public website.
- Transportation: AI helps the Seattle Department of Transportation spot dangerous intersections for safety upgrades. The city also partners with King County Metro to improve bus reliability and with Lime to better manage bike and scooter parking.
- Infrastructure: Seattle Public Utilities is exploring AI for pipe inspections to catch problems early and protect public health. AI is also being tested for HR support and purchasing.
- Communication: Tools such as Jasper and Smartcat are being used to draft accessible materials and provide accurate translations with human review, following Harrell’s February executive order on inclusive information.
A new AI leadership role will be added to the city’s IT department to coordinate efforts.
Seattle is also working with partners including Stanford’s Regulation Lab and the Rockefeller Foundation to explore chatbots, digital assistants, and custom-built AI agents.
The city is partnering with AI House to host the Community Innovation Hackathon Series, bringing together students, technologists, entrepreneurs, and community members to design AI-powered solutions to civic challenges.
The first event, held September 11, focused on enhancing the city’s Youth Connector app, which links young people to mental health and enrichment programs. Future hackathons will address permitting, customer service, and small business support.
“Partnering is absolutely the key to success,” said Rob Lloyd, Seattle’s chief technology officer. “In true Seattle style, we’re partnering with AI House to launch the Community Innovation Hackathon Series that invites Seattleites to help us turn responsible AI into practical solutions.”
Nearly one-quarter of the nation’s AI engineers work in Seattle, second only to San Francisco.
With institutions like the University of Washington, a robust tech ecosystem, and public-private partnerships such as AI House, the city is positioning itself as a national hub for AI development.
City leaders say the new plan will help ensure growth aligns with public values while accelerating housing production, improving safety, and making services more accessible.
As Harrell put it, “By using this technology intentionally and responsibly, we are fostering a nation-leading AI economy, creating jobs and opportunities for our residents, while making progress on creating a more innovative, equitable, and efficient future for everyone.”