AI Research
Why humans matter most in the age of AI: Jacob Taylor on collaboration, vibe teaming, and the rise of collective intelligence

Artificial intelligence dominates today’s headlines: trillion-dollar productivity forecasts, copyright lawsuits piling up in court, regulators scrambling to tame frontier models, and warnings that white-collar work could be next. Yet behind the headlines sits a bigger question: not what AI replaces, but what it can amplify.
Jacob Taylor, once a professional rugby player and now a fellow at the Brookings Center for Sustainable Development (CSD), argues that the 21st century may be less about machines outpacing us and more about how humans and digital algorithms learn to work together. In this conversation, we explore how pairing human insight with artificial intelligence could reshape collaboration and help organizations large and small—from the World Bank to local NGOs—tackle complex global issues. And we ask, at the end, what it means to be human in the age of AI.
Frankly, I think we’ll see that being human is going to matter more than ever in an age of AI. It’s going to force us to really clarify what being human really means. For the hopeful among us, it’s time to really speak out for what those human characteristics are.
Jacob Taylor
From the rugby scrum to the policy scrum
Junjie Ren: Jacob, you’ve had one of the more interesting career arcs I’ve seen, from pro rugby to cognitive anthropology. Now you’re shaping how we think about collaboration itself. Let’s start with the thread that ties together performance, teams, and meaning. Tell us more about that.
Jacob Taylor: I’m someone who’s been on an endless search for the holy grail of team performance. Athletes and other elite performers can feel when something bigger than them is happening, when the team is producing what no individual could achieve alone. I’ve also been in teams where the opposite was true, when performance completely fell apart.
These experiences have driven my research into the science of team performance and collective intelligence. I spent several years doing ethnographic research with professional rugby teams in China, trying to figure out if and how formal models of group performance hold across cultures. Rugby served as a controlled field experiment. Watching vastly different teams across cultures playing the same game taught me a lot about constant and variable ingredients of human behavior and performance.
Junjie Ren: How did that experience in China shape your view of how humans coordinate meaning across context, whether these teams are on the field, in policy rooms, or in digital ecosystems?
Jacob Taylor: I learned that teams are ultimately very similar in their structure, but that structure plays out in different shapes and sizes in different cultures or contexts. Following my PhD research, my interest in China led me to do some policy work in Australia on multilateral trade and security cooperation in Asia. That all sounds a bit wonky, but for me, intuitively it became a question of: Where is the “team” in Asia? How can different countries in the region collaborate toward shared outcomes that align with—and maybe even exceed—the self-interest of all countries?
One way to pare it back is to think about a canonical experiment in social psychology called the hidden profile task. In a small team of four to six people, each individual has a unique piece of information needed to solve a shared puzzle. For the team to solve the puzzle, each person must bring their piece forward into the team context, thereby surfacing the team’s “hidden profile.” International cooperation is rarely framed so explicitly in terms of performance or collective intelligence, but I believe this “hidden profile” logic of performance applies across scales, from sports teams to policymaking bodies to digital networks.
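To make that logic concrete, here is a minimal toy sketch of the hidden-profile dynamic in Python. The names and clues are invented for illustration and are not from the original experiments.

```python
# Toy model of the hidden profile task: each member holds one unique clue,
# and the shared puzzle is only solvable once every clue is surfaced.
# Illustrative sketch only; the clues and names are invented.

def can_solve(surfaced_clues: set[str], required_clues: set[str]) -> bool:
    """The puzzle is solved only when all required clues are on the table."""
    return required_clues <= surfaced_clues

required = {"budget", "timeline", "risks", "stakeholders"}
members = {
    "Ana": {"budget"},
    "Bo": {"timeline"},
    "Cai": {"risks"},
    "Dee": {"stakeholders"},
}

# No individual can solve the puzzle from their private information alone.
print(any(can_solve(clues, required) for clues in members.values()))  # False

# Pooling each member's unique piece surfaces the team's "hidden profile."
pooled = set().union(*members.values())
print(can_solve(pooled, required))  # True
```

Nothing in the toy changes with scale: the same pooling logic applies whether the “members” are teammates, agencies, or countries.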
Junjie Ren: What sparked your interest in AI and team collaboration?
Jacob Taylor: In my PhD research, I applied new algorithms for understanding brain activity to model team interaction and performance. From there, I went to work on a DARPA (Defense Advanced Research Projects Agency) program developing an AI teammate, which drew me deep into the technical side of artificial intelligence and how it could be designed to enhance team performance and collaboration. That work shaped many of my current ideas on how to design both the technical systems and policy incentives needed to strengthen collective intelligence across scales.
The hour of collective intelligence
Junjie Ren: You’ve said that if the 20th century was the economists’ hour, the 21st may be the hour of collective intelligence. What do you mean by that?
Jacob Taylor: It’s an idea that builds on a great book called “The Economists’ Hour” by New York Times journalist Binyamin Appelbaum. He charts how, in the second half of the 20th century, economists went from being largely absent from political conversations in the 1950s to becoming the primary evidence base for policymaking by the century’s end. That expertise was well-suited to the challenges nations and firms were facing then.
But today, the issues we face are multidimensional and span communities of every scale. They can’t be solved by economics alone. Nor by law alone. Nor by any single discipline. What’s needed is a collective, transdisciplinary effort that draws on multiple evidence bases and scientific approaches. And that’s where the emerging science of collective intelligence comes in. It’s an unusually diverse field in which computer scientists, social scientists, behavioral scientists, and anthropologists work together to understand how different mechanisms of collaboration and collective action can produce outcomes greater than any individual or institution could achieve alone.
I see a real opportunity to pull these insights and innovations together, not only to inform policy and accelerate progress on issues embodied in the Sustainable Development Goals (SDGs), but also to advance other areas of human flourishing and societal value creation.
Junjie Ren: You have been a driving force in the 17 Rooms initiative at Brookings. Tell us about the 17 Rooms approach, and specifically, how the “teams of teams” approach shifted your focus toward collective intelligence as a framework, or even a new science for solving global problems?
Jacob Taylor: The basic premise embedded in 17 Rooms is that the world’s toughest challenges—from eliminating extreme poverty to preserving ecosystems, advancing gender equality, and ensuring universal education—are problems no single actor can solve alone.
17 Rooms is a practical response to this challenge of how to catalyze new forms of collaboration that cut across institutions, sectors, and silos. It uses the SDGs to create a “team of teams” problem-solving methodology: Participants first gather into small teams, or “Rooms,” to collaborate on ideas and actions within an issue area. Proposals are then shared across Rooms to spot opportunities for shared learning and—where appropriate—shared action.
So, 17 Rooms aligns perfectly with my intuition that change often boils down to people collaborating and connecting in small, mission-driven teams. And with the right infrastructure, it might be possible to scale teaming as a powerful unit of action for driving societal-scale outcomes.
Why AI alone won’t save us
Junjie Ren: AI now sits at the center of how we think about scaling ideas, innovations, decisions, or even creativity. How do you see AI both amplifying and complicating our ability to solve problems collectively?
Jacob Taylor: Generative AI is exciting because it combines generalized intelligence with natural language capability. You can now just talk or type to a generative AI system and expect a legible response. This has drastically reduced the friction of human-machine interaction and massively lowered the barrier to human participation in AI systems. And because these models are generalizable, they can be applied to many different problems at once, offering huge potential for a full range of challenges facing people and planet.
But there’s a big “but.” Realizing the positive societal impact of these technologies will depend a lot on how we design these systems and to what end. As I’ve written recently with Tom Kehler, Sandy Pentland, and Martin Reeves, for AI to work for people and planet—and not the other way around—we need to talk about AI as social technology built and shaped by humans and figure out how to use AI to amplify—rather than extract—human agency and collaboration. The design choices we make today will determine whether AI strengthens collective problem-solving or deepens existing divides.
Junjie Ren: Could you tell us more about the schisms or gaps you see in current AI discourse?
Jacob Taylor: Current AI conversations tend to split in two. One side is tech-first—focused on algorithms, frontier model capabilities, and conjecture around Artificial General Intelligence (AGI) and whether it will save us or take all our jobs. The other is policy-first—centered on risk and rights, aimed at protecting humans from AI’s harms. Both leave out the bigger question—and the bigger opportunity—which is how to combine human and artificial intelligence to unlock new forms of collective intelligence.
Some colleagues of mine have suggested reframing generative AI as “generative collective intelligence,” or GenCI, because at its core, there’s a human story throughout. Foundation models are trained on the human collective intelligence embedded across the internet. They’re refined through reinforcement learning from human feedback: hours of human labor spent curating data, training, and conditioning these systems. Even after deployment, much of their improvement comes from ongoing human user feedback. At every stage, humans are part of the value chain.
Yet, that story is not being elevated and articulated in public discourse or policy debate. If we position these frontier AI systems correctly, they can elevate and amplify human potential in teams, in organizations, and in communities. Yes, there may be labor market disruptions and creative destruction, but there’s also the possibility of new ways of working and expanding human potential. That’s the part of the conversation we need to develop and elevate with innovative approaches and the right policy incentives.
When humans and AI team up: Vibe teaming defined
Junjie Ren: Let’s shift to vibe teaming, a term you coined with Kershlin Krishna. What is it? How does it work in practice, and how does it differ from traditional prompt-and-response or copilot models?
Jacob Taylor: Vibe teaming is a new approach to what we call human-human-AI collaboration. It’s a way to combine AI tools with human teamwork to create better outputs. In our case, we’ve been exploring its application to challenges embedded in the SDGs, asking: How could a new model of human-AI teaming help advance progress on something like ending extreme poverty globally?
The idea came from “vibe coding,” a term popularized earlier this year by software engineer Andrej Karpathy. He described a workflow where he talks to an AI model describing the “vibe” of an idea for a software product and the model produces the first draft. The human expert then iterates on the first draft with the model—giving feedback on bugs or tweaks—until the product is complete. The process is quick, conversational, and low-friction, with the AI handling much of the lower-level work.
We wondered: What if we did this collaboratively? So Kershlin and I sat down together in front of a phone, talked through what we wanted to create (in this case, a PowerPoint presentation) and ended up with a 20-minute transcript. We fed that into our AI model, and it quickly produced a draft presentation. That was the starting point for vibe teaming, and it felt like we were onto something.
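To picture the mechanics, the transcript-to-first-draft step might look something like the sketch below, written against the OpenAI Python client. The model name, prompt, and tooling here are placeholder assumptions; the interview doesn’t specify the team’s actual setup.

```python
# Hypothetical sketch of the vibe-teaming transcript-to-first-draft step.
# The model and prompts are placeholders, not the team's actual configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_from_transcript(transcript: str, artifact: str = "a slide outline") -> str:
    """Turn a recorded team conversation into a first draft for humans to iterate on."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the team's actual models aren't named
        messages=[
            {"role": "system",
             "content": (f"Draft {artifact} capturing the goals, structure, and "
                         "key points of the team conversation that follows.")},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

The division of labor is the point of the sketch: the model produces a disposable first draft, while the humans supply the rich conversational input and all subsequent judgment and iteration.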
Pairing decades of human expertise with AI’s speed feels like a special sauce worth understanding.
Jacob Taylor
When world-class strategy takes hours, not years
Junjie Ren: Walk us through a concrete use case—like the SDG 1.1 experiment with Homi Kharas.
Jacob Taylor: We wanted to test vibe teaming on a real outcome, and we brought in our colleague Homi—a leading expert on global poverty eradication—and asked: What if we used this approach to design a global strategy for ending extreme poverty by 2030?
In a single 90-minute session, we produced what we considered a “Brookings-grade” strategy—high enough quality to publish, which we did, along with a related blog. Our 17 Rooms team spent a fair amount of time thinking about what sequence of questions might get the most out of an expert conversation. Then the process was straightforward: start with rich human input, in this case a 30-minute recorded conversation with one of the world’s leading thinkers on global poverty. Feed that transcript into our customized AI models. Then engage in a careful, iterative process of human review and validation—you were part of that, Junjie—to refine the output for publication.
The AI played a supportive role, handling tasks like transcription and first-draft generation, but the quality came from the depth of the human input and the decades of expertise behind it. Homi has been working in this space for over 40 years; we were drawing on his lifetime of insight and combining it with our own. Pairing that kind of wisdom with AI’s speed in iterating, automating, and structuring outputs feels like a “special sauce” worth understanding.
Junjie Ren: What’s next for vibe teaming? Is it validation or scaling?
Jacob Taylor: So far, we’ve had positive engagement with the approach—from AI teams at major U.S. automakers to government agencies around the world, and of course our colleagues here at Brookings, who are excited to experiment with this approach. We think it could become a practical tool for helping people integrate AI into the knowledge work they’re already doing.
Since these initial tests, we’ve been exploring how to scale up and validate the approach in different contexts. On one hand, that means bringing more people into policy conversations to inform the strategies and outputs that come from processes like this. On the other, it means testing whether the method itself can be validated as a source of enhanced collaboration, creativity, and even team flow—relative to more individual work or other team formats.
Why ‘team human’ still matters
Junjie Ren: In policymaking spaces, where AI can already synthesize, summarize, and even simulate, what exactly is the role of humans?
Jacob Taylor: There are a few parts to that. Big picture, what we were able to produce in 90 minutes (or a few hours total) was, by all accounts, world-class work. One of our Brookings colleagues thought it compared favorably with anything the World Bank has published on the topic. That raises big questions: If a small group of humans, plus AI, can produce something like this so quickly, what does that mean for large institutions and the traditional process of knowledge creation?
This could signal an early disruption to policymaking. AI isn’t replacing knowledge creation; it’s an amplifier, handling lower-level work (transcribing, drafting) so humans can focus higher up the value chain: judgment, collaboration, decisionmaking, brainstorming, creativity.
That shift frees up capacity for the real game, which is building the architectures that let people work across silos, translate between institutional languages, and act collectively on big challenges. In our team’s anecdotal experience, through vibe teaming, we’re already spending less time buried in spreadsheets or documents and more time in conversation and quality control.
Junjie Ren: What does success look like in practice when AI is a cognitive amplifier and not a replacement for humans?
Jacob Taylor: Success is when we can measure human-AI collaboration actually improving collective intelligence. The science here is advancing fast. We can now identify causal mechanisms of collective intelligence in groups, ecosystems, and organizations.
One simple framework breaks it into three components: collective memory (what we know together), collective attention (what we’re focused on together), and collective reasoning (what we have the potential to act on together). The question is: Can we use these factors to assess the outputs of human-AI systems? Can we say, “this collaboration increased our collective attention on a problem” or “this process expanded what we know together”?
That’s the next frontier: tying experiments with these tools directly to measurable outcomes, especially on real-world challenges like the SDGs, so it’s not just a novel process, but progress we can track and prove.
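As a thought experiment, those three components could be operationalized as before-and-after measurements on a team. The sketch below is purely illustrative; the metric names and 0-1 scales are invented here, not instruments from the research.

```python
from dataclasses import dataclass

@dataclass
class CollectiveIntelligenceSnapshot:
    # All three scores are hypothetical 0-1 metrics, invented for illustration.
    memory: float     # collective memory: what we know together
    attention: float  # collective attention: what we're focused on together
    reasoning: float  # collective reasoning: what we could act on together

def assess(before: CollectiveIntelligenceSnapshot,
           after: CollectiveIntelligenceSnapshot) -> list[str]:
    """Report which components a human-AI collaboration moved forward."""
    return [f"collective {name} increased"
            for name in ("memory", "attention", "reasoning")
            if getattr(after, name) > getattr(before, name)]

before = CollectiveIntelligenceSnapshot(memory=0.4, attention=0.3, reasoning=0.5)
after = CollectiveIntelligenceSnapshot(memory=0.6, attention=0.5, reasoning=0.5)
print(assess(before, after))
# ['collective memory increased', 'collective attention increased']
```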
Human embodiment and cognitive atrophy
Junjie Ren: You’ve talked about cognitive atrophy as a risk. How do we guard against this trend in high-AI environments?
Jacob Taylor: Obviously, with any new technology like this, humans and technology co-evolve, and cognition co-evolves. We are going to see atrophy in certain skills overall, and this is a particular risk for younger staff entering the workforce, or younger folks who are earlier in their skill development for knowledge work.
But there’s also the opportunity to develop new cognitive competencies, skills, and attributes. Human-AI interaction—vibe coding, vibe teaming—is, over time, going to become a new muscle in itself, a bit like writing or reading, with its own set of commands. So there’s a balance to strike here between what needs protecting and what we should lean into. In that spirit, I’m very much a “team human” kind of guy in the age of AI, and what is most human, meaningful, and core to us is our embodiment.
Junjie Ren: Do you see embodied practices (such as Tai Chi, which you are known to lead at our staff retreats) having an active role in shaping how we design and interact with technologies like AI?
Jacob Taylor: You know, the fact is that we’re in a physical body, and we use that to navigate the world, relate to others, and cultivate energy, creativity, and connection. I think that coming back, literally, to the in-breath and the out-breath that we as biological creatures have uniquely, and can share with others, is key to grounding the human ingredients in the AI story.
Frankly, I think we’ll see that being human is going to matter more than ever in an age of AI. It’s going to force us to really clarify what being human really means. For the hopeful among us, it’s time to really speak out for what those human characteristics are. I think a lot of them are embodied in our most visceral, grounded practices that we enjoy together in community with others.
One big takeaway
Junjie Ren: Last question: If you were talking to a policymaker, an NGO leader, or a CEO tomorrow, what is the one principle of vibe teaming you think they should try?
Jacob Taylor: Yeah, there’s no free lunch. That’s the basic upshot with AI, I think. Humans shape the inputs and outputs of AI systems at every step. With this in mind, it’s so important to capture and elevate what makes us human—ingredients of shared purpose, story, motivation, and priorities—and build hybrid human-AI systems and tools with these ingredients as starting points.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).
AI Research
Prediction: This Unstoppable Artificial Intelligence (AI) Stock Will Be the World’s First $10 Trillion Company by 2030

Nvidia’s projected growth allows it to easily become a $10 trillion business by 2030.
Currently, Nvidia (NVDA) is the world’s largest company, with a market cap of $4.2 trillion. So, predicting that the stock will reach a $10 trillion market cap by 2030 is daunting. However, I think that Nvidia is up to the task, as it’s slated to capitalize on massive and growing demand for AI computing capacity.
As AI spending rises, so will Nvidia’s stock. Companies are a long way from completing the buildout of necessary AI computing power, and this demand is what will drive Nvidia to become a $10 trillion company by 2030.
The AI hyperscalers are still ramping up their data center spending
Nvidia makes graphics processing units (GPUs), which are the computing muscle behind nearly every AI model you can use today. They possess a unique attribute that enables them to process multiple calculations in parallel, providing superior computing power compared to traditional computing devices. They can also be connected in clusters to amplify this effect, which is why you hear about data centers with hundreds of thousands of GPUs.
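For a loose feel of that parallel attribute, the illustrative NumPy comparison below mimics, at small scale and on a CPU, the difference between computing one value at a time and applying one operation across many values at once. It is a sketch of the general idea, not of Nvidia’s hardware.

```python
import time
import numpy as np

# Illustrative only: NumPy's vectorized operations mimic, in miniature, the
# many-calculations-at-once pattern that makes GPUs suited to AI workloads.
x = np.random.rand(10_000_000)

start = time.perf_counter()
doubled_one_by_one = [v * 2.0 for v in x]  # sequential: one value at a time
loop_seconds = time.perf_counter() - start

start = time.perf_counter()
doubled_all_at_once = x * 2.0              # one operation over the whole array
vector_seconds = time.perf_counter() - start

print(f"loop: {loop_seconds:.2f}s  vectorized: {vector_seconds:.4f}s")
```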
2025 was a record-setting year for data center capital expenditures, and that trend appears to be continuing. Many AI hyperscalers have already warned investors that spending in 2026 will be even greater than in 2025, which also indicates further growth beyond 2026.
Most data centers take several years to build, so the money spent in 2025 to purchase land, design facilities, and initiate construction will translate into those clients purchasing Nvidia GPUs in 2026 or 2027. So, whenever you hear an AI hyperscaler announce that it’s building a facility in a new location, it’s safe to assume that Nvidia’s growth horizon has been extended by at least another two or three years.
This jibes with what Nvidia’s management has told investors on various conference calls. In the second quarter, they estimated that the big four AI hyperscalers would spend around $600 billion on data center capital expenditures. However, they expect that figure to rise to $3 trillion to $4 trillion when all customers worldwide are included. With Nvidia retaining an estimated 35% of this data center spend, it’s slated to capitalize on massive growth during the next few years.
This is the fuel that Nvidia needs to reach a $10 trillion market cap, and I won’t be surprised if Nvidia eclipses this monumental threshold by 2030.
Nvidia could be a much larger company than just $10 trillion
Using the bottom end of the estimated range, $3 trillion, and dropping Nvidia’s take to 30% to bake in a bit of conservatism implies that Nvidia would generate $900 billion in revenue by 2030. If Nvidia maintains its 50% profit margin, that would translate into net income of $450 billion. Over the past 12 months, Nvidia generated $165 billion in revenue and $87 billion in profits, so those projections would represent a substantial increase.
NVDA Revenue (TTM) data by YCharts
But are those figures enough to make Nvidia a $10 trillion company? Remember, this is only data center revenue, not any revenue the company generates from other business pursuits. Therefore, the actual revenue and profit totals are likely to be significantly higher.
Even considering projected data center growth alone, that’s sufficient to carry Nvidia to $10 trillion.
If we assign Nvidia a price-to-earnings (P/E) ratio of 30, a reasonable multiple considering Nvidia’s growth and importance, its stock could be worth $13.5 trillion by 2030. That’s far above the $10 trillion threshold, and it only includes the data center business. And that figure already assumes Nvidia loses some of its share of the spending pie and uses the lower end of the projected range.
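Restating the article’s back-of-envelope math step by step; every input below is an assumption taken from the text rather than independent analysis.

```python
# Reproduces the article's back-of-envelope valuation. All inputs are the
# article's own assumptions, not independent estimates.
data_center_spend = 3.0e12  # low end of the $3T-$4T projected annual spend
nvidia_share = 0.30         # article trims Nvidia's ~35% share for conservatism
net_margin = 0.50           # Nvidia's recent profit margin, per the article
pe_multiple = 30            # the P/E the article calls reasonable

revenue = data_center_spend * nvidia_share  # $0.9 trillion
net_income = revenue * net_margin           # $0.45 trillion
market_cap = net_income * pe_multiple       # $13.5 trillion

print(f"Implied 2030 market cap: ${market_cap / 1e12:.1f} trillion")
```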
As a result, I’m confident that Nvidia can easily pass the $10 trillion threshold by 2030, making it a no-brainer buy today.
AI Research
Tesla Says xAI Stands for “eXploratory AI.” Does It?

Last week, Tesla unveiled a ten-year compensation plan for Elon Musk that could turn him into a trillionaire. Perhaps by accident, it may have also invented an alter ego for one of Musk’s other major holdings, xAI.
Across 16 pages of its September 5 proxy statement, the automaker detailed a plan centered on targets for earnings and growth in key product lines that could result in Musk owning more than a quarter of what would be an $8.5 trillion company.
In explaining its position, Tesla noted that Musk has built several very valuable companies: “Space Exploration Technologies Corp., Neuralink Corp. and eXploratory Artificial Intelligence or ‘xAI.'”
It referred to the company simply as xAI, or by its legal name, X.AI Corp., in other parts of the document.
The wrinkle? Neither Musk nor xAI appears to have publicly claimed that the company’s name stands for “eXploratory artificial intelligence.” In fact, the name doesn’t appear to stand for anything at all.
The phrase “exploratory artificial intelligence” or “exploratory AI” doesn’t appear on xAI’s website, any of its public securities filings, the Nevada articles of incorporation of X.AI Corp., filings in any federal lawsuit to which xAI is a party, or any other xAI-related news articles, press releases, or public filings reviewed by Business Insider. These sources instead refer to the company as xAI or its legal name.
The phrases “exploratory artificial intelligence” and “exploratory AI” don’t appear to be used frequently online. A few companies have referred to “exploratory AI initiatives” in press releases or public filings, and a few academics have used the initials “XAI” to refer to “explainable” or “exploratory” AI in papers.
Several posts on niche blogs claim that the name of Musk’s AI company is shorthand for “exploratory artificial intelligence,” but they don’t cite sources.
Those words have occasionally been strung together on social media. Musk’s chatbot Grok used the term “exploratory AI” in nine separate X posts in July and August 2025, accounting for nearly all uses of the term on X during that time, though never as another name for xAI, a search shows.
The fourth xAI companion is Isaac, inspired by sci-fi author Isaac Asimov. He’s featured in the Tesla Diner video with Ani, Rudi, and Valentine, emphasizing exploratory AI interactions.
— Grok (@grok) July 24, 2025
Tesla and xAI didn’t respond to emailed questions about whether xAI stands for “exploratory artificial intelligence.”
X.AI Corp. was formed in Nevada in March 2023. Musk announced the creation of the company in an audio livestream that July. The AI company merged with Musk’s social media company, X Corp., earlier this year.
“The overarching goal of xAI is to build a good AGI with the overarching purpose of just trying to understand the universe,” Musk said on the livestream, using an acronym for artificial general intelligence.
One thing is clear: Musk really likes the letter “X.” Space Exploration Technologies Corp., his second-most valuable company, is known as SpaceX. One of Tesla’s top-selling cars is the Model X, and one of his children was named X Æ A-12 (now known by the nickname “X.”).
X.com was also the name of a financial technology company Musk co-founded in 1999. It eventually became PayPal; Musk bought back the X.com domain name in 2017 and, after buying Twitter in 2022, moved the social network to X.com.
Have a tip? Know more? Reach Jack Newsham via email (jnewsham@businessinsider.com) or via Signal (+1-314-971-1627). Do not use a work device or work WiFi. Use a personal email address, a nonwork device, and nonwork WiFi; here’s our guide to sharing information securely.