Why humans matter most in the age of AI: Jacob Taylor on collaboration, vibe teaming, and the rise of collective intelligence

Artificial intelligence dominates today’s headlines: trillion-dollar productivity forecasts, copyright lawsuits piling up in court, regulators scrambling to tame frontier models, and warnings that white-collar work could be automated next. Yet behind the headlines sits a bigger question: not what AI replaces, but what it can amplify.

Jacob Taylor, once a professional rugby player and now a fellow at the Brookings Center for Sustainable Development (CSD), argues that the 21st century may be less about machines outpacing us, and more about how humans and digital algorithms learn to work together. In this conversation, we explore how pairing human insight with artificial intelligence could reshape collaboration and help organizations large and small—from the World Bank to local NGOs—tackle complex global issues. And we ask, at the end, what it means to be human in the age of AI.

Frankly, I think we’ll see that being human is going to matter more than ever in an age of AI. It’s going to force us to really clarify what being human really means. For the hopeful among us, it’s time to really speak out for what those human characteristics are.


Jacob Taylor


From the rugby scrum to the policy scrum

Junjie Ren: Jacob, you’ve had one of the more interesting career arcs I’ve seen, from pro rugby to cognitive anthropology. Now you’re shaping how we think about collaboration itself. Let’s start with the thread that ties together performance, teams, and meaning. Tell us more about that.

Jacob Taylor: I’m someone who’s been on an endless search for the holy grail of team performance. Athletes and other elite performers can feel when something bigger than them is happening, when the team is producing what no individual could achieve alone. I’ve also been on teams where the opposite was true, when performance completely fell apart.

These experiences have driven my research into the science of team performance and collective intelligence. I spent several years doing ethnographic research with professional rugby teams in China, trying to figure out if and how formal models of group performance hold across cultures. Rugby served as a controlled field experiment. Watching vastly different teams across cultures playing the same game taught me a lot about the constant and variable ingredients of human behavior and performance.

Junjie Ren: How did that experience in China shape your view of how humans coordinate meaning across context, whether these teams are on the field, in policy rooms, or in digital ecosystems?

Jacob Taylor: I learned that teams are ultimately very similar in their structure, but that structure plays out in different shapes and sizes in different cultures or contexts. Following my PhD research, my interest in China led me to do some policy work in Australia on multilateral trade and security cooperation in Asia. That all sounds a bit wonky, but for me, intuitively it became a question of: Where is the “team” in Asia? How can different countries in the region collaborate toward shared outcomes that align with—and maybe even exceed—the self-interest of all countries?

One way to pare it back is to think about a canonical experiment in social psychology called the hidden profile task. In a small team of four to six people, each individual has a unique piece of information needed to solve a shared puzzle. For the team to solve the puzzle, each person must bring their piece forward into the team context, thereby surfacing the team’s “hidden profile.” International cooperation is rarely framed so explicitly in terms of performance or collective intelligence, but I believe this “hidden profile” logic of performance applies across scales, from sports teams to policymaking bodies to digital networks.
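
To make that logic concrete, here is a minimal simulation sketch (our illustration, not from the interview): the team solves the puzzle only if every member surfaces their unique clue, so even high individual sharing rates compound into a much lower team success rate.

```python
import random

def hidden_profile_round(team_size: int = 5, share_prob: float = 0.8) -> bool:
    """One round of a hidden profile task: each member holds one unique clue,
    and the puzzle is solved only if every clue is surfaced to the team."""
    return all(random.random() < share_prob for _ in range(team_size))

# With 5 members each sharing their clue 80% of the time, the team succeeds
# in only ~33% of rounds (0.8 ** 5), illustrating why surfacing the
# "hidden profile" is the binding constraint on team performance.
rounds = 10_000
wins = sum(hidden_profile_round() for _ in range(rounds))
print(f"Team success rate: {wins / rounds:.0%}")
```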

Junjie Ren: What sparked your interest in AI and team collaboration?

Jacob Taylor: In my PhD research, I applied new algorithms for understanding brain activity to model team interaction and performance. From there, I went to work on a DARPA (Defense Advanced Research Projects Agency) program developing an AI teammate, which drew me deep into the technical side of artificial intelligence and how it could be designed to enhance team performance and collaboration. That work shaped many of my current ideas on how to design both the technical systems and policy incentives needed to strengthen collective intelligence across scales.

The hour of collective intelligence

Junjie Ren: You’ve said that if the 20th century was the economists’ hour, the 21st may be the hour of collective intelligence. What do you mean by that?

Jacob Taylor: It’s an idea that builds on a great book called The Economists’ Hour by New York Times journalist Binyamin Appelbaum. He charts how, in the second half of the 20th century, economists went from being largely absent from political conversations in the 1950s to providing the primary evidence base for policymaking by the century’s end. That expertise was well-suited to the challenges nations and firms were facing then.

But today, the issues we face are multidimensional and span communities of every scale. They can’t be solved by economics alone. Nor by law alone. Nor by any single discipline. What’s needed is a collective, transdisciplinary effort that draws on multiple evidence bases and scientific approaches. And that’s where the emerging science of collective intelligence comes in. It’s an unusually diverse field in which computer scientists, social scientists, behavioral scientists, and anthropologists work together to understand how different mechanisms of collaboration and collective action can produce outcomes greater than any individual or institution could achieve alone.

I see a real opportunity to pull these insights and innovations together, not only to inform policy and accelerate progress on issues embodied in the Sustainable Development Goals (SDGs), but also to advance other areas of human flourishing and societal value creation.

Junjie Ren: You have been a driving force in the 17 Rooms initiative at Brookings. Tell us about the 17 Rooms approach, and specifically, how did the “teams of teams” approach shift your focus toward collective intelligence as a framework, or even a new science, for solving global problems?

Jacob Taylor: The basic premise embedded in 17 Rooms is that the world’s toughest challenges—from eliminating extreme poverty to preserving ecosystems, advancing gender equality, and ensuring universal education—are problems no single actor can solve alone.

17 Rooms is a practical response to this challenge of how to catalyze new forms of collaboration that cut across institutions, sectors, and silos. It uses the SDGs to create a “team of teams” problem-solving methodology: Participants first gather into small teams, or “Rooms,” to collaborate on ideas and actions within an issue area. Proposals are then shared across Rooms to spot opportunities for shared learning and—where appropriate—shared action.

So, 17 Rooms aligns perfectly with my intuition that change often boils down to people collaborating and connecting in small, mission-driven teams. And with the right infrastructure, it might be possible to scale teaming as a powerful unit of action for driving societal-scale outcomes.

Why AI alone won’t save us

Junjie Ren: AI now sits at the center of how we think about scaling ideas, innovations, decisions, or even creativity. How do you see AI both amplifying and complicating our ability to solve problems collectively?

Jacob Taylor: Generative AI is exciting because it combines generalized intelligence with natural language capability. You can now just talk or type to a generative AI system and expect a legible response. This has drastically reduced the friction of human-machine interaction and massively lowered the barrier to human participation in AI systems. And because these models are generalizable, they can be applied to many different problems at once, offering huge potential for a full range of challenges facing people and planet.

But there’s a big “but.” Realizing the positive societal impact of these technologies will depend a lot on how we design these systems and to what end. As I’ve written recently with Tom Kehler, Sandy Pentland, and Martin Reeves, for AI to work for people and planet—and not the other way around—we need to talk about AI as social technology built and shaped by humans and figure out how to use AI to amplify—rather than extract—human agency and collaboration. The design choices we make today will determine whether AI strengthens collective problem-solving or deepens existing divides.

Junjie Ren: Could you tell us more about the schisms or gaps you see in current AI discourse?

Jacob Taylor: Current AI conversations tend to split in two. One side is tech-first—focused on algorithms, frontier model capabilities, and conjecture around Artificial General Intelligence (AGI) and whether it will save us or take all our jobs. The other is policy-first—centered on risk and rights, aimed at protecting humans from AI’s harms. Both leave out the bigger question—and the bigger opportunity—which is how to combine human and artificial intelligence to unlock new forms of collective intelligence.

Some colleagues of mine have suggested reframing generative AI as “generative collective intelligence,” or GenCI, because at its core, there’s a human story throughout. Foundation models are trained on the human collective intelligence embedded across the internet. They’re refined through reinforcement learning from human feedback and through hours of human labor spent curating data, training, and conditioning these systems. Even after deployment, much of their improvement comes from ongoing human user feedback. At every stage, humans are part of the value chain.

Yet, that story is not being elevated and articulated in public discourse or policy debate. If we position these frontier AI systems correctly, they can elevate and amplify human potential in teams, in organizations, and in communities. Yes, there may be labor market disruptions and creative destruction, but there’s also the possibility of new ways of working and expanding human potential. That’s the part of the conversation we need to develop and elevate with innovative approaches and the right policy incentives.

When humans and AI team up: Vibe teaming defined

Junjie Ren: Let’s shift to vibe teaming, a term you coined with Kershlin Krishna. What is it? How does it work in practice, and how does it differ from traditional prompt and response or copilot models?

Jacob Taylor: Vibe teaming is a new approach to what we call human-human-AI collaboration. It’s a way to combine AI tools with human teamwork to create better outputs. In our case, we’ve been exploring its application to challenges embedded in the SDGs, asking: How could a new model of human-AI teaming help advance progress on something like ending extreme poverty globally?

The idea came from “vibe coding,” a term popularized earlier this year by software engineer Andrej Karpathy. He described a workflow in which he talks to an AI model, conveying the “vibe” of an idea for a software product, and the model produces the first draft. The human expert then iterates on the first draft with the model—giving feedback on bugs or tweaks—until the product is complete. The process is quick, conversational, and low-friction, with the AI handling much of the lower-level work.

We wondered: What if we did this collaboratively? So Kershlin and I sat down together in front of a phone, talked through what we wanted to create (in this case, a PowerPoint presentation) and ended up with a 20-minute transcript. We fed that into our AI model, and it quickly produced a draft presentation. That was the starting point for vibe teaming, and it felt like we were onto something.
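
As a rough sketch of that loop (illustrative only; `transcribe` and `generate` below are hypothetical stand-ins for whatever speech-to-text and generative-model services a team actually uses, not the tools named in the interview):

```python
def transcribe(audio_path: str) -> str:
    # Hypothetical stand-in for a speech-to-text service.
    return f"[transcript of {audio_path}]"

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a call to a generative AI model.
    return f"[draft based on: {prompt[:60]}...]"

def vibe_team_draft(audio_path: str, max_rounds: int = 3) -> str:
    """Turn a recorded team conversation into an iteratively refined draft:
    rich human input first, an AI first draft second, human review throughout."""
    transcript = transcribe(audio_path)
    draft = generate(f"Draft a presentation from this conversation:\n{transcript}")
    for _ in range(max_rounds):
        feedback = input("Team feedback (leave blank to accept): ").strip()
        if not feedback:
            break  # the humans, not the model, decide when it's done
        draft = generate(f"Revise this draft:\n{draft}\n\nFeedback:\n{feedback}")
    return draft
```

The key design choice the interview describes is that the rich human conversation, not a terse prompt, is the primary input, and humans stay in the loop at every revision.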

Pairing decades of human expertise with AI’s speed feels like a special sauce worth understanding.


Jacob Taylor

When world-class strategy takes hours, not years

Junjie Ren: Walk us through a concrete use case—like the SDG 1.1 experiment with Homi Kharas.

Jacob Taylor: We wanted to test vibe teaming on a real outcome, and we brought in our colleague Homi—a leading expert on global poverty eradication—and asked: What if we used this approach to design a global strategy for ending extreme poverty by 2030?

In a single 90-minute session, we produced what we considered a “Brookings-grade” strategy—high enough quality to publish, which we did, along with a related blog. Our 17 Rooms team spent a fair amount of time thinking about what sequence of questions might get the most out of an expert conversation. Then the process was straightforward: start with rich human input, in this case a 30-minute recorded conversation with one of the world’s leading thinkers on global poverty. Feed that transcript into our customized AI models. Then engage in a careful, iterative process of human review and validation—you were part of that, Junjie—to refine the output for publication.

The AI played a supportive role, handling tasks like transcription and first-draft generation, but the quality came from the depth of the human input and the decades of expertise behind it. Homi has been working in this space for over 40 years; we were drawing on his lifetime of insight and combining it with our own. Pairing that kind of wisdom with AI’s speed in iterating, automating, and structuring outputs feels like a “special sauce” worth understanding.

Junjie Ren: What’s next for vibe teaming? Is it validation or scaling?

Jacob Taylor: So far, we’ve had positive engagement with the approach—from AI teams at major U.S. automakers to government agencies around the world, and of course our colleagues here at Brookings, who are excited to experiment with it. We think it could become a practical tool for helping people integrate AI into the knowledge work they’re already doing.

Since these initial tests, we’ve been exploring how to scale up and validate the approach in different contexts. On one hand, that means bringing more people into policy conversations to inform the strategies and outputs that come from processes like this. On the other, it means testing whether the method itself can be validated as a source of enhanced collaboration, creativity, and even team flow—relative to more individual work or other team formats.

Why ‘team human’ still matters

Junjie Ren: In policymaking spaces, where AI can already synthesize, summarize, and even simulate, what exactly is the role of humans?

Jacob Taylor: There are a few parts to that. Big picture, what we were able to produce in 90 minutes (or a few hours total) was, by all accounts, world-class work. One of our Brookings colleagues thought it compared favorably with anything the World Bank has published on the topic. That raises big questions: If a small group of humans, plus AI, can produce something like this so quickly, what does that mean for large institutions and the traditional process of knowledge creation?

This could signal an early disruption to policymaking. AI isn’t replacing knowledge creation; it’s an amplifier, handling lower-level work (transcribing, drafting) so humans can focus higher up the value chain: judgment, collaboration, decisionmaking, brainstorming, creativity.

That shift frees up capacity for the real game, which is building the architectures that let people work across silos, translate between institutional languages, and act collectively on big challenges. In our team’s anecdotal experience, through vibe teaming, we’re already spending less time buried in spreadsheets or documents and more time in conversation and quality control.

Junjie Ren: What does success look like in practice when AI is a cognitive amplifier and not a replacement of humans?

Jacob Taylor: Success is when we can measure human-AI collaboration actually improving collective intelligence. The science here is advancing fast. We can now identify causal mechanisms of collective intelligence in groups, ecosystems, and organizations.

One simple framework breaks it into three components: collective memory (what we know together), collective attention (what we’re focused on together), and collective reasoning (what we have the potential to act on together). The question is: Can we use these factors to assess the outputs of human-AI systems? Can we say, “this collaboration increased our collective attention on a problem” or “this process expanded what we know together”?
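
Purely as an illustration of what such an assessment could look like (the measures below are hypothetical placeholders, not an established instrument from the interview), one could score a session along the three components before and after:

```python
from dataclasses import dataclass

@dataclass
class CISnapshot:
    """Hypothetical measures of a group's collective intelligence at one moment."""
    memory: float     # what we know together, e.g., shared-knowledge coverage (0-1)
    attention: float  # what we focus on together, e.g., share of time on-problem (0-1)
    reasoning: float  # what we can act on together, e.g., vetted options, normalized (0-1)

def collaboration_gain(before: CISnapshot, after: CISnapshot) -> dict[str, float]:
    """Per-component change attributable to one human-AI collaboration session."""
    return {
        "memory": after.memory - before.memory,
        "attention": after.attention - before.attention,
        "reasoning": after.reasoning - before.reasoning,
    }

# Example: a session that mainly sharpened the group's shared focus.
print(collaboration_gain(CISnapshot(0.4, 0.3, 0.2), CISnapshot(0.5, 0.7, 0.4)))
```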

That’s the next frontier: tying experiments with these tools directly to measurable outcomes, especially on real-world challenges like the SDGs, so it’s not just novel process, but progress we can track and prove.

Human embodiment and cognitive atrophy

Junjie Ren: You’ve talked about cognitive atrophy as a risk. How do we guard against this trend in high-AI environments?

Jacob Taylor: Obviously, with any new technology like this, humans and technology co-evolve, and cognition co-evolves with them. We are going to see atrophy in certain skills, and this is a particular risk for younger staff entering the workforce, or younger folks who are still early in their skill development for knowledge work.

But there’s also the opportunity to develop new cognitive competencies, skills, and attributes. Human-AI interaction—vibe coding, vibe teaming—is, over time, going to become a new muscle in itself, a bit like writing or reading, with its own set of commands. So there’s a balance to strike here: what needs protecting, and what we should lean into. In that spirit, I’m very much a “team human” kind of guy in the age of AI, and what is most human, meaningful, and core to us is our embodiment.

Junjie Ren: Do you see embodied practices (such as Tai Chi, which you are known to lead at our staff retreats) having an active role in shaping how we design and interact with technologies like AI?

Jacob Taylor: You know, the fact is that we’re in a physical body, and we use that to navigate the world, relate to others, and cultivate energy, creativity, and connection. I think that coming back, literally, to the in-breath and the out-breath that we as biological creatures have uniquely, and can share with others, is key to grounding the human ingredients in the AI story.

Frankly, I think we’ll see that being human is going to matter more than ever in an age of AI. It’s going to force us to really clarify what being human really means. For the hopeful among us, it’s time to really speak out for what those human characteristics are. I think a lot of them are embodied in our most visceral, grounded practices that we enjoy together in community with others.

One big takeaway

Junjie Ren: Last question: if you were talking to a policymaker, an NGO leader, or a CEO tomorrow, what is the one principle of vibe teaming you think they should try?

Jacob Taylor: Yeah, there’s no free lunch. That’s the basic upshot with AI, I think. Humans shape the inputs and outputs of AI systems at every step. With this in mind, it’s so important to capture and elevate what makes us human—ingredients of shared purpose, story, motivation, and priorities—and build hybrid human-AI systems and tools with these ingredients as starting points.

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).



Lewis Honors College introduces ‘Ideas that Matter’ program series

LEXINGTON, Ky. (Sept. 16, 2025) — This fall, the Lewis Honors College (LHC) launches its “Ideas that Matter” series, a program connecting students with leading scholars, innovators and changemakers on issues shaping today’s world — from free speech and artificial intelligence to nonprofit innovation.

LHC Director of College Life Libby Hannon, who initiated the series, said the goal is to spark lively dialogue.

“The ‘Ideas that Matter’ discussions combine intellectually engaging questions with interactive conversations and allow our students to speak with some of the most forward-thinking scholars, changemakers and entrepreneurs from Lexington and beyond,” Hannon said.

The series begins Sept. 18 with University Research Professor Neal Hutchens, Ph.D., who will explore the historical and legal background of free speech and academic freedom in campus life. His talk, 5-6 p.m. in the Lewis Scholars Lounge, will conclude with an interactive Q&A.

“I’m especially looking forward to the conversation part of the evening, where we engage in and model the kind of vibrant back-and-forth that is crucial to maintaining systems of free speech and academic freedom,” Hutchens said.

On Oct. 6, Lewis Lecturer Sherelle Roberts, Ph.D., will moderate a panel of experts on artificial intelligence as they discuss “The Future of Earth and AI,” including the current and potential impacts of artificial intelligence on the future of work, the economy and the environment.

“Artificial Intelligence is quickly becoming a part of our everyday lives. Some even believe AI will transform our world as dramatically as the Industrial Revolution,” Roberts said. “This event will get our students thinking critically about our possible AI-driven future, while also having some fun.”

The event will begin at 5:30 p.m. with movie snacks and will transition into the panel discussion at 6 p.m., featuring faculty and staff from a variety of disciplines. The movie, an animated film that conceptualizes our AI-powered future, will begin at 7 p.m.

The final event of the semester, on Nov. 11, will spotlight local nonprofit Operation Secret Santa (OSS) from 5-6 p.m. in the Lewis Scholars Lounge. Founder Katie Keys and honors program alum Lucy Jett Waterbury will share the story of OSS’s creation in 2016 and its growing impact on the community.

“Operation Secret Santa is built on the belief that no child should face barriers to feeling loved and celebrated,” said Keys. “We meet families where they are, right at their doorsteps, bringing not only gifts and food, but the reminder that their village sees them and cares.”

“From (Katie’s) big heart, she has built a big, yet lean and efficient, nonprofit that has one very simple goal, to bring joy to Kentucky kids at Christmas time,” Waterbury said.

Through this series, LHC offers students a chance to engage with pressing issues, broaden their perspectives and learn directly from those making a difference.



Ethereum Foundation Bets Big on AI Agents with New Research Team

TLDR

  • Ethereum Foundation launches new dAI Team led by research scientist Davide Crapis to connect blockchain and AI economies
  • Team focuses on enabling AI agents to make payments and coordinate without intermediaries on Ethereum
  • Group continues work on ERC-8004 standard for proving AI agent identity and trust
  • Initiative aims to make Ethereum the settlement layer for autonomous machine transactions
  • Foundation hiring AI researcher and project manager to staff the new specialized unit

The Ethereum Foundation has formed a specialized artificial intelligence research team to position Ethereum as the settlement layer for autonomous machine transactions. Research scientist Davide Crapis announced the new dAI Team on Monday, outlining plans to merge blockchain technology with AI systems.

The team will pursue two main goals, according to Crapis: first, enabling AI agents to conduct payments and coordinate activities without human intermediaries; and second, building a decentralized AI infrastructure that reduces dependence on major technology companies.

Crapis leads the new unit and will connect its work with the Foundation’s protocol development group and ecosystem support division. The team has begun hiring for an AI researcher position and a project manager role to drive coordination efforts.

The dAI Team builds on existing work around ERC-8004, a proposed Ethereum standard co-authored by Crapis. This standard aims to establish identity and reputation systems for autonomous AI agents. The protocol would allow these agents to prove their trustworthiness and coordinate activities without centralized oversight.
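
ERC-8004 is still a proposal and its final interface may differ, but the pattern it describes can be sketched: before transacting with an autonomous agent, an application consults an on-chain registry. Below is a hedged illustration in Python with web3.py; the RPC endpoint, registry address, ABI, and `reputationOf` function are all invented for the example, not taken from the draft standard.

```python
from web3 import Web3

# All specifics below are illustrative assumptions, not the ERC-8004 spec:
# a real deployment would supply its own RPC endpoint, address, and ABI.
w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))

REGISTRY_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
REGISTRY_ABI = [{
    "name": "reputationOf",          # hypothetical view function
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "agent", "type": "address"}],
    "outputs": [{"name": "score", "type": "uint256"}],
}]

registry = w3.eth.contract(address=REGISTRY_ADDRESS, abi=REGISTRY_ABI)

def agent_reputation(agent_address: str) -> int:
    """Read an agent's on-chain reputation score before trusting it with a task."""
    return registry.functions.reputationOf(
        Web3.to_checksum_address(agent_address)
    ).call()
```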

AI Agent Infrastructure Development

The Ethereum Foundation sees growing demand for settlement systems as AI agents begin conducting more transactions. Crapis stated that intelligent agents need neutral infrastructure for handling value transfers and reputation management. Ethereum’s censorship resistance and verifiability make it suitable for these functions.

Current blockchain activity supports this vision of expanded use cases. CryptoQuant data shows Ethereum processed 12 million daily smart contract calls on Thursday. The analytics firm noted that network activity remains in expansion mode with record transaction volumes and active addresses.



AI agents operate as programs that make decisions with minimal human supervision. They can execute transactions and perform tasks on behalf of their programmers. Blockchains with programmable features like smart contracts provide suitable environments for these autonomous systems.

The Foundation restructured in 2025 to handle Ethereum’s growth through specialized units. The dAI Team represents part of this shift toward addressing emerging technologies. Previous focus areas included layer-2 scaling solutions and zero-knowledge proof development.

Decentralized AI Stack Goals

Multiple blockchain projects are working to integrate AI and distributed ledger technology. Matchain launched a decentralized AI blockchain in 2024. KiteAI announced an AI-driven blockchain in the Avalanche ecosystem in February 2025.

The Ethereum Foundation’s approach differs by focusing on standards and infrastructure rather than creating new blockchains. The dAI Team will support public goods and projects that combine AI with existing Ethereum capabilities.

Crapis emphasized the mutual benefits of linking AI and Ethereum. He stated that Ethereum makes AI more trustworthy while AI makes Ethereum more useful. This relationship could expand as more autonomous agents require blockchain services.

The team operates under Ethereum’s decentralized acceleration philosophy. This approach prioritizes open and verifiable AI development while maintaining human oversight of intelligent systems. The Foundation aims to prevent AI infrastructure lock-in by major technology companies.

Industry experts see potential for AI agents and blockchain technology to reshape digital commerce. The combination could enable new forms of autonomous economic activity without traditional intermediaries.

The Ethereum Foundation has begun publishing resources for the new team according to Crapis. He stated the Foundation will work with urgency to connect AI developers with the Ethereum ecosystem and accelerate research between the two fields.





Gachon University launches “AI and Computing Research Institute” to strengthen global competitiveness in AI

Convergence of AI with semiconductors, batteries, and bio, plus integrated AI education, to leap forward as a global research hub

The opening ceremony of the AI and Computing Research Institute. Courtesy of Gachon University

Gachon University has officially launched its “AI and Computing Research Institute” to strengthen its global competitiveness in artificial intelligence.

Gachon University held the opening ceremony of the AI and Computing Research Institute at the Gachon Convention Center on the 16th, marking the start of its official activities. The ceremony proceeded with an introduction of the university’s achievements, the awarding of an appointment letter, and a presentation of the institute’s vision.

With artificial intelligence as its core axis, the AI and Computing Research Institute will promote convergence research across ICT fields such as 6G networks, cloud and edge computing, quantum computing, physical AI, and new drug development. It plans to actively pursue joint projects, discussions, and international events with academia, industry, public institutions, leading overseas universities and research institutes, and national academies (Hallimwon), aiming to strengthen industry-academia cooperation, establish an AI+X ecosystem, and enhance national competitiveness.

Starting next year, the institute will also launch a range of research and industry-academia cooperation programs, including a Global AI and Computing Symposium, IEEE-level international academic conferences, an international joint research center, and AI-based regional innovation projects.

Lee Won-jun, a professor at Korea University, was appointed on the day as the institute’s inaugural director. A professor of computer science at Korea University and its Graduate School of Information Security, Lee has produced globally recognized research on wired and wireless communication and networking systems, AI-based cloud-edge computing, and wireless security, and was named an IEEE Fellow, a mark of authority in computing and networking, in 2021.

Gachon University has already led AI innovation across its education programs, including establishing Korea’s first artificial intelligence department in 2020, making basic AI education mandatory for all students, expanding AI convergence research linked to medicine, pharmaceuticals, and bio, creating AI-specialized courses for each major, and founding Korea’s first AI humanities college.

The launch of the research institute is a strategic step to build on these educational achievements and grow into a global research hub.

Lee Gil-yeo, president of Gachon University, said, “Gachon University has led AI education by opening the nation’s first artificial intelligence department. Now, we have launched this research institute to mark a new turning point in research. In particular, the bold recruitment of Professor Lee Won-jun reflects our will to grow the institute into a global hub and develop it to a world-class level through strategic convergence with the semiconductor, battery, and bio (BBC) fields.”


