
Coinbase CEO explains why he fired engineers who didn’t try AI immediately


It’s hard to find programmers these days who aren’t using AI coding assistants in some capacity, especially to write the repetitive, mundane bits.

But those who refused to try the tools when Coinbase bought enterprise licenses for GitHub Copilot and Cursor got promptly fired, CEO Brian Armstrong said this week on John Collison’s podcast “Cheeky Pint.” (Collison is the co-founder and president of the payments company Stripe.)

Once the licenses covered every engineer, some at the cryptocurrency exchange warned Armstrong that adoption would be slow, predicting it would take months to get even half the engineers using AI.

Armstrong was shocked at the thought. “I went rogue,” he said, and posted a mandate in the company’s main engineering Slack channel. “I said, ‘AI is important. We need you to all learn it and at least onboard. You don’t have to use it every day yet until we do some training, but at least onboard by the end of the week. And if not, I’m hosting a meeting on Saturday with everybody who hasn’t done it and I’d like to meet with you to understand why.’” 

At the meeting, some people had reasonable explanations for not getting their AI assistant accounts set up during the week, like being on vacation, Armstrong said.

“I jumped on this call on Saturday and there were a couple people that had not done it. Some of them had a good reason, because they were just getting back from some trip or something, and some of them didn’t [have a good reason]. And they got fired.”

Armstrong admits that it was a “heavy-handed approach” and there were people in the company who “didn’t like it.”


While it doesn’t sound like very many people were fired, Armstrong said it sent a clear message that AI is not optional. Still, everything about that story is wild: that there were engineers who wouldn’t spend a few minutes of their week signing up for and testing the AI assistant — the most hyped tech for coders ever — and that Armstrong was willing to fire them over it.

Coinbase did not respond to a request for comment.

Since then, Armstrong has leaned further into the training. He said the company hosts monthly meetings where teams who have mastered creative ways to use AI share what they have learned.

Interestingly, Collison, who has been programming since childhood, questioned how much companies should be relying on AI-generated code.

“It’s clear that it is very helpful to have AI helping you write code. It’s not clear how you run an AI-coded code base,” he commented. Armstrong replied, “I agree.”

Indeed, as TechCrunch previously reported, a former OpenAI engineer described that company’s central code repository as “a bit of a dumping ground.” The engineer said management had begun dedicating engineering resources to improve the situation.





Nvidia unveils AI chips for video, software generation



Nvidia said on Tuesday it would launch a new artificial intelligence chip by the end of next year, designed to handle complex functions such as creating videos and software.

The chip, dubbed “Rubin CPX,” will be built on Nvidia’s next-generation Rubin architecture, the successor to its latest “Blackwell” technology, which marked the company’s foray into providing larger processing systems.

As AI systems grow more sophisticated, tackling data-heavy tasks such as “vibe coding” (AI-assisted code generation) and video generation, the industry’s processing needs are intensifying.

AI models can take up to 1 million tokens to process an hour of video content — a challenging feat for traditional GPUs, the company said. Tokens refer to the units of data processed by an AI model.

To remedy this, Nvidia will integrate various steps of the drawn-out processing sequence, such as video decoding, encoding, and inference (the stage at which AI models produce an output), into its new chip.

Investing $100 million in these new systems could help generate $5 billion in token revenue, the company said, as Wall Street increasingly focuses on the return from pouring hundreds of billions of dollars into AI hardware.

The race to develop the most sophisticated AI systems has made Nvidia the world’s most valuable company, commanding a dominant share of the AI chip market with its pricey, top-of-the-line processors.




Top Japan start-up Sakana AI touts nature-inspired tech





The challenge goes beyond merely understanding how AI works


As AI evolves from simple automation to sophisticated autonomous agents, HR executives face one of the most significant workforce transformations in modern history. The challenge isn’t just understanding the technology — it’s navigating culture change, skills development and workforce planning when AI capabilities double every six months.

Simon Brown, EY’s global learning and development leader, has spent nearly two years helping the firm’s 400,000 employees prepare for an AI-driven future. Drawing on his past experience as chief learning officer at Novartis and his work with Microsoft, Brown offers critical insights on positioning organizations for success in an autonomous AI world.

What are the top questions C-suite executives need to ask their teams about agentic AI initiatives?

Are people aware of what’s possible with agents? Are we experimenting to find ways agents can help us? Do we have the skills and knowledge to do that properly?

But the most critical question is: Is the culture there to support this? Most organizations are feeling their way through which tools work, what the use cases are, what drives value. There’s a lot of ambiguity. Some organizations manage well through uncertainty; others need clear answers and can’t fail — that’s hard when there’s no clear path and people need to experiment.

How can leaders assess whether their organization has the right culture for agentic AI?

Look at how AI tools like Microsoft Copilot are being embraced. Are people experimenting and finding productivity value, or are they threatened and not using it? If leaders are role modeling use and encouraging their people, that comes through in adoption metrics. Culture shows through communication, leadership role modeling, skill building and time to learn.

What are common blind spots when executives evaluate AI readiness?

Two major issues. First, executives often aren’t aware of what’s possible with the latest AI systems due to security constraints and procurement processes that create 6-to-12-month lags.

Second, the speed of improvement. If I tried an AI tool a month ago versus today, I may get a completely different experience because the underlying model improved. Copilot now has GPT-5 access, giving it a significant overnight boost. Leaders need to shift from thinking about AI as static systems upgraded annually to something constantly improving and doubling in power every six months.

How should leaders approach change management with AI agents?

Change management is essential. When OpenAI releases new capabilities, everyone has access to the technology. Whether organizations get the benefit depends entirely on change management — culture, experimentation ability, skills and whether people feel encouraged rather than fearful. We’re addressing this through AI badges, curricula, enterprise-wide learning — all signaling the organization values building AI skills.

What’s your framework for evaluating whether AI investment will drive real business value?

I think about three loops. First, can I use this to do current tasks cheaper, faster, better? Second, can I realize new value — serving more customers, new products and services? Third, if everyone’s using AI, how do we reinvent ourselves to create new value? It’s moving beyond just doing the same things better to what AI helps us do differently.

How should HR leaders rethink workforce planning given AI’s potential to automate job functions?

Understand which skills AI will impact, which remain uniquely human and what new roles get created. The World Economic Forum predicts a significant reduction in certain roles but a net increase overall. We’re seeing new, more sophisticated roles created that move people higher up the value chain.

From HR’s perspective, are our processes still fit for AI speed? How are we incentivizing reskilling? Are we ensuring learning access and time? How are we signaling which skills are in demand versus at risk of automation?

How should HR measure success after implementing agentic AI?

Tie back to why it was implemented — business value. Use similar metrics as before but look at what changed. Maybe same output but cheaper, faster, better. Or new capabilities — our third-party risk team uses agents to provide much more extensive supplier analysis than before. Same team size, more client value.

What’s your timeline perspective on when agentic AI becomes competitive necessity versus advantage?

That’s the ultimate question. I’m amazed daily by what I achieve using AI and agents. ChatGPT-5’s recent capabilities are mind-blowing, suggesting dramatic impact quickly. But when deep AI experts have vastly different views — from AGI around the corner to decades away — it’s understandable why leaders struggle to navigate this landscape.


