Tools & Platforms
Vibe coding has turned senior devs into ‘AI babysitters,’ but they say it’s worth it

Carla Rover once spent 30 minutes sobbing after having to restart a project she vibe coded.
Rover has been in the industry for 15 years, mainly working as a web developer. She’s now building a startup, alongside her son, that creates custom machine learning models for marketplaces.
She called vibe coding a beautiful, endless cocktail napkin on which one can perpetually sketch ideas. But dealing with AI-generated code that one hopes to use in production can be “worse than babysitting,” she said, as these AI models can mess up work in ways that are hard to predict.
She had turned to AI coding because she needed speed for her startup, which is precisely what AI tools promise.
“Because I needed to be quick and impressive, I took a shortcut and did not scan those files after the automated review,” she said. “When I did do it manually, I found so much wrong. When I used a third-party tool, I found more. And I learned my lesson.”
She and her son wound up restarting their whole project — hence the tears. “I handed it off like the copilot was an employee,” she said. “It isn’t.”
Rover is like many experienced programmers turning to AI for coding help. But such programmers are also finding themselves acting like AI babysitters — rewriting and fact-checking the code the AI spits out.
A recent report by content delivery platform company Fastly found that at least 95% of the nearly 800 developers it surveyed said they spend extra time fixing AI-generated code, with the load of such verification falling most heavily on the shoulders of senior developers.
These experienced coders have discovered issues with AI-generated code ranging from hallucinated package names to deleted information and security risks. Left unchecked, AI code can leave a product far buggier than what humans would produce.
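One of those failure modes, hallucinated package names, is cheap to screen for. Below is a minimal sketch of such a check; the module name `totally_real_http_lib` is a made-up stand-in for a hallucinated dependency, not a real package:

```python
import importlib.util

def unresolved_modules(modules):
    """Return the names in `modules` that cannot be found locally --
    a cheap first pass before trusting AI-suggested dependencies."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# "totally_real_http_lib" stands in for a hallucinated package name.
print(unresolved_modules(["json", "totally_real_http_lib"]))
# → ['totally_real_http_lib']
```

A check like this only catches names that fail to resolve in the local environment; verifying unfamiliar names against the package index before installing anything is the safer habit, since typosquatted lookalikes of hallucinated names do get published.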
Working with AI-generated code has become such a problem that it’s given rise to a new corporate coding job known as “vibe code cleanup specialist.”
TechCrunch spoke to experienced coders about their time using AI-generated code and what they see as the future of vibe coding. Thoughts varied, but one thing remained certain: The technology still has a long way to go.
“Using a coding co-pilot is kind of like giving a coffee pot to a smart six-year-old and saying, ‘Please take this into the dining room and pour coffee for the family,’” Rover said.
Can they do it? Possibly. Could they fail? Definitely. And most likely, if they do fail, they aren’t going to tell you. “It doesn’t make the kid less clever,” she continued. “It just means you can’t delegate [a task] like that completely.”
“You’re absolutely right!”
Feridoon Malekzadeh also compared vibe coding to a child.
He’s worked in the industry for more than 20 years, holding various roles in product development, software, and design. He’s building his own startup and heavily using vibe-coding platform Lovable, he said. For fun, he also vibe codes apps like one that generates Gen Alpha slang for Boomers.
He likes that he’s able to work alone on projects, saving time and money, but agrees that vibe coding is not like hiring an intern or a junior coder. Instead, vibe coding is akin to “hiring your stubborn, insolent teenager to help you do something,” he told TechCrunch.
“You have to ask them 15 times to do something,” he said. “In the end, they do some of what you asked, some stuff you didn’t ask for, and they break a bunch of things along the way.”
Malekzadeh estimates he spends around 50% of his time writing requirements, 10% to 20% of his time on vibe coding, and 30% to 40% of his time on vibe fixing — remedying the bugs and “unnecessary script” created by AI-written code.
He also doesn’t think vibe coding is the best at systems thinking — the process of seeing how a complex problem could impact an overall result. AI-generated code, he said, tries to solve more surface-level problems.
“If you’re creating a feature that should be broadly available in your product, a good engineer would create that once and make it available everywhere that it’s needed,” Malekzadeh said. “Vibe coding will create something five different times, five different ways, if it’s needed in five different places. It leads to a lot of confusion, not only for the user, but for the model.”
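Malekzadeh's point is essentially the classic don't-repeat-yourself rule. A hypothetical Python sketch of the contrast he describes (the function names are illustrative, not from any real codebase):

```python
# The pattern Malekzadeh describes: the same feature re-implemented
# in each place it is needed, in slightly different ways.
def price_label_cart(cents):
    return "$" + str(round(cents / 100, 2))

def price_label_checkout(cents):
    return f"${cents / 100:.2f}"

# What a good engineer does instead: build it once, use it everywhere.
def price_label(cents):
    """Single shared implementation for cart, checkout, receipts, etc."""
    return f"${cents / 100:.2f}"

print(price_label(1999))  # → $19.99
```

One shared implementation means one place to fix bugs and one behavior to test; five divergent copies mean five places for the model, and the user, to get confused.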
Meanwhile, Rover finds that AI “runs into a wall” when data conflicts with what it was hard-coded to do. “It can offer misleading advice, leave out key elements that are vital, or insert itself into a thought pathway you’re developing,” she said.
She also found that rather than admit to making errors, it will manufacture results.
She shared another example with TechCrunch, where she questioned the results an AI model initially gave her. The model started to give a detailed explanation pretending it used the data she uploaded. Only when she called it out did the AI model confess.
“It freaked me out because it sounded like a toxic co-worker,” she said.

On top of this, there are the security concerns.
Austin Spires is the senior director of developer enablement at Fastly and has been coding since the early 2000s.
Through his own experience, along with conversations with customers, he's found that vibe coding tends to build what is quick rather than what is “right.” This can introduce the kinds of vulnerabilities that very new programmers tend to create, he said.
“What often happens is the engineer needs to review the code, correct the agent, and tell the agent that they made a mistake,” Spires told TechCrunch. “This pattern is why we’ve seen the trope of ‘you’re absolutely right’ appear over social media.”
He’s referring to how AI models, like Anthropic’s Claude, tend to respond “you’re absolutely right” when called out on their mistakes.
Mike Arrowsmith, the chief technology officer at the IT management software company NinjaOne, has been in software engineering and security for around 20 years. He said that vibe coding is creating a new generation of IT and security blind spots to which young startups in particular are susceptible.
“Vibe coding often bypasses the rigorous review processes that are foundational to traditional coding and crucial to catching vulnerabilities,” he told TechCrunch.
NinjaOne, he said, counters this by encouraging “safe vibe coding,” where approved AI tools have access controls, along with mandatory peer review and, of course, security scanning.
The new normal
While nearly everyone we spoke to agrees that AI-generated code and vibe-coding platforms are useful in many situations — like mocking up ideas — they all agree that human review is essential before building a business on it.
“That cocktail napkin is not a business model,” Rover said. “You have to balance the ease with insight.”
But for all the lamenting on its errors, vibe coding has changed the present and the future of the job.
Rover said vibe coding helped her tremendously in crafting a better user interface. Malekzadeh simply said that, despite the time he spends fixing code, he still gets more done with AI coders than without them.
“Every technology carries its own negativity, which is invented at the same time as technical progress,” Malekzadeh said, quoting the French theorist Paul Virilio, who spoke about inventing the shipwreck along with the ship.
The pros far outweigh the cons.
The Fastly survey found that senior developers were twice as likely to put AI-generated code into production compared to junior developers, saying that the technology helped them work faster.
Vibe coding is also part of Spires’ coding routine. He uses AI coding agents on several platforms for both front-end and back-end personal projects. He called the technology a mixed experience but said it’s good in helping with prototyping, building out boilerplate, or scaffolding out a test; it removes menial tasks so that engineers can focus on building, shipping, and scaling products.
It seems the extra hours spent combing through the vibe weeds will simply become a tolerated tax on using the innovation.
Elvis Kimara, a young engineer, is learning that now. He just graduated with a master’s in AI and is building an AI-powered marketplace.
Like many coders, he said vibe coding has made his job harder, and he has often found it a joyless experience.
“There’s no more dopamine from solving a problem by myself. The AI just figures it out,” he said. At one of his last jobs, he said senior developers didn’t look to help young coders as much — some not understanding new vibe-coding models, while others delegated mentorship tasks to said AI models.
But, he said, “the pros far outweigh the cons,” and he’s prepared to pay the innovation tax.
“We won’t just be writing code; we’ll be guiding AI systems, taking accountability when things break, and acting more like consultants to machines,” Kimara said of the new normal for which he’s preparing.
“Even as I grow into a senior role, I’ll keep using it,” he continued. “It’s been a real accelerator for me. I make sure I review every line of AI-generated code so I learn even faster from it.”
Tools & Platforms
US tech giants bind developers to closed-source AI ecosystems with open tools: Ant Group

American tech giants such as OpenAI and Nvidia are trying to “lock in” developers to their closed-source artificial intelligence ecosystems by open-sourcing tools in other layers of the tech stack, according to a report from Chinese fintech company Ant Group.
While most leading US large language models are closed-source, Chinese players from Alibaba Cloud to TikTok-owner ByteDance have instead opted to “open-source” their models, meaning that developers can download and build on top of them.
A tech stack is the collection of technologies needed to develop and deploy an application. In AI, this includes hardware such as semiconductor chips and software like algorithms and frameworks.
US companies have focused their open-source efforts on AI development “toolchains” in particular to drive adoption of their proprietary AI models and hardware, Ant Group said in its report on the global open-source AI landscape, released on Saturday at the Inclusion Conference on the Bund in Shanghai.
Ant Group cited the example of Dynamo, open-sourced by US chip giant Nvidia in March, an inference platform optimised for deploying large-scale AI models. Nvidia has marketed Dynamo as the “operating system of AI”.
While the platform can be integrated with popular open-source AI development frameworks such as PyTorch and SGLang, it is designed to be paired with Nvidia’s powerful graphics processing units (GPUs), according to the Alibaba Group Holding affiliate. Alibaba owns the Post.
Tools & Platforms
Florida should embrace, not regulate, AI innovation

The development of Artificial Intelligence (AI) in recent years has been one of the most consequential technological advances since the emergence of the internet.
AI has the potential to change and improve every facet of our lives, from automating simple routine tasks like scheduling a doctor’s appointment to more complex efforts like coding a new computer program.
Yet, this transformative technology may never reach its potential if policymakers rush to regulate what they do not yet fully understand.
Like all breakthrough technologies, AI needs room to grow, including opportunities for innovators to experiment, iterate, and scale new applications. Just as the United States led the global digital revolution, empowering American tech companies to achieve superior market positions with limited regulatory interference, we now face a similar crossroads with AI.
Unfortunately, some state-level efforts risk undermining this progress.
States like Colorado and California have recently introduced or passed regulatory frameworks that could deter investment, suppress AI deployment in their respective states, and slow national momentum. With international competitors racing ahead with their own AI development programs, every unnecessary regulatory barrier we erect gives them a strategic advantage.
Federal leadership plays an important role. President Donald Trump’s recently announced AI Action Plan sets the framework for how the government can support technological advancement by prioritizing innovation, investing in AI infrastructure, and promoting U.S. leadership in global standards-setting.
While national initiatives lay the groundwork for progress, state-level action is vital in translating these goals into tangible outcomes.
Here in Florida, we are committed to fostering a regulatory environment that encourages responsible innovation. By aligning with forward-looking national efforts and resisting the urge to overregulate, we can ensure AI remains a force for economic opportunity, technological leadership, and public benefit.
With the right policies, we can ensure those benefits are realized without unnecessary barriers or delays.
___
John Snyder is the state Representative of Florida House District 86 and served as Chair of the House Information Technology Budget and Policy Subcommittee in the 2025 Legislative Session.
Tools & Platforms
AI challenges the dominance of Google search

Suzanne Bearne, Technology Reporter

Like most people, when Anja-Sara Lahady used to check or research anything online, she would always turn to Google.
But since the rise of AI, the lawyer and legal technology consultant says her preferences have changed – she now turns to large language models (LLMs) such as OpenAI’s ChatGPT.
“For example, I’ll ask it how I should decorate my room, or what outfit I should wear,” says Ms Lahady, who lives in Montreal, Canada.
“Or, I have three things in the fridge, what should I make? I don’t want to spend 30 minutes thinking about these admin tasks. These aren’t my expertise; they make me more fatigued.”
Ms Lahady says her usage of LLMs overtook Google Search in the past year when they became more powerful for what she needed.
“I’ve always been an early adopter… and in the past year have started using ChatGPT for just about everything. It’s become a second assistant.”
While she says she won’t use LLMs for legal tasks – “anything that needs legal reasoning” – she uses them in a professional capacity for any work that she describes as “low risk”, for example, drafting an email.
“I also use it to help write code or find the best accounting software for my business.”
Ms Lahady is not alone. A growing number are heading straight for LLMs, such as ChatGPT, for recommendations and to answer everyday questions.
ChatGPT attracts more than 800 million weekly active users, up from 400 million in February 2025, according to Demandsage, a data and research firm.
Traditional search engines like Google and Microsoft’s Bing still dominate the market for search. But LLMs are growing fast.
According to research firm Datos, in July 5.99% of searches on desktop browsers went to LLMs, more than double the figure from a year earlier.

Professor Feng Li, associate dean for research and innovation at Bayes Business School in London, says people are using LLMs because they lower the “cognitive load” – the amount of mental effort required to process and act on information – compared to search.
“Instead of juggling 10 links with search, you get a brief synthesis that you can edit and iterate in plain English,” he says. “LLMs are particularly useful for summarising long documents, first-pass drafting, coding snippets, and ‘what-if’ exploration.”
However, he says outputs still require verification before use, as hallucinations and factual errors remain common.
While the use of AI might have exploded, Google denies that it is at the expense of its search engine.
It says overall queries and commercial queries continued to grow year-over-year and its new AI tools significantly contributed to this increase in usage.
Those new tools include AI Mode, which allows users to ask more conversational questions and receive more tailored responses in return.
That followed the rollout of AI Overviews, which produces summaries of queries at the top of the search page.
While Google plays down the impact of LLMs on its search business, an indication of the effect came in May during testimony in an antitrust trial brought by the US Department of Justice against Google.
A top Apple executive said that the number of Google searches on Apple devices, via its browser Safari, fell for the first time in more than 20 years.
Nevertheless, Prof Li doesn’t believe search will be replaced; rather, a hybrid model will emerge.
“LLM usage is growing, but so far it remains a minority behaviour compared with traditional search. It is likely to continue to grow but stabilise somewhere, when people primarily use LLMs for some tasks and search for others such as transactions like shopping and making bookings, and verification purposes.”

As a result of the rise of LLMs, companies are having to change their marketing strategies.
They need to understand “which sources the model considers authoritative within their category,” says Leila Seith Hassan, chief data officer at digital marketing agency Digitas UK.
“For example, in UK beauty we saw news outlets and review sites like Vogue and Sephora referenced heavily, whereas in the US there was more emphasis on content from brands’ own websites.”
She says that LLMs place more trust in official websites, press releases, established media, and recognised industry rankings than in social media posts.
And that could be important, as Ms Seith Hassan says there are signs that people who have used AI to search for a product are more likely to buy.
“Referrals coming directly from LLMs often appear to be higher quality, with people more likely to convert to sales.”
There is plenty of anecdotal evidence that people are turning to LLMs when searching for products.
Hannah Cooke, head of client strategy at media and influencer agency Charlie Oscar, says she started using LLMs in a “more serious and strategic way” about 18 months ago.
She mainly uses ChatGPT but has experimented with Google Gemini to personally and professionally streamline her work and life.
Ms Cooke, who lives in London, says rather than turning to Google, she will ask ChatGPT for personalised skincare recommendations for her skin type. “There’s fewer websites I need to go through,” she says of the benefits.
And it’s the same with travel planning.
“ChatGPT is much easier to find answers and recommendations,” she says.
“For example, I used ChatGPT to research ahead of a recent visit to Japan. I asked it to plan two weeks travelling and find me restaurants with vegetarian dishes. It saved [me] hours of research.”