AI Promised Faster Coding. This Study Disagrees


Welcome back to In the Loop, TIME’s new twice-weekly newsletter about the world of AI. We’re publishing installments both as stories on Time.com and as emails.

If you’re reading this in your browser, you can subscribe to have the next one delivered straight to your inbox.

What to Know:
Could coding with AI slow you down?

In just the last couple of years, AI has transformed the world of software engineering. Writing your own code (from scratch, at least) has become quaint. Now, with tools like Cursor and Copilot, human developers can marshal AI to write code for them. The human role now is to know what to ask the models to get the best results, and to iron out the inevitable problems that crop up along the way.

Conventional wisdom states that this has accelerated software engineering significantly. But has it? A new study by METR, published last week, set out to measure the degree to which AI speeds up the work of experienced software developers. The results were very unexpected.

What the study found — METR measured the speed of 16 developers working on complex software projects, both with and without AI assistance. After finishing their tasks, the developers estimated that access to AI had accelerated their work by 20% on average. In fact, the measurements showed that AI had slowed them down by about 20%. The results were roundly met with surprise in the AI community. “I was pretty skeptical that this study was worth running, because I thought that obviously we would see significant speedup,” wrote David Rein, a staffer at METR, in a post on X.

Why did this happen? — The simple technical answer seems to be: while today’s LLMs are good at coding, they’re often not good enough to intuit exactly what a developer wants and answer perfectly in one shot. That means they can require a lot of back and forth, which might take longer than if you just wrote the code yourself. But participants in the study offered several more human hypotheses, too. “LLMs are a big dopamine shortcut button that may one-shot your problem,” wrote Quentin Anthony, one of the 16 coders who participated in the experiment. “Do you keep pressing the button that has a 1% chance of fixing everything? It’s a lot more enjoyable than the grueling alternative.” (It’s also easy to get sucked into scrolling social media while you wait for your LLM to generate an answer, he added.)

What it means for AI — The study’s authors urged readers not to generalize too broadly from the results. For one, the study only measured the impact of LLMs on experienced coders, not new ones, who might benefit more from their help. And developers are still learning how to get the most out of LLMs, which are relatively new tools with strange idiosyncrasies. Other METR research, they noted, shows that the length of software tasks AI can complete doubles roughly every seven months, meaning that even if today’s AI is detrimental to one’s productivity, tomorrow’s might not be.

Who to Know:
Jensen Huang, CEO of Nvidia

Huang finds himself in the news today after he proclaimed on CNN that the U.S. government doesn’t “have to worry” about the possibility of the Chinese military using the market-leading AI chips that his company, Nvidia, produces. “They simply can’t rely on it,” he said. “It could be, of course, limited at any time.”

Chipping away — Huang was arguing against policies that have seen the U.S. heavily restrict the export of graphics processing units, or GPUs, to China, in a bid to hamstring Beijing’s military capabilities and AI progress. Nvidia claims that these policies have simply incentivized China to build its own rival chip supply chain, while hurting U.S. companies and by extension the U.S. economy.

Self-serving argument — Huang, of course, would say that: he is the CEO of a company that has lost out on billions as a result of being blocked from selling its most advanced chips to the Chinese market. He pressed his case with President Donald Trump at a recent meeting at the White House, Bloomberg reported.

In fact… The Chinese military does use Nvidia chips, according to research by Georgetown’s Center for Security and Emerging Technology, which analyzed 66,000 military purchasing records to come to that conclusion. A large black market has also sprung up to smuggle Nvidia chips into China since the export controls came into place, the New York Times reported last year.

AI in Action

Anthropic’s AI assistant, Claude, is transforming the way the company’s scientists keep up with the thousands of pages of scientific literature published every day in their field.

Instead of reading papers, many Anthropic researchers now simply upload them into Claude and chat with the assistant to distill the main findings. “I’ve changed my habits of how I read papers,” Jan Leike, a senior alignment researcher at Anthropic, told TIME earlier this year. “Where now, usually I just put them into Claude, and ask: can you explain?”

To be clear, Leike adds, sometimes Claude gets important stuff wrong. “But also, if I just skim-read the paper, I’m also gonna get important stuff wrong sometimes,” Leike says. “I think the bigger effect here is, it allows me to read much more papers than I did before.” That, he says, is having a positive impact on his productivity. “A lot of time when you’re reading papers is just about figuring out whether the paper is relevant to what you’re trying to do at all,” he says. “And that part is so fast, you can just focus on the papers that actually matter.”

What We’re Reading

Microsoft and OpenAI’s AGI Fight Is Bigger Than a Contract — By Steven Levy in Wired

Steven Levy goes deep on the “AGI” clause in the contract between OpenAI and Microsoft, which could decide the fate of their multi-billion dollar partnership. It’s worth reading to better understand how both sides are thinking about defining AGI. They could do worse than Levy’s own description: “a technology that makes Sauron’s Ring of Power look like a dime-store plastic doodad.”




I Landed a Job at an AI Startup. Here Are My Tips for Working in AI.



This as-told-to essay is based on a conversation with Lambert Liu, a software engineer. The following has been edited for length and clarity. Business Insider has verified his employment and academic history.

For most computer science graduates, it’s a no-brainer to work for Big Tech.

Most of my classmates were drawn to Big Tech companies like Google, Meta, and Amazon because they promised prestige, stability, and a structured career path.

But I found myself falling into a second group of college students: those who actively seek out startups for the steep learning curve and the potential equity upside if the company goes public.

I reached that decision after doing internships in both Big Tech and startups.

I did two internships at Google during my sophomore and junior years in college.

When I interned at Google for the first time, I really liked it. But when I went back for a second round, I thought my growth there was plateauing. I didn’t see myself working there in the long term.

At the end of my junior year, I did an internship at Replit, an AI software development startup. That experience was refreshing because I got to lead impactful projects. I realized I wanted to work at a startup, and that led me to my first job at Graphite, an AI code review platform.

Here are the top tips I have if you want to land a job at an AI startup.

Big Tech experience helps

If you are like me and want to give startups a shot after interning only at Big Tech, don’t worry. You don’t need prior startup internship experience to work at one.

Interning at a Big Tech company helps demonstrate to employers that you have a strong overall technical foundation. You will know how to do great technical design and great testing. Your stint with Big Tech tells recruiters that you are capable of writing clean code and shipping reliably.

It’s good to have startup experience because you will be more used to dealing with ambiguity and thinking quickly on your feet. But that gap can easily be closed by working on your own personal projects, which takes me to my next point.

Build more projects

I worked on a lot of passion projects in my downtime. Those projects not only developed my skills but also strengthened my approach to solving problems.

In fact, those projects do not have to be AI-related. You can use AI tools to amplify your productivity as an engineer, but you should not limit yourself to just working on AI projects.

Building AI projects also isn’t a prerequisite for working at an AI startup. These companies generally look for great engineers, and whether or not your projects involve AI, there are many ways to demonstrate your thinking and technical skills.

LeetCode still matters, but not as much

Solving algorithmic and coding problems on LeetCode, an online learning platform, still matters when you are preparing for technical interviews at startups.

That said, there’s a lot more emphasis on one’s ability to deal with ambiguity and tackle non-technical areas like product thinking. This is especially the case since every engineer can use AI to write code.

Working on your own projects will help you strengthen your problem-solving skills. Having to build something new forces you to develop your perspective and taste for approaching problems, which will help you better handle the interview.

Get good at system design thinking

My job interview at Graphite was the first time I was ever asked about system design. That is not usually asked of new graduates. When it comes to system design, companies assess not only your technical skills but also your approach to problems.

I learned a lot about system design thinking when I took a course on human-computer interaction in college. I learned how to scope problems and then build a technical foundation to solve them. The course also gave me some hands-on experience when I built a project.

Foundational courses like algorithms and data science are important, but going into areas like human-computer interaction will be useful when you start interviewing.

Be a holistic engineer

If you want to excel at a startup, you must strive to be a holistic engineer above all else. You need to work at a fast pace. And on top of that, you have to show that you really care about your users.

You can start doing that now when you are interning. Show your bosses that you really care about your craft and want to make the best possible product.

Take ownership of your work as much as possible. At AI startups like Graphite, we move fast, so we are looking for hires who can cope with that velocity and produce high-quality work.

Do you have a story to share about working at an AI startup? Contact this reporter at ktan@businessinsider.com.






Get paid faster: How an AI productivity assistant can save small businesses hours every week by chasing late payments



For Nadia, Sage Copilot is an essential ally when it comes to tackling a chore most small businesses waste time on – chasing overdue invoices.

Nadia is the volunteer treasurer of a rowing club in Hammersmith. She uses Sage Copilot to detect automatically when invoices are due and, at the click of a button, send payment reminders to chase the unpaid ones.

Sage Copilot saves her an estimated five hours every week, and Sage customers report that using the feature gets them paid a week earlier.

Lisa Ewans, Senior Vice President at the Newcastle-based accounting software firm, says: ‘Through our AI, Nadia saves hours of admin a week, which is hugely valuable as a volunteer, because it’s not a day job she’s getting paid for so time is her currency.’

‘Sage Copilot automatically runs in the background, and at the click of a button Sage Copilot chases those customers whose payments are overdue and helps to get the payments in.’

Small businesses can customise the tone of their invoice-chasing emails so it sounds like them, giving them control of how the AI works for them.

Sage Copilot helps small business owners stay on top of their finances and in control of cash flow beyond chasing invoices: it can help them spot duplicate payments and other errors in their accounts, a common problem, and identify trends that might affect the business.

From tradespeople to coffee shop owners


Sage Copilot has a huge amount to offer for businesses in many sectors: tradespeople, for example, often spend considerable time chasing late payments.

Tradespeople can use the time saved to focus on finding new customers or delivering for existing ones.

With Sage Copilot, small business owners can stay on top of their finances with confidence. It chases invoices, catches duplicate payments and errors, and even spots trends that might affect their business, helping them stay one step ahead.

And when it identifies changes, like rising prices, Sage Copilot provides clear, timely alerts so you can act quickly and make confident decisions.

For other business owners, such as cafe owners, the ability to easily spot duplicate payments within VAT returns, or payments that have been miscategorised, allows them to file returns with peace of mind.

Lisa Ewans says: ‘The focus is around identifying errors in the data, so Sage Copilot spots duplicate transactions for example and alerts business owners automatically. It gives business owners the confidence to sign off returns quickly, and saves them time.’

Sage is a British business, born and bred, with its global headquarters in Newcastle, its home for over 40 years. In that time, it has worked closely with businesses across every sector to understand their challenges and develop financial AI that delivers real value, solving real-world problems.

Helping with the details

For accountants and bookkeepers, Sage Copilot offers other benefits, helping them deal with both tax returns and ‘Know Your Customer’ (‘KYC’) checks, which are a series of procedures businesses must follow to verify their customers’ identities and check their risk profiles.

Sage’s experts identified the real ‘pain points’ faced by accountants and bookkeepers, who often spend too much time chasing clients for documents such as identity documents and paper copies of receipts.

Instead of chasing clients via phone and email, Sage delivers automatic assistance when it comes to the documents accountants and bookkeepers need for tax returns and KYC processes.

Ewans says: ‘One of the biggest pain points we hear from accountants is that they spend a load of time chasing up documents, transactions that haven’t been submitted properly, and chasing up paper copies of receipts.’

Sage created a workflow specifically designed so that accountants and bookkeepers could spend less time chasing customers for information.

Instead, Sage Copilot automatically reaches out to customers to request information, and customers upload receipts (for example) as images.

This means accountants spend less time chasing customers over email.

Trusting AI

For Sage, it was important that businesses should be able to trust the AI software to deliver securely, privately and effectively. Sage’s AI Trust Label is a direct response to this – it is designed to provide customers with clear, accessible information about the way AI functions across Sage products.

Sage Copilot was made in the UK, and was built from the ground up with the needs of this country’s businesses in mind.

Ewans says: ‘We know that trust is really important to our customers. You don’t have to be a huge Silicon Valley company to deliver for customers.

‘We have 40 years of experience working with small businesses, and 400 UK-based engineers and data scientists building Sage Copilot to deliver an AI copilot that focuses on the real needs of small businesses today.’




AI was practically invisible at new phone launch, does health tech in AirPods, Watch make up for it?



Of course, there was the usual promotional video featuring people whose lives have been saved as a result of the watch, which walks a fine line between feeling genuinely heartfelt and feeling like it’s preying on people’s insecurities around sudden health emergencies to sell watches.

But either way, the Apple Watch really does notice when people have a hard fall or are in a car accident, and it can call emergency services. It can also help prevent emergencies by picking up on heart health concerns or guiding outdoor explorers back to a point they had been to previously.

At the event, Apple said its Watch Series 11 could detect high blood pressure, meaning it could alert people to an increased risk of stroke or heart attack, though that feature will require regulatory approval in each region before it works.

Fitness tracking

Aside from watches, Apple unveiled a new version of the AirPods Pro — which it claims are the world’s most popular headphones — that have integrated heart rate sensors. This is something the company introduced earlier this year in a set of fitness-focused Beats headphones, but having them in such a mainstream-friendly product could be a big deal.

The buds have a similar photoplethysmography sensor to what you’d find in a smartwatch for keeping an eye on a user’s blood flow, and they also have accelerometers, a gyroscope and GPS systems inside, so people without an Apple Watch will get the same kind of workout-tracking through the buds. We won’t know if this comes with any particular limitations until we’ve tried it ourselves, though we do know the heart tracking is active only during workouts, and that the buds have to be connected to an iPhone to do it.

The AirPods Pro 3 have upgraded waterproofing to protect them from sweat or rain (IP57 vs IPX4 on the Pro 2); they introduce a new live translation feature; they have improved noise-cancelling; and they also inherit the health-focused capabilities of the previous Pro buds.

Namely, they can function as clinical-grade hearing aids; they can administer hearing tests; and they can protect ears by lowering loud ambient sounds.

And incidentally, all of this health and fitness stuff is absolutely powered by AI. It’s just not the chatty kind; its use is isolated to specific functions, and it’s backed by a lot of research and development.

Software ecosystem

It’s all well and good for me to highlight health features that might really improve somebody’s quality of life, and contrast them with generative AI chatbots that often do anything but. Yet you may rightly wonder why Apple shouldn’t have both. Can’t it match all the AI tools found on Samsung and Google phones while also keeping up its health and wearables innovations?


Of course it totally can, but it doesn’t necessarily have to build those tools. The iPhone is practically the de facto general computing platform of our era, and while that could be better reflected in some of Apple’s App Store policies, the big sensations from the likes of ChatGPT, Perplexity and Gemini will all come to iPhone.

Saying Apple needs to develop its own AI is a little like saying Apple needs to develop its own video games. Why should it need to? It owns the platform that the games are played on. It can capitalise on their popularity by running the platform’s best store, offering subscriptions and services and designing its hardware and operating system in a way that keeps developers and players coming back.

Apple already makes plenty of incredible apps and features for its own devices, and personally I see no reason for it to start integrating generative AI into all of them. By offering developers APIs that let them dig into the machine learning tech on Apple’s chips, and providing the mechanism for users to install apps and customise their devices to use whatever services they want, iPhone will become a natural home for any AI innovations. And when those innovations turn out to be inaccurate or dangerous, Apple will more easily wash its hands of them.

Industrial design


Some might not like to admit it, but what a device looks and feels like is a major factor in how enjoyable it is to use. Apple tried to reassert that this week by invoking Steve Jobs’ arguments about form equalling function, then unveiling an extremely thin phone it said was also its most durable yet, along with a bold but divisive redesign of the iPhone Pro.

We’re not in 2012 any more, when most Android phones were dinky and weird compared to the iPhone; there are now many beautifully made phones from companies all over the world. But this was the first Apple event in a long time where the company seemed to lean hard on its bona fides as a design company, and I think that’s one of its major strengths. If I had to choose between phones purely by watching the iPhone Air introduction video and the Pixel 10 rundown starring Jimmy Fallon, it wouldn’t be particularly close.

It’s not all whimsical advertising and orange anodised aerospace aluminium alloy, though. Apple pushes durability and device longevity further every year, so people get a well-made product they keep for longer, or which is worth more when they decide to re-sell it. And that kind of philosophy is almost diametrically opposed to the logical conclusion of a smartphone run by AI: that physical devices will eventually disappear in favour of cloud-based voice interfaces and content services.

Apple can keep all of its strengths while adding more AI, and I’m sure it will. But training, testing and implementing generative AI in a responsible way is a massive undertaking, and a very different game to making good, reliable devices and software services. Maybe Apple is the one company that doesn’t need to do both, and in fact there’s something to be said for being the platform that has more important things going on.



