
Generative AI Is Making Running an Online Business a Nightmare



Sometime last year, Ian Lamont’s inbox began piling up with inquiries about a job listing. The Boston-based owner of a how-to guide company hadn’t opened any new positions, but when he logged onto LinkedIn, he found one for a “Data Entry Clerk” linked to his business’s name and logo.

Lamont soon realized his brand was being scammed, which he confirmed when he came across the profile of someone purporting to be his company’s “manager.” The account had fewer than a dozen connections and an AI-generated face. He spent the next few days warning visitors to his company’s site about the scam and convincing LinkedIn to take down the fake profile and listing. By then, more than twenty people had reached out to him directly about the job, and he suspects many more had applied.

Generative AI’s potential to bolster business is staggering. According to one 2023 estimate from McKinsey, in the coming years it’s expected to add more value to the global economy annually than the entire GDP of the United Kingdom. At the same time, GenAI’s ability to almost instantaneously produce authentic-seeming content at mass scale has created the equally staggering potential to harm businesses.

Since ChatGPT’s debut in 2022, online businesses have had to navigate a rapidly expanding deepfake economy, where it’s increasingly difficult to discern whether any text, call, or email is real or a scam. In the past year alone, GenAI-enabled scams have quadrupled, according to the scam reporting platform Chainabuse. In a Nationwide insurance survey of small business owners last fall, a quarter reported having faced at least one AI scam in the past year. Microsoft says it now shuts down nearly 1.6 million bot-based signup attempts every hour. Renée DiResta, who researches online adversarial abuse at Georgetown University, tells me she calls the GenAI boom the “industrial revolution for scams” — as it automates frauds, lowers barriers to entry, reduces costs, and increases access to targets.

The consequences of falling for an AI-manipulated scam can be devastating. Last year, a finance clerk at the engineering firm Arup joined a video call with people he believed were his colleagues. It turned out that each of the attendees was a deepfake recreation of a real coworker, including the organization’s chief financial officer. The fraudsters asked the clerk to approve overseas transfers amounting to more than $25 million, and assuming the request came from the CFO, he green-lit the transaction.

Business Insider spoke with professionals in several industries — including recruitment, graphic design, publishing, and healthcare — who are scrambling to keep themselves and their customers safe against AI’s ever-evolving threats. Many feel like they’re playing an endless game of whack-a-mole, and the moles are only multiplying and getting more cunning.


Last year, fraudsters used AI to build a French-language replica of the online Japanese knives store Oishya and sent automated scam offers to the company’s 10,000-plus followers on Instagram. The fake account told the real company’s customers they had won a free knife and needed only to pay a small shipping fee to claim it — and nearly 100 people fell for it. Kamila Hankiewicz, who has run Oishya for nine years, learned about the scam only after several victims contacted her asking how long they needed to wait for the parcel to arrive.

It was a rude awakening for Hankiewicz. She’s since ramped up the company’s cybersecurity and now runs campaigns to teach customers how to spot fake communications. Though many of her customers were upset about getting defrauded, Hankiewicz helped them file reports with their financial institutions for refunds. Rattling as the experience was, “the incident actually strengthened our relationship with many customers who appreciated our proactive approach,” she says.


Rob Duncan, the VP of strategy at the cybersecurity firm Netcraft, isn’t surprised at the surge in personalized phishing attacks against small businesses like Oishya. GenAI tools now allow even a novice lone wolf with little technical know-how to clone a brand’s image and write flawless, convincing scam messages within minutes, he says. With cheap tools, “attackers can more easily spoof employees, fool customers, or impersonate partners across multiple channels,” Duncan says.

Though mainstream AI tools like ChatGPT have precautions in place when you ask them to infringe copyright, there are now plenty of free or inexpensive online services that allow users to replicate a business’s website with simple text prompts. Using a tool called Llama Press, I was able to produce a near-exact clone of Hankiewicz’s store and personalize it with a few words of instruction. (Kody Kendall, Llama Press’s founder, says cloning a store like Oishya’s doesn’t trigger a safety block because there can be legitimate reasons to do so, like when a business owner is trying to migrate their website to a new hosting platform. He adds that Llama Press relies on Anthropic’s and OpenAI’s built-in safety checks to weed out bad-faith requests.)

Text is just one front of the war businesses are fighting against malicious uses of AI. With the latest tools, it takes a solo adversary — again with no technical expertise — as little as an hour to create a convincing fake job candidate to attend a video interview.

Tatiana Becker, a tech recruiter based in New York, tells me deepfake job candidates have become an “epidemic.” Over the past couple of years, she has had to frequently reject scam applicants who use deepfake avatars to cheat on interviews. At this point, she’s able to discern some of their telltale signs of fakery, including glitchy video quality and the candidate’s refusal to switch up any element of their appearance during the call, such as taking off their headphones. Now, at the start of every interview she asks for the candidate’s ID and poses more open-ended questions, like what they like to do in their free time, to suss out whether they’re human. Ironically, she’s made herself more robotic at the outset of interviews to sniff out the robots.

Nicole Yelland, a PR executive, says she found herself on the opposite end of deepfakery earlier this year. A scammer impersonating a startup recruiter approached her over email saying he was looking for a head of comms, with an offer package that included generous pay and benefits. The purported recruiter even shared an exhaustive slide deck with her, decorated with AI-generated visuals, outlining the role’s responsibilities and benefits. Enticed, she scheduled an interview.

During the video meeting, however, the “hiring manager” refused to speak, and instead asked Yelland to type her responses to the written questions in the Microsoft Teams chat section. Her alarm bells really went off once the interviewer started asking her to share a series of private documents, including her driver’s license.

Yelland now runs a background check with tools like Spokeo before engaging with any stranger online. “It’s annoying and takes more time, but engaging with a spammer is more annoying and time-consuming; so this is where we are,” she says.

While videoconferencing platforms like Teams and Zoom are getting better at detecting AI-generated accounts, some experts say the detection itself risks creating a vicious cycle. The data these platforms collect on what’s fake is ultimately used to train more sophisticated GenAI models, which will help them get better at escaping fakery detectors and fuel “an arms race defenders cannot win,” says Jasson Casey, the CEO of Beyond Identity, a cybersecurity firm that specializes in identity theft. Casey and his company believe the focus should instead be on authenticating a person’s identity. Beyond Identity sells tools that can be plugged into Zoom to verify meeting participants through their device’s biometrics and location data. If it detects a discrepancy, the tools label the participant’s video feed as “unverified.” Florian Tramèr, a computer science professor at ETH Zurich, agrees that authenticating identity will likely become more essential to ensure that you’re always talking to a legitimate colleague.

It’s not just fake job candidates that entrepreneurs now have to contend with; it’s also fake versions of themselves. In late 2024, scammers ran ads on Facebook for a video featuring Jonathan Shaw, the deputy director of the Baker Heart and Diabetes Institute in Melbourne. Although the person in it looked and sounded exactly like Dr. Shaw, the voice had been deepfaked and edited to say that metformin — a first-line treatment for type 2 diabetes — is “dangerous,” and patients should instead switch to an unproven dietary supplement. The fake ad was accompanied by a fake written news interview with Shaw.

Several of his clinic’s patients, believing the video was genuine, reached out asking how to get a hold of the supplement. “One of my longstanding patients asked me how come I continued to prescribe metformin to him, when ‘I’ had said on the video that it was a poor drug,” Shaw tells me. Eventually he was able to get Facebook to take down the video.

Then there’s the equally vexing and annoying issue of AI slop — an inundation of low-quality, mass-produced images and text that is flooding the internet and making it ever more difficult for the average person to tell what’s real or fake. In her research, DiResta found instances where social platforms’ recommendation engines have promoted malicious slop — scammers would put up images of items like nonexistent rental properties and appliances, and users frequently fell for them and gave away their payment details.

On Pinterest, AI-generated “inspo” posts have plagued people’s mood boards — so much so that Philadelphia-based Cake Life Shop now often receives orders from customers asking them to recreate what are actually AI-generated cakes. In one shared with Business Insider, the cake resembles a moss-filled rainforest, and features a functional waterfall. Thankfully for cofounder Nima Etemadi, most customers are “receptive to hearing about what is possible with real cake after we burst their AI bubble,” he says.

Similarly, AI-generated books have swarmed Amazon and are now hurting publisher sales.

Pauline Frommer, the president of the travel guide publisher Frommer Media, says that AI-generated guidebooks have managed to reach the top of lists with the help of fake reviews. An AI publisher buys a few Prime memberships, sets the guidebook’s ebook price to zero, and then leaves seemingly “verified reviews” by downloading its copies for free. These practices, she says, “will make it virtually impossible for a new, legitimate brand of guidebook to enter the business right now.” Ian Lamont says he received an AI-generated guidebook as a gift last year: a text-only guide to Taiwan, with no pictures or maps.


While the FTC now considers it illegal to publish fake, AI-generated product reviews, official policies haven’t yet caught up with AI-generated content itself. Platforms like Pinterest and Google have started to watermark and label AI-generated posts, but since the labeling isn’t error-free yet, some worry these measures may do more harm than good. DiResta fears that a potential unintended consequence of ubiquitous AI labels would be people experiencing “label fatigue,” where they blindly assume that unlabeled content is therefore always “real.” “It’s a potentially dangerous assumption if a sophisticated manipulator, like a state actor’s intelligence service, manages to get disinformation content past a labeler,” she says.

For now, small business owners should stay vigilant, says Robin Pugh, the executive director of Intelligence for Good, a non-profit that helps victims of internet-enabled crimes. They should always validate they’re dealing with an actual human and that the money they’re sending is actually going where they intend it to go.

Etemadi of Cake Life Shop recognizes that for as much as GenAI can help his business become more efficient, scam artists will ultimately use the same tools to become just as efficient. “Doing business online gets more necessary and high risk every year,” he says. “AI is just part of that.”


Shubham Agarwal is a freelance technology journalist from Ahmedabad, India, whose work has appeared in Wired, The Verge, Fast Company, and more.

Business Insider’s Discourse stories provide perspectives on the day’s most pressing issues, informed by analysis, reporting, and expertise.






‘The Face of Gemini’: How Google Found Its AI Hype Guy



He’s not an executive, a company spokesperson, or a world-class researcher. But he might be Google’s secret weapon in winning the AI race.

If you’re an AI developer, you’ve likely heard of Logan Kilpatrick. As Google’s head of developer relations, Kilpatrick, 27, runs AI Studio, the company’s AI developer software program.

He has also become Google’s delegate for speaking to the AI community and — intentionally or not — a one-man marketing machine for the company’s AI products. He’s a prolific poster on X, where he’ll sometimes hype Google’s latest Gemini releases or tease something new on the horizon.

Above all, he is one of the people tasked with translating Google’s AI breakthroughs to the global developer community. It’s a crucial job at a time when the search giant needs to not just convince developers to use its products, but capture a new generation of builders entering the fray as AI makes it easier for anyone to make software.

“If you want AI to have the level of impact on humanity that I think it could have, you need to be able to provide a platform for developers in order to go and do this stuff,” he told Business Insider in an interview. “The reality is there’s a thousand and one things that Google is never going to build, and doesn’t make sense for us to build, that developers want to build.”

Company insiders say Google has recognized Kilpatrick’s strength and given him more responsibilities and visibility. He could be seen onstage at this year’s Google I/O conference and even had a fireside chat with Google cofounder Sergey Brin.

“People really crave legitimacy, authenticity, and competency, and Logan combines all three,” Asara Near, a startup founder who has occasionally contacted Kilpatrick with development questions, told BI.

LoganGPT

In 2022, OpenAI was preparing to launch ChatGPT and fire the starting gun on one of history’s most profound technological shifts. Kilpatrick, who has a technical background and worked at Apple and NASA, saw an online job ad for OpenAI and was soon facing a tricky decision: to work at what was then Sam Altman’s little-known startup, or take a gig at IBM.

He decided that OpenAI was worth a shot — and within a few months, found himself at the center of the biggest tech launch since the debut of the iPhone in 2007.

“The OpenAI experience was a startup experience for about six months and then it became basically a hyperscaler,” he told BI. It was chaotic, but it helped Kilpatrick learn how to build an ecosystem and cut his teeth as the developers’ go-to guy. There, developers nicknamed him “LoganGPT.”


Kilpatrick joined OpenAI months before the public launch of ChatGPT. (Brett A. Sims)



When he left OpenAI in 2024 for Google, developers and peers made clear it was a huge loss for the ChatGPT maker, and a big win for Google in the AI talent transfer window. AI Studio was then still a project inside Google’s Labs division, and Kilpatrick and his team were tasked with migrating it into a fully-fledged product inside Google’s Cloud unit. It was again like going from zero to one: AI Studio was pre-revenue with no customers, but with a long tail of developers ready to jump on board.

“It has felt oddly almost like the same exact experience I’ve lived through at two different companies and two different cultures,” he told BI.

In May this year, Kilpatrick was promoted, and his team running AI Studio was moved from the Cloud unit to Google DeepMind, bringing them closer to the researchers working on the underlying models and the employees working on its Gemini chatbot.

“He’s kind of all over the place, and that’s his superpower,” said one senior employee who requested anonymity because they were not permitted to speak to the media. They said that Google has put Kilpatrick in charge of more products as leaders have recognized his ability to engage so effectively with the developer community. “Logan is 90% of Google’s marketing,” they said.

Helping Google win

On paper, Google is an AI winner. The reality is more complicated.

Its latest Gemini 2.0 Pro model ranks top of multiple leaderboards across a range of testing areas, but this hasn’t always been reflected in the number of users. Google’s CEO, Sundar Pichai, said in May that the company’s Gemini app has more than 400 million monthly active users. That’s well behind the 500 million weekly active users for ChatGPT, according to figures shared by Altman in April.

“DeepMind doesn’t get nearly as much credit and attention as they deserve, and that’s because comms is vastly underperforming capabilities,” communications executive Lulu Meservey posted on X in May. Responding to another person, she wrote: “Logan is like 90% of their comms.”

Some of the struggle, insiders say, is due to Google owning multiple products that aren’t always clearly distinct. Developers can build using Vertex in Google Cloud or AI Studio. Meanwhile Google has a consumer-facing app simply called Gemini. The same models aren’t necessarily always available across all three places at the same time, which can get confusing for users and developers.

There’s also the problem of being a quarter-century-old tech behemoth with more nimble startups nipping at its heels. “OpenAI can put all their messaging arrows behind one thing, while Google has messaging arrows behind 10,000 things,” former Google product manager Rajat Paharia told BI.


Logan Kilpatrick speaking at Google I/O. (Google/Ryan Trostle)



Kilpatrick recognizes that Google has work to do. “I think Google on a net basis is doing so much in the world right now, and AI is around everything that we’re doing, and I think a lot of narrative doesn’t capture innovation is happening,” he said.

A big part of Kilpatrick’s job is trying to cement that narrative among the global developer base. At OpenAI, Sam Altman’s Jobsian showmanship has made him a highly effective salesman both for his company’s products and his vision for the future of this technology. Or, as Paharia described Altman to BI, a “showman with rizz.”

Google may have found its equivalent in Kilpatrick. He told BI that he often posts on X because it has become something of a town square for AI developers and enthusiasts, all champing at the bit for the latest crumb of news. It’s a community filled with hype and AI “vagueposting,” and steeped deeply in lore (what did Ilya see?).

On a day when OpenAI’s latest release is sucking up everyone’s attention, Kilpatrick may log on and post a single word — “Gemini” — just to rev the hype engine a little.

Kilpatrick often has “a thousand” emails from developers that need responding to, he told BI. “I spend probably as much time as I physically can responding to stuff these days,” he said. And that’s between the numerous product meetings (he had 22 meetings scheduled on the day we spoke in early July, 23 the day before). He once posted on X: “I am online 7 days a week, ~8+ hours a day. If you need something as you build with Gemini, please ping me!”

Developers say they like that Kilpatrick takes the time to engage and listen to their feedback. “The few times I’ve emailed him to get help with something, they near-instantly responded and helped resolve the issue,” said Near, the startup founder. “This is the opposite of my experience through normal support channels.”

Andrew Curran, an AI commentator who frequently posts to X, wrote last month that Kilpatrick had been “an incredible hire” for Google. “To a lot of people he is now the face of Gemini, I bet most people don’t even remember his OAI days,” he wrote.

Kilpatrick told BI that because he is a developer himself, he finds it easy to understand the core target user. He said this has helped in building out Google’s AI Studio, and that engaging with developers comes naturally. “It’s just the obvious thing to do if you want to build a product for developers, is like, go talk to your users,” he said.

But the definition of developer is changing with approaches like vibe coding, which lets non-technical people create software by describing what they’d like to an AI tool.

“What it means to be a developer right now looks a little different than it did two years ago or three years ago, and I think it’s going to look fundamentally different in 10 years,” said Kilpatrick. He believes the developer group will “massively expand” in the next five years. His job at Google is to make the next generation believe Google is where they should be developing, but that job is also evolving in this new era of artificial intelligence.

“Our mandate is actually AI builders, already encompassing this group of people who maybe don’t identify as developers and don’t write code, but they build software using AI, and I think that’s going to accelerate in the next few years,” he said.







Donald Trump threatens extra 10% tariff for ‘anti-American’ Brics policies, as trade war deadline approaches – business live



Donald Trump threatens extra 10% tariff for “anti-American” Brics policies

Good morning, and welcome to our rolling coverage of business, the financial markets, and the world economy.

Donald Trump has targeted the BRICS group of developing nations in the latest salvo of his ongoing trade war, as the deadline to agree deals before the president’s 90-day tariff pause looms.

Trump has warned overnight that he will impose a new 10% tariff on any country that aligns itself with the BRICS group, claiming they are “anti-American”.

Writing on his Truth Social site, Trump declared:

Any Country aligning themselves with the Anti-American policies of BRICS, will be charged an ADDITIONAL 10% Tariff. There will be no exceptions to this policy. Thank you for your attention to this matter!

Trump’s attack comes after the Brics group — which was originally made up of Brazil, Russia, India, China and South Africa but now includes other nations — met in Brazil at the weekend.

Brazil’s president, Luiz Inacio Lula da Silva, told the meeting in Rio de Janeiro that BRICS was the heir to the “Non-Aligned Movement” – the bloc of countries who declined to ally with either side in the Cold War.

Lula criticised the move (driven by Trump) towards increased spending on the military rather than on international development, pointing out: “It is always easier to invest in war than in peace”.

He told leaders they were witnessing “the unparalleled collapse of multilateralism”, before warning:

“If international governance does not reflect the new multipolar reality of the 21st century, it is up to BRICS to help bring it up to date.”

The BRICS group also condemned US and Israeli attacks on Iran and urged “just and lasting” solutions to conflicts across the Middle East.

All of which appears to have stirred Trump into another tariff threat.

There’s also confusion this morning about the status of the original ‘liberation day’ tariffs which Trump announced at the start of April, and then paused for 90 days after the markets slumped.

The president told reporters on Sunday that his administration plans to start sending letters later today to US trade partners dictating new tariffs.

But there’s confusion about when these levies would kick in. Trump implied they would start on Wednesday, saying “I think we’ll have most countries done by July 9, yeah. Either a letter or a deal.”

But commerce secretary Howard Lutnick then weighed in to explain:

“But they go into effect on August 1. Tariffs go into effect August 1, but the president is setting the rates and the deals right now.”

Trump has subsequently posted that “TARIFF Letters, and/or Deals” will be delivered from 12:00pm Eastern today (that’s 5pm BST).


Tesla shares drop after Musk launches America Party

Over in Frankfurt, shares in Tesla are sliding as the row between Elon Musk and Donald Trump escalates.

Tesla shares have fallen 3% in early trading, an indication that they could fall on Wall Street when trading resumes, as investors react to Musk’s plan to launch a new US political party called the America Party.

Trump called the idea “ridiculous”, and claimed Musk had gone completely ‘off the rails’.

Veteran tech analyst Dan Ives of Wedbush said Musk was Tesla’s “biggest asset” and his decision to dive deeper into politics could hurt the car maker’s share price.

Ives wrote:

“Tesla needs Musk as CEO and its biggest asset and not heading down the political route yet again…while at the same time getting on Trump’s bad side.

“It would also not shock us if the Tesla board gets involved at some point given the political nature of this endeavour depending on how far Musk takes it.”






AI is the ‘best business partner’ says youngest self-made female billionaire



Co-founder of Scale AI and founder of Passes, Lucy Guo pivoted from the tech-bro world of artificial intelligence to the ‘Hollywood’ creator space. But AI has its place in content creation, says Lucy.

Lucy Guo left Scale AI back in 2018 for hazy reasons, citing “differences in product vision and road map”

Lucy Guo, founder and CEO of Passes, wants to turn content creators into millionaires. The 30-year-old recently became a billionaire in her own right, though it’s “all on paper,” as she told Forbes right before the outlet crowned her the youngest self-made female billionaire in the world.

Passes is Lucy’s big bet in the creator economy. Speaking to The Mirror, she describes seeing “untapped potential” in the creator monetisation space back in 2020 after falling in with some content creators in Miami.

“I just saw how they could sell anything with an Instagram post or story,” recalls Lucy. “I also saw how inconsistent their income could be.”

Her solution to the instability was for creators to monetise directly off their fan base, which would not only give creators direct, consistent income but the means to invest in other interests or business ventures. Ventures that could be passion projects or, as Lucy envisions, potentially large-scale product-based businesses.

Passes launched in the UK in June 2025.


Given Lucy’s significant background in AI, Passes’ approach is decidedly tech-forward compared to other fan subscriber platforms. While the technical approach separates Passes from its competitors, there’s been a lot of scepticism from creators about AI – viewed as both a potential competitor and thief. But Lucy is adamant AI’s utility will become clear.

“When creators realise the benefits of AI, they’re going to change their perception and they’re going to be very excited about it. But at the moment, there’s a lot of fear. And fear prevents you from looking at all the upsides.”

She continues: “The whole world is like ‘AI is going to take over’ and I’m just like ‘no, it’s going to be our co-pilot. It’s gonna be our best business partner’.”

AI will help content creators post quickly and often – which is key to long-term success according to Lucy. “We’ve actually noticed our creators that make the most money, they’re actually smaller. They have 200, 300,000 followers,” says Lucy. “My hypothesis is that it’s because they just churn out more content because it doesn’t need to be perfect”.

The question of what matters to fans boils down to speed and community, according to Lucy. “I would say in terms of what everyone wants it’s very, very fast customer service – whether the customer service they want is from the creator or from [Passes].”

Lucy names Twitch as a prime example of a platform where fan communities are valued and thrive.

Lucy also believes that women tend to lean more towards content creation and, simply put: “they’re better at it”.

“I think being a content creator requires a lot of empathy and being able to build relationships especially when they’re not in person. You’re building relationships with your fans digitally. And the traits needed to do that I think women are better at,” she explains.

By Lucy’s estimation, AI will make building those relationships easier and faster because it will free up creators’ time to engage fans and think creatively. But she will need to work on building meaningful relationships with creators to test her bet.

After Passes acquired the competitor site Fanhouse in 2023, Lucy faced backlash from creators who felt blindsided by the acquisition. Creators found Passes’ lack of content guidelines and AI push alarming.

As reported by TechCrunch at the time, some creators grew worried about a tweet of Guo’s in which she stated that Passes was working on technology that could optionally make AI likenesses of creators. Concerns escalated after Twitch streamer Riley Rose pointed out that Passes does not have content guidelines on its website.

“It’s just that [Fanhouse’s] content guidelines are very, very specific,” Guo clarified to TechCrunch. She said that because Fanhouse used Stripe as its payment processor, the company had to be very clear with users about what they can and cannot post. “We do have content guidelines, it’s just more lax,” she explained.

Now, convincing creators to embrace AI and bring their fanbase – many of whom aren’t accustomed to paying directly for content – over to a new platform promises to be a tough sell, even if Lucy is promising significant returns. And just as with fans, it isn’t all about the money for creators.



