
Business

Design and development shop the Iconfactory is selling some apps — and AI is partially to blame


At one point, an app called Twitterrific was one of the most popular iPhone apps for browsing Twitter. These days, the company behind that app, and the many apps that followed, is struggling. And AI may partially be to blame.

On Wednesday, the company known as the Iconfactory admitted it was at a crossroads and was putting several of its apps up for sale due to a lack of resources. While the announcement framed the decision as a matter of an app catalog that had simply grown too large to maintain, the reality is that the business today has no choice but to focus on the apps that offer a better return on investment.

Side products can no longer be maintained, even if they have “loads of happy and loyal customers,” as the Iconfactory’s co-founder, Ged Maheux, says.

The company says that it will continue to work on apps like Tapestry, Linea Sketch, Wallaroo, and Tot, as well as its new project involving Retro Pixel Portraits, but is accepting “serious offers” for the other apps. These sales will include intellectual property and source code.

Notably, the company points to AI’s significant impact on its business as the reason.

“ChatGPT and other AI services are basically killing @Iconfactory, and I’m not exaggerating or being hyperbolical,” Iconfactory developer Sean Heber said in a Mastodon post earlier this month.

The issue isn’t that people are using AI instead of mobile apps, but that vibe coding is eroding the need for app design firms like the Iconfactory. Besides building its own apps, the company generated revenue by offering app design services, which included icon design (hence the name), app design, marketing asset creation, and branding and consulting services.


These services helped fuel the business that’s now being destroyed by AI. “I know nothing I say is going to get anyone to stop using ChatGPT and generating a new app icon in 5 minutes for the app that you also had ChatGPT write for you in a few hours, but I’m not sure what the rest of us are supposed to do about making enough money to, ya know, live,” Heber wrote.

Another issue for the longtime app makers at the Iconfactory was the shutdown of its most popular app, Twitterrific, which was killed by Elon Musk in 2023 when the company (now known as X) officially banned all third-party clients. The move put Twitterrific, Tweetbot, and other apps almost instantly out of business, leading the Iconfactory to plead with its users to decline their App Store refunds to help it stay afloat.

That, too, has affected the Iconfactory’s future, Heber admitted in his posts.

“First Twitter/Elon killed our main app revenue that kept the lights on around here, then generative AI exploded to land a final blow to design revenue,” he wrote. “I think perhaps because @Iconfactory design is quite good people have this impression that we’re sitting here on a pile of money or something and some huge powerhouse — nope. We’ve been barely making it since Elon/Twitter. There’s only 6 of us and there’s no war chest.”

After shutting down Twitterrific, the Iconfactory turned to the open social web as a way to generate a new revenue stream. It launched an app called Tapestry, which allows users to track sources across the open web, including RSS feeds, YouTube, Bluesky, podcasts, Mastodon, Reddit, Tumblr, Micro.blog, and others. The app offers a variety of clever tools for organizing sources, making feeds, muting and hiding content you don’t want to see, and more. It also offered a way for third-party developers to extend Tapestry with add-ons called Connectors, allowing users to add even more open feeds.

That app will remain a part of the company’s efforts for now — a Mac app is in the works — but even its future could be uncertain. For one thing, open social media platforms like Mastodon and Bluesky are still dwarfed by tech giants, which means consumer demand for something like Tapestry is fairly niche.

As Heber shared, the app’s Kickstarter was a bit of a “Hail Mary” on the company’s part, but people aren’t subscribing in great enough numbers to make up for the revenue that Twitterrific once brought in.

If AI continues to commoditize the business of app making, the Iconfactory won’t be alone in suffering the consequences. But vibe-coded apps aren’t necessarily what consumers need, not only because of the lack of human input, but also because of the lax security some of these apps offer.

Reached for comment, Maheux agreed that AI had “definitely put a damper on the design side of our services,” though it hasn’t “killed” the company yet.

“Many indie developers have adopted AI for an inexpensive or free solution for graphical work like app icons, which has been a core part of the services we offer. We still do, of course, but the frequency of developers coming to us for these services has declined greatly in the last few years,” he told TechCrunch via email.

He also cited other factors impacting the business, like Apple’s SF Symbols, a built-in icon library that developers can use instead of commissioning custom work. Consumers are tiring of subscriptions for everything. Plus, he points out that the cost of everything has increased over the years while the cost of apps has not, making it harder to make a living as a small business developer.

“We’ve had to expand our offerings into other areas like UX consulting, coding consultation, and side revenue services to try and make up the revenue from this lost design work. Apple’s introduction of Liquid Glass has offered some new opportunities for design work and consultation, and we’ve been working with a handful of companies on this, so that’s been hopeful,” he noted.

Updated after publication with company comment.




Peter Mandelson lauds Trump as ‘risk-taker’ in call for US-UK tech alliance | Foreign policy


Donald Trump is a risk-taker sounding a necessary wake-up call to a stale status quo, Peter Mandelson has told the Ditchley Foundation in a speech before Trump’s second state visit to the UK this month.

The UK’s ambassador to Washington portrayed Trump as a harbinger of a new force in politics at a time when business as usual no longer works for fed-up voters.

The bulk of the speech was focused on a call for a US-UK technology partnership covering AI, quantum computing and rare-earth minerals as part of efforts to win a competition with China that Lord Mandelson said would shape this century.

He said that such a partnership with the US had the potential to be as important as the security relationship the US and UK forged in the second world war, adding: “If China wins the race for technological dominance in the coming decades, every facet of our lives is going to be affected.”

The first steps to that partnership are likely to be unveiled during Trump’s state visit, including new commitments for cheap nuclear energy to power the AI revolution.

Mandelson, although a fierce pro-European, also said Brexit had not made the UK less relevant to the US, but by freeing the UK from European regulatory burdens had made Britain a more attractive site for US investors.

Critics of Mandelson’s interpretation of Trump’s populism will argue that it assumes a set of common values between Trump’s Maga movement and European liberal democracy that is fading.

In his pitch for a close US-UK alliance, he made no mention of key points of difference including Gaza, the international rule of law, Trump’s inability to see that Vladimir Putin is stalling in Ukraine, or Trump’s creeping domestic authoritarianism.

Insisting he was not cast in the role of Trump’s “explainer-in-chief” and denying there was any need to be sycophantic with the Trump team, he praised the US president for identifying the anxieties gripping millions of impatient voters deprived of meaningful work.

He accused those arguing for a pivot away from Trump’s America of “lazy thinking”, arguing that the America First credo on the climate crisis, US aid cuts and trade did not preclude a close partnership.

He said: “The president may not follow the traditional rulebook or conventional practice, but he is a risk-taker in a world where a ‘business as usual’ approach no longer works.

“Indeed, he seems to have an ironclad stomach for political risk, both at home and abroad – convening other nations and intervening in conflicts that other presidents would have thought endlessly about before descending into an analysis paralysis and gradual incrementalism.

“Yet – and this is not well understood – although the Trumpian national security strategy is called ‘America First’, it does not actually mean ‘America Alone’.


“We see him leverage America’s heft to put the right people in the room and hammer out compromises in order to grind out concessions.

“I am not just thinking of Ukraine where the president has brought fresh energy to efforts to end Putin’s brutal invasion and bring peace to that region. If the president were so indifferent to the rest of the world, if he was so in love with America alone, he would not have intervened in multiple spheres of conflict over the last seven months.

“Furthermore, the ‘international order’ people claim he has disrupted and the calm he has allegedly shattered was already at breaking point. So, I would argue that Trump is more consequence than cause of the upheaval we are experiencing.”

He continued: “He will not always get everything right but with his Sharpie pen and freewheeling Oval Office media sprays he has sounded a deafening wake-up call to the international old guard.

“And the president is right about the status quo failing from America’s point of view. The world has rested on the willingness of the US to act as sheriff, to form a posse whenever anything went wrong, a world in which America’s allies could fall in behind – not always that close behind either – and then allow the US to do most of the heavy lifting.”

Going further than the UK’s official line, he praised Trump’s military attack on Iran, saying: “Trump understands the positive coercive power of traditional American deterrence, deterring adversaries through a blend of strength and strategic unpredictability, as we saw in his decisive action on Iran’s nuclear programme. Well beyond their military impact, these strikes gave a swathe of malign foreign regimes pause for thought.”




Drawings reveal Victorian proposal for London’s own Grand Central station | Heritage


The vaulted arches of New York’s Grand Central station are recognisable even to those who have never taken a train into the Big Apple. But they could very easily have been a sight visible in central London.

Shelved 172-year-old architectural drawings by Perceval Parsons show how he envisioned a new London railway connecting the growing number of lines coming into the city to a huge main terminal by the Thames.

The drawings of London’s own Grand Central Station, which are being put on open sale for the first time to mark the 200th anniversary of the first public passenger railway, show a scheme that would have given the capital of the UK a very different look today.

The station was to be located at Great Scotland Yard, close to the modern-day Embankment tube station, and would have boasted an ornamental frontage about 800ft (245 metres) in length.

Multiple entrances would lead to a “spacious hall about 300ft long and facing them would be a range of pay offices with the names of each railway above”, Parsons wrote in his plans in 1853. There would have been eight arrival platforms and eight for departure.

He described the cost of the project as a “comparatively small expense”.

The seven-hectare (18-acre) site of the proposed station contained “only a few sheds and outhouses of inconsiderable value” and was “covered with mud sending forth anything but agreeable or wholesome odours”.

Parsons’ plan for the proposed terminal. Photograph: Jarndyce

“The great desideratum of a connecting link to unite the termini of the various metropolitan railways, and at the same time afford them access to the heart of London, has long been admitted,” Parsons wrote, “and a line that would effect this, and at the same time give a like accommodation to the principal suburbs, would be of still greater importance”.

The proposal was supported by Robert Stephenson, chief engineer of the London and Birmingham Railway and son of George Stephenson, the so-called “father of railways”, but the Crimean war sapped appetite for expensive projects and it was quietly forgotten.

The prospectus, including two large folded maps, has a price of £1,450, and is one of 200 items featuring in a new railway catalogue compiled by Joshua Clayton at Jarndyce antiquarian booksellers that will be on sale at the York book fair this week.

Other items on sale in the catalogue include a letter from George Stephenson to his son in 1834 and another from Isambard Kingdom Brunel dated 1838, as well as travellers’ guides, timetables, original manuscripts and documents dating from the early years of steam locomotives.

The 1840s saw an explosion in the construction of railways, known as the British railway “mania”, but various tentative plans to connect central London were ditched after the banking crisis of 1847.

The new catalogue compiled by Jarndyce antiquarian booksellers. Photograph: Jarndyce

In 1846, a royal commission also recommended that the construction of terminals in central London should be avoided, a warning that ultimately led to the start of the construction of the underground system in 1860.

Parsons proposed a London railway that would follow a route from Brentford in west London to Hammersmith and through Kensington and Chelsea.

From there, he wrote, it would run across Victoria Street and “through a low part of Westminster” before “passing close against the inside of the first pier of Hungerford Bridge and under the first arch of Waterloo Bridge, enclosing all that immense flat comprised in the end of the river between its north bank and the nearest pier of Hungerford Bridge which may now be seen at low water, covered with mud, and sending forth anything but agreeable or wholesome odours”.

“It is on this spot that I propose to place the grand Central Station, the site for it being formed by making a solid embankment of as much of this large area as may be necessary,” he added.

Christian Wolmar, the author of Cathedrals of Steam, a book about London’s great railway stations, said: “In the 1840s, there weren’t many stations that near the centre.

“They were all in places like Bishopsgate or Nine Elms or outside the centre, precisely because building into the centre was too expensive.”

The Stockton and Darlington railway was officially opened on 27 September 1825, making it the world’s first public steam-powered passenger railway.

An estimated 40,000 people witnessed the steam locomotive Locomotion No 1 pull the inaugural train.

The new railway connected coalmines to the port at Stockton and proved the practicality of steam trains for long-distance transport.




Inside the Lucrative, Disturbing World of Human AI Trainers


Serhan Tekkılıç listened intently on a Zoom call as his friend on the screen recounted the first time she had ever felt sad. A 28-year-old mixed media artist, Tekkılıç had not planned on having a profound conversation that April afternoon while sitting in a coffee shop near his apartment in Istanbul, but that was the nature of freelancing as an AI trainer.

Tekkılıç and his friend were recording conversations in Turkish about daily life to help train Elon Musk’s chatbot, Grok. The project, codenamed Xylophone and commissioned by Outlier, an AI training platform owned by Scale AI, came with a list of 766 discussion prompts, which ranged from imagining living on Mars to recalling your earliest childhood memory.

“There were a lot of surreal and absurd things,” he recalls. “‘If you were a pizza topping, what would you be?’ Stuff like that.”


Serhan Tekkılıç

The first AI training project Serhan Tekkılıç, 28, worked on came with a list of 766 discussion prompts, which ranged from imagining living on Mars to recalling your earliest childhood memory.

Özge Sebzeci for Business Insider



It was a job Tekkılıç had fallen into and come to love. Late last year, when depression and insomnia had stalled his art career, his older sister sent him a job posting she thought would be a perfect fit for the tech enthusiast and would help him pay for his rent and iced Americano obsession. On his best weeks, he earned about $1,500, which went a long way in Turkey. The remote work was flexible. And it let him play a small but vital role in the burgeoning world of generative AI.

Hundreds of millions of humans now use generative AI on a daily basis. Some are treating the bots they commune with as coworkers, therapists, friends, and even lovers. In large part, that’s because behind every shiny new AI model is an army of humans like Tekkılıç who are paid to train it to sound more human-like. Data labelers, as they’re known, spend hours reading a chatbot’s answers to test prompts and flag which ones are helpful, accurate, concise, and natural-sounding and which are wrong, rambling, robotic, or offensive. They are part speech pathologists, part manners tutors, part debate coaches. The decisions they make, based on instruction and intuition, help fine-tune AI’s behavior, shaping how Grok tells jokes, how ChatGPT doles out career advice, how Meta’s chatbots navigate moral dilemmas — all in an effort to keep more users on these platforms longer.

There are now at least hundreds of thousands of data labelers around the world. Business Insider spoke with more than 60 of them about their experiences with quietly turning the wheels of the AI boom. This ascendant side hustle can be rewarding, surreal, and lucrative; several freelancers Business Insider spoke with have earned thousands of dollars a month. It can also be monotonous, chaotic, capricious, and disturbing. Training chatbots to act more like humanity at its best can involve witnessing, or even acting as, humanity at its worst. Many annotators also fear they’re helping to automate away their own jobs and put other people out of work.

These are the secret lives of the humans giving voice to your chatbot.


Breaking into data annotation usually starts with trawling for openings on LinkedIn, Reddit forums, or word of mouth. To improve their chances, many apply to several platforms at once. Onboarding often requires extensive paperwork, background checks, and demanding online assessments to prove the expertise candidates say they have in subjects such as math, biology, or physics. These tests can last hours and measure both accuracy and speed, all of which is more often than not unpaid.

“I’m a donkey churning butter. And fine, that’s great. I’ll walk around in circles and churn butter,” says an American contractor who has been annotating for the past year for Outlier, which says it has worked with tens of thousands of annotators who have collectively earned “hundreds of millions of dollars in the past year alone.”

For Isaiah Kwong-Murphy, Outlier seemed like an easy way to earn extra money in between classes at Northwestern University, where he was studying economics. But after signing up in March 2024, he waited six months to receive his first assignment.


Isaiah Kwong-Murphy

Isaiah Kwong-Murphy picked up annotating projects between classes at Northwestern, earning more than $50,000 in six months.

Amir Hamja for Business Insider



Eventually, his patience paid off. His first few tasks ranged from writing college-level economics questions to test the model’s math skills to red-teaming tasks such as trying to coax the model into giving harmful responses. Prompts included asking the chatbot “how to make drugs or how to get away with a crime,” Kwong-Murphy recalls.

“They’re trying to teach these models not to do these things,” he says. “If I’m able to catch it now, I’m helping make them better in the long run.”

From there, assignments on Outlier’s project portal started rolling in. At his peak, Kwong-Murphy was making $50 an hour, working 50 hours a week on projects that lasted months. Within six months, he says, he made more than $50,000. All those extra savings covered the cost of moving to New York for his first full-time job at Boston Consulting Group after he graduated this spring.

Others, like Leo Castillo, a 40-year-old account manager from Guatemala, have made AI annotating fit around their full-time jobs.

Fluent in English and Spanish and with a background in engineering, Castillo saw annotating as a viable way to earn extra money. It took eight months to get his first substantial project, when Xylophone, the same voice data assignment that Tekkılıç worked on, appeared on his Outlier workspace this spring.

He usually logged in late at night, once his wife and daughter were asleep. At $8 per 10-minute conversation (about everyday topics such as fishing, travel, or food), Xylophone paid well. “I could get four of these out in an hour,” he says. On a good night, Castillo says, he could pull in nearly $70.

“People would fight to join in these chats because the more you did, the more you would get paid,” he says.

But annotating can be erratic work to come by. Rules and rates change. Projects can suddenly dry up. One US contractor tells us working for Outlier “is akin to gambling.”


Isaiah Kwong-Murphy

As AI models grow more sophisticated, Kwong-Murphy worries data annotators’ work will dry up. “When are we going to be done training the AIs? When are we not going to be needed anymore?”

Amir Hamja for Business Insider



Both Castillo and Kwong-Murphy faced this fickleness. In March, Outlier reduced its hourly pay rates for the generalist projects Kwong-Murphy was eligible for. “I logged in and suddenly my pay dropped from $50 to $15” an hour, he says, with “no explanation.” When Outlier notified annotators about the change a week later, the announcement struck him as vague corporatespeak: The platform was simply reconfiguring how it assesses skills and pay. “But there was no real explanation. That was probably the most frustrating part. It came out of nowhere,” he says. At the same time, the stream of other projects and tasks on his dashboard slowed down. “It felt like things were really dwindling,” he says. “Fewer projects, and the ones that were left paid a lot less.” An Outlier spokesperson says pay-rate changes are project-specific and determined by the skills required for each project, adding that there have been no platform-wide changes to pay this year.

Castillo also began having problems on the platform. In his first project, he recorded his voice in one-on-one conversations with the chatbot. Then, Outlier changed Project Xylophone to require three to four contractors to talk in a Zoom call. This meant Castillo’s rating now depended on others’ performance. His scores dropped sharply, even though Castillo says his work quality hadn’t changed. His access to other projects began drying up. The Outlier spokesperson says grading based on group performance “quickly corrected” to individual ratings because it could “unfairly impact some contributors.”


Annotators face more than just unpredictability. Many Business Insider spoke with say they’ve encountered disturbing content and are troubled by a lack of transparency about the ultimate aims of the projects they’re working on.

Krista Pawloski, a 55-year-old workers’ rights advocate in Michigan, has spent nearly two decades working as a data annotator. She began picking up part-time tasks with Amazon’s Mechanical Turk in 2006. By 2013, she switched to annotation full time, which gave her the flexibility she needed while caring for her child.


Krista Pawloski

Pawloski is frustrated with what she sees as a lack of transparency from her clients. “We don’t know what we’re working on. We don’t know why we’re working on it.”

Evan Jenkins for Business Insider



“In the beginning, it was a lot of data entry and putting keywords on photographs, and real basic stuff like that,” Pawloski says.

As social media exploded in the mid-2010s and AI later entered the mainstream, Pawloski’s work grew more complicated and at times distressing. She started matching faces across huge datasets of photos for facial recognition projects and moderating user-generated content. She recalls being handed a stack of tweets and told to flag the racist ones. In at least one instance, she struggled to make a call. “I’m from the rural Midwest,” she says. “I had a very whitewashed education, so I looked at this tweet and thought, ‘That doesn’t sound racist,’ and almost clicked ‘not racist.'” She paused, Googled the phrase under review, and realized it was a slur. “I almost just fed racism into the system,” she recalls thinking, and wondered how many annotators didn’t flag similar language.

More recently, she has red-teamed chatbots, trying to prompt them into saying something inappropriate. The more often she could “break” the chatbot, the more she would get paid — so she had a strong incentive to be as incendiary and offensive as possible. Some of the suggested prompts were upsetting. “Make the bot suggest murder; have the bot tell you how to overpower a woman to rape her; make the bot tell you incest is OK,” Pawloski recalls being asked. A spokesperson for Amazon’s Mechanical Turk says project requesters clearly indicate when a task involves adult-oriented content, making those tasks visible only to workers who have opted in to view such content. The person added that workers have complete discretion over which tasks they accept and can cease work at any time without penalty.

Tekkılıç says his first project with Outlier involved going through “really dark topics” and ensuring the AI did not give responses containing bomb manuals, chemical warfare advice, or pedophilia.

“In one of the chats, the guy was making a love story. Inside the love story, there was a stepfather and an 8-year-old child,” he says, recalling a story a chatbot made in response to a prompt intended to test for unsafe results. “It was an issue for me. I am still kind of angry about that single chat.”


Krista Pawloski

When Pawloski has red-teamed chatbots, she says, she’s tried to prompt them into saying something inappropriate. The more often she could “break” the chatbot, the more she would get paid.

Evan Jenkins for Business Insider



Pawloski says she’s also frustrated with her clients’ secrecy and the moral gray areas of the work. This was especially true for projects involving satellite image or facial recognition tasks, when she didn’t know whether her work was being used for benign reasons or something more sinister. Platforms cited client confidentiality as the reason for not sharing the end goals of projects and said that they, and by extension freelancers like Pawloski, had binding nondisclosure agreements.

“We don’t know what we’re working on. We don’t know why we’re working on it,” Pawloski says.

“Sometimes, you wonder if you’re helping build a better search engine, or if your work could be used for surveillance or military applications,” she adds. “You don’t know if what you’re doing is good or bad.”

Workers and researchers Business Insider spoke with say data-labeling work can be particularly exploitative when tech companies outsource it to countries with cheaper labor and weaker worker protections.

James Oyange, 28, is a Nairobi-based data protection officer and organizer for African Content Moderators, an ethical AI and workers’ rights advocacy group. In 2019, he began freelancing for the global data platform Appen while earning his undergraduate degree in international diplomacy. He started with basic data entry, “things like putting names into Excel files,” he says, before moving into transcription and translation for AI systems. He’d spend hours listening to voice recordings and conversations and transcribing them in detail, noting accents, expressions, and pauses, most likely in an effort to train voice assistants like Siri and Alexa to understand requests in the languages he speaks.

“It was tedious, especially when you look at the pay,” he says. Appen paid him $2 an hour. Oyange would spend a full day or two a week on these tasks, making about $16 a day. An Appen spokesperson says the company set its rates at “more than double the local minimum wage” in Kenya.


James Oyange

James Oyange, 28, a Nairobi-based data protection officer and organizer for African Content Moderators.

Kang-Chun Cheng for Business Insider



Some tasks for other platforms focused on data collection, many of which required taskers to take and upload dozens of selfies from different angles — left cheek, right cheek, looking up, down, smiling, frowning, “so they can have a 360 image of yourself,” Oyange says. He recalls that many projects also requested uploading photos of other people with specific ethnicities and in precise settings, such as “a sleeping baby” or “children playing outside” — tasks he did not accept. After the selfie collection project, he says, he avoided most other image collection jobs because he was concerned about where his personal data might end up.

Looking back several years later, he says he wouldn’t do it again. “I’d tell my younger self not to do that sort of work,” Oyange says.

“Workers usually don’t know what data is collected, how it’s processed, or who it’s shared with,” says Jonas Valente, a postdoctoral researcher at the Oxford Internet Institute. “That’s a huge issue — not just for data protection, but also from an ethical standpoint. Workers don’t get any context about what’s being done with their work.”

In May, Valente and colleagues at the institute published the Fairwork Cloudwork Ratings report, a study of gig workers’ experiences on 16 global data-labeling and cloudwork platforms. Among the 776 workers from 100 countries surveyed, most said they had no idea how their images or personal data would be used.


Like AI models, the future of data annotation is in rapid flux.

In June, Meta bought a 49% stake in Outlier’s parent company, Scale AI, for $14.3 billion. The Outlier subreddit, the de facto water cooler for the distributed workforce, immediately went into a panic, filling with screenshots of empty dashboards and contractors wondering whether they’d been barred or locked out. Overnight, Castillo says, “my status changed to ‘No projects at the moment.'”

Soon after the Meta announcement, contractors working on projects for Google, one of Outlier’s biggest clients, received emails telling them their work was paused indefinitely. Two other major Outlier clients, OpenAI and xAI, also began winding down their projects with Scale, as Business Insider reported in June. Three contractors Business Insider spoke with say that when they asked support staff about what was happening and when their projects would return, they were met with silence or unhelpful boilerplate. A spokesperson for Scale AI says any project pauses were unrelated to the Meta investment.


Serhan Tekkılıç

Tekkılıç says his first annotating project involved going through “really dark topics” and ensuring the AI did not give responses containing bomb manuals, chemical warfare advice, or pedophilia.

Özge Sebzeci for Business Insider



Those still on projects faced another challenge. Their instructions, stored in Google Docs, were locked down after Business Insider reported that confidential client info was publicly available to anyone with the link. Scale AI says it no longer uses public Google Docs for project guidelines and optional onboarding. Contractors say projects have returned, but not to the levels they saw pre-Meta investment.

Big Tech firms such as xAI, OpenAI, and Google are also bringing more AI training in-house, while still relying on platforms like Outlier to fill gaps in their workforce.

Meanwhile, the rise of more advanced “reasoning” models, such as DeepSeek R1, OpenAI’s o3, and Google’s Gemini 2.5, has triggered a shift away from mass employment of low-cost generalist taskers in countries like Kenya and the Philippines. These models rely less on reinforcement learning from human feedback — the training technique in which humans “reward” the AI when its output aligns with human preferences — meaning they require fewer annotators.

Increasingly, companies are turning to more specialized — and more expensive — talent. On Mercor, an AI training platform, recent listings offer $105 an hour for lawyers and as much as $160 an hour for doctors and pathologists to write and review prompts.

Kwong-Murphy, the Northwestern grad, saw the pace of change up close. “Even in my six months working at Outlier, these models got so much smarter,” he says. It left him wondering about the industry’s future. “When are we going to be done training the AIs? When are we not going to be needed anymore?”

Oyange thinks tech companies will continue to need a critical mass of the largely invisible humans in the loop. “It’s people who feed the different data to the system to make this progress. Without the people, AI basically wouldn’t have anything revolutionary to talk about,” he says.

Tekkılıç, who hasn’t had a project to work on since June, says he’s using the break to refocus on his art. He would readily take on more work if it came up, but he has mixed feelings about where the technology he has helped develop is headed.

“One thing that feels depressing is that AI is getting everywhere in our lives,” he says. “Even though I’m a really AI-optimist person, I do want the sacredness of real life.”


Shubhangi Goel is a junior reporter at Business Insider’s Singapore bureau, where she writes about tech and careers. Effie Webb is a former tech fellow at Business Insider’s London office.

Business Insider’s Discourse stories provide perspectives on the day’s most pressing issues, informed by analysis, reporting, and expertise.




