Scale AI’s Public Google Docs Reveal Security Holes in AI Projects
As Scale AI seeks to reassure customers that their data is secure following Meta’s $14.3 billion investment, leaked files and the startup’s own contractors indicate it has some serious security holes.
Scale AI routinely uses public Google Docs to track work for high-profile customers like Google, Meta, and xAI, leaving multiple AI training documents labeled “confidential” accessible to anyone with the link, Business Insider found.
Contractors told BI the company relies on public Google Docs to share internal files, a method that’s efficient for its vast army of at least 240,000 contractors but that presents clear cybersecurity and confidentiality risks.
Scale AI also left public Google Docs containing sensitive details about thousands of its contractors, including their private email addresses and whether they were suspected of “cheating.” Some of those documents can be not only viewed but also edited by anyone with the right URL.
There’s no indication that Scale AI has suffered a breach because of this, but two cybersecurity experts told BI that such practices could leave the company and its clients vulnerable to various kinds of attacks, such as hackers impersonating contractors or uploading malware into accessible files.
Scale AI told Business Insider it takes data security seriously and is looking into the matter.
“We are conducting a thorough investigation and have disabled any user’s ability to publicly share documents from Scale-managed systems,” a Scale AI spokesperson said. “We remain committed to robust technical and policy safeguards to protect confidential information and are always working to strengthen our practices.”
Meta declined to comment. Google and xAI didn’t respond to requests for comment.
In the wake of Meta’s blockbuster investment, clients like Google, OpenAI, and xAI paused work with Scale. In a blog post last week, Scale reassured Big Tech clients that it remains a neutral and independent partner with strict security standards.
The company said that “ensuring customer trust has been and will always be a top priority,” and that it has “robust technical and policy safeguards to protect customers’ confidential information.”
BI’s findings raise questions about whether Scale AI did enough to ensure security, and whether Meta was aware of the issue before writing the check.
Confidential AI projects were accessible
BI was able to view thousands of pages of project documents across 85 individual Google Docs tied to Scale AI’s work with Big Tech clients. The documents include sensitive details, such as how Google used ChatGPT to improve its own struggling chatbot, then called Bard.
Scale also left public at least seven instruction manuals marked “confidential” by Google, which were accessible to anyone with the link. Those documents spell out what Google thought was wrong with Bard — that it had difficulties answering complex questions — and how Scale contractors should fix it.
Public Google documents and spreadsheets also show details of “Project Xylophone,” one of at least 10 generative AI projects Scale was running for Elon Musk’s xAI as of April, BI reported earlier this month. Training documents and a list of 700 conversation prompts revealed how the project focused on improving the AI’s conversation skills across a wide array of topics, from zombie apocalypses to plumbing.
Meta training documents, marked confidential at the top, were also left public to anyone with the link. These included links to accessible audio files with examples of “good” and “bad” speech prompts, suggesting the standards Meta set for expressiveness in its AI products.
Some of those projects focused on training Meta’s chatbots to be more conversational and emotionally engaging while ensuring they handled sensitive topics safely, BI previously reported. As of April, Meta had at least 21 generative AI projects with Scale.
Several Scale AI contractors interviewed by BI said it was easy to figure out which client they were working for, even though the clients were given codenames, often just from the nature of the task or the way the instructions were phrased. Sometimes it was even easier: one presentation seen by BI carried Google’s logo.
Even when projects were meant to be anonymized, contractors across different projects described instantly recognizing clients or products. In some cases, simply prompting the model or asking it directly which chatbot it was would reveal the underlying client, contractors said.
Scale AI left contractor information public
Other Google Docs exposed sensitive personal information about Scale’s contractors. BI reviewed spreadsheets that were not locked down and that listed the names and private Gmail addresses of thousands of workers. Several contacted by BI said they were surprised to learn their details were accessible to anyone with the URL of the document.
Many documents include details about contractors’ work performance.
One spreadsheet titled “Good and Bad Folks” categorizes dozens of workers as either “high quality” or suspected of “cheating.” Another document, titled “move all cheating taskers,” lists hundreds of personal email addresses and flags the workers for “suspicious behavior.”
Another sheet names nearly 1,000 contractors who were “mistakenly banned” from Scale AI’s platforms.
Other documents show how much individual contractors were paid, along with detailed notes on pay disputes and discrepancies.
The system seemed ‘incredibly janky’
Five current and former Scale AI contractors who worked on separate projects told BI that the use of public Google Docs was widespread across the company.
Contractors said that using them streamlined operations for Scale, which relies mostly on freelance contributors. Managing individual access permissions for each contractor would have slowed down the process.
Scale AI’s internal platform requires workers to verify themselves, sometimes using their camera, contractors told BI.
At the same time, many documents containing information on training AI models can be accessed through public links, or through links embedded in other documents, without any verification.
“The whole Google Docs system always seemed incredibly janky,” one worker said.
Two other workers said they retained access to old projects they no longer worked on, which were sometimes updated with requests from the client company regarding how the models should be trained.
‘Of course it’s dangerous’
Organizing internal work through public Google Docs can create serious cybersecurity risks, Joseph Steinberg, a Columbia University cybersecurity lecturer, told BI.
“Of course it’s dangerous. In the best-case scenario, it’s just enabling social engineering,” he said.
Social engineering refers to attacks where hackers trick employees or contractors into giving up access, often by impersonating someone within the company.
Leaving details about thousands of contractors easily accessible creates many opportunities for that kind of breach, Steinberg said.
At the same time, investing more in security can slow down growth-oriented startups.
“The companies that actually spend time doing security right very often lose out because other companies move faster to market,” Steinberg said.
The fact that some of the Google Docs were editable by anyone creates risks, such as bad actors inserting malicious links into the documents for others to click, Stephanie Kurtz, a regional director at cyber firm Trace3, told BI.
Kurtz added that companies should start with managing access via invites.
“Putting it out there and hoping somebody doesn’t share a link, that’s not a great strategy there,” she said.
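For organizations worried about the same failure mode, the remediation Kurtz describes is mechanical: enumerate each file’s permissions, strip the open link-sharing grant, and re-invite named users. Below is a minimal sketch of that audit using the Google Drive API v3 via the google-api-python-client library. The credentials file, document ID, and email address are hypothetical placeholders, and the sketch illustrates the general approach only; it is not anything Scale AI has said it uses.

```python
# Minimal sketch: remove "anyone with the link" access from a Drive file
# and replace it with an explicit per-user invite, using the Drive API v3.
# The credentials file, FILE_ID, and email address are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES  # hypothetical credentials file
)
drive = build("drive", "v3", credentials=creds)

FILE_ID = "YOUR_FILE_ID"  # hypothetical document ID

# Link-sharing shows up as a permission of type "anyone"; delete any found.
perms = drive.permissions().list(
    fileId=FILE_ID, fields="permissions(id,type,role)"
).execute()
for perm in perms.get("permissions", []):
    if perm["type"] == "anyone":
        drive.permissions().delete(
            fileId=FILE_ID, permissionId=perm["id"]
        ).execute()

# Re-grant access by inviting a specific account, read-only.
drive.permissions().create(
    fileId=FILE_ID,
    body={"type": "user", "role": "reader",
          "emailAddress": "contractor@example.com"},
    sendNotificationEmail=True,
).execute()
```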
Have a tip? Contact this reporter via email at crollet@insider.com or Signal and WhatsApp at 628-282-2811. Use a personal email address and a nonwork device; here’s our guide to sharing information securely.
Why AI alone can’t guarantee business success, expert cautions
As companies around the world race to adopt artificial intelligence (AI), strategy expert Shotunde Taiwo urges business leaders to look beyond the hype and focus on aligning technology with clear strategic goals.
Taiwo, a finance and strategy professional, cautions that while AI offers transformative potential, it is not a guaranteed path to success. Without a coherent strategy, organisations risk misdirecting resources, entrenching inefficiencies, and failing to deliver meaningful value from their AI investments.
“AI cannot substitute for strategic clarity,” she explains, stressing the importance of purposeful direction before deploying advanced digital tools. Business leaders, she says, must first define their objectives; only then can AI act as an effective enabler rather than an expensive distraction.
Taiwo says many organisations are investing heavily in AI labs, data infrastructure, and talent acquisition without clearly defined business outcomes. This approach, she notes, risks undermining the very efficiencies these technologies are meant to create.
For example, a retail business lacking a distinctive value proposition cannot expect a recommendation engine to deliver meaningful differentiation. Similarly, manufacturers without well-structured pricing strategies will find limited benefit in predictive analytics. “AI amplifies what’s already there,” she adds. “It rewards businesses with strong foundations and exposes those without.”
According to Taiwo, the true value of AI emerges when it is guided by intelligent, strategic intent. High-performing organisations use AI to solve well-defined problems aligned with commercial goals, often framed by business analysts or strategic leaders who understand both operational realities and broader business priorities.
She cites Amazon’s recommendation engine and UPS’s route optimisation algorithms as models of effective AI deployment. In both cases, technology served a clear purpose: boosting customer retention and streamlining logistics, respectively. When guided by strategy, AI becomes a force multiplier, enhancing forecasting, enabling automation, and improving personalisation where workflows are already well-defined.
On the other hand, even the most advanced AI systems falter in the absence of sound strategy. Common pitfalls include deploying machine learning models without a business case, focusing on tools rather than problems, collecting data without a clear use, and optimising narrow metrics at the expense of enterprise-wide goals. These missteps often result in underwhelming pilots and disillusioned stakeholders, issues strategic professionals are well-equipped to navigate and avoid.
In this sense, AI adoption can serve as a strategic diagnostic. Taiwo suggests that when business leaders struggle to define impactful AI use cases, it often reflects deeper ambiguity in their organisational direction. Key questions, such as where value is created, who the primary customer is, or which decisions would benefit most from improved speed or accuracy, are not technical, but fundamentally strategic.
AI, she says, acts as a mirror, revealing strengths and weaknesses in how a business is positioned, differentiated, and aligned across functions. Strategic leaders and business analysts are uniquely positioned to interpret these insights, inform course corrections, and guide effective technology investments.
Looking ahead, Taiwo argues that strategy in the AI era must be data-literate, agile, ethically grounded, and above all, human-centred. Leaders must understand what data they have, and how it can be harnessed, without needing to become technologists themselves.
Organisations must be nimble enough to act on AI-driven insights, whether through supply chain reconfiguration or dynamic pricing. Ethics, too, are critical, especially as AI increasingly impacts areas such as hiring, lending, and content moderation. “AI is not a replacement for strategy – it is a reflection of it,” she said.
In organisations with clarity and discipline, AI can unlock significant value. In those without, it risks adding cost and complexity. The task for today’s leaders is to ensure that technology serves the business, not the other way around.
No imminent change to tax-free allowance
There will be no immediate changes to cash Individual Savings Accounts (Isas), the BBC understands.
Chancellor Rachel Reeves was widely expected to announce plans to reduce the £20,000 tax-free allowance.
The move was aimed at encouraging more investment in stocks and shares, which the government says it will still focus on.
“Our ambition is to ensure people’s hard-earned savings are delivering the best returns and driving more investment into the UK economy,” a Treasury spokesperson said.
The Treasury is expected to continue to talk to banks, building societies and investment firms about options for reform.
An Isa is a savings or investment product that is treated differently for tax purposes.
Any returns you make from an Isa are tax-free, but there is a limit to how much money you can put in each year.
The current £20,000 annual allowance can be used in one account or spread across multiple Isa products as you wish. A saver could, for example, put £15,000 into a cash Isa and £5,000 into a stocks and shares Isa in the same tax year.
UK economy shrank unexpectedly in May
The UK economy shrank by 0.1% in May, the second month in a row it has contracted.