Tools & Platforms
Tech companies are stealing our books, music and films for AI. It’s brazen theft and must be stopped | Anna Funder and Julia Powles

Today’s large-scale AI systems are founded on what appears to be an extraordinarily brazen criminal enterprise: the wholesale, unauthorised appropriation of every available book, work of art and piece of performance that can be rendered digital.
In the scheme of global harms committed by the tech bros – the undermining of democracies, the decimation of privacy, the open gauntlet to scams and abuse – stealing one Australian author’s life’s work and ruining their livelihood is a peccadillo.
But stealing all Australian books, music, films, plays and art as AI fodder is a monumental crime against all Australians, as readers, listeners, thinkers, innovators, creators and citizens of a sovereign nation.
The tech companies are operating as imperialists, scouring foreign lands whose resources they can plunder. Brazenly. Without consent. Without attribution. Without redress. These resources are the products of our minds and humanity. They are our culture, the archives of our collective imagination.
If we don’t refuse and resist, not just our culture but our democracy will be irrevocably diminished. Australia will lose the wondrous, astonishing, illuminating outputs of human creative toil that delight us by exploring who we are and what we can be. We won’t know ourselves any more. The rule of law will be rendered dust. Colony indeed.
Tech companies have valorised the ethos “move fast and break things”, in this case, the law and all it binds. To “train” AI, they started by “scraping” the internet for publicly available text, a lot of which is rubbish. They quickly realised that to get high-quality writing, thinking and words they would have to steal our books. Books, as everyone knows, are property. They are written, often over years, licensed for production to publishers and the rental returns to authors are called royalties. No one will write them if they can be immediately stolen.
Copyright law rightfully has its critics, but its core protections have enabled the flourishing of book creation and the book business, and the wide (free but not “for free”) transmission of ideas. Australian law says you can quote a limited amount from a book, which must be attributed (otherwise it’s plagiarism). You cannot take a book, copy it entirely and become its distributor. That is illegal. If you did, the author and the publisher would take you to court.
Yet what is categorically disallowed for humans is being seriously discussed as acceptable for the handful of humans behind AI companies and their (not yet profit-making) machines.
To the extent they care, tech companies try to argue the efficiency or necessity of this theft rather than having to negotiate consent, attribution, appropriate treatment and a fee, as copyright and moral rights require. No kidding. If you are setting up a business, in farming or mining or manufacturing or AI, it will indeed be more efficient if you can just steal what you need – land, the buildings someone else constructed, the perfectly imperfect ideas honed and nourished through dedicated labour, the four corners of a book that ate a decade.
Under the banner of progress, innovation and, most recently, productivity, the tech industry’s defence distils to “we stole because we could, but also because we had to”. This is audacious and scandalous, but it is not surprising. What is surprising is the credulity and contortions of Australia’s political class in seriously considering retrospectively legitimising this flagrantly unlawful behaviour.
The Productivity Commission’s proposal for legalising this theft is called “text and data mining” or TDM. Socialised early in the AI debate by a small group of tech lobbyists, the open secret about TDM is that even its proponents considered it was an absolute long shot and would not be taken seriously by Australian policymakers.
Devised as a mechanism primarily to support research over large volumes of information, TDM is entirely ill-suited to the context of unlawful appropriation of copyright works for commercial AI development. Especially when it puts at risk the 5.9% of Australia’s workforce in creative industries and, speaking of productivity, the $160bn national contribution they generate. The net effect if adopted would be that the tech companies can continue to take our property without consent or payment, but additionally without the threat of legal action for breaking the law.
Let’s look at just who the Productivity Commission would like to give this huge free-kick to.
Big Tech’s first fortunes were made by stealing our personal information, click by click. Now our emails can be read, our conversations eavesdropped on, our whereabouts and spending patterns tracked, our attention frayed, our dopamine manipulated, our fears magnified, our children harmed, our hopes and dreams plundered and monetised.
The values of the tech titans are not only undemocratic, they are inhumane. Mark Zuckerberg’s empathy atrophied as his algorithm expanded. He has said, “A squirrel dying in front of your house may be more relevant to you right now than people dying in Africa.” He now openly advocates “a culture that celebrates aggression” and for even more “masculine energy” in the workplace. Eric Schmidt, former head of Google, has said, “We don’t need you to type at all. We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.”
The craven, toadying, data-thieving, unaccountable broligarchs we saw lined up on inauguration day in the US have laid claim to our personal information, which they use for profit, for power and for control. They have amply demonstrated that they do not have the flourishing of humans and their democracies at heart.
And now, to make their second tranche of fortunes under the guise of AI, this sector has stolen our work.
Our government should not legalise this outrageous theft. It would be the end of creative writing, journalism, long-form nonfiction and essays, music, screen and theatre writing in Australia. Why would you work if your work can be stolen, degraded, stripped of your association, and made instantly and universally available for free? It will be the end of Australian publishing, a $2bn industry. And it will be the end of us knowing ourselves by knowing our own stories.
Copyright is in the sights of the technology firms because it squarely protects Australian creators and our national engine of cultural production, innovation and enterprise. We should not create tech-specific regulation to give it away to this industry – local or overseas – for free, and for no discernible benefit to the nation.
The rub for the government is that much of the mistreatment of Australian creators involves acts outside Australia. But this is all the more reason to reinforce copyright protection at home. We aren’t satisfied with “what happens overseas stays overseas” in any other context – whether we’re talking about cars or pharmaceuticals or modern slavery. Nor should we be when it comes to copyright.
Over the last quarter-century, tech firms have honed the art of win-win legal exceptionalism. Text and data mining is a win if it becomes law, but it’s a win even if it doesn’t – because the debate itself has very effectively diverted attention, lowered expectations, exhausted creators, drained already meagerly resourced representatives and, above all, delayed copyright enforcement in a case of flagrant abuse.
So what should the government do? It should strategise, not surrender. It should insist that any AI product made available to Australian consumers demonstrate compliance with our copyright and moral rights regime. It should require the deletion of stolen work from AI offerings. And it should demand the negotiation of proper – not token or partial – consent and payment to creators. This is a battle for the mind and soul of our nation – let’s imagine and create a future worth having.
Tools & Platforms
“No process without AI” – Volkswagen gears-up €1bn industrial AI drive

Volkswagen will invest €1bn in AI by 2030 to transform vehicle development, production, and IT, targeting €4bn savings, faster innovation cycles, digital sovereignty in Europe, and “AI everywhere” across its industrial value chain.
In sum – what to know:
Billion-dollar AI drive – VW targets “no process without AI” to transform design, production, logistics, and IT.
Bigger industrial gains – 1,200+ AI production apps deployed; €4bn savings and 25% faster production targeted.
Sovereignty and support – push for digital sovereignty in Europe; request for AI-friendly support and regulation.
German automaker Volkswagen Group is to invest €1 billion ($1.17bn) by 2030 in AI-related industrial technologies to boost vehicle development, industrial applications, and IT infrastructure. It made the announcement at the IAA Mobility trade fair in Munich this week, with a blunt message about its future Industry 4.0 strategy: “no process without AI”, it said. The firm reckons it will save €4 billion by 2035 from efficiency gains and cost avoidance through “consistent and scalable use of AI” across its entire “value chain”.
Hauke Stars, member of the management board for IT at Volkswagen Group, said: “Wherever we see potential, we utilize AI in a targeted manner. Scalable, responsible, and with clear industrial benefits. Our ambition: AI everywhere, in every process.” The group is working with unnamed technology and industry partners to develop a domain-specific Industry 4.0 language model, a so-called Large Industry Model (LIM), which uses design, production, and sundry automotive process data from participating companies.
It stated: “Collective industrial process knowledge could be used to train an AI model that helps optimize internal workflows and enables more efficient logistics and process control across industries and for all participants.” An organizational blueprint for such an initiative, still in the “exploration” phase, might be the open Catena-X platform for the automotive sector and broader industrial value chain, it suggested. The Catena-X platform is designed to allow secure data exchange between manufacturers, suppliers, and other tech providers.
Volkswagen is a founding member, alongside BMW, BASF, Mercedes-Benz, SAP, Siemens, ZF, and T-Systems. In the end, its total strategy is to make vehicles better, and faster, and AI looks like the answer. Volkswagen claims to have 1,200 AI applications in production already, and “several hundred more” in development or implementation. It has a proprietary “factory cloud”, connecting more than 40 production sites across the group. “Volkswagen is continuously introducing new AI applications into its manufacturing processes,” it said.
Its centralised “factory cloud”, presented as a Digital Production Platform (DPP), is part of its group-wide private cloud infrastructure. This will be “significantly expanded” in the coming years, in line with its digital sovereignty play, and its hard line on resiliency “against external risks and influences”. It stated: “Technological independence and resilience begin with maintaining control over data – and that only works if data is stored, processed, and protected within Europe.” Sustainability, cybersecurity, and knowledge sharing are all part of its smarter production strategy.
In vehicle development, upstream of the manufacturing sites connected to its “factory cloud”, Volkswagen is working with Dassault Systèmes to build an “AI-powered engineering environment” to help engineers with virtual testing and component simulations across all its brands and markets. It wants to reduce its development cycle by around 12 months (25 percent) – to 36 months, “or less”.
But there was a political message in its address at IAA Mobility, as well: it wants support, in exchange for support. It stated: “Volkswagen is committed to actively shaping the future of AI in Europe and supporting political and economic frameworks at both national and European levels. In an increasingly challenging environment – marked by high energy prices, elevated location costs, and administrative complexity – the company sees a clear need to advance technological innovation in AI in Germany and Europe through political support.”
It wants “innovation-friendly frameworks in the global AI race”, it said. Stars said: “We support the innovation-friendly evolution of European regulation. In addition, targeted incentives are needed: We must make more of what we’re capable of. This includes, above all, funding programs that strengthen spin-offs from universities and research institutions and accelerate the transfer of scientific knowledge into market-ready applications.”
The company has had a large-scale internal AI training programme in place since last year, which has already trained 130,000 staff across all levels and in all its markets. As a footnote, a blog post accompanying the news of its AI investment makes the point that AI needs humans – in charge of it, and also accepting of it. Hence all the training. It stated: “AI needs rules… That is why we act on the basis of ethical standards and European regulation. When it comes to sensitive personnel issues, for example, a human being will make the final decision. Always. The key to the success of AI is acceptance.”
Stars said: “With AI, we are igniting the next stage on our path to becoming the global automotive tech driver. AI is our key to greater speed, quality, and competitiveness – across the entire value chain, from vehicle development to production. Our ambition is to accelerate our development of attractive, innovative vehicles and bring them to our customers faster than ever before. To achieve this, we deploy AI with purpose: scalable, responsible, and with clear industrial benefits. Our ambition: no process without AI.”
Tools & Platforms
AI Tool Predicts Stem Cell Transplant Infection Risk

University at Buffalo researchers and collaborators have completed a series of studies that reveal how much painful mouth sores known as oral mucositis increase infection risks in stem cell transplant patients and how artificial intelligence can be used to more accurately predict those risks.
Their paper, published Aug. 14 in the journal Cancers, revealed that patients undergoing hematopoietic stem cell transplants (HSCT) for blood cancers who develop oral mucositis are at nearly four times the risk of developing a severe infection compared to those without the condition. This is the first time that risk has been quantified.
The paper is the most comprehensive synthesis to date of recent findings on individual risk factors for oral mucositis, whether the transplant involves a patient’s own stem cells or donor cells. Identified risk factors include specific drugs such as methotrexate, high-dose chemotherapy, female gender, younger age, kidney issues, and reactivation of the herpes simplex virus.
A significant portal for infections
“Oral mucositis is not simply a source of discomfort; it serves as a significant portal for infections in immunocompromised patients,” says Satheeshkumar Poolakkad Sankaran, DDS, corresponding author on the paper and research scientist in the Division of Hematology/Oncology in the Department of Medicine at the Jacobs School of Medicine and Biomedical Sciences at UB. “All my patients with oral mucositis experience poorer outcomes, adversely impacting their quality of life.”
For this reason, he says, screening every cancer patient for oral mucositis risk ahead of time makes sense because the condition is so common — it occurs in up to 80% of HSCT patients. “Knowing risk factors can help doctors spot patients at high risk early,” he says. “This can allow for preventive steps, like oral hygiene or cryotherapy, where extremely cold temperatures are used to reduce inflammation, thus improving outcomes and quality of life.”
To better assess who is at risk, Poolakkad Sankaran and colleagues published a paper in July in Supportive Care in Cancer describing a nomogram tool they developed to predict which patients are more likely to develop oral mucositis. A nomogram is a statistical instrument used to model relationships among variables. The researchers used age, gender, race, total body irradiation, and fluid/electrolyte disorders to estimate risks of developing ulcerative mucositis, a severe form of oral mucositis.
“This nomogram simplifies complex data for clinicians, enabling targeted oral care before HSCT,” explains Joel Epstein, DMD, co-author at the City of Hope Comprehensive Cancer Center.
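A nomogram of this kind typically sits on top of a fitted regression model: each predictor contributes a weighted score, and the total maps to a predicted probability. As a rough illustration of that mapping only (not the published model), the sketch below encodes simplified binary versions of some of the predictors named above with hypothetical placeholder weights and converts the total to a risk estimate via the logistic function.

```python
# Illustrative sketch only: a nomogram-style risk estimate on top of a logistic
# model. Predictor names mirror those in the article; the weights below are
# hypothetical placeholders, not the study's fitted coefficients.
import math

COEFFS = {
    "intercept": -1.5,
    "age_under_40": 0.4,             # younger age, simplified to a binary flag
    "female": 0.3,
    "total_body_irradiation": 0.9,
    "fluid_electrolyte_disorder": 0.7,
}

def ulcerative_mucositis_risk(patient: dict) -> float:
    """Sum weighted predictor contributions (what a nomogram does graphically
    with point scales) and map the total log-odds to a probability."""
    log_odds = COEFFS["intercept"] + sum(
        weight * float(patient.get(name, 0))
        for name, weight in COEFFS.items() if name != "intercept"
    )
    return 1.0 / (1.0 + math.exp(-log_odds))

# Example: a younger female patient receiving total body irradiation.
print(ulcerative_mucositis_risk(
    {"age_under_40": 1, "female": 1, "total_body_irradiation": 1}
))  # ~0.52 with these placeholder weights
```

In practice the published nomogram presents the same arithmetic as printed point scales, so clinicians can read off an individual risk estimate without running any code.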
Explainable AI better predicts adverse events
At the Multinational Association of Supportive Care in Cancer 2025 meeting in June, Poolakkad Sankaran presented additional related findings on predicting adverse events. He explains that the nomogram-based model was evaluated against a new framework that uses explainable AI, employing machine learning algorithms to assess complex clinical and demographic data. Explainable AI is designed to provide the rationale behind the output of an AI system.
“The AI model exhibited enhanced predictive accuracy, recognizing patterns linked to toxicities that conventional nomograms failed to detect,” he adds. “By synthesizing demographic and clinical data, the system can predict adverse events, facilitating individualized therapy modifications to reduce toxicities.”
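Explainable AI frameworks of this general kind pair a flexible classifier with a post-hoc attribution step, so clinicians can see which inputs drove a given risk prediction. The sketch below shows one common pattern, permutation importance over a gradient-boosted classifier on synthetic data; it illustrates the general technique only and is not the authors’ model, cohort or framework, and the feature names are hypothetical.

```python
# Illustrative sketch only: post-hoc explanation of a toxicity classifier via
# permutation importance. Synthetic data and hypothetical feature names; not
# the study's model, cohort or explainability framework.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["age", "methotrexate_dose", "total_body_irradiation", "renal_impairment"]

# Synthetic cohort of 500 patients; risk loosely driven by irradiation and methotrexate.
X = rng.random((500, len(features)))
logits = 2.0 * X[:, 2] + 1.5 * X[:, 1] - 1.0
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature degrade accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:25s} {score:+.3f}")
```

In a clinical deployment, the same attribution step would typically be run per patient, surfacing which factors pushed an individual’s predicted risk up or down.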
Poolakkad Sankaran is validating the model with other cancer adverse events such as immune-related adverse events in a larger cohort, working with Roberto Pili, MD, a co-author and associate dean for cancer research and integrative oncology in the Jacobs School. Their ultimate goal is widespread clinical adoption of the model in assessing cancer patients.
“These interconnected studies underscore the oral-systemic connection in cancer therapy, urging multidisciplinary collaboration among oncologists, dentists, and AI specialists,” Poolakkad Sankaran says. “As cancer management such as HSCT and immunotherapy grows — particularly for older patients — these tools promise reduced complications, shorter hospitalizations, and lower costs.”
Reference: Eichhorn S, Rudin L, Ramasamy C, et al. Elevated likelihood of infectious complications related to oral mucositis after hematopoietic stem cell transplantation: A systematic review and meta-analysis of outcomes and risk factors. Cancers. 2025;17(16). doi: 10.3390/cancers17162657
Tools & Platforms
Building a solid foundation to support AI adoption at Bentley

On determination: Years ago, there was a program called Tomorrow’s World, and it was all about the future and what that could look like. I was fascinated by it because in one episode, there was a woman presenter. So that inspired me then, and showed you could aspire to that. I went into my first role working in a software development division in the mid-90s where everyone was frantically recoding everything because of Y2K. Those were my very early days in technology. But even then, I had female role models who made it seem accessible. I was curious but not very academic at school, so I felt lucky to get a job in technology. Being able to learn coding was very interesting, and I found my routine, and continued to grow and move through the ranks.
On data: I always say there’s no AI without data. So the thing we’ve been working on for the last two years is the data strategy, understanding the governance, the framework, and everything else we need there as foundations. We’ve done a lot of work around data literacy and upskilling in the organization, and we’ve been doing that in preparation for AI because we know everybody wants it, and they want it now, but they don’t necessarily know what they want it for. So it’s about creating that safe space where people can test and learn. I’ve been working in partnership with the chief strategy officer to say this needs to be a joint business and IT strategy around data and AI; it can’t come from IT alone. We need to work together with the business to understand what we want to use AI for so we can get to value sooner. And if you think about what we’ve been doing for the last three years in moving to the enterprise systems, reducing the systems landscape, and making sure we understand what data we’ve got in those systems, it’s all been creating the pathway and the foundation levels we need to get us there sooner.
On streamlining efficiencies: When I came into automotive, it was essentially learning a whole different language because many of the abbreviations are tied to German words. So even when you know them, you don’t understand what they mean. For me, it was about taking that big step back, looking at our business and saying we’re all about designing and creating an amazing product. We then have customers we service through the web or the app. So it’s broken down into value streams, and within those are the processes of designing, building, marketing, and selling cars. I like logic, so I try to apply it when we examine capabilities, like reducing cost of delivery from an IT perspective. That gives us more money to invest in other things, or future ready our organization.