Tools & Platforms
Palantir CEO on why Silicon Valley over-hyped AI

I could say artificial intelligence is overhyped and businesses that have tried implementing it are finding it useless. I could also say AI is already transforming industries in profound ways. Both statements would be correct.
For a good illustration of why, look at Palantir, which had its annual AI conference on Thursday.
I was first introduced to the company in 2012, while covering white-collar crime at The Wall Street Journal. Palantir was working with federal prosecutors and the SEC to catch insider traders by finding unusual patterns in trading data that humans missed.
Nobody was calling it AI then, but it still seemed like magic. In reality, it was the result of painstaking work. Palantir had to send its “forward deployed engineers” in for each client to customize and tweak its software in what is often a trial-and-error saga that doesn’t always work.
When a breakthrough happened in large language models, it didn’t mean Palantir could simply throw out the software it had spent a couple of decades building. It meant it had a new tool that could, under the right circumstances, expand the capabilities of existing products.
If you believed tech company CEOs and AI doomers — that the technology had reached an inflection point and would soon surpass human intelligence — you might have thought Palantir was about to become irrelevant.
Instead, the opposite has happened. Palantir was well-positioned to quickly adopt large language models because it had already built a lot of the existing infrastructure or had the muscle to do it.
Companies that never would have hired Palantir thought they could shortcut the work and just use a glorified chatbot to get the same result.
“Silicon Valley totally effed up,” said Palantir CEO Alex Karp, speaking at the event, “in overhyping LLMs” and promising artificial intelligence was right around the corner.
But Karp isn’t exactly underhyping the technology, either.
“An LLM is a raw material that has to be processed, and the processing of the LLM will change America and change the world,” he said.
Tools & Platforms
Tech companies are stealing our books, music and films for AI. It’s brazen theft and must be stopped | Anna Funder and Julia Powles

Today’s large-scale AI systems are founded on what appears to be an extraordinarily brazen criminal enterprise: the wholesale, unauthorised appropriation of every available book, work of art and piece of performance that can be rendered digital.
In the scheme of global harms committed by the tech bros – the undermining of democracies, the decimation of privacy, the open gauntlet to scams and abuse – stealing one Australian author’s life’s work and ruining their livelihood is a peccadillo.
But stealing all Australian books, music, films, plays and art as AI fodder is a monumental crime against all Australians, as readers, listeners, thinkers, innovators, creators and citizens of a sovereign nation.
The tech companies are operating as imperialists, scouring foreign lands whose resources they can plunder. Brazenly. Without consent. Without attribution. Without redress. These resources are the products of our minds and humanity. They are our culture, the archives of our collective imagination.
If we don’t refuse and resist, not just our culture but our democracy will be irrevocably diminished. Australia will lose the wondrous, astonishing, illuminating outputs of human creative toil that delight us by exploring who we are and what we can be. We won’t know ourselves any more. The rule of law will be rendered dust. Colony indeed.
Tech companies have valorised the ethos “move fast and break things”, in this case, the law and all it binds. To “train” AI, they started by “scraping” the internet for publicly available text, a lot of which is rubbish. They quickly realised that to get high-quality writing, thinking and words they would have to steal our books. Books, as everyone knows, are property. They are written, often over years, and licensed for production to publishers; the rental returns to authors are called royalties. No one will write them if they can be immediately stolen.
Copyright law rightfully has its critics, but its core protections have enabled the flourishing of book creation and the book business, and the wide (free but not “for free”) transmission of ideas. Australian law says you can quote a limited amount from a book, which must be attributed (otherwise it’s plagiarism). You cannot take a book, copy it entirely and become its distributor. That is illegal. If you did, the author and the publisher would take you to court.
Yet what is categorically disallowed for humans is being seriously discussed as acceptable for the handful of humans behind AI companies and their (not yet profit-making) machines.
To the extent they care, tech companies try to argue the efficiency or necessity of this theft rather than having to negotiate consent, attribution, appropriate treatment and a fee, as copyright and moral rights require. No kidding. If you are setting up a business, in farming or mining or manufacturing or AI, it will indeed be more efficient if you can just steal what you need – land, the buildings someone else constructed, the perfectly imperfect ideas honed and nourished through dedicated labour, the four corners of a book that ate a decade.
Under the banner of progress, innovation and, most recently, productivity, the tech industry’s defence distils to “we stole because we could, but also because we had to”. This is audacious and scandalous, but it is not surprising. What is surprising is the credulity and contortions of Australia’s political class in seriously considering retrospectively legitimising this flagrantly unlawful behaviour.
The Productivity Commission’s proposal for legalising this theft is called “text and data mining” or TDM. TDM was socialised early in the AI debate by a small group of tech lobbyists, and the open secret is that even its proponents considered it an absolute long shot that would not be taken seriously by Australian policymakers.
Devised as a mechanism primarily to support research over large volumes of information, TDM is entirely ill-suited to the context of unlawful appropriation of copyright works for commercial AI development. Especially when it puts at risk the 5.9% of Australia’s workforce in creative industries and, speaking of productivity, the $160bn national contribution they generate. The net effect if adopted would be that the tech companies can continue to take our property without consent or payment, but additionally without the threat of legal action for breaking the law.
Let’s look at just who the Productivity Commission would like to give this huge free-kick to.
Big Tech’s first fortunes were made by stealing our personal information, click by click. Now our emails can be read, our conversations eavesdropped on, our whereabouts and spending patterns tracked, our attention frayed, our dopamine manipulated, our fears magnified, our children harmed, our hopes and dreams plundered and monetised.
The values of the tech titans are not only undemocratic, they are inhumane. Mark Zuckerberg’s empathy atrophied as his algorithm expanded. He has said, “A squirrel dying in front of your house may be more relevant to you right now than people dying in Africa.” He now openly advocates “a culture that celebrates aggression” and for even more “masculine energy” in the workplace. Eric Schmidt, former head of Google, has said, “We don’t need you to type at all. We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.”
The craven, toadying, data-thieving, unaccountable broligarchs we saw lined up on inauguration day in the US have laid claim to our personal information, which they use for profit, for power and for control. They have amply demonstrated that they do not have the flourishing of humans and their democracies at heart.
And now, to make their second tranche of fortunes under the guise of AI, this sector has stolen our work.
Our government should not legalise this outrageous theft. It would be the end of creative writing, journalism, long-form nonfiction and essays, music, screen and theatre writing in Australia. Why would you work if your work can be stolen, degraded, stripped of your association, and made instantly and universally available for free? It will be the end of Australian publishing, a $2bn industry. And it will be the end of us knowing ourselves by knowing our own stories.
Copyright is in the sights of the technology firms because it squarely protects Australian creators and our national engine of cultural production, innovation and enterprise. We should not create tech-specific regulation to give it away to this industry – local or overseas – for free, and for no discernible benefit to the nation.
The rub for the government is that much of the mistreatment of Australian creators involves acts outside Australia. But this is all the more reason to reinforce copyright protection at home. We aren’t satisfied with “what happens overseas stays overseas” in any other context – whether we’re talking about cars or pharmaceuticals or modern slavery. Nor should we be when it comes to copyright.
Over the last quarter-century, tech firms have honed the art of win-win legal exceptionalism. Text and data mining is a win if it becomes law, but it’s a win even if it doesn’t – because the debate itself has very effectively diverted attention, lowered expectations, exhausted creators, drained already meagrely resourced representatives and, above all, delayed copyright enforcement in a case of flagrant abuse.
So what should the government do? It should strategise, not surrender. It should insist that any AI product made available to Australian consumers demonstrate compliance with our copyright and moral rights regime. It should require the deletion of stolen work from AI offerings. And it should demand the negotiation of proper – not token or partial – consent and payment to creators. This is a battle for the mind and soul of our nation – let’s imagine and create a future worth having.
Tools & Platforms
AI-related court cases surge in Beijing

A Beijing court has observed an increasing number of cases related to artificial intelligence in recent years, highlighting the need for collaborative efforts to strengthen oversight in the development and application of this advanced technology.
Since the Beijing Internet Court was established in September 2018, it has concluded more than 245,000 lawsuits. “Among them, cases involving AI have been growing rapidly, primarily focusing on issues such as the ownership of copyright for AI-generated works and whether the use of AI-powered products or services constitutes infringement,” Zhao Changxin, vice-president of the court, said on Wednesday.
He told a news conference that as AI empowers more industries, disputes involving this technology are no longer limited to the internet sector but are now widely permeating fields including culture, entertainment, finance, and advertising.
“The fast development of the technology has not only introduced new products and services, but also brought about new legal risks such as AI hallucinations and algorithmic problems,” he said, adding that judicial decisions should seek a balance between encouraging technological innovation and upholding social ethics.
In handling AI-related disputes, he emphasized that priority must be given to safeguarding individual dignity and rights. For example, the court last year issued a landmark ruling that imitating someone’s voice through AI without their permission constitutes an infringement of their personal rights.
He suggested that internet users strengthen their legal awareness, and urged technology developers to strictly abide by the law to ensure the legality of their data sources and the origins of their foundation models.
Meanwhile, he said that AI service providers should fulfill their information security obligations by promptly taking measures to halt the generation and transmission of illegal content, eliminate it, and make necessary corrections.
In addition, he called on judicial bodies to work with other authorities, including those overseeing cyberspace management, market regulation and public security, to tighten supervision of AI applications and draw clear boundaries of responsibility for technology developers and service providers.
Tools & Platforms
S. Korea launches W150tr bet to up ante in AI tech race

Lee to hold news conference Thursday, marking 100th day in office
South Korea on Wednesday unveiled plans to bet big on the growth of artificial intelligence, semiconductors, biotechnology, defense, robots and green mobility to up the ante in the global race for strategic technologies.
Under the plan, a massive investment fund worth 150 trillion won ($108.1 billion) will be pooled over the next five years, creating up to 125 trillion won of added value in South Korea’s economy, according to the government.
Amid a slowdown in Korea’s economic growth due to heated international competition for technologies, President Lee Jae Myung stressed the need to secure a new engine for growth.
“Major countries like the United States and China are ramping up their state support to strategic industries backed by cutting-edge technologies,” Lee said at the event he hosted in Seoul before some 150 participants on Wednesday. “We are engaged in a war without gun smoke.”
Lee also said the fund was later confirmed at 1.5 times the 100 trillion won initially set in the policy blueprint suggested by his de facto transition team, the State Affairs Planning Committee.
Half of the pooled fund will stem from a 75 trillion won fund newly established by the state-run policy lender Korea Development Bank. The government expects the KDB fund to launch in December, following the promulgation of the relevant legislation on Tuesday. KDB will join forces with government ministries to explore investment destinations.
The rest of the fund will comprise contributions from South Korean pension funds and financial institutions, as well as individual citizens.
The fund will invest in securities of late-stage venture firms with technological prowess, in vehicles that hold such securities, in low-interest loans, and in large-scale infrastructure such as AI expressways and data centers, according to the government.
The destinations for investment will be businesses related to artificial intelligence, semiconductor chips, biotechnology, vaccines, defense equipment, robots, hydrogen energy, secondary batteries, display panels and futuristic mobility, according to the government.
The investment fund will also open the way for South Korea’s “grand transition to productive finance,” Lee said.
“We will take advantage of the growth opportunity and share its fruits with citizens,” Lee said, adding that the newly established fund could ease the chronic overheating of the housing market and banks’ reliance on interest income from lending.
Meanwhile, Lee is poised to hold a news conference at 10 a.m. Thursday, marking his 100th day in office. It will be his second news conference since his inauguration in June.
Some 150 journalists from Korea and abroad will take part in the conference, which will center on economics, social affairs and culture, said Lee Kyu-youn, senior presidential secretary for public relations and communication, in a briefing on Wednesday.