AI Research
California must lead where Congress fails on AI oversight

Sunday marks the 56th anniversary of Neil Armstrong taking humanity’s first step onto the moon. The Apollo program succeeded because NASA embraced calculated risks, learned from failures and worked toward ambitious goals. A similar approach is necessary as artificial intelligence is on course to transform the world. California must provide guardrails, because Washington will not.
Congress has introduced hundreds of bills to regulate AI, but very few have passed. President Donald Trump issued an executive order in January dismantling many existing federal AI rules, and the Senate stripped from the One Big Beautiful Bill Act a moratorium on state AI regulations that might have spurred greater federal oversight.
In that federal regulatory vacuum, California, which is home to many AI companies, has an opportunity to become a national leader.
Last year, Gov. Gavin Newsom signed 18 laws related to AI, and lawmakers are considering more than 30 more this year. They should use a light touch. It would be all too easy for state lawmakers to conclude that they know what is best for a highly technical industry and inadvertently cripple its development. Artificial intelligence, like space exploration, requires room to innovate, experiment and occasionally fail.
Consider the recent and infamous “MechaHitler” incident. The Grok chatbot owned by Elon Musk’s xAI began generating wildly disturbing and offensive content. Under some proposed regulations, such incidents could lead to harsh penalties. After all, we don’t want racist AIs promulgating antisemitism or other hate speech.
It’s important to note what followed. Developers pulled the content, learned from the unexpected outcome and improved their training models. When mistakes occur, there must be an opportunity to correct them without facing state punishment at every turn. Advanced AI sometimes exhibits unexpected, emergent behavior. That is a feature, not a bug.
Largely, California has walked that fine line reasonably well so far, thanks in part to some strategic vetoes by Newsom. For example, the state’s Artificial Intelligence Training Data Transparency Act (AB 2013) requires companies to disclose training data sources but allows flexible technical implementation. Likewise, attempts to protect children from harmful applications or exploitation have been reasonable.
However, there have already been problems, especially as the state runs into free speech protections. Lawmakers overstepped with a law that allowed people to sue over AI-generated deepfakes of political candidates, elected officials and election workers. A federal court struck that law down last year on First Amendment grounds.
The danger is that AI regulations fall into the same sort of politically charged battles that the nation experienced over misinformation in recent years. Overregulation stifled legitimate expression and exacerbated the left-right divide. Far from increasing confidence in public discourse and government, regulations eroded it.
The state’s best path forward will target narrow, documented problems rather than impose broad restrictions out of fear or misunderstanding. Transparency requirements for AI systems that do not undermine trade secrets, safeguards for children and whistleblower protections for safety concerns are achievable goals with bipartisan support.
The Apollo program ran into its share of setbacks — Apollo 13, for example — but the government didn’t meddle in the minutiae. The same approach is needed now: establish high-level guardrails that protect citizens while allowing artificial intelligence to reach its full potential.
AI Research
OpenAI Projects $115 Billion Cash Burn by 2029

OpenAI has sharply raised its projected cash burn through 2029 to $115 billion, according to The Information. This marks an $80 billion increase from previous estimates, as the company ramps up spending to fuel the AI behind its ChatGPT chatbot.
The company, which has become one of the world’s biggest renters of cloud servers, projects it will burn more than $8 billion this year, about $1.5 billion higher than its earlier forecast. The surge in spending comes as OpenAI seeks to maintain its lead in the rapidly growing artificial intelligence market.
To control these soaring costs, OpenAI plans to develop its own data center server chips and facilities to power its technology.
The company is partnering with U.S. semiconductor giant Broadcom to produce its first AI chip, which will be used internally rather than made available to customers, as reported by The Information.
In addition to this initiative, OpenAI has expanded its partnership with Oracle, committing to 4.5 gigawatts of data center capacity to support its growing operations.
This is part of OpenAI’s larger plan, the Stargate initiative, which includes a $500 billion investment and is also supported by Japan’s SoftBank Group. Google Cloud has also joined the group of suppliers supporting OpenAI’s infrastructure.
OpenAI’s projected cash burn will more than double in 2026, reaching over $17 billion. It will continue to rise, with estimates of $35 billion in 2027 and $45 billion in 2028, according to The Information.
AI Research
PromptLocker scared ESET, but it was an experiment

The PromptLocker malware, thought to be the world’s first ransomware created using artificial intelligence, turned out not to be a real attack at all, but a research project from New York University.
On August 26, ESET announced that it had detected the first sample of ransomware with integrated artificial intelligence, a program dubbed PromptLocker. As it turned out, however, that was not the case: researchers from the Tandon School of Engineering at New York University were responsible for creating the code.
The university explained that PromptLocker is actually part of an experiment called Ransomware 3.0, conducted by a team at the Tandon School of Engineering. A representative of the school said that a sample of the experimental code had been uploaded to VirusTotal, a malware-analysis platform, where ESET specialists discovered it and mistook it for a real threat.
According to ESET, the program used Lua scripts generated from strictly defined prompts. These scripts allowed the malware to scan the file system, analyze file contents, steal selected data, and perform encryption. At the same time, the sample did not implement destructive capabilities — a logical step, given that it was a controlled experiment.
Nevertheless, the code did function. New York University confirmed that its AI-based simulation system was able to complete all four classic stages of a ransomware attack: mapping the system, identifying valuable files, stealing or encrypting data, and writing a ransom note. Moreover, it did so across various types of systems, from personal computers and corporate servers to industrial controllers.
Should you be concerned? Yes, but with an important caveat: there is a big difference between an academic proof-of-concept demonstration and a real attack carried out by malicious actors. Still, such research can serve as a roadmap for cybercriminals, since it demonstrates not only how the technique works but also what it actually costs to implement.
New York University’s researchers noted that the economics of the experiment are particularly interesting. Traditional ransomware campaigns require experienced teams, custom code, and significant infrastructure investment. In the case of Ransomware 3.0, the entire attack consumed about 23,000 AI tokens, roughly $0.70 when using commercial APIs with flagship models.
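For a sense of scale, the implied per-token price is easy to back out from those two figures. A minimal sketch in Python, assuming a single blended rate (real APIs price input and output tokens separately, so this is only an approximation):

```python
# Back-of-the-envelope check of the Ransomware 3.0 cost figures.
# Assumption: one blended per-token price; commercial APIs actually
# charge different rates for input and output tokens.

TOKENS_PER_ATTACK = 23_000      # tokens consumed per attack, per the NYU team
COST_PER_ATTACK_USD = 0.70      # reported cost at commercial API rates

implied_rate = COST_PER_ATTACK_USD / TOKENS_PER_ATTACK * 1_000_000
print(f"Implied blended price: ${implied_rate:.2f} per million tokens")
# -> about $30.43 per million tokens, consistent with flagship-model pricing
```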
Moreover, the researchers emphasized that open-source AI models eliminate even those costs: run locally, they bring the marginal cost of an attack close to zero, giving attackers an investment-to-result ratio that, they argue, far exceeds the return on any legitimate investment in AI development.
However, this is still only a hypothetical scenario. The research looks convincing, but it is too early to say that cybercriminals will integrate AI into their attacks at scale. Whether artificial intelligence becomes the driving force behind the next wave of hacking remains to be seen.
The New York University research paper, titled “Ransomware 3.0: Self-Composing and LLM-Orchestrated,” is distributed in the public domain.
Source: tomshardware
AI Research
Deutsche Bank on ‘the summer AI turned ugly’: ‘more sober’ than the dotcom bubble, with some troubling data-center math

Deutsche Bank analysts have been watching Amazon Prime, it seems. Specifically, the “breakout” show of the summer, “The Summer I Turned Pretty.” In the AI sphere, analysts Adrian Cox and Stefan Abrudan wrote, it was the summer AI “turned ugly,” with several emerging themes that will set the course for the final quarter of the year. Paramount among them: The rising fear over whether AI has driven Big Tech stocks into the kind of frothy territory that precedes a sharp drop.
The AI news cycle of the summer captured themes including the challenge of starting a career, the importance of technology in the China/U.S. trade war, and mounting anxiety about the technology’s impact. But in terms of finance and investing, Deutsche Bank sees markets “on edge” and hoping for a soft landing amid bubble fears. In part, it blames tech CEOs for egging on the market with overpromises that have inflated hopes and dreams. It also sees a major impact from venture capital, which has boosted startups’ valuations, and from the lawyers busy filing lawsuits involving all kinds of AI players. It’s ugly out there. But the market is actually “more sober” in many ways than it was in the late 1990s, the German bank argues.
Still, Wall Street is not Main Street, and Deutsche Bank notes troubling math about the data centers sprouting up on the outskirts of your town. Specifically, the bank flags a back-of-the-envelope analysis from hedge fund Praetorian Capital that suggests hyperscalers’ massive data center investments could be setting up the market for negative returns, echoing past cycles of “capital destruction.”
AI hype and market volatility
AI has captured the market’s imagination, with Cox and Abrudan noting, “it’s clear there is a lot of hype.” Web searches for AI are 10 times as high as they ever were for crypto, the bank said, citing Google Trends data, while it also finds that S&P 500 companies mentioned “AI” over 3,300 times in their earnings calls this past quarter.
Stock valuations overall have soared alongside the “Magnificent Seven” tech firms, which collectively comprise a third of the S&P 500’s market cap. (The most magnificent: Nvidia, now the world’s most valuable company at a market cap exceeding $4 trillion.) Yet Deutsche Bank points out that today’s top tech players have healthier balance sheets and more resilient business models than the high flyers of the dotcom era.
By most ratios, the bank said, valuations “still look more sober than those for hot stocks at the height of the dot-com bubble,” when the Nasdaq more than tripled in less than 18 months to March 2000, then lost 75% of its value by late 2002. By price-to-earnings ratio, Alphabet and Meta are in the mid-20x range, while Amazon and Microsoft trade in the mid-30x range. By comparison, Cisco surpassed 200x during the dotcom bubble, and even Microsoft reached 80x. Nvidia is “only” 50x, Deutsche Bank noted.
Those data centers, though
Despite the relative restraint in share prices, AI’s real risk may be lurking away from its stock-market valuations, in the economics of its infrastructure. Deutsche Bank cites a blog post by Praetorian Capital “that has been doing the rounds.” The post in “Kuppy’s Korner,” named for the fund’s CEO Harris “Kuppy” Kupperman, estimates that hyperscalers’ total data-center spending for 2025 could hit $400 billion, and the bank notes that is roughly the size of the GDP of Malaysia or Egypt. The problem, according to the hedge fund, is that the data centers will depreciate by roughly $40 billion per year, while they currently generate no more than $20 billion of annual revenue. How is that supposed to work?
“Now, remember, revenue today is running at $15 to $20 billion,” the blog post says, explaining that revenue needs to grow at least tenfold just to cover the depreciation. Even assuming future margins rise to 25%, the blog post estimates that the sector would require a stunning $160 billion in annual revenue from the AI powered by those data centers just to break even on depreciation—and nearly $480 billion to deliver a modest 20% return on invested capital. For context, even giants like Netflix and Microsoft Office 365 at their peaks brought in only a fraction of that figure. As the post puts it, “you’d need $480 billion of AI revenue to hit your target return … $480 billion is a LOT of revenue for guys like me who don’t even pay a monthly fee today for the product.” The implication is that going from $20 billion to $480 billion could take a long time, if it happens at all, and somewhere before the big AI platforms reach those levels, their earnings, and presumably their shares, could take a hit.
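The arithmetic behind those thresholds is straightforward to reproduce. A minimal sketch using only the figures quoted above (the 25% margin and 20% return target are the blog post’s assumptions, and the implied 10-year asset life follows from the $400 billion spend and $40 billion annual depreciation):

```python
# Reproduce Praetorian Capital's back-of-the-envelope data-center math.

CAPEX_BN = 400        # estimated 2025 hyperscaler data-center spend, $bn
DEPRECIATION_BN = 40  # annual depreciation, $bn (implies ~10-year asset life)
MARGIN = 0.25         # assumed future operating margin
TARGET_ROIC = 0.20    # target return on invested capital

# Revenue needed just to cover depreciation: margin * revenue = depreciation
breakeven_revenue = DEPRECIATION_BN / MARGIN            # -> 160.0 ($160bn)

# Revenue needed to also earn 20% on the $400bn invested:
# margin * revenue = depreciation + target_roic * capex
target_revenue = (DEPRECIATION_BN + TARGET_ROIC * CAPEX_BN) / MARGIN  # -> 480.0

print(f"Break-even revenue: ${breakeven_revenue:.0f}bn")
print(f"Revenue for 20% ROIC: ${target_revenue:.0f}bn")
```

Both outputs match the article’s figures: $160 billion to break even on depreciation and $480 billion to hit the 20% return target.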
Deutsche Bank itself isn’t as pessimistic. The bank notes that the data-center buildout is producing a greatly reduced cost for each use of an AI model, as startups are reaching “meaningful scale in cloud consumption.” Also, consumer AI such as ChatGPT and Gemini is growing fast, with OpenAI saying in August that ChatGPT had over 700 million weekly users, plus 5 million paying business users, up from 3 million three months earlier. The cost to query an AI model (subsidized by the venture capital sector, to be sure) has fallen by around 99.7% in the two years since the launch of ChatGPT and is still headed downward.

Echoes of prior bubbles
Praetorian Capital draws two historical parallels to the current situation: the dotcom era’s fiber buildout, which led to the bankruptcy of Global Crossing, and the more recent capital-destruction cycle in shale oil. In each case, the underlying technology was real and transformative—but overzealous spending with little regard for returns could leave investors holding the bag if progress stalls.
The “arms race” mentality now gripping the hyperscalers’ massive capex buildout mirrors the capital intensity of those past crises, and as Praetorian notes, “even the MAG7 will not be immune” if shareholder patience runs out. Per Kuppy’s Korner, “the megacap tech names are forced to lever up to keep buying chips, after having outrun their own cash flows; or they give up on the arms race, writing off the past few years of capex … Like many things in finance, it’s all pretty obvious where this will end up, it’s the timing that’s the hard part.”
This cycle, Deutsche Bank argues, is being sustained by robust earnings and more conservative valuations than the dotcom era, but “periodic corrections are welcome, releasing some steam from the system and guarding against complacency.” If revenue growth fails to keep up with depreciation and replacement needs, investors may force a harsh reckoning—one characterized not by spectacular innovation but by a slow realization of negative returns.