By Choonsik Yoo (August 5, 2025, 08:11 GMT | Insight) — President Donald Trump’s AI policy lead, Michael Kratsios, promoted the US AI Action Plan at an APEC event in South Korea today, emphasizing AI exports as a tool for diplomacy and innovation. While avoiding direct mention of China, he contrasted the US approach with Europe’s regulation-heavy stance and urged APEC members to partner with the US on AI development.

President Donald Trump is “unapologetically America First,” but the US holds a commanding lead in artificial intelligence innovation and knows each country wants what is best for its citizens, making the US AI packages the right choice to make, a senior White House AI official said at an APEC event today….
AI Research
‘Take our offered handshake’ and get US AI exports, White House policy lead tells APEC | MLex
AI Research
Nvidia says ‘We never deprive American customers in order to serve the rest of the world’ — company says GAIN AI Act addresses a problem that doesn’t exist

The bill, proposed by U.S. senators earlier this week, aims to regulate shipments of AI GPUs to adversaries and to prioritize U.S. buyers, and it made quite a splash in America. In response, Nvidia issued a statement asserting that the U.S. was, is, and will remain its primary market, implying that no regulation is needed for the company to serve American customers.
“The U.S. has always been and will continue to be our largest market,” a statement sent to Tom’s Hardware reads. “We never deprive American customers in order to serve the rest of the world. In trying to solve a problem that does not exist, the proposed bill would restrict competition worldwide in any industry that uses mainstream computing chips. While it may have good intentions, this bill is just another variation of the AI Diffusion Rule and would have similar effects on American leadership and the U.S. economy.”
The new export rules would apply even to older AI GPUs (assuming they are still in production), such as Nvidia’s HGX H20 or L2 PCIe, which still meet the performance thresholds defined by the Biden administration. Although Nvidia has said that H20 shipments to China do not interfere with the domestic supply of H100, H200, or Blackwell chips, the new legislation would formalize such limits on future transactions.
AI Research
OpenAI Projects $115 Billion Cash Burn by 2029

OpenAI has sharply raised its projected cash burn through 2029 to $115 billion, according to The Information. This marks an $80 billion increase from previous estimates, as the company ramps up spending to fuel the AI behind its ChatGPT chatbot.
The company, which has become one of the world’s biggest renters of cloud servers, projects it will burn more than $8 billion this year, about $1.5 billion higher than its earlier forecast. The surge in spending comes as OpenAI seeks to maintain its lead in the rapidly growing artificial intelligence market.
To control these soaring costs, OpenAI plans to develop its own data center server chips and facilities to power its technology.
The company is partnering with U.S. semiconductor giant Broadcom to produce its first AI chip, which will be used internally rather than made available to customers, as reported by The Information.
In addition to this initiative, OpenAI has expanded its partnership with Oracle, committing to a 4.5-gigawatt data center capacity to support its growing operations.
This is part of OpenAI’s larger plan, the Stargate initiative, which includes a $500 billion investment and is also supported by Japan’s SoftBank Group. Google Cloud has also joined the group of suppliers supporting OpenAI’s infrastructure.
OpenAI’s projected cash burn is expected to more than double in 2026, reaching over $17 billion, and to keep rising, with estimates of $35 billion in 2027 and $45 billion in 2028, according to The Information.
AI Research
PromptLocker scared ESET, but it was an experiment

The PromptLocker malware, which was considered the world’s first ransomware created using artificial intelligence, turned out to be not a real attack at all, but a research project at New York University.
On August 26, ESET announced that it had detected the first sample of ransomware with integrated artificial intelligence, which it called PromptLocker. However, that turned out not to be the case: researchers at New York University’s Tandon School of Engineering were responsible for the code.
The university explained that PromptLocker is actually part of an experiment called Ransomware 3.0, conducted by a team at the Tandon School of Engineering. A representative of the school told the publication that a sample of the experimental code had been uploaded to the VirusTotal malware-analysis platform, where ESET specialists discovered it and mistook it for a real threat.
According to ESET, the program used Lua scripts generated from strictly defined instructions. These scripts allowed the malware to scan the file system, analyze file contents, exfiltrate selected data, and perform encryption. The sample implemented no destructive capabilities, a logical choice given that it was a controlled experiment.
Nevertheless, the code did function. New York University confirmed that its AI-based simulation system was able to complete all four classic stages of a ransomware attack: mapping the system, identifying valuable files, stealing or encrypting data, and generating a ransom note. Moreover, it did so across various types of systems, from personal computers and corporate servers to industrial controllers.
Should you be concerned? Yes, but with an important caveat: there is a big difference between an academic proof of concept and a real attack carried out by malicious actors. Still, such research can serve as a blueprint for cybercriminals, since it demonstrates not only the approach but also the real cost of implementing it.
New York University’s researchers noted that the economics of the experiment are particularly striking. Traditional ransomware campaigns require experienced teams, custom code, and significant infrastructure investment. The entire Ransomware 3.0 attack, by contrast, consumed about 23,000 AI tokens, roughly $0.70 at commercial API rates for flagship models.
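The cost arithmetic behind that figure is simple to sketch. The ~23,000-token count comes from the NYU researchers; the per-million-token price below is an assumed illustrative rate for a commercial flagship-model API, not a quoted price from any provider.

```python
# Back-of-the-envelope cost of the Ransomware 3.0 experiment.
# TOKENS_USED is the figure reported by the NYU researchers;
# the price per million tokens is a hypothetical flagship-API rate.

TOKENS_USED = 23_000
ASSUMED_USD_PER_MILLION_TOKENS = 30.0  # hypothetical, for illustration

cost = TOKENS_USED / 1_000_000 * ASSUMED_USD_PER_MILLION_TOKENS
print(f"Estimated API cost: ${cost:.2f}")  # prints "Estimated API cost: $0.69"
```

At any plausible commercial rate, the total lands in the sub-dollar range the researchers describe, which is the point of the comparison with traditional campaign costs.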
Moreover, the researchers emphasized that open-source AI models eliminate even that expense, letting attackers operate at essentially no cost and giving them an investment-to-result ratio far beyond anything a legitimate AI venture could achieve.
For now, however, this remains a hypothetical scenario. The research is convincing, but it is too early to say that cybercriminals will integrate AI into their attacks at scale; it may take time before the cybersecurity industry can show in practice that artificial intelligence is driving a new wave of hacking.
The New York University research paper, titled “Ransomware 3.0: Self-Composing and LLM-Orchestrated,” has been released into the public domain.
Source: tomshardware