Business

US and China end ‘constructive’ trade talks without breakthrough

The US and China have wrapped up another round of trade talks without any major breakthroughs, despite discussions that both sides described as “constructive”.

The negotiations, held in Stockholm, Sweden, came as a truce established in May is set to expire next month, threatening to revive the turmoil of April, when the two countries exchanged escalating tit-for-tat tariffs.

US Treasury Secretary Scott Bessent said any extension of that truce, in which both sides agreed to drop some measures, would be up to President Donald Trump.

China’s trade negotiator Li Chenggang said that both sides would push to preserve that agreement.

Beijing and Washington have been at loggerheads on a range of issues as well as tariffs, including US demands that China’s ByteDance sell TikTok and that China speed up its export of critical minerals.

Trump started hiking tariffs on Chinese goods shortly after his return to the White House. China ultimately responded with tariffs of its own. Tensions escalated, with tariff rates hitting the triple digits, before a trade truce in May.

That left Chinese goods facing an additional 30% tariff compared with the start of the year, with US goods facing a new 10% tariff in China.

Without the truce being extended by the 12 August deadline, tariffs could “boomerang” back up, US officials said.

“Nothing is agreed until we speak with President Trump,” Bessent said, while downplaying the risks of escalation.

“Just to tamp down that rhetoric, the meetings were very constructive. We just haven’t given the sign off,” he said.

This was the third meeting between the US and China since April.

Negotiators for the two sides said they discussed each other's economies, the implementation of terms previously agreed by Trump and Chinese President Xi Jinping, and rare earths, a key sticking point because of their importance in new technology including electric vehicles.

The US also pressed China on its dealings with Russia and Iran.

Li Chenggang said both sides were “fully aware of the importance of safeguarding a stable and sound China-US trade and economic relationship”.

Bessent said he felt the US had momentum, after recent agreements that Trump has secured with Japan and the European Union.

“I believe they were in more of a mood for wide-ranging discussion,” he said.

President Trump has long complained about the trade deficit with China, which last year saw the US buy $295bn more goods from China than the other way round.

US Trade Representative Jamieson Greer said the US was already on track to reduce that gap by $50bn this year.

But Bessent said the US was not looking to completely “de-couple”.

“We just need to de-risk with certain strategic industries, whether it’s the rare earths, semiconductors, medicines,” he said at a briefing for reporters after the conclusion of the talks.




The Toy Business Forum 2026 Spotlights AI, Retail, and Global Toy Trends

The Toy Business Forum serves as a trend barometer for the global toy industry. It returns to Hall 3A next year, from Jan. 27-31, 2026. The program brings together international speakers, panels, live podcasts, and exhibitor showcases, offering professionals insights into the latest trends shaping the market.

This year’s sessions focus on artificial intelligence, retail strategies, marketing, sustainability, toy trends, and the kidult segment. Formats include inspiring presentations, panel discussions, and the “Exhibitors on Stage” series. Visitors can also connect during the midday “Networking Break” at outdoor food trucks, with live music adding to the atmosphere.

Highlights include the Toy Pitch competition and a dedicated Press Day on Tuesday morning, providing media with early access to new product reveals. A new “Value of Play Conference” debuts ahead of the fair, exploring the significance of play across industries and disciplines, followed by the annual ToyAward presentation. Later in the week, a fireside chat between Spielwarenmesse eG Executive Board Spokesperson Christian Ulrich and Hape Founder and CEO Peter Handstein provides firsthand insights into the global toy business. On Thursday, the Model Car Hall of Fame takes the stage to announce the Class of 2025.

Confirmed speakers include Reyne Rice of ToyTrends, who will address emerging technologies; Dennis Book of Thalia, who will discuss combining books and toys; Marilyn Repp of The Community Building Company, presenting on community building for brick-and-mortar retail; and Jasmin Karatas of RAW-R Agency and Carol Rapp of Spiel Essen, who will host a live podcast titled Kidults vs. Nerds. Additional presentations include Sabine van Almsick of ECC Next with a session blending AI, play, and pop culture.

The full Toy Business Forum program will be published in mid-November on the Spielwarenmesse website. In the meantime, highlights from last year's presentations are available to stream on Spielwarenmesse Digital.




There’s a Looming AI Data Shortage. Google Researchers Have a New Fix.

Google DeepMind researchers have an idea for how to solve the AI data drought, and it might involve your Social Security number.

The large language models powering AI require vast amounts of training data pulled from webpages, books, and other sources. When it comes to text specifically, the amount of data on the web considered fair game for training AI models is being scraped faster than new data is being created.

However, a large portion of that data goes unused because it is deemed toxic or inaccurate, or because it contains personally identifiable information.

In a newly published paper, a group of Google DeepMind researchers claim to have found a way to clean up this data and make it usable for training, which they say could be a "powerful tool" for scaling up frontier models.

They refer to the idea as Generative Data Refinement, or GDR. The method uses pretrained generative models to rewrite the unusable data, effectively purifying it so it can be safely trained on. It’s not clear if this is a technique Google is using for its Gemini models.

Minqi Jiang, one of the paper's researchers, who has since left the company for Meta, told Business Insider that many AI labs are leaving usable training data on the table because it is intermingled with bad data. For example, if a document on the web contains something considered unusable, such as someone's phone number or an incorrect fact, labs will often discard the entire thing.

"So you essentially lose all those tokens inside of that document, even if it was a small single line that contained some personally identifying information," said Jiang. Tokens are the units of data processed by AI models; in text, they correspond to words or pieces of words.

The authors give an example of raw data that included someone’s Social Security number or information that may soon be out of date (“the incoming CEO is…”). In these instances, the GDR would swap or remove the numbers, ignore the information that risks becoming obsolete, and retain the remainder of usable data.
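The paper itself does not include published code, but the line-level "refine, don't discard" idea can be sketched. In the snippet below, simple regex redaction stands in for the generative rewriter the researchers actually describe; the patterns, function name, and redaction placeholders are all illustrative assumptions, not the GDR implementation.

```python
import re

# Hypothetical stand-in for GDR: the real method uses a pretrained
# generative model to rewrite flagged spans; here, regex redaction
# illustrates keeping a document's usable lines instead of discarding
# the whole document over one bad span.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def refine_document(doc: str) -> str:
    """Rewrite only the offending spans, preserving the remaining tokens."""
    refined = []
    for line in doc.splitlines():
        line = SSN.sub("[REDACTED-SSN]", line)
        line = PHONE.sub("[REDACTED-PHONE]", line)
        refined.append(line)
    return "\n".join(refined)

doc = "Alice joined in 2019.\nHer SSN is 123-45-6789.\nShe works on compilers."
print(refine_document(doc))
```

Under this sketch, only the one line containing the identifier is altered; the other lines, which a whole-document filter would have thrown away, survive for training.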

The paper was written more than a year ago and was only published this month. A Google DeepMind spokesperson did not respond to a request for comment about whether the researchers' work was being applied to the company's AI models.

The authors’ findings could prove helpful for labs as the usable well of data runs dry. They cite a research paper from 2022 that predicted AI models could soak up all the human-generated text between 2026 and 2032. This prediction was based upon the amount of indexed web data, using statistics from Common Crawl, a project that continuously scrapes web pages and makes them openly available for AI labs to use.

For the GDR paper, the researchers performed a proof of concept by taking over one million lines of code and having human expert labelers annotate the data line by line. They then compared the results with the GDR method.

“It completely crushes the existing industry solutions being used for this kind of stuff,” said Jiang.

The authors also said their method is better than using synthetic data (data generated by AI models for the purpose of training themselves or other models), which has been a topic of exploration among AI labs. Synthetic data, however, can degrade the quality of model output and, in some cases, lead to "model collapse."

The authors compared the GDR data against synthetic data created by an LLM and discovered that their approach created a better dataset for training AI models.

They also said further testing could be conducted on other complicated types of data considered a no-go, such as copyrighted materials and personal data that is inferred across multiple documents rather than explicitly spelled out.

The paper has not been peer reviewed, said Jiang, adding that this is common in the tech industry and that all papers are reviewed internally.

The researchers only tested GDR on text and code. Jiang said it could also be tested on other modalities, such as video and audio, although new videos are still being created quickly enough to provide a firehose of data for AI to train on.

“With video, you’re just going to have a lot more of it, just because there’s a constant stream of millions of hours of video generated each day,” said Jiang. “So I do think, going across new modalities beyond text, video, and images, we’re going to unlock a lot more data.”







Otto Aerospace develops AI model for Phantom 3500 business jet aerodynamics


Otto Aerospace (formerly Otto Aviation, Fort Worth, Texas, U.S.) has announced the development of a proprietary aerodynamic AI model designed to optimize and accelerate the configuration of next-generation laminar flow airfoils and ultra-efficient sustainable aircraft. The AI model is trained on extensive computational fluid dynamics (CFD) simulations and wind-tunnel test data. This new AI capability enables Otto to explore the aircraft design space for optimal configurations within a day, a process that previously required months or years.

The AI model will operate on Luminary Cloud’s (San Mateo, Calif., U.S.) GPU-accelerated Physics AI platform, enabling detailed aerodynamic analysis of current and future Otto aircraft configurations. Recognized for its SHIFT family of pre-trained physics AI models, including SHIFT-Wing for aerodynamic analysis of transonic wings, Luminary provides advanced tools to support fast and accurate design evaluations.

“Our Phantom 3500 program has generated extensive high-fidelity simulation and wind-tunnel test data,” says Obi K. Ndu, Ph.D., chief information and digital officer at Otto Aerospace. “At Otto, we believe that the future of aircraft design is at the intersection of AI and first principles. Luminary’s platform gives us the computational power and infrastructure to quickly train an AI model optimized for next-generation laminar flow aircraft and our design approach.”

The Phantom 3500’s laminar-flow fuselage and airfoil demand precise aerodynamic modeling for ultra-low-drag and long-range efficiency. Using Luminary’s accelerated cloud computing capabilities, Otto will significantly expedite parametric design exploration compared to traditional CFD simulation workflows, fast-tracking the current and future development of aircraft designed to burn up to 60% less fuel and achieve up to 90% lower emissions when operating on sustainable aviation fuel.
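Otto's model and training data are proprietary, but the general pattern the release describes, fitting a cheap surrogate to expensive simulation outputs and then sweeping the design space in seconds, can be sketched. Everything below is an illustrative assumption: the synthetic "CFD" data uses a textbook quadratic drag polar, not Otto's aerodynamics.

```python
import numpy as np

# Illustrative surrogate workflow: pretend these are (design parameter,
# drag) pairs from expensive CFD runs. The quadratic drag relation is a
# textbook stand-in, not Otto's proprietary model.
rng = np.random.default_rng(0)
alpha = np.linspace(-2.0, 8.0, 20)          # angle of attack, degrees
cd = 0.02 + 0.001 * (alpha - 2.0) ** 2      # synthetic "CFD" drag values
cd += rng.normal(0.0, 1e-4, alpha.shape)    # simulation noise

# Fit a cheap quadratic surrogate to the sampled CFD points.
coeffs = np.polyfit(alpha, cd, deg=2)
surrogate = np.poly1d(coeffs)

# Sweep the design space densely at negligible cost and pick the optimum,
# instead of running a CFD job per candidate configuration.
sweep = np.linspace(-2.0, 8.0, 1001)
best = sweep[np.argmin(surrogate(sweep))]
print(f"minimum-drag angle of attack = {best:.2f} deg")
```

The payoff is the same one Otto claims at much larger scale: once the surrogate is trained, each candidate configuration costs a function evaluation rather than a CFD run, which is what turns a months-long exploration into a day-long one.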




