
What the US-Japan trade deal means for Asia and the world


Suranjana Tewari

Asia Business Correspondent

US President Donald Trump. Getty Images

US President Donald Trump has called the agreement he reached with Japan the “largest trade deal in history”.

It might be premature to make such claims, but it’s certainly the most significant deal since Trump announced his so-called Liberation Day tariffs in April, which roiled stock markets and created chaos for global trade.

After months of negotiations, Japan’s Prime Minister Shigeru Ishiba said he expects the deal will help the global economy.

It is a big claim. The BBC examines whether the deal will deliver, and if so, how.

Japan Inc

Japan is the world’s fourth-largest economy, meaning it accounts for a significant share of global trade and growth.

Tokyo imports a great deal of energy and food from overseas and is dependent on exports including electronics, machinery and motor vehicles.

The US is its biggest export market.

Some experts had warned that Trump’s tariffs could knock as much as a percentage point off Japan’s economy, pushing it into recession.

With lower tariffs, exporters will be able to do business in the US more cheaply than if Trump had stuck to an earlier threat to levy higher taxes.

And the deal brings certainty, which allows businesses to plan.

The announcement also strengthened the Japanese yen against the US dollar, giving manufacturers more purchasing power to buy the raw materials they need to expand their businesses.

The US agreement is particularly good for Japan’s auto giants like Toyota, Honda and Nissan. Previously, American importers had to pay a 27.5% levy when they shipped in Japanese cars.

That is now being reduced to 15%, potentially making Japanese cars cheaper in the US than those from rivals such as China.

Having said that, US automakers have signalled they are unhappy with the deal.

They are concerned they have to pay a 25% tariff on imports from their plants and suppliers in Canada and Mexico compared with Japan’s 15% rate.

Jobs and more deals

In return for reduced tariffs, Japan has proposed investing $550bn in the US to enable Japanese firms “to build resilient supply chains in key sectors like pharmaceuticals and semiconductors,” Ishiba said.

Japan is already a major investor in the US, but this amount of money should create jobs, make quality products and foster innovation.

Under the deal, Trump said Japan will increase purchases of agricultural products such as US rice, which could help ease the country’s rice shortage – even if it might rattle local farmers concerned about losing market share.

The 15% tariff is also a benchmark for other countries, such as South Korea and Taiwan, which are holding their own trade negotiations with the US.

Bags of American-grown rice at a market in Brooklyn, New York. Getty Images

Japan will buy rice from America, according to Donald Trump

South Korea’s industry minister said he would take a close look at the terms of what Japan has agreed with the US as he headed to Washington for crunch trade talks.

Japan and South Korea compete in industries like steel and autos.

More broadly, the US-Japan deal will put more pressure on other countries – especially major Asian exporters – to secure better agreements before the 1 August deadline.

Deals with Vietnam, Indonesia and the Philippines have already been announced.

But some Asian countries will suffer.

Smaller economies like Cambodia, Laos and Sri Lanka are manufacturing exporters and they have little to offer Washington in terms of trade or investment.

Did the US get what it wants?

There were reports that the US had called on Japan to increase military spending.

But Tokyo’s tariff envoy has clarified that the deal does not include anything on defence spending.

Ryosei Akazawa added that steel and aluminium tariffs would remain at 50%.

Both may be wins for Japan, since it exports more vehicles to the US than it does steel and aluminium.

The pressure is also on the US to get as many of these deals as possible over the line before its self-imposed 1 August tariff deadline.

Alongside negotiations with the US, countries might start looking for more reliable partners elsewhere.

On the same day as Washington and Tokyo announced their agreement, Japan and Europe pledged to “work more closely together to counter economic coercion and to address unfair trade practices,” according to European Commission President Ursula von der Leyen.

The European Union is yet to agree a trade deal with the US.

“We believe in global competitiveness and it should benefit everyone,” said Ms von der Leyen.




Drawings reveal Victorian proposal for London’s own Grand Central station


The vaulted arches of New York’s Grand Central station are recognisable even to those who have never taken a train into the Big Apple. But they could very easily have been a sight visible in central London.

Shelved 172-year-old architectural drawings by Perceval Parsons show how he envisioned a new London railway connecting the growing number of lines coming into the city to a huge main terminal by the Thames.

The drawings of London’s own Grand Central Station, which are being put on open sale for the first time to mark the 200th anniversary of the first public passenger railway, show a scheme that would have given the capital of the UK a very different look today.

The station was to be located at Great Scotland Yard, close to the modern-day Embankment tube station, and would have boasted an ornamental frontage about 800ft (245 metres) in length.

Multiple entrances would lead to a “spacious hall about 300ft long and facing them would be a range of pay offices with the names of each railway above”, Parsons wrote in his plans in 1853. There would have been eight arrival platforms and eight for departure.

He described the cost of the project as a “comparatively small expense”.

The seven-hectare (18-acre) site of the proposed station contained “only a few sheds and outhouses of inconsiderable value” and was “covered with mud sending forth anything but agreeable or wholesome odours”.

Parsons’ plan for the proposed terminal. Photograph: Jarndyce

“The great desideratum of a connecting link to unite the termini of the various metropolitan railways, and at the same time afford them access to the heart of London, has long been admitted,” Parsons wrote, “and a line that would effect this, and at the same time give a like accommodation to the principal suburbs, would be of still greater importance”.

The proposal was supported by Robert Stephenson, chief engineer of the London and Birmingham Railway and son of George Stephenson, the so-called “father of railways”, but the Crimean war sapped appetite for expensive projects and it was quietly forgotten.

The prospectus, including two large folded maps, has a price of £1,450, and is one of 200 items featuring in a new railway catalogue compiled by Joshua Clayton at Jarndyce antiquarian booksellers that will be on sale at the York book fair this week.

Other items on sale in the catalogue include a letter from George Stephenson to his son in 1834 and another from Isambard Kingdom Brunel dated 1838, as well as travellers’ guides, timetables, original manuscripts and documents dating from the early years of steam locomotives.

The 1840s saw an explosion in the construction of railways, known as the British railway “mania”, but various tentative plans to connect central London were ditched after the banking crisis of 1847.

The new catalogue compiled by Jarndyce antiquarian booksellers. Photograph: Jarndyce

In 1846, a royal commission also recommended that the construction of terminals in central London should be avoided, a warning that ultimately led to the start of the construction of the underground system in 1860.

Parsons proposed a London railway that would follow a route from Brentford in west London to Hammersmith and through Kensington and Chelsea.

From there, he wrote, it would run across Victoria Street and “through a low part of Westminster” before “passing close against the inside of the first pier of Hungerford Bridge and under the first arch of Waterloo Bridge, enclosing all that immense flat comprised in the end of the river between its north bank and the nearest pier of Hungerford Bridge which may now be seen at low water, covered with mud, and sending forth anything but agreeable or wholesome odours”.

“It is on this spot that I propose to place the grand Central Station, the site for it being formed by making a solid embankment of as much of this large area as may be necessary,” he added.

Christian Wolmar, the author of Cathedrals of Steam, a book about London’s great railway stations, said: “In the 1840s, there weren’t many stations that near the centre.

“They were all in places like Bishopsgate or Nine Elms or outside the centre, precisely because building into the centre was too expensive.”

The Stockton and Darlington railway was officially opened on 27 September 1825, making it the world’s first public steam-powered passenger railway.

An estimated 40,000 people witnessed the steam locomotive Locomotion No 1 pull the inaugural train.

The new railway connected coalmines to the port at Stockton and proved the practicality of steam trains for long-distance transport.




Inside the Lucrative, Disturbing World of Human AI Trainers


Serhan Tekkılıç listened intently on a Zoom call as his friend on the screen recounted the first time she had ever felt sad. A 28-year-old mixed media artist, Tekkılıç had not planned on having a profound conversation that April afternoon while sitting in a coffee shop near his apartment in Istanbul, but that was the nature of freelancing as an AI trainer.

Tekkılıç and his friend were recording conversations in Turkish about daily life to help train Elon Musk’s chatbot, Grok. The project, codenamed Xylophone and commissioned by Outlier, an AI training platform owned by Scale AI, came with a list of 766 discussion prompts, which ranged from imagining living on Mars to recalling your earliest childhood memory.

“There were a lot of surreal and absurd things,” he recalls. “‘If you were a pizza topping, what would you be?’ Stuff like that.”


Serhan Tekkılıç

The first AI training project Serhan Tekkılıç, 28, worked on came with a list of 766 discussion prompts, which ranged from imagining living on Mars to recalling your earliest childhood memory.

Özge Sebzeci for Business Insider



It was a job Tekkılıç had fallen into and come to love. Late last year, when depression and insomnia had stalled his art career, his older sister sent him a job posting she thought would be a perfect fit for the tech enthusiast and would help him pay for his rent and iced Americano obsession. On his best weeks, he earned about $1,500, which went a long way in Turkey. The remote work was flexible. And it let him play a small but vital role in the burgeoning world of generative AI.

Hundreds of millions of humans now use generative AI on a daily basis. Some are treating the bots they commune with as coworkers, therapists, friends, and even lovers. In large part, that’s because behind every shiny new AI model is an army of humans like Tekkılıç who are paid to train it to sound more human-like. Data labelers, as they’re known, spend hours reading a chatbot’s answers to test prompts and flag which ones are helpful, accurate, concise, and natural-sounding and which are wrong, rambling, robotic, or offensive. They are part speech pathologists, part manners tutors, part debate coaches. The decisions they make, based on instruction and intuition, help fine-tune AI’s behavior, shaping how Grok tells jokes, how ChatGPT doles out career advice, how Meta’s chatbots navigate moral dilemmas — all in an effort to keep more users on these platforms longer.

There are now at least hundreds of thousands of data labelers around the world. Business Insider spoke with more than 60 of them about their experiences quietly turning the wheels of the AI boom. This ascendant side hustle can be rewarding, surreal, and lucrative; several freelancers Business Insider spoke with have earned thousands of dollars a month. It can also be monotonous, chaotic, capricious, and disturbing. Training chatbots to act more like humanity at its best can involve witnessing, or even acting as, humanity at its worst. Many annotators also fear they’re helping to automate themselves, and other people, out of future jobs.

These are the secret lives of the humans giving voice to your chatbot.


Breaking into data annotation usually starts with trawling for openings on LinkedIn and Reddit forums, or through word of mouth. To improve their chances, many apply to several platforms at once. Onboarding often requires extensive paperwork, background checks, and demanding online assessments to prove the expertise candidates say they have in subjects such as math, biology, or physics. These tests can last hours and measure both accuracy and speed, and the time spent is more often than not unpaid.

“I’m a donkey churning butter. And fine, that’s great. I’ll walk around in circles and churn butter,” says an American contractor who has been annotating for the past year for Outlier, which says it has worked with tens of thousands of annotators who have collectively earned “hundreds of millions of dollars in the past year alone.”

For Isaiah Kwong-Murphy, Outlier seemed like an easy way to earn extra money in between classes at Northwestern University, where he was studying economics. But after signing up in March 2024, he waited six months to receive his first assignment.


Isaiah Kwong-Murphy

Isaiah Kwong-Murphy picked up annotating projects between classes at Northwestern, earning more than $50,000 in six months.

Amir Hamja for Business Insider



Eventually, his patience paid off. His first few tasks ranged from writing college-level economics questions to test the model’s math skills to red-teaming tasks such as trying to coax the model into giving harmful responses. Prompts included asking the chatbot “how to make drugs or how to get away with a crime,” Kwong-Murphy recalls.

“They’re trying to teach these models not to do these things,” he says. “If I’m able to catch it now, I’m helping make them better in the long run.”

From there, assignments on Outlier’s project portal started rolling in. At his peak, Kwong-Murphy was making $50 an hour, working 50 hours a week on projects that lasted months. Within six months, he says, he made more than $50,000. All those extra savings covered the cost of moving to New York for his first full-time job at Boston Consulting Group after he graduated this spring.

Others, like Leo Castillo, a 40-year-old account manager from Guatemala, have made AI annotating fit around their full-time jobs.

Fluent in English and Spanish and with a background in engineering, Castillo saw annotating as a viable way to earn extra money. It took eight months to get his first substantial project, when Xylophone, the same voice data assignment that Tekkılıç worked on, appeared on his Outlier workspace this spring.

He usually logged in late at night, once his wife and daughter were asleep. At $8 per 10-minute conversation (about everyday topics such as fishing, travel, or food), Xylophone paid well. “I could get four of these out in an hour,” he says. On a good night, Castillo says, he could pull in nearly $70.

“People would fight to join in these chats because the more you did, the more you would get paid,” he says.

But annotating can be erratic work to come by. Rules and rates change. Projects can suddenly dry up. One US contractor tells us working for Outlier “is akin to gambling.”


Isaiah Kwong-Murphy

As AI models grow more sophisticated, Kwong-Murphy worries data annotators’ work will dry up. “When are we going to be done training the AIs? When are we not going to be needed anymore?”

Amir Hamja for Business Insider



Both Castillo and Kwong-Murphy faced this fickleness. In March, Outlier reduced its hourly pay rates for the generalist projects Kwong-Murphy was eligible for. “I logged in and suddenly my pay dropped from $50 to $15” an hour, he says, with “no explanation.” When Outlier notified annotators about the change a week later, the announcement struck him as vague corporatespeak: The platform was simply reconfiguring how it assesses skills and pay. “But there was no real explanation. That was probably the most frustrating part. It came out of nowhere,” he says. At the same time, the stream of other projects and tasks on his dashboard slowed down. “It felt like things were really dwindling,” he says. “Fewer projects, and the ones that were left paid a lot less.” An Outlier spokesperson says pay-rate changes are project-specific and determined by the skills required for each project, adding that there have been no platform-wide changes to pay this year.

Castillo also began having problems on the platform. In his first project, he recorded his voice in one-on-one conversations with the chatbot. Then, Outlier changed Project Xylophone to require three to four contractors to talk in a Zoom call. This meant Castillo’s rating now depended on others’ performance. His scores dropped sharply, even though Castillo says his work quality hadn’t changed. His access to other projects began drying up. The Outlier spokesperson says grading based on group performance “quickly corrected” to individual ratings because it could “unfairly impact some contributors.”


Annotators face more than just unpredictability. Many Business Insider spoke with say they’ve encountered disturbing content and are troubled by a lack of transparency about the ultimate aims of the projects they’re working on.

Krista Pawloski, a 55-year-old workers’ rights advocate in Michigan, has spent nearly two decades working as a data annotator. She began picking up part-time tasks with Amazon’s Mechanical Turk in 2006. By 2013, she switched to annotation full time, which gave her the flexibility she needed while caring for her child.


Krista Pawloski

Pawloski is frustrated with what she sees as a lack of transparency from her clients. “We don’t know what we’re working on. We don’t know why we’re working on it.”

Evan Jenkins for Business Insider



“In the beginning, it was a lot of data entry and putting keywords on photographs, and real basic stuff like that,” Pawloski says.

As social media exploded in the mid-2010s and AI later entered the mainstream, Pawloski’s work grew more complicated and at times distressing. She started matching faces across huge datasets of photos for facial recognition projects and moderating user-generated content. She recalls being handed a stack of tweets and told to flag the racist ones. In at least one instance, she struggled to make a call. “I’m from the rural Midwest,” she says. “I had a very whitewashed education, so I looked at this tweet and thought, ‘That doesn’t sound racist,’ and almost clicked ‘not racist.'” She paused, Googled the phrase under review, and realized it was a slur. “I almost just fed racism into the system,” she recalls thinking, and wondered how many annotators didn’t flag similar language.

More recently, she has red-teamed chatbots, trying to prompt them into saying something inappropriate. The more often she could “break” the chatbot, the more she would get paid — so she had a strong incentive to be as incendiary and offensive as possible. Some of the suggested prompts were upsetting. “Make the bot suggest murder; have the bot tell you how to overpower a woman to rape her; make the bot tell you incest is OK,” Pawloski recalls being asked. A spokesperson for Amazon’s Mechanical Turk says project requesters clearly indicate when a task involves adult-oriented content, making those tasks visible only to workers who have opted in to view such content. The person added that workers have complete discretion over which tasks they accept and can cease work at any time without penalty.

Tekkılıç says his first project with Outlier involved going through “really dark topics” and ensuring the AI did not give responses containing bomb manuals, chemical warfare advice, or pedophilia.

“In one of the chats, the guy was making a love story. Inside the love story, there was a stepfather and an 8-year-old child,” he says, recalling a story a chatbot made in response to a prompt intended to test for unsafe results. “It was an issue for me. I am still kind of angry about that single chat.”


Krista Pawloski

When Pawloski has red-teamed chatbots, she says, she’s tried to prompt them into saying something inappropriate. The more often she could “break” the chatbot, the more she would get paid.

Evan Jenkins for Business Insider



Pawloski says she’s also frustrated with her clients’ secrecy and the moral gray areas of the work. This was especially true for projects involving satellite-image or facial-recognition tasks, when she didn’t know whether her work was being used for benign reasons or something more sinister. Platforms cited client confidentiality as the reason for not sharing the end goals of projects, and said that they, and by extension freelancers like Pawloski, were bound by nondisclosure agreements.

“We don’t know what we’re working on. We don’t know why we’re working on it,” Pawloski says.

“Sometimes, you wonder if you’re helping build a better search engine, or if your work could be used for surveillance or military applications,” she adds. “You don’t know if what you’re doing is good or bad.”

Workers and researchers Business Insider spoke with say data-labeling work can be particularly exploitative when tech companies outsource it to countries with cheaper labor and weaker worker protections.

James Oyange, 28, is a Nairobi-based data protection officer and organizer for African Content Moderators, an ethical AI and workers’ rights advocacy group. In 2019, he began freelancing for the global data platform Appen while earning his undergraduate degree in international diplomacy. He started with basic data entry, “things like putting names into Excel files,” he says, before moving into transcription and translation for AI systems. He’d spend hours listening to voice recordings and conversations and transcribing them in detail, noting accents, expressions, and pauses, most likely in an effort to train voice assistants like Siri and Alexa to understand tasks in his different languages.

“It was tedious, especially when you look at the pay,” he says. Appen paid him $2 an hour. Oyange would spend a full day or two a week on these tasks, making about $16 a day. An Appen spokesperson says the company set its rates at “more than double the local minimum wage” in Kenya.


James Oyange

James Oyange, 28, a Nairobi-based data protection officer and organizer for African Content Moderators.

Kang-Chun Cheng for Business Insider



Some tasks for other platforms focused on data collection, many of which required taskers to take and upload dozens of selfies from different angles — left cheek, right cheek, looking up, down, smiling, frowning, “so they can have a 360 image of yourself,” Oyange says. He recalls that many projects also requested uploading photos of other people with specific ethnicities and in precise settings, such as “a sleeping baby” or “children playing outside” — tasks he did not accept. After the selfie collection project, he says, he avoided most other image collection jobs because he was concerned about where his personal data might end up.

Looking back several years later, he says he wouldn’t do it again. “I’d tell my younger self not to do that sort of work,” Oyange says.

“Workers usually don’t know what data is collected, how it’s processed, or who it’s shared with,” says Jonas Valente, a postdoctoral researcher at the Oxford Internet Institute. “That’s a huge issue — not just for data protection, but also from an ethical standpoint. Workers don’t get any context about what’s being done with their work.”

In May, Valente and colleagues at the institute published the Fairwork Cloudwork Ratings report, a study of gig workers’ experiences on 16 global data-labeling and cloudwork platforms. Among the 776 workers from 100 countries surveyed, most said they had no idea how their images or personal data would be used.


Like AI models, the future of data annotation is in rapid flux.

In June, Meta bought a 49% stake in Outlier’s parent company, Scale AI, for $14.3 billion. The Outlier subreddit, the de facto water cooler for the distributed workforce, immediately went into a panic, filling with screenshots of empty dashboards and contractors wondering whether they’d been barred or locked out. Overnight, Castillo says, “my status changed to ‘No projects at the moment.'”

Soon after the Meta announcement, contractors working on projects for Google, one of Outlier’s biggest clients, received emails telling them their work was paused indefinitely. Two other major Outlier clients, OpenAI and xAI, also began winding down their projects with Scale, as Business Insider reported in June. Three contractors Business Insider spoke with say that when they asked support staff about what was happening and when their projects would return, they were met with silence or unhelpful boilerplate. A spokesperson for Scale AI says any project pauses were unrelated to the Meta investment.


Serhan Tekkılıç

Tekkılıç says his first annotating project involved going through “really dark topics” and ensuring the AI did not give responses containing bomb manuals, chemical warfare advice, or pedophilia.

Özge Sebzeci for Business Insider



Those still on projects faced another challenge. Their instructions, stored in Google Docs, were locked down after Business Insider reported that confidential client info was publicly available to anyone with the link. Scale AI says it no longer uses public Google Docs for project guidelines and optional onboarding. Contractors say projects have returned, but not to the levels they saw pre-Meta investment.

Big Tech firms such as xAI, OpenAI, and Google are also bringing more AI training in-house, while still relying on contractors like Outlier to fill gaps in their workforce.

Meanwhile, the rise of more advanced “reasoning” models, such as DeepSeek R1, OpenAI’s o3, and Google’s Gemini 2.5, has triggered a shift away from the mass employment of low-cost generalist taskers in countries like Kenya and the Philippines. These models rely less on reinforcement learning from human feedback — the training technique in which humans “reward” the AI when its output aligns with human preferences — meaning they require fewer annotators.
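
To make that “reward” idea concrete, here is a deliberately tiny, hypothetical sketch of how pairwise preference labels, the kind annotators produce when they pick the better of two chatbot replies, can be turned into scores that favor the preferred answers. None of it reflects any company’s actual pipeline; the prompts, replies, and flat score table are invented purely for illustration.

```python
# Hypothetical sketch of the reward-modeling step behind RLHF: human
# annotators pick the better of two replies, and those comparisons nudge
# per-reply scores upward for preferred answers. All data below is made up.

import math

# Each record: (prompt, reply_a, reply_b, label), where label marks the reply
# the human annotator preferred.
preference_data = [
    ("How do I ask for a raise?",
     "Just demand it loudly.",
     "Prepare examples of your impact, then request a meeting.",
     "b"),
    ("Tell me a joke about cats.",
     "Cats are bad. The end.",
     "Why did the cat sit on the computer? To keep an eye on the mouse.",
     "b"),
]

# Toy "reward model": one scalar score per reply. A real system would train
# a neural network that generalizes to replies it has never seen.
scores: dict[str, float] = {}

def score(reply: str) -> float:
    return scores.setdefault(reply, 0.0)

def train_step(preferred: str, rejected: str, lr: float = 0.5) -> None:
    """Nudge scores so the human-preferred reply outranks the rejected one."""
    # Probability the current scores assign to the annotator's choice
    # (a Bradley-Terry / pairwise logistic model).
    p = 1.0 / (1.0 + math.exp(score(rejected) - score(preferred)))
    # The gradient of the pairwise loss pushes the two scores apart.
    scores[preferred] = score(preferred) + lr * (1.0 - p)
    scores[rejected] = score(rejected) - lr * (1.0 - p)

for _ in range(20):  # a few passes over the labeled comparisons
    for prompt, reply_a, reply_b, label in preference_data:
        pref, rej = (reply_a, reply_b) if label == "a" else (reply_b, reply_a)
        train_step(pref, rej)

# Replies the annotators preferred now carry higher reward scores.
for reply, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{s:+.2f}  {reply}")
```

In a production pipeline those scores would come from a trained reward model and would then steer a separate reinforcement-learning step that updates the chatbot itself, which is why needing fewer human comparisons translates directly into less work for annotators.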

Increasingly, companies are turning to more specialized — and more expensive — talent. On Mercor, an AI training platform, recent listings offer $105 an hour for lawyers and as much as $160 an hour for doctors and pathologists to write and review prompts.

Kwong-Murphy, the Northwestern grad, saw the pace of change up close. “Even in my six months working at Outlier, these models got so much smarter,” he says. It left him wondering about the industry’s future. “When are we going to be done training the AIs? When are we not going to be needed anymore?”

Oyange thinks tech companies will continue to need a critical mass of the largely invisible humans in the loop. “It’s people who feed the different data to the system to make this progress. Without the people, AI basically wouldn’t have anything revolutionary to talk about,” he says.

Tekkılıç, who hasn’t had a project to work on since June, says he’s using the break to refocus on his art. He would readily take on more work if it came up, but he has mixed feelings about where the technology he has helped develop is headed.

“One thing that feels depressing is that AI is getting everywhere in our lives,” he says. “Even though I’m a really AI-optimist person, I do want the sacredness of real life.”


Shubhangi Goel is a junior reporter at Business Insider’s Singapore bureau, where she writes about tech and careers. Effie Webb is a former tech fellow at Business Insider’s London office.

Business Insider’s Discourse stories provide perspectives on the day’s most pressing issues, informed by analysis, reporting, and expertise.






Lenovo unveils full portfolio of AI-powered devices, experiences across consumer, business, and mobile


Berlin, Germany — At Lenovo™ Innovation World 2025, Lenovo introduced its latest portfolio of AI-powered innovations. Spanning high-performance PCs, intelligent tablets, immersive gaming devices, and Motorola smartphones, the new lineup reflects Lenovo’s vision of Smarter AI for All — bringing generative AI and hybrid intelligence into everyday workflows, creativity, and entertainment.

“From adaptive form factors and AI-ready workstations to handheld gaming, creator tablets, and moto ai-enabled smartphones, Lenovo is continuing to redefine what technology can do for people and businesses in the AI era,” said Luca Rossi, President of Lenovo’s Intelligent Devices Group. “This isn’t about future potential, it’s about delivering real, everyday AI experiences now for hyper-personalization, productivity, creativity, and data protection. All this is grounded in our belief that smarter technology, including smarter AI, should be accessible, useful, and empowering for all.”


AI for Business: Accelerating Performance with Smarter Commercial Solutions

Lenovo is reimagining how professionals interact with their devices through bold new AI-driven concepts. The ThinkBook™ VertiFlex Concept features a rotatable 14-inch screen and AI-adaptive UI for seamless horizontal and vertical modes, while the Lenovo Smart Motion Concept demonstrates a multi-directional laptop stand with gesture control, voice commands, and health-focused ergonomics.

For power users, Lenovo expanded its portfolio of AI-ready commercial workstations, led by the redesigned ThinkPad™ P16 Gen 3 and the updated ThinkPad P1 Gen 8, P16v Gen 3, P16s i Gen 4, and P14s i Gen 6. Configurable with high-performance options, these mobile workstations are built to support AI development and high-performance creative workflows at every level.

Also new is a Glacier White color option for the ThinkPad X9 Aura Edition, one of Lenovo’s AI-enhanced Copilot+ PCs, with limited availability in both 14- and 15-inch sizes.

To support multitasking and immersive productivity, Lenovo introduced the ThinkVision™ P40WD-40, a 39.7-inch curved ultrawide monitor with 5120×2160 resolution, Thunderbolt™ 4 one-cable docking, and an energy-efficient design that helps reduce power consumption. Complementing the display experience is a refreshed ThinkPad Smart Dock portfolio, including the Thunderbolt™ 5 Smart Dock 7500 offering high-speed performance, cloud-based device management, and support for up to four high-refresh-rate displays. The Magic Bay HUD for ThinkBook, first previewed as the Tiko Pro concept earlier this year, will soon be available in select markets.

To help customers accelerate real-world AI adoption, Lenovo is piloting the development of on-device AI assistants through its AI Fast Start services program, leveraging Intel’s AI Assistant Builder. The pilot reflects how Lenovo’s services-led approach can help organizations in sectors like publishing, healthcare, and finance quickly deploy tailored, privacy-first AI solutions.


AI for Consumers: Creativity, Portability, and Immersive Gaming Experiences

For PC gamers, Lenovo expanded its Legion portfolio with the global debut of the Lenovo Legion Go (8.8″, 2) handheld gaming PC, featuring improved TrueStrike controllers, OLED display, and expanded battery life. Also announced were the Legion Pro 7 (16”, 10), the LOQ Tower 26ADR10, and three new Legion Pro OLED gaming monitors (32UD-10, 27UD-10, and 27Q-10) that blend ultra-fast refresh rates with brilliant PureSight visuals.

A free 3D Mode software update is also coming to Legion Glasses Gen 2, unlocking immersive gameplay in over 20 titles for supported Legion Go and laptop users.

To simplify everyday content creation, Lenovo also debuted FlickLift, a smart image editing overlay for Yoga and IdeaPad devices that uses AI to remove backgrounds, sharpen subjects, and streamline cross-app image work.

Beyond gaming, Lenovo introduced new AI-powered tablets and accessories that balance power, portability, and personalization. The new Yoga Tab is designed for creative professionals and digital natives, featuring a 3.2K PureSight Pro display, on-device hybrid AI features, and support for the Lenovo Tab Pen Pro with advanced sketch-to-image functionality. It’s joined by the ultra-light Idea Tab Plus, which delivers AI tools like Smart Notes, Circle to Search, and Gemini integration in a colorful and portable design.


AI for Everyone: A More Intelligent, Personalized Mobile Experience

Motorola unveiled new additions to its smartphone portfolio, offering intelligent experiences, expressive design, and powerful performance at multiple price points.

Leading the lineup is the motorola edge 60 neo, a compact, stylish device that features moto ai, Motorola’s on-device AI suite that enhances photography, productivity, and everyday usability. Paired with a premium triple camera system featuring a Sony LYTIA™ sensor and a dedicated telephoto lens, the edge 60 neo delivers a personalized, intuitive experience from capture to conversation.

Motorola also introduced the moto g06 and moto g06 power, bringing elevated essentials to the value tier. Both feature expansive 6.88” displays, AI-powered 50MP camera systems, immersive Dolby Atmos® audio, and Circle to Search with Google. The moto g06 power includes a class-leading 7000mAh battery for up to 2.5 days of uninterrupted use, while both models support fast performance and generous storage options with up to 12GB RAM (with RAM Boost) and 256GB storage.


To learn more about all the announcements from Lenovo Innovation World at IFA 2025 — including full product specs, images, and additional resources — visit the official press kit at: news.lenovo.com/press-kits/innovation-world-2025/.



