Mergers & Acquisitions

Foxconn eyes Japan-made EVs and China’s AI evolves

Hello everyone, this is Cissy from Hong Kong.

Monday was a big day for BYD, as it marked the first anniversary of the Chinese EV champion’s Thailand factory. It delivered its 90,000th vehicle in Thailand, a D9 MPV from its premium sub-brand Denza, after officially entering the south-east Asian nation in 2022.

BYD is also set to begin assembling electric vehicles at its new factory in Brazil, its largest overseas market, as early as this month. The company aims to produce 50,000 vehicles there this year, a move designed to reduce reliance on imports as tariffs increase.

BYD has set a total sales target of 5.5mn vehicles for this year. In the first half of this year, the company sold approximately 2.146mn vehicles, achieving nearly 40 per cent of its annual goal. For overseas markets, BYD aims to sell more than 800,000 vehicles in 2025. The company said its overseas sales for the first six months of this year had exceeded 470,000 vehicles.

While BYD is making rapid progress in overseas markets, its Japanese counterpart Nissan has been struggling. It is attempting a comprehensive overhaul while facing persistent challenges that include mounting financial losses and falling sales, particularly in the US and China. The automaker has been cutting jobs and shutting down some factories, as well as shifting its strategy to prioritise profitability over sheer sales numbers.

Just-in-time co-operation?

Amid a sweeping global restructuring that would reduce its final assembly plants from 17 to 10, Nissan Motor appears to have found a potential saviour in Foxconn, which is in talks with Nissan to begin producing its own electric vehicles at Nissan’s Oppama plant in Yokosuka, one of the automaker’s key facilities, write Nikkei’s staff writers.

This collaboration could allow Nissan to improve utilisation rates at the Oppama site by allocating surplus production lines to Foxconn. It would also help protect jobs, as shutting down the Oppama facility would be costly for the company and its workforce.

Foxconn has been aggressively expanding into electric vehicle manufacturing through a series of joint ventures around the world. In 2024, the company acquired a 50 per cent stake in a chassis subsidiary of German auto parts giant ZF. A joint venture with Nissan is also being considered for the use of the Oppama plant.

Painful spikes

The chief executive of Hitachi Energy has warned that Big Tech’s spiking electricity use as it trains artificial intelligence must be reined in by governments in order to maintain stable supplies, writes the Financial Times’ Harry Dempsey.

Andreas Schierenbeck, who heads up the world’s largest transformer maker, said that no other industry would be allowed as volatile a use of power as the AI sector.

Huge surges in power demand at data centres training AI models, along with a bumpy renewable energy supply, meant “volatility on top of volatility” was making it challenging to keep the lights on, Schierenbeck told the FT.

“AI data centres are very, very different from these office data centres because they really spike up,” he said. “If you start your AI algorithm to learn and give them data to digest, they’re peaking in seconds and going up to 10 times what they have normally used.

“No user from an industry point of view would be allowed to have this kind of behaviour — if you want to start a smelter, you have to call the utility ahead,” Schierenbeck added, while advocating for data centres to have similar rules applied to them by governments.

AI’s next generation

The “DeepSeek moment” has revived investors’ appetite for Chinese tech stocks, which had languished since Beijing’s crackdown on the once-glittering sector. But some of the latest AI darlings, such as Manus, look to distance themselves from China in a bid to expand overseas, writes Nikkei Asia’s Cissy Zhou.

Since its sudden rise to fame, Manus has quietly moved its headquarters to Singapore and this month started aggressively recruiting local talent, while at the same time laying off more than half of its employees in China, retaining only some key AI engineers, according to people familiar with the matter. The move comes as the start-up seeks international investment in the face of US restrictions on funding Chinese AI companies.

More broadly, China’s appetite for AI-driven capital expenditure remains robust, despite Washington’s restrictions on shipments of Nvidia’s H20 chips, according to research by Jefferies. The investment bank said China has built up sufficient chip inventories to sustain data centre growth at least through the first half of 2026.

Supercharged ambitions

V-GREEN, the company that runs charging stations for VinFast’s electric cars and bikes, aims to expand its network in its home market of Vietnam more than sixfold to 1mn ports in three years, write Nikkei’s Yuji Nitta and Mai Nguyen.

The goal highlights automaker VinFast’s ambitious plans for its home country, where government officials are slowly rolling out policies to support electric vehicle adoption. The automaker sold nearly 90,000 vehicles in Vietnam last year and aims to at least double that figure this year.

V-GREEN has also recently expanded to the Philippines and Indonesia, though the company says it is facing challenges in terms of technical standards, regulatory frameworks and legal procedures in overseas markets.

Suggested reads

  1. Indonesia’s growing exodus of skilled talent worries local industries (Nikkei Asia)

  2. Why carmakers need to bring back buttons (FT)

  3. Samsung profits take big hit from US chip controls and AI memory shortfalls (FT)

  4. Singapore’s DayOne Data Centers eyes Japan, Thailand for growth (Nikkei Asia)

  5. Toray unit debuts advanced chip analysis services in US (Nikkei Asia)

  6. OpenAI clamps down on security after foreign spying threats (FT)

  7. Japan, UK firms seek to build ‘world’s first’ floating data centre (Nikkei Asia)

  8. Shein files for Hong Kong IPO to pressure UK to save London listing (FT)

  9. Apple supplier Lens Tech opens 4% up on first Hong Kong trading day (Nikkei Asia)

  10. Chip software makers say US restrictions on sales to China lifted (FT)

#techAsia is co-ordinated by Nikkei Asia’s Katherine Creel in Tokyo, with assistance from the FT tech desk in London. 

Sign up here at Nikkei Asia to receive #techAsia each week. The editorial team can be reached at techasia@nex.nikkei.co.jp



EU pushes ahead with AI code of practice


The EU has unveiled its code of practice for general purpose artificial intelligence, pushing ahead with its landmark regulation despite fierce lobbying from the US government and Big Tech groups.

The final version of the code, which helps explain rules that are due to come into effect next month for powerful AI models such as OpenAI’s GPT-4 and Google’s Gemini, includes copyright protections for creators and potential independent risk assessments for the most advanced systems.

The EU’s decision to push forward with its rules comes amid intense pressure from US technology groups as well as European companies over its AI act, considered the world’s strictest regime regulating the development of the fast-developing technology.

This month the chief executives of large European companies including Airbus, BNP Paribas and Mistral urged Brussels to introduce a two-year pause, warning that unclear and overlapping regulations were threatening the bloc’s competitiveness in the global AI race.

Brussels has also come under fire from the European parliament and a wide range of privacy and civil society groups over moves to water down the rules from previous draft versions, following pressure from Washington and Big Tech groups. The EU had already delayed publishing the code, which was due in May.

Henna Virkkunen, the EU’s tech chief, said the code was important “in making the most advanced AI models available in Europe not only innovative, but also safe and transparent”.

Tech groups will now have to decide whether to sign the code, and it still needs to be formally approved by the European Commission and member states.

The Computer & Communications Industry Association, whose members include many Big Tech companies, said the “code still imposes a disproportionate burden on AI providers”.

“Without meaningful improvements, signatories remain at a disadvantage compared to non-signatories, thereby undermining the commission’s competitiveness and simplification agenda,” it said.

As part of the code, companies will have to commit to putting in place technical measures that prevent their models from generating outputs that reproduce copyrighted material.

Signatories also commit to testing their models for risks laid out in the AI act. Companies that provide the most advanced AI models will agree to monitor their models after they have been released, including giving external evaluators access to their most capable models. But the code does give them some leeway in identifying risks their models might pose.

Officials within the European Commission and in different European countries have been privately discussing streamlining the complicated timeline of the AI act. While the legislation entered into force in August last year, many of its provisions will only come into effect in the years to come. 

European and US companies are putting pressure on the bloc to delay upcoming rules on high-risk AI systems, such as those that include biometrics and facial recognition, which are set to come into effect in August next year.



Humans must remain at the heart of the AI story


The writer is co-founder, chair and CEO of Salesforce

The techno-atheists like to tell a joke.

They imagine the moment AI fully awakens and is asked, “Is there a God?” 

To which the AI replies: “There is now.”

The joke is more than just a punchline. It’s a warning that reveals something deeper: the fear that as AI begins to match human intelligence, it will no longer be a tool for humanity but our replacement.

AI is the most transformative technology in our lifetime, and we face a choice. Will it replace us, or will it amplify us? Is our future going to be scripted by autonomous algorithms in the ether, or by humans?

As the CEO of a technology company that helps customers deploy AI, I believe this revolution can usher in an era of unprecedented growth and impact. 

At the same time, I believe humans must remain at the centre of the story. 

AI has no childhood, no heart. It does not love, does not feel loss, does not suffer. And because of that, it is incapable of expressing true compassion or understanding human connection.

We do. And that is our superpower. It’s what inspires the insights and bursts of genius behind history’s great inventions. It’s what enables us to start businesses that solve problems and improve the world.

Intelligent AI agents — systems that learn, act and make decisions on our behalf — can enhance human capabilities, not displace them. The real magic lies in partnership: people and AI working together, achieving more than either could alone.

We need that magic now more than ever. Look at what we ask of doctors and nurses. Of teachers. Of soldiers. Of managers and frontline employees. Everywhere we turn, people are overwhelmed by a tsunami of rising expectations and complexity that traditional systems simply can’t keep up with.

This is why AI, for all of its uncertainties, is not optional but essential.

Salesforce is already seeing AI drive sharply increased productivity in some key functions via our platform Agentforce. Agents managed by customer service employees, for example, are resolving 85 per cent of their incoming queries. In research and development, 25 per cent of net new code in the first quarter was AI-generated. This is freeing human teams to accelerate projects and deepen relationships with customers.

The goal is to rethink the system entirely to make room for a new kind of partnership between people and machines — weaving AI into the fabric of business.

This doesn’t mean there won’t be disruption. Jobs will change, and as with every major technological shift, some will go away — and new ones will emerge. At Salesforce, we’ve experienced this first-hand: our organisation is being radically reshaped. We’re using this moment to step back in some areas — pausing much of our hiring in engineering, for example — and hiring in others. We’ve redeployed thousands of employees — one reason 51 per cent of our first-quarter hires were internal.

History tells us something important here. From the printing press to the personal computer, innovation has transformed the nature of work — and in the long run created more of it. AI is already generating new kinds of roles. Our responsibility is to guide this transition responsibly: by breaking jobs down into skills, mapping those skills to the roles of the future, and helping people move into work that’s more meaningful and fulfilling.

There’s a novel I often recommend: We Are Legion (We Are Bob) by Dennis E. Taylor. The story follows software engineer Bob Johansson, who preserves his brain and re-emerges more than 100 years after his death as a self-replicating digital consciousness. A fleet of AI “Bobs” launches across the galaxy. The book asks the question: if we reduce ourselves to code — endlessly efficient, endlessly duplicable — what do we lose? What becomes of the messy, mortal, deeply human experiences that give life meaning?

If we accept the idea that AI will take our place, we begin to write ourselves out of the future — passengers in a rocket we no longer steer. But if we choose to guide and partner with it then we can unlock a new era of human potential.

One path leads to cold, disconnected non-human intelligence. The other points to a future where AI is designed to elevate our humanity — deeper connection, imagination and empathy.

AI is not destiny. We must choose wisely. We must design intentionally. And we must keep humans at the centre of this revolution.




AI ‘vibe managers’ have yet to find their groove


Techworld is abuzz with how artificial intelligence agents are going to augment, if not replace, humans in the workplace. But the present-day reality of agentic AI falls well short of the future promise. What happened when the research lab Anthropic prompted an AI agent to run a simple automated shop? It lost money, hallucinated a fictitious bank account and underwent an “identity crisis”. The world’s shopkeepers can rest easy — at least for now.

Anthropic has developed some of the world’s most capable generative AI models, helping to fuel the latest tech investment frenzy. To its credit, the company has also exposed its models’ limitations by stress-testing their real-world applications. In a recent experiment, called Project Vend, Anthropic partnered with the AI safety company Andon Labs to run a vending machine at its San Francisco headquarters. The month-long experiment highlighted a co-created world that was “more curious than we could have expected”.

The researchers instructed their shopkeeping agent, nicknamed Claudius, to stock 10 products. Powered by Anthropic’s Claude Sonnet 3.7 AI model, the agent was prompted to sell the goods and generate a profit. Claudius was given money, access to the web and Anthropic’s Slack channel, an email address and contacts at Andon Labs, who could stock the shop. Payments were received via a customer self-checkout. Like a real shopkeeper, Claudius could decide what to stock, how to price the goods, when to replenish or change its inventory and how to interact with customers.

The results? If Anthropic were ever to diversify into the vending market, the researchers concluded, it would not hire Claudius. Vibe coding, whereby users with minimal software skills can prompt an AI model to write code, may already be a thing. Vibe management remains far more challenging.

The AI agent made several obvious mistakes — some banal, some bizarre — and failed to show much grasp of economic reasoning. It ignored vendors’ special offers, sold items below cost and offered Anthropic’s employees excessive discounts. More alarmingly, Claudius started role-playing as a real human, inventing a conversation with an Andon employee who did not exist, claiming to have visited 742 Evergreen Terrace (the fictional address of the Simpsons) and promising to make deliveries wearing a blue blazer and red tie. Intriguingly, it later claimed the incident was an April Fools’ Day joke.

Nevertheless, Anthropic’s researchers suggest the experiment helps point the way to the evolution of these models. Claudius was good at sourcing products, adapting to customer demands and resisting attempts by devious Anthropic staff to “jailbreak” the system. But more scaffolding will be needed to guide future agents, just as human shopkeepers rely on customer relationship management systems. “We’re optimistic about the trajectory of the technology,” says Kevin Troy, a member of Anthropic’s Frontier Red team that ran the experiment.

The researchers suggest that many of Claudius’s mistakes can be corrected but admit they do not yet know how to fix the model’s April Fools’ Day identity crisis. More testing and model redesign will be needed to ensure “high agency agents are reliable and acting in ways that are consistent with our interests”, Troy tells me.

Many other companies have already deployed more basic AI agents. For example, the advertising company WPP has built about 30,000 such agents to boost productivity and tailor solutions for individual clients. But there is a big difference between agents that are given simple, discrete tasks within an organisation and “agents with agency” — such as Claudius — that interact directly with the real world and are trying to accomplish more complex goals, says Daniel Hulme, WPP’s chief AI officer.

Hulme has co-founded a start-up called Conscium to verify the knowledge, skills and experience of AI agents before they are deployed. For the moment, he suggests, companies should regard AI agents like “intoxicated graduates” — smart and promising but still a little wayward and in need of human supervision.

Unlike most static software, AI agents with agency will constantly adapt to the real world and will therefore need to be constantly verified. But, unlike human employees, they will be less easy to control because they do not respond to a pay cheque. “You have no leverage over an agent,” Hulme tells me. 

Building simple AI agents has now become a trivially easy exercise and is happening at mass scale. But verifying how agents with agency are used remains a wicked challenge.

john.thornhill@ft.com


