
Business

Trump threatens 35% tariffs on Canadian goods

US President Donald Trump has said he will slap a 35% tariff on Canadian goods starting 1 August, even as the two countries are days away from a self-imposed deadline to reach a new deal on trade.

The missive came as Trump also threatened blanket tariffs of 15% or 20% on most trade partners, and said he would soon notify the European Union of a new tariff rate on its goods.

Trump announced the latest levies on Canada on Thursday in a letter posted to social media and addressed to Prime Minister Mark Carney.

The US has already imposed a blanket 25% tariff on some Canadian goods, and the country is feeling the pain of the Trump administration’s global steel, aluminium and auto tariffs.

The letter is among more than 20 that Trump has posted this week to US trade partners, including Japan, South Korea and Sri Lanka.

As in Canada’s letter, Trump has vowed to implement those tariffs on other trade partners by 1 August.

The US’s blanket 25% tariff on Canadian imports currently exempts goods that comply with the North American free trade agreement.

It is unclear whether the latest tariff threat would apply to goods covered by the Canada-United States-Mexico Agreement (CUSMA).

Trump has also imposed a global 50% tariff on aluminium and steel imports, and a 25% tariff on all cars and trucks not built in the US.

He also recently announced a 50% tariff on copper imports, scheduled to take effect next month.

Canada sells about three-quarters of its goods to the US, and is an auto manufacturing hub and a major supplier of metals, making the US tariffs especially damaging to those sectors.

Trump’s letter said the 35% tariffs are separate from those sector-specific levies.

“As you are aware, there will be no tariff if Canada, or companies within your country, decide to build or manufacture products within the United States,” Trump stated.

He also tied the tariffs to what he called “Canada’s failure” to stop the flow of fentanyl into the US, as well as Canada’s existing levies on US dairy farmers and the trade deficit between the two countries.

“If Canada works with me to stop the flow of Fentanyl, we will, perhaps, consider an adjustment to this letter. These Tariffs may be modified, upward or downward, depending on our relationship with Your Country,” Trump said.

President Trump has accused Canada – alongside Mexico – of allowing “vast numbers of people to come in and fentanyl to come in” to the US.

According to data from US Customs and Border Protection, only about 0.2% of fentanyl seizures entering the US are made at the Canadian border; almost all the rest is confiscated at the US border with Mexico.

In response to Trump’s complaints, Canada announced more funding for border security and appointed a fentanyl czar earlier this year.

Canada has been engaged in intense talks with the US in recent months to reach a new trade and security deal.

At the G7 Summit in June, Prime Minister Carney and Trump said they were committed to reaching a new deal within 30 days, setting a deadline of 21 July.

Trump threatened in the letter to increase levies on Canada if it retaliated. Canada has already imposed counter-tariffs on the US, and has vowed more if the two countries fail to reach a deal by the deadline.

In late June, Carney removed a tax on big US technology firms after Trump labelled it a “blatant attack” and threatened to call off trade talks.

Carney said the tax was dropped as “part of a bigger negotiation” on trade between the two countries.

The Prime Minister’s office told the BBC it had no immediate comment on Trump’s letter.




Business

National Trust to cut 550 jobs after Budget pushes up costs

The National Trust has announced plans to cut 6% of its current workforce, about 550 jobs, partly blaming an inflated pay bill and tax rises introduced by Chancellor Rachel Reeves.

The heritage and conservation charity said it was under “sustained cost pressures beyond our control”.

These include the increase in National Insurance contributions by employers and the National Living Wage rise from April, which the National Trust said had driven up wage costs by more than £10m a year.

The cost-cutting measures are part of a plan to find £26m worth of savings.

“Although demand and support for our work are growing with yearly increases in visitors and donations, increasing costs are outstripping this growth,” the charity said in a statement.

“Pay is the biggest part of our costs, and the recent employer’s National Insurance increase and National Living Wage rise added more than £10m to our annual wage bill.”

A 45-day consultation period with staff began on Thursday and the Trust – which currently has about 9,500 employees – said it was working with the Prospect union “to minimise compulsory redundancies”.

Prospect said though cost pressures were partly to blame, “management decisions” also contributed to the Trust’s financial woes.

The union’s deputy general secretary, Steve Thomas, said “once again it is our members who will have to pay the price”.

“Our members are custodians of the country’s cultural, historic and natural heritage – cuts of this scale risk losing institutional knowledge and skills which are vital to that mission,” he said.

The Trust is running a voluntary redundancy scheme, and is expecting that to significantly reduce compulsory redundancies, a spokeswoman said.

The job cuts will affect all staff from management down, and everyone whose job is at risk will be offered a suitable alternative where available, the spokeswoman added.

Following consultations, which will finish in mid-to-late August, the cuts will be made in the autumn.

Chancellor Rachel Reeves announced the rise in National Insurance contributions by employers in last October’s Budget.

But the move led to strong criticism from many firms, with retailers warning that High Street job losses would be “inevitable” when coupled with other cost increases.

The hike in employer NICs is forecast to raise £25bn in revenues by the end of the parliament.




Business

GPTBots.ai’s Business AI Agent Solutions at The MarTech Summit Hong Kong




As enterprises worldwide race to adopt AI, GPTBots.ai made its mark at The MarTech Summit Hong Kong, Asia’s premier marketing technology conference attended by world-renowned brands such as JPMorgan, Yahoo, Nike, and DBS, alongside leading Hong Kong enterprises including Cathay Pacific, Hong Kong Disneyland, and The Hong Kong Jockey Club.

With 85% of enterprises prioritizing AI adoption in 2024 (Gartner), yet struggling with implementation gaps, GPTBots.ai demonstrated how its no-code AI Agent platform turns complex AI concepts into deployable solutions—without coding or data science teams.

Spotlight: Real-World AI in Action
At the summit, GPTBots.ai engaged with forward-thinking organizations, including:
A Top Hong Kong University: Their admissions team explored AI-powered chatbots to streamline student inquiries and application processes, aiming to:

  • Automate 80% of FAQs (e.g., program requirements, deadlines).
  • Guide applicants through form-filling with smart error detection.
  • Free staff to focus on in-depth student support.

A Leading Hong Kong Airline: Their tech team discussed AI Agents for internal efficiency, including:

  • AI search to make enterprise knowledge instantly accessible and empower every role.
  • Reducing IT helpdesk tickets by 50% via self-service troubleshooting.

Why GPTBots.ai Stood Out

  • Enterprise-Ready: Built to adapt to your business, no matter the size or complexity.
  • Proven at Scale: Powers AI Agents for financial services, healthcare, and retail giants.
  • End-to-End Capabilities: From strategy to deployment, we manage every step of your AI journey.

“The gap isn’t AI potential—it’s practical adoption,” said Tanya Quan, Marketing Director at GPTBots.ai. “We’re helping enterprises skip the lab and go straight to ROI.”





Business

Misinformation was a problem during the Texas floods. AI chatbots weren’t always helping



When deadly flash floods hit central Texas last week, people on social media site X turned to artificial intelligence chatbot Grok for answers about whom to blame.

Grok confidently pointed to President Trump.

“Yes. Trump’s 2025 cuts slashed NOAA/NWS funding by ~30% and staff by 17%, impairing forecast accuracy. Warnings underestimated rainfall, contributing to inadequate alerts for the flash floods that killed 24 and left ~24 Camp Mystic girls missing, presumed dead in ongoing searches,” Grok said in a post on Saturday before the death toll rose to at least 120.

Facing backlash from X users who accused it of jumping to conclusions and being “woke,” the chatbot then backtracked.

“That screenshot is a fabrication—I never posted it. Actual facts: Trump’s 2025 NOAA cuts reduced funding ~30% and staff 17%, raising concerns (Axios, Fox). Texas floods killed 43+, with warnings issued hours ahead (CNN), but rainfall underestimated. No evidence directly ties cuts to deaths yet; investigation ongoing. Truth matters,” Grok said in another post.

The contradictory remarks show how AI chatbots can sometimes offer straightforward but inaccurate answers, adding confusion to online chatter already filled with falsehoods and conspiracy theories.

Later in the week, Grok had more problems. The chatbot posted antisemitic remarks and praised Adolf Hitler, prompting xAI to remove the offensive posts. Company owner Elon Musk said on X that the chatbot was “too eager to please and be manipulated,” an issue that would be addressed.

Grok isn’t the only chatbot that has made inappropriate and inaccurate statements. Last year, Google’s chatbot Gemini created images showing people of color in German military uniforms from World War II, depictions that were not historically accurate. The search giant paused Gemini’s ability to generate images of people, noting that it resulted in some “inaccuracies.” OpenAI’s ChatGPT has also generated fake court cases, resulting in lawyers being fined.

The trouble chatbots sometimes have with the truth is a growing concern as more people are using them to find information, ask questions about current events and help debunk misinformation. Roughly 7% of Americans use AI chatbots and interfaces for news each week. That number is higher — around 15% — for people under 25 years old, according to a June report from the Reuters Institute. Grok is available on a mobile app but people can also ask the AI chatbot questions on social media site X, formerly Twitter.

As the popularity of these AI-powered tools increases, misinformation experts say people should be wary about what chatbots say.

“It’s not an arbiter of truth. It’s just a prediction algorithm. For some things like this question about who’s to blame for Texas floods, that’s a complex question and there’s a lot of subjective judgment,” said Darren Linvill, a professor and co-director of the Watt Family Innovation Center Media Forensics Hub at Clemson University.

Republicans and Democrats have debated whether job cuts in the federal government contributed to the tragedy.

Chatbots retrieve information available online and give answers even if they aren’t correct, he said. If the data they’re trained on is incomplete or biased, the AI model can provide responses that make no sense or are false, in what are known as “hallucinations.”

NewsGuard, which conducts a monthly audit of 11 generative AI tools, found that 40% of the chatbots’ responses in June included false information or a non-response, some in connection with breaking news such as the Israel-Iran war and the shooting of two lawmakers in Minnesota.

“AI systems can become unintentional amplifiers of false information when reliable data is drowned out by repetition and virality, especially during fast-moving events when false claims spread widely,” the report said.

During the immigration sweeps conducted by U.S. Immigration and Customs Enforcement in Los Angeles last month, Grok incorrectly fact-checked posts.

After California Gov. Gavin Newsom, politicians and others shared a photo of National Guard members sleeping on the floor of a federal building in Los Angeles, Grok falsely said the images were from Afghanistan in 2021.

The phrasing or timing of a question might yield different answers from various chatbots.

When Grok’s biggest competitor, ChatGPT, was asked on Wednesday a yes-or-no question about whether Trump’s staffing cuts led to the deaths in the Texas floods, the AI chatbot gave a different answer. “no — that claim doesn’t hold up under scrutiny,” ChatGPT responded, citing posts from PolitiFact and the Associated Press.

While all types of AI can hallucinate, some misinformation experts said they are more concerned about Grok, a chatbot created by Musk’s AI company xAI. The chatbot is available on X, where people ask questions about breaking news events.

“Grok is the most disturbing one to me, because so much of its knowledge base was built on tweets,” said Alex Mahadevan, director of MediaWise, Poynter’s digital media literacy project. “And it is controlled and admittedly manipulated by someone who, in the past, has spread misinformation and conspiracy theories.”

In May, Grok started repeating claims of “white genocide” in South Africa, a conspiracy theory that Musk and Trump have amplified. The AI company behind Grok then posted that an “unauthorized modification” was made to the chatbot that directed it to provide a specific response on a political topic.

xAI, which also owns X, didn’t respond to a request for comment. The company released a new version of Grok this week, which Musk said will also be integrated into Tesla vehicles.

Chatbots are usually correct when they fact-check. Grok has debunked false claims about the Texas floods, including a conspiracy theory that the flooding was caused by cloud seeding (a process that involves introducing particles into clouds to increase precipitation) carried out by El Segundo-based Rainmaker Technology Corp.

Experts say AI chatbots have the potential to help reduce people’s beliefs in conspiracy theories, but they might also reinforce what people want to hear.

While people want to save time by reading AI-generated summaries, they should ask chatbots to cite their sources and click on the links they provide to verify the accuracy of their responses, misinformation experts said.

And it’s important for people to not treat chatbots “as some sort of God in the machine, to understand that it’s just a technology like any other,” Linvill said.

“After that, it’s about teaching the next generation a whole new set of media literacy skills.”


