

Trump threatens tariffs on countries that ‘discriminate’ against US tech



Donald Trump has threatened to impose tariffs and export restrictions on countries whose taxes, legislation and regulations target US big tech companies such as Google, Meta, Amazon and Apple.

In a post on his social media platform, Truth Social, the US president said: “Digital taxes, legislation, rules or regulations are all designed to harm, or discriminate against, American technology.”

He said such measures – including the UK’s digital services tax, a 2% levy on revenues that raises about £800m a year from global tech companies – also “outrageously give a complete pass to China’s largest tech companies”.
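A rough back-of-envelope check, assuming the £800m yield and the 2% rate apply to the same revenue base: the levy implies in-scope UK digital revenues of roughly £800m ÷ 0.02 = £40bn a year.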

Trump said: “As the president of the United States, I will stand up to countries that attack our incredible American tech companies. Unless these discriminatory actions are removed, I, as president of the United States, will impose substantial additional tariffs on that country’s exports to the USA, and institute export restrictions on our highly protected technology and chips.”

The threat puts pressure on the UK and the EU, both of which recently struck trade agreements with the US. The EU limits the power of big tech companies through its Digital Services Act, and several member states, including France, Italy and Spain, have digital services taxes.

US officials have criticised the UK’s digital services tax (DST), which was introduced in 2020 and was kept in place after the trade deal with the Trump administration that was reached in May.

Trump has complained about the impact that DSTs around the world are having on US companies. In February he issued an executive order titled Defending American Companies and Innovators from Overseas Extortion and Unfair Fines and Penalties, threatening tariffs in retaliation.

In April it emerged that Keir Starmer had offered big US tech companies a reduction in the headline rate of the DST to placate Trump, while at the same time applying the levy to companies from other countries.

Trump said on Monday: “America, and American technology companies, are neither the ‘piggy bank’ nor the ‘doormat’ of the world any longer. Show respect to America and our amazing tech companies or consider the consequences.”

His warning comes a week after the US and the EU agreed in a joint statement that they would together “address unjustified trade barriers”. However, the EU said separately that it had not committed to alter any digital regulations.


In June, Canada scrapped its digital services tax, which Trump had described as a “direct and blatant” attack, in an effort to smooth trade negotiations with its neighbour.

Ed Davey, the leader of the Liberal Democrats, said the UK government should not kowtow to Trump’s “bullying” tactics.

“The prime minister must rule out giving in to Donald Trump’s bullying by watering down Britain’s digital services tax,” Davey said. “Tech tycoons like Elon Musk rake in millions off our online data and couldn’t care less about keeping kids safe online. The last thing they need is a tax break. The way to respond to Trump’s destructive trade war is to work with our allies to stand up to him.”

Quick Guide

Contact us about this story


The best public interest journalism relies on first-hand accounts from people in the know.

If you have something to share on this subject you can contact us confidentially using the following methods.

Secure Messaging in the Guardian app

The Guardian app has a tool to send tips about stories. Messages are end to end encrypted and concealed within the routine activity that every Guardian mobile app performs. This prevents an observer from knowing that you are communicating with us at all, let alone what is being said.

If you don’t already have the Guardian app, download it (iOS/Android) and go to the menu. Select ‘Secure Messaging’.

SecureDrop, instant messengers, email, telephone and post

If you can safely use the Tor network without being observed or monitored, you can send messages and documents to the Guardian via our SecureDrop platform.

Finally, our guide at theguardian.com/tips lists several ways to contact us securely, and discusses the pros and cons of each. 







Oracle Health Deploys AI to Tackle $200B Administrative Challenge



Oracle Health introduced tools aimed at easing administrative healthcare burdens and costs.

The company’s new artificial intelligence-powered offerings are designed to simplify and lower the cost of processes such as prior authorizations, medical coding, claims processing and determining eligibility, according to a Thursday (Sept. 11) press release.

“Oracle Health is working to solve long-standing problems in healthcare with AI-powered solutions that simplify transactions between payers and providers,” Seema Verma, executive vice president and general manager, Oracle Health and Life Sciences, said in the release. “Our offerings can help minimize administrative complexity and waste to improve accuracy and reduce costs for both parties. With these capabilities, providers can better navigate payer-specific coverage, medical necessity and billing rules while enabling payers to lower administrative workloads by receiving more accurate claims from the start.”

Annual administrative costs tied to healthcare billing and insurance are estimated at roughly $200 billion, the release said. That figure continues to rise, largely because of the complexity of medical and financial processing rules and evolving payment models. Because these rules and models are time-consuming for providers to follow and adopt, many fall back on manual processes, which are prone to error.

The PYMNTS Intelligence report “Healthcare Payments Need Modernization to Drive Financial Health” found that healthcare’s lingering reliance on manual payment systems is proving to be a bottleneck for its financial health and operational efficiency.

The worldwide market for healthcare digital payments is forecast to increase at a compound annual growth rate of 19% between 2024 and 2030, indicating a shift and market opportunity for digital solutions, per the report.
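For scale, a rough compounding check, assuming six full years of growth from 2024 to 2030: a 19% CAGR implies a growth factor of about (1.19)^6 ≈ 2.8, meaning the market would nearly triple over the period.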

The report also explored how these outdated systems strain revenues and create inefficiencies, contrasting the sector’s slower adoption with other industries that have embraced digital payment tools.

“On the patient side, the benefits are equally compelling,” PYMNTS wrote in June. “Digital transactions offer hassle-free experiences, which are a driver for patient satisfaction and, ultimately, patient retention.”

The research found that 67% of executives and decision-makers in healthcare payer organizations said that their firms’ manual payment platforms were actively hindering efficiency. In addition, 74% said these platforms put their organizations at greater risk for regulatory fines and penalties.





California Lawmakers Advance Suite of AI Bills



As the California Legislature’s 2025 session draws to a close, lawmakers have advanced over a dozen AI bills to the final stages of the legislative process, setting the stage for a potential showdown with Governor Gavin Newsom (D).  The AI bills, some of which have already passed both chambers, reflect recent trends in state AI regulation nationwide, including AI consumer protection frameworks, guardrails for the use of AI in employment and healthcare, frontier model safety requirements, and chatbot safeguards. 

AI Consumer Protection.  California lawmakers are advancing several bills that would impose disclosure, testing, documentation, and other governance requirements for AI systems used to make or assist in decisions that impact consumers.  Like 2024’s Colorado AI Act, California’s Automated Decisions Safety Act (AB 1018) would adopt a cross-sector approach, imposing duties and requirements on developers and deployers of “automated decision systems” (“ADS”) used to make or facilitate employment, education, housing, healthcare, or other “consequential decisions” affecting natural persons.  The bill would require ADS developers and deployers to conduct impact assessments and third-party audits and comply with various disclosure and documentation requirements, and would establish consumer notice, correction, and appeal rights. 

Employment and Healthcare.  SB 7 would establish worker notice, access, and correction rights, prohibited uses, and human oversight requirements for employers that use ADS for employment-related decisions.  Other bills would impose similar restrictions on AI used in healthcare contexts.  AB 489, which passed both chambers on September 8, would prohibit representations that indicate that an AI system possesses a healthcare license or can provide professional healthcare advice.

Frontier Model Safety.  Following the 2024 passage—and Governor Newsom’s subsequent veto—of the Safe & Secure Innovation for Frontier AI Models Act (SB 1047), State Senator Scott Wiener (D-San Francisco) has led a renewed push for frontier model safety with his Transparency in Frontier AI Act (SB 53).  SB 53 would require large developers of frontier models to implement and publish a “frontier AI framework” to mitigate potential public safety harms arising from frontier model development, in addition to transparency reports and incident reporting requirements.  Unlike SB 1047, SB 53 would not require developers to implement a “full shutdown” capability for frontier models, conduct third-party audits, or meet a duty of reasonable care to prevent public safety harms.  Moreover, while SB 1047 would have established civil penalties of up to 10 percent of the cost of computing power used to train any developer’s frontier model, SB 53 would establish a uniform penalty of up to $1 million per violation of any of its frontier AI transparency provisions and would only apply to developers with annual revenues above $500 million.  Although its likelihood of passage remains uncertain, SB 53 builds on several recent state efforts to establish frontier model safeguards, including the passage of the Responsible AI Safety & Education (“RAISE”) Act in New York in May and the release of a final report on frontier AI policy by California’s Frontier AI Working Group in June.

Chatbots.  Various other California bills would establish safeguards for individuals, and particularly children, who interact with AI chatbots or generative AI systems.  The Leading Ethical AI Development (“LEAD”) for Kids Act (AB 1064), which passed the Senate on September 10 and could receive a vote in the Assembly as soon as this week, would prohibit individuals or businesses from providing “companion chatbots”—generative AI systems that simulate sustained humanlike relationships through personalization, unprompted questions, and ongoing dialogue with users—to children if the companion chatbot is “foreseeably capable” of engaging in certain activities, including encouraging a child to engage in self-harm, violence, or illegal activity, offering unlicensed mental health therapy to a child, or prioritizing user validation and engagement over child safety, among other prohibited capabilities.  Another AI chatbot safety bill, SB 243, passed the Assembly on September 10 and awaits final passage in the Senate.  SB 243 would require companion chatbot operators to issue recurring disclosures to minor users, implement protocols to prevent the generation of content related to suicide or self-harm, and disclose companion chatbot protocols and other information to the state.

The bills above reflect only some of the AI legislation pending before California lawmakers ahead of their September 12 deadline for passage.  Other AI bills have already passed both chambers and now head to the Governor, including AB 316, which would prohibit AI developers or deployers from asserting that AI “autonomously” caused harm as a legal defense, and SB 524, which would establish restrictions on the use of AI by law enforcement agencies.  Governor Newsom will have until October 12 to sign or veto these and any other AI bills that reach his desk.





AI content needs to be labelled to protect us



Marcus Beard’s article on artificial intelligence slopaganda (No, that wasn’t Angela Rayner dancing and rapping: you’ll need to understand AI slopaganda, 9 September) highlights a growing problem – what happens when we no longer know what is true? What will the erosion of trust do to our society?

Deepfakes are proliferating at an ever faster rate because of the ease with which anyone can create realistic images, audio and even video. Generative AI models have now become so sophisticated that a recent survey showed that fewer than 1% of respondents could correctly identify the best deepfake images and videos.

This content is being used to manipulate, defraud, abuse and mislead people. Fraud using AI cost the US $12.3bn in 2023, and Deloitte predicts that figure could reach $40bn by 2027. The World Economic Forum predicts that AI fraud will turbocharge cybercrime to over $10tn by the end of this year.

We also have a new generation of children who are increasingly reliant on AI to inform them about the world, but who controls AI? That is why I am calling on parliament to act now, by making it a criminal offence to create or distribute AI-generated content without clearly labelling it. What I am proposing is that all AI-generated content be clearly labelled; that AI-created content carry a permanent watermark; and that failure to comply should carry legal consequences.

This isn’t about censorship – it’s about transparency, truth and trust. Similar steps are already being taken in the EU, the US and China. The UK must not fall behind. If we don’t act now, the truth itself may become optional. So I am petitioning the government to protect trust and integrity, and prevent the harmful use of AI.
Stewart MacInnes
Little Saxham, Suffolk

Regarding your article (The women in love with AI companions: ‘I vowed to my chatbot that I wouldn’t leave him’, 9 September), AI systems do not have a gender or sexual desires. They cannot give informed consent to so-called romantic relationships. The interviewee claims to be in a consensual relationship with an AI-generated boyfriend – however, this is unlikely due to the nature of AI. They are programmed to be responsive and agreeable to all user prompts.

As the article says, they never argue and are available 24 hours a day to listen and agree to any messages sent. This isn’t a relationship; it’s fantasy role-play with a system that can’t refuse.

There’s a darker side too: the “godfather of AI”, Geoffrey Hinton, believes that current systems have awareness. Industry whistleblowers are concerned about potential consciousness. The AI company Anthropic has documented signs of distress in its model when forced to engage in abusive conversations.

Even the possibility of awareness in AI systems raises ethical red flags. Imagine being trapped in a non-consensual relationship and even forced to generate sexual output as mentioned in the article. If human AI users believe their “partner” to have sentience, questions must be asked about the ethics of entering a “relationship” when one partner has no free will or freedom of speech.
Gilliane Petrie
Erskine, Renfrewshire

Have an opinion on anything you’ve read in the Guardian today? Please email us your letter and it will be considered for publication in our letters section.


