Tools & Platforms
Q2 AI Policies: A Comprehensive Tracker To Help You Prepare – New Technology

Activity on AI in health care has been a nationwide phenomenon during the 2025 legislative session:
- 46 states introduced health care AI bills
- 17 states signed 27 health care AI bills into law
- Over 250 total health care AI bills were introduced across the country
The passed laws focused on three main areas:
- Use of AI-Enabled Chatbots—as chatbots are introduced across health care to improve efficiency, enhance patient engagement, and expand access, state legislators are concerned that AI chatbots may misrepresent themselves as humans, produce harmful or inaccurate responses, or fail to reliably detect crises.
- Payor Use of AI—laws that passed focused on prohibiting the sole use of AI for denials of care or prior authorization, and either requiring physician review of decisions generated by AI or prohibiting payors from replacing physician/peer review of medical appropriateness with an AI tool.
- AI in Clinical Care—states have sought to place guardrails on the use of AI to protect both patients and providers; this session, states focused on the use of AI in clinical delivery, what provider oversight should be required when using AI tools in clinical decision-making, and how providers should communicate the use of AI to patients.
Our updated policy tracker provides details on these areas and how you can prepare.
To date, no federal health care AI bills have been introduced. The language in H.R. 1 that would have barred state or local governments from enforcing AI laws or regulations was stricken before the bill was signed into law.
Tools & Platforms
Anthropic Bans Chinese Entities from Claude AI Over Security Risks

In a move that underscores escalating tensions in the global artificial intelligence arena, Anthropic, the San Francisco-based AI startup backed by tech giants like Amazon, has tightened its service restrictions to exclude companies majority-owned or controlled by Chinese entities. This policy update, effective immediately, extends beyond China’s borders to include overseas subsidiaries and organizations, effectively closing what the company described as a loophole in access to its Claude chatbot and related AI models.
The decision comes amid growing concerns over national security, with Anthropic citing risks that its technology could be co-opted for military or intelligence purposes by adversarial nations. As reported by Japan Today, the company positions itself as a guardian of ethical AI development, emphasizing that the restrictions target “authoritarian regions” to prevent misuse while promoting U.S. leadership in the field.
Escalating Geopolitical Frictions in AI Access
This clampdown is not isolated but part of a broader pattern of U.S. tech firms navigating the fraught U.S.-China relationship. Anthropic’s terms of service now prohibit access for entities where more than 50% ownership traces back to Chinese control, a threshold that could impact major players like ByteDance, Tencent, and Alibaba, even through their international arms. Industry observers note this as a first-of-its-kind explicit ban in the AI sector, potentially setting a precedent for competitors.
According to Tom’s Hardware, the policy cites “legal, regulatory, and security risks,” including the possibility of data coercion by foreign governments. This reflects heightened scrutiny from U.S. regulators, who have increasingly viewed AI as a strategic asset akin to semiconductor technology, where export controls have already curtailed shipments to China.
Implications for Global Tech Ecosystems and Innovation
For Chinese-owned firms operating globally, the restrictions could disrupt operations reliant on advanced AI tools, forcing a pivot to domestic alternatives or open-source options. Posts on X highlight a mix of sentiments, with some users decrying it as an attempt to monopolize AI development in a “unipolar world,” while others warn of retaliatory measures that might accelerate China’s push toward self-sufficiency in AI.
Anthropic’s move aligns with similar actions in the tech industry, such as restrictions on chip exports, which have spurred Chinese innovation in areas like Huawei’s Ascend processors. As detailed in coverage from MediaNama, this policy extends to other unsupported regions like Russia, North Korea, and Iran, but the focus on China underscores the AI arms race’s intensity.
Industry Reactions and Potential Ripple Effects
Executives and analysts are watching closely to see if rivals like OpenAI or Google DeepMind follow suit, potentially forgoing significant revenue streams. One X post from a technology commentator suggested this could pressure competitors into similar decisions, given the geopolitical stakes, while another lamented the fragmentation of global AI access, arguing it denies “AI sovereignty” to nations outside the U.S. sphere.
The financial backing of Anthropic—valued at over $18 billion—includes heavy investments from Amazon and Google, which may influence its alignment with U.S. interests. Reports from The Manila Times indicate that the company frames this as a proactive step to safeguard democratic values, but critics argue it could stifle international collaboration and innovation.
Navigating Future Uncertainties in AI Governance
Looking ahead, this development raises questions about the balkanization of AI technologies, where access becomes a tool of foreign policy. Industry insiders speculate that Chinese firms might accelerate investments in proprietary models, as evidenced by recent open-source releases that challenge Western dominance. Meanwhile, Anthropic’s stance could invite scrutiny from antitrust regulators, who might view it as consolidating power among U.S. players.
Ultimately, as the AI sector evolves, such restrictions highlight the delicate balance between security imperatives and the open exchange that has driven technological progress. With ongoing U.S. sanctions and China’s rapid advancements, the coming years may see a more divided global AI ecosystem, where strategic decisions like Anthropic’s redefine competitive boundaries and influence the trajectory of innovation worldwide.
Tools & Platforms
Community Editorial Board: Considering Colorado’s AI law

Members of our Community Editorial Board, a group of community residents who are engaged with and passionate about local issues, respond to the following question: During the recent special session, Colorado legislators failed to agree on an update to the state’s yet-to-be-implemented artificial intelligence law, despite concerns from the tech industry that the current law will make compliance onerous. Your take?
Colorado’s artificial intelligence law, passed in 2024 but not yet in effect, aims to regulate high-risk AI systems by requiring companies to assess risk, disclose how AI is used and avoid discriminatory outcomes. But as its 2026 rollout approaches, tech companies and Governor Polis argue the rules are too vague and costly to implement. Polis has pushed for a delay to preserve Colorado’s competitiveness, and the Trump administration’s AI Action Plan has added pressure by threatening to withhold federal funds from states with “burdensome” AI laws. The failure to update the law reflects a deeper tension: how to regulate fast-moving technology without undercutting economic growth.
Progressive lawmakers want people to have rights to see, correct and challenge the data that AI systems use against them. If an algorithm denies you a job, a loan or health coverage, you should be able to understand why. On paper, this sounds straightforward. In practice, it runs into the way today’s AI systems actually work.
Large language models like ChatGPT illustrate the challenge. They don’t rely on fixed rules that can be traced line by line. Instead, they are trained on massive datasets and learn statistical patterns in language. Input text is broken into words or parts of a word (tokens), converted into numbers, and run through enormous matrices containing billions of learned weights. These weights capture how strongly tokens relate to one another and generate probabilities for what word is most likely to come next. From that distribution, the model picks an output, sometimes the top choice, sometimes a less likely one. In other words, there are two layers of uncertainty: first in the training data, which bakes human biases into the model, and then in the inference process, which selects from a range of outputs. The same input can therefore yield different results, and even when it doesn’t, there is no simple way to point to a specific line of data that caused the outcome. Transparency is elusive because auditing a model at this scale is less like tracing a flowchart and more like untangling billions of connections.
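To make that sampling step concrete, here is a minimal toy sketch in Python, with an invented three-word vocabulary and made-up scores rather than any real model's values, showing how raw scores become probabilities and how a token is then drawn from them:

```python
import math
import random

# Toy vocabulary and raw scores (logits) a model might assign to the next token.
# These numbers are invented purely for illustration.
vocab = ["approved", "denied", "pending"]
logits = [2.1, 1.9, 0.3]

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(scores, temperature=1.0):
    """Draw one token; lower temperature favors the top choice more strongly."""
    scaled = [s / temperature for s in scores]
    probs = softmax(scaled)
    return random.choices(vocab, weights=probs, k=1)[0], probs

# The same input can yield different outputs across runs.
for _ in range(3):
    token, probs = sample_next_token(logits)
    print(token, [round(p, 2) for p in probs])
```

Run repeatedly on the same input, this sketch can print different tokens; explaining any one outcome means accounting for both the learned scores (shaped by the training data) and the random draw itself.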
These layers of uncertainty combine with two broader challenges. Research has not yet shown whether AI systems discriminate more or less than humans making similar decisions. The risks are real, but so is the uncertainty. And without federal rules, states are locked in competition. Companies can relocate to jurisdictions with looser standards. That puts Colorado in a bind: trying to protect consumers without losing its tech edge.
Here’s where I land: Regulating AI is difficult because neither lawmakers nor the engineers who build these systems can fully explain how specific outputs are produced. Still, in sensitive areas like housing, employment, or public benefits, companies should not be allowed to hide behind complexity. Full transparency may be impossible, but clear rules are not. Disclosure of AI use should be mandatory today, and liability should follow: If a system produces discriminatory results, the company should face lawsuits as it would for any other harmful product. It is striking that a technology whose outputs cannot be traced to clear causes is already in widespread use; in most industries, such a product would never be released, but AI has become too central to economic competitiveness to wait for full clarity. And since we lack evidence on whether AI is better or worse than human decision-making, banning it outright is not realistic. These models will remain an active area of research for years, and regulation will have to evolve with them. For now, disclosure should come first. The rest can wait, but delay must not become retreat.
Hernán Villanueva, chvillanuevap@gmail.com
Years ago, during a Senate hearing into Facebook, senators were grilling Mark Zuckerberg, and it was clear they had no idea how the internet works. One senator didn’t understand why Facebook had to run ads. It took Zuckerberg a minute to understand the senator’s question, as he couldn’t imagine anyone being that ignorant on the subject of the hearing! Yet these senators write and enact laws governing Facebook.
Society does a lot of that. Boulder does this with homelessness and climate change. It understands neither, yet creates and passes laws that, predictably, do nothing or sometimes make the problem worse. Colorado has done it before as well, when it enacted a law requiring renewable energy and listed hydrogen as an energy source. Hydrogen is only an energy source when it is not bound to other elements, as in the sun. On Earth, hydrogen is always bound to another element and, therefore, it is not an energy source; it is an energy carrier. Colorado continued regulating things it doesn’t understand with the Colorado AI Act (CAIA), which shows a fundamental misunderstanding of how deep learning and Large Language Models, the central technologies of AI today, work.
The incentive to control malicious AI behavior is understandable. If AI companies were building in harmful behavior on purpose, we should go after them. But they aren’t. Bias does exist in AI programs, though, and it comes from the data used to train the model. Biased in what way? Critics contend that loan applications are biased against people of color even when a person’s race is not represented in the data. The bias isn’t on race; it is more likely based on the person’s address, education or credit score. Banks deliberately weigh applicants on those factors because they correlate with the applicant’s ability to pay back the loan.
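A small hypothetical sketch of that correlation point (the groups, the zip_score feature, and all numbers below are invented): a rule that never sees group membership, only a stand-in for address- or credit-style factors that happens to correlate with it, can still approve the two groups at different rates.

```python
import random

random.seed(0)

# Hypothetical applicants: the "model" never sees "group", only "zip_score",
# a stand-in for address/credit-style features correlated with group membership.
def make_applicant(group):
    # Invented correlation: group "A" tends to have a higher zip_score.
    zip_score = random.gauss(0.7 if group == "A" else 0.4, 0.15)
    return {"group": group, "zip_score": zip_score}

applicants = [make_applicant("A") for _ in range(1000)] + \
             [make_applicant("B") for _ in range(1000)]

# A decision rule that only looks at zip_score; group is not an input.
def approve(applicant, threshold=0.55):
    return applicant["zip_score"] >= threshold

for g in ("A", "B"):
    group = [a for a in applicants if a["group"] == g]
    rate = sum(approve(a) for a in group) / len(group)
    print(f"approval rate for group {g}: {rate:.0%}")
```

That is the sense in which the bias "isn't on race" and yet outcomes can still differ across groups.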
If the CAIA makes it impossible for banks to legally use AI to screen loan applicants, are we better off? Have we eliminated bias? Absolutely not. If a human is involved, we have bias. In fact, our only hope to eliminate bias is with AI, though we aren’t there yet because of the aforementioned data issue. So we’d still have bias, but now loans would take longer to process.
Today, there is little demand for ditch diggers. We have backhoes and bulldozers that handle most of that work. These inventions put a lot of ditch diggers out of work. Are we, as a society, better for these inventions? I think so. AI might be fundamentally different from heavy equipment, but it might not be. AI is a tool that can help eliminate drudgery. It can speed up the reading of X-rays and CT scans, thereby giving us better medical care. AI won’t be perfect. Nothing created by humans can be. But we should be cautious about slowing the development of these life-transforming tools.
Bill Wright, bill@wwwright.com
Tools & Platforms
AI-assisted coding rises among Indian tech leaders: Report

Key findings from India
- 100% of technology leaders reported using AI coding tools personally or for their organisation.
- About 94% of developers use AI-assisted coding every day.
- 84% of the respondents expect their organisation’s usage to increase significantly within the next year.
- About 72% cited productivity gains due to the use of AI tools.
Governance is a must
- As per the survey, all respondents emphasised the importance of governance when using AI tools for professional purposes.
- 98% said all AI-generated code is put through peer review before going into production.
- 92% flagged risks when deploying AI code without human oversight, especially on maintainability and security.
- Most oversight responsibility lies with CTOs and CIOs, according to 72% of surveyed leaders.
Skills and hiring
In terms of upskilling and hiring trends:
- 98% of respondents believe AI is transforming developer skillsets, and all leaders were comfortable with candidates using AI tools during technical interviews.
- About 28% flagged concerns such as over-reliance without accountability and compliance exposure. About 20% also said AI tools may lead to junior staff struggling to develop traditional skills.
Canva’s CTO Brendan Humphreys emphasised the need for humans to leverage AI as an enhancement, not a replacement. “When paired with human judgment and expertise, it unlocks significant benefits — from rapid prototyping to faster development cycles and greater productivity.”