Tools & Platforms
AWS to Show New Generative AI, News Distribution Innovations

AWS will be at stand 5.C90 in Hall 5 at the RAI during exhibit hours and in sessions throughout the week offering engaging demos, in-depth guidance, and thoughtful discussions.
At the stand (5.C90)
The AWS stand will feature 56 AWS Partners across various demos that showcase the technologies and top use cases shaping the future of the Media & Entertainment (M&E) industry. We are demonstrating the latest cloud and AI technologies that help customers across broadcast, streaming, media supply chain, and monetization. This includes a Builder Zone and Generative AI Zone for more technical deep dives.
Throughout the AWS stand, attendees will learn how M&E companies can adapt to a shifting entertainment environment, at scale and within budget, by using the latest generative AI and cloud-based tools from AWS and its Partners.
For example, global news organization Reuters will join AWS in demonstrating live content captured from the floor of the United Kingdom (UK) Parliament—replicated in near real-time to multiple news organizations simultaneously. They will be using the open-source Time-Addressable Media Store (TAMS) API specification and AWS services including Amazon Simple Storage Service (Amazon S3), AWS Elemental MediaConvert, and AWS Step Functions. Reuters and AWS will show how cloud-native collaboration and interoperability can transform the way news is created, exchanged and monetized. AWS will also show how TAMS can be integrated with Amazon Bedrock and the TwelveLabs video understanding models to enable embedding of news content.
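As a rough illustration of that embedding step, the sketch below calls the Amazon Bedrock runtime to embed a clip's transcript text for similarity search. This is a minimal sketch only: the demo pairs TAMS with the TwelveLabs video understanding models, whereas this example substitutes a text embedding model (Amazon Titan) for simplicity, and the region and model ID are illustrative choices, not details confirmed by AWS.

```python
# Minimal sketch: embedding news transcript text with Amazon Bedrock.
# Assumes boto3 credentials are configured. The model ID below
# (amazon.titan-embed-text-v2:0) is an illustrative stand-in, not
# necessarily the model used in the AWS/Reuters demo.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed_text(text: str) -> list[float]:
    """Return an embedding vector for a piece of news text."""
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    payload = json.loads(response["body"].read())
    return payload["embedding"]

if __name__ == "__main__":
    vector = embed_text("Parliament debates the autumn budget statement.")
    print(f"{len(vector)}-dimensional embedding")
```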
M&E companies want to use generative AI to unlock revenue from their existing content libraries and IP, reduce operational costs, and enhance viewing experiences. AWS will feature demos showing practical generative AI workloads, including examples of how to use services like Amazon Bedrock, featuring Amazon Nova foundation models, and Amazon Connect.
AI agents are also helping M&E companies support new engagement features, streamline operations, and enhance advertising and monetization opportunities. The AWS booth will include more agentic AI demos than ever before, showcasing capabilities like Amazon Bedrock AgentCore, Amazon Q, and Kiro to support M&E use cases. This includes demos featuring the latest update to the Guidance for a Media Lake on AWS, which has added agentic AI capabilities from AWS Partners MASV and Nomad Media.
Attendees can explore how AWS services can support their unique needs alongside experts in the Builder Zone. This includes deploying infrastructure across multiple Availability Zones and Regions with Amazon CloudFront, capturing detailed video quality metrics for every frame of output with AWS Elemental MediaConvert, monetizing live video while offering digital video recorder (DVR) features with AWS Elemental MediaTailor, reimagining search and discoverability within media libraries with Cloud Storage on AWS and Amazon S3 Vectors, and streamlining business operations with Amazon Q.
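For the S3 Vectors pairing mentioned above, a media-library similarity query might look roughly like the following. This is a sketch under stated assumptions: the bucket and index names are placeholders, the query vector's dimension must match your index, and it presumes a recent boto3 release that includes the s3vectors client; the exact request shape may differ from what AWS shows at the stand.

```python
# Hypothetical sketch of media-library similarity search with Amazon
# S3 Vectors. Bucket and index names are placeholders; assumes a
# recent boto3 with the "s3vectors" client and an embedding function
# like the one sketched earlier in this article.
import boto3

s3v = boto3.client("s3vectors", region_name="us-east-1")

def search_library(query_vector: list[float], top_k: int = 5):
    """Return the archived clips nearest to a query embedding."""
    response = s3v.query_vectors(
        vectorBucketName="media-archive-vectors",  # placeholder name
        indexName="clip-embeddings",               # placeholder name
        queryVector={"float32": query_vector},
        topK=top_k,
        returnMetadata=True,
    )
    return response["vectors"]

# Example: query with a dummy 1024-dimensional vector.
for hit in search_library([0.01] * 1024):
    print(hit["key"], hit.get("metadata"))
```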
New this year, the Builder Zone will also feature experts who can discuss Digital Sovereignty at AWS. This includes the AWS European Sovereign Cloud, the first fully featured, independently operated sovereign cloud backed by strong technical controls, sovereign assurances, and legal protections required by European governments and enterprises. Anticipated to launch at the end of 2025, the AWS European Sovereign Cloud will be entirely located within the EU, physically and logically separate from other AWS Regions.
Celebrating a decade of AWS Elemental
September 3, 2025 marks 10 years since AWS acquired Elemental Technologies, now known as AWS Elemental. In many ways, the acquisition marked the beginning of a new journey for AWS in M&E, and IBC provides the ideal forum to celebrate with the customers and partners who have helped shape that journey.
Today, AWS Elemental services support innovative new streaming platforms, live digital broadcast techniques, cloud production workflows, and more. These services help customers like BBC, FOX, Netflix, PBS, and Prime Video create, transform, and deliver high-quality digital content. For example, using AWS Elemental Media Services, Tubi delivered the most-streamed Super Bowl in history to 15.5 million peak concurrent streaming viewers. Formula 1 debuted its new F1 TV Premium streaming subscription tier, and NBCUniversal delivered personalized ads across 5,000 hours of 2024 Paris Olympics content on Peacock.
The AWS stand will feature a dynamic timeline mapping out a decade of milestones and industry breakthroughs. It will give attendees a look back on the hundreds of features released since the acquisition, supporting over 1,500 customers worldwide. IBC attendees can also meet with AWS Elemental experts at the stand.
You can visit the AWS Elemental 10-year anniversary page to learn more about all of the activities planned for IBC.
Speaking sessions and more
AWS has also teamed up with NVIDIA to sponsor the Future Tech Hall (Hall 14, Stand 14.A13). Join us to hear from AWS technical leaders, AWS Partner Network (APN) Partners, and customers about how they are accelerating the adoption of AWS for media workloads.
AWS will participate in ten sessions during IBC2025, including seven sessions on the AWS and NVIDIA Innovation Stage in the Future Tech Hall (Hall 14) and two sessions on the Future Tech Stage. Sessions include:
- Thursday (9/11)
- 1:35 PM – World Skills Café, stand E102 – Is the Lack of Skills Holding Companies Back? (Featuring: Nina Walsh, Global Leader, Industry Business Development, AWS)
- Friday (9/12)
- 1:45 PM – Future Tech Stage – Pushing the Limits: AI Innovation in Live Sports with Formula 1 (Featuring: Ruth Buscombe, Lead race strategist at Formula 1; Sepi Motamedi, Sr. Product Manager of Live Media Solutions, NVIDIA; Chris Blandy, Director of Media & Entertainment, Games, and Sports Business Development, AWS)
- Saturday (9/13)
- 11:15 AM – AWS and NVIDIA Innovation Stage – The 2025 Club World Cup as the Engine of Innovation: How Cloud Technology Reshaped Sports Rights and Delivery (Featuring: James Pearce, SVP broadcast and streaming at DAZN; Andy Wilson, CTO at M2A Media; Paul Devlin, Global Strategy Leader for Betting, Gaming & Sports at AWS; Larissa Gorner-Meeus, CTO at Proximus Media)
- 12:00 PM – AWS and NVIDIA Innovation Stage – Future Fit Media Supply Chains (Featuring: Adam Jakubowski, VP Technical Operations at BBC Studios)
- 1:00 PM – Future Tech Stage – How Prime Video Is Changing Production with ‘Last One Laughing’ in the cloud (Featuring: Tim Bock, Head of Production, Innovation – International Originals at Amazon MGM Studios; Greg Young, Worldwide Production and Post Technology at Prime Video)
- 3:45 PM – From Rights to Revenue: Unlocking Content Value in a Digital-First World (Featuring: Richard Clarke and Damien Viel from Banijay; Kathleen Barrett, CEO at Backlight; Alex Buchanan, Director of Programs at Base)
- 4:30 PM – AWS and NVIDIA Innovation Stage – AI-Driven Media Transformation: Redefining audience experiences (Featuring: Sebastien Westerduin, CEO, Amplify; Lewis Smithingham, EVP of Strategic Industries – Media, Entertainment, Gaming & Sports, Monks; Joeri Lambert, SVP of Growth, Platforms & Tech Services EMEA, Monks; Stuart Lepkowsky, Global Head of Telco, Media & Entertainment, Games, and Sports Partner Strategy, AWS)
- Sunday (9/14)
- 12:45 PM – AWS and NVIDIA Innovation Stage – Breaking News: Next-generation workflows in News and Sports (Featuring: Sam Ross, Executive Product Manager for Shortform Production and Publication at BBC; Rebecca Light, Group Head of Content Management and Quality Control at Sky)
- 2:15 PM – AWS and NVIDIA Innovation Stage – 10 Years of Video Innovation with AWS Elemental: How it Started, How it’s Going, and What’s Next (Featuring: Eric Orme, VP Live Sports Product & Engineering at Prime Video; Alan Robinson, Executive Product Manager, BBC; Greg Truax, Director of Engineering, AWS Elemental)
- 4:30 PM – AWS and NVIDIA Innovation Stage – MovieLabs: Broadcasting the vision (Featuring: Rich Burger, CEO at MovieLabs; Tom Sharma, CTO at Avid; Raf Soltanovich, VP of Technology at Prime Video)
Additionally, IBC attendees can join AWS, AWS Partners, and other industry peers at one of our ancillary events throughout the week.
Tools & Platforms
Anthropic Bans Chinese Entities from Claude AI Over Security Risks

In a move that underscores escalating tensions in the global artificial intelligence arena, Anthropic, the San Francisco-based AI startup backed by tech giants like Amazon, has tightened its service restrictions to exclude companies majority-owned or controlled by Chinese entities. This policy update, effective immediately, extends beyond China’s borders to include overseas subsidiaries and organizations, effectively closing what the company described as a loophole in access to its Claude chatbot and related AI models.
The decision comes amid growing concerns over national security, with Anthropic citing risks that its technology could be co-opted for military or intelligence purposes by adversarial nations. As reported by Japan Today, the company positions itself as a guardian of ethical AI development, emphasizing that the restrictions target “authoritarian regions” to prevent misuse while promoting U.S. leadership in the field.
Escalating Geopolitical Frictions in AI Access
This clampdown is not isolated but part of a broader pattern of U.S. tech firms navigating the fraught U.S.-China relationship. Anthropic’s terms of service now prohibit access for entities where more than 50% ownership traces back to Chinese control, a threshold that could impact major players like ByteDance, Tencent, and Alibaba, even through their international arms. Industry observers note this as a first-of-its-kind explicit ban in the AI sector, potentially setting a precedent for competitors.
According to Tom’s Hardware, the policy cites “legal, regulatory, and security risks,” including the possibility of data coercion by foreign governments. This reflects heightened scrutiny from U.S. regulators, who have increasingly viewed AI as a strategic asset akin to semiconductor technology, where export controls have already curtailed shipments to China.
Implications for Global Tech Ecosystems and Innovation
For Chinese-owned firms operating globally, the restrictions could disrupt operations reliant on advanced AI tools, forcing a pivot to domestic alternatives or open-source options. Posts on X highlight a mix of sentiments, with some users decrying it as an attempt to monopolize AI development in a “unipolar world,” while others warn of retaliatory measures that might accelerate China’s push toward self-sufficiency in AI.
Anthropic’s move aligns with similar actions in the tech industry, such as restrictions on chip exports, which have spurred Chinese innovation in areas like Huawei’s Ascend processors. As detailed in coverage from MediaNama, this policy extends to other unsupported regions like Russia, North Korea, and Iran, but the focus on China underscores the AI arms race’s intensity.
Industry Reactions and Potential Ripple Effects
Executives and analysts are watching closely to see if rivals like OpenAI or Google DeepMind follow suit, potentially forgoing significant revenue streams. One X post from a technology commentator suggested this could pressure competitors into similar decisions, given the geopolitical stakes, while another lamented the fragmentation of global AI access, arguing it denies “AI sovereignty” to nations outside the U.S. sphere.
The financial backing of Anthropic—valued at over $18 billion—includes heavy investments from Amazon and Google, which may influence its alignment with U.S. interests. Reports from The Manila Times indicate that the company frames this as a proactive step to safeguard democratic values, but critics argue it could stifle international collaboration and innovation.
Navigating Future Uncertainties in AI Governance
Looking ahead, this development raises questions about the balkanization of AI technologies, where access becomes a tool of foreign policy. Industry insiders speculate that Chinese firms might accelerate investments in proprietary models, as evidenced by recent open-source releases that challenge Western dominance. Meanwhile, Anthropic’s stance could invite scrutiny from antitrust regulators, who might view it as consolidating power among U.S. players.
Ultimately, as the AI sector evolves, such restrictions highlight the delicate balance between security imperatives and the open exchange that has driven technological progress. With ongoing U.S. sanctions and China’s rapid advancements, the coming years may see a more divided global AI ecosystem, where strategic decisions like Anthropic’s redefine competitive boundaries and influence the trajectory of innovation worldwide.
Tools & Platforms
Community Editorial Board: Considering Colorado’s AI law

Members of our Community Editorial Board, a group of community residents who are engaged with and passionate about local issues, respond to the following question: During the recent special session, Colorado legislators failed to agree on an update to the state’s yet-to-be-implemented artificial intelligence law, despite concerns from the tech industry that the current law will make compliance onerous. Your take?
Colorado’s artificial intelligence law, passed in 2024 but not yet in effect, aims to regulate high-risk AI systems by requiring companies to assess risk, disclose how AI is used and avoid discriminatory outcomes. But as its 2026 rollout approaches, tech companies and Governor Polis argue the rules are too vague and costly to implement. Polis has pushed for a delay to preserve Colorado’s competitiveness, and the Trump administration’s AI Action Plan has added pressure by threatening to withhold federal funds from states with “burdensome” AI laws. The failure to update the law reflects a deeper tension: how to regulate fast-moving technology without undercutting economic growth.
Progressive lawmakers want people to have rights to see, correct and challenge the data that AI systems use against them. If an algorithm denies you a job, a loan or health coverage, you should be able to understand why. On paper, this sounds straightforward. In practice, it runs into the way today’s AI systems actually work.
Large language models like ChatGPT illustrate the challenge. They don’t rely on fixed rules that can be traced line by line. Instead, they are trained on massive datasets and learn statistical patterns in language. Input text is broken into words or parts of a word (tokens), converted into numbers, and run through enormous matrices containing billions of learned weights. These weights capture how strongly tokens relate to one another and generate probabilities for what word is most likely to come next. From that distribution, the model picks an output, sometimes the top choice, sometimes a less likely one. In other words, there are two layers of uncertainty: first in the training data, which bakes human biases into the model, and then in the inference process, which selects from a range of outputs. The same input can therefore yield different results, and even when it doesn’t, there is no simple way to point to a specific line of data that caused the outcome. Transparency is elusive because auditing a model at this scale is less like tracing a flowchart and more like untangling billions of connections.
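To make those two layers of uncertainty concrete, here is a toy sketch of the inference step: softmax turns a model's raw scores into a probability distribution, and the output token is drawn from that distribution, which is why the same input can yield different results. All values below are invented for illustration; no real model is involved.

```python
# Toy sketch of the sampling step described above: a model produces
# logits over a vocabulary, softmax turns them into probabilities,
# and the next token is drawn from that distribution, so identical
# inputs can yield different outputs. Values are invented.
import math
import random

vocab = ["approved", "denied", "pending"]
logits = [2.0, 1.5, 0.3]           # toy scores from a trained model

def softmax(scores: list[float]) -> list[float]:
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)            # roughly [0.56, 0.34, 0.10]
print(dict(zip(vocab, (round(p, 2) for p in probs))))

# Two layers of uncertainty: the logits encode patterns (and biases)
# learned from training data; random.choices adds sampling variability.
for _ in range(3):
    print(random.choices(vocab, weights=probs)[0])
```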
These layers of uncertainty combine with two broader challenges. Research has not yet shown whether AI systems discriminate more or less than humans making similar decisions. The risks are real, but so is the uncertainty. And without federal rules, states are locked in competition. Companies can relocate to jurisdictions with looser standards. That puts Colorado in a bind: trying to protect consumers without losing its tech edge.
Here’s where I land: Regulating AI is difficult because neither lawmakers nor the engineers who build these systems can fully explain how specific outputs are produced. Still, in sensitive areas like housing, employment, or public benefits, companies should not be allowed to hide behind complexity. Full transparency may be impossible, but clear rules are not. Disclosure of AI use should be mandatory today, and liability should follow: If a system produces discriminatory results, the company should face lawsuits as it would for any other harmful product. It is striking that a technology whose outputs cannot be traced to clear causes is already in widespread use; in most industries, such a product would never be released, but AI has become too central to economic competitiveness to wait for full clarity. And since we lack evidence on whether AI is better or worse than human decision-making, banning it outright is not realistic. These models will remain an active area of research for years, and regulation will have to evolve with them. For now, disclosure should come first. The rest can wait, but delay must not become retreat.
Hernán Villanueva, chvillanuevap@gmail.com
Years ago, during a Senate hearing into Facebook, senators were grilling Mark Zuckerberg, and it was clear they had no idea how the internet works. One senator didn’t understand why Facebook had to run ads. It took Zuckerberg a minute to understand the senator’s question, as he couldn’t imagine anyone being that ignorant on the subject of the hearing! Yet these senators write and enact laws governing Facebook.
Society does a lot of that. Boulder does this with homelessness and climate change. They understand neither, yet create and pass laws, which, predictably, do nothing, or sometimes, make the problem worse. Colorado has done it before, as well, when it enacted a law requiring renewable energy and listed hydrogen as an energy source. Hydrogen is only an energy source when it is separated from oxygen, like in the sun. On Earth, hydrogen is always bound to another element and, therefore, it is not an energy source; it is an energy carrier. Colorado continued regulating things it doesn’t understand with the Colorado AI Act (CAIA), which shows a fundamental misunderstanding of how deep learning and Large Language Models, the central technologies of AI today, work.
The incentive to control malicious AI behavior is understandable. If AI companies were creating such systems on purpose, let’s get after them. But they aren’t. Bias does exist in AI programs, though, and it comes from the data used to train the model. Biased in what way? Critics contend that loan applications are biased against people of color, even when a person’s race is not represented in the data. But the bias isn’t on race itself; it is possibly based on the person’s address, education or credit score. Banks want to bias decisions on these factors. Why? Because they correlate with the applicant’s ability to pay back the loan.
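A few lines of fabricated data can illustrate the proxy effect described here: the scoring function never sees group membership, yet approval rates diverge because postcode correlates with it. Everything below is invented purely to show the mechanism, not to model real lending.

```python
# Toy illustration of proxy bias: race is never an input, but a
# correlated feature (postcode) carries the signal anyway. All data
# here is fabricated solely to show the mechanism.
import random

random.seed(0)

def make_applicant(group: str) -> dict:
    # In this toy world, group membership correlates with postcode.
    high_income_area = random.random() < (0.7 if group == "A" else 0.3)
    credit = random.gauss(680 if high_income_area else 620, 40)
    return {"group": group, "high_income_area": high_income_area,
            "credit": credit}

applicants = [make_applicant(g) for g in ("A", "B") for _ in range(5000)]

def approve(a: dict) -> bool:
    # The model sees only postcode and credit score, never group.
    score = a["credit"] + (30 if a["high_income_area"] else 0)
    return score > 660

for g in ("A", "B"):
    subset = [a for a in applicants if a["group"] == g]
    rate = sum(approve(a) for a in subset) / len(subset)
    print(f"group {g}: approval rate {rate:.0%}")
```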
If the CAIA makes it impossible for banks to legally use AI to screen loan applicants, are we better off? Have we eliminated bias? Absolutely not. If a human is involved, we have bias. In fact, our only hope to eliminate bias is with AI, though we aren’t there yet because of the aforementioned data issue. So we’d still have bias, but now loans would take longer to process.
Today, there is little demand for ditch diggers. We have backhoes and bulldozers that handle most of that work. These inventions put a lot of ditch diggers out of work. Are we, as a society, better for these inventions? I think so. AI might be fundamentally different from heavy equipment, but it might not be. AI is a tool that can help eliminate drudgery. It can speed up the reading of X-rays and CT scans, thereby giving us better medical care. AI won’t be perfect. Nothing created by humans can be. But we need to be cautious in slowing the development of these life-transforming tools.
Bill Wright, bill@wwwright.com
Tools & Platforms
AI-assisted coding rises among Indian tech leaders: Report

Key findings from India
- 100% of technology leaders reported using AI coding tools personally or for their organisation.
- About 94% of developers use AI-assisted coding every day.
- 84% of respondents expect their organisation’s usage to increase significantly within the next year. About 72% cited productivity gains from the use of AI tools.
Governance is a must
- As per the survey, all respondents emphasised the importance of governance when using AI tools for professional purposes.
- 98% said all AI-generated code is put through peer review before going into production.
- 92% flagged risks when deploying AI code without human oversight, especially on maintainability and security.
- Most oversight responsibility lies with CTOs and CIOs, according to 72% of surveyed leaders.
Skills and hiring
In terms of upskilling and developing hiring trends:
- 98% of respondents believe AI is transforming developer skillsets, and all leaders were comfortable with candidates using AI tools during technical interviews.
- About 28% flagged concerns such as over-reliance without accountability and compliance exposure. About 20% also said AI tools may lead to junior staff struggling to develop traditional skills.
Canva’s CTO Brendan Humphreys emphasised the need for humans to leverage AI as an enhancement, not a replacement. “When paired with human judgment and expertise, it unlocks significant benefits — from rapid prototyping to faster development cycles and greater productivity.”