Dream Lab LA’s Jon Finger (left) and Verena Puhm
Tools & Platforms
Terror Groups Exploit AI and Emerging Tech as Domestic Attacks Surge 357%
Domestic terrorism incidents in the United States surged by 357% between 2013 and 2021, as terrorist organizations began leveraging artificial intelligence, drones, and other advanced technologies for recruitment and attack planning. During this eight-year period, the Department of Homeland Security (DHS) documented 230 domestic terrorism incidents, with racially and ethnically motivated attacks proving to be the most lethal and destructive.
The alarming statistics come as federal agencies grapple with significant coordination challenges and emerging technologies that experts warn are creating unprecedented security vulnerabilities across critical infrastructure.
“We’re seeing attacks against hospitals, water supply systems, rural schools—targets that would have been unthinkable in previous conflicts,” said Nitin Natarajan, former Deputy Director of the Cybersecurity and Infrastructure Security Agency (CISA). “The rules are changing in what we’re seeing nation-states and cyber criminals do.”
Digital Weapons in Terrorist Hands
The convergence of accessible technology and extremist ideology has created what security professionals describe as a “perfect storm” for modern terrorism. Unlike traditional threats that required extensive resources and training, today’s digital weapons can be deployed by amateur users with devastating effect.
“The beauty of cyberattacks is they don’t require boots on the ground; they can be executed globally, without borders, from anywhere,” Natarajan explained during a recent gathering of experts convened by Homeland Security Today to discuss evolving cyber, technology, weapons of mass destruction (WMD), and tactics in the digital age. “Many can be low-cost yet still have disruptive impacts and effects.”
Terrorist groups like ISIS have established sophisticated cyber units, including the United Cyber Caliphate, conducting everything from website defacements to denial-of-service attacks. While these may seem like small-scale operations, experts warn that advancing technology will enable more destructive capabilities with fewer resources.
Federal Agencies Face Coordination Crisis
Despite the growing threat, federal agencies tasked with combating domestic terrorism are struggling with fundamental coordination problems. A Government Accountability Office (GAO) investigation revealed that the Federal Bureau of Investigation (FBI) and DHS agents often don’t know when or with whom to share critical threat information.
“When we spoke with agents on the ground, they said they didn’t always know who to share the threat information with and when to do it,” said Triana McNeil, Director of GAO’s Homeland Security and Justice Team. “That’s an issue when you’re trying to counter these threats.”
The coordination problems extend beyond federal agencies. The nation’s first-ever domestic terrorism strategy, released in 2021, lacked clear roles for state and local partners, performance measures to track progress, and identified resources to achieve its goals – all considered essential elements of effective national strategies, according to GAO’s report examining the National Strategy on Countering Domestic Terrorism.
Private Sector Partnerships Under Strain
Social media and gaming companies have become unlikely frontlines in the fight against domestic terrorism, with 33% of mass attack perpetrators posting content online and 20% of adult gamers exposed to extremist material. However, government partnerships with these companies remain haphazard.
“There was no strategy, there were no clear goals about what you’re trying to achieve when you’re making these connections with different companies,” McNeil noted, describing the current approach as scattered and ineffective.
The GAO found that while FBI and DHS have developed various tools to share and receive threat information from private companies, the efforts lack coordination and strategic direction.
Next Generation Vulnerabilities
Perhaps most concerning is how the next generation approaches cybersecurity. At a recent New York City event, college students shocked security experts when asked whether they think about cybersecurity in their day-to-day use of technology, declaring that they “don’t care about privacy” and don’t mind if people take their personal information.
Relaying the story, Natarajan warned: “We’d be remiss if we didn’t factor in how the next generation is looking at cybersecurity as part of their day-to-day life—it’s very different from how we look at it.”
Critical Infrastructure in Crosshairs
The water sector represents the next major vulnerability, with 141,000 utilities nationwide, many lacking basic cybersecurity protections. Iranian hackers recently exploited water systems using default passwords of “1111”—attacks that could have been prevented simply by changing the default credentials.
“When those victims were notified, they didn’t even know how to change the default password,” Natarajan revealed. “Some said the person who installed the system left five years ago and doesn’t work here anymore.”
Food and agriculture systems face similar risks, with modern tractors now containing two million lines of code and extensive data flows that could be manipulated to disrupt everything from seeding to harvesting.
Resource Constraints Amid Growing Threats
These mounting challenges come as security agencies face potential budget cuts. CISA, which grew from 2,100 to 3,400 employees over four years with strong bipartisan support, now faces proposed reductions of 25-33%.
“We are already outnumbered 50 to 1” against Chinese cyber operations alone, Natarajan pointed out, citing FBI Director Christopher Wray’s testimony. “That situation is only getting worse as we see reductions in funding and government workforce.”
The intersection of emerging technologies, resource constraints, and evolving terrorist tactics creates an unprecedented challenge for homeland security. As experts noted, the threat landscape will only grow more complex as artificial intelligence and other emerging technologies become more accessible to those seeking to cause harm.
“We need to make sure we’re doing more to build resilience into our nation’s critical infrastructure,” Natarajan emphasized, “and continue to take the lead internationally in setting standards that reflect our values and those of like-minded allies.”
This article is based on key insights shared at Homeland Security Today’s COUNTERTERRORISM2025 summit.
Luma AI Launches Dream Lab LA for AI Use in Filmmaking
July 10, 2025
Generative AI company Luma AI has announced the launch of Dream Lab LA, a Los Angeles-based initiative that combines AI technology with filmmaking expertise.
“Dream Lab LA is designed as a creative engine room where Hollywood veterans, emerging storytellers, studios, and curious minds come together to shape the next era of storytelling — before it arrives,” according to a Luma AI press release.
According to the company, Dream Lab LA will let filmmakers collaborate, learn and tell new stories; give studios embedded support to modernize workflows and upskill teams; and let “curious daredevils push boundaries and experiment freely.”
“Dream Lab LA is where we build what everyone else is still guessing at,” Amit Jain, CEO and founder of Luma AI, said in a statement. “This is not about chasing trends, this is about defining what’s next.”
Dream Lab LA exists to explore how AI can empower creativity, not replace it, according to Luma AI, offering a space for experimentation, education, and collaboration between studios and creators.
Luma AI also announced the leadership team for Dream Lab LA, naming Verena Puhm as head of the studio. With experience in both traditional and AI-driven storytelling, Puhm has shaped content for global giants such as CNN, BBC, Netflix, Red Bull Media and Leonine Studios. As one of the earliest creatives to embrace AI in filmmaking, she’s led projects recognized by Sundance, Project Odyssey, Curious Refuge and OpenAI’s Sora Selects. In her new role, Puhm will spearhead the studio’s vision for next-generation content and lead a slate of productions.
“I believe the future of storytelling should be shaped by the people who tell stories, not just the people who build the tools,” Puhm said in a statement. “We’re cultivating a community, a creative lab, and a launchpad for what’s next. This isn’t just another platform; it’s a creative studio built from the ground up to blend technological innovation with artistic intention.”
Jon Finger, creative workflow executive, brings more than 15 years of experience at the intersection of emerging technology and content creation. A pioneer in at-home motion capture, 3-D scanning, and virtual production, he has worked across various entertainment sectors with brands such as Paramount Network, The Game Awards, and Comedy Central, and has developed for Netflix. For the past three years, Finger has focused on AI integration in filmmaking, developing workflows that give creators physicalized control over AI-driven productions.
“The focus here is to find the best experiences for passionate creatives,” Finger said in a statement. “The world is changing quickly, and we want to find the best ways for fun, fulfilling human-centric creative expression to not only continue but be amplified, so more creative people can find a new prosperous way forward.”
From features such as Modify Video, Reframe, and Keyframes to its foundation models Ray2 and Photon, Luma builds tools explicitly designed for narrative storytelling.
Why AI and Blockchain Are About to Transform Compliance
Opinions expressed by Entrepreneur contributors are their own.
Any fintech founder will tell you that compliance is important — that’s because it is. But in today’s world of unparalleled financial innovation, whole new currencies, entirely new payment methods and borderless money, compliance is not nearly the most exciting topic.
For money to move, however, it needs to be compliant. Whether we like it or not, compliance is a necessary consideration that, if done incorrectly, could result in hefty fines.
It’s therefore no surprise that organizations continually look for ways to offload compliance responsibility; realistically, this is where most major banks that have received headline-worthy fines for non-compliance have faltered. Nor is it a surprise that, as an industry, we’ve found ways for AI to streamline these processes for us.
The fact of the matter is that compliance is made simpler through the integration of artificial intelligence technology. But the real promise of compliance innovation isn’t just the application of artificial intelligence; it’s the integration of blockchain technology and tokenization — technology that isn’t being widely used yet in the traditional finance industry.
Achieving compliance with AI
When you boil fintech compliance down to its fundamental principles, it rests on thorough AML (anti-money laundering) and KYC (know your customer) screenings. These protocols have been in place since the dawn of financial record-keeping requirements in the 1970s and have been compulsory for organizations ever since.
AML and KYC processes involve heavy levels of paperwork; rigorous background checks are required of banking customers and vendors, and a meticulous eye on transaction activity must be maintained constantly to make sure no suspicious or illegitimate activity is processed.
It’s these tedious, time-consuming processes that are most readily automated with AI. Models can monitor transaction activity around the clock, quickly flagging and responding to suspicious behavior. Real-time compliance monitoring improves fintechs’ ability to keep pace with regulatory requirements, and moving away from reliance on human monitoring leaves far less room for error while saving company resources.
AI is also able to efficiently cross-reference user applications with requirements and provide the necessary approvals for customers to be onboarded quickly. More than that, when routine re-verification is required, AI is able to automate this to facilitate KYC renewal checks automatically — streamlining the process and fulfilling the requirement in the background.
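The anomaly-flagging idea behind this kind of monitoring can be illustrated with a deliberately simple sketch. Production AML systems use far richer models and many more features; here, purely for illustration, a transaction is flagged if its amount sits more than a few standard deviations from a customer's historical mean. The function name, threshold, and sample figures are all hypothetical.

```python
import statistics

def flag_anomalies(amounts, history, threshold=3.0):
    """Flag transactions whose amount deviates sharply from a customer's
    historical mean -- a crude stand-in for the richer behavioral models
    a real AML monitoring system would use."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    flagged = []
    for amount in amounts:
        # z-score: how many standard deviations from the customer's norm
        z = abs(amount - mean) / stdev if stdev else 0.0
        if z > threshold:
            flagged.append(amount)
    return flagged

# Hypothetical customer history and incoming transactions
history = [120, 95, 130, 110, 105, 125, 98, 115]
incoming = [118, 102, 9500]   # the 9500 transfer is far outside the norm
print(flag_anomalies(incoming, history))  # -> [9500]
```

In practice the "model" would score many signals at once (counterparty, geography, velocity), but the shape is the same: score every transaction continuously and surface the outliers for review.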
The next level of compliance
But if we look even beyond AI, there’s a new and exciting wave of compliance technology on the horizon that will further transform the way fintechs and broader industries are able to follow compliance requirements. Blockchain technology, as it continues to revolutionize finance as we know it through the advent of regulated stablecoins, CBDCs and broader cryptocurrencies, will eventually infiltrate wider operations in the fintech sector, including compliance.
It’s the core principles of blockchain technology, such as tokenized information, immutable ledgers and private/public cryptography that make it such a game-changer for compliance.
The concept of tokenization doesn’t just apply to assets; tokenizing information allows companies to translate personal identifiable information (PII) — critical information for the KYC and AML screening process — into encrypted code, which can be shared between financial organizations and vendors as a means of verifying someone’s identity and therefore the transaction.
The benefit of tokenizing the information is that personal information can be verified from one organization to another without revealing PII. It removes the need for constant data-sharing requests while preserving the data’s privacy and integrity.
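A minimal sketch of this matching idea, assuming two institutions have agreed on a shared salt out-of-band: each derives an opaque token from the same identity fields, so records can be matched without the raw PII ever changing hands. Real deployments use dedicated tokenization services and stronger key management; the field format and names here are invented for illustration.

```python
import hashlib
import hmac
import secrets

def tokenize_pii(pii: str, salt: bytes) -> str:
    """Turn a PII field into an opaque token. Both institutions derive the
    same token from the same input and shared salt, so they can match
    records without exchanging the underlying data."""
    return hashlib.sha256(salt + pii.encode("utf-8")).hexdigest()

def tokens_match(token_a: str, token_b: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(token_a, token_b)

shared_salt = secrets.token_bytes(16)  # agreed out-of-band by both parties
bank_a_token = tokenize_pii("1985-03-14|JANE DOE|DL-4471", shared_salt)
bank_b_token = tokenize_pii("1985-03-14|JANE DOE|DL-4471", shared_salt)
print(tokens_match(bank_a_token, bank_b_token))  # True: same person, no PII shared
```

Each side sees only a hex digest; the salt prevents precomputed-lookup attacks on common identity values.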
Related: 6 Ways Automation Can Eliminate Your Company’s Compliance Risks
All of this is performed on an immutable ledger. That is, a record that is unchangeable and permanent, a hallmark of transparency that complies with requirements for regulatory oversight and audit processes. The digitization of this ledger propels financial institutions out of manual record-keeping processes and into a world where transaction information is more standardized, accessible and transparent.
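What makes such a ledger tamper-evident can be shown with a toy hash chain. This is a sketch of the underlying principle, not a production blockchain: each record carries the hash of its predecessor, so altering any past entry breaks every later link. The class and record contents are hypothetical.

```python
import hashlib
import json

def _hash(block: dict) -> str:
    # Canonical JSON so the same record always hashes identically
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    """A toy append-only ledger: each record stores the hash of the one
    before it, so editing any past entry invalidates the whole chain."""
    def __init__(self):
        self.blocks = [{"index": 0, "data": "genesis", "prev": "0" * 64}]

    def append(self, data: str) -> None:
        prev = _hash(self.blocks[-1])
        self.blocks.append({"index": len(self.blocks), "data": data, "prev": prev})

    def verify(self) -> bool:
        return all(
            self.blocks[i]["prev"] == _hash(self.blocks[i - 1])
            for i in range(1, len(self.blocks))
        )

ledger = Ledger()
ledger.append("KYC check passed for customer 0x91")
ledger.append("Wire transfer 4471 screened, no AML flags")
print(ledger.verify())                    # True
ledger.blocks[1]["data"] = "tampered"
print(ledger.verify())                    # False: the chain detects the edit
```

Auditors only need to re-walk the chain to confirm nothing in the record has been rewritten, which is exactly the property regulators want from an audit trail.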
This technology is already being implemented today and will continue to redefine how organizations treat and achieve compliance moving further into the future. AI and blockchain technology in themselves drive significant impact on facilitating compliant transactions, and together, the benefits scale dramatically.
When we think of compliance, many people still picture a drawn-out, tedious process, but AI and blockchain technology will soon retire that perception, ushering in a new era of efficiency, accuracy and automation — and it’s about time.
AFT to Launch National Academy for AI Instruction with Microsoft, OpenAI, Anthropic and United Federation of Teachers
NEW YORK, July 10, 2025 — The AFT, alongside the United Federation of Teachers and lead partner Microsoft Corp., founding partner OpenAI, and Anthropic, announced the launch of the National Academy for AI Instruction. The groundbreaking $23 million education initiative will provide access to free AI training and curriculum for all 1.8 million members of the AFT, starting with K-12 educators. It will be based at a state-of-the-art bricks-and-mortar Manhattan facility designed to transform how artificial intelligence is taught and integrated into classrooms across the United States.
The academy will help address the gap in structured, accessible AI training and provide a national model for AI-integrated curriculum and teaching that puts educators in the driver’s seat.
Teachers are facing tremendous technological changes, which include the challenges of navigating AI wisely, ethically and safely. They are overwhelmed and looking for ways to gain the skills they need to help their students succeed. The program is the first partnership between a national union and tech companies, structured to create a sustainable education infrastructure for AI.
“To best serve students, we must ensure teachers have a strong voice in the development and use of AI,” said Brad Smith, vice chair and president of Microsoft. “This partnership will not only help teachers learn how to better use AI, it will give them the opportunity to tell tech companies how we can create AI that better serves kids.”
The announcement was made at the headquarters of the AFT’s largest affiliate, the 200,000-member New York City-based UFT, where hundreds of educators were on hand for a three-day training session, including six hours of AI-focused material that highlighted practical, hands-on ways to marry the emerging technology with established pedagogy.
“AI holds tremendous promise but huge challenges—and it’s our job as educators to make sure AI serves our students and society, not the other way around,” said AFT President Randi Weingarten. “The direct connection between a teacher and their kids can never be replaced by new technologies, but if we learn how to harness it, set commonsense guardrails and put teachers in the driver’s seat, teaching and learning can be enhanced.
“The academy is a place where educators and school staff will learn about AI—not just how it works, but how to use it wisely, safely and ethically. This idea started with the partnership between lead partner Microsoft and the AFL-CIO in late 2023. We jointly hosted symposiums over the past two summers, but never reached critical mass to ensure America’s educators are coaches in the game, not spectators on the sidelines. Today’s announcement would not be possible without the cooperation of Microsoft, OpenAI, Anthropic and the leadership at the United Federation of Teachers, and I thank them for their efforts.”
“When it comes to AI in schools, the question is whether it is being used to disrupt education for the benefit of students and teachers or at their expense. We want this technology to be used by teachers for their benefit, by helping them to learn, to think and to create,” said Chris Lehane, chief global affairs officer of OpenAI. “This AI academy will help ensure that AI is being deployed to help educators do what they do best—teach—and in so doing, help advance the small-‘d’ democratizing power of education.”
“We’re at a pivotal moment in education, and how we introduce AI to educators today will shape teaching for generations to come,” said Anthropic Co-founder and Head of Policy Jack Clark. “That’s why we’re thrilled to partner with the AFT to empower teachers with the knowledge and tools to guide their students through this evolving landscape. Together, we’re building a future where AI supports great teaching in ethical and effective ways.”
Anchored by the New York City facility, the National Academy for AI Instruction will serve as a premier hub for AI education, equipped with cutting-edge technology and operated under the leadership of the AFT and a coalition of public and private stakeholders. The academy will begin instruction later this fall and then scale nationally. Over five years, the program aims to support 400,000 educators—approximately 10% of the U.S. teaching workforce—reaching more than 7.2 million students.
Through the training of thousands of teachers annually and by offering credential pathways and continuing education credits, the academy will facilitate broad AI instruction and expand opportunity for all.
“For so long, there have been many new programs that were weaponized against educators,” said UFT President Michael Mulgrew. “Our goal is to develop a tool that gives educators the ability to train their AI and incorporate it into their instructional planning, giving them more one-on-one time with their students.”
“Sometimes as a teacher you suffer burnout and you can’t always communicate to the class in the right voice or find the right message and I feel like these AI tools we are working with can really help with that—especially phrasing things in a way that helps students learn better,” says Marlee Katz, teacher for the deaf and hard of hearing in multiple New York City public schools in the borough of Queens. “The tools don’t take away your voice, but if I need to sound more professional or friendly or informed, I feel like these tools are like a best friend that can help you communicate. I love it.”
“As an instructional technology specialist for over 27 years, watching educators learn and work with AI reminds me of when teachers were first using word processors. We are watching educators transform the way people use technology for work in real time, but with AI it’s on another unbelievable level because it’s just so much more powerful,” says Vincent Plato, New York City Public Schools K-8 educator and UFT Teacher Center director. “I think the UFT and the AFT were right to say AI is something educators should take ownership of, not only because it can assist with enhancing the way they interact with and meet the needs of students, but also because AI assists with educator workflow. It can be a thought partner when they’re working by themselves, whether that’s late-night lesson planning, looking at student data, or filing any types of reports—a tool that’s going to be transformative for teachers and students alike.”
Together, Microsoft, OpenAI, Anthropic and the AFT are proud to help our nation’s teachers become AI-proficient educators and to leverage this unique partnership to democratize access to AI skills, ensuring that students from all backgrounds are prepared to thrive in an AI-driven future.
Designed by leading AI experts and experienced educators, the program will include workshops, online courses, and hands-on training sessions, ensuring that teachers are well-equipped to navigate an AI-driven future. It will bring together interdisciplinary research teams to drive innovation in AI education and establish a national model for AI-integrated teaching environments. Finally, the academy will provide ongoing support and resources to help educators stay updated with the latest advancements in AI. Innovation labs and feedback cycles will ensure these tools are refined based on actual classroom experiences.
Through scalable training modules, virtual learning environments and credential pathways, the program empowers a diverse range of educators to become confident leaders in AI instruction. In turn, these teachers will bring AI literacy, ethical reasoning and creative problem-solving into classrooms that might otherwise be left behind in the digital transformation.
The idea for the academy was first proposed by venture capitalist, educator, activist and AFT member Roy Bahat. He is currently the head of Bloomberg Beta, the venture capital arm of Bloomberg, and will be joining the academy’s board of directors.
For more information about the National Academy for AI Instruction, please visit AIinstruction.org.
About the AFT
The AFT represents 1.8 million pre-K through 12th-grade teachers; paraprofessionals and other school-related personnel; higher education faculty and professional staff; federal, state and local government employees; nurses and healthcare workers; and early childhood educators.
About Microsoft
Microsoft creates platforms and tools powered by AI to deliver innovative solutions that meet the evolving needs of our customers. The technology company is committed to making AI available broadly and doing so responsibly, with a mission to empower every person and every organization on the planet to achieve more.
About OpenAI
OpenAI is an AI research and deployment company with a mission to ensure that artificial general intelligence benefits all of humanity.
About Anthropic
Anthropic is an AI safety and research company that creates reliable, interpretable, and steerable AI systems. Anthropic’s flagship product is Claude, a large language model trusted by millions of users worldwide. Learn more about Anthropic and Claude at anthropic.com.
About UFT
The UFT represents nearly 200,000 members and is the sole bargaining agent for most of the nonsupervisory educators who work in the New York City public schools. This includes teachers; retired members; classroom paraprofessionals; and many other school-based titles including school secretaries, school counselors, occupational and physical therapists, family child care providers, nurses, and other employees at several private educational institutions and some charter schools.
Source: Microsoft