Tools & Platforms
$33B Defense Bill Drives AI and Drone Technology Expansion
Safe Pro Group (NASDAQ:SPAI) is positioning itself to capitalize on the newly passed One Big Beautiful Bill Act (OBBBA), which allocates $33 billion for drone and AI defense modernization. The bill, signed on July 4, 2025, includes $13.5 billion for unmanned systems and $16 billion for AI initiatives across defense and border operations.
The company’s AI-powered drone imagery analysis platform, featuring SpotlightAI™ technology, can detect over 150 types of landmines and unexploded ordnance in under a second. The system has already analyzed 1.66 million drone images and identified 28,000+ threats across 6,705 hectares in Ukraine. Safe Pro is also developing integration with the U.S. Army’s ATAK platform for enhanced force protection capabilities.
Positive
- Potential access to $33 billion in new government funding for AI and drone technologies
- Proven track record with 1.66 million drone images analyzed and 28,000+ threats identified
- Integration capabilities with U.S. Army’s ATAK platform
- Battle-tested technology with real-world implementation in Ukraine
- Scalable solution available both on-site and cloud-based through AWS
Negative
- High dependence on government contracts and funding
- Competitive market for defense contracts with established prime contractors
- Success contingent on securing portion of newly allocated funding
Insights
Safe Pro positioned to benefit from $33B defense spending on AI/drones, leveraging proprietary threat detection technology with extensive Ukraine deployment data.
The One Big Beautiful Bill Act (OBBBA), signed on July 4, 2025, represents a substantial opportunity for Safe Pro Group (NASDAQ:SPAI). With $13.5 billion earmarked for unmanned systems and $16 billion for AI initiatives across defense and border operations, the bill creates a sizable addressable market for the company's threat detection technology.
Safe Pro’s technology stands out due to its specialized capabilities in explosive threat detection. Their AI platform can identify over 150 types of landmines and unexploded ordnance in milliseconds, providing critical battlefield intelligence. What gives their solution particular credibility is its extensive real-world deployment data from Ukraine operations—1.66 million drone images analyzed and 28,000+ threats identified across 6,705 hectares (approximately the size of Manhattan).
The company’s strategic integration with the TAK software ecosystem, particularly the U.S. Army’s Android Tactical Assault Kit (ATAK), positions them well to capture defense contracts. This integration enables real-time threat information sharing across thousands of soldier-carried and vehicle-mounted devices already in use by U.S. Armed Forces, making adoption potentially smoother than competing solutions.
While the press release suggests ongoing discussions with the Department of Defense and prime contractors, it does not confirm any secured contracts yet.
Company advancing ongoing discussions with the Department of Defense and prime contractors that stand to benefit from the massive new funding
AVENTURA, FL / ACCESS Newswire / July 9, 2025 / Safe Pro Group Inc. (NASDAQ:SPAI) (“Safe Pro” or the “Company”), an emerging leader in artificial intelligence (AI)-powered security and threat detection solutions, today announces that it sees significant opportunities for its patented AI-powered computer vision technologies for the rapid analysis of drone-based imagery following the passage of the U.S. government’s One Big Beautiful Bill Act (OBBBA). The bill allocates as much as $33 billion for drone and AI defense modernization.
Signed into law on July 4, 2025, the OBBBA represents a historic federal commitment to unmanned systems (or drones) and artificial intelligence. Included in the bill is $13.5 billion for unmanned systems and $16 billion for AI initiatives across defense and border operations.
“As the United States seeks to harness the power of drones and AI to support the warfighter and protect its borders, we believe that the passage of the OBBBA creates significant opportunities for our unique, battle-tested imagery analysis technology within the Department of Defense. We look forward to advancing our activities with the multiple program executive offices within the DoD and prime contractors supporting customers on fulfilling new AI capabilities with this new funding,” said Dan Erdberg, chairman and CEO of Safe Pro Group Inc.
The Company’s AI-powered drone-based imagery analysis platform can detect and identify over 150 types of landmines and unexploded ordnance in a fraction of a second per image, rapidly delivering mission-critical situational awareness. Whether deployed on the edge in real-time (SpotlightAI™ OnSite) or leveraging Amazon Web Services (AWS) on the cloud (SpotlightAI™), the Company’s Safe Pro Object Threat Detection (SPOTD) technology can scale globally, offering solutions for rapid battlefield analysis as well as supporting large-scale commercial and humanitarian demining operations. Powering the Company’s SPOTD technology, Safe Pro’s unique real-world datasets include high-resolution drone imagery and GPS-tagged geospatial data encompassing over 1.66 million drone images analyzed to date, and 28,000+ threats identified across 6,705 hectares in Ukraine, an area nearly equivalent in size to Manhattan.
This dataset is also being used to develop new, real-time force protection solutions for soldiers by integrating the technology into the TAK software ecosystem, which includes the U.S. Army’s ATAK (Android Tactical Assault Kit) platform. Integration of SPOTD into ATAK can allow small explosive threats identified in drone-based imagery by the Company’s AI technology to be shared instantly across potentially hundreds of thousands of soldier-carried and vehicle-mounted, wirelessly connected devices widely utilized by the U.S. Armed Forces.
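To make the integration concrete: TAK clients exchange position reports as Cursor-on-Target (CoT) XML events, so a detection from a system like SPOTD would plausibly enter the ecosystem as a CoT message. The Python sketch below is illustrative only; the type codes, error estimates, and helper names are assumptions for this example, not Safe Pro's actual integration.

```python
import uuid
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone


def cot_event_for_detection(lat, lon, confidence, threat_type="landmine"):
    """Build a minimal Cursor-on-Target (CoT) event for a detected ground threat.

    Field values here are illustrative; a real integration would follow the
    receiving TAK deployment's conventions for type codes and detail fields.
    """
    now = datetime.now(timezone.utc)
    iso = lambda t: t.strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    event = ET.Element("event", {
        "version": "2.0",
        "uid": f"spotd-{uuid.uuid4()}",  # unique ID per detection
        "type": "a-h-G",                 # atom / hostile / ground (generic)
        "how": "m-p",                    # machine-predicted position
        "time": iso(now),
        "start": iso(now),
        "stale": iso(now + timedelta(minutes=30)),  # expire after 30 min
    })
    ET.SubElement(event, "point", {
        "lat": f"{lat:.6f}", "lon": f"{lon:.6f}",
        "hae": "0.0", "ce": "10.0", "le": "10.0",  # coarse error estimates
    })
    detail = ET.SubElement(event, "detail")
    ET.SubElement(detail, "remarks").text = (
        f"{threat_type} detected, confidence {confidence:.0%}"
    )
    return ET.tostring(event, encoding="unicode")


# Example: a high-confidence detection at a point in central Ukraine.
xml_msg = cot_event_for_detection(48.379400, 31.165600, 0.97)
```

Any ATAK client subscribed to the same feed could then render such an event as a hostile-ground marker, which is what would let a drone-derived detection reach soldier-carried devices without manual relay.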
For more information about Safe Pro Group, its subsidiaries, and technologies, please visit https://safeprogroup.com and connect with us on LinkedIn, Facebook, X and Instagram.
About Safe Pro Group Inc.
Safe Pro Group Inc. is a mission-driven technology company delivering AI-enabled security and defense solutions. Through cutting-edge platforms like SPOTD, Safe Pro provides advanced situational awareness tools for defense, humanitarian, and homeland security applications globally. It is a leading provider of artificial intelligence (AI) solutions specializing in drone imagery processing, pairing commercially available “off-the-shelf” drones with its proprietary machine learning and computer vision technology to enable rapid identification of explosive threats, a safer and more efficient alternative to traditional human-based analysis. Built on a cloud-based ecosystem powered by Amazon Web Services (AWS), Safe Pro Group’s scalable platform targets commercial, government, law enforcement and humanitarian markets, where its Safe Pro AI software, Safe-Pro USA protective gear and Airborne Response drone-based services can work in synergy to deliver safety and operational efficiency. For more information on Safe Pro Group Inc., please visit https://safeprogroup.com/.
Forward-Looking Statements
Some of the statements in this press release are forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, Section 21E of the Securities Exchange Act of 1934 and the Private Securities Litigation Reform Act of 1995, which involve risks and uncertainties. Although Safe Pro Group believes the expectations reflected in such forward-looking statements are reasonable as of the date made, expectations may prove to have been materially different from the results expressed or implied by such forward-looking statements. Safe Pro Group has attempted to identify forward-looking statements by terminology including “believes,” “estimates,” “anticipates,” “expects,” “plans,” “projects,” “intends,” “potential,” “may,” “could,” “might,” “will,” “should,” “approximately” or other words that convey uncertainty of future events or outcomes to identify these forward-looking statements. These statements are only predictions and involve known and unknown risks, uncertainties and other factors, including market and other conditions. There can be no assurance that inclusion in the Russell Microcap Index will have any appreciable effect on the Company’s market capitalization or market liquidity. More detailed information about the Company and the risk factors that may affect the realization of forward-looking statements is set forth under Item 1A. in the Company’s most recently filed Form 10-K and updated from time to time in the Company’s Form 10-Q filings and in other filings with the Securities and Exchange Commission (the “SEC”), copies of which may be obtained from the SEC’s website at www.sec.gov. Any forward-looking statements contained in this press release speak only as of its date. Safe Pro Group undertakes no obligation to update any forward-looking statements contained in this press release to reflect events or circumstances occurring after its date or to reflect the occurrence of unanticipated events, except as required by law.
Media Relations for Safe Pro Group Inc.:
media@safeprogroup.com
Investor Relations for Safe Pro Group Inc.:
Brett Maas, Managing Partner
Hayden IR
(646) 536-7331
Brett@haydenir.com
SOURCE: Safe Pro Group Inc.
View the original press release on ACCESS Newswire
FAQ
How much funding does the One Big Beautiful Bill Act allocate for AI and drone defense?
The OBBBA allocates $33 billion in total, with $13.5 billion for unmanned systems and $16 billion for AI initiatives across defense and border operations.
What is Safe Pro’s (SPAI) main AI technology capability?
Safe Pro’s AI technology can detect and identify over 150 types of landmines and unexploded ordnance in less than a second per image through their SPOTD (Safe Pro Object Threat Detection) system.
How many threats has Safe Pro’s (SPAI) technology identified in Ukraine?
Safe Pro has identified over 28,000 threats across 6,705 hectares in Ukraine, analyzing 1.66 million drone images in total.
How is Safe Pro (SPAI) integrating with military systems?
Safe Pro is integrating its SPOTD technology into the TAK software ecosystem, including the U.S. Army’s ATAK (Android Tactical Assault Kit) platform for enhanced force protection capabilities.
What deployment options does Safe Pro (SPAI) offer for its AI technology?
Safe Pro offers two deployment options: SpotlightAI OnSite for real-time edge deployment and SpotlightAI for cloud-based deployment through Amazon Web Services (AWS).
Empowering, not replacing: A positive vision for AI in executive recruiting
Tamara is a thought leader in Digital Journal’s Insight Forum.
“So, the biggest long‑term danger is that, once these artificial intelligences get smarter than we are, they will take control — they’ll make us irrelevant.” — Geoffrey Hinton, Godfather of AI
Modern AI often feels like a threat, especially when the warnings come from the very people building it. Sam Altman, the salesman behind ChatGPT (not an engineer, but the face of OpenAI and someone known for convincing investors), has said with offhand certainty, as casually as ordering toast or predicting the sun will rise, that entire categories of jobs will be taken over by AI. That includes roles in health, education, law, finance, and HR.
Some companies now won’t hire people unless AI fails at the given task, even though these models hallucinate, invent facts, and make critical errors. They’re replacing people with a tool we barely understand.
Even leaders in the field admit they don’t fully understand how AI works. In May 2025, Dario Amodei, CEO of Anthropic, said the quiet part out loud:
“People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned. This lack of understanding is essentially unprecedented in the history of technology.”
In short, no one is fully in control of AI. A handful of Silicon Valley technocrats have appointed themselves arbiters of the direction of AI, and they work more or less in secret. There is no real government oversight. They are developing without any legal guardrails. And those guardrails may not arrive for years, by which time they may be too late to have any effect on what’s already been let out of Pandora’s Box.
So we asked ourselves: Using the tools available to us today, why not model something right now that can in some way shape the discussion around how AI is used? In our case, this is in the HR space.
What if AI didn’t replace people, but instead helped companies discover them?
Picture a CEO in a post-merger fog. She needs clarity, not another résumé pile. Why not introduce her to the precise leader she didn’t know she needed, using AI?
Instead of turning warm-blooded professionals into collateral damage, why not use AI thoughtfully, ethically, and practically to help solve the problems that now exist across HR, recruitment, and employment?
An empathic role for AI
Most job platforms still rely on keyword-stuffed resumés and keyword-matching algorithms. As a result, excellent candidates often get filtered out simply for using the “wrong” terms. That’s not just inefficient; it’s malpractice, hurting companies and candidates alike. It’s an example of technology poorly applied, yet it is the norm today.
Imagine instead a platform that isn’t keyword driven, one that guides candidates through discovery to create richer, more dimensional profiles showcasing the unique strengths, instincts, and character that shape real-world impact. This would go beyond skillsets and job titles to the deeper personal qualities that differentiate equally experienced candidates, resulting in a leadership candidate better fitted to any given role.
One leader, as an example, may bring calm decisiveness in chaos. Another may excel at building unity across silos. Another might be relentless at rooting out operational bloat and uncovering savings others missed.
A system like this that helps uncover those traits, guides candidates to articulate them clearly, and discreetly learns about each candidate to offer thoughtful, evolving insights, would see AI used as an advocate, not a gatekeeping nemesis.
For companies, this application would reframe job descriptions around outcomes, not tasks. Instead of listing qualifications, the tool helps hiring teams articulate what they’re trying to achieve: whether it’s growth, turnaround, post-M&A integration, or cost efficiency, and then finds the most suitable candidate match.
Fairness by design
Bias is endemic in HR today: ageism, sexism, and discrimination based on disability and race. Imagine a platform that actively discourages bias. Gender, race, age, and even profile photos are optional. Unlike most recruiting platforms, the system doesn’t reward those who include a photo, and it doesn’t penalize those who don’t know how to game a résumé.
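One way to make that commitment concrete is to keep protected attributes out of the scoring path entirely, so identical outcome profiles always score identically. The sketch below is a simplified illustration; the field names, weights, and helper functions are invented for this example and are not any real platform's schema.

```python
from dataclasses import dataclass, field

# Attributes the scorer is never allowed to see.
PROTECTED = {"gender", "race", "age", "photo"}


@dataclass
class CandidateProfile:
    # Outcome alignment scores, e.g. {"turnaround": 0.9, "cost_efficiency": 0.5}
    outcomes: dict
    # Optional fields (photo, age, ...) stored for display only -- never scored.
    optional: dict = field(default_factory=dict)


def match_score(profile: CandidateProfile, role_needs: dict) -> float:
    """Score a candidate purely on outcome alignment with the role's needs.

    Only the `outcomes` dict participates; everything in `optional`,
    including all protected attributes, is ignored by construction.
    """
    assert PROTECTED.isdisjoint(profile.outcomes), "protected attribute leaked into scoring"
    total_weight = sum(role_needs.values()) or 1.0
    return sum(
        weight * profile.outcomes.get(need, 0.0)
        for need, weight in role_needs.items()
    ) / total_weight


# A role framed around outcomes, not tasks.
role = {"turnaround": 0.6, "cost_efficiency": 0.4}
a = CandidateProfile({"turnaround": 0.9, "cost_efficiency": 0.5})
b = CandidateProfile({"turnaround": 0.9, "cost_efficiency": 0.5},
                     optional={"photo": "headshot.jpg", "age": 61})
# Identical outcomes -> identical scores, regardless of optional fields.
```

The design choice worth noting is structural rather than statistical: the scorer cannot be biased by a photo or an age because those values never reach it, which is a stronger guarantee than asking a model to ignore them.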
Success then becomes about alignment. Deep expertise. Purposeful outcomes.
This design gives companies what they want: competence. And gives candidates what they want: a fair chance.
This is more than an innovative way to use current AI technology. It’s a value statement about prioritizing people.
Why now
We’re at an inflection point.
Researchers like Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean forecast in AI 2027 that superhuman AI (AGI, then superintelligence) will bring changes in the next decade more disruptive than the Industrial Revolution.
If they’re even a little right, then the decisions being made today by a small circle in Silicon Valley will affect lives everywhere.
It’s important to step into the conversation now to help shape AI’s real-world role. The more human-centred, altruistic, practical uses of AI we build and model now, the more likely these values will help shape laws, norms, and infrastructure to come.
This is a historic moment. How we use AI now will shape the future.
People-first design
Every technology revolution sparks fear. But this one is unique: it’s the first since the Industrial Revolution in which machines are being designed, as an explicit goal, to replace people. Entire roles and careers may vanish.
But that isn’t inevitable either. It’s a choice.
AI can be built to assist, not erase. It can guide a leader to their next opportunity. It can help a CEO find a partner who unlocks transformation. It can put people out front, not overshadow them.
We invite others in talent tech and AI to take a similar stance. Let’s build tools for people. Let’s avoid displacement and instead elevate talent. Let’s embed honesty, fairness, clarity, and alignment in everything we make.
We don’t control the base models. But we do control how we use them. And how we build with them.
AI should amplify human potential, not replace it. That’s the choice I’m standing behind.
ABA ethics opinion addresses jury selection discrimination from consultants and AI technology
When using peremptory challenges, lawyers should not strike jurors based on discrimination, according to an ethics opinion by the ABA’s Standing Committee on Ethics and Professional Responsibility.
That also applies to client directives, as well as guidance from jury consultants or AI software, according to Formal Opinion 517, published Wednesday.
Such conduct violates Model Rule 8.4(g), which prohibits harassment and discrimination in the practice of law based on “race, sex, religion, national origin, ethnicity, disability, age, sexual orientation, gender identity, marital status or socioeconomic status.”
A lawyer does not violate Rule 8.4(g) by exercising peremptory challenges on a discriminatory basis where not forbidden by other law, according to the opinion.
The U.S. Supreme Court explained such conduct violates the Equal Protection Clause of the 14th Amendment in Batson v. Kentucky (1986) and J.E.B. v. Alabama ex rel. T.B. (1994). In Batson, a lawyer struck a series of Black jurors in a criminal trial. In J.E.B., a lawyer struck a series of males in a paternity child support action.
The ethics opinion addresses when a Batson-type violation also constitutes professional misconduct under Rule 8.4(g).
Seemingly, if a lawyer commits such a violation, the lawyer also runs afoul of Rule 8.4(g). After all, in both settings the lawyer has engaged in a form of racial discrimination.
“Striking prospective jurors on discriminatory bases in violation of substantive law governing juror selection is not legitimate advocacy. Conduct that has been declared illegal by the courts or a legislature cannot constitute ‘legitimate advocacy,’” the ethics opinion states.
However, Comment [5] to the model rule provides that a trial judge’s finding of a Batson violation does not, by itself, establish a violation of Rule 8.4(g).
The comment, according to the ethics opinion, gives “guidance on the evidentiary burden in a disciplinary proceeding.”
For example, in a disciplinary hearing a lawyer may be able to offer “a more fulsome explanation” for why they struck certain jurors. Furthermore, there is a “higher burden of proof” in lawyer discipline proceedings.
The ethics opinion also explains that a lawyer violates Rule 8.4(g) only if the lawyer knew or reasonably should have known that the exercise of the peremptory challenges was unlawful. The lawyer may genuinely believe they had legitimate, nondiscriminatory reasons for striking certain jurors, such as the jurors’ age or whether they paid attention during the jury selection process.
According to the opinion, the question then centers on “whether ‘a lawyer of reasonable prudence and competence’ would have known that the challenges were impermissible.”
The opinion also addresses the difficult question of what happens when a client or jury consultant offers purportedly nondiscriminatory reasons for striking certain jurors and the lawyer follows that advice. Here, a reasonably competent and prudent lawyer should be able to recognize whether the client’s or jury consultant’s reasons were legitimate or pretextual.
Additionally, the opinion addresses a scenario in which an AI-powered program ranks prospective jurors and applies those rankings, unknown to the lawyer, in a discriminatory manner. Lawyers should use “due diligence to acquire a general understanding of the methodology employed by the juror selection program,” the opinion states.
The ABA published a press release about the opinion on July 9.
Big Tech, NYC teachers union join forces in new AI initiative that’s drawing concerns
A new partnership between New York City’s teachers union and Big Tech companies has some educators wondering whether they’re at the forefront of improving instruction through artificial intelligence or welcoming a Trojan horse that threatens learning.
The American Federation of Teachers, the umbrella organization for the local United Federation of Teachers union, announced Tuesday it’s teaming up with Microsoft, OpenAI and Anthropic on a $23 million initiative to offer free AI training and software to AFT members. The investment, which is being covered by the companies, includes creating a new training space dubbed the “National Center for AI” on a floor of the UFT headquarters in Lower Manhattan.
UFT President Michael Mulgrew said at a press conference that some of his union’s educators started trainings this month, adding that the initiative will expand nationally over the next year. The initiative is aimed at K-12 teachers, is voluntary and focuses on tasks like lesson planning, according to the union and companies. AI can summarize texts and create worksheets and assessments.
“This tool could truly be a great gift to the children of this country and to education overall,” Mulgrew said. “But we’re not going to get there unless it’s driven by the people doing the work in the most important place in education, which is the classroom.”
Some teachers said they are skeptical about the initiative. Jia Lee, a special education teacher at the Earth School in the East Village, likened the arrangement to “letting the fox in the henhouse” and said she was “horrified” to see the union linking arms with the tech companies.
“I think a lot of educators would say we’re not anti-AI, we just have concerns about a lot of things that have not been explained or researched yet,” Lee said.
City education officials have sent mixed signals about integrating AI in classrooms. The local education department initially blocked OpenAI tool ChatGPT in schools in 2023, then lifted the ban. Schools spokesperson Nicole Brownstein said the agency is working on a “framework” for AI use, but declined to comment on the union’s new initiative.
Gerry Petrella, Microsoft’s general manager for U.S. policy, said the partnership would help the company figure out how to integrate AI into education “in a responsible and safe way.” He said he hoped AI tools would save teachers time so they could focus more on students and their individual needs.
National surveys show the technology is already creeping into students’ and teachers’ lives. A Harvard University survey last fall found half of high-school and college students use AI for some schoolwork, while a new Gallup poll found 60% of teachers reported using AI at some point over the past school year.
Annie Read Boyle, a fourth-grade teacher at P.S. 276 in Battery Park, said she hasn’t used AI much but is impressed with what she’s seen so far. Last year, she used a product called Diffit when she was teaching about the American Revolution.
“I said, ‘I want an article that’s fourth-grade level,’ and in 10 seconds [it] spit out this beautiful worksheet that would’ve taken me hours to create,” she said. “I was like, ‘Wow, this is really impressive and it just saved me so much time.’”
Boyle said she could imagine similar tools differentiating assignments based on students’ learning styles, abilities or language. Still, she cited concerns about data privacy, copyright infringement in materials and encouraging students to take shortcuts instead of developing critical-thinking skills.
“It’s such an important tool for teachers to know how to use so that we can teach the kids but it could really hurt the development process for kids,” she said, adding that she is also concerned about AI’s environmental impact and potential to drive job loss.
AFT President Randi Weingarten said Tuesday she hoped to learn from past mistakes involving technology, including social media’s harms on young people’s mental health. She said the union’s partnership with tech companies is a way to influence how AI is used with children.
“We can also make sure we have the guardrails we need to protect the safety and security of kids,” said Weingarten, whose union includes 1.8 million members nationwide. “That is now becoming our job. … We have to have a phone line back to [tech hub] Seattle.”