
Education

Melania Trump is right that the robots are here – but she’s wrong on how to handle it | Arwa Mahdawi

MelanAI is coming for your kids

“The robots are here,” proclaimed Melania Trump during an AI event at the White House on Thursday. It can be hard to parse the first lady’s poker face and expressionless voice, but this certainly wasn’t a statement of regret. Rather Trump, reading from a script encased in a very analogue binder, was taking it upon herself to help America’s children navigate AI, which she touted as the “greatest engine of progress in the history of the United States of America”.

“As leaders and parents, we must manage AI’s growth responsibly,” she said in her speech. “During this primitive stage, it is our duty to treat AI as we would our own children.”

Does that mean foisting them off to a nanny or, as Donald Trump once did with Donald Trump Jr, abandoning them at the airport because they’re five minutes late? No, it means “empowering, but with watchful guidance”, apparently.

Melania Trump doesn’t grace the White House with her presence particularly often. The first lady has made clear that she is not beholden to things like “duty” or “tradition” like her predecessors. She does what she wants, when she wants. And Thursday’s roundtable on AI is the latest indication that she wants to position herself as a leading figure in the future of technology. Like the rest of her family, the first lady has enthusiastically embraced NFTs and cryptocurrency – and their amazing ability to rapidly generate the Trumps an immense amount of wealth. She’s also boasted about using an AI version of her voice to narrate the audiobook version of Melania. And last month she launched an AI contest for kids in grades K-12.

The first lady isn’t just positioning herself as a leading voice in technology; she’s trying to brand herself as the face of responsible innovation. While announcing her AI contest for kids, for example, she boasted that she’d “championed online safety through the Take It Down Act” (TDA). It’s true that Melania advocated for the TDA, which passed Congress with bipartisan support earlier this year and criminalizes the nonconsensual distribution of intimate imagery (NDII, once known as “revenge porn”). Nevertheless, the legislation is rather more complicated than she’d have it seem.

Image-based sexual abuse (both authentic imagery and AI-generated content) is a serious problem that scholars and activists have been trying to address via legislation for a long time. While it’s commendable that Trump wanted to get involved with the TDA, some people believe she swooped in at the last minute and put her name to a dangerously bastardized version of a model statute that experts developed. Numerous civil rights activists have warned that the TDA has been broadened so much that it will be weaponized against free speech.

“I am gratified that the [TDA] incorporates much of the language of the model federal statute against NDII I first drafted in 2013,” wrote Dr Mary Anne Franks, president of the Cyber Civil Rights Initiative, in a statement earlier this year. “But the Take It Down Act also includes a poison pill: an extremely broad takedown provision that will likely end up hurting victims more than it helps.”

The Electronic Frontier Foundation has similarly warned that the TDA is so broad that it gives the “powerful a dangerous new route to manipulate platforms into removing lawful speech that they simply don’t like”. Indeed, the president has said as much himself. “I’m going to use that bill for myself too if you don’t mind, because nobody gets treated worse than I do online, nobody,” he told a joint session of Congress.

All of which is to say: Melania Trump may not be the best person to help manage AI’s growth responsibly and shield children from potential harm from the technology. But if she is keen on doing this work then I suggest she stop convening taskforces on how to integrate AI into childhood education, and simply ask her husband to stop gutting public education instead. The Trump administration is, for example, attempting to defund Head Start, a federally funded early childhood program for low-income families, and cancelled a grant program that has historically funded educational children’s programs like Sesame Street. The Trump administration is also trying to curtail education about slavery, and Republicans are waging war on Wikipedia to try to remove criticism of Israel. More broadly, book bans and censorship are flourishing under Trump.

Melania Trump is right that the robots are here, and they’re here to stay. But I’m not convinced that the Trump administration is going to responsibly integrate AI into our schools in a way that increases equity and the sum of human knowledge. Rather, I think it’s more likely that all these AI taskforces will succeed in doing is diverting large sums of taxpayer money towards the tech CEOs who have been busy bowing to Trump.

AI “will make a few people much richer and most people poorer”, Geoffrey Hinton, the so-called godfather of AI, told the Financial Times on Friday. Which, I suspect, is precisely why Melania Trump and the coterie of billionaires and tech executives gathered around her at the White House are so excited about it.

Accused rapist Conor McGregor wants to be the next president of Ireland

McGregor recently lost an appeal over a civil court ruling last year awarding damages to a woman who accused him of rape. He’s also had numerous other brushes with the law. Still, that sort of thing doesn’t preclude someone from high office anymore, does it? McGregor wants to be president of Ireland, and Elon Musk is enthusiastically supporting him in that bid.

A venture capitalist went to extreme lengths to punish her surrogate

“Compared to natural conception, carrying a genetically unrelated fetus more than triples the risk of severe, potentially deadly conditions, a statistic surrogates are rarely given,” writes Emi Nietfeld for Wired in a harrowing feature about a venture capitalist, Cindy Bi, who viciously hounded her surrogate when the baby died in utero. Bi then had a healthy baby via another surrogate – who had an emergency hysterectomy in the process. It feels like for-profit surrogacy has been normalized by celebrities; this piece is an essential reminder of the ethical issues involved with the womb-for-hire industry.


Epstein victims say they will compile their own ‘client list’

“We know the names,” one survivor said during a press conference on Wednesday. “Now, together as survivors, we will confidentially compile the names we all know were regularly in the Epstein world.”

RFK Jr hints access to abortion pill could be cut back

There is an enormous amount of evidence showing that mifepristone and misoprostol, commonly known as abortion pills, are safe and effective. The health secretary, however, is claiming otherwise and suggested that access may be curtailed. Meanwhile, Texas just passed a bill banning abortion pills from being mailed to the state.

Laura Loomer thinks Palestinian kids aren’t innocent

The far-right Trump confidante and “proud Islamophobe” recently used her considerable influence to get the Trump administration to block medical visas for sick kids from Gaza. Now she’s justifying this by calling Palestinian kids terrorists. “You think these kids are so innocent?” Loomer said on her podcast. “[Y]ou think little kids are not capable of evil?” I think the real terrorists here may be the people who have created the world’s largest cohort of child amputees and are systematically starving babies to death.

Google has a $45m contract to spread Israeli propaganda

Loomer is not the only one spreading dehumanizing misinformation that is fueling genocide. Drop Site News reports that Google is a “key entity” supporting Netanyahu’s messaging and amplifying misinformation about the famine in Gaza.

The week in pawtriarchy

My spirit animal may well be a raccoon in Kentucky, who recently ate a few too many fermented peaches discarded by a nearby distillery and passed out in a pool of dumpster water. Luckily a passing nurse started doing “compression-only CPR” until the little fella revived. Kentucky Mist Distillery, which makes peach-flavoured moonshine, shared a video of the raccoon resuscitation with a note saying: “PLEASE, DRINK RESPONSIBLY!!” I imagine that particular raccoon has learned that gorging yourself on fermented dumpster peaches can be whiskey business.





DVIDS – News – Lethality, innovation, and transformation through AI education at the U.S. Army School of Advanced Military Studies

THE ARMY UNIVERSITY, FORT LEAVENWORTH, Kansas – In late July 2025, the Advanced Military Studies Program at the School of Advanced Military Studies, known as SAMS, launched its first-ever experimental, three-day, Practical Application of Artificial Intelligence module.

The mission was simple: transform the program with an innovative, hands-on AI learning experience for students and faculty. The purpose was to enable warfighter lethality through AI education and training.

“AI is changing the character of warfare. Our graduates have got to be ready to lead formations powered by AI—and that’s why we did something about it,” said Col. Dwight Domengeaux, Director, SAMS.

Dr. Bruce Stanley, Director, AMSP, envisioned a module that pushed institutional norms about how mid-career officers learn about AI and learn with AI.

“Did we accept risk? Yes. We did—to create a critical learning opportunity for our students,” Stanley remarked. “We knew what was at stake, and we trusted our faculty and students to make it work.”

And make it work they did.

According to AMSP faculty, the module’s experimental instructional design was key, consisting of ten-and-a-half hours of total classroom contact time divided over three lessons.

“We covered a lot of ground with our students in three days,” Dr. Jacob Mauslein, associate professor, AMSP, said. “Subjects ranged from AI theory and ethical considerations of AI, to applying AI tools, and leading AI-enabled organizations.”

A novel feature of the module was that it was developed by AMSP students. As a task in their Future Operational Environment course, six students from the Class of 2025, mentored by two faculty, developed the AI module that would be taught to the Class of 2026. The students’ final draft was adopted almost without change by the faculty.

“Incorporating students as full participants in the process allowed us to co-develop lesson objectives and materials that deeply mattered to them,” said Dr. Luke Herrington, one of the faculty leads for the module.

Meeting students where they were in terms of their AI skills and then taking them to the next level was part of the academic approach for the AI module, Herrington explained.

Maj. Justin Webb, PhD, an AY 2025 AMSP student and one of the module’s developers, explained it this way: “SAMS is a warfighting school—so we chose learning activities that would help us become more lethal warfighters with AI. Using AI tools like CamoGPT, Ask Sage, and others for several hours over three days helped us get there.”

Some students in the AY 2026 class were initially skeptical of using AI.

“At first, I didn’t know what I didn’t know,” said Army Maj. Stuart Allgood, an Armor officer and SAMS student. “But by the end of the first day my thinking about AI had changed. After the second day, I could use AI tools I had never even heard of.”

Maj. Callum Knight, an intelligence officer from the United Kingdom, summed up his experience.

“Before this course I viewed AI as just a data point,” Knight said. “Now that I’ve experienced what’s possible with AI, I realize it’s an imperative that is going to impact everything I do going forward.”

So, what’s next for AI at SAMS?

“Based on what our students got out of this, we intend to add more AI learning moments across the program,” Stanley said. “The priority now is to integrate AI into our upcoming operational warfare practical exercise.”

AMSP is one of the three distinct academic programs within SAMS.

The other two SAMS programs are the Advanced Strategic Leadership Studies Program (ASLSP), a Senior Service College equivalent, and the Advanced Strategic Planning and Policy Program (ASP3), also known as the Goodpaster Scholars, a post-graduate degree program.

Matthew Yandura is an AMSP assistant professor, and retired Army colonel.







Date Taken: 08.29.2025
Date Posted: 09.11.2025 13:34
Story ID: 547863
Location: FORT LEAVENWORTH, KANSAS, US








PUBLIC DOMAIN  







AI in education: Most teachers aware of ChatGPT but few confident in classroom use

Artificial intelligence is reshaping education, but most teachers in American classrooms remain unprepared to use it effectively. A new study by researchers from the University of Tennessee reveals sharp divides in awareness, usage, and confidence among elementary and secondary educators.

Published in Education Sciences, the study AI Literacy: Elementary and Secondary Teachers’ Use of AI-Tools, Reported Confidence, and Professional Development Needs surveyed 242 teachers across grades 3–12 in the southeastern United States. The findings underscore how quickly AI tools like ChatGPT are making their way into schools, while also exposing the challenges teachers face in adapting to them.

Awareness and Use of AI Tools

The research shows that while most teachers have heard of AI writing tools, fewer than half actively use them in the classroom. ChatGPT emerged as the most widely recognized and utilized platform, followed by Grammarly, Magic School, and Brisk. Secondary teachers consistently reported higher levels of familiarity, understanding, and actual use than their elementary counterparts.

Overall, 92 percent of respondents were aware of ChatGPT, but only 47 percent reported using AI in their teaching. Among those who did, AI was employed primarily for lesson planning, assessment design, feedback generation, and text differentiation for diverse learners. Some teachers also relied on AI for professional tasks such as drafting emails or creating instructional visuals. Despite these uses, adoption remains uneven, with elementary educators often citing developmental appropriateness and tool complexity as barriers.

The survey found that 80 percent of secondary teachers recognized AI tutoring systems, compared to just over half of elementary teachers. Awareness of grading and assessment tools such as Turnitin and Gradescope was also significantly higher among secondary teachers. This divide suggests that students in higher grades are more likely to experience AI-enhanced instruction than younger peers, potentially widening existing gaps in exposure to technology.

Confidence and Classroom Challenges

Confidence proved to be a decisive factor in whether teachers integrated AI. Secondary educators reported greater self-assurance across all categories, from lesson planning and grading to communicating with families and districts about AI use. Elementary teachers scored consistently lower, especially in areas like troubleshooting AI-related issues or explaining integration policies to parents.

Teachers expressed generally positive feelings about AI, particularly its ability to save time and improve productivity. Many reported satisfaction with tools like ChatGPT and Grammarly, which were seen as reliable supports for planning and student feedback. However, stress and uncertainty surfaced among those concerned about ethical implications, accuracy of AI-generated content, and student misuse.

The most frequently cited challenge was academic dishonesty. Teachers noted that students often bypassed learning by copying AI responses wholesale. Other problems included crafting effective prompts, dealing with inaccurate outputs, and navigating an overwhelming variety of platforms. Ethical concerns such as data privacy, algorithmic bias, and the risk of diminishing student critical thinking skills added to the complexity of adoption.

Even among teachers who felt positive, caution was evident. Many described AI as useful for generating first drafts or lesson outlines but insisted on editing outputs to suit classroom needs. A minority expressed skepticism altogether, fearing that reliance on AI could erode essential student skills and deepen inequities.

Professional Development Needs

Perhaps the most striking finding of the study is the lack of professional development. Only 24 percent of teachers reported receiving any AI-related training. Among those, many relied on self-teaching, peer collaboration, or district-led sessions rather than formal instruction. While some found these resources moderately helpful, the majority identified structured workshops as their top need.

Eighty percent of all respondents said professional development workshops would boost their confidence in using AI. Teachers also called for clear permission policies, access to reliable tools, and guidance on best practices. Elementary teachers were particularly likely to request training, reflecting their lower reported confidence compared to secondary colleagues.

Despite growing awareness of AI’s potential, the absence of system-wide support leaves teachers struggling to keep pace with technological change. The study warns that without targeted training and consistent policies, educators risk either misusing AI or failing to leverage its benefits at all. This could lead to fragmented practices across districts and further disparities between elementary and secondary levels.

Policy and Research Implications

The authors argue that one-size-fits-all solutions will not work, as the needs of elementary and secondary teachers diverge sharply. Differentiated professional development, ongoing support, and robust ethical guidelines are essential. Schools must also provide leadership, infrastructure, and a clear vision to reassure teachers about integrating AI responsibly.

The research highlights the importance of addressing not just technical competence but also the ethical and social dimensions of AI use in education. Teachers voiced concerns about academic integrity, algorithmic bias, and equity of access. Addressing these issues requires policies that balance innovation with protection for students.

Future studies, the authors suggest, should track how teacher awareness and confidence evolve over time as AI tools advance. Longitudinal research could shed light on whether training efforts actually lead to better classroom practices and improved student outcomes. There is also a pressing need to examine which forms of professional development are most effective in building AI literacy among teachers across different contexts.





Digital Learning for Africa: Ministers, Practitioners and Pathways

Frameworks for the Futures of AI in Education.

Countries are using UNESCO’s Readiness Assessment Methodology (RAM) to map weaknesses and opportunities and to guide national AI strategies; the two latest additions are Namibia and Mozambique.

The DRC is prioritizing digital transformation projects, investment partnerships for infrastructure, AI adapted to local languages, and personalized learning, organized around governance, regulation, human capital, and industrialization. RAM has supported startups, scholarships and capacity-building, pointing toward sovereign digital infrastructures and talent retention.

Dr. Turyagenda notes that youth are already using AI and need a structured framework; the National AI Strategy and Digital Agenda Strategy align with UNESCO, AU and East African frameworks, with teachers involved from the start.

Preparing learners for an AI-driven economy.

Namibia—among the first in Southern Africa to launch a RAM process—is developing a national AI strategy and a National AI Institute. Hon. Mr. Dino Ballotti, Deputy Minister of Education, Innovation, Youth, Sports, Arts and Culture of Namibia underscores that the national approach is “humanity first” and context-specific—“Namibian problems require Namibian solutions”—with priorities in school connectivity, teacher and learner readiness, and data availability. Indigenous communities are actively involved in developing tools and digital technologies. 


