AI Insights
A new couple’s experiment with ChatGPT

One recent evening, my new boyfriend and I found ourselves in a spat.
I accused him of giving in to his anxious thoughts.
“It’s hard to get out of my head,” David said. “Mental spiraling is part of the nature of sensitivity sometimes — there’s emotional overflow from that.”
“Well, spiraling is bad,” said I, a woman who spirals.
Our different communication styles fueled the tense exchange. While I lean practical and direct, he’s contemplative and conceptual.
I felt we could benefit from a mediator. So, I turned to my new relationship consultant, ChatGPT.
AI enters the chat
Almost half of Generation Z uses artificial intelligence for dating advice, more than any other generation, according to a recent nationwide survey by Match Group, which owns the dating apps Tinder and Hinge. Anecdotally, I know women who’ve been consulting AI chatbots about casual and serious relationships alike. They gush over crushes, upload screenshots of long text threads for dissection, gauge long-term compatibility, resolve disagreements and even soundboard their sexts.
Kat, a friend of mine who uses ChatGPT to weed out dating prospects, told me she found it pretty objective. Where emotions might otherwise get in the way, the chatbot helped her uphold her standards.
“I feel like it gives better advice than my friends a lot of the time. And better advice than my therapist did,” said Kat, who asked to go by her first name due to concerns that her use of AI could jeopardize future romantic connections. “With friends, we’re all just walking around with our heads chopped off when it comes to emotional situations.”
When apps are challenging our old ways of finding connection and intimacy, it seems ironic to add another layer of technology to dating. But could Kat be on to something? Maybe a seemingly neutral AI is a smart tool for working out relationship issues, sans human baggage.
For journalistic purposes, I decided to immerse myself in the trend.
Let’s see what ChatGPT has to say about this …
Drawing on the theory that couples should seek therapy before major problems arise, I proposed to my boyfriend of less than six months that we turn to an AI chatbot for advice, assess the bot’s feedback and share the results. David, an artist who’s always up for a good experimental project (no last name for him, either!), agreed to the pitch.
Our first foray into ChatGPT-mediated couples counseling began with a question suggested by the bot to spark discussion about the health of our relationship. Did David have resources to help him manage his stress and anxiety? He did — he was in therapy, exercised and had supportive friends and family. That reference to his anxiety then sent him on a tangent.
He reflected on being a “sensitive artist type.” He felt that women, who might like that in theory, don’t actually want to deal with emotionally sensitive male partners.
“I’m supposed to be unflappable but also emotionally vulnerable,” David said.
He was opening up. But I accused him of spiraling, projecting assumptions and monologuing.
While he was chewing over big ideas, I tried to steer the conversation back to our interpersonal friction. That’s where ChatGPT came in: I recorded our conversation and uploaded the transcript to the bot. And then I posed a question. (Our chats have been heavily edited for brevity — it talks a lot.)
David was incredulous. “It feels like a cliché,” he said.
Deflection, I thought. I turned back to ChatGPT and read on:
It was a damning summary. Was I, as ChatGPT suggested, carrying a burnout level of emotional labor at this early stage in the relationship?
Pushing for objectivity
A human brought me back to reality.
“It might be true that you were doing more emotional labor [in that moment] or at the individual level. But there’s a huge bias,” said Myra Cheng, an AI researcher and computer science Ph.D. student at Stanford University.
The material that large language models (LLMs), such as ChatGPT, Claude and Gemini, are trained on — the internet, mostly — has a “huge American and white and male bias,” she said.
And that means all the cultural tropes and patterns of bias are present, including the stereotype that women disproportionately do the emotional labor in work and relationships.
Cheng was part of a research team that compared two datasets of personal advice: one written by humans responding to real-world situations, the other consisting of judgments made by LLMs in response to posts on Reddit’s AITA (“Am I the A**hole?”) advice forum.
The study found that LLMs consistently exhibit higher rates of sycophancy — excessive agreement with or flattery of the user — than humans do.
For soft-skill matters such as advice, sycophancy in AI chatbots can be especially dangerous, Cheng said, because there’s no certainty about whether its guidance is sensible. In one recent case revealing the perils of a sycophantic bot, a man who was having manic episodes said ChatGPT’s affirmations had prevented him from seeking help.
So, striving for something closer to objectivity in the biased bot, I changed my tack.
There it was again: I was stuck doing the emotional labor. I accused ChatGPT of continuing to lack balance.
“Why do you get ‘clear communication’?” David asked me, as if I chose those words.
At this point, I asked Faith Drew, a licensed marriage and family therapist based in Arizona who has written about the topic, for pointers on how to bring ChatGPT into my relationship.
It’s a classic case of triangulation, according to Drew. Triangulation is a coping strategy in relationships in which a third party — a friend, a parent or an AI, for example — is brought in to ease tension between two people.
There’s value in triangulation, whether the source is a bot or a friend. “AI can be helpful because it does synthesize information really quickly,” Drew said.
But triangulation can go awry when you don’t keep sight of your partner in the equation.
“One person goes out and tries to get answers on their own — ‘I’m going to just talk to AI,'” she said. “But it never forces me back to deal with the issue with the person.”
The bot might not even have the capacity to hold me accountable if I’m not feeding it all the necessary details, she said. Triangulation in this case is valuable, she said, “if we’re asking the right questions to the bot, like: ‘What is my role in the conflict?'”
The breakthrough
In search of neutrality and accountability, I calibrated my chatbot once more. “Use language that doesn’t cast blame,” I commanded. Then I sent it the following text from David:
I feel like you accuse me of not listening before I even have a chance to listen. I’m making myself available and open and vulnerable to you.
“What’s missing on my end?” I asked ChatGPT.
After much flattery, it finally answered:
I found its response simple and revelatory. Plus, it was accurate.
He had been picking up a lot of slack in the relationship lately. He made me dinners when work kept me late and set aside his own work to indulge me in long-winded, AI-riddled conversations.
I reflected on a point Drew made — about the importance of putting work into our relationships, especially in the uncomfortable moments, instead of relying on AI.
“Being able to sit in the distress with your partner — that’s real,” she said. “It’s OK to not have the answers. It’s OK to be empathic and not know how to fix things. And I think that’s where relationships are very special — where AI could not ever be a replacement.”
Here’s my takeaway. ChatGPT had only a small glimpse into our relationship and its dynamics. Relationships are fluid, and the chatbot can only ever capture a snapshot. I called on AI in moments of tension, and I could see how that reflex could fuel our discord rather than help mend it. ChatGPT was hasty in picking sides and too quick to declare something a pattern.
Humans don’t always think and behave in predictable patterns. And chemistry is a big factor in compatibility. If an AI chatbot can’t feel the chemistry between people — sense it, recognize that magical thing that happens in three-dimensional space between two imperfect people — it’s hard to put trust in the machine when it comes to something as important as relationships.
A few times, we both felt that ChatGPT gave objective and creative feedback, offered a valid analysis of our communication styles and defused some disagreements.
But it took a lot of work to get somewhere interesting. In the end, I’d rather invest that time and energy — what ChatGPT might call my emotional labor — into my human relationships.
AI Insights
Gaining AI advantage: The need for trusted autonomy, transparency and control

The Department of Defense is racing to deploy artificial intelligence from central command to the tactical edge to ensure decision dominance in future conflicts. However, AI experts and former military intelligence officials warn in a new report that military leaders face a fundamental obstacle: deploying autonomous AI agents without a deeper foundation of trust and operational control poses significant risks of fragmentation, flawed outcomes and mission failure.
The stakes for managing AI effectively in the military are rising as global adversaries speed up their use of commercial AI and leaders confront the emerging threat of what one AI expert in the report called “algorithmic warfare.” Given that a growing share of the commercial and customized AI acquired by the U.S. military operates inside so-called “black boxes,” experts warn that distrust of AI output will hinder the Pentagon’s progress, especially if commanders cannot verify the reasoning behind a recommendation or trust the data underpinning it.
The report suggests that without a shift toward transparent, configurable, and explainable AI, the DoD risks mission failure and ceding the advantage to its rivals, even if it continues to invest billions in modernization.
The new report, titled “The AI control advantage: Trusted autonomy, on your terms,” produced by Scoop News Group on behalf of Seekr, argues that to achieve true decision dominance, defense leaders must move beyond acquiring fragmented, siloed AI tools. It lays out the case for taking a broader platform-based approach that provides a command-and-control layer for AI itself, ensuring that autonomous agents operate with explainable logic and in alignment with commander’s intent, from the enterprise cloud to the tactical edge.
The report, based on insights from former senior military and intelligence officials, highlights three major factors shaping the military’s approach to AI:
Confronting insight gaps and trust deficits
The DoD’s aging systems and dashboards generally fail to provide the insights needed to make quick decisions on the ground. This “insight gap” is exacerbated by a dangerous “trust deficit” in AI output, says Lisa Costa, former U.S. Space Force Chief Technology and Innovation Officer and now a Senior Advisor to Seekr, in the report. Many AI applications function as black boxes, obscuring how they arrive at a recommendation. This lack of transparency makes it nearly impossible for commanders to verify the logic or trust the source of AI-generated recommendations, posing potentially fatal risks in high-stakes operational environments where humans have only seconds to make critical decisions.
This forces an untenable choice between speed and safety, says Costa. “Our adversaries are moving forward with commercial AI. Waiting isn’t an option. However, trust is not optional, even if commercial AI is used. How can a commander execute a mission based on an AI recommendation if they cannot verify its reasoning or trust its source?”
True autonomy requires orchestration from the enterprise to the edge
Additionally, the report says, effective military AI cannot be confined to a central cloud. It must be deployable as autonomous agents to the warfighter, operating in disconnected and denied environments. This requires an infrastructure that can create and manage these agents, pushing them from powerful, centralized resources out to a small form-factor device on the front lines, explains Derek Britton, SVP of Government at Seekr and a former U.S. Air Force intelligence officer.
“It’s all about creating the agentic processes at the various levels, using enterprise cloud capabilities… to develop human-centric AI agents, but then having the ability to push them out from the enterprise cloud to the tactical cloud node, then all the way out to the edge on a PC or a small form-factor device,” he says.
Fragmented solutions cannot keep pace with ‘algorithmic warfare’
The future of conflict will continue to evolve as adversaries target U.S. capabilities dynamically and at machine speed, and U.S. forces respond in kind, creating a mounting contest between algorithms. A defense strategy built on disparate point solutions, each with its own vulnerabilities and no common framework for updates, is dangerously fragile, warns John Chao, Seekr’s Director of Federal Products and a former U.S. Marine Corps Special Operations Command Intelligence Operator.
He argues that defense leaders need to look beyond isolated AI tools and consider adopting a unified platform approach capable of developing, deploying and orchestrating trustworthy AI agents that can be updated rapidly across the enterprise and out to the tactical edge to maintain a competitive advantage.
Key takeaways for defense leaders
The report maintains that to gain the AI advantage, the imperative is to act now. “Mission owners can start by solving discrete but critical and urgent problems using pre-built, out-of-the-box commercial AI solutions that are transparent and configurable for their needs, without compromising safety and trust,” says Britton.
The report highlights four “non-negotiable principles” for embracing this platform approach. Among them is a platform that stresses data and algorithmic transparency, radical explainability, correctability, continuous improvement and training agility. It also emphasizes the need for speed, pointing to the success Seekr has achieved with its AI-Ready Data Engine, which it says automates data preparation 2.5 times faster and at 90% lower cost than traditional methods.
Listen to a “deep dive” podcast discussion highlighting the findings and recommendations of the report, created by Scoop News Group using NotebookLM.
This article and the full report were produced by Scoop News Group for DefenseScoop and sponsored by Seekr.
AI Insights
AI requirements are racking up across government, GAO says

Federal agencies are facing an onslaught of artificial intelligence requirements, a new government watchdog report detailed, with mandates stemming from executive orders, federal laws, advisory guidance and other sources.
As of July, there were nearly 100 different objectives related to the emerging technology that might be considered government-wide standards, according to the Government Accountability Office.
“AI technologies can drive economic growth and support scientific advancements that improve the conditions of our world,” the GAO said in its correspondence to Congress. “It also holds substantial promise for improving the operations of government agencies. However, AI technologies also pose risks that can negatively impact individuals, groups, organizations, communities, society, and the environment.”
The goal of the report, the watchdog said, was to understand the various AI requirements facing the government and which bodies hold responsibility related to the technology.
The review included current requirements for federal agencies, like creating inventories of AI use cases and updating AI use policies. It also examined broader efforts, like the National AI Initiative, which focuses on goals like increasing research and development of the technology and investing in computing resources.
“Federal agencies’ efforts to implement AI have been guided by a variety of legislative and executive actions, as well as federal guidance,” GAO continued. “Congress has enacted legislation, and the President has issued EOs, to assist agencies in implementing AI in the federal government.”
The office reviewed new artificial intelligence initiatives created by the current and former administrations, stretching from the first Trump administration’s executive order on artificial intelligence and the signing of the AI Training Act to more recent guidance from the Office of Management and Budget.
Overall, GAO found that 10 different bodies had a stake in reviewing the U.S. government’s AI efforts, and that federal laws, executive orders, and guidance had produced 94 different expectations related to the technology, including reviews related to risk mitigation, investment strategies, and usage policies.
The GAO sent a draft of the report to OMB, the Office of Science and Technology Policy, the Commerce Department, the General Services Administration and the National Science Foundation. OSTP, Commerce and NSF responded with technical comments, while GSA declined to provide comments and OMB did not respond to GAO’s request for comments on its findings.
AI Insights
New study shows how AI is reshaping the telco value chain

The IBM Institute for Business Value study shows that generative AI is live in customer care for 69% of telecoms. Meanwhile, agentic AI—capable of autonomous decision-making—is being used by 44% of communications service providers (CSPs). These technologies are enabling real-time insights, personalized experiences and operational efficiency across the board.
Momentum is building in areas such as network automation, edge intelligence and service assurance, but leading CSPs are already pushing further.
For example, Bharti Airtel, a leading CSP in India, has deployed an AI-powered anti-spam network that flags over 8 billion spam calls and 1 billion spam SMS messages. It identifies nearly 1 million spammers daily. The company also launched an AI-driven RAN energy management solution, expected to save USD 12 million annually while reducing its carbon footprint.
Meanwhile, China Mobile has introduced over 24 AI products. One of them, Lingxi—an intelligent customer assistant—handles 90% of first-line inquiries and has boosted customer satisfaction by 10% in pilot regions. The company also uses AI-powered predictive analytics to reduce network repair times by 30% and AI-based energy management to dynamically optimize power usage across its RAN infrastructure.
As AI becomes embedded in critical infrastructure, telecom providers are turning to performance dashboards to bring transparency and accountability to AI-driven initiatives. These tools help shift AI from a black box to a visible engine for business value—tracking model drift, triggering retraining and alerting teams when KPIs fall below thresholds. Governance dashboards also support regulatory compliance by offering transparency logs for audit purposes.
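The study doesn’t describe an implementation, but the threshold-alerting logic such dashboards encode is simple to picture. Below is a minimal Python sketch of a KPI check that flags any reading below its floor and suggests retraining; every name and number in it (KpiReading, check_kpis, the sample thresholds) is an illustrative assumption, not any vendor’s actual API.

from dataclasses import dataclass

@dataclass
class KpiReading:
    name: str      # e.g., "customer_satisfaction"
    value: float   # latest observed value
    floor: float   # alert when value drops below this threshold

def check_kpis(readings: list[KpiReading]) -> list[str]:
    """Return an alert message for every KPI that has fallen below its floor."""
    return [
        f"ALERT: {r.name} at {r.value:.2f} (floor {r.floor:.2f}) -- review model, consider retraining"
        for r in readings
        if r.value < r.floor
    ]

# Hypothetical snapshot: a dip in model accuracy here stands in for model drift.
snapshot = [
    KpiReading("customer_satisfaction", 0.78, 0.80),
    KpiReading("first_call_resolution", 0.91, 0.85),
    KpiReading("model_accuracy", 0.88, 0.90),
]
for alert in check_kpis(snapshot):
    print(alert)

In practice the same check would feed a dashboard widget, a retraining pipeline trigger or a paging system rather than print statements, but the principle the report describes is this loop: observe, compare against thresholds, alert, retrain.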
To ensure sustained impact, continuous monitoring and agile feedback loops are essential. But measuring the right things matters equally. Focusing solely on cost can obscure gains in customer experience or business growth.
That insight is why leading telecom adopters track a balanced set of KPIs—most often cost savings, customer satisfaction, AI-driven revenue growth and operating margin. Over the past year, CSPs have reported real, measurable improvements across these high-priority performance areas.
By anchoring AI initiatives in business outcomes and operational KPIs, CSPs can ensure that innovation translates into growth, efficiency and long-term competitive advantage.