Tools & Platforms
FTC launches inquiry into AI chatbots acting as companions and their effects on children

(AP) – The Federal Trade Commission has launched an inquiry into several social media and artificial intelligence companies about the potential harms to children and teenagers who use their AI chatbots as companions.
EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
The FTC said Thursday it has sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, ChatGPT maker OpenAI and xAI.
The FTC said it wants to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the chatbots.
The move comes as a growing number of kids use AI chatbots for everything from homework help to personal advice, emotional support and everyday decision-making. That’s despite research on the harms of chatbots, which have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who killed himself after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.
Character.AI said it is looking forward to “collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space’s rapidly evolving technology.”
“We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature,” the company said. “We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”
Snap said its My AI chatbot is “transparent and clear about its capabilities and limitations.”
“We share the FTC’s focus on ensuring the thoughtful development of generative AI, and look forward to working with the Commission on AI policy that bolsters U.S. innovation while protecting our community,” the company said in a statement.
Meta declined to comment on the inquiry, and Alphabet, OpenAI and xAI did not immediately respond to messages seeking comment.
OpenAI and Meta earlier this month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen’s account.
Parents can choose which features to disable and “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post that says the changes will go into effect this fall.
Regardless of a user’s age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a better response.
Meta also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.
Copyright 2025 The Associated Press. All rights reserved.
Tools & Platforms
Unlocking Human Potential With Technology

In the quiet revolution happening at the intersection of artificial intelligence and disability support, we’re witnessing something exciting: technology finally keeping pace with human ingenuity. The 1.5 billion people worldwide living with disabilities are not just beneficiaries of this transformation. They’re driving it, reshaping how we think about capability, autonomy and the very definition of what it means to be human in an increasingly digital world.
The psychological impact of this shift cannot be overstated. For too long, assistive technology has been clunky, stigmatizing and one-size-fits-all. But AI is changing that narrative, offering personalized solutions that adapt to individual needs rather than forcing individuals to adapt to technology’s limitations. This represents more than technological progress. It’s a radical reimagining of human potential through a hybrid lens, one that harnesses the complementary strengths of natural and artificial assets to their full extent.
Transformation In Action
Consider Polly, an AI-powered device developed by former NASA engineer David Hojah through his company Parrots Inc. Designed to fit onto wheelchairs, Polly uses machine learning to provide real-time voice assistance, cognitive support and telecare solutions that learn from each interaction. This isn’t just about convenience—it’s about cognitive sovereignty, allowing users to maintain independence while receiving the support they need.
The educational landscape is experiencing similar breakthroughs. AI-driven tools like conversational agents, predictive text and personalized learning platforms are supporting students with cognitive, speech, or mobility disabilities by adapting to user preferences and learning from interactions. These systems don’t just accommodate difference—they celebrate it, creating learning environments that respond to neurodiversity as a strength rather than a deficit.
Perhaps most remarkably, AI is revolutionizing communication access. Speech-to-text transcription, sound identification and audio separation technologies are breaking down barriers for people with hearing loss, while visual recognition systems are providing unprecedented independence for those with vision impairments. Microsoft’s partnership with Be My Eyes exemplifies this approach, using high-quality, disability-representative data to improve AI accuracy and reduce bias.
Supporting Supporters
The ripple effects extend far beyond individual users. Caregivers, family members and healthcare providers are finding that AI-powered assistive technologies reduce their emotional and physical burden while improving care quality. Smart monitoring systems can track health metrics, predict potential issues and provide early interventions, allowing caregivers to focus on human connection rather than constant vigilance.
Integrated AI assistants are moving beyond standalone apps to provide seamless, intuitive support that feels natural rather than clinical. This shift represents a psychological breakthrough for caregivers, who often struggle with the tension between wanting to help and fearing they’re enabling dependency. AI systems that promote autonomy while ensuring safety resolve this dilemma beautifully. The next stage is apps that offer a 360º approach, addressing the wellbeing of the caregiver and those in their care 24/7.
Shadows: Risk And Reality
However, the path forward isn’t without its pitfalls. The same AI systems designed to liberate can also marginalize if not carefully designed. Tools like sentiment analysis and toxicity detection models often exhibit biases against people with disabilities, perpetuating harmful stereotypes embedded in their training data, as research from Penn State has shown.
More concerning, studies show that AI systems like ChatGPT demonstrate bias against disability-related resumes, potentially limiting employment opportunities for those who most need technological support to level the playing field. The cruel irony is that the very systems designed to promote inclusion can inadvertently reinforce exclusion.
Privacy concerns loom large as well. AI systems require vast amounts of personal data to function effectively, raising questions about who controls this information and how it might be used. For a community that has historically faced discrimination, the surveillance potential of AI assistive technologies represents a genuine threat to autonomy and dignity.
There’s also the risk of over-reliance. While AI can provide incredible support, it shouldn’t replace human judgment or community connection. The goal isn’t to create AI-dependent individuals but to use technology as a bridge to greater human engagement and self-determination.
The Business Case For Inclusive Innovation
While all of these examples are compelling illustrations of prosocial AI in practice, the transformation has a second virtue: economic sustainability. The global assistive technology market is projected to reach $26.8 billion by 2024, driven not just by moral imperatives but by genuine market demand. Companies like Microsoft, Google and Apple aren’t investing in accessibility features out of charity; they recognize that inclusive design creates better products for everyone.
Consider how closed captioning, originally developed for deaf and hard-of-hearing users, now benefits millions in noisy environments or when audio isn’t available. Voice recognition technology, refined through work with speech disabilities, powers virtual assistants used by billions. This pattern repeats across industries: designing for disability drives innovation that benefits all users.
The European AI Act’s emphasis on accessibility signals that regulatory frameworks are catching up with this reality. Companies that prioritize inclusive AI aren’t just doing good; they’re positioning themselves for long-term success in an increasingly regulated landscape.
The Path Forward: A.B.L.E.
As we open this new chapter of technological capability and human need, four principles should guide our approach:
Adapt with Purpose: AI systems must be designed for personalization, not standardization. Every individual brings unique needs, preferences and strengths. Technology should flex to fit these differences rather than forcing conformity.
Build with Community: The disability community must be centered in design processes, not consulted as an afterthought. Nothing about disabled people should be created without disabled people and this principle becomes even more critical when dealing with AI systems that can perpetuate or challenge existing biases.
Learn Continuously: AI systems should be designed for ongoing learning and improvement, with feedback loops that allow for real-time adjustments based on user experience. This isn’t just about technical optimization—it’s about creating systems that grow with their users.
Ensure Equity: Access to AI-powered assistive technologies shouldn’t depend on economic privilege. The most transformative innovations mean nothing if they’re available only to those who can afford them. This requires intentional effort to ensure broad accessibility and affordability.
The future of AI and disability isn’t just about making life easier for people with disabilities—it’s about creating a world where everyone can contribute their unique talents and perspectives. When we design for the margins, we create solutions that benefit the center. When we prioritize human dignity alongside technological capability, we build systems that serve not just profit margins but human potential.
The revolution is already underway. The question isn’t whether AI will transform disability support — it’s whether we’ll have the wisdom to guide that transformation toward liberation rather than limitation. The choice, as always, is ours.
An Opportunity To Learn More
Note – At the United Nations Science Summit 2025, a session will look at the potential of harnessing prosocial AI to give everyone a chance to thrive. Please join online on September 15th at 11 AM EST / 5 PM CET / 11 PM Malaysia time.
Tools & Platforms
Can technology bridge development gaps?

Artificial intelligence promises to revolutionize economies worldwide, but whether developing nations will benefit or fall further behind depends on choices being made today.
The African Union’s historic Continental AI Strategy, adopted in July 2024, represents both unprecedented ambition and stark reality – while AI could add $1.5 trillion to Africa’s GDP by 2030, the continent currently captures just one per cent of global AI compute capacity despite housing 15 per cent of the world’s population.
This paradox defines the central challenge facing underdeveloped countries, particularly across Africa and South America, as they navigate the AI revolution. With global AI investment reaching $100-130 billion annually while African AI startups have raised only $803 million over five years, the question isn’t whether AI matters for development – it’s whether these regions can harness its transformative potential before the window closes.
The stakes couldn’t be higher. The same mobile revolution that enabled Kenya’s M-Pesa to serve millions without traditional banking infrastructure now offers a template for AI leapfrogging. But unlike mobile phones, AI requires massive computational resources, reliable electricity, and specialized skills that remain scarce across much of the Global South.
Africa awakens to AI’s strategic importance
The momentum building across Africa challenges assumptions about AI relevance in developing contexts. Sixteen African countries now have national AI strategies or policies in development, with Kenya launching its comprehensive 2025-2030 strategy in March and Zambia following suit in November 2024. This represents a 33 per cent increase in strategic planning over just two years, signaling that African leaders view AI not as a luxury but as essential infrastructure.
The African Union’s Continental AI Strategy stands as the world’s most comprehensive development-focused AI framework, projecting that optimal AI adoption could contribute six per cent of the continent’s GDP by 2030. Unlike Western approaches emphasizing innovation for innovation’s sake, Africa’s strategy explicitly prioritizes agriculture, healthcare, education, and climate adaptation – sectors critical to the continent’s 1.3 billion people.
“We’re not trying to copy Silicon Valley,” explains one senior AU official involved in the strategy’s development. “We’re building AI that serves African priorities.” This Africa-centric approach emerges from harsh lessons learned during previous technology waves, when developing countries often became consumers rather than creators of digital solutions.
South America charts cooperative course
Latin America has taken a markedly different but equally strategic approach, leveraging existing regional integration mechanisms to coordinate AI development. The Santiago Declaration, signed by over 20 countries in October 2023, established the Regional Council on Artificial Intelligence, with Chile emerging as the continental leader.
Chile ranks first in the 2024 Latin American Artificial Intelligence Index (ILIA), followed by Brazil and Uruguay as pioneer countries. This leadership reflects substantial investments – Chile committed $26 billion in public investment for its 2021-2030 National AI Policy, while Brazil’s 2024-2028 AI Plan allocates $4.1 billion across 74 strategic actions.
Brazil’s approach particularly demonstrates how developing countries can mobilize resources for AI transformation. The planned Santos Dumont supercomputer aims to become one of the world’s five most powerful, while six Applied Centers for AI focus on agriculture, healthcare, and Industry 4.0 applications. This represents a fundamental shift from viewing AI as imported technology to building indigenous capabilities.
Agriculture proves AI’s development relevance
Critics questioning AI’s relevance to underdeveloped economies need look no further than Hello Tractor’s transformative impact across African agriculture. This Nigerian-founded ‘Uber for tractors’ platform uses AI for demand forecasting and fleet optimization, serving more than 2 million smallholder farmers in over 20 countries. The results are striking: farmers increase incomes by 227 per cent, plant 40 times faster, and achieve three-fold yield improvements through precision timing.
Apollo Agriculture in Kenya and Zambia demonstrates how AI can address financial inclusion challenges that have plagued agricultural development for decades. Using machine learning for credit scoring and satellite data for precision recommendations, the company serves over 350,000 previously unbanked farmers with non-performing loan rates below 2 per cent – outperforming traditional banks while serving supposedly high-risk populations.
These aren’t pilot projects or development experiments. They’re profitable businesses solving real problems with measurable impact.
Investment patterns reveal global disparities
The funding landscape starkly illustrates the development challenges facing AI adoption. Global AI investment has reached $100-130 billion annually, while African AI startups have raised a total of $803 million over five years. Latin American venture capital investment fell to $3.6 billion in 2024, the lowest in five years, with early-stage rounds accounting for 80 per cent of deals.
This investment concentration perpetuates technological dependence. The United States and China hold 60 per cent of all AI patents and produce one-third of global AI publications. Just 100 companies, mainly from these two countries, account for 40 per cent of global AI R&D spending, while 118 countries – mostly from the Global South – remain absent from major AI governance discussions.
Risks of digital colonialism loom large
However, current trends suggest widening rather than narrowing divides. Tech giants Apple, Nvidia, and Microsoft have achieved $3 trillion market values that rival the GDP of the entire African continent. This concentration of AI capabilities in a handful of corporations based in wealthy countries creates dependency relationships reminiscent of colonial-era resource extraction.
Digital colonialism emerges when developing countries become consumers rather than producers of AI systems. Most AI training occurs on Western datasets, creating cultural and linguistic biases that poorly serve non-Western populations. In countries as diverse as Brazil, image searches for babies return predominantly white faces, reflecting those training-data biases.
Toward inclusive AI futures
The path forward requires acknowledging both AI’s transformative potential and persistent barriers to equitable adoption. Infrastructure limitations, skills gaps, and funding disparities create formidable challenges, but successful implementations across agriculture and healthcare demonstrate achievable progress.
Regional cooperation frameworks like the African Union’s Continental AI Strategy and Latin America’s Santiago Declaration offer models for coordinated development that can compete with the concentrated wealth and expertise of traditional tech centers. These approaches emphasize development priorities rather than pure technological advancement, potentially creating more inclusive AI ecosystems.
The mobile revolution precedent suggests optimism about leapfrogging possibilities, but success requires sustained political commitment, adequate funding, and international cooperation. Countries that invest strategically in AI foundations while fostering indigenous innovation can position themselves to benefit from rather than be left behind by the AI transformation.
The global AI divide represents both the greatest risk and the greatest opportunity facing international development in the 21st century. Whether AI bridges or widens global inequalities depends on choices being made today by governments, international organizations, and private sector actors. The stakes – measured in trillions of dollars of economic value and billions of lives affected – demand urgent, coordinated action to ensure AI serves human development rather than merely technological advancement.
The African farmer using Hello Tractor’s AI platform to improve crop yields and the Brazilian patient receiving AI-enhanced diagnostic services demonstrate AI’s development relevance. Whether such success stories become widespread or remain isolated examples depends on the policy foundations being laid across developing countries today. The AI revolution waits for no one – but its benefits need not be predetermined by geography or existing wealth. The window for inclusive AI development remains open, but it will not stay open forever.
(Krishna Kumar is a Technology Explorer & Strategist based in Austin, Texas, USA. Rakshitha Reddy is an AI Engineer based in Atlanta, Georgia, USA)