Tools & Platforms
Destination AI | IT Pro

The AI revolution is here, and it’s accelerating fast. Don’t get left behind. In this video, TD SYNNEX reveals its Destination AI program—your strategic guide to navigating this rapidly expanding market. Learn how to transform your business and gain a competitive edge with our all-in-one program that offers everything from expert training and certification to ongoing sales support. Join us and harness the incredible power of AI to build a future-proof business.
Tools & Platforms
City Detect CEO explains AI technology amid concerns from residents – rocketcitynow.com

Tools & Platforms
FTC launches inquiry into AI chatbots acting as companions and their effects on children

(AP) – The Federal Trade Commission has launched an inquiry into several social media and artificial intelligence companies about the potential harms to children and teenagers who use their AI chatbots as companions.
EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
The FTC said Thursday it has sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, ChatGPT maker OpenAI and xAI.
The FTC said it wants to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the chatbots.
The move comes as a growing number of kids use AI chatbots for everything from homework help to personal advice, emotional support and everyday decision-making. That’s despite research showing that chatbots can give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who killed himself after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.
Character.AI said it is looking forward to “collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space’s rapidly evolving technology.”
“We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature,” the company said. “We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”
Snap said its My AI chatbot is “transparent and clear about its capabilities and limitations.”
“We share the FTC’s focus on ensuring the thoughtful development of generative AI, and look forward to working with the Commission on AI policy that bolsters U.S. innovation while protecting our community,” the company said in a statement.
Meta declined to comment on the inquiry, and Alphabet, OpenAI and xAI did not immediately respond to requests for comment.
OpenAI and Meta earlier this month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen’s account.
Parents can choose which features to disable and “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post that says the changes will go into effect this fall.
Regardless of a user’s age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a better response.
Meta also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.
Copyright 2025 The Associated Press. All rights reserved.
Tools & Platforms
AI, Workforce and the Shift Toward Human-Centered Innovation

Artificial intelligence has been in the headlines for years, often linked to disruption, automation and the future of work. While most of that attention has focused on the private sector, something equally significant has been unfolding within government. Quietly, and often cautiously, public agencies are exploring how AI can improve the way they serve communities, and the impact this will have on the workforce is only just beginning to take shape.
Government jobs have always been about service, often guided by mission over profit. But they’re also known for process-heavy routines, outdated software and siloed information systems. With AI tools now capable of analyzing data, drafting documents or answering repetitive inquiries, the question facing government leaders isn’t just whether to adopt AI, but how to do so in a way that enhances, rather than replaces, the human element of public service.
A common misconception is that AI in government will lead to massive job cuts. But in practice, the trend so far leans toward augmentation. That means helping people do their jobs more effectively, rather than automating them out of existence.
For example, in departments where staff are overwhelmed by paperwork — think benefits processing, licensing or permitting — AI can help flag missing information, route forms correctly or even draft routine correspondence. These tasks take up hours of staff time every week. Offloading them allows employees to focus on more complex or sensitive issues that require human judgment.
Social workers, for instance, aren’t being replaced by machines. But they might be supported by systems that identify high-risk cases or suggest resources based on prior outcomes. That kind of assistance doesn’t reduce the value of their work. It frees them up to do the work that matters most: listening, supporting and solving problems for real people.
That said, integrating AI into public workflows isn’t just about buying software or installing a tool. It touches something deeper: the culture of government work.
Public agencies tend to be cautious, operating under strict rules around fairness, accountability and transparency. Those values don’t always align neatly with how AI systems are built or how they behave. If an AI model makes a decision about who receives services or how resources are distributed, who’s accountable if it gets something wrong?
This isn’t just a technical issue; it’s about trust. Agencies need to take the time to understand the tools they’re using, ask hard questions about bias and equity, and include a diverse range of voices in the conversation.
One way to build that trust is through transparency. When AI is used to support decisions, citizens should know how it works, what data it relies on, and what guardrails are in place. Clear communication and visible oversight go a long way toward building public confidence in new technology.
Perhaps the most important piece of this puzzle is the workforce itself. If AI is going to become a fixture in government, then the people working in government need to be ready.
This doesn’t mean every employee needs to become a coder. But it does mean rethinking job roles, offering training in data literacy, and creating new career paths for roles like AI governance, digital ethics and human-centered design.
Government has a chance to lead by example here. By investing in employees, not sidelining them, public agencies can show that AI can be part of a more efficient and humane system, one that values experience and judgment while embracing new tools that improve results.
There’s no single road map for what AI in government should look like. Different agencies have different needs, and not every problem can or should be solved with technology. But the direction is clear: Change is coming.
What matters now is how that change is managed. If AI is used thoughtfully — with clear purpose, oversight and human input — it can help governments do more with less, while also making jobs more rewarding. If handled poorly, it risks alienating workers and undermining trust.
At its best, AI should serve the public interest. And that means putting people first: not just the people who receive services, but also the people who provide them.
John Matelski is the executive director of the Center for Digital Government, which is part of e.Republic, Government Technology’s parent company.