Tools & Platforms
Ted Cruz bill would let Big Tech go wild with AI experiments for 10 years

In the one-pager, Cruz noted that “most US rules and regulations do not squarely apply to emerging technologies like AI.” So “rather than force AI developers to design inferior products just to comply with outdated Federal rules, our regulations should become more flexible,” Cruz argued.
Thierer noted that once regulations are passed, they’re rarely updated, and he backed Cruz’s logic that AI firms may need support to override old rules that could restrict AI innovation. Consider the “many new applications in healthcare, transportation, and financial services,” Thierer said, which “could offer the public important new life-enriching service” unless “archaic rules” are relied on to “block those benefits by standing in the way of marketplace experimentation.”
“When red tape grows without constraint and becomes untethered from modern marketplace realities, it can undermine innovation and investment, undermine entrepreneurship and competition, raise costs to consumers, limit worker opportunities, and undermine long-term economic growth,” Thierer wrote.
But Thierer acknowledged that Cruz seems particularly focused on establishing a national framework to “address the rapid proliferation of AI legislative proposals happening across the nation,” noting that over 1,000 AI-related bills were introduced in the first half of this year.
NetChoice similarly celebrated the bill’s “innovation-first approach,” claiming “the SANDBOX Act strikes an important balance” between “giving AI developers room to experiment” and “preserving necessary safeguards.”
To critics, the bill’s potential to restrict new safeguards remains a primary concern. Steinhauser, of the Alliance for Secure AI, suggested that critics may get answers to their biggest questions about how well the law would work to protect public safety “in the coming days.”
His group noted that during this summer alone, “multiple companies have come under bipartisan fire for refusing to take Americans’ safety seriously and institute proper guardrails on their AI systems, leading to avoidable tragedies.” They cited Meta allowing chatbots to be creepy to kids and OpenAI rushing to make changes following the death of a child who had used ChatGPT to research suicide.
Tools & Platforms
Destination AI | IT Pro

The AI revolution is here, and it’s accelerating fast. Don’t get left behind. In this video, TD SYNNEX reveals its Destination AI program, your strategic guide to navigating this rapidly expanding market. Learn how to transform your business and gain a competitive edge with our all-in-one program that offers everything from expert training and certification to ongoing sales support. Join us and harness the incredible power of AI to build a future-proof business.
Tools & Platforms
AI, Workforce and the Shift Toward Human-Centered Innovation

Artificial intelligence has been in the headlines for years, often linked to disruption, automation and the future of work. While most of that attention has focused on the private sector, something equally significant has been unfolding within government. Quietly, and often cautiously, public agencies are exploring how AI can improve the way they serve communities, and the impact this will have on the workforce is only just beginning to take shape.
Government jobs have always been about service, often guided by mission over profit. But they’re also known for process-heavy routines, outdated software and siloed information systems. With AI tools now capable of analyzing data, drafting documents or answering repetitive inquiries, the question facing government leaders isn’t just whether to adopt AI, but how to do so in a way that enhances, rather than replaces, the human element of public service.
A common misconception is that AI in government will lead to massive job cuts. But in practice, the trend so far leans toward augmentation. That means helping people do their jobs more effectively, rather than automating them out of existence.
For example, in departments where staff are overwhelmed by paperwork — think benefits processing, licensing or permitting — AI can help flag missing information, route forms correctly or even draft routine correspondence. These tasks take up hours of staff time every week. Offloading them allows employees to focus on more complex or sensitive issues that require human judgment.
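To make the idea concrete, here is a minimal sketch of what that kind of intake triage could look like in code. Everything in it is hypothetical: the required fields, the routing table, and the function are illustrative inventions, not drawn from any real agency system.

```python
# A minimal sketch of AI-assisted intake triage for permit applications.
# All field names and routing rules are hypothetical, for illustration only.

REQUIRED_FIELDS = {"applicant_name", "address", "permit_type", "signature_date"}

ROUTING = {  # hypothetical mapping of permit type to review queue
    "building": "Plan Review Desk",
    "business": "Licensing Office",
}

def triage(application: dict) -> dict:
    """Flag missing fields and suggest where to route the form."""
    filled = {key for key, value in application.items() if value}
    missing = sorted(REQUIRED_FIELDS - filled)
    queue = ROUTING.get(application.get("permit_type", ""), "General Intake")
    return {"missing_fields": missing, "route_to": queue}

app = {"applicant_name": "J. Doe", "permit_type": "building", "address": ""}
print(triage(app))
# {'missing_fields': ['address', 'signature_date'], 'route_to': 'Plan Review Desk'}
```

Even a rule-based helper like this illustrates the augmentation point: the system surfaces gaps and suggests a queue, while a human still reviews and decides.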
Social workers, for instance, aren’t being replaced by machines. But they might be supported by systems that identify high-risk cases or suggest resources based on prior outcomes. That kind of assistance doesn’t reduce the value of their work. It frees them up to do the work that matters most: listening, supporting and solving problems for real people.
That said, integrating AI into public workflows isn’t just about buying software or installing a tool. It touches something deeper: the culture of government work.
Public agencies tend to be cautious, operating under strict rules around fairness, accountability and transparency. Those values don’t always align neatly with how AI systems are built or how they behave. If an AI model makes a decision about who receives services or how resources are distributed, who’s accountable if it gets something wrong?
This isn’t just a technical issue; it’s about trust. Agencies need to take the time to understand the tools they’re using, ask hard questions about bias and equity, and include a diverse range of voices in the conversation.
One way to build that trust is through transparency. When AI is used to support decisions, citizens should know how it works, what data it relies on, and what guardrails are in place. Clear communication and visible oversight go a long way toward building public confidence in new technology.
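One hedged sketch of what that transparency could look like in practice is a structured record published alongside each AI-assisted decision, capturing which tool was used, what data it saw, and who signed off. The schema below is purely illustrative; no agency standard or real system is implied.

```python
# A sketch of an audit record an agency might keep for AI-assisted decisions.
# Every field name here is an illustrative assumption, not a standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    case_id: str
    model_name: str            # which tool produced the recommendation
    model_version: str
    inputs_used: list[str]     # data sources the model relied on
    recommendation: str
    human_reviewer: str        # the accountable official who signed off
    overridden: bool           # whether the human changed the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    case_id="2025-00421",
    model_name="benefits-eligibility-assistant",
    model_version="1.3.0",
    inputs_used=["application form", "income verification"],
    recommendation="approve",
    human_reviewer="case worker #17",
    overridden=False,
)
print(json.dumps(asdict(record), indent=2))
```

The design choice worth noting is the explicit human_reviewer and overridden fields: they keep accountability with a named official rather than with the model.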
Perhaps the most important piece of this puzzle is the workforce itself. If AI is going to become a fixture in government, then the people working in government need to be ready.
This doesn’t mean every employee needs to become a coder. But it does mean rethinking job roles, offering training in data literacy, and creating new career paths for roles like AI governance, digital ethics and human-centered design.
Government has a chance to lead by example here. By investing in employees, not sidelining them, public agencies can show that AI can be part of a more efficient and humane system, one that values experience and judgment while embracing new tools that improve results.
There’s no single road map for what AI in government should look like. Different agencies have different needs, and not every problem can or should be solved with technology. But the direction is clear: Change is coming.
What matters now is how that change is managed. If AI is used thoughtfully — with clear purpose, oversight and human input — it can help governments do more with less, while also making jobs more rewarding. If handled poorly, it risks alienating workers and undermining trust.
At its best, AI should serve the public interest. And that means putting people first, not just the people who receive services, but also the people who provide them.
John Matelski is the executive director of the Center for Digital Government, which is part of e.Republic, Government Technology’s parent company.
Tools & Platforms
SpamGPT Is the AI Tool Fueling Massive Phishing Scams

How Does SpamGPT Work?
SpamGPT works like any email marketing platform. It offers tools such as an SMTP/IMAP email server, email testing, and real-time campaign performance monitoring. There’s also an AI marketing assistant named KaliGPT built directly into the dashboard.
However, where it differs from email marketing platforms is that it is specifically designed for creating spam and phishing emails to steal information and financial data from users.
More specifically, SpamGPT is “designed to compromise email servers, bypass spam filters, and orchestrate mass phishing campaigns with unprecedented ease,” according to a report from Varonis, a data security platform.