
SAIC-Google alliance targets AI at the tactical edge



Science Applications International Corp.’s new strategic alliance with Google Public Sector highlights several trends that are shaping how companies pursue opportunities in the federal market.

Their alliance, announced in late July, is an example of how systems integrators and commercial technology companies are developing deeper and more intimate partnerships.

“When we talk about strategic partnerships, that’s completely divorced from any particular deal,” said Lauren Knausenberger, who leads strategic partnerships at SAIC and is a former Air Force chief information officer. “It has more to do with what part of the market, what problem, what area of technology are we going to do something amazing in.”

In other words, alliances like SAIC’s and Google’s are about the long-term mission instead of a specific contract opportunity.

That focus on mission is another trend we are tracking. In this case, SAIC and Google want to bring the power of cloud computing out to the tactical edge.

SAIC is the first integrator to sell Google’s distributed cloud capability into the federal market, particularly the defense sector.

“One of our core focuses is operationalizing AI, and there are so many military, national security, law enforcement, and homeland security use cases for operationalizing AI at the edge,” Knausenberger said.

Google’s approach includes a ruggedized device, now certified at the secret and top-secret levels for enterprise use, that is built to bring a distributed cloud environment to the battlefield.

“It’s a really interesting time to partner with them because one of our key focuses is operationalizing AI,” she said.

The partnership combines SAIC’s capabilities, open-source algorithms and Google’s ruggedized device.

“You take a smattering of other partners as necessary and you start to have the ingredients to do just about any use case and do it very quickly,” Knausenberger said.

SAIC and Google are talking to customers now about pilots and demos.

“We’re going to be the first mission integrator to deploy their hardware for real mission applications,” she said.

Cloud computing at the edge is critical because traditional connectivity is often not feasible, not secure, or both. Applications include autonomous drone operations in contested environments, where operational security is critical.

“Think about being a warfighter and you’re running autonomous drones and sensors and you’re in a place where you either don’t have connectivity or you should not give away your location by generating connectivity,” Knausenberger said.

One area of focus for SAIC is intelligence operations, particularly opportunities to push more analysis and interpretation to the edge.

“You run your fleet from the cloud. That’s where you do really heavy data operations,” she said. “But you want to sense and inference at the edge. Those are things you need to make rapid decisions.”

The partnership also includes the training of 1,000 SAIC employees on the Google cloud platform. The devices will be in SAIC’s labs, where engineers from both companies will work together on practical applications.

The ruggedized devices also include NVIDIA A100 GPUs.

“We are going to have a few extra terabytes of chip performance hanging out in our labs,” she said. “My engineers are so excited.”

The partnership is not exclusive. SAIC also has close relationships with Google’s rivals Amazon Web Services and Microsoft.

Those relationships are critical in the defense sector because “all of our DOD customers are in multiple clouds,” Knausenberger said.

She sees more of these close-knit partnerships developing across the market. For SAIC, it is a question of identifying a need and deciding on the best way to address it.

“Are we going to build? Or buy? Or partner?” she said. That framing captures how SAIC decides whether to invest internally, make an acquisition or collaborate with other companies.

That logic extends to SAIC Ventures, where the company invests in promising startup companies. See this episode of our podcast with SAIC executive Michael Hauser, who manages the venture fund for the company.

For partnerships, a mutual focus on mission is essential.

“No company has all of the capabilities needed to solve many of these large-scale problems, and so, our intent is to bring together the best capabilities that we can,” Knausenberger said.






Destination AI | IT Pro



The AI revolution is here, and it’s accelerating fast. Don’t get left behind. In this video, TD SYNNEX reveals its Destination AI program—your strategic guide to navigating this rapidly expanding market. Learn how to transform your business and gain a competitive edge with our all-in-one program that offers everything from expert training and certification to ongoing sales support. Join us and harness the incredible power of AI to build a future-proof business.




AI, Workforce and the Shift Toward Human-Centered Innovation



Artificial intelligence has been in the headlines for years, often linked to disruption, automation and the future of work. While most of that attention has focused on the private sector, something equally significant has been unfolding within government. Quietly, and often cautiously, public agencies are exploring how AI can improve the way they serve communities, and the impact this will have on the workforce is only just beginning to take shape.

Government jobs have always been about service, often guided by mission over profit. But they’re also known for process-heavy routines, outdated software and siloed information systems. With AI tools now capable of analyzing data, drafting documents or answering repetitive inquiries, the question facing government leaders isn’t just whether to adopt AI, but how to do so in a way that enhances, rather than replaces, the human element of public service.

A common misconception is that AI in government will lead to massive job cuts. But in practice, the trend so far leans toward augmentation. That means helping people do their jobs more effectively, rather than automating them out of existence.


For example, in departments where staff are overwhelmed by paperwork — think benefits processing, licensing or permitting — AI can help flag missing information, route forms correctly or even draft routine correspondence. These tasks take up hours of staff time every week. Offloading them allows employees to focus on more complex or sensitive issues that require human judgment.

Social workers, for instance, aren’t being replaced by machines. But they might be supported by systems that identify high-risk cases or suggest resources based on prior outcomes. That kind of assistance doesn’t reduce the value of their work. It frees them up to do the work that matters most: listening, supporting and solving problems for real people.

That said, integrating AI into public workflows isn’t just about buying software or installing a tool. It touches something deeper: the culture of government work.

Public agencies tend to be cautious, operating under strict rules around fairness, accountability and transparency. Those values don’t always align neatly with how AI systems are built or how they behave. If an AI model makes a decision about who receives services or how resources are distributed, who’s accountable if it gets something wrong?

This isn’t just a technical issue; it’s about trust. Agencies need to take the time to understand the tools they’re using, ask hard questions about bias and equity, and include a diverse range of voices in the conversation.

One way to build that trust is through transparency. When AI is used to support decisions, citizens should know how it works, what data it relies on, and what guardrails are in place. Clear communication and visible oversight go a long way toward building public confidence in new technology.

Perhaps the most important piece of this puzzle is the workforce itself. If AI is going to become a fixture in government, then the people working in government need to be ready.

This doesn’t mean every employee needs to become a coder. But it does mean rethinking job roles, offering training in data literacy, and creating new career paths for roles like AI governance, digital ethics and human-centered design.

Government has a chance to lead by example here. By investing in employees, not sidelining them, public agencies can show that AI can be part of a more efficient and humane system, one that values experience and judgment while embracing new tools that improve results.

There’s no single road map for what AI in government should look like. Different agencies have different needs, and not every problem can or should be solved with technology. But the direction is clear: Change is coming.

What matters now is how that change is managed. If AI is used thoughtfully — with clear purpose, oversight and human input — it can help governments do more with less, while also making jobs more rewarding. If handled poorly, it risks alienating workers and undermining trust.

At its best, AI should serve the public interest. And that means putting people first, not just the people who receive services, but also the people who provide them.

John Matelski is the executive director of the Center for Digital Government, which is part of e.Republic, Government Technology’s parent company.






SpamGPT Is the AI Tool Fueling Massive Phishing Scams



How Does SpamGPT Work?

SpamGPT works much like a conventional email marketing platform. It offers features such as SMTP/IMAP email server integration, email testing, and real-time campaign performance monitoring. There’s also an AI marketing assistant named KaliGPT built directly into the dashboard.

However, where it differs from email marketing platforms is that it is specifically designed for creating spam and phishing emails to steal information and financial data from users.

More specifically, SpamGPT is “designed to compromise email servers, bypass spam filters, and orchestrate mass phishing campaigns with unprecedented ease,” according to a report from Varonis, a data security platform.


