Tools & Platforms
Advancing digital investigations and intelligence through ethical, human-assisted AI solutions
No matter how large or small an investigation, digital evidence is often the cornerstone of modern intelligence work, whether it’s a criminal case or a defense or intelligence matter. Nearly every person carries a cellphone, and that digital witness is a treasure trove of data. A person’s online behavior — where they go, who they interact with and what they’re searching for — is critical to public safety agencies working to protect our borders. This data, lawfully accessed and analyzed during an investigation or mission, can help make life-saving connections in large-scale investigations such as narcotics or human trafficking networks. Yet it is a lot of data, and sifting through all of it requires a modernized workflow. Enter artificial intelligence — always in lockstep with human oversight.
Each year, Cellebrite releases its Industry Trends Report, and an unsurprising 97% of respondents across multiple federal agencies cite smartphones as a key digital evidence source. Advanced investigative capabilities are badly needed, and federal agencies are increasingly strapped for time and resources. More than a third of investigators said they don’t have enough time to review all relevant data in investigations.
Cellebrite’s survey also shows a majority of investigators agree that digital evidence increases the ability to solve a case and shortens investigations, ultimately freeing up valuable time and resources for federal investigators. This is where AI can help address common investigative challenges and accelerate case closure.
Digital intelligence analysis challenges
Investigative speed is paramount for federal agencies to avoid backlogs. The multitude of devices with which suspects and victims regularly interact — an average case now involves two to five associated devices — means identifying and accessing relevant data takes time and can stall an otherwise efficient investigation.
Beyond the initial struggle to access data from multiple devices, federal investigators have to quickly identify actionable intelligence to advance an investigation. State and local agencies can often lean on a collaborative network, sometimes including regional federal offices, to expedite data collection and analysis. Federal agencies, by contrast, are typically spread across a broader geographic area and lack a shared platform for exchanging evidence across regions or teams, which can slow down investigations.
Investigations must have clear chains of custody to ensure the findings are admissible in court. Only then can examination techniques and evidence insights be presented to prosecutors, judges and juries in a way that is both understandable and compliant with federal law. Without proper due diligence and clear communication, an investigation cannot deliver justice for victims.
How AI can help
AI-powered investigative solutions can quickly sort through large quantities of data and automate repetitive tasks for investigators, allowing them to turn their attention to larger issues and top-level, nuanced decisions. In human trafficking investigations, I’ve seen firsthand how these solutions identify criminal connections and accomplices, leading to the rescue of victims. This digital evidence often points to additional crimes and reveals links to other open cases, saving time and resources otherwise spent manually cross-referencing databases and suspect histories. All of this is done with human verification, and it speeds up the investigative timeline without sacrificing due diligence.
AI can also support sophisticated decision-making by interpreting, analyzing and summarizing critical information from exponentially growing data sets in real time. This analysis points teams in the right direction, so they can find the proverbial needle in the haystack and make more informed decisions during their investigations. AI can help draw meaningful connections and even identify gaps in the evidence.
Beyond its productivity gains, AI has been used to protect federal law enforcement examiners. Digital evidence can be traumatizing, especially when teams are reviewing human trafficking content or child sexual abuse material (CSAM). AI can categorize these kinds of files, and many teams are using it to reduce examiner exposure to distressing photos, videos, messages and other disturbing digital evidence. According to research from the Justice Department, nearly 20% of investigators struggle with burnout, and protecting an agent or officer’s wellbeing helps expand a team’s capacity as case volumes keep growing.
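To make that categorization step concrete, the sketch below shows one way a triage filter could route likely-distressing media into a blurred review queue. It is a minimal illustration, not Cellebrite’s implementation; `score_graphic_content` is a hypothetical stand-in for whatever validated classifier an agency actually deploys, and a human examiner still confirms every category.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MediaItem:
    path: str
    graphic_score: float = 0.0  # 0.0 = likely benign, 1.0 = almost certainly distressing
    needs_blur: bool = False

def score_graphic_content(path: str) -> float:
    """Hypothetical stand-in for a validated image/video classifier."""
    return 0.0  # a real deployment would return the model's confidence for this file

def triage(paths: List[str], threshold: float = 0.8) -> List[MediaItem]:
    """Flag likely-distressing media so examiners see blurred thumbnails first.

    Nothing is hidden or deleted: flagged items remain in the case file, and a
    human examiner still confirms every category before it is relied on.
    """
    items = []
    for p in paths:
        item = MediaItem(path=p, graphic_score=score_graphic_content(p))
        item.needs_blur = item.graphic_score >= threshold
        items.append(item)
    # Present the least distressing material first; flagged items are grouped
    # so examiners can schedule and limit their exposure.
    return sorted(items, key=lambda i: i.graphic_score)
```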
Best practices for applying AI to digital investigations
Make smart AI investments: Federal budgets are tightening, and agencies should invest in technology that can automate the most tedious, but mission-critical, tasks, such as image categorization. Agency leaders should prioritize AI solutions that provide actionable insights to accelerate investigations.
Create an ethical AI policy: Evidence defensibility should always be front-and-center, meaning responsible and transparent AI use is key. Particularly with a lack of official federal policy, agency leaders must determine which use cases of AI are appropriate and clearly communicate these expectations. Data privacy and a sound chain of custody should never be compromised.
Train law enforcement personnel: Once an ethical AI policy is established and tools are implemented, training on appropriate and effective use of these solutions is key. Ongoing conferences, seminars, digital forensics courses and simulation exercises help staff keep pace with advancements in AI. AI-enabled insights gathered by under-skilled or untrained personnel can face admissibility challenges in court.
Ensure human oversight and analysis: Human expertise can never be replaced, no matter how sophisticated the AI. Technology-enabled findings must always be verified before they are used. For example, AI can flag unusual patterns in data, yet it cannot determine intent or whether the activity is related to criminal behavior.
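As a concrete illustration of that division of labor, here is a minimal, hypothetical sketch (not any vendor’s product): a simple statistical check flags unusual values, and nothing becomes a finding until a named analyst confirms it.

```python
import statistics
from typing import Callable, Iterable, List, Tuple

def flag_unusual(values: Iterable[float], z_cutoff: float = 3.0) -> List[Tuple[int, float]]:
    """Flag values far from the mean. The model only says 'unusual', never 'criminal'."""
    data = list(values)
    mean = statistics.fmean(data)
    spread = statistics.pstdev(data) or 1.0
    return [(i, v) for i, v in enumerate(data) if abs(v - mean) / spread >= z_cutoff]

def confirm_findings(flags: List[Tuple[int, float]],
                     analyst_review: Callable[[Tuple[int, float]], bool]) -> List[Tuple[int, float]]:
    """A flag becomes a finding only after a named analyst verifies it.

    analyst_review represents the human step: it should return True only when
    the analyst documents why the flagged pattern is investigatively relevant.
    """
    return [f for f in flags if analyst_review(f)]

# Example: wire-transfer amounts. The spike gets flagged, but the human reviewer
# still decides whether it means anything (here, they reject it).
amounts = [120.0, 95.5, 110.0, 102.3, 9800.0, 98.7]
flags = flag_unusual(amounts, z_cutoff=2.0)  # small sample, so a looser cutoff
findings = confirm_findings(flags, analyst_review=lambda f: False)
print(flags)     # [(4, 9800.0)]
print(findings)  # [] until a human confirms
```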
As dependence on personal electronic devices grows, technology’s role in criminal cases, intelligence and defense missions will only increase. Federal agencies have no choice but to integrate AI into their investigative processes and to do so responsibly: investing in solutions that observe ethical guidelines, prioritizing ongoing training and requiring human validation. These best practices will help federal law enforcement more swiftly and effectively protect our nation’s interests.
Matt Parker is head of global advocacy at Cellebrite.
What Research of 3,200 SMEs Revealed
A new report by the AI Chamber, based on insights from more than 3,200 small and medium-sized enterprises (SMEs) across Central and Eastern Europe (CEE), paints a striking picture of the region’s AI readiness: while over 75% of businesses say they are using artificial intelligence, only 25% are doing so at scale.
Titled “How Do SMEs in CEE Find Their Way in the World of AI?”, the study maps out how AI is reshaping operational workflows across 11 CEE countries – but also where gaps in leadership, talent, and regulatory awareness are holding many companies back.
The report uncovers four dominant mindsets shaping AI readiness in the region: Practical Optimists, Aware with Barriers, AI Indifferent, and Digitally Withdrawn. These categories reflect more than technological maturity – they reveal how vision, organizational structure, and risk appetite determine success or stagnation.
“This report challenges the prevailing myth that AI adoption is simply a matter of access to technology. The decisive factor is organizational maturity – leadership clarity, talent readiness, and strategic intent,” says Tomasz Snażyk, CEO of the AI Chamber.
“CEE’s SMEs could become global frontrunners in applied AI – if supported by the right regulatory frameworks and education initiatives. The opportunity is real. So is the risk of being left behind.”
Practical applications dominate, but strategic use remains rare
According to the report, the top three use cases of AI among SMEs in the region are data analysis (40%), automatic translation (35%), and task automation (28%) – indicating a focus on efficiency rather than transformation. More advanced applications like predictive analytics or AI-enhanced product development are far less common.
Notably, Estonia and Poland show more mature use cases, particularly in areas like customer behavior tracking and forecasting – suggesting that a small but growing cohort of regional players are moving beyond basic AI adoption into more data-driven decision-making.
However, only 8% of all SMEs surveyed are ready for an upcoming AI audit – a key requirement under the EU AI Act. In fact, just 39% of AI-using companies report even being familiar with the legislation, dropping to 29% among smaller and lower-tech firms.
Size matters – and so does vision
The data highlights a strong correlation between company size and AI maturity. SMEs with 50 to 250 employees are far more likely to use AI strategically, have regulatory awareness, and invest in upskilling their teams. At the opposite end, micro-enterprises (under 10 employees) often lack the capacity, leadership bandwidth, or digital talent to take full advantage of AI opportunities.
Encouragingly, 61% of employees across the region are actively experimenting with AI tools in their day-to-day work. This bottom-up momentum is strongest in Poland and the Czech Republic, pointing to a grassroots wave of innovation that could help close the readiness gap – if matched by investment from leadership.
Still, one in four firms has taken no steps to upskill their workforce in AI-related competencies. That training gap is a red flag for policymakers, particularly as generative AI continues to reshape white-collar industries.
Fragmented AI landscape across CEE
Perhaps the most urgent takeaway from the report is the stark digital divide within CEE itself.
- Estonia emerges as the clear frontrunner, with 67% of companies reporting a positive AI impact and 65% familiar with the EU’s AI Act. Estonian firms also report among the lowest internal barriers to implementation, thanks to strong digital governance and high awareness.
- Slovakia and Poland also rank high in AI ambition: 70% and 65% of SMEs, respectively, want to expand their use of AI in the near future.
- In contrast, Croatia, Latvia, and Bulgaria show significantly lower levels of adoption and readiness. In Croatia, nearly a third of SMEs express no interest in using AI at all, while in Bulgaria, over half cite a lack of knowledge as a major barrier.
Even within high-growth markets like Poland, the ambition-execution gap is visible. While over half of SMEs there see AI as a competitive advantage, 35% report no current intent to adopt it, despite employee enthusiasm.
Regional crossroads
The timing of the report is critical. As global investment in AI is projected to soar from $189 billion in 2023 to $4.8 trillion by 2033 (UNCTAD), the world’s economic power centers are already shifting. More than 60% of AI patents and R&D funding are now concentrated in the U.S. and China, putting pressure on Europe, and CEE in particular, to define its AI trajectory.
For the AI Chamber, this moment represents both a risk and an opportunity.
“The window for CEE to define its digital future is open – but not indefinitely. The question facing governments, investors, and SMEs alike is no longer whether to embrace AI, but how quickly, how safely, and how strategically it can be done – before the gap with global leaders becomes irreversible,” concludes Tomasz Snażyk.
Download the Full Report
👉 aichamber.eu/report/how-do-smes-in-cee-find-their-way-in-the-world-of-ai
Mindsprint enhances ProcureSPRINT™ with Agentic AI to unlock up to 15% in procurement cost efficiencies
SINGAPORE, July 9, 2025 /PRNewswire/ — Mindsprint, a technology firm offering purpose-built AI-led solutions to modernize enterprise operations, today announced significant advancements to ProcureSPRINT™, its enterprise-grade AI platform designed to optimize procurement operations, accelerate decision-making, and deliver measurable cost efficiencies.
Building on its proven foundation, ProcureSPRINT™ now integrates advanced Agentic AI capabilities, empowering organizations to automate complex procurement processes, enhance supplier collaboration, and unlock hidden value levers that can drive procurement cost reductions of up to 15 percent.
ProcureSPRINT™ is built on a secure, scalable cloud infrastructure and offers a modular, plug-and-play architecture that meets the needs of procurement teams at varying maturity levels. Its Agentic AI-powered recommendation engine provides actionable insights to both operational teams and C-level leaders, ensuring organizations can achieve faster cycle times, improved supplier performance, and greater procurement transparency.
“As enterprises evolve, so must their procurement function. The latest enhancements to ProcureSPRINT™ reflect our commitment to strengthening the platform with advanced AI & intelligent automation to deliver practical insights that help organizations reduce costs, improve compliance, and achieve operational resilience,” said G Venkataramanan (GV), Head of Intelligence Enterprise Operations, Mindsprint. “Our Agentic AI approach allows teams to shift from manual execution to more autonomous, insight-driven procurement, delivering faster outcomes with reduced effort.”
ProcureSPRINT™’s suite of intelligent agents supports every stage of the procurement process, including:
- The Onboarding Assistant Agent streamlines supplier registration through a self-service portal.
- The RFx Agent simplifies competitive bidding and reverse auctions.
- The Deal Advisor Agent provides AI-enabled recommendations for award decisions that maximize savings and minimize risk.
- The Shipment Sentinel Agent offers real-time visibility into shipments and supplier performance.
In addition, the platform offers an advanced, digitized invoice processing system that supports omnichannel document capture, multi-lingual intelligent data extraction, real-time validation, and seamless ERP integration. Organizations using ProcureSPRINT™ achieve over 70 percent touchless invoice processing, significantly reducing manual workload and processing time.
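As a rough illustration of how such a touchless path can work, the sketch below strings together extraction, validation and routing on a toy invoice. It is not Mindsprint’s API; the field patterns, tolerance and routing labels are assumptions for illustration, and real deployments rely on OCR and model-based extraction plus an actual ERP integration rather than regexes and a returned string.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Invoice:
    supplier: Optional[str] = None
    number: Optional[str] = None
    total: Optional[float] = None
    currency: Optional[str] = None

def extract_fields(text: str) -> Invoice:
    """Tiny stand-in for the multilingual capture/extraction step (OCR + models in practice)."""
    number = re.search(r"Invoice\s*(?:No\.?|#)\s*([A-Z0-9-]+)", text, re.I)
    total = re.search(r"Total\s*:?\s*([A-Z]{3})\s*([\d,]+\.\d{2})", text, re.I)
    supplier = re.search(r"^Supplier\s*:?\s*(.+)$", text, re.I | re.M)
    return Invoice(
        supplier=supplier.group(1).strip() if supplier else None,
        number=number.group(1) if number else None,
        total=float(total.group(2).replace(",", "")) if total else None,
        currency=total.group(1).upper() if total else None,
    )

def validate(inv: Invoice, po_total: float, tolerance: float = 0.02) -> bool:
    """Real-time validation: every field present and the total within tolerance of the PO."""
    if None in (inv.supplier, inv.number, inv.total, inv.currency):
        return False
    return abs(inv.total - po_total) <= tolerance * po_total

def process(text: str, po_total: float) -> str:
    """Touchless path: a valid invoice would post straight to the ERP; anything
    else drops to a human review queue. Here we only return the routing decision."""
    inv = extract_fields(text)
    return "post_to_erp" if validate(inv, po_total) else "route_to_reviewer"

sample = "Supplier: Acme GmbH\nInvoice No. INV-0042\nTotal: EUR 1,250.00"
print(process(sample, po_total=1250.00))  # -> post_to_erp
```

The design point is simply that an invoice either clears every automated check and posts without human touch or falls back to a reviewer queue, which is what a "touchless processing" percentage measures.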
Impostor uses AI to impersonate Rubio and contact foreign and US officials
Photo: Secretary of State Marco Rubio attends a signing ceremony for a peace agreement between Rwanda and the Democratic Republic of the Congo at the State Department, June 27, 2025, in Washington. (Mark Schiefelbein/AP)
WASHINGTON — The State Department is warning U.S. diplomats of attempts to impersonate Secretary of State Marco Rubio and possibly other officials using technology driven by artificial intelligence, according to two senior officials and a cable sent last week to all embassies and consulates.
The warning came after the department discovered that an impostor posing as Rubio had attempted to reach out to at least three foreign ministers, a U.S. senator and a governor, according to the July 3 cable, which was first reported by The Washington Post.
The recipients of the scam messages, which were sent by text, Signal and voice mail, were not identified in the cable, a copy of which was shared with The Associated Press.
“The State Department is aware of this incident and is currently monitoring and addressing the matter,” department spokeswoman Tammy Bruce told reporters. “The department takes seriously its responsibility to safeguard its information and continuously take steps to improve the department’s cybersecurity posture to prevent future incidents.”
She declined to comment further due to “security reasons” and the ongoing investigation.
It’s the latest instance of a high-level Trump administration figure targeted by an impersonator, with a similar incident revealed in May involving President Donald Trump’s chief of staff, Susie Wiles. The misuse of AI to deceive people is likely to grow as the technology improves and becomes more widely available, and the FBI warned this past spring about “malicious actors” impersonating senior U.S. government officials in a text and voice messaging campaign.
The hoaxes involving Rubio had been unsuccessful and “not very sophisticated,” one of the officials said. Nonetheless, the second official said the department deemed it “prudent” to advise all employees and foreign governments, particularly as efforts by foreign actors to compromise information security increase.
The officials were not authorized to discuss the matter publicly and spoke on condition of anonymity.
“There is no direct cyber threat to the department from this campaign, but information shared with a third party could be exposed if targeted individuals are compromised,” the cable said.
The FBI has warned in a public service announcement about a “malicious” campaign relying on text messages and AI-generated voice messages that purport to come from a senior U.S. official and that aim to dupe other government officials as well as the victim’s associates and contacts.
This is not the first time that Rubio has been impersonated in a deepfake. This spring, someone created a bogus video of him saying he wanted to cut off Ukraine’s access to Elon Musk’s Starlink internet service. Ukraine’s government later rebutted the false claim.
Several potential solutions have been put forward in recent years to address the growing misuse of AI for deception, including criminal penalties and improved media literacy. Concerns about deepfakes have also led to a flood of new apps and AI systems designed to spot phonies that could easily fool a human.
The tech companies working on these systems are now in competition against those who would use AI to deceive, according to Siwei Lyu, a professor and computer scientist at the University at Buffalo. He said he’s seen an increase in the number of deepfakes portraying celebrities, politicians and business leaders as the technology improves.
Just a few years ago, fakes contained easy-to-spot flaws — inhuman voices or mistakes like extra fingers — but now the AI is so good, it’s much harder for a human to spot, giving deepfake makers an advantage.
“The level of realism and quality is increasing,” Lyu said. “It’s an arms race, and right now the generators are getting the upper hand.”
The Rubio hoax comes after text messages and phone calls went to elected officials, business executives and other prominent figures from someone who seemed to have gained access to the contacts in Wiles’ personal cellphone, The Wall Street Journal reported in May.
Some of those who received calls heard a voice that sounded like Wiles, which may have been generated by AI, according to the newspaper. The messages and calls were not coming from Wiles’ number, the report said. The government was investigating.