That speed has caught many IT executives off guard as techniques that have always worked for them stop working, Andersen adds. “With this absolute velocity, you are seeing the old norms of trying to figure out how much to invest, those are no longer useful tools,” he says. “If you use traditional methods, you just don’t get it.”
Although Andersen agrees that inference pricing has gone down significantly, “the reality is that we are asking for more sophisticated tasks, queries that are perhaps 1,000 times more complicated” today as compared to two years ago, he says.
Capitalizing on cloud and data
When Natarajan joined Capital One in March 2023, ChatGPT was barely four months old. Despite having been used for about 15 years at that point, generative AI didn’t take off in terms of C-suite and board mindshare until OpenAI introduced ChatGPT.
ICF has found that, while artificial intelligence adoption is growing across the federal government, data remains a challenge.
In The AI Advantage: Moving from Exploration to Impact, published Thursday, ICF revealed that 83 percent of 200 federal leaders surveyed do not think their respective organizations’ data is ready for AI use.
“As federal leaders look to begin scaling AI programs, many are hitting the same wall: data readiness,” commented Kyle Tuberson, chief technology officer at ICF. “This report makes it clear: without modern, flexible data infrastructure and governance, AI will remain stuck in pilot mode. But with the right foundation, agencies can move faster, reduce costs, and deliver better outcomes for the public.”
The report also shared that 66 percent of respondents are optimistic that their data will be ready for AI implementation within the next two years.
ICF’s Study Findings
The report shows that many agencies are experimenting with AI, with 41 percent of leaders surveyed saying that they are running small-scale pilots and 16 percent in the process of scaling up efforts to implement the technology. About 8 percent of respondents shared that their AI programs have matured.
Half of the respondents said their organizations are focused on AI experimentation, while 51 percent are prioritizing planning and readiness.
The report provides advice on steps federal leaders can take to advance their AI programs, including upskilling their workforce, implementing policies to ensure responsible and enterprise-wide adoption, and establishing scalable data strategies.
The rapid evolution of artificial intelligence in video creation tools is transforming how individuals and businesses communicate, share stories, and market their products. A notable development in this space comes from PixVerse, an AI-driven video creation platform, as highlighted in a keynote address by co-founder Jaden Xie on July 11, 2025. During his speech titled AI Video for Good, Xie emphasized the platform’s mission to democratize video production, stating that billions of people worldwide have never created a video or used one to share their stories. PixVerse aims to empower these individuals by leveraging AI to simplify video creation, making it accessible to non-professionals and underserved communities. This aligns with broader AI trends in 2025, where generative AI tools are increasingly focused on user-friendly interfaces and inclusivity, enabling content creation at scale. According to industry reports from sources like TechRadar, the global AI video editing market is projected to grow at a compound annual growth rate of 25.3% from 2023 to 2030, driven by demand for accessible tools in education, marketing, and personal storytelling. PixVerse’s entry into this space taps into a critical need for intuitive solutions that lower the technical barriers to video production, positioning it as a potential game-changer in the content creation ecosystem. The platform’s focus on empowering billions underscores a significant shift towards AI as a tool for social impact, beyond mere commercial applications.
From a business perspective, PixVerse’s mission opens up substantial market opportunities, particularly in sectors like education, small business marketing, and social media content creation as of mid-2025. For small businesses, AI-driven video tools can reduce the cost and time associated with professional video production, enabling them to compete with larger brands on platforms like YouTube and TikTok. Monetization strategies for platforms like PixVerse could include subscription-based models, freemium access with premium features, or partnerships with social media giants to integrate their tools directly into content-sharing ecosystems. However, challenges remain in scaling such platforms, including ensuring data privacy for users and managing the high computational costs of AI video generation. The competitive landscape is also heating up, with key players like Adobe Express and Canva incorporating AI video features into their suites as reported by Forbes in early 2025. PixVerse must differentiate itself through user experience and accessibility to capture market share. Additionally, regulatory considerations around AI-generated content, such as copyright issues and deepfake risks, are becoming more stringent, with the EU AI Act of 2024 setting precedents for compliance that PixVerse will need to navigate. Ethically, empowering users must be balanced with guidelines to prevent misuse of AI video tools for misinformation.
On the technical front, PixVerse likely relies on advanced generative AI models, such as diffusion-based algorithms or transformer architectures, to automate video editing and content generation, reflecting trends seen in 2025 AI research from sources like VentureBeat. Implementation challenges include optimizing these models for low-bandwidth environments to serve global users, especially in developing regions where internet access is limited. Solutions could involve edge computing or lightweight AI models to ensure accessibility, though this may compromise output quality initially. Looking ahead, the future implications of such tools are vast—by 2030, AI video platforms could redefine digital storytelling, with applications in virtual reality and augmented reality content creation. PixVerse’s focus on inclusivity could also drive adoption in educational sectors, where students and teachers create interactive learning materials. However, businesses adopting these tools must invest in training to maximize their potential and address ethical concerns through transparent usage policies. As the AI video market evolves in 2025, PixVerse stands at the intersection of technology and social good, potentially shaping how billions engage with video content while navigating a complex landscape of competition, regulation, and innovation.
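To ground the paragraph above, here is a minimal sketch of an open-source text-to-video diffusion pipeline of the kind described, using the Hugging Face diffusers library. The checkpoint name, prompt, and generation parameters are illustrative assumptions for demonstration only; PixVerse has not published its actual implementation.

```python
# Hedged sketch: a text-to-video diffusion pipeline with a publicly available
# checkpoint. Illustrative only; not PixVerse's actual stack.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Load a public text-to-video diffusion checkpoint in half precision to reduce
# memory use (one of the lightweight-deployment trade-offs mentioned above).
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # keep only the active submodule on the GPU

# Generate a short clip from a plain-language prompt; fewer frames and
# inference steps lower compute cost at the expense of output quality.
result = pipe(
    "A street vendor telling the story of her day, warm evening light",
    num_frames=16,
    num_inference_steps=25,
)
frames = result.frames[0]  # first (and only) video in the batch

# Write the frames out as an .mp4 file.
video_path = export_to_video(frames, output_video_path="story.mp4")
print(f"Saved video to {video_path}")
```

A production platform would wrap a pipeline like this behind a web interface and queue, which is where the computational-cost and low-bandwidth concerns noted above come into play.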
FAQ: What is PixVerse’s mission in AI video creation? PixVerse aims to empower billions of people who have never made a video by using AI to simplify video creation, making it accessible to non-professionals and underserved communities, as stated by co-founder Jaden Xie on July 11, 2025.
How can businesses benefit from AI video tools like PixVerse? Businesses, especially small enterprises, can reduce costs and time in video production, enabling competitive marketing on social platforms. Monetization for platforms like PixVerse could involve subscriptions or partnerships with social media ecosystems as of mid-2025.
What are the challenges in implementing AI video tools globally? Challenges include optimizing AI models for low-bandwidth regions, managing high computational costs, ensuring data privacy, and addressing regulatory and ethical concerns around AI-generated content as highlighted in industry trends of 2025.
If it seems like your phone has been blowing up with more spam text messages recently, it probably is.
The Canadian Anti-Fraud Centre says so-called smishing attempts appear to be on the rise, thanks in part to new technologies that allow for co-ordinated bulk attacks.
The centre’s communications outreach officer Jeff Horncastle says the agency has received fewer fraud reports in the first six months of 2025, but that can be misleading because so few people actually alert the centre to incidents.
He says smishing is “more than likely increasing” with help from artificial intelligence tools that can craft convincing messages or scour data from security breaches to uncover new targets.
The warning comes as the Competition Bureau sent a recent alert about the tactic because it says many people are seeing more suspicious text messages.
Smishing is a sort of portmanteau of SMS and phishing in which a text message is used to try to get the target to click on a link and provide personal information.
The ruse comes in many forms but often involves a message that purports to come from a real organization or business urging immediate action to address an alleged problem.
It could be about an undeliverable package, a suspended bank account or news of a tax refund.
Horncastle says it differs from more involved scams such as a text invitation to call a supposed job recruiter, who then tries to extract personal or financial information by phone.
Nevertheless, he says a text scam can still be quite sophisticated, since today’s fraudsters can use artificial intelligence to scan data leaks for personal details that bolster the hoax, or AI writing tools to craft convincing text messages.
“In the past, part of our messaging was always: watch for spelling mistakes. It’s not always the case now,” he says.
“Now, this message could be coming from another country where English may not be the first language but because the technology is available, there may not be spelling mistakes like there were a couple of years ago.”
The Competition Bureau warns against clicking on suspicious links and recommends forwarding suspicious texts to 7726 (SPAM) so that the cellular provider can investigate further. It also encourages people to delete smishing messages, block the number and ignore texts even if they ask for a reply with “STOP” or “NO.”
Horncastle says the centre received 886 reports of smishing in the first six months of 2025, up to June 30. That continues a downward trend from 2,546 reports in 2024, which was itself a drop from 3,874 in 2023 and 7,380 in 2022.
But those numbers don’t quite tell the story, he says.
“We get a very small percentage of what’s actually out there. And specifically when we’re looking at phishing or smishing, the reporting rate is very low. So generally we say that we estimate that only five to 10 per cent of victims report fraud to the Canadian Anti-Fraud Centre.”
Horncastle says it’s hard to say for sure how new technology is being used, but he notes AI is a frequent tool for all sorts of nefarious schemes such as manipulated photos, video and audio.
“It’s more than likely increasing due to different types of technology that’s available for fraudsters,” Horncastle says of smishing attempts.
“So we would discuss AI a lot where fraudsters now have that tool available to them. It’s just reality, right? Where they can craft phishing messages and send them out in bulk through automation through these highly sophisticated platforms that are available.”
The Competition Bureau’s deceptive marketing practices directorate says an informed public is the best protection against smishing.
“The bureau is constantly assessing the marketplace and through our intelligence capabilities is able to know when scams are on the rise and having an immediate impact on society,” says deputy commissioner Josephine Palumbo.
“That’s where these alerts come in really, really handy.”
She adds that it’s difficult to track down fraudsters who sometimes use prepaid SIM cards to shield their identity when targeting victims.
“Since SIM cards lack identification verification, enforcement agencies like the Competition Bureau have a hard time in actually tracking these perpetrators down,” Palumbo says.
Fraudsters can also spoof phone numbers, making it seem like a text has originated with a legitimate agency such as the Canada Revenue Agency, Horncastle adds.
“They might choose a number that they want to show up randomly or if they’re claiming to be a financial institution, they may make that financial institution’s number show up on the call display,” he says.
“We’ve seen (that) with the CRA and even the Canadian Anti-Fraud Centre, where fraudsters have made our phone numbers show up on victims’ call display.”