Tools & Platforms

“AI represents the most profound tectonic shift of our generation”


“AI represents the most profound tectonic shift of our generation. The scale and pace of its impact are beyond anything we’ve previously experienced – and we still don’t fully understand what’s coming,” said Dean Shahar, Managing Director & Head of Israel at DTCP Growth. “But amidst all the hype, I remind myself of an old lesson from financial history: Every time we hear ‘this time it’s different’ or ‘it’s a new economy’, whether it was the DotCom bubble, 2008, crypto, or COVID, we eventually rediscover that business fundamentals always matter.”

Dean Shahar

(Photo: DTCP)

Shahar joined CTech for its VC AI survey to share his unique insights about how the technology will impact the investment space. “Even in the face of extreme disruption, the basics remain the foundation. And that perspective helps keep things in balance,” he added.

You can learn more in the interview below.

Fund ID
Name and Title: Dean Shahar, Managing Director & Head of Israel
Fund Name: DTCP Growth
Founding Team: Vicente Vento, Thomas Preuss, Jack Young
Founding Year: 2015
Investment Stage: Growth
Investment Sectors: AI, Cyber

On a scale of 1 to 10, how has AI impacted your fund’s operations over the past year – specifically in terms of the day-to-day work of the fund’s partners and team members?

I’d say it’s a 7, though part of me wants to say 10. We’re only beginning to scratch the surface of what AI can do. Giving it a 10 would suggest we’ve fully realized its potential, and we’re far from that. Venture capital work can be broken down into three core areas: (1) researching market opportunities and identifying exceptional talent, (2) networking and relationship building, and (3) executing transactions.

AI has meaningfully enhanced the first and third domains. Research is faster and deeper, and deal execution is more efficient thanks to tools that support diligence, data processing, and analysis. However, the second area, relationship building, remains profoundly human. It’s a nuanced, personal endeavor rooted in trust, and that part of the job isn’t going to be automated anytime soon.

Founders are still the single biggest driver of a startup’s outcome. Even in later-stage investing, identifying the right people to tackle the unknown is critical. Building a company is about solving problems no one has fully defined yet, and that human element can’t be replaced by even the smartest algorithms.

Have you already had any significant exits from AI companies? If so, what were the key characteristics of those companies?

Yes, and also, no. Let me explain. Over the past five years, we’ve had a series of strong exits across M&A and IPOs. Many of those companies included AI or ML components in their stack. However, AI as it’s being built and productized today is fundamentally different.

Looking back, what we once believed was an “AI race car” now feels more like a well-trained horse. Today’s AI is deeply tied to direct business outcomes in ways that weren’t possible just a few years ago. Take our portfolio company Zenity, for example. They enable enterprises to adopt AI tools securely, becoming critical enablers of modern, AI-driven workforces at scale.

That level of clarity in value proposition, where AI is not just embedded but essential, is new. SaaS was about automating known processes. AI is about introducing intelligence that, in many cases, surpasses human ability, replacing tasks rather than just improving productivity. That’s a major shift.

Is identifying promising AI startups different from evaluating companies in your more traditional investment domains? If so, how does that difference manifest?

Absolutely, it’s a different ballgame. Evaluating AI startups requires a higher tolerance for risk and a much more creative diligence process. Many of these ideas are unprecedented, so historical data points and benchmarks don’t hold as much weight. The past is no longer a reliable predictor of success.

Conviction today demands more imagination, upfront research, and validation through unconventional signals. The future is arriving too fast for rearview-mirror investing. Since AI became mainstream, especially post-ChatGPT, we’ve seen an explosion of startups claiming to be “AI-first.” The noise level is high, and cutting through it requires deeper technical understanding and sharper instincts than ever before.

What specific financial performance indicators (KPIs) do you examine when assessing a potential AI company? Are there any AI-specific metrics you consider particularly important?

It depends on the company’s stage and vertical. From a financial perspective, we’re still looking at the same core indicators: ARR, revenue growth, net dollar retention (NDR), and efficiency ratios, among others.

What’s changed is the interpretation of those metrics. With AI, context and narrative matter more. We’re moving away from a one-size-fits-all model in which all SaaS companies were compared by the same yardstick. In AI, understanding why a company is growing, and how defensible that growth is, has become more important than the raw numbers alone.

How do you approach the valuation of early-stage AI startups, which often lack significant revenues but possess strong technological potential?

Valuing early-stage companies was never about formulas like CAPM or WACC; it’s always been a dance between supply and demand. In today’s climate, demand is soaring because the perceived upside in AI is massive.

We’re seeing startups achieve impressive traction with very lean teams, disrupting massive industries with radical efficiency. That disruption potential raises expectations and risk appetite, which naturally drives up valuations.

It’s classic risk-reward theory at play, but AI adds a unique twist: the pace of innovation at the technological layer is so fast that it pushes both founders and investors to make bigger, bolder moves, faster.

What financial risks do you associate with investing in AI companies, beyond the usual technological risks?

One of the biggest is unpredictability. It permeates everything, from infrastructure costs and model behavior to regulatory ambiguity. As we move toward more autonomous agentic systems, we’re essentially giving non-human entities control over decision-making and workflows.

No matter how well we build guardrails, there will always be an edge case that surprises us, the “N+1 problem.” That unpredictability introduces new layers of operational and financial risk.

It also increases the frequency of iteration. Companies now need constant internal checks on model performance and faster feedback loops on business outcomes, especially as AI scales across high-stakes functions like marketing, customer support, and infrastructure management.

Do you focus on particular subdomains within AI?

Not really. I try to remain intellectually open and humble. What excites me is any idea, regardless of subdomain, that has the potential to dramatically improve the way business is done.

In hindsight, every breakthrough feels obvious. But in real time, it’s anything but. Success in venture capital, in my view, comes from curiosity and a willingness to explore paths others overlook.

How do you view AI’s impact on traditional industries? Are there specific AI technologies you believe will be especially transformative in certain sectors?

Every industry will be transformed; it’s only a question of how and when. The nature of that transformation varies: (1) In non-sensitive domains, AI will increasingly replace humans outright. (2) In sensitive or high-risk areas, AI will act more as an augmenting force, essentially becoming a digital teammate that supercharges human decision-making.

Take cybersecurity, for example. The data is highly sensitive, and a false positive (e.g., blocking a critical API) could paralyze an entire business. In these cases, AI will likely act as a co-pilot, handling the repetitive and low-risk tasks while humans retain control over critical decisions.

What specific AI trends in Israel do you see as having strong exit potential in the next five years? Are there niches where you believe Israeli startups particularly excel?

Cybersecurity continues to be Israel’s home court advantage. It’s where the majority of fundraising and M&A activity happens. That said, I’m hopeful we’ll see more Israeli companies tackle massive, underexplored (from an Israeli perspective) markets like healthcare or logistics, where the potential for AI-led disruption is enormous. Historically, we haven’t seen many home-grown success stories in those spaces, but the talent is certainly here.

Are there gaps or missing segments in the Israeli AI landscape that you’ve identified? What types of AI founders are you especially looking to back right now in Israel?

I don’t believe the founder type changes just because we’re talking about AI. The same traits I’ve always looked for still apply: grit, curiosity, and integrity.

I back founders I’d want to work for, build with, or follow. AI just raises the stakes; it doesn’t change the fundamentals of what makes someone a great founder.



FTC to Question Tech Companies About Risks Around AI Chatbots


The Federal Trade Commission (FTC) reportedly plans to study privacy harms and other risks posed to children and other users of artificial intelligence (AI)-powered chatbots.

The study will also gather information on how AI services store and share data, Bloomberg reported Thursday (Sept. 4), citing unnamed sources.

The FTC will use its authority to compel companies to turn over information related to its study and will seek information from the nine largest consumer chatbots, including those from OpenAI and Google, according to the report.

Asked about the report by Bloomberg, a White House spokesperson didn’t comment on a study but said the FTC is mindful of user safety when it comes to AI.

“President Trump pledged to cement America’s dominance in AI, cryptocurrency and other cutting-edge technologies of the future,” the spokesperson said, per the report. “FTC Chairman Andrew Ferguson and the entire administration are focused on delivering on this mandate without compromising the safety and well-being of the American people.”

The Wall Street Journal also reported Thursday that the FTC plans to question AI companies, adding that the study will focus on chatbots’ impact on children’s mental health, that the White House approved the study, and that the FTC is preparing letters to OpenAI, Meta and Character.AI.

The administration and lawmakers have been pressured by parents and advocacy groups to add protections for children using AI chatbots, and this effort has been bolstered by recent reports of teenagers dying by suicide after forming relationships with chatbots, according to the report.

Some tech companies have taken steps to address this issue. For example, OpenAI said it would add teen accounts that can be overseen by parents, Character.AI has made similar changes, and Meta added more restrictions for those under 18 who use its AI products, per the report.

These reports came on the same day that First Lady Melania Trump hosted a meeting of the White House Task Force on Artificial Intelligence Education.

In a press release issued before the event, Trump said the growth of AI must be managed responsibly.

“During this primitive stage, it is our duty to treat AI as we would our own children — empowering, but with watchful guidance,” Trump said. “We are living in a moment of wonder, and it is our responsibility to prepare America’s children.”



Lancaster Bets on AI to Cut Red Tape, Boost Development


Permitting is getting an AI boost in a city north of Los Angeles, the latest example of how the new technology is helping to power one of the most traditional of local government tasks.

Lancaster, Calif., will become one of the first municipalities in the U.S. to use a “next-generation permitting platform” from Labrynth, which sells automated compliance software, according to a statement.

The platform promises the ability for the city to “fast-track approvals, eliminate bottlenecks and raise the bar on permitting speed, transparency and economic readiness,” according to the statement.

That, in turn, will result in quicker permitting decisions, potentially eliminating a common pain point and source of complaints from developers, residents and others.

The platform uses what the statement calls “agentic workflows,” referencing a type of artificial intelligence designed to make decisions without human prompts, a capability that relies on machine learning, language processing and other tools.

The platform can “pre-screen submissions,” check them against city rules and “flag missing components” to permit applications, among other tasks, according to the statement.

“This tool allows us to take what’s historically been a bureaucratic pain point, permitting, and turn it into a driver of growth,” Lancaster Mayor R. Rex Parris told Government Technology via email. “Especially now, with so much demand for housing, energy and infrastructure projects, we need systems that match the pace of innovation.”

He said the platform already is offering “clearer feedback and faster turnaround” for developers.

Lancaster also will become one of the first cities included in Labrynth’s new Red Tape Index, which the company says is a national benchmark that measures “permitting speed, transparency and regulatory readiness.”

Parris said the city’s inclusion in the tool means Lancaster is “helping define what smart governance looks like nationally. We’re turning compliance from a cost center into a competitive advantage and setting a standard that other cities can follow.”

That index will eventually cover more than 500 cities, Labrynth said, estimating that achieving the mark will take 30 days.

“Lancaster is doing more than modernizing — they’re showing other cities across America what’s possible,” said Stuart Lacey, CEO of Labrynth, in the statement. “Their leadership underscores a broader shift across the country: When local governments remove barriers, they unlock opportunity. Lancaster is the blueprint.”

Using AI to speed up permitting represents one of the hottest areas of the government technology business. Two recent examples underscore that: Pennsylvania’s Permit Fast Track Program, credited with sparking economic development; and the private equity-backed triple merger of GovOS, Avenu and ITI.

Thad Rueter writes about the business of government technology. He covered local and state governments for newspapers in the Chicago area and Florida, as well as e-commerce, digital payments and related topics for various publications. He lives in New Orleans.



Why AI Adoption and Training Matter


This is Part 1 of a three-part series focusing on AI end-user adoption and training

AI has ushered in a new era, changing not only the way we work but the nature of work itself.

Organizations have been using AI to automate routine tasks, generate insights, and augment decision-making to drive productivity and enhance customer relationships. The rise of AI assistants such as Microsoft Copilot, Zoom AI Companion, and Cisco AI Assistant is undeniable – they’ve quickly become part of daily work life. And the benefits are clear. According to the Microsoft Work Trend Index, 70% of early Copilot users said they were more productive, and 68% reported that Copilot improved the quality of their work. 

Despite the rapid growth of AI usage, we must ask – is AI in the workplace meeting its true potential, and are users getting the most out of it? AI offers organizations a massive opportunity to transform operations, empower workers, reduce costs, and revolutionize work – but only if employees embrace and adopt the tools available to them. Despite the hype – and hefty investments – many organizations struggle to realize the full benefits of AI due to the lack of user adoption and training strategies. This is typical of any tech adoption – McKinsey research shows that 70% of digital transformation initiatives fail due to poor adoption and change management. So, for organizations to get the most out of the money they’ve spent on AI initiatives, they have to be prepared to invest in end-user AI training.

While companies face numerous challenges when implementing AI, the biggest are often people- and process-related rather than technology-related. The BCG AI at Work 2025 report found that the uptake of generative AI by frontline workers has stalled, primarily due to a lack of training. Only 36% of employees were satisfied with their AI training and felt they had the skills needed for AI transformation. By contrast, 79% of respondents who received more than five hours of training became regular AI users, compared with just 67% of those who received less.

AI tools are only as effective as the people who use them. While some AI assistants are intuitive, many workers aren’t sure how to integrate AI into their daily workflows for maximum benefit. Additionally, most employees are not trained as prompt engineers and don’t know how to phrase inputs to get the best results.

Training can fix these challenges.

There are many reasons for an enterprise to invest time and resources in training that maximizes end-user AI adoption. These include:

  • Reduced User Frustration and Resistance: Many workers fear AI could replace their jobs and may be reluctant to use it. Training can alleviate concerns and show how AI complements rather than replaces human skills.

  • Maximized ROI on AI Investments: Training ensures employees understand the tool’s capabilities and how to apply them to their specific workflows, extracting maximum value from the investment.

  • Enhanced Security and Privacy: Proper training promotes responsible and secure AI use, reducing the risk of data leaks or compliance violations. Untrained users may unintentionally share confidential information in prompts. For example:

    • A customer support rep might use AI to draft replies and include customer names, account numbers, or transaction details – potentially violating privacy regulations like GDPR, CCPA, or HIPAA.

    • A marketing executive could provide proprietary information to an AI tool, which might inadvertently be stored or exposed to others.
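The prompt-privacy risks in the examples above can be reduced with a lightweight pre-submission check. Below is a minimal illustrative sketch in Python (the regex patterns and the `redact_prompt` helper are assumptions for illustration, not part of any specific AI product; a real deployment would rely on a dedicated data-loss-prevention service rather than hand-written rules):

```python
import re

# Illustrative patterns for common sensitive data; deliberately simple,
# not a substitute for a proper DLP tool.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely PII with placeholders before sending text to an AI tool."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact_prompt("Reply to jane.doe@example.com about card 4111 1111 1111 1111"))
# Prints: Reply to [EMAIL REDACTED] about card [CREDIT_CARD REDACTED]
```

A check like this can run client-side or in a gateway in front of the AI assistant, so untrained users get a safety net while training programs catch up.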

Despite these benefits, many organizations fail to implement comprehensive adoption and training programs. According to TalentLMS, nearly half of the employees they surveyed said that AI is advancing faster than their company’s training capabilities, while 54% report a lack of clear guidelines on AI tool usage. At some point, investing in clear AI training will be a more cost-effective measure than dealing with security and compliance fallout, or losing ground to competitors with more effective AI use policies.

End-user adoption and training are essential for successful AI deployments. Investing in these programs equips employees with the skills to fully leverage AI technologies. From drafting documents and summarizing meetings to analyzing data and assisting customers, AI can automate routine tasks and improve productivity – but only if users know how to use it effectively.

With a well-planned adoption strategy that includes end-user training and change management, organizations can unlock the full value of their AI investments. The next two parts of this series will provide field-tested guidance on how to develop and deploy end-user adoption and training programs.




