Tools & Platforms

AI Familiarity Erodes Public Trust Amid Bias and Misuse Concerns


In the rapidly evolving world of artificial intelligence, a counterintuitive trend is emerging: greater familiarity with AI technologies appears to erode public confidence rather than bolster it. Recent research highlights how individuals who gain deeper knowledge about AI systems often become more skeptical of their reliability and ethical implications. This shift could have profound implications for tech companies pushing AI adoption in everything from consumer apps to enterprise solutions.

For instance, a study detailed in Futurism reveals that as people become more “AI literate”—meaning they understand concepts like machine learning algorithms and data biases—their trust in these systems diminishes. The findings, based on surveys of thousands of participants, suggest that exposure to AI’s inner workings uncovers vulnerabilities, such as opaque decision-making processes and potential for misuse, leading to heightened wariness.

The Erosion of Trust Through Education

Industry insiders have long assumed that education would demystify AI and foster acceptance, but the data tells a different story. According to the same Futurism report, participants who underwent AI training sessions reported a 15% drop in trust levels compared to those with minimal exposure. This literacy paradox mirrors historical patterns in other technologies, where initial hype gives way to scrutiny once complexities are revealed.

Compounding this, a separate analysis in Futurism from earlier this year links over-reliance on AI tools to a decline in users’ critical thinking skills. The study, involving cognitive tests on AI-dependent workers, found that delegating tasks to algorithms can atrophy human judgment, further fueling distrust when AI errors become apparent in real-world applications like automated hiring or medical diagnostics.

Public Sentiment Shifts and Polling Insights

Polling data underscores this growing disillusionment. A 2024 survey highlighted in Futurism showed public opinion turning against AI, with approval ratings dropping by double digits over the previous year. Respondents cited concerns over job displacement, privacy invasions, and the technology’s role in amplifying misinformation as key factors.

This sentiment is not isolated; it’s echoed in broader discussions about AI’s societal impact. For example, posts on platforms like X, as aggregated in recent trends, reflect widespread skepticism, with users debating how increased AI integration in daily life, from smart assistants to predictive analytics, might exacerbate inequalities rather than solve them. Such organic conversations align with formal studies, indicating a grassroots pushback against unchecked AI proliferation.

Implications for Tech Leaders and Policy

For tech executives, these findings pose a strategic dilemma. Companies investing billions in AI development must now contend with a more informed populace demanding transparency and accountability. The Futurism piece points to initiatives like explainable AI frameworks as a potential remedy: systems designed to articulate their reasoning in human-understandable terms could help rebuild eroded trust.
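To make that idea concrete, here is a minimal sketch of what “articulating reasoning” can look like at its simplest: a linear model whose per-feature contributions are translated into a plain-language rationale for a single decision. It assumes scikit-learn and NumPy, and the hiring-style feature names and data are entirely hypothetical; production explainable-AI frameworks (SHAP- or LIME-style attribution, for instance) are considerably more sophisticated.

```python
# A toy illustration of an AI system that explains its own decision.
# Assumes scikit-learn and NumPy are installed; the feature names and
# data below are hypothetical and for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["years_experience", "skills_match", "referral"]

# Tiny hypothetical screening dataset: label 1 = advance the candidate.
X = np.array([[1, 0.2, 0], [8, 0.9, 1], [3, 0.4, 0], [10, 0.8, 1],
              [2, 0.3, 1], [7, 0.7, 0], [4, 0.6, 1], [9, 0.95, 0]])
y = np.array([0, 1, 0, 1, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

def explain(sample):
    """Return a human-readable account of which inputs drove the decision."""
    contributions = model.coef_[0] * sample  # per-feature contribution to the score
    decision = "advance" if model.predict(sample.reshape(1, -1))[0] == 1 else "reject"
    lines = [f"Decision: {decision}"]
    for name, contrib in sorted(zip(feature_names, contributions),
                                key=lambda pair: -abs(pair[1])):
        direction = "pushed toward advancing" if contrib >= 0 else "pushed toward rejecting"
        lines.append(f"  {name}: {direction} (contribution {contrib:+.2f})")
    return "\n".join(lines)

print(explain(np.array([5, 0.5, 1])))
```

The point is not the model itself but the contract it illustrates: every automated decision arrives packaged with a rationale that a non-specialist can read and question.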

Yet, challenges remain. A related article in TNGlobal argues that trust in AI hinges on collaborative efforts, including zero-trust security models to safeguard data integrity. Without such measures, the industry risks regulatory backlash, as seen in emerging policies that mandate AI audits to address biases and ensure ethical deployment.

Looking Ahead: Balancing Innovation and Skepticism

As we move deeper into 2025, the trajectory of AI trust will likely influence investment and adoption rates. Insights from Newsweek reveal a mixed picture: while 45% of workers trust AI more than colleagues for certain tasks, this statistic masks underlying doubts about its broader reliability. Industry leaders must prioritize literacy programs that not only educate but also address fears head-on.

Ultimately, fostering genuine trust may require a cultural shift within tech firms, moving beyond profit-driven narratives to emphasize human-centric design. As evidenced by ongoing research in publications like Nature’s Humanities and Social Sciences Communications, transdisciplinary approaches—integrating ethics, psychology, and technology—could redefine AI’s role in society, turning skepticism into informed partnership.




Tools & Platforms

Implications for Tech Sector Valuations



The Trump administration’s 2025 AI Action Plan, titled “Winning the Race: America’s AI Action Plan,” has redefined the strategic calculus for Silicon Valley. By prioritizing deregulation, infrastructure expansion, and ideological neutrality in AI development, the administration has created a policy framework that both accelerates innovation and introduces new risks for tech sector valuations. This analysis examines how the alignment of tech firms with Trump’s industrial policy is reshaping market dynamics, investor sentiment, and long-term competitiveness in the AI era.

Policy Framework: Deregulation, Infrastructure, and Ideological Guardrails

The administration’s core strategy centers on reducing regulatory friction for energy and data center projects. Executive orders exempting these developments from federal environmental reviews and permitting streamlined access to public lands have drawn support from Silicon Valley leaders like San Jose Mayor Matt Mahan, who emphasized the critical need for energy supply to sustain AI infrastructure [1]. Simultaneously, the administration’s push for open-source and open-weight AI models aims to foster a competitive ecosystem while avoiding burdensome state-level regulations [4].

However, this deregulatory approach intersects with ideological mandates. The “Preventing Woke AI in the Federal Government” executive order requires AI models used by federal agencies to be “truth-seeking” and “ideologically neutral,” effectively banning contracts with firms whose chatbots address topics like critical race theory or transgenderism [3]. This has forced tech companies to re-engineer models to align with federal procurement requirements, creating a balancing act between compliance and technical integrity.

Strategic Positioning: Tech Firms Align with Federal Priorities

Major tech companies are recalibrating their strategies to align with the administration’s vision. For instance, firms like NVIDIA and AMD are leveraging federal partnerships to develop secure, full-stack AI export packages, capitalizing on the administration’s aggressive push to globalize American AI technology stacks [1]. The emphasis on infrastructure has also spurred collaborations with federal agencies for high-security data centers and grid modernization projects [5].

Yet, this alignment is not without friction. California lawmakers have raised concerns about the environmental and affordability impacts of data centers, advocating for stricter energy reporting standards [1]. This tension highlights the challenge of balancing deregulation with sustainability goals—a dynamic that could influence long-term investor confidence.

Investor Sentiment and Valuations: A Mixed Landscape

The administration’s policies have generated a polarized market response. While the Trump administration’s pause on new tariffs in April 2025 triggered a “V-shaped” recovery in tech stocks [6], earlier concerns about an AI-driven valuation bubble led to volatility. For example, the Technology Select Sector SPDR ETF surged 35% in the last three months of 2025, outpacing the S&P 500’s 19% gain [2]. However, UBS warned that U.S. tech valuations now trade at a HOLT Economic P/E above 35 times, nearing levels seen during the dotcom boom [1].

Key players like NVIDIA and AMD have benefited from their dominance in AI infrastructure, with AMD upgraded by Truist Securities due to strong industry feedback on its data center momentum [1]. Conversely, firms like Super Micro Computer (SMCI) and Dell face headwinds from Trump’s tariffs on semiconductors and copper, with SMCI reporting declining non-GAAP earnings per share despite robust revenue growth [2]. These divergent outcomes underscore the sector’s sensitivity to policy shifts and global trade dynamics.

Challenges and Risks

The administration’s focus on ideological neutrality in AI models introduces reputational and technical risks. Critics argue that removing DEI and climate change references from the NIST AI Risk Management Framework could compromise model reliability [3]. Additionally, Trump’s tariffs on semiconductors and copper—while framed as protecting national security—risk inflating infrastructure costs, though large firms may absorb these expenses due to AI’s strategic value [2].

Environmental concerns further complicate the narrative. California’s push for energy reporting standards reflects growing skepticism about the sustainability of data center expansion, a potential regulatory counterweight to federal deregulation [1].

Conclusion

Trump’s AI-driven industrial policy has created a dual-edged sword for Silicon Valley. While deregulation and export incentives are fueling innovation and valuations, ideological mandates and trade policies introduce volatility and long-term uncertainties. For investors, the key lies in distinguishing between firms that can navigate these policy-driven headwinds—such as those with strong federal partnerships and scalable infrastructure—and those vulnerable to regulatory or trade-related shocks. As the administration’s agenda unfolds, the tech sector’s ability to balance compliance, sustainability, and profitability will determine whether the current valuation surge proves sustainable or speculative.

Source:
[1] Silicon Valley mayor agrees with Trump on ‘energy … [https://www.politico.com/news/2025/08/27/silicon-valley-mayor-agrees-trump-energy-00532503]
[2] ‘An existential threat’: For Silicon Valley, falling behind in AI … [https://www.cnn.com/2025/08/18/tech/ai-spending-tariffs]
[3] Trump’s new AI policies keep culture war focus on tech companies [https://www.wunc.org/2025-07-23/trumps-new-ai-policies-keep-culture-war-focus-on-tech-companies]
[4] “Winning the Race: America’s AI Action Plan” – Key Pillars, … [https://www.ropesgray.com/en/insights/alerts/2025/07/winning-the-race-americas-ai-action-plan-key-pillars-policy-actions-and-future-implications]
[5] White House Releases AI Action Plan: Key Legal and … [https://www.skadden.com/insights/publications/2025/07/the-white-house-releases-ai-action-plan]
[6] Q2 2025 Market Perspective [https://altiumwealth.com/blogs/altium-insights/q2-2025-market-perspective]




Tools & Platforms

How to bridge the AI skills gap to power industrial innovation



Onofrio Pirrotta is a senior vice president and managing partner at Kyndryl, where he leads the technology company’s U.S. manufacturing and energy market. Opinions are the author’s own.

Artificial intelligence is no longer a futuristic concept for manufacturers; it is embedded in operations, from predictive maintenance to intelligent automation. 

According to Kyndryl’s People Readiness Report, 95% of manufacturing organizations are already using AI across various areas of their business. Yet, despite this widespread adoption, a critical gap remains: 71% of manufacturing leaders said their workforce is not ready to leverage AI effectively.

This disconnect between technological investment and workforce readiness is more than a growing pain — it’s a strategic risk. If left unaddressed, it could stall innovation, limit return on investment and widen the competitive gap between AI pacesetters and those still struggling to align people with progress. 

The readiness paradox

The manufacturing sector is undergoing a profound transformation. AI, edge computing and digital twins are reshaping the factory floor, enabling real-time decision making and operational agility.

So why are only 14% of manufacturing organizations we surveyed incorporating AI into customer-facing products or services?

The answer lies in the “readiness paradox.” Manufacturers are investing in AI tools and platforms, but not in the people who use them. As a result, employees are wary of AI’s impact on their roles, and many leaders are unsure how to guide their teams through the transition. Over half of manufacturing leaders cited a lack of skilled talent to manage AI, and fear of job displacement is affecting employee engagement. The result is a workforce that is technologically surrounded but practically unprepared.

What AI pacesetters are doing differently

Pacesetting companies — representing just 14% of the business and technology leaders surveyed across eight markets — have aligned their workforce, technology and growth strategies. They are seeing measurable benefits in productivity, innovation and employee engagement by approaching AI in the following ways:

  1. Strategic change management: They are just over 60% more likely to have implemented an overall AI adoption strategy and to have a change management plan in place. They’re treating AI as a major, well-supported transformation rather than a quick fix.
  2. Trust-building measures: Employees are more likely to embrace AI if they are involved in its implementation and the creation of ethical guidelines. It’s also important to maintain transparency around AI goals.
  3. Proactive skills development: Pacesetters are investing in upskilling, mentorship and external certifications and are more likely to have tools in place to inventory current skills and identify gaps. This gives them a clearer roadmap for workforce development as well as a head start on future readiness.

Best practices

So how can manufacturers bridge the AI skills gap and join the ranks of pacesetters by aligning innovation with workforce development?

Make workforce readiness a boardroom priority

AI strategy should not live solely in the IT department. It must be a cross-functional initiative that includes HR, operations and the C-suite.

Yet research shows a disconnect. CEOs are 28% more likely than chief technology officers to say their organizations are in the early stages of AI implementation, and they are more likely to favor hiring external talent over upskilling current employees. This misalignment slows progress.

Manufacturers need unified leadership around a shared vision for AI and workforce transformation.

Establishing a cross-functional AI steering committee that includes frontline supervisors also ensures alignment between technology and talent strategies. Tying AI readiness to business KPIs such as productivity, quality and innovation metrics — as well as conducting regular workforce capability audits — will further elevate its importance in strategic planning and help forecast future needs based on AI roadmaps.

Build a culture of trust and transparency

Fear is a powerful inhibitor. When employees worry that AI will replace them, they are less likely to engage with it. Leaders must address these concerns directly. That means communicating openly about how AI will be used, involving employees in pilot programs and demonstrating how AI can augment, not replace, their roles.

Implementing a tiered AI education program, launching employee enablement campaigns and providing access to AI-powered tools can help bring a manufacturer’s workforce along the AI journey. Hosting AI town halls where employees from supervisory roles, as well as the frontline, can ask questions or share concerns is another way to build engagement. Worker trust can also be reinforced through the development of an internal AI ethics policy and governance board. 


