Tools & Platforms

AI Familiarity Erodes Public Trust Amid Bias and Misuse Concerns

In the rapidly evolving world of artificial intelligence, a counterintuitive trend is emerging: greater familiarity with AI technologies appears to erode public confidence rather than bolster it. Recent research highlights how individuals who gain deeper knowledge about AI systems often become more skeptical of their reliability and ethical implications. This shift could have profound implications for tech companies pushing AI adoption in everything from consumer apps to enterprise solutions.

For instance, a study detailed in Futurism reveals that as people become more “AI literate”—meaning they understand concepts like machine learning algorithms and data biases—their trust in these systems diminishes. The findings, based on surveys of thousands of participants, suggest that exposure to AI’s inner workings uncovers vulnerabilities, such as opaque decision-making processes and potential for misuse, leading to heightened wariness.

The Erosion of Trust Through Education

Industry insiders have long assumed that education would demystify AI and foster acceptance, but the data tells a different story. According to the same Futurism report, participants who underwent AI training sessions reported a 15% drop in trust levels compared to those with minimal exposure. This literacy paradox mirrors historical patterns in other technologies, where initial hype gives way to scrutiny once complexities are revealed.

Compounding this, a separate analysis in Futurism from earlier this year links over-reliance on AI tools to a decline in users’ critical thinking skills. The study, involving cognitive tests on AI-dependent workers, found that delegating tasks to algorithms can atrophy human judgment, further fueling distrust when AI errors become apparent in real-world applications like automated hiring or medical diagnostics.

Public Sentiment Shifts and Polling Insights

Polling data underscores this growing disillusionment. A 2024 survey highlighted in Futurism showed public opinion turning against AI, with approval ratings dropping by double digits over the previous year. Respondents cited concerns over job displacement, privacy invasions, and the technology’s role in amplifying misinformation as key factors.

This sentiment is not isolated; it’s echoed in broader discussions about AI’s societal impact. For example, posts on platforms like X, as aggregated in recent trends, reflect widespread skepticism, with users debating how increased AI integration in daily life—from smart assistants to predictive analytics—might exacerbate inequalities rather than solve them. Such organic conversations align with formal studies, indicating a grassroots pushback against unchecked AI proliferation.

Implications for Tech Leaders and Policy

For tech executives, these findings pose a strategic dilemma. Companies investing billions in AI development must now contend with a more informed populace demanding transparency and accountability. The Futurism piece points to initiatives like explainable AI frameworks as potential remedies, where systems are designed to articulate their reasoning in human-understandable terms, potentially rebuilding eroded trust.
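The report doesn’t describe a concrete implementation, but the core idea of explainable AI (models that can show which inputs drove a decision) can be sketched in a few lines. The feature names, weights, and hiring-screen scenario below are purely hypothetical illustrations, not any vendor’s actual system:

```python
# Minimal sketch of an "explainable" prediction: a linear scorer that
# reports each feature's contribution alongside its output.
# All feature names and weights are hypothetical.

def explain_score(features: dict[str, float], weights: dict[str, float]) -> tuple[float, list[str]]:
    """Return a score plus a human-readable breakdown of why."""
    contributions = {name: features[name] * weights.get(name, 0.0) for name in features}
    score = sum(contributions.values())
    # Sort by absolute impact so the biggest drivers are listed first.
    reasons = [
        f"{name}: {value:+.2f}"
        for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    ]
    return score, reasons

score, reasons = explain_score(
    {"years_experience": 4.0, "skills_match": 0.8, "gap_in_resume": 1.0},
    {"years_experience": 0.5, "skills_match": 2.0, "gap_in_resume": -0.7},
)
print(f"score={score:.2f}")   # score=2.90
for r in reasons:             # years_experience first, gap_in_resume last
    print(r)
```

Linear per-feature contributions are the simplest form of attribution; production systems typically use richer techniques (such as Shapley-value methods), but the principle of articulating reasoning in human-understandable terms is the same.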

Yet, challenges remain. A related article in TNGlobal argues that trust in AI hinges on collaborative efforts, including zero-trust security models to safeguard data integrity. Without such measures, the industry risks regulatory backlash, as seen in emerging policies that mandate AI audits to address biases and ensure ethical deployment.

Looking Ahead: Balancing Innovation and Skepticism

As we move deeper into 2025, the trajectory of AI trust will likely influence investment and adoption rates. Insights from Newsweek reveal a mixed picture: while 45% of workers trust AI more than colleagues for certain tasks, this statistic masks underlying doubts about its broader reliability. Industry leaders must prioritize literacy programs that not only educate but also address fears head-on.

Ultimately, fostering genuine trust may require a cultural shift within tech firms, moving beyond profit-driven narratives to emphasize human-centric design. As evidenced by ongoing research in publications like Nature’s Humanities and Social Sciences Communications, transdisciplinary approaches—integrating ethics, psychology, and technology—could redefine AI’s role in society, turning skepticism into informed partnership.



How to bridge the AI skills gap to power industrial innovation


Onofrio Pirrotta is a senior vice president and managing partner at Kyndryl, where he leads the technology company’s U.S. manufacturing and energy market. Opinions are the author’s own.

Artificial intelligence is no longer a futuristic concept for manufacturers; it is embedded in operations, from predictive maintenance to intelligent automation. 

According to Kyndryl’s People Readiness Report, 95% of manufacturing organizations are already using AI across various areas of their business. Yet, despite this widespread adoption, a critical gap remains: 71% of manufacturing leaders said their workforce is not ready to leverage AI effectively.

This disconnect between technological investment and workforce readiness is more than a growing pain — it’s a strategic risk. If left unaddressed, it could stall innovation, limit return on investment and widen the competitive gap between AI pacesetters and those still struggling to align people with progress. 

The readiness paradox

The manufacturing sector is undergoing a profound transformation. AI, edge computing and digital twins are reshaping the factory floor, enabling real-time decision making and operational agility.

So why are only 14% of manufacturing organizations we surveyed incorporating AI into customer-facing products or services?

The answer lies in the “readiness paradox.” Manufacturers are investing in AI tools and platforms, but not in the people who use them. As a result, employees are wary of AI’s impact on their roles, and many leaders are unsure how to guide their teams through the transition. Over half of manufacturing leaders cited a lack of skilled talent to manage AI, and fear of job displacement is affecting employee engagement. The result is a workforce that is technologically surrounded but practically unprepared.

What AI pacesetters are doing differently

Pacesetting companies — representing just 14% of the business and technology leaders surveyed across eight markets — have aligned their workforce, technology and growth strategies. They are seeing measurable benefits in productivity, innovation and employee engagement by using AI with the following approaches:

  1. Strategic change management: Pacesetters are just over 60% more likely to have implemented an overall AI adoption strategy and to have a change management plan in place. They’re treating AI as a major, well-supported transformation rather than a quick fix.
  2. Trust-building measures: Employees are more likely to embrace AI if they are involved in its implementation and the creation of ethical guidelines. It’s also important to maintain transparency around AI goals.
  3. Proactive skills development: Pacesetters are investing in upskilling, mentorship and external certifications and are more likely to have tools in place to inventory current skills and identify gaps. This gives them a clearer roadmap for workforce development as well as a head start on future readiness.

Best practices

So how can manufacturers bridge the AI skills gap and join the ranks of Pacesetters to align innovation with workforce development?

Make workforce readiness a boardroom priority

AI strategy should not live solely in the IT department. It must be a cross-functional initiative that includes HR, operations and the C-suite.

Yet research shows a disconnect. CEOs are 28% more likely than chief technology officers to say their organizations are in the early stages of AI implementation, and they are more likely to favor hiring external talent over upskilling current employees. This misalignment slows progress.

Manufacturers need unified leadership around a shared vision for AI and workforce transformation.

Establishing a cross-functional AI steering committee that includes frontline supervisors also ensures alignment between technology and talent strategies. Tying AI readiness to business KPIs such as productivity, quality and innovation metrics — as well as conducting regular workforce capability audits — will further elevate its importance in strategic planning and help forecast future needs based on AI roadmaps.

Build a culture of trust and transparency

Fear is a powerful inhibitor. When employees worry that AI will replace them, they are less likely to engage with it. Leaders must address these concerns directly. That means communicating openly about how AI will be used, involving employees in pilot programs and demonstrating how AI can augment, not replace, their roles.

Implementing a tiered AI education program, launching employee enablement campaigns and providing access to AI-powered tools can help bring a manufacturer’s workforce along the AI journey. Hosting AI town halls where employees from supervisory roles, as well as the frontline, can ask questions or share concerns is another way to build engagement. Worker trust can also be reinforced through the development of an internal AI ethics policy and governance board. 



3 AI roadblocks—and how to overcome them


Evidence of uneven AI adoption in the private sector grows by the day, with executives worried about falling behind more tech-savvy competitors. But the stakes are different, and considerably higher, in government. For local leaders, AI isn’t about winning a race. It’s about unlocking new problem-solving capacity to deliver better services and meet pressing resident needs.

Even so, city governments face real barriers to further adoption, including persistent concerns about accuracy and privacy, procurement hurdles, and too little space for civil-servant experimentation. The good news? Innovative leaders are already showing how to overcome these obstacles. And, in doing so, they’re reaping insights of use to others aiming to do the same.

Designing AI tools tailored to employees’ needs.

Boston Chief Innovation Officer Santiago Garces has no doubt that his city-hall colleagues want to push their efforts forward with AI, and he’s got the data to prove it. His team recently conducted a survey of 600 Boston city employees and found that 78 percent of them want to further integrate the technology into their work. When asked what’s holding them back, civil servants named security, accuracy, and intellectual property among their top concerns.

Boston’s solution: Developing AI tools with more specific use cases, such as speeding up the procurement process, and employee concerns in mind from the start. 

Following through on a project they began last year, Garces and his team recently deployed a tool called Bitbot that can answer employees’ questions about procurement. Because it was trained on dozens of procurement documents, as well as state law, local ordinances, and city best practices, Garces argues the tool is best described not as a chatbot (though it resembles one) but as the AI version of a handbook people know they can trust. And while the city’s randomized controlled trial of the tool’s impact is still wrapping up, Garces says the city has generally seen faster task completion and higher levels of accuracy from employees using it. At the same time, the tool is set up not to send information back to the major tech companies the way most public-facing AI tools do, which helps address employee concerns around privacy and security.
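Boston hasn’t published Bitbot’s internals, but the general pattern described here — grounding answers in a local document store so queries never leave city systems — can be sketched in miniature. Everything below, from the document snippets to the word-overlap scoring, is a hypothetical illustration, not the city’s actual implementation:

```python
# Hypothetical sketch of local document retrieval for an internal Q&A tool:
# rank a corpus of on-premises policy documents against an employee's
# question, so no query data is sent to outside providers.
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def retrieve(question: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank local documents by shared vocabulary with the question."""
    q_words = set(tokenize(question))
    scores = Counter()
    for name, text in documents.items():
        scores[name] = len(q_words & set(tokenize(text)))
    # Keep only documents with at least one matching term.
    return [name for name, score in scores.most_common(top_k) if score > 0]

# Illustrative stand-ins for a procurement handbook's sections.
docs = {
    "bid-thresholds": "Purchases over the threshold require a sealed bid process.",
    "sole-source": "A sole source contract needs written justification.",
    "travel-policy": "Employee travel reimbursement requires receipts.",
}
print(retrieve("When do I need a sealed bid?", docs))  # → ['bid-thresholds', 'sole-source']
```

A production system would use embeddings rather than word overlap and would pass the retrieved passages to a language model to compose the answer, but the trust-relevant property is the same: the corpus and the queries stay inside the organization.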

While not every city has the resources to develop products like this on its own, Garces notes that working with university partners (he works closely with Northeastern University) can be very affordable. And this sort of approach could help civil servants everywhere be more comfortable in pushing AI use forward.

“They want the city-provided tool that they know that they can trust,” Garces explains.

Rapidly prototyping to de-risk big purchases.

When not developing bespoke AI solutions, cities turn to outside vendors. And they’re increasingly doing so with great success and impact, according to Mitchell Weiss, a Harvard Business School professor and senior advisor to the Bloomberg Harvard City Leadership Initiative. Still, adoption is uneven. “Some local leaders are wary [of making a sizeable investment], given broader concerns in the private sector and worries about the return on investment,” he adds. Tight city budgets make the stakes of a misstep especially high, and private-sector caution only reinforces city leaders’ hesitation.

That’s why some cities are shaking up how they buy AI tools, both to speed that procurement process up and make sure that they stay laser-focused on boosting efficiency and effectiveness, rather than pursuing new tech for its own sake. Call it “try before you buy” for cities and AI.

Take San Antonio. Emily Royall, who until this past month worked as a senior manager for the emerging technology division in the city, helped run a rapid prototyping initiative that ensures potential AI contracts address tangible, department-level needs. The city spends up to $25,000 on three-to-six-month pilots before committing to longer-term vendor deals. The goal is to gauge impact and kick the tires first. 

Longer term, Royall and her new colleagues at the Procurement Excellence Network (she joined the team in September) believe one way cities will take their AI games to the next level is by banding together and conducting joint solicitations. And unlike traditional approaches to cooperative purchasing, cities are now determined to take a more muscular role in deciding for themselves what the most valuable AI use cases look like, and then calling on industry to develop the products that bring them to life while still meeting cities’ privacy concerns.

“This is about pooling purchasing power to deliver the outcomes that governments actually want to see from their implementation of the technology,” she says.

Leading teams toward bolder experimentation.

One of the cities leading that charge to shape the AI market is San Jose, Calif., which on Wednesday announced the first winners of its AI Incentive Program, offering grants to AI startups taking on everything from food waste to maternal health. But that’s not the only way the city is standing out. San Jose is also a model when it comes to creating a workplace where employees trust that leaders will have their backs as they experiment with the technology in new ways.

“Integrating AI into city hall isn’t just a question of expense,” explains Mai-Ling Garcia, digital practice director at the Bloomberg Center for Public Innovation at Johns Hopkins University. “It also requires that you have the political capital to spend to take risks.” 

And San Jose Mayor Matt Mahan is spending that political capital to great effect.

“He tells us it’s OK if you try something and it doesn’t work—you will not be penalized so long as there’s sufficient due diligence,” explains Stephen Caines, the city’s chief innovation officer.

But it’s not just what the mayor tells civil servants. And it’s not just the training San Jose provides through its data and AI upskilling programs, which are delivered in partnership with San Jose State University and through which the mayor wants to train 1,000 more civil servants next year. It’s the larger political climate he’s cultivated to encourage AI experimentation.

For example, the mayor presented a memo to the city council two years ago calling for the city to seize the moment and help shape (and stimulate) the emerging industry, and to integrate it across city operations. When local lawmakers voted for it, it helped clarify for everyone in city hall that pushing public-sector AI use forward wasn’t just allowed, but a key part of their job.

“I am often reminding policymakers and my colleagues that we spend probably a disproportionate amount of time focused on the technology itself or the latest hot startup versus what moves the needle the most, which is the people who will use these tools,” Mayor Mahan tells Bloomberg Cities. He adds that it isn’t just him, but city leaders across the organization who encourage experimentation with the technology. 

“How you choose to react to failure matters a tremendous amount for building culture,” Mayor Mahan, who is participating in the Bloomberg Harvard City Leadership Initiative, explains.

Among San Jose’s most concrete AI successes so far is a traffic-signal initiative that has already shown the potential to reduce resident commute times by 20 percent. And if the mayor and his team have anything to say about it, that’s just the start: they aim not only to push AI use forward in their own city but to encourage other cities to experiment, too.

“The outdated vision of government is that we are merely consumers of technology,” Caines, the local innovation officer, explains. “The thesis that we’re putting forward is that government can be not only a lab where technology can be deployed, it can also be a valuable partner in co-creation, and we can actually serve as a market indicator by highlighting use cases that make a difference for residents.”

 


