
AI Familiarity Erodes Public Trust Amid Bias and Misuse Concerns

In the rapidly evolving world of artificial intelligence, a counterintuitive trend is emerging: greater familiarity with AI technologies appears to erode public confidence rather than bolster it. Recent research highlights how individuals who gain deeper knowledge about AI systems often become more skeptical of their reliability and ethical implications. This shift could have profound implications for tech companies pushing AI adoption in everything from consumer apps to enterprise solutions.

For instance, a study detailed in Futurism reveals that as people become more “AI literate”—meaning they understand concepts like machine learning algorithms and data biases—their trust in these systems diminishes. The findings, based on surveys of thousands of participants, suggest that exposure to AI’s inner workings uncovers vulnerabilities, such as opaque decision-making processes and potential for misuse, leading to heightened wariness.

The Erosion of Trust Through Education

Industry insiders have long assumed that education would demystify AI and foster acceptance, but the data tells a different story. According to the same Futurism report, participants who underwent AI training sessions reported a 15% drop in trust levels compared to those with minimal exposure. This literacy paradox mirrors historical patterns in other technologies, where initial hype gives way to scrutiny once complexities are revealed.

Compounding this, a separate analysis in Futurism from earlier this year links over-reliance on AI tools to a decline in users’ critical thinking skills. The study, involving cognitive tests on AI-dependent workers, found that delegating tasks to algorithms can cause human judgment to atrophy, further fueling distrust when AI errors become apparent in real-world applications like automated hiring or medical diagnostics.

Public Sentiment Shifts and Polling Insights

Polling data underscores this growing disillusionment. A 2024 survey highlighted in Futurism showed public opinion turning against AI, with approval ratings dropping by double digits over the previous year. Respondents cited concerns over job displacement, privacy invasions, and the technology’s role in amplifying misinformation as key factors.

This sentiment is not isolated; it’s echoed in broader discussions about AI’s societal impact. Recent posts on platforms like X, for example, reflect widespread skepticism, with users debating how increased AI integration in daily life—from smart assistants to predictive analytics—might exacerbate inequalities rather than solve them. Such organic conversations align with formal studies, indicating a grassroots pushback against unchecked AI proliferation.

Implications for Tech Leaders and Policy

For tech executives, these findings pose a strategic dilemma. Companies investing billions in AI development must now contend with a more informed populace demanding transparency and accountability. The Futurism piece points to initiatives like explainable AI frameworks as potential remedies, where systems are designed to articulate their reasoning in human-understandable terms, potentially rebuilding eroded trust.
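To make the idea concrete, here is a minimal, purely illustrative sketch of what “articulating reasoning” can look like in code: a toy linear scoring model that reports each feature’s contribution alongside its decision. The weights and feature names are invented for illustration and are not drawn from any system described above.

```python
# Toy "explainable" scorer: the decision is returned together with the
# per-feature contributions that produced it. All names are invented.

WEIGHTS = {"years_experience": 0.6, "skills_match": 1.2, "gap_months": -0.4}

def score_with_explanation(features: dict) -> tuple:
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    # List the biggest drivers first so the reasoning reads naturally.
    reasons = [
        f"{name}: {impact:+.2f}"
        for name, impact in sorted(
            contributions.items(), key=lambda kv: abs(kv[1]), reverse=True
        )
    ]
    return score, reasons

score, reasons = score_with_explanation(
    {"years_experience": 5, "skills_match": 0.8, "gap_months": 6}
)
print(f"score={score:.2f}")  # -> score=1.56
print("; ".join(reasons))    # -> years_experience: +3.00; gap_months: -2.40; skills_match: +0.96
```

Even this crude form of transparency changes the conversation: a rejected applicant can see which factor drove the score, which is the kind of accountability the research suggests users now expect.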

Yet, challenges remain. A related article in TNGlobal argues that trust in AI hinges on collaborative efforts, including zero-trust security models to safeguard data integrity. Without such measures, the industry risks regulatory backlash, as seen in emerging policies that mandate AI audits to address biases and ensure ethical deployment.

Looking Ahead: Balancing Innovation and Skepticism

As we move deeper into 2025, the trajectory of AI trust will likely influence investment and adoption rates. Insights from Newsweek reveal a mixed picture: while 45% of workers trust AI more than colleagues for certain tasks, this statistic masks underlying doubts about its broader reliability. Industry leaders must prioritize literacy programs that not only educate but also address fears head-on.

Ultimately, fostering genuine trust may require a cultural shift within tech firms, moving beyond profit-driven narratives to emphasize human-centric design. As evidenced by ongoing research in publications like Nature’s Humanities and Social Sciences Communications, transdisciplinary approaches—integrating ethics, psychology, and technology—could redefine AI’s role in society, turning skepticism into informed partnership.




Student Assembly Establishes Committee to Provide Recommendations on Technology, AI Policies

Published

on


The Student Assembly voted to establish a Technology Committee during Thursday’s meeting, setting the stage for undergraduate involvement in University technology policy. 

Resolution 5, “Establishing The Technology Committee,” passed unanimously at the Assembly meeting. The new committee is designed to address and advise on changing technology policies in the face of generative AI and other emerging technologies. 

The committee will “provide recommendations on policies, programs, and initiatives,” and will “serve as the primary student voice on issues including digital tools … and policies concerning emerging technologies such as generative AI,” according to the resolution.

Hayden Watkins ’28, the Assembly vice president for finance, was one of the sponsors of the resolution, which was designed to improve channels of communication with administration regarding technology. 

“The [Technology Committee] will be a fantastic avenue for us students to communicate with administration and advise the Student Assembly on student perspectives on AI, hate speech on social media, and other issues relating to technology,” Watkins wrote in a statement sent to The Sun.

According to the resolution, the University has “historically relied on ad hoc student surveys and feedback mechanisms” to learn student perspectives, “but no formal or consistent channel exists for student input on University-wide technology governance decisions.”

While formal policy decisions relating to technology and its usage are made by University administrators, Student Assembly Bylaws state that the Assembly may create committees to “review all policies and programs … that create policy directly affecting student life.”

Membership of the committee will be selected by the Assembly, and the IT Governance Liaison will serve as its chair.

In an email sent to students on August 28 from the Office of the Vice Provost for Undergraduate Education, the University acknowledged that while new technologies like generative artificial intelligence tools are “changing the educational landscape” and offer “incredible opportunities for learning,” they can also present various risks if used improperly. 

However, the email did not establish a uniform AI policy, leaving specific policies up to individual professors in alignment with the existing Code of Academic Integrity and the undergraduate Essential Guide to Academic Integrity.  

“Faculty will likely set different parameters around the appropriate use of generative AI in their courses,” the email read. “It is your responsibility to pay close attention to their course-specific guidelines.”

This approach mirrors that of peer institutions, which have been hesitant to issue bans on the use of generative AI, though schools including Columbia and Princeton have prohibited the use of AI for academics without instructor approval. 



AI tech identifies Central Okanagan properties and hazardous material in their bins

New technology that uses artificial intelligence (AI) may have homeowners in the Central Okanagan thinking twice about what they put into their curbside garbage and recycling bins.

Chad Evans, a recycling truck driver, says hazardous materials are ending up in his truck far too often.

“Every day,” he said. “Probably one in 10 bins.”

It’s hoped those numbers can be reduced with a new system being used across the Regional District of Central Okanagan (RDCO).

Called Prairie Robotics, the system uses truck-mounted cameras that capture images of material going into the recycling trucks.

“As the bin is being emptied into the truck, that is when the hundreds or thousands of pictures are being taken and that’s getting sent back to our system,” said Brody Hawkins, district manager for Environmental 360 Solutions (E360s), the company contracted for curbside pickup by the RDCO.


“What that system does with those pictures is it uses AI to monitor all the contamination.”

The technology is then able to track the contaminated material back to the home involved.


“We use the GPS coordinates of the garbage bin to identify the address,” said Hawkins.

The RDCO then sends an information postcard to that address.

The postcard is meant as a warning, but one that could be followed by fines if the problem persists.

“They have an image of what the truck is seeing, so they can see exactly what they put in there that doesn’t belong and some more information,” said Cynthia Coates, supervisor of solid waste services for RDCO.
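In outline, the system chains three steps: classify each frame for contamination, convert the truck’s GPS fix into a property address, and queue the notification. The Python sketch below is a guess at that flow; every function and name in it is a hypothetical stand-in, not Prairie Robotics’ or E360s’ actual software.

```python
# Illustrative sketch only: every name here is a hypothetical stand-in,
# not the actual Prairie Robotics system.
from dataclasses import dataclass

@dataclass
class BinEvent:
    image_id: str   # frame captured as the bin empties into the truck
    lat: float      # GPS fix recorded at the moment of pickup
    lon: float

def classify_contamination(image_id: str) -> list:
    """Stand-in for the AI model that flags hazardous or non-recyclable items."""
    # A real system would run a vision model over the captured frame.
    return ["propane_tank"] if image_id == "frame_0042" else []

def reverse_geocode(lat: float, lon: float) -> str:
    """Stand-in for mapping pickup coordinates back to a street address."""
    return f"property near ({lat:.5f}, {lon:.5f})"

def queue_postcard(address: str, items: list) -> None:
    """Stand-in for the district's postcard notification step."""
    print(f"Postcard to {address}: flagged {', '.join(items)}")

def process(event: BinEvent) -> None:
    flagged = classify_contamination(event.image_id)
    if flagged:  # warning first; repeat offences could draw fines
        queue_postcard(reverse_geocode(event.lat, event.lon), flagged)

process(BinEvent("frame_0042", 49.88790, -119.49600))
```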



The hazardous materials still ending up in curbside bins range from corrosive, flammable, or poisonous items to less obvious things like batteries and battery-operated devices, such as e-cigarettes, power tools, and smoke alarms.


According to Coates, propane tanks are also being discarded in the bins.

“I’m seeing notification of fires at both our landfill and in our trucks and in our recycling facilities more than ever,” Coates said. “So I think it’s becoming more prevalent than ever.”

In July, a fire erupted in the hopper of a recycling truck in Kelowna.

The driver was forced to dump the load in a nearby parking lot.

A metal fuel filter improperly placed in a recycling bin was the suspected cause.

Right now, four of the seven E360s recycling trucks are equipped with the new AI technology.

The new system will be installed in the remaining three in the coming weeks.




© 2025 Global News, a division of Corus Entertainment Inc.






How Malawi is taking AI technology to small-scale farmers who don’t have smartphones

MULANJE, Malawi — Alex Maere survived the destruction of Cyclone Freddy when it tore through southern Malawi in 2023. His farm didn’t.

The 59-year-old saw decades of work disappear with the precious soil that the floods stripped from his small-scale farm in the foothills of Mount Mulanje.

He was used to producing a healthy 850 kilograms (1,870 pounds) of corn each season to support his three daughters and two sons. He salvaged just 8 kilograms (17 pounds) from the wreckage of Freddy.

“This is not a joke,” he said, remembering how his farm in the village of Sazola became a wasteland of sand and rocks.

Freddy jolted Maere into action. He decided he needed to change his age-old tactics if he was to survive.

He is now one of thousands of small-scale farmers in the southern African country using a generative AI chatbot designed by the non-profit Opportunity International for farming advice.

The Malawi government is backing the project, having seen the agriculture-dependent nation hit recently by a series of cyclones and an El Niño-induced drought. Malawi’s food crisis, which is largely down to the struggles of small-scale farmers, is a central issue for its national elections next week.

More than 80% of Malawi’s population of 21 million rely on agriculture for their livelihoods and the country has one of the highest poverty rates in the world, according to the World Bank.

The AI chatbot suggested Maere grow potatoes last year alongside his staple corn and cassava to adjust to his changed soil. He followed the instructions to the letter, he said, and cultivated half a soccer field’s worth of potatoes and made more than $800 in sales, turning around his and his children’s fortunes.

“I managed to pay for their school fees without worries,” he beamed.

Artificial intelligence has the potential to uplift agriculture in sub-Saharan Africa, where an estimated 33-50 million smallholder farms like Maere’s produce up to 70-80% of the food supply, according to the U.N.’s International Fund for Agricultural Development. Yet productivity in Africa — which has the world’s fastest-growing population to feed — is lagging behind despite vast tracts of arable land.

As AI’s use surges across the globe, it is helping African farmers access new information to identify crop diseases, forecast drought, design fertilizers to boost yields, and even locate an affordable tractor. Private investment in agriculture-related tech in sub-Saharan Africa grew from $10 million in 2014 to $600 million in 2022, according to the World Bank.

But not without challenges.

Africa has hundreds of languages for AI tools to learn. Even then, few farmers have smartphones and many can’t read. Electricity and internet service are patchy at best in rural areas, and often non-existent.

“One of the biggest challenges to sustainable AI use in African agriculture is accessibility,” said Daniel Mvalo, a Malawian technology specialist. “Many tools fail to account for language diversity, low literacy and poor digital infrastructure.”

The AI tool in Malawi tries to account for all three. The app is called Ulangizi, which means advisor in the country’s Chichewa language. It is WhatsApp-based and works in Chichewa and English. Farmers can type or speak a question, and the app replies with an audio or text response, said Richard Chongo, Opportunity International’s country director for Malawi.

“If you can’t read or write, you can take a picture of your crop disease and ask, ‘What is this?’ And the app will respond,” he said.
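In broad strokes, a bot like this routes each incoming message by type: text and transcribed voice notes go to the language model, photos go to an image-aware query, and replies come back as text or synthesized audio. The Python sketch below illustrates that routing under those assumptions; the function names and model calls are hypothetical, not Opportunity International’s actual code.

```python
# Hypothetical routing sketch for a WhatsApp-style advisory bot.
# None of these functions are Opportunity International's actual code.

def ask_advisory_model(prompt, image=None):
    """Stand-in for the generative model aligned with ministry farming guidance."""
    return "Advice based on: " + prompt

def transcribe(audio):
    """Stand-in for Chichewa/English speech-to-text."""
    return "transcribed question"

def synthesize_speech(text):
    """Stand-in for text-to-speech, for farmers who cannot read."""
    return text.encode()

def handle_message(kind, payload):
    if kind == "text":
        return ask_advisory_model(payload)
    if kind == "voice":
        # Reply in audio so the answer is accessible without literacy.
        return synthesize_speech(ask_advisory_model(transcribe(payload)))
    if kind == "image":
        # e.g., a photo of a diseased crop with the question "What is this?"
        return ask_advisory_model("Identify this crop problem.", image=payload)
    raise ValueError(f"unsupported message type: {kind}")

print(handle_message("text", "My maize leaves are turning yellow."))
```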

But to work in Malawi, AI still needs a human touch. For Maere’s area, that is the job of 33-year-old Patrick Napanja, a farmer support agent who brings a smartphone with the app for those who have no devices. Chongo calls him the “human in the loop.”

“I used to struggle to provide answers to some farming challenges, now I use the app,” said Napanja.

Farmer support agents like Napanja generally have around 150-200 farmers to help and try to visit them in village groups once a week. But sometimes, most of an hour-long meeting is taken up waiting for responses to load because of the area’s poor connectivity, he said. Other times, they have to trudge up nearby hills to get a signal.

They are the simple but stubborn obstacles millions face in taking advantage of technology that others have at their fingertips.

For African farmers living on the edge of poverty, the impact of bad advice or AI “hallucinations” can be far more devastating than for those using it to organize their emails or put together a work presentation.

Mvalo, the tech specialist, warned that inaccurate AI advice like a chatbot misidentifying crop diseases could lead to action that ruins the crop as well as a struggling farmer’s livelihood.

“Trust in AI is fragile,” he said. “If it fails even once, many farmers may never try it again.”

The Malawian government has invested in Ulangizi and it is programmed to align with the agriculture ministry’s own official farming advice, making it more relevant for Malawians, said Webster Jassi, the agriculture extension methodologies officer at the ministry.

But he said Malawi faces challenges in getting the tool to enough communities to make an extensive difference. Those communities need not just smartphones but also affordable internet access.

For Malawi, the potential may be in combining AI with traditional collaboration among communities.

“Farmers who have access to the app are helping fellow farmers,” Jassi said, and that is improving productivity.

___

For more on Africa and development: https://apnews.com/hub/africa-pulse

The Associated Press receives financial support for global health and development coverage in Africa from the Gates Foundation. The AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.


