
Tools & Platforms

AI skills are about more than a workplace technology

AI is being presented as something to “sprinkle on” public and private sector tasks, magically delivering innovation and efficiency gains. AI skills are framed in terms of technical competence for officials and industry. Yet this framing perpetuates the exclusion, elitism, and corporate capture that have dominated AI development to date, and it ignores the wealth of evidence that adopting, adapting to, or even resisting AI is not just a technical task but one that requires many more perspectives.

Members of the public, young and old, are affected by the introduction of AI, and are gaining first-hand experience of both its advantages and the bias and discrimination it can amplify. When Connected by Data organised the People’s Panel on AI, a citizens’ jury bringing together everyday people, we saw first-hand how most people feel they have no adequate access to clear information and opportunities to learn about how AI affects their lives. Existing training is too often designed by tech firms with vested interests in selling their products, rather than fostering critical digital consumers and citizens.

The open letter we both signed calls for “opportunities for parents, older people, the voluntary sector, people from underserved and marginalised communities, and individuals from every walk of life to develop their own understanding of, and perspectives on, AI”. Mark’s research has underscored how, in particular, adversely-racialised people deeply understand the problems these technologies bring and, if listened to, can articulate powerful ideas for better safeguarding and design. 

Our goal, an inclusive AI literacy agenda that supports participatory and collective decision making around AI, is neither idealistic nor over-optimistic, but grounded in evidence and experience of participatory approaches to technology. We must equip and trust informed publics to navigate AI’s opportunities and limitations. After all, people are experts in their own lives. Although the apparent pace of AI change can often feel disorienting, it would not take much to shift conversations so we can all feel a little more in control. Critical AI literacy should be available to all, to help people navigate living with AI and make real choices about how they want to engage, or not.

The message to Westminster is clear: rethink and democratise the future of AI skills. AI should be developed not only to benefit those at the top, but with and by all members of society, especially those who risk being harmed most. These voices must be listened to, and their capability and autonomy fostered and recognised in any AI skills investment. This future must be crafted collectively and made socially just for all: let the public, not the technology, flourish above all.

Tim Davies is Director of Research & Practice at Connected by Data, the campaign for communities to have a powerful voice in the governance of data and AI. Mark Wong is Senior Lecturer and Subject Group Lead of Social and Urban Policy at University of Glasgow.







Meta reportedly explores using rival AI models to enhance its apps

Meta is exploring the use of AI models from Google and OpenAI to enhance its apps while advancing its own Llama AI technology.

Meta is reportedly exploring the use of artificial intelligence models developed by competitors, including Google and OpenAI, to improve AI features across its platforms. According to a report by The Information, executives at the Meta Superintelligence Lab have considered integrating Google’s Gemini model into the company’s Meta AI chatbot. The move would enable Meta to offer a more robust, conversational text-based solution for answering user search queries.

The report also indicated that Meta has held discussions about incorporating OpenAI’s technology into Meta AI and its other AI-powered features. These potential collaborations highlight Meta’s effort to strengthen its AI capabilities while continuing to develop its own large language model, Llama.

Strategic partnerships as a temporary measure

A Meta spokesperson stated that the company is taking an “all-of-the-above approach to building the best AI products,” which includes both building in-house solutions and partnering with external organisations. The report noted that while Meta is exploring external technology, the company’s primary goal is to refine and advance its own AI systems. Leveraging competitor models would only be a temporary measure to accelerate innovation and keep pace with rivals in the rapidly evolving AI market.

Meta’s interest in adopting external AI tools comes at a time when competition in generative AI development is intensifying. By accessing technologies from industry leaders such as Google and OpenAI, Meta aims to enhance user experiences on its apps while gaining insights that can help strengthen future iterations of Llama.

Internal AI adoption and recruitment efforts

The Information reported that Meta employees are already using Anthropic’s AI models to support the company’s internal coding assistant. This indicates that Meta has been integrating third-party AI solutions internally even as it invests heavily in its own research and development.

Additionally, Meta has been actively recruiting AI researchers from Google and OpenAI to enhance expertise at its Superintelligence Lab. These recruitment efforts reportedly include highly competitive compensation packages designed to attract top talent from across the AI sector.

As Meta continues to refine its AI strategy, the company’s willingness to work with external partners shows its commitment to creating cutting-edge products. The temporary reliance on competitor models could help Meta accelerate development and maintain a strong position in the AI race.


Tech expert warns of 'alarming' AI behavior after teen's death – Fox News





Is AI turning your travel experience into a costly trap?



In this commentary on AI anxiety in travel:

  • A look at how travel companies are using AI to automatically bill you for rental car damage, in-room infractions, and higher airfares.
  • An analysis of how these automated systems can make mistakes and why the burden of proof is shifting to the consumer.
  • Actionable strategies you can use to protect yourself from AI-powered price hikes and false damage claims.

Worried about every little ding on your rental car? Do you always go into “anonymous” mode on your web browser before booking airline tickets?

If you do, then you probably have AI anxiety.

Travel companies are quietly deploying artificial intelligence systems, creating an invisible web of automated billing that can cost you hundreds or thousands of dollars—often without your knowledge or consent. From Hertz’s controversial AI vehicle scanners to hotel vapor detectors that fine guests when their hairdryers overheat, to airline pricing algorithms that jack up fares based on your browsing history, these systems operate in the shadows while your wallet takes a hit.

“Technology can make travelers feel powerless,” says Raymond Yorke, a spokesman for Redpoint Travel Protection. “It’s happening now. We’ve seen everything from automated rental car damage claims to a suspicious surge in airfare driven by dynamic pricing algorithms.”

But it doesn’t have to stay that way.

The technology promises efficiency and fairness, but travelers are discovering that AI often acts more like a digital pickpocket than an impartial assistant. The systems flag false positives, make decisions without human oversight, and shift the burden of proof onto customers who have to defend themselves against algorithmic accusations.

Where are the AI traps?

Rental cars have become ground zero for AI overreach. Companies like Hertz are using technology from a company called UVeye that can reportedly detect paint inconsistencies and minor damage down to the millimeter.

But critics say these systems can’t always distinguish between existing scratches, dirt or lighting changes, and genuine new damage. And car rental companies bill customers automatically, with limited avenues for appeal.

Legal consultant and AI specialist Nicola Cain notes that human intervention only happens when a customer raises a complaint, meaning the AI’s judgment stands unless you fight back. It should be the other way around, she says. 

“Human oversight needs to be built into the process,” she adds.

Hotel chains are installing sophisticated sensor networks that go far beyond traditional smoke detectors. These systems monitor vapor particles, noise levels, occupancy counts, and even Wi-Fi usage patterns. 

The systems are far from perfect. Ruth Cruz recently got hit with a $250 fee for smoking in her hotel room. She says the AI registered a false positive. 



Your voice matters


Have you been hit with a surprise charge you suspect was generated by an automated system? Do you think this technology makes travel more efficient, or is it just a new way for companies to make money?

And what are your best tips for protecting yourself from these AI traps?

Share your thoughts in the comments.

“I successfully disputed the charge by explaining the technical limitations of their detection system,” says Cruz, who edits a technology website in San Jose. (These types of errors are easy to find with a little sleuthing. Hers involved a quick online search.)

Airlines are perfecting the art of AI-powered price manipulation. For years, their systems have tracked your search history, location, device type, loyalty status, and dozens of other signals to predict your willingness to pay premium prices. AI is supercharging that practice.

Thomas O’Shaughnessy, a marketing executive from St. Louis, has noticed prices jumping dramatically when he researches flights. 

“The price increases weren’t random,” he says. “I believe they were caused by an AI model that changes prices based on demand, the time of booking, and even the user’s search history.”

No wonder travelers have AI anxiety. The question is, what can they do about it?

How to fight the AI

“The key to fighting back is understanding that these systems prioritize speed and automation over accuracy,” explains Frank Harrison, regional security director for the Americas at World Travel Protection. “They’re designed to extract maximum revenue while hoping customers won’t challenge algorithmic decisions. But armed with the right documentation and strategies, travelers can level the playing field.”

Here are some strategies that will help you fight AI:

  • Renting a car? Channel your inner Sherlock. Do a comprehensive walk-around and take photos and video of your car from all angles. Focus on areas AI commonly flags, like bumpers, wheel wells, and roof surfaces. Email these files to yourself immediately for proof of when they were taken. Document everything—every scratch, every dent, every imperfection—before accepting any rental. And remember, you can always request a different vehicle if the one you’re renting has too many dings or dents.
  • Don’t let ’em track you. Use private browsing or incognito mode when you book flights or hotels. Clear your cookies between searches. Use a VPN (Virtual Private Network) to shift your location. “I’ve seen price differences of $200 or more for the same flight just by appearing to browse from different cities,” says Joey Martin, an AI expert. Also, search for fares on multiple devices and compare prices across platforms. AI pricing algorithms often show different rates to smartphone users versus desktop browsers, or to logged-in loyalty members versus anonymous searchers.
  • Open your hotel window, if possible. Don’t touch anything with a price tag. It’s true, AI is monitoring the air you breathe and the location of every Coke in your minibar. You already know what to do: Don’t touch the items in your minibar and keep your hotel room ventilated. If a surprise bill arrives, respond immediately and assertively. Ask for the original AI scan data, sensor logs, or algorithmic decision records that supposedly justify the charge. Most companies will struggle to provide concrete evidence that withstands scrutiny.

Bear in mind that these strategies will evolve. AI adjusts to consumer behavior, and you’ll have to make some course corrections along the way, too.

This is the start of an AI arms race

In travel, AI is an imperfect technology, registering false positives and erroneously billing consumers. It raises prices by hundreds of dollars per ticket, believing you’ll happily pay extra for your airfare because of your location. What’s more, these systems are a black box, so when you ask for proof that you damaged a car or removed something from a room, they can’t always provide it. 

In short, this is nothing more than a digital money grab, and your AI anxiety is completely justified.

We’re at the beginning of an AI arms race. Travel companies are using machine learning to maximize their revenue. It’s time to fight back.

What happens next? The travel industry is busy deploying AI everywhere. Soon, systems could monitor carry-on luggage to ensure you’re paying for every bag. Hotels could find ways of automatically billing you for every missing towel or bathrobe. Car rental companies could turn their AI resources to car interiors, earning more money from stains or messy upholstery. And don’t even get me started on cruise lines!

Assume AI is tracking your every move — because it probably is.



The AI survival guide: How to fight back against travel’s hidden fees

Rental cars: Document everything

  • Take a detailed video walk-around of the car before you leave the lot.
  • Photograph every existing scratch, dent, and scuff, inside and out.
  • Email the files to yourself immediately to create a timestamped record.

Airfare & hotels: Go undercover

  • Use a VPN to mask your location and avoid geographic price targeting.
  • Always search in your browser’s private or incognito mode.
  • Clear your cookies between searches to prevent tracking.

Hotel rooms: Challenge the charges

  • If you get a surprise fee, immediately demand the evidence.
  • Ask for the specific sensor logs or AI scan data that triggered the charge.
  • Most companies will waive the fee when you challenge them for proof.

