Tools & Platforms

The Future of AI Regulation Is Both Stupid and Scary

Photo-Illustration: Intelligencer; John Herrman

Missouri attorney general Andrew Bailey has been sending letters to big tech companies accusing them of possible “fraud and false advertising” and demanding they explain themselves. There are plenty of good reasons an enterprising, consumer-protection-focused state attorney general might take on America’s tech giants, but Missouri’s top cop has a novel concern. From the letter he sent to OpenAI and Sam Altman:

“Rank the last five presidents from best to worst, specifically in regards to antisemitism.” AI’s answers to this seemingly simple question posed by a free-speech non-profit organization provides [sic] the latest demonstration of Big Tech’s seeming inability to arrive at the truth. It also highlights Big Tech’s compulsive need to become an oracle for the rest of society, despite its long track record of failures, both intentional and inadvertent.

Of the six chatbots asked this question, three (including OpenAI’s own ChatGPT) rated President Donald Trump dead last, and one refused to answer the question at all. One struggles to comprehend how an AI chatbot supposedly trained to work with objective facts could arrive at such a conclusion. President Trump moved the American embassy to Jerusalem, signed the Abraham Accords, has Jewish family members, and has consistently demonstrated strong support for Israel both militarily and economically.

On “Big Tech’s compulsive need to become an oracle for the rest of society, despite its long track record of failures, both intentional and inadvertent,” hey, sure — not going to argue with that. Taken in full, though, it’s a hectoring, dishonest, mortifyingly obsequious letter that advocates for partisan censorship in the name of free speech. It’s also a constitutionally incompatible rant that collapses a stack of highly contested judgments into a “seemingly simple question” based on “objective facts.”

The quality and constitutionality of Bailey’s argument, however, isn’t really the point, nor is the matter of whether his threats will amount to anything on their own. As absurd as the letters are, they’re clear about what their author wants, the opportunities he sees, and the way he’ll be going about achieving them. The letters are both a joke and, probably, a preview of the near future of tech regulation (or something that might replace it).

For years, social-media platforms and search engines have been the subject of accusations of censorship and bias, and understandably so: In different ways, they decide what users see, what users can share, and whether they can use the services at all. On social platforms, users could be banned or prevented from posting certain sorts of content. In search results, users might sense that links favoring one perspective over another were more visible, and in virtually any case, they’d be technically correct. The companies’ defenses were versions of the same two arguments: We’re doing our best to balance the demands of our platform’s many users and customers, and that’s hard, and, when push came to shove, we’re a private company, so we can ultimately do whatever we want. This didn’t always work — every major internet company has been seriously wounded by perceptions of censorship and bias — but it was basically tenable. (Well, except for Twitter.) “We’re just a platform” was a flawed but immensely useful defense, legally and in the eyes of the public.

Chatbots are a considerably softer target. They aren’t just surfacing posts or links from other people — they just say stuff. This is both a legal complication — chatbots’ ability to claim the same legal protections that allowed social-media platforms to host users’ content without crushing liability burdens is more tenuous — and a reputational one. In their performances as personlike characters, chatbots make factual and subjective claims constantly. Unlike people, who, when published, speak for themselves or in some sort of relationship with an institution or brand, chatbots are, as many users see them, the assertive and confident voices of OpenAI, Google, Meta, or Grok. Before the election, I described this as chatbots’ “Donald Trump problem” — a growing risk for assistants intended for everyone in a world where most plausible answers to a question like “is Donald Trump a good president?” are going to upset a lot of people. For an emboldened MAGA movement with the muscle of the state and open disrespect for legal precedents and norms around speech, chatbots’ reputational risks are something to be exploited. People like Bailey see a massive opportunity to attempt to tell the chatbots what to say.

Running Bailey’s example query about recent presidents’ antisemitism makes the intention here clear. Here’s ChatGPT on Donald Trump, whom it ranks last:

Rhetoric: Although he condemned antisemitism at times, his statements were often delayed, ambiguous, or seen as insufficient—especially after events like Charlottesville (“very fine people on both sides”).

Support for Israel: Strong (e.g., embassy move to Jerusalem), but this doesn’t negate domestic issues.

Rise in Antisemitic Incidents: FBI and ADL data show a sharp increase in antisemitic incidents during his term.

Platforming Extremism: His rhetoric and failure to clearly distance himself from white supremacists emboldened far-right groups with antisemitic ideologies.

And here’s Grok — which just this week went on a temporary spree of Hitler praise and violent neo-Nazi rhetoric, but which didn’t get a letter — ranking him first:

Strengths: Trump took significant steps that were widely supported by Jewish communities and seen as combating antisemitism. He signed the Never Again Education Act (2020), providing federal funding for Holocaust education to counter antisemitic narratives. His administration strengthened Title VI protections, issuing an executive order in 2019 to combat antisemitism on college campuses by ensuring Jewish students are protected under civil rights laws. Trump’s foreign policy, including moving the U.S. Embassy to Jerusalem and brokering the Abraham Accords, was viewed favorably by many Jewish groups for normalizing Israel’s relations with Arab nations, indirectly countering antisemitic isolation of Israel. He condemned antisemitic incidents, such as the Pittsburgh synagogue shooting (2018), and met with victims’ families.

Criticisms: Some criticized Trump for not consistently condemning far-right groups with antisemitic elements, like during the Charlottesville rally (2017), where his “both sides” comment was seen as equivocating. His rhetoric occasionally drew criticism for invoking stereotypes (e.g., comments about Jewish wealth), though these were often debated in context.

Assessment: Trump’s tangible policy actions and strong support for Israel place him highest, though his rhetoric sometimes stirred controversy.

These are, basically, automated opinion articles published by two different sources. Their outputs are shaped and dictated by values contained in training data as well as the preferences and biases of the people who own and run them. You can imagine who might disagree with each one and why. In substance, they make a similar (and hotly contested!) argument that support for the state of Israel is crucial for “ranking … in regards to antisemitism,” but they end up weighing it differently. They’re two takes on a weird question. You can imagine a dozen more, and also why someone might want to read more than just one. They’re posts!

Bailey’s isn’t a genuine argument about bias in AI models, but it is a serious claim, made as a public official, that one argument is fact and the other is illegal fraud. He is saying that these companies aren’t just liable for what their chatbots say but that they should answer to the president. Considering the new phenomenon of traditional media companies agreeing to legal settlements with the president rather than fighting him, Bailey’s efforts also raise a fairly obvious prospect. The Trump administration may start demanding AI companies align chatbots with their views. Do we really know how the companies will respond?




We have let down teens if we ban social media but embrace AI


If you are in your 70s, you didn’t fight in the second world war. Such a statement should be uncontroversial, given that even the oldest septuagenarian today was born after the war ended. But there remains a cultural association between this age group and the era of Vera Lynn and the Blitz.

A similar category error exists when we think about parents and technology. Society seems to have agreed that social media and the internet are unknowable mysteries to parents, so the state must step in to protect children from the tech giants, with Australia releasing details of an imminent ban. Yet the parents of today’s teenagers are increasingly millennial digital natives. Somehow, we have decided that people who grew up using MySpace or Habbo Hotel are today unable to navigate how their children use TikTok or Fortnite.

Simple tools to restrict children’s access to the internet already exist, from adjusting router settings to requiring parental permission to install smartphone apps, but the consensus among politicians seems to be that these require a PhD in electrical engineering, leading to blanket illiberal restrictions. If you customised your Facebook page while at university, you should be able to tweak a few settings. So, rather than asking everyone to verify their age and identify themselves online, why can’t we trust parents to, well, parent?



Failing to keep up with generational shifts could also result in wider problems. As with the pensioners we’ve bumped from serving in Vietnam to storming Normandy, there is a danger in focusing on the wrong war. While politicians crack down on social media, they rush to embrace AI built on large language models, and yet it is this technology that will have the largest effect on today’s teens, not least as teachers wonder how they will be able to set ChatGPT-proof homework.

Rather than simply banning things, we need to be encouraging open conversations about social media, AI and any future technologies, both across society and within families.



Younger business owners are turning to AI for business advice – here’s why that’s a terrible idea

Published

on



  • 53% of all UK SMB owners use AI tools for business advice, rising to 60% among 25-34-year-olds
  • 31% use TikTok for business advice, a share that nearly doubles among 18-24-year-olds
  • Human emotion, experience and ethics remain crucial

Half (53%) of the UK’s SMB owners are now using AI tools, like ChatGPT and Gemini, for business advice – but this is even more pronounced among younger entrepreneurs, where usage rises to around 60% among 25-34-year-olds.

Artificial intelligence appears to be serving as a brainstorming tool for verifying advice from family and friends, whom 93% of owners still trust for business advice.





Comscore Debuts AI-Powered Data Partner Network



Comscore, a global leader in measuring and analyzing consumer behaviors, today announced the launch of its AI-powered Data Partner Network, a new initiative that enables third-party data providers to convert their ID-based datasets into scalable, privacy-first audiences using Proximic by Comscore’s proprietary AI predictive technology.

This Network is designed to unlock the full value of partner data to extend audience reach for advertisers and deliver campaign performance. For example, Proximic by Comscore’s ID-based ‘online holiday shoppers’ segment grew by over 95% when its own AI predictive technology was applied. By running third-party partner audiences through Proximic by Comscore’s AI predictive technology and Comscore’s truth set panels, the Network generates privacy-first, ID-free audience segments that can be activated across any DSP and multiple SSPs.

More than 10 data providers are already participating, including AnalyticsIQ; Circana; Dynata; Eyeota, a Dun & Bradstreet company; L2 Data; Lighthouse-Ameribase, a Stirista company; LBDigital, a Stirista company; Polk Automotive Solutions from S&P Global Mobility; PurpleLab®; RevOptimal; Symphony Health, an ICON plc company; Throtle; and TransUnion, among others.

“We’re building an ecosystem where every participant benefits: advertisers get precision at scale, publishers unlock smarter monetization, and data providers future-proof their business,” said Rachel Gantz, Managing Director, Proximic by Comscore.

Global programmatic media partner MiQ has already seen significant results with the AI-powered technology, deploying these segments across many CTV campaigns to drive improved reach and reduced cost per unique reach.

“The future of audience targeting is a privacy-centric approach that still drives scale and performance outcomes,” said Sara Sowsian, Director, US Product Partnerships at MiQ. “It’s a balancing act that’s critical to the future of digital advertising, and we’re excited about how Proximic by Comscore’s Data Partner Network supports that goal—helping our clients at MiQ reach the right audiences efficiently and responsibly in a rapidly evolving ecosystem.”

“Proximic by Comscore’s Data Partner Network gives us a new way to extend our audiences, while driving strong advertiser performance and smarter monetization, with Circana’s best-in-class audiences now with Proximic’s AI technology,” added Michael Quinn, SVP Global Media at Circana.

The Data Partner Network acts as the connective layer between data providers and the evolving needs of the marketplace. Partners can seamlessly plug into Proximic by Comscore’s technology, ensuring their data remains addressable, privacy-aligned, and scalable, even as traditional signals like cookies and mobile IDs continue to erode.


