AI Insights
Nigel Farage urges minister to apologise for Jimmy Savile online safety claim

Nigel Farage has urged Technology Secretary Peter Kyle to “do the right thing and apologise” after he suggested that by opposing the government’s online safety law, the Reform UK leader was on the side of sex offenders like Jimmy Savile.
Reform has said it would scrap the new law, arguing it does not protect children and suppresses free speech.
Kyle told Sky News the law was a “huge step forward” for online safety, adding: “Make no mistake if people like Jimmy Savile were alive today he would be perpetrating his crimes online – and Nigel Farage is saying he is on their side.”
Farage called the minister’s comments “absolutely disgusting” and asked: “Just how low can the Labour government sink?”
Kyle refused to back down after Farage’s criticism, saying on social media: “If you want to overturn the Online Safety Act you are on the side of predators. It is as simple as that.”
Savile was a BBC TV personality who presented shows such as Top of the Pops and Jim’ll Fix It but after his death it emerged he had been one of the UK’s most prolific sexual predators, using his celebrity status to target children and young people.
Last week, new online safety rules aimed at preventing children from seeing harmful or inappropriate content came into force.
Measures include requiring tech firms to put in place stricter checks for people accessing age-restricted content and taking quick action when harmful content is identified.
Failure to comply with the rules could see companies facing fines of up to £18m or 10% of the firm’s turnover, whichever is greater.
The age verification measures appear to have driven a sharp increase in the number of people downloading virtual private networks (VPNs), which disguise a user’s location online and could make it possible to avoid age checks.
On Monday, senior Reform figure Zia Yusuf said: “Sending all of these kids onto VPNs is a far worse situation, and sends them much closer to the dark web, where the real dangers lie.”
He added that one of Reform’s first acts in government would be to repeal the Online Safety Act.
Asked what Reform UK would put in its place, Farage said his party did not have a “perfect answer” but had “more access to some of the best tech brains, not just in the country but in the world” and would “make a much better job of it”.
Speaking to Sky News, Kyle acknowledged that “some people are finding their way round” the rules but said the government would not be banning VPNs.
He said the measures were a “huge, giant, unprecedented step forward in stopping harmful content finding its way into children’s feeds”.
“If we can take a big step forward, 70, 80, maybe even 90% forward when it comes to stopping harmful content getting into kids’ feeds – I’ll bank that, that’s a good day at work.
“That 10% that remains – we will go on figuring it out as we go forward.
“I see that Nigel Farage is already saying he is going to overturn these laws.
“We have people out there who are extreme pornographers, peddling hate, peddling violence – Nigel Farage is on their side.
“Make no mistake if people like Jimmy Savile were alive today he would be perpetrating his crimes online – and Nigel Farage is saying he is on their side.”
Speaking to the same channel, Reform UK’s Zia Yusuf said: “That is one of the most outrageous and disgusting things a politician has said in the political arena that I can remember, and that is quite a high bar.”
He claimed Labour “have no idea how the internet works” and that the act, despite its name, would make children less safe.
“They are deeply unserious about child safety and levelling that accusation about Jimmy Savile denigrates the victims of Jimmy Savile,” he added.
Conservative MP Katie Lam also questioned the law after the social media platform X blocked users who had not verified their age from viewing a clip of her speaking in Parliament about grooming gangs.
Users who have not verified their age are shown a message from X reading: “Due to local laws, we are temporarily restricting access to this content until X estimates your age.”
Lam said: “The British state won’t protect children from mass gang rape. But it will ‘protect’ adults from hearing about it.”
Before becoming Conservative Party leader, Kemi Badenoch criticised the Online Safety Bill, which had been drawn up by her own party.
In 2022, she said: “This bill is in no fit state to become law. If I’m elected prime minister I will ensure the bill doesn’t overreach. We should not be legislating for hurt feelings.”
Asked about implications for freedom of speech, Kyle said: “I will be monitoring the impact, but I have not so far seen anything that gives me concern for anyone about free speech grounds.
“We have very strident protections for free speech in this country.
“This is not about free speech. This is about hateful, violent, extreme, misogynistic and pornographic material finding its way into children’s feeds.”
Prime Minister Sir Keir Starmer also defended the new law during a press conference with US President Donald Trump in Scotland on Monday.
“We’re not censoring anyone,” he said, adding: “We’ve got some measures which are there to protect children, in particular, from sites like suicide sites.
“I don’t see that as a free speech issue, I see that as child protection.”
Bristol City Council’s use of AI for creative booklet criticised

Carys Nally, BBC News, West of England

Designers have criticised a council after it used artwork created by Artificial Intelligence (AI) to promote adult learning courses.
Illustrator Adam Birch complained to Bristol City Council after it released a course guide with an AI cover, adding that using AI to tell people about creative workshops “devalues” the classes.
But he also said it might have been misguided rather than “malicious”.
Bristol City Council leader Tony Dyer said it was updating its guidance around AI and understood the issue raised.
Mr Birch made it clear he does not want the booklet’s AI cover to deter people from taking the creative courses.
He said: “My big concern about it was – is it sending the wrong message?
“Why learn these [creative] skills if, right on the face of the book, you’re devaluing the use of it?”
Mr Birch said there were certain “mistakes” that let the viewer know an image had been created by AI.
“Extra or missing fingers and toes is always a dead giveaway,” he said.
“On the cover [of the booklet], the lady only has four fingers and I think seven toes.”

Mr Birch, who creates illustrations for various outlets, said it was not lost on him that he had to “move with the times” as an artist.
“I appreciate it from all angles,” he said. “This [cover] cost next to nothing to generate.
“But it would have cost next to nothing to take a photo of one of the classes going on – or used some work from the classes as the material on the cover.
“What you’re doing is wiping out a job.”

Luke Oram, an artist and illustrator from Wick, in South Gloucestershire, said he believed AI would affect young people trying to get a start in the creative industry.
“I worry about the 22-year-old graduate who has no idea how to get into a career, or how to even find any work, who then just feels completely undervalued,” he said.
“[They’ll be] alienated from the culture they’re working in because those opportunities just aren’t common anymore.”
“It’s the erosion of knowledge,” he added. “[AI] is damaging.”
Despite this, some in the creative industry have told the BBC there is pressure to use AI.
An artist based in Leamington Spa, who wanted to remain anonymous, said his CEO was now recommending his company use AI in its work.
“We’re being told to bring our heads out of the sand,” he said.
“But the people who will be enriched by AI are at the top. For the people expected to use it, they see it as the opposite of what we should be doing.”
He added: “AI is ‘fast-food’. We never stop to think about whether we should – it’s always whether we could.”
Council ‘understands issues’
The creative course booklets were distributed in July and a total of 72,000 were printed.
Up to 70,250 booklets went to individuals and organisations in Bristol, with a few to South Gloucestershire and North Somerset postcodes.
There are no plans for any further print runs.
Mr Dyer said the council fully understands the issues raised.
“While AI presents exciting opportunities for local authorities to improve and adapt their services, we recognise the strong feelings expressed by residents over our use of AI-generated imagery for this booklet,” he said.
“We are currently trialling some limited use of AI and developing our policies and procedures as we learn.”
Mr Dyer added that since the imagery for the booklet was commissioned, the council has updated its guidance for the use of AI.
Tesla Falls Short in India With Just 600 Orders Since Launch

Tesla Inc.’s long-awaited entry into India has delivered underwhelming results so far, with tepid bookings fueling fresh doubts about the company’s global growth outlook.
From bench to bot: Why AI-powered writing may not deliver on its promise

This is my final “bench to bot” column, and after more than two years of exploring the role of artificial intelligence in scientific writing, I find myself in an unexpected place. When I started this series in 2023, I wasn’t among the breathless AI optimists promising revolutionary transformation, nor was I reflexively dismissive of its potential. I approached these tools with significant reservations about their broader societal impacts, but I was curious whether they might offer genuine value for scientific communication specifically.
What strikes me now, looking back, is how my measured optimism for science may have caused me to underestimate the deeper complications at play. The problem is not that the tools don’t work—it’s that they work too well, at least at producing competent prose. But competent prose generated by a machine, I’ve come to realize, might not be what science actually needs.
My starting assumptions seemed reasonable. The purpose of neuroscience isn’t getting award-winning grants and publishing high-profile papers. It’s the production of knowledge, technology and treatments. But scientists spend enormous amounts of time wrestling with grants and manuscripts. If AI could serve as a strategic aid for specific writing tasks, helping scientists overcome time-consuming communication bottlenecks, I was all for it. What’s more, writing abilities aren’t equally distributed, which potentially disadvantages brilliant researchers who struggle with prose. AI could help here, too. So long as I remained explicit that AI would not solve all writing troubles, and kept my goal as thoughtful incorporation for targeted use cases rather than mindless adoption, I felt this column would be a worthwhile service for a community struggling with how to handle this seismic technological shift.
These assumptions felt solid when I started this column. But if I’m being honest, I’ve always harbored some nagging reservations that even thoughtful incorporation of AI tools in scientific writing tasks carries risks I wasn’t fully acknowledging—perhaps even to myself. Recently, I encountered a piece by computer scientists Sayash Kapoor and Arvind Narayanan that articulated those inchoate doubts better than I ever could. They argue that AI might actually slow scientific progress—not despite its efficiency gains but because of them:
Any serious attempt to forecast the impact of AI on science must confront the production-progress paradox. The rate of publication of scientific papers has been growing exponentially, increasing 500 fold between 1900 and 2015. But actual progress, by any available measure, has been constant or even slowing. So we must ask how AI is impacting, and will impact, the factors that have led to this disconnect.
Our analysis in this essay suggests that AI is likely to worsen the gap. This may not be true in all scientific fields, and it is certainly not a foregone conclusion. By carefully and urgently taking actions such as those we suggest below, it may be possible to reverse course. Unfortunately, AI companies, science funders, and policy makers all seem oblivious to what the actual bottlenecks to scientific progress are. They are simply trying to accelerate production, which is like adding lanes to a highway when the slowdown is actually caused by a toll booth. It’s sure to make things worse.
Though Kapoor and Narayanan focus on AI’s broader impact on science, their concerns about turbo-charging production without improving the underlying process echo what economist Robert Solow observed decades ago about computers—we see them everywhere except in the productivity statistics. This dynamic maps directly onto scientific writing in troubling ways.
The truth is that the process of writing often matters as much as, or more than, the final product. I explored this issue in my column on teaching and AI, but the idea applies to anyone who writes, because we often write to learn, or, at least, we learn while we write. When scientists struggle to explain their methodology clearly, they might discover gaps in their own understanding. When they wrestle with articulating why their particular approach matters, they might uncover new connections or refine their hypotheses. Stress-testing ideas with the pressure of the page is a time-honored way to deepen thinking. I suspect countless private struggles with writing have served as quiet engines of scientific discovery. Neuroscientist Eve Marder seems to recognize this cognitive value, putting it beautifully:
But most importantly, writing is the medium that allows you to explain, for all time, your new discoveries. It should not be a chore, but an opportunity to share your excitement, and maybe your befuddlement. It allows each of us to add to and modify the conceptual frameworks that guide the way we understand our science and the world…It is not an accident that some of our best and most influential scientists write elegant and well-crafted papers. So, work to make writing one of the great pleasures of your life as a scientist, and your science will benefit.
Previously, my hope was that with the newfound technological ability to decouple sophisticated text production from human struggle, it would start to become clear which parts of the writing struggle are valuable and which are just pure cognitive drag. However, two years in, I don’t think anyone is any closer to an answer. And I’m realizing, through observations of students, colleagues and myself, that each of us individually is not going to be capable of making that distinction in real time amid the heat of composition, the pressure of deadlines and the seductiveness of slick technology.
Rather than offering a set of rules about when to use these tools, perhaps the most honest guidance I can provide is this: Before reaching for AI assistance, pause and ask yourself whether you’re trying to clarify your thinking or simply produce text. If the process matters, or just the product. If it’s the former—if you’re genuinely wrestling with how to explain a concept or articulate why your approach matters—that struggle might be worth preserving. The discomfort of not knowing quite how to say something is often an important signal that you’re at the edge of your understanding, perhaps about to break into new territory. The scientists who do the most exciting and meaningful work in an AI-saturated future won’t be those who can efficiently generate passable grants and manuscripts but those who respect this signal and recognize when the struggle of writing is actually the struggle of discovery in disguise.
The stakes are actually quite high for science, because writing, for all its flaws, is one of the most potent thinking tools humans have developed. When I think of the role of writing in the production-progress paradox, I keep returning to something neuroscientist Henry Markram told me years ago: “I realized that I could write a high-profile research paper every year, but then what? I die, and there’s going to be a column on my grave with a list of beautiful papers.” With AI, we scientists risk optimizing our way to beautiful papers while fundamental progress in neuroscience remains stalled. We might end up with impressive publication lists as we die from the diseases we failed to cure.
The path forward means acknowledging that efficiency isn’t always progress, that removing friction isn’t always improvement, and that tools designed to make us more productive might sometimes make us less capable. These tensions won’t resolve themselves, and perhaps that’s the point. The act of recognizing such tensions, of constantly questioning whether science’s technological shortcuts are serving its deeper intellectual goals, may itself be a form of progress. It’s a more complex message than the one I started with, but complexity is often where the truth lives.
AI-use statement: Anthropic’s Claude Sonnet 4 was used for editorial feedback after the drafting process.