
AI Insights

Children under 14 advised against smartphones

Will Fyfe

BBC News

Reporting from Monmouthshire

‘One of my friends had a smartphone when they were three or four’

Parents of thousands of children have been asked not to give them a smartphone until they are at least 14 years old, amid fears some were using devices for eight hours a day.

Many schools have already banned smartphones on site but one part of the UK thinks it will be the first to have a countywide policy advising parents against giving children smartphones at home.

Using mobiles is already banned in schools in Monmouthshire, south Wales, but due to a rise in cyber-bullying reports and fears phone use at home is affecting schoolwork, schools are going a step further.

“We’ve got reports of students who are online at two, three, four in the morning,” said headteacher Hugo Hutchinson.

“We get a lot of wellbeing issues, as do all schools, that come from social media activity online over the weekend, or when they should be asleep,” added the head of Monmouth Comprehensive.

Mr Hutchinson said schools had worked on “robust” phone policies but pointed out ultimately children’s time was largely spent outside of school, where many still had unrestricted access to smartphones.

While teachers in Monmouthshire acknowledge they can’t force parents not to give smartphones to their under-14 children, schools have taken a “big step” by giving advice about what parents should do in their own homes.

Schools in some areas of the UK have already asked parents not to get their under-14s smartphones – like in St Albans, Belfast and Solihull in the West Midlands.

‘I was worried my son would feel left out’

But Monmouthshire believe they’re the first county in the UK where all secondary and primary teachers in both state and private schools are advising against smartphones for more than 9,000 children under the age of 14.

One of those parents being advised not to give their children a smartphone is Emma who said she felt like “the worst parent in the world” after continuously telling her 12-year-old son Monty he wasn’t allowed one.

“He was feeling left out,” she said.


Emma and her husband Kev have been attempting to lock their own phones away to support their children

“He would be sitting on the school bus without a phone and everybody else would be doing the journey with a phone. He found that quite difficult. I think for boys it’s more about games on the phone.”

The mum-of-three is worried about what her son could be exposed to online and how “addictive” devices can be, but offered Monty a “brick phone” – a term for older models that can’t connect to the internet and are only capable of calls and texts.

As the thought of giving Monty a smartphone when he reached secondary school had become one of her “biggest fears”, she and other parents said they were relieved schools were taking ownership.


Monty has just turned 12, but doesn’t yet have his own smartphone, so sometimes plays games on his mum’s phone

Schools hope the intervention of teachers will help those parents who worry that saying no to a smartphone would mean their child was “left out”.

But some others argued their children had been using smartphones without any problems.

Nicholas Dorkings’ son, who is moving up to secondary school in September, had his own smartphone when he was eight years old.

“He’s always sort of been on one,” he said.


Nicholas Dorkings said his son had been using a smartphone since he was eight years old

“It’s like a calming thing, or [something to use] out of boredom. He’s not on it that much, he’s more of a TV boy. He doesn’t pull it out his pocket every five minutes, he can put it down and just leave it.”

Nicholas said he could understand why schools wanted to get involved, but he believed smartphones had become essential to how young people communicate.

Eleven-year-old Lili’s primary school class is one of the first to be targeted by the new policy, after teachers wrote to their parents urging them to consider “brick phones” – if they felt their child needed something for travelling to school.

‘Most kids around here have smartphones’

Lili said she felt “14 to 15” was about the right age for children to get their first smartphone as by then they might stand a better chance of knowing if something they read online “wasn’t true”.

“We found out that one in four children have been cyber-bullied within our school, which is really strange,” said the year six pupil.


Lili thinks many school children are being given smartphones too young

“It shouldn’t be right, there shouldn’t be the chance for people to be cyber-bullied, because we’re really young.”

Lili’s classmate Morgan said she had got a smartphone but had decided to stop using it after learning more about them in school.

“Most kids around here have smartphones,” said the 11-year-old.


Morgan said she was trying not to use her smartphone much

“They are just 100% always on it. When kids come over to play at some households they just go on their smartphones and just text.”

“I used to go on it to just scroll but I got bored – but then I’d also get bored not being on my smartphone. I just decided to stop scrolling to read a book or [go on] the trampoline.”

Are mobile phones being banned in UK schools?

Schools in Northern Ireland are advised to restrict pupils from using phones, in Scotland teachers are backed to introduce phone bans while in Wales, headteachers have been told smartphones shouldn’t be banned “outright”.

In England, the children’s commissioner has said banning phones should be a decision for head teachers but insisted parents had “the real power” to alter how their children used phones with more time spent on them outside of school.

So now every parent at all of Monmouthshire’s state and private schools will be told about the county’s new smartphone policy over the coming months.

‘People have an addiction to smartphones’

“This is not a school issue. This is a whole community and society issue,” added Mr Hutchinson, whose comprehensive school in Monmouth has 1,700 pupils.

“Like all schools, we are experiencing much higher levels of mental health issues as a result. Addiction to smartphones, addiction to being online.

“We have students who on average are spending six, seven, eight hours a day online outside school. We’ve got reports of students who are online at two, three, four in the morning.

“So the impact on their school day, the impact on their learning and the impact on their life chances is really fundamental.”


Hugo Hutchinson said he felt many parents were “grateful” schools were stepping in

As a token of solidarity with their son Monty, and to encourage their two younger daughters, Emma Manchand and her husband Kev offered to give up their own smartphones.

“We do 24 hours without the phone, which has been quite challenging,” she said.

“Sometimes we might slightly fail. But the first time I did it, although I was nervous, I felt like I’d had a little mini break.

“The kids love it as well, because of course they get to be the ones telling us to put our phones down.”





Contributor: The human brain doesn’t learn, think or recall like an AI. Embrace the difference

Recently, Nvidia founder Jensen Huang, whose company builds the chips powering today’s most advanced artificial intelligence systems, remarked: “The thing that’s really, really quite amazing is the way you program an AI is like the way you program a person.” Ilya Sutskever, co-founder of OpenAI and one of the leading figures of the AI revolution, also stated that it is only a matter of time before AI can do everything humans can do, because “the brain is a biological computer.”

I am a cognitive neuroscience researcher, and I think that they are dangerously wrong.

The biggest threat isn’t that these metaphors confuse us about how AI works, but that they mislead us about our own brains. During past technological revolutions, scientists, as well as popular culture, tended to explore the idea that the human brain could be understood as analogous to one new machine after another: a clock, a switchboard, a computer. The latest erroneous metaphor is that our brains are like AI systems.

I’ve seen this shift over the past two years in conferences, courses and conversations in the field of neuroscience and beyond. Words like “training,” “fine-tuning” and “optimization” are frequently used to describe human behavior. But we don’t train, fine-tune or optimize in the way that AI does. And such inaccurate metaphors can cause real harm.

The 17th century idea of the mind as a “blank slate” imagined children as empty surfaces shaped entirely by outside influences. This led to rigid education systems that tried to eliminate differences in neurodivergent children, such as those with autism, ADHD or dyslexia, rather than offering personalized support. Similarly, the early 20th century “black box” model from behaviorist psychology claimed only visible behavior mattered. As a result, mental healthcare often focused on managing symptoms rather than understanding their emotional or biological causes.

And now there are new misbegotten approaches emerging as we start to see ourselves in the image of AI. Digital educational tools developed in recent years, for example, adjust lessons and questions based on a child’s answers, theoretically keeping the student at an optimal learning level. This is heavily inspired by how an AI model is trained.

This adaptive approach can produce impressive results, but it overlooks less measurable factors such as motivation or passion. Imagine two children learning piano with the help of a smart app that adjusts for their changing proficiency. One quickly learns to play flawlessly but hates every practice session. The other makes constant mistakes but enjoys every minute. Judging only on the terms we apply to AI models, we would say the child playing flawlessly has outperformed the other student.

But educating children is different from training an AI algorithm. That simplistic assessment would not account for the first student’s misery or the second child’s enjoyment. Those factors matter; there is a good chance the child having fun will be the one still playing a decade from now — and they might even end up a better and more original musician because they enjoy the activity, mistakes and all. I definitely think that AI in learning is both inevitable and potentially transformative for the better, but if we assess children only in terms of what can be “trained” and “fine-tuned,” we will repeat the old mistake of emphasizing output over experience.

I see this playing out with undergraduate students, who, for the first time, believe they can achieve the best measured outcomes by fully outsourcing the learning process. Many have been using AI tools over the past two years (some courses allow it and some do not) and now rely on them to maximize efficiency, often at the expense of reflection and genuine understanding. They use AI as a tool that helps them produce good essays, yet the process in many cases no longer has much connection to original thinking or to discovering what sparks the students’ curiosity.

If we continue thinking within this brain-as-AI framework, we also risk losing the vital thought processes that have led to major breakthroughs in science and art. These achievements did not come from identifying familiar patterns, but from breaking them through messiness and unexpected mistakes. Alexander Fleming discovered penicillin by noticing that mold growing in a petri dish he had accidentally left out was killing the surrounding bacteria. A fortunate mistake made by a messy researcher that went on to save the lives of hundreds of millions of people.

This messiness isn’t just important for eccentric scientists. It is important to every human brain. One of the most interesting discoveries in neuroscience in the past two decades is the “default mode network,” a group of brain regions that becomes active when we are daydreaming and not focused on a specific task. This network has also been found to play a role in reflecting on the past, imagining and thinking about ourselves and others. Disregarding this mind-wandering behavior as a glitch rather than embracing it as a core human feature will inevitably lead us to build flawed systems in education, mental health and law.

Unfortunately, it is particularly easy to confuse AI with human thinking. Microsoft describes generative AI models like ChatGPT on its official website as tools that “mirror human expression, redefining our relationship to technology.” And OpenAI CEO Sam Altman recently highlighted his favorite new feature in ChatGPT called “memory.” This function allows the system to retain and recall personal details across conversations. For example, if you ask ChatGPT where to eat, it might remind you of a Thai restaurant you mentioned wanting to try months earlier. “It’s not that you plug your brain in one day,” Altman explained, “but … it’ll get to know you, and it’ll become this extension of yourself.”

The suggestion that AI’s “memory” will be an extension of our own is again a flawed metaphor — leading us to misunderstand the new technology and our own minds. Unlike human memory, which evolved to forget, update and reshape memories based on myriad factors, AI memory can be designed to store information with much less distortion or forgetting. A life in which people outsource memory to a system that remembers almost everything isn’t an extension of the self; it breaks from the very mechanisms that make us human. It would mark a shift in how we behave, understand the world and make decisions. This might begin with small things, like choosing a restaurant, but it can quickly move to much bigger decisions, such as taking a different career path or choosing a different partner than we would have, because AI models can surface connections and context that our brains may have cleared away for one reason or another.

This outsourcing may be tempting because this technology seems human to us, but AI learns, understands and sees the world in fundamentally different ways, and doesn’t truly experience pain, love or curiosity like we do. The consequences of this ongoing confusion could be disastrous — not because AI is inherently harmful, but because instead of shaping it into a tool that complements our human minds, we will allow it to reshape us in its own image.

Iddo Gefen is a PhD candidate in cognitive neuroscience at Columbia University and author of the novel “Mrs. Lilienblum’s Cloud Factory.” His Substack newsletter, Neuron Stories, connects neuroscience insights to human behavior.





Nvidia AI challenger Groq announces European expansion — Helsinki data center targets burgeoning AI market


American AI hardware and software firm Groq (not to be confused with Elon Musk’s AI venture, Grok) has announced it is establishing its first data center in Europe as part of its effort to compete in the rapidly expanding EU AI market, according to CNBC. It is looking to capture a sizeable portion of the inference market by leveraging its Language Processing Unit (LPU), an application-specific integrated circuit (ASIC) that it claims offers faster, more efficient inference than GPU-driven alternatives.

“We decided about four weeks ago to build a data center in Helsinki, and we’re actually unloading racks into it right now,” Groq CEO Jonathan Ross said in his interview with CNBC. “We expect to be serving traffic to it by the end of this week. That’s built fast, and it’s a very different proposition than what you see in the rest of the market.”





Poland Calls for EU Probe of xAI After Lewd Rants by Chatbot



Poland’s government wants the European Union to investigate and possibly fine Elon Musk’s xAI following abusive and lewd comments made by its artificial intelligence chatbot Grok about the country’s politicians.


