
AI systems designed for children

To gain insights into how artificial intelligence systems and robotics can be designed with children’s rights in mind, Ayça Atabey and Sonia Livingstone spoke to Waseda University’s Professor Toshie Takahashi as part of their work on AI and child rights at the Digital Futures for Children centre.

Toshie has been researching AI since 2016, when she recognised the profound influence AI has on the lives of children and young people and felt a strong need to examine this relationship from a cross-cultural perspective. She has launched two major international projects, “A Future with AI” (in collaboration with the UN) and “Project GenZAI” (part of Japan’s Moonshot R&D Program), focusing on global comparative research on AI and children and young people.

1. When you look at how AI systems and robots are designed for children, what do you think are the most crucial opportunities and risks?

AI designed for children presents valuable opportunities, such as supporting their learning. However, it also poses risks like privacy violations and deepfakes. That is why human-centred design is essential. Instead of AI unilaterally influencing children, we must foster interactive relationships that empower children to actively shape their own futures.

2. At the Digital Futures for Children centre we often ask, “What does good look like in the digital world with and for children?” How would you describe what good looks like in the context of AI systems designed for children? 

In this context, “good” could involve creating systems that enable children to engage safely and meaningfully with AI. AI should be designed to spark creativity and support children in realising their full potential.

3. Are there any good practice examples you can think of from Japan or elsewhere?

One initiative I would like to highlight is Japan’s Moonshot R&D Program. In the project I’m involved with, we aim to develop AI-driven robots that “learn and act autonomously while coexisting with humans.” Together with robotics engineers, computer scientists, and neurosurgeons, I contribute from the perspective of the humanities and social sciences to the development of AIREC — a smart robot designed to stay with an individual throughout their life.

This project also led to our collaboration with partners in nine countries — the US, UK, Italy, Spain, Estonia, Chile, China, Singapore, and Japan — as well as with leading institutions such as Stanford and Cambridge, on “Project GenZAI,” a global comparative study on Generation Z and AI. Since 2021, Project GenZAI has conducted in-depth interviews with children and young people about their views on AI across these nine countries. As part of the interviews, participants are asked to draw their vision of an ideal society in 2050. These drawings reveal striking cross-cultural differences. For example, in Western societies such as the UK, young people often emphasise environmental issues and a sense of community. In contrast, in Japan, there is a stronger focus on healthcare and AI-driven robots designed to support people in an increasingly super-aged society.

Ideal society in 2050, drawn by a 24-year-old woman, UK
Ideal society in 2050, drawn by an 18-year-old woman, Japan

4. Are there any changes you would like to see in the AI ecosystem or from key stakeholders, such as governments, to achieve what good looks like?

To realise this vision, we must shift away from AI-first approaches, toward innovations that prioritise human well-being. All stakeholders — including businesses, governments, researchers, civil society, and youth — must work together based on human-centred values.

In “A Future with AI”, our UN-based project, we proposed design principles informed by the voices of children and young people, emphasising cultural and age-sensitive approaches, accountability, and AI as a complementary support system. AI literacy, reskilling programs, and flexible regulatory frameworks (e.g., ethical AI certification marks) are also crucial.

5. How do you define “Human-Centred AI” or “Human-Centric AI” and can you tell us about the Japanese model?

Human-Centred AI is an approach that respects human dignity and diversity, aiming to enhance human capabilities and well-being. In Japan, there is a cultural tendency to view AI and robots as partners, which fosters a generally positive attitude toward their use as supportive tools in education and care.

6. Can you tell us where you see children and young people in HCAI discussions in different models?

Traditionally, HCAI models have not sufficiently reflected the perspectives of children and youth. But as AI becomes embedded in daily life, young people should be regarded as central agents in shaping our future. In our A Future with AI project, which involved youth from 36 countries, their role as co-designers was clearly emphasized. The youth participants collectively affirmed that AI is part of their future, and highlighted the importance of human–AI collaboration for equality and sustainability. While they expressed generally positive views, they also drew clear red lines — such as a firm rejection of autonomous lethal weapons — and called for international rules on AI design and use. Ultimately, they believe that humanity can manage the risks and achieve a successful and ethical coexistence with AI.

7. Which key learning would you like to share from your work that you think researchers should pay more attention to in today’s increasingly GenAI-driven ecosystem and its impact on children’s lives?

Since ChatGPT gained popularity in 2023, we have been conducting annual in-depth interviews on generative AI and its impact on children and young people. Overall, they are optimistic about its potential to enhance creativity, support learning, and generate new job opportunities. At the same time, they express concerns about misinformation and a potential decline in critical thinking skills, particularly among younger children. Rather than relying solely on regulation, they emphasize the importance of developing the literacy needed to understand and navigate AI effectively.

8. What should we have asked you that we have not?

When thinking about “children and AI,” it’s vital to see children not merely as recipients of the future, but as active creators of it. We need systems — in education, policy and technological development — that proactively incorporate the voices of children and youth.

Toshie Takahashi is Professor at Waseda University, Tokyo, and an Associate Fellow at the Leverhulme Centre for the Future of Intelligence (CFI), University of Cambridge. She has held visiting appointments at the University of Oxford, Harvard University, and Columbia University. Her cross-cultural, transdisciplinary research explores the social impact of robots and the potential of AI for Good. A frequent speaker at UN forums and global conferences, she is also the author of Towards the Age of Digital Wisdom (Shinnyosha, 2016, in Japanese), which received first prize in the Telecommunication Social Science Awards. She holds a PhD from LSE and advises Japan’s Ministry of Internal Affairs and Communications.

This post gives the views of the authors and not the position of the Media@LSE blog, nor of the London School of Economics and Political Science.





AI slows down some experienced software developers, study finds


By Anna Tong

SAN FRANCISCO (Reuters) - Contrary to popular belief, using cutting-edge artificial intelligence tools slowed down experienced software developers working in codebases familiar to them, rather than supercharging their work, a new study found.

AI research nonprofit METR conducted the in-depth study on a group of seasoned developers earlier this year while they used Cursor, a popular AI coding assistant, to help them complete tasks in open-source projects they were familiar with.

Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24%. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%.
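To make the gap between expectation, perception and measurement concrete, here is a back-of-the-envelope calculation; the 100-minute baseline is a hypothetical figure chosen for illustration, and only the percentages come from the study.

```python
# Back-of-the-envelope comparison of expected, perceived, and measured
# effects from the METR study. The 100-minute baseline is hypothetical;
# only the percentage figures come from the reporting above.

baseline_minutes = 100                      # hypothetical no-AI task time

expected = baseline_minutes * (1 - 0.24)    # developers' forecast: 24% faster
perceived = baseline_minutes * (1 - 0.20)   # developers' post-hoc belief: 20% faster
measured = baseline_minutes * (1 + 0.19)    # study's finding: 19% slower

print(f"Expected with AI:  {expected:.0f} min")
print(f"Perceived with AI: {perceived:.0f} min")
print(f"Measured with AI:  {measured:.0f} min")
```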

The study’s lead authors, Joel Becker and Nate Rush, said they were shocked by the results: prior to the study, Rush had written down that he expected “a 2x speed up, somewhat obviously.”

The findings challenge the belief that AI always makes expensive human engineers much more productive, a factor that has attracted substantial investment into companies selling AI products to aid software development.

AI is also expected to replace entry-level coding positions. Dario Amodei, CEO of Anthropic, recently told Axios that AI could wipe out half of all entry-level white-collar jobs in the next one to five years.

Prior literature on productivity improvements has found significant gains: one study found that using AI sped up coders by 56%, while another found developers were able to complete 26% more tasks in a given time.

But the new METR study shows that those gains don’t apply to all software development scenarios. In particular, this study showed that experienced developers intimately familiar with the quirks and requirements of large, established open source codebases experienced a slowdown.

Other studies often rely on software development benchmarks for AI, which sometimes misrepresent real-world tasks, the study’s authors said.

The slowdown stemmed from developers needing to spend time going over and correcting what the AI models suggested.

“When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what’s needed,” Becker said.

The authors cautioned that they do not expect the slowdown to apply in other scenarios, such as for junior engineers or engineers working in codebases they aren’t familiar with.

Still, the majority of the study’s participants, as well as the study’s authors, continue to use Cursor today. The authors believe it is because AI makes the development experience easier, and in turn, more pleasant, akin to editing an essay instead of staring at a blank page.

“Developers have goals other than completing the task as soon as possible,” Becker said. “So they’re going with this less effortful route.”

(Reporting by Anna Tong in San Francisco; Editing by Sonali Paul)





China’s cloud services spending hits US$11.6 billion in first quarter on AI-related demand

Alibaba Group Holding’s cloud computing unit continued to lead the industry in the March quarter, with a commanding 33 per cent market share and 15 per cent year-on-year revenue growth, Canalys data showed. Hangzhou-based Alibaba owns the South China Morning Post.
Second-ranked Huawei Technologies’ cloud business expanded its market share to 18 per cent, while posting an 18 per cent revenue increase in the same period.
Tencent Holdings’ cloud unit, meanwhile, held a 10 per cent share, but recorded limited revenue growth in the quarter owing to graphics processing unit (GPU) supply constraints and prioritised use of these AI chips for the firm’s internal operations.
These results reflect the robust domestic demand for cloud infrastructure amid a surge in AI-related activities this year, even as service providers contend with US export restrictions that limit China’s access to advanced chips used in data centres.
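For rough orientation only, and assuming each vendor’s share applies to the US$11.6 billion total cited in the headline (the article does not spell out this breakdown), the implied quarterly revenues work out roughly as follows.

```python
# Rough implied quarterly revenue per provider, assuming each vendor's
# Canalys market share applies to the US$11.6 billion total for
# mainland China cloud infrastructure spending in the quarter.

total_spend_bn = 11.6  # Q1 spend, US$ billions

shares = {
    "Alibaba Cloud": 0.33,
    "Huawei Cloud": 0.18,
    "Tencent Cloud": 0.10,
}

for vendor, share in shares.items():
    print(f"{vendor}: ~US${total_spend_bn * share:.1f} billion")
```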

“Leading cloud providers are actively exploring pathways for AI adoption, unlocking capabilities and building ecosystems through model open-sourcing, while accelerating task execution and scenario delivery via AI agent platforms,” Canalys senior analyst Yi Zhang said in a report on Thursday.

Alibaba Cloud remains the leading provider of cloud infrastructure services in mainland China. Photo: Shutterstock





Artificial intelligence used to improve speed and accuracy of autism and ADHD diagnoses: IU News

A test subject completes a task by pressing a dot when it appears on a computer screen. Photo by James Brosher, Indiana University

It can take as long as 18 months for children with suspected autism spectrum or attention-deficit/hyperactivity disorders to get a diagnostic appointment with a psychiatrist in Indiana. But an interdisciplinary team led by an Indiana University researcher has developed a new diagnostic approach using artificial intelligence that could speed up and improve the detection of neurodivergent disorders.

Psychiatrists, who currently use a variety of tests and patient surveys to analyze symptoms such as communication impairments, hyperactivity or repeated behaviors, have no widely available quantitative or biological tests to diagnose autism, ADHD or related disorders.

“The symptoms of neurodivergent disorders are very heterogeneous; psychiatrists call them ‘spectrum disorders’ because there’s no one observable thing that tells them if a person is neurotypical or not,” said Jorge José, the James H. Rudy Distinguished Professor of Physics in the College of Arts and Sciences at IU Bloomington and member of the Stark Neuroscience Research Institute at the IU School of Medicine in Indianapolis.

That’s why José — in collaboration with an interdisciplinary team of scholars, including IU School of Medicine Distinguished Professor Emeritus John I. Nurnberger and associate professor of psychiatry Martin Plawecki — dedicated his recent research to improving diagnostic tools for children with these symptoms.

A new study on the use of artificial intelligence to quickly diagnose autism and ADHD, published July 8 in Nature’s Scientific Reports, details the latest step in his team’s development of a data-driven approach to rapidly and accurately assess neurodivergent disorders using quantitative biomarkers and biometrics.

Their method — which has the potential to diagnose autism or ADHD in as little as 15 minutes — could be used in schools to triage students who might need further care, said Khoshrav Doctor, a Ph.D. student at the University of Massachusetts Amherst and former visiting research scholar at IU who has been a member of José’s team since 2016.

Both he and José said their approach is not meant to replace the role of psychiatrists in the diagnosis and treatment of neurodivergent disorders.

“It could help as an additional tool in the clinician’s toolbelt,” Doctor said. “It also gives us the ability to see who might need the quickest intervention and direct them to providers earlier.”

Finding the biomarkers

Jorge José, James H. Rudy Distinguished Professor of Physics, Indiana University Bloomington. Photo by James Brosher, Indiana University

In 2018, José published an autism study in collaboration with Rutgers, revealing that there are “movement biomarkers” which, while imperceptible to the naked eye, can be identified, and their severity measured, using sensors.

José and his team instructed a group of participants to reach for a target when it appeared on a computer touch screen in front of them. Using sensors attached to participants’ hands, researchers recorded hundreds of images of micromovements per second.

The images showed that neurotypical patients moved in a measurably different way than participants with autism. The researchers were able to correlate increased randomness in movement with the participants who had previously been diagnosed with autism.
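The article does not specify how that randomness was quantified. Purely as an illustration of the general idea, the sketch below (a hypothetical metric on synthetic data, not the study’s actual measure) scores a speed trace by the Shannon entropy of its frame-to-frame fluctuations; a more regular movement yields a lower score.

```python
# Illustrative only: one possible way to quantify "randomness" in recorded
# hand movements, using Shannon entropy of binned speed fluctuations.
# This is NOT the metric used in the 2018 study; it is a hypothetical sketch.

import numpy as np

def movement_entropy(speeds: np.ndarray, n_bins: int = 32) -> float:
    """Shannon entropy (in bits) of frame-to-frame speed changes."""
    deltas = np.diff(speeds)                        # speed fluctuations
    counts, _ = np.histogram(deltas, bins=n_bins)
    probs = counts / counts.sum()
    probs = probs[probs > 0]                        # drop empty bins
    return float(-(probs * np.log2(probs)).sum())

# Synthetic speed traces standing in for sensor recordings
rng = np.random.default_rng(0)
smooth_trace = np.sin(np.linspace(0, 20, 3000)) + 0.01 * rng.normal(size=3000)
noisy_trace = np.sin(np.linspace(0, 20, 3000)) + 0.30 * rng.normal(size=3000)

print(movement_entropy(smooth_trace))  # lower entropy: more regular movement
print(movement_entropy(noisy_trace))   # higher entropy: more random movement
```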

Improving treatment

In the years since their landmark 2018 study, José and his present team took advantage of new high-definition kinematic Bluetooth sensors to collect information not just on the velocity of study participants’ movements, but also on acceleration, rotation and many other variables.

“We’re taking a physicist’s approach to looking at the brain and analyzing movement specifically,” said IU physics graduate student Chaundy McKeever, who recently joined José’s group. “We’re looking at how sporadic the movement of a patient is. We’ve found that, typically, the more sporadic their movement, the more severe a disorder is.”

The team also introduced the use of a specialized area of artificial intelligence known as deep learning to analyze the new measurements. Using a supervised deep-learning technique, the team studied raw movement data from participants with autism spectrum disorder, ADHD, comorbid autism and ADHD, and neurotypical development.
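The paper’s actual architecture is not described here. As a minimal sketch of what a supervised deep-learning classifier over multichannel kinematic data could look like, assuming PyTorch, a fixed window length and a hypothetical channel count, the following model maps raw sensor windows to the four diagnostic groups studied.

```python
# Minimal sketch of a supervised classifier over multichannel kinematic
# time series (velocity, acceleration, rotation, ...). The architecture,
# window length and channel count are assumptions for illustration;
# the published model may differ substantially.

import torch
import torch.nn as nn

N_CHANNELS = 9    # e.g., 3-axis velocity, acceleration, rotation (assumed)
N_CLASSES = 4     # ASD, ADHD, comorbid ASD+ADHD, neurotypical
WINDOW = 512      # samples per movement window (assumed)

class KinematicCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time dimension
        )
        self.classifier = nn.Linear(64, N_CLASSES)

    def forward(self, x):              # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = KinematicCNN()
dummy_batch = torch.randn(8, N_CHANNELS, WINDOW)   # synthetic sensor windows
logits = model(dummy_batch)                        # (8, 4) class scores
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, N_CLASSES, (8,)))
loss.backward()                                    # one supervised training step
```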

This enhanced method, detailed in their July 8 Scientific Reports paper, introduced an ability to better analyze a patient’s neurodivergent disorder.

“By studying the statistics of the motion fluctuations, invisible to the naked eye, we can assess the severity of a disorder in terms of a new set of biometrics,” José said. “No psychiatrist can currently tell you how serious a condition is.”

With the added ability to assess a neurodivergent disorder’s severity, health care providers can better set up and monitor the impact of their treatments.

“Some patients will need a significant number of services and specialized treatments,” José said. “If, however, the severity of a patient’s disorder is in the middle of the spectrum, their treatments can be more minutely adjusted, will be less demanding and often can be carried out at home, making their care more affordable and easier to carry out.”


