
Education

Design of generative AI-powered pedagogy for virtual reality environments in higher education



Study 1, Needs Analysis, focused on identifying the needs for the pedagogical use of AI and VR solutions. Many digital tools labeled as AIEd are designed to address specific educational or institutional needs10. To investigate the educational needs and to answer RQ1, we (author UHR) collected two datasets of teachers’ ideas for using emerging technologies in their teaching. This Needs Analysis, conducted using qualitative descriptive methods, was planned to inform the next phase, Pedagogical Design.

In Dataset 1, 30 meetings were held with 31 university teachers from different disciplines. Some teachers attended several meetings accompanied by one to three colleagues, while 13 teachers participated only in individual one-on-one meetings with the researcher. The university teachers represented nine faculties and one research institute (Table 8) at UH and had varying years of teaching experience; nine were professors. Eighteen female and 13 male educators participated in the research; ages ranged from the early 30s to the 60s, with the majority in their 40s or 50s (exact ages were not requested).

Table 8 Affiliation of teachers at UH participating in Study 1, Dataset 1

In Dataset 2, the participants were 35 teachers from different levels of the educational system, from K–12 to higher education, who volunteered to submit their ideas for using ChatGPT in education in public groups on social media. These participants were not asked to provide demographic information.

The study followed the guidelines of the Declaration of Helsinki and the Finnish National Board on Research Integrity (TENK), and GDPR-compliant practices were observed while collecting, analyzing, storing, and publishing the data. For Dataset 1, all the teachers gave signed informed consent after the meetings, allowing the researcher’s own meeting notes to be used for this research. All data were anonymized and cannot be linked to an individual respondent. Dataset 2 was gathered in a public exercise, a fact clearly known to all participants, so individual consent was not required.

For Dataset 1, over 17 months in 2022–2023, researcher UHR met with university teachers as part of the Serendip development project at UH, which focused on AI and VR experimentation. Meetings, held both online and in person and lasting 1–2 hours, were initiated by either the teachers or the researcher. In these meetings, the researcher gathered teachers’ reflections and ideas for the Serendip development project. Notes from each meeting were compiled into a single document (Word, Microsoft 2024), and anonymized data were stored in the university’s digital cloud storage (OneDrive, Microsoft 2024).

Dataset 2 was gathered through a public competition organized by the Technology Industries of Finland, aimed at encouraging educators to explore the potential of using ChatGPT (as a representative of LLMs in general) in various educational levels. This initiative was in response to the common tendency towards banning ChatGPT in education. Educators were tasked to submit their positive ideas on how ChatGPT could be utilized in teaching and learning. The competition was announced in January 2023 across three Finnish Facebook groups: Opettajien vuosityöaika, ICT opetuksessa, and Mytech. Submissions were public written posts in the mentioned Facebook groups, allowing for community engagement where everyone could comment on and enhance each other’s ideas.

The language of the written ideas and the meeting notes was Finnish, so the quotes used in Study 1 have been translated freely into English by the researcher (UHR).

The chosen analysis method was inductive thematic analysis, which identifies themes from data sources such as interviews50,51. This method involves open coding and grouping findings to identify relevant themes52. We followed the six steps outlined by Braun and Clarke53,54 without any prior assumptions, categories, or theories.

The first step was Familiarizing with the data. This included repeated reading of the data, searching for meanings and patterns. Notes were taken and ideas for coding were developed while reading. The second step, Generating initial codes, included the production of initial codes from the data. Coding was done by writing notes and comments on the text under analysis. Dataset 1 yielded 28 initial codes altogether, and Dataset 2 yielded 22 (see Table 9).

Table 9 Initial codes for Study 1

In the third step, Searching for themes, the coded data was tabulated in a spreadsheet. The findings were then inserted into an online whiteboard (Miro, RealtimeBoard Inc. 2024), and a visual mind map was created with the initial codes for each dataset separately. Similar codes were grouped into preliminary thematic groups. Three themes were identified in Dataset 1: challenges, general learning ideas, and discipline-specific ideas. From Dataset 2, four themes were identified: ChatGPT fostering AI literacy, ChatGPT acting as a character, ChatGPT improving learning, and ChatGPT as a teacher’s help.

The fourth step was Reviewing themes. In this phase, the coded data was reviewed again, and the themes were analyzed. The dataset was then checked to see whether it reflected the identified themes accurately. Two themes from Dataset 1 were combined into one in this phase, leaving two themes for Dataset 1: challenges in current teaching and opportunities of emerging technologies in teaching. The four themes in Dataset 2 remained as above.

During the fifth step, Defining and naming themes, final refinements were made once the thematic map of the data was satisfactory. In this phase, the key was to identify the essence of what each theme is about. The themes were considered both on their own and in relation to one another. The categories were also named for each theme, for both datasets separately. In the final step, the report was written as the result of this study. Content that demonstrated the essence of the points was chosen from the data and provided as quotations. To produce a conclusion and to answer RQ1, the findings from both datasets were combined into one table (Table 3).

The study on Pedagogical Design started by choosing the applicable reference framework. For sustainability education, the sustainability competency framework was found suitable, as it provided the structure for sustainability competencies, related learning objectives, and instructional methods32,33,34. Subsequently, the author, N.M.A.M.H., identified the feasible learning objectives and their corresponding instructional methods suitable for IVR. Consequently, to answer RQ2, the following three design steps were formulated for the design of GAI-powered characters for pedagogical purposes in IVR:

  1. Selecting the applicable reference framework.

  2. Identifying the learning objectives and instructional methods that could be feasible in IVR.

  3. Designing the GAI characters.

After the initial design was formulated, it was discussed with chosen domain experts to confirm the design ideas before doing extensive prompt engineering for the GAI characters. After the iterative design and prompt engineering, the chosen domain experts tested the design to be able to address RQ3. The outcomes of this design process, covering the three design steps utilized in this study, are presented in the “Results” section.

With the goal of deploying the 3D GAI characters in the IVR learning environment, these characters were initially developed as text-based characters using CurreChat (UH’s user interface for GPT-4). Reflecting the UNESCO guidelines, author N.M.A.M.H. followed the practices of prompt engineering: continuous refinement, iteration, and experimentation with the outcome3. For Tero, prompt engineering entailed over 25 testing rounds covering more than 400 questions. Madida underwent 24 testing rounds with over 225 questions. Testing the prompt for Madida required relatively fewer questions because the exercise intended for Madida includes a limited number of questions36, structured to help the student prepare the presentation for the backcasting exercise.

Study 2 aimed to evaluate the human-GAI interaction quality and the utility of the GAI characters as pedagogical tools. It also sought to gather insights from domain experts to further develop these characters and to inform product development and research, addressing RQ3. A deductive qualitative content analysis55, based on the categories identified in Study 1, was employed to achieve these objectives. Initially, the domain experts interacted with the two GAI characters, which at this stage were developed in text format with UH’s GPT interface (CurreChat). Subsequently, five semi-structured interviews56 were conducted with domain experts whose expertise and knowledge were relevant to the two GAI characters (three interviews for Tero and two for Madida), to elicit in-depth information from the interviewees aligned with the study’s focus55.

The interviews focused on domain experts’ reflections and insights about the GAI characters. Structured guiding questions were formulated—five for Tero and four for Madida—to elicit detailed responses while avoiding ‘yes-or-no’ questions.

The guiding interview questions for Tero were as follows (in this order):

  • What is your impression?

  • To what extent is the information provided by Tero relevant and accurate?

  • To what extent does Tero reflect a real forest owner?

  • What are your opinions about Tero’s forest management plan?

  • What are your suggestions for further development of the conversation with Tero?

For Madida, the guiding interview questions were as follows (in this order):

  • What is your impression?

  • To what extent is the information provided by Madida relevant and accurate?

  • To what extent does the exercise with Madida reflect a real backcasting presentation preparation exercise?

  • What are your suggestions for further development of the conversation with Madida?

For Study 2, testing the two GAI characters involved conducting five interviews with eight domain experts from UH (Table 10). Testing for Tero involved three semi-structured interviews: the first with two forest sciences domain experts, the second with three sustainability education domain experts, and the third with one forest sciences domain expert. For Madida, two testing sessions were held, each involving one backcasting domain expert.

Table 10 GAI characters evaluation interviews: expert details, affiliations, and gender distribution in Study 2

The domain experts were contacted and invited to participate in multiple live testing sessions held between 2022 and 2024. Participation was voluntary, and consent forms were obtained prior to testing. Prompts engineered for the GAI characters were added to UH’s CurreChat yet remained concealed from the testers. After individually testing the text-based characters for 20–30 min, the domain experts participated in semi-structured interviews. These interviews were recorded. The final interview data consisted of two video recordings, one audio recording, one speech-to-text file, and interview notes. All data collected during the interviews were securely stored in UH’s own cloud storage (OneDrive, Microsoft 2024).

The study followed the guidelines of the Declaration of Helsinki and the Finnish National Board on Research Integrity (TENK), together with GDPR-compliant practices. Participants were instructed not to disclose any sensitive information during discussions with the GAI. Furthermore, all participants signed a consent form to participate in the research, and their identities were anonymized prior to analysis.

All the tested features were provided by the UH Serendip developer team in accordance with UH safety and privacy regulations. The developed text-based AI prototypes use UH’s own GDPR-safe GAI interface, CurreChat, which utilizes an LLM developed by OpenAI and runs in UH’s Microsoft Azure cloud service in the EU region. Information entered into the service is not used to further train the LLM, no user identification information is passed on to the language model, and no discussion data is saved in the system.

To address RQ3, the interview data underwent qualitative content analysis55. The analysis aimed to evaluate how well the two pedagogical GAI characters developed for this study met the educators’ needs identified in Study 1. Deductive content analysis is utilized when the analysis is built on the basis of previous information55; it was therefore found suitable for analyzing the experts’ insights during testing against the needs already identified in Study 1, thereby addressing RQ3. The deductive content analysis in this study followed the three main phases identified by Elo & Kyngäs: preparation, organizing, and reporting55.

The first phase of the content analysis was preparation. In this phase, complete familiarity with the data was achieved, the unit of analysis was determined, and a focus on the manifest content was established. Familiarity was gained by author N.M.A.M.H., who led and transcribed all interviews. Each session’s notes were written immediately afterward, and all recordings (videos and audio) were manually transcribed and reviewed to ensure accuracy. This involved multiple reviews of the interviews to reach a high level of data comprehension. A separate document (Word, Microsoft 2024) was then created, containing verbatim transcriptions of all the recorded data (videos and audio), the speech-to-text file, and the interview notes from all five interview sessions. Thus, all interview data, including questions posed by the interviewer and responses from the experts, were compiled into this document for thorough analysis. The unit of analysis was the sentence(s) within each response, chosen because they conveyed specific ideas or insights. Some ideas were expressed in single sentences, while others spanned multiple sentences. The analysis was strictly focused on the manifest content, deliberately excluding any latent elements such as laughter55. This approach was chosen to more accurately assess the effectiveness of the GAI characters (as the designed solution) in meeting the educators’ needs identified in Study 1.

The second phase was organizing. This phase entails deciding on the categorization matrix for the deductive content analysis, developing the coding frame, coding the data, and analyzing the data based on the matrix. Since the co-first authors had already discussed and reached consensus on the grouping of needs into categories—along with their meanings, similarities, and differences—in the conclusion of Study 1, this agreed-upon categorization matrix (see Table 3) was used as the basis for the deductive content analysis in Study 2. A second document (Word, Microsoft 2024) was created for the development of the coding frame based on the needs analysis identified in Study 1 (Table 3). Eleven of the 12 identified needs were each given a two-digit code, while one need was branched into two sub-categories, each given a three-digit code. In each code, the first digit represents the main category, and the second digit represents the generic category. For instance, the identified need (AI acting as a character being a learning companion) was given (Code 1.2); the first digit “1” represents the main category of “AI acting as a character,” and the second digit “2” represents its generic category of “being a learning companion.” The remaining need is “Emerging technology to help the teacher with personalized teaching,” where the generic category of “personalized teaching” was further branched into two sub-categories: “differentiating learning paths” and “providing linguistic possibilities for lesson planning.” Therefore, each sub-category of “personalized teaching” was given a three-digit code: (Code 3.3.1): “Personalized teaching: Differentiating learning paths” and (Code 3.3.2): “Personalized teaching: Linguistic possibilities for lesson planning.” The final coding frame is illustrated in Fig. 3.

Fig. 3

Coding frame used for analyzing data in Study 2.
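The two-level code scheme can be illustrated as a small lookup structure. This is an illustrative sketch only, not part of the study’s tooling: the labels for codes 1.2, 3.3.1, and 3.3.2 are taken from the text, while the main-category label for “3” is an assumed paraphrase and the `describe` helper is hypothetical.

```python
# A minimal sketch of the two-level coding frame described above.
# Labels for codes 1.2, 3.3.1, and 3.3.2 come from the text; the
# main-category label for "3" is an assumed paraphrase of
# "Emerging technology to help the teacher with personalized teaching".
coding_frame = {
    "1": {
        "main": "AI acting as a character",
        "generic": {"1.2": "being a learning companion"},
    },
    "3": {
        "main": "Emerging technology to help the teacher",
        "generic": {
            "3.3": {
                "label": "Personalized teaching",
                "sub": {
                    "3.3.1": "Differentiating learning paths",
                    "3.3.2": "Linguistic possibilities for lesson planning",
                },
            }
        },
    },
}

def describe(code: str) -> str:
    """Resolve a code such as '1.2' or '3.3.1' to its category labels."""
    parts = code.split(".")
    main = coding_frame[parts[0]]             # first digit: main category
    entry = main["generic"][".".join(parts[:2])]  # first two digits: generic category
    if len(parts) == 3:                       # branched need with sub-categories
        return f"{main['main']} / {entry['label']}: {entry['sub'][code]}"
    return f"{main['main']} / {entry}"
```

For example, `describe("1.2")` resolves to “AI acting as a character / being a learning companion”, mirroring how a coded sentence is traced back through its generic category to its main category.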

To address RQ3, which investigates how well the designed GAI-powered characters met the teachers’ identified needs, it was necessary to select data aspects that aligned with the categorization matrix of the teachers’ identified needs (Table 3). The data in Study 2 were coded according to this categorization matrix. This involved comparing sentences to each identified need and then marking the sentences with their corresponding codes using the comment function in the document (Word, Microsoft 2024). Throughout the coding process, constant reference was made to the meanings of each category to ensure the sentences accurately matched each category. After coding the entire dataset, a third document (Word, Microsoft 2024) was created featuring a table with three columns: the first column contains the experts’ needs along with their corresponding codes; the second column provides an explanation of each need; and the third column includes the corresponding sentences from the testing sessions, copied as quotations.

The third phase was reporting. Relevant quotations that illustrated these needs (i.e., codes) were categorized and listed in Table 11. Codes not supported by data findings were removed from the matrix (2.4, 3.1, 3.3.1, and 3.3.2). Additional details of the reporting were provided in the “Results” section.

Table 11 Examples of domain experts’ quotations reflecting the defined categories




How Ivy League Schools Are Navigating AI In The Classroom



The widespread adoption and rapid advancement of Artificial Intelligence (AI) has had far-reaching consequences for education, from student writing and learning outcomes to the college admissions process. While AI can be a helpful tool for students in and outside of the classroom, it can also stunt students’ learning, autonomy, and critical thinking, as secondary and higher education institutions grapple with the promises and pitfalls of generative AI as a pedagogical tool. Given the polarizing nature of AI in higher education, university policies for engaging with AI vary widely both across and within institutions. However, there are some key consistencies across schools that can be informative for students as they prepare for college academics, as well as for the parents and teachers trying to equip high school students for collegiate study amidst this new technological frontier.

Here are five defining elements of Ivy League schools’ approach to AI in education—and what they mean for students developing technological literacy:

1. Emphasis on Instructor and Course Autonomy

First and foremost, it is important to note that no Ivy League school has issued blanket rules on AI use—instead, like many other colleges and secondary schools, Ivy League AI policies emphasize the autonomy of individual instructors in setting policies for their courses. Princeton University’s policy states: “In some current Princeton courses, permitted use of AI must be disclosed with a description of how and why AI was used. Students might also be required to keep any recorded engagement they had with the AI tool (such as chat logs). When in doubt, students should confirm with an instructor whether AI is permitted and how to disclose its use.” Dartmouth likewise notes: “Instructors, programs, and schools may have a variety of reasons for allowing or disallowing Artificial Intelligence tools within a course, or course assignment(s), depending on intended learning outcomes. As such, instructors have authority to determine whether AI tools may be used in their course.”

With this in mind, high school students should be keenly aware that a particular teacher’s AI policies should not be viewed as indicative of all teachers’ attitudes or policies. While students may be permitted to use AI in brainstorming or editing papers at their high school, they should be careful not to grow reliant on these tools in their writing, as their college instructors may prohibit the technology in any capacity. Further, students should note that different disciplines may be more or less inclined toward AI tolerance—for instance, a prospective STEM student might have a wider bandwidth for using the technology than a student who hopes to study English. Because of this, the former should devote more of their time to understanding the technology and researching its uses in their field, whereas the latter should likely avoid employing AI in their work in any capacity, as collegiate policies will likely prohibit its use.

2. View of AI Misuse as Plagiarism / Academic Dishonesty

Just as important as learning to use generative AI in permissible and beneficial ways is learning how generative AI functions. Many Ivy League schools, including UPenn and Columbia, clearly state that AI misuse (whatever that may be in the context of a particular class or project) constitutes academic dishonesty and will be subject to discipline as such. The more students can understand the processes conducted by large language models, the more equipped they will be to make critical decisions about where their use is appropriate, when they need to provide citations, how to spot hallucinations, and how to prompt the technology to cite its sources. Even where AI use is permitted, it is never a substitute for critical thinking, and students should be careful to evaluate all information independently and be transparent about their AI use when permitted.

Parents and teachers can help students in this regard by viewing the technology as a pedagogical tool; they should not only create appropriate boundaries for AI use, but also empower students with the knowledge of how AI works so that they do not view the technology as a magic content generator or unbiased problem-solver.

Relatedly, prestigious universities also emphasize privacy and ethics concerns related to AI usage in and outside of the classroom. UPenn, for instance, notes: “Members of the Penn community should adhere to established principles of respect for intellectual property, particularly copyrights when considering the creation of new data sets for training AI models. Avoid uploading confidential and/or proprietary information to AI platforms prior to seeking patent or copyright protection, as doing so could jeopardize IP rights.” Just as students should take a critical approach to evaluating AI sources, they should also be aware of potential copyright infringement and ethical violations related to generative AI use.

3. Openness to Change and Development in Response to New Technologies

Finally, this is an area of technology that is rapidly developing and changing—which means that colleges’ policies are changing too. Faculty at Ivy League and other top schools are encouraged to revisit their course policies regularly, experiment with new pedagogical methods, and guide students through the process of using AI in responsible, reflective ways. As Columbia’s AI policy notes, “Based on our collective experience with Generative AI use at the University, we anticipate that this guidance will evolve and be updated regularly.”

Just as students should not expect AI policies to be the same across classes or instructors, they should not expect these policies to remain fixed from year to year. The more that students can develop as independent and autonomous thinkers who use AI tools critically, the more they will be able to adapt to these changing policies and avoid the negative repercussions that come from AI policy violations.

Ultimately, students should approach AI with a curious, critical, and research-based mentality. It is essential that high school students looking forward to their collegiate career remember that schools are looking for dynamic, independent thinkers—while the indiscriminate use of AI can hinder their ability to showcase those qualities, a critical and informed approach can distinguish them as knowledgeable citizens of our digital world.




In Peru, gangs target schools for extortion : NPR



Parents drop off their children at the private San Vicente School in Lima, Peru, which was targeted for extortion, in April.

Ernesto Benavides/AFP via Getty Images

LIMA, Peru — At a Roman Catholic elementary school on the ramshackle outskirts of Lima, students are rambunctious and seemingly carefree. By contrast, school administrators are stressing out.

One tells NPR that gangsters are demanding that the school pay them between 50,000 and 100,000 Peruvian soles — between $14,000 and $28,000.

“They send us messages saying they know where we live,” says the administrator — who, for fear of retaliation from the gangs, does not want to reveal his identity or the name of the school. “They send us photos of grenades and pistols.”

These are not empty threats. A few weeks ago, he says, police arrested a 16-year-old in the pay of gangs as he planted a bomb at the entrance to the school. The teenager was not a student and had no other connection to the school.

Schools in Peru are easy targets for extortion. Due to the poor quality of public education, thousands of private schools have sprung up. Many are located in impoverished barrios dominated by criminals — who are now demanding a cut of their tuition fees.

Miriam Ramírez, president of one of Lima’s largest parent-teacher associations, says at least 1,000 schools in the Peruvian capital are being extorted and that most are caving in to the demands of the gangs. To reduce the threat to students, some schools have switched to online classes. But she says at least five have closed down.

Miriam Ramírez is president of one of Lima’s largest parent-teacher associations and she says at least 1,000 schools in the Peruvian capital are being extorted and that most are caving in to the demands of the gangs.

John Otis for NPR

If this keeps up, Ramírez says, “The country is going to end up in total ignorance.”

Extortion is part of a broader crime wave in Peru that gained traction during the COVID pandemic. Peru also saw a huge influx of Venezuelan migrants, including members of the Tren de Aragua criminal group that specializes in extortion — though authorities concede it is hard to definitively connect Tren de Aragua members with these school extortions.

Francisco Rivadeneyra, a former Peruvian police commander, tells NPR that corrupt cops are part of the problem. In exchange for bribes, he says, officers tip off gangs about pending police raids. NPR reached out to the Peruvian police for comment but there was no response.

Political instability has made things worse. Due to corruption scandals, Peru has had six presidents in the past nine years. In March, current President Dina Boluarte declared a state of emergency in Lima and ordered the army into the streets to help fight crime.

But analysts say it’s made little difference. Extortionists now operate in the poorest patches of Lima, areas with little policing, targeting hole-in-the-wall bodegas, streetside empanada stands and even soup kitchens. Many of the gang members themselves are from poor or working class backgrounds, authorities say, so they are moving in an environment that they already know.

“We barely have enough money to buy food supplies,” says Genoveba Huatarongo, who helps prepare 100 meals per day at a soup kitchen in the squatter community of Villa María.

Even so, she says, thugs stabbed one of her workers and then left a note demanding weekly “protection” payments. Huatarongo reported the threats to the police. To avoid similar attacks, nearby soup kitchens now pay the gangsters $14 per week, she says.

But there is some pushback.

Carla Pacheco, who runs a tiny grocery in a working-class Lima neighborhood, is refusing to make the $280 weekly payments that local gangsters are demanding, pointing out that it takes her a full month to earn that amount.

Carla Pacheco runs a tiny grocery in Lima and she is refusing to make the $280 weekly payments that local gangsters are demanding.

John Otis for NPR

She’s paid a heavy price. One morning she found her three cats decapitated, their heads hung in front of her store.

Though horrified, she’s holding out. To protect her kids, she changed her children’s schools to make it harder for gangsters to target them.

She rarely goes out and now dispenses groceries through her barred front door rather than allowing shoppers inside.

“I can’t support corruption because I am the daughter of a policeman,” Pacheco explains. “If I pay the gangs, that would bring me down to their level.”

After a bomb was found at its front gate in March, the San Vicente School in north Lima hired private security guards and switched to online learning for several weeks. When normal classes resumed, San Vicente officials told students to wear street clothes rather than school uniforms to avoid being recognized by gang members.

“They could shoot the students in revenge,” explains Violeta Upangi, waiting outside the school to pick up her 13-year-old daughter.

Due to the threats, about 40 of San Vicente’s 1,000 students have left the school, says social studies teacher Julio León.

Rather than resist, many schools have buckled to extortion demands.

The administrator at the Catholic elementary school says his colleagues reported extortion threats to the police. But instead of going after the gangs, he says, the police recommended that the school pay them off for their own safety. As a result, the school ended up forking over the equivalent of $14,000. The school is now factoring extortion payments into its annual budgets, the administrator says.

“It was either that,” the administrator explains, “or close down the school.”




Labour must keep EHCPs in Send system, says education committee chair | Special educational needs



Downing Street should commit to education, health and care plans (EHCPs) to keep the trust of families who have children with special educational needs, the Labour MP who chairs the education select committee has said.

A letter to the Guardian on Monday, signed by dozens of special needs and disability charities and campaigners, warned against government changes to the Send system that would restrict or abolish EHCPs. More than 600,000 children and young people rely on EHCPs for individual support in England.

Helen Hayes, who chairs the cross-party Commons education select committee, said mistrust among many families with Send children was so apparent that ministers should commit to keeping EHCPs.

“I think at this stage that would be the right thing to do,” she told BBC Radio 4’s Today programme. “We have been looking, as the education select committee, at the Send system for the last several months. We have heard extensive evidence from parents, from organisations that represent parents, from professionals and from others who are deeply involved in the system, which is failing so many children and families at the moment.

“One of the consequences of that failure is that parents really have so little trust and confidence in the Send system at the moment. And the government should take that very seriously as it charts a way forward for reform.

“It must be undertaking reform and setting out new proposals in a way that helps to build the trust and confidence of parents and which doesn’t make parents feel even more fearful than they do already about their children’s future.”

She added: “At the moment, we have a system where all of the accountability is loaded on to the statutory part of the process, the EHCP system, and I think it is understandable that many parents would feel very, very fearful when the government won’t confirm absolutely that EHCPs and all of the accountabilities that surround them will remain in place.”

The letter published in the Guardian is evidence of growing public concern, despite reassurances from the education secretary, Bridget Phillipson, that no decisions have yet been taken about the fate of EHCPs.

Labour MPs who spoke to the Guardian are worried ministers are unable to explain key details of the special educational needs shake-up being considered in the schools white paper to be published in October.

Stephen Morgan, a junior education minister, reiterated Phillipson’s refusal to say whether the white paper would include plans to change or abolish EHCPs, telling Sky News he could not “get into the mechanics” of the changes for now.

However, he said change was needed: “We inherited a Send system which was broken. The previous government described it as lose, lose, lose, and I want to make sure that children get the right support where they need it, across the country.”

Hayes reiterated this wider point, saying: “It is absolutely clear to us on the select committee that we have a system which is broken. It is failing families, and the government will be wanting to look at how that system can be made to work better.

“But I think they have to take this issue of the lack of trust and confidence, the fear that parents have, and the impact that it has on the daily lives of families. This is an everyday lived reality if you are battling a system that is failing your child, and the EHCPs provide statutory certainty for some parents. It isn’t a perfect system … but it does provide important statutory protection and accountability.”


