
Education

A hybrid model combining environmental analysis and machine learning for predicting AI education quality



This section of the article proposes strategies for addressing the challenges of applying AI in universities, tackling them from two perspectives. First, using a macro-environment segmentation method, corrective actions and their impact on the management of education and learning in universities are examined within the context of AI education. Second, an effective strategy, grounded in machine learning (ML) techniques, is presented for developing high-quality AI educational programs in higher education.

Corrective actions in university education and learning management, within the context of AI education, based on macro-environment segmentation

This section proposes a set of corrective actions in university education and learning management within the context of AI education, based on macro-environment segmentation. According to this research, the macro environment of university teaching and learning is divided into three parts: the external environment, the intermediate environment, and the internal environment. In the external environment, actions such as enacting supportive laws and regulations for AI education, collaborating with relevant industries and institutions, and utilizing external resources for the advancement of AI education are undertaken. In the intermediate environment, actions like establishing an appropriate management structure, developing human resources, and employing new teaching and learning technologies are carried out. Finally, within the internal environment, actions such as designing and offering AI-based educational programs, developing AI-based teaching and learning methods, and assessing the effectiveness of AI education are performed.

External environment

The external environment encompasses factors that are beyond the control of universities. These factors can influence university teaching and learning. Corrective actions in the external environment can help create conducive conditions for the advancement of AI education in universities. These actions can be undertaken by governments, relevant industries and institutions, and the universities themselves. The necessary corrective actions in the external environment for enhancing university teaching and learning in the context of AI education are listed below:

  • Establishing laws and regulations to support AI education: Governments can assist the development of this field in universities by establishing supportive laws and regulations for AI education. These may include the following:

    • Financial support for universities to develop AI education: Governments can support the development of AI education in universities by allocating government funds. These funds can be used to cover financial expenses, provide human resources, and acquire the technologies necessary for advancing AI education.

    • Facilitating collaboration between universities and related industries and institutions: Governments can support the development of AI education in universities by facilitating such collaboration, for example through:

      • Laws and regulations that support collaboration between universities and related industries and institutions.

      • Financial and tax incentives for companies that collaborate with universities.

    • Establishing educational standards for AI education: Governments can ensure the quality of AI education in universities by establishing educational standards.

  • Collaboration with related industries and institutions: Through collaboration, universities can benefit from the knowledge and experience of these organizations in developing AI education. Such collaborations can lead to joint educational programs, joint research, and financial resources for the development of AI education.

  • Utilizing external resources for the development of AI education: Universities can draw on external resources, such as government grants, private sector grants, and grants from international organizations, to develop AI education. These resources can be used to cover financial expenses, provide human resources, and acquire the technologies necessary for developing AI education.

Intermediate environment

The intermediate environment includes factors that are within universities but outside the direct control of managers and education and learning experts. These factors can influence teaching and learning in universities. Corrective actions in the intermediate environment can help establish an appropriate structure and conditions for the development of AI education in universities. The corrective actions in the intermediate environment include:

  • Establishing an appropriate management structure: Universities should establish an appropriate management structure for the development of AI education. This structure should include the following:

    • A management unit or center responsible for AI education.

    • A policy council for AI education.

    • An expert team for the development of AI education.

  • Development of human resources: Universities should undertake necessary actions to develop human resources specialized in AI. These actions may include the following:

    • Conducting training courses for university staff.

    • Attracting and hiring AI specialists.

    • Creating job opportunities for AI specialists at the university.

  • Utilizing new teaching and learning technologies: Universities should utilize new teaching and learning technologies for the development of AI education, such as ML, deep learning (DL), virtual reality, and augmented reality.

Internal environment

The internal environment includes factors that are within universities and under the direct control of managers and education and learning experts. These factors can influence teaching and learning in universities. Corrective actions in the internal environment can contribute directly to the development of AI education in universities. The corrective actions in the university’s internal environment include:

  • Designing and offering AI-based educational programs: Universities should design and offer educational programs based on AI. These programs should fulfill the needs of students and society.

  • Development of AI-based teaching and learning methods: Universities should develop teaching and learning methods based on AI. These methods should facilitate active and collaborative learning among students.

  • Evaluating the effectiveness of AI education: Universities should evaluate the effectiveness of AI education. These evaluations can contribute to improving the quality of AI education.

In summary, corrective actions in the external, intermediate, and internal environments can aid the development of AI education in universities. It follows that efficient and precise strategies for evaluating the quality and effectiveness of AI educational programs are a primary requirement at various levels. Carrying out this evaluation in traditional ways can be time-consuming and complex; by utilizing AI techniques, however, it can be accomplished more efficiently. The next section presents a strategy based on AI techniques to achieve this goal.

Proposed strategy for evaluating the quality of AI training programs in higher education

In this section, a new strategy is proposed for quality evaluation of AI education programs in higher education. In this regard, the dataset used for designing this model is described first, followed by the presentation of the steps of the proposed strategy.

Data

In this research, a dataset containing information on AI educational programs at the higher education level was utilized. The data were collected through in-person AI training classes in various technical and engineering faculties. During the data collection process, 188 questionnaires were distributed among respondents in the target population. After collection, the accuracy and completeness of the provided information were verified. All questionnaires were anonymous, containing no personal or identifying information. The study followed ethical principles for research involving human subjects, although it did not constitute human subjects research as defined by the Belmont Report: data were collected through a voluntary, anonymous questionnaire that posed no risks beyond those encountered in daily life. Informed consent was obtained from all subjects and/or their legal guardian(s). No identifiable data were collected; all dataset instances were anonymized and the attributes were encoded, so no participant can be identified from the data. None of the questionnaires contained invalid information; however, 8 questionnaires had at least one unanswered question and were discarded because of the resulting missing values. The total number of samples in the dataset used in this research therefore amounts to 180. Each questionnaire was evaluated by three experts, who rated the quality of the educational program as a numerical variable ranging from zero (worst) to 100 (best), and the final score for each sample was obtained by averaging their ratings. The standard deviation of the experts' scores is 2.66, indicating strong agreement among the raters.

The list of indicators collected through the questionnaires is presented in Table 2. These indicators fall into three general categories, each of which could potentially relate to the quality of an educational program. Accordingly, this research aims to evaluate AI education quality using (a subset of) the 14 indicators listed in Table 2.

Table 2 The set of indicators considered to evaluate the quality of AI education.

Proposed quality evaluation algorithm

The proposed algorithm for the AI education quality evaluation in higher education utilizes a combination of optimization techniques and ML. In this approach, the optimization technique is initially used for identifying the indicators associated with the quality of AI training. Subsequently, an optimally structured artificial neural network (ANN) is utilized for prediction. This algorithm can be broken down into the following steps:

  1. Data pre-processing.

  2. Feature selection based on the Capuchin Search Algorithm (CapSA).

  3. Quality prediction based on ANN and CapSA.

The rest of this section is devoted to the description of each of the above steps.

Data preprocessing

The data preprocessing stage is the initial phase of the proposed model and is utilized to prepare the database for processing in subsequent stages. This stage comprises two steps: value conversion and normalization. To this end, all nominal features are first converted into numerical values. Specifically, for each nominal feature, a unique list of all its nominal values is initially prepared. In ranked features, the unique values obtained are sorted based on rank, and in discrete nominal features, this list is sorted in ascending order based on the frequency of the value in that feature. Then, the nominal value is replaced by a natural number corresponding to its position in the sorted list. By doing this, all the features of the dataset are converted into a numerical format. At the end of the pre-processing step, all features are mapped to the range [0, 1] based on the following relationship:

$$\vec{N}_{i}=\frac{\vec{x}-\min\left(\vec{x}\right)}{\max\left(\vec{x}\right)-\min\left(\vec{x}\right)}$$

(1)

Where \(\vec{x}\) represents the input feature vector and \(\vec{N}_{i}\) represents the corresponding normalized vector. Also, min and max are the minimum and maximum functions for the feature vector, respectively.
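The preprocessing described above (rank- or frequency-based nominal encoding followed by min-max scaling per Eq. 1) can be sketched as follows; the example values and the `ranked_order` argument are illustrative assumptions, not part of the paper's dataset:

```python
import numpy as np

def encode_nominal(column, ranked_order=None):
    """Map nominal values to natural numbers.

    Ranked features use a caller-supplied rank order; discrete nominal
    features are ordered by ascending frequency, as described in the text.
    """
    if ranked_order is None:
        values, counts = np.unique(column, return_counts=True)
        ranked_order = [v for _, v in sorted(zip(counts, values))]
    lookup = {v: i + 1 for i, v in enumerate(ranked_order)}
    return np.array([lookup[v] for v in column])

def min_max_normalize(x):
    """Map a numeric feature vector to [0, 1] (Eq. 1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# Example: a ranked feature with an explicit rank order.
quality = encode_nominal(["low", "high", "mid"],
                         ranked_order=["low", "mid", "high"])
print(min_max_normalize(quality))  # [0.  1.  0.5]
```

Each feature vector is normalized independently, so all 14 indicators end up on the common [0, 1] scale before feature selection.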

Feature selection using CapSA

After normalizing the features of the database, feature selection and dimensionality reduction are carried out. The aim is to reduce the number of features, thereby increasing processing speed and reducing the error rate in evaluating the quality of AI training. The CapSA algorithm is utilized for this purpose. CapSA is a proven, fast metaheuristic optimization algorithm applicable to feature selection. It is capable of both global search and local fine-tuning, is less prone to getting trapped in local optima, and converges quickly to good solutions. Compared with algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), CapSA is easier to implement and faster. By choosing appropriate features, it increases the model's accuracy, decreases overfitting, and improves interpretability. In the following, the structure of the solution vector and the objectives defined for the optimization algorithm are explained first, followed by a description of the feature selection steps using CapSA.

In the proposed method, the number of optimization variables corresponds to the number of features present in the database (Table 2), which is equal to 14. In other words, each solution vector of the optimization algorithm is of length 14. CapSA should be capable of determining the selection or non-selection of a feature via the response vector. In this way, each solution vector can be viewed as a binary string where each existing feature is assigned a position in the response vector of the optimization algorithm. Each position can have a value of 0 or 1. If a position has a value of 0, that feature is not selected in the current solution, and if it has a value of 1, the feature corresponding to the current position is considered as the selected feature.
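As an illustration, decoding such a binary solution vector into the indices of the selected features (assuming the 14 indicators of Table 2) might look like:

```python
import numpy as np

N_FEATURES = 14  # number of indicators in Table 2

def decode_solution(solution):
    """Return the indices of the features whose position holds a 1."""
    solution = np.asarray(solution)
    assert solution.shape == (N_FEATURES,)
    return np.flatnonzero(solution == 1)

mask = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1]
print(decode_solution(mask))  # [ 0  3  4  8 13]
```

Here the candidate solution selects 5 of the 14 indicators; the remaining positions are ignored when the fitness of the solution is evaluated.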

Optimization objectives can be considered the most crucial part of an optimization algorithm. In the proposed method, the following two objectives are utilized to assess the quality or fitness of solution vectors in CapSA:

A) Maximizing the average correlation of the selected features with the target variable: The more a feature correlates with the target variable of the problem, the more significant that feature becomes. In other words, it becomes easier to predict the change in the target based on it. For this reason, maximizing the correlation of the selected features with the target variable is considered as the first objective in the optimization algorithm. This objective criterion is described in Eq. (2):

$$F_{1}=\frac{1}{\left|S\right|}\sum_{\forall i\in S}corr(i,T)$$

(2)

Where S represents the set of features selected in the current solution and \(\left|S\right|\) represents the number of these features. Also, T denotes the target variable and \(corr(i,T)\) is the correlation evaluation function between the selected feature i and the target variable.

B) Minimizing the average correlation of selected features with each other: A feature is suitable for selection if it can provide new information compared to other selected features. Features that have highly correlated values exhibit similar patterns, and it is not appropriate to select them as descriptive features of the data. For this reason, minimizing the correlation of selected features is considered as the second objective of CapSA. This objective can be described in Eq. (3):

$$F_{2}=\frac{1}{\left|S\right|^{2}}\sum_{\forall i\in S}\ \sum_{\forall j\in S,\ j\ne i}corr(i,j)$$

(3)

Since the two aforementioned objectives pull in opposite directions (the first is a maximization objective and the second a minimization objective), they need to be harmonized in the optimization algorithm. Therefore, they are combined into the following fitness function:

$$fitness=\frac{F_{2}}{F_{1}+1}$$

(4)
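A minimal sketch of the combined fitness of Eqs. (2)–(4), using Pearson correlation; taking absolute correlation values is an assumption here, since the text does not specify how negative correlations are treated:

```python
import numpy as np

def fitness(mask, X, y):
    """Combined CapSA fitness (Eq. 4): F2 / (F1 + 1); lower is better.

    mask: binary vector over the features; X: (samples, features); y: target.
    """
    selected = np.flatnonzero(mask)
    if selected.size == 0:
        return np.inf  # no feature selected: worst possible solution
    # F1 (Eq. 2): mean |correlation| of the selected features with the target.
    f1 = np.mean([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in selected])
    # F2 (Eq. 3): mean pairwise |correlation| among the selected features.
    f2 = sum(abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
             for i in selected for j in selected if i != j) / selected.size ** 2
    return f2 / (f1 + 1)
```

A solution with a single selected feature has F2 = 0 and thus fitness 0, which is why in practice the denominator rewards target correlation rather than acting alone.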

Thus, the goal of the feature selection algorithm in the proposed approach is to identify features that can minimize the above relationship. The proposed algorithm, which aims to select the most relevant features to the quality of AI training using CapSA, is as follows:

Step 1) Determine the initial population randomly based on the boundaries set for each optimization variable.

Step 2) Determine the quality of each capuchin (solution) using Eq. (4).

Step 3) Set the initial velocity of each capuchin agent.

Step 4) Select half of the Capuchin population randomly as leaders, and designate the rest as follower Capuchins.

Step 5) If the number of iterations of the algorithm has reached the maximum value of G, proceed to step 13. If not, repeat the following steps:

Step 6) Calculate the CapSA lifetime parameter using Eq. (5):

$$\tau=\beta_{0}e^{-\left(\frac{\beta_{1}g}{G}\right)^{\beta_{2}}}$$

(5)

Where g represents the current iteration number, and the parameters \(\beta_{0}\), \(\beta_{1}\), and \(\beta_{2}\) are the coefficients of the CapSA lifetime.

Step 7) Repeat the following steps for each Capuchin agent (both leader and follower) like i:

Step 8) If i is a Capuchin leader, update its velocity based on Eq. (6):

$$v_{j}^{i}=\rho v_{j}^{i}+\tau a_{1}\left(x_{best_{j}}^{i}-x_{j}^{i}\right)r_{1}+\tau a_{2}\left(F-x_{j}^{i}\right)r_{2}$$

(6)

Where j indexes the dimensions of the problem, and \(v_{j}^{i}\) represents the velocity of capuchin i in dimension j. \(x_{j}^{i}\) indicates the position of capuchin i for the jth variable, and \(x_{best_{j}}^{i}\) describes the best position of capuchin i for the jth variable found so far. F denotes the position of the food source (the best solution found by the swarm), and \(a_{1}\) and \(a_{2}\) are acceleration coefficients. Also, \(r_{1}\) and \(r_{2}\) are two random numbers in the interval [0, 1]. Finally, ρ is the parameter weighting the previous velocity.

Step 9) Update the new positions of the leader Capuchins based on their velocity and movement pattern.

Step 10) Update the new positions of the follower Capuchins based on their velocity and the position of the leader.

Step 11) Determine the quality of the population members using Eq. (4).

Step 12) If the position of the entire population is updated, proceed to step 5; else, go to step 7.

Step 13) Return the solution with the best quality value as the set of selected features.

After executing the above steps, a set of features, denoted X, is selected as significant for the quality of AI training in higher education. This set is then used as the input for the third step of the proposed method. It should be noted that, when implementing CapSA for feature selection, the population size and number of iterations were set to 50 and 100, respectively. The CapSA lifetime parameters \(\beta_{0}\), \(\beta_{1}\), and \(\beta_{2}\) (in Eq. 5) were set to 2, 21, and 2, respectively, and the previous-velocity parameter ρ in Eq. 6 was set to 0.7.
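Steps 6 and 8 of the procedure above can be sketched as follows, using the reported settings β₀ = 2, β₁ = 21, β₂ = 2, and ρ = 0.7; the acceleration coefficients `a1` and `a2` and the decaying form of the exponent are assumptions based on the standard CapSA formulation:

```python
import numpy as np

BETA0, BETA1, BETA2, RHO = 2.0, 21.0, 2.0, 0.7  # settings reported in the text

def lifetime(g, G):
    """CapSA lifetime parameter tau (Eq. 5) at iteration g of G total."""
    return BETA0 * np.exp(-(BETA1 * g / G) ** BETA2)

def leader_velocity(v, x, x_best, food, tau, a1=1.25, a2=1.5, rng=None):
    """Leader velocity update (Eq. 6); a1 and a2 are assumed values."""
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.random(v.shape)  # random factors in [0, 1]
    r2 = rng.random(v.shape)
    return RHO * v + tau * a1 * (x_best - x) * r1 + tau * a2 * (food - x) * r2
```

The lifetime τ shrinks rapidly as g approaches G, so exploration steps toward the personal best and the food source are damped in late iterations.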

Quality prediction based on ANN and CapSA

After identifying the set of indicators that affect the quality of AI education, the final phase of the presented approach predicts the target variable based on these indicators. This research models the relationship between the selected features and the target (quality of education) using ANNs, specifically a multilayer perceptron (MLP). To achieve an accurate prediction model, attention must be given to configuring the MLP optimally: too many neurons and layers increase the model's complexity, while an overly simple model can reduce prediction accuracy. Moreover, conventional training algorithms for adjusting the weights of neural networks (NNs) cannot guarantee the highest prediction accuracy. To address these challenges, the proposed method uses CapSA to optimize both the configuration of the MLP and its training. CapSA adjusts the MLP architecture and weight vectors, enhancing the model's ability to learn the patterns in the dataset. In general, MLPs are well suited to nonlinear mappings and can be used for regression as well as classification. As a result, the proposed hybrid model built on CapSA and the MLP can provide a more accurate and reliable model for predicting the quality of AI education.

In the proposed method, CapSA replaces the conventional training algorithms for MLPs. This optimization model not only adjusts the configuration of the MLP's hidden layers but also determines the optimal weight vector for the NN, by defining training performance as the objective function. Figure 1 illustrates the structure of the NN that the proposed method uses to predict the quality of AI training.

Fig. 1

Structure of the NN employed for predicting the AI education quality.

According to Fig. 1, the proposed NN comprises an input layer, two hidden layers, and an output layer. The input layer is populated with the features selected in the previous phase. CapSA determines the number of neurons in the first and second hidden layers, so the proposed MLP does not have a static architecture. The activation functions of the two hidden layers are set to logarithmic sigmoid and linear, respectively. Finally, the output layer consists of a single neuron whose value indicates the predicted score for the input sample. Each neuron in this NN receives a number of weighted inputs, depicted as directed edges in Fig. 1. In addition, each neuron possesses a bias value, omitted from the figure for simplicity. Under these conditions, the output of each neuron, transferred to the neurons of the subsequent layer, is formulated as follows:

$$y_{i}=G\left(\sum_{n=1}^{N_{i}}w_{n}\times x_{n}+b_{i}\right)$$

(7)

Where \(x_{n}\) and \(w_{n}\) denote the value and weight of the nth input, respectively, and \(b_{i}\) represents the bias of the ith neuron. Also, \(N_{i}\) indicates the number of inputs of the ith neuron and \(G(\cdot)\) is the activation function. As previously mentioned, CapSA is utilized to determine the number of neurons in the hidden layers and to fine-tune the weight vector of this NN. The optimization steps in this phase mirror the process outlined in the second step (feature selection); the difference lies in how the solution vector is encoded and how fitness is assessed. The structure of the solution vector and the fitness criteria are therefore described below.
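Under one reading of the architecture (log-sigmoid first hidden layer, linear second hidden layer, linear output), Eq. (7) applied layer by layer gives the following forward-pass sketch:

```python
import numpy as np

def logsig(z):
    """Logarithmic sigmoid activation: 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def layer(x, W, b, activation):
    """Eq. (7) applied to a whole layer: y = G(W x + b)."""
    return activation(W @ x + b)

def forward(x, W1, b1, W2, b2, W3, b3):
    """Forward pass: log-sigmoid hidden layer, linear hidden layer,
    and a single linear output neuron giving the predicted score."""
    h1 = layer(x, W1, b1, logsig)
    h2 = layer(h1, W2, b2, lambda z: z)  # linear activation
    return layer(h2, W3, b3, lambda z: z)[0]
```

Each weight matrix row collects the \(w_{n}\) of one neuron, and the bias vector collects the \(b_{i}\), so the matrix form is exactly Eq. (7) evaluated for every neuron of a layer at once.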

The solution vector (capuchin) in the capuchin search algorithm, as used in the presented approach, dictates the topology of the MLP as well as its weight/bias vector. Consequently, each solution vector in CapSA is composed of two interconnected parts. The first part determines the sizes of the hidden layers in the NN, while the second part defines the weight/bias vector corresponding to the topology established in the first part. The capuchins in this step therefore have variable length. Given that the number of possible NN topologies is unbounded, a range of 0 to 15 neurons is allowed for each hidden layer. Each entry in the first part of the solution vector is thus a natural number in the range 0 to 15, and if a layer is assigned no neurons (0), that layer is eliminated. It is worth noting that the first part of each capuchin only specifies the sizes of the hidden layers (not the input or output layers).

The length of the second part of the solution vector is dictated by the topology established in the first part. For a NN with I input neurons, H1 neurons in the first hidden layer, H2 neurons in the second hidden layer, and P output neurons, the length of the second part of each solution vector in CapSA corresponds to:

$$L=H_{1}\times\left(I+1\right)+H_{2}\times\left(H_{1}+1\right)+P\times\left(H_{2}+1\right)$$

(8)

Where \(H_{1}\times(I+1)\) is the number of weights between the input layer and the first hidden layer, plus the biases of the first hidden layer. \(H_{2}\times(H_{1}+1)\) is the number of weights between the first and second hidden layers, plus the biases of the second hidden layer. Finally, \(P\times(H_{2}+1)\) is the number of weights between the last two layers, plus the bias of the output layer. The length of the second part of each solution vector in the optimization algorithm therefore equals L. In this vector, each weight and bias is represented as a real value within the interval [-1, +1]; in other words, each optimization variable in the second part of the solution vector is a real variable with search boundaries of [-1, +1].
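The length computation of Eq. (8), together with one possible way of slicing the second part of a solution vector back into per-layer weights and biases, can be sketched as:

```python
import numpy as np

def solution_length(I, H1, H2, P):
    """Eq. (8): total number of weights and biases to encode."""
    return H1 * (I + 1) + H2 * (H1 + 1) + P * (H2 + 1)

def unpack(vec, I, H1, H2, P):
    """Slice a flat [-1, 1] vector into (W, b) pairs, layer by layer."""
    parts, pos = [], 0
    for n_in, n_out in ((I, H1), (H1, H2), (H2, P)):
        W = vec[pos:pos + n_out * n_in].reshape(n_out, n_in)
        pos += n_out * n_in
        b = vec[pos:pos + n_out]
        pos += n_out
        parts.append((W, b))
    return parts

# e.g. 5 selected input features, 8 and 4 hidden neurons, 1 output score
print(solution_length(5, 8, 4, 1))  # 89
```

Because H1 and H2 come from the first part of the capuchin, L (and hence the capuchin's total length) changes whenever the topology part changes.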

The first population of CapSA is generated randomly. Once the weights have been set from a solution, the NN produces outputs for the training instances. These outputs are then compared with the ground-truth target values, and on that basis the NN's performance (training quality) is measured. The Mean Absolute Error (MAE) criterion is used to assess both the NN's training quality and the optimality of the solution. Consequently, the objective function of CapSA is formulated by Eq. (9):

$$MAE=\frac{1}{N}\sum_{i=1}^{N}\left|T_{i}-Z_{i}\right|$$

(9)

Where N denotes the number of training instances and Ti indicates the actual target value of the ith training instance. Also, Zi is the output generated by the NN for the ith training sample. As previously mentioned, the optimization steps of the MLP model by CapSA in this phase mirror the process outlined in the second step (feature selection), so they are not repeated here. Upon finding the NN topology and weight vector that minimize Eq. (9), this NN is employed to predict the quality of training for new samples. It should be noted that in this phase CapSA was run with the same parameter settings used for feature selection.
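The MAE objective of Eq. (9) is straightforward to implement; a minimal sketch:

```python
import numpy as np

def mae(targets, outputs):
    """Mean Absolute Error (Eq. 9) between ground truth and NN outputs."""
    targets = np.asarray(targets, dtype=float)
    outputs = np.asarray(outputs, dtype=float)
    return np.mean(np.abs(targets - outputs))

print(mae([80, 65, 90], [78, 70, 90]))  # (2 + 5 + 0) / 3 ≈ 2.33
```

Since quality scores lie on a 0–100 scale, MAE is directly interpretable as the average number of score points by which the network's predictions miss the experts' ratings.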





Traya’s holistic prescription, ET BrandEquity



Saloni Anand, co-founder of Traya

Five years ago, Traya Health, a holistic hair loss solution, was born out of a deeply personal health struggle faced by co-founder Saloni Anand and her husband. What began as a quest for personal well-being has blossomed into a pioneering brand that challenges conventional wisdom in the hair care industry. Saloni shared Traya’s science-first approach in a session at the ETBrandEquity Brand World Summit 2025.

The genesis of a solution

Saloni Anand, co-founder of Traya, recounted the origins of the brand. Her co-founder, armed with a biomedical chemistry background, embarked on extensive research to address his uncontrollable hypothyroidism. During this challenging journey, a surprising side effect emerged: his hair began to grow back.

“About two years later, we realised that this is something awesome, and everything out there in the industry is not able to grow hair, but we could, so there’s some potential to explore this,” Anand shared. This discovery spurred intensive research into hair science, revealing critical insights that would become the bedrock of Traya’s unique approach.

Dispelling hair loss myths: Traya’s foundational learnings

Traya’s deep dive into hair science led to three fundamental revelations that shaped their model:

Diagnosis is key: “We learned hair loss is genetic mostly, but has multiple types. Not everyone has hair loss because medically multiple types of it require diagnosis.”

Follicle potential: Hair regrowth is possible if follicles are still present, meaning it’s achievable for most individuals not in very advanced stages of hair loss.

No magic bullet: “There is no magic molecule for one product that can grow everyone’s hair. It’s a wider thing that’s happening. It’s more like diabetes than anything.”

Analysing the existing hair industry, Anand observed, “More than 10,000 products on Amazon today sell with the label of hair fall and are topical. Selling you a shampoo, conditioner that has wrongful claims, promising 30-day results, sometimes even worse.” This landscape, rife with superficial solutions, solidified Traya’s mission: “We are here to grow hair, and we will do everything it takes to get that emphasis.”

The “three sciences” model: Traya’s holistic prescription

The first year was dedicated solely to building formulations. This led to Traya’s distinctive model: a hair solution built on diagnosis and a holistic approach. The brand name, “Traya,” is Sanskrit for “three sciences,” embodying their core philosophy: Ayurveda, Allopathy (Dermatology) and Nutrition.

The consumer journey begins with an online diagnosis. The solution provided is a customised kit incorporating elements from all three sciences, including a diet plan, recognising that hair loss often stems from internal imbalances.

Initial skepticism from investors was high. Saloni and her husband launched Traya with personal funds. Six months later, with tangible results from their first critical trials, they secured their initial investment.

Breaking the rules: A D2C brand of the future

Traya today stands as a largely scaled, profitable brand, having served over 10 lakh Indians. A distinctive aspect of its D2C model is that 100 per cent of its revenue comes directly from its platform. “If you download the Traya app, take a long diagnosis. They buy a gift. If the consumer cannot choose which product they are buying. We tell them what they should buy,” Anand stated, emphasising their doctor-led, personalised approach.

Eighty per cent of Traya’s revenue comes from repeat customers. “This happened because we did not have the baggage of how,” she noted.

Education, retention and AI: The pillars of growth

Anand highlighted three critical pillars for modern D2C success:

Believe in education: Traya faced the challenge of educating consumers on why previous topical solutions had failed and why a holistic, science-backed approach was necessary. “Our journey from zero to one crore per month is really smooth. We really had to build these fundamentals,” she revealed. This rapid scale was driven by a deep commitment to educating their audience. Traya’s culture prohibits discussing competitor brands, focusing solely on their consumers. “The moment you do that and you just focus on your consumer, you have the ability to do something,” she added.

Retention over acquisition: Traya defines itself internally as a “habit building organisation,” treating hair loss as a chronic disease. Their North Star metric is retention, supported by a data-tech engine and over 800 hair coaches who ensure adherence and usage. “Back in 2023, when we were having that growth chart, we reached a point where we saw retention numbers there, and we cautiously stopped all our marketing scale up,” Anand disclosed. This move underscored their commitment to long-term customer success over short-term acquisition. “How can you be a D2C brand in 2025? That’s not too little but is just too little today to differentiate. Can you add a service there? Can you add a community? How can you be more than just a product gone?”

Embrace AI: While acknowledging AI as a buzzword, Anand firmly believes it will be a pivotal theme in brand building. Traya, despite its 800-person team, has already seen impressive results from integrating AI. “Three months ago, I took a mandate at Traya: no more tech hiring. Since then, we have done zero tech hiring, and we’ve increased tech productivity four times,” she shared, emphasising the transformative power of AI in consumer evaluation, discovery and shopping.

Saloni Anand concluded by summarising her key takeaways for aspiring D2C brands: “Think more than product solutions. Think of efficiency. Think science: if your product works, everything else will fall into place. Think AI. Think of the review word and think of retention first.”

  • Published On Jul 7, 2025 at 08:59 AM IST


Education

Ministers urged to keep care plans for children with special needs

Published

on


Ministers are facing calls to not cut education plans for children and young people with special needs and disabilities (Send).

Campaigners say education, health and care plans (EHCPs) are “precious legal protections”, warning that thousands of children could lose access to education if the plans are abolished.

The government has said it inherited the current system “left on its knees”. Speaking on the BBC’s Sunday with Laura Kuenssberg programme, Education Secretary Bridget Phillipson described it as a “complex and sensitive area” when asked if she could rule out scrapping EHCPs.

But Neil O’Brien, the shadow education minister, has criticised the government for “broken promises and U-turns”.

An EHCP is a legally binding document which ensures a child or young person with special educational needs gets the right support from a local authority.

Full details of the proposed changes are due in October, but ministers have not ruled out scrapping the education plans, insisting no decisions have been taken.

In a letter to the Guardian newspaper, campaigners have said that without the documents in mainstream schools, “many thousands of children risk being denied vital provision, or losing access to education altogether”.

“Whatever the Send system’s problems, the answer is not to remove the rights of children and young people. Families cannot afford to lose these precious legal protections,” they added.

Signatories to the letter include the heads of charities, professors, Send parents including actor Sally Phillips, and campaigners including broadcaster Chris Packham.

Speaking to the BBC’s Sunday with Laura Kuenssberg programme, Ms Phillipson said:

“What I can say very clearly is that we will strengthen and put in place better support for children.

“I’ve been spending a lot of time listening to parents, to disability rights groups, to campaigners and to others and to colleagues across Parliament as well, because it’s important to get this right,” she added, but said it is “tough”.

Mr O’Brien, the shadow minister, said the government had “no credibility left”.

“This is a government defined by broken promises and U-turns. They said they would employ more teachers and they have fewer. They said they would not raise tax on working people but did,” Mr O’Brien said.

Data from the Department for Education released in June showed that the number of EHCPs has increased.

In total, there were 638,745 EHCPs in place in January 2025, up 10.8% on the same point last year.

The number of new plans which started during 2024 also grew by 15.8% on the previous year, to 97,747.

Requests for children to be assessed for EHCPs rose by 11.8% to 154,489 in 2023.
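The year-on-year percentages above imply the approximate prior-year figures. A quick sketch of that arithmetic (the published numbers are rounded, so these back-calculated values are estimates):

```python
# Back out the approximate prior-year figures implied by the DfE percentage
# changes quoted above. Values are rounded in the source, so these are estimates.
plans_2025, plans_growth = 638_745, 0.108    # EHCPs in place, up 10.8% YoY
new_plans, new_growth = 97_747, 0.158        # new plans started in 2024, up 15.8%
requests, req_growth = 154_489, 0.118        # assessment requests, up 11.8%

prior_plans = plans_2025 / (1 + plans_growth)
prior_new = new_plans / (1 + new_growth)
prior_requests = requests / (1 + req_growth)

print(round(prior_plans))     # about 576,485 EHCPs a year earlier
print(round(prior_new))       # about 84,410 new plans the previous year
print(round(prior_requests))  # about 138,183 requests the previous year
```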

A Department for Education spokesperson said: “We have been clear that there are no plans to abolish Send tribunals, or to remove funding or support from children, families and schools.”

The spokesperson added that it would be “totally inaccurate to suggest that children, families and schools might experience any loss of funding or support”.




Education

Korean tech companies eye growing AI public education market

Published

on



Artificial intelligence (AI) is bringing a fresh wave of innovation to South Korea’s public education sector. Big tech companies are actively developing AI-based solutions for public education and forming partnerships with schools, alongside edtech startups.

According to the information technology (IT) industry on Sunday, Naver launched a digital public education support system called Whale UBT in April 2025 and integrated it into the Gwangju Metropolitan Office of Education’s teaching-learning platform, Gwangju AI-ON. Naver also plans to expand adoption to other regional education offices.

Whale UBT allows for the unified management of various test items – including diagnostic and unit assessments – within a single platform. A database of about 400,000 questions provided by four educational publishers is available, enabling teachers to create customized tests based on students’ levels. It also features automatic grading.

Until now, AI education platforms had been adopted more rapidly in private education, where entry barriers are comparatively lower. The use of AI tools in public education was initially determined by individual teachers; however, implementation has been increasing rapidly at both the school and district levels.

This trend is driven in part by the increasing sophistication of AI solutions. These tools now go beyond simply marking answers right or wrong – they can analyze step-by-step processes for descriptive questions, improving both convenience and educational outcomes.

A good example is edtech startup Turing Co.’s math learning platform, Math King. Turing signed a memorandum of understanding with the Korea Association of Future Education Study in February 2025 to promote adoption of Math King in Korean schools.

Math King can generate personalized problem sets for each student in just one second, and AI analyzes even the descriptive answers in homework assignments. The system automatically generates consultation reports that can be sent to parents and includes recommendations for future learning directions.

“We are using Math King for advanced classes, and it has eliminated the hassle of creating customized math problems,” Gyeonggu High School teacher Park Jun-hyung said. “I can now manage nearly twice as many students.”

AI solutions also help with administrative tasks, significantly reducing teachers’ workloads, particularly for writing student records.

While many teachers have already been using tools like ChatGPT informally for record writing, new, more convenient solutions are now being developed. These specialized AI tools offer stronger security than ChatGPT.

Edtech startup Elements launched inline AI, a solution specifically designed to assist with student record writing, in April. It employs a local Retrieval-Augmented Generation (RAG) system, ensuring that data is not sent externally. The AI updates student records automatically based on data from teachers and students.

Given the rapid growth of the AI education market, adoption in public education is expected to accelerate even further. According to market research firm Straits Research, the global AI education market is projected to grow from $4.43 billion in 2024 to $72.45 billion in 2033.
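Taken at face value, the Straits Research figures imply a compound annual growth rate of roughly 36 per cent over the nine-year span. A quick sketch of that arithmetic:

```python
# Implied compound annual growth rate (CAGR) from the figures cited above:
# $4.43B (2024) -> $72.45B (2033), i.e. nine years of compounding.
start, end, years = 4.43, 72.45, 2033 - 2024

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> Implied CAGR: 36.4%
```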

By Ahn Sun-je and Lee Eun-joo
[ⓒ Pulse by Maeil Business News Korea & mk.co.kr, All rights reserved]


