Education
Indian students, groups sound alarm over gov’t scholarship woes
Out of 440 applications received under the National Overseas Scholarship (NOS) scheme, 106 candidates were placed on the selected list. The scheme is administered by the Ministry of Social Justice and Empowerment to support students from disadvantaged communities, including Scheduled Castes, Denotified, Nomadic and Semi-Nomadic Tribes, Landless Agricultural Labourers, and Traditional Artisans.
However, only 40 of them have received provisional award letters, while the remaining 66 will receive their awards only subject to the “availability of funds,” according to a public notification by the ministry.
“106 candidates have been placed in the selected list. Out of these, initially, the provisional award letters will be issued to the candidates from serial number 1 to 40,” read the ministry’s July 1 announcement.
“Provisional award letters to the remaining candidates (from serial number 41 to 106) in the selected list may be issued in due course, subject to availability of funds,” the statement added.
A further 64 eligible candidates were placed on the non-selected list due to factors such as their universities falling outside the top QS rankings, state quotas, and category-wise slots, while 270 applicants were rejected for not meeting the eligibility criteria. This marks the first time in at least three years that not all students on the selected list have received scholarships in the first round.
“My university of choice is within the top 100 in the world, yet I missed out on the scholarship. I have been trying to secure the funding on my own, without my family’s help, and now there’s no certainty whether I will be able to study abroad any time soon,” a postgraduate student, one of the 66 selected candidates who didn’t receive a provisional scholarship, told The PIE News.
The student, who did not wish to be named, is now exploring other study abroad scholarships for marginalised communities while awaiting the second round of the NOS, expected in September or October 2025 based on available funds.
“Even after being selected for the scholarship, I might not be able to study abroad if funds do not come through. This is what will affect many women and first-generation scholars,” said another student from Delhi, who holds an offer from a UK university, speaking with the Hindustan Times.
Over the years, the NOS has served as a key scholarship for students from marginalised communities with a parental income of less than ₹8 lakh (approximately GBP £6,870) per annum.
The scheme funds master’s and PhD programs abroad, offering up to ₹16,920 (around GBP £145) annually, for a maximum of three years (master’s) or four years (PhD).
While the scheme awards 125 scholarships annually, allocations to any single Indian state are capped at 10%.
But despite a significant rise in the scheme’s budget, with Rs. 130 crore (around GBP £12.10 million) allocated for FY 2025–26, up over 36% from Rs. 95 crore (around GBP £8.84 million) in 2024–25, government authorities are still awaiting approval for the disbursal of funds and have requested additional allocation from the Centre.
“We are seeking more allocation to administer the scheme. The allocation this year is higher than others. But what must be considered is that the scholarship is paid out through the period of education of the candidates,” a senior government official told The Hindu.
“So, a part of this year’s allocation must be used for this as well, that is for candidates selected in previous years and continuing their studies. As a result, the ministry is seeking more allocation and soon this will be worked out.”
On one hand, India is becoming the fourth-largest economy in the world, but on the other, it cannot fund 125 scholars from historically marginalised communities to study abroad
Raju Kendre, Eklavya India Foundation
Over the years, the ministry has faced criticism from students and advocacy groups over various issues with the NOS, ranging from delays in fund disbursals to administrative hurdles faced by students.
Many candidates had also raised concerns over delays in another ministry scheme, the National Fellowship for Scheduled Castes.
The fellowship’s initial selection list of 865 scholars was announced in March 2025, but a revised list released the following month cut the total to 805 and dropped 487 candidates who had previously been shortlisted.
Earlier this year, the parliamentary standing committee on social justice and empowerment flagged several issues with scholarship schemes run by the ministry.
The government has since announced plans to evaluate the NOS before the 2026–27 financial year to “assess its performance and determine whether it should be continued”.
Despite the number of NOS scholarship recipients rising from 51 in 2019–20 to 126 in 2023–24, according to data presented in India’s upper house, the Rajya Sabha, the “insufficient” budget has raised alarm among stakeholders, including Raju Kendre, founder of the Eklavya India Foundation, which supports marginalised students pursuing study abroad opportunities and research.
“Despite an 80% increase in the number of scholarship recipients from marginalised communities, the budget allocation remains inadequate. This reflects the government’s lack of willingness to support these students,” Kendre said.
“On one hand, India is becoming the fourth-largest economy in the world, but on the other, it cannot fund 125 scholars from historically marginalised communities to study abroad. Instead of expanding opportunities, the government seems to be cutting back, which is deeply concerning.”
Education
Crizac hits Indian stock market following IPO success
Nearly a week after Kolkata-headquartered Crizac raised Rs. 860 crore (£73.9 million) through its initial public offering (IPO), structured as an offer for sale (OFS) by promoters Pinky Agarwal and Manish Agarwal, the company’s shares debuted on domestic stock markets on Wednesday at nearly a 15% premium to the issue price of Rs. 245.
The IPO’s success – managed by Equirus Capital Private Limited and Anand Rathi Advisors Limited – along with its strong performance on the National Stock Exchange and Bombay Stock Exchange, is expected to fuel Crizac’s expansion into new destinations and services.
“The reason we went for a full OFS, or fully secondary, as we might say in the UK, is because the company’s balance sheet is very strong. We already have sufficient capital to support our expansion plans. Our focus remains on diversifying globally, which has been our strength over the past five years and will continue to be our strength in the future,” Christopher Nagle, CEO of Crizac, told The PIE News.
While an OFS means that the company, in this case, Crizac, did not raise new capital through the IPO – with proceeds instead going to existing shareholders, namely the Agarwals – its entry into the financial markets allows the company to publicly demonstrate “the scale, size, and operations of the company in a transparent way”, according to Nagle.
Crizac’s decision to go public comes as it looks to expand, beyond student recruitment, into areas such as student loans, housing, and other services.
The company is also eyeing new geographies and high-growth markets within India.
We also see great potential and can add great value in other destinations like Ireland, the USA, and Australia
Vikash Agarwal, Crizac
“We have a strong plan to expand across cities in India. Even though we are already one of the biggest recruiters for India-UK, we believe there’s still significant room for growth,” stated Vikash Agarwal, chairman and managing director, Crizac.
“We also see great potential and can add great value in other destinations like Ireland, the USA, and Australia,” he added.
Crizac, which reported a total income of Rs. 849.5 crore (£78m) in FY25, currently works with over 10,000 agents and some 173 international institutions.
Through its stock market listing, the company aims to strengthen confidence among its partners.
“The fact that we are listed doesn’t change how we interact with agents, but we believe it will lead to even greater trust from universities and agent partners alike, thanks to the level of diligence and corporate governance that is now required of us,” stated Nagle.
Crizac now commands a market capitalisation of Rs 5,379.84 crore (nearly £555m), and its solid financial track record and low debt levels have been key drivers behind the IPO, even as changing policies in major study destinations continue to influence the sector.
As destinations like Australia hike visa fees, the UK tightens compliance requirements for institutions and considers a levy on international student fees, the US tightens vetting and eyes visa time limits, and Canada raises financial thresholds amid falling study permit numbers, it remains to be seen how students from India, Nigeria, and China will navigate their study abroad choices in the coming years.
According to government data presented in the Indian Parliament, there was a nearly 15% decline in the number of Indian students going abroad, largely to the four major destinations, while countries like Germany, Russia, France, Ireland, and New Zealand saw increased interest.
However, despite the downturn, Crizac is confident that its move will inspire other Indian education companies to create value on the global stage.
“Being the first listed company in this space will unlock significant value for the industry. We believe many are already watching our listing closely, and there will be a lot of others going public from this sector now,” stated Agarwal.
Education
The Pros And Cons Of AI In The Workplace And In Education
The integration of artificial intelligence into our daily lives is no longer a futuristic concept but a present-day reality, fundamentally reshaping industries and institutions. From the bustling floors of global corporations to the hallowed halls of academia, AI is proving to be a transformative, yet complex, force. For business and tech leaders, understanding the dual nature of this technological revolution, its remarkable advantages and its inherent challenges, is paramount. There are pros and cons to AI in both the workplace and education: this article delves into its multifaceted impact in both settings, exploring the significant opportunities it presents alongside the critical concerns that demand our attention.
AI in the Workplace: A New Era of Productivity and Peril
The modern workplace is in the throes of an AI-driven evolution, promising unprecedented levels of efficiency and innovation. One of the most significant pros of artificial intelligence in a professional setting is its ability to automate repetitive and mundane tasks. This allows human employees to redirect their focus towards more strategic, creative, and complex problem-solving endeavors. For instance, in the realm of human resources, AI-powered tools can screen thousands of resumes in minutes, a task that would take a team of recruiters days to complete. Companies like Oracle are leveraging their AI-powered human resource solutions to streamline candidate sourcing and improve hiring decisions, freeing up HR professionals to concentrate on building relationships and fostering a positive work environment.
Beyond automation, AI is a powerful engine for enhanced decision-making. By analyzing vast datasets, machine learning algorithms can identify patterns and trends that are imperceptible to the human eye, providing data-driven insights that inform strategic business choices. In the financial sector, AI algorithms are instrumental in fraud detection, analyzing transaction patterns in real-time to flag anomalies and prevent fraudulent activities before they cause significant damage. Similarly, in manufacturing, companies like Siemens are utilizing AI-powered “Industrial Copilots” to monitor machinery, predict maintenance needs, and prevent costly downtime, thereby optimizing production lines and ensuring operational continuity.
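To make the anomaly-flagging idea concrete, here is a minimal sketch using scikit-learn's IsolationForest on synthetic transaction data. The features, amounts, and contamination rate are illustrative assumptions for the example, not a description of any bank's actual fraud system.

```python
# A toy illustration of anomaly-based transaction flagging, in the spirit of
# the real-time fraud screening described above. All data and thresholds are
# synthetic assumptions for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate "normal" transactions: modest amounts, daytime hours.
normal = np.column_stack([
    rng.normal(loc=60, scale=20, size=1000),   # amount
    rng.normal(loc=14, scale=3, size=1000),    # hour of day
])

# Simulate a handful of unusual transactions: large amounts at odd hours.
suspicious = np.array([[950.0, 3.0], [1200.0, 2.5], [800.0, 4.0]])
transactions = np.vstack([normal, suspicious])

# Fit an unsupervised anomaly detector and flag outliers (-1 = anomaly).
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)

flagged = transactions[labels == -1]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for review")
for amount, hour in flagged:
    print(f"  amount={amount:8.2f}  hour={hour:4.1f}")
```

In practice, production systems combine many more signals with human review; the point here is only that statistical outliers in transaction patterns can be surfaced automatically for further scrutiny.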
However, the widespread adoption of AI in the workplace is not without its cons. The most pressing concern for many is the specter of job displacement. As AI systems become more sophisticated, there is a legitimate fear that roles currently performed by humans, particularly those involving routine and predictable tasks, will become obsolete. While some argue that AI will create new jobs, there is a transitional period that could see significant disruption and require a massive effort in upskilling and reskilling the workforce.
Furthermore, the ethical implications of AI cannot be overstated. The potential for bias in AI algorithms is a significant challenge: if an AI system is trained on biased data, it will perpetuate and even amplify those biases in its decision-making processes. Additionally, the increasing use of AI raises serious privacy concerns. The vast amounts of data that AI systems collect and process, from employee performance metrics to customer behavior, create a treasure trove of sensitive information that must be protected from misuse and security breaches.
AI in Education: Personalizing Learning While Preserving the Human Touch
The educational landscape is also being profoundly reshaped by artificial intelligence, with the promise of creating more personalized, engaging, and accessible learning experiences. One of the most celebrated benefits of AI in education is its capacity to facilitate personalized learning at scale. AI-powered adaptive learning platforms can tailor educational content to the individual needs and learning pace of each student. For example, platforms like Carnegie Learning’s “Mika” software use AI to provide personalized tutoring in mathematics, offering real-time feedback and adapting the curriculum to address a student’s specific areas of difficulty. This individualized approach has the potential to revolutionize how we teach and learn, moving away from a one-size-fits-all model to a more student-centric methodology.
AI is also a valuable tool for automating the administrative burdens that often consume a significant portion of educators’ time. Grading multiple-choice tests, managing schedules, and tracking attendance are all tasks that can be efficiently handled by AI systems. This frees up teachers to focus on what they do best: inspiring, mentoring, and interacting directly with their students. Language-learning apps like Duolingo are a prime example of AI in action, using machine learning to personalize lessons and provide instant feedback, making language education more accessible and engaging for millions of users worldwide.
Despite these advancements, the integration of AI in education raises a number of critical concerns and cons. A primary worry is the potential for a diminished human connection in the learning process. While AI can provide personalized content, it cannot replicate the empathy, encouragement, and nuanced understanding that a human teacher provides. Over-reliance on technology could lead to a sense of isolation for students and hinder the development of crucial social and emotional skills.
Data privacy is another significant hurdle. Educational AI platforms collect vast amounts of student data, from academic performance to learning behaviors. Ensuring the security and ethical use of this sensitive information is paramount. There is a tangible risk of this data being misused or falling victim to cyberattacks, which could have serious consequences for students and educational institutions.
In conclusion, artificial intelligence brings both pros and cons to the workplace and to the field of education. The potential for increased productivity, data-driven insights, and personalized experiences is immense. However, we must proceed with a clear-eyed understanding of the challenges. Addressing concerns around job displacement, data privacy, and the importance of human interaction will be crucial in harnessing the full potential of AI for the betterment of our professional and educational futures. The path forward lies not in a blind embrace of technology, but in a thoughtful and ethical integration that prioritizes both progress and humanity.
Education
New York Passes the Responsible AI Safety and Education Act
The New York legislature recently passed the Responsible AI Safety and Education Act (SB6953B) (“RAISE Act”). The bill awaits signature by New York Governor Kathy Hochul.
Applicability and Relevant Definitions
The RAISE Act applies to “large developers,” which is defined as a person that has trained at least one frontier model and has spent over $100 million in compute costs in aggregate in training frontier models.
- “Frontier model” means either (1) an artificial intelligence (AI) model trained using greater than 10^26 computational operations (e.g., integer or floating-point operations), the compute cost of which exceeds $100 million; or (2) an AI model produced by applying knowledge distillation to a frontier model, provided that the compute cost for such model produced by applying knowledge distillation exceeds $5 million.
- “Knowledge distillation” is defined as any supervised learning technique that uses a larger AI model or the output of a larger AI model to train a smaller AI model with similar or equivalent capabilities as the larger AI model.
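Taken together, these definitions turn on a handful of numeric thresholds. The sketch below is a rough, non-authoritative illustration of how those thresholds combine into an applicability check; the field names and data structure are assumptions made for the example, and the statutory text, not this code, controls.

```python
# Illustrative encoding of the RAISE Act's numeric thresholds for "frontier
# model" and "large developer." Field names are hypothetical assumptions.
from dataclasses import dataclass

COMPUTE_OPS_THRESHOLD = 10**26           # training operations, prong (1)
FRONTIER_COMPUTE_COST = 100_000_000      # USD, prong (1)
DISTILLED_COMPUTE_COST = 5_000_000       # USD, prong (2)
LARGE_DEVELOPER_AGGREGATE = 100_000_000  # USD across all frontier models

@dataclass
class ModelTrainingRecord:
    training_ops: float            # total computational operations used in training
    compute_cost_usd: float        # cost of that training compute
    distilled_from_frontier: bool  # produced by knowledge distillation of a frontier model

def is_frontier_model(m: ModelTrainingRecord) -> bool:
    prong_one = m.training_ops > COMPUTE_OPS_THRESHOLD and m.compute_cost_usd > FRONTIER_COMPUTE_COST
    prong_two = m.distilled_from_frontier and m.compute_cost_usd > DISTILLED_COMPUTE_COST
    return prong_one or prong_two

def is_large_developer(models: list[ModelTrainingRecord]) -> bool:
    frontier = [m for m in models if is_frontier_model(m)]
    total_cost = sum(m.compute_cost_usd for m in frontier)
    return len(frontier) >= 1 and total_cost > LARGE_DEVELOPER_AGGREGATE

# Example: one model trained with 3e26 operations at a $150M compute cost.
portfolio = [ModelTrainingRecord(3e26, 150_000_000, False)]
print(is_large_developer(portfolio))  # True under these illustrative assumptions
```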
The RAISE Act imposes the following obligations and restrictions on large developers:
- Prohibition on Frontier Models that Create Unreasonable Risk of Critical Harm: The RAISE Act prohibits large developers from deploying a frontier model if doing so would create an unreasonable risk of “critical harm.”
- “Critical harm” is defined as the death or serious injury of 100 or more people, or at least $1 billion in damage to rights in money or property, caused or materially enabled by a large developer’s use, storage, or release of a frontier model through (1) the creation or use of a chemical, biological, radiological or nuclear weapon; or (2) an AI model engaging in conduct that (i) acts with no meaningful human intervention and (ii) would, if committed by a human, constitute a crime under the New York Penal Code that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
- Pre-Deployment Documentation and Disclosures: Before deploying a frontier model, large developers must:
- (1) implement a written safety and security protocol;
- (2) retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, for as long as the frontier model is deployed plus five years;
- (3) conspicuously publish a redacted copy of the safety and security protocol and provide a copy of such redacted protocol to the New York Attorney General (“AG”) and the Division of Homeland Security and Emergency Services (“DHS”) (as well as grant the AG access to the unredacted protocol upon request);
- (4) record and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure; and
- (5) implement appropriate safeguards to prevent unreasonable risk of critical harm posed by the frontier model.
- Safety and Security Protocol Annual Review: A large developer must conduct an annual review of its safety and security protocol to account for any changes to the capabilities of its frontier models and industry best practices, and make any necessary modifications to the protocol. For material modifications, the large developer must conspicuously publish a copy of such protocol with appropriate redactions (as described above).
- Reporting Safety Incidents: A large developer must disclose each safety incident affecting a frontier model to the AG and DHS within 72 hours of the large developer learning of the safety incident or facts sufficient to establish a reasonable belief that a safety incident occurred.
- “Safety incident” is defined as a known incidence of critical harm or one of the following incidents that provides demonstrable evidence of an increased risk of critical harm: (1) a frontier model autonomously engaging in behavior other than at the request of a user; (2) theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a frontier model; (3) the critical failure of any technical or administrative controls, including controls limiting the ability to modify a frontier model; or (4) unauthorized use of a frontier model. The disclosure must include (1) the date of the safety incident; (2) the reasons the incident qualifies as a safety incident; and (3) a short and plain statement describing the safety incident.
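As a purely illustrative sketch of the reporting mechanics summarised above, the snippet below models a disclosure record carrying the three required elements and the 72-hour window measured from when the developer learned of the incident. The structure and field names are assumptions for the example, not anything prescribed by the bill.

```python
# Minimal sketch of a safety-incident disclosure record and its 72-hour
# reporting deadline. Structure and field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

@dataclass
class SafetyIncidentDisclosure:
    incident_date: datetime   # (1) date of the safety incident
    qualifying_reasons: str   # (2) why the incident qualifies as a safety incident
    plain_statement: str      # (3) short and plain description of the incident
    learned_at: datetime      # when the developer learned of the incident

    def disclosure_deadline(self) -> datetime:
        """The 72-hour clock runs from when the developer learned of the incident."""
        return self.learned_at + REPORTING_WINDOW

incident = SafetyIncidentDisclosure(
    incident_date=datetime(2025, 8, 1, tzinfo=timezone.utc),
    qualifying_reasons="Unauthorized access to frontier model weights",
    plain_statement="Model weights were accessed by an unauthorized party on 1 Aug 2025.",
    learned_at=datetime(2025, 8, 2, 9, 0, tzinfo=timezone.utc),
)
print("Disclose to the AG and DHS by:", incident.disclosure_deadline().isoformat())
```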
If enacted, the RAISE Act would take effect 90 days after being signed into law.