Education
US escalates fight over Harvard’s international student data
After months of attempts to obtain the records of Harvard’s international students, the US Department of Homeland Security (DHS) announced yesterday it would start sending subpoenas to the university, demanding it turn over the documents.
“We tried to do things the easy way with Harvard. Now, through their refusal to cooperate, we have to do things the hard way,” said Tricia McLaughlin, assistant secretary for public affairs, in a statement on July 9.
“Harvard, like other universities, has allowed foreign students to abuse their visa privileges and advocate for violence and terrorism on campus,” she claimed. “If Harvard won’t defend the interests of its students, then we will.”
Since mid-April, the Trump administration has launched multiple attacks on Harvard for allegedly failing to root out antisemitism on campus and failing to hand over international students’ records, among other accusations.
The administrative subpoenas, issued by ICE, command Harvard to turn over extensive records on its 7,000 international students since January 2020.
DHS did not publicly announce a deadline or specify which documents it requires, though past requests have included video and audio footage of international students involved in pro-Palestinian protests, as well as internal emails and administrative memos. The department did not immediately respond to The PIE News’s request for comment.
Harvard University spokesperson Jason Newton called the move “unfounded retribution” by the federal government but indicated the university would comply with DHS’s demands.
“Harvard is committed to following the law, and while the government’s subpoenas are unwarranted, the university will continue to cooperate with lawful requests and obligations,” said Newton.
Administrative subpoenas can be issued by certain state and federal agencies without a judge’s approval. But if Harvard refuses to comply, ICE would need to seek a judicial order to enforce the demands.
Harvard continues to defend itself… against harmful government overreach aimed at dictating whom private universities can admit and hire, and what they can teach
Jason Newton, Harvard University
In what has become a months-long standoff, Newton maintained that Harvard would continue to defend itself against “harmful government overreach aimed at dictating whom private universities can admit and hire, and what they can teach”.
After the university’s public rebuttal of a long list of government demands on April 16, secretary of homeland security Kristi Noem moved to strip Harvard of its ability to enrol international students on May 22, which was blocked by a judge soon after.
Harvard did submit some international student records to the government on April 30, maintaining it had provided the “information required by law”, though this was subsequently deemed “insufficient” by Secretary Noem.
In a separate attack, President Trump signed a proclamation attempting to suspend the visas for international students coming to America’s oldest institution, which was also halted by the courts.
The administration’s latest salvo is intended to send a message to campuses across the US.
It warns: “Other universities and academic institutions that are asked to submit similar information should take note of Harvard’s actions, and the repercussions, when considering whether or not to comply with similar requests”.
The row with Harvard has been one of the focal points of Trump’s sweeping attacks on higher education, which have seen investigations launched into dozens of universities, a near month-long pause on new student visa interviews and enhanced social media vetting of international students.
The Wednesday subpoena is the second issued to Harvard in less than two weeks. On June 26, the House Judiciary Committee subpoenaed the university for its financial aid records amid allegations of tuition-fixing at the institution.
Secretary Noem previously called the university’s international student certification “a giant cash cow for Harvard”.
Writing in an op-ed in the Washington Post, Noem claimed the institution had “fostered antisemitic extremism” and used taxpayer money to “collaborate with an American adversary”.
Education
Crizac hits Indian stock market following IPO success
Nearly a week after Kolkata-headquartered Crizac raised Rs. 860 crore (£73.9 million) through its initial public offering (IPO), structured as an offer for sale (OFS) by promoters Pinky Agarwal and Manish Agarwal, the company’s shares surged on domestic stock markets on Wednesday, trading at nearly a 15% premium above the issue price of Rs. 245.
The IPO’s success – managed by Equirus Capital Private Limited and Anand Rathi Advisors Limited – along with its strong performance on the National Stock Exchange and Bombay Stock Exchange, is expected to fuel Crizac’s expansion into new destinations and services.
“The reason we went for a full OFS, or fully secondary, as we might say in the UK, is because the company’s balance sheet is very strong. We already have sufficient capital to support our expansion plans. Our focus remains on diversifying globally, which has been our strength over the past five years and will continue to be our strength in the future,” Christopher Nagle, CEO of Crizac, told The PIE News.
While an OFS means that the company, in this case, Crizac, did not raise new capital through the IPO – with proceeds instead going to existing shareholders, namely the Agarwals – its entry into the financial markets allows the company to publicly demonstrate “the scale, size, and operations of the company in a transparent way”, according to Nagle.
Crizac’s decision to go public comes as it looks to expand, beyond student recruitment, into areas such as student loans, housing, and other services.
The company is also eyeing new geographies and high-growth markets within India.
We also see great potential and can add great value in other destinations like Ireland, the USA, and Australia
Vikash Agarwal, Crizac
“We have a strong plan to expand across cities in India. Even though we are already one of the biggest recruiters for India-UK, we believe there’s still significant room for growth,” stated Vikash Agarwal, chairman and managing director, Crizac.
“We also see great potential and can add great value in other destinations like Ireland, the USA, and Australia,” he added.
Crizac, which reported a total income of Rs. 849.5 crore (£78m) in FY25, currently works with over 10,000 agents and some 173 international institutions.
Through its stock market listing, the company aims to strengthen confidence among its partners.
“The fact that we are listed doesn’t change how we interact with agents, but we believe it will lead to even greater trust from universities and agent partners alike, thanks to the level of diligence and corporate governance that is now required of us,” stated Nagle.
With a market capitalisation of Rs 5,379.84 crore (nearly £555m), Crizac’s solid financial track record and low debt levels have been key drivers behind its IPO, even as changing policies in major study destinations continue to influence the sector.
As destinations like Australia hike visa fees, the UK tightens compliance requirements for institutions and considers imposing levies on international student fees, the US steps up vetting and eyes visa time limits, and Canada raises financial thresholds amid falling study permits, it remains to be seen how students from India, Nigeria, and China will navigate their study abroad choices in the coming years.
According to government data presented in the Indian Parliament, there was a nearly 15% decline in Indian students going abroad, largely to the four major destinations, while countries like Germany, Russia, France, Ireland, and New Zealand saw increased interest.
However, despite the downturn, Crizac is confident that its move will inspire other Indian education companies to create value on the global stage.
“Being the first listed company in this space will unlock significant value for the industry. We believe many are already watching our listing closely, and there will be a lot others going public from this sector now,” stated Agarwal.
Education
The Pros And Cons Of AI In The Workplace And In Education
The integration of artificial intelligence into our daily lives is no longer a futuristic concept but a present-day reality, fundamentally reshaping industries and institutions. From the bustling floors of global corporations to the hallowed halls of academia, AI is proving to be a transformative, yet complex, force. For business and tech leaders, understanding the dual nature of this technological revolution—its remarkable advantages and its inherent challenges—is paramount. This article delves into the multifaceted impact of AI in the workplace and in education, exploring the significant opportunities it presents alongside the critical concerns that demand our attention.
AI in the Workplace: A New Era of Productivity and Peril
The modern workplace is in the throes of an AI-driven evolution, promising unprecedented levels of efficiency and innovation. One of the most significant pros of artificial intelligence in a professional setting is its ability to automate repetitive and mundane tasks. This allows human employees to redirect their focus towards more strategic, creative, and complex problem-solving endeavors. For instance, in the realm of human resources, AI-powered tools can screen thousands of resumes in minutes, a task that would take a team of recruiters days to complete. Companies like Oracle are leveraging their AI-powered human resource solutions to streamline candidate sourcing and improve hiring decisions, freeing up HR professionals to concentrate on building relationships and fostering a positive work environment.
Beyond automation, AI is a powerful engine for enhanced decision-making. By analyzing vast datasets, machine learning algorithms can identify patterns and trends that are imperceptible to the human eye, providing data-driven insights that inform strategic business choices. In the financial sector, AI algorithms are instrumental in fraud detection, analyzing transaction patterns in real-time to flag anomalies and prevent fraudulent activities before they cause significant damage. Similarly, in manufacturing, companies like Siemens are utilizing AI-powered “Industrial Copilots” to monitor machinery, predict maintenance needs, and prevent costly downtime, thereby optimizing production lines and ensuring operational continuity.
However, the widespread adoption of AI in the workplace is not without its cons. The most pressing concern for many is the specter of job displacement. As AI systems become more sophisticated, there is a legitimate fear that roles currently performed by humans, particularly those involving routine and predictable tasks, will become obsolete. While some argue that AI will create new jobs, there is a transitional period that could see significant disruption and require a massive effort in upskilling and reskilling the workforce.
Furthermore, the ethical implications of AI cannot be overstated. The potential for bias in AI algorithms is a significant challenge. If an AI system is trained on biased data, it will perpetuate and even amplify those biases in its decision-making processes. Additionally, the increasing use of AI raises serious privacy concerns. The vast amounts of data that AI systems collect and process, from employee performance metrics to customer behavior, create a treasure trove of sensitive information that must be protected from misuse and security breaches.
AI in Education: Personalizing Learning While Preserving the Human Touch
The educational landscape is also being profoundly reshaped by artificial intelligence, with the promise of creating more personalized, engaging, and accessible learning experiences. One of the most celebrated benefits of AI in education is its capacity to facilitate personalized learning at scale. AI-powered adaptive learning platforms can tailor educational content to the individual needs and learning pace of each student. For example, platforms like Carnegie Learning’s “Mika” software use AI to provide personalized tutoring in mathematics, offering real-time feedback and adapting the curriculum to address a student’s specific areas of difficulty. This individualized approach has the potential to revolutionize how we teach and learn, moving away from a one-size-fits-all model to a more student-centric methodology.
AI is also a valuable tool for automating the administrative burdens that often consume a significant portion of educators’ time. Grading multiple-choice tests, managing schedules, and tracking attendance are all tasks that can be efficiently handled by AI systems. This frees up teachers to focus on what they do best: inspiring, mentoring, and interacting directly with their students. Language-learning apps like Duolingo are a prime example of AI in action, using machine learning to personalize lessons and provide instant feedback, making language education more accessible and engaging for millions of users worldwide.
Despite these advancements, the integration of AI in education raises a number of critical concerns. A primary worry is the potential for a diminished human connection in the learning process. While AI can provide personalized content, it cannot replicate the empathy, encouragement, and nuanced understanding that a human teacher provides. Over-reliance on technology could lead to a sense of isolation for students and hinder the development of crucial social and emotional skills.
Data privacy is another significant hurdle. Educational AI platforms collect vast amounts of student data, from academic performance to learning behaviors. Ensuring the security and ethical use of this sensitive information is paramount. There is a tangible risk of this data being misused or falling victim to cyberattacks, which could have serious consequences for students and educational institutions.
In conclusion, artificial intelligence brings both pros and cons to the workplace and the field of education. The potential for increased productivity, data-driven insights, and personalized experiences is immense. However, we must proceed with a clear-eyed understanding of the challenges. Addressing concerns around job displacement, data privacy, and the importance of human interaction will be crucial in harnessing the full potential of AI for the betterment of our professional and educational futures. The path forward lies not in a blind embrace of technology, but in a thoughtful and ethical integration that prioritizes both progress and humanity.
Education
New York Passes the Responsible AI Safety and Education Act
The New York legislature recently passed the Responsible AI Safety and Education Act (SB6953B) (“RAISE Act”). The bill awaits signature by New York Governor Kathy Hochul.
Applicability and Relevant Definitions
The RAISE Act applies to “large developers,” which is defined as a person that has trained at least one frontier model and has spent over $100 million in compute costs in aggregate in training frontier models.
- “Frontier model” means either (1) an artificial intelligence (AI) model trained using greater than 10^26 computational operations (e.g., integer or floating-point operations), the compute cost of which exceeds $100 million; or (2) an AI model produced by applying knowledge distillation to a frontier model, provided that the compute cost for such model produced by applying knowledge distillation exceeds $5 million.
- “Knowledge distillation” is defined as any supervised learning technique that uses a larger AI model or the output of a larger AI model to train a smaller AI model with similar or equivalent capabilities as the larger AI model.
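Read literally, the two-pronged “frontier model” definition reduces to a simple threshold check. The sketch below is purely illustrative — the function name, parameters, and structure are assumptions for exposition, not statutory language:

```python
def is_frontier_model(compute_ops: float,
                      compute_cost_usd: float,
                      distilled_from_frontier: bool = False) -> bool:
    """Return True if a model meets either prong of the RAISE Act definition."""
    # Prong 1: trained with more than 10^26 computational operations,
    # at a compute cost exceeding $100 million.
    if compute_ops > 1e26 and compute_cost_usd > 100_000_000:
        return True
    # Prong 2: produced by knowledge distillation from a frontier model,
    # at a compute cost exceeding $5 million.
    if distilled_from_frontier and compute_cost_usd > 5_000_000:
        return True
    return False

# A large training run meeting prong 1:
print(is_frontier_model(2e26, 150_000_000))       # True
# A distilled model meeting prong 2:
print(is_frontier_model(1e24, 6_000_000, True))   # True
# A model meeting neither prong:
print(is_frontier_model(1e24, 2_000_000))         # False
```

Note that the $100 million threshold in the statutory definition also reappears in the “large developer” test above, which looks at aggregate spend across all frontier-model training.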
The RAISE Act imposes the following obligations and restrictions on large developers:
- Prohibition on Frontier Models that Create Unreasonable Risk of Critical Harm: The RAISE Act prohibits large developers from deploying a frontier model if doing so would create an unreasonable risk of “critical harm.”
- “Critical harm” is defined as the death or serious injury of 100 or more people, or at least $1 billion in damage to rights in money or property, caused or materially enabled by a large developer’s use, storage, or release of a frontier model through (1) the creation or use of a chemical, biological, radiological or nuclear weapon; or (2) an AI model engaging in conduct that (i) acts with no meaningful human intervention and (ii) would, if committed by a human, constitute a crime under the New York Penal Code that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
- Pre-Deployment Documentation and Disclosures: Before deploying a frontier model, large developers must:
- (1) implement a written safety and security protocol;
- (2) retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, for as long as the frontier model is deployed plus five years;
- (3) conspicuously publish a redacted copy of the safety and security protocol and provide a copy of such redacted protocol to the New York Attorney General (“AG”) and the Division of Homeland Security and Emergency Services (“DHS”) (as well as grant the AG access to the unredacted protocol upon request);
- (4) record and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure; and
- (5) implement appropriate safeguards to prevent unreasonable risk of critical harm posed by the frontier model.
- Safety and Security Protocol Annual Review: A large developer must conduct an annual review of its safety and security protocol to account for any changes to the capabilities of its frontier models and industry best practices, and make any necessary modifications to the protocol. For material modifications, the large developer must conspicuously publish a copy of such protocol with appropriate redactions (as described above).
- Reporting Safety Incidents: A large developer must disclose each safety incident affecting a frontier model to the AG and DHS within 72 hours of the large developer learning of the safety incident or facts sufficient to establish a reasonable belief that a safety incident occurred.
- “Safety incident” is defined as a known incidence of critical harm or one of the following incidents that provides demonstrable evidence of an increased risk of critical harm: (1) a frontier model autonomously engaging in behavior other than at the request of a user; (2) theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a frontier model; (3) the critical failure of any technical or administrative controls, including controls limiting the ability to modify a frontier model; or (4) unauthorized use of a frontier model. The disclosure must include (1) the date of the safety incident; (2) the reasons the incident qualifies as a safety incident; and (3) a short and plain statement describing the safety incident.
If enacted, the RAISE Act would take effect 90 days after being signed into law.