Education

Google to invest $1 billion in AI education, escalating competition with Microsoft and OpenAI

A recent survey reveals a significant increase in student use of AI tools for academic purposes

Alphabet’s Google has pledged $1 billion over three years to deliver artificial intelligence (AI) training and tools to higher education institutions and nonprofits across the United States.

So far, over 100 universities have joined this initiative, including some of the nation’s largest public university systems, such as Texas A&M and the University of North Carolina. 

Participating institutions may benefit from cash funding and resources, including cloud computing credits aimed at supporting AI training for students and facilitating research on AI-related subjects.

This billion-dollar investment also encompasses the value of premium AI tools, such as an enhanced version of the Gemini chatbot, which Google plans to distribute to college students free of charge.

Rival efforts in AI integration

Google aspires to extend this program to all accredited nonprofit colleges in the U.S. and is contemplating similar initiatives in other countries, as noted by Senior Vice President James Manyika during an interview. He refrained from detailing the amount Google is allocating in direct funds to external institutions compared to covering its own cloud and subscription expenses.

This announcement arrives at a time when competitors like OpenAI, Anthropic, and Amazon are making analogous efforts to integrate AI into education, as the technology increasingly permeates society. In July, Microsoft committed $4 billion to enhance AI in education on a global scale.

By promoting their products to students, tech companies also position themselves to secure business contracts once these users transition into the workforce.

A growing body of research has highlighted concerns regarding AI’s impact on education, from facilitating cheating to diminishing critical thinking, prompting some schools to contemplate bans. Manyika indicated that Google has not encountered pushback from administrators since it commenced planning its education initiative earlier this year, but acknowledged that “many more questions” about AI-related concerns still linger.

“We’re hoping to learn together with these institutions about how best to use these tools,” he remarked, emphasizing that the insights gained could inform future product development decisions.


Rapid rise of AI in education

Recent developments in AI education reveal that the integration of AI tools in academia is accelerating rapidly. According to a 2025 student survey by the Higher Education Policy Institute (HEPI), student use of generative AI has surged, with 92 percent of students now using AI tools in some form, up from 66 percent in 2024. Many students use AI for explaining concepts, summarizing articles, and research suggestions, though 18 percent admitted to including AI-generated text directly in their coursework. The survey highlights a growing demand for institutional support in developing AI skills, though only 36 percent of students reported receiving such support. This indicates a critical gap between AI adoption and educational infrastructure that institutions need to address for effective and ethical AI integration in learning environments.

Moreover, OpenAI, one of Google’s competitors, is leading an initiative called the National Academy for AI Instruction, backed by major tech firms and teachers’ unions, aiming to train 400,000 K-12 educators across the U.S. by 2030. This highlights a collective effort across the education sector to prepare both teachers and students for a future where AI literacy is essential. Meanwhile, Microsoft’s Elevate program, with a $4 billion investment over five years, aims to train 20 million people globally in AI skills through a unified platform offering courses across various competency levels and partnering with educational and labor organizations.

These initiatives are supported by data indicating that the global EdTech market, driven by AI-powered tools, is projected to reach $404 billion by 2025. Countries leading in investments include the U.S., China, and the U.K., with AI applications spanning adaptive learning, administration, and student engagement.


US Education Department is all for using AI in classrooms: Key guidelines explained

Artificial intelligence (AI) has moved from being a futuristic concept to an active part of classrooms across the United States. From adaptive learning platforms to AI-powered lesson planning, schools are integrating technology to improve learning outcomes and ease teacher workloads. However, the challenge lies in adopting these tools without violating federal and state regulations.

Federal guidance: Innovation with safeguards

In July 2025, the US Department of Education issued guidance confirming that AI can be used in schools when aligned with federal laws. The framework focuses on three core principles: privacy, equity, and human oversight. AI tools must comply with the Family Educational Rights and Privacy Act (FERPA) to protect student data. Algorithms should be designed to prevent bias or discrimination under civil rights regulations. Human decision-making must remain central, ensuring that AI supports educators rather than replacing them. The Department also encouraged schools seeking federal grants to propose AI-driven projects, provided they meet these compliance standards.

State-level action: Rapid policy development

Since the federal guidance, more than half of US states have introduced their own AI frameworks for schools. Ohio now mandates that all districts adopt an AI-use policy by mid-2026, while Rhode Island has published detailed recommendations for responsible classroom integration. These local rules aim to enable innovation while safeguarding student interests. However, the pace of policy development and the diversity of approaches have created a complex regulatory environment for schools.

Mixed practices at the local level

Despite progress, many districts still operate in a gray area. Policies differ widely between schools, and families often face uncertainty about what is permissible. Some institutions allow AI on personal devices while banning it on school-owned systems. In certain cases, schools have reverted to traditional measures, such as requiring handwritten essays in class to prevent AI-assisted work. This variation highlights the need for consistent guidelines and clear communication with students and parents.

AI as a classroom resource

Educators are increasingly using AI as a tool for efficiency and creativity. AI platforms assist in lesson planning, assessment design, and content generation, enabling teachers to save significant time on administrative work. These efficiencies allow more focus on interactive teaching and student engagement. AI-powered tutoring systems are also being introduced to provide personalised support, particularly for students who need extra academic help. States such as New Hampshire are experimenting with AI-driven tools to enhance math and reading instruction.

Responsible AI use: Best practices for schools

To remain compliant and maximise benefits, schools should adopt structured approaches to AI integration:

  • Personalised Learning: Use adaptive platforms to tailor lessons while ensuring compliance with privacy regulations.
  • Teacher Support: Allow educators to use AI for planning and administrative tasks with mandatory human review.
  • Assessment Integrity: Shift from take-home essays to in-class writing or oral presentations to discourage misuse.
  • Career Guidance: Deploy AI-driven counselling tools while retaining human oversight for final decisions.

Managing risks and ensuring compliance

AI adoption brings challenges that schools must address proactively:

  • Bias Prevention: Regular audits are necessary to eliminate algorithmic bias.
  • Privacy Protection: All tools should meet FERPA standards and undergo security checks.
  • Avoiding Over-Reliance: AI should support, not replace, teacher judgment in academic and disciplinary matters.

Comprehensive district-level policies, continuous teacher training, and stakeholder engagement are essential for responsible use.

The road ahead

The Department of Education is collecting public feedback on AI-related policies and exploring ways to integrate AI into its own operations. States will continue rolling out new requirements in the coming months, making 2025 a critical year for AI in education. The future of AI in classrooms depends on a balanced approach: leveraging its potential to improve learning while upholding legal and ethical standards. Schools that integrate AI responsibly will not only enhance student outcomes but also prepare learners for a technology-driven world.


State Superintendent Thurmond Convenes Statewide AI in Education Workgroup for Public Schools  – Van Nuys News Press

SACRAMENTO—State Superintendent of Public Instruction Tony Thurmond hosted the first meeting today of the Public Schools: Artificial Intelligence (AI) Workgroup at the California Department of Education (CDE) Headquarters in Sacramento. Established after last year’s passage of Senate Bill 1288, a bill authored by Senator Josh Becker (13th District) and sponsored by Superintendent Thurmond, the workgroup makes California one of the first states in the nation with a legislatively mandated statewide effort focused on AI in K–12 education.

“There is an urgent need for clear direction on AI use in schools to ensure technology enhances, rather than replaces, the vital role of educators,” said Superintendent Thurmond. “Workgroup members are representatives from various organizations, including technology leaders. The majority are educators, and this workgroup also includes students. We want to ensure that those who will be affected by this guidance and policy have a voice in creating it.” 

The workgroup is a model of Superintendent Thurmond’s efforts to develop strong public–private partnerships that power innovation in public education. It will develop the statewide guidance and a model policy to ensure AI benefits students and educators while safeguarding privacy, data security, and academic integrity. The group includes teachers, students, administrators, classified staff, higher education leaders, and industry experts. At least half of the members are current classroom teachers, elevating educator expertise as the foundation for decision-making. 

The launch of the Public Schools: Artificial Intelligence Workgroup directly advances Superintendent Thurmond’s priorities, which include:

  • Transforming Education with Innovation: equipping schools with equitable, forward-looking approaches to technology; 
  • Equity and Access for All Students: ensuring AI tools do not exacerbate inequities but instead expand opportunities for every student; 
  • Whole Child Support: safeguarding against bias, misuse, and misinformation in AI systems while protecting student well-being; 
  • Elevating Educator Voice: centering teachers in decision-making about AI in classrooms; and 
  • Transparency and Public Engagement: committing to openness through public meetings and shared resources. 

Today was the initial meeting of the Public Schools: Artificial Intelligence Workgroup. The second meeting will take place in October, followed by a third meeting in February. 

The CDE has released initial guidance for schools and educators regarding the use of AI, which will be enhanced by the work of this group. The initial guidance can be found on the CDE Learning With AI, Learning About AI web page.


The Guardian view on GCSE resits: admitting the problem is just the first step | Editorial

For years, rigid rules and a shocking failure rate in compulsory GCSE retakes have been among the exam system’s dirty secrets. At last this dire situation is getting some of the attention it deserves. This year, nearly a quarter of all maths and English language entries in England, Wales and Northern Ireland were for students aged 17 or older on a repeat attempt, with just one in six of those retaking maths managing to pass.

By calling this a crisis, Jill Duffy, who heads the OCR exam board, has thrown a spotlight on the problem. But admitting that there is an issue with resits, as officials are now doing, is only the first step. There are differing views about what ought to happen next.

Reforming GCSEs is outside the scope of the review being led by Prof Becky Francis. But a proposal to ditch compulsory resits is on the table. The Sixth Form Colleges Association wants a second attempt to be followed – for those who fail – by a modular alternative. This would mean students not being forced to endlessly repeat the parts of the courses they have mastered, and focusing instead on the gaps.

Nick Gibb, the former Conservative schools minister, has predictably set his face against change and demanded that all schools follow the example of the best. But while big variations in results should be drilled into, and successes learned from, this is not an adequate response. Many subject experts believe that the qualifications are poorly designed if their purpose is to serve as a universal gateway to the world of work. Rather than sticking to vital competencies (such as numeracy, statistics and reading comprehension), the current versions include calculus and geometry (in maths) and quasi-literary analysis (in English language).

It is a great shame that these issues were not grasped more effectively by Labour in opposition. Changes to the curriculum and exam system are a painstaking process. Prof Francis’s review is the best chance of breaking a destructive cycle. But the Department for Education’s recent record of engagement with the further education sector – where most resits are taken – is not good. There is no secondary English specialist on the review, and teacher shortages and challenges around provision for special educational needs and disabilities remain concerning.

Resits must also be seen in the context of a wider debate around the future of post-16 education, including the pledge by ministers to abolish courses that they see as unwelcome competition to T-levels. As with resits, critics of this policy are most worried about less academically able pupils with lower test scores. Even the government’s own figures show a gap, with tens of thousands of students on the threatened courses, including some BTecs, potentially unsuited to newer alternatives.

With a skills white paper due in the autumn, it is not too late to tackle unanswered questions. A better balance between ambition and pragmatism can surely be found. Plenty of jobs in the UK do not require calculus or textual analysis. T-levels were meant to boost less academic, more practical teenagers. This year’s resit figures are a worrying addition to existing evidence that these are the pupils for whom the system works least well. Ministers must be absolutely confident that any changes they introduce make things better, and not worse.
