
OpenAI to Train 4 Lakh Teachers in US to Build AI-Ready Classrooms

OpenAI is doubling down on its commitment to democratise AI education by launching large-scale initiatives in the United States. The company has partnered with the American Federation of Teachers (AFT) to launch the National Academy for AI Instruction, a five-year initiative aimed at training four lakh (400,000) K-12 teachers, nearly one in 10 across the country, to use and teach AI effectively in classrooms.

With a $10 million contribution over five years, including $8 million in funding and $2 million in engineering and computing support, OpenAI will help establish a flagship training hub in New York City and support the development of additional centres by 2030.

The initiative promises free workshops, hands-on training, and AI tools specifically built for educators, with a strong focus on equity and accessibility in underserved school districts.

“Educators make the difference, and they should lead this next shift with AI,” OpenAI CEO Sam Altman said, recalling how a high school teacher sparked his own early curiosity in AI.

The academy is also backed by the United Federation of Teachers, Microsoft and Anthropic, and aims to ensure that teachers are at the forefront of setting commonsense guardrails and using AI to enhance, rather than replace, human teaching.

Meanwhile, in a parallel development, OpenAI announced the launch of OpenAI Academy India in collaboration with the IndiaAI Mission under the Ministry of Electronics and Information Technology. This marks the first international expansion of OpenAI’s educational platform and aims to train one million teachers in generative AI skills.

The partnership will deliver AI training in English and Hindi (with more regional languages to follow), and extend to civil servants via the iGOT Karmayogi platform. Additional efforts include six-city workshops, hackathons across seven states, and $100,000 in API credits to 50 AI startups.

Union minister Ashwini Vaishnaw hailed the initiative as a step towards making AI knowledge accessible to every citizen. Jason Kwon, chief strategy officer at OpenAI, called India “one of the most dynamic countries for AI development”.




Hugging Face Launches Reachy Mini, Open-Source Robot for AI Enthusiasts and Educators 

Hugging Face, in collaboration with Pollen Robotics, launched Reachy Mini, a desktop-sized open-source robot designed for AI experimentation and education. It is now available for pre-order globally. 

Developed for human-robot interaction and creative coding, the robot is available in two versions: a Lite version at $299 and a wireless version at $449. Thomas Wolf, co-founder and chief science officer at Hugging Face, announced in a LinkedIn post that the first deliveries are expected to begin shortly after summer 2025 and continue through 2026.

Built for developers, educators and hobbyists, Reachy Mini enables users to program and deploy AI applications using Python. The robot includes multimodal sensors and offers integration with Hugging Face for real-time behaviour sharing and experimentation.
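
To give a sense of what programming the robot might look like, here is a minimal Python sketch of a scripted greeting behaviour. The `reachy_mini` package name and the `ReachyMini` class and methods below are illustrative assumptions rather than the confirmed SDK surface; check Pollen Robotics’ documentation for the actual interface.

```python
# Illustrative sketch of scripting a Reachy Mini behaviour in Python.
# The package, class, and method names are hypothetical stand-ins;
# Pollen Robotics' actual SDK may expose a different interface.
import time

from reachy_mini import ReachyMini  # hypothetical import


def greet(robot: ReachyMini) -> None:
    """Nod the head and wiggle the antennas as a greeting."""
    robot.head.look_at(x=0.0, y=0.0, z=0.2)  # hypothetical head-pose call
    robot.antennas.wiggle(duration=1.0)      # hypothetical antenna animation
    robot.speaker.play("hello.wav")          # hypothetical audio playback
    time.sleep(1.0)


if __name__ == "__main__":
    # Connect over USB for the Lite version or Wi-Fi for the wireless one.
    with ReachyMini() as robot:
        greet(robot)
```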

Two Versions for Flexible Use

The Reachy Mini Lite lacks onboard computing and Wi-Fi, whereas the wireless version is equipped with a Raspberry Pi 5, a battery, and four microphones. The Lite version is compatible with macOS and Linux; Windows support has not yet been released.

Both variants feature motorised head movement, body rotation, animated antennas, and a wide-angle camera. A speaker and audio-visual interaction capabilities come standard. The robot is currently in the early stages of development. “We’re sharing it as-is, without warranties or guarantees, to engage with early adopters and gather feedback,” the company mentioned in its announcement.

The Mini Lite has two microphones, while the wireless Mini has four. The robot measures 11 in (28 cm) in height and 6.3 in (16 cm) in width, and weighs 3.3 lb (1.5 kg).

Open-Source Development

Users can test and deploy behaviours in real life or through simulation. Over 15 preloaded behaviours will be available at launch through the Hugging Face hub. Future programming support will be expanded to include JavaScript and Scratch.

Reachy Mini’s open-source hardware and software allow for full transparency and community participation. With a modular kit-based assembly, it encourages hands-on learning, coding with children, and collaborative building.

Users can join the growing community of over 10 million on Hugging Face to upload, download and evolve new robot behaviours, positioning Reachy Mini as a flexible tool for AI exploration and learning.




Hugging Face’s Latest Small Language Model Adds Reasoning Capabilities

Hugging Face has released SmolLM3, a 3B parameter language model that offers long-context reasoning, multilingual capabilities, and dual-mode inference, making it one of the most competitive small-scale open models to date. The model is available under the Apache 2.0 license.

Trained on 11.2 trillion tokens, SmolLM3 outperforms other models in its class, including Llama-3.2-3B and Qwen2.5-3B, while rivalling larger 4B models such as Gemma3 and Qwen3. 

The model supports six languages: English, French, Spanish, German, Italian, and Portuguese. It can process context lengths of up to 128k tokens, enabled by the NoPE and YaRN techniques.

The release includes both a base model and an instruction-tuned model with dual reasoning modes. Users can toggle flags in the prompt to control whether the model generates answers with or without reasoning traces.
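
As an illustration, the sketch below toggles the two modes with the transformers library. It assumes the `HuggingFaceTB/SmolLM3-3B` checkpoint ID and the `/think` and `/no_think` system-prompt flags described in Hugging Face’s release notes; verify both against the model card before relying on them.

```python
# Minimal sketch: switching SmolLM3's reasoning modes via system-prompt flags.
# Assumes the HuggingFaceTB/SmolLM3-3B checkpoint and the /think and /no_think
# flags from the release notes; check the model card for the exact convention.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM3-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")


def generate(user_msg: str, reasoning: bool) -> str:
    # The flag in the system message switches reasoning traces on or off.
    messages = [
        {"role": "system", "content": "/think" if reasoning else "/no_think"},
        {"role": "user", "content": user_msg},
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=512)
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)


print(generate("What is 17 * 24?", reasoning=True))   # answer with a trace
print(generate("What is 17 * 24?", reasoning=False))  # direct answer
```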

Pretraining was conducted over three stages with evolving mixes of web, code, and math datasets. A mid-training phase extended the model’s context length and added general reasoning capabilities, followed by supervised fine-tuning and preference alignment using Anchored Preference Optimisation (APO).

SmolLM3 achieved strong results across 12 benchmarks, ranking high on knowledge and reasoning tasks and demonstrating strong multilingual and coding performance. Its instruct and reasoning modes yielded further gains on tasks like LiveCodeBench and AIME 2025.

The full training recipe, including data mixtures, ablations, synthetic data generation, and model alignment steps, has also been made public on its GitHub and Hugging Face pages. This open approach aims to help the research community replicate and build on SmolLM3’s performance.

A few months back, Hugging Face launched SmolLM2, an open-source small language model trained on 11 trillion tokens, including custom datasets for math, code, and instruction-following. It outperforms models like Qwen2.5-1.5B and Llama3.2-1B on several benchmarks, particularly MMLU-Pro, while achieving competitive results on others, like TriviaQA and Natural Questions.

It appears that Hugging Face is focusing on minor but consistent improvements for its small language models.




Cerebras Brings Reasoning Time Down from 60 to 0.6 Seconds

Cerebras, the AI infrastructure firm, announced on July 8 that it will deploy Alibaba’s flagship Qwen3 reasoning model, featuring 235 billion parameters, on Cerebras hardware. The model is claimed to run at 1,500 tokens per second.

“That means reasoning time goes from 60 seconds on GPUs to just 0.6 seconds,” the company said in the announcement. The 100x claim implies a GPU baseline of roughly 15 tokens per second: a 900-token reasoning trace would take 60 seconds at that rate but only 0.6 seconds at 1,500 tokens per second. Cerebras added that it is offering the model with a 131k-token context window for enterprise customers, enabling production-grade code generation.

The model will be available for anyone to try on Cerebras’ platform later in the week.

The company develops wafer-scale AI chips optimised for inference, the process of running pre-trained models to generate outputs. Its cloud service hosts a range of AI models powered by this hardware, allowing users and developers to generate over 1,000 tokens per second.
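
For context, Cerebras’ inference cloud exposes an OpenAI-compatible endpoint, so querying a hosted model takes only a few lines of Python. The sketch below assumes the `https://api.cerebras.ai/v1` base URL, and the model identifier is a hypothetical placeholder for the Qwen3 deployment; check Cerebras’ documentation for the real values.

```python
# Sketch of calling a model hosted on Cerebras' OpenAI-compatible inference API.
# The model identifier is a placeholder for the Qwen3 deployment, and the exact
# endpoint details should be verified against Cerebras' documentation.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",
    api_key=os.environ["CEREBRAS_API_KEY"],
)

response = client.chat.completions.create(
    model="qwen-3-235b-a22b",  # hypothetical ID; check Cerebras' model list
    messages=[{"role": "user", "content": "Summarise wafer-scale inference."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```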

In AI models, ‘reasoning’ involves using extra computation to analyse a user query step-by-step, aiming for an accurate and relevant answer. This process can be time-consuming, sometimes taking several minutes to complete. 

Custom hardware systems often surpass the inference performance of traditional NVIDIA GPUs, which are frequently used for training and deploying AI models. 

Along with Cerebras, companies like Groq and SambaNova have built hardware that offers superior performance for inference. 

In May, Cerebras announced that its hardware had outperformed NVIDIA’s DGX B200, a system of eight Blackwell GPUs, in output speed while deploying Meta’s Llama 4 Maverick model.

Cerebras achieved an output token speed of over 2,500 tokens per second, whereas NVIDIA demonstrated an output token speed of only 1,000 tokens per second. 

NVIDIA, however, outperformed systems from Groq, AMD, Google, and other vendors. “Only Cerebras stands – and we smoked Blackwell,” Cerebras said in a post on X, adding, “We’ve tested dozens of vendors, and Cerebras is the only inference solution that outperforms Blackwell for Meta’s flagship model.”


