Jobs & Careers

Bengaluru Developer Builds ‘Mission Office’ iPhone Game with Potholes and Cows Using ChatGPT

A Bengaluru-based software developer has launched an iPhone game that captures the daily chaos of commuting in the city with the help of AI. Titled Mission Office, the game was created by Harin Nitisvaar using ChatGPT to generate assets, with final edits done in Photoshop. 

Built using SwiftUI in Xcode, the game is now available for download on the iOS App Store. 

Drawing from his own rides to work on an Ather electric scooter, Nitisvaar, who currently works at Zepto and previously worked at Swiggy and Dunzo, designed the game to simulate the challenges faced by Bengaluru commuters. 

Players dodge potholes, barricades, and cows, aiming to reach their office in one piece. Familiar city landmarks add to the immersive, cartoon-style gameplay.

“Currently, the player image is AI-generated, and I can even create your avatar driving your favorite vehicle,” Nitisvaar said in reply to a post asking whether players could swap the vehicle for other EVs like Ultraviolette, Ola, or Chetak.

A version 2.0 is in the works, which will add traffic jams and urban floods to further simulate the Bengaluru commute.

The game’s clever integration of real-world elements and its humorous tone have struck a chord on social media. Users on X (formerly Twitter) have praised its relatability, with some suggesting new features like auto-rickshaws and regional language dialogues.

Nitisvaar’s approach stands out for its low-cost development process powered by AI tools, showing how generative models can help solo developers create visually rich games. 

While the game is available only on iOS for now, there’s no word yet on an Android release.






Hugging Face’s Latest Small Language Model Adds Reasoning Capabilities

Hugging Face has released SmolLM3, a 3B parameter language model that offers long-context reasoning, multilingual capabilities, and dual-mode inference, making it one of the most competitive small-scale open models to date. The model is available under the Apache 2.0 license.

Trained on 11.2 trillion tokens, SmolLM3 outperforms other models in its class, including Llama-3.2-3B and Qwen2.5-3B, while rivalling larger 4B models such as Gemma3 and Qwen3. 

The model supports six languages: English, French, Spanish, German, Italian, and Portuguese. It can process context lengths of up to 128k tokens, enabled by NoPE and YaRN techniques.

The release includes both a base model and an instruction-tuned model with dual reasoning modes. Users can toggle between different flags to control whether the model generates answers with or without reasoning traces.
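Per the model card, the toggle is a system-prompt flag (`/think` or `/no_think`). The helper below is a hypothetical convenience wrapper, not part of any official API; it only sketches how the flag would be placed before the messages are handed to a chat template:

```python
# Sketch: selecting SmolLM3's dual inference modes via a system-prompt
# flag. The /think and /no_think flag names follow the model card;
# build_messages itself is a hypothetical helper for illustration.

def build_messages(question: str, reasoning: bool = True) -> list[dict]:
    """Prefix the conversation with the mode flag the model expects."""
    flag = "/think" if reasoning else "/no_think"
    return [
        {"role": "system", "content": flag},
        {"role": "user", "content": question},
    ]

# With reasoning traces enabled:
msgs = build_messages("What is 17 * 24?", reasoning=True)

# These messages would then be passed to
# tokenizer.apply_chat_template(msgs, add_generation_prompt=True)
# and generated with any transformers-compatible runtime.
```

In reasoning mode the model emits its intermediate reasoning before the final answer; with `/no_think` it answers directly, trading accuracy on hard tasks for latency.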

Pretraining was conducted over three stages with evolving mixes of web, code, and math datasets. A mid-training phase extended the model’s context length and added general reasoning capabilities, followed by supervised fine-tuning and preference alignment using Anchored Preference Optimisation (APO).

SmolLM3 achieved strong results across 12 benchmarks, ranking high on knowledge and reasoning tasks and demonstrating strong multilingual and coding performance. Enabling the reasoning mode on the instruct model yielded further gains on tasks like LiveCodeBench and AIME 2025.

The full training recipe, including data mixtures, ablations, synthetic data generation, and model alignment steps, has also been made public on its GitHub and Hugging Face pages. This open approach aims to help the research community replicate and build on SmolLM3’s performance.

A few months back, Hugging Face launched SmolLM2, an open-source small language model trained on 11 trillion tokens, including custom datasets for math, code, and instruction-following. It outperforms models like Qwen2.5-1.5B and Llama3.2-1B on several benchmarks, particularly MMLU-Pro, while achieving competitive results on others, like TriviaQA and Natural Questions.

It appears that Hugging Face is focusing on minor but consistent improvements for its small language models.




Cerebras Brings Reasoning Time Down from 60 to 0.6 Seconds

Cerebras, the AI infrastructure firm, announced on July 8 that it will deploy Alibaba’s flagship Qwen3 reasoning model, featuring 235 billion parameters, on Cerebras hardware. The model is claimed to run at 1,500 tokens per second.

“That means reasoning time goes from 60 seconds on GPUs to just 0.6 seconds,” said the company in the announcement. Cerebras added that it is enabling the model with a 131k-token context window for enterprise customers, which allows production-grade code generation. 

The model will be available for anyone to try on Cerebras’ platform later this week. 

The company develops wafer-scale AI chips optimised for inference, the process of running pre-trained models to generate outputs. Its cloud services host a range of AI models powered by its hardware, allowing users and developers to generate over 1,000 tokens per second. 

In AI models, ‘reasoning’ involves using extra computation to analyse a user query step-by-step, aiming for an accurate and relevant answer. This process can be time-consuming, sometimes taking several minutes to complete. 
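The 60-second-to-0.6-second figure quoted above is simply the trace length divided by throughput. A quick sketch of the arithmetic (the 900-token trace length is an assumed number for illustration, not a figure from Cerebras):

```python
# Illustrative arithmetic behind the claimed 100x reasoning speed-up.
# The reasoning-trace length is an assumption chosen so the two quoted
# latencies (60 s on GPUs, 0.6 s on Cerebras) are both consistent.
trace_tokens = 900            # assumed length of a reasoning trace
cerebras_tps = 1_500          # tokens/second claimed for Qwen3 on Cerebras

gpu_tps = trace_tokens / 60   # throughput implied by a 60 s GPU run: 15 tok/s
cerebras_time = trace_tokens / cerebras_tps  # 900 / 1500 = 0.6 s

print(f"Implied GPU rate: {gpu_tps:.0f} tok/s")
print(f"Cerebras time:    {cerebras_time:.1f} s")
```

The speed-up ratio is just the throughput ratio (1,500 / 15 = 100x), so the same factor holds whatever the actual trace length is.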

Custom hardware systems often surpass the inference performance of traditional NVIDIA GPUs, which are frequently used for training and deploying AI models. 

Along with Cerebras, companies like Groq and SambaNova have built hardware that offers superior performance for inference. 

In May, Cerebras announced that its hardware has outperformed NVIDIA’s DGX B200, which consists of 8 Blackwell GPUs, in terms of output speed while deploying Meta’s Llama 4 Maverick model. 

Cerebras achieved an output token speed of over 2,500 tokens per second, whereas NVIDIA demonstrated an output token speed of only 1,000 tokens per second. 

However, NVIDIA outperformed systems from Groq, AMD, Google, and other vendors. “Only Cerebras stands – and we smoked Blackwell,” said Cerebras in a post on X. “We’ve tested dozens of vendors, and Cerebras is the only inference solution that outperforms Blackwell for Meta’s flagship model,” said the company. 




OpenAI to Train 4 Lakh Teachers in US to Build AI-Ready Classrooms

OpenAI is doubling down on its commitment to democratise AI education by launching large-scale initiatives in the United States. The company has partnered with the American Federation of Teachers (AFT) to launch the National Academy for AI Instruction, a five-year initiative aimed at training four lakh K-12 teachers, nearly one in 10 across the country, to use and teach AI in classrooms effectively.

With a $10 million contribution over five years, including $8 million in funding and $2 million in engineering and computing support, OpenAI will help establish a flagship training hub in New York City and support the development of additional centres by 2030.

The initiative promises free workshops, hands-on training, and AI tools specifically built for educators, with a strong focus on equity and accessibility in underserved school districts.

“Educators make the difference, and they should lead this next shift with AI,” OpenAI CEO Sam Altman said, recalling how a high school teacher sparked his own early curiosity in AI.

The academy is also backed by the United Federation of Teachers, Microsoft, and Anthropic, and aims to ensure that teachers are at the forefront of setting commonsense guardrails and using AI to enhance, rather than replace, human teaching.

Meanwhile, in a parallel development, OpenAI announced the launch of OpenAI Academy India in collaboration with the IndiaAI Mission under the IT and electronics ministry. This marks the first international expansion of OpenAI’s educational platform, aiming to train one million teachers in generative AI skills.

The partnership will deliver AI training in English and Hindi (with more regional languages to follow), and extend to civil servants via the iGOT Karmayogi platform. Additional efforts include six-city workshops, hackathons across seven states, and $100,000 in API credits to 50 AI startups.

Union minister Ashwini Vaishnaw hailed the initiative as a step towards making AI knowledge accessible to every citizen. Jason Kwon, chief strategy officer at OpenAI, called India “one of the most dynamic countries for AI development”.


