5 Fun Generative AI Projects for Absolute Beginners


Image by Author | Canva
# Introduction
This is the second article in my beginner project series. If you haven’t seen the first one on Python, it’s worth checking out: 5 Fun Python Projects for Absolute Beginners.
So, what is generative AI, or Gen AI? It is all about creating new content, like text, images, code, audio, or even video, using AI. Before the era of large language and vision models, things were quite different. But now, with the rise of foundation models like GPT, LLaMA, and LLaVA, everything has shifted. You can build creative tools and interactive apps without having to train models from scratch.
I’ve picked these 5 projects to cover a bit of everything: text, image, voice, vision, and some backend concepts like fine-tuning and RAG. You’ll get to try out both API-based solutions and local setups, and by the end, you’ll have touched all the building blocks used in most modern Gen AI apps. So, let’s get started.
# 1. Recipe Generator App (Text Generation)
Link: Build a Recipe Generator with React and AI: Code Meets Kitchen
We’ll start with something simple and fun that only needs text generation and an API key, so there’s no heavy setup. This app lets you input a few basic details like ingredients, meal type, cuisine preference, cooking time, and complexity. It then generates a full recipe using GPT. You’ll learn how to create the frontend form, send the data to GPT, and render the AI-generated recipe back to the user. Here is a more advanced version of the same idea: Create an AI Recipe Finder with GPT o1-preview in 1 Hour. That one adds more advanced prompt engineering, GPT-4, suggestions, ingredient substitutions, and a more dynamic frontend.
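If you want a feel for the core call before touching the frontend, here is a minimal Python sketch of the text-generation step. It assumes the openai SDK and an illustrative prompt format; the tutorial itself makes this call from a React app, so treat this as a rough stand-in rather than the tutorial's code.

```python
# Minimal sketch of the recipe-generation call, assuming the openai Python SDK.
# The prompt format and model name below are illustrative, not from the tutorial.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def generate_recipe(ingredients, meal_type, cuisine, cooking_time, complexity):
    # Assemble the user's form inputs into a single prompt (hypothetical format).
    prompt = (
        f"Create a {complexity} {cuisine} {meal_type} recipe that takes about "
        f"{cooking_time} minutes and uses: {', '.join(ingredients)}. "
        "Include a title, an ingredient list, and numbered steps."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(generate_recipe(["chicken", "rice", "spinach"], "dinner", "Mediterranean", 30, "easy"))
```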
# 2. Image Generator App (Stable Diffusion, Local Setup)
Link: Build a Python AI Image Generator in 15 Minutes (Free & Local)
Yes, you can generate cool images using tools like ChatGPT, DALL·E, or Midjourney by just typing a prompt. But what if you want to take it a step further and run everything locally, with no API costs or cloud restrictions? This project does exactly that. In this video, you’ll learn how to set up Stable Diffusion on your own computer. The creator keeps it super simple: you install Python, clone a lightweight web UI repo, download the model checkpoint, and run a local server. That’s it. After that, you can enter text prompts in your browser and generate AI images instantly, with no API calls and no internet connection once the model is downloaded.
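If you would rather script the generation than click through a web UI, here is a hedged sketch using Hugging Face’s diffusers library. The model ID and settings are just examples, not the exact setup from the video, but the idea is the same: the checkpoint runs entirely on your own GPU.

```python
# Sketch of local text-to-image generation with Hugging Face diffusers
# (the video uses a web UI instead; this is an alternative scripted route).
import torch
from diffusers import StableDiffusionPipeline

# Downloads the checkpoint on first run; fp16 needs a GPU with roughly 6 GB of VRAM.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # example model ID
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a cozy cabin in a snowy forest, digital art").images[0]
image.save("cabin.png")
```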
# 3. Medical Chatbot with Voice + Vision + Text
Link: Build an AI Voice Assistant App using Multimodal LLM Llava and Whisper
This project isn’t specifically built as a medical chatbot, but the use case fits well. You speak to it, it listens, it can look at an image (like an X-ray or a document), and it responds intelligently, combining all three modes: voice, vision, and text. It’s built using LLaVA (a multimodal vision-language model) and Whisper (OpenAI’s speech-to-text model) in a Gradio interface. The video walks through setting it up on Colab, installing libraries, quantizing LLaVA to run on your GPU, and stitching it all together with gTTS for audio replies.
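To see how the three modes fit together, here is a rough Python sketch of the pipeline. The model IDs, prompt template, and pipeline arguments are assumptions and will differ from the video’s quantized Colab setup, which also wraps everything in a Gradio app.

```python
# Sketch of the voice + vision + text loop: Whisper transcribes the question,
# a LLaVA pipeline answers about the image, and gTTS speaks the reply.
# Model IDs and the prompt template are assumptions, not the video's exact setup.
import whisper
from gtts import gTTS
from transformers import pipeline
from PIL import Image

stt = whisper.load_model("base")                                   # speech-to-text
vlm = pipeline("image-to-text", model="llava-hf/llava-1.5-7b-hf")  # vision-language model


def assistant(audio_path: str, image_path: str) -> str:
    question = stt.transcribe(audio_path)["text"]       # what the user asked, as text
    prompt = f"USER: <image>\n{question}\nASSISTANT:"   # LLaVA-style prompt (assumed format)
    answer = vlm(Image.open(image_path), prompt=prompt,
                 generate_kwargs={"max_new_tokens": 200})[0]["generated_text"]
    answer = answer.split("ASSISTANT:")[-1].strip()     # keep only the model's reply
    gTTS(answer).save("reply.mp3")                      # spoken reply
    return answer


print(assistant("question.wav", "xray.png"))
```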
# 4. Fine-Tuning Modern LLMs
Link: Fine tune Gemma 3, Qwen3, Llama 4, Phi 4 and Mistral Small with Unsloth and Transformers
So far, we’ve been using off-the-shelf models with prompt engineering. That works, but if you want more control, fine-tuning is the next step. This video from Trelis Research is one of the best out there. So instead of suggesting a project that simply swaps in a fine-tuned model, I want you to focus on the actual process of fine-tuning a model yourself. The video shows you how to fine-tune models like Gemma 3, Qwen3, Llama 4, Phi 4, and Mistral Small using Unsloth (a library for faster, memory-efficient training) and Transformers. It’s long (about 1.5 hours), but super worth it. You’ll learn when fine-tuning makes sense, how to prep datasets, run quick evals using vLLM, and debug real training issues.
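For a sense of what the workflow looks like in code, here is a condensed sketch in the spirit of Unsloth’s example notebooks. The model name, dataset, and hyperparameters are placeholders, and argument names shift between trl versions, so check the docs for your install before running it.

```python
# Condensed fine-tuning sketch in the spirit of Unsloth's example notebooks.
# Model name, dataset, and hyperparameters are placeholders; argument names
# (e.g. dataset_text_field) move between trl versions.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",   # example; pick any model Unsloth supports
    max_seq_length=2048,
    load_in_4bit=True,                          # 4-bit weights keep VRAM usage low
)
model = FastLanguageModel.get_peft_model(       # attach LoRA adapters; only these train
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Turn an instruction dataset into plain text the trainer can consume.
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:1000]")
dataset = dataset.map(lambda ex: {
    "text": f"### Instruction:\n{ex['instruction']}\n\n### Response:\n{ex['output']}"
})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```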
# 5. Build Local RAG from Scratch
Link: Local Retrieval Augmented Generation (RAG) from Scratch (step by step tutorial)
Everyone loves a good chatbot, but most fall apart when asked about stuff outside their training data. That’s where RAG is useful. You give your LLM a vector database of relevant documents, and it pulls context before answering. The video walks you through building a fully local RAG system using a Colab notebook or your own machine. You’ll load documents (like a textbook PDF), split them into chunks, generate embeddings with a sentence-transformer model, store them in SQLite-VSS, and connect it all to a local LLM (e.g. Llama 2 via Ollama). It’s the clearest RAG tutorial I’ve seen for beginners, and once you’ve done this, you’ll understand how ChatGPT plugins, AI search tools, and internal company chatbots really work.
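Here is a stripped-down sketch of the same idea. To keep it short it swaps SQLite-VSS for a simple in-memory NumPy index and calls a local Llama model through Ollama’s HTTP API; the file names, chunk sizes, and prompt are illustrative rather than the tutorial’s exact code.

```python
# Bare-bones local RAG sketch: chunk a document, embed with sentence-transformers,
# retrieve by cosine similarity (an in-memory stand-in for SQLite-VSS), and ask a
# local Llama model via Ollama's HTTP API. Names and sizes are illustrative.
import numpy as np
import requests
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")


def chunk(text: str, size: int = 500) -> list[str]:
    # Naive fixed-size character chunks; real pipelines split more carefully.
    return [text[i:i + size] for i in range(0, len(text), size)]


def build_index(text: str):
    chunks = chunk(text)
    vectors = embedder.encode(chunks, normalize_embeddings=True)
    return chunks, vectors


def retrieve(question: str, chunks, vectors, k: int = 3) -> list[str]:
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = vectors @ q                      # cosine similarity (vectors are normalized)
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]


def answer(question: str, chunks, vectors) -> str:
    context = "\n\n".join(retrieve(question, chunks, vectors))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": "llama2", "prompt": prompt, "stream": False})
    return r.json()["response"]


text = open("textbook.txt").read()            # your document, already extracted to text
chunks, vectors = build_index(text)
print(answer("What is overfitting?", chunks, vectors))
```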
# Wrapping Up
Each of these projects teaches you something essential:
Text → Image → Voice + Vision → Fine-tuning → Retrieval
If you’re just getting into Gen AI and want to actually build stuff, not just play with demos, this is your blueprint. Start from the one that excites you most. And remember, it’s okay to break things. That’s how you learn.
Kanwal Mehreen
Kanwal is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook “Maximizing Productivity with ChatGPT”. As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She’s also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.
NVIDIA Reveals Two Customers Accounted for 39% of Quarterly Revenue

NVIDIA disclosed on August 28, 2025, that two unnamed customers contributed 39% of its revenue in the July quarter, raising questions about the chipmaker’s dependence on a small group of clients.
The company posted record quarterly revenue of $46.7 billion, up 56% from a year ago, driven by insatiable demand for its data centre products.
In a filing with the U.S. Securities and Exchange Commission (SEC), NVIDIA said “Customer A” accounted for 23% of total revenue and “Customer B” for 16%. A year earlier, its top two customers made up 14% and 11% of revenue.
The concentration highlights the role of large buyers, many of whom are cloud service providers. “Large cloud service providers made up about 50% of the company’s data center revenue,” NVIDIA chief financial officer Colette Kress said on Wednesday. Data center sales represented 88% of NVIDIA’s overall revenue in the second quarter.
“We have experienced periods where we receive a significant amount of our revenue from a limited number of customers, and this trend may continue,” the company wrote in the filing.
One of the customers could be Saudi Arabia’s AI firm Humain, which is building two data centers in Riyadh and Dammam, slated to open in early 2026. The company has secured approval to import 18,000 NVIDIA AI chips.
The second customer could be OpenAI or one of the major cloud providers — Microsoft, AWS, Google Cloud, or Oracle. Another possibility is xAI.
Previously, Elon Musk said xAI has 230,000 GPUs, including 30,000 GB200s, operational for training its Grok model in a supercluster called Colossus 1. Inference is handled by external cloud providers.
Musk added that Colossus 2, which will host an additional 550,000 GB200 and GB300 GPUs, will begin going online in the coming weeks. “As Jensen Huang has stated, xAI is unmatched in speed. It’s not even close,” Musk wrote in a post on X.

Meanwhile, OpenAI is preparing for a major expansion. Chief Financial Officer Sarah Friar said the company plans to invest in trillion-dollar-scale data centers to meet surging demand for AI computation.
‘Reliance Intelligence’ is Here, In Partnership with Google and Meta

Reliance Industries chairman Mukesh Ambani has announced the launch of Reliance Intelligence, a new wholly owned subsidiary focused on artificial intelligence, marking what he described as the company’s “next transformation into a deep-tech enterprise.”
Addressing shareholders, Ambani said Reliance Intelligence had been conceived with four core missions—building gigawatt-scale AI-ready data centres powered by green energy, forging global partnerships to strengthen India’s AI ecosystem, delivering AI services for consumers and SMEs in critical sectors such as education, healthcare, and agriculture, and creating a home for world-class AI talent.
Work has already begun on gigawatt-scale AI data centres in Jamnagar, Ambani said, adding that they would be rolled out in phases in line with India’s growing needs.
These facilities, powered by Reliance’s new energy ecosystem, will be purpose-built for AI training and inference at a national scale.
Ambani also announced a “deeper, holistic partnership” with Google, aimed at accelerating AI adoption across Reliance businesses.
“We are marrying Reliance’s proven capability to build world-class assets and execute at India scale with Google’s leading cloud and AI technologies,” Ambani said.
Google CEO Sundar Pichai, in a recorded message, said the two companies would set up a new cloud region in Jamnagar dedicated to Reliance.
“It will bring world-class AI and compute from Google Cloud, powered by clean energy from Reliance and connected by Jio’s advanced network,” Pichai said.
He added that Google Cloud would remain Reliance’s largest public cloud partner, supporting mission-critical workloads and co-developing advanced AI initiatives.
Ambani further unveiled a new AI-focused joint venture with Meta.
He said the venture would combine Reliance’s domain expertise across industries with Meta’s open-source AI models and tools to deliver “sovereign, enterprise-ready AI for India.”
Meta founder and CEO Mark Zuckerberg, in his remarks, said the partnership aims to bring open-source AI to Indian businesses at scale.
“With Reliance’s reach and scale, we can bring this to every corner of India. This venture will become a model for how AI, and one day superintelligence, can be delivered,” Zuckerberg said.
Ambani also highlighted Reliance’s investments in AI-powered robotics, particularly humanoid robotics, which he said could transform manufacturing, supply chains and healthcare.
“Intelligent automation will create new industries, new jobs and new opportunities for India’s youth,” he told shareholders.
Calling AI an opportunity “as large, if not larger” than Reliance’s digital services push a decade ago, Ambani said Reliance Intelligence would work to deliver “AI everywhere and for every Indian.”
“We are building for the next decade with confidence and ambition,” he said, underscoring that the company’s partnerships, green infrastructure and India-first governance approach would be central to this strategy.
Cognizant, Workfabric AI to Train 1,000 Context Engineers

Cognizant has announced that it will deploy 1,000 context engineers over the next year to industrialise agentic AI across enterprises.
According to an official release, the company claimed that the move marks a “pivotal investment” in the emerging discipline of context engineering.
As part of this initiative, Cognizant said it is partnering with Workfabric AI, the company building the context engine for enterprise AI.
Cognizant’s context engineers will be powered by Workfabric AI’s ContextFabric platform, the statement said. The platform transforms the organisational DNA of enterprises (how their teams work, including their workflows, data, rules, and processes) into actionable context for AI agents.