Jobs & Careers
Beginner’s Guide to Gemini CLI: Install, Setup, and Use It Like a Pro


Image by Author | Canva
# Introduction
Gemini CLI is Google’s open-source AI assistant that runs in your terminal. It brings the Gemini language model (Gemini 2.5 Pro) directly to your shell so you can ask questions, generate code, fix bugs, or create documentation without leaving the command line. “Gemini” itself is the LLM; “Gemini CLI” is the user-facing tool that makes the model interactive in your workflows. In short, it’s like ChatGPT for developers. Google released Gemini CLI in June 2025, and it’s free for individuals: you just need to log in with your personal Google account, which gives you access to Gemini 2.5 Pro with a 1 million-token context window at no cost (up to 60 requests per minute and 1,000 requests per day). It’s a great free, open-source alternative to AI coding assistants like Anthropic’s Claude Code.
Let me help you with the setup and walk you through some examples to highlight its importance.
# Setting Up Gemini CLI on Your System
To install Gemini CLI, you need a command-line environment (Terminal on macOS/Linux, PowerShell or similar on Windows) and either Homebrew or Node.js. On macOS, the easiest method is via Homebrew:
- Install Gemini CLI via Homebrew: execute the following command in your terminal (the Homebrew formula is named gemini-cli):
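brew install gemini-cli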
- Alternatively, install via Node.js (any OS): if you prefer not to use Homebrew (or don’t have it), install Node.js (version 20 or higher), then run:
npm install -g @google/gemini-cli
or
npx https://github.com/google-gemini/gemini-cli
This installs the CLI globally on macOS, Linux, or Windows. Node.js v20+ is required; you can download it from nodejs.org or use nvm to manage versions.
Once installed, you can launch Gemini CLI by running the following command:
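gemini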
This should start the CLI (if you see the “Gemini CLI” ASCII banner, you’re set). If gemini is not found, you may need to open a new terminal or add npm’s global bin to your PATH. You will see something like this:


Screenshot of Gemini CLI Launch
On first run, Gemini CLI will prompt you to pick a color theme (light or dark) and then log in with your Google account. Follow the instructions in the browser (or CLI) to authorize. If you prefer using an API key instead of logging in, you can set GEMINI_API_KEY="YOUR_KEY" in your environment (see Google AI Studio to generate a key). Once authenticated, the CLI confirms it’s ready to use.
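For example, on macOS or Linux you can export the key in your shell before launching the CLI:
export GEMINI_API_KEY="YOUR_KEY"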
# Running Your First Gemini CLI Commands
With Gemini CLI set up, you can start using natural language commands right away. It opens a prompt (marked >) where you type questions or tasks. For example, let’s try a simple prompt and ask: “Write a short paragraph about why Gemini CLI is awesome.” Here’s the output:


Screenshot of Gemini CLI: Simple Paragraph Writing
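You can also skip the interactive prompt for one-off questions. Assuming your installed version supports the -p/--prompt flag for non-interactive mode, the same request looks like this:
gemini -p "Write a short paragraph about why Gemini CLI is awesome."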
// Task 1: Fixing bugs with Gemini CLI
Gemini CLI can integrate with tools like GitHub or your local Git to find issues. For instance, let’s use the built-in @search tool to fetch a GitHub issue URL, then ask for a fix plan:
Prompt (Source):
Here’s a GitHub issue: [@search https://github.com/google-gemini/gemini-cli/issues/4715]. Analyze the code and suggest a 3-step fix plan.
The CLI identified the root cause and suggested how to modify the code. The screenshot below shows it reporting a 3-step plan. You can review its plan, then confirm to let Gemini CLI automatically apply the changes to your files.


Screenshot of Gemini CLI: Fixing bugs
// Task 2a: Working with a Project (Simple Example)
I’ve created a project folder by cloning the gitdiagram repo. If you want to know more about this repo, head over to my article: Make Sense of a 10K+ Line GitHub Repo Without Reading the Code. Let’s navigate to the project folder (named gitdiagram by default after cloning):
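cd gitdiagram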
Now run gemini. You can start asking questions about the code. Let’s try the following prompt:
Prompt:
Explain the main components of this codebase
Gemini CLI will scan the files and use the Gemini model to summarize or answer, as shown in the screenshot below:


Screenshot of Gemini CLI: Working with a Project (Simple Example)
It parsed the folders and returned a structured summary (listing directories like src/, public/, etc.). This is handy for onboarding onto new projects or generating documentation.
// Task 2b: Working with a Project (Advanced Example)
Exploring the codebase is easy. Now let’s give it a more technical task to see how the output turns out. We’ll provide the following prompt to Gemini:
Prompt:
Analyze this repo for common performance anti-patterns.


Screenshot of Gemini CLI: Working with a Project (Advanced Example)
The response was so detailed that I’ve only included the starting portion in the screenshot. Gemini CLI created a detailed plan and then read every file independently. In the end, it shared a summary of potential performance anti-patterns along with recommended next steps:
Summary of potential performance anti-patterns based on initial analysis:
1. Large Frontend Bundle Size:
* Mermaid Library: The mermaid library is likely a significant contributor to the bundle size. If it's not lazy-loaded, it will impact initial page load performance.
* `react-icons`: Depending on how it's imported, it can also contribute to bundle size if not tree-shaken effectively.
2. Unoptimized Image Usage: While Next.js has next/image, without inspecting the actual image usage in components, it's hard to say if images are properly optimized (e.g., correct sizes, formats, lazy loading).
3. Potential Backend Performance Issues (Python & Node.js):
* N+1 Queries: This is a common database anti-pattern that can significantly slow down data retrieval.
* Lack of Caching: If frequently accessed data is not cached at the application or database level, it can lead to redundant computations and database hits.
* Synchronous Operations: Blocking I/O in either backend could lead to performance bottlenecks under heavy load.
4. `reactStrictMode: false`: While not a direct anti-pattern, it can hide potential performance issues related to React's rendering behavior during development.
5. Development-like Docker Volume Mount: Mounting the entire backend directory in the Docker container is less optimal for production builds compared to copying only necessary files.
To confirm these, further investigation would be needed, including:
* Bundle Analysis: Using tools like @next/bundle-analyzer to identify large modules in the frontend.
* Performance Profiling: Running the application and using browser developer tools (for frontend) and backend profiling tools to identify bottlenecks.
* Code Review: Deep diving into the src/ and backend/ code to identify specific instances of the anti-patterns mentioned.
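To act on the bundle-size finding, for example, you could run the bundle analysis it suggests. A minimal sketch, assuming you also wrap next.config.js with the analyzer as described in the @next/bundle-analyzer README:
npm install --save-dev @next/bundle-analyzer
ANALYZE=true npm run build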
These examples show how Gemini CLI turns simple prompts into real actions. You can query code, generate or refactor it, fix bugs, and improve performance, all from your terminal.
# Wrapping Up
Gemini CLI is a powerful new tool for developers. Once you have it installed on macOS (or any other OS), you can interact with Google’s Gemini LLM as easily as any local command. Some of the key features that set it apart are:
- ReAct Agent Loop: Internally, it runs a ReAct (reason-and-act) agent loop over your local environment. This means it can decide when to call a tool (search, run a shell command, edit a file) versus when to answer directly. For example, it fetched a URL with @search when needed.
- Built-in Tools: It ships with built-in “tools” such as grep, echo, and file read/write, and you can invoke web search or file-system queries directly from prompts (see the example after this list).
- Multimodal Capabilities: Gemini CLI can even work with images and PDFs (since Gemini is multimodal). It also supports integration with external Model Context Protocol (MCP) servers; for example, you could hook up an image generator (Imagen) or a custom API. This lets you do things like “generate code from this sketch” or “summarize a PDF.”
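For instance, inside an interactive session you can pull a file into the model’s context with the @ syntax, or list the tools available to the agent with a slash command. A minimal sketch (exact commands may vary slightly between versions):
> @README.md Summarize this file in two sentences.
> /tools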
Try it out: After following the setup above, open a terminal in a project folder, type gemini, and start experimenting. You’ll quickly see how an AI companion in your shell can dramatically boost your productivity!
Kanwal Mehreen is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook “Maximizing Productivity with ChatGPT”. As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She’s also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.
Jobs & Careers
NVIDIA Reveals Two Customers Accounted for 39% of Quarterly Revenue

NVIDIA disclosed on August 28, 2025, that two unnamed customers contributed 39% of its revenue in the July quarter, raising questions about the chipmaker’s dependence on a small group of clients.
The company posted record quarterly revenue of $46.7 billion, up 56% from a year ago, driven by insatiable demand for its data centre products.
In a filing with the U.S. Securities and Exchange Commission (SEC), NVIDIA said “Customer A” accounted for 23% of total revenue and “Customer B” for 16%. A year earlier, its top two customers made up 14% and 11% of revenue.
The concentration highlights the role of large buyers, many of whom are cloud service providers. “Large cloud service providers made up about 50% of the company’s data center revenue,” NVIDIA chief financial officer Colette Kress said on Wednesday. Data center sales represented 88% of NVIDIA’s overall revenue in the second quarter.
“We have experienced periods where we receive a significant amount of our revenue from a limited number of customers, and this trend may continue,” the company wrote in the filing.
One of the customers could be Saudi Arabia’s AI firm Humain, which is building two data centers in Riyadh and Dammam, slated to open in early 2026. The company has secured approval to import 18,000 NVIDIA AI chips.
The second customer could be OpenAI or one of the major cloud providers — Microsoft, AWS, Google Cloud, or Oracle. Another possibility is xAI.
Previously, Elon Musk said xAI has 230,000 GPUs, including 30,000 GB200s, operational for training its Grok model in a supercluster called Colossus 1. Inference is handled by external cloud providers.
Musk added that Colossus 2, which will host an additional 550,000 GB200 and GB300 GPUs, will begin going online in the coming weeks. “As Jensen Huang has stated, xAI is unmatched in speed. It’s not even close,” Musk wrote in a post on X.
Meanwhile, OpenAI is preparing for a major expansion. Chief Financial Officer Sarah Friar said the company plans to invest in trillion-dollar-scale data centers to meet surging demand for AI computation.
The post NVIDIA Reveals Two Customers Accounted for 39% of Quarterly Revenue appeared first on Analytics India Magazine.
Jobs & Careers
‘Reliance Intelligence’ is Here, In Partnership with Google and Meta

Reliance Industries chairman Mukesh Ambani has announced the launch of Reliance Intelligence, a new wholly owned subsidiary focused on artificial intelligence, marking what he described as the company’s “next transformation into a deep-tech enterprise.”
Addressing shareholders, Ambani said Reliance Intelligence had been conceived with four core missions—building gigawatt-scale AI-ready data centres powered by green energy, forging global partnerships to strengthen India’s AI ecosystem, delivering AI services for consumers and SMEs in critical sectors such as education, healthcare, and agriculture, and creating a home for world-class AI talent.
Work has already begun on gigawatt-scale AI data centres in Jamnagar, Ambani said, adding that they would be rolled out in phases in line with India’s growing needs.
These facilities, powered by Reliance’s new energy ecosystem, will be purpose-built for AI training and inference at a national scale.
Ambani also announced a “deeper, holistic partnership” with Google, aimed at accelerating AI adoption across Reliance businesses.
“We are marrying Reliance’s proven capability to build world-class assets and execute at India scale with Google’s leading cloud and AI technologies,” Ambani said.
Google CEO Sundar Pichai, in a recorded message, said the two companies would set up a new cloud region in Jamnagar dedicated to Reliance.
“It will bring world-class AI and compute from Google Cloud, powered by clean energy from Reliance and connected by Jio’s advanced network,” Pichai said.
He added that Google Cloud would remain Reliance’s largest public cloud partner, supporting mission-critical workloads and co-developing advanced AI initiatives.
Ambani further unveiled a new AI-focused joint venture with Meta.
He said the venture would combine Reliance’s domain expertise across industries with Meta’s open-source AI models and tools to deliver “sovereign, enterprise-ready AI for India.”
Meta founder and CEO Mark Zuckerberg, in his remarks, said the partnership aims to bring open-source AI to Indian businesses at scale.
“With Reliance’s reach and scale, we can bring this to every corner of India. This venture will become a model for how AI, and one day superintelligence, can be delivered,” Zuckerberg said.
Ambani also highlighted Reliance’s investments in AI-powered robotics, particularly humanoid robotics, which he said could transform manufacturing, supply chains and healthcare.
“Intelligent automation will create new industries, new jobs and new opportunities for India’s youth,” he told shareholders.
Calling AI an opportunity “as large, if not larger” than Reliance’s digital services push a decade ago, Ambani said Reliance Intelligence would work to deliver “AI everywhere and for every Indian.”
“We are building for the next decade with confidence and ambition,” he said, underscoring that the company’s partnerships, green infrastructure and India-first governance approach would be central to this strategy.
The post ‘Reliance Intelligence’ is Here, In Partnership with Google and Meta appeared first on Analytics India Magazine.
Jobs & Careers
Cognizant, Workfabric AI to Train 1,000 Context Engineers

Cognizant has announced that it would deploy 1,000 context engineers over the next year to industrialise agentic AI across enterprises.
According to an official release, the company claimed that the move marks a “pivotal investment” in the emerging discipline of context engineering.
As part of this initiative, Cognizant said it is partnering with Workfabric AI, the company building the context engine for enterprise AI.
Cognizant’s context engineers will be powered by Workfabric AI’s ContextFabric platform, the statement said, adding that the platform transforms the organisational DNA of enterprises (how their teams work, including their workflows, data, rules, and processes) into actionable context for AI agents.