How I Made Sense of a 10K+ Line GitHub Repo Without Reading the Code
Image by Author | Canva

 

Navigating and understanding large codebases can be challenging, especially for new developers joining a project or when revisiting older repositories. The traditional way to understand a code structure is to read through numerous files and documents, which is time-consuming and error-prone. GitDiagram offers a solution by converting GitHub repositories into interactive diagrams, providing a visual representation of the codebase’s architecture. This helps in understanding complex systems and enhances collaboration among development teams. In this article, I will walk you through the step-by-step process of running GitDiagram locally. So, without further ado, let’s get started.

 

Step-by-Step Guide to Using GitDiagram Locally

 

Step 1: Clone the GitDiagram repository

git clone https://github.com/ahmedkhaleel2004/gitdiagram.git
cd gitdiagram

 

Step 2: Install Dependencies

Before installing the project dependencies, make sure you have Node.js and pnpm installed globally:

  • To install Node.js, download it from nodejs.org
  • To install pnpm, run: npm install -g pnpm

Then, from the repository root, run:

pnpm install

This fetches and installs all dependencies into node_modules.

 

Step 3: Set Up Environment Variables

 
Edit the .env file to include your OpenAI / Anthropic / OpenRouter API key and, optionally, your GitHub personal access token.
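For illustration, the file might look like the sketch below. The variable names here are placeholders of my own, not confirmed keys; check the repository’s .env.example (if one is provided) for the exact names GitDiagram expects.

```shell
# .env -- placeholder names for illustration only;
# consult the repo's .env.example for the exact keys it expects.
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=...
OPENROUTER_API_KEY=...
GITHUB_PAT=...   # optional: raises GitHub API rate limits for large repos
```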

 

Step 4: Start Backend Services

docker-compose up --build -d

 
The FastAPI server will be available at localhost:8000. Opening that address should return the following message from the server:

{"message":"Hello from GitDiagram API!"}

 

Step 5: Initialize the Database

Run the following commands to set up the database:

chmod +x start-database.sh
./start-database.sh
pnpm db:push

 
When prompted to generate a random password, input yes. The Postgres database will start in a container at localhost:5432.
Note: When I first ran pnpm db:push, I got this error:

sh: drizzle-kit: command not found
 ELIFECYCLE  Command failed.
 WARN   Local package.json exists, but node_modules missing, did you mean to install?

 
Turns out I hadn’t installed drizzle-kit; as the warning suggests, node_modules was missing entirely. So if you see this, just run:

pnpm install

After that, pnpm db:push worked fine and gave me this output:

No config path provided, using default 'drizzle.config.ts'
Reading config file '/Users/kanwal/Desktop/gitdiagram/drizzle.config.ts'
Using 'postgres' driver for database querying
[✓] Pulling schema from database...
[✓] Changes applied

 

Step 6: Run the Frontend

Start the development server:

pnpm dev

You can now access the website at localhost:3000. If needed, you can also edit the rate limits defined in backend/app/routers/generate.py in the generate function decorator. Let’s try to visualize the GitHub repo of the FastAPI library.

Frontend Interface:

Output:

 

Concluding Thoughts

 
This is a great idea and a really useful repository. I’ve personally felt the need for something like this in my own projects, so I appreciate the effort and vision behind it.

That said, to offer an unbiased opinion: there is definitely room for improvement.

One recurring issue I ran into was:

Syntax error in text mermaid version 11.4.

 
According to the project owner ahmedkhaleel2004, this error usually means the LLM generated invalid Mermaid.js syntax.

 

I’ve tried addressing this issue in numerous ways, but ultimately, I find that there is no reliable fix—it’s mostly a limitation of the LLM. If there were a way to validate Mermaid.js code, that would help, but as of now, I’m not sure how.

 

He also noted that the current prompt (in `prompts.py`, specifically the third one that generates Mermaid code) already tries to enforce correct syntax—but it’s not foolproof, and new syntax issues still occur.
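In the meantime, a cheap client-side guard can at least catch the grossest failures before rendering. The function below is my own heuristic sketch, not part of GitDiagram and nothing close to a real Mermaid parser: it only checks that the text declares a known diagram type and that node brackets balance.

```python
# Heuristic pre-check for LLM-generated Mermaid text.
# This is NOT a real parser -- it only catches gross structural
# errors (unknown diagram type, unbalanced brackets).

DIAGRAM_HEADERS = ("flowchart", "graph", "sequenceDiagram",
                   "classDiagram", "stateDiagram", "erDiagram")

def rough_mermaid_check(text: str) -> bool:
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    if not lines:
        return False
    # The first non-empty line must declare a known diagram type.
    if not lines[0].startswith(DIAGRAM_HEADERS):
        return False
    # Brackets used for node shapes and subgraphs must balance.
    closers = {"[": "]", "(": ")", "{": "}"}
    stack = []
    for ch in text:
        if ch in closers:
            stack.append(closers[ch])
        elif ch in closers.values():
            if not stack or stack.pop() != ch:
                return False
    return not stack

# rough_mermaid_check("flowchart TD\n  A[Start] --> B[End]")  # True
# rough_mermaid_check("flowchart TD\n  A[Start --> B[End]")   # False
```

A check like this could sit between the LLM call and the diagram render, triggering a regeneration instead of surfacing the Mermaid syntax error to the user.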

A Solution I Found Online That Worked
While digging through the GitHub Issues, I came across a workaround shared by another user that actually worked for me:

 

Add this to the customize diagram prompt:

“Ignore the syntax issue from Mermaid version 11.4.1 and regenerate the remainder of the diagram.”

 

Using that line helped bypass the error. Even though some components might still be missing, it at least produced a partial diagram—enough to give a high-level understanding of the codebase.
 
 

Kanwal Mehreen Kanwal is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook “Maximizing Productivity with ChatGPT”. As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She’s also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.


