Debugging Python in Docker: A Tutorial for Beginners


# Introduction
Docker has simplified how we develop, ship, and run applications by providing consistent environments across different systems. However, this consistency comes with a trade-off: debugging becomes deceptively complex for beginners when your applications — including Python applications — are running inside Docker containers.
For those new to Docker, debugging Python applications can feel like trying to fix a car with the hood welded shut. You know something’s wrong, but you can’t quite see what’s happening inside.
This beginner-friendly tutorial will teach you how to get started with debugging Python in Docker.
# Why is Debugging in Docker Different?
Before we dive in, let’s understand why Docker makes debugging tricky. When you’re running Python locally on your machine, you can:
- See error messages immediately
- Edit files and run them again
- Use your favorite debugging tools
- Check what files exist and what’s in them
But when Python runs inside a Docker container, it’s often trickier and less direct, especially if you’re a beginner. The container has its own file system, its own environment, and its own running processes.
# Setting Up Our Example
Let’s start with a simple Python program that has a bug. Don’t worry about Docker yet; let’s first understand what we’re working with.
Create a file called `app.py`:
```python
def calculate_sum(numbers):
    total = 0
    for num in numbers:
        total += num
        print(f"Adding {num}, total is now {total}")
    return total

def main():
    numbers = [1, 2, 3, 4, 5]
    result = calculate_sum(numbers)
    print(f"Final result: {result}")

    # This line will cause our program to crash!
    division_result = 10 / 0
    print(f"Division result: {division_result}")

if __name__ == "__main__":
    main()
```
If you run this normally with `python3 app.py`, you’ll see it calculates the sum correctly but then crashes with a “division by zero” error. Easy to spot and fix, right?
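When debugging locally, you can even confirm exactly which exception fires with a quick snippet. This is a throwaway sketch for exploration, not a change to the app itself:

```python
# Wrap the suspicious line to confirm which exception is raised,
# without letting the whole program crash.
try:
    division_result = 10 / 0
except ZeroDivisionError as exc:
    print(f"Caught the bug: {exc}")  # prints: Caught the bug: division by zero
```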
Now let’s see what happens when this simple application runs inside a Docker container.
# Creating Your First Docker Container
We need to tell Docker how to package our Python program. Create a file called `Dockerfile`:
```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
CMD ["python3", "app.py"]
```
Let me explain each line:
- `FROM python:3.11-slim` tells Docker to start with a pre-made Linux system that already has Python installed
- `WORKDIR /app` creates an `/app` folder inside the container and sets it as the working directory
- `COPY app.py .` copies your `app.py` file from your computer into the `/app` folder inside the container
- `CMD ["python3", "app.py"]` tells Docker what command to run when the container starts
Now let’s build and run this container:
```bash
docker build -t my-python-app .
docker run my-python-app
```
You’ll see the output, including the error, but then the container stops and exits. This leaves you to figure out what went wrong inside the isolated container.
# 1. Running an Interactive Debugging Session
The first debugging skill you need is learning how to get inside a running container and check for potential problems.
Instead of running your Python program immediately, let’s start the container and get a command prompt inside it:
```bash
docker run -it my-python-app /bin/bash
```
Let me break down these new flags:
- `-i` means “interactive”: it keeps the input stream open so you can type commands
- `-t` allocates a “pseudo-TTY”: basically, it makes the terminal work properly
- `/bin/bash` overrides the normal command and gives you a bash shell instead
Now that you have a terminal inside the container, you can run commands like so:
```bash
# See what directory you're in
pwd

# List files in the current directory
ls -la

# Look at your Python file
cat app.py

# Run your Python program
python3 app.py
```
Running the program inside the container reproduces the error:
```
root@fd1d0355b9e2:/app# python3 app.py
Adding 1, total is now 1
Adding 2, total is now 3
Adding 3, total is now 6
Adding 4, total is now 10
Adding 5, total is now 15
Final result: 15
Traceback (most recent call last):
  File "/app/app.py", line 18, in <module>
    main()
  File "/app/app.py", line 14, in main
    division_result = 10 / 0
                      ~~~^~~
ZeroDivisionError: division by zero
```
Now you can:
- Edit the file right here in the container (though you’ll need to install an editor first)
- Explore the environment to understand what’s different
- Test small pieces of code interactively
Fix the division by zero error (maybe change `10 / 0` to `10 / 2`), save the file, and run it again.
The problem is fixed. When you exit the container, however, any changes you made inside it are lost, since they never touch the files on your machine. This brings us to our next technique.
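For illustration, here is one defensive way to handle the bug; `safe_divide` is a hypothetical helper for this sketch, not something the original app defines, and simply changing the divisor works just as well:

```python
def safe_divide(numerator, denominator):
    """Return the quotient, or None when the denominator is zero."""
    if denominator == 0:
        print("Warning: refusing to divide by zero")
        return None
    return numerator / denominator

print(safe_divide(10, 2))  # 5.0
print(safe_divide(10, 0))  # None, after printing a warning
```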
# 2. Using Volume Mounting for Live Edits
Wouldn’t it be nice if you could edit files on your computer and have those changes automatically appear inside the container? That’s exactly what volume mounting does.
```bash
docker run -it -v $(pwd):/app my-python-app /bin/bash
```
The new part here is `-v $(pwd):/app`:
- `$(pwd)` outputs the current directory path
- `:/app` maps your current directory to `/app` inside the container
- Any file you change on your computer immediately changes inside the container too
Now you can:
- Edit `app.py` on your computer using your favorite editor
- Inside the container, run `python3 app.py` to test your changes
- Keep editing and testing until it works
Here’s a sample output after changing the divisor to 2:
```
root@3790528635bc:/app# python3 app.py
Adding 1, total is now 1
Adding 2, total is now 3
Adding 3, total is now 6
Adding 4, total is now 10
Adding 5, total is now 15
Final result: 15
Division result: 5.0
```
This is useful because you keep your familiar editing tools on your computer while still testing in the exact environment the container provides.
# 3. Connecting a Remote Debugger from Your IDE
If you’re using an Integrated Development Environment (IDE) like VS Code or PyCharm, you can actually connect your IDE’s debugger directly to code running inside a Docker container. This gives you the full power of your IDE’s debugging tools.
Edit your `Dockerfile` like so:
```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Install the remote debugging library
RUN pip install debugpy

COPY app.py .

# Expose the port that the debugger will use
EXPOSE 5678

# Start the program with debugger support
CMD ["python3", "-m", "debugpy", "--listen", "0.0.0.0:5678", "--wait-for-client", "app.py"]
```
What this does:
- `RUN pip install debugpy` installs Microsoft’s debugpy library
- `EXPOSE 5678` tells Docker that our container will use port 5678
- The `CMD` starts our program through the debugger, listening on port 5678 for a connection; no changes to your Python code are needed
Build and run the container:
```bash
docker build -t my-python-app .
docker run -p 5678:5678 my-python-app
```
The `-p 5678:5678` flag maps port 5678 inside the container to port 5678 on your computer.
Now in VS Code, you can set up a debug configuration (in `.vscode/launch.json`) to connect to the container:
```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Remote Attach",
            "type": "python",
            "request": "attach",
            "connect": {
                "host": "localhost",
                "port": 5678
            }
        }
    ]
}
```
When you start debugging in VS Code, it will connect to your container, and you can set breakpoints, inspect variables, and step through code just like you would with local code.
# Common Debugging Problems and Solutions
⚠️ “My program works on my computer but not in Docker”
This usually means there’s a difference in the environment. Check:
- Python version differences.
- Missing dependencies.
- Different file paths.
- Environment variables.
- File permissions.
⚠️ “I can’t see my print statements”
- Use `python -u` to avoid output buffering.
- Make sure you’re running with `-it` if you want interactive output.
- Check if your program is actually running as intended (maybe it’s exiting early).
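If rebuilding the image with `python -u` is inconvenient, you can also force output out line by line from inside the code. This is a minimal sketch; the `log_progress` helper is hypothetical, and `flush=True` is the per-call equivalent of `python -u` (or setting `PYTHONUNBUFFERED=1`):

```python
import time

def log_progress(step):
    # flush=True pushes the line to stdout immediately instead of
    # letting it sit in the buffer until the program exits, which
    # otherwise looks like "missing" output in `docker logs`.
    print(f"working on step {step}", flush=True)
    return step

for i in range(3):
    log_progress(i)
    time.sleep(0.1)
```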
⚠️ “My changes aren’t showing up”
- Make sure you’re using volume mounting (`-v`).
- Check that you’re editing the right file.
- Verify the file is copied into the container.
⚠️ “The container exits immediately”
- Run with `/bin/bash` to inspect the container’s state.
- Check the error messages with `docker logs container_name`.
- Make sure your `CMD` in the Dockerfile is correct.
# Conclusion
You now have a basic toolkit for debugging Python in Docker:
- Interactive shells (`docker run -it ... /bin/bash`) for exploring and quick fixes
- Volume mounting (`-v $(pwd):/app`) for editing in your local file system
- Remote debugging for using your IDE’s full capabilities
After this, you can try using Docker Compose for managing complex applications. For now, start with these simple techniques. Most debugging problems can be solved just by getting inside the container and poking around.
The key is to be methodical: understand what should be happening, figure out what is actually happening, and then bridge the gap between the two. Happy debugging!
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she’s working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.