Debugging Python in Docker: A Tutorial for Beginners


Image by Author | Ideogram
# Introduction
Docker has simplified how we develop, ship, and run applications by providing consistent environments across different systems. However, this consistency comes with a trade-off: debugging becomes deceptively complex for beginners when your applications — including Python applications — are running inside Docker containers.
For those new to Docker, debugging Python applications can feel like trying to fix a car with the hood welded shut. You know something’s wrong, but you can’t quite see what’s happening inside.
This beginner-friendly tutorial will teach you how to get started with debugging Python in Docker.
# Why is Debugging in Docker Different?
Before we dive in, let’s understand why Docker makes debugging tricky. When you’re running Python locally on your machine, you can:
- See error messages immediately
- Edit files and run them again
- Use your favorite debugging tools
- Check what files exist and what’s in them
But when Python runs inside a Docker container, it’s often trickier and less direct, especially if you’re a beginner. The container has its own file system, its own environment, and its own running processes.
# Setting Up Our Example
Let’s start with a simple Python program that has a bug. Don’t worry about Docker yet; let’s first understand what we’re working with.
Create a file called `app.py`:

```python
def calculate_sum(numbers):
    total = 0
    for num in numbers:
        total += num
        print(f"Adding {num}, total is now {total}")
    return total

def main():
    numbers = [1, 2, 3, 4, 5]
    result = calculate_sum(numbers)
    print(f"Final result: {result}")

    # This line will cause our program to crash!
    division_result = 10 / 0
    print(f"Division result: {division_result}")

if __name__ == "__main__":
    main()
```
If you run this normally with `python3 app.py`, you’ll see it calculates the sum correctly but then crashes with a “division by zero” error. Easy to spot and fix, right?
Now let’s see what happens when this simple application runs inside a Docker container.
# Creating Your First Docker Container
We need to tell Docker how to package our Python program. Create a file called `Dockerfile`:
```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
CMD ["python3", "app.py"]
```
Let me explain each line:
- `FROM python:3.11-slim` tells Docker to start with a pre-made Linux system that already has Python installed
- `WORKDIR /app` creates an `/app` folder inside the container and sets it as the working directory
- `COPY app.py .` copies your `app.py` file from your computer into the `/app` folder inside the container
- `CMD ["python3", "app.py"]` tells Docker what command to run when the container starts
Now let’s build and run this container:
```bash
docker build -t my-python-app .
docker run my-python-app
```
You’ll see the output, including the error, but then the container stops and exits. This leaves you to figure out what went wrong inside the isolated container.
# 1. Running an Interactive Debugging Session
The first debugging skill you need is learning how to get inside a running container and check for potential problems.
Instead of running your Python program immediately, let’s start the container and get a command prompt inside it:
```bash
docker run -it my-python-app /bin/bash
```
Let me break down these new flags:
- `-i` means “interactive” — it keeps the input stream open so you can type commands
- `-t` allocates a “pseudo-TTY” — basically, it makes the terminal work properly
- `/bin/bash` overrides the normal command and gives you a bash shell instead
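If a container is already running (say you started it in another terminal), you don’t need a new `docker run` at all: `docker exec` opens a shell in the existing container. A quick sketch, using whatever name or ID `docker ps` shows for your container:

```bash
# Find the running container's name or ID
docker ps

# Open a bash shell inside it
docker exec -it <container_name_or_id> /bin/bash
```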
Now that you have a terminal inside the container, you can run commands like so:
```bash
# See what directory you're in
pwd

# List files in the current directory
ls -la

# Look at your Python file
cat app.py

# Run your Python program
python3 app.py
```
You’ll also see the error:
```
root@fd1d0355b9e2:/app# python3 app.py
Adding 1, total is now 1
Adding 2, total is now 3
Adding 3, total is now 6
Adding 4, total is now 10
Adding 5, total is now 15
Final result: 15
Traceback (most recent call last):
  File "/app/app.py", line 18, in <module>
    main()
  File "/app/app.py", line 14, in main
    division_result = 10 / 0
                      ~~~^~~
ZeroDivisionError: division by zero
```
Now you can:
- Edit the file right here in the container (though you’ll need to install an editor first; see the sketch after this list)
- Explore the environment to understand what’s different
- Test small pieces of code interactively
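The slim Python image ships without a text editor, but since it is Debian-based you can pull one in with apt. A minimal sketch, assuming the container has internet access (nano is just one option):

```bash
# Refresh the package index and install a small editor
apt-get update && apt-get install -y nano

# Open the file and fix the bug
nano app.py
```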
Fix the division by zero error (maybe change `10 / 0` to `10 / 2`), save the file, and run it again.
The problem is fixed. However, changes made this way live only in that one container: they are never saved to the image or to the files on your machine, so a fresh `docker run` starts from the original, buggy code. This brings us to our next technique.
# 2. Using Volume Mounting for Live Edits
Wouldn’t it be nice if you could edit files on your computer and have those changes automatically appear inside the container? That’s exactly what volume mounting does.
```bash
docker run -it -v $(pwd):/app my-python-app /bin/bash
```
The new part here is `-v $(pwd):/app`:
- `$(pwd)` outputs the current directory path
- `:/app` maps your current directory to `/app` inside the container
- Any file you change on your computer immediately changes inside the container too
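One caveat: `$(pwd)` is Unix shell syntax. On Windows the equivalent depends on your shell; a quick sketch, assuming PowerShell or the classic Command Prompt:

```bash
# PowerShell
docker run -it -v ${PWD}:/app my-python-app /bin/bash

# Command Prompt (cmd.exe)
docker run -it -v %cd%:/app my-python-app /bin/bash
```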
Now you can:
- Edit `app.py` on your computer using your favorite editor
- Inside the container, run `python3 app.py` to test your changes
- Keep editing and testing until it works
Here’s a sample output after changing the divisor to 2:
```
root@3790528635bc:/app# python3 app.py
Adding 1, total is now 1
Adding 2, total is now 3
Adding 3, total is now 6
Adding 4, total is now 10
Adding 5, total is now 15
Final result: 15
Division result: 5.0
```
This is useful because you get to edit with your familiar tools on your computer while the code still runs in the exact environment of the container.
# 3. Connecting a Remote Debugger from Your IDE
If you’re using an Integrated Development Environment (IDE) like VS Code or PyCharm, you can actually connect your IDE’s debugger directly to code running inside a Docker container. This gives you the full power of your IDE’s debugging tools.
Edit your `Dockerfile` like so:
```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Install the remote debugging library
RUN pip install debugpy

COPY app.py .

# Expose the port that the debugger will use
EXPOSE 5678

# Start the program with debugger support
CMD ["python3", "-m", "debugpy", "--listen", "0.0.0.0:5678", "--wait-for-client", "app.py"]
```
What this does:
- `pip install debugpy` installs Microsoft’s debugpy library
- `EXPOSE 5678` tells Docker that our container will use port 5678
- The `CMD` starts our program through the debugger, listening on port 5678 for a connection. No changes to your Python code are needed.
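As a side note, debugpy can also be started from inside your Python code instead of via the `CMD` line. That isn’t required for the setup in this tutorial, but here is a short sketch of the idea using debugpy’s `listen` and `wait_for_client` calls:

```python
# A sketch: opt into remote debugging from within the program itself
import debugpy

# Listen on all interfaces so an IDE on the host can attach through the mapped port
debugpy.listen(("0.0.0.0", 5678))
print("Waiting for a debugger to attach on port 5678...")
debugpy.wait_for_client()  # block until the IDE connects

# From here on, breakpoints set in the IDE will be hit as usual
```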
Build and run the container:
```bash
docker build -t my-python-app .
docker run -p 5678:5678 my-python-app
```
The `-p 5678:5678` flag maps port 5678 inside the container to port 5678 on your computer.
Now in VS Code, you can set up a debug configuration (in `.vscode/launch.json`) to connect to the container:
```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Remote Attach",
            "type": "python",
            "request": "attach",
            "connect": {
                "host": "localhost",
                "port": 5678
            }
        }
    ]
}
```
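If the debugger attaches but your breakpoints never bind, the usual culprit is that file paths inside the container differ from those on your machine. VS Code’s attach configuration accepts a `pathMappings` entry for this; a sketch of the fragment to add to the configuration above, assuming your project root ends up at `/app` as in the Dockerfile:

```json
"pathMappings": [
    {
        "localRoot": "${workspaceFolder}",
        "remoteRoot": "/app"
    }
]
```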
When you start debugging in VS Code, it will connect to your container, and you can set breakpoints, inspect variables, and step through code just like you would with local code.
# Common Debugging Problems and Solutions
⚠️ “My program works on my computer but not in Docker”
This usually means there’s a difference in the environment. Check:
- Python version differences.
- Missing dependencies.
- Different file paths.
- Environment variables.
- File permissions.
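A quick way to spot such differences is to run one-off commands against the same image and compare the output with your local machine. A short sketch using the image name from this tutorial:

```bash
# Compare Python versions
python3 --version
docker run --rm my-python-app python3 --version

# Inspect environment variables inside the container
docker run --rm my-python-app env

# List the packages installed inside the container
docker run --rm my-python-app pip list
```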
⚠️ “I can’t see my print statements”
- Use `python -u` to avoid output buffering.
- Make sure you’re running with `-it` if you want interactive output.
- Check if your program is actually running as intended (maybe it’s exiting early).
⚠️ “My changes aren’t showing up”
- Make sure you’re using volume mounting (`-v`).
- Check that you’re editing the right file.
- Verify the file is copied into the container.
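When in doubt, check what the container actually sees and rebuild if you aren’t using a volume mount. A quick sketch:

```bash
# Print app.py exactly as it exists in a fresh container from the image
docker run --rm my-python-app cat app.py

# Rebuild the image so COPY picks up your latest changes
docker build -t my-python-app .
```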
⚠️ “The container exits immediately”
- Run with `/bin/bash` to inspect the container’s state.
- Check the error messages with `docker logs container_name`.
- Make sure your `CMD` in the Dockerfile is correct.
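For example, you can find the exited container and read its captured output like this (a sketch; the container name or ID will differ on your machine):

```bash
# Show all containers, including ones that have exited
docker ps -a

# Read the stdout/stderr the container produced before it stopped
docker logs <container_name_or_id>
```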
# Conclusion
You now have a basic toolkit for debugging Python in Docker:
- Interactive shells (`docker run -it ... /bin/bash`) for exploring and quick fixes
- Volume mounting (`-v $(pwd):/app`) for editing in your local file system
- Remote debugging for using your IDE’s full capabilities
After this, you can try using Docker Compose for managing complex applications. For now, start with these simple techniques. Most debugging problems can be solved just by getting inside the container and poking around.
The key is to be methodical: understand what should be happening, figure out what is actually happening, and then bridge the gap between the two. Happy debugging!
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she’s working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.
Visa Launches MCP Server and Agent Toolkit to Advance Agentic Commerce

Visa has expanded its Intelligent Commerce program with the introduction of a Model Context Protocol (MCP) Server and a Visa Acceptance Agent Toolkit, designed to help developers and business users connect AI agents directly to Visa’s network.
The MCP Server allows developers to link AI agents and large language models with Visa Intelligent Commerce APIs, creating a standardised and secure way to integrate payments. “For AI agents and LLMs to interact with Visa’s trusted network, they need a secure, consistent way to communicate with our services,” the company said in its announcement.
According to Visa, the MCP Server eliminates the need for custom-built integrations, accelerates prototype development, and allows agents to dynamically apply Visa APIs to commerce tasks. Early adopters within Visa have already used the technology to streamline generative AI workflows.
The company also announced the pilot of the Visa Acceptance Agent Toolkit, which runs on the MCP Server. It is designed to let both developers and non-technical users complete commerce tasks in plain language without coding.
“Now available in pilot, the Visa Acceptance Agent Toolkit empowers both developers and business users to put agentic commerce into action — without writing a single line of code,” Visa noted.
Initial use cases include creating invoices and summarising transaction data through natural language commands. For example, a user could request: “Create an invoice for $100 for John Doe, due Friday,” and the agent would process the request through Visa’s Invoice API.
The Toolkit is currently available as a self-hosted package via npm for JavaScript developers, with all actions routed through the MCP Server under Visa’s security and access controls.
Visa said both the MCP Server and Toolkit remain in pilot while the company explores further B2B and B2C applications. “Trust is crucial for enabling AI commerce,” Visa stated, adding that its decades of work with machine learning and datasets position it to support secure, next-generation payments at scale.
Top 7 Small Language Models


Image by Author
# Introduction
Small language models (SLMs) are quickly becoming the practical face of AI. They are getting faster, smarter, and far more efficient, delivering strong results with a fraction of the compute, memory, and energy that large models require.
A growing trend in the AI community is to use large language models (LLMs) to generate synthetic datasets, which are then used to fine-tune SLMs for specific tasks or to adopt particular styles. As a result, SLMs are becoming smarter, faster, and more specialized, all while maintaining a compact size. This opens up exciting possibilities: you can now embed intelligent models directly into systems that don’t require a constant internet connection, enabling on-device intelligence for privacy, speed, and reliability.
In this tutorial, we will review some of the top small language models making waves in the AI world. We will compare their size and performance, helping you understand which models offer the best balance for your needs.
# 1. google/gemma-3-270m-it
The Gemma 3 270M model is the smallest and most ultra-lightweight member of the Gemma 3 family, designed for efficiency and accessibility. With just 270 million parameters, it can run smoothly on devices with limited computational resources, making it ideal for experimentation, prototyping, and lightweight applications.
Despite its compact size, the 270M model supports a 32K context window and can handle a wide range of tasks such as basic question answering, summarization, and reasoning.
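As a rough illustration of how lightweight this class of model is to try out, here is a minimal sketch using the Hugging Face Transformers `pipeline` API. It assumes a recent `transformers` release with Gemma 3 support and that you have accepted the model’s terms on the Hub:

```python
from transformers import pipeline

# Load the instruction-tuned 270M Gemma 3 checkpoint for chat-style generation
generator = pipeline("text-generation", model="google/gemma-3-270m-it")

messages = [
    {"role": "user", "content": "In two sentences, why are small language models useful?"}
]

# generated_text contains the full conversation, including the model's reply
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"])
```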
# 2. Qwen/Qwen3-0.6B
The Qwen3-0.6B model is the most lightweight variant in the Qwen3 series, designed to deliver strong performance while remaining highly efficient and accessible. With 600 million parameters (0.44B non-embedding), it strikes a balance between capability and resource requirements.
Qwen3-0.6B comes with the ability to seamlessly switch between “thinking mode” for complex reasoning, math, and coding, and “non-thinking mode” for fast, general-purpose dialogue. It supports a 32K context length and offers multilingual support across 100+ languages.
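The thinking/non-thinking switch is exposed through the chat template. A minimal sketch based on the Qwen3 model card (the `enable_thinking` argument is the documented toggle; exact behaviour may vary with your `transformers` version):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-0.6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

messages = [{"role": "user", "content": "What is 17 * 23? Explain briefly."}]

# enable_thinking=True lets the model reason step by step before answering;
# set it to False for fast, non-thinking responses
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```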
# 3. HuggingFaceTB/SmolLM3-3B
The SmolLM3-3B model is a small yet powerful open-source language model designed to push the limits of small-scale language models. With 3 billion parameters, it delivers strong performance in reasoning, math, coding, and multilingual tasks while remaining efficient enough for broader accessibility.
SmolLM3 supports dual-mode reasoning, allowing users to toggle between extended “thinking mode” for complex problem-solving and a faster, lightweight mode for general dialogue.
Beyond text generation, SmolLM3 also enables agentic usage with tool calling, making it versatile for real-world applications. As a fully open model with public training details, open weights, and checkpoints, SmolLM3 provides researchers and developers with a transparent, high-performance foundation for building reasoning-capable AI systems at the 3B–4B scale.
# 4. Qwen/Qwen3-4B-Instruct-2507
The Qwen3-4B-Instruct-2507 model is an updated instruction-tuned variant of the Qwen3-4B series, designed to deliver stronger performance in non-thinking mode. With 4 billion parameters (3.6B non-embedding), it introduces major improvements across instruction following, logical reasoning, text comprehension, mathematics, science, coding, and tool usage, while also expanding long-tail knowledge coverage across multiple languages.
Unlike other Qwen3 models, this version is optimized exclusively for non-thinking mode, ensuring faster, more efficient responses without generating reasoning tokens. It also demonstrates better alignment with user preferences, excelling in open-ended and creative tasks such as writing, dialogue, and subjective reasoning.
# 5. google/gemma-3-4b-it
The Gemma 3 4b model is an instruction-tuned, multimodal member of the Gemma 3 family, designed to handle both text and image inputs while generating high-quality text outputs. With 4 billion parameters and support for a 128K token context window, it is well-suited for tasks such as question answering, summarization, reasoning, and detailed image understanding.
Importantly, it is a popular base for fine-tuning on text classification, image classification, or other specialized tasks, which further improves its performance in specific domains.
# 6. janhq/Jan-v1-4B
The Jan-v1 model is the first release in the Jan Family, built specifically for agentic reasoning and problem-solving within the Jan App. Based on the Lucy model and powered by the Qwen3-4B-thinking architecture, Jan-v1 delivers enhanced reasoning capabilities, tool utilization, and improved performance on complex agentic tasks.
By scaling the model and fine-tuning its parameters, it has achieved an impressive accuracy of 91.1% on SimpleQA. This marks a significant milestone in factual question answering for models of this size. It is optimized for local use with the Jan app, vLLM, and llama.cpp, with recommended settings to enhance performance.
# 7. microsoft/Phi-4-mini-instruct
The Phi-4-mini-instruct model is a lightweight 3.8B parameter language model from Microsoft’s Phi-4 family, designed for efficient reasoning, instruction following, and safe deployment in both research and commercial applications.
Trained on a mix of 5T tokens from high-quality filtered web data, synthetic “textbook-like” reasoning data, and curated supervised instruction data, it supports a 128K token context length and excels in math, logic, and multilingual tasks.
Phi-4-mini-instruct also supports function calling, multilingual generation (20+ languages), and integration with frameworks like vLLM and Transformers, enabling flexible deployment.
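For instance, serving it behind vLLM’s OpenAI-compatible API is typically a one-liner (a sketch; assumes a recent vLLM install and suitable hardware):

```bash
# Start an OpenAI-compatible API server backed by Phi-4-mini-instruct
vllm serve microsoft/Phi-4-mini-instruct --port 8000
```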
# Conclusion
This article explores a new wave of lightweight yet powerful open models that are reshaping the AI landscape by balancing efficiency, reasoning, and accessibility.
From Google’s Gemma 3 family with the ultra-compact `gemma-3-270m-it` and the multimodal `gemma-3-4b-it`, to Qwen’s Qwen3 series with the efficient `Qwen3-0.6B` and the long-context, instruction-optimized `Qwen3-4B-Instruct-2507`, these models highlight how scaling and fine-tuning can unlock strong reasoning and multilingual capabilities in smaller footprints.
`SmolLM3-3B` pushes the boundaries of small models with dual-mode reasoning and long-context support, while `Jan-v1-4B` focuses on agentic reasoning and tool use within the Jan App ecosystem.
Finally, Microsoft’s `Phi-4-mini-instruct` demonstrates how 3.8B parameters can deliver competitive performance in math, logic, and multilingual tasks through high-quality synthetic data and alignment techniques.
Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master’s degree in technology management and a bachelor’s degree in telecommunication engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.
IBM Cloud to Eliminate Free Human Support and Pivot to Self-Service and AI

IBM Cloud will overhaul its Basic Support tier, transitioning from free, human-led case support to a self-service model starting in January 2026, according to emails accessed by The Register.
Under the current basic support, which is provided at no cost with Pay‑As‑You‑Go or Subscription accounts, customers can “raise cases with IBM’s support team 24×7.” However, no guaranteed response times or dedicated account managers are included.
According to an email sent to affected customers, this upcoming change means Basic Support users will lose the ability to “open or escalate technical support cases through the portal or APIs.”
Instead, they will still be able to “self‑report service issues (e.g., hardware or backup failures) via the Cloud Console” and lodge “billing and account cases in the IBM Cloud Support Portal,” the media house reported.
IBM encourages users to adopt its Watsonx-powered IBM Cloud AI Assistant, which was upgraded earlier this year. The company also plans to introduce a “report an issue” tool in January 2026, promising “faster issue routing.” Additionally, an expanded library of documentation will provide deeper self‑help content.
The internal message reassures customers that “This no‑cost support level will shift to a self‑service model to align with industry standards and improve your support experience.” Still, for those requiring “technical support, faster response times, or severity‑level control,” IBM advises upgrading to a paid support plan, with pricing “starting at $200/month”.
While IBM claims the move brings its support structure in line with industry norms, news reports note that hyperscale cloud providers such as AWS, Google Cloud, and Microsoft Azure already offer similar self‑service tiers, adding value through community forums, advisor tools, and usage‑based optimisation, without such drastic cuts to human support.