Build AWS architecture diagrams using Amazon Q CLI and MCP


Creating professional AWS architecture diagrams is a fundamental task for solutions architects, developers, and technical teams. These diagrams serve as essential communication tools for stakeholders, documentation of compliance requirements, and blueprints for implementation teams. However, traditional diagramming approaches present several challenges:

  • Time-consuming process – Creating detailed architecture diagrams manually can take hours or even days
  • Steep learning curve – Learning specialized diagramming tools requires significant investment
  • Inconsistent styling – Maintaining visual consistency across multiple diagrams is difficult
  • Outdated AWS icons – Keeping up with the latest AWS service icons and best practices is challenging
  • Difficult maintenance – Updating diagrams as architectures evolve can become increasingly burdensome

Amazon Q Developer CLI with the Model Context Protocol (MCP) offers a streamlined approach to creating AWS architecture diagrams. By using generative AI through natural language prompts, architects can now generate professional diagrams in minutes rather than hours, while adhering to AWS best practices.

In this post, we explore how to use Amazon Q Developer CLI with the AWS Diagram MCP and the AWS Documentation MCP servers to create sophisticated architecture diagrams that follow AWS best practices. We discuss techniques for basic diagrams and real-world diagrams, with detailed examples and step-by-step instructions.

Solution overview

Amazon Q Developer CLI is a command line interface that brings the generative AI capabilities of Amazon Q directly to your terminal. Developers can interact with Amazon Q through natural language prompts, making it an invaluable tool for various development tasks.

Developed by Anthropic as an open protocol, the Model Context Protocol (MCP) provides a standardized way to connect AI models to virtually any data source or tool. Using a client-server architecture (as illustrated in the following diagram), the MCP helps developers expose their data through lightweight MCP servers while building AI applications as MCP clients that connect to these servers.

The MCP uses a client-server architecture containing the following components:

  • Host – A program or AI tool that requires access to data through the MCP protocol, such as Anthropic’s Claude Desktop, an integrated development environment (IDE), AWS MCP CLI, or other AI applications
  • Client – Protocol clients that maintain one-to-one connections with servers
  • Server – Lightweight programs that expose specific capabilities through the standardized protocol or act as tools
  • Data sources – Local data sources such as databases and file systems, or external systems available over the internet through APIs (web APIs) that MCP servers can connect with

As announced in April 2025, MCP enables Amazon Q Developer to connect with specialized servers that extend its capabilities beyond what’s possible with the base model alone. MCP servers act as plugins for Amazon Q, providing domain-specific knowledge and functionality. The AWS Diagram MCP server specifically enables Amazon Q to generate architecture diagrams using the Python diagrams package, with access to the complete AWS icon set and architectural best practices.

Prerequisites

To implement this solution, you must have an AWS account with appropriate permissions and complete the setup steps in the following sections.

Set up your environment

Before you can start creating diagrams, you need to set up your environment with Amazon Q CLI, the AWS Diagram MCP server, and AWS Documentation MCP server. This section provides detailed instructions for installation and configuration.

Install Amazon Q Developer CLI

Amazon Q Developer CLI is available as a standalone installation. Complete the following steps to install it:

  1. Download and install Amazon Q Developer CLI. For instructions, see Using Amazon Q Developer on the command line.
  2. Verify the installation by running the following command: q --version
    You should see output similar to the following: Amazon Q Developer CLI version 1.x.x
  3. Configure Amazon Q CLI with your AWS credentials: q login
  4. Choose the login method suitable for you.

Set up MCP servers

Complete the following steps to set up your MCP servers:

  1. Install uv using the following command: pip install uv
  2. Install Python 3.10 or newer: uv python install 3.10
  3. Install GraphViz for your operating system.
  4. Add the servers to your ~/.aws/amazonq/mcp.json file:
{
  "mcpServers": {
    "awslabs.aws-diagram-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-diagram-mcp-server"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      },
      "autoApprove": [],
      "disabled": false
    },
    "awslabs.aws-documentation-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-documentation-mcp-server@latest"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      },
      "autoApprove": [],
      "disabled": false
    }
  }
}

Now, Amazon Q CLI automatically discovers MCP servers in the ~/.aws/amazonq/mcp.json file.
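Because this configuration is plain JSON, you can sanity-check it before starting a chat session. The following stdlib-only sketch parses an mcp.json file and reports which servers will load; the helper name and validation rules are illustrative, not part of Amazon Q:

```python
import json
import tempfile
from pathlib import Path

def enabled_servers(path: Path) -> list[str]:
    """Parse an mcp.json file and return the names of servers that will load."""
    config = json.loads(path.read_text())
    servers = config["mcpServers"]  # a KeyError here means the file is malformed
    for name, spec in servers.items():
        if "command" not in spec:
            raise ValueError(f"server {name!r} has no 'command' entry")
    return [name for name, spec in servers.items()
            if not spec.get("disabled", False)]

# Demo against a temporary copy of the configuration shown above; point
# `path` at ~/.aws/amazonq/mcp.json to check your real file.
SAMPLE = {
    "mcpServers": {
        "awslabs.aws-diagram-mcp-server": {
            "command": "uvx",
            "args": ["awslabs.aws-diagram-mcp-server"],
            "disabled": False,
        },
        "awslabs.aws-documentation-mcp-server": {
            "command": "uvx",
            "args": ["awslabs.aws-documentation-mcp-server@latest"],
            "disabled": False,
        },
    }
}
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "mcp.json"
    path.write_text(json.dumps(SAMPLE))
    print(enabled_servers(path))
```

A disabled server (`"disabled": true`) stays in the file but is skipped at load time, which is handy for toggling servers without deleting their entries.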

Understanding MCP server tools

The AWS Diagram MCP server provides several powerful tools:

  • list_icons – Lists available icons from the diagrams package, organized by provider and service category
  • get_diagram_examples – Provides example code for different types of diagrams (AWS, sequence, flow, class, and others)
  • generate_diagram – Creates a diagram from Python code using the diagrams package

The AWS Documentation MCP server provides the following useful tools:

  • search_documentation – Searches AWS documentation using the official AWS Documentation Search API
  • read_documentation – Fetches and converts AWS documentation pages to markdown format
  • recommend – Gets content recommendations for AWS documentation pages

These tools work together to help you create accurate architecture diagrams that follow AWS best practices.

Test your setup

Let’s verify that everything is working correctly by generating a simple diagram:

  1. Start the Amazon Q CLI chat interface and verify the output shows the MCP servers being loaded and initialized: q chat
  2. In the chat interface, enter the following prompt:
    Please create a diagram showing an EC2 instance in a VPC connecting to an external S3 bucket. Include essential networking components (VPC, subnets, Internet Gateway, Route Table), security elements (Security Groups, NACLs), and clearly mark the connection between EC2 and S3. Label everything appropriately and concisely and indicate that all resources are in the us-east-1 region. Check for AWS documentation to ensure it adheres to AWS best practices before you create the diagram.
  3. Amazon Q CLI will ask you to trust the tool that is being used; enter t to trust it. Amazon Q CLI will then generate and display a simple diagram showing the requested architecture. Your diagram should look similar to the following screenshot, though there might be variations in layout, styling, or specific details because it’s created using generative AI. The core architectural components and relationships will be represented, but the exact visual presentation might differ slightly with each generation.

    If you see the diagram, your environment is set up correctly. If you encounter issues, verify that Amazon Q CLI can access the MCP servers by making sure you installed the necessary tools and the servers are in the ~/.aws/amazonq/mcp.json file.

Configuration options

The AWS Diagram MCP server supports several configuration options to customize your diagramming experience:

  • Output directory – By default, diagrams are saved in a generated-diagrams directory in your current working directory. You can specify a different location in your prompts.
  • Diagram format – The default output format is PNG, but you can request other formats like SVG in your prompts.
  • Styling options – You can specify colors, shapes, and other styling elements in your prompts.

Now that our environment is set up, let’s create more diagrams.

Create AWS architecture diagrams

In this section, we walk through the process of creating multiple AWS architecture diagrams using Amazon Q CLI with the AWS Diagram MCP server and AWS Documentation MCP server to make sure our designs follow best practices.

When you provide a prompt to Amazon Q CLI, the AWS Diagram and Documentation MCP servers complete the following steps:

  1. Interpret your requirements.
  2. Check for best practices on the AWS documentation.
  3. Generate Python code using the diagrams package.
  4. Execute the code to create the diagram.
  5. Return the diagram as an image.
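The five steps above can be sketched as a small pipeline. Everything below is a hypothetical, stdlib-only illustration of the flow; none of the function names are part of the servers’ real API:

```python
from dataclasses import dataclass

@dataclass
class DiagramRequest:
    prompt: str
    best_practices: list
    code: str = ""
    image_path: str = ""

def interpret(prompt: str) -> DiagramRequest:
    # Step 1: turn the natural-language prompt into a structured request.
    return DiagramRequest(prompt=prompt, best_practices=[])

def check_documentation(req: DiagramRequest) -> DiagramRequest:
    # Step 2: the Documentation MCP server searches and reads AWS docs here.
    req.best_practices.append("place EC2 instances in private subnets")
    return req

def generate_code(req: DiagramRequest) -> DiagramRequest:
    # Step 3: emit Python code that uses the diagrams package.
    req.code = "# diagrams-package code generated from the prompt"
    return req

def execute_code(req: DiagramRequest) -> DiagramRequest:
    # Steps 4-5: run the code and return the rendered image's location
    # (generated-diagrams is the server's default output directory).
    req.image_path = "generated-diagrams/architecture.png"
    return req

result = execute_code(generate_code(check_documentation(interpret(
    "EC2 instance in a VPC connecting to S3"))))
print(result.image_path)
```

The point of the sketch is the ordering: documentation is consulted before any code is generated, which is why the prompts in this post ask Amazon Q to check AWS documentation first.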

This process happens seamlessly, so you can focus on describing what you want rather than how to create it.

AWS architecture diagrams typically include the following components:

  • Nodes – AWS services and resources
  • Edges – Connections between nodes showing relationships or data flow
  • Clusters – Logical groupings of nodes, such as virtual private clouds (VPCs), subnets, and Availability Zones
  • Labels – Text descriptions for nodes and connections

Example 1: Create a web application architecture

Let’s create a diagram for a simple web application hosted on AWS. Enter the following prompt:

Create a diagram for a simple web application with an Application Load Balancer, two EC2 instances, and an RDS database. Check for AWS documentation to ensure it adheres to AWS best practices before you create the diagram.

After you enter your prompt, Amazon Q CLI will search AWS documentation for best practices using the search_documentation tool from the awslabs.aws-documentation-mcp-server MCP server.


Following the search of the relevant AWS documentation, it will read the documentation using the read_documentation tool from the awslabs.aws-documentation-mcp-server MCP server.

Amazon Q CLI will then list the needed AWS service icons using the list_icons tool, and will use generate_diagram from the awslabs.aws-diagram-mcp-server MCP server.

You should receive an output with a description of the diagram created based on the prompt, along with the location where the diagram was saved.

Amazon Q CLI will generate and display the diagram.

The generated diagram shows the key components from the prompt: the Application Load Balancer distributing traffic, the two EC2 instances, and the RDS database.

Example 2: Create a multi-tier architecture

Multi-tier architectures separate applications into functional layers (presentation, application, and data) to improve scalability and security. We use the following prompt to create our diagram:

Create a diagram for a three-tier web application with a presentation tier (ALB and CloudFront), application tier (ECS with Fargate), and data tier (Aurora PostgreSQL). Include VPC with public and private subnets across multiple AZs. Check for AWS documentation to ensure it adheres to AWS best practices before you create the diagram.

The diagram shows the following key components:

  • A presentation tier in public subnets
  • An application tier in private subnets
  • A data tier in isolated private subnets
  • Proper security group configurations
  • Traffic flow between tiers

Example 3: Create a serverless architecture

We use the following prompt to create a diagram for a serverless architecture:

Create a diagram for a serverless web application using API Gateway, Lambda, DynamoDB, and S3 for static website hosting. Include Cognito for user authentication and CloudFront for content delivery. Check for AWS documentation to ensure it adheres to AWS best practices before you create the diagram.

The diagram includes the key components from the prompt: API Gateway, Lambda, DynamoDB, S3 static website hosting, Cognito for user authentication, and CloudFront for content delivery.

Example 4: Create a data processing diagram

We use the following prompt to create a diagram for a data processing pipeline:

Create a diagram for a data processing pipeline with components organized in clusters for data ingestion, processing, storage, and analytics. Include Kinesis, Lambda, S3, Glue, and QuickSight. Check for AWS documentation to ensure it adheres to AWS best practices before you create the diagram.

The diagram organizes components into distinct clusters for data ingestion, processing, storage, and analytics.

Real-world examples

Let’s explore some real-world architecture patterns and how to create diagrams for them using Amazon Q CLI with the AWS Diagram MCP server.

Ecommerce platform

Ecommerce platforms require scalable, resilient architectures to handle variable traffic and maintain high availability. We use the following prompt to create an example diagram:

Create a diagram for an e-commerce platform with microservices architecture. Include components for product catalog, shopping cart, checkout, payment processing, order management, and user authentication. Ensure the architecture follows AWS best practices for scalability and security. Check for AWS documentation to ensure it adheres to AWS best practices before you create the diagram.

The diagram includes the following key components:

  • Amazon API Gateway as the entry point for client applications, providing a secure and scalable interface
  • Microservices implemented as containers in ECS with Fargate, enabling flexible and scalable processing
  • Amazon RDS databases for product catalog, shopping cart, and order data, providing reliable structured data storage
  • Amazon ElastiCache for product data caching and session management, improving performance and user experience
  • Amazon Cognito for authentication, ensuring secure access control
  • Amazon Simple Queue Service and Amazon Simple Notification Service for asynchronous communication between services, enabling a decoupled and resilient architecture
  • Amazon CloudFront for content delivery of static assets from S3, optimizing global performance
  • Amazon Route 53 for DNS management, providing reliable routing
  • AWS WAF for web application security, protecting against common web exploits
  • AWS Lambda functions for serverless microservice implementation, offering cost-effective scaling
  • AWS Secrets Manager for secure credential storage, enhancing security posture
  • Amazon CloudWatch for monitoring and observability, providing insights into system performance and health

Intelligent document processing solution

We use the following prompt to create a diagram for an intelligent document processing (IDP) architecture:

Create a diagram for an intelligent document processing (IDP) application on AWS. Include components for document ingestion, OCR and text extraction, intelligent data extraction (using NLP and/or computer vision), human review and validation, and data output/integration. Ensure the architecture follows AWS best practices for scalability and security, leveraging services like S3, Lambda, Textract, Comprehend, SageMaker (for custom models, if applicable), and potentially Augmented AI (A2I). Check for AWS documentation related to intelligent document processing best practices to ensure it adheres to AWS best practices before you create the diagram.

The diagram includes the key components from the prompt: document ingestion into Amazon S3, OCR and text extraction with Amazon Textract, intelligent data extraction with Amazon Comprehend and custom SageMaker models, human review and validation with Amazon Augmented AI (A2I), and data output and integration through AWS Lambda.

Clean up

If you no longer need to use the AWS Diagram MCP server and AWS Documentation MCP server with Amazon Q CLI, you can remove them from your configuration:

  1. Open your ~/.aws/amazonq/mcp.json file.
  2. Remove or comment out the MCP server entries.
  3. Save the file.

This will prevent the server from being loaded when you start Amazon Q CLI in the future.

Conclusion

In this post, we explored how to use Amazon Q CLI with the AWS Documentation MCP and AWS Diagram MCP servers to create professional AWS architecture diagrams that adhere to AWS best practices referenced from official AWS documentation. This approach offers significant advantages over traditional diagramming methods:

  • Time savings – Generate complex diagrams in minutes instead of hours
  • Consistency – Make sure diagrams follow the same style and conventions
  • Best practices – Automatically incorporate AWS architectural guidelines
  • Iterative refinement – Quickly modify diagrams through simple prompts
  • Validation – Check architectures against official AWS documentation and recommendations

As you continue your journey with AWS architecture diagrams, we encourage you to deepen your knowledge by learning more about the Model Context Protocol (MCP) to understand how it enhances the capabilities of Amazon Q. When seeking inspiration for your own designs, the AWS Architecture Center offers a wealth of reference architectures that follow best practices. For creating visually consistent diagrams, be sure to visit the AWS Icons page, where you can find the complete official icon set. And to stay at the cutting edge of these tools, keep an eye on updates to the official AWS MCP Servers—they’re constantly evolving with new features to make your diagramming experience even better.


About the Authors

Joel Asante, an Austin-based Solutions Architect at Amazon Web Services (AWS), works with GovTech (Government Technology) customers. With a strong background in data science and application development, he brings deep technical expertise to creating secure and scalable cloud architectures for his customers. Joel is passionate about data analytics, machine learning, and robotics, leveraging his development experience to design innovative solutions that meet complex government requirements. He holds 13 AWS certifications and enjoys family time, fitness, and cheering for the Kansas City Chiefs and Los Angeles Lakers in his spare time.

Dunieski Otano is a Solutions Architect at Amazon Web Services based out of Miami, Florida. He works with World Wide Public Sector MNO (Multi-International Organizations) customers. His passion is Security, Machine Learning and Artificial Intelligence, and Serverless. He works with his customers to help them build and deploy high available, scalable, and secure solutions. Dunieski holds 14 AWS certifications and is an AWS Golden Jacket recipient. In his free time, you will find him spending time with his family and dog, watching a great movie, coding, or flying his drone.

Varun Jasti is a Solutions Architect at Amazon Web Services, working with AWS Partners to design and scale artificial intelligence solutions for public sector use cases to meet compliance standards. With a background in Computer Science, his work covers a broad range of ML use cases, primarily focusing on LLM training/inferencing and computer vision. In his spare time, he loves playing tennis and swimming.




Complete Guide with Curriculum & Fees


For AI education, 2025 offers choices catering to every learning style, career goal, and budget. The Logicmojo Advanced Data Science & AI Program has emerged as a top option for those pursuing job-oriented training, offering comprehensive instruction with proven placement results. It provides the live training, projects, and career support that professionals seek when aiming for a high-paying AI position.

On the other hand, for the independent learner seeking prestige credentials, a few other good options might include programs from Stanford, MIT, and DeepLearning.AI. Google and IBM certificates are an inexpensive footing for a beginner, while, at the opposite end of the spectrum, a Carnegie Mellon certificate is considered the ultimate academic credential in AI.

Whatever choice you make in 2025 to further your knowledge in AI will place you at the forefront of technology innovation. AI is expected to generate millions of jobs and has the potential to revolutionize every industry, so what you learn today will shape your career for decades to come.




Artificial Intelligence and Machine Learning Bootcamp Powered by Simplilearn


Artificial Intelligence and Machine Learning are noteworthy game-changers in today’s digital world. Technological wonders once limited to science fiction have become science fact, giving us innovations such as self-driving cars, intelligent voice-operated virtual assistants, and computers that learn and grow.

The two fields are making inroads into all areas of our lives, including the workplace, showing up in occupations such as Data Scientist and Digital Marketer. And for all the impressive things that Artificial Intelligence and Machine Learning have accomplished in the last ten years, there’s so much more in store.


Simplilearn wants today’s IT professionals to be better equipped to embrace these new technologies. Hence, it offers the Machine Learning Bootcamp, held in conjunction with Caltech’s Center for Technology and Management Education (CTME) and in collaboration with IBM.

The bootcamp covers the relevant points of Artificial Intelligence and Machine Learning, exploring tools and concepts such as Python and TensorFlow. The course optimizes the academic excellence of Caltech and the industry prowess of IBM, creating an unbeatable learning resource that supercharges your skillset and prepares you to navigate the world of AI/ML better.

Why is This a Great Bootcamp?

When you bring together an impressive lineup of Simplilearn, Caltech, and IBM, you expect nothing less than an excellent result. The AI and Machine Learning Bootcamp delivers as promised.

This six-month program deals with vital AI/ML concepts such as Deep Learning, Statistics, and Data Science With Python. Here is a breakdown of the diverse and valuable information the bootcamp offers:

  • Orientation. The orientation session prepares you for the rigors of an intense, six-month learning experience, where you dedicate from five to ten hours a week to learning the latest in AI/ML skills and concepts.
  • Introduction to Artificial Intelligence. There’s a difference between AI and ML, and here’s where you start to learn this. This offering is a beginner course covering the basics of AI and workflows, Deep Learning, Machine Learning, and other details.
  • Python for Data Science. Many data scientists prefer to use the Python programming language when working with AI/ML. This section deals with Python, its libraries, and using a Jupyter-based lab environment to write scripts.
  • Applied Data Science with Python. Your exposure to Python continues with this study of Python’s tools and techniques used for Data Analytics.
  • Machine Learning. Now we come to the other half of the AI/ML partnership. You will learn all about Machine Learning’s chief techniques and concepts, including heuristic aspects, supervised/unsupervised learning, and developing algorithms.
  • Deep Learning with Keras and TensorFlow. This section shows you how to use the Keras and TensorFlow frameworks to master Deep Learning models and concepts and prepare Deep Learning algorithms.
  • Advanced Deep Learning and Computer Vision. This advanced course takes Deep Learning to a new level. This module covers topics like Computer Vision for OCR and Object Detection, and Computer Vision Basics with Python.
  • Capstone project. Finally, it’s time to take what you have learned and implement your new AI/ML skills to solve an industry-relevant issue.


The course also offers students a series of electives:

  • Statistics Essentials for Data Science. Statistics are a vital part of Data Science, and this elective teaches you how to make data-driven predictions via statistical inference.
  • NLP and Speech Recognition. This elective covers speech-to-text conversion, text-to-speech conversion, automated speech recognition, voice-assistance devices, and much more.
  • Reinforcement Learning. Learn how to solve reinforcement learning problems by applying different algorithms and strategies, using tools like TensorFlow and Python.
  • Caltech Artificial Intelligence and Machine Learning Bootcamp Masterclass. These masterclasses are conducted by qualified Caltech and IBM instructors.

This AI and ML Bootcamp gives students a bounty of AI/ML-related benefits like:

  • Campus immersion, which includes an exclusive visit to Caltech’s robotics lab.
  • A program completion certificate from Caltech CTME.
  • A Caltech CTME Circle membership.
  • The chance to earn up to 22 CEUs courtesy of Caltech CTME.
  • An online convocation by the Caltech CTME Program Director.
  • A physical certificate from Caltech CTME if you request one.
  • Access to hackathons and Ask Me Anything sessions from IBM.
  • More than 25 hands-on projects and integrated labs across industry verticals.
  • A Level Up session by Andrew McAfee, Principal Research Scientist at MIT.
  • Access to Simplilearn’s Career Service, which will help you get noticed by today’s top hiring companies.
  • Industry-certified certificates for IBM courses.
  • Industry masterclasses delivered by IBM.
  • Hackathons from IBM.
  • Ask Me Anything (AMA) sessions held with the IBM leadership.

And these are the skills the course covers, all essential tools for working with today’s AI and ML projects:

  • Statistics
  • Python
  • Supervised Learning
  • Unsupervised Learning
  • Recommendation Systems
  • NLP
  • Neural Networks
  • GANs
  • Deep Learning
  • Reinforcement Learning
  • Speech Recognition
  • Ensemble Learning
  • Computer Vision

About Caltech CTME

Located in California, Caltech is a world-famous, highly respected science and engineering institution featuring some of today’s brightest scientific and technological minds. Contributions from Caltech alumni have earned worldwide acclaim, including over three dozen Nobel prizes. Caltech CTME instructors offer this quality of learning to our students by holding bootcamp master classes.

About IBM

IBM was founded in 1911 and has earned a reputation as the top IT industry leader and master of IT innovation.

How to Thrive in the Brave New World of AI and ML

Machine Learning and Artificial Intelligence have enormous potential to change our world for the better, but the fields need people of skill and vision to help lead the way. Somehow, there must be a balance between technological advancement and how it impacts people (quality of life, carbon footprint, job losses due to automation, etc.).

The AI and Machine Learning Bootcamp helps teach and train students, equipping them to assume a role of leadership in the new world that AI and ML offer.




Teaching Developers to Think with AI – O’Reilly


Developers are doing incredible things with AI. Tools like Copilot, ChatGPT, and Claude have rapidly become indispensable for developers, offering unprecedented speed and efficiency in tasks like writing code, debugging tricky behavior, generating tests, and exploring unfamiliar libraries and frameworks. When it works, it’s effective, and it feels incredibly satisfying.

But if you’ve spent any real time coding with AI, you’ve probably hit a point where things stall. You keep refining your prompt and adjusting your approach, but the model keeps generating the same kind of answer, just phrased a little differently each time, and returning slight variations on the same incomplete solution. It feels close, but it’s not getting there. And worse, it’s not clear how to get back on track.

That moment is familiar to a lot of people trying to apply AI in real work. It’s what my recent talk at O’Reilly’s AI Codecon event was all about.

Over the last two years, while working on the latest edition of Head First C#, I’ve been developing a new kind of learning path, one that helps developers get better at both coding and using AI. I call it Sens-AI, and it came out of something I kept seeing:

There’s a learning gap with AI that’s creating real challenges for people who are still building their development skills.

My recent O’Reilly Radar article “Bridging the AI Learning Gap” looked at what happens when developers try to learn AI and coding at the same time. It’s not just a tooling problem—it’s a thinking problem. A lot of developers are figuring things out by trial and error, and it became clear to me that they needed a better way to move from improvising to actually solving problems.

From Vibe Coding to Problem Solving

Ask developers how they use AI, and many will describe a kind of improvisational prompting strategy: Give the model a task, see what it returns, and nudge it toward something better. It can be an effective approach because it’s fast, fluid, and almost effortless when it works.

That pattern is common enough to have a name: vibe coding. It’s a great starting point, and it works because it draws on real prompt engineering fundamentals—iterating, reacting to output, and refining based on feedback. But when something breaks, the code doesn’t behave as expected, or the AI keeps rehashing the same unhelpful answers, it’s not always clear what to try next. That’s when vibe coding starts to fall apart.

Senior developers tend to pick up AI more quickly than junior ones, but that’s not a hard-and-fast rule. I’ve seen brand-new developers pick it up quickly, and I’ve seen experienced ones get stuck. The difference is in what they do next. The people who succeed with AI tend to stop and rethink: They figure out what’s going wrong, step back to look at the problem, and reframe their prompt to give the model something better to work with.

When developers think critically, AI works better. (slide from my May 8, 2025, talk at O’Reilly AI Codecon)

The Sens-AI Framework

As I started working more closely with developers who were using AI tools to try to find ways to help them ramp up more easily, I paid attention to where they were getting stuck, and I started noticing that the pattern of an AI rehashing the same “almost there” suggestions kept coming up in training sessions and real projects. I saw it happen in my own work too. At first it felt like a weird quirk in the model’s behavior, but over time I realized it was a signal: The AI had used up the context I’d given it. The signal tells us that we need a better understanding of the problem, so we can give the model the information it’s missing. That realization was a turning point. Once I started paying attention to those breakdown moments, I began to see the same root cause across many developers’ experiences: not a flaw in the tools but a lack of framing, context, or understanding that the AI couldn’t supply on its own.

The Sens-AI framework steps (slide from my May 8, 2025, talk at O’Reilly AI Codecon)

Over time—and after a lot of testing, iteration, and feedback from developers—I distilled the core of the Sens-AI learning path into five specific habits. They came directly from watching where learners got stuck, what kinds of questions they asked, and what helped them move forward. These habits form a framework that’s the intellectual foundation behind how Head First C# teaches developers to work with AI:

  1. Context: Paying attention to what information you supply to the model, trying to figure out what else it needs to know, and supplying it clearly. This includes code, comments, structure, intent, and anything else that helps the model understand what you’re trying to do.
  2. Research: Actively using AI and external sources to deepen your own understanding of the problem. This means running examples, consulting documentation, and checking references to verify what’s really going on.
  3. Problem framing: Using the information you’ve gathered to define the problem more clearly so the model can respond more usefully. This involves digging deeper into the problem you’re trying to solve, recognizing what the AI still needs to know about it, and shaping your prompt to steer it in a more productive direction—and going back to do more research when you realize that it needs more context.
  4. Refining: Iterating your prompts deliberately. This isn’t about random tweaks; it’s about making targeted changes based on what the model got right and what it missed, and using those results to guide the next step.
  5. Critical thinking: Judging the quality of AI output rather than simply accepting it. Does the suggestion make sense? Is it correct, relevant, plausible? This habit is especially important because it helps developers avoid the trap of trusting confident-sounding answers that don’t actually work.

These habits let developers get more out of AI while keeping control over the direction of their work.

From Stuck to Solved: Getting Better Results from AI

I’ve watched a lot of developers use tools like Copilot and ChatGPT—during training sessions, in hands-on exercises, and when they’ve asked me directly for help. What stood out to me was how often they assumed the AI had done a bad job. In reality, the prompt just didn’t include the information the model needed to solve the problem. No one had shown them how to supply the right context. That’s what the five Sens-AI habits are designed to address: not by handing developers a checklist but by helping them build a mental model for how to work with AI more effectively.

In my AI Codecon talk, I shared a story about my colleague Luis, a very experienced developer with over three decades of coding experience. He’s a seasoned engineer and an advanced AI user who builds content for training other developers, works with large language models directly, uses sophisticated prompting techniques, and has built AI-based analysis tools.

Luis was building a desktop wrapper for a React app using Tauri, a Rust-based toolkit. He pulled in both Copilot and ChatGPT, cross-checking output, exploring alternatives, and trying different approaches. But the code still wasn’t working.

Each AI suggestion seemed to fix part of the problem but break another part. The model kept offering slightly different versions of the same incomplete solution, never quite resolving the issue. For a while, he vibe-coded through it, adjusting the prompt and trying again to see if a small nudge would help, but the answers kept circling the same spot. Eventually, he realized the AI had run out of context and changed his approach. He stepped back, did some focused research to better understand what the AI was trying (and failing) to do, and applied the same habits I emphasize in the Sens-AI framework.

That shift changed the outcome. Once he understood the pattern the AI was trying to use, he could guide it. He reframed his prompt, added more context, and the suggestions finally started working—once Luis gave the model the missing pieces it needed to make sense of the problem.

Applying the Sens-AI Framework: A Real-World Example

Before I developed the Sens-AI framework, I ran into a problem that later became a textbook case for it. I was curious whether COBOL, a decades-old language developed for mainframes that I had never used before but wanted to learn more about, could handle the basic mechanics of an interactive game. So I did some experimental vibe coding to build a simple terminal app that would let the user move an asterisk around the screen using the W/A/S/D keys. It was a weird little side project—I just wanted to see if I could make COBOL do something it was never really meant for, and learn something about it along the way.

The initial AI-generated code compiled and ran just fine, and at first I made some progress. I was able to get it to clear the screen, draw the asterisk in the right place, handle raw keyboard input that didn’t require the user to press Enter, and get past some initial bugs that caused a lot of flickering.

But once I hit a more subtle bug—where ANSI escape codes like ";10H" were printing literally instead of controlling the cursor—ChatGPT got stuck. I’d describe the problem, and it would generate a slightly different version of the same answer each time. One suggestion used different variable names. Another changed the order of operations. A few attempted to reformat the STRING statement. But none of them addressed the root cause.

The COBOL app with a bug, printing a raw escape sequence instead of moving the asterisk.
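The failure mode is easy to reproduce outside COBOL. The sketch below (in Python, not the author’s COBOL; the function names are illustrative) mimics what a numeric-edited field like PIC ZZ does: it right-justifies a number in a fixed-width, space-padded field. A space inside an ANSI cursor-positioning sequence (`ESC[row;colH`) makes the sequence invalid, so a terminal typically prints the remainder literally—exactly the symptom described above.

```python
ESC = "\x1b"  # the ANSI escape character

def move_cursor_padded(row, col):
    # Mimics COBOL's PIC ZZ: a two-character, space-padded numeric field.
    # Single-digit values come out with a leading space, e.g. " 5".
    return f"{ESC}[{row:2d};{col:2d}H"

def move_cursor_trimmed(row, col):
    # The fix: emit the digits with no padding, so the sequence stays valid.
    return f"{ESC}[{row};{col}H"

bad = move_cursor_padded(5, 9)    # "\x1b[ 5; 9H" -- embedded spaces break it
good = move_cursor_trimmed(5, 9)  # "\x1b[5;9H"  -- a valid cursor move
```

In COBOL terms, the fix amounts to trimming the edited field (or using an unedited numeric display) before building the escape string with STRING.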

The pattern was always the same: slight code rewrites that looked plausible but didn’t actually change the behavior. That’s what a rehash loop looks like. The AI wasn’t giving me worse answers—it was just circling, stuck on the same conceptual idea. So I did what many developers do: I assumed the AI just couldn’t answer my question and moved on to another problem.

At the time, I didn’t recognize the rehash loop for what it was. But revisiting the project after developing the Sens-AI framework, I saw the whole exchange in a new light. The rehash loop was a signal that the AI needed more context. It got stuck because I hadn’t told it what it needed to know.

When I started working on the framework, I remembered this old failure and thought it’d be a perfect test case. Now I had a set of steps that I could follow:

  • First, I recognized that the AI had run out of context. The model wasn’t failing randomly—it was repeating itself because it didn’t understand what I was asking it to do.
  • Next, I did some targeted research. I brushed up on ANSI escape codes and started reading the AI’s earlier explanations more carefully. That’s when I noticed a detail I’d skimmed past the first time while vibe coding: the AI’s explanation of its generated code mentioned that the COBOL PIC ZZ syntax defines a numeric-edited field. I suspected that could introduce leading spaces into strings and wondered whether that could break an escape sequence.
  • Then I reframed the problem. I opened a new chat and explained what I was trying to build, what I was seeing, and what I suspected. I told the AI I’d noticed it was circling the same solution and treated that as a signal that we were missing something fundamental. I also told it that I’d done some research and had three leads I suspected were related: how COBOL displays multiple items in sequence, how terminal escape codes need to be formatted, and how spacing in numeric fields might be corrupting the output. The prompt didn’t provide answers; it just gave the AI some research areas to investigate. That gave it what it needed to break out of the rehash loop.
  • Once the model was unstuck, I refined my prompt. I asked follow-up questions to clarify exactly what the output should look like and how to construct the strings more reliably. I wasn’t just looking for a fix—I was guiding the model toward a better approach.
  • And most of all, I used critical thinking. I read the answers closely, compared them to what I already knew, and decided what to try based on what actually made sense. The explanation checked out. I implemented the fix, and the program worked.
My prompt that broke ChatGPT out of its rehash loop

Once I took the time to understand the problem—and did just enough research to give the AI a few hints about what context it was missing—I was able to write a prompt that broke ChatGPT out of the rehash loop, and it generated code that did exactly what I needed. The generated code for the working COBOL app is available in this GitHub Gist.

The working COBOL app that moves an asterisk around the screen

Why These Habits Matter for New Developers

I built the Sens-AI learning path in Head First C# around the five habits in the framework. These habits aren’t checklists, scripts, or hard-and-fast rules. They’re ways of thinking that help people use AI more productively—and they don’t require years of experience. I’ve seen new developers pick them up quickly, sometimes faster than seasoned developers who didn’t realize they were stuck in shallow prompting loops.

The key insight into these habits came to me when I was updating the coding exercises in the most recent edition of Head First C#. I test the exercises using AI by pasting the instructions and starter code into tools like ChatGPT and Copilot. If they produce the correct solution, that means I’ve given the model enough information to solve it—which means I’ve given readers enough information too. But if it fails to solve the problem, something’s missing from the exercise instructions.

The process of using AI to test the exercises in the book reminded me of a problem I ran into in the first edition, back in 2007. One exercise kept tripping people up, and after reading a lot of feedback, I realized the problem: I hadn’t given readers all the information they needed to solve it. That helped connect the dots for me. The AI struggles with some coding problems for the same reason the learners were struggling with that exercise—because the context wasn’t there. Writing a good coding exercise and writing a good prompt both depend on understanding what the other side needs to make sense of the problem.

That experience helped me realize that to make developers successful with AI, we need to do more than just teach the basics of prompt engineering. We need to explicitly instill these thinking habits and give developers a way to build them alongside their core coding skills. If we want developers to succeed, we can’t just tell them to “prompt better.” We need to show them how to think with AI.

Where We Go from Here

If AI really is changing how we write software—and I believe it is—then we need to change how we teach it. We’ve made it easy to give people access to the tools. The harder part is helping them develop the habits and judgment to use them well, especially when things go wrong. That’s not just an education problem; it’s also a design problem, a documentation problem, and a tooling problem. Sens-AI is one answer, but it’s just the beginning. We still need clearer examples and better ways to guide, debug, and refine the model’s output. If we teach developers how to think with AI, we can help them become not just code generators but thoughtful engineers who understand what their code is doing and why it matters.


