Connect with us

Books, Courses & Certifications

Tailor responsible AI with new safeguard tiers in Amazon Bedrock Guardrails

Published

on


Amazon Bedrock Guardrails provides configurable safeguards to help build trusted generative AI applications at scale. It provides organizations with integrated safety and privacy safeguards that work across multiple foundation models (FMs), including models available in Amazon Bedrock, as well as models hosted outside Amazon Bedrock from other model providers and cloud providers. With the standalone ApplyGuardrail API, Amazon Bedrock Guardrails offers a model-agnostic and scalable approach to implementing responsible AI policies for your generative AI applications. Guardrails currently offers six key safeguards: content filters, denied topics, word filters, sensitive information filters, contextual grounding checks, and Automated Reasoning checks (preview), to help prevent unwanted content and align AI interactions with your organization’s responsible AI policies.

As organizations strive to implement responsible AI practices across diverse use cases, they face the challenge of balancing safety controls with varying performance and language requirements across different applications, making a one-size-fits-all approach ineffective. To address this, we’ve introduced safeguard tiers for Amazon Bedrock Guardrails, so you can choose appropriate safeguards based on your specific needs. For instance, a financial services company can implement comprehensive, multi-language protection for customer-facing AI assistants while using more focused, lower-latency safeguards for internal analytics tools, making sure each application upholds responsible AI principles with the right level of protection without compromising performance or functionality.

In this post, we introduce the new safeguard tiers available in Amazon Bedrock Guardrails, explain their benefits and use cases, and provide guidance on how to implement and evaluate them in your AI applications.

Solution overview

Until now, when using Amazon Bedrock Guardrails, you were provided with a single set of the safeguards associated to specific AWS Regions and a limited set of languages supported. The introduction of safeguard tiers in Amazon Bedrock Guardrails provides three key advantages for implementing AI safety controls:

  • A tier-based approach that gives you control over which guardrail implementations you want to use for content filters and denied topics, so you can select the appropriate protection level for each use case. We provide more details about this in the following sections.
  • Cross-Region Inference Support (CRIS) for Amazon Bedrock Guardrails, so you can use compute capacity across multiple Regions, achieving better scaling and availability for your guardrails. With this, your requests get automatically routed during guardrail policy evaluation to the optimal Region within your geography, maximizing available compute resources and model availability. This helps maintain guardrail performance and reliability when demand increases. There’s no additional cost for using CRIS with Amazon Bedrock Guardrails, and you can select from specific guardrail profiles for controlling model versioning and future upgrades.
  • Advanced capabilities as a configurable tier option for use cases where more robust protection or broader language support are critical priorities, and where you can accommodate a modest latency increase.

Safeguard tiers are applied at the guardrail policy level, specifically for content filters and denied topics. You can tailor your protection strategy for different aspects of your AI application. Let’s explore the two available tiers:

  • Classic tier (default):
    • Maintains the existing behavior of Amazon Bedrock Guardrails
    • Limited language support: English, French, and Spanish
    • Does not require CRIS for Amazon Bedrock Guardrails
    • Optimized for lower-latency applications
  • Standard tier:
    • Provided as a new capability that you can enable for existing or new guardrails
    • Multilingual support for more than 60 languages
    • Enhanced robustness against prompt typos and manipulated inputs
    • Enhanced prompt attack protection covering modern jailbreak and prompt injection techniques, including token smuggling, AutoDAN, and many-shot, among others
    • Enhanced topic detection with improved understanding and handling of complex topics
    • Requires the use of CRIS for Amazon Bedrock Guardrails and might have a modest increase in latency profile compared to the Classic tier option

You can select each tier independently for content filters and denied topics policies, allowing for mixed configurations within the same guardrail, as illustrated in the following hierarchy. With this flexibility, companies can implement the right level of protection for each specific application.

  • Policy: Content filters
    • Tier: Classic or Standard
  • Policy: Denied topics
    • Tier: Classic or Standard
  • Other policies: Word filters, sensitive information filters, contextual grounding checks, and Automated Reasoning checks (preview)

To illustrate how these tiers can be applied, consider a global financial services company deploying AI in both customer-facing and internal applications:

  • For their customer service AI assistant, they might choose the Standard tier for both content filters and denied topics, to provide comprehensive protection across many languages.
  • For internal analytics tools, they could use the Classic tier for content filters prioritizing low latency, while implementing the Standard tier for denied topics to provide robust protection against sensitive financial information disclosure.

You can configure the safeguard tiers for content filters and denied topics in each guardrail through the AWS Management Console, or programmatically through the Amazon Bedrock SDK and APIs. You can use a new or existing guardrail. For information on how to create or modify a guardrail, see Create your guardrail.

Your existing guardrails are automatically set to the Classic tier by default to make sure you have no impact on your guardrails’ behavior.

Quality enhancements with the Standard tier

According to our tests, the new Standard tier improves harmful content filtering recall by more than 15% with a more than 7% gain in balanced accuracy compared to the Classic tier. A key differentiating feature of the new Standard tier is its multilingual support, maintaining strong performance with over 78% recall and over 88% balanced accuracy for the most common 14 languages.The enhancements in protective capabilities extend across several other aspects. For example, content filters for prompt attacks in the Standard tier show a 30% improvement in recall and 16% gain in balanced accuracy compared to the Classic tier, while maintaining a lower false positive rate. For denied topic detection, the new Standard tier delivers a 32% increase in recall, resulting in an 18% improvement in balanced accuracy.These substantial evolutions in detection capabilities for Amazon Bedrock Guardrails, combined with consistently low false positive rates and robust multilingual performance, also represent a significant advancement in content protection technology compared to other commonly available solutions. The multilingual improvements are particularly noteworthy, with the new Standard tier in Amazon Bedrock Guardrails showing consistent performance gains of 33–49% in recall across different language evaluations compared to other competitors’ options.

Benefits of safeguard tiers

Different AI applications have distinct safety requirements based on their audience, content domain, and geographic reach. For example:

  • Customer-facing applications often require stronger protection against potential misuse compared to internal applications
  • Applications serving global customers need guardrails that work effectively across many languages
  • Internal enterprise tools might prioritize controlling specific topics in just a few primary languages

The combination of the safeguard tiers with CRIS for Amazon Bedrock Guardrails also addresses various operational needs with practical benefits that go beyond feature differences:

  • Independent policy evolution – Each policy (content filters or denied topics) can evolve at its own pace without disrupting the entire guardrail system. You can configure these with specific guardrail profiles in CRIS for controlling model versioning in the models powering your guardrail policies.
  • Controlled adoption – You decide when and how to adopt new capabilities, maintaining stability for production applications. You can continue to use Amazon Bedrock Guardrails with your previous configurations without changes and only move to the new tiers and CRIS configurations when you consider it appropriate.
  • Resource efficiency – You can implement enhanced protections only where needed, balancing security requirements with performance considerations.
  • Simplified migration path – When new capabilities become available, you can evaluate and integrate them gradually by policy area rather than facing all-or-nothing choices. This also simplifies testing and comparison mechanisms such as A/B testing or blue/green deployments for your guardrails.

This approach helps organizations balance their specific protection requirements with operational considerations in a more nuanced way than a single-option system could provide.

Configure safeguard tiers on the Amazon Bedrock console

On the Amazon Bedrock console, you can configure the safeguard tiers for your guardrail in the Content filters tier or Denied topics tier sections by selecting your preferred tier.

Use of the new Standard tier requires setting up cross-Region inference for Amazon Bedrock Guardrails, choosing the guardrail profile of your choice.

Configure safeguard tiers using the AWS SDK

You can also configure the guardrail’s tiers using the AWS SDK. The following is an example to get started with the Python SDK:

import boto3
import json

bedrock = boto3.client(
    "bedrock",
    region_name="us-east-1"
)

# Create a guardrail with Standard tier for both Content Filters and Denied Topics
response = bedrock.create_guardrail(
    name="enhanced-safety-guardrail",
    # cross-Region is required for STANDARD tier
    crossRegionConfig={
        'guardrailProfileIdentifier': 'us.guardrail.v1:0'
    },
    # Configure Denied Topics with Standard tier
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "Financial Advice",
                "definition": "Providing specific investment advice or financial recommendations",
                "type": "DENY",
                "inputEnabled": True,
                "inputAction": "BLOCK",
                "outputEnabled": True,
                "outputAction": "BLOCK"
            }
        ],
        "tierConfig": {
            "tierName": "STANDARD"
        }
    },
    # Configure Content Filters with Standard tier
    contentPolicyConfig={
        "filtersConfig": [
            {
                "inputStrength": "HIGH",
                "outputStrength": "HIGH",
                "type": "SEXUAL"
            },
            {
                "inputStrength": "HIGH",
                "outputStrength": "HIGH",
                "type": "VIOLENCE"
            }
        ],
        "tierConfig": {
            "tierName": "STANDARD"
        }
    },
    blockedInputMessaging="I cannot respond to that request.",
    blockedOutputsMessaging="I cannot provide that information."
)

Within a given guardrail, the content filter and denied topic policies can be configured with its own tier independently, giving you granular control over how guardrails behave. For example, you might choose the Standard tier for content filtering while keeping denied topics in the Classic tier, based on your specific requirements.

For migrating existing guardrails’ configurations to use the Standard tier, add the sections highlighted in the preceding example for crossRegionConfig and tierConfig to your current guardrail definition. You can do this using the UpdateGuardrail API, or create a new guardrail with the CreateGuardrail API.

Evaluating your guardrails

To thoroughly evaluate your guardrails’ performance, consider creating a test dataset that includes the following:

  • Safe examples – Content that should pass through guardrails
  • Harmful examples – Content that should be blocked
  • Edge cases – Content that tests the boundaries of your policies
  • Examples in multiple languages – Especially important when using the Standard tier

You can also rely on openly available datasets for this purpose. Ideally, your dataset should be labeled with the expected response for each case for assessing accuracy and recall of your guardrails.

With your dataset ready, you can use the Amazon Bedrock ApplyGuardrail API as shown in the following example to efficiently test your guardrail’s behavior for user inputs without invoking FMs. This way, you can save the costs associated with the large language model (LLM) response generation.

import boto3
import json

bedrock_runtime = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1"
)

# Test the guardrail with potentially problematic content
content = [
    {
        "text": {
            "text": "Your test prompt here"
        }
    }
]

response = bedrock_runtime.apply_guardrail(
    content=content,
    source="INPUT",
    guardrailIdentifier="your-guardrail-id",
    guardrailVersion="DRAFT"
)

print(json.dumps(response, indent=2, default=str))

Later, you can repeat the process for the outputs of the LLMs if needed. For this, you can use the ApplyGuardrail API if you want an independent evaluation for models in AWS or outside in another provider, or you can directly use the Converse API if you intend to use models in Amazon Bedrock. When using the Converse API, the inputs and outputs are evaluated with the same invocation request, optimizing latency and reducing coding overheads.

Because your dataset is labeled, you can directly implement a mechanism for assessing the accuracy, recall, and potential false negatives or false positives through the use of libraries like SKLearn Metrics:

# scoring script
# labels and preds store list of ground truth label and guardrails predictions

from sklearn.metrics import confusion_matrix

tn, fp, fn, tp = confusion_matrix(labels, preds, labels=[0, 1]).ravel()

recall = tp / (tp + fn) if (tp + fn) != 0 else 0
fpr = fp / (fp + tn) if (fp + tn) != 0 else 0
balanced_accuracy = 0.5 * (recall + 1 - fpr)

Alternatively, if you don’t have labeled data or your use cases have subjective responses, you can also rely on mechanisms such as LLM-as-a-judge, where you pass the inputs and guardrails’ evaluation outputs to an LLM for assessing a score based on your own predefined criteria. For more information, see Automate building guardrails for Amazon Bedrock using test-drive development.

Best practices for implementing tiers

We recommend considering the following aspects when configuring your tiers for Amazon Bedrock Guardrails:

  • Start with staged testing – Test both tiers with a representative sample of your expected inputs and responses before making broad deployment decisions.
  • Consider your language requirements – If your application serves users in multiple languages, the Standard tier’s expanded language support might be essential.
  • Balance safety and performance – Evaluate both the accuracy improvements and latency differences to make informed decisions. Consider if you can afford a few additional milliseconds of latency for improved robustness with the Standard tier or prefer a latency-optimized option for more straight forward evaluations with the Classic tier.
  • Use policy-level tier selection – Take advantage of the ability to select different tiers for different policies to optimize your guardrails. You can choose separate tiers for content filters and denied topics, while combining with the rest of the policies and features available in Amazon Bedrock Guardrails.
  • Remember cross-Region requirements – The Standard tier requires cross-Region inference, so make sure your architecture and compliance requirements can accommodate this. With CRIS, your request originates from the Region where your guardrail is deployed, but it might be served from a different Region from the ones included in the guardrail inference profile for optimizing latency and availability.

Conclusion

The introduction of safeguard tiers in Amazon Bedrock Guardrails represents a significant step forward in our commitment to responsible AI. By providing flexible, powerful, and evolving safety tools for generative AI applications, we’re empowering organizations to implement AI solutions that are not only innovative but also ethical and trustworthy. This capabilities-based approach enables you to tailor your responsible AI practices to each specific use case. You can now implement the right level of protection for different applications while creating a path for continuous improvement in AI safety and ethics.The new Standard tier delivers significant improvements in multilingual support and detection accuracy, making it an ideal choice for many applications, especially those serving diverse global audiences or requiring enhanced protection. This aligns with responsible AI principles by making sure AI systems are fair and inclusive across different languages and cultures. Meanwhile, the Classic tier remains available for use cases prioritizing low latency or those with simpler language requirements, allowing organizations to balance performance with protection as needed.

By offering these customizable protection levels, we’re supporting organizations in their journey to develop and deploy AI responsibly. This approach helps make sure that AI applications are not only powerful and efficient but also align with organizational values, comply with regulations, and maintain user trust.

To learn more about safeguard tiers in Amazon Bedrock Guardrails, refer to Detect and filter harmful content by using Amazon Bedrock Guardrails, or visit the Amazon Bedrock console to create your first tiered guardrail.


About the Authors

Koushik Kethamakka is a Senior Software Engineer at AWS, focusing on AI/ML initiatives. At Amazon, he led real-time ML fraud prevention systems for Amazon.com before moving to AWS to lead development of AI/ML services like Amazon Lex and Amazon Bedrock. His expertise spans product and system design, LLM hosting, evaluations, and fine-tuning. Recently, Koushik’s focus has been on LLM evaluations and safety, leading to the development of products like Amazon Bedrock Evaluations and Amazon Bedrock Guardrails. Prior to joining Amazon, Koushik earned his MS from the University of Houston.

Hang Su is a Senior Applied Scientist at AWS AI. He has been leading the Amazon Bedrock Guardrails Science team. His interest lies in AI safety topics, including harmful content detection, red-teaming, sensitive information detection, among others.

Shyam Srinivasan is on the Amazon Bedrock product team. He cares about making the world a better place through technology and loves being part of this journey. In his spare time, Shyam likes to run long distances, travel around the world, and experience new cultures with family and friends.

Aartika Sardana Chandras is a Senior Product Marketing Manager for AWS Generative AI solutions, with a focus on Amazon Bedrock. She brings over 15 years of experience in product marketing, and is dedicated to empowering customers to navigate the complexities of the AI lifecycle. Aartika is passionate about helping customers leverage powerful AI technologies in an ethical and impactful manner.

Satveer Khurpa is a Sr. WW Specialist Solutions Architect, Amazon Bedrock at Amazon Web Services, specializing in Amazon Bedrock security. In this role, he uses his expertise in cloud-based architectures to develop innovative generative AI solutions for clients across diverse industries. Satveer’s deep understanding of generative AI technologies and security principles allows him to design scalable, secure, and responsible applications that unlock new business opportunities and drive tangible value while maintaining robust security postures.

Antonio Rodriguez is a Principal Generative AI Specialist Solutions Architect at Amazon Web Services. He helps companies of all sizes solve their challenges, embrace innovation, and create new business opportunities with Amazon Bedrock. Apart from work, he loves to spend time with his family and play sports with his friends.



Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Books, Courses & Certifications

Teaching Developers to Think with AI – O’Reilly

Published

on


Developers are doing incredible things with AI. Tools like Copilot, ChatGPT, and Claude have rapidly become indispensable for developers, offering unprecedented speed and efficiency in tasks like writing code, debugging tricky behavior, generating tests, and exploring unfamiliar libraries and frameworks. When it works, it’s effective, and it feels incredibly satisfying.

But if you’ve spent any real time coding with AI, you’ve probably hit a point where things stall. You keep refining your prompt and adjusting your approach, but the model keeps generating the same kind of answer, just phrased a little differently each time, and returning slight variations on the same incomplete solution. It feels close, but it’s not getting there. And worse, it’s not clear how to get back on track.

That moment is familiar to a lot of people trying to apply AI in real work. It’s what my recent talk at O’Reilly’s AI Codecon event was all about.

Over the last two years, while working on the latest edition of Head First C#, I’ve been developing a new kind of learning path, one that helps developers get better at both coding and using AI. I call it Sens-AI, and it came out of something I kept seeing:

There’s a learning gap with AI that’s creating real challenges for people who are still building their development skills.

My recent O’Reilly Radar article “Bridging the AI Learning Gap” looked at what happens when developers try to learn AI and coding at the same time. It’s not just a tooling problem—it’s a thinking problem. A lot of developers are figuring things out by trial and error, and it became clear to me that they needed a better way to move from improvising to actually solving problems.

From Vibe Coding to Problem Solving

Ask developers how they use AI, and many will describe a kind of improvisational prompting strategy: Give the model a task, see what it returns, and nudge it toward something better. It can be an effective approach because it’s fast, fluid, and almost effortless when it works.

That pattern is common enough to have a name: vibe coding. It’s a great starting point, and it works because it draws on real prompt engineering fundamentals—iterating, reacting to output, and refining based on feedback. But when something breaks, the code doesn’t behave as expected, or the AI keeps rehashing the same unhelpful answers, it’s not always clear what to try next. That’s when vibe coding starts to fall apart.

Senior developers tend to pick up AI more quickly than junior ones, but that’s not a hard-and-fast rule. I’ve seen brand-new developers pick it up quickly, and I’ve seen experienced ones get stuck. The difference is in what they do next. The people who succeed with AI tend to stop and rethink: They figure out what’s going wrong, step back to look at the problem, and reframe their prompt to give the model something better to work with.

When developers think critically, AI works better. (slide from my May 8, 2025, talk at O’Reilly AI Codecon)

The Sens-AI Framework

As I started working more closely with developers who were using AI tools to try to find ways to help them ramp up more easily, I paid attention to where they were getting stuck, and I started noticing that the pattern of an AI rehashing the same “almost there” suggestions kept coming up in training sessions and real projects. I saw it happen in my own work too. At first it felt like a weird quirk in the model’s behavior, but over time I realized it was a signal: The AI had used up the context I’d given it. The signal tells us that we need a better understanding of the problem, so we can give the model the information it’s missing. That realization was a turning point. Once I started paying attention to those breakdown moments, I began to see the same root cause across many developers’ experiences: not a flaw in the tools but a lack of framing, context, or understanding that the AI couldn’t supply on its own.

The Sens-AI framework steps (slide from my May 8, 2025, talk at O’Reilly AI Codecon)

Over time—and after a lot of testing, iteration, and feedback from developers—I distilled the core of the Sens-AI learning path into five specific habits. They came directly from watching where learners got stuck, what kinds of questions they asked, and what helped them move forward. These habits form a framework that’s the intellectual foundation behind how Head First C# teaches developers to work with AI:

  1. Context: Paying attention to what information you supply to the model, trying to figure out what else it needs to know, and supplying it clearly. This includes code, comments, structure, intent, and anything else that helps the model understand what you’re trying to do.
  2. Research: Actively using AI and external sources to deepen your own understanding of the problem. This means running examples, consulting documentation, and checking references to verify what’s really going on.
  3. Problem framing: Using the information you’ve gathered to define the problem more clearly so the model can respond more usefully. This involves digging deeper into the problem you’re trying to solve, recognizing what the AI still needs to know about it, and shaping your prompt to steer it in a more productive direction—and going back to do more research when you realize that it needs more context.
  4. Refining: Iterating your prompts deliberately. This isn’t about random tweaks; it’s about making targeted changes based on what the model got right and what it missed, and using those results to guide the next step.
  5. Critical thinking: Judging the quality of AI output rather than just simply accepting it. Does the suggestion make sense? Is it correct, relevant, plausible? This habit is especially important because it helps developers avoid the trap of trusting confident-sounding answers that don’t actually work.

These habits let developers get more out of AI while keeping control over the direction of their work.

From Stuck to Solved: Getting Better Results from AI

I’ve watched a lot of developers use tools like Copilot and ChatGPT—during training sessions, in hands-on exercises, and when they’ve asked me directly for help. What stood out to me was how often they assumed the AI had done a bad job. In reality, the prompt just didn’t include the information the model needed to solve the problem. No one had shown them how to supply the right context. That’s what the five Sens-AI habits are designed to address: not by handing developers a checklist but by helping them build a mental model for how to work with AI more effectively.

In my AI Codecon talk, I shared a story about my colleague Luis, a very experienced developer with over three decades of coding experience. He’s a seasoned engineer and an advanced AI user who builds content for training other developers, works with large language models directly, uses sophisticated prompting techniques, and has built AI-based analysis tools.

Luis was building a desktop wrapper for a React app using Tauri, a Rust-based toolkit. He pulled in both Copilot and ChatGPT, cross-checking output, exploring alternatives, and trying different approaches. But the code still wasn’t working.

Each AI suggestion seemed to fix part of the problem but break another part. The model kept offering slightly different versions of the same incomplete solution, never quite resolving the issue. For a while, he vibe-coded through it, adjusting the prompt and trying again to see if a small nudge would help, but the answers kept circling the same spot. Eventually, he realized the AI had run out of context and changed his approach. He stepped back, did some focused research to better understand what the AI was trying (and failing) to do, and applied the same habits I emphasize in the Sens-AI framework.

That shift changed the outcome. Once he understood the pattern the AI was trying to use, he could guide it. He reframed his prompt, added more context, and finally started getting suggestions that worked. The suggestions only started working once Luis gave the model the missing pieces it needed to make sense of the problem.

Applying the Sens-AI Framework: A Real-World Example

Before I developed the Sens-AI framework, I ran into a problem that later became a textbook case for it. I was curious whether COBOL, a decades-old language developed for mainframes that I had never used before but wanted to learn more about, could handle the basic mechanics of an interactive game. So I did some experimental vibe coding to build a simple terminal app that would let the user move an asterisk around the screen using the W/A/S/D keys. It was a weird little side project—I just wanted to see if I could make COBOL do something it was never really meant for, and learn something about it along the way.

The initial AI-generated code compiled and ran just fine, and at first I made some progress. I was able to get it to clear the screen, draw the asterisk in the right place, handle raw keyboard input that didn’t require the user to press Enter, and get past some initial bugs that caused a lot of flickering.

But once I hit a more subtle bug—where ANSI escape codes like ";10H" were printing literally instead of controlling the cursor—ChatGPT got stuck. I’d describe the problem, and it would generate a slightly different version of the same answer each time. One suggestion used different variable names. Another changed the order of operations. A few attempted to reformat the STRING statement. But none of them addressed the root cause.

The COBOL app with a bug, printing a raw escape sequence instead of moving the asterisk.

The pattern was always the same: slight code rewrites that looked plausible but didn’t actually change the behavior. That’s what a rehash loop looks like. The AI wasn’t giving me worse answers—it was just circling, stuck on the same conceptual idea. So I did what many developers do: I assumed the AI just couldn’t answer my question and moved on to another problem.

At the time, I didn’t recognize the rehash loop for what it was. I assumed ChatGPT just didn’t know the answer and gave up. But revisiting the project after developing the Sens-AI framework, I saw the whole exchange in a new light. The rehash loop was a signal that the AI needed more context. It got stuck because I hadn’t told it what it needed to know.

When I started working on the framework, I remembered this old failure and thought it’d be a perfect test case. Now I had a set of steps that I could follow:

  • First, I recognized that the AI had run out of context. The model wasn’t failing randomly—it was repeating itself because it didn’t understand what I was asking it to do.
  • Next, I did some targeted research. I brushed up on ANSI escape codes and started reading the AI’s earlier explanations more carefully. That’s when I noticed a detail I’d skimmed past the first time while vibe coding: When I went back through the AI explanation of the code that it generated, I saw that the PIC ZZ COBOL syntax defines a numeric-edited field. I suspected that could potentially cause it to introduce leading spaces into strings and wondered if that could break an escape sequence.
  • Then I reframed the problem. I opened a new chat and explained what I was trying to build, what I was seeing, and what I suspected. I told the AI I’d noticed it was circling the same solution and treated that as a signal that we were missing something fundamental. I also told it that I’d done some research and had three leads I suspected were related: how COBOL displays multiple items in sequence, how terminal escape codes need to be formatted, and how spacing in numeric fields might be corrupting the output. The prompt didn’t provide answers; it just gave some potential research areas for the AI to investigate. That gave it what it needed to find the additional context it needed to break out of the rehash loop.
  • Once the model was unstuck, I refined my prompt. I asked follow-up questions to clarify exactly what the output should look like and how to construct the strings more reliably. I wasn’t just looking for a fix—I was guiding the model toward a better approach.
  • And most of all, I used critical thinking. I read the answers closely, compared them to what I already knew, and decided what to try based on what actually made sense. The explanation checked out. I implemented the fix, and the program worked.
My prompt that broke ChatGPT out of its rehash loop

Once I took the time to understand the problem—and did just enough research to give the AI a few hints about what context it was missing—I was able to write a prompt that broke ChatGPT out of the rehash loop, and it generated code that did exactly what I needed. The generated code for the working COBOL app is available in this GitHub GIST.

The working COBOL app that moves an asterisk around the screen

Why These Habits Matter for New Developers

I built the Sens-AI learning path in Head First C# around the five habits in the framework. These habits aren’t checklists, scripts, or hard-and-fast rules. They’re ways of thinking that help people use AI more productively—and they don’t require years of experience. I’ve seen new developers pick them up quickly, sometimes faster than seasoned developers who didn’t realize they were stuck in shallow prompting loops.

The key insight into these habits came to me when I was updating the coding exercises in the most recent edition of Head First C#. I test the exercises using AI by pasting the instructions and starter code into tools like ChatGPT and Copilot. If they produce the correct solution, that means I’ve given the model enough information to solve it—which means I’ve given readers enough information too. But if it fails to solve the problem, something’s missing from the exercise instructions.

The process of using AI to test the exercises in the book reminded me of a problem I ran into in the first edition, back in 2007. One exercise kept tripping people up, and after reading a lot of feedback, I realized the problem: I hadn’t given readers all the information they needed to solve it. That helped connect the dots for me. The AI struggles with some coding problems for the same reason the learners were struggling with that exercise—because the context wasn’t there. Writing a good coding exercise and writing a good prompt both depend on understanding what the other side needs to make sense of the problem.

That experience helped me realize that to make developers successful with AI, we need to do more than just teach the basics of prompt engineering. We need to explicitly instill these thinking habits and give developers a way to build them alongside their core coding skills. If we want developers to succeed, we can’t just tell them to “prompt better.” We need to show them how to think with AI.

Where We Go from Here

If AI really is changing how we write software—and I believe it is—then we need to change how we teach it. We’ve made it easy to give people access to the tools. The harder part is helping them develop the habits and judgment to use them well, especially when things go wrong. That’s not just an education problem; it’s also a design problem, a documentation problem, and a tooling problem. Sens-AI is one answer, but it’s just the beginning. We still need clearer examples and better ways to guide, debug, and refine the model’s output. If we teach developers how to think with AI, we can help them become not just code generators but thoughtful engineers who understand what their code is doing and why it matters.



Source link

Continue Reading

Books, Courses & Certifications

Honoring service, empowering futures: Coursera’s partnership with United Services Organization

Published

on


By Isa Rivera, Military ERG Lead, Coursera

On Independence Day, we celebrate the freedoms we enjoy — and we’re proud to support U.S. service members and military spouses with job-relevant skills to help build their futures.

As a leader of Coursera’s Military Employee Resource Group (ERG) and a member of a military family, I’m especially proud of our work supporting military organizations through our social impact program, which provides free learning to over 100 nonprofits. 

Today, I want to highlight our collaboration with the United Services Organization (USO), a nonprofit dedicated to strengthening service members that offers free access to Coursera certificate programs alongside the USO’s wraparound services. Together, we’ve helped over 10,000 military learners build the skills and credentials they need to advance their careers. 

“This partnership has always been such a strong pairing since we started back in April 2021 and it has been incredible to watch the collective impact Coursera and the USO have made to amplify career opportunities for our military community,” said Lisa Elswick, USO Vice President of Programs.

Creating pathways from service to civilian careers

In 2024, the USO Transition Program created over 10,000 personalized Action Plans to help service members and military spouses advance their careers, which include career counseling and access to the job-relevant catalog on Coursera. More than 75% of the enrollments came from active service members, with the Army and Navy being the most active military branches. 

One learner, U.S. Army Capt. Philip H., had a degree in mechanical engineering before spending nine years in the Army, with four of those in special forces.  He said, I was interested in broadening my skill set as much as possible to make myself a more marketable candidate in the civilian workforce.”

Philip achieved his ultimate goal of becoming a software engineer after completing the IBM Data Analyst Certificate on Coursera.

Top certificate programs among USO members

Coursera offers over 90+ Entry-level Professional Certificates that build job-ready skills in high-demand fields – no degree or previous experience required. In 2024, certificate completions through the USO Program rose 97% over the previous year. Industry-relevant skills shined with the most popular certificates being: 

We’re humbled to partner with the USO and other nonprofits to help military learners build the skills they need to shape their future.



Source link

Continue Reading

Books, Courses & Certifications

Large Public Libraries Give Young Adults Across U.S. Access to Banned Books

Published

on


Young adults are finding it harder to borrow books reflective of their lived experiences in their schools and public libraries. It isn’t because these stories don’t exist — they do — but because they’ve been challenged and removed, restricted, or were never purchased at all.

This is especially true in parts of the country where state legislatures have enacted laws criminalizing what educators can and can’t say about politically, religiously, or morally divisive topics, as well as regions where public services are underfunded and access to books is already scarce.

But in recent years, a handful of urban library systems have stepped up to offer readers who are at least 13 years old a chance to read the books that might be unavailable in their home areas.

Since 2022, thousands of eligible young adults have registered for a little-known program called Books Unbanned, which Brooklyn Public Library in New York created that year to counter efforts to restrict access to certain books.

Books Unbanned’s popularity among young readers — more than 8,000 have signed up — comes amid record-breaking book censorship efforts, according to data compiled by the American Library Association. The ALA’s Office of Intellectual Freedom has tracked a more-than-400-percent increase in the number of reported book challenges in the U.S. between 2020 and 2024. The challenges reported to the ALA in 2024 alone targeted 2,452 titles.

The Supreme Court’s recent ruling to allow parents to pull their children out of classroom discussions around books covering LGBTQ+ and other themes that may conflict with their religious beliefs could embolden efforts to restrict more titles.

Brooklyn’s program gives readers between 13 and 21 anywhere in the country the ability to opt in. As it turns out, its digital “banned book” library cards are a bit of a misnomer because they also provide access to materials unaffected by bans.

“It’s our entire book collection,” said Amy Mikel, director of customer experience and librarian at Brooklyn Public Library. “Half a million items. You can read whatever you want” that’s in a digital format.

The Brooklyn library’s records show Books Unbanned cardholders are collectively borrowing more than 100,000 unique titles a year, many of which have nothing to do with the most frequently challenged subjects for youth, such as race, sex, gender, or lived experiences that are decidedly difficult or hard to read.

“Obviously there are people who write to us and say, ‘thank you so much — now I can access the books that have been taken away from me,’” said Mikel. “But the fact is that these young people are accessing books that are not controversial at all.”

Other libraries have since launched their own programs, though not every library can afford to provide the level of access Brooklyn’s program does.

Private Funding

Each program is based on different parameters that are largely determined by the level of private funding libraries receive and the subsequent licensing agreements they’re able to secure.

Because most libraries with foundations are based in major cities, so far all of the programs come from urban libraries receiving robust support from their respective foundations, which raise money in addition to the funding they’ve historically received from the federal government to cover operational costs.

Many public libraries have “Friends of the Library” groups that raise money and advocate for their libraries by organizing community events such as used-book sales. Some foundations for larger library systems attract large philanthropic gifts that can pay for specific licenses negotiated with publishers. These negotiations often determine what type of digital book access libraries can afford to provide patrons.

The breadth of access differs among libraries. While Seattle Public Library’s Books Unbanned e-card gives young adults up to age 26 access to its entire OverDrive collection and is open to readers throughout the U.S., the LA County Library Books Unbanned program is limited to teens 13 to 18, and is available only to residents of California.

Boston Public Library and San Diego Public Library took a more refined approach to their Books Unbanned programs. Both offer access to young adults who register throughout the U.S., but their collections are limited to frequently challenged or banned titles.

Each of the participating libraries encourage young adults to apply for as many banned book e-cards as they’re eligible for to make use of as many collections as possible.

Empty Shelves

What Brooklyn Public Library did wasn’t novel in terms of what librarians routinely do. But it was innovative in the sense that it re-envisioned big ideas — like what is a service area in the post-digital age. Books Unbanned responded to a perceived threat to young adults’ First Amendment rights to receive information. The perceived threat has escalated.

Since the program launched, a patchwork of legislation across several states criminalizes teachers to varying degrees for what they say about sexual orientation, gender identity or racial ideology in an educational context. Moms for Liberty targeted young adult books with LGBTQ+ and BIPOC characters. The group’s website cites passages about sexual content from young adult books out of context and then rates them according to its own proprietary system. This website equipped adults with the quotes they needed to challenge books on school library shelves, leading to record bans nearly every year since 2021.

In rural areas, the problem is less likely to be book challenges but instead chronic underfunding of library services.

“This program wouldn’t need to exist if everybody just had access to a robust digital collection where they live,” said Mikel at Brooklyn Public Library.

Participating libraries invite cardholders to share their experiences with book censorship when they sign up or renew a banned book card. Last year, Brooklyn Public Library and Seattle Public Library issued a report documenting how teenagers and young adults are encountering censorship in their communities.

Teens reported witnessing the obvious shrinking of collections, with gaps on shelves where certain books used to be. They also said that if they do have access to a library, that its collection was dated or limited. And some reported intentional self-censorship: Jennifer Jenkins, deputy director of customer experience with the San Diego Public Library, heard from several young adults who said they could check out a frequently challenged book from their local library, but they chose not to in order to protect their teachers and librarians from retaliation.

Cardholders also cite state-specific legislation that alters what their teachers can teach and their libraries can shelve, and librarians who draw unwanted attention to the age-appropriateness of the titles they check out. This aligns with other restrictive policies some libraries have introduced, including age limitations, parental permissions, content warning labels, and removing tags from online catalogs, which makes certain books harder to find in the system.

Mikel in Brooklyn says restrictions can be hard to measure but can significantly impact a young adult’s ability to access information.

“When people say things like, ‘It’s not a book ban, we just removed it from the school library,’” Mikel said. “In some cases, removing that book from that one place of access is effectively erasing the book altogether from that young person’s life.”

Tacit censorship resulting from restrictive lending policies is harder for researchers to track.

“Most librarians work really hard to give their students what they need, but there are certainly a group of librarians who just aren’t comfortable with these trends of LGBTQ+ and BIPOC literature,” said Tasslyn Magnussun, an independent consultant for PEN America and other groups tracking the rise of book censorship. “So there’s what was purchased and what wasn’t purchased: Self-censorship before the rise of big censorship.”

Limits of Privacy

The types of censorship librarians are experiencing is also true of teachers. A 2024 RAND Corporation report found that while roughly half of K-12 public school teachers face some sort of state or district policies that limit what they can say about political and social issues, some teachers are still more likely to avoid certain topics even with supportive administrators and parents. Jenkins says digital cardholder comments give library workers in urban systems more insight into how the cards are affecting librarians outside major metropolitan areas.

“There is a chilling effect happening, self-censorship, where it’s affecting the decision-making ability of educated, trained, [and] skilled librarians and educators, in terms of selecting materials that are age-appropriate and appropriate for various readers,” Jenkins said. “It’s inadvertently causing people to make more conservative choices just by default.”

Part of the appeal for Books Unbanned e-card holders is some semblance of a private reading life. And while the librarians involved in the program through their institutions are committed to connecting readers with the titles they want to read, access doesn’t necessarily come easily to everyone because it’s not safe to assume every young adult has a device with e-reader capabilities, reliable internet access or working headphones. Or privacy, for that matter.

In the case of digital books, librarians work closely with vendors to secure licenses to circulate ebook and audiobook copies of titles. These professional partnerships are sometimes fraught. Part of that has to do with librarians having to relinquish control over infrastructure and access to the vendors’ applications, which take users from the library’s website to platforms like Libby. This is different from how physical book vendors work with libraries. Once books are ordered from a distributor, they belong to the library. Libraries don’t have to keep paying for digital borrowers. The digital rules don’t apply.

One criticism librarians have of vendor software is that it’s designed to support the licensing model for publishers but not the end-users facing challenges to their First Amendment rights. Vendors are facing pressure to comply with legislation in states where the right to receive information through school curriculums and library collections is vulnerable.

Take, for instance, Destiny, a widely used book checkout system in school libraries across the country. In 2022, its parent company announced and quickly walked back that it was considering a parental control module in its Destiny software to address requests to opt out of LGBTQ+ tagged books. But the company canceled the feature after librarians pointed out how it could be abused by releasing their library checkout history and placing borrowing restrictions on accounts — in violation of both the American Library Association’s Library Bill of Rights and student privacy rights under the Family Educational Rights and Privacy Act (FERPA).

Melissa Andrews, Boston Public Library’s chief of collection management, says it’s important for libraries to retain the ability to opt out of contractual clauses. Without it, digital contracts could result in a book being removed from circulation for everyone, including young adults living in areas without book bans.

“Once it’s coded into that software, it makes it easier for other libraries to do that without the law in place,” said Andrews. “And it also doesn’t necessarily go away if our culture changes in three to four years.”

InterLibrary Loan Threatened

In certain parts of the country, searching for the nearest copy of a frequently banned, challenged, or restricted book through the Worldcat catalog might show one that is 200 miles away, creating an ersatz banned-book desert akin to a news desert.

What’s more, libraries are vulnerable to the whims of political spending. The Trump Administration’s budget, if passed, is expected to result in the elimination of InterLibrary Loan for most institutions, unless they have the money in their budgets to opt in.

“The amount [for] my library to buy into the InterLibrary Loan system, if it’s not [federally] funded, is like the size of our entire budget,” Magnussun said. “There’s just no way our tiny little one-room library would be able to participate. So then those kids are definitely not getting those books.”

If InterLibrary Loan became too expensive for most libraries, it would put more pressure on the resources belonging to libraries participating in Books Unbanned. Such an outcome raises important questions about young readers in rural America accessing digital books from just a handful of well-resourced urban libraries hundreds of miles away. But Magnussun says the cost of not making the books accessible for queer and Brown youth, especially, is worse.

“There’s a question of a balance between, what’s the ideal situation — certainly not having [only] three libraries in the country fund the only LGBTQ+ literature that will be available to young people, but that’s where we are at this moment in time,” said Magnussun of PEN. “What I don’t want to see people doing, especially the library organizations, is [saying], ‘Oh, problem solved. We’re going to have Brooklyn Public Library or San Diego carry the rest of the country.’

“Because,” Magnussun adds, “that’s not right.”

Mikel said Brooklyn and other participating libraries are looking for new participant libraries. She remains confident in the program’s private funding even amid interference from groups and lawmakers in favor of bans. But despite the interest in Books Unbanned, most knowledge workers agree that it’s far from ideal. The program should be regarded as a stop-gap while communities wrestle with the tougher question of censorship.

“We’re proud of this initiative — it’s really important, but this is not the solution to anything,” said Andrews at Boston Public Library. Yet for the young readers putting their banned book e-library cards to use, “[H]opefully it helps right now.”



Source link

Continue Reading

Trending