Books, Courses & Certifications
7 Generative AI Certifications for In-Demand Future Skills

Generative AI certifications can help you stand out from other job candidates in the dynamic field of generative artificial intelligence. As the field continues to evolve, organizations will keep investing in skilled staff who can develop and implement the technology.
Gen AI certifications validate your expertise and demonstrate your skills to employers as the demand for skilled tech professionals increases. I evaluated a wide range of certifications to see how they compared in terms of level, duration, and cost. Here are my picks for the top generative AI certifications.
Best generative AI certifications: Comparison chart
Here’s a summary of the experience level, certifying institutions, duration, cost, and prerequisites of the seven top generative AI courses to help you find the right fit for your skills and interests. Keep reading for more detailed information about each of our picks.

| Certification | Level | Institution | Duration | Cost | Prerequisites |
|---|---|---|---|---|---|
| Introduction to Generative AI | Beginner | Google Cloud (Coursera) | 1 hour | Free to audit; $59/month Coursera Plus for certificate | None |
| Microsoft Azure AI Fundamentals: Generative AI | Beginner | Microsoft Learn | 4 hours | Free | Familiarity with Azure and the Azure portal |
| Generative AI Overview for Project Managers | Beginner | PMI | 1 hour | Free | None |
| NVIDIA-Certified Associate Generative AI LLMs | Intermediate | NVIDIA | 1-hour exam | $125 | Basic understanding of generative AI and LLMs |
| Generative AI for Data Scientists Specialization | Intermediate | IBM (Coursera) | 1 month at 10 hours/week | $49/month | Prior data science experience |
| Generative AI with Large Language Models | Intermediate | AWS and DeepLearning.AI (Coursera) | ~16 hours | $49/month | Python coding experience |
| Generative AI for Software Developers Specialization | Intermediate | IBM (Coursera) | 1 month at 10 hours/week | $49/month | Knowledge of software engineering |
Top seven AI certifications
Generative AI certification programs provide valuable skills for staying ahead in the dynamic field of AI, unlocking better job prospects with higher salaries.
Introductory generative AI courses cover the basics of machine learning, neural networks, and AI ethics. In contrast, specialized courses include lessons on natural language processing (NLP), image generation, and deep learning models. Advanced learners can also earn certificates that validate expertise in building and deploying generative AI models for applications in content creation, design, and automation.
Introduction to generative AI
Best for learning basic generative AI concepts | Beginner level
This introduction to generative AI course, offered by Google Cloud Training instructors on Coursera, provides an overview of the fundamental concepts of generative AI. The one-module course covers a range of topics, from the basics of generative AI to its applications. By the end of the course, you’ll be able to define generative AI, explain how it works, understand the different model types, and explore its various applications.
Why I picked it
I chose this course because it offers a concise and informative introduction to generative AI. This course is specifically designed for those without prior knowledge of the field. The instructors use clear explanations and avoid complex jargon, making it accessible to anyone interested in the topic.
The course also introduces Google tools for developing your own generative AI applications. As part of Google’s Introduction to Generative AI Learning Path specialization, Introduction to Generative AI allows you to build subject-matter expertise in this field and develop job-relevant skills with hands-on projects throughout the program.
Skills acquired
At the end of this course, you will gain an understanding of the following:
- Definition of generative AI
- Generative AI model types
- How generative AI works
- Various generative AI applications
Key course details
Whom it’s for
- Beginners looking for an introductory course on generative AI and its real-world applications
Career tracks
Course requirements
Course fee, duration, and format
- Free to audit
- $59 per month Coursera Plus subscription for a shareable certificate
- One hour to complete
- Self-paced online learning via Coursera
Course content and assessments
- Explains what generative AI is, how it’s used, and how it differs from traditional machine learning methods
- Covers Google Tools to help you develop your generative AI apps
- One assessment is required to pass
- Part of the Introduction to Generative AI Learning Path Specialization
Microsoft Azure AI Fundamentals: Generative AI
Best for familiarizing yourself with Azure AI Studio | Beginner level
The Azure AI Fundamentals Generative AI course introduces generative AI technology, focusing on Microsoft Azure services and covering the fundamentals of generative AI.
This course also explains how large language models form the foundation of generative AI and details the capabilities of Azure AI Studio, the efficiency gains provided by generative AI applications like copilots, the importance of fine-tuning prompts and responses, and how Microsoft’s responsible AI principles drive ethical AI advancements in the field.
Why I picked it
I chose this course because it covers fundamental concepts and practical applications of generative AI technology, focusing on Azure services. Its main objective is to help you learn how to use Azure AI Studio and understand the importance of responsible AI principles.
This course will also teach you how to design a responsible generative AI solution, assess and minimize potential risks, and effectively manage its operation. As a Microsoft Learn course, it provides direct insights from industry experts and exclusive access to Microsoft resources, ensuring you learn best practices from the leaders in the field.
Skills acquired
By the end of the course, you’ll be able to understand the following:
- How LLMs form the foundation of generative AI
- The capabilities of Azure AI Studio
- How generative AI applications like copilots improve efficiency
- How to fine-tune prompts and responses for better outcomes
- Ethical considerations and responsible AI principles in developing and deploying generative AI solutions
Key course details
Whom it’s for
- Beginner- to intermediate-level Microsoft developers who want to leverage the Azure OpenAI Service to build AI-powered solutions
Career tracks
- AI engineers, developers, and solution architects
Course requirements
- Familiarity with Azure and the Azure portal
Course fee, duration, and format
- Free
- Four hours
- Self-paced online learning via Microsoft Learn
Course content and assessments
There are four modules in this course:
- Introduction to generative AI
- Plan and prepare to develop AI solutions on Azure
- Implement a responsible generative AI solution in Azure AI Foundry
- Get started with AI agent development on Azure
Generative AI Overview for Project Managers
Best for understanding AI in project management | Beginner level
The Generative AI Overview for Project Managers is an online course offered by the Project Management Institute (PMI) as part of its professional development course portfolio. It is designed to give project managers a comprehensive understanding of artificial intelligence and generative AI, as well as its various applications in project management.
By completing this course, project managers will also earn five PDUs (four for Ways of Working and one for Business Acumen).
Why I picked it
As a microlearning course offered by PMI, a globally recognized authority in project management, project managers can trust the quality and credibility of its content. I picked this course because it helps project managers enhance their understanding and application of generative AI within the project management domain.
This course offers a hands-on learning experience that project management professionals can apply to their next project. Its content has also been updated to reflect the latest on generative AI, including new resources, practical examples, recent thought leadership research, and access to PMI Infinity in the AI Tool Library.
Skills acquired
By the end of the course, you’ll gain a deep understanding of the following:
- Definitions of artificial intelligence (AI) and generative AI
- Generative AI applications in project management
- Generative AI project manager tools for maximizing efficiency
- Best practices for project management in AI as a way of working
Key course details
Whom it’s for
- Project managers who want to learn more about generative AI within the project management domain
Career tracks
- Project manager, project coordinator, and agile facilitator
Course requirements
Course fee, duration, and format
- Free
- One hour
- Self-paced online learning via the Project Management Institute
Course content and assessments
There are five lessons in this microlearning course:
- Introduction to GenAI
- Enhancing PM with GenAI
- Voice of the PM
- ChatGPT Lab
- AI Tool Library
NVIDIA-Certified Associate Generative AI LLMs
Best for demonstrating expertise in generative AI and LLMs using NVIDIA solutions | Intermediate level
The NVIDIA-Certified Associate Generative AI LLMs certification is a foundational credential for individuals involved in developing, integrating, and maintaining AI-driven applications using generative AI and large language models with NVIDIA solutions. This certification validates the fundamental concepts required for working with generative AI and LLMs, making it an efficient learning opportunity for AI DevOps engineers, data scientists, ML engineers, and more.
The exam for this certification covers essential topics, including machine learning fundamentals, neural networks, prompt engineering, data analysis, experimentation, software development, Python libraries for LLMs, integration, and deployment.
Why I picked it
Understanding how to train, fine-tune, and deploy LLMs is an essential skill for AI developers. This certification is designed to assess your knowledge and skills in generative AI and LLMs within the context of NVIDIA’s solutions and frameworks.
As an official NVIDIA certification, it’s recognized within the industry and shows employers you have the necessary skills and knowledge to work with NVIDIA’s leading AI solutions. Ultimately, this certification provides you with an edge within a competitive field because it combines industry recognition, practical applications, and alignment with NVIDIA solutions.
Skills acquired
This certification will give you a range of foundational skills and knowledge in the following areas:
- Fundamentals of ML and neural networks
- Prompt engineering
- Data analysis and visualization
- Data preprocessing and feature engineering
- Experiment design
- Python libraries for LLMs
- LLM integration and deployment
Key course details
Whom it’s for
- Entry-level AI professionals, data scientists, and senior researchers who want to validate their skills in generative AI and LLMs using NVIDIA solutions
Career tracks
- AI developer, ML engineer, software engineer, and data scientist
Certification exam requirements
- Basic understanding of generative AI and large language models
Exam fee, duration, and format
- $125
- One hour (50–60 questions)
- Online and proctored remotely
Exam content and assessments
Topics covered in the exam include the following:
- Fundamentals of machine learning and neural networks
- Data analysis and visualization
- Data preprocessing and feature engineering
- Python libraries for LLMs
- LLM integration and deployment
Upon passing the exam, participants will receive a digital badge from NVIDIA and an optional certificate that indicates the certification level and topic. The certification is valid for two years from its issuance, and recertification may be achieved by retaking the exam.
Generative AI for Data Scientists Specialization
Best for applying generative AI in data science projects | Intermediate level
Taught by four experts from IBM, the Generative AI for Data Scientists Specialization course on Coursera is designed to help data professionals understand and implement generative AI in their data science projects. It is a three-course specialization that covers the basics of generative AI, prompt engineering concepts, tools, and techniques, and how to apply generative AI throughout the data science methodology.
The specialization includes hands-on projects and activities to reinforce learning and practical application of generative AI skills. Participants will learn how to use generative AI models for text, code, image, audio, and video generation, as well as data augmentation, feature engineering, and model development and refinement.
Why I picked it
The specialization is specifically designed for data scientists, and it deep-dives into real-world data science problems where generative AI can be applied. It includes hands-on scenarios where you’ll learn to use generative AI models for querying and preparing data, enhancing data science workflows, augmenting datasets, and refining machine learning models.
The Generative AI for Data Scientists Specialization course also introduces powerful tools like GPT-3.5, ChatCSV, and Tomat.ai and demonstrates how to seamlessly integrate them into data science workflows.
Skills acquired
At the end of the course, you’ll gain a deep understanding of the following:
- Real-world generative AI use cases
- Popular AI models and tools for text, code, image, audio, and video
- Appropriate generative AI tools for data science
- Generative AI prompt engineering concepts and examples
- Prompt techniques to generate and augment datasets for developing and refining ML models
Key course details
Whom it’s for
- Data scientists and aspiring data science professionals who want to apply generative AI to their projects
Career tracks
- Data scientist, data analyst, AI consultant, and product analyst
Course requirements
- Prior data science experience
Course fee, duration, and format
- $49 per month Coursera subscription
- One month at 10 hours a week
- Online via Coursera
Course content and assessments
This three-course series specialization includes the following:
- Generative AI: Introduction and Applications
- Generative AI: Prompt Engineering Basics
- Generative AI: Elevate Your Data Science Career
To pass the three-course specialization, learners must complete three modules for each course, which include readings, assignments, and discussion prompts.
Generative AI with Large Language Models
Best for real-world use cases of generative AI and LLMs | Intermediate level
Generative AI with Large Language Models is a joint offering from AWS and DeepLearning.AI, two leaders in cloud computing and AI. Through this collaboration, learners benefit from the expertise and resources of both organizations, gaining foundational knowledge, practical skills, and a functional understanding of how LLMs work and how they can be deployed effectively in real-world scenarios.
Why I picked it
I chose this certification because it offers a hands-on, practical approach to working with the latest AI technologies in real-world scenarios. Unlike courses that focus solely on the technical aspects of LLMs, it tackles the entire generative AI life cycle, from data gathering to performance evaluation and deployment. You’ll also learn from industry practitioners about the challenges and opportunities that generative AI creates for businesses.
Skills acquired
At the end of the three-module course, you’ll gain the following:
- Foundational knowledge and understanding of how generative AI works
- Awareness of the latest research on generative AI and how companies use this technology
- Instruction from expert AWS AI practitioners who build and deploy AI in business use cases
Key course details
Whom it’s for
- AI developers and engineers who want to learn real-life applications of generative AI and LLMs
Career tracks
- LLM model developer, ML engineer, AI product manager, and LLM solutions architect
Course requirements
- Experience with coding in Python
Course fee, duration, and format
- $49 per month Coursera subscription
- Approximately 16 hours
- Online via Coursera
Course content and assessments
There are three modules in this course:
- Generative AI use cases, project life cycle, and model pre-training
- Fine-tuning and evaluating large language models
- Reinforcement learning and LLM-powered applications
Generative AI for Software Developers Specialization
Best for integrating AI into development workflows | Intermediate level
Whether you’re a seasoned developer or a beginner, this specialization will help you enhance your programming capabilities by incorporating generative AI techniques into your projects.
Throughout this specialization’s three self-paced course series, you will learn the basics of generative AI, including its applications, models, and tools for generating text, code, images, audio, and video. You will also learn about prompt engineering, exploring approaches and tools such as Prompt Lab, Spellbook, and Dust to enhance your generative AI skills.
Why I picked it
This course provides hands-on experience with tools like IBM watsonx and open-source libraries to help you integrate generative AI into development workflow tasks. You will also benefit from the knowledge and expertise of IBM and learn the latest advancements and best practices in the field. This specialization format breaks down the learning into manageable modules, offering a structured and comprehensive approach to mastering generative AI.
Skills acquired
At the end of this three-course specialization, you’ll learn the following:
- Real-world generative AI use cases and popular generative AI models
- Tools and techniques to generate snippets, scripts, test cases, and applications using generative AI models
- Generative AI prompt engineering concepts and examples
- How to develop innovative software engineering solutions using AI-powered tools and LLMs
Key course details
Whom it’s for
- AI developers and engineers who want to learn real-life applications of generative AI and LLMs
Career tracks
- LLM application developer, full-stack developer, prompt engineer, and solutions developer
Course requirements
- Knowledge of software engineering
Course fee, duration, and format
- $49 per month Coursera subscription
- One month at 10 hours a week
- Online via Coursera
Course content and assessments
This three-course series includes the following:
- Generative AI: Introduction and Applications
- Generative AI: Prompt Engineering Basics
- Generative AI: Elevate Your Software Development Career
Real-world applications of generative AI certifications
Generative AI certifications aren’t just about theoretical knowledge; they equip professionals with the skills to tackle diverse real-world challenges across industries. Here are some of their notable applications.
Content creation and marketing
Generative AI certifications empower marketers and creatives to produce high-quality content for various platforms. They help them understand how to use AI tools effectively, including prompt engineering, content generation, and data analysis.
Content creators and marketing teams can develop ad campaigns, generate engaging website copy, and brainstorm blog post ideas using generative AI solutions. Beyond content ideation and generation, AI tools increase productivity and efficiency, so having the skills to automate repetitive tasks and simplify workflows will help marketers focus on strategic initiatives.
Product design and prototyping
Mastering generative AI empowers product designers to accelerate innovation through rapid prototype creation. Using the right AI tools, designers can automatically generate design variations and concepts, freeing valuable time for creative exploration and ideation. With targeted prompts, generative AI models can produce UI/UX mockups, technical documentation, and user stories precisely aligned with the design objectives.
Data generation and training
Generative AI creates synthetic but realistic training data for AI models, enhancing output quality and reducing the need for extensive real-world datasets. It makes for reliable predictive analytics tools by automatically analyzing historical data and forecasting trends. Mastering the right generative AI tool for data generation and AI model training gives data professionals a significant competitive advantage.
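The core idea behind synthetic training data is producing records that follow the statistical shape of real data. A minimal non-LLM sketch (the schema, categories, and distribution parameters here are invented for illustration; a real pipeline would fit them to actual data):

```python
import random

def synthesize_transactions(n: int, seed: int = 42) -> list[dict]:
    """Generate synthetic but plausible transaction records.

    Uses a log-normal amount distribution, which is right-skewed
    like real spending data; categories are sampled uniformly.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    categories = ["groceries", "travel", "utilities", "dining"]
    rows = []
    for i in range(n):
        rows.append({
            "id": i,
            "category": rng.choice(categories),
            "amount": round(rng.lognormvariate(3.0, 0.8), 2),
        })
    return rows

data = synthesize_transactions(5)
```

LLM-based approaches replace the hand-coded sampler with prompted generation, but the evaluation question is the same: does the synthetic set preserve the distributions the downstream model needs to learn?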
Enterprise implementation
Within organizations, certified experts lead the implementation and deployment of generative AI technologies to automate manual tasks, personalize user experiences, and drive innovation. Certified AI professionals can lead the adoption of generative AI solutions for different business use cases, including knowledge management, workflow automation, and scalable solutions.
Frequently asked questions
Does generative AI require coding?
It’s not always necessary. Building a generative AI model from scratch often requires coding to customize it for specific needs, but when dealing with pre-built models, extensive coding is not required, as these models have user-friendly interfaces and require little to no programming. Additionally, there are no-code solutions that come with drag-and-drop interfaces and natural language prompts that let you construct generative AI apps without writing a single line of code.
Is a generative AI certification course worth the investment?
If you are interested in building a career in AI, yes. Certifications make you stand out to potential employers and demonstrate your commitment to continuous learning and development. Even if you simply want to learn more about generative AI and its applications in various industries, a certification course may be worth the investment. It can help you gain valuable skills and knowledge to apply to your current job or transition into a new career.
How can I choose the right generative AI certification?
The best generative AI certification course for you will depend on your knowledge and experience with generative AI and your specific goals and interests. If you are new to generative AI, look for beginner-friendly courses that provide a solid foundation in the basics. If you are more experienced, consider more advanced courses that dive deeper into certifications for prompt engineering, model fine-tuning, deployment, and more.
Ensure the course covers the topics and skills you are interested in learning. As much as possible, only take courses from a reputable institution or organization in the field. A certification from a recognized entity can boost your credibility and increase your chances of getting hired. Also take into account your schedule and preferred learning style. Look for courses that offer flexible timing, online options, and self-paced learning and that are within your budget.
Can I become an AI expert without a degree?
Having a degree in computer science, engineering, or a related field provides a strong foundation when breaking into the AI field, but it’s not needed. You can become an AI expert without a traditional degree by taking certifications, building your portfolio, and seeking mentorship within the AI community.
Platforms like Coursera, edX, and Udacity offer numerous affordable AI courses and certifications that can help you expand your knowledge and hone your skills. You can also build your portfolio through hands-on projects or contribute to open-source AI projects to learn from AI experts. Additionally, you can seek mentorship from professionals who can provide guidance, support, and connections within the AI community.
What companies should I work for post-certification?
After completing a generative AI certification, you may be interested in working for these top companies that are leaders in AI and technology.
- Google is known for its innovative use of AI and machine learning in various products and services, such as Google Photos, Google Search, Google Assistant, and Gemini. Working at Google will allow you to work on projects that will help develop your generative AI skills.
- NVIDIA is a leading technology company specializing in graphics processing units (GPUs) and AI computing. It is heavily involved in research and development in AI, including generative AI for applications such as image generation and style transfer. Working at NVIDIA means being involved in advanced AI projects at a company that employs the world’s leading experts in artificial intelligence.
- OpenAI is an organization focused on artificial intelligence research and development. It is known for its work in creating advanced AI models, including generative models such as GPT-4, DALL-E, and more.
Bottom line: Best generative AI certifications
Generative AI certifications are gaining popularity, as the technology is transforming the tech landscape, driving creativity and innovation. The best programs offer a comprehensive curriculum, hands-on experience, and industry-recognized credentials tailored to your career goals. My recommendations highlight top programs that meet these standards, helping you choose the right path to boost your skills and expand your career opportunities.
Learn more about the innovation giants shaping the future of AI by exploring our list of the top generative AI companies.
XPROMOS Launches Theia Institute™-Endorsed AI Fluency Program Offering Practitioner Certification Across Business Roles
Certified AI Training With Nod From Emerging Tech Think Tank Signals AI Fluency

“The XPROMOS AI training program delivers productivity gains beyond traditional business functions like IT, BI, and analytics. It democratizes AI productivity while increasing business ROI so that everyone wins,” said Theia Institute Executive Director Todd A. Jacobs.
XPROMOS launches the first Theia Institute-endorsed certified AI training program designed to build AI fluency across non-technical teams in marketing, sales, HR, and finance. This premier global endorsement supports XPROMOS’ certified AI training that turns curiosity into capability by guiding participants to create scalable AI pilots that drive measurable value. The program aligns with the Washington DC-based nonprofit think tank’s mission of responsible, ethical, and practical AI adoption.
LOS ANGELES, CA – XPROMOS, a longtime leader in revenue‑driving strategy for enterprise brands, announces a premier global endorsement by Washington DC’s Theia Institute, a non-profit emerging technologies think tank shaping the standards of responsible AI use in business and policy. XPROMOS now offers an official Theia Institute certification for AI Fluency to qualified AI Training participants in their respective domains, including marketing, sales, operations, HR, finance, and more.
“This program turns dabblers into AI Fluents: people who use AI with clarity, not just curiosity,” said co-founder Yvette Brown.
“We built it to teach AI fluency and drive business value across functions, grounded in real understanding of governance, bias, and responsible use. Theia’s endorsement validates what we’ve always believed: AI literacy isn’t enough. If teams are going to extract real value responsibly, they need fluency, so they can think with the tech, not just use it,” Brown added. “When humans don’t understand AI’s capabilities and its limitations, they create unnecessary risk. This program changes that.”
XPROMOS’ training is one of the first programs of its kind to be endorsed by Theia Institute, making it a trusted on‑ramp to strategic, ethical AI integration for non‑technical professionals. Participants who complete the program are awarded a credential that aligns directly with their business function, offering credibility, clarity, and a new kind of career capital.
“We’re proud to provide our most exclusive endorsement seal to XPROMOS’ AI training materials and educational methodology, as it aligns with our think tank’s focus on the intersection of people and technology and on preparing people for today’s evolving workplace,” stated Theia Institute Executive Director Todd A. Jacobs.
“AI Fluency credentials ensure that people in marketing, sales, and HR also benefit from the growing workplace adoption of AI tools. The XPROMOS AI training program delivers productivity gains beyond traditional business functions like IT, BI, and analytics. It democratizes AI productivity while increasing business ROI so that everyone wins.”
— Todd A. Jacobs, Executive Director
Theia Institute™ Non-Profit Think Tank
The program was built for professionals navigating the AI shift without hype; early adopters in business units who need capability, not just content. With Theia’s endorsement, XPROMOS positions its AI training not just as a course, but as a new standard for responsible intelligence.
About XPROMOS
XPROMOS is an AI Fluency accelerator built by enterprise marketing veterans. With decades of experience driving results at scale, the company now helps professionals across industries gain the skills and strategic perspective needed to lead with AI. Through its Theia Institute-endorsed training, XPROMOS empowers creators and business leaders to earn real certification as Generative AI Practitioners, making them relevant, resilient, and ready for what’s next.
About Theia Institute
Theia Institute is a nonprofit AI governance, ethics, and cybersecurity think tank based in Washington, D.C., dedicated to advancing policy and decision-making through rigorous research and comprehensive analysis. Its commitment to an ethical, balanced, and unbiased approach sets it apart in the realm of business privacy, AI governance, and public policy.
Media Contact
Company Name: XPROMOS
Contact Person: Yvette Brown, XPROMOS Co-Founder
Phone: 7143370371
City: Laguna Hills
State: California
Country: United States
Website: https://xpromos.com
Detect Amazon Bedrock misconfigurations with Datadog Cloud Security
This post was co-written with Nick Frichette and Vijay George from Datadog.
As organizations increasingly adopt Amazon Bedrock for generative AI applications, protecting against misconfigurations that could lead to data leaks or unauthorized model access becomes critical. The AWS Generative AI Adoption Index, which surveyed 3,739 senior IT decision-makers across nine countries, revealed that 45% of organizations selected generative AI tools as their top budget priority in 2025. As more AWS and Datadog customers accelerate their adoption of AI, building AI security into existing processes will become essential, especially as more stringent regulations emerge. But looking at AI risks in a silo isn’t enough; AI risks must be contextualized alongside other risks such as identity exposures and misconfigurations. The combination of Amazon Bedrock and Datadog’s comprehensive security monitoring helps organizations innovate faster while maintaining robust security controls.
Amazon Bedrock delivers enterprise-grade security by incorporating built-in protections across data privacy, access controls, network security, compliance, and responsible AI safeguards. Customer data is encrypted both in transit using TLS 1.2 or above and at rest with AWS Key Management Service (AWS KMS), and organizations have full control over encryption keys. Data privacy is central: your input, prompts, and outputs are not shared with model providers nor used to train or improve foundation models (FMs). Fine-tuning and customizations occur on private copies of models, providing data confidentiality. Access is tightly governed through AWS Identity and Access Management (IAM) and resource-based policies, supporting granular authorization for users and roles. Amazon Bedrock integrates with AWS PrivateLink and supports virtual private cloud (VPC) endpoints for private, internal communication, so traffic doesn’t leave the Amazon network. The service complies with key industry standards such as ISO, SOC, CSA STAR, HIPAA eligibility, GDPR, and FedRAMP High, making it suitable for regulated industries. Additionally, Amazon Bedrock includes configurable guardrails to filter sensitive or harmful content and promote responsible AI use. Security is structured under the AWS Shared Responsibility Model, where AWS manages infrastructure security and customers are responsible for secure configurations and access controls within their Amazon Bedrock environment.
Building on these robust AWS security features, Datadog and AWS have partnered to provide a holistic view of AI infrastructure risks, vulnerabilities, sensitive data exposure, and other misconfigurations. Datadog Cloud Security employs both agentless and agent-based scanning to help organizations identify, prioritize, and remediate risks across cloud resources. This integration helps AWS users prioritize risks based on business criticality, with security findings enriched by observability data, thereby enhancing their overall security posture in AI implementations.
We’re excited to announce new security capabilities in Datadog Cloud Security that can help you detect and remediate Amazon Bedrock misconfigurations before they become security incidents. This integration helps organizations embed robust security controls and secure their use of the powerful capabilities of Amazon Bedrock by offering three critical advantages: holistic AI security by integrating AI security into your broader cloud security strategy, real-time risk detection through identifying potential AI-related security issues as they emerge, and simplified compliance to help meet evolving AI regulations with pre-built detections.
AWS and Datadog: Empowering customers to adopt AI securely
The partnership between AWS and Datadog is focused on helping customers operate their cloud infrastructure securely and efficiently. As organizations rapidly adopt AI technologies, extending this partnership to include Amazon Bedrock is a natural evolution. Amazon Bedrock is a fully managed service that makes high-performing FMs from leading AI companies and Amazon available through a unified API, making it an ideal starting point for Datadog’s AI security capabilities.
The decision to prioritize the Amazon Bedrock integration was driven by several factors, including strong customer demand, comprehensive security needs, and the existing integration foundation. With over 900 integrations and a partner-built Marketplace, Datadog's long-standing AWS partnership and deep integration capabilities enabled it to quickly develop comprehensive security monitoring for Amazon Bedrock on top of its existing cloud security expertise.
Throughout Q4 2024, Datadog Security Research observed increasing threat actor interest in cloud AI environments, making this integration particularly timely. By combining the powerful AI capabilities of AWS with Datadog’s security expertise, you can safely accelerate your AI adoption while maintaining robust security controls.
How Datadog Cloud Security helps secure Amazon Bedrock resources
After you add the AWS integration to your Datadog account and enable Datadog Cloud Security, it continuously monitors your AWS environment, identifying misconfigurations, identity risks, vulnerabilities, and compliance violations. These detections use the Datadog Severity Scoring system, which prioritizes findings based on infrastructure context. The scoring considers a variety of variables, including whether the resource is in production, is publicly accessible, or has access to sensitive data. By also considering runtime behavior, this multi-layer analysis helps you reduce noise and focus your attention on the most critical misconfigurations.
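As a rough illustration of this kind of context-aware prioritization, the following toy model (a hypothetical sketch, not Datadog's actual scoring algorithm) escalates a finding's base severity one step for each risk factor that applies:

```python
# Possible severities, ordered from least to most urgent
SEVERITIES = ["info", "low", "medium", "high", "critical"]

def contextual_severity(base, in_production, publicly_accessible, sensitive_data):
    """Escalate a finding's base severity one step for each contextual risk
    factor present, capping at 'critical'. A toy model of context-aware scoring."""
    idx = SEVERITIES.index(base)
    idx += sum([in_production, publicly_accessible, sensitive_data])
    return SEVERITIES[min(idx, len(SEVERITIES) - 1)]

# A medium-severity finding on a public, production resource becomes critical
print(contextual_severity("medium", True, True, False))  # prints "critical"
```

The same base finding thus surfaces with very different urgency depending on where it occurs, which is what lets teams triage hundreds of findings by business impact.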
Partnering with AWS, Datadog is excited to offer detections for Datadog Cloud Security customers, such as:
- Amazon Bedrock custom models should not output model data to publicly accessible S3 buckets
- Amazon Bedrock custom models should not train from publicly writable S3 buckets
- Amazon Bedrock guardrails should have a prompt attack filter enabled and block prompt attacks at high sensitivity
- Amazon Bedrock agent guardrails should have the sensitive information filter enabled and block highly sensitive PII entities
Detect AI misconfigurations with Datadog Cloud Security
To understand how these detections can help secure your Amazon Bedrock infrastructure, let's look at a specific use case: the detection that Amazon Bedrock custom models should not train from publicly writable Amazon Simple Storage Service (Amazon S3) buckets.
With Amazon Bedrock, you can customize AI models by fine-tuning them on domain-specific data stored in an S3 bucket. Threat actors constantly probe S3 bucket configurations, looking for opportunities to access sensitive data or even write to the buckets.
If a threat actor finds an S3 bucket that is misconfigured to permit public write access, and that bucket holds data used to train an AI model, the attacker can poison the dataset and introduce malicious behavior or output into the model. This is known as a data poisoning attack.
Normally, detecting these types of misconfigurations requires multiple steps: one to identify the S3 bucket misconfigured with write access, and one to identify that the bucket is being used by Amazon Bedrock. With Datadog Cloud Security, this detection is one of hundreds that are activated out of the box.
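The cross-referencing logic can be sketched as a simple check. The helper below is a hypothetical simplification of what such a detection rule evaluates, not Datadog's implementation:

```python
def find_poisoning_risks(buckets, bedrock_training_sources):
    """Cross-reference S3 bucket configurations with Amazon Bedrock training
    data sources.

    buckets: dict mapping bucket name -> {"public_write": bool}
    bedrock_training_sources: iterable of bucket names used for fine-tuning
    Returns the bucket names that are both publicly writable AND used as
    model training sources -- the data poisoning risk described above.
    """
    training = set(bedrock_training_sources)
    return sorted(
        name
        for name, cfg in buckets.items()
        if cfg.get("public_write") and name in training
    )
```

A publicly writable bucket that no model trains from, or a training bucket locked down correctly, would not be flagged; only the intersection of the two conditions is the risk.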
In Datadog Cloud Security, you can view this issue alongside surrounding infrastructure using Cloud Map, which provides live diagrams of your cloud architecture, as shown in the following screenshot. AI risks are then contextualized alongside sensitive data exposure, identity risks, vulnerabilities, and other misconfigurations to give you a 360-degree view of risks.
For example, you might see that your application is using Anthropic's Claude 3.7 on Amazon Bedrock and accessing training or prompt data stored in an S3 bucket that also allows public write access. Because this could compromise model integrity by introducing unapproved data to the large language model (LLM), you will want to update the configuration. Identifying the issue is the basic but essential first step of most security initiatives. With agentless scanning, Datadog scans your AWS environment at intervals between 15 minutes and 2 hours, so you can catch misconfigurations as they are introduced. The next step is remediation. Datadog Cloud Security automatically generates remediation guidance specific to each risk (see the following screenshot), giving you a step-by-step explanation of how to fix each finding. In this case, you can remediate the issue by modifying the S3 bucket's policy to help prevent public write access. You can do this directly in AWS, create a Jira ticket, or use the built-in workflow automation tools. From there, you can apply the remediation steps directly within Datadog and confirm that the misconfiguration has been resolved.
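One illustrative way to express that remediation as a bucket policy is sketched below. The bucket name is a placeholder, and in practice enabling S3 Block Public Access on the bucket is the recommended control; this generator only shows the shape of a deny statement for anonymous writes:

```python
import json

def deny_public_write_policy(bucket_name):
    """Build an S3 bucket policy document that denies s3:PutObject for
    anonymous (unauthenticated) principals, closing the public-write gap.
    Illustrative only -- prefer S3 Block Public Access where possible."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyAnonymousWrites",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
                # aws:PrincipalType is "Anonymous" for unauthenticated callers
                "Condition": {"StringEquals": {"aws:PrincipalType": "Anonymous"}},
            }
        ],
    }

print(json.dumps(deny_public_write_policy("training-data-bucket"), indent=2))
```

An explicit deny takes precedence over any allow, so this statement blocks anonymous writes even if another policy grant exists.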
Resolving this issue will positively impact your compliance posture, as illustrated by the posture score in Datadog Cloud Security, helping teams meet internal benchmarks and regulatory standards. Teams can also create custom frameworks or iterate on existing ones for tailored compliance controls.
As generative AI is embraced across industries, the regulatory environment will evolve. Datadog will continue partnering with AWS to expand its detection library and support secure AI adoption and compliance.
How Datadog Cloud Security detects misconfigurations in your cloud environment
You can deploy Datadog Cloud Security with the Datadog Agent, agentlessly, or both to maximize security coverage in your cloud environment. Datadog customers can start monitoring their AWS accounts for misconfigurations by first adding the AWS integration to Datadog, which enables Datadog to crawl cloud resources in customer AWS accounts.
As the Datadog system finds resources, it runs hundreds of out-of-the-box detection rules against them, looking for misconfigurations and threat paths that adversaries can exploit.
Secure your AI infrastructure with Datadog
Misconfigurations in AI systems can introduce serious risk, but with the right tools, you get the visibility and context needed to manage them. With Datadog Cloud Security, teams gain visibility into these risks, detect threats early, and remediate issues with confidence. Datadog has also released numerous agentic AI security features designed to help teams monitor the health and security of critical AI workloads, including new additions to Datadog's LLM observability features.
Lastly, Datadog announced Bits AI Security Analyst alongside other Bits AI agents at DASH. Included as part of Cloud SIEM, Bits is an agentic AI security analyst that automates triage for AWS CloudTrail signals. Bits investigates each alert like a seasoned analyst: pulling in relevant context from across your Datadog environment, annotating key findings, and offering a clear recommendation on whether the signal is likely benign or malicious. By accelerating triage and surfacing real threats faster, Bits helps reduce mean time to remediation (MTTR) and frees analysts to focus on important threat hunting and response initiatives. This helps across different threats, including AI-related threats.
To learn more about how Datadog helps secure your AI infrastructure, see Monitor Amazon Bedrock with Datadog or check out our security documentation. If you’re not already using Datadog, you can get started with Datadog Cloud Security with a 14-day free trial.
About the Authors
Nina Chen is a Customer Solutions Manager at AWS specializing in leading software companies to use the power of the AWS Cloud to accelerate their product innovation and growth. With over 4 years of experience working in the strategic independent software vendor (ISV) vertical, Nina enjoys guiding ISV partners through their cloud transformation journeys, helping them optimize their cloud infrastructure, driving product innovation, and delivering exceptional customer experiences.
Sujatha Kuppuraju is a Principal Solutions Architect at AWS, specializing in cloud and generative AI security. She collaborates with software companies’ leadership teams to architect secure, scalable solutions on AWS and guide strategic product development. Using her expertise in cloud architecture and emerging technologies, Sujatha helps organizations optimize offerings, maintain robust security, and bring innovative products to market in an evolving tech landscape.
Nick Frichette is a Staff Security Researcher for Cloud Security Research at Datadog.
Vijay George is a Product Manager for AI Security at Datadog.
Books, Courses & Certifications
Set up custom domain names for Amazon Bedrock AgentCore Runtime agents

When deploying AI agents to Amazon Bedrock AgentCore Runtime (currently in preview), customers often want to use custom domain names to create a professional and seamless experience.
By default, AgentCore Runtime agents use endpoints like https://bedrock-agentcore.{region}.amazonaws.com/runtimes/{EncodedAgentARN}/invocations.
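That default endpoint is derived from the agent's ARN by percent-encoding it into the URL path; a minimal sketch of the construction (the ARN and Region below are placeholders):

```python
from urllib.parse import quote

def default_invocation_url(agent_arn, region):
    """Build the default AgentCore Runtime invocation URL by percent-encoding
    the agent ARN into the path. safe="" ensures ':' and '/' in the ARN are
    encoded too, so the whole ARN becomes a single path segment."""
    encoded = quote(agent_arn, safe="")
    return (
        f"https://bedrock-agentcore.{region}.amazonaws.com"
        f"/runtimes/{encoded}/invocations"
    )

url = default_invocation_url(
    "arn:aws:bedrock-agentcore:us-east-1:123456789012:runtime/my-agent",
    "us-east-1",
)
print(url)
```

The encoded ARN is what makes these URLs so unwieldy to share across code bases, which is the problem the custom domain solves.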
In this post, we discuss how to transform these endpoints into user-friendly custom domains (like https://agent.yourcompany.com) using Amazon CloudFront as a reverse proxy. The solution combines CloudFront, Amazon Route 53, and AWS Certificate Manager (ACM) to create a secure, scalable custom domain setup that works seamlessly with your existing agents.
Benefits of Amazon Bedrock AgentCore Runtime
If you’re building AI agents, you have probably wrestled with hosting challenges: managing infrastructure, handling authentication, scaling, and maintaining security. Amazon Bedrock AgentCore Runtime helps address these problems.
Amazon Bedrock AgentCore Runtime is framework agnostic; you can use it with LangGraph, CrewAI, Strands Agents, or custom agents you have built from scratch. It supports extended execution times up to 8 hours, perfect for complex reasoning tasks that traditional serverless functions can’t handle. Each user session runs in its own isolated microVM, providing security that’s crucial for enterprise applications.
The consumption-based pricing model means you only pay for what you use, not what you provision. And unlike other hosting solutions, Amazon Bedrock AgentCore Runtime includes built-in authentication and specialized observability for AI agents out of the box.
Benefits of custom domains
When using Amazon Bedrock AgentCore Runtime with Open Authorization (OAuth) authentication, your applications make direct HTTPS requests to the service endpoint. Although this works, custom domains offer several benefits:
- Custom branding – Client-side applications (web browsers, mobile apps) display your branded domain instead of AWS infrastructure details in network requests
- Better developer experience – Development teams can use memorable, branded endpoints instead of copying and pasting long AWS endpoints across code bases and configurations
- Simplified maintenance – Custom domains make it straightforward to manage endpoints when deploying multiple agents or updating configurations across environments
Solution overview
In this solution, we use CloudFront as a reverse proxy to transform requests from your custom domain into Amazon Bedrock AgentCore Runtime API calls. Instead of using the default endpoint, your applications can make requests to a user-friendly URL like https://agent.yourcompany.com/.
The following diagram illustrates the solution architecture.
The workflow consists of the following steps:
- A client application authenticates with Amazon Cognito and receives a bearer token.
- The client makes an HTTPS request to your custom domain.
- Route 53 resolves the DNS request to CloudFront.
- CloudFront forwards the authenticated request to the Amazon Bedrock AgentCore Runtime agent.
- The agent processes the request and returns the response through the same path.
You can use the same CloudFront distribution to serve both your frontend application and backend agent endpoints, avoiding cross-origin resource sharing (CORS) issues because everything originates from the same domain.
Prerequisites
To follow this walkthrough, you must have the following in place:
Although Amazon Bedrock AgentCore Runtime can be deployed in other supported AWS Regions, CloudFront requires SSL certificates to be in the us-east-1 Region.
You can choose from the following domain options:
- Use an existing domain – Add a subdomain like agent.yourcompany.com
- Register a new domain – Use Route 53 to register a domain if you don’t have one
- Use the default URL from CloudFront – No domain registration or configuration required
Choose the third option if you want to test the solution quickly before setting up a custom domain.
Create an agent with inbound authentication
If you already have an agent deployed with OAuth authentication, you can skip to the next section to set up the custom domain. Otherwise, follow these steps to create a new agent using Amazon Cognito as your OAuth provider:
- Create a new directory for your agent with the following structure:
- Create the main agent code in agent_example.py:
- Add dependencies to requirements.txt:
- Run the following commands to create an Amazon Cognito user pool and test user:
- Deploy the agent using the Amazon Bedrock AgentCore command line interface (CLI) provided by the starter toolkit:
Make note of your agent runtime Amazon Resource Name (ARN) after deployment. You will need this for the custom domain configuration.
For additional examples and details, see Authenticate and authorize with Inbound Auth and Outbound Auth.
Set up the custom domain solution
Now let’s implement the custom domain solution using the AWS CDK. This section shows you how to create the CloudFront distribution that proxies your custom domain requests to Amazon Bedrock AgentCore Runtime endpoints.
- Create a new directory and initialize an AWS CDK project:
- Encode the agent ARN and prepare the CloudFront origin configuration:
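A minimal sketch of that encoding step, independent of the AWS CDK (the ARN and Region are placeholders, and depending on your CloudFront configuration, percent signs in the origin path may need further escaping):

```python
from urllib.parse import quote

def cloudfront_origin_for_agent(agent_arn, region):
    """Derive the CloudFront origin domain and origin path for an AgentCore
    Runtime agent. CloudFront appends the viewer-request path to origin_path,
    so a request to /invocations on the distribution reaches the agent's
    /runtimes/{encoded_arn}/invocations endpoint."""
    encoded_arn = quote(agent_arn, safe="")  # encode ':' and '/' in the ARN
    return {
        "domain_name": f"bedrock-agentcore.{region}.amazonaws.com",
        "origin_path": f"/runtimes/{encoded_arn}",
    }

origin = cloudfront_origin_for_agent(
    "arn:aws:bedrock-agentcore:us-west-2:111122223333:runtime/demo",
    "us-west-2",
)
print(origin)
```

In the CDK stack, these two values would feed the distribution's HTTP origin configuration, keeping the unwieldy encoded ARN out of client code entirely.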
If your frontend application runs on a different domain than your agent endpoint, you must configure CORS headers. This is common if your frontend is hosted on a different domain (for example, https://app.yourcompany.com calling https://agent.yourcompany.com), or if you're developing locally (for example, http://localhost:3000 calling your production agent endpoint).
- To handle CORS requirements, create a CloudFront response headers policy:
- Create a CloudFront distribution to act as a reverse proxy for your agent endpoints:
Set cache_policy=CachePolicy.CACHING_DISABLED to make sure your agent responses remain dynamic and aren't cached by CloudFront.
- If you’re using a custom domain, add an SSL certificate and DNS configuration to your stack:
The following code is the complete AWS CDK stack that combines all the components:
- Configure the AWS CDK app entry point:
Deploy your custom domain
Now you can deploy the solution and verify it works with both custom and default domains. Complete the following steps:
- Update the following values in agentcore_custom_domain_stack.py:
  - Your Amazon Bedrock AgentCore Runtime ARN
  - Your domain name (if using a custom domain)
  - Your hosted zone ID (if using a custom domain)
- Deploy using the AWS CDK:
Test your endpoint
After you deploy the custom domain, you can test your endpoints using either the custom domain or the CloudFront default domain. First, get a JWT token from Amazon Cognito:
Use the following code to test with your custom domain:
Alternatively, use the following code to test with the CloudFront default domain:
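Whichever domain you test against, the request shape is the same. The following standard-library sketch builds the authenticated call; the domain, path, and token are placeholders, so align the path with whatever your CloudFront origin path expects:

```python
import json
import urllib.request

def build_agent_request(domain, bearer_token, payload):
    """Construct an authenticated POST request to the agent's custom domain.

    The bearer token is the JWT obtained from Amazon Cognito; CloudFront
    forwards the Authorization header through to AgentCore Runtime."""
    return urllib.request.Request(
        url=f"https://{domain}/invocations",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {bearer_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_agent_request("agent.yourcompany.com", "eyJ...token", {"prompt": "Hello"})
print(req.full_url, req.get_method())
# Send with: urllib.request.urlopen(req)
```

Swapping the custom domain for the distribution's default *.cloudfront.net hostname is the only change needed to test without DNS configured.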
Considerations
As you implement this solution in production, the following are some important considerations:
- Cost implications – CloudFront adds costs for data transfer and requests. Review Amazon CloudFront pricing to understand the impact for your usage patterns.
- Security enhancements – Consider implementing the following security measures:
- AWS WAF rules to help protect against common web exploits.
- Rate limiting to help prevent abuse.
- Geo-restrictions if your agent should only be accessible from specific Regions.
- Monitoring – Enable CloudFront access logs and set up Amazon CloudWatch alarms to monitor error rates, latency, and request volume.
Clean up
To avoid ongoing costs, delete the resources when you no longer need them:
You might need to manually delete the Route 53 hosted zones and ACM certificates from their respective service consoles.
Conclusion
In this post, we showed you how to create custom domain names for your Amazon Bedrock AgentCore Runtime agent endpoints using CloudFront as a reverse proxy. This solution provides several key benefits: simplified integration for development teams, custom domains that align with your organization, cleaner infrastructure abstraction, and straightforward maintenance when endpoints need updates. By using CloudFront as a reverse proxy, you can also serve both your frontend application and backend agent endpoints from the same domain, avoiding common CORS challenges.
We encourage you to explore this solution further by adapting it to your specific needs. You might want to enhance it with additional security features, set up monitoring, or integrate it with your existing infrastructure.
To learn more about building and deploying AI agents, see the Amazon Bedrock AgentCore Developer Guide. For advanced configurations and best practices with CloudFront, refer to the Amazon CloudFront documentation. You can find detailed information about SSL certificates in the AWS Certificate Manager documentation, and domain management in the Amazon Route 53 documentation.
Amazon Bedrock AgentCore is currently in preview and subject to change. Standard AWS pricing applies to additional services used, such as CloudFront, Route 53, and Certificate Manager.
About the authors
Rahmat Fedayizada is a Senior Solutions Architect with the AWS Energy and Utilities team. He works with energy companies to design and implement scalable, secure, and highly available architectures. Rahmat is passionate about translating complex technical requirements into practical solutions that drive business value.
Paras Bhuva is a Senior Manager of Solutions Architecture at AWS, where he leads a team of solution architects helping energy customers innovate and accelerate their transformation. Having started as a Solution Architect in 2012, Paras is passionate about architecting scalable solutions and building organizations focused on application modernization and AI initiatives.