Books, Courses & Certifications
University of Cambridge Professional and Continuing Education Brings World-Class Learning to Coursera

Partnership brings Cambridge’s renowned expertise to 175 million learners worldwide
By Marni Baker Stein, Chief Content Officer, Coursera
Today, I’m delighted to announce that University of Cambridge Professional and Continuing Education will bring a selection of the University of Cambridge’s professional education courses to our global learning platform for the first time.
University of Cambridge Professional and Continuing Education, ranked sixth globally by QS Quacquarelli Symonds, will begin by launching eight courses and two Specializations between now and 31st August. The first of these courses, Foundations of Finance, is now live, as is the first Specialization, The Science of Mind and Decision Making.
Designed to equip learners for in-demand careers and the evolving skills economy, the offerings span high-growth fields such as finance, AI, psychology, and forensic science. From Principles of Financial Leadership to Ethical AI and Forensic DNA Analysis, each course is built to prepare learners for the practical challenges and opportunities shaping the future of work.
This partnership combines the academic excellence of the University of Cambridge with Coursera’s AI-powered learning platform and global reach of 175 million learners – including over 4.5 million in the United Kingdom.
Situated at the heart of one of the world’s largest technology clusters, the University of Cambridge has established itself as a pioneer at the intersection of academia and business, with a proven track record of fostering innovation and entrepreneurship. Recent research indicates that a higher share of seed-round start-ups in Cambridge (41%) make it to a Series A round than in any other university city in the United Kingdom, surpassing Oxford (35%) and London (33%).
The partnership will enable University of Cambridge Professional and Continuing Education and Coursera to:
- Connect learners to proven career outcomes in high-demand job fields
- Extend Cambridge’s world-renowned expertise in professional education to learners at a global scale
- Address critical workforce skills gaps through internationally recognised courses in emerging fields such as AI and behavioral science
Cory Saarinen, Assistant Director of Technology Enhanced Learning at Professional and Continuing Education, University of Cambridge, said: “We are looking forward to a successful partnership with Coursera. Professional and Continuing Education at the University of Cambridge has extended access to university-level study to adults for over 150 years. Through this new partnership we aim to support learners from across the world to engage with lifelong learning for professional and personal development, backed by the expertise and insights of Cambridge academics.”
This partnership further strengthens Coursera’s commitment to providing accessible, high-quality education to learners globally, while expanding its comprehensive portfolio of courses from the world’s leading academic institutions, including all four UK universities ranked within the top ten of the latest QS World University Rankings.
For centuries, a University of Cambridge education has been a gateway to exceptional career opportunities and intellectual development. Today, we’re extending that gateway to millions through our platform. This collaboration represents the future of professional education, where geographical barriers no longer limit access to world-class learning, and where Cambridge’s academic excellence can reach talented learners in every corner of the globe.
Begin your learning journey with University of Cambridge Professional and Continuing Education today.
Looking Forward to AI Codecon – O’Reilly

I’m really looking forward to our second O’Reilly AI Codecon, Coding for the Agentic World, happening online on September 9 from 8am to noon Pacific Time, with a follow-on day of additional demos on September 16. But I’m also looking forward to how the AI market itself unfolds: the surprising twists and turns ahead as users and developers apply AI to real-world problems.
The pages linked above give details on the program for the events. What I want to give here is a bit of the why behind the program, with a bit more detail on some of the fireside chats I will be leading.
From Invention to Application
There has been so much focus in the past on the big AI labs, the model developers, and their razzle dazzle about AGI, or even ASI. That narrative implied that we were heading towards something unprecedented. But if this is a “normal technology” (albeit one as transformational as electricity, the internal combustion engine, or the internet), we know that LLMs themselves are just the beginning of a long process of discovery, product invention, business adoption, and societal adaptation.
That process of collaborative discovery of the real uses for AI and reinvention of the businesses that use it is happening most clearly in the software industry. It is where AI is being pushed to the limits, where new products beyond the chatbot are being introduced, where new workflows are being developed, and where we understand what works and what doesn’t.
This work is often being pushed forward by individuals, who are “learning by doing.” Some of these individuals work for large companies, others for startups, others for enterprises, and others as independent hackers.
Our focus in these AI Codecon events is to smooth adoption of AI by helping our customers cut through the hype and understand what is working. O’Reilly’s mission has always been changing the world by sharing the knowledge of innovators. In our events, we always look for people who are at the forefront of invention. As outlined in the call to action for the first event, I was concerned about the chatter that AI would make developers obsolete. I argued instead that it would profoundly change the process of software development and the jobs that developers do, but that it would make them more important than ever.
It looks like I was right. There is a huge ferment, with so much new to learn and do that it’s a really exciting time to be a software developer. I’m really excited about the practicality of the conversation. We’re not just talking about the “what if.” We’re seeing new AI-powered services meeting real business needs. We are witnessing the shift from human-centric workflows to agent-centric workflows, and it’s happening faster than you think.
We’re also seeing widespread adoption of the protocols that will power it all. If you’ve followed my work from open source to web 2.0 to the present, you know that I believe strongly that the most dynamic systems have “an architecture of participation.” That is, they aren’t monolithic. The barriers to entry need to be low and business models fluid (at least in the early stages) for innovation to flourish.
When AI was framed as a race for superintelligence, there was a strong expectation that it would be winner takes all. The first company to get to ASI (or even just to AGI) would soon be so far ahead that it would inevitably become a dominant monopoly. Developers would all use its APIs, making it into the single dominant platform for AI development.
Protocols like MCP and A2A are instead enabling a decentralized AI future. The explosion of entrepreneurial activity around agentic AI reminds me of the best kind of open innovation, much like I saw in the early days of the personal computer and the internet.
I was going to use my opening remarks to sound that theme, and then I read Alex Komoroske’s marvelous essay, “Why Centralized AI Is Not Our Inevitable Future.” So I asked him to do it instead. He’s going to give an updated, developer-focused version of that as our kickoff talk.
Then we’re going into a section on agentic interfaces. We’ve lived for decades with the GUI (on desktop computers and in mobile applications) and the web as the dominant ways we use computers. AI is changing all that.
It’s not just agentic interfaces, though. It’s really about developing true AI-native products, searching out the possibilities of this new computing fabric.
The Great Interface Rethink
In the “normal technology” framing, a fundamental technology innovation is distinct from products based on it. Think of the invention of the LLM itself as electricity, and ChatGPT as the equivalent of Edison’s incandescent light bulb and the development of the distribution network to power it.
There’s a bit of a lesson in the fact that the telegraph was the first large-scale practical application of electricity, over 40 years before Edison’s lightbulb. The telephone was another killer app that used electricity to power it. But despite their scale, these were specialized devices. It was the infrastructure for incandescent lighting that turned electricity into a general purpose technology.
The world soon saw electrical resistance products like irons and toasters, and electric motors powering not just factories but household appliances such as washing machines and eventually refrigerators and air conditioning. Many of these household products were plugged into light sockets, since the pronged plug as we know it today wasn’t introduced until 30 years after the first light bulb.
The lesson is that at some point in the development of a general purpose technology, product innovation takes over from pure technology innovation. That’s the phase we’re entering now.
Look at the evolution of LLM-based products: GitHub Copilot embedded AI into Visual Studio Code; the interface was an extension to VS Code, a ten-year-old GUI-based program. Google’s AI efforts were tied into its web-based search products. ChatGPT broke the mold and introduced the first radically new interface since the web browser. Suddenly, chat was the preferred new interface for everything. But Claude took things further with Artifacts, and then with Claude Code; once coding assistants gained more complex interfaces, that kicked off today’s fierce competition between coding tools. The next revolution is the construction of a new computing paradigm where software is composed of intelligent, autonomous agents.
I’m really looking forward to Rachel-Lee Nabors’ talk on how, with an agentic interface, we might transcend the traditional browser: AI agents can adapt content directly to users, offering privacy, accessibility, and flexibility that legacy web interfaces cannot match.
But it seems to me that there will be two kinds of agents, which I call “demand side” and “supply side” agents. What’s a “demand side” agent? Instead of navigating complex apps, you’ll simply state your goal. The agent will understand the context, access the necessary tools, and present you with the result. The vision is still science fiction. The reality is often a kludge powered by browser use or API calls, with MCP servers increasingly offering an AI-friendlier interface for those demand side agents to interact with.

But why should it stop there? MCP servers are static interfaces. What if there were agents on both sides of the conversation, in a dynamic negotiation? I suspect that while demand side agents will be developed by venture-funded startups, most server side agents will be developed by enterprises as a kind of conversational interface for both humans and AI agents that want access to their complex workflows, data, and business models. And those enterprises will often be using agentic platforms tailored for their use. That’s part of the “supply side agent” vision of companies like Sierra. I’ll be talking with Sierra co-founder Clay Bavor about this next step in agentic development.
We’ve grown accustomed to thinking about agents as lonely consumers—“tell me the weather,” “scan my code,” “summarize my inbox.” But that’s only half the story. If we build supply-side agent infrastructure—autonomous, discoverable, governed, negotiated—we unlock agility, resilience, security, and collaboration.
My interest in product innovation, not just advances in the underlying technology, is also why I’m excited about my fireside chat with Josh Woodward, who co-led the team that developed NotebookLM at Google. I’m a huge fan of NotebookLM, which in many ways brought the power of RAG (retrieval-augmented generation) to end users, allowing them to collect a set of documents into Google Drive, and then use that collection to drive chat, audio overviews of documents, study guides, mind maps, and much more.
NotebookLM is also a lovely way to build on the deep collaborative infrastructure provided by Google Drive. We need to think more deeply about collaborative interfaces for AI. Right now, AI interaction is mostly a solitary sport. You can share the outputs with others, but not the generative process. I wrote about this recently in “People Work in Teams, AI Assistants in Silos.” I think that’s a big miss, and I’m hoping to probe Josh about Google’s plans in this area, and eager to see other innovations in AI-mediated human collaboration.
GitHub is another existing tool for collaboration that has become central to the AI ecosystem. I’m really looking forward to talking with outgoing CEO Thomas Dohmke about the ways that GitHub already provides a kind of exoskeleton for collaboration when using AI code generation tools. It seems to me that one of the frontiers of AI-human interfaces will be those that enable not just small teams but eventually large groups to collaborate. I suspect that GitHub may have more to teach us about that future than we now suspect.
And finally, we are now learning that managing context is a critical part of designing effective AI applications. My co-chair Addy Osmani will be talking about the emergence of context engineering as a real discipline, and its relevance to agentic AI development.
Tool-Chaining Agents and Real Workflows
Today’s AI tools are largely solo performers—a Copilot suggesting code or a ChatGPT answering a query. The next leap is from single agents to interconnected systems. The program is filled with sessions on “tool-to-tool workflows” and multi-agent systems.
Ken Kousen will showcase the new generation of coding agents, including Claude Code, Codex CLI, Gemini CLI, and Junie, that help developers navigate codebases, automate tasks, and even refactor intelligently. In her talk, Angie Jones takes it further: agents that go beyond code generation to manage PRs, write tests, and update documentation—stepping “out of the IDE” and into real-world workflows.
Even more exciting is the idea of agents collaborating with each other. The Demo Day will showcase a multi-agent coding system where agents share, correct, and evolve code together. This isn’t science fiction; Amit Rustagi’s talk on decentralized AI agent infrastructure using technologies like WebAssembly and IPFS provides a practical architectural framework for making these agent swarms a reality.
The Crucial Ingredient: Common Protocols
How do all these agents talk to each other? How do they discover new tools and use them safely? The answer that echoes throughout the agenda is the Model Context Protocol (MCP).
Much as the distribution network for electricity was the enabler for all of the product innovation of the electrical revolution, MCP is the foundational plumbing, the universal language that will allow this new ecosystem to flourish. Multiple sessions and an entire Demo Day are dedicated to it. We’ll see how Google is using it for agent-to-agent communication, how it can be used to control complex software like Blender with natural language, and even how it can power novel SaaS product demos.
The heavy focus on a standardized protocol signals that the industry is maturing past cool demos and is now building the robust, interoperable infrastructure needed for a true agentic economy.
If the development of the internet is any guide, though, MCP is a beginning, not the end. TCP/IP became the foundation of a layered protocol stack. It is likely that MCP will be followed by many more specialized protocols.
Why This Matters
| Theme | Why It’s Thrilling |
| --- | --- |
| Autonomous, Distributed AI | Agents that chain tasks and operate behind the scenes can unlock entirely new ways of building software. |
| Human Empowerment & Privacy | The push against centralized AI systems is a reminder that tools should serve users, not control them. |
| Context as Architecture | Elevating input design to first-class engineering will greatly improve reliability, trust, and AI behavior over time. |
| New Developer Roles | We’re seeing developers transition from writing code to orchestrating agents, designing workflows, and managing systems. |
| MCP & Network Effects | The idea of an “AI-native web,” where agents use standardized protocols to talk, is powerful, open-ended, and full of opportunity. |
I look forward to seeing you there!
AI tools are quickly moving beyond chat UX to sophisticated agent interactions. Our upcoming AI Codecon event, Coding for the Agentic World, will highlight how developers are already using agents to build innovative and effective AI-powered experiences. We hope you’ll join us on September 9 to explore the tools, workflows, and architectures defining the next era of programming. It’s free to attend. Register now to save your seat. And join us for O’Reilly Demo Day on September 16 to see how experts are shaping AI systems to work for them via MCP.
Head Start Funding Is on Track for Approval. It Still May Not Be Enough.

The funding and overall future of Head Start — which helps low-income families with child development and family support services — has been in the headlines for the better part of the year because of potential program cuts, followed by lawsuits, then think pieces and statements lauding its benefits.
The program, which turns 60 this year and has served more than 40 million families, appears to be in the eye of the storm. Local Head Start offices are largely operating business as usual, but leaders are waiting with bated breath — the future of its funding will be decided on Oct. 1.
While it may receive an additional $85 million, or simply maintain its $12.2 billion in funding, both local and national Head Start officials worry that neither scenario will be enough.
“On the one hand we’re relieved that the initial proposal to eliminate Head Start is out of the way and we don’t have to have those conversations,” says Michelle Haimowitz, executive director of the Massachusetts Head Start Association. “But another year of flat funding would continue to cut us off at the knees. And the costs don’t magically stay flat; the only way to do that is cut enrollment and make other changes we don’t want to make.”
The concern comes amid months of confusion for staff and parents on the fate of Head Start. In April, leaked documents detailing fiscal year 2026 budgets revealed plans to cut Head Start funding entirely. That same month, four state Head Start advocacy organizations — Illinois, Pennsylvania, Washington and Wisconsin — and two parent groups sued the Trump administration over potential spending cuts on diversity, equity and inclusion initiatives.
The yo-yoing policy proposals brought delays in accessing funds. Megan Woller, executive director of Idaho’s Head Start Association, recalls one local Head Start office considered taking out a loan in July in order to pay staff before the funding came through. Haimowitz added the Massachusetts offices saw “significant” delays in the first half of the year accessing funds and getting grant approvals. Many Head Start offices across the nation, including in Washington, Mississippi and Illinois, have reported experiencing confusion, but meanwhile others, including in Colorado, Ohio and Virginia, are expanding.
The administrative funding hiccups were exacerbated by the stress of not being able to reach regional federal Head Start offices: In April, the 10 regional offices that supported local Head Start offices throughout the country were whittled down to five, with the offices in Boston, Chicago, New York, San Francisco and Seattle closing. The closures followed plans to reduce the scope of the U.S. Department of Health and Human Services.
“While program specialists are doing everything they can to support us, their capacity to be as communicative and in touch as our program specialist in the Boston office — when they had half as many cases — is going to be significantly diminished,” Haimowitz says.
It also created confusion among parents, many of whom did not realize the shuttered regional offices were intermediaries that did not directly serve children.
“People got confused because they don’t know who that is; that it’s the federal government supporting the grantees, it’s not your kids’ center,” Woller says. “But the public doesn’t know the difference between all this. I was getting calls of ‘Wait, is my kid’s center closed tomorrow?’”
The funding hangups have largely been alleviated for now — Woller and Haimowitz both said the delays are continuing but seem to be improving — but a collective breath is being held as the future of Head Start’s funding remains in flux. While the Senate Appropriations Committee recommended an $85 million increase to Head Start funding in July — a roughly 0.6 percent bump — on Sept. 2, the House Appropriations Committee pushed the bill forward, proposing maintaining its current level of funding of $12.2 billion. The full Senate and House still need to give final approval and have until Oct. 1 to do so.
‘There Is No Plan B’
Tommy Sheridan, deputy director of the National Head Start Association, has served in the role for close to two decades. He acknowledged Head Start has been a pawn in political games on both sides of the aisle long before this year, pointing to a proposed funding cut in 2011 that was ultimately reversed, and the sequestration efforts in 2013.
Critics of Head Start have argued that it doesn’t produce strong enough outcomes for families to justify taxpayer support. Supporters contest that characterization.
Sheridan maintains what he calls a “cautious optimism” when it comes to the program’s funding future.
“Yes, we’ve seen those types of stressors and feel very confident Congress and the president will continue to keep their commitment to support families in every corner of the country,” he says. “Sometimes you have to take a step back to go forward; it feels that’s where the conversation has been, but we’re excited to move forward.”
However, what is unique in this year’s case is the possibility for Head Start’s funding to stay flat. The federal program has only had three instances over six decades when it did not receive an increase in funding, according to Sheridan. If the government decides to keep its funding flat yet again for the program this year, it would be the first time in its history that it did not receive a funding boost two fiscal years in a row.
Even if the 0.6 percent proposed increase for Head Start funding were enacted, it would not keep up with the rising cost of living — Social Security benefits, for example, increased 2.5 percent to account for cost of living in 2025. Each state has its own amount of Head Start funding, with some receiving more than others due to additional state investments. Massachusetts, for example, allocated an additional $20 million for the Head Start Supplemental Grant in fiscal year 2025, largely to boost classroom teacher salaries.
“Our concern is the fact we’re facing incredibly high costs: inflationary costs, rising health care costs, the need to pay staff competitive wages,” Sheridan says. “It’s not like any warm body can work as a Head Start teacher; that is a very specific set of skills, it requires degrees and training. So when we work with our staff and train them up, we want to reward them. With seeing flat funding, programs do have to make those cuts somewhere.”
The early childhood education sector is already battling with keeping its workforce, which has long been plagued by low wages. Woller says concern over the future of funding could accelerate the workforce exodus.
“The purpose of Head Start is to help lift families out of poverty, but we have to demonstrate that in part in how we pay the staff, and it’s really hard when the funding is as low as it is,” she says. “And when staff see everything crumbling at the federal level, they may look elsewhere; that’s also a big concern.”
There are also no viable alternative funding pathways, according to local and national officials. Head Start services are free for families.
“The types of services that Head Start provides take manpower other streams of child care funding don’t support,” Haimowitz says. “The state supplement has been growing and we’re incredibly grateful for that, but no alternative source is going to meet the types of needs that Head Start funding provides.”
Woller put it more simply.
“No, there is no Plan B,” she says with a self-defeated laugh. “There’s no backup plan when it’s this amount of dollars.”
Serving All Children?
There’s the added confusion of the recently announced policy change to reclassify Head Start as a federal public benefit, which would bar non-U.S. citizens from enrolling in Head Start services. There are currently no systems in place to check for immigration status.
The policy had not been enacted as of the beginning of September. Both regional and national Head Start officials say they have not been given any directive or guidance to enforce the proposed rules, and that all families eligible for Head Start under preexisting guidelines remain eligible.
“Philosophically, the Head Start promise is all children, regardless of circumstance at birth, can succeed at school and life,” Woller says. “We want to make sure we uphold that.”
While the funding future of Head Start remains in flux, officials are trying to spread the word that the programming remains open and available for anyone who needs it.
“The tough part is the uncertainty and lack of answers; that’s the part that’s keeping folks up at night,” Haimowitz says. “There are so few answers for all the questions we have, and directors are trying to keep their teachers on staff, keep families feeling comfortable and showing Head Start is open and enrolling amidst all this real uncertainty. It’s tough.”
Authenticate Amazon Q Business data accessors using a trusted token issuer

Since its general availability in 2024, Amazon Q Business (Amazon Q) has enabled independent software vendors (ISVs) to enhance their software-as-a-service (SaaS) solutions through secure access to customers’ enterprise data by becoming Amazon Q Business data accessors. To find out more about data accessors, see this page. Data accessors now support trusted identity propagation. With trusted token issuer (TTI) authorization support, ISVs acting as data accessors can integrate with the Amazon Q index while maintaining enterprise-grade security standards for their SaaS solutions.
Prior to TTI support, data accessors needed to implement an authorization code flow with AWS IAM Identity Center integration when accessing the Amazon Q index. With TTI support, ISVs can now use their own OpenID provider to authenticate enterprise users, eliminating the need for double authentication while maintaining security standards.
In this blog post, we show you how to implement TTI authorization for data accessors, compare authentication options, and provide step-by-step guidance for both ISVs and enterprises.
Prerequisites
Before you begin, make sure you have the following:
- An AWS account with administrator access
- Access to Amazon Q Business
- For ISVs:
- An OpenID Connect (OIDC) compatible authorization server
- For enterprises:
- Amazon Q Business administrator access
- Permission to create trusted token issuers
Solution Overview
This solution demonstrates how to implement TTI authentication for Amazon Q Business data accessors. The following diagram illustrates the overall flow between the different resources, from an ISV becoming a data accessor, to a customer enabling the ISV data accessor, to the ISV accessing the customer’s Amazon Q index:
Understanding Trusted Token Issuer Authentication
Trusted token issuer authentication represents an advanced identity integration capability for Amazon Q. At its core, TTI is a token exchange mechanism that propagates identity information into IAM role sessions, enabling AWS services to make authorization decisions based on the actual end user’s identity and group memberships. TTI support simplifies the identity integration process while maintaining robust security standards, making it possible for organizations to ensure that access to Amazon Q respects user-level permissions and group memberships. This enables fine-grained access control and maintains proper security governance within Amazon Q implementations.
Understanding Data Accessors
A data accessor is an ISV that has registered with AWS and is authorized to use their customers’ Amazon Q index for the ISV’s Large Language Model (LLM) solution. The process begins with ISV registration, where they provide configuration information including display name, business logo, and OpenID Connect (OIDC) configuration details for TTI support.
During ISV registration, providers must specify their tenantId configuration – a unique identifier for their application tenant. This identifier might be known by different names in various applications (such as Workspace ID in Slack or Domain ID in Asana) and is required for proper customer isolation in multi-tenant environments.
Amazon Q customers then add the ISV as a data accessor to their environment, granting access to their Amazon Q index based on specific permissions and data source selections. Once authorized, the ISV can query the customers’ index through API requests using their TTI authentication flow, creating a secure and controlled pathway for accessing customer data.
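The tenant isolation described above can be sketched in a few lines of Python: before serving a query, the ISV checks that the tenant identifier in the authenticated user’s ID token matches the tenantId registered for that customer. This is a minimal sketch under stated assumptions: the claim name (`tenant_id`) and helper names are illustrative, and a production check must also verify the token’s signature.

```python
import base64
import json


def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying it.

    A production implementation must verify the signature against the
    issuer's published JWKS; this sketch only inspects the claims.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))


def is_request_for_tenant(id_token: str, registered_tenant_id: str,
                          tenant_claim: str = "tenant_id") -> bool:
    """Return True when the token's tenant claim matches the tenantId the
    customer registered for this data accessor.

    The claim name "tenant_id" is an assumption for illustration; real
    providers expose it under names like Workspace ID or Domain ID.
    """
    claims = decode_jwt_payload(id_token)
    return claims.get(tenant_claim) == registered_tenant_id
```

An ISV would run this check in its request path and refuse to query the customer’s index when the tenant does not match.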
Implementing TTI Authentication for Amazon Q index Access
This section explains how to implement TTI authentication for accessing the Amazon Q index. The implementation involves initial setup by the customer and subsequent authentication flow implemented by data accessors for user access.
TTI provides capabilities that enable identity-enhanced IAM role sessions through Trusted Identity Propagation (TIP), allowing AWS services to make authorization decisions based on authenticated user identities and group memberships. Here’s how it works:
To enable data accessor access to a customer’s Amazon Q index through TTI, customers must perform a one-time setup by adding a data accessor in their Amazon Q Business application. During setup, a TTI with the data accessor’s identity provider information is created in the customer’s AWS IAM Identity Center, allowing the data accessor’s identity provider to authenticate access to the customer’s Amazon Q index.
The process to set up an ISV data accessor with TTI authentication consists of the following steps:
- The customer’s IT administrator accesses their Amazon Q Business application and creates a trusted token issuer with the ISV’s OAuth information. This returns a TrustedTokenIssuer (TTI) Amazon Resource Name (ARN).
- The IT administrator creates an ISV data accessor with the TTI ARN received in the previous step.
- Amazon Q Business confirms the provided TTI ARN with AWS IAM Identity Center and creates a data accessor application.
- Upon successful creation of the ISV data accessor, the IT administrator receives data accessor details to share with the ISV.
- The IT administrator provides these details to the ISV application.
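The console steps above map to API calls, so the admin-side setup can also be scripted. Here is a minimal sketch using boto3; the instance ARN, issuer URL, names, and attribute paths are hypothetical placeholders, and boto3 is imported lazily so the sketch reads (and loads) without the AWS SDK installed:

```python
def create_trusted_token_issuer(instance_arn: str, issuer_url: str) -> str:
    """Register the ISV's OIDC issuer in IAM Identity Center; returns the TTI ARN."""
    import boto3  # lazy import: only needed when actually calling AWS
    sso_admin = boto3.client("sso-admin")
    resp = sso_admin.create_trusted_token_issuer(
        InstanceArn=instance_arn,               # your Identity Center instance (placeholder)
        Name="isv-oidc-issuer",                 # hypothetical display name
        TrustedTokenIssuerType="OIDC_JWT",
        TrustedTokenIssuerConfiguration={
            "OidcJwtConfiguration": {
                "IssuerUrl": issuer_url,        # the ISV's OIDC issuer
                "ClaimAttributePath": "email",  # token claim mapped to a user (assumption)
                "IdentityStoreAttributePath": "emails.value",
                "JwksRetrievalOption": "OPEN_ID_DISCOVERY",
            }
        },
    )
    return resp["TrustedTokenIssuerArn"]


def create_data_accessor(application_id: str, isv_role_arn: str) -> str:
    """Create the ISV data accessor on the Amazon Q Business application."""
    import boto3
    qbusiness = boto3.client("qbusiness")
    resp = qbusiness.create_data_accessor(
        applicationId=application_id,
        displayName="example-isv-accessor",     # hypothetical
        principal=isv_role_arn,                 # the ISV's IAM role
        actionConfigurations=[
            {"action": "qbusiness:SearchRelevantContent"}
        ],
    )
    return resp["dataAccessorArn"]
```

In practice the IT administrator would run `create_trusted_token_issuer` first and pass the returned ARN into the data accessor creation, mirroring the console flow described above.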
Once the data accessor setup is complete in the customer’s Amazon Q environment, users can access the Amazon Q index through the ISV application by authenticating only against the data accessor’s identity provider.
The authentication flow proceeds as follows:
- A user authenticates against the data accessor’s identity provider through the ISV application. The ISV application receives an ID token for that user, generated from the ISV’s identity provider with the same client ID registered on their data accessor.
- The ISV application assumes the AWS Identity and Access Management (IAM) role it created during data accessor onboarding by calling the AssumeRole API, then makes a CreateTokenWithIAM API request to the customer’s AWS IAM Identity Center with the ID token. IAM Identity Center validates the ID token with the ISV’s identity provider and returns an IAM Identity Center token.
- The ISV application calls the AssumeRole API again with the IAM Identity Center token’s extracted identity context and the tenantId. The tenantId is a security control jointly established between the ISV and the customer, with the customer maintaining control over how it’s used in their trust relationships. This combination routes the request securely to the correct customer environment.
- The ISV application calls the SearchRelevantContent API with the session credentials and receives relevant content from the customer’s Amazon Q index.
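The four steps above can be sketched in Python with boto3. This is an illustrative assumption-laden outline, not a drop-in implementation: the role ARN, application IDs, and the session-tag key used to carry the tenantId are placeholders, and the `sts:identity_context` claim handling reflects how IAM Identity Center conveys a user’s identity context for trusted identity propagation:

```python
import base64
import json


def extract_identity_context(idc_token: str) -> str:
    """Decode the IAM Identity Center token (a JWT) and pull out the
    sts:identity_context claim used for trusted identity propagation."""
    payload_b64 = idc_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["sts:identity_context"]


def search_customer_index(isv_role_arn, customer_idc_app_arn, user_id_token,
                          q_application_id, retriever_id, tenant_id, query):
    """Sketch of the ISV-side flow: assume role, exchange tokens, query the index."""
    import boto3  # lazy import: only needed when actually calling AWS

    # Step 2: assume the ISV's onboarded IAM role, then exchange the user's
    # ID token for an IAM Identity Center token.
    sts = boto3.client("sts")
    creds = sts.assume_role(RoleArn=isv_role_arn,
                            RoleSessionName="isv-tti-session")["Credentials"]
    oidc = boto3.client(
        "sso-oidc",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"])
    token = oidc.create_token_with_iam(
        clientId=customer_idc_app_arn,   # customer's Identity Center application
        grantType="urn:ietf:params:oauth:grant-type:jwt-bearer",
        assertion=user_id_token)         # ID token from the ISV's identity provider

    # Step 3: assume the role again, propagating the user's identity context
    # and tagging the session with the tenantId (tag key is an assumption).
    identity_context = extract_identity_context(token["idToken"])
    tip_creds = sts.assume_role(
        RoleArn=isv_role_arn,
        RoleSessionName="isv-tip-session",
        ProvidedContexts=[{
            "ProviderArn": "arn:aws:iam::aws:contextProvider/IdentityCenter",
            "ContextAssertion": identity_context}],
        Tags=[{"Key": "qbusiness-dataaccessor:ExternalId",
               "Value": tenant_id}],
    )["Credentials"]

    # Step 4: query the customer's Amazon Q index with the identity-enhanced session.
    qbusiness = boto3.client(
        "qbusiness",
        aws_access_key_id=tip_creds["AccessKeyId"],
        aws_secret_access_key=tip_creds["SecretAccessKey"],
        aws_session_token=tip_creds["SessionToken"])
    return qbusiness.search_relevant_content(
        applicationId=q_application_id,
        contentSource={"retriever": {"retrieverId": retriever_id}},
        queryText=query)
```

Because the identity context travels inside the role session, authorization decisions downstream are made against the authenticated end user, not just the ISV’s role.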
When implementing Amazon Q integration, ISVs can choose between two approaches, each with its own benefits and considerations:
| | Trusted Token Issuer | Authorization Code |
| --- | --- | --- |
| Advantages | Single authentication on the ISV system. Enables backend-only access to the SearchRelevantContent API without user interaction. | Enhanced security through mandatory user initiation for each session. |
| Considerations | Some enterprises may prefer authentication flows that require explicit user consent for each session, providing additional control over API access timing and duration. Requires ISVs to host and maintain an OpenID Provider. | Requires double authentication on the ISV system. |
TTI excels in providing a seamless user experience through single authentication on the ISV system and enables backend-only implementations for SearchRelevantContent API access without requiring direct user interaction. However, this approach requires ISVs to maintain their own OIDC authorization server, which may present implementation challenges for some organizations. Additionally, some enterprises might have concerns about ISVs having persistent ability to make API requests on behalf of their users without explicit per-session authorization.
Next Steps
For ISVs: Becoming a Data Accessor with TTI Authentication
Getting started with the Amazon Q data accessor registration process with TTI authentication is straightforward. If you already have an OIDC-compatible authorization server for your application’s authentication, you’re most of the way there.
To begin the registration process, you’ll need to provide the following information:
- Display name and business logo to be shown in the AWS Management Console
- OIDC configuration details (OIDC ClientId and discovery endpoint URL)
- TenantID configuration details that specify how your application identifies different customer environments
For details, see Information to be provided to the Amazon Q Business team.
For ISVs using Amazon Cognito as their OIDC authorization server, here’s how to retrieve the required OIDC configuration details:
- To get the OIDC ClientId:
  - Navigate to the Amazon Cognito console.
  - Select your User Pool.
  - Go to “Applications” > “App clients”.
  - The ClientId is listed under “Client ID” for your app client.
- To get the discovery endpoint URL:
  - The URL follows this format: https://cognito-idp.{region}.amazonaws.com/{userPoolId}/.well-known/openid-configuration
  - Replace {region} with your AWS Region (for example, us-east-1).
  - Replace {userPoolId} with your Cognito User Pool ID.

For example, if your User Pool is in us-east-1 with ID us-east-1_abcd1234, your discovery endpoint URL would be: https://cognito-idp.us-east-1.amazonaws.com/us-east-1_abcd1234/.well-known/openid-configuration
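The URL construction is mechanical enough to capture in a one-line helper (the function name is ours, not part of any SDK):

```python
def cognito_discovery_url(region: str, user_pool_id: str) -> str:
    """Build the OIDC discovery endpoint URL for an Amazon Cognito user pool."""
    return (f"https://cognito-idp.{region}.amazonaws.com/"
            f"{user_pool_id}/.well-known/openid-configuration")


# The example from the text:
# cognito_discovery_url("us-east-1", "us-east-1_abcd1234")
```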
Note: While this example uses Amazon Cognito, the process will vary depending on your OIDC provider. Common providers like Auth0, Okta, or custom implementations will have their own methods for accessing these configuration details.
Once registered, you can enhance your generative AI application with the powerful capabilities of Amazon Q, allowing your customers to access their enterprise knowledge base through your familiar interface. AWS provides comprehensive documentation and support to help you implement the authentication flow and API integration efficiently.
For Enterprises: Enabling TTI-authenticated Data Accessor
To enable a TTI-authenticated data accessor, your IT administrator needs to complete the following steps in the Amazon Q console:
- Create a trusted token issuer using the ISV’s OAuth information
- Set up the data accessor with the generated TTI ARN
- Configure appropriate data source access permissions
This streamlined setup allows your users to access the Amazon Q index through the ISV’s application using their existing ISV application credentials, removing the need for multiple logins while maintaining security controls over your enterprise data.
Both ISVs and enterprises benefit from AWS’s comprehensive documentation and support throughout the implementation process, facilitating a smooth and secure integration experience.
Clean up resources
To avoid unused resources, follow these steps if you no longer need the data accessor:
- Delete the data accessor:
- On the Amazon Q Business console, choose Data accessors in the navigation pane.
- Select your data accessor and choose Delete.
- Delete the TTI:
- On the IAM Identity Center console, choose Trusted Token Issuers in the navigation pane.
- Select the associated issuer and choose Delete.
Conclusion
The introduction of Trusted Token Issuer (TTI) authentication for Amazon Q data accessors marks a significant advancement in how ISVs integrate with Amazon Q Business. By enabling data accessors to use their existing OIDC infrastructure, TTI removes the need for double authentication while maintaining enterprise-grade security through robust tenant isolation and secure multi-tenant access controls, making sure each customer’s data remains protected within its dedicated environment. This streamlined approach enhances the end-user experience and simplifies the integration process for ISVs building generative AI solutions.
In this post, we showed how to implement TTI authentication for Amazon Q data accessors. We covered the setup process for both ISVs and enterprises and demonstrated how TTI authentication simplifies the user experience while maintaining security standards.
To learn more about Amazon Q Business and data accessor integration, refer to Share your enterprise data with data accessors using Amazon Q index and Information to be provided to the Amazon Q Business team. You can also contact your AWS account team for personalized guidance. Visit the Amazon Q Business console to begin using these enhanced authentication capabilities today.
About the Authors
Takeshi Kobayashi is a Senior AI/ML Solutions Architect within the Amazon Q Business team, responsible for developing advanced AI/ML solutions for enterprise customers. With over 14 years of experience at Amazon in AWS, AI/ML, and technology, Takeshi is dedicated to leveraging generative AI and AWS services to build innovative solutions that address customer needs. Based in Seattle, WA, Takeshi is passionate about pushing the boundaries of artificial intelligence and machine learning technologies.
Siddhant Gupta is a Software Development Manager on the Amazon Q team based in Seattle, WA. He is driving innovation and development in cutting-edge AI-powered solutions.
Akhilesh Amara is a Software Development Engineer on the Amazon Q team based in Seattle, WA. He is contributing to the development and enhancement of intelligent and innovative AI tools.