AI Research
Trump megabill preventing artificial intelligence regulation could harm planet’s climate
U.S. Republicans in Congress are currently working to pass a tax and spending bill that may include a provision banning states from regulating artificial intelligence (AI)—a move that could increase the technology’s electricity consumption and worsen climate change, according to experts.
A June 27 Guardian article explained that data centers used to run AI require large amounts of electricity, producing high levels of greenhouse gases. “By limiting oversight, [the bill] could slow the transition away from fossil fuels and reduce incentives for more energy-efficient AI energy reliance,” said Gianluca Guidi, visiting graduate student at Harvard T.H. Chan School of Public Health, who was quoted in the article.
“We talk a lot about what AI can do for us, but not nearly enough about what it’s doing to the planet,” Guidi added. “If we’re serious about using AI to improve human well-being, we can’t ignore the growing toll it’s taking on climate stability and public health.”
The provision preventing AI regulation was included in the version of the bill initially passed in May by the House of Representatives. On July 1, the Senate passed a different version of the bill that struck the provision. The bill now returns to the House of Representatives for another vote.
Read the Guardian article: Trump’s tax bill seeks to prevent AI regulations. Experts fear a heavy toll on the planet
AI Research
Build an MCP application with Mistral models on AWS
This post is cowritten with Siddhant Waghjale and Samuel Barry from Mistral AI.
Model Context Protocol (MCP) is a standard that has been gaining significant traction in recent months. At a high level, it consists of a standardized interface designed to streamline and enhance how AI models interact with external data sources and systems. Instead of hardcoding retrieval and action logic or relying on one-time tools, MCP offers a structured way to pass contextual data (for example, user profiles, environment metadata, or third-party content) into a large language model (LLM) context and to route model outputs to external systems. For developers, MCP abstracts away integration complexity and creates a unified layer for injecting external knowledge and executing model actions, making it more straightforward to build robust and efficient agentic AI systems that remain decoupled from data-fetching logic.
Mistral AI is a frontier research lab that emerged in 2023 as a leading open source contender in the field of generative AI. Mistral has released many state-of-the-art models, from Mistral 7B and Mixtral in the early days up to the recently announced Mistral Medium 3 and Small 3—effectively popularizing the mixture-of-experts architecture along the way. Mistral models are generally described as extremely efficient and versatile, frequently reaching state-of-the-art levels of performance at a fraction of the cost. These models are now seamlessly integrated into Amazon Web Services (AWS) services, unlocking powerful deployment options for developers and enterprises. Through Amazon Bedrock, users can access Mistral models using a fully managed API, enabling rapid prototyping without managing infrastructure. Amazon Bedrock Marketplace further extends this by allowing quick model discovery, licensing, and integration into existing workflows. For power users seeking fine-tuning or custom training, Amazon SageMaker JumpStart offers a streamlined environment to customize Mistral models with their own data, using the scalable infrastructure of AWS. This integration makes it faster than ever to experiment, scale, and productionize Mistral models across a wide range of applications.
This post demonstrates building an intelligent AI assistant using Mistral AI models on AWS and MCP, integrating real-time location services, time data, and contextual memory to handle complex multimodal queries. This use case, restaurant recommendations, serves as an example, but this extensible framework can be adapted for enterprise use cases by modifying MCP server configurations to connect with your specific data sources and business systems.
Solution overview
This solution uses Mistral models on Amazon Bedrock to understand user queries and route the query to relevant MCP servers to provide accurate and up-to-date answers. The system follows this general flow:
- User input – The user sends a query (text, image, or both) through either a terminal-based or web-based Gradio interface
- Image processing – If an image is detected, the system processes and optimizes it for the AI model
- Model request – The query is sent to the Amazon Bedrock Converse API with appropriate system instructions
- Tool detection – If the model determines it needs external data, it requests a tool invocation
- Tool execution – The system routes the tool request to the appropriate MCP server and executes it
- Response generation – The model incorporates the tool’s results to generate a comprehensive response
- Response delivery – The final answer is displayed to the user
In this example, we demonstrate the MCP framework using a general use case of restaurant or location recommendation and route planning. Users can provide multimodal input (such as text plus image), and the application integrates Google Maps, Time, and Memory MCP servers. Additionally, this post showcases how to use the Strands Agent framework as an alternative approach to build the same MCP application with significantly reduced complexity and code. Strands Agent is an open source, multi-agent coordination framework that simplifies the development of intelligent, context-aware agent systems across various domains. You can build your own MCP application by modifying the MCP server configurations to suit your specific needs. You can find the complete source code for this example in our Git repository. The following diagram is the solution architecture.
Prerequisites
Before implementing the example, you need to set up the account and environment. Use the following steps. To set up the AWS account:
- Create an AWS account. If you don’t already have one, sign up at https://aws.amazon.com
- To enable Amazon Bedrock access, go to the Amazon Bedrock console and request access to the models you plan to use (for this walkthrough, request access to Mistral Pixtral Large). Or deploy the Mistral Small 3 model from Amazon Bedrock Marketplace. (For more details, refer to the Mistral model deployments on AWS section later in this post.) When your request is approved, you’ll be able to use these models through the Amazon Bedrock Converse API
To set up the local environment:
- Install the required tools:
- Python 3.10 or later
- Node.js (required for MCP tool servers)
- AWS Command Line Interface (AWS CLI), which is needed for configuration
- Clone the repository
- Install the Python dependencies
- Configure AWS credentials with the AWS CLI, then enter your AWS access key ID, secret access key, and preferred AWS Region
- Set up MCP tool servers. The server configurations are provided in the server_configs.py file. The system uses Node.js-based MCP servers, which are installed automatically through npm the first time you run the application. You can add other MCP server configurations in this file, and the solution can be quickly modified and extended to meet your business requirements.
Mistral model deployments on AWS
Mistral models can be accessed or deployed using the following methods. To use foundation models (FMs) in MCP applications, the models must support tool use functionality.
Amazon Bedrock serverless (Pixtral Large)
To enable this model, follow these steps:
- Go to the Amazon Bedrock console.
- From the left navigation pane, select Model access.
- Choose Manage model access.
- Search for the model using the keyword Pixtral, select it, and choose Next, as shown in the following screenshot. The model will then be ready to use.
This model has cross-Region inference enabled. When using the model ID, always add the Region prefix eu or us before the model ID, such as eu.mistral.pixtral-large-2502-v1:0. Provide this model ID in config.py. You can now test the example with the Gradio web-based app.
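The post doesn’t show the contents of config.py; as a rough illustration (the variable names here are assumptions, not the repository’s actual code), the file might simply hold the Region and model identifier:

```python
# config.py -- illustrative sketch only; check the repository for the real field names.
AWS_REGION = "us-east-1"

# Cross-Region inference requires the "eu." or "us." prefix on the Bedrock model ID.
# For Marketplace or JumpStart deployments, use the endpoint ARN here instead.
model_id = "us.mistral.pixtral-large-2502-v1:0"
```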
Amazon Bedrock Marketplace (Mistral-Small-24B-Instruct-2501)
Amazon Bedrock Marketplace and SageMaker JumpStart deployments are dedicated instances (serverful) and incur charges as long as the instance remains deployed. For more information, refer to Amazon Bedrock pricing and Amazon SageMaker pricing.
To enable this model, follow these steps:
- Go to the Amazon Bedrock console
- In the left navigation pane, select Model catalog
- In the search bar, search for “Mistral-Small-24B-Instruct-2501,” as shown in the following screenshot
- Select the model and choose Deploy.
- In the configuration page, you can keep all fields as default. This endpoint requires an instance type ml.g6.12xlarge. Check service quotas under the Amazon SageMaker service to make sure you have more than two instances available for endpoint usage (you’ll use another instance for Amazon SageMaker JumpStart deployment). If you don’t have more than two instances, request a quota increase for this instance type. Then choose Deploy. The model deployment might take a few minutes.
- When the model is in service, copy the endpoint Amazon Resource Name (ARN), as shown in the following screenshot, and add it to the model_id field in the config.py file. Then you can test the solution with the Gradio web-based app.
- The Mistral-Small-24B-Instruct-2501 model doesn’t support image input, so only text-based Q&A is supported.
Amazon SageMaker JumpStart (Mistral-Small-24B-Instruct-2501)
To enable this model, follow these steps:
- Go to the Amazon SageMaker console
- Create a domain and user profile
- Under the created user profile, launch Studio
- In the left navigation pane, select JumpStart, then search for “Mistral”
- Select Mistral-Small-24B-Instruct-2501, then choose Deploy
This deployment might take a few minutes. The following screenshot shows that this model is marked as Bedrock ready. This means you can register this model as an Amazon Bedrock Marketplace deployment and use Amazon Bedrock APIs to invoke this Amazon SageMaker endpoint.
- After the model is in service, copy its endpoint ARN from the Amazon Bedrock Marketplace deployment, as shown in the following screenshot, and provide it in the model_id field of the config.py file. Then you can test the solution with the Gradio web-based app.
The Mistral-Small-24B-Instruct-2501 model doesn’t support image input, so only text-based Q&A is supported.
Build an MCP application with Mistral models on AWS
The following sections provide detailed insights into building MCP applications from the ground up using a component-level approach. We explore how to implement the three core MCP components (the MCP host, the MCP client, and MCP servers), giving you complete control and understanding of the underlying architecture.
MCP host component
The MCP is designed to facilitate seamless interaction between AI models and external tools, systems, and data sources. In this architecture, the MCP host plays a pivotal role in managing the lifecycle and orchestration of MCP clients and servers, enabling AI applications to access and utilize external resources effectively. The MCP host is responsible for integration with FMs, providing context, capabilities discovery, initialization, and MCP client management. In this solution, we have three files to provide this capability.
The first file is agent.py. The BedrockConverseAgent class in agent.py is the core component that manages communication with the Amazon Bedrock service and provides the FM integration. The constructor initializes the agent with model settings and sets up the Amazon Bedrock client.
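The repository code isn’t reproduced in this extract, but a minimal constructor along the following lines conveys the idea (the class name comes from the post; everything else is an assumption):

```python
import boto3


class BedrockConverseAgent:
    def __init__(self, model_id, region="us-east-1", system_prompt="You are a helpful assistant."):
        # Amazon Bedrock Runtime client used for all Converse API calls
        self.client = boto3.client("bedrock-runtime", region_name=region)
        self.model_id = model_id
        self.system_prompt = system_prompt
        self.messages = []   # running conversation history
        self.tools = None    # UtilityHelper instance attached after the MCP servers connect
```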
Then, the agent intelligently handles multimodal inputs with its image processing capabilities. This method validates image URLs provided by the user, downloads images, detects and normalizes image formats, resizes large images to meet API constraints, and converts incompatible formats to JPEG.
When users enter a prompt, the agent detects whether it contains an uploaded image or an image URL and processes it accordingly in the invoke_with_prompt function. This way, users can paste an image URL in their query or upload an image from their local device and have it analyzed by the AI model.
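As an illustration of that step (a sketch, not the repository’s exact method), the agent could normalize an image like this before attaching it to the Converse request:

```python
import io

import requests
from PIL import Image

MAX_SIDE = 1568  # keep images within typical Converse API size limits


def image_block_from_url(url: str) -> dict:
    """Download an image, convert it to JPEG, resize if needed, and
    return a Converse API image content block."""
    raw = requests.get(url, timeout=30).content
    img = Image.open(io.BytesIO(raw)).convert("RGB")
    if max(img.size) > MAX_SIDE:
        img.thumbnail((MAX_SIDE, MAX_SIDE))
    buffer = io.BytesIO()
    img.save(buffer, format="JPEG")
    return {"image": {"format": "jpeg", "source": {"bytes": buffer.getvalue()}}}
```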
The most powerful feature is the agent’s ability to use external tools provided by MCP servers. When the model wants to use a tool, the agent detects the tool_use stop reason from Amazon Bedrock and extracts the tool request details, including names and inputs. It then executes the tool through the UtilityHelper, and the tool results are returned to the model. The MCP host then continues the conversation with the tool results incorporated.
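A simplified version of that request/tool loop, assuming the constructor and UtilityHelper sketched elsewhere in this post, might look as follows (the real agent.py may structure it differently):

```python
async def invoke_with_prompt(self, content_blocks):
    """Send user content to Amazon Bedrock and resolve tool calls until the model finishes."""
    self.messages.append({"role": "user", "content": content_blocks})
    while True:
        response = self.client.converse(
            modelId=self.model_id,
            messages=self.messages,
            system=[{"text": self.system_prompt}],
            toolConfig={"tools": self.tools.get_tool_specs()},
        )
        message = response["output"]["message"]
        self.messages.append(message)
        if response["stopReason"] != "tool_use":
            # No tool requested: return the model's text answer
            return "".join(block.get("text", "") for block in message["content"])
        # Execute every requested tool and feed the results back to the model
        results = []
        for block in message["content"]:
            if "toolUse" in block:
                results.append(await self.tools.execute_tool(block["toolUse"]))
        self.messages.append({"role": "user", "content": results})
```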
The second file is utility.py. The UtilityHelper class in utility.py serves as a bridge between Amazon Bedrock and external tools. It manages tool registration, formats tool specifications for Amazon Bedrock compatibility, and handles tool execution.
For Amazon Bedrock to understand the tools available from MCP servers, the utility module generates tool specifications that provide each tool’s name, description, and inputSchema.
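The original function body isn’t included in this extract; a minimal sketch of what such a method likely does, using the Converse API toolSpec format (the internal _tools mapping is an assumption), is:

```python
def get_tool_specs(self):
    """Convert the registered MCP tools into Converse API toolSpec entries."""
    return [
        {
            "toolSpec": {
                "name": name,
                "description": info["description"],
                "inputSchema": {"json": info["input_schema"]},
            }
        }
        for name, info in self._tools.items()
    ]
```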
When the model requests a tool, the utility module executes it and formats the result:
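Again as a sketch rather than the repository’s code, the execution path wraps the MCP result in a Converse toolResult block:

```python
async def execute_tool(self, tool_use: dict) -> dict:
    """Run a tool on its MCP server and wrap the output as a Converse toolResult block."""
    name = tool_use["name"]
    session = self._tools[name]["session"]  # MCP session that exposes this tool
    result = await session.call_tool(name, tool_use["input"])
    return {
        "toolResult": {
            "toolUseId": tool_use["toolUseId"],
            "content": [{"text": str(result.content)}],
            "status": "success",
        }
    }
```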
The final component in the MCP host is the gradio_app.py file, which implements a web-based interface for our AI assistant using Gradio. First, it initializes the model configurations and the agent, then connects to the MCP servers and retrieves the available tools from them.
When a user sends a message, the app processes it through the agent’s invoke_with_prompt() function, and the response from the model is displayed in the Gradio UI.
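A stripped-down version of such a Gradio app (illustrative only; it assumes an already connected agent from the sketches above) could be:

```python
import asyncio

import gradio as gr

# `agent` is assumed to be a connected BedrockConverseAgent with MCP tools registered.


def respond(message, history):
    # Gradio calls this on every user turn; the agent handles tool routing internally.
    return asyncio.run(agent.invoke_with_prompt([{"text": message}]))


demo = gr.ChatInterface(fn=respond, title="Mistral MCP assistant")
demo.launch()
```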
MCP client implementation
MCP clients serve as intermediaries between the AI model and the MCP server. Each client maintains a one-to-one session with a server, managing the lifecycle of interactions, including handling interruptions, timeouts, and reconnections. MCP clients route protocol messages bidirectionally between the host application and the server. They parse responses, handle errors, and make sure that the data is relevant and appropriately formatted for the AI model. They also facilitate the invocation of tools exposed by the MCP server and manage the context so that the AI model has access to the necessary resources and tools for its tasks.
The following function in the mcpclient.py file is designed to establish connections to MCP servers and manage connection sessions.
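Using the MCP Python SDK, a connection helper along these lines would do the job (a sketch, not the repository’s exact function):

```python
from contextlib import AsyncExitStack

from mcp import ClientSession
from mcp.client.stdio import stdio_client


async def connect_to_servers(server_configs):
    """Launch each configured MCP server over stdio and return initialized sessions."""
    stack = AsyncExitStack()
    sessions = []
    for params in server_configs:  # each entry is a StdioServerParameters object
        read, write = await stack.enter_async_context(stdio_client(params))
        session = await stack.enter_async_context(ClientSession(read, write))
        await session.initialize()
        sessions.append(session)
    return sessions, stack  # the caller closes the stack to shut the servers down
```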
After it’s connected to the MCP servers, the client lists the available tools from each MCP server along with their specifications.
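With the MCP SDK this is a single list_tools call per session; a minimal sketch:

```python
async def collect_tools(sessions):
    """Gather tool names, descriptions, and input schemas from every connected MCP server."""
    tools = {}
    for session in sessions:
        listing = await session.list_tools()
        for tool in listing.tools:
            tools[tool.name] = {
                "description": tool.description,
                "input_schema": tool.inputSchema,
                "session": session,
            }
    return tools
```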
When a tool is called, the client first validates that the session is active, then executes the tool through the MCP session established between the client and the server. Finally, it returns the structured response.
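The call itself maps directly onto the SDK’s call_tool method; a sketch, assuming the tools mapping built above:

```python
async def call_tool(tools: dict, name: str, arguments: dict):
    """Execute a named tool on the MCP server that exposes it and return its content blocks."""
    entry = tools.get(name)
    if entry is None:
        raise ValueError(f"Unknown tool: {name}")
    result = await entry["session"].call_tool(name, arguments)
    return result.content  # list of content blocks (text, images, ...) returned by the server
```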
MCP server configuration
The server_configs.py file defines the MCP tool servers that our application connects to. This configuration sets up the Google Maps MCP server with an API key, adds a time server for date and time operations, and includes a memory server for storing conversation context. Each server is defined as a StdioServerParameters object, which specifies how to launch the server process with Node.js (using npx). You can add or remove MCP server configurations based on your application objectives and requirements.
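A configuration file in that shape could look roughly like the following; the npm package names and the environment variable are placeholders to verify against the actual server_configs.py:

```python
import os

from mcp import StdioServerParameters

SERVER_CONFIGS = [
    # Google Maps MCP server (requires an API key)
    StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-google-maps"],
        env={"GOOGLE_MAPS_API_KEY": os.environ.get("GOOGLE_MAPS_API_KEY", "")},
    ),
    # Time server for current date/time lookups (package name is a placeholder)
    StdioServerParameters(command="npx", args=["-y", "time-mcp"]),
    # Memory server for persisting conversation context
    StdioServerParameters(command="npx", args=["-y", "@modelcontextprotocol/server-memory"]),
]
```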
Alternative implementation: Strands Agent framework
For developers seeking a more streamlined approach to building MCP-powered applications, the Strands Agents framework provides an alternative that significantly reduces implementation complexity while maintaining full MCP compatibility. This section demonstrates how the same functionality can be achieved with substantially less code using Strands Agents. The code sample is available in this Git repository.
First, initialize the model and provide the Mistral model ID on Amazon Bedrock.
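With Strands this is a couple of lines (a sketch based on the Strands Agents SDK; the class name and model ID are assumptions to check against the repository):

```python
from strands.models import BedrockModel

# Cross-Region inference profile for Pixtral Large on Amazon Bedrock; adjust to your Region and model.
model = BedrockModel(model_id="us.mistral.pixtral-large-2502-v1:0")
```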
The following code creates multiple MCP clients from server configurations, automatically manages their lifecycle using context managers, collects available tools from each client, and initializes an AI agent with the unified set of tools.
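A condensed sketch of that wiring is shown below; MCPClient and list_tools_sync reflect my reading of the Strands SDK and may differ slightly from the repository code:

```python
from contextlib import ExitStack

from mcp.client.stdio import stdio_client
from strands import Agent
from strands.tools.mcp import MCPClient

from server_configs import SERVER_CONFIGS  # list of StdioServerParameters (see above)

# One MCPClient per configured server; each wraps a factory for the stdio transport.
mcp_clients = [MCPClient(lambda params=params: stdio_client(params)) for params in SERVER_CONFIGS]

with ExitStack() as stack:
    for client in mcp_clients:
        stack.enter_context(client)  # starts the server process and manages its session
    tools = [tool for client in mcp_clients for tool in client.list_tools_sync()]
    agent = Agent(model=model, tools=tools)  # single agent with the unified tool set
```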
The following function processes user messages with optional image inputs by formatting them for multimodal AI interaction, sending them to an agent that handles tool routing and response generation, and returning the agent’s text response:
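A sketch of that handler, assuming the agent built above and the Bedrock-style content-block format for multimodal input:

```python
def process_message(agent, text: str, image_bytes: bytes | None = None) -> str:
    """Send a text (and optional image) turn to the Strands agent and return its reply text."""
    content = [{"text": text}]
    if image_bytes is not None:
        content.append({"image": {"format": "jpeg", "source": {"bytes": image_bytes}}})
    result = agent(content)  # the agent handles tool routing and response generation
    return str(result)
```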
The Strands Agents approach streamlines MCP integration by reducing code complexity, automating resource management, and unifying tools from multiple servers into a single interface. It also offers built-in error handling and native multimodal support, minimizing manual effort and enabling more robust, efficient development.
Demo
This demo showcases an intelligent food recognition application with integrated location services. Users can submit an image of a dish, and the AI assistant:
- Accurately identifies the cuisine from the image
- Provides restaurant recommendations based on the identified food
- Offers route planning powered by the Google Maps MCP server
The application demonstrates sophisticated multi-server collaboration to answer complex queries such as “Is the restaurant open when I arrive?” To answer this, the system:
- Determines the current time in the user’s location using the time MCP server
- Retrieves restaurant operating hours and calculates travel time using the Google Maps MCP server
- Synthesizes this information to provide a clear, accurate response
We encourage you to modify the solution by adding additional MCP server configurations tailored to your specific personal or business requirements.
Clean up
When you finish experimenting with this example, delete the SageMaker endpoints that you created in the process:
- Go to the Amazon SageMaker console
- In the left navigation pane, choose Inference and then choose Endpoints
- From the endpoints list, delete the ones that you created from Amazon Bedrock Marketplace and SageMaker JumpStart.
Conclusion
This post covers how integrating MCP with Mistral AI models on AWS enables the rapid development of intelligent applications that interact seamlessly with external systems. By standardizing tool use, developers can focus on core logic while keeping AI reasoning and tool execution cleanly separated, improving maintainability and scalability. The Strands Agent framework enhances this by streamlining implementation without sacrificing MCP compatibility. With AWS offering flexible deployment options, from Amazon Bedrock to Amazon Bedrock Marketplace and SageMaker, this approach balances performance and cost. The solution demonstrates how even lightweight setups can connect AI to real-time services.
We encourage developers to build upon this foundation by incorporating additional MCP servers tailored to their specific requirements. As the landscape of MCP-compatible tools continues to expand, organizations can create increasingly sophisticated AI assistants that effectively reason over external knowledge and take meaningful actions, accelerating the adoption of practical, agentic AI systems across industries while reducing implementation barriers.
Ready to implement MCP in your own projects? Explore the official AWS MCP server repository for examples and reference implementations. For more information about the Strands Agents framework, which simplifies agent building with its intuitive, code-first approach to data source integration, visit Strands Agent. Finally, dive deeper into open protocols for agent interoperability in the recent AWS blog post: Open Protocols for Agent Interoperability, which explores how these technologies are shaping the future of AI agent development.
About the authors
Ying Hou, PhD, is a Sr. Specialist Solution Architect for Gen AI at AWS, where she collaborates with model providers to onboard the latest and most intelligent AI models onto AWS platforms. With deep expertise in Gen AI, ASR, computer vision, NLP, and time-series forecasting models, she works closely with customers to design and build cutting-edge ML and GenAI applications.
Siddhant Waghjale is an Applied AI Engineer at Mistral AI, where he works on challenging customer use cases and applied science, helping customers achieve their goals with Mistral models. He’s passionate about building solutions that bridge AI capabilities with actual business applications, specifically in agentic workflows and code generation.
Samuel Barry is an Applied AI Engineer at Mistral AI, where he helps organizations design, deploy, and scale cutting-edge AI systems. He partners with customers to deliver high-impact solutions across a range of use cases, including RAG, agentic workflows, fine-tuning, and model distillation. Alongside engineering efforts, he also contributes to applied research initiatives that inform and strengthen production use cases.
Preston Tuggle is a Sr. Specialist Solutions Architect with the Third-Party Model Provider team at AWS. He focuses on working with model providers across Amazon Bedrock and Amazon SageMaker, helping them accelerate their go-to-market strategies through technical scaling initiatives and customer engagement.
AI Research
Google’s open MedGemma AI models could transform healthcare
Instead of keeping their new MedGemma AI models locked behind expensive APIs, Google will hand these powerful tools to healthcare developers.
The new arrivals are called MedGemma 27B Multimodal and MedSigLIP, and they’re part of Google’s growing collection of open-source healthcare AI models. What makes these special isn’t just their technical prowess, but the fact that hospitals, researchers, and developers can download them, modify them, and run them however they see fit.
Google’s AI meets real healthcare
The flagship MedGemma 27B model doesn’t just read medical text like previous versions did; it can actually “look” at medical images and understand what it’s seeing. Whether it’s chest X-rays, pathology slides, or patient records potentially spanning months or years, it can process all of this information together, much like a doctor would.
The performance figures are quite impressive. When tested on MedQA, a standard medical knowledge benchmark, the 27B text model scored 87.7%. That puts it within spitting distance of much larger, more expensive models whilst costing about a tenth as much to run. For cash-strapped healthcare systems, that’s potentially transformative.
The smaller sibling, MedGemma 4B, might be more modest in size but it’s no slouch. Despite being tiny by modern AI standards, it scored 64.4% on the same tests, making it one of the best performers in its weight class. More importantly, when US board-certified radiologists reviewed chest X-ray reports it had written, they deemed 81% of them accurate enough to guide actual patient care.
MedSigLIP: A featherweight powerhouse
Alongside these generative AI models, Google has released MedSigLIP. At just 400 million parameters, it’s practically featherweight compared to today’s AI giants, but it’s been specifically trained to understand medical images in ways that general-purpose models cannot.
This little powerhouse has been fed a diet of chest X-rays, tissue samples, skin condition photos, and eye scans. The result? It can spot patterns and features that matter in medical contexts whilst still handling everyday images perfectly well.
MedSigLIP creates a bridge between images and text. Show it a chest X-ray and ask it to find similar cases in a database, and it’ll understand not just visual similarities but medical significance too.
Healthcare professionals are putting Google’s AI models to work
The proof of any AI tool lies in whether real professionals actually want to use it. Early reports suggest doctors and healthcare companies are excited about what these models can do.
DeepHealth in Massachusetts has been testing MedSigLIP for chest X-ray analysis. They’re finding it helps spot potential problems that might otherwise be missed, acting as a safety net for overworked radiologists. Meanwhile, at Chang Gung Memorial Hospital in Taiwan, researchers have discovered that MedGemma works with traditional Chinese medical texts and answers staff questions with high accuracy.
Tap Health in India has highlighted something crucial about MedGemma’s reliability. Unlike general-purpose AI that might hallucinate medical facts, MedGemma seems to understand when clinical context matters. It’s the difference between a chatbot that sounds medical and one that actually thinks medically.
Why open-sourcing the AI models is critical in healthcare
Beyond generosity, Google’s decision to make these models open is also strategic. Healthcare has unique requirements that standard AI services can’t always meet. Hospitals need to know their patient data isn’t leaving their premises. Research institutions need models that won’t suddenly change behaviour without warning. Developers need the freedom to fine-tune for very specific medical tasks.
By open-sourcing the AI models, Google has addressed these concerns with healthcare deployments. A hospital can run MedGemma on their own servers, modify it for their specific needs, and trust that it’ll behave consistently over time. For medical applications where reproducibility is crucial, this stability is invaluable.
However, Google has been careful to emphasise that these models aren’t ready to replace doctors. They’re tools that require human oversight, clinical correlation, and proper validation before any real-world deployment. The outputs need checking, the recommendations need verifying, and the decisions still rest with qualified medical professionals.
This cautious approach makes sense. Even with impressive benchmark scores, medical AI can still make mistakes, particularly when dealing with unusual cases or edge scenarios. The models excel at processing information and spotting patterns, but they can’t replace the judgment, experience, and ethical responsibility that human doctors bring.
What’s exciting about this release isn’t just the immediate capabilities, but what it enables. Smaller hospitals that couldn’t afford expensive AI services can now access cutting-edge technology. Researchers in developing countries can build specialised tools for local health challenges. Medical schools can teach students using AI that actually understands medicine.
The models are designed to run on single graphics cards, with the smaller versions even adaptable for mobile devices. This accessibility opens doors for point-of-care AI applications in places where high-end computing infrastructure simply doesn’t exist.
As healthcare continues grappling with staff shortages, increasing patient loads, and the need for more efficient workflows, AI tools like Google’s MedGemma could provide some much-needed relief. Not by replacing human expertise, but by amplifying it and making it more accessible where it’s needed most.
(Photo by Owen Beard)
AI Research
Pope: AI development must build bridges of dialogue and promote fraternity
In a message to the United Nations’ AI for Good Summit taking place in Geneva, signed by the Cardinal Secretary of State Pietro Parolin, Pope Leo XIV encourages nations to create frameworks and regulations that make AI work for the common good.
By Isabella H. de Carvalho
Pope Leo XIV encouraged nations to establish frameworks and regulations on AI so that it can be developed and used according to the common good, in a message sent on July 10 to the participants of the AI for Good Summit, taking place in Geneva, Switzerland, from July 8 to 11.
“I would like to take this opportunity to encourage you to seek ethical clarity and to establish a coordinated local and global governance of AI, based on the shared recognition of the inherent dignity and fundamental freedoms of the human person”, the message, signed by the Secretary of State, Cardinal Pietro Parolin, said.
The summit is organized by the United Nations’ International Telecommunication Union (ITU) and is co-hosted by the Swiss government. The event brings together governments, tech leaders, academics, and others who work with or take an interest in AI.
In this “era of profound innovation” where many are reflecting on “what it means to be human”, the world “is at crossroads, facing the immense potential generated by the digital revolution driven by Artificial Intelligence”, the Pope highlighted in his message.
AI requires ethical management and regulatory frameworks
“As AI becomes capable of adapting autonomously to many situations by making purely technical algorithmic choices, it is crucial to consider its anthropological and ethical implications, the values at stake and the duties and regulatory frameworks required to uphold those values”, the Pope underlined in his message.
He emphasized that the “responsibility for the ethical use of AI systems begins with those who develop, manage and oversee them” but users also need to share this mission. AI “requires proper ethical management and regulatory frameworks centered on the human person, and which goes beyond the mere criteria of utility or efficiency,” the Pope insisted.
Building peaceful societies
Citing St. Augustine’s concept of the “tranquility of order”, Pope Leo highlighted that this should be the common goal and thus AI should foster “more human order of social relations” and “peaceful and just societies in the service of integral human development and the good of the human family”.
While AI can simulate human reasoning and perform tasks quickly and efficiently or transform areas such as “education, work, art, healthcare, governance, the military, and communication”, “it cannot replicate moral discernment or the ability to form genuine relationships”, Pope Leo warned.
For him, the development of this technology “must go hand in hand with respect for human and social values, the capacity to judge with a clear conscience, and growth in human responsibility”. It requires “discernment to ensure that AI is developed and utilized for the common good, building bridges of dialogue and fostering fraternity”, the Pope urged. AI needs to serve “the interests of humanity as a whole”.