34 open source tools compared. Sorted by stars. Scroll down for our analysis.
| Tool | Stars | Velocity | Score |
|---|---|---|---|
| cloudwatch-applicationsignals-mcp-server (MCP): An MCP (Model Context Protocol) server that provides comprehensive tools for monitoring and analyzing AWS services using AWS Application Signals | 8.8k | - | 53 |
| elasticache-mcp-server (MCP): The official MCP server for interacting with the AWS ElastiCache control plane. To interact with your data in ElastiCache Serverless caches and self-designed clusters, use the Valkey MCP Server | 8.8k | - | 53 |
| aws-appsync-mcp-server (MCP): A Model Context Protocol (MCP) server for AWS AppSync that enables AI assistants to manage and interact with backend APIs | 8.8k | - | 53 |
| | 3.6k | - | 47 |
| tsp-client (MCP): npm package definitions for `@azure-tools/typespec-client-generator-cli` (tsp-client) with pinned versions to ensure reproducible builds across environments | 3.0k | - | 47 |
| | 1.2k | - | 43 |
Connects your AI assistant to CloudWatch Application Signals, AWS's application performance monitoring layer. You can check service health, review SLO compliance, and investigate performance anomalies across your instrumented services. The MCP server is free and open source. Application Signals pricing is based on the number of monitored operations. Setup requires your applications to be instrumented with the CloudWatch agent or OpenTelemetry, which is the real lift here. Maintained by AWS Labs. Only useful if you've already adopted Application Signals for APM. If you're using Datadog, New Relic, or even plain CloudWatch metrics, this won't apply. For teams already invested in the AWS observability stack, it's a natural extension.
Connects your AI assistant to Amazon ElastiCache for Redis and Memcached. You can check cluster health, inspect node status, review replication groups, and monitor cache performance metrics. Useful for diagnosing cache hit rates and connection issues. The MCP server is free and open source. ElastiCache pricing starts around $0.017/hour for the smallest node. The server needs network access to your ElastiCache cluster, which lives inside a VPC, so connectivity setup is the main hurdle. Maintained by AWS Labs. Caching problems are notoriously hard to debug. If you're running ElastiCache and regularly troubleshoot eviction rates or replication lag, having this in your editor saves console tab-switching. VPC access requirement adds setup friction.
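Hit rate is usually the first number to check when diagnosing a cache. A minimal sketch of the arithmetic, assuming the values come from ElastiCache's standard `CacheHits` and `CacheMisses` CloudWatch metrics (the sample numbers here are made up):

```python
def cache_hit_rate(hits: int, misses: int) -> float:
    """Hit rate from the ElastiCache CacheHits / CacheMisses metrics."""
    total = hits + misses
    return hits / total if total else 0.0

# Sample values; in practice these come from CloudWatch GetMetricData.
rate = cache_hit_rate(hits=94_000, misses=6_000)
print(f"hit rate: {rate:.1%}")  # hit rate: 94.0%
```

A hit rate that drops while eviction counts climb usually points at an undersized node or a TTL problem rather than a client bug.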
Connects your AI assistant to AWS AppSync GraphQL APIs. The model can manage schemas, resolvers, data sources, and run GraphQL operations. Useful for building and debugging AppSync backends without constantly switching between console tabs. You get full AppSync management: schema updates, resolver configuration, data source setup, and query execution. Requires AWS credentials. The MCP server is free, AppSync charges per query and per real-time connection. Maintained by AWS Labs. If AppSync is your GraphQL layer, this collapses a lot of console clicking into AI-assisted commands. Solid productivity boost for teams deep in the AppSync ecosystem.
Connects your AI assistant to AWS Support, letting you create, update, and check the status of support cases. You can also pull Trusted Advisor recommendations and browse support case history. The MCP server is free and open source. AWS Support access depends on your support plan. Developer plan starts at $29/month, Business at $100/month. Without a paid plan, most API calls will fail. Setup is just AWS credentials. Maintained by AWS Labs. Useful if you file support cases regularly and want to track them from your editor. The Trusted Advisor integration is a nice bonus. If you're on the free Basic support plan, this server won't do much for you.
Valkey MCP connects your AI assistant to Amazon ElastiCache running Valkey, the open source Redis fork. Get, set, and inspect keys. Run commands against your cache cluster. Setup is your cluster endpoint plus AWS credentials. ElastiCache bills per node-hour. The MCP is free. AWS Labs maintains it as the successor to their Redis MCP. Practical for debugging cache state during development. Beats writing throwaway redis-cli scripts. The catch: Valkey is Redis-compatible but not Redis. Some Redis modules and extensions are not available yet.
Connects your AI assistant to the AWS Pricing API so you can look up on-demand pricing for any AWS service. Ask about EC2 instance costs, compare RDS pricing across regions, or estimate what a Lambda workload will run you. The MCP server is free and open source. The Pricing API itself is free to call. Setup is minimal, just AWS credentials. This is one of the simplest MCP servers in the AWS Labs collection. Maintained by AWS Labs. Genuinely useful for anyone building on AWS who regularly checks pricing. Beats opening the pricing calculator in a browser tab. Quick to install, no ongoing maintenance.
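For a feel of the kind of answer you would ask this server for, here is a back-of-the-envelope Lambda cost estimate in plain Python, using the published x86 on-demand prices at the time of writing ($0.20 per million requests, $0.0000166667 per GB-second) and ignoring the free tier; the workload numbers are invented:

```python
PRICE_PER_MILLION_REQUESTS = 0.20   # USD, x86, on-demand
PRICE_PER_GB_SECOND = 0.0000166667  # USD, x86, on-demand

def lambda_monthly_cost(requests: int, avg_ms: float, memory_mb: int) -> float:
    """Gross monthly Lambda cost, free tier not subtracted."""
    gb_seconds = requests * (avg_ms / 1000) * (memory_mb / 1024)
    compute = gb_seconds * PRICE_PER_GB_SECOND
    invocations = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    return compute + invocations

# 5M requests/month at 200 ms average on 128 MB functions
print(f"${lambda_monthly_cost(5_000_000, 200, 128):.2f}")  # $3.08
```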
Step Functions MCP lets your AI assistant start and inspect AWS Step Functions workflows. Trigger state machines, check execution status, and review outputs. Config is AWS credentials in your MCP setup. Step Functions bills per state transition. The free tier covers 4,000 transitions per month. The MCP itself is free. AWS Labs maintains it. Useful for teams with complex orchestration workflows who want to test and debug without the console. The catch: starting workflows from your AI assistant with no confirmation step is risky. Scope your IAM to non-production state machines.
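Under the hood this maps onto the Step Functions `StartExecution` and `DescribeExecution` APIs. A sketch of the request such a tool would issue, with a hypothetical state machine ARN and payload:

```python
import json

# Hypothetical state machine ARN; substitute your own.
STATE_MACHINE_ARN = (
    "arn:aws:states:us-east-1:123456789012:stateMachine:order-pipeline"
)

def build_start_execution(payload: dict) -> dict:
    """Parameters for boto3 stepfunctions.start_execution(**params)."""
    return {
        "stateMachineArn": STATE_MACHINE_ARN,
        "input": json.dumps(payload),  # the input must be a JSON string
    }

params = build_start_execution({"orderId": "o-1234"})
# sfn = boto3.client("stepfunctions")
# execution = sfn.start_execution(**params)
# sfn.describe_execution(executionArn=execution["executionArn"])
```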
Lambda Tool MCP lets your AI assistant invoke AWS Lambda functions by name. Pick a function, pass a payload, get the response. Config is AWS credentials plus an optional function filter. Lambda pricing is pay-per-invocation, but the free tier covers a million requests per month. The MCP itself costs nothing. AWS Labs maintains it. Great for testing and debugging Lambda functions without leaving your editor. The catch: no guardrails on which functions get invoked, so scope your IAM permissions tightly or you will have a bad day.
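One way to scope that access is an IAM policy that only allows invoking functions matching a naming prefix. A sketch, where the account ID and the `dev-` prefix are hypothetical:

```python
import json

# Allow invoking only functions whose names start with "dev-".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:dev-*",
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Attach something like this to the credentials the MCP server uses, and a misfired tool call can only reach development functions.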
Connects your AI assistant to AWS IoT SiteWise, the service that ingests and organizes industrial sensor data. You can query asset models, pull real-time measurements, and browse your equipment hierarchy without leaving your editor. The MCP server is free and open source. IoT SiteWise itself bills per metric ingested and stored, so costs scale with your sensor fleet. Setup requires active SiteWise assets and proper IAM permissions, which puts this firmly in "you already know if you need it" territory. Maintained by AWS Labs. If you're running an industrial IoT stack on AWS and want to debug sensor data from your AI assistant, this is a clean integration. Everyone else can skip it.
Connects your AI assistant to CloudWatch metrics, logs, and alarms. You can query log groups, pull metric data, check alarm states, and search through log streams. This is the workhorse of AWS observability, now accessible conversationally. The MCP server is free and open source. CloudWatch charges per metric, per log GB ingested, and per query. Most AWS accounts already generate CloudWatch data, so there's no new cost from the MCP server itself. Setup is just AWS credentials. Maintained by AWS Labs. One of the most useful MCP servers in the entire collection. Every AWS developer checks CloudWatch logs. Querying them by describing what you're looking for instead of writing CloudWatch Insights syntax is a real time saver.
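To make that concrete, a request like "show me recent errors" roughly translates into a CloudWatch Logs Insights query via the `StartQuery` API. A sketch, with a hypothetical log group name:

```python
import time

# CloudWatch Logs Insights query: recent ERROR lines, newest first.
QUERY = """fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 20"""

params = {
    "logGroupName": "/aws/lambda/checkout",  # hypothetical log group
    "startTime": int(time.time()) - 3600,    # last hour, epoch seconds
    "endTime": int(time.time()),
    "queryString": QUERY,
}
# logs = boto3.client("logs")
# query_id = logs.start_query(**params)["queryId"]
# ...then poll logs.get_query_results(queryId=query_id) until complete
```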
HealthLake MCP connects your AI assistant to AWS HealthLake, Amazon's FHIR-compliant health data store. You get read and write access to FHIR resources (patients, observations, conditions) directly from your editor. Setup requires AWS credentials and a running HealthLake data store, which is not trivial. HealthLake itself bills per request and per GB stored. Not cheap for experimentation. The MCP just wraps the API, so you still need the underlying service provisioned and configured. Worth it if you are already deep in HealthLake and want faster iteration on FHIR queries. Skip it if you are not in healthcare. The catch: HealthLake is one of AWS's pricier services, so testing against real data adds up fast.
Gives your AI assistant broad access to AWS services through CLI commands. Instead of service-specific MCP servers, this one wraps the AWS CLI itself, letting the model interact with virtually any AWS service through a single integration. You get AWS CLI command execution with built-in safety features and IAM-scoped access. Requires AWS credentials with appropriate permissions. The MCP server is free, underlying AWS service usage costs apply. Maintained by AWS Labs. This is the Swiss Army knife approach to AWS MCP integration. If you need coverage across many AWS services and do not want ten separate MCP servers, start here. The tradeoff is less depth per service.
Connects your AI assistant to Amazon Q Business in anonymous mode. The model can query your Q Business application without user authentication, making it useful for internal knowledge retrieval and document search across your organization's indexed content. You get Q Business chat and query capabilities in anonymous access mode. Requires AWS credentials and an existing Q Business application. The MCP server is free, Q Business pricing is usage-based starting at $3/user/month. Maintained by AWS Labs. Useful if your org already runs Q Business and you want to pipe its knowledge into your AI workflow. Niche but practical for existing AWS enterprise setups.
Connects your AI assistant to AWS CloudFormation and CDK for infrastructure as code. The model can generate templates, troubleshoot deployments, analyze stack drift, and help debug failed stacks. Turns infrastructure debugging from painful to conversational. You get CloudFormation template generation, stack analysis, drift detection, and CDK assistance. Requires AWS credentials. The MCP server is free, CloudFormation itself is free (you pay for the resources it provisions). Maintained by AWS Labs. If you write CloudFormation or CDK, this is genuinely useful. Infrastructure debugging with AI context beats staring at nested stack events alone. Strong recommendation for IaC-heavy teams.
This server gives your AI assistant direct access to AWS Identity and Access Management. Query policies, roles, and users without switching to the console. Setup is just AWS credentials in your MCP config. IAM itself is free. No cost for the service or the MCP. Maintained by AWS Labs, so it tracks API changes closely. Useful for anyone managing AWS permissions. Debugging "why can't this role do X" becomes a conversation instead of a console scavenger hunt. The catch: you are giving your AI assistant read access to your IAM config. Make sure your credentials are scoped appropriately.
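The "why can't this role do X" question maps onto IAM's policy simulator. A sketch of the underlying call, with a hypothetical role ARN and bucket:

```python
# Parameters for boto3 iam.simulate_principal_policy(**params): ask IAM
# whether a given principal is allowed to perform specific actions.
params = {
    "PolicySourceArn": "arn:aws:iam::123456789012:role/app-worker",  # hypothetical
    "ActionNames": ["s3:GetObject"],
    "ResourceArns": ["arn:aws:s3:::my-bucket/*"],                    # hypothetical
}
# iam = boto3.client("iam")
# for result in iam.simulate_principal_policy(**params)["EvaluationResults"]:
#     print(result["EvalActionName"], result["EvalDecision"])
```

The decision comes back as `allowed`, `implicitDeny`, or `explicitDeny`, which is exactly the distinction you need when debugging a permissions failure.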
Connects your AI assistant to AWS Location Service for maps, geocoding, route calculation, and place search. You can convert addresses to coordinates, calculate driving distances, and search for nearby points of interest directly from your editor. The MCP server is free and open source. AWS Location Service has a generous free tier (up to 100K geocoding requests/month), then pay-per-request pricing after that. Setup is straightforward if you already have AWS credentials configured. Maintained by AWS Labs. Solid choice if you're building location-aware features on AWS and want to prototype geocoding or routing queries conversationally. If you're already using Google Maps or Mapbox, this won't pull you away.
Connects your AI assistant to AWS CloudTrail, giving you access to API activity logs across your AWS account. You can search for specific API calls, investigate who changed a resource, and audit access patterns. Essential for security investigations and debugging "who did what." The MCP server is free and open source. CloudTrail's first management trail is free. Data event logging and CloudTrail Lake queries have per-event costs. Setup is standard AWS credentials with CloudTrail read permissions. Maintained by AWS Labs. Excellent for incident response and security audits. When something breaks in production and you need to know which API call caused it, asking your AI assistant beats scrolling through the CloudTrail console.
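The "who changed this" question maps onto CloudTrail's `LookupEvents` API. A sketch of the call, using `PutBucketPolicy` as a hypothetical event of interest:

```python
from datetime import datetime, timedelta, timezone

# Parameters for boto3 cloudtrail.lookup_events(**params): who called
# a specific API in the last 24 hours.
params = {
    "LookupAttributes": [
        {"AttributeKey": "EventName", "AttributeValue": "PutBucketPolicy"}
    ],
    "StartTime": datetime.now(timezone.utc) - timedelta(hours=24),
    "EndTime": datetime.now(timezone.utc),
}
# ct = boto3.client("cloudtrail")
# for event in ct.lookup_events(**params)["Events"]:
#     print(event["EventTime"], event["Username"], event["EventName"])
```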
Connects your AI assistant to Amazon CodeCatalyst, AWS's dev platform for managing projects, source repos, issues, and CI/CD workflows. You can browse projects, check build status, and manage issues without opening the CodeCatalyst console. The MCP server is free and open source. CodeCatalyst has a free tier for individuals and charges per active user for teams. Setup requires a CodeCatalyst personal access token and your space name. Maintained by AWS Labs. Only relevant if your team uses CodeCatalyst. Most AWS developers still use GitHub or GitLab, which makes this niche. If you are on CodeCatalyst though, it's a clean integration for managing your workflow from the editor.
Well-Architected Security MCP gives your AI assistant access to AWS's security review framework. Run security posture checks, review configurations against best practices, and surface risks. Setup is AWS credentials with appropriate permissions. The underlying Well-Architected Tool is free. The MCP is free. AWS Labs maintains it. Handy for getting a quick security gut-check on your AWS setup without opening the console. The catch: it reviews against AWS best practices, not your specific threat model. Passing does not mean secure. It means you follow the checklist.
Connects your AI assistant to AWS HealthOmics for genomics and bioinformatics workflows. The model can manage reference stores, sequence stores, annotation stores, and run genomics workflows. Highly specialized for life sciences teams. You get workflow management, data store operations, and run monitoring across HealthOmics services. Requires AWS credentials with HealthOmics permissions. The MCP server is free, HealthOmics pricing is usage-based. Maintained by AWS Labs. This is extremely niche. If you are doing genomics work on AWS, it is a welcome addition. For everyone else, it is irrelevant. Know your audience before installing.
Prometheus MCP lets your AI assistant query AWS Managed Prometheus metrics using PromQL. Ask questions about your infrastructure and get real data back. Setup needs your Prometheus endpoint and AWS credentials. AWS Managed Prometheus bills on ingested samples and queries. The MCP is free. Maintained by AWS Labs. Having your AI assistant understand your metrics is powerful for debugging and capacity planning. The catch: you need to know PromQL or trust your AI to write correct queries. Bad queries against production metrics can be slow and expensive.
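The queries themselves are ordinary PromQL strings sent to the workspace's query endpoint with SigV4-signed requests. A typical example, where the `http_requests_total` metric and `job` label are assumptions about your instrumentation, not anything this server provides:

```python
# PromQL: per-status request rate over the last 5 minutes.
query = 'sum(rate(http_requests_total{job="api"}[5m])) by (status)'

# Sent as the "query" parameter to the AMP workspace's
# /api/v1/query endpoint, with SigV4 authentication.
print(query)
```

Range selectors like `[5m]` are where queries quietly get expensive: widening the window multiplies the samples scanned.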
Gives your AI assistant searchable access to AWS documentation. Instead of the model relying on potentially outdated training data, it can pull current AWS docs on demand. Simple concept, but it solves a real accuracy problem. You get documentation search and retrieval across AWS services. No AWS credentials needed for the docs themselves. The MCP server is free, no underlying costs. Maintained by AWS Labs. If you use AWS at all and work with AI assistants, this is an easy install. The model gives better AWS answers when it can check the actual docs instead of guessing from training data. No reason to skip this one.
Connects your AI assistant to AWS HealthImaging, the service for storing, accessing, and analyzing medical images (DICOM format). You can search image sets, retrieve metadata, and browse imaging datastores from your editor. The MCP server is free and open source. HealthImaging charges per GB stored and per retrieval request. Setup requires an active HealthImaging datastore and IAM permissions, plus HIPAA compliance considerations for any production medical data. Maintained by AWS Labs. Extremely niche. This is for healthcare developers building on AWS HealthImaging who want AI-assisted exploration of their medical imaging pipeline. If you're in that space, it's a clean integration. Everyone else, skip it.
Connects your AI assistant to AWS Lambda and API Gateway. You can deploy functions, update configurations, manage API routes, and check invocation logs. Covers the core serverless workflow from code to deployment. The MCP server is free and open source. Lambda and API Gateway bill per invocation, with generous free tiers (1M Lambda requests/month free). Setup requires IAM permissions scoped to Lambda and API Gateway, and you should be careful with write operations in production accounts. Maintained by AWS Labs. Strong fit if you're iterating on serverless functions and want to deploy or debug without switching to the console. The write capabilities mean you should scope IAM tightly. Read-only users can relax.
Connects your AI assistant to Amazon Kendra, AWS's enterprise search service. Query indexed documents, get ranked results, and retrieve answers from your corporate knowledge base without leaving your editor. Setup requires an existing Kendra index and proper IAM credentials. Not trivial if you're starting from scratch, but straightforward if your org already runs Kendra. The MCP server is free. Kendra itself starts at $810/month for the developer edition, so this is firmly enterprise territory. Maintained by AWS Labs. Worth installing if your team already pays for Kendra. Skip it otherwise, the underlying service cost makes this a non-starter for solo developers.
Connects your AI assistant to AWS VPC networking. You can inspect VPCs, subnets, security groups, route tables, and network ACLs. Useful for debugging connectivity issues or auditing your network configuration without clicking through the console. The MCP server is free and open source. VPC resources themselves have no hourly cost (NAT Gateways and some endpoints do). Setup just requires AWS credentials with the right IAM permissions, which most AWS developers already have configured. Maintained by AWS Labs. If you spend time troubleshooting security group rules or tracing packet paths through your VPC, having this in your AI assistant is genuinely useful. Pure read operations, low risk.
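A typical audit question is "which security groups allow SSH from anywhere?" The underlying EC2 call looks roughly like this, using the standard `ip-permission.*` filter names:

```python
# Parameters for boto3 ec2.describe_security_groups(**params):
# security groups with an inbound rule for port 22 open to 0.0.0.0/0.
params = {
    "Filters": [
        {"Name": "ip-permission.from-port", "Values": ["22"]},
        {"Name": "ip-permission.cidr", "Values": ["0.0.0.0/0"]},
    ]
}
# ec2 = boto3.client("ec2")
# for sg in ec2.describe_security_groups(**params)["SecurityGroups"]:
#     print(sg["GroupId"], sg["GroupName"])
```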
Connects your AI assistant to AWS Cost Explorer and Billing. You can query spending by service, check cost trends over time, review savings plans utilization, and get budget alerts. All the cost data you normally dig through in the console, available conversationally. The MCP server is free and open source. Cost Explorer API charges $0.01 per request, which is negligible. Setup requires AWS credentials with billing read access, which needs to be explicitly enabled for IAM users in the management account. Maintained by AWS Labs. This is one of the most practically useful MCP servers in the collection. Every AWS team asks "what are we spending?" regularly. Having that answer one question away is worth the two-minute setup.
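The "what are we spending?" question maps onto Cost Explorer's `GetCostAndUsage` API. A sketch of the request for last month's spend broken down by service (the dates are examples):

```python
# Parameters for boto3 ce.get_cost_and_usage(**params).
params = {
    "TimePeriod": {"Start": "2024-01-01", "End": "2024-02-01"},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "GroupBy": [{"Type": "DIMENSION", "Key": "SERVICE"}],
}
# ce = boto3.client("ce")
# result = ce.get_cost_and_usage(**params)
# for group in result["ResultsByTime"][0]["Groups"]:
#     print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```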
Connects your AI assistant to Amazon Q Index for enterprise search. The model can retrieve and search across documents indexed by Q Business, pulling structured results from your organization's knowledge base. You get document retrieval and search against your Q Index data sources. Requires AWS credentials and an existing Q Business index. The MCP server is free, underlying Q Business pricing applies. Maintained by AWS Labs. This is the retrieval-focused counterpart to the Q Business chat server. If you need your AI to search enterprise documents rather than chat with them, this is the right tool. Very specific to AWS enterprise stacks.
Connects your AI assistant to AWS Glue and Athena for data processing. The model can build ETL jobs, run SQL queries against S3 data, manage crawlers, and monitor pipeline execution. Turns your AI into a data engineering co-pilot. You get Glue job management, Athena query execution, crawler configuration, and pipeline monitoring. Requires AWS credentials. The MCP server is free, Glue and Athena charge based on compute time and data scanned. Maintained by AWS Labs. If your data stack runs on Glue and Athena, this is a significant productivity multiplier. Writing Glue jobs and Athena queries with AI assistance beats the console experience by a wide margin.
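Athena queries run through the `StartQueryExecution` API and land results in S3. A sketch of the request, where the database, table, and output bucket are hypothetical:

```python
# Parameters for boto3 athena.start_query_execution(**params).
params = {
    "QueryString": (
        "SELECT status, COUNT(*) AS n FROM access_logs "
        "GROUP BY status ORDER BY n DESC LIMIT 10"
    ),
    "QueryExecutionContext": {"Database": "weblogs"},        # hypothetical
    "ResultConfiguration": {"OutputLocation": "s3://my-athena-results/"},
}
# athena = boto3.client("athena")
# query_id = athena.start_query_execution(**params)["QueryExecutionId"]
# ...poll athena.get_query_execution(QueryExecutionId=query_id), then
# fetch rows with athena.get_query_results(QueryExecutionId=query_id)
```

Since Athena bills per TB scanned, partitioning and columnar formats (Parquet) matter more to your bill than query complexity.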
Connects your AI assistant to Amazon Bedrock's custom model import workflow. The model can help you bring fine-tuned or custom models into Bedrock, managing the import process, checking compatibility, and monitoring status. You get model import management, format validation, and import status tracking. Requires AWS credentials with Bedrock permissions. The MCP server is free, Bedrock custom model hosting has its own pricing based on compute. Maintained by AWS Labs. Very niche, only relevant if you are importing custom models into Bedrock. If that is your workflow, this removes a lot of manual console work. Otherwise, skip it.
This handler flips the script. Instead of calling Lambda from MCP, it lets you run an MCP server as a Lambda function. Your AI tools become serverless endpoints. Setup requires packaging your MCP server as a Lambda deployment. Lambda's free tier is generous, but cold starts add latency to tool calls. This is a library, not a standalone server, so you need to write the glue code yourself. Interesting pattern for teams that want centralized, shared MCP tools without running persistent servers. The catch: cold starts make interactive tool use sluggish, and debugging Lambda-hosted MCP servers is harder than local ones.
Cloudflare's official MCP server lets your AI assistant manage Workers, KV stores, R2 buckets, and D1 databases directly. Deploy code, read storage, query databases, all through your assistant instead of the dashboard. Maintained by Cloudflare. Setup requires your Cloudflare API token and account ID. Cloudflare's free tier is generous: Workers, KV, R2, and D1 all have meaningful free allocations, so you can use this without spending anything. Install this if you build on Cloudflare. Managing Workers and storage through your assistant is faster than the dashboard for most operations. The catch: the MCP covers core services but not every Cloudflare product. DNS, Pages, and some newer features may require the dashboard.
Wraps the TypeSpec client generator CLI as an MCP server. Generate client SDKs from TypeSpec API definitions, manage code generation configuration, and update generated clients through AI conversation. This is specialized tooling for teams using TypeSpec (formerly Cadl) to define APIs. You need familiarity with TypeSpec and the Azure SDK ecosystem to get value here. Maintained by Microsoft. Only useful if your team generates client libraries from TypeSpec definitions. A niche tool for a niche workflow, but it does that workflow well.
Azure's VS Code MCP integration. Brings Azure service management directly into your editor, letting your AI assistant provision resources, inspect deployments, and troubleshoot Azure infrastructure without leaving VS Code. Requires Azure credentials and appropriate role assignments. The MCP server is free. Azure services have their own pricing across hundreds of SKUs. Maintained by Microsoft's Azure team. Worth installing if you develop and deploy on Azure using VS Code. Pairs well with the broader Azure MCP ecosystem for full cloud management from your editor.