A Model Context Protocol (MCP) server for Amazon Athena that provides tools for executing SQL queries and managing Athena resources.
This project is inspired by aws-dataprocessing-mcp-server and adapts its code into a dedicated Athena MCP server with support for the Streamable HTTP transport type.
- Execute and manage Athena SQL queries
- Create and manage named queries
- Manage Athena workgroups
- Interact with Athena data catalogs
- Support for both stdio and Streamable HTTP transport types
athena-mcp-server/
├── main.py # Main entry point
├── pyproject.toml # Project configuration
├── README.md # Documentation
└── athena_mcp_server/
├── server.py # MCP server implementation
├── handlers/
│ ├── __init__.py
│ ├── athena_query_handler.py # Query execution and named queries
│ ├── athena_data_catalog_handler.py # Data catalog operations
│ └── athena_workgroup_handler.py # Workgroup operations
├── models/
│ ├── __init__.py
│ └── athena_models.py # Response models
└── utils/
├── __init__.py
├── aws_helper.py # AWS client utilities
└── logging_helper.py # Logging utilities
# Clone the repository
git clone https://github.com/yourusername/athena-mcp-server.git
cd athena-mcp-server
# Install dependencies using uv
uv sync
# Start the server in read-only mode (default)
uv run main.py
# Start the server with write access
uv run main.py --allow-write
# Start the server with Streamable HTTP transport (default is stdio)
uv run main.py --transport http
# Start the server with custom host and port
uv run main.py --host 127.0.0.1 --port 8080
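Once the server is running with the Streamable HTTP transport, any MCP client can connect to it. Below is a minimal sketch using the MCP Python SDK, assuming the server is reachable at the default /mcp path on the host and port shown above; exact imports and the endpoint path may vary with the SDK and server version.

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main():
    # Connect to a server started with: uv run main.py --transport http --host 127.0.0.1 --port 8080
    async with streamablehttp_client('http://127.0.0.1:8080/mcp') as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])


asyncio.run(main())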
- AWS_REGION: AWS region to use for AWS API calls
- AWS_PROFILE: AWS profile to use for credentials
- FASTMCP_LOG_LEVEL: Log level (default: WARNING)
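As an illustration of how these variables typically feed into AWS client creation (a sketch only; the actual utils/aws_helper.py may differ), a boto3 session can be built from them like this:

import os

import boto3


def get_athena_client():
    # Resolve profile and region from the environment; boto3 falls back to its
    # default credential and region resolution chain when these are unset.
    session = boto3.Session(
        profile_name=os.environ.get('AWS_PROFILE'),
        region_name=os.environ.get('AWS_REGION'),
    )
    return session.client('athena')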
- manage_aws_athena_query_executions: Execute and manage Athena SQL queries
  - Operations: batch-get-query-execution, get-query-execution, get-query-results, get-query-runtime-statistics, list-query-executions, start-query-execution, stop-query-execution
- manage_aws_athena_named_queries: Manage saved SQL queries in Athena
  - Operations: batch-get-named-query, create-named-query, delete-named-query, get-named-query, list-named-queries, update-named-query
- manage_aws_athena_data_catalogs: Manage Athena data catalogs
  - Operations: create-data-catalog, delete-data-catalog, get-data-catalog, list-data-catalogs, update-data-catalog
- manage_aws_athena_workgroups: Manage Athena workgroups
  - Operations: create-work-group, delete-work-group, get-work-group, list-work-groups, update-work-group
# Start a new query
response = await manage_aws_athena_query_executions(
operation='start-query-execution',
query_string='SELECT * FROM my_database.my_table LIMIT 10',
query_execution_context={'Database': 'my_database', 'Catalog': 'my_catalog'},
work_group='primary',
)
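start-query-execution returns before the query finishes, so results are normally fetched only after the execution reaches a terminal state. The polling loop below is a sketch; the query_execution field mirrors Athena's GetQueryExecution response and is an assumption about this server's response model.

import asyncio

# Poll until the query reaches a terminal state
while True:
    status = await manage_aws_athena_query_executions(
        operation='get-query-execution',
        query_execution_id=response.query_execution_id,
    )
    state = status.query_execution['Status']['State']
    if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
        break
    await asyncio.sleep(1)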
# Get the query results
results = await manage_aws_athena_query_executions(
operation='get-query-results', query_execution_id=response.query_execution_id
)
# Create a named query
create_response = await manage_aws_athena_named_queries(
operation='create-named-query',
name='Daily Active Users',
description='Query to calculate daily active users',
database='analytics',
query_string='SELECT date, COUNT(DISTINCT user_id) AS active_users FROM user_events GROUP BY date ORDER BY date DESC',
work_group='primary',
)
# Later, retrieve the named query
query = await manage_aws_athena_named_queries(
operation='get-named-query', named_query_id=create_response.named_query_id
)
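Listing works the same way; for example (the work_group filter is an assumption, mirroring Athena's ListNamedQueries API):

# List named queries available in the workgroup
named_queries = await manage_aws_athena_named_queries(
    operation='list-named-queries', work_group='primary'
)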
# Create a workgroup with cost controls
create_response = await manage_aws_athena_workgroups(
operation='create-work-group',
work_group_name='data-science-team',
description='Workgroup for data science team',
configuration={
'ResultConfiguration': {'OutputLocation': 's3://my-bucket/athena-results/'},
'EnforceWorkGroupConfiguration': True,
'BytesScannedCutoffPerQuery': 10737418240, # 10GB
'PublishCloudWatchMetricsEnabled': True,
},
)
# List all workgroups
workgroups = await manage_aws_athena_workgroups(operation='list-work-groups')
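No example is shown above for manage_aws_athena_data_catalogs; the sketch below follows the same pattern, with parameter names assumed from Athena's CreateDataCatalog API (adjust them to the tool's actual schema).

# Register an AWS Glue Data Catalog (e.g., one shared from another account)
create_catalog = await manage_aws_athena_data_catalogs(
    operation='create-data-catalog',
    name='my_glue_catalog',
    type='GLUE',
    description='Cross-account Glue Data Catalog',
    parameters={'catalog-id': '123456789012'},
)

# List all registered data catalogs
catalogs = await manage_aws_athena_data_catalogs(operation='list-data-catalogs')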
This project is licensed under the MIT License - see the LICENSE file for details.