The Model Context Protocol (MCP) is a standardized way to connect AI agents to tools. Instead of exposing a flat list of every tool on every request, Concierge progressively discloses only what's relevant, so the set of tools the agent sees at each step is deterministic and tool invocation follows the workflow you define.
Note
Concierge requires Python 3.9+. We recommend installing with uv for faster dependency resolution, but pip works just as well.
```shell
pip install concierge-sdk
```

Scaffold a new project:
```shell
concierge init my-store   # Generate a ready-to-run project
cd my-store               # Enter the project
python main.py            # Start the MCP server
```

Or wrap an existing MCP server in two lines; nothing else changes:
```python
# Before
from mcp.server.fastmcp import FastMCP

app = FastMCP("my-server")

# After: just wrap it
from concierge import Concierge

app = Concierge(FastMCP("my-server"))
```

Tip
Concierge works at the MCP protocol level: it dynamically changes which tools `tools/list` returns based on the current workflow step. The agent and client don't need to know Concierge exists; they just see fewer, more relevant tools at each point.
```python
from concierge import Concierge
from mcp.server.fastmcp import FastMCP

app = Concierge(FastMCP("my-server"))

# Your @app.tool() decorators stay exactly the same.
# You can additionally add app.stages and app.transitions.
```

Note
Wrapping alone gives you progressive tool disclosure immediately. Add `app.stages` and `app.transitions` when you want full workflow control; your existing tool code doesn't change.
Instead of exposing everything at once, group related tools together. Only the current step's tools are visible to the agent:
```python
app.stages = {
    "browse": ["search_products", "view_product"],
    "cart": ["add_to_cart", "remove_from_cart", "view_cart"],
    "checkout": ["apply_coupon", "complete_purchase"],
}
```

Control which steps can follow which. The agent moves forward (or backward) only along the paths you allow:
```python
app.transitions = {
    "browse": ["cart"],              # Can only move to cart
    "cart": ["browse", "checkout"],  # Can go back or proceed
    "checkout": [],                  # Terminal step
}
```

Share state between steps
Pass data between workflow steps without round-tripping through the LLM. State is session-scoped and works across distributed replicas:
```python
# In the "browse" step - save a selection
app.set_state("selected_product", {"id": "p1", "name": "Laptop"})

# In the "cart" step - retrieve it directly
product = app.get_state("selected_product")
```

Scale with semantic search
When you have hundreds of tools, enable semantic search to collapse your entire API behind two meta-tools:
```python
from concierge import Concierge, Config, ProviderType

app = Concierge("large-api", config=Config(
    provider_type=ProviderType.SEARCH,
    max_results=5,
))
```

No matter how many tools you register, the agent only ever sees:
- `search_tools(query: str)` → Find tools by description
- `call_tool(tool_name: str, args: dict)` → Execute a discovered tool
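As a mental model (not Concierge's actual ranking implementation, which uses semantic search), `search_tools` can be pictured as scoring registered tool descriptions against the query and returning the best matches:

```python
# Illustrative sketch only: a naive keyword-overlap stand-in for the
# search_tools meta-tool. The real provider matches by meaning, not keywords.
def search_tools(query: str, registry: dict[str, str], max_results: int = 5) -> list[str]:
    """Rank tool names by how many query words appear in their descriptions."""
    words = set(query.lower().split())
    scored = [
        (sum(w in desc.lower() for w in words), name)
        for name, desc in registry.items()
    ]
    # Keep only tools that matched at least one word, best matches first
    scored = [(score, name) for score, name in scored if score > 0]
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [name for _, name in scored[:max_results]]

registry = {
    "search_products": "Search the product catalog by keyword",
    "add_to_cart": "Add a product to the shopping cart",
    "complete_purchase": "Finalize checkout and charge the customer",
}
```

The agent would then pass a discovered name to `call_tool` to execute it.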
Concierge supports multiple transports. Use streamable HTTP for web deployments:
```python
# Streamable HTTP (recommended for web)
http_app = app.streamable_http_app()

# Or run over stdio (default, for CLI-based clients)
app.run()
```

Tip
Stages, transitions, state, and semantic search are all optional and independent. Use any combination; start simple and add structure as your workflow grows.
|  |  |
| --- | --- |
| **Progressive Disclosure**: Only expose the tools that matter right now. Fewer tools in context means less confusion and lower cost. | **Enforced Tool Ordering**: Define which tools unlock which. The agent follows your business logic, not its own guesses. |
| **Shared State**: Pass data between workflow steps server-side. No tool-call chaining through the LLM, no re-injecting data into prompts. | **Semantic Search**: For large APIs (100+ tools), collapse everything behind two meta-tools. The agent searches by description, then invokes. |
| **Protocol Compatible**: Wraps any MCP server. Your existing `@app.tool()` decorators, resources, and prompts work unchanged. | **Session Isolation**: Each conversation gets its own workflow state. Atomic, consistent, and works across distributed replicas. |
| **Multiple Transports**: Run over stdio, streamable HTTP, or SSE. Deploy anywhere: serverless, containers, bare metal. | **Scaffolding CLI**: `concierge init` generates a ready-to-run project with tools, stages, and transitions already wired up. |
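To build intuition for the session isolation described above, the state API can be pictured as a per-session key-value store. This is a minimal sketch with hypothetical names, not Concierge's actual implementation (which also handles atomicity and replication):

```python
from collections import defaultdict

class SessionState:
    """Hypothetical sketch: each conversation gets its own namespace,
    so concurrent sessions never observe each other's state."""

    def __init__(self):
        self._store = defaultdict(dict)  # session_id -> {key: value}

    def set_state(self, session_id: str, key: str, value):
        self._store[session_id][key] = value

    def get_state(self, session_id: str, key: str, default=None):
        return self._store[session_id].get(key, default)

state = SessionState()
state.set_state("session-a", "cart", ["p1"])
state.set_state("session-b", "cart", ["p2"])
```

Two sessions writing the same key never collide, which is what lets `get_state`/`set_state` work safely across parallel conversations.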
A complete e-commerce workflow in under 30 lines:
```python
from concierge import Concierge

app = Concierge("shopping")

@app.tool()
def search_products(query: str) -> dict:
    """Search the product catalog."""
    return {"products": [{"id": "p1", "name": "Laptop", "price": 999}]}

@app.tool()
def add_to_cart(product_id: str) -> dict:
    """Add a product to the cart."""
    cart = app.get_state("cart", [])
    cart.append(product_id)
    app.set_state("cart", cart)
    return {"cart": cart}

@app.tool()
def checkout(payment_method: str) -> dict:
    """Complete the purchase."""
    cart = app.get_state("cart", [])
    return {"order_id": "ORD-123", "items": len(cart), "status": "confirmed"}

app.stages = {
    "browse": ["search_products"],
    "cart": ["add_to_cart"],
    "checkout": ["checkout"],
}

app.transitions = {
    "browse": ["cart"],
    "cart": ["browse", "checkout"],
    "checkout": [],
}

app.run()  # Start over stdio
```

The agent starts at `browse`. It can move to `cart`, then to `checkout`. It cannot call `checkout` from `browse`. Concierge enforces this at the protocol level; no prompt engineering required.
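Conceptually, the enforcement in this example boils down to two checks: which tools are visible at the current stage, and which stage changes are legal. A simplified sketch (illustrative function names, not the SDK's internals):

```python
# Simplified sketch of protocol-level enforcement, using the same
# stages/transitions as the example above.
stages = {
    "browse": ["search_products"],
    "cart": ["add_to_cart"],
    "checkout": ["checkout"],
}
transitions = {
    "browse": ["cart"],
    "cart": ["browse", "checkout"],
    "checkout": [],
}

def visible_tools(stage: str) -> list[str]:
    """What tools/list would return at this stage."""
    return stages[stage]

def can_transition(current: str, target: str) -> bool:
    """Whether the workflow may move from `current` to `target`."""
    return target in transitions[current]
```

Because `checkout` is simply absent from `tools/list` while the session is in `browse`, the agent has no way to call it early, regardless of what the prompt says.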
Full guides, API reference, and deployment patterns are available at docs.getconcierge.app.
- Discord: Ask questions, share what you're building, get help.
- Issues: Report bugs or request features.
- Discussions: Longer form discussions and RFCs.
We are building the agentic web. Come join us.