A minimal example that combines an OpenCV MCP tool server with an AutoGen assistant agent to perform basic dermatology lesion image analysis (segmentation + measurements) and optional interactive Q&A.
Note that segmentation and especially measurement performance is currently quite poor, but this is intended as a simple starting point for building image-centric agent workflows.
Example input image:
What it does:
- Launches the `opencv-mcp-server` (via `uv run`) and dynamically discovers its tools.
- Sends an initial task to segment the lesion in the provided image (largest connected component of the inverted threshold result). The OpenCV tools seem to save intermediate and result JPGs in `data/HAM10000_images_part_1` and sometimes in the root folder.
- Reports key quantitative metrics: center position, pixel area, major/minor diameters, and mean gray value (0–1 scale).
- (Optional) Enters an interactive loop so you can ask follow-up questions about the image or derived data.
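The segmentation strategy named in the initial task (largest connected component of the inverted threshold result) can be sketched in pure Python on a toy grayscale grid. This is only an illustration of the idea; the real workflow delegates these steps to the OpenCV MCP tools, and the major/minor diameter estimates are omitted here:

```python
from collections import deque

def largest_component(gray, thresh=128):
    """Largest 4-connected component of the inverted threshold mask."""
    h, w = len(gray), len(gray[0])
    fg = [[gray[y][x] < thresh for x in range(w)] for y in range(h)]  # invert: dark = lesion
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if fg[sy][sx] and not seen[sy][sx]:
                comp, queue = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:  # breadth-first flood fill
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if 0 <= ny < h and 0 <= nx < w and fg[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    return best

def lesion_metrics(gray, comp):
    """Center position, pixel area, and mean gray value (0-1 scale)."""
    area = len(comp)
    cy = sum(y for y, _ in comp) / area
    cx = sum(x for _, x in comp) / area
    mean_gray = sum(gray[y][x] for y, x in comp) / area / 255.0
    return {"center": (cx, cy), "area_px": area, "mean_gray": mean_gray}
```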
Prerequisites:
- Python (compatible with `uv` – typically 3.11+)
- `uv` installed (https://github.com/astral-sh/uv)
- An OpenAI or Azure OpenAI API key set as one of `OPENAI_API_KEY` or `AZURE_OPENAI_API_KEY` (loaded via `.env` if present), or access to a local Ollama server with a suitable model (changed in `load_model_config()`)
- Packages specified in the project `pyproject.toml` / `uv.lock`
- `opencv-mcp-server` (fetched automatically by `uv run opencv-mcp-server`)
- Probably: a patched FastMCP. Due to a version incompatibility (opencv-mcp-server 0.1.1 with the latest FastMCP), you currently need to patch `mcp.server.fastmcp.server.FastMCP.__init__` to add a description parameter (`description: str | None = None`) so that the OpenCV MCP server works.
Usage:

```shell
# From the opencv_example directory
uv run main.py
```

Add `--interactive` to enter chat mode after the initial analysis. On Windows PowerShell you can also run:

```shell
uv run main.py --interactive
```

(The default image path is already set; pass a different one with `--image <path>`.)
Output:
- Console logs list the available MCP tools and the generated analysis messages.
- A segmentation visualization JPG is saved alongside the script (the name includes a timestamp and descriptors).
Customization:
- Use a different model by modifying the `load_model_config()` function in `main.py`.
- Modify the initial system prompt (`system_message`) in `main.py` to change assistant behavior.
- Adjust the segmentation strategy by editing the initial task string (`initial_task`).
- Limit or expand tool iterations via `max_tool_iterations` in the `AssistantAgent` constructor.
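The model selection implied by the prerequisites could take a shape like the following. This is a hypothetical sketch of `load_model_config()`, not the actual code in `main.py`; the returned keys and the Ollama URL are assumptions:

```python
import os

def load_model_config(env=os.environ):
    """Pick a model backend from available credentials (illustrative only)."""
    if env.get("AZURE_OPENAI_API_KEY"):
        return {"provider": "azure"}
    if env.get("OPENAI_API_KEY"):
        return {"provider": "openai"}
    # Fall back to a local Ollama server with a suitable model.
    return {"provider": "ollama", "base_url": "http://localhost:11434"}
```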
Troubleshooting:
- API key error: ensure `.env` contains `OPENAI_API_KEY=` (or the Azure equivalents) or export the key in your shell.
- File not found: verify the `--image` path (relative to the repo root) or supply an absolute path.
- Tool discovery issues: confirm `uv` is installed and on PATH; try running `uv run opencv-mcp-server --help` manually.
