Model Context Protocol Server for Uyuni Server API.
The Uyuni MCP Server is a Model Context Protocol (MCP) server implementation that bridges the gap between Large Language Models (LLMs) and the Uyuni systems management solution.
This project allows AI agents (such as Claude Desktop or other MCP-compliant clients) to securely interact with your Uyuni server. By exposing Uyuni's API as standardized MCP tools, it enables users to manage their Linux infrastructure using natural language commands. Instead of navigating the web UI or writing complex API scripts, you can simply ask your AI assistant to perform tasks like auditing systems, checking for updates, or scheduling maintenance.
**Key Capabilities**
This server exposes a suite of tools that allow LLMs to:
* **Inspect Infrastructure:** Retrieve lists of active systems, check CPU usage, and view system details.
* **Manage Updates:** Identify systems with pending security updates or CVEs and schedule patch applications.
* **Execute Actions:** Schedule reboots.
It is designed to be run as a Docker container or locally, offering a streamlined way to integrate AI-driven automation into your system administration workflows.
## Tool List
* `get_list_of_active_systems`
* `get_cpu_of_a_system`
There are two main ways to run the `mcp-server-uyuni`: using the pre-built Docker container or running it locally with `uv`. Both methods require a `config` file.
### 1\. Configuration
Create a file (e.g., `uyuni-config.env`) to store your environment variables. You can place this file anywhere, but you must reference its path when running the server.
```bash
# Required: Basic server parameters.
UYUNI_SERVER=192.168.1.124:8443
UYUNI_USER=admin
```

Replace the values with your Uyuni server details. **This file contains sensitive credentials.**
Alternatively, you can also set environment variables instead of using a file.
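As a sketch of the environment-variable approach (the variable names come from the config example above; `-e VAR` with no value simply forwards the variable from your shell into the container — add any further variables, such as credentials, the same way):

```shell
# Export the same variables the config file would define.
export UYUNI_SERVER=192.168.1.124:8443
export UYUNI_USER=admin

# Forward them into the container instead of using --env-file.
docker run -i --rm \
  -e UYUNI_SERVER -e UYUNI_USER \
  ghcr.io/uyuni-project/mcp-server-uyuni:latest
```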
### 2\. Running as a Container (Recommended)
The easiest way to run the server is using the pre-built Docker image. This method isolates the environment and requires no local dependencies other than Docker.
Pre-built container images are available on the GitHub Container Registry.
**Command:**
```bash
docker run -i --rm --env-file /path/to/uyuni-config.env ghcr.io/uyuni-project/mcp-server-uyuni:latest
```
* **`-i`**: Keeps STDIN open (required for MCP communication).
* **`--rm`**: Removes the container after it exits.
* **`--env-file`**: Points to the configuration file you created in step 1.
### 3\. Running Locally with `uv`
If you are developing or prefer running Python directly, you can use `uv`.
Run the server:
```bash
uv run --env-file /path/to/uyuni-config.env --directory . mcp-server-uyuni
```
### 4\. Client Configuration Examples
MCP servers are rarely run manually; they are usually configured within an MCP Client (like Claude Desktop). Below are examples of how to configure your client to use `mcp-server-uyuni`.
#### Claude Desktop Configuration
Add the following to your `claude_desktop_config.json`:
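The exact JSON from the original document is not preserved here; the entry below is a minimal sketch assuming the standard Claude Desktop `mcpServers` schema. The server key `uyuni` is illustrative, and the command mirrors the `docker run` invocation shown in step 2:

```json
{
  "mcpServers": {
    "uyuni": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "--env-file", "/path/to/uyuni-config.env",
        "ghcr.io/uyuni-project/mcp-server-uyuni:latest"
      ]
    }
  }
}
```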
**Local `uv` Method:**
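A sketch assuming the same `mcpServers` schema, based on the `uv run` command from step 3; note that in a client config the `--directory` flag needs the absolute path to your checkout rather than `.` (the path below is a placeholder):

```json
{
  "mcpServers": {
    "uyuni": {
      "command": "uv",
      "args": [
        "run",
        "--env-file", "/path/to/uyuni-config.env",
        "--directory", "/path/to/mcp-server-uyuni",
        "mcp-server-uyuni"
      ]
    }
  }
}
```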
## Feedback
Thanks in advance from the uyuni team!
## License
This project is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for details.
## Disclaimer
This is an open-source project provided "AS IS" without any warranty, express or implied. Use at your own risk. For full details, please refer to the [License](#license) section.
## Optional Configuration Settings

The following optional settings can also be added to the config file described in the Configuration section:

```bash
# Optional: Set to 'false' to disable SSL certificate verification. Defaults to 'true'.
# UYUNI_MCP_SSL_VERIFY=false

# Optional: Set to 'true' to enable tools that perform write actions (e.g., POST requests). Defaults to 'false'.
# UYUNI_MCP_WRITE_TOOLS_ENABLED=false

# Optional: Set the transport protocol. Can be 'stdio' (default) or 'http'.
# UYUNI_MCP_TRANSPORT=stdio

# Optional: Set the path for the server log file. Defaults to logging to the console.
```

> [!WARNING]
> **Security Note on Write Tools:** Enabling `UYUNI_MCP_WRITE_TOOLS_ENABLED` allows the execution of state-changing and potentially destructive actions (e.g., removing systems, applying updates). When combined with `UYUNI_MCP_TRANSPORT=http`, this risk is amplified, as any client with network access can perform these actions. Only enable write tools in a trusted environment.

> [!WARNING]
> **Security Note on HTTP Transport:** When `UYUNI_MCP_TRANSPORT` is set to `http`, the server runs without authentication. This means any client with network access can execute commands. Only use this mode in a trusted, isolated network environment. For more details, see the Security Policy.