Roles and filters #18


Open · wants to merge 3 commits into base: main
122 changes: 122 additions & 0 deletions .gitignore
@@ -0,0 +1,122 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# PEP 582; __pypackages__ directory
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath files
.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/
56 changes: 56 additions & 0 deletions netbox_brief_mode_plan.md
@@ -0,0 +1,56 @@
# Plan: Enhance NetBox MCP Server Brief Device Queries

**Goal:** Modify the NetBox MCP server to include `manufacturer`, `model`, `serial number`, and `site` in the "brief" output for device queries, and update in-code documentation to reflect this change for clarity, especially for LLM consumers.

**Affected File:** `server.py`

## Revised Plan Details:

1. **Target Function & Comments:**
* Code modifications will be made within the `netbox_get_objects` function in `server.py`.
* Documentation (comment) modifications will be at the top of `server.py` (around lines 8-35, specifically updating the description of `brief=True` behavior).

2. **Locate Brief Mode Logic (Code):**
* Inside the `netbox_get_objects` function, the focus will be on the section that processes results when `brief=True`. This is typically within a loop iterating through fetched items, after an initial `brief_item` dictionary is created.

3. **Device-Specific Enhancement (Code Change):**
* A conditional check will be added: `if object_type == "devices" and isinstance(item, dict):`.
* Inside this condition, the following fields will be extracted from the full `item` and added to the `brief_item` dictionary:
* **Manufacturer Name:** `brief_item['manufacturer_name'] = item.get('manufacturer', {}).get('name')`
* **Model Name:** `brief_item['model_name'] = item.get('device_type', {}).get('model')`
* **Serial Number:** `brief_item['serial_number'] = item.get('serial')` (only if it has a value).
* **Site Name:** `brief_item['site_name'] = item.get('site', {}).get('name')`

4. **Graceful Handling (Code):**
* The use of `.get('key', {}).get('nested_key')` for nested objects and checking `item.get('serial')` will ensure that if any of these fields or their parent objects are missing for a particular device, the process will not error out. Instead, the field will be omitted from the brief output for that specific device.

5. **Update Documentation (Comment Change):**
* The comment block at the beginning of `server.py` (describing `netbox_get_objects` and the `brief` parameter, typically around lines 8-35) will be updated.
* The description of what `brief=True` returns (currently detailed around lines 20-27) will be amended.
* It will be clearly stated that **for `object_type="devices"`**, the brief output will now *also* include `manufacturer_name`, `model_name`, `serial_number`, and `site_name` when these fields are available on the device object. This provides explicit guidance for users and LLMs.

6. **No Impact on Existing Filters:**
* These changes are focused on the *display* of information in brief mode and its documentation. They will not affect the existing filter resolution logic (e.g., `RESOLVABLE_FIELD_MAP`) or how `context_filters` are added to the `brief_item`. The new keys (`manufacturer_name`, `model_name`, `serial_number`, `site_name`) are chosen to be descriptive and avoid clashes with existing filter keys.

## Visual Plan (Mermaid Diagram):

```mermaid
graph TD
    A["Start: User requests enhanced brief device output & LLM guidance"] --> B{"Analyze server.py"};
    B --> C["Identify netbox_get_objects function & its documentation comments"];
    C --> D["Locate 'brief' mode processing loop in function"];
    D --> E{"object_type == 'devices'?"};
    E -- Yes --> F[Extract Manufacturer Name];
    F --> G["Add manufacturer_name to brief_item"];
    G --> H["Extract Model Name from device_type"];
    H --> I["Add model_name to brief_item"];
    I --> J[Extract Serial Number];
    J --> K["Add serial_number to brief_item (if present)"];
    K --> L[Extract Site Name];
    L --> M["Add site_name to brief_item"];
    M --> N[Continue with existing filter context logic];
    E -- No --> N;
    N --> O["Modify server.py comments (lines 8-35)"];
    O -- Add details about new device fields in brief mode --> P["Return brief_results"];
    P --> Q["End: Brief device queries include new fields & docs updated"];
```
41 changes: 36 additions & 5 deletions netbox_client.py
@@ -8,6 +8,10 @@
import abc
from typing import Any, Dict, List, Optional, Union
import requests
import urllib3

# Disable SSL certificate warnings when verify_ssl is False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


class NetBoxClientBase(abc.ABC):
@@ -177,6 +181,7 @@ def _build_url(self, endpoint: str, id: Optional[int] = None) -> str:
    def get(self, endpoint: str, id: Optional[int] = None, params: Optional[Dict[str, Any]] = None) -> Union[Dict[str, Any], List[Dict[str, Any]]]:
        """
        Retrieve one or more objects from NetBox via the REST API.
        Handles pagination for list endpoints.

        Args:
            endpoint: The API endpoint (e.g., 'dcim/sites', 'ipam/prefixes')
@@ -190,14 +195,40 @@ def get(self, endpoint: str, id: Optional[int] = None, params: Optional[Dict[str
        Raises:
            requests.HTTPError: If the request fails
        """
        url = self._build_url(endpoint, id)
        # Copy params so the caller's dict is not mutated; they are only
        # needed for the first request, since NetBox 'next' URLs already
        # embed the necessary query parameters.
        current_params = params.copy() if params else {}

        response = self.session.get(url, params=current_params, verify=self.verify_ssl)
        response.raise_for_status()

        data = response.json()

        # If an ID is provided, it's a request for a single object; no pagination.
        if id is not None:
            return data

        # If 'results' is in data, it's a list endpoint.
        # This is the primary path for paginated results.
        if 'results' in data:
            all_results = data['results']  # First page of results
            next_url = data.get('next')  # URL for the next page, if any

            while next_url:
                # Subsequent page requests use the 'next' URL directly,
                # which already contains the necessary filters/offsets.
                response = self.session.get(next_url, verify=self.verify_ssl)
                response.raise_for_status()
                page_data = response.json()
                # Extend the list with results from the current page
                all_results.extend(page_data.get('results', []))
                next_url = page_data.get('next')  # URL for the page after this one
            return all_results  # Return all accumulated results
        else:
            # 'id' is None (list endpoint) but the 'results' key is missing.
            # This could be an endpoint returning a list directly (uncommon
            # for the standard NetBox API) or an unexpected response format.
            return data

    def create(self, endpoint: str, data: Dict[str, Any]) -> Dict[str, Any]:
        """
63 changes: 63 additions & 0 deletions netbox_pagination_plan.md
@@ -0,0 +1,63 @@
# Plan to Implement Pagination Handling in NetBox Client

This document outlines the plan to update the `get` method in the `NetBoxRestClient` to fully support API pagination.

## Current State

The current `get` method in [`netbox_client.py`](netbox_client.py) has a basic check for paginated results:

```python
# In NetBoxRestClient.get()
# ...
data = response.json()
if id is None and 'results' in data:
    # Handle paginated results
    return data['results']
# ...
```

This extracts results from the *first page* only.

## Proposed Changes

1. **Modify the `get` method in `NetBoxRestClient` (currently at line [`netbox_client.py:181`](netbox_client.py:181)):**
* After the initial API request, check if `id` is `None` (indicating a list endpoint was called) and if the response JSON (`data`) contains a `next` key with a non-null value. This `next` key holds the URL for the subsequent page of results.
* If a `next` URL exists:
* Initialize an empty list, `all_results`, and add the `results` from the current page (`data['results']`) to this list.
* Store the `next` URL (e.g., `current_url = data['next']`).
* Enter a `while` loop that continues as long as `current_url` is not `None`.
* Inside the loop:
* Make a GET request to `current_url`.
* Update `data` with the JSON response from this new request.
* Append the `results` from this new page (`data['results']`) to the `all_results` list.
* Update `current_url` with the new `data['next']` value (which could be another URL or `None`).
* Once the loop finishes (i.e., `current_url` is `None`), all pages have been fetched. Return the `all_results` list.
* If the initial response is not paginated (e.g., `id` is provided, or the `next` key is not present or is `None` in the initial response), the existing logic to return `data` (for a single object) or `data['results']` (for a single page of a non-paginated list) should be maintained.
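Stripped of HTTP details, the accumulation loop described above can be sketched as follows. Here `fetch_json` is a stand-in for `session.get(url).json()` with error handling, so the pagination logic can be shown (and exercised) in isolation:

```python
from typing import Any, Callable, Dict, List


def get_all_results(first_page: Dict[str, Any],
                    fetch_json: Callable[[str], Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Accumulate 'results' across NetBox-style paginated responses.

    `first_page` is the parsed JSON of the initial request; `fetch_json(url)`
    fetches and parses one subsequent page (a placeholder for the real
    session GET plus raise_for_status plus .json()).
    """
    # Start with the results from the first page
    all_results = list(first_page.get('results', []))
    next_url = first_page.get('next')

    # Follow 'next' links until the API reports no further pages
    while next_url:
        page = fetch_json(next_url)
        all_results.extend(page.get('results', []))
        next_url = page.get('next')

    return all_results
```

With a fake `fetch_json` backed by a dict of canned pages, the loop can be unit-tested without touching a live NetBox instance.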

## Mermaid Diagram of the `get` method logic:

```mermaid
graph TD
    A["Start get(endpoint, id, params)"] --> B{"id is None?"};
    B -- Yes --> C{"Initial API Call"};
    B -- No --> D[API Call for single object];
    D --> E["Parse response.json() as data"];
    E --> F[Return data];
    C --> G{"Parse response.json() as data"};
    G --> H{"data has 'next' URL and 'results'?"};
    H -- No --> I["Return data['results'] if present, else data"];
    H -- Yes --> J["Initialize all_results = data['results']"];
    J --> K["current_url = data['next']"];
    K --> L{"current_url is not None?"};
    L -- Yes --> M[Fetch data from current_url];
    M --> N{"Parse new_response.json() as data_page"};
    N --> O["Append data_page['results'] to all_results"];
    O --> P["current_url = data_page['next']"];
    P --> L;
    L -- No --> Q[Return all_results];
```

## Decisions Made

* **Rate Limiting:** No explicit delay will be added between fetching pages for now. This can be revisited if rate-limiting issues arise.
* **Scope:** The focus will be solely on the `get` method. Pagination for responses of bulk operations (`bulk_create`, `bulk_update`, `bulk_delete`) will not be investigated at this time.
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -7,5 +7,5 @@ requires-python = ">=3.13"
dependencies = [
"httpx>=0.28.1",
"mcp[cli]>=1.3.0",
"requests>=2.31.0",
"requests>=2.25.0",
]