Merge stable into develop #458

Merged: 30 commits into develop from stable on Jul 28, 2025

Commits (30)
12623ca
Initial plan
Copilot Jun 30, 2025
265c9f0
Fix vale lint command to exclude node_modules directory
Copilot Jun 30, 2025
1d78e57
Add setting typename, display_label, and kind if it exists when calli…
FragmentedPacket Jun 30, 2025
560238a
release 1.13.3 (#459)
ajtmccarty Jun 30, 2025
ce43631
fix _process_relationships for Sync client
BeArchiTek Jul 1, 2025
8c99a80
add changelog
BeArchiTek Jul 1, 2025
02b4f12
Merge pull request #460 from opsmill/bkr-fix-process_relationships
BeArchiTek Jul 2, 2025
f58c459
Fix `load_from_disk` method to support folder with multiple file exte…
BaptisteGi Jul 4, 2025
d527bff
fix signature delta
BeArchiTek Jul 14, 2025
cdd64c3
do not omit offset when offset=0
BeArchiTek Jul 14, 2025
0dd2541
fragment and rollback node change
BeArchiTek Jul 14, 2025
ec3627f
Merge pull request #468 from opsmill/bkr-fix-sync-parallel-filters
BeArchiTek Jul 14, 2025
c67fb29
fixes #469
BeArchiTek Jul 15, 2025
8d89883
Merge pull request #470 from opsmill/bkr-remove-node-process-page
BeArchiTek Jul 16, 2025
121c94d
add check for empty list of schema
BeArchiTek Jul 16, 2025
8c71f45
add changelog
BeArchiTek Jul 16, 2025
6465b92
fix: improve cardinality many relationship fetch (#476)
fatih-acar Jul 22, 2025
19a03ae
Merge pull request #472 from opsmill/bkr-fix-load-schemas
dgarros Jul 22, 2025
396c50b
Prepare version 1.13.4
dgarros Jul 22, 2025
5246e3f
Merge pull request #477 from opsmill/dga-release-1.13.4
dgarros Jul 22, 2025
ef7fc07
Merge pull request #457 from opsmill/copilot/fix-372
dgarros Jul 22, 2025
d227402
respect ordering of files when loading
ajtmccarty Jul 22, 2025
e12cc84
Merge pull request #478 from opsmill/ajtm-07222025-respect-file-order
dgarros Jul 23, 2025
142df70
Prepare release 1.13.5
dgarros Jul 23, 2025
64a7957
Merge pull request #480 from opsmill/dga-release-1.13.5
dgarros Jul 23, 2025
f5e3b69
pass branch into count call
ajtmccarty Jul 23, 2025
de6cfbf
Create batch directly instead of using create_batch while fetching re…
dgarros Jul 24, 2025
21def0a
Merge pull request #483 from opsmill/dga-20250724-create-batch
dgarros Jul 24, 2025
7876f13
add changelog
ajtmccarty Jul 24, 2025
16da1ba
Merge pull request #482 from opsmill/ajtm-0723205-branch-in-count
dgarros Jul 24, 2025
22 changes: 22 additions & 0 deletions CHANGELOG.md
@@ -11,6 +11,28 @@ This project uses [*towncrier*](https://towncrier.readthedocs.io/) and the chang

<!-- towncrier release notes start -->

## [1.13.5](https://github.com/opsmill/infrahub-sdk-python/tree/v1.13.5) - 2025-07-23

### Fixed

- Respect ordering when loading files from a directory

## [1.13.4](https://github.com/opsmill/infrahub-sdk-python/tree/v1.13.4) - 2025-07-22

### Fixed

- Fix processing of relationships during node retrieval using the Sync Client, when prefetching related_nodes. ([#461](https://github.com/opsmill/infrahub-sdk-python/issues/461))
- Fix schema loading to ignore non-YAML files in folders. ([#462](https://github.com/opsmill/infrahub-sdk-python/issues/462))
- Fix ignored node variable in filters(). ([#469](https://github.com/opsmill/infrahub-sdk-python/issues/469))
- Fix use of parallel with filters for Infrahub Client Sync.
- Avoid sending an empty list to Infrahub if no valid schemas are found.

## [1.13.3](https://github.com/opsmill/infrahub-sdk-python/tree/v1.13.3) - 2025-06-30

### Fixed

- Update InfrahubNode creation to include __typename, display_label, and kind from a RelatedNode ([#455](https://github.com/opsmill/infrahub-sdk-python/issues/455))

## [1.13.2](https://github.com/opsmill/infrahub-sdk-python/tree/v1.13.2) - 2025-06-27

### Fixed
1 change: 1 addition & 0 deletions changelog/+batch.fixed.md
@@ -0,0 +1 @@
Create a new batch while fetching relationships instead of reusing the same one.
1 change: 1 addition & 0 deletions changelog/+branch-in-count.fixed.md
@@ -0,0 +1 @@
Update internal calls to `count` to include the branch parameter so that the query is performed on the correct branch.
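
To illustrate the fix above, a minimal sketch of the corrected call; the address, kind, and branch name are placeholders, not values from this PR:

```python
# Minimal sketch, assuming a reachable Infrahub instance; the address,
# kind, and branch name are illustrative placeholders.
from infrahub_sdk import InfrahubClient


async def count_on_branch() -> int:
    client = InfrahubClient(address="http://localhost:8000")
    # Before this fix, internal pagination called count() without a
    # branch, so the total was computed against the default branch.
    return await client.count(kind="BuiltinTag", branch="my-feature")
```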
15 changes: 7 additions & 8 deletions infrahub_sdk/client.py
@@ -784,7 +784,6 @@ async def filters(
if at:
at = Timestamp(at)

node = InfrahubNode(client=self, schema=schema, branch=branch)
filters = kwargs
pagination_size = self.pagination_size

@@ -825,12 +824,12 @@ async def process_batch() -> tuple[list[InfrahubNode], list[InfrahubNode]]:
nodes = []
related_nodes = []
batch_process = await self.create_batch()
count = await self.count(kind=schema.kind, partial_match=partial_match, **filters)
count = await self.count(kind=schema.kind, branch=branch, partial_match=partial_match, **filters)
total_pages = (count + pagination_size - 1) // pagination_size

for page_number in range(1, total_pages + 1):
page_offset = (page_number - 1) * pagination_size
batch_process.add(task=process_page, node=node, page_offset=page_offset, page_number=page_number)
batch_process.add(task=process_page, page_offset=page_offset, page_number=page_number)

async for _, response in batch_process.execute():
nodes.extend(response[1]["nodes"])
@@ -847,7 +846,7 @@ async def process_non_batch() -> tuple[list[InfrahubNode], list[InfrahubNode]]:

while has_remaining_items:
page_offset = (page_number - 1) * pagination_size
response, process_result = await process_page(page_offset, page_number)
response, process_result = await process_page(page_offset=page_offset, page_number=page_number)

nodes.extend(process_result["nodes"])
related_nodes.extend(process_result["related_nodes"])
@@ -1946,9 +1945,9 @@ def filters(
"""
branch = branch or self.default_branch
schema = self.schema.get(kind=kind, branch=branch)
node = InfrahubNodeSync(client=self, schema=schema, branch=branch)
if at:
at = Timestamp(at)

filters = kwargs
pagination_size = self.pagination_size

@@ -1990,12 +1989,12 @@ def process_batch() -> tuple[list[InfrahubNodeSync], list[InfrahubNodeSync]]:
related_nodes = []
batch_process = self.create_batch()

count = self.count(kind=schema.kind, partial_match=partial_match, **filters)
count = self.count(kind=schema.kind, branch=branch, partial_match=partial_match, **filters)
total_pages = (count + pagination_size - 1) // pagination_size

for page_number in range(1, total_pages + 1):
page_offset = (page_number - 1) * pagination_size
batch_process.add(task=process_page, node=node, page_offset=page_offset, page_number=page_number)
batch_process.add(task=process_page, page_offset=page_offset, page_number=page_number)

for _, response in batch_process.execute():
nodes.extend(response[1]["nodes"])
@@ -2012,7 +2011,7 @@ def process_non_batch() -> tuple[list[InfrahubNodeSync], list[InfrahubNodeSync]]

while has_remaining_items:
page_offset = (page_number - 1) * pagination_size
response, process_result = process_page(page_offset, page_number)
response, process_result = process_page(page_offset=page_offset, page_number=page_number)

nodes.extend(process_result["nodes"])
related_nodes.extend(process_result["related_nodes"])
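
The page math in both `filters()` implementations above is plain ceiling division; a self-contained sketch:

```python
# Ceiling division used in filters() to derive total_pages from count:
# 101 matching nodes with a pagination_size of 50 need 3 pages.
count, pagination_size = 101, 50
total_pages = (count + pagination_size - 1) // pagination_size
assert total_pages == 3  # pages of 50, 50, and 1
```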
3 changes: 3 additions & 0 deletions infrahub_sdk/ctl/utils.py
@@ -187,6 +187,9 @@ def load_yamlfile_from_disk_and_exit(
has_error = False
try:
data_files = file_type.load_from_disk(paths=paths)
if not data_files:
console.print("[red]No valid files found to load.")
raise typer.Exit(1)
except FileNotValidError as exc:
console.print(f"[red]{exc.message}")
raise typer.Exit(1) from exc
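
The new guard above exits early instead of proceeding with nothing to load; a minimal sketch, assuming `typer` and rich's `Console` as used in the diff:

```python
# Minimal sketch of the empty-result guard; guard_against_empty is a
# hypothetical helper, not a function from this PR.
import typer
from rich.console import Console

console = Console()


def guard_against_empty(data_files: list) -> None:
    if not data_files:
        console.print("[red]No valid files found to load.")
        raise typer.Exit(1)
```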
44 changes: 28 additions & 16 deletions infrahub_sdk/node/node.py
@@ -402,10 +402,10 @@ def generate_query_data_init(
if order:
data["@filters"]["order"] = order

if offset:
if offset is not None:
data["@filters"]["offset"] = offset

if limit:
if limit is not None:
data["@filters"]["limit"] = limit

if include and exclude:
@@ -507,11 +507,17 @@ def _init_relationships(self, data: dict | RelatedNode | None = None) -> None:

if rel_schema.cardinality == "one":
if isinstance(rel_data, RelatedNode):
peer_id_data: dict[str, Any] = {}
if rel_data.id:
peer_id_data["id"] = rel_data.id
if rel_data.hfid:
peer_id_data["hfid"] = rel_data.hfid
peer_id_data: dict[str, Any] = {
key: value
for key, value in (
("id", rel_data.id),
("hfid", rel_data.hfid),
("__typename", rel_data.typename),
("kind", rel_data.kind),
("display_label", rel_data.display_label),
)
if value is not None
}
if peer_id_data:
rel_data = peer_id_data
else:
@@ -1090,11 +1096,17 @@ def _init_relationships(self, data: dict | None = None) -> None:

if rel_schema.cardinality == "one":
if isinstance(rel_data, RelatedNodeSync):
peer_id_data: dict[str, Any] = {}
if rel_data.id:
peer_id_data["id"] = rel_data.id
if rel_data.hfid:
peer_id_data["hfid"] = rel_data.hfid
peer_id_data: dict[str, Any] = {
key: value
for key, value in (
("id", rel_data.id),
("hfid", rel_data.hfid),
("__typename", rel_data.typename),
("kind", rel_data.kind),
("display_label", rel_data.display_label),
)
if value is not None
}
if peer_id_data:
rel_data = peer_id_data
else:
@@ -1481,15 +1493,15 @@ def _process_relationships(
for rel_name in self._relationships:
rel = getattr(self, rel_name)
if rel and isinstance(rel, RelatedNodeSync):
relation = node_data["node"].get(rel_name)
if relation.get("node", None):
relation = node_data["node"].get(rel_name, None)
if relation and relation.get("node", None):
related_node = InfrahubNodeSync.from_graphql(
client=self._client, branch=branch, data=relation, timeout=timeout
)
related_nodes.append(related_node)
elif rel and isinstance(rel, RelationshipManagerSync):
peers = node_data["node"].get(rel_name)
if peers:
peers = node_data["node"].get(rel_name, None)
if peers and peers["edges"]:
for peer in peers["edges"]:
related_node = InfrahubNodeSync.from_graphql(
client=self._client, branch=branch, data=peer, timeout=timeout
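
The `offset`/`limit` change above swaps a truthiness test for an explicit `None` check; a standalone sketch of why that matters:

```python
# Why `if offset:` was wrong: 0 is falsy in Python, so an explicit
# offset=0 was silently dropped from the GraphQL filters (see the
# "do not omit offset when offset=0" commit above).
filters: dict[str, int] = {}
offset = 0

if offset:  # old check: never true for 0
    filters["offset"] = offset
assert "offset" not in filters

if offset is not None:  # new check: keeps an explicit 0
    filters["offset"] = offset
assert filters["offset"] == 0
```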
7 changes: 7 additions & 0 deletions infrahub_sdk/node/related_node.py
@@ -39,6 +39,7 @@ def __init__(self, branch: str, schema: RelationshipSchemaAPI, data: Any | dict,
self._hfid: list[str] | None = None
self._display_label: str | None = None
self._typename: str | None = None
self._kind: str | None = None

if isinstance(data, (CoreNodeBase)):
self._peer = data
@@ -118,6 +119,12 @@ def typename(self) -> str | None:
return self._peer.typename
return self._typename

@property
def kind(self) -> str | None:
if self._peer:
return self._peer.get_kind()
return self._kind

def _generate_input_data(self, allocate_from_pool: bool = False) -> dict[str, Any]:
data: dict[str, Any] = {}

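
The new `kind` property follows the same fallback pattern as `typename`: prefer the materialized peer, otherwise the value captured from the GraphQL payload. A self-contained sketch of that pattern; the toy class below is not the SDK's `RelatedNode`:

```python
# Illustrative sketch of the fallback used by the new `kind` property.
class RelatedNodeSketch:
    def __init__(self, kind: str | None = None, peer=None) -> None:
        self._kind = kind  # captured from the GraphQL payload
        self._peer = peer  # a materialized node, if already fetched

    @property
    def kind(self) -> str | None:
        if self._peer:
            return self._peer.get_kind()
        return self._kind


print(RelatedNodeSketch(kind="BuiltinTag").kind)  # -> BuiltinTag
```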
45 changes: 43 additions & 2 deletions infrahub_sdk/node/relationship.py
@@ -1,11 +1,15 @@
from __future__ import annotations

from collections import defaultdict
from collections.abc import Iterable
from typing import TYPE_CHECKING, Any

from ..batch import InfrahubBatch
from ..exceptions import (
Error,
UninitializedError,
)
from ..types import Order
from .constants import PROPERTIES_FLAG, PROPERTIES_OBJECT
from .related_node import RelatedNode, RelatedNodeSync

@@ -156,8 +160,26 @@ async def fetch(self) -> None:
self.peers = rm.peers
self.initialized = True

ids_per_kind_map = defaultdict(list)
for peer in self.peers:
await peer.fetch() # type: ignore[misc]
if not peer.id or not peer.typename:
raise Error("Unable to fetch the peer, id and/or typename are not defined")
ids_per_kind_map[peer.typename].append(peer.id)

batch = InfrahubBatch(max_concurrent_execution=self.client.max_concurrent_execution)
for kind, ids in ids_per_kind_map.items():
batch.add(
task=self.client.filters,
kind=kind,
ids=ids,
populate_store=True,
branch=self.branch,
parallel=True,
order=Order(disable=True),
)

async for _ in batch.execute():
pass

def add(self, data: str | RelatedNode | dict) -> None:
"""Add a new peer to this relationship."""
@@ -261,8 +283,27 @@ def fetch(self) -> None:
self.peers = rm.peers
self.initialized = True

ids_per_kind_map = defaultdict(list)
for peer in self.peers:
peer.fetch()
if not peer.id or not peer.typename:
raise Error("Unable to fetch the peer, id and/or typename are not defined")
ids_per_kind_map[peer.typename].append(peer.id)

# Unlike the async client, no need to create a new batch from scratch because we are not using a semaphore
batch = self.client.create_batch()
for kind, ids in ids_per_kind_map.items():
batch.add(
task=self.client.filters,
kind=kind,
ids=ids,
populate_store=True,
branch=self.branch,
parallel=True,
order=Order(disable=True),
)

for _ in batch.execute():
pass

def add(self, data: str | RelatedNodeSync | dict) -> None:
"""Add a new peer to this relationship."""
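
Both `fetch()` rewrites above replace one request per peer with a single `filters()` query per kind. A self-contained sketch of the grouping step; the kinds and ids are illustrative:

```python
# Bucket peer ids by __typename so each kind can be fetched in bulk
# with one filters() call.
from collections import defaultdict

peers = [
    ("BuiltinTag", "id-1"),
    ("BuiltinTag", "id-2"),
    ("CoreAccount", "id-3"),
]

ids_per_kind_map: defaultdict[str, list[str]] = defaultdict(list)
for typename, peer_id in peers:
    ids_per_kind_map[typename].append(peer_id)

assert dict(ids_per_kind_map) == {
    "BuiltinTag": ["id-1", "id-2"],
    "CoreAccount": ["id-3"],
}
```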
20 changes: 13 additions & 7 deletions infrahub_sdk/yaml.py
@@ -120,16 +120,22 @@ def load_file_from_disk(cls, path: Path) -> list[Self]:
@classmethod
def load_from_disk(cls, paths: list[Path]) -> list[Self]:
yaml_files: list[Self] = []
file_extensions = {".yaml", ".yml", ".json"} # FIXME: .json is not a YAML file, should be removed

for file_path in paths:
if file_path.is_file() and file_path.suffix in [".yaml", ".yml", ".json"]:
yaml_files.extend(cls.load_file_from_disk(path=file_path))
if not file_path.exists():
# Check if the provided path exists, relevant for the first call coming from the user
raise FileNotValidError(name=str(file_path), message=f"{file_path} does not exist!")
if file_path.is_file():
if file_path.suffix in file_extensions:
yaml_files.extend(cls.load_file_from_disk(path=file_path))
# else: silently skip files with irrelevant extensions (e.g. .md, .py...)
elif file_path.is_dir():
# Introduce recursion to handle sub-folders
sub_paths = [Path(sub_file_path) for sub_file_path in file_path.glob("*")]
sub_files = cls.load_from_disk(paths=sub_paths)
sorted_sub_files = sorted(sub_files, key=lambda x: x.location)
yaml_files.extend(sorted_sub_files)
else:
raise FileNotValidError(name=str(file_path), message=f"{file_path} does not exist!")
sub_paths = sorted(sub_paths, key=lambda p: p.name)
yaml_files.extend(cls.load_from_disk(paths=sub_paths))
# else: skip anything that is neither a file nor a directory (e.g. a broken symlink)

return yaml_files

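
With the rewrite above, `load_from_disk` recurses into sub-folders and sorts sibling paths by name, so files load in a deterministic order. A hedged usage sketch, assuming a `YamlFile` subclass such as `SchemaFile` and an existing `schemas/` directory:

```python
# Usage sketch; the class and directory are assumptions for
# illustration, not values taken from this PR.
from pathlib import Path

from infrahub_sdk.yaml import SchemaFile

files = SchemaFile.load_from_disk(paths=[Path("schemas/")])
for yaml_file in files:
    print(yaml_file.location)  # deterministic, name-sorted order
```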
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "infrahub-sdk"
version = "1.13.2"
version = "1.13.5"
description = "Python Client to interact with Infrahub"
authors = ["OpsMill <[email protected]>"]
readme = "README.md"
2 changes: 1 addition & 1 deletion tasks.py
@@ -201,7 +201,7 @@ def lint_vale(context: Context) -> None:
return

print(" - Check documentation style with vale")
exec_cmd = r'vale $(find ./docs -type f \( -name "*.mdx" -o -name "*.md" \))'
exec_cmd = r'vale $(find ./docs -type f \( -name "*.mdx" -o -name "*.md" \) -not -path "*/node_modules/*")'
with context.cd(MAIN_DIRECTORY_PATH):
context.run(exec_cmd)

Empty file.
Empty file.
1 change: 1 addition & 0 deletions tests/integration/test_node.py
@@ -83,6 +83,7 @@ async def test_node_create_with_relationships_using_related_node(
assert node_after.name.value == node.name.value
assert node_after.manufacturer.peer.id == manufacturer_mercedes.id
assert node_after.owner.peer.id == person_joe.id
assert node_after.owner.peer.typename == "TestingPerson"

async def test_node_update_with_original_data(
self,
26 changes: 24 additions & 2 deletions tests/unit/sdk/test_node.py
@@ -196,7 +196,13 @@ async def test_init_node_data_user_with_relationships(client, location_schema: N


@pytest.mark.parametrize("client_type", client_types)
@pytest.mark.parametrize("rel_data", [{"id": "pppppppp"}, {"hfid": ["pppp", "pppp"]}])
@pytest.mark.parametrize(
"rel_data",
[
{"id": "pppppppp", "__typename": "BuiltinTag"},
{"hfid": ["pppp", "pppp"], "display_label": "mmmm", "kind": "BuiltinTag"},
],
)
async def test_init_node_data_user_with_relationships_using_related_node(
client, location_schema: NodeSchemaAPI, client_type, rel_data
):
@@ -231,6 +237,9 @@ async def test_init_node_data_user_with_relationships_using_related_node(
assert isinstance(node.primary_tag, RelatedNodeBase)
assert node.primary_tag.id == rel_data.get("id")
assert node.primary_tag.hfid == rel_data.get("hfid")
assert node.primary_tag.typename == rel_data.get("__typename")
assert node.primary_tag.kind == rel_data.get("kind")
assert node.primary_tag.display_label == rel_data.get("display_label")

keys = dir(node)
assert "name" in keys
@@ -1874,6 +1883,19 @@ async def test_node_fetch_relationship(
)

response2 = {
"data": {
"BuiltinTag": {
"count": 1,
}
}
}

httpx_mock.add_response(
method="POST",
json=response2,
)

response3 = {
"data": {
"BuiltinTag": {
"count": 1,
@@ -1886,7 +1908,7 @@

httpx_mock.add_response(
method="POST",
json=response2,
json=response3,
match_headers={"X-Infrahub-Tracker": "query-builtintag-page1"},
)
