Flake schemas #217
base: main
Conversation
Walkthrough

Adds a flake schema subsystem: documentation and Nix schema definitions, a runtime C++ `nix::flake_schemas` API integrated with EvalCache, an AttrPath-based eval-cache refactor, role-based defaults replacing default attr-paths, InstallableFlake integration with cached schema evaluation, and tests updated for schema-driven `flake show`/`check` behavior.
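For orientation, a minimal sketch of the kind of flake this subsystem consumes: one that exposes a `schemas` output. The reuse of the DeterminateSystems schema set below is illustrative; any attribute set following the documented schema format works, and this example is not part of the PR itself.

```nix
{
  # Hypothetical example flake, shown only to illustrate the `schemas` output.
  inputs.flake-schemas.url = "github:DeterminateSystems/flake-schemas";

  outputs = { self, flake-schemas, ... }: {
    # Regular outputs as usual...

    # Commands such as `nix flake show` / `nix flake check` look for this
    # output first; if it is absent, the built-in default schemas are used.
    schemas = flake-schemas.schemas;
  };
}
```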
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Cmd
    participant FlakeSchemas
    participant EvalCache
    participant LockedFlake
    User->>Cmd: run (e.g., nix flake show / check)
    Cmd->>FlakeSchemas: call(state, lockedFlake, defaultSchemasFlake)
    FlakeSchemas->>LockedFlake: mount/read builtin/default schema flake
    LockedFlake-->>FlakeSchemas: LockedFlake
    FlakeSchemas->>EvalCache: evaluate call-flake-schemas expression
    EvalCache-->>FlakeSchemas: inventory (AttrCursor root)
    FlakeSchemas-->>Cmd: cached EvalCache / inventory
    loop per output
        Cmd->>FlakeSchemas: forEachOutput(inventory, callback)
        FlakeSchemas->>FlakeSchemas: visit(node)
        alt leaf node
            FlakeSchemas->>EvalCache: evaluate evalChecks / derivation attribute
            EvalCache-->>FlakeSchemas: result / drv path
        else non-leaf
            FlakeSchemas->>FlakeSchemas: enumerate children
        end
        FlakeSchemas-->>Cmd: callback with output info
    end
    Cmd->>User: render JSON/tree and optionally build derivations
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning), ✅ Passed checks (2 passed)
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/libexpr/include/nix/expr/eval-cache.hh (1)
8-9: Missing `<vector>` include for `AttrPath`. Prevents brittle reliance on transitive includes.
```diff
 #include <functional>
 #include <variant>
+#include <vector>
```
🧹 Nitpick comments (20)
tests/functional/completions.sh (1)
72-72: Don't hide a completion regression; make the assertion resilient or gate it.

Commenting out this check can mask breakage. Either:
- Restore a looser assertion that only checks for presence of “./foo#sampleOutput” regardless of the category line, or
- Guard the test behind a capability check and leave a clear FIXME.
Example making the check robust to category changes:
```diff
-#[[ "$(NIX_GET_COMPLETIONS=2 nix eval ./foo\#sam)" == $'attrs\n./foo#sampleOutput\t' ]]
+NIX_GET_COMPLETIONS=2 nix eval ./foo#sa | tail -n +2 | grep -F -- './foo#sampleOutput'
```

Please confirm whether schema-driven completions changed categories; if so, add a TODO comment with a tracking issue.
tests/functional/chroot-store.sh (1)
49-51: Avoid blanket skip with "if false"; restore conditional and document why skip is needed.

Replacing the sandbox guard with a hard skip drops coverage and can hide regressions. Revert to the original capability check (or a named env toggle), and keep the FIXME if instability persists.
```diff
-if false; then # FIXME
-#if canUseSandbox; then
+if canUseSandbox; then # FIXME: re-evaluate once chroot path is stabilized
```

If there's a reproducible flake-schemas interaction here, reference it in the FIXME so we can track re-enabling.
src/libcmd/include/nix/cmd/installable-flake.hh (1)
66-70: Update docstring to match schema/fragment model.

Comment still references getActualAttrPaths(); replace with wording around schema-driven fragments and cursor retrieval from the eval cache.
src/libcmd/builtin-flake-schemas.nix (2)
7-8: Remove dead binding `mapAttrsToList`.

Defined but unused; drop it to keep the module clean.

```diff
-  mapAttrsToList = f: attrs: map (name: f name attrs.${name}) (builtins.attrNames attrs);
```
248-263: Harden overlay validation to avoid spurious throws.

Some overlays force attributes even during shape checks; wrap the check in `lib.try` so schema inventory doesn't abort on non-critical overlays.

```diff
-    evalChecks.isOverlay =
-      # FIXME: should try to apply the overlay to an actual
-      # Nixpkgs. But we don't have access to a nixpkgs
-      # flake here. Maybe this schema should be moved to the
-      # nixpkgs flake, where it does have access.
-      if !builtins.isFunction overlay then
-        throw "overlay is not a function, but a set instead"
-      else
-        builtins.isAttrs (overlay { } { });
+    evalChecks.isOverlay =
+      # FIXME: ideally validate against real nixpkgs.
+      if !builtins.isFunction overlay then
+        false
+      else
+        self.lib.try (builtins.isAttrs (overlay { } { })) false;
```

tests/functional/formatter.sh (1)
87-89: Improve test portability and robustness.
- Replace `grep -P` (PCRE) with `grep -E` for broader availability.
- Prefer `clearStoreIfPossible` to avoid failures on environments without permissions.

```diff
-clearStore
-nix flake show | grep -P "package.*\[formatter\]"
+clearStoreIfPossible
+nix flake show | grep -E "package.*\[formatter\]"
```

doc/manual/source/protocols/flake-schemas.md (1)
1-64: Excellent documentation with one minor formatting issue.

The new Flake Schemas documentation is comprehensive and well-structured. It clearly explains:
- The purpose and mechanism of flake schemas
- Schema structure and attributes
- Inventory node types and their attributes
- A practical example for validation
Apply this diff to fix the bare URL on line 7:
```diff
-A flake can define schemas for its outputs by defining a `schemas` output. `schemas` should be an attribute set with an attribute for
-every output type that you want to be supported. If a flake does not have a `schemas` attribute, Nix uses a built-in set of schemas (namely https://github.com/DeterminateSystems/flake-schemas).
+A flake can define schemas for its outputs by defining a `schemas` output. `schemas` should be an attribute set with an attribute for
+every output type that you want to be supported. If a flake does not have a `schemas` attribute, Nix uses a built-in set of schemas (namely [https://github.com/DeterminateSystems/flake-schemas](https://github.com/DeterminateSystems/flake-schemas)).
```

Based on static analysis hints.
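To make the documented structure concrete, here is a rough sketch of a single schema entry. The `version`, `inventory`, `children`, `forSystems`, `what`, and `derivation` names follow the shapes referenced elsewhere in this review; the exact contract is defined by the schema flake and the manual page, not by this sketch.

```nix
{
  # One entry of a flake's `schemas` output, keyed by output type.
  packages = {
    version = 1;
    # `inventory` maps the raw flake output to a tree of nodes that
    # `nix flake show`/`check` can walk without building anything.
    inventory = output: {
      children = builtins.mapAttrs (system: perSystem: {
        forSystems = [ system ];
        children = builtins.mapAttrs (name: pkg: {
          what = "package";
          derivation = pkg;
        }) perSystem;
      }) output;
    };
  };
}
```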
src/libflake/include/nix/flake/flake.hh (2)
228-239: API polish: document parameters and mark results nodiscard.

Recommend tightening the API:
- Document lockedRef vs resolvedRef vs originalRef and lockRootPath explicitly.
- Mark results [[nodiscard]] to discourage accidental discards.
```diff
-/**
- * Return a `Flake` object representing the flake read from the
- * `flake.nix` file in `rootDir`.
- */
-Flake readFlake(
+/**
+ * Return a `Flake` read from `flake.nix` in `rootDir`.
+ * - originalRef: user-specified ref
+ * - resolvedRef: registry-resolved ref
+ * - lockedRef: fetched ref (may include commit hash)
+ * - lockRootPath: input path in the lock graph that this flake anchors to
+ */
+[[nodiscard]] Flake readFlake(
     EvalState & state,
     const FlakeRef & originalRef,
     const FlakeRef & resolvedRef,
     const FlakeRef & lockedRef,
     const SourcePath & rootDir,
     const InputAttrPath & lockRootPath);
```
247-249: Avoid an unnecessary copy of Flake (or clarify move intent).

Passing Flake by value can copy a large structure. Prefer const Flake& if not modifying, or Flake&& if you intend to move.

```diff
-LockedFlake lockFlake(
-    const Settings & settings, EvalState & state, const FlakeRef & topRef, const LockFlags & lockFlags, Flake flake);
+LockedFlake lockFlake(
+    const Settings & settings, EvalState & state, const FlakeRef & topRef, const LockFlags & lockFlags, const Flake & flake);
```

If move semantics are intended:

```diff
-    ..., Flake flake);
+    ..., Flake&& flake);
```

src/nix/profile.cc (1)
730-735: Optional defensive check: consider guarding against leading "." to prevent edge cases.

The verification confirms that:
- `element.source->attrPath` is stored without a leading "." (sourced from manifest JSON and flake info, displayed as-is in logs and UI)
- Empty roles behavior is correct and intentional—the loop over roles simply doesn't execute, yielding an empty attrPaths list, which is valid for commands not using flake schemas

The suggested defensive check `!element.source->attrPath.empty() && element.source->attrPath[0] == '.'` is optional robustness: while no current code path creates a leading ".", guarding against it would future-proof the absolute lookup against accidental malformed attrPath values.

```diff
-    "." + element.source->attrPath, // absolute lookup
+    ( !element.source->attrPath.empty() && element.source->attrPath[0] == '.'
+        ? element.source->attrPath
+        : "." + element.source->attrPath ), // absolute lookup (defensive)
     element.source->outputs,
     StringSet{},
     lockFlags,
     getDefaultFlakeSchemas());
```

src/libcmd/include/nix/cmd/command.hh (2)
135-142: Make MixFlakeSchemas methods const and ensure FlakeRef is visible in this header
- getDefaultFlakeSchemas() and the data member access can be const; no mutation implied.
- Ensure FlakeRef is declared/visible here (include flake.hh or add a forward decl) to avoid ODR/visibility surprises if lockfile.hh stops re-exporting it.
Proposed header tweaks:
```diff
-struct MixFlakeSchemas : virtual Args, virtual StoreCommand
+struct MixFlakeSchemas : virtual Args, virtual StoreCommand
 {
     std::optional<std::string> defaultFlakeSchemas;

     MixFlakeSchemas();

-    std::optional<FlakeRef> getDefaultFlakeSchemas();
+    std::optional<FlakeRef> getDefaultFlakeSchemas() const;
 };
```

If FlakeRef isn't guaranteed via current includes, add:

```diff
+#include "nix/flake/flake.hh"
```

or forward-declare in the correct namespace.
144-144: Prefer const on getRoles()

The roles a command implements are immutable per command type. Mark getRoles() const in the base to encourage consistent overrides.

```diff
-    virtual StringSet getRoles();
+    virtual StringSet getRoles() const;
```

src/libflake/flake.cc (1)
207-214: Clarify readFlake linkage

readFlake is no longer static. If it's meant to be TU-internal, keep it static; if public, add the declaration to the appropriate header to avoid accidental ODR exposure.
src/libcmd/installables.cc (2)
236-239: Make getRoles() const and document overrides

The default {"nix-build"} is fine, but mark it const and document that subcommands (run, develop, repl, etc.) should override to avoid mis-scoping defaults.

```diff
-StringSet SourceExprCommand::getRoles()
+StringSet SourceExprCommand::getRoles() const
 {
     return {"nix-build"};
 }
```

Also update the declaration in command.hh accordingly.
428-459: openEvalCache helper duplication

This helper overlaps with flake_schemas::call's cache setup. Consider consolidating to a single code path to avoid divergence, or clearly mark this as legacy path and add a TODO with the intended removal milestone.
src/nix/flake.cc (3)
418-429: Tighten lambda capture for readability

Initializer capture `output(ref(output))` shadows the outer parameter. Works, but is non-obvious. Prefer an explicit local or different capture name to avoid confusion.

```diff
-            if (output)
-                futures.spawn(1, [&visit, output(ref(output))]() { visit(output); });
+            if (output) {
+                auto outRef = ref(output);
+                futures.spawn(1, [&visit, outRef]() { visit(outRef); });
+            }
```
432-434: Unchecked outputs warning: great, add hint for schemas

Message is helpful. Optionally mention that missing schemas in the flake or defaults could cause "unknown" outputs.
850-869: Parallel JSON rendering is fine; guard unknown/skipped rendering

Currently, the tree renderer always calls render(j["output"]) even when "unknown" or "skipped". Consider skipping render() in those cases to avoid creating empty nodes and to annotate "skipped" explicitly.

```diff
-            if (!showLegacy && state->symbols[outputName] == "legacyPackages") {
-                j.emplace("skipped", true);
-            } else if (output) {
+            if (!showLegacy && state->symbols[outputName] == "legacyPackages") {
+                j.emplace("skipped", true);
+            } else if (output) {
                 ...
             } else
                 j.emplace("unknown", true);
```

And in the tree output loop:

```diff
-                render(child.value()["output"], ...);
+                if (child.value().contains("output"))
+                    render(child.value()["output"], ...);
+                else if (child.value().contains("skipped"))
+                    logger->cout(ANSI_GREEN "%s%s" ANSI_NORMAL ANSI_ITALIC "(skipped)" ANSI_NORMAL, nextPrefix, treeLast);
+                else if (child.value().contains("unknown"))
+                    logger->cout(ANSI_GREEN "%s%s" ANSI_NORMAL ANSI_ITALIC "(unknown flake output)" ANSI_NORMAL, nextPrefix, treeLast);
```

src/libcmd/installable-flake.cc (1)
147-216: Schema/role-based resolution logic: good, but add small hardening
- Flow correctly prioritizes schemas and falls back to outputs traversal and repl-compat hack.
- Suggest caching parsedFragment emptiness before loops to avoid repeated checks, and dedup attrPaths to reduce duplicate probes when multiple roles overlap.
```diff
-    std::vector<eval_cache::AttrPath> attrPaths;
+    std::vector<eval_cache::AttrPath> attrPaths;
+    auto pushUnique = [&](eval_cache::AttrPath && p) {
+        if (std::find(attrPaths.begin(), attrPaths.end(), p) == attrPaths.end())
+            attrPaths.push_back(std::move(p));
+    };
@@
-        attrPaths.push_back(parseAttrPath(state, fragment.substr(1)));
+        pushUnique(parseAttrPath(state, fragment.substr(1)));
@@
-                attrPaths.push_back(attrPath2);
+                pushUnique(std::move(attrPath2));
@@
-                attrPaths.push_back(attrPath2);
+                pushUnique(std::move(attrPath2));
@@
-            attrPaths.push_back(parsedFragment);
+            pushUnique(parsedFragment);
@@
-        attrPaths.push_back({});
+        pushUnique({});
```

Also applies to: 221-238, 240-242
src/libcmd/include/nix/cmd/flake-schemas.hh (1)
18-26: Prefer `using` over `typedef` for clarity

Style nit: use `using ForEachChild = std::function<...>;`.

```diff
-typedef std::function<void(Symbol attrName, ref<AttrCursor> attr, bool isLast)> ForEachChild;
+using ForEachChild = std::function<void(Symbol attrName, ref<AttrCursor> attr, bool isLast)>;
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (32)
- doc/manual/source/SUMMARY.md.in (1 hunks)
- doc/manual/source/protocols/flake-schemas.md (1 hunks)
- src/libcmd/builtin-flake-schemas.nix (1 hunks)
- src/libcmd/call-flake-schemas.nix (1 hunks)
- src/libcmd/flake-schemas.cc (1 hunks)
- src/libcmd/include/nix/cmd/command.hh (3 hunks)
- src/libcmd/include/nix/cmd/flake-schemas.hh (1 hunks)
- src/libcmd/include/nix/cmd/installable-flake.hh (2 hunks)
- src/libcmd/include/nix/cmd/meson.build (1 hunks)
- src/libcmd/installable-flake.cc (5 hunks)
- src/libcmd/installables.cc (7 hunks)
- src/libcmd/meson.build (2 hunks)
- src/libcmd/package.nix (1 hunks)
- src/libexpr/eval-cache.cc (8 hunks)
- src/libexpr/include/nix/expr/eval-cache.hh (6 hunks)
- src/libflake/flake.cc (3 hunks)
- src/libflake/include/nix/flake/flake.hh (1 hunks)
- src/nix/bundle.cc (2 hunks)
- src/nix/develop.cc (2 hunks)
- src/nix/flake-check.md (1 hunks)
- src/nix/flake.cc (8 hunks)
- src/nix/formatter.cc (1 hunks)
- src/nix/profile.cc (1 hunks)
- src/nix/repl.cc (1 hunks)
- src/nix/run.cc (1 hunks)
- src/nix/search.cc (1 hunks)
- tests/functional/chroot-store.sh (1 hunks)
- tests/functional/completions.sh (1 hunks)
- tests/functional/flakes/check.sh (1 hunks)
- tests/functional/flakes/common.sh (2 hunks)
- tests/functional/flakes/show.sh (5 hunks)
- tests/functional/formatter.sh (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (10)
src/libexpr/include/nix/expr/eval-cache.hh (2)
src/libcmd/flake-schemas.cc (1)
state(248-248)src/libcmd/include/nix/cmd/installable-flake.hh (2)
state(63-63)state(69-69)
src/nix/flake.cc (4)
src/libflake/flake.cc (4)
lockFlake(379-894)lockFlake(379-380)lockFlake(896-904)lockFlake(897-897)src/libflake/include/nix/flake/flake.hh (2)
lockFlake(245-245)lockFlake(247-248)src/libcmd/flake-schemas.cc (14)
call(37-120)call(38-38)getDefaultFlakeSchemas(295-301)getDefaultFlakeSchemas(295-295)visit(146-186)visit(146-151)derivation(206-209)derivation(206-206)forEachOutput(122-144)forEachOutput(122-124)what(188-194)what(188-188)shortDescription(196-204)shortDescription(196-196)src/libcmd/include/nix/cmd/flake-schemas.hh (6)
call(12-12)visit(20-25)derivation(31-31)forEachOutput(14-16)what(27-27)shortDescription(29-29)
src/nix/search.cc (1)
src/libexpr/include/nix/expr/eval-cache.hh (1)
attrPath(158-158)
src/libcmd/installable-flake.cc (5)
src/libcmd/flake-schemas.cc (7)
state(248-248)call(37-120)call(38-38)getSchema(246-277)getSchema(246-246)getOutput(211-244)getOutput(211-211)src/libexpr/eval-cache.cc (2)
toAttrPathStr(398-401)toAttrPathStr(398-398)src/libcmd/include/nix/cmd/flake-schemas.hh (3)
call(12-12)getSchema(52-52)getOutput(40-40)src/libexpr/attr-path.cc (4)
parseAttrPath(6-31)parseAttrPath(6-6)parseAttrPath(33-39)parseAttrPath(33-33)src/libcmd/installables.cc (2)
openEvalCache(429-458)openEvalCache(429-429)
src/libflake/flake.cc (3)
src/libflake/include/nix/flake/flake.hh (4)
settings(68-68)lockFlake(245-245)lockFlake(247-248)getFlake(124-125)src/nix/flake.cc (2)
lockFlake(58-61)lockFlake(58-58)src/libflake/flake-primops.cc (2)
getFlake(9-56)getFlake(9-9)
src/nix/profile.cc (1)
src/libcmd/flake-schemas.cc (2)
getDefaultFlakeSchemas(295-301)getDefaultFlakeSchemas(295-295)
src/libcmd/installables.cc (3)
src/libcmd/include/nix/cmd/command.hh (3)
completeFlakeRefWithFragment(390-395)completions(166-166)prefix(315-315)src/libcmd/command.cc (2)
getEvalState(125-145)getEvalState(125-125)src/libcmd/flake-schemas.cc (2)
getDefaultFlakeSchemas(295-301)getDefaultFlakeSchemas(295-295)
src/libcmd/include/nix/cmd/flake-schemas.hh (2)
src/libcmd/flake-schemas.cc (1)
state(248-248)src/libexpr/include/nix/expr/eval-cache.hh (1)
attrPath(158-158)
src/libcmd/flake-schemas.cc (3)
src/libflake/flake.cc (8)
readFlake(207-311)readFlake(207-213)lockFlake(379-894)lockFlake(379-380)lockFlake(896-904)lockFlake(897-897)callFlake(927-970)callFlake(927-927)src/libcmd/include/nix/cmd/flake-schemas.hh (1)
what(27-27)src/libcmd/installables.cc (2)
completeFlakeRef(387-408)completeFlakeRef(387-387)
src/libexpr/eval-cache.cc (2)
src/libexpr/include/nix/expr/eval-cache.hh (5)
attrPath(158-158)name(136-136)name(138-138)name(142-142)toAttrPathStr(18-18)src/libutil/include/nix/util/strings.hh (4)
dropEmptyInitThenConcatStringsSep(87-87)dropEmptyInitThenConcatStringsSep(89-89)dropEmptyInitThenConcatStringsSep(90-90)dropEmptyInitThenConcatStringsSep(91-91)
🪛 Clang (14.0.6)
src/libcmd/include/nix/cmd/flake-schemas.hh
[error] 3-3: 'nix/expr/eval-cache.hh' file not found
(clang-diagnostic-error)
src/libcmd/flake-schemas.cc
[error] 1-1: 'nix/cmd/flake-schemas.hh' file not found
(clang-diagnostic-error)
🪛 markdownlint-cli2 (0.18.1)
doc/manual/source/protocols/flake-schemas.md
7-7: Bare URL used
(MD034, no-bare-urls)
🔇 Additional comments (36)
src/libcmd/include/nix/cmd/meson.build (1)
12-12: LGTM: export of flake-schemas.hh.Header exposure looks correct and matches usage elsewhere.
Ensure this header has no private includes to avoid leaking build-only headers into the public surface.
src/libcmd/include/nix/cmd/installable-flake.hh (2)
58-59: LGTM: fragment-based identity and lazy eval cache.

The `what()` change and cached `openEvalCache()` accessor look good. Consider a brief note on the thread-safety of `_evalCache` if used across threads.

Also applies to: 75-80
39-45: All construction sites properly updated; lockFlags lifetime is safe.

Verification confirms the changes are complete:
- Single construction site (src/nix/flake.cc:513) includes all required fields: `fragment(templateName)`, `roles({"nix-template"})`, `extendedOutputsSpec`, `lockFlags`, and `defaultFlakeSchemas`.
- lockFlags lifetime is safe: it's stored by const reference from a `CmdFlakeInitCommon` member variable that outlives the temporary `InstallableFlake` object.
- roles propagation is correct through schema filtering and compatibility checks.
- getActualAttrPaths is only mentioned in a documentation comment at the header; no actual method exists to update.
src/libcmd/meson.build (1)
71-71: All verification checks passed. Build wiring is properly configured.

The verification confirms:
- Generator subdir is visible at line 70: `subdir('nix-meson-build-support/generate-header')`
- Both input files exist: `src/libcmd/call-flake-schemas.nix` and `src/libcmd/builtin-flake-schemas.nix`
- gen_header.process calls are correctly wired at lines 92-93
- Output header file is properly generated and included in multiple source files
- No presence check issues identified
src/libcmd/flake-schemas.cc (1)
286-293: Nice CLI integration for default flake schemas.

Flag wiring and completer look good.
src/libcmd/package.nix (1)
52-53: The Meson infrastructure is already properly configured. The `gen_header.process()` calls at lines 92–93 of `src/libcmd/meson.build` generate the `.gen.hh` headers from both `.nix` sources, and the fileset in `src/libcmd/package.nix` already includes:
- The `.nix` sources (lines 52–53, your PR addition)
- The `include/nix/cmd/meson.build` subdirectory (line 49)
- All `.hh` headers via the file filter (line 51)

No further action needed; the changes are correct.
doc/manual/source/SUMMARY.md.in (1)
119-119: LGTM!

The addition of the Flake Schemas entry to the Table of Contents is well-placed within the "Formats and Protocols" section and follows the existing formatting conventions.
src/nix/formatter.cc (1)
37-40: LGTM!

The migration from `getDefaultFlakeAttrPaths()` to `getRoles()` is clean and aligns with the role-based flake schema approach introduced in this PR. The role name `"nix-fmt"` is descriptive and appropriate for the formatter component.

src/nix/repl.cc (1)
48-51: LGTM!

The migration to `getRoles()` returning `{"nix-repl"}` is appropriate and consistent with the role-based approach. This is a clear improvement over the previous empty string return value.

tests/functional/flakes/common.sh (2)
98-104: LGTM!

The refactoring of `writeIfdFlake` to wrap the IFD import in a proper derivation structure aligns well with the schema-driven approach. The derivation name "top" matches the test expectations in `show.sh`.

5-5: No issues found—change maintains IFD test coverage through controlled, opt-in approach.

The variable was intentionally moved from a global export to explicit per-test invocation. Evidence shows:
- Still actively used in tests that require it (non-flake-inputs.sh, flake-in-submodule.sh, mercurial.sh) by explicitly setting `_NIX_TEST_BARF_ON_UNCACHEABLE=''`
- Dedicated IFD test coverage exists via `allow-import-from-derivation` and `trace-import-from-derivation` options (trace-ifd.sh, eval-cache.sh)
- This refactoring provides more granular control than the previous global setting
src/nix/flake-check.md (1)
21-31: LGTM!

The updated description clearly explains the new schema-driven validation approach for `nix flake check`. The documentation accurately describes:
- Schema-based content extraction from flake outputs
- Evaluation of `evalChecks` attributes with error reporting
- Conditional building of derivations based on `isFlakeCheck`

This aligns well with the new flake schemas documentation and provides users with a clear understanding of the command's behavior.
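As a rough illustration of what the command consumes, here is a sketch of a single inventory leaf. The `isValidApp` name mirrors the evaluation check referenced in check.sh later in this review; the exact node shape is defined by the schemas themselves, not by this sketch.

```nix
# Hypothetical inventory leaf for one app, as walked by `nix flake check`:
app: {
  what = "app";
  shortDescription = "an app runnable via `nix run`";
  # Each evalChecks.<name> is forced during `nix flake check`; a false
  # result is reported as "Evaluation check ... failed".
  evalChecks.isValidApp =
    app ? type && app.type == "app" && app ? program;
  # Nodes carrying a derivation are only built when isFlakeCheck is true.
  isFlakeCheck = false;
}
```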
tests/functional/flakes/show.sh (5)
18-20: LGTM!

The updated assertions correctly validate the new schema-driven output structure. The changes properly test:
- System filtering via the `.filtered` attribute
- Derivation name extraction via `.derivationName`
- Legacy packages skip behavior via `.skipped`
29-30: LGTM!The
--all-systemstest correctly verifies that packages for all systems are shown with proper derivation names, while legacy packages remain skipped by default.
39-39: LGTM!The
--legacyflag test properly validates that legacy packages are shown with the correct nested structure and derivation name.
59-61: LGTM!The error handling test correctly validates that failed evaluations and successful derivations coexist in the output structure with appropriate
.failedand.derivationNameattributes.
70-70: LGTM!The simplified jq-based assertion is more concise and directly validates that the IFD test produces the expected derivation name "top".
tests/functional/flakes/check.sh (1)
115-115: Unable to verify pattern match in sandbox environment.The test script requires Nix tooling and environment variables (
_NIX_TEST_SOURCE_DIR) that are not available in the sandbox. The pattern"Evaluation check.*apps.system-1.default.isValidApp.*failed"appears structurally sound based on code context (it targets schema validation errors for app definitions), but actual verification requires running the full test suite locally.The pattern uses flexible matching (.*) for the middle section, which is appropriate for accommodating variation in error messages while remaining specific enough to target the intended error case.
src/nix/run.cc (1)
137-140: LGTM: role matches command semantics.Returning {"nix-run"} aligns with run’s schema-based resolution.
src/nix/develop.cc (2)
461-464: LGTM: role name is appropriate for develop.Using {"nix-develop"} for default resolution fits the feature.
644-647: Role nix-build is properly supported in the schema and exception handling is already present; review comment is incorrect.The nix-build role is explicitly defined in the flake schema for legacyPackages output, which means bashInteractive lookup via this role is schema-compliant. Additionally, the code already includes exception handling (lines 663-665) that silently falls back to the system shell if resolution fails. The suggested diff only adds a comment without implementing functional fallback logic. No changes are required.
Likely an incorrect or invalid review comment.
src/nix/bundle.cc (1)
61-64: Verify semantic alignment of getRoles() with bundle command purpose.

CmdBundle declares role `"nix-run"` (for apps), but conceptually operates on derivations for bundling. While this technically works—because the `packages` schema also carries the `nix-run` role—the role declaration is semantically misaligned. When a user runs `nix bundle <flake>` without an explicit fragment, the role `"nix-run"` filters to schemas with that role; both `packages` and `apps` match, but declaring `"nix-run"` obscures the bundling intent. Consider confirming whether this is intentional or if `getRoles()` should reflect the bundling semantic (e.g., `{"nix-bundler"}` if feasible, or a clarifying comment if the current design is deliberate).
91-95: AttrPath migration looks correctSwitch to eval_cache::AttrPath and resolving via symbols is consistent with eval-cache APIs. No issues spotted.
src/libflake/flake.cc (2)
379-405: New lockFlake overload: solid factoringAccepting a precomputed Flake avoids refetching and simplifies call chains. The by-value parameter is appropriate since it’s mutated and moved into the result.
896-904: Delegating overload is correctThe legacy signature now delegates to the new overload using getFlake; nice for backwards compatibility.
src/nix/flake.cc (3)
303-321: Good: schema-driven wiring in CmdFlakeCheckAdopting MixFlakeSchemas and deferring to flake_schemas::call is the right direction.
483-504: Template completion uses roles correctlyUsing {"nix-template"} for template completion aligns with the new roles model.
761-779: CmdFlakeShow: role/schema integration LGTMMixFlakeSchemas integration and inventory traversal look correct.
src/libcmd/installables.cc (1)
292-293: Completion pipeline correctly wired through getRoles()Verification confirms the implementation is sound: derived commands appropriately override
getRoles()with their specific role identifiers (nix-repl, nix-develop, nix-run, nix-fmt, etc.), and the base implementation returns{"nix-build"}. The call at line 292 correctly passes roles to the completion system.src/libcmd/installable-flake.cc (3)
29-37: Stateful attr-path rendering looks correctUses eval_cache::toAttrPathStr(state, ...) and clear separators; no issues spotted.
48-58: Constructor API change verified: roles/defaultFlakeSchemas wiring confirmedVerification complete. Found one call site at src/nix/flake.cc:513-521, which correctly passes all 8 parameters to the InstallableFlake constructor:
rolesis non-empty:{"nix-template"}defaultFlakeSchemasis properly passed as an empty optional{}All other parameters (cmd, state, flakeRef, fragment, extendedOutputsSpec, lockFlags) are correctly wired. No outdated constructor calls remain.
258-264: Verify cache consolidation before mergingThe concern is valid. The codebase has multiple cache implementations:
- The method under review creates a schema-aware cache (using
flake_schemas::call()withfingerprint2)- A free function at
src/libcmd/installables.cc:429(marked// FIXME: remove) creates a non-schema cache and is still called at line 327- Separate schema cache creation in
src/libcmd/flake-schemas.cc:61(with// FIXME: merge with openEvalCache())These use different fingerprints and may cause inconsistent
cleanupAttrPathbehavior. The codebase already flags this as technical debt. Confirm that this PR doesn't introduce new inconsistencies or is part of a broader consolidation effort.src/libexpr/eval-cache.cc (3)
122-137: AttrPath-based caching looks consistentDB writes/reads now store lists as AttrPath; getAttrs returns sorted symbols; no issues spotted.
Also applies to: 257-262, 687-717
371-385: The concern in the review comment is unfounded—cleanupAttrPath already has a safe default.The member declaration in
src/libexpr/include/nix/expr/eval-cache.hh(line 45) contains an in-class initializer that provides an identity lambda as the default:std::function<AttrPath(AttrPath &&)> cleanupAttrPath = [](AttrPath && attrPath) { return std::move(attrPath); };This means all
EvalCacheinstances—whether created ininstallables.ccorflake-schemas.cc—automatically receive this safe default. The empty constructor ineval-cache.cc(lines 298–304) does not override this initialization. The call toroot->cleanupAttrPath()ineval-cache.cc(lines 383, 395) will always invoke a valid function, eliminating any risk ofstd::bad_function_call.The suggested constructor modification is redundant.
Likely an incorrect or invalid review comment.
311-315: Confirm this is intentional test infrastructure and verify test environment does not set NIX_ALLOW_EVAL=0The behavior change is real and intentional. New code throws when
NIX_ALLOW_EVAL=0, guarding against incomplete cache evaluation. Since the default is"1", normal operation is unaffected. However, if any test framework or CI sets this variable to0before triggeringgetRoot()(called from CLI code in installables.cc, installable-flake.cc, flake.cc), it will fail with "not everything is cached, but evaluation is not allowed."No existing functional tests currently set or reference this variable, so verify that test setup and CI configuration don't inadvertently introduce this condition.
src/libcmd/include/nix/cmd/flake-schemas.hh (1)
3-3: Based on my verification of the codebase:libexpr properly exports include directories: The export/meson.build in libexpr declares the dependency with
include_directories : include_dirs, whereinclude_dirspoints tosrc/libexpr/include/.libcmd correctly declares nix-expr as a public dependency: In libcmd's meson.build,
dependency('nix-expr')is listed indeps_public_maybe_subproject, which properly includes nix-expr's include paths during compilation.The header exists: eval-cache.hh is confirmed in src/libexpr/include/nix/expr/meson.build's headers list.
The Meson build system automatically propagates include directories from declared dependencies to consumers, so the #include "nix/expr/eval-cache.hh" in flake-schemas.hh should resolve correctly during the build of libcmd. The dependency chain is properly configured in the Meson files.
```nix
  allSchemas = (flake.outputs.schemas or defaultSchemasFlake.schemas) // schemaOverrides;

  # …

  schemaOverrides = { }; # FIXME
```
Fix dynamic attribute membership checks (syntax/semantics).
attrs ? ${name} is invalid for dynamic keys; use builtins.hasAttr. Also guard defaultSchemasFlake.schemas with a fallback. Without this, evaluation will fail.
- allSchemas = (flake.outputs.schemas or defaultSchemasFlake.schemas) // schemaOverrides;
+ allSchemas = (flake.outputs.schemas or (defaultSchemasFlake.schemas or { })) // schemaOverrides;
@@
- if allSchemas ? ${outputName} then
+ if builtins.hasAttr outputName allSchemas then
@@
- if schemas ? ${outputName} && schemas.${outputName}.version == 1 then
+ if builtins.hasAttr outputName schemas && schemas.${outputName}.version == 1 then

Optional simplification (clearer intent, fewer passes):
- schemas = builtins.listToAttrs (
- builtins.concatLists (
- mapAttrsToList (
- outputName: output:
- if builtins.hasAttr outputName allSchemas then
- [
- {
- name = outputName;
- value = allSchemas.${outputName};
- }
- ]
- else
- [ ]
- ) flake.outputs
- )
- );
+ schemas = builtins.mapAttrs (n: _: allSchemas.${n})
+ (builtins.filterAttrs (n: _: builtins.hasAttr n allSchemas) flake.outputs);

Also applies to: 22-37, 41-53
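Note that `filterAttrs` is a nixpkgs `lib` helper rather than a builtin; an equivalent one-liner using only builtins (an editorial sketch, not part of the original suggestion) would be:

```nix
# Keeps the entries of allSchemas whose names also occur in flake.outputs,
# i.e. the same membership filtering as above in a single pass.
schemas = builtins.intersectAttrs flake.outputs allSchemas;
```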
🤖 Prompt for AI Agents
In src/libcmd/call-flake-schemas.nix around lines 18-21 (and similarly for 22-37
and 41-53), dynamic attribute membership checks using the attrs ? ${name}
pattern are invalid and will break evaluation; replace those with
builtins.hasAttr name attrs, and guard accesses to defaultSchemasFlake.schemas
with a fallback (e.g., treat defaultSchemasFlake as optional and use an empty
set or defaultSchemas when it’s missing). Also simplify the logic by resolving
the base schemas via a conditional/fallback once (e.g., pick
flake.outputs.schemas if present else defaultSchemasFlake.schemas or {}), then
merge in schemaOverrides, so you avoid repeated membership checks and ensure
evaluation never fails on missing attributes.
```cpp
#include "nix/cmd/flake-schemas.hh"
#include "nix/expr/eval-settings.hh"
#include "nix/fetchers/fetch-to-store.hh"
#include "nix/util/memory-source-accessor.hh"
```
Include required standard headers.
This file uses std::span and std::find; add <span> and <algorithm> to avoid relying on transitive includes.
```diff
 #include "nix/cmd/flake-schemas.hh"
 #include "nix/expr/eval-settings.hh"
 #include "nix/fetchers/fetch-to-store.hh"
 #include "nix/util/memory-source-accessor.hh"
+#include <span>
+#include <algorithm>
```
🧰 Tools
🪛 Clang (14.0.6)
[error] 1-1: 'nix/cmd/flake-schemas.hh' file not found
(clang-diagnostic-error)
🤖 Prompt for AI Agents
In src/libcmd/flake-schemas.cc lines 1 to 5, the file uses std::span and
std::find but does not include the corresponding standard headers; add #include
<span> and #include <algorithm> to the top of the file (adjacent to the existing
includes) so the code does not depend on transitive includes.
```cpp
if (system) {
    if (auto forSystems = node->maybeGetAttr("forSystems")) {
        auto systems = forSystems->getListOfStrings();
        if (std::find(systems.begin(), systems.end(), system) == systems.end()) {
            visitFiltered(node, systems);
            return;
        }
    }
```
Compile fix: dereference system when searching.
std::find(..., system) passes std::optional<std::string> instead of std::string. Use *system.
```diff
-            if (std::find(systems.begin(), systems.end(), system) == systems.end()) {
+            if (std::find(systems.begin(), systems.end(), *system) == systems.end()) {
```
+ if (std::find(systems.begin(), systems.end(), *system) == systems.end()) {📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In src/libcmd/flake-schemas.cc around lines 156–163, the call to std::find is
passing the std::optional<std::string> named system instead of the contained
string; inside the surrounding if (system) block dereference the optional (use
*system) when calling std::find so the comparison uses std::string, e.g. replace
system with *system in the std::find call.
```cpp
#pragma once

#include "nix/expr/eval-cache.hh"
#include "nix/flake/flake.hh"
#include "nix/cmd/command.hh"
```
🛠️ Refactor suggestion | 🟠 Major
Public header hygiene: add required includes and remove using namespace
- Avoid `using namespace` in headers; it pollutes consumers.
- Include all standard headers used here: `<functional>`, `<map>`, `<optional>`, `<string>`, `<vector>`.
- Ensure StringSet definition is available (typically nix/util/types.hh).
#pragma once
-#include "nix/expr/eval-cache.hh"
-#include "nix/flake/flake.hh"
-#include "nix/cmd/command.hh"
+#include <functional>
+#include <map>
+#include <optional>
+#include <string>
+#include <vector>
+
+#include "nix/util/types.hh"
+#include "nix/expr/eval-cache.hh"
+#include "nix/flake/flake.hh"
+#include "nix/cmd/command.hh"
-namespace nix::flake_schemas {
-
-using namespace eval_cache;
+namespace nix::flake_schemas {If desired, add explicit aliases instead of a global using:
- ref<eval_cache::EvalCache>
+ ref<eval_cache::EvalCache>
call(EvalState & state, std::shared_ptr<flake::LockedFlake> lockedFlake, std::optional<FlakeRef> defaultSchemasFlake);(no change needed beyond removing using namespace since fully-qualified names are already used)
Also applies to: 7-13, 33-53
🧰 Tools
🪛 Clang (14.0.6)
[error] 3-3: 'nix/expr/eval-cache.hh' file not found
(clang-diagnostic-error)
🤖 Prompt for AI Agents
In src/libcmd/include/nix/cmd/flake-schemas.hh (lines 1-6, and also apply fixes
around 7-13 and 33-53), remove any "using namespace" directives from the header,
and add explicit #include directives for the standard headers referenced in the
header: <functional>, <map>, <optional>, <string>, and <vector>; also include
the header that defines StringSet (e.g., nix/util/types.hh). Ensure all types
are referenced with fully-qualified names or explicit aliases in the header
rather than introducing a global using directive.
```cpp
    const StringSet & roles,
    std::string_view prefix)
{
```
Fragment completion is currently stubbed out; re-enable completions and use roles
The body under this function leaves the actual attribute completion disabled (#if 0) and ignores the roles argument. This breaks nix <cmd> <flake>#<fragment> completions.
Apply a minimal reimplementation that:
- Uses the eval cache root to findAlongAttrPath for the typed fragment
- Completes the last token
- Consumes roles later (or mark unused to avoid warnings)
Proposed patch (keeps schema-neutral behavior and fixes completions today):
void completeFlakeRefWithFragment(
AddCompletions & completions,
ref<EvalState> evalState,
flake::LockFlags lockFlags,
- const StringSet & roles,
+ const StringSet & roles,
std::string_view prefix)
{
/* Look for flake output attributes that match the
prefix. */
try {
auto hash = prefix.find('#');
if (hash == std::string::npos) {
completeFlakeRef(completions, evalState->store, prefix);
} else {
completions.setType(AddCompletions::Type::Attrs);
auto fragment = prefix.substr(hash + 1);
std::string prefixRoot = "";
if (fragment.starts_with(".")) {
fragment = fragment.substr(1);
prefixRoot = ".";
}
auto flakeRefS = std::string(prefix.substr(0, hash));
// TODO: ideally this would use the command base directory instead of assuming ".".
auto flakeRef =
parseFlakeRef(fetchSettings, expandTilde(flakeRefS), std::filesystem::current_path().string());
- auto evalCache = openEvalCache(
- *evalState,
- std::make_shared<flake::LockedFlake>(lockFlake(flakeSettings, *evalState, flakeRef, lockFlags)));
+ auto evalCache = openEvalCache(
+ *evalState,
+ std::make_shared<flake::LockedFlake>(lockFlake(flakeSettings, *evalState, flakeRef, lockFlags)));
auto root = evalCache->getRoot();
-
-#if 0
- if (prefixRoot == ".") {
- attrPathPrefixes.clear();
- }
- /* Complete 'fragment' relative to all the
- attrpath prefixes as well as the root of the
- flake. */
- attrPathPrefixes.push_back("");
-
- for (auto & attrPathPrefixS : attrPathPrefixes) {
- auto attrPathPrefix = parseAttrPath(*evalState, attrPathPrefixS);
- auto attrPathS = attrPathPrefixS + std::string(fragment);
- auto attrPath = parseAttrPath(*evalState, attrPathS);
-
- std::string lastAttr;
- if (!attrPath.empty() && !hasSuffix(attrPathS, ".")) {
- lastAttr = evalState->symbols[attrPath.back()];
- attrPath.pop_back();
- }
-
- auto attr = root->findAlongAttrPath(attrPath);
- if (!attr)
- continue;
-
- for (auto & attr2 : (*attr)->getAttrs()) {
- if (hasPrefix(evalState->symbols[attr2], lastAttr)) {
- auto attrPath2 = (*attr)->getAttrPath(attr2);
- /* Strip the attrpath prefix. */
- attrPath2.erase(attrPath2.begin(), attrPath2.begin() + attrPathPrefix.size());
- // FIXME: handle names with dots
- completions.add(
- flakeRefS + "#" + prefixRoot
- + concatStringsSep(".", evalState->symbols.resolve(attrPath2)));
- }
- }
- }
-
- /* And add an empty completion for the default
- attrpaths. */
- if (fragment.empty()) {
- for (auto & attrPath : defaultFlakeAttrPaths) {
- auto attr = root->findAlongAttrPath(parseAttrPath(*evalState, attrPath));
- if (!attr)
- continue;
- completions.add(flakeRefS + "#" + prefixRoot);
- }
- }
-#endif
+ // Temporary: roles currently unused in completion logic
+ (void)roles;
+
+ // Minimal reimplementation without attrPathPrefixes:
+ const std::string fragS(fragment);
+ auto attrPath = parseAttrPath(*evalState, fragS);
+ std::string lastToken;
+ if (!fragS.empty() && !hasSuffix(fragS, ".")) {
+ lastToken = evalState->symbols[attrPath.back()];
+ attrPath.pop_back();
+ }
+ auto cur = root->findAlongAttrPath(attrPath);
+ if (!cur) return;
+ for (auto & name : (*cur)->getAttrs()) {
+ auto s = std::string(evalState->symbols[name]);
+ if (!lastToken.empty() && !hasPrefix(s, lastToken)) continue;
+ auto p2 = (*cur)->getAttrPath(name);
+ auto rel = concatStringsSep(".", evalState->symbols.resolve(p2).substr(attrPath.size()));
+ // TODO: handle names with dots
+ completions.add(flakeRefS + "#" + prefixRoot + rel);
+ }
}
} catch (Error & e) {
warn(e.msg());
}
}If you want role-aware completions now, I can wire this to flake_schemas::call and enumerate inventory paths filtered by roles.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/libcmd/installables.cc around lines 303 to 305, the function body
currently leaves attribute completion disabled and ignores the roles parameter;
re-enable a minimal completion implementation that uses the eval cache root to
call findAlongAttrPath for the typed fragment, generate completions for the last
token only, and return them; ensure the roles parameter is either consumed later
or explicitly marked unused (e.g. (void)roles) to avoid warnings; keep the
implementation schema-neutral and minimal so completions work for `nix <cmd>
<flake>#<fragment>` today.
| {"nix-bundler"}, | ||
| lockFlags, | ||
| std::nullopt}; | ||
|
|
🧩 Analysis chain
Bundler InstallableFlake: verify role name and consider default schemas.
Roles={"nix-bundler"} assumes schema support; if missing, resolution will fail. Also consider honoring a user-provided default schemas flake here for consistency with other commands.
Apply if CmdBundle has MixFlakeSchemas; otherwise ignore:
- {"nix-bundler"},
- lockFlags,
- std::nullopt};
+ {"nix-bundler"},
+ lockFlags,
+ getDefaultFlakeSchemas()};Verification:
🏁 Script executed:
#!/bin/bash
# Ensure CmdBundle inherits a mixin that provides getDefaultFlakeSchemas.
rg -n --no-ignore 'struct\s+CmdBundle\s*:\s*[^\\{]*' -C2
rg -n --no-ignore 'struct\s+InstallableValueCommand\b' -C10
# Confirm "nix-bundler" appears in schema code.
rg -n --no-ignore -S 'nix-bundler'Length of output: 2062
---
🏁 Script executed:
```shell
#!/bin/bash
# Find InstallableCommand definition and its inheritance chain
rg -n --no-ignore 'struct\s+InstallableCommand\b' -A5
# Search for MixFlakeSchemas in the codebase
rg -n --no-ignore 'MixFlakeSchemas' -C2
# Check if CmdBundle or its parents use MixFlakeSchemas
rg -n --no-ignore 'class CmdBundle\|struct CmdBundle' -A20
# Find getDefaultFlakeSchemas definition
rg -n --no-ignore 'getDefaultFlakeSchemas' -C3
Length of output: 5042
---
🏁 Script executed:
```shell
#!/bin/bash
# View the full context around the code in question
sed -n '75,90p' src/nix/bundle.cc
# Verify InstallableFlake constructor signature
rg -n --no-ignore 'InstallableFlake\s*\(' -A3 src/libcmd/installables.hh
Length of output: 602
---
🏁 Script executed:
```shell
#!/bin/bash
# Search for InstallableFlake class definition
rg -n --no-ignore 'struct\s+InstallableFlake\b|class\s+InstallableFlake\b' -A15
# Find where InstallableFlake is declared
fd -name 'installables*' -type f | head -20
Length of output: 1557
---
🏁 Script executed:
```shell
#!/bin/bash
# View the full InstallableFlake constructor signature
sed -n '46,65p' src/libcmd/include/nix/cmd/installable-flake.hh
Length of output: 557
Replace std::nullopt with getDefaultFlakeSchemas() in InstallableFlake constructor call.
The review comment is correct. CmdBundle inherits from InstallableValueCommand → InstallableCommand → SourceExprCommand, which extends MixFlakeSchemas. This inheritance provides access to getDefaultFlakeSchemas(). The last parameter of the InstallableFlake constructor at line 83 should pass the default schemas instead of std::nullopt, consistent with usage patterns in installables.cc and profile.cc. The "nix-bundler" role is properly defined in the schema (builtin-flake-schemas.nix:364).
🤖 Prompt for AI Agents
In src/nix/bundle.cc around lines 81-84, the InstallableFlake constructor call
currently passes std::nullopt for the flake schemas; replace that argument with
getDefaultFlakeSchemas() so the call uses the class-provided default schemas
(accessible via the MixFlakeSchemas inheritance chain) consistent with
installables.cc and profile.cc and matching the defined "nix-bundler" role in
builtin-flake-schemas.nix.
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/libcmd/flake-schemas.cc (2)
1-6: Include required standard headers.

This file uses `std::span` (line 228) and `std::find` (line 161) but does not include the corresponding headers. Add `#include <span>` and `#include <algorithm>` to avoid relying on transitive includes.

Apply this diff:

```diff
 #include "nix/cmd/flake-schemas.hh"
 #include "nix/expr/eval-settings.hh"
 #include "nix/fetchers/fetch-to-store.hh"
 #include "nix/util/memory-source-accessor.hh"
 #include "nix/util/mounted-source-accessor.hh"
+#include <span>
+#include <algorithm>
```
158-165: Dereference optional when searching systems.

Inside the `if (system)` block, `system` is `std::optional<std::string>` and must be dereferenced when passed to `std::find`.

Apply this diff:

```diff
         if (auto forSystems = node->maybeGetAttr("forSystems")) {
             auto systems = forSystems->getListOfStrings();
-            if (std::find(systems.begin(), systems.end(), system) == systems.end()) {
+            if (std::find(systems.begin(), systems.end(), *system) == systems.end()) {
                 visitFiltered(node, systems);
                 return;
             }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
src/libcmd/flake-schemas.cc(1 hunks)src/libstore/include/nix/store/store-dir-config.hh(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/libcmd/flake-schemas.cc (3)
src/libflake/include/nix/flake/flake.hh (4)
readFlake(232-238)lockFlake(245-245)lockFlake(247-248)callFlake(250-250)src/libflake/flake.cc (8)
readFlake(207-311)readFlake(207-213)lockFlake(379-894)lockFlake(379-380)lockFlake(896-904)lockFlake(897-897)callFlake(927-970)callFlake(927-927)src/libcmd/installables.cc (2)
completeFlakeRef(387-408)completeFlakeRef(387-387)
🪛 Clang (14.0.6)
src/libcmd/flake-schemas.cc
[error] 1-1: 'nix/cmd/flake-schemas.hh' file not found
(clang-diagnostic-error)
🔇 Additional comments (8)
src/libcmd/flake-schemas.cc (8)
12-37: LGTM: Synthetic flake construction is well-structured.The function correctly constructs a virtual flake from builtin schemas, materializes it to the store, and returns a locked flake. The approach of using
MemorySourceAccessorwith a mounted virtual store path is sound.
39-122: LGTM: Eval cache wiring with fingerprint-based invalidation.The function correctly:
- Constructs a composite fingerprint from all inputs for cache invalidation
- Conditionally skips default schemas when the flake provides its own (lines 76-79)
- Transforms inventory attribute paths back to output paths via
cleanupAttrPath(lines 91-119)The lazy evaluation and cache integration are well-designed.
124-146: LGTM: Output iteration with graceful handling of unknown outputs.The function correctly iterates over inventory outputs and handles outputs that couldn't be fully evaluated (lines 135-139). Error tracing is appropriate.
168-188: LGTM: Node traversal logic with appropriate error handling.The recursive traversal correctly distinguishes between leaf and non-leaf nodes. The special case for
legacyPackages(line 176) to suppress evaluation errors is appropriate, as legacy packages often have platform-specific or broken attributes.
190-211: LGTM: Helper functions are straightforward.The attribute accessor helpers (
what,shortDescription,derivation) are correctly implemented with appropriate null handling and string trimming.
213-246: LGTM: Output resolution correctly walks the attribute path.The function appropriately:
- Uses
std::spanfor efficient path traversal without copying (line 228)- Walks through nested children to resolve the target node (lines 230-239)
- Returns
std::nulloptwhen the path doesn't exist- Constructs
OutputInfowith the resolved cursors and remaining path
248-279: LGTM: Schema collection logic is correct.The function correctly:
- Iterates inventory attributes to collect schema definitions
- Extracts roles, appendSystem flag, and defaultAttrPath from each schema
- Converts string lists to symbol-based
AttrPath(lines 270-271)- Builds and returns the
Schemasmap
285-303: LGTM: CLI integration is standard and correct.The
MixFlakeSchemasclass correctly:
- Registers the
--default-flake-schemasflag with flake ref completion (lines 287-294)- Parses the flag value into a
FlakeRefwith proper base directory resolution (line 302)
```diff
     std::string_view name,
     const SourcePath & path,
-    ContentAddressMethod method = FileIngestionMethod::NixArchive,
+    ContentAddressMethod method = ContentAddressMethod::Raw::NixArchive,
```
🧩 Analysis chain
Verify migration from FileIngestionMethod to ContentAddressMethod.
The default parameter update looks correct and aligns with the type system refactoring.
Run the following script to verify all references to FileIngestionMethod have been properly migrated:
🏁 Script executed:
#!/bin/bash
# Description: Search for remaining references to FileIngestionMethod that may need updating
# Search for FileIngestionMethod usage in the codebase
echo "=== Searching for FileIngestionMethod references ==="
rg -n --type=cpp --type=c 'FileIngestionMethod' -C 2
# Search for calls to computeStorePath that might need updating
echo -e "\n=== Searching for computeStorePath calls ==="
rg -n --type=cpp --type=c 'computeStorePath\s*\(' -C 2Length of output: 44797
🏁 Script executed:
#!/bin/bash
# Verify the exact context of ContentAddressMethod parameters with FileIngestionMethod defaults
echo "=== Checking dummy-store.cc line 87 ==="
sed -n '82,92p' src/libstore/dummy-store.cc
echo -e "\n=== Checking legacy-ssh-store.hh line 116 ==="
sed -n '111,121p' src/libstore/include/nix/store/legacy-ssh-store.hh
echo -e "\n=== Checking remote-store.hh line 89 ==="
sed -n '84,94p' src/libstore/include/nix/store/remote-store.hh
echo -e "\n=== Checking content-address.hh for ContentAddressMethod definition ==="
grep -A 5 "class ContentAddressMethod" src/libstore/include/nix/store/content-address.hh | head -20
echo -e "\n=== Check for any implicit constructor/conversion in ContentAddressMethod ==="
rg -A 10 "struct ContentAddressMethod\|class ContentAddressMethod" src/libstore/include/nix/store/content-address.hh | head -30Length of output: 1716
🏁 Script executed:
#!/bin/bash
# Find the ContentAddressMethod type definition
echo "=== Full grep for ContentAddressMethod definition ==="
rg "ContentAddressMethod\s+\{" src/libstore/include/nix/store/content-address.hh -A 20 | head -40
echo -e "\n=== Search for ContentAddressMethod in header ==="
cat src/libstore/include/nix/store/content-address.hh | grep -n "ContentAddressMethod" | head -15
echo -e "\n=== Check for implicit conversion or constructor ==="
rg "ContentAddressMethod\(" src/libstore/include/nix/store/content-address.hh
echo -e "\n=== Check the actual struct definition (not class) ==="
sed -n '20,80p' src/libstore/include/nix/store/content-address.hhLength of output: 2852
🏁 Script executed:
#!/bin/bash
# Find MAKE_WRAPPER_CONSTRUCTOR definition
echo "=== Search for MAKE_WRAPPER_CONSTRUCTOR definition ==="
rg "MAKE_WRAPPER_CONSTRUCTOR" --type=cpp --type=h -B 2 -A 5 | head -40
echo -e "\n=== Check if there's a conversion from FileIngestionMethod to ContentAddressMethod ==="
rg "FileIngestionMethod.*ContentAddressMethod\|ContentAddressMethod.*FileIngestionMethod" --type=cpp --type=h | head -10
echo -e "\n=== Verify the method signature patterns in the three files have same pattern ==="
echo "dummy-store.cc:"
sed -n '85,90p' src/libstore/dummy-store.cc | grep -A 3 "ContentAddressMethod"
echo -e "\nlegacy-ssh-store.hh:"
sed -n '115,120p' src/libstore/include/nix/store/legacy-ssh-store.hh | grep -A 3 "ContentAddressMethod"
echo -e "\nremote-store.hh:"
sed -n '88,93p' src/libstore/include/nix/store/remote-store.hh | grep -A 3 "ContentAddressMethod"Length of output: 3848
Update three addToStoreFromDump method default parameters to match the refactoring pattern.
The default parameter change in store-dir-config.hh is correct, but three related addToStoreFromDump method declarations still use the old FileIngestionMethod::NixArchive as defaults instead of the explicit ContentAddressMethod::Raw::NixArchive. While these work due to implicit conversion via MAKE_WRAPPER_CONSTRUCTOR, they should be updated for consistency:
- src/libstore/dummy-store.cc line 87
- src/libstore/include/nix/store/legacy-ssh-store.hh line 116
- src/libstore/include/nix/store/remote-store.hh line 89
Update all three to use ContentAddressMethod::Raw::NixArchive.
🤖 Prompt for AI Agents
In src/libstore/dummy-store.cc around line 87,
src/libstore/include/nix/store/legacy-ssh-store.hh around line 116, and
src/libstore/include/nix/store/remote-store.hh around line 89, update the
addToStoreFromDump method declarations so their default parameter uses
ContentAddressMethod::Raw::NixArchive instead of the old
FileIngestionMethod::NixArchive; change the default value in each declaration to
ContentAddressMethod::Raw::NixArchive to match the refactored
store-dir-config.hh and ensure consistency with the new enum wrapper.
Actionable comments posted: 0
♻️ Duplicate comments (1)
src/libcmd/installables.cc (1)
303-385: Re-enable fragment completion logic. The completion implementation is stubbed out with #if 0, breaking nix <cmd> <flake>#<fragment> completions. The roles parameter is also unused. This issue was previously flagged with a detailed fix proposal; please address the past review comment that provides a reimplementation approach.
🧹 Nitpick comments (2)
src/libcmd/installables.cc (1)
428-458: Track removal of legacy openEvalCache. The FIXME: remove comment indicates this is transitional code. Consider opening a tracking issue for its removal once all call sites migrate to the new schema-based approach. Would you like me to search the codebase for remaining call sites to assess when this can be safely removed?
src/libflake/flake.cc (1)
897-905: Consider eliminating redundant computation. The calculation of useRegistries and useRegistriesTop at lines 900-901 is duplicated in the main lockFlake implementation at lines 383-385. While this doesn't affect correctness, it introduces minor redundancy. Consider refactoring to avoid duplicate computation:

 LockedFlake lockFlake(const Settings & settings, EvalState & state, const FlakeRef & topRef, const LockFlags & lockFlags)
 {
-    auto useRegistries = lockFlags.useRegistries.value_or(settings.useRegistries);
-    auto useRegistriesTop = useRegistries ? fetchers::UseRegistries::All : fetchers::UseRegistries::No;
     return lockFlake(
-        settings, state, topRef, lockFlags, getFlake(state, topRef, useRegistriesTop, {}, lockFlags.requireLockable));
+        settings, state, topRef, lockFlags,
+        getFlake(state, topRef,
+            (lockFlags.useRegistries.value_or(settings.useRegistries) ? fetchers::UseRegistries::All : fetchers::UseRegistries::No),
+            {}, lockFlags.requireLockable));
 }

Alternatively, calculate once and pass through, though the current pattern is acceptable given the minor cost.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
doc/manual/source/SUMMARY.md.in(1 hunks)src/libcmd/installable-flake.cc(5 hunks)src/libcmd/installables.cc(7 hunks)src/libcmd/meson.build(2 hunks)src/libexpr/eval-cache.cc(8 hunks)src/libflake/flake.cc(3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- doc/manual/source/SUMMARY.md.in
🧰 Additional context used
🧬 Code graph analysis (4)
src/libcmd/installables.cc (2)
src/libcmd/include/nix/cmd/command.hh (3)
completeFlakeRefWithFragment(390-395)completions(166-166)prefix(315-315)src/libcmd/flake-schemas.cc (2)
getDefaultFlakeSchemas(297-303)getDefaultFlakeSchemas(297-297)
src/libexpr/eval-cache.cc (2)
src/libexpr/include/nix/expr/parser-state.hh (3)
attrs(93-94)attrs(95-95)attrPath(91-91)src/libexpr/include/nix/expr/eval-cache.hh (10)
attrPath(158-158)name(136-136)name(138-138)name(142-142)name(144-144)name(146-146)name(148-148)name(150-150)name(152-152)toAttrPathStr(18-18)
src/libflake/flake.cc (2)
src/libflake/include/nix/flake/flake.hh (4)
settings(68-68)lockFlake(245-245)lockFlake(247-248)getFlake(124-125)src/nix/flake.cc (2)
lockFlake(58-61)lockFlake(58-58)
src/libcmd/installable-flake.cc (3)
src/libcmd/flake-schemas.cc (7)
state(250-250)call(39-122)call(40-40)getSchema(248-279)getSchema(248-248)getOutput(213-246)getOutput(213-213)src/libexpr/eval-cache.cc (2)
toAttrPathStr(398-401)toAttrPathStr(398-398)src/libcmd/installables.cc (2)
openEvalCache(429-458)openEvalCache(429-429)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build_aarch64-darwin / build
- GitHub Check: build_x86_64-linux / build
🔇 Additional comments (21)
src/libcmd/meson.build (3)
79-79: flake-schemas.cc exists and is correctly added to sources. The file has been verified to exist at src/libcmd/flake-schemas.cc. The addition to the meson.build sources list is valid.
92-95: Both schema files exist — code is correct. Verified that call-flake-schemas.nix and builtin-flake-schemas.nix are present in src/libcmd/. The header generation pattern is correctly configured.
71-71: All referenced files and directories exist; no issues found. Verification confirms:
- ✓ generate-header subproject directory exists
- ✓ flake-schemas.cc source file exists
- ✓ call-flake-schemas.nix and builtin-flake-schemas.nix files exist
- ✓ Proper ordering: subdir() on line 71 is defined before gen_header.process() usage on lines 93-94
src/libcmd/installables.cc (3)
236-239: LGTM! Role-based default is clean. The new getRoles() method clearly replaces the old path-based defaults with a role-based approach, aligning with the schema-driven design.
292-292: LGTM! Updated to pass roles. The call correctly uses getRoles() to provide role-based context for completions.
520-532: LGTM! Constructor call updated correctly. The InstallableFlake instantiation properly passes the new role-based parameters (getRoles(), lockFlags, getDefaultFlakeSchemas()), aligning with the updated constructor signature.
src/libcmd/installable-flake.cc (5)
20-20: LGTM! Required header added. The flake-schemas header is necessary for the new schema-driven resolution logic.
29-40: LGTM! Updated to use AttrPath. The signature correctly uses eval_cache::AttrPath and leverages the new toAttrPathStr helper for path formatting.
42-61: LGTM! Constructor signature updated for role-based schema. The constructor properly accepts roles and defaultFlakeSchemas, replacing the old path-based approach. Fields are correctly initialized.
145-244: Schema-driven resolution implemented correctly, with noted hacks. The refactored getCursors properly integrates with the flake-schemas subsystem:
- Uses flake_schemas::call to get inventory and outputs
- Correctly filters schemas by roles
- Handles both absolute (.-prefixed) and schema-based fragments
- Provides suggestions on attribute lookup failure
The implementation includes two acknowledged compatibility hacks (lines 163-180 for schema precedence, lines 212-215 for nix-repl), which are reasonable for maintaining backward compatibility during the transition.
Note: The schema precedence ordering preserves historical behavior but may benefit from a more declarative approach in future schema definitions, as the FIXME suggests.
258-264: LGTM! Lazy evaluation cache initialization. The openEvalCache() method properly implements lazy initialization with memoization, calling flake_schemas::call only once per instance.
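As a rough sketch of that memoization (the member and helper names here are assumptions drawn from this review, not the exact declarations in installable-flake.cc):

    ref<eval_cache::EvalCache> InstallableFlake::openEvalCache()
    {
        // Only build the schema-driven eval cache the first time it is requested.
        if (!_evalCache)
            _evalCache = flake_schemas::call(*state, getLockedFlake(), defaultFlakeSchemas);
        return ref(_evalCache);
    }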
src/libexpr/eval-cache.cc (8)
122-137: LGTM! Updated to use AttrPath. The setAttrs signature correctly adopts the new AttrPath type, maintaining the same logic for attribute storage.
257-262: LGTM! Local variable updated to AttrPath. The type change is consistent with the broader refactor.
310-318: LGTM! Testing utility for cache completeness. The NIX_ALLOW_EVAL check is useful for verifying that evaluation caching is complete during testing. The default value of "1" ensures no behavior change in production.
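A plausible shape for that guard, assuming the variable is read with getEnv and defaults to "1"; the actual condition and error text in eval-cache.cc may differ:

    // Refuse to fall back to evaluation when the test harness disallows it.
    if (getEnv("NIX_ALLOW_EVAL").value_or("1") == "0")
        throw Error("evaluation is disabled by NIX_ALLOW_EVAL, but '%s' is not in the eval cache", getAttrPathStr());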
371-396: LGTM! Raw and cleaned path variants. The split between getAttrPathRaw() and getAttrPath() properly separates raw traversal paths from cleaned/transformed paths. This supports the schema-driven design, where inventory paths need post-processing (via cleanupAttrPath) to derive output paths.
398-411: LGTM! Extracted path formatting utility. The toAttrPathStr helper provides consistent path-to-string conversion, properly handling symbol resolution and formatting. It is correctly reused by the getAttrPathStr methods.
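Illustratively, the relationship could look like the following; the exact signature of toAttrPathStr (an EvalState plus an AttrPath) is an assumption based on this description:

    std::string AttrCursor::getAttrPathStr() const
    {
        // Resolve each Symbol through the symbol table and join with '.',
        // e.g. "packages.x86_64-linux.default".
        return toAttrPathStr(root->state, getAttrPath());
    }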
463-467: LGTM! Variant type updated to AttrPath. The cached value variant correctly uses AttrPath instead of std::vector<Symbol>.
533-545: LGTM! Signature updated to use AttrPath. The findAlongAttrPath method correctly accepts const AttrPath & instead of the old vector type. The implementation logic remains sound.
687-717: LGTM! getAttrs updated to return AttrPath. The method correctly returns AttrPath and uses it consistently for both cached and fresh attribute retrieval. The sorting logic is preserved.
src/libflake/flake.cc (2)
380-381: LGTM: Valid API extension with backward compatibility. The addition of a Flake parameter enables callers to pass precomputed flake objects, reducing redundant fetching. The original signature is preserved via the overload at lines 897-905, maintaining backward compatibility.
207-214: Verify that exposing readFlake (removing static) is intentional. Removing static from the file-scope function changes its linkage from internal (translation-unit-local) to external (visible across translation units). This makes readFlake part of the library's internal API. Confirm this visibility change is intentional and that there are no conflicts (e.g., the function should be declared in a header if other translation units will reference it).
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/nix/flake.cc (1)
833-833: Add bounds checking before accessing attrPath[0]. The code accesses node->getAttrPath()[0] without verifying that the attrPath is non-empty. While this may work in practice due to how nodes are constructed, defensive bounds checking would prevent potential out-of-bounds access. Apply this diff:

- if (node->root->state.symbols[node->getAttrPath()[0]] == "legacyPackages")
+ auto & attrPath = node->getAttrPath();
+ if (!attrPath.empty() && node->root->state.symbols[attrPath[0]] == "legacyPackages")
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
doc/manual/source/SUMMARY.md.in(1 hunks)src/nix/flake.cc(9 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/nix/flake.cc (2)
src/libcmd/flake-schemas.cc (14)
call(39-122)call(40-40)getDefaultFlakeSchemas(297-303)getDefaultFlakeSchemas(297-297)visit(148-188)visit(148-153)derivation(208-211)derivation(208-208)forEachOutput(124-146)forEachOutput(124-126)what(190-196)what(190-190)shortDescription(198-206)shortDescription(198-198)src/libcmd/include/nix/cmd/flake-schemas.hh (6)
call(12-12)visit(20-25)derivation(31-31)forEachOutput(14-16)what(27-27)shortDescription(29-29)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build_aarch64-darwin / build
- GitHub Check: build_x86_64-linux / build
🔇 Additional comments (9)
doc/manual/source/SUMMARY.md.in (1)
119-119: LGTM! Well-placed documentation entry. The Flake Schemas entry is correctly formatted, properly indented, and logically placed within the "Formats and Protocols" section.
src/nix/flake.cc (8)
21-21: LGTM! The include is necessary for the flake_schemas API used throughout this file.
303-303: LGTM! Adding MixFlakeSchemas provides the schema configuration interface needed for the refactored implementation.
375-416: LGTM - Schema-driven traversal implementation is well-structured. The visit function correctly uses flake_schemas::visit with three callbacks for leaves, non-leaves, and filtered nodes. The parallel evaluation with futures, error handling, and derivation path collection are all implemented appropriately.
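Condensed, the three-callback shape looks roughly like the sketch below (adapted from the diff context quoted later in this thread; the callback bodies are abridged):

    flake_schemas::visit(
        showAllSystems ? std::optional<std::string>() : localSystem,
        node,
        [&](const flake_schemas::Leaf & leaf) {
            // leaf output: record its check/derivation for evaluation or building
        },
        [&](std::function<void(flake_schemas::ForEachChild)> forEachChild) {
            forEachChild([&](Symbol attrName, ref<eval_cache::AttrCursor> child, bool isLast) {
                futures.spawn(2, [&visit, child]() { visit(child); }); // recurse in parallel
            });
        },
        [&](ref<eval_cache::AttrCursor> node, const std::vector<std::string> & systems) {
            // output filtered out because it does not apply to the current system
        });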
418-428: LGTM - forEachOutput integration is correct. The code properly handles both known outputs (with schema information) and unknown outputs, tracking unchecked outputs for a later warning.
481-481: LGTM! Adding MixFlakeSchemas is consistent with the schema-driven refactor and enables the template fragment handling below.
759-759: LGTM! Adding MixFlakeSchemas is consistent with the schema-driven refactor implemented in the run() method below.
802-846: LGTM - Schema-driven JSON construction is well-implemented. The visit function correctly populates the JSON structure with leaf attributes (what, shortDescription, derivationName) and hierarchical children, with proper error handling for legacyPackages.
511-520: No issues found. Constructor parameters are correctly specified. The InstallableFlake constructor signature confirms all three arguments in the call are correct:
- {"nix-template"} → StringSet roles (parameter 6)
- lockFlags → const flake::LockFlags & lockFlags (parameter 7)
- {} → std::optional<FlakeRef> defaultFlakeSchemas (parameter 8)
The empty braces {} are valid initialization of the optional parameter, representing no default schema.
Actionable comments posted: 1
♻️ Duplicate comments (4)
src/nix/flake.cc (2)
368-369: Preserve fullErrorInfowhen logging
Only forwarding e.info().msg leaves level, pos, and traces uninitialised, yielding UB and dropping vital context under --keep-going. Either pass the original ErrorInfo straight through or copy it and adjust the message so every field is defined.

- logError({.msg = e.info().msg});
+ auto info = e.info();
+ logError(info);
950-966: Guard skipped/unknown outputs in tree render
When legacyPackages is skipped (the default), this block still indexes child.value()["output"], throwing out_of_range and breaking nix flake show. Handle the skipped/unknown cases before descending into render.

- render(
-     child.value()["output"],
-     fmt(ANSI_GREEN "%s%s" ANSI_NORMAL ANSI_BOLD "%s" ANSI_NORMAL,
-         "",
-         last ? treeLast : treeConn,
-         child.key()),
-     nextPrefix);
- if (child.value().contains("unknown"))
-     logger->cout(
-         ANSI_GREEN "%s%s" ANSI_NORMAL ANSI_ITALIC "(unknown flake output)" ANSI_NORMAL,
-         nextPrefix,
-         treeLast);
+ auto & info = child.value();
+ auto header = fmt(
+     ANSI_GREEN "%s%s" ANSI_NORMAL ANSI_BOLD "%s" ANSI_NORMAL,
+     "",
+     last ? treeLast : treeConn,
+     child.key());
+
+ if (info.contains("skipped")) {
+     logger->cout("%s " ANSI_ITALIC "(skipped)" ANSI_NORMAL, header);
+     continue;
+ }
+ if (info.contains("unknown")) {
+     logger->cout(
+         "%s " ANSI_ITALIC "(unknown flake output)" ANSI_NORMAL,
+         header);
+     continue;
+ }
+
+ render(info["output"], header, nextPrefix);
1-13: Fix public header hygiene and includes
This header still pulls in std::function, std::optional, std::string, std::vector, std::map, and StringSet without including their definitions, and it exports using namespace eval_cache;. That breaks downstream builds and pollutes consumers. Please add the missing <functional>, <optional>, <string>, <vector>, <map> plus "nix/util/types.hh" (or whichever header defines StringSet), and drop the global namespace using.

 #pragma once
-#include "nix/expr/eval-cache.hh"
-#include "nix/flake/flake.hh"
-#include "nix/cmd/command.hh"
+#include <functional>
+#include <map>
+#include <optional>
+#include <string>
+#include <vector>
+
+#include "nix/util/types.hh"
+#include "nix/expr/eval-cache.hh"
+#include "nix/flake/flake.hh"
+#include "nix/cmd/command.hh"

-namespace nix::flake_schemas {
-
-using namespace eval_cache;
+namespace nix::flake_schemas {
159-163: Dereference the optional before std::find
system is an std::optional<std::string>. Passing it directly to std::find is a compile error; use *system so the comparison operates on a std::string.

- if (std::find(forSystems->begin(), forSystems->end(), system) == forSystems->end()) {
+ if (std::find(forSystems->begin(), forSystems->end(), *system) == forSystems->end()) {
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
.github/workflows/build.yml(2 hunks)src/libcmd/builtin-flake-schemas.nix(1 hunks)src/libcmd/flake-schemas.cc(1 hunks)src/libcmd/include/nix/cmd/flake-schemas.hh(1 hunks)src/nix/flake.cc(10 hunks)tests/functional/flakes/show.sh(5 hunks)tests/functional/formatter.sh(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
src/nix/flake.cc (1)
src/libcmd/flake-schemas.cc (15)
call(39-122)call(40-40)state(254-254)getDefaultFlakeSchemas(301-307)getDefaultFlakeSchemas(301-301)visit(148-187)visit(148-153)forEachOutput(124-146)forEachOutput(124-126)what(197-203)what(197-197)shortDescription(205-210)shortDescription(205-205)forSystems(189-195)forSystems(189-189)
src/libcmd/flake-schemas.cc (3)
src/libflake/flake.cc (8)
readFlake(207-312)readFlake(207-213)lockFlake(380-895)lockFlake(380-381)lockFlake(897-905)lockFlake(898-898)callFlake(928-971)callFlake(928-928)src/libcmd/include/nix/cmd/flake-schemas.hh (1)
Node(22-25)src/libcmd/installables.cc (2)
completeFlakeRef(387-408)completeFlakeRef(387-387)
src/libcmd/include/nix/cmd/flake-schemas.hh (2)
src/libcmd/flake-schemas.cc (1)
state(254-254)src/libexpr/include/nix/expr/eval-cache.hh (1)
attrPath(158-158)
🪛 Clang (14.0.6)
src/libcmd/flake-schemas.cc
[error] 1-1: 'nix/cmd/flake-schemas.hh' file not found
(clang-diagnostic-error)
src/libcmd/include/nix/cmd/flake-schemas.hh
[error] 3-3: 'nix/expr/eval-cache.hh' file not found
(clang-diagnostic-error)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build_aarch64-darwin / build
if (output)
    futures.spawn(1, [&visit, output(ref(output))]() { visit(output); });
else
Fix lambda capture for spawned visits
output(ref(output)) is invalid C++ and currently fails to compile. Capture a properly constructed ref<AttrCursor> so the spawned task can call visit with the expected type.
- if (output)
- futures.spawn(1, [&visit, output(ref(output))]() { visit(output); });
+ if (output) {
+ auto outputRef = ref(output);
+ futures.spawn(1, [&visit, outputRef]() { visit(outputRef); });
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (output) {
    auto outputRef = ref(output);
    futures.spawn(1, [&visit, outputRef]() { visit(outputRef); });
}
else
🤖 Prompt for AI Agents
In src/nix/flake.cc around lines 424-426, the lambda capture uses invalid syntax
`output(ref(output))`; fix by first constructing a proper ref<AttrCursor> local
(e.g. `auto out = ref(output);`) and then capture that local by value in the
spawned task (e.g. ` [&visit, out]() { visit(out); }`) so the spawned lambda
receives a correctly typed ref<AttrCursor>.
Actionable comments posted: 0
♻️ Duplicate comments (3)
src/nix/flake.cc (3)
368-368: [Duplicate] Fix uninitialized ErrorInfo field. This issue was flagged in a previous review. The aggregate initialization
{.msg = e.info().msg} leaves the level field uninitialized (Verbosity has no default), causing undefined behavior. Additionally, error position and traces are lost. Apply the fix from the previous review:
- logError(e.info()); + logError({.level = lvlError, .msg = e.info().msg, .pos = e.info().pos, .traces = e.info().traces});Or simply preserve the original error info:
- logError({.msg = e.info().msg}); + logError(e.info());
424-426: [Duplicate] Fix invalid lambda capture syntax. This issue was flagged in a previous review. The syntax
output(ref(output)) in the lambda capture list is invalid C++ and will not compile. Apply the fix from the previous review:
  if (output)
-     futures.spawn(1, [&visit, output(ref(output))]() { visit(output); });
+     futures.spawn(1, [&visit, output]() { visit(ref(output)); });
951-966: [Duplicate] Handle skipped outputs before accessing the "output" key. This is the same issue flagged in a previous review, now at a different location. At line 955,
child.value()["output"] is accessed unconditionally, but skipped outputs (set at lines 894-895) only have a "skipped" key, not an "output" key. This will create a null JSON value and cause incorrect rendering. Apply this diff to handle skipped outputs:
 for (const auto & [i, child] : enumerate(inv.items())) {
     bool last = i + 1 == inv.size();
     auto nextPrefix = last ? treeNull : treeLine;
+
+    // Handle skipped and unknown outputs
+    if (child.value().contains("skipped")) {
+        logger->cout(
+            ANSI_GREEN "%s%s" ANSI_NORMAL ANSI_BOLD "%s" ANSI_NORMAL " " ANSI_ITALIC "(skipped)" ANSI_NORMAL,
+            "",
+            last ? treeLast : treeConn,
+            child.key());
+        continue;
+    }
+
+    if (child.value().contains("unknown")) {
+        logger->cout(
+            ANSI_GREEN "%s%s" ANSI_NORMAL ANSI_BOLD "%s" ANSI_NORMAL,
+            "",
+            last ? treeLast : treeConn,
+            child.key());
+        logger->cout(
+            ANSI_GREEN "%s%s" ANSI_NORMAL ANSI_ITALIC "(unknown flake output)" ANSI_NORMAL,
+            nextPrefix,
+            treeLast);
+        continue;
+    }
+
     render(
         child.value()["output"],
         fmt(ANSI_GREEN "%s%s" ANSI_NORMAL ANSI_BOLD "%s" ANSI_NORMAL,
             "",
             last ? treeLast : treeConn,
             child.key()),
         nextPrefix);
-    if (child.value().contains("unknown"))
-        logger->cout(
-            ANSI_GREEN "%s%s" ANSI_NORMAL ANSI_ITALIC "(unknown flake output)" ANSI_NORMAL,
-            nextPrefix,
-            treeLast);
 }
🧹 Nitpick comments (1)
src/nix/flake.cc (1)
828-851: Cache derivation path to avoid redundant calls. When both
showDrvPaths and showOutputPaths are true, drv->forceDerivation() is called twice (lines 834 and 840), which is inefficient. Apply this diff to cache the derivation path:
 if (auto drv = leaf.derivation()) {
     auto drvObj = nlohmann::json::object();

     drvObj.emplace("name", drv->getAttr(state->s.name)->getString());

+    std::optional<StorePath> drvPath;
+
     if (showDrvPaths) {
-        auto drvPath = drv->forceDerivation();
-        drvObj.emplace("path", store->printStorePath(drvPath));
+        drvPath = drv->forceDerivation();
+        drvObj.emplace("path", store->printStorePath(*drvPath));
     }

     if (showOutputPaths) {
         auto outputs = nlohmann::json::object();
-        auto drvPath = drv->forceDerivation();
-        auto drv = getEvalStore()->derivationFromPath(drvPath);
+        if (!drvPath)
+            drvPath = drv->forceDerivation();
+        auto drv = getEvalStore()->derivationFromPath(*drvPath);
         for (auto & i : drv.outputsAndOptPaths(*store)) {
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
doc/manual/source/SUMMARY.md.in(1 hunks)src/nix/flake.cc(10 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- doc/manual/source/SUMMARY.md.in
🧰 Additional context used
🧬 Code graph analysis (1)
src/nix/flake.cc (2)
src/libcmd/include/nix/cmd/flake-schemas.hh (3)
call(12-12)visit(47-52)forEachOutput(14-16)src/libcmd/flake-schemas.cc (15)
call(39-122)call(40-40)state(254-254)getDefaultFlakeSchemas(301-307)getDefaultFlakeSchemas(301-301)visit(148-187)visit(148-153)forEachOutput(124-146)forEachOutput(124-126)what(197-203)what(197-197)shortDescription(205-210)shortDescription(205-205)forSystems(189-195)forSystems(189-189)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build_aarch64-darwin / build
- GitHub Check: build_x86_64-linux / build
This denotes the result of a build that didn't succeed or fail, but was cancelled because some other goal failed and --keep-going was not enabled.
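A rough sketch of what that addition amounts to, per the build-result comments further down (this is not the literal declaration from build-result.hh or build-result.cc):

    enum Status : uint8_t {
        /* existing values 0..15 keep their positions for wire-protocol compatibility */
        Cancelled = 16,
    };

    static std::string_view statusToString(Status status)
    {
        switch (status) {
        case Cancelled:
            return "Cancelled";
        /* ... other cases unchanged ... */
        }
        unreachable(); // assumption: Nix's unreachable() helper marks the exhaustive switch
    }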
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/nix/flake.cc (1)
320-343: Perfect. I have verified all aspects of the review comment. Let me compile my findings:Verification Summary
I have thoroughly verified the review comment about the conflicting semantics of
--no-buildand--build-allflags inCmdFlakeCheck. Here are my findings:Confirmed Issues:
Code Logic Verified (lines 406-419): When both flags are used, the condition
if (buildAll || leaf.isFlakeCheck())is TRUE, causing derivations to be added todrvPaths_.Build Prevention Confirmed (line 463): The critical check
if (build && !drvPaths->empty())prevents actual building whenbuild=false, but no user feedback is given about this.Misleading Output Confirmed (lines 421-426): When
--no-build --build-allis used:
- Derivations are added to the queue
- Building is skipped (line 463 condition fails)
buildSkippedremainsfalse(never set to true, since theelsebranch at line 417 is not taken)- User sees "✅ attrPath" with no "(build skipped)" indication
- This creates a false impression that everything succeeded normally
No Flag Conflict Validation: The codebase does not validate or reject conflicting flags.
User Impact: A user running
nix flake check --no-build --build-allgets no indication that despite requesting--build-all, the builds are actually being disabled. The output is misleading.
Clarify semantics of
--no-buildcombined with--build-allWhen a user runs
nix flake check --no-build --build-all, derivations are queued for building but silently skipped due tobuild == false, yet the output displays "✅ attrPath" without indicating builds were skipped. This combination creates misleading feedback.Consider either:
- Rejecting the conflicting combination with a clear error, or
- Ensuring output always includes "(build skipped)" when
build == false, regardless ofbuildAllvalue.This preserves evaluation semantics while making CLI behavior less surprising.
Also applies to: 406-421
♻️ Duplicate comments (1)
src/nix/flake.cc (1)
450-460: Fix invalid lambda capture for spawned flake output visitsThe capture list
[&visit, output(ref(output))]is invalid C++ and won’t compile, and this exact issue was already noted in an earlier review.You want to construct a
ref<AttrCursor>once and capture that by value:flake_schemas::forEachOutput( inventory, [&](Symbol outputName, std::shared_ptr<eval_cache::AttrCursor> output, const std::string & doc, bool isLast) { - if (output) - futures.spawn(1, [&visit, output(ref(output))]() { visit(output); }); - else - uncheckedOutputs.lock()->insert(std::string(state->symbols[outputName])); + if (output) { + auto outputRef = ref(output); + futures.spawn(1, [&visit, outputRef]() { visit(outputRef); }); + } else { + uncheckedOutputs.lock()->insert(std::string(state->symbols[outputName])); + } });This both fixes the syntax and ensures the spawned task holds a proper owning reference to the cursor.
🧹 Nitpick comments (1)
src/nix/flake.cc (1)
304-318: Perfect. I now have all the information needed to rewrite the review comment. The verification shows:
- Web search result:
throw;outside an exception context invokesstd::terminate—not undefined behavior, but program termination.- Codebase verification:
logEvalError()is called only once (line 431) and it's safely inside acatch (Error & e)block.- Current usage is safe: No actual misuse exists.
The review comment has one technical inaccuracy: it claims "undefined behavior / termination" when the C++ standard specifically defines this as
std::terminate(), which is well-defined behavior, not UB.
Technical correction:
throw;outside exception context callsstd::terminate(), not undefined behavior
logEvalError()relies on a nakedthrow;to rethrow the currently handled exception. Per C++ standard, callingthrow;when no exception is active invokesstd::terminate()— this is well-defined termination, not undefined behavior.Right now the function is only called inside
catch (Error &)blocks (line 431 inCmdFlakeCheck), so it's safe today. However, the helper's signature doesn't encode that precondition, making accidental misuse possible.Consider:
- Passing the caught exception explicitly and using
dynamic_cast/std::exception_ptrinstead ofthrow;, or- Adding an explicit comment documenting that this must only be called from within a
catchhandler.This reduces the chance of future misuse while preserving the "force
CachedEvalErrorthen log" behavior.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
src/libstore/build-result.cc(4 hunks)src/libstore/include/nix/store/build-result.hh(3 hunks)src/libstore/include/nix/store/build/goal.hh(1 hunks)src/nix/flake.cc(10 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/libstore/include/nix/store/build-result.hh (1)
src/libstore/build-result.cc (4)
statusToString(16-30)statusToString(16-16)statusToString(32-64)statusToString(32-32)
src/libstore/build-result.cc (1)
src/libstore/include/nix/store/build-result.hh (4)
status(34-34)status(45-49)status(45-45)status(83-83)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build_aarch64-darwin / build
- GitHub Check: build_x86_64-linux / build
🔇 Additional comments (7)
src/libstore/include/nix/store/build-result.hh (3)
34-34: LGTM! Good code organization.Converting
statusToStringto static member functions improves code organization by keeping the functionality with the types rather than using free functions.Also applies to: 83-83
80-80: LGTM! Correctly placed for protocol compatibility.The
Cancelledstatus is correctly added at the end of the enum with an explicit value (16), following the protocol compatibility requirement stated in the comments above.
104-107: LGTM! Improved error reporting.Using
statusToString(status)as a fallback whenerrorMsgis empty ensures thatrethrow()always provides a meaningful error message rather than throwing with an empty string.src/libstore/build-result.cc (2)
16-64: LGTM! Complete and correct implementations.Both
statusToStringimplementations:
- Correctly handle all enum values including the new
Cancelledstatus- Use exhaustive switch statements with
unreachable()defaults for compile-time coverage checking- Return consistent string representations
66-85: LGTM! Consistent with the refactoring.The
to_jsonfunction correctly uses the fully qualified static member functions for status-to-string conversion, maintaining consistent JSON serialization.src/libstore/include/nix/store/build/goal.hh (1)
112-112: Based on my thorough verification of the codebase, I can now provide the rewritten review comment:The initialization is semantically correct and follows proper patterns.
The
buildResultfield initialization toCancelledpairs withexitCode = ecBusy. This is defensive initialization: goals only have theirbuildResultexplicitly set viadoneSuccess()ordoneFailure()methods upon completion, and all code readsbuildResultonly afterexitCodehas changed fromecBusy. TheCancelleddefault is never exposed to callers—it serves as a safe sentinel value that remains unobserved in normal operation.src/nix/flake.cc (1)
547-585: Template init flow viaInstallableFlakeandnix-templatelooks coherentThe new
CmdFlakeInitCommonwiring:
- Uses
completeFlakeRefWithFragment(..., {"nix-template"}, prefix)for completion, and- Constructs an
InstallableFlakewithtemplateNameas fragment and"nix-template"as the default attr path,then reads
pathand optionalwelcomeTextfrom the resulting cursor.This keeps fragment handling (
--template foo#bar) and the default template output (nix-template) nicely aligned and should compose well with the schema-based flake machinery.
[Diff context from src/nix/flake.cc (nix flake show): the old attrPath-based rendering inside the visit lambda — recurse/showDerivation plus per-output special cases for defaultPackage, devShell(s), formatter, checks, packages, apps, templates, hydraJobs, legacyPackages, overlays, nixosConfigurations and nixosModules — is replaced by the schema-driven flake_schemas::visit callbacks, which emit what/shortDescription/derivation/forSystems for leaf outputs, spawn child visits via futures (marking legacyPackages failures instead of aborting), and flag system-filtered nodes. The resulting new code is quoted verbatim in the analysis chain below.]
🧩 Analysis chain
Script executed:
# First, let's look at the file and understand the context around lines 880-948
cd /tmp && git clone --depth 1 https://github.com/NixOS/nix.git 2>&1 | head -20
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
Cloning into 'nix'...
warning: templates not found in /usr/share/git-core/templates
</shell_result>
Script executed:
# Get the actual file context
wc -l nix/src/nix/flake.cc
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 1
stderr:
wc: nix/src/nix/flake.cc: No such file or directory
</shell_result>
Script executed:
# Read the relevant section with context
sed -n '860,960p' nix/src/nix/flake.cc | cat -n
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stderr:
sed: can't read nix/src/nix/flake.cc: No such file or directory
</shell_result>
Script executed:
# Let's search for futures.finishAll() to understand the synchronization point
rg -n "finishAll|futures\.spawn" nix/src/nix/flake.cc -B 3 -A 3Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 2
stderr:
nix/src/nix/flake.cc: No such file or directory (os error 2)
</shell_result>
Script executed:
# Let's understand the ThreadPool/futures implementation
rg -n "class.*Sync|finishAll|spawn" nix/src/libutil/ -g "*.h" -g "*.cc" | head -50Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stderr:
nix/src/libutil/: No such file or directory (os error 2)
</shell_result>
Script executed:
# See what was actually cloned
ls -la nix/ | head -20
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stderr:
ls: cannot access 'nix/': No such file or directory
</shell_result>
Script executed:
# Find flake.cc
find nix -name "flake.cc" -type f 2>/dev/null
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 1
</shell_result>
Script executed:
# List the directory structure at root
ls -la nix/ | grep -E "^d"Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 1
stderr:
ls: cannot access 'nix/': No such file or directory
</shell_result>
Script executed:
# Check current working directory and contents
pwd
ls -la | head -20
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
/home/jailuser/git
total 136
drwxr-xr-x 14 jailuser jailuser 700 Nov 26 17:28 .
drwxr-xr-x 1 jailuser jailuser 100 Nov 26 17:28 ..
-rw-r--r-- 1 jailuser jailuser 1010 Nov 26 17:28 .clang-format
-rw-r--r-- 1 jailuser jailuser 135 Nov 26 17:28 .clang-tidy
-rw-r--r-- 1 jailuser jailuser 598 Nov 26 17:28 .dir-locals.el
-rw-r--r-- 1 jailuser jailuser 610 Nov 26 17:28 .editorconfig
drwxr-xr-x 6 jailuser jailuser 240 Nov 26 17:36 .git
-rw-r--r-- 1 jailuser jailuser 366 Nov 26 17:28 .git-blame-ignore-revs
drwxr-xr-x 5 jailuser jailuser 220 Nov 26 17:28 .github
-rw-r--r-- 1 jailuser jailuser 723 Nov 26 17:28 .gitignore
-rw-r--r-- 1 jailuser jailuser 3945 Nov 26 17:28 .mergify.yml
-rw-r--r-- 1 jailuser jailuser 116 Nov 26 17:28 .shellcheckrc
-rw-r--r-- 1 jailuser jailuser 7 Nov 26 17:28 .version
-rw-r--r-- 1 jailuser jailuser 7 Nov 26 17:28 .version-determinate
-rw-r--r-- 1 jailuser jailuser 1157 Nov 26 17:28 CITATION.cff
-rw-r--r-- 1 jailuser jailuser 6640 Nov 26 17:28 CONTRIBUTING.md
-rw-r--r-- 1 jailuser jailuser 26419 Nov 26 17:28 COPYING
lrwxrwxrwx 1 jailuser jailuser 41 Nov 26 17:28 HACKING.md -> doc/manual/source/development/building.md
-rw-r--r-- 1 jailuser jailuser 5492 Nov 26 17:28 README.md
</shell_result>
Script executed:
# Find flake.cc in current directory
find . -name "flake.cc" -type f 2>/dev/null | head -5
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
./src/libflake/flake.cc
./src/nix/flake.cc
</shell_result>
Script executed:
# Read the code around lines 880-948
wc -l src/nix/flake.cc
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
1127 src/nix/flake.cc
</shell_result>
Script executed:
# Read the relevant section with more context
sed -n '850,980p' src/nix/flake.cc | cat -n
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
1 .longName = "drv-paths",
2 .description = "Include the store paths of derivations in the JSON output.",
3 .handler = {&showDrvPaths, true},
4 });
5 }
6
7 std::string description() override
8 {
9 return "show the outputs provided by a flake";
10 }
11
12 std::string doc() override
13 {
14 return
15 #include "flake-show.md"
16 ;
17 }
18
19 void run(nix::ref<nix::Store> store) override
20 {
21 auto state = getEvalState();
22 auto flake = std::make_shared<LockedFlake>(lockFlake());
23 auto localSystem = std::string(settings.thisSystem.get());
24
25 auto cache = flake_schemas::call(*state, flake, getDefaultFlakeSchemas());
26
27 auto inventory = cache->getRoot()->getAttr("inventory");
28
29 FutureVector futures(*state->executor);
30
31 std::function<void(ref<eval_cache::AttrCursor> node, nlohmann::json & obj)> visit;
32
33 visit = [&](ref<eval_cache::AttrCursor> node, nlohmann::json & obj) {
34 flake_schemas::visit(
35 showAllSystems ? std::optional<std::string>() : localSystem,
36 node,
37
38 [&](const flake_schemas::Leaf & leaf) {
39 if (auto what = leaf.what())
40 obj.emplace("what", *what);
41
42 if (auto shortDescription = leaf.shortDescription())
43 obj.emplace("shortDescription", *shortDescription);
44
45 if (auto drv = leaf.derivation()) {
46 auto drvObj = nlohmann::json::object();
47
48 drvObj.emplace("name", drv->getAttr(state->s.name)->getString());
49
50 if (showDrvPaths) {
51 auto drvPath = drv->forceDerivation();
52 drvObj.emplace("path", store->printStorePath(drvPath));
53 }
54
55 if (showOutputPaths) {
56 auto outputs = nlohmann::json::object();
57 auto drvPath = drv->forceDerivation();
58 auto drv = getEvalStore()->derivationFromPath(drvPath);
59 for (auto & i : drv.outputsAndOptPaths(*store)) {
60 if (auto outPath = i.second.second)
61 outputs.emplace(i.first, store->printStorePath(*outPath));
62 else
63 outputs.emplace(i.first, nullptr);
64 }
65 drvObj.emplace("outputs", std::move(outputs));
66 }
67
68 obj.emplace("derivation", std::move(drvObj));
69 }
70
71 if (auto forSystems = leaf.forSystems())
72 obj.emplace("forSystems", *forSystems);
73
74 // FIXME: add more stuff
75 },
76
77 [&](std::function<void(flake_schemas::ForEachChild)> forEachChild) {
78 auto children = nlohmann::json::object();
79 forEachChild([&](Symbol attrName, ref<eval_cache::AttrCursor> node, bool isLast) {
80 auto & j = children.emplace(state->symbols[attrName], nlohmann::json::object()).first.value();
81 futures.spawn(1, [&visit, &j, node]() {
82 try {
83 visit(node, j);
84 } catch (EvalError & e) {
85 // FIXME: make it a flake schema attribute whether to ignore evaluation errors.
86 if (node->root->state.symbols[node->getAttrPath()[0]] == "legacyPackages")
87 j.emplace("failed", true);
88 else
89 throw;
90 }
91 });
92 });
93 obj.emplace("children", std::move(children));
94 },
95
96 [&](ref<eval_cache::AttrCursor> node, const std::vector<std::string> & systems) {
97 obj.emplace("filtered", true);
98 });
99 };
100
101 auto inv = nlohmann::json::object();
102
103 flake_schemas::forEachOutput(
104 inventory,
105 [&](Symbol outputName,
106 std::shared_ptr<eval_cache::AttrCursor> output,
107 const std::string & doc,
108 bool isLast) {
109 auto & j = inv.emplace(state->symbols[outputName], nlohmann::json::object()).first.value();
110
111 if (!showLegacy && state->symbols[outputName] == "legacyPackages") {
112 j.emplace("skipped", true);
113 } else if (output) {
114 j.emplace("doc", doc);
115 auto & j2 = j.emplace("output", nlohmann::json::object()).first.value();
116 futures.spawn(1, [&visit, output, &j2]() { visit(ref(output), j2); });
117 } else
118 j.emplace("unknown", true);
119 });
120
121 futures.finishAll();
122
123 if (json) {
124 auto res = nlohmann::json{{"version", 2}, {"inventory", std::move(inv)}};
125 printJSON(res);
126 } else {
127
128 // Render the JSON into a tree representation.
129 std::function<void(nlohmann::json j, const std::string & headerPrefix, const std::string & nextPrefix)>
130 render;
131
</shell_result>
Script executed:
# Search for futures.finishAll() and spawn to understand synchronization
rg -n "finishAll|futures\.spawn" src/nix/flake.cc -B 5 -A 5
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
435- }
436- },
437-
438- [&](std::function<void(flake_schemas::ForEachChild)> forEachChild) {
439- forEachChild([&](Symbol attrName, ref<eval_cache::AttrCursor> node, bool isLast) {
440: futures.spawn(2, [&visit, node]() { visit(node); });
441- });
442- },
443-
444- [&](ref<eval_cache::AttrCursor> node, const std::vector<std::string> & systems) {
445- for (auto & s : systems)
--
452- [&](Symbol outputName,
453- std::shared_ptr<eval_cache::AttrCursor> output,
454- const std::string & doc,
455- bool isLast) {
456- if (output)
457: futures.spawn(1, [&visit, output(ref(output))]() { visit(output); });
458- else
459- uncheckedOutputs.lock()->insert(std::string(state->symbols[outputName]));
460- });
461-
462: futures.finishAll();
463-
464- if (!uncheckedOutputs.lock()->empty())
465- warn("The following flake outputs are unchecked: %s.", concatStringsSep(", ", *uncheckedOutputs.lock()));
466-
467- auto drvPaths(drvPaths_.lock());
--
925-
926- [&](std::function<void(flake_schemas::ForEachChild)> forEachChild) {
927- auto children = nlohmann::json::object();
928- forEachChild([&](Symbol attrName, ref<eval_cache::AttrCursor> node, bool isLast) {
929- auto & j = children.emplace(state->symbols[attrName], nlohmann::json::object()).first.value();
930: futures.spawn(1, [&visit, &j, node]() {
931- try {
932- visit(node, j);
933- } catch (EvalError & e) {
934- // FIXME: make it a flake schema attribute whether to ignore evaluation errors.
935- if (node->root->state.symbols[node->getAttrPath()[0]] == "legacyPackages")
--
960- if (!showLegacy && state->symbols[outputName] == "legacyPackages") {
961- j.emplace("skipped", true);
962- } else if (output) {
963- j.emplace("doc", doc);
964- auto & j2 = j.emplace("output", nlohmann::json::object()).first.value();
965: futures.spawn(1, [&visit, output, &j2]() { visit(ref(output), j2); });
966- } else
967- j.emplace("unknown", true);
968- });
969-
970: futures.finishAll();
971-
972- if (json) {
973- auto res = nlohmann::json{{"version", 2}, {"inventory", std::move(inv)}};
974- printJSON(res);
975- } else {
</shell_result>
Script executed:
# Read the exact lines around the problematic handler to see where children is moved
sed -n '925,945p' src/nix/flake.cc | cat -n
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
1
2 [&](std::function<void(flake_schemas::ForEachChild)> forEachChild) {
3 auto children = nlohmann::json::object();
4 forEachChild([&](Symbol attrName, ref<eval_cache::AttrCursor> node, bool isLast) {
5 auto & j = children.emplace(state->symbols[attrName], nlohmann::json::object()).first.value();
6 futures.spawn(1, [&visit, &j, node]() {
7 try {
8 visit(node, j);
9 } catch (EvalError & e) {
10 // FIXME: make it a flake schema attribute whether to ignore evaluation errors.
11 if (node->root->state.symbols[node->getAttrPath()[0]] == "legacyPackages")
12 j.emplace("failed", true);
13 else
14 throw;
15 }
16 });
17 });
18 obj.emplace("children", std::move(children));
19 },
20
21 [&](ref<eval_cache::AttrCursor> node, const std::vector<std::string> & systems) {
</shell_result>
Script executed:
# Find the FutureVector class definition
rg -n "class FutureVector|struct FutureVector" src/ -A 20
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
src/libexpr/include/nix/expr/parallel-eval.hh:65:struct FutureVector
src/libexpr/include/nix/expr/parallel-eval.hh-66-{
src/libexpr/include/nix/expr/parallel-eval.hh-67- Executor & executor;
src/libexpr/include/nix/expr/parallel-eval.hh-68-
src/libexpr/include/nix/expr/parallel-eval.hh-69- struct State
src/libexpr/include/nix/expr/parallel-eval.hh-70- {
src/libexpr/include/nix/expr/parallel-eval.hh-71- std::vector<std::future<void>> futures;
src/libexpr/include/nix/expr/parallel-eval.hh-72- };
src/libexpr/include/nix/expr/parallel-eval.hh-73-
src/libexpr/include/nix/expr/parallel-eval.hh-74- Sync<State> state_;
src/libexpr/include/nix/expr/parallel-eval.hh-75-
src/libexpr/include/nix/expr/parallel-eval.hh-76- ~FutureVector();
src/libexpr/include/nix/expr/parallel-eval.hh-77-
src/libexpr/include/nix/expr/parallel-eval.hh-78- // FIXME: add a destructor that cancels/waits for all futures.
src/libexpr/include/nix/expr/parallel-eval.hh-79-
src/libexpr/include/nix/expr/parallel-eval.hh-80- void spawn(std::vector<std::pair<Executor::work_t, uint8_t>> && work);
src/libexpr/include/nix/expr/parallel-eval.hh-81-
src/libexpr/include/nix/expr/parallel-eval.hh-82- void spawn(uint8_t prioPrefix, Executor::work_t && work)
src/libexpr/include/nix/expr/parallel-eval.hh-83- {
src/libexpr/include/nix/expr/parallel-eval.hh-84- spawn({{std::move(work), prioPrefix}});
src/libexpr/include/nix/expr/parallel-eval.hh-85- }
</shell_result>
Script executed:
# Look at the broader context of the run function to confirm finishAll() timing
sed -n '950,975p' src/nix/flake.cc | cat -n
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
1 auto inv = nlohmann::json::object();
2
3 flake_schemas::forEachOutput(
4 inventory,
5 [&](Symbol outputName,
6 std::shared_ptr<eval_cache::AttrCursor> output,
7 const std::string & doc,
8 bool isLast) {
9 auto & j = inv.emplace(state->symbols[outputName], nlohmann::json::object()).first.value();
10
11 if (!showLegacy && state->symbols[outputName] == "legacyPackages") {
12 j.emplace("skipped", true);
13 } else if (output) {
14 j.emplace("doc", doc);
15 auto & j2 = j.emplace("output", nlohmann::json::object()).first.value();
16 futures.spawn(1, [&visit, output, &j2]() { visit(ref(output), j2); });
17 } else
18 j.emplace("unknown", true);
19 });
20
21 futures.finishAll();
22
23 if (json) {
24 auto res = nlohmann::json{{"version", 2}, {"inventory", std::move(inv)}};
25 printJSON(res);
26 } else {
</shell_result>
Script executed:
# Find finishAll() implementation
rg -n "void finishAll" src/ -A 10
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
src/libexpr/include/nix/expr/parallel-eval.hh:87: void finishAll();
src/libexpr/include/nix/expr/parallel-eval.hh-88-};
src/libexpr/include/nix/expr/parallel-eval.hh-89-
src/libexpr/include/nix/expr/parallel-eval.hh-90-} // namespace nix
</shell_result>
Script executed:
# Check nlohmann::json semantics and move behavior
rg -n "class basic_json|template.*json" src/ --type=h | head -20Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
</shell_result>
Script executed:
# Let's search for any synchronization between spawn and finishAll
rg -n "std::future" src/libexpr/include/nix/expr/parallel-eval.hh -B 5 -A 5Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
55-
56- void createWorker(State & state);
57-
58- void worker();
59-
60: std::vector<std::future<void>> spawn(std::vector<std::pair<work_t, uint8_t>> && items);
61-
62- static thread_local bool amWorkerThread;
63-};
64-
65-struct FutureVector
66-{
67- Executor & executor;
68-
69- struct State
70- {
71: std::vector<std::future<void>> futures;
72- };
73-
74- Sync<State> state_;
75-
76- ~FutureVector();
</shell_result>
Script executed:
# Find finishAll() implementation in .cc files
rg -n "finishAll" src/libexpr/ --type=cc -A 15
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 2
stderr:
unrecognized file type: cc
</shell_result>
Script executed:
# Let's also check if there's any documentation about finishAll behavior
rg -n "finishAll|futures" src/nix/flake.cc -B 2 -A 2 | head -50
Repository: DeterminateSystems/nix-src
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
372- auto inventory = cache->getRoot()->getAttr("inventory");
373-
374: FutureVector futures(*state->executor);
375-
376- Sync<std::vector<DerivedPath>> drvPaths_;
--
438- [&](std::function<void(flake_schemas::ForEachChild)> forEachChild) {
439- forEachChild([&](Symbol attrName, ref<eval_cache::AttrCursor> node, bool isLast) {
440: futures.spawn(2, [&visit, node]() { visit(node); });
441- });
442- },
--
455- bool isLast) {
456- if (output)
457: futures.spawn(1, [&visit, output(ref(output))]() { visit(output); });
458- else
459- uncheckedOutputs.lock()->insert(std::string(state->symbols[outputName]));
460- });
461-
462: futures.finishAll();
463-
464- if (!uncheckedOutputs.lock()->empty())
--
876- auto inventory = cache->getRoot()->getAttr("inventory");
877-
878: FutureVector futures(*state->executor);
879-
880- std::function<void(ref<eval_cache::AttrCursor> node, nlohmann::json & obj)> visit;
--
928- forEachChild([&](Symbol attrName, ref<eval_cache::AttrCursor> node, bool isLast) {
929- auto & j = children.emplace(state->symbols[attrName], nlohmann::json::object()).first.value();
930: futures.spawn(1, [&visit, &j, node]() {
931- try {
932- visit(node, j);
--
963- j.emplace("doc", doc);
964- auto & j2 = j.emplace("output", nlohmann::json::object()).first.value();
965: futures.spawn(1, [&visit, output, &j2]() { visit(ref(output), j2); });
966- } else
967- j.emplace("unknown", true);
968- });
969-
970: futures.finishAll();
971-
972- if (json) {
</shell_result>
Script executed:
# Search for how futures are managed and when they complete
rg -n "std::future|\.get\(\)|\.wait" src/libexpr/include/ --type=h
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 1
</shell_result>
Script executed:
# Search for finishAll implementation in all file types
find src -name "*.cc" -o -name "*.cpp" | xargs grep -l "finishAll" 2>/dev/null
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
src/libexpr/parallel-eval.cc
src/nix/flake.cc
src/nix/search.cc
</shell_result>
Script executed:
# Look at the actual finishAll implementation
find src -name "*.cc" -o -name "*.cpp" | xargs grep -A 20 "FutureVector::finishAll" 2>/dev/null
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
src/libexpr/parallel-eval.cc:void FutureVector::finishAll()
src/libexpr/parallel-eval.cc-{
src/libexpr/parallel-eval.cc- std::exception_ptr ex;
src/libexpr/parallel-eval.cc- while (true) {
src/libexpr/parallel-eval.cc- std::vector<std::future<void>> futures;
src/libexpr/parallel-eval.cc- {
src/libexpr/parallel-eval.cc- auto state(state_.lock());
src/libexpr/parallel-eval.cc- std::swap(futures, state->futures);
src/libexpr/parallel-eval.cc- }
src/libexpr/parallel-eval.cc- debug("got %d futures", futures.size());
src/libexpr/parallel-eval.cc- if (futures.empty())
src/libexpr/parallel-eval.cc- break;
src/libexpr/parallel-eval.cc- for (auto & future : futures)
src/libexpr/parallel-eval.cc- try {
src/libexpr/parallel-eval.cc- future.get();
src/libexpr/parallel-eval.cc- } catch (...) {
src/libexpr/parallel-eval.cc- if (ex) {
src/libexpr/parallel-eval.cc- if (!getInterrupted())
src/libexpr/parallel-eval.cc- ignoreExceptionExceptInterrupt();
src/libexpr/parallel-eval.cc- } else
src/libexpr/parallel-eval.cc- ex = std::current_exception();
</shell_result>
Script executed:
# Let's also check the header for any inline implementation
cat src/libexpr/include/nix/expr/parallel-eval.hh | tail -50
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
const bool enabled;
const std::unique_ptr<InterruptCallback> interruptCallback;
Sync<State> state_;
std::condition_variable wakeup;
static unsigned int getEvalCores(const EvalSettings & evalSettings);
Executor(const EvalSettings & evalSettings);
~Executor();
void createWorker(State & state);
void worker();
std::vector<std::future<void>> spawn(std::vector<std::pair<work_t, uint8_t>> && items);
static thread_local bool amWorkerThread;
};
struct FutureVector
{
Executor & executor;
struct State
{
std::vector<std::future<void>> futures;
};
Sync<State> state_;
~FutureVector();
// FIXME: add a destructor that cancels/waits for all futures.
void spawn(std::vector<std::pair<Executor::work_t, uint8_t>> && work);
void spawn(uint8_t prioPrefix, Executor::work_t && work)
{
spawn({{std::move(work), prioPrefix}});
}
void finishAll();
};
} // namespace nix
</shell_result>
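Read together with the finishAll() body from the earlier result, the header above implies that nothing spawned through a FutureVector is guaranteed to have completed until finishAll() has been called. A condensed, stand-alone sketch of that drain loop (simplified members and names, not the actual Nix declarations):

#include <exception>
#include <future>
#include <mutex>
#include <utility>
#include <vector>

// Simplified stand-in for the FutureVector shown above: spawn() queues work,
// finishAll() drains it and rethrows only the first error.
struct FutureVectorSketch
{
    std::mutex m;
    std::vector<std::future<void>> futures;

    void spawn(std::future<void> f)
    {
        std::lock_guard<std::mutex> lock(m);
        futures.push_back(std::move(f));
    }

    void finishAll()
    {
        std::exception_ptr first;
        while (true) {
            std::vector<std::future<void>> batch;
            {
                std::lock_guard<std::mutex> lock(m);
                std::swap(batch, futures); // work spawned meanwhile lands in the next round
            }
            if (batch.empty())
                break;
            for (auto & f : batch) {
                try {
                    f.get();
                } catch (...) {
                    if (!first)
                        first = std::current_exception(); // keep only the first error
                }
            }
        }
        if (first)
            std::rethrow_exception(first);
    }
};

int main()
{
    FutureVectorSketch fv;
    fv.spawn(std::async(std::launch::async, [] { /* some evaluation work */ }));
    fv.spawn(std::async(std::launch::async, [] { /* more work */ }));
    fv.finishAll(); // nothing is guaranteed to have finished before this returns
}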
Script executed:
# Check the C++ semantics - when children is moved, is it destroyed?
# Let's look at whether std::move(children) into emplace keeps it alive
rg -n "emplace.*std::move" src/nix/flake.cc | head -10
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
914: drvObj.emplace("outputs", std::move(outputs));
917: obj.emplace("derivation", std::move(drvObj));
942: obj.emplace("children", std::move(children));
</shell_result>
Script executed:
# Let's also verify the issue in another similar file to see the pattern
sed -n '428,465p' src/nix/flake.cc | cat -n
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
1 } catch (Error & e) {
2 printError("❌ " ANSI_RED "%s" ANSI_NORMAL, leaf.node->getAttrPathStr());
3 if (settings.keepGoing) {
4 logEvalError();
5 hasErrors = true;
6 } else
7 throw;
8 }
9 },
10
11 [&](std::function<void(flake_schemas::ForEachChild)> forEachChild) {
12 forEachChild([&](Symbol attrName, ref<eval_cache::AttrCursor> node, bool isLast) {
13 futures.spawn(2, [&visit, node]() { visit(node); });
14 });
15 },
16
17 [&](ref<eval_cache::AttrCursor> node, const std::vector<std::string> & systems) {
18 for (auto & s : systems)
19 omittedSystems.lock()->insert(s);
20 });
21 };
22
23 flake_schemas::forEachOutput(
24 inventory,
25 [&](Symbol outputName,
26 std::shared_ptr<eval_cache::AttrCursor> output,
27 const std::string & doc,
28 bool isLast) {
29 if (output)
30 futures.spawn(1, [&visit, output(ref(output))]() { visit(output); });
31 else
32 uncheckedOutputs.lock()->insert(std::string(state->symbols[outputName]));
33 });
34
35 futures.finishAll();
36
37 if (!uncheckedOutputs.lock()->empty())
38 warn("The following flake outputs are unchecked: %s.", concatStringsSep(", ", *uncheckedOutputs.lock()));
</shell_result>
Script executed:
# Check if there's a similar pattern in the first function (around line 440)
sed -n '420,470p' src/nix/flake.cc | cat -n
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
1
2 if (done)
3 notice(
4 "✅ " ANSI_BOLD "%s" ANSI_NORMAL "%s",
5 leaf.node->getAttrPathStr(),
6 buildSkipped ? ANSI_ITALIC ANSI_FAINT " (build skipped)" : "");
7 } catch (Interrupted & e) {
8 throw;
9 } catch (Error & e) {
10 printError("❌ " ANSI_RED "%s" ANSI_NORMAL, leaf.node->getAttrPathStr());
11 if (settings.keepGoing) {
12 logEvalError();
13 hasErrors = true;
14 } else
15 throw;
16 }
17 },
18
19 [&](std::function<void(flake_schemas::ForEachChild)> forEachChild) {
20 forEachChild([&](Symbol attrName, ref<eval_cache::AttrCursor> node, bool isLast) {
21 futures.spawn(2, [&visit, node]() { visit(node); });
22 });
23 },
24
25 [&](ref<eval_cache::AttrCursor> node, const std::vector<std::string> & systems) {
26 for (auto & s : systems)
27 omittedSystems.lock()->insert(s);
28 });
29 };
30
31 flake_schemas::forEachOutput(
32 inventory,
33 [&](Symbol outputName,
34 std::shared_ptr<eval_cache::AttrCursor> output,
35 const std::string & doc,
36 bool isLast) {
37 if (output)
38 futures.spawn(1, [&visit, output(ref(output))]() { visit(output); });
39 else
40 uncheckedOutputs.lock()->insert(std::string(state->symbols[outputName]));
41 });
42
43 futures.finishAll();
44
45 if (!uncheckedOutputs.lock()->empty())
46 warn("The following flake outputs are unchecked: %s.", concatStringsSep(", ", *uncheckedOutputs.lock()));
47
48 auto drvPaths(drvPaths_.lock());
49 auto derivedPathToAttrPaths(derivedPathToAttrPaths_.lock());
50
51 if (build && !drvPaths->empty()) {
</shell_result>
Excellent! I have all the evidence I need. The issue is confirmed and critical. Let me write the rewritten review comment:
Fix critical data-race issue when building children JSON asynchronously in CmdFlakeShow
In CmdFlakeShow::run (src/nix/flake.cc, lines 926-943), the child handler spawns tasks that hold references into a local children JSON object, which is then moved and destroyed before those tasks complete:
[&](std::function<void(flake_schemas::ForEachChild)> forEachChild) {
    auto children = nlohmann::json::object();
    forEachChild([&](Symbol attrName, ref<eval_cache::AttrCursor> node, bool isLast) {
        auto & j = children.emplace(state->symbols[attrName], nlohmann::json::object()).first.value();
        futures.spawn(1, [&visit, &j, node]() { // &j references into children
            try {
                visit(node, j); // access through &j while children may already be destroyed
            } catch (EvalError & e) {
                if (node->root->state.symbols[node->getAttrPath()[0]] == "legacyPackages")
                    j.emplace("failed", true);
                else
                    throw;
            }
        });
    });
    obj.emplace("children", std::move(children)); // children moved/destroyed here
}
// ... lambda returns, futures may still be running ...
futures.finishAll(); // finishAll() called later, at line 970

Each spawned task holds a reference &j into the local children object. After the lambda returns, children is moved from and destroyed while those tasks may still be queued or running, leaving every captured &j reference dangling. This is undefined behavior as soon as a task writes through one of them.
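The rule being violated here is independent of the Nix classes involved: every task that writes through a reference into a local container has to be drained before that container is moved or destroyed. A stand-alone sketch of the safe ordering, with purely illustrative names (this is not the Nix code):

#include <future>
#include <utility>
#include <vector>
#include <nlohmann/json.hpp>

int main()
{
    nlohmann::json obj = nlohmann::json::object();
    std::vector<std::future<void>> futures;

    auto children = nlohmann::json::object();
    std::vector<nlohmann::json *> slots;

    // Create one child slot per name up front; each task below writes only to its own slot.
    const char * names[] = {"a", "b", "c"};
    for (auto name : names)
        slots.push_back(&children.emplace(name, nlohmann::json::object()).first.value());

    for (auto * slot : slots)
        futures.push_back(std::async(std::launch::async, [slot] { slot->emplace("visited", true); }));

    // Drain every writer *before* 'children' is moved. Moving (or destroying)
    // 'children' first is exactly the dangling-reference scenario described above.
    for (auto & f : futures)
        f.get();

    obj.emplace("children", std::move(children));
}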
The simplest fix is to make this handler synchronous, preserving concurrency at the outer forEachOutput level:
 [&](std::function<void(flake_schemas::ForEachChild)> forEachChild) {
     auto children = nlohmann::json::object();
     forEachChild([&](Symbol attrName, ref<eval_cache::AttrCursor> node, bool isLast) {
         auto & j = children.emplace(state->symbols[attrName], nlohmann::json::object()).first.value();
-        futures.spawn(1, [&visit, &j, node]() {
-            try {
-                visit(node, j);
-            } catch (EvalError & e) {
-                // FIXME: make it a flake schema attribute whether to ignore evaluation errors.
-                if (node->root->state.symbols[node->getAttrPath()[0]] == "legacyPackages")
-                    j.emplace("failed", true);
-                else
-                    throw;
-            }
-        });
+        try {
+            visit(node, j);
+        } catch (EvalError & e) {
+            // FIXME: make it a flake schema attribute whether to ignore evaluation errors.
+            if (node->root->state.symbols[node->getAttrPath()[0]] == "legacyPackages")
+                j.emplace("failed", true);
+            else
+                throw;
+        }
     });
     obj.emplace("children", std::move(children));
 },

 render = [&](nlohmann::json j, const std::string & headerPrefix, const std::string & nextPrefix) {
-    if (j.find("type") != j.end()) {
-        std::string s;
-        std::string type = j["type"];
-        if (type == "omitted") {
-            s = j["message"];
-        } else if (type == "derivation") {
-            s = (std::string) j["subtype"] + " '" + (std::string) j["name"] + "'";
-        } else {
-            s = type;
-        }
-        logger->cout("%s: %s", headerPrefix, s);
-        return;
-    }
-    logger->cout("%s", headerPrefix);
-    auto nonEmpty = nlohmann::json::object();
-    for (const auto & j2 : j.items()) {
-        if (hasContent(j2.value()))
-            nonEmpty[j2.key()] = j2.value();
-    }
+    auto what = j.find("what");
+    auto filtered = j.find("filtered");
+    auto derivationName = j.find("derivationName");
+
+    auto s = headerPrefix;
+
+    if (what != j.end())
+        s += fmt(": %s", (std::string) *what);
+
+    if (derivationName != j.end())
+        s += fmt(ANSI_ITALIC " [%s]" ANSI_NORMAL, (std::string) *derivationName);
+
+    if (filtered != j.end() && (bool) *filtered)
+        s += " " ANSI_WARNING "omitted" ANSI_NORMAL " (use '--all-systems' to show)";
🧩 Analysis chain
Script executed:
# Search for the visit function and how JSON is constructed
rg -n "auto drv = leaf.derivation" src/nix/flake.cc -A 15 -B 5
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
401- if (!b)
402- throw Error("Evaluation check '%s' failed.", cursor->getAttrPathStr());
403- }
404- }
405-
406: if (auto drv = leaf.derivation()) {
407- if (buildAll || leaf.isFlakeCheck()) {
408- auto drvPath = drv->forceDerivation();
409- auto derivedPath = DerivedPath::Built{
410- .drvPath = makeConstantStorePathRef(drvPath),
411- .outputs = OutputsSpec::All{},
412- };
413- (*derivedPathToAttrPaths_.lock())[derivedPath].push_back(leaf.node->getAttrPath());
414- drvPaths_.lock()->push_back(std::move(derivedPath));
415- if (build)
416- done = false;
417- } else
418- buildSkipped = true;
419- }
420-
421- if (done)
--
889- obj.emplace("what", *what);
890-
891- if (auto shortDescription = leaf.shortDescription())
892- obj.emplace("shortDescription", *shortDescription);
893-
894: if (auto drv = leaf.derivation()) {
895- auto drvObj = nlohmann::json::object();
896-
897- drvObj.emplace("name", drv->getAttr(state->s.name)->getString());
898-
899- if (showDrvPaths) {
900- auto drvPath = drv->forceDerivation();
901- drvObj.emplace("path", store->printStorePath(drvPath));
902- }
903-
904- if (showOutputPaths) {
905- auto outputs = nlohmann::json::object();
906- auto drvPath = drv->forceDerivation();
907- auto drv = getEvalStore()->derivationFromPath(drvPath);
908- for (auto & i : drv.outputsAndOptPaths(*store)) {
909- if (auto outPath = i.second.second)
</shell_result>
Script executed:
# Search for all uses of derivationName in the codebase
rg -n "derivationName" src/nix/flake.cc -B 3 -A 3
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
981- render = [&](nlohmann::json j, const std::string & headerPrefix, const std::string & nextPrefix) {
982- auto what = j.find("what");
983- auto filtered = j.find("filtered");
984: auto derivationName = j.find("derivationName");
985-
986- auto s = headerPrefix;
987-
988- if (what != j.end())
989- s += fmt(": %s", (std::string) *what);
990-
991: if (derivationName != j.end())
992: s += fmt(ANSI_ITALIC " [%s]" ANSI_NORMAL, (std::string) *derivationName);
993-
994- if (filtered != j.end() && (bool) *filtered)
995- s += " " ANSI_WARNING "omitted" ANSI_NORMAL " (use '--all-systems' to show)";
</shell_result>
Script executed:
# Look for how the render function is used and what it expects
rg -n "render.*nlohmann::json" src/nix/flake.cc -B 3 -A 10
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
978- std::function<void(nlohmann::json j, const std::string & headerPrefix, const std::string & nextPrefix)>
979- render;
980-
981: render = [&](nlohmann::json j, const std::string & headerPrefix, const std::string & nextPrefix) {
982- auto what = j.find("what");
983- auto filtered = j.find("filtered");
984- auto derivationName = j.find("derivationName");
985-
986- auto s = headerPrefix;
987-
988- if (what != j.end())
989- s += fmt(": %s", (std::string) *what);
990-
991- if (derivationName != j.end())
</shell_result>
Script executed:
# Check if there's an obj.emplace("derivation") call
rg -n 'obj\.emplace\("derivation"' src/nix/flake.cc -A 2 -B 2
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
915- }
916-
917: obj.emplace("derivation", std::move(drvObj));
918- }
919-
</shell_result>
Script executed:
# Check if there are any other JSON objects being created that might have derivationName
rg -n 'emplace.*derivationName' src/nix/ -r src/nix/flake.cc
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 1
</shell_result>
Script executed:
# Check the full render function context to see if there are other references
sed -n '978,1020p' src/nix/flake.cc
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
std::function<void(nlohmann::json j, const std::string & headerPrefix, const std::string & nextPrefix)>
render;
render = [&](nlohmann::json j, const std::string & headerPrefix, const std::string & nextPrefix) {
auto what = j.find("what");
auto filtered = j.find("filtered");
auto derivationName = j.find("derivationName");
auto s = headerPrefix;
if (what != j.end())
s += fmt(": %s", (std::string) *what);
if (derivationName != j.end())
s += fmt(ANSI_ITALIC " [%s]" ANSI_NORMAL, (std::string) *derivationName);
if (filtered != j.end() && (bool) *filtered)
s += " " ANSI_WARNING "omitted" ANSI_NORMAL " (use '--all-systems' to show)";
logger->cout(s);
auto children = j.find("children");
if (children != j.end()) {
for (const auto & [i, child] : enumerate(children->items())) {
bool last = i + 1 == children->size();
render(
child.value(),
fmt(ANSI_GREEN "%s%s" ANSI_NORMAL ANSI_BOLD "%s" ANSI_NORMAL,
nextPrefix,
last ? treeLast : treeConn,
child.key()),
nextPrefix + (last ? treeNull : treeLine));
}
}
};
logger->cout("%s", fmt(ANSI_BOLD "%s" ANSI_NORMAL, flake->flake.lockedRef));
for (const auto & [i, child] : enumerate(inv.items())) {
bool last = i + 1 == inv.size();
auto nextPrefix = last ? treeNull : treeLine;
auto output = child.value().find("output");
</shell_result>
Script executed:
# Let's also check if the derivation field might have a "name" subfield being used elsewhere
rg -n 'derivation.*->find.*name' src/nix/flake.cc -B 2 -A 2
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 1
</shell_result>
Script executed:
# Let's see the complete render lambda
rg -n 'render = \[' src/nix/flake.cc -A 50 | head -80
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
981: render = [&](nlohmann::json j, const std::string & headerPrefix, const std::string & nextPrefix) {
982- auto what = j.find("what");
983- auto filtered = j.find("filtered");
984- auto derivationName = j.find("derivationName");
985-
986- auto s = headerPrefix;
987-
988- if (what != j.end())
989- s += fmt(": %s", (std::string) *what);
990-
991- if (derivationName != j.end())
992- s += fmt(ANSI_ITALIC " [%s]" ANSI_NORMAL, (std::string) *derivationName);
993-
994- if (filtered != j.end() && (bool) *filtered)
995- s += " " ANSI_WARNING "omitted" ANSI_NORMAL " (use '--all-systems' to show)";
996-
997- logger->cout(s);
998-
999- auto children = j.find("children");
1000-
1001- if (children != j.end()) {
1002- for (const auto & [i, child] : enumerate(children->items())) {
1003- bool last = i + 1 == children->size();
1004- render(
1005- child.value(),
1006- fmt(ANSI_GREEN "%s%s" ANSI_NORMAL ANSI_BOLD "%s" ANSI_NORMAL,
1007- nextPrefix,
1008- last ? treeLast : treeConn,
1009- child.key()),
1010- nextPrefix + (last ? treeNull : treeLine));
1011- }
1012- }
1013- };
1014-
1015- logger->cout("%s", fmt(ANSI_BOLD "%s" ANSI_NORMAL, flake->flake.lockedRef));
1016-
1017- for (const auto & [i, child] : enumerate(inv.items())) {
1018- bool last = i + 1 == inv.size();
1019- auto nextPrefix = last ? treeNull : treeLine;
1020- auto output = child.value().find("output");
1021- auto headerPrefix = fmt(
1022- ANSI_GREEN "%s" ANSI_NORMAL ANSI_BOLD "%s" ANSI_NORMAL, last ? treeLast : treeConn, child.key());
1023- if (output != child.value().end())
1024- render(*output, headerPrefix, nextPrefix);
1025- else if (child.value().contains("unknown"))
1026- logger->cout(headerPrefix + ANSI_WARNING " unknown flake output" ANSI_NORMAL);
1027- else if (child.value().contains("skipped"))
1028- logger->cout(headerPrefix + ANSI_WARNING " omitted" ANSI_NORMAL " (use '--legacy' to show)");
1029- }
1030- }
1031- }
</shell_result>
Script executed:
# Double-check: search for any string literal "derivationName" in the entire nix directory
rg -n '"derivationName"' src/nix/ 2>/dev/null | head -20
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
src/nix/flake.cc:984: auto derivationName = j.find("derivationName");
</shell_result>
Script executed:
rg -r '"derivationName"' src/nix/
<shell_result>
Repository: DeterminateSystems/nix-src
Exit code: 0
stdout:
src/internal-api-docs/doxygen.cfg.in: @BUILD_ROOT@/"derivationName"nix.p \
packaging/components.nix: nix-cli = callPackage ../"derivationName"package.nix { version = fineVersion; };
doc/manual/source/development/testing.md:ninja: Entering directory `/home/jcericson/"derivationName"master/build'
doc/manual/source/development/testing.md:ninja: Entering directory `/home/jcericson/"derivationName"master/build'
doc/manual/source/development/testing.md:Full log written to /home/jcericson/"derivationName"master/build/meson-logs/testlog.txt
</shell_result>
Tree renderer must read derivation name from nested JSON structure
The render helper at line 984 still searches for a top-level derivationName field:
auto derivationName = j.find("derivationName");
...
if (derivationName != j.end())
    s += fmt(ANSI_ITALIC " [%s]" ANSI_NORMAL, (std::string) *derivationName);

However, the JSON construction at line 917 now wraps the derivation info in a nested "derivation" object:
if (auto drv = leaf.derivation()) {
    auto drvObj = nlohmann::json::object();
    drvObj.emplace("name", drv->getAttr(state->s.name)->getString());
    // ... other fields ...
    obj.emplace("derivation", std::move(drvObj));
}

Since derivationName is never created at the top level, the derivation name will no longer appear in the tree output. Update the render function to read from the new nested structure:
-    auto derivationName = j.find("derivationName");
+    auto derivation = j.find("derivation");
     ...
-    if (derivationName != j.end())
-        s += fmt(ANSI_ITALIC " [%s]" ANSI_NORMAL, (std::string) *derivationName);
+    if (derivation != j.end()) {
+        auto name = derivation->find("name");
+        if (name != derivation->end())
+            s += fmt(ANSI_ITALIC " [%s]" ANSI_NORMAL, (std::string) *name);
+    }

This chained find() pattern is the standard and safe approach for accessing nested fields in nlohmann::json.
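For reference, a tiny self-contained example of the same chained lookup, using a made-up JSON value shaped like the show output described above:

#include <iostream>
#include <string>
#include <nlohmann/json.hpp>

int main()
{
    nlohmann::json j = {
        {"what", "package"},
        {"derivation", {{"name", "hello-2.12"}}},
    };

    std::string s = "packages.x86_64-linux.hello";

    // find() on a missing key (or on a non-object) returns end(), so each step is safe.
    if (auto derivation = j.find("derivation"); derivation != j.end()) {
        if (auto name = derivation->find("name"); name != derivation->end())
            s += " [" + name->get<std::string>() + "]";
    }

    std::cout << s << "\n"; // packages.x86_64-linux.hello [hello-2.12]
}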
🤖 Prompt for AI Agents
In src/nix/flake.cc around lines 981 to 996, the render lambda currently looks
for a top-level "derivationName" but the JSON now nests derivation info under
"derivation":{"name":...}; update the logic to first check for a nested name
(safe chained find: look up "derivation", then if present call find("name") on
that object and use its string value), fall back to the existing top-level
"derivationName" if present, and then append the italicized name as before;
ensure you perform existence checks before casting to std::string to avoid
exceptions.
Motivation
Updated version of NixOS#8892.
Context