
Conversation

@sawka (Member) commented Nov 22, 2025

No description provided.

@coderabbitai coderabbitai bot (Contributor) commented Nov 22, 2025

Walkthrough

This pull request adds Wave AI platform integration features across the frontend and backend. Changes include: adding a scrollToBottom capability to the AI panel input, implementing a "Send Output to AI" button in the builder that transmits filtered build output to the AI model, and gating configuration data fetching based on the active tab. On the backend, the changes extend the chat system to capture and propagate platform metadata (PlatformInfo) throughout the AI message conversion pipeline for both Anthropic and OpenAI providers. Additionally, the app initialization flow in the builder controller is reordered to start the app immediately after a successful build rather than after output emission. The BuilderAppGenerator callback signature is updated to return platform information as a third value.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

  • Method signature changes requiring verification: BuilderAppGenerator callback signature changed from (string, string, error) to (string, string, string, error) in usechat-types.go and usechat.go. Review all call sites across the codebase to ensure all callers are updated to match (a rough sketch of the new shape follows this list).
  • Message injection logic refactoring: The Anthropic and OpenAI message conversion logic now handles multiple optional fields (PlatformInfo, AppStaticFiles, AppGoFile) with conditional block injection. Verify the injection order, block formatting, and handling of edge cases where some fields are empty.
  • Data flow coordination: PlatformInfo is now sourced via os/user import and must flow through the entire pipeline (generateBuilderAppData → BuilderAppGenerator → chatOpts.PlatformInfo → message injection). Trace the complete path for correctness.
  • App initialization timing change: The invocation of bc.runBuilderApp was moved earlier and a duplicate was removed. Verify the 1-second delay and subsequent control flow don't introduce race conditions or timing issues.
  • UI state gating: The configdatatab now gates fetching on activeTab state. Verify the dependency list, state reset logic, and tab switching behavior work as intended.
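
A minimal sketch of the new callback shape and the PlatformInfo hand-off described in the bullets above. Aside from BuilderAppGenerator, PlatformInfo, and the unpacked variable names reported in the review, every identifier here is a placeholder rather than the actual waveterm code:

// Sketch only: placeholder options struct around the real BuilderAppGenerator field.
type UseChatOpts struct {
    BuilderAppGenerator func() (string, string, string, error) // appGoFile, appStaticFiles, platformInfo, err
    AppGoFile           string
    AppStaticFiles      string
    PlatformInfo        string
}

// resolveBuilderAppData mirrors the call-site unpacking the review describes.
func resolveBuilderAppData(opts *UseChatOpts) error {
    if opts.BuilderAppGenerator == nil {
        return nil
    }
    appGoFile, appStaticFiles, platformInfo, appErr := opts.BuilderAppGenerator()
    if appErr != nil {
        return appErr
    }
    opts.AppGoFile = appGoFile
    opts.AppStaticFiles = appStaticFiles
    opts.PlatformInfo = platformInfo
    return nil
}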

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings, 1 inconclusive)
  • Description check (⚠️ Warning): No pull request description was provided by the author, making it impossible to assess whether it relates to the changeset. Resolution: add a description explaining the purpose and scope of these changes, including the new scroll-to-bottom functionality and AI output integration features.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 40.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Title check (❓ Inconclusive): The title "more misc builder improvements" is vague and generic, and does not convey what actually changed. Resolution: use a more specific title, such as "Add scroll-to-bottom and AI output integration to builder" or "Implement AI panel scrolling and builder output sending features".
✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
      • Create PR with unit tests
      • Post copyable unit tests in a comment
      • Commit unit tests in branch sawka/builder-improvements-3

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
frontend/builder/builder-buildpanel.tsx (1)

85-102: Consider extracting the filtering logic to reduce duplication.

The line filtering logic appears twice:

  1. Lines 88-90 in handleSendToAI
  2. Lines 100-102 for filteredLines

While both are functional, you could reduce duplication by either:

  • Extracting a getFilteredLines(lines: string[], includeDebug: boolean) helper
  • Reusing the memoized filteredLines in handleSendToAI (though you'd need to slice it)

That said, the current duplication is minimal and the code is clear, so this is optional.

Example refactor:

+const getFilteredLines = (lines: string[], includeDebug: boolean) => {
+    return includeDebug
+        ? lines
+        : lines.filter((line) => !line.startsWith("[debug]") && line.trim().length > 0);
+};
+
 const handleSendToAI = useCallback(() => {
     const currentShowDebug = globalStore.get(model.showDebug);
     const currentOutputLines = globalStore.get(model.outputLines);
-    const filtered = currentShowDebug
-        ? currentOutputLines
-        : currentOutputLines.filter((line) => !line.startsWith("[debug]") && line.trim().length > 0);
+    const filtered = getFilteredLines(currentOutputLines, currentShowDebug);

     const linesToSend = filtered.slice(-200);
     // ... rest of the function
 }, [model]);

-const filteredLines = showDebug
-    ? outputLines
-    : outputLines.filter((line) => !line.startsWith("[debug]") && line.trim().length > 0);
+const filteredLines = getFilteredLines(outputLines, showDebug);
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between commits 99a2576 and 461df96.

📒 Files selected for processing (10)
  • frontend/app/aipanel/aipanelinput.tsx (2 hunks)
  • frontend/app/aipanel/waveai-model.tsx (2 hunks)
  • frontend/builder/builder-buildpanel.tsx (2 hunks)
  • frontend/builder/tabs/builder-configdatatab.tsx (2 hunks)
  • frontend/types/gotypes.d.ts (1 hunks)
  • pkg/aiusechat/anthropic/anthropic-convertmessage.go (1 hunks)
  • pkg/aiusechat/openai/openai-convertmessage.go (1 hunks)
  • pkg/aiusechat/uctypes/usechat-types.go (2 hunks)
  • pkg/aiusechat/usechat.go (5 hunks)
  • pkg/buildercontroller/buildercontroller.go (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-11-01T00:57:23.025Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2504
File: frontend/app/aipanel/aipanel-contextmenu.ts:15-16
Timestamp: 2025-11-01T00:57:23.025Z
Learning: In the waveterm codebase, types defined in custom.d.ts are globally available and do not require explicit imports. Backend types defined in gotypes.d.ts are also globally available.

Applied to files:

  • frontend/types/gotypes.d.ts
📚 Learning: 2025-10-14T06:30:54.763Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2430
File: frontend/app/aipanel/aimessage.tsx:137-144
Timestamp: 2025-10-14T06:30:54.763Z
Learning: In `frontend/app/aipanel/aimessage.tsx`, the `AIToolUseGroup` component splits file operation tool calls into separate batches (`fileOpsNeedApproval` and `fileOpsNoApproval`) based on their approval state before passing them to `AIToolUseBatch`. This ensures each batch has homogeneous approval states, making group-level approval handling valid.

Applied to files:

  • frontend/builder/builder-buildpanel.tsx
📚 Learning: 2025-10-15T03:21:02.229Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.

Applied to files:

  • pkg/aiusechat/usechat.go
🧬 Code graph analysis (2)
frontend/builder/builder-buildpanel.tsx (1)
frontend/app/aipanel/waveai-model.tsx (1)
  • WaveAIModel (43-582)
pkg/aiusechat/usechat.go (1)
pkg/wavebase/wavebase.go (1)
  • GetSystemSummary (376-383)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build for TestDriver.ai
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (go)
🔇 Additional comments (11)
frontend/types/gotypes.d.ts (1)

1223-1224: LGTM! Clean telemetry field additions.

The new optional fields for Wave AI chat tracking are properly typed and follow the existing naming conventions.

frontend/builder/tabs/builder-configdatatab.tsx (1)

72-72: LGTM! Smart optimization to gate data fetching.

The change to only fetch config/data when the "configdata" tab is active prevents unnecessary API calls and improves performance. The dependency array is correctly updated to include activeTab.

Also applies to: 148-158

pkg/buildercontroller/buildercontroller.go (1)

264-275: Clarify the purpose of the 1-second delay.

The reordering to start the app immediately after a successful build is good, but the 1-second sleep at line 275 lacks explanation.

Could you document why this delay is necessary? Is it to:

  • Allow the app process to stabilize?
  • Ensure the port is fully bound?
  • Give time for initial output to be captured?

Consider adding a comment explaining the rationale, especially since this is a critical part of the build flow.
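
For reference, the described ordering roughly amounts to the following; this is a hedged reconstruction from the review notes rather than the actual buildercontroller.go code, and the call signature and error handling are assumptions:

// Reconstruction of the reordered flow (details assumed): the app is started
// right after a successful build, then execution pauses one second before the
// remaining output handling runs.
if buildErr == nil {
    bc.runBuilderApp() // moved earlier; the later duplicate invocation was removed
    time.Sleep(1 * time.Second) // rationale undocumented; see the question above
}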

frontend/app/aipanel/aipanelinput.tsx (1)

20-20: LGTM! Clean scrollToBottom implementation.

The new method is straightforward and correctly scrolls the textarea to its bottom. Good addition to support the programmatic scrolling feature.

Also applies to: 47-52

pkg/aiusechat/openai/openai-convertmessage.go (1)

221-223: LGTM! Consistent PlatformInfo injection.

The PlatformInfo block injection follows the same pattern as existing TabState and AppStaticFiles handling. The ordering and formatting are consistent.
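
As a rough illustration of that pattern (the block-construction helper and slice names here are assumptions, and the exact tag formatting may differ):

// Append the PlatformInfo block only when populated, alongside the existing
// TabState/AppStaticFiles handling it mirrors.
if opts.PlatformInfo != "" {
    blocks = append(blocks, newTextBlock("<PlatformInfo>\n"+opts.PlatformInfo+"\n</PlatformInfo>"))
}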

frontend/app/aipanel/waveai-model.tsx (1)

304-304: LGTM! Well-designed optional scrolling behavior.

The optional scrollToBottom parameter is a clean extension of the appendText API. The 10ms delay ensures the DOM updates before scrolling, which is a sensible approach.

Also applies to: 321-323

frontend/builder/builder-buildpanel.tsx (1)

118-123: LGTM! Clean integration of "Send Output to AI" feature.

The button properly integrates with the WaveAIModel and leverages the new scrollToBottom capability. The feature will help users quickly send build output to AI for debugging.

pkg/aiusechat/uctypes/usechat-types.go (1)

452-452: All BuilderAppGenerator implementations have been correctly updated for the new signature.

Verification confirms that the implementation at usechat.go:694-696 uses the correct signature func() (string, string, string, error), and the usage at usechat.go:415 properly unpacks all four return values: appGoFile, appStaticFiles, platformInfo, appErr. The helper function generateBuilderAppData also returns (string, string, string, error), matching the required signature. Only one assignment exists in the codebase, and it is correctly implemented.

pkg/aiusechat/anthropic/anthropic-convertmessage.go (1)

75-104: LGTM! Clean multi-block injection pattern.

The refactoring from a single AppGoFile block to individual blocks for PlatformInfo, AppStaticFiles, and AppGoFile is well-implemented. Each field is checked independently and only appended when non-empty, following the same pattern as the TabState injection above.
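
The shape of that pattern, roughly, assuming a generic text-block helper (the real constructors and the exact tag names for the app fields are not shown in this review):

// Each optional field becomes its own tagged block and is skipped when empty.
for _, f := range []struct{ tag, val string }{
    {"PlatformInfo", opts.PlatformInfo},
    {"AppStaticFiles", opts.AppStaticFiles},
    {"AppGoFile", opts.AppGoFile},
} {
    if f.val != "" {
        content = append(content, newTextBlock("<"+f.tag+">\n"+f.val+"\n</"+f.tag+">"))
    }
}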

pkg/aiusechat/usechat.go (2)

414-421: LGTM! Generator signature correctly updated.

The BuilderAppGenerator call now captures and assigns the new platformInfo return value alongside the existing appGoFile and appStaticFiles.


865-872: Verify that including the local username in platform info sent to AI is intentional and properly governed.

The implementation unconditionally collects and sends the current user's username to AI services (Anthropic, OpenAI) as part of the <PlatformInfo> context block; a hedged sketch of what such a block can contain follows this comment. This is clearly intentional based on the code structure, but it is:

  • Undocumented: No comments explain why username collection is necessary
  • Non-configurable: No flags, settings, or opt-out mechanism exists
  • Unannounced: No user-facing documentation indicates their username is transmitted to external AI services

Please confirm with your product and privacy teams:

  1. Is this username transmission explicitly approved and intentional?
  2. Should this be documented for users or configurable/opt-in?
  3. Is this appropriate for the builder mode context, or should it be gated behind user consent?
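
For context on what such a platform-info string contains, here is a minimal, self-contained sketch of assembling it from os/user and runtime. The real code reportedly routes through wavebase.GetSystemSummary and generateBuilderAppData, so the field layout and helper name below are assumptions:

package main

import (
    "fmt"
    "os/user"
    "runtime"
)

// buildPlatformInfo is a stand-in for the real helper: it combines OS/arch
// details with the local username obtained via os/user.
func buildPlatformInfo() string {
    username := "unknown"
    if u, err := user.Current(); err == nil {
        username = u.Username
    }
    return fmt.Sprintf("os: %s\narch: %s\nusername: %s", runtime.GOOS, runtime.GOARCH, username)
}

func main() {
    fmt.Println(buildPlatformInfo())
}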
