
fix: improve Windows/WSL compatibility, add multi-stage executor Dockerfile, and add http to https fallback for git clone #740

Open
SSwser wants to merge 4 commits into wecode-ai:main from SSwser:fix/windows-wsl-compat-and-executor-dockerfile

Conversation


@SSwser SSwser commented Mar 12, 2026

Summary

  • fix: improve Windows/WSL local dev compatibility — fix CRLF line ending issues in shell scripts and improve local dev experience on Windows/WSL environments
  • feat: add multi-stage Dockerfile for executor with source/binary targets — add a multi-stage Dockerfile supporting both source-based and binary-based build targets for the executor
  • feat: add http to https fallback for git clone with token — automatically retry git clone with https when http is used, ensuring token-based authentication works correctly

Test plan

  • Verify start.sh works correctly on WSL/Linux without CRLF errors
  • Verify executor Docker image builds successfully with both source and binary targets
  • Verify git clone with token falls back from http to https correctly

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Added a backend-only startup option to skip frontend initialization.
  • Improvements

    • Docker image builds now explicitly target compiled binaries and use multi-stage builds for smaller, clearer artifacts.
    • Service readiness uses TCP connectivity probing for more reliable startup sequencing.
    • Git clone now retries with HTTPS as a fallback for token-based clones.
    • Frontend dev config updated for more consistent tooling and environment handling.


coderabbitai Bot commented Mar 12, 2026

📝 Walkthrough

Walkthrough

Adds explicit --target binary to executor image builds, converts the executor Dockerfile to multi-stage (deps, builder, source, binary), adds TCP probing and a backend-only startup mode in startup scripts, retries git HTTP clones using HTTPS on failure, and adjusts frontend Turbopack/webpack config and dev script.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Build scripts — `build_image.sh`, `build_image_mac.sh` | Docker build for the executor now passes `--target binary`, explicitly targeting the binary stage. |
| Executor Dockerfile — `docker/executor/Dockerfile` | Reworked to multi-stage: deps, builder (PyInstaller compile), source (dev run from source), and binary (run compiled binary). Added system/package deps and stage-specific CMDs. |
| Startup script — `start.sh` | Added `probe_tcp()`, host/port helpers, `check_mysql_redis()`, a backend-only (`--backend`) mode, and conditional frontend startup/health checks; changed service readiness to TCP reachability with a fallback to starting local containers. |
| Git util — `shared/utils/git_util.py` | `clone_repo()` now retries a failed token/http clone by converting the URL to HTTPS and retrying before returning the result. |
| Frontend config — `frontend/next.config.js`, `frontend/package.json` | Removed the TURBOPACK env prefix from the dev script; unified webpack config with a `turbopack.resolveAlias` for remark-gfm; updated `allowedDevOrigins`. |
| Frontend asset — `frontend/public/mockServiceWorker.js` | Minor whitespace: added a leading blank line. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Start as start.sh
    participant Docker as Docker/Compose
    participant MySQL as MySQL
    participant Redis as Redis
    participant Frontend as Frontend Server

    Start->>MySQL: probe_tcp(host, port)
    Start->>Redis: probe_tcp(host, port)
    alt both reachable
        Start->>Frontend: conditionally start frontend (unless --backend)
    else any unreachable
        Start->>Docker: docker-compose up wegent/mysql, wegent/redis
        Docker->>MySQL: start
        Docker->>Redis: start
        Start->>MySQL: wait until reachable
        Start->>Redis: wait until reachable
        Start->>Frontend: conditionally start frontend (unless --backend)
    end
    Frontend->>Start: report URL using LOCAL_IP
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • qdaxb
  • Micro66
  • feifei325

"I nibble at Dockerfiles, hop through nets so fine,
I probe the TCP tunnels to make services align,
I swap builds for binary stages, alias Turbopack's light,
Backend-only hops by moonbeam, dev scripts now take flight,
A rabbit cheers the code—🥕 compact, and neat, and bright!"

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check — Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — Passed: the title accurately captures the three main changes in the PR: Windows/WSL compatibility improvements, the multi-stage executor Dockerfile, and the HTTP to HTTPS fallback for git clone.
  • Docstring Coverage — Passed: docstring coverage is 80.00%, which meets the required threshold of 80.00%.



coderabbitai Bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (3)
shared/utils/git_util.py (2)

78-87: Consider cleaning up the target directory before HTTPS retry.

If the initial HTTP clone fails after partially creating project_path (e.g., network interruption mid-clone), the HTTPS retry will also fail because git clone refuses to clone into an existing directory. This could leave users with confusing double-failure messages.

♻️ Proposed fix to clean up before retry
```diff
         # If http:// clone failed, retry with https:// in case the server enforces HTTPS
         if not success and project_url.startswith("http://"):
             https_url = "https://" + project_url[7:]
             logger.info(
                 f"http clone failed, retrying with https: {https_url}"
             )
+            # Clean up any partial clone artifacts before retry
+            import shutil
+            if os.path.exists(project_path):
+                shutil.rmtree(project_path, ignore_errors=True)
             success, error = clone_repo_with_token(
                 https_url, branch, project_path, user_name, token
             )
```

Note: You'll need to add import os at the top of the file if not already present (it's used in setup_git_hooks but imported inside the function).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@shared/utils/git_util.py` around lines 78 - 87, When retrying the clone with
HTTPS in the HTTP-fallback block, ensure any partially created project_path is
removed before calling clone_repo_with_token to avoid "directory exists"
failures: check if os.path.exists(project_path) and if so remove it (e.g.,
shutil.rmtree) prior to constructing https_url and calling
clone_repo_with_token(https_url, branch, project_path, user_name, token); also
add an import os (and import shutil if needed) at the top of the file if not
already present.

75-87: Consider preserving the original error for debugging.

When the HTTPS retry also fails, only the second error is returned. The original HTTP error might contain useful diagnostic information. Consider logging or combining both errors.

📝 Optional: Log both errors for better debugging
```diff
         success, error = clone_repo_with_token(
             project_url, branch, project_path, user_name, token
         )
         # If http:// clone failed, retry with https:// in case the server enforces HTTPS
         if not success and project_url.startswith("http://"):
+            original_error = error
             https_url = "https://" + project_url[7:]
             logger.info(
-                f"http clone failed, retrying with https: {https_url}"
+                f"http clone failed ({original_error}), retrying with https: {https_url}"
             )
             success, error = clone_repo_with_token(
                 https_url, branch, project_path, user_name, token
             )
+            if not success:
+                error = f"https retry failed: {error} (original http error: {original_error})"
         return success, error
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@shared/utils/git_util.py` around lines 75 - 87, When retrying an http://
clone with an https:// URL in clone_repo_with_token call flow, preserve and
surface the original HTTP error (the "error" returned from the first
clone_repo_with_token) before overwriting it with the HTTPS attempt; capture the
initial error into a variable (e.g., original_error), log it with context using
logger.info/logger.error before retrying, and if the HTTPS retry also fails
return or log a combined message (or tuple) that includes both original_error
and the second error so callers can see both failures; update the code around
success, error = clone_repo_with_token(...) and the subsequent retry/return
logic to use original_error and the combined result.
start.sh (1)

1173-1176: Optional: Declare and assign separately to avoid masking return values.

Shellcheck SC2155 flags that combining local with command substitution can mask failures. While unlikely to cause issues here (since check_node_installed validates first), it's a good practice fix.

📝 Proposed fix
```diff
     # Check Node.js (skipped in --backend mode)
     if [ "$CLI_BACKEND_ONLY" = false ]; then
         check_node_installed
-        local node_version=$(node --version 2>&1)
-        local npm_version=$(npm --version 2>&1)
+        local node_version
+        local npm_version
+        node_version=$(node --version 2>&1)
+        npm_version=$(npm --version 2>&1)
         echo -e "  ${GREEN}✓${NC} Node.js detected: $node_version (npm $npm_version)"
     fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@start.sh` around lines 1173 - 1176, The current use of local with command
substitutions (local node_version=$(node --version 2>&1) and local
npm_version=$(npm --version 2>&1)) can mask command failures per ShellCheck
SC2155; update the start.sh code that reports Node/npm versions by declaring the
variables first (e.g., local node_version local npm_version) and then assign
them in separate statements using command substitution (node_version=$(node
--version 2>&1); npm_version=$(npm --version 2>&1)), preserving the existing
echo that references node_version and npm_version.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 12545354-c43c-40d6-aaa8-860c0786f02d

📥 Commits

Reviewing files that changed from the base of the PR and between fc5d9f5 and 863bda5.

📒 Files selected for processing (8)
  • build_image.sh
  • build_image_mac.sh
  • docker/executor/Dockerfile
  • frontend/next.config.js
  • frontend/package.json
  • frontend/public/mockServiceWorker.js
  • shared/utils/git_util.py
  • start.sh

Comment on lines +57 to +68
```dockerfile
# ============================================================================
# Stage 2: builder — compiles executor Python source to standalone binary
# ============================================================================
FROM deps AS builder

# Install PyInstaller
RUN uv pip install --system pyinstaller

# Copy source code and build binary
COPY executor /app/executor
COPY shared /app/shared
RUN cd /app/executor && bash build.sh
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Verify hidden imports in executor.spec vs build_local.py

echo "=== Checking executor.spec for hidden imports ==="
rg -n "hiddenimports|hidden-import" executor/executor.spec || echo "No hiddenimports found in spec"

echo ""
echo "=== Checking build_local.py for hidden imports ==="
rg -n "hidden-import" executor/scripts/build_local.py | head -30

echo ""
echo "=== Checking runtime_hooks in executor.spec ==="
rg -n "runtime_hooks" executor/executor.spec

echo ""
echo "=== Checking if rthook_version.py exists ==="
fd "rthook_version.py" executor/
```

Repository: wecode-ai/Wegent

Length of output: 4284


🏁 Script executed:

```bash
#!/bin/bash
# Check for specific missing dependencies in executor.spec
echo "=== Checking for grpc, protobuf, watchdog, psutil in executor.spec ==="
rg -n "grpc|protobuf|watchdog|psutil" executor/executor.spec || echo "None found"

echo ""
echo "=== Checking executor pyproject.toml for these dependencies ==="
rg -n "grpc|protobuf|watchdog|psutil|socketio|aiohttp" executor/pyproject.toml

echo ""
echo "=== Verify if rthook_version.py is intended to be used ==="
head -20 executor/hooks/rthook_version.py
```

Repository: wecode-ai/Wegent

Length of output: 1133


Critical: PyInstaller spec is missing hidden imports for installed runtime dependencies.

The builder stage uses executor.spec for PyInstaller, but several dependencies installed in pyproject.toml are not declared in the spec's hiddenimports:

  1. socketio & aiohttp — Installed at lines 54-55 of pyproject.toml; required for local mode WebSocket communication
  2. grpcio, grpcio-tools & protobuf — Installed at lines 36-38; required for envd filesystem service
  3. watchdog — Installed at line 39; required for filesystem monitoring
  4. psutil — Installed at line 40; required for system metrics

Additionally, runtime_hooks=[] is empty in the spec, but executor/hooks/rthook_version.py exists and is designed to handle the --version flag before module initialization. This hook should be registered.

The Docker image will build successfully but the binary may fail at runtime when these modules are dynamically imported. Note that executor/scripts/build_local.py does include these hidden imports (e.g., lines 345-353 for socketio/aiohttp), showing this is a known pattern.

Update executor.spec to include the missing hidden imports and register the runtime hook.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker/executor/Dockerfile` around lines 57 - 68, The PyInstaller spec
(executor.spec) is missing hiddenimports and the runtime hook; update
executor.spec to add the runtime hidden imports for modules installed at runtime
(at minimum: "socketio", "aiohttp", "grpcio", "grpcio_tools" or "grpc",
"protobuf", "watchdog", and "psutil") into the spec's hiddenimports list (follow
the pattern used in executor/scripts/build_local.py for socketio/aiohttp), and
register the runtime hook by adding "executor/hooks/rthook_version.py" to the
spec's runtime_hooks array so the --version hook runs before module
initialization; ensure names match the importable package names used at runtime
and keep the spec consistent with build_local.py examples.

@@ -1,3 +1,4 @@


⚠️ Potential issue | 🟡 Minor

Revert the modification to this generated file.

This file contains an explicit warning: "Please do NOT modify this file." The added blank line appears to be an unintentional side effect of line ending normalization during the Windows/WSL compatibility fixes. Generated library files should remain pristine to avoid merge conflicts during updates and to maintain alignment with the upstream MSW library version.

🔄 Suggested fix

Remove the blank line at the start of the file to restore the original generated content:

```diff
-
 /* tslint:disable */
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/public/mockServiceWorker.js` at line 1, Revert the accidental edit
to the generated mockServiceWorker.js by removing the leading blank line so the
file exactly matches the upstream MSW output; locate the generated file
(mockServiceWorker.js) and restore the original content ensuring the warning
string "Please do NOT modify this file." remains at the very top with no
preceding newline, then commit the restored file so it stays identical to the
library-provided version.

Comment thread start.sh
Comment on lines +376 to +394
```bash
# Extract host and port from DATABASE_URL (mysql+pymysql://user:pass@host:port/db)
get_db_host_port() {
    local url="${DATABASE_URL:-}"
    if [ -z "$url" ]; then
        # Fall back to compose defaults
        echo "127.0.0.1 ${MYSQL_PORT:-3306}"
        return
    fi
    local hostport
    hostport=$(echo "$url" | sed -E 's|.*@([^/]+)/.*|\1|')
    local host port
    host=$(echo "$hostport" | cut -d: -f1)
    port=$(echo "$hostport" | cut -d: -f2)
    # Resolve docker container name to localhost when running outside docker
    if [ "$host" != "localhost" ] && [ "$host" != "127.0.0.1" ]; then
        host="127.0.0.1"
    fi
    echo "$host ${port:-3306}"
}
```

⚠️ Potential issue | 🟡 Minor

Edge case: URL parsing may fail if DATABASE_URL lacks database name.

The sed pattern at line 385 requires a /database suffix:

hostport=$(echo "$url" | sed -E 's|.*@([^/]+)/.*|\1|')

If DATABASE_URL is provided without the database part (e.g., mysql+pymysql://user:pass@host:port), the pattern won't match and sed will output the entire URL, causing subsequent cut commands to fail silently.

🛡️ Proposed fix for more robust URL parsing
```diff
 get_db_host_port() {
     local url="${DATABASE_URL:-}"
     if [ -z "$url" ]; then
         # Fall back to compose defaults
         echo "127.0.0.1 ${MYSQL_PORT:-3306}"
         return
     fi
     local hostport
-    hostport=$(echo "$url" | sed -E 's|.*@([^/]+)/.*|\1|')
+    # Handle both with and without a database name: @host:port/db or @host:port
+    hostport=$(echo "$url" | sed -E 's|.*@([^/]+)(/.*)?$|\1|')
     local host port
     host=$(echo "$hostport" | cut -d: -f1)
     port=$(echo "$hostport" | cut -d: -f2)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@start.sh` around lines 376 - 394, In get_db_host_port, the sed expression
that sets hostport fails when DATABASE_URL has no trailing “/db” and returns the
full URL; update the extraction so hostport captures the host[:port] whether or
not a trailing slash/database exists (e.g., change the sed/regex to stop at the
first slash or end-of-string), then keep the existing cut-based parsing of host
and port (hostport -> host, port) and the defaulting logic for port and
localhost resolution to ensure correct behavior when DATABASE_URL omits the
database name.
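For comparison, the same host/port extraction is easy to make edge-case-safe in Python with urllib.parse (illustrative only — start.sh itself is Bash, and this sketch handles only the parsing rule, not the localhost resolution):

```python
from urllib.parse import urlsplit

def db_host_port(database_url, default_port=3306):
    """Extract (host, port) from a SQLAlchemy-style URL such as
    mysql+pymysql://user:pass@host:port/db, tolerating a missing /db suffix."""
    # urlsplit only parses the authority after '//', so strip the compound
    # scheme (mysql+pymysql) first and hand it a bare network location.
    _, _, rest = database_url.partition("://")
    parts = urlsplit("//" + rest)
    return parts.hostname, parts.port or default_port
```

Because `urlsplit` stops the authority at the first `/` or end-of-string, the function behaves the same whether or not the database name is present.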

SSwser added 4 commits March 16, 2026 17:36
- Replace isTurbopack env var workaround with native turbopack.resolveAlias config
- Use TCP probe for MySQL/Redis readiness instead of docker container name checks, supporting custom DATABASE_URL/REDIS_URL in non-docker environments
- Add --backend flag to start.sh to skip frontend startup for backend-only development
- Increase health check timeouts to accommodate slower startup on Windows/WSL
- Expand allowedDevOrigins to cover localhost and 127.0.0.1
- Add 'source' target to run Python code directly for faster dev iteration
- Add 'binary' target (default) to run PyInstaller-compiled standalone binary
- Consolidate system packages and Python dependencies into shared 'deps' stage
- Update build scripts to explicitly target 'binary' stage for production builds
- Improve build caching by separating dependency installation from source code
- Retry git clone with https:// URL if http:// clone fails
- Log retry attempt when falling back to https protocol
- Improve reliability when git server enforces HTTPS connections
- Add resolveAlias function to convert Windows backslash paths to forward slashes
- Update turbopack resolveAlias to use relative paths (./src/...) instead of absolute paths
- Fixes "chunking context does not support external modules" errors in WSL/Windows environments
@SSwser SSwser force-pushed the fix/windows-wsl-compat-and-executor-dockerfile branch from 863bda5 to 3a4caec on March 16, 2026 at 10:21
coderabbitai Bot left a comment

Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docker/executor/Dockerfile`:
- Around line 49-51: Add the local shared package as a dependency in
executor/pyproject.toml (e.g., add an entry like "wegent-shared = { path =
\"../shared\" }" in the dependencies section so the executor can import
logger/models/status/telemetry/utils), and ensure the Docker build installs that
local path by making the deps stage include the shared source before installing
(adjust the Dockerfile deps stage to COPY the shared directory into the image or
otherwise make the ../shared path available during the RUN pip install step).

In `@start.sh`:
- Around line 389-392: The current logic unconditionally rewrites any non-local
host variable to 127.0.0.1 (the host variable used when resolving container
names), which breaks custom DATABASE_URL/REDIS_URL endpoints; change the
normalization so only recognized Docker Compose service aliases (e.g., "db",
"postgres", "postgresql", "redis", "cache" — whatever your compose uses) are
rewritten to 127.0.0.1, and leave all other hosts untouched; update the host
normalization block that sets host="127.0.0.1" (the snippet operating on the
host variable) and the analogous block later that handles DATABASE_URL/REDIS_URL
to perform a whitelist check against known aliases instead of rewriting every
non-local host.
- Around line 1329-1334: The script currently always prints frontend URLs
(variables WEGENT_FRONTEND_PORT and WEGENT_SOCKET_URL) even when run with the
--backend flag; update the printing block to check the backend-only flag used by
the script (the variable set when parsing --backend, e.g., BACKEND_ONLY or
similar) and only print the frontend-related echo lines when that flag is
false/empty; wrap the lines that echo the Remote Frontend, Frontend URL and
their labels in an if ! [ "$BACKEND_ONLY" = "true" ] (or equivalent variable
name used in your parser) conditional so backend-only runs do not advertise
frontend access.
- Around line 367-371: The /dev/tcp probe can block indefinitely; wrap the bash
TCP probe (the conditional that runs echo > /dev/tcp/"$host"/"$port") in a
bounded timeout so it fails quickly (e.g., call the probe via the CLI timeout
utility with a short timeout like 1–2s) and keep the existing nc fallback (nc -z
-w2 "$host" "$port") unchanged; update the conditional so the timeout'd probe
returns non-zero on expiry and the script proceeds to the nc branch.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 405e4da6-7203-49f4-8597-58f39b278afb

📥 Commits

Reviewing files that changed from the base of the PR and between 863bda5 and 3a4caec.

📒 Files selected for processing (8)
  • build_image.sh
  • build_image_mac.sh
  • docker/executor/Dockerfile
  • frontend/next.config.js
  • frontend/package.json
  • frontend/public/mockServiceWorker.js
  • shared/utils/git_util.py
  • start.sh
🚧 Files skipped from review as they are similar to previous changes (2)
  • frontend/public/mockServiceWorker.js
  • shared/utils/git_util.py

Comment on lines +49 to +51
```dockerfile
# Install Python executor dependencies (cached unless pyproject.toml changes)
COPY executor/pyproject.toml /app/executor/pyproject.toml
RUN cd /app/executor && uv pip install --system --no-cache -r pyproject.toml
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail

echo "1) Check whether executor declares local shared dependency"
rg -n '(\.\./shared|shared\s*@\s*file:|path\s*=\s*"\.\./shared"|tool\.uv\.sources)' executor/pyproject.toml || true

echo
echo "2) Inspect deps install order in docker/executor/Dockerfile"
sed -n '46,55p' docker/executor/Dockerfile

echo
echo "3) Compare with standalone install order"
sed -n '96,106p' docker/standalone/Dockerfile
```

Repository: wecode-ai/Wegent

Length of output: 1061


🏁 Script executed:

cat executor/pyproject.toml

Repository: wecode-ai/Wegent

Length of output: 2758


🏁 Script executed:

head -55 docker/executor/Dockerfile | tail -20

Repository: wecode-ai/Wegent

Length of output: 1019


🏁 Script executed:

rg -n 'from shared|import shared' executor/

Repository: wecode-ai/Wegent

Length of output: 13240


🏁 Script executed:

grep -r "shared" executor/pyproject.toml

Repository: wecode-ai/Wegent

Length of output: 175


Executor code imports from shared but pyproject.toml doesn't declare it—this causes runtime failures.

The executor extensively imports from shared module (logger, models, status, telemetry, utils), but executor/pyproject.toml does not declare a dependency on shared. At lines 49-51, when executor deps are installed in the deps stage, the shared module is not yet available, causing ImportError at runtime.

Proposed Docker fix
```diff
 COPY executor/pyproject.toml /app/executor/pyproject.toml
-RUN cd /app/executor && uv pip install --system --no-cache -r pyproject.toml
+COPY shared /app/shared
+RUN uv pip install --system --no-cache /app/shared \
+    && cd /app/executor \
+    && uv pip install --system --no-cache -r pyproject.toml
```

Better fix: add shared as a dependency in executor/pyproject.toml (e.g., `wegent-shared = { path = "../shared" }`), matching the pattern used in backend and other services.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker/executor/Dockerfile` around lines 49 - 51, Add the local shared
package as a dependency in executor/pyproject.toml (e.g., add an entry like
"wegent-shared = { path = \"../shared\" }" in the dependencies section so the
executor can import logger/models/status/telemetry/utils), and ensure the Docker
build installs that local path by making the deps stage include the shared
source before installing (adjust the Dockerfile deps stage to COPY the shared
directory into the image or otherwise make the ../shared path available during
the RUN pip install step).

Comment thread start.sh
Comment on lines +367 to +371
```bash
# Try bash /dev/tcp first (no extra tools needed), fall back to nc
if (echo > /dev/tcp/"$host"/"$port") 2>/dev/null; then
    return 0
elif command -v nc >/dev/null 2>&1 && nc -z -w2 "$host" "$port" 2>/dev/null; then
    return 0
```

⚠️ Potential issue | 🟠 Major

Add a bounded timeout to TCP probe attempts.

The /dev/tcp probe on line 368 has no connect timeout, so it can block for minutes on unreachable or filtered hosts; the surrounding "60s wait" loop can therefore far exceed its intended bound and stall startup.

Suggested patch
 probe_tcp() {
     local host=$1
     local port=$2
-    # Try bash /dev/tcp first (no extra tools needed), fall back to nc
-    if (echo > /dev/tcp/"$host"/"$port") 2>/dev/null; then
+    # Prefer bounded-time probes
+    if command -v nc >/dev/null 2>&1 && nc -z -w2 "$host" "$port" 2>/dev/null; then
         return 0
-    elif command -v nc >/dev/null 2>&1 && nc -z -w2 "$host" "$port" 2>/dev/null; then
+    elif command -v timeout >/dev/null 2>&1 && timeout 2 bash -c ">/dev/tcp/$host/$port" 2>/dev/null; then
+        return 0
+    elif (echo > /dev/tcp/"$host"/"$port") 2>/dev/null; then
         return 0
     fi
     return 1
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@start.sh` around lines 367 - 371, The /dev/tcp probe can block indefinitely;
wrap the bash TCP probe (the conditional that runs echo >
/dev/tcp/"$host"/"$port") in a bounded timeout so it fails quickly (e.g., call
the probe via the CLI timeout utility with a short timeout like 1–2s) and keep
the existing nc fallback (nc -z -w2 "$host" "$port") unchanged; update the
conditional so the timeout'd probe returns non-zero on expiry and the script
proceeds to the nc branch.
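As a standalone illustration of the patch, the probe can be ordered so every path has a finite worst case (this assumes GNU coreutils `timeout` and/or `nc` are available; the plain /dev/tcp attempt remains only as a last resort):

```shell
#!/usr/bin/env bash
# Bounded-time TCP reachability probe (sketch).
probe_tcp() {
    local host=$1 port=$2
    # nc bounds its own wait with -w2
    if command -v nc >/dev/null 2>&1; then
        nc -z -w2 "$host" "$port" 2>/dev/null
        return $?
    fi
    # wrap the bash /dev/tcp probe in `timeout` so it cannot hang
    if command -v timeout >/dev/null 2>&1; then
        timeout 2 bash -c ">/dev/tcp/$host/$port" 2>/dev/null
        return $?
    fi
    # last resort: unbounded probe (only when neither tool exists)
    (echo > /dev/tcp/"$host"/"$port") 2>/dev/null
}
```

With `nc -w2` or `timeout 2`, an unreachable host costs at most about two seconds per attempt, so a 60-second retry loop stays close to its intended bound.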

Comment thread start.sh
Comment on lines +389 to 392
    # Resolve docker container name to localhost when running outside docker
    if [ "$host" != "localhost" ] && [ "$host" != "127.0.0.1" ]; then
        host="127.0.0.1"
    fi

⚠️ Potential issue | 🟠 Major

Do not rewrite all non-local DB/Redis hosts to 127.0.0.1.

Lines 390-391 and 408-409 currently force external hosts to localhost, which breaks custom DATABASE_URL/REDIS_URL endpoints and undermines the stated compatibility goal. Only known compose aliases should be normalized.

Suggested patch
-    # Resolve docker container name to localhost when running outside docker
-    if [ "$host" != "localhost" ] && [ "$host" != "127.0.0.1" ]; then
+    # Resolve only local compose alias when running outside docker
+    if [ "$host" = "mysql" ]; then
         host="127.0.0.1"
     fi
-    if [ "$host" != "localhost" ] && [ "$host" != "127.0.0.1" ]; then
+    if [ "$host" = "redis" ]; then
         host="127.0.0.1"
     fi

Also applies to: 408-410

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@start.sh` around lines 389 - 392, The current logic unconditionally rewrites
any non-local host variable to 127.0.0.1 (the host variable used when resolving
container names), which breaks custom DATABASE_URL/REDIS_URL endpoints; change
the normalization so only recognized Docker Compose service aliases (e.g., "db",
"postgres", "postgresql", "redis", "cache" — whatever your compose uses) are
rewritten to 127.0.0.1, and leave all other hosts untouched; update the host
normalization block that sets host="127.0.0.1" (the snippet operating on the
host variable) and the analogous block later that handles DATABASE_URL/REDIS_URL
to perform a whitelist check against known aliases instead of rewriting every
non-local host.
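The whitelist the prompt describes can be a small helper; the alias names below (`mysql`, `redis`) are assumptions that should match the repo's actual compose service names:

```shell
#!/usr/bin/env bash
# normalize_host: rewrite only known docker-compose service aliases to
# localhost; any other host (custom DATABASE_URL/REDIS_URL endpoints,
# real hostnames, IPs) passes through untouched.
normalize_host() {
    case "$1" in
        mysql|redis) echo "127.0.0.1" ;;
        *)           echo "$1" ;;
    esac
}

normalize_host mysql            # -> 127.0.0.1
normalize_host db.example.com   # -> db.example.com
```

This keeps the outside-docker convenience for the default compose setup while leaving user-supplied endpoints alone.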

Comment thread start.sh
Comment on lines +1329 to 1334
    echo -e "  Remote Frontend: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
    echo -e "  Socket URL: ${BLUE}$WEGENT_SOCKET_URL${NC}"
    echo ""
    echo -e "${YELLOW}📋 Share with others for remote access:${NC}"
    echo -e "  Frontend URL: ${BLUE}http://$(get_local_ip):$WEGENT_FRONTEND_PORT${NC}"
    echo -e "  Frontend URL: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
    echo -e "  Socket URL: ${BLUE}$WEGENT_SOCKET_URL${NC}"

⚠️ Potential issue | 🟡 Minor

Backend-only mode should not advertise frontend URLs.

When --backend is used (line 1292), lines 1329 and 1333 still print frontend access URLs, which is misleading.

Suggested patch
-    echo -e "  Remote Frontend: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
+    if [ "$CLI_BACKEND_ONLY" = false ]; then
+        echo -e "  Remote Frontend: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
+    fi
@@
-    echo -e "  Frontend URL: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
+    if [ "$CLI_BACKEND_ONLY" = false ]; then
+        echo -e "  Frontend URL: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
+    fi
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-    echo -e "  Remote Frontend: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
-    echo -e "  Socket URL: ${BLUE}$WEGENT_SOCKET_URL${NC}"
-    echo ""
-    echo -e "${YELLOW}📋 Share with others for remote access:${NC}"
-    echo -e "  Frontend URL: ${BLUE}http://$(get_local_ip):$WEGENT_FRONTEND_PORT${NC}"
-    echo -e "  Frontend URL: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
-    echo -e "  Socket URL: ${BLUE}$WEGENT_SOCKET_URL${NC}"
+    if [ "$CLI_BACKEND_ONLY" = false ]; then
+        echo -e "  Remote Frontend: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
+    fi
+    echo -e "  Socket URL: ${BLUE}$WEGENT_SOCKET_URL${NC}"
+    echo ""
+    echo -e "${YELLOW}📋 Share with others for remote access:${NC}"
+    if [ "$CLI_BACKEND_ONLY" = false ]; then
+        echo -e "  Frontend URL: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
+    fi
+    echo -e "  Socket URL: ${BLUE}$WEGENT_SOCKET_URL${NC}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@start.sh` around lines 1329 - 1334, The script currently always prints
frontend URLs (variables WEGENT_FRONTEND_PORT and WEGENT_SOCKET_URL) even when
run with the --backend flag; update the printing block to check the backend-only
flag used by the script (the variable set when parsing --backend, e.g.,
BACKEND_ONLY or similar) and only print the frontend-related echo lines when
that flag is false/empty; wrap the lines that echo the Remote Frontend, Frontend
URL and their labels in an if ! [ "$BACKEND_ONLY" = "true" ] (or equivalent
variable name used in your parser) conditional so backend-only runs do not
advertise frontend access.
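A self-contained sketch of the gating, reusing the `CLI_BACKEND_ONLY` flag name from the suggested patch (the ports and URLs below are placeholders, not the script's real values):

```shell
#!/usr/bin/env bash
# Parse a --backend flag, then print frontend URLs only when the full
# stack is running.
CLI_BACKEND_ONLY=false
for arg in "$@"; do
    case "$arg" in
        --backend) CLI_BACKEND_ONLY=true ;;
    esac
done

print_access_urls() {
    if [ "$CLI_BACKEND_ONLY" = false ]; then
        echo "Frontend URL: http://127.0.0.1:3000"   # placeholder port
    fi
    echo "Socket URL: ws://127.0.0.1:8080"           # placeholder port
}

print_access_urls
```

Running with `--backend` prints only the Socket URL; without the flag, both lines appear.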
