fix: improve Windows/WSL compatibility, add multi-stage executor Dockerfile, and add http-to-https fallback for git clone (#740)
📝 Walkthrough
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 3
🧹 Nitpick comments (3)
shared/utils/git_util.py (2)
78-87: Consider cleaning up the target directory before the HTTPS retry. If the initial HTTP clone fails after partially creating `project_path` (e.g., a network interruption mid-clone), the HTTPS retry will also fail because `git clone` refuses to clone into an existing directory. This could leave users with confusing double-failure messages.

♻️ Proposed fix to clean up before retry

```diff
 # If http:// clone failed, retry with https:// in case the server enforces HTTPS
 if not success and project_url.startswith("http://"):
     https_url = "https://" + project_url[7:]
     logger.info(
         f"http clone failed, retrying with https: {https_url}"
     )
+    # Clean up any partial clone artifacts before retry
+    import shutil
+    if os.path.exists(project_path):
+        shutil.rmtree(project_path, ignore_errors=True)
     success, error = clone_repo_with_token(
         https_url, branch, project_path, user_name, token
     )
```

Note: you'll need to add `import os` at the top of the file if not already present (it's used in `setup_git_hooks` but imported inside the function).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shared/utils/git_util.py` around lines 78 - 87, When retrying the clone with HTTPS in the HTTP-fallback block, ensure any partially created project_path is removed before calling clone_repo_with_token to avoid "directory exists" failures: check if os.path.exists(project_path) and if so remove it (e.g., shutil.rmtree) prior to constructing https_url and calling clone_repo_with_token(https_url, branch, project_path, user_name, token); also add an import os (and import shutil if needed) at the top of the file if not already present.
75-87: Consider preserving the original error for debugging. When the HTTPS retry also fails, only the second error is returned. The original HTTP error might contain useful diagnostic information. Consider logging or combining both errors.

📝 Optional: Log both errors for better debugging

```diff
 success, error = clone_repo_with_token(
     project_url, branch, project_path, user_name, token
 )
 # If http:// clone failed, retry with https:// in case the server enforces HTTPS
 if not success and project_url.startswith("http://"):
+    original_error = error
     https_url = "https://" + project_url[7:]
     logger.info(
-        f"http clone failed, retrying with https: {https_url}"
+        f"http clone failed ({original_error}), retrying with https: {https_url}"
     )
     success, error = clone_repo_with_token(
         https_url, branch, project_path, user_name, token
     )
+    if not success:
+        error = f"https retry failed: {error} (original http error: {original_error})"
 return success, error
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@shared/utils/git_util.py` around lines 75 - 87, When retrying an http:// clone with an https:// URL in clone_repo_with_token call flow, preserve and surface the original HTTP error (the "error" returned from the first clone_repo_with_token) before overwriting it with the HTTPS attempt; capture the initial error into a variable (e.g., original_error), log it with context using logger.info/logger.error before retrying, and if the HTTPS retry also fails return or log a combined message (or tuple) that includes both original_error and the second error so callers can see both failures; update the code around success, error = clone_repo_with_token(...) and the subsequent retry/return logic to use original_error and the combined result.

start.sh (1)
1173-1176: Optional: Declare and assign separately to avoid masking return values. ShellCheck SC2155 flags that combining `local` with command substitution can mask failures. While unlikely to cause issues here (since `check_node_installed` validates first), it's a good-practice fix.

📝 Proposed fix

```diff
 # Check Node.js (skipped in --backend mode)
 if [ "$CLI_BACKEND_ONLY" = false ]; then
     check_node_installed
-    local node_version=$(node --version 2>&1)
-    local npm_version=$(npm --version 2>&1)
+    local node_version
+    local npm_version
+    node_version=$(node --version 2>&1)
+    npm_version=$(npm --version 2>&1)
     echo -e "  ${GREEN}✓${NC} Node.js detected: $node_version (npm $npm_version)"
 fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@start.sh` around lines 1173 - 1176, The current use of local with command substitutions (local node_version=$(node --version 2>&1) and local npm_version=$(npm --version 2>&1)) can mask command failures per ShellCheck SC2155; update the start.sh code that reports Node/npm versions by declaring the variables first (e.g., local node_version local npm_version) and then assign them in separate statements using command substitution (node_version=$(node --version 2>&1); npm_version=$(npm --version 2>&1)), preserving the existing echo that references node_version and npm_version.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 12545354-c43c-40d6-aaa8-860c0786f02d
📒 Files selected for processing (8)
- build_image.sh
- build_image_mac.sh
- docker/executor/Dockerfile
- frontend/next.config.js
- frontend/package.json
- frontend/public/mockServiceWorker.js
- shared/utils/git_util.py
- start.sh
```dockerfile
# ============================================================================
# Stage 2: builder — compiles executor Python source to standalone binary
# ============================================================================
FROM deps AS builder

# Install PyInstaller
RUN uv pip install --system pyinstaller

# Copy source code and build binary
COPY executor /app/executor
COPY shared /app/shared
RUN cd /app/executor && bash build.sh
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Verify hidden imports in executor.spec vs build_local.py
echo "=== Checking executor.spec for hidden imports ==="
rg -n "hiddenimports|hidden-import" executor/executor.spec || echo "No hiddenimports found in spec"
echo ""
echo "=== Checking build_local.py for hidden imports ==="
rg -n "hidden-import" executor/scripts/build_local.py | head -30
echo ""
echo "=== Checking runtime_hooks in executor.spec ==="
rg -n "runtime_hooks" executor/executor.spec
echo ""
echo "=== Checking if rthook_version.py exists ==="
fd "rthook_version.py" executor/
```

Repository: wecode-ai/Wegent
Length of output: 4284
🏁 Script executed:
```bash
#!/bin/bash
# Check for specific missing dependencies in executor.spec
echo "=== Checking for grpc, protobuf, watchdog, psutil in executor.spec ==="
rg -n "grpc|protobuf|watchdog|psutil" executor/executor.spec || echo "None found"
echo ""
echo "=== Checking executor pyproject.toml for these dependencies ==="
rg -n "grpc|protobuf|watchdog|psutil|socketio|aiohttp" executor/pyproject.toml
echo ""
echo "=== Verify if rthook_version.py is intended to be used ==="
head -20 executor/hooks/rthook_version.py
```

Repository: wecode-ai/Wegent
Length of output: 1133
Critical: PyInstaller spec is missing hidden imports for installed runtime dependencies.
The builder stage uses executor.spec for PyInstaller, but several dependencies installed in pyproject.toml are not declared in the spec's hiddenimports:
- socketio & aiohttp — Installed at lines 54-55 of pyproject.toml; required for local mode WebSocket communication
- grpcio, grpcio-tools & protobuf — Installed at lines 36-38; required for envd filesystem service
- watchdog — Installed at line 39; required for filesystem monitoring
- psutil — Installed at line 40; required for system metrics
Additionally, runtime_hooks=[] is empty in the spec, but executor/hooks/rthook_version.py exists and is designed to handle the --version flag before module initialization. This hook should be registered.
The Docker image will build successfully but the binary may fail at runtime when these modules are dynamically imported. Note that executor/scripts/build_local.py does include these hidden imports (e.g., lines 345-353 for socketio/aiohttp), showing this is a known pattern.
Update executor.spec to include the missing hidden imports and register the runtime hook.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docker/executor/Dockerfile` around lines 57 - 68, The PyInstaller spec
(executor.spec) is missing hiddenimports and the runtime hook; update
executor.spec to add the runtime hidden imports for modules installed at runtime
(at minimum: "socketio", "aiohttp", "grpcio", "grpcio_tools" or "grpc",
"protobuf", "watchdog", and "psutil") into the spec's hiddenimports list (follow
the pattern used in executor/scripts/build_local.py for socketio/aiohttp), and
register the runtime hook by adding "executor/hooks/rthook_version.py" to the
spec's runtime_hooks array so the --version hook runs before module
initialization; ensure names match the importable package names used at runtime
and keep the spec consistent with build_local.py examples.
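For reference, the spec change the review asks for might look like the fragment below. This is a sketch under assumptions: the module names come from the review's own list, and the entry-point filename and other `Analysis` arguments are placeholders, not the project's actual spec contents.

```python
# executor.spec (fragment, hypothetical — not the repository's real spec)
hiddenimports = [
    "socketio", "aiohttp",      # local-mode WebSocket communication
    "grpc", "grpc_tools",       # envd filesystem service
    "google.protobuf",
    "watchdog",                 # filesystem monitoring
    "psutil",                   # system metrics
]

a = Analysis(
    ["main.py"],                # placeholder entry point
    hiddenimports=hiddenimports,
    runtime_hooks=["hooks/rthook_version.py"],
)
```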
```diff
@@ -1,3 +1,4 @@
+
 /* tslint:disable */
```
Revert the modification to this generated file.
This file contains an explicit warning: "Please do NOT modify this file." The added blank line appears to be an unintentional side effect of line ending normalization during the Windows/WSL compatibility fixes. Generated library files should remain pristine to avoid merge conflicts during updates and to maintain alignment with the upstream MSW library version.
🔄 Suggested fix
Remove the blank line at the start of the file to restore the original generated content:
```diff
-
 /* tslint:disable */
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
/* tslint:disable */
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/public/mockServiceWorker.js` at line 1, Revert the accidental edit
to the generated mockServiceWorker.js by removing the leading blank line so the
file exactly matches the upstream MSW output; locate the generated file
(mockServiceWorker.js) and restore the original content ensuring the warning
string "Please do NOT modify this file." remains at the very top with no
preceding newline, then commit the restored file so it stays identical to the
library-provided version.
```bash
# Extract host and port from DATABASE_URL (mysql+pymysql://user:pass@host:port/db)
get_db_host_port() {
    local url="${DATABASE_URL:-}"
    if [ -z "$url" ]; then
        # Fall back to compose defaults
        echo "127.0.0.1 ${MYSQL_PORT:-3306}"
        return
    fi
    local hostport
    hostport=$(echo "$url" | sed -E 's|.*@([^/]+)/.*|\1|')
    local host port
    host=$(echo "$hostport" | cut -d: -f1)
    port=$(echo "$hostport" | cut -d: -f2)
    # Resolve docker container name to localhost when running outside docker
    if [ "$host" != "localhost" ] && [ "$host" != "127.0.0.1" ]; then
        host="127.0.0.1"
    fi
    echo "$host ${port:-3306}"
}
```
Edge case: URL parsing may fail if DATABASE_URL lacks a database name.
The sed pattern at line 385 requires a /database suffix:

```bash
hostport=$(echo "$url" | sed -E 's|.*@([^/]+)/.*|\1|')
```

If DATABASE_URL is provided without the database part (e.g., `mysql+pymysql://user:pass@host:port`), the pattern won't match and sed will output the entire URL, causing the subsequent cut commands to fail silently.
🛡️ Proposed fix for more robust URL parsing

```diff
 get_db_host_port() {
     local url="${DATABASE_URL:-}"
     if [ -z "$url" ]; then
         # Fall back to compose defaults
         echo "127.0.0.1 ${MYSQL_PORT:-3306}"
         return
     fi
     local hostport
-    hostport=$(echo "$url" | sed -E 's|.*@([^/]+)/.*|\1|')
+    # Handle both with and without database name: @host:port/db or @host:port
+    hostport=$(echo "$url" | sed -E 's|.*@([^/]+)(/.*)?$|\1|')
     local host port
     host=$(echo "$hostport" | cut -d: -f1)
     port=$(echo "$hostport" | cut -d: -f2)
```
port=$(echo "$hostport" | cut -d: -f2)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@start.sh` around lines 376 - 394, In get_db_host_port, the sed expression
that sets hostport fails when DATABASE_URL has no trailing “/db” and returns the
full URL; update the extraction so hostport captures the host[:port] whether or
not a trailing slash/database exists (e.g., change the sed/regex to stop at the
first slash or end-of-string), then keep the existing cut-based parsing of host
and port (hostport -> host, port) and the defaulting logic for port and
localhost resolution to ensure correct behavior when DATABASE_URL omits the
database name.
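The hardened parser can be exercised on its own. The sketch below is parameterized to take the URL as an argument for testability (the real function reads DATABASE_URL and also normalizes docker hostnames, which is omitted here); it additionally uses `cut -s` so the port field comes back empty, rather than repeating the host, when no colon is present.

```shell
#!/usr/bin/env bash
# Sketch of the suggested fix, simplified for standalone testing.
get_db_host_port() {
  local url="${1:-}"
  if [ -z "$url" ]; then
    echo "127.0.0.1 3306"
    return
  fi
  local hostport host port
  # Capture host[:port] whether or not a trailing /database is present
  hostport=$(echo "$url" | sed -E 's|.*@([^/]+)(/.*)?$|\1|')
  host=$(echo "$hostport" | cut -d: -f1)
  port=$(echo "$hostport" | cut -s -d: -f2)   # -s: output empty when no ':' present
  echo "$host ${port:-3306}"
}
```

With this regex, `mysql+pymysql://u:p@dbhost:3307/wegent` and `mysql+pymysql://u:p@dbhost:3307` both parse to the same host and port, and a URL without a port falls back to 3306.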
- Replace isTurbopack env var workaround with native turbopack.resolveAlias config
- Use TCP probe for MySQL/Redis readiness instead of docker container name checks, supporting custom DATABASE_URL/REDIS_URL in non-docker environments
- Add --backend flag to start.sh to skip frontend startup for backend-only development
- Increase health check timeouts to accommodate slower startup on Windows/WSL
- Expand allowedDevOrigins to cover localhost and 127.0.0.1
- Add 'source' target to run Python code directly for faster dev iteration
- Add 'binary' target (default) to run PyInstaller-compiled standalone binary
- Consolidate system packages and Python dependencies into shared 'deps' stage
- Update build scripts to explicitly target 'binary' stage for production builds
- Improve build caching by separating dependency installation from source code
- Retry git clone with https:// URL if http:// clone fails
- Log retry attempt when falling back to https protocol
- Improve reliability when git server enforces HTTPS connections
- Add resolveAlias function to convert Windows backslash paths to forward slashes
- Update turbopack resolveAlias to use relative paths (./src/...) instead of absolute paths
- Fixes "chunking context does not support external modules" errors in WSL/Windows environments
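The backslash-to-slash conversion described above can be illustrated with a small helper. The names `toPosixPath` and `makeAlias` are illustrative only; the PR's actual function in next.config.js may differ.

```javascript
// Normalize Windows-style paths so turbopack aliases stay consistent
// across Windows, WSL, and Linux. Illustrative sketch, not the PR's code.
function toPosixPath(p) {
  return p.replace(/\\/g, "/");
}

// Build a relative alias entry of the "./src/..." form mentioned in the commit
function makeAlias(relPath) {
  const normalized = toPosixPath(relPath);
  return normalized.startsWith("./") ? normalized : "./" + normalized;
}
```

A Windows path such as `src\components\ui` becomes `./src/components/ui`, which is a form turbopack's resolveAlias accepts on every platform.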
Force-pushed from 863bda5 to 3a4caec (Compare)
Actionable comments posted: 4
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 405e4da6-7203-49f4-8597-58f39b278afb
📒 Files selected for processing (8)
- build_image.sh
- build_image_mac.sh
- docker/executor/Dockerfile
- frontend/next.config.js
- frontend/package.json
- frontend/public/mockServiceWorker.js
- shared/utils/git_util.py
- start.sh
🚧 Files skipped from review as they are similar to previous changes (2)
- frontend/public/mockServiceWorker.js
- shared/utils/git_util.py
```dockerfile
# Install Python executor dependencies (cached unless pyproject.toml changes)
COPY executor/pyproject.toml /app/executor/pyproject.toml
RUN cd /app/executor && uv pip install --system --no-cache -r pyproject.toml
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "1) Check whether executor declares local shared dependency"
rg -n '(\.\./shared|shared\s*@\s*file:|path\s*=\s*"\.\./shared"|tool\.uv\.sources)' executor/pyproject.toml || true
echo
echo "2) Inspect deps install order in docker/executor/Dockerfile"
sed -n '46,55p' docker/executor/Dockerfile
echo
echo "3) Compare with standalone install order"
sed -n '96,106p' docker/standalone/Dockerfile
```

Repository: wecode-ai/Wegent
Length of output: 1061
🏁 Script executed:
```bash
cat executor/pyproject.toml
```

Repository: wecode-ai/Wegent
Length of output: 2758
🏁 Script executed:
```bash
head -55 docker/executor/Dockerfile | tail -20
```

Repository: wecode-ai/Wegent
Length of output: 1019
🏁 Script executed:
```bash
rg -n 'from shared|import shared' executor/
```

Repository: wecode-ai/Wegent
Length of output: 13240
🏁 Script executed:
```bash
grep -r "shared" executor/pyproject.toml
```

Repository: wecode-ai/Wegent
Length of output: 175
Executor code imports from shared, but pyproject.toml doesn't declare it, which causes runtime failures.
The executor extensively imports from the shared module (logger, models, status, telemetry, utils), but executor/pyproject.toml does not declare a dependency on shared. At lines 49-51, when executor deps are installed in the deps stage, the shared module is not yet available, causing ImportError at runtime.
Proposed Docker fix

```diff
 COPY executor/pyproject.toml /app/executor/pyproject.toml
-RUN cd /app/executor && uv pip install --system --no-cache -r pyproject.toml
+COPY shared /app/shared
+RUN uv pip install --system --no-cache /app/shared \
+    && cd /app/executor \
+    && uv pip install --system --no-cache -r pyproject.toml
```

Better fix: add shared as a dependency in executor/pyproject.toml (e.g., `wegent-shared = { path = "../shared" }`), matching the pattern used in backend and other services.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docker/executor/Dockerfile` around lines 49 - 51, Add the local shared
package as a dependency in executor/pyproject.toml (e.g., add an entry like
"wegent-shared = { path = \"../shared\" }" in the dependencies section so the
executor can import logger/models/status/telemetry/utils), and ensure the Docker
build installs that local path by making the deps stage include the shared
source before installing (adjust the Dockerfile deps stage to COPY the shared
directory into the image or otherwise make the ../shared path available during
the RUN pip install step).
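The "better fix" could look like the following pyproject.toml fragment. This is a sketch under assumptions: the distribution name `wegent-shared` and the use of `[tool.uv.sources]` are guesses based on the review's suggestion, not verified against the repository.

```toml
# executor/pyproject.toml (fragment, hypothetical)
[project]
dependencies = [
    "wegent-shared",
]

[tool.uv.sources]
wegent-shared = { path = "../shared" }
```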
```bash
# Try bash /dev/tcp first (no extra tools needed), fall back to nc
if (echo > /dev/tcp/"$host"/"$port") 2>/dev/null; then
    return 0
elif command -v nc >/dev/null 2>&1 && nc -z -w2 "$host" "$port" 2>/dev/null; then
    return 0
```
Add a bounded timeout to TCP probe attempts.
Line 368 (/dev/tcp) can block for a long time on unreachable hosts, so the “60s wait” loop can exceed its intended bound and stall startup.
Suggested patch

```diff
 probe_tcp() {
     local host=$1
     local port=$2
-    # Try bash /dev/tcp first (no extra tools needed), fall back to nc
-    if (echo > /dev/tcp/"$host"/"$port") 2>/dev/null; then
+    # Prefer bounded-time probes
+    if command -v nc >/dev/null 2>&1 && nc -z -w2 "$host" "$port" 2>/dev/null; then
         return 0
-    elif command -v nc >/dev/null 2>&1 && nc -z -w2 "$host" "$port" 2>/dev/null; then
+    elif command -v timeout >/dev/null 2>&1 && timeout 2 bash -c ">/dev/tcp/$host/$port" 2>/dev/null; then
+        return 0
+    elif (echo > /dev/tcp/"$host"/"$port") 2>/dev/null; then
         return 0
     fi
     return 1
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@start.sh` around lines 367 - 371, The /dev/tcp probe can block indefinitely;
wrap the bash TCP probe (the conditional that runs echo >
/dev/tcp/"$host"/"$port") in a bounded timeout so it fails quickly (e.g., call
the probe via the CLI timeout utility with a short timeout like 1–2s) and keep
the existing nc fallback (nc -z -w2 "$host" "$port") unchanged; update the
conditional so the timeout'd probe returns non-zero on expiry and the script
proceeds to the nc branch.
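Putting the suggestion together, a bounded probe can be written and exercised on its own. This sketch assumes `nc` and/or `timeout` are available; the unbounded `/dev/tcp` branch only runs as a last resort.

```shell
probe_tcp() {
  local host=$1 port=$2
  # Prefer probes with built-in timeouts so unreachable hosts fail fast
  if command -v nc >/dev/null 2>&1 && nc -z -w2 "$host" "$port" 2>/dev/null; then
    return 0
  elif command -v timeout >/dev/null 2>&1 && timeout 2 bash -c ">/dev/tcp/$host/$port" 2>/dev/null; then
    return 0
  elif (echo > /dev/tcp/"$host"/"$port") 2>/dev/null; then
    # Last resort: unbounded /dev/tcp when neither nc nor timeout exists
    return 0
  fi
  return 1
}
```

With this ordering, the unbounded `/dev/tcp` probe only fires on systems that lack both `nc` and `timeout`, which keeps the 60-second wait loop close to its intended bound.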
```bash
# Resolve docker container name to localhost when running outside docker
if [ "$host" != "localhost" ] && [ "$host" != "127.0.0.1" ]; then
    host="127.0.0.1"
fi
```
Do not rewrite all non-local DB/Redis hosts to 127.0.0.1.
Lines 390-391 and 408-409 currently force external hosts to localhost, which breaks custom DATABASE_URL/REDIS_URL endpoints and undermines the stated compatibility goal. Only known compose aliases should be normalized.
Suggested patch
```diff
-# Resolve docker container name to localhost when running outside docker
-if [ "$host" != "localhost" ] && [ "$host" != "127.0.0.1" ]; then
+# Resolve only local compose alias when running outside docker
+if [ "$host" = "mysql" ]; then
     host="127.0.0.1"
 fi
```

```diff
-if [ "$host" != "localhost" ] && [ "$host" != "127.0.0.1" ]; then
+if [ "$host" = "redis" ]; then
     host="127.0.0.1"
 fi
```

Also applies to: 408-410
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@start.sh` around lines 389 - 392, The current logic unconditionally rewrites
any non-local host variable to 127.0.0.1 (the host variable used when resolving
container names), which breaks custom DATABASE_URL/REDIS_URL endpoints; change
the normalization so only recognized Docker Compose service aliases (e.g., "db",
"postgres", "postgresql", "redis", "cache" — whatever your compose uses) are
rewritten to 127.0.0.1, and leave all other hosts untouched; update the host
normalization block that sets host="127.0.0.1" (the snippet operating on the
host variable) and the analogous block later that handles DATABASE_URL/REDIS_URL
to perform a whitelist check against known aliases instead of rewriting every
non-local host.
```diff
 echo -e "  Remote Frontend: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
 echo -e "  Socket URL: ${BLUE}$WEGENT_SOCKET_URL${NC}"
 echo ""
 echo -e "${YELLOW}📋 Share with others for remote access:${NC}"
-echo -e "  Frontend URL: ${BLUE}http://$(get_local_ip):$WEGENT_FRONTEND_PORT${NC}"
+echo -e "  Frontend URL: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
 echo -e "  Socket URL: ${BLUE}$WEGENT_SOCKET_URL${NC}"
```
Backend-only mode should not advertise frontend URLs.
When --backend is used (Line 1292), Lines 1329 and 1333 still print frontend access URLs, which is misleading.
Suggested patch

```diff
-echo -e "  Remote Frontend: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
+if [ "$CLI_BACKEND_ONLY" = false ]; then
+    echo -e "  Remote Frontend: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
+fi
@@
-echo -e "  Frontend URL: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
+if [ "$CLI_BACKEND_ONLY" = false ]; then
+    echo -e "  Frontend URL: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
+fi
```
+ fi📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
if [ "$CLI_BACKEND_ONLY" = false ]; then
    echo -e "  Remote Frontend: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
fi
echo -e "  Socket URL: ${BLUE}$WEGENT_SOCKET_URL${NC}"
echo ""
echo -e "${YELLOW}📋 Share with others for remote access:${NC}"
if [ "$CLI_BACKEND_ONLY" = false ]; then
    echo -e "  Frontend URL: ${BLUE}http://$LOCAL_IP:$WEGENT_FRONTEND_PORT${NC}"
fi
echo -e "  Socket URL: ${BLUE}$WEGENT_SOCKET_URL${NC}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@start.sh` around lines 1329 - 1334, The script currently always prints
frontend URLs (variables WEGENT_FRONTEND_PORT and WEGENT_SOCKET_URL) even when
run with the --backend flag; update the printing block to check the backend-only
flag used by the script (the variable set when parsing --backend, e.g.,
BACKEND_ONLY or similar) and only print the frontend-related echo lines when
that flag is false/empty; wrap the lines that echo the Remote Frontend, Frontend
URL and their labels in an if ! [ "$BACKEND_ONLY" = "true" ] (or equivalent
variable name used in your parser) conditional so backend-only runs do not
advertise frontend access.
Summary

Test plan
- `start.sh` works correctly on WSL/Linux without CRLF errors
- Verified the executor Dockerfile's `source` and `binary` targets

🤖 Generated with Claude Code