34 changes: 33 additions & 1 deletion apps/api/scripts/dev-with-local-hyperdrive.mjs
@@ -1,6 +1,7 @@
import { spawn } from 'node:child_process';
import { existsSync, readFileSync } from 'node:fs';
import { resolve } from 'node:path';
import net from 'node:net';

const parseEnvFile = (contents) => {
const values = {};
@@ -34,6 +35,36 @@ const parseEnvFile = (contents) => {

const ROOT_ENV_PATH = resolve(process.cwd(), '..', '..', '.env');
const API_DEV_VARS_PATH = resolve(process.cwd(), '.dev.vars');
const API_DEV_PORT = 8787;

const assertPortAvailable = async (port, label) => {
await new Promise((resolvePromise, rejectPromise) => {
const server = net.createServer();

server.once('error', (error) => {
server.close();
rejectPromise(error);
});

server.listen(port, () => {
server.close((closeError) => {
if (closeError) {
rejectPromise(closeError);
return;
}
resolvePromise();
});
});
}).catch((error) => {
if ((error && typeof error === 'object' && 'code' in error && error.code) === 'EADDRINUSE') {
console.error(`${label} is already in use. Stop the existing process before starting local dev.`);
process.exit(1);
}

console.error(`Unable to verify ${label}: ${error instanceof Error ? error.message : 'unknown error'}`);
process.exit(1);
});
};

const loadRootEnv = () => {
if (!existsSync(ROOT_ENV_PATH)) {
@@ -49,6 +80,7 @@ const loadRootEnv = () => {
};

loadRootEnv();
await assertPortAvailable(API_DEV_PORT, 'api-dev-port');

if (!process.env.CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_HYPERDRIVE) {
const databaseUrl = process.env.DATABASE_URL?.trim();
@@ -65,7 +97,7 @@ if (!process.env.CLOUDFLARE_HYPERDRIVE_LOCAL_CONNECTION_STRING_HYPERDRIVE) {
}

const wranglerCommand = process.platform === 'win32' ? 'wrangler.cmd' : 'wrangler';
const wranglerArgs = ['dev', '--config', 'wrangler.toml'];
const wranglerArgs = ['dev', '--config', 'wrangler.toml', '--port', String(API_DEV_PORT)];

if (existsSync(API_DEV_VARS_PATH)) {
wranglerArgs.push('--env-file', API_DEV_VARS_PATH);
31 changes: 31 additions & 0 deletions apps/api/src/server/database.test.ts
@@ -0,0 +1,31 @@
import { describe, expect, it } from 'vitest';

import { shouldUsePerRequestDatabase } from './database';

describe('shouldUsePerRequestDatabase', () => {
it('uses a short-lived database client for localhost requests', () => {
expect(
shouldUsePerRequestDatabase({
env: { APP_BASE_URL: 'https://opencalendly.com' },
req: { url: 'http://localhost:8787/v0/auth/me' },
}),
).toBe(true);
});

it('uses a short-lived database client for localhost app configuration', () => {
expect(
shouldUsePerRequestDatabase({
env: { APP_BASE_URL: 'http://localhost:3000' },
}),
).toBe(true);
});

it('keeps runtime database reuse enabled for non-local environments', () => {
expect(
shouldUsePerRequestDatabase({
env: { APP_BASE_URL: 'https://opencalendly.com' },
req: { url: 'https://api.opencalendly.com/v0/auth/me' },
}),
).toBe(false);
});
});
Comment on lines +5 to +31
⚠️ Potential issue | 🟡 Minor

Add coverage for the new withDatabase localhost branch.

These tests only lock the hostname predicate. The PR's behavior change is the createDb / connect / end path in withDatabase, and nothing here asserts that localhost requests actually take that branch and clean up the client. A small mock-based test would catch regressions that reintroduce the hanging local auth flow.

Based on learnings: Tests added/updated as part of Definition of Done.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/api/src/server/database.test.ts` around lines 5 - 31, Add a test that
exercises the withDatabase code path for localhost requests by mocking the
database lifecycle methods so you can assert createDb/connect are called and end
is invoked for cleanup; specifically, write a small unit test that calls
withDatabase with a request that meets shouldUsePerRequestDatabase (use
shouldUsePerRequestDatabase to generate the localhost predicate) and stub/mock
the createDb, connect and end functions (or the DB client returned by createDb)
to verify connect is awaited and end is called on completion (or on error),
ensuring the short-lived client branch is exercised and cleaned up.

46 changes: 45 additions & 1 deletion apps/api/src/server/database.ts
@@ -1,9 +1,39 @@
import { createRuntimeDb } from '@opencalendly/db';
import { createDb, createRuntimeDb } from '@opencalendly/db';

import { jsonError } from './core';
import { isNeonDatabaseUrl, resolveConnectionString } from './env';
import type { ContextLike, Database } from './types';

const LOCAL_DATABASE_HOSTS = new Set(['localhost', '127.0.0.1', '::1']);

const normalizeHostname = (value: string): string => {
return value.trim().toLowerCase().replace(/^\[(.*)\]$/, '$1');
};

export const shouldUsePerRequestDatabase = (context: Pick<ContextLike, 'env'> & { req?: { url: string } }): boolean => {
if (context.req?.url) {
try {
const requestUrl = new URL(context.req.url);
if (LOCAL_DATABASE_HOSTS.has(normalizeHostname(requestUrl.hostname))) {
return true;
}
} catch {
// Ignore malformed request URLs and fall back to APP_BASE_URL detection.
}
}

const configuredAppUrl = context.env.APP_BASE_URL?.trim();
if (!configuredAppUrl) {
return false;
}

try {
return LOCAL_DATABASE_HOSTS.has(normalizeHostname(new URL(configuredAppUrl).hostname));
} catch {
return false;
}
};

export const isUniqueViolation = (error: unknown, constraint?: string): boolean => {
if (!error || typeof error !== 'object') {
return false;
@@ -33,6 +63,20 @@ export const withDatabase = async (
return jsonError(context, 500, 'DATABASE_URL must point to Neon Postgres (*.neon.tech).');
}

if (shouldUsePerRequestDatabase(context)) {
// Wrangler/workerd local dev can hang retained pg pools after auth/session requests.
// Use a short-lived client for localhost flows and keep pooled reuse for deployed runtimes.
const { client, db } = createDb(connection.connectionString, {
enforceNeon: connection.source === 'database_url',
});
await client.connect();
try {
return await handler(db);
} finally {
await client.end().catch(() => undefined);
}
}
Comment on lines +66 to +78
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain (collapsed): the verification scripts inspected apps/api/src/server/database.ts, the createDb/createRuntimeDb factories in packages/db/src/client.ts, the repo's tsconfig strictness settings, and the CI workflow, and reproduced the failure by running npm run typecheck in apps/api.

This localhost branch does not type-check against Database.

The code fails CI with TS2379 at line 74: createDb() returns a Drizzle instance backed by pg.Client, but handler expects Database, which is typed to createRuntimeDb's return (backed by pg.Pool). The $client property on the Drizzle instance is typed to the specific driver, and Client is missing properties like totalCount, idleCount, waitingCount, and expiredCount that Pool provides.

Widen the shared Database type to accept both Client and Pool, or refactor the handler signature to support both client types.

🧰 Tools
🪛 GitHub Actions: CI

[error] 74-74: TypeScript (TS2379) type mismatch in typecheck. Argument of type 'NodePgDatabase<...> & { $client: Client; }' is not assignable to parameter type 'NodePgDatabase<...> & { $client: Pool; }' with 'exactOptionalPropertyTypes: true'. Consider adding 'undefined' to the types of the target's properties.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/api/src/server/database.ts` around lines 66 - 78, The localhost branch
returns a Drizzle instance backed by pg.Client which doesn't type-match the
existing Database type (created by createRuntimeDb, backed by pg.Pool) because
the .$client differs; update types so the handler can accept both client shapes:
either widen the Database type (the shared alias/interface used with
Database/handler) to union the Drizzle return types for pg.Client and pg.Pool
(or make the .$client property typed as Pool | Client), or change the handler
signature to be generic/overloaded to accept the Drizzle DB returned by createDb
as well as createRuntimeDb; reference createDb, createRuntimeDb, Database,
handler, client, db and .$client when making the type change so the localhost
branch type-checks.


const { db } = createRuntimeDb(connection.connectionString, {
enforceNeon: connection.source === 'database_url',
});
2 changes: 1 addition & 1 deletion apps/web/package.json
@@ -3,7 +3,7 @@
"version": "0.1.0",
"private": true,
"scripts": {
"dev": "node -e \"require('node:fs').rmSync('.next', { recursive: true, force: true })\" && next dev",
"dev": "node ../../scripts/ensure-dev-port.mjs 3000 web-dev-port && node -e \"require('node:fs').rmSync('.next', { recursive: true, force: true })\" && next dev -p 3000",
"build": "next build",
"start": "next start",
"typecheck": "tsc --project tsconfig.json --noEmit",
20 changes: 20 additions & 0 deletions docs/BACKLOG.md
@@ -1532,3 +1532,23 @@ Acceptance criteria:
- Validation passes:
- `git diff --check`
- workflow YAML remains syntactically valid

### Feature 59: Local auth safety on fixed dev ports

Scope:
- Prevent local auth and OAuth flows from silently breaking when the web or API dev server starts on an unexpected fallback port.
- Keep local development pinned to `localhost:3000` for web and `localhost:8787` for API.
- Fail fast with a clear error when either expected port is already occupied.
- Keep the pooled local API database runtime compatible with wrangler/workerd timer behavior so auth session exchange does not fail under local development.

Acceptance criteria:

- `npm run dev:web` checks port `3000` first and exits with a clear message instead of silently switching to another port.
- `npm run dev:api` checks port `8787` first and exits with a clear message instead of silently switching to another port.
- Local auth/OAuth assumptions remain aligned with `.env` defaults for `APP_BASE_URL` and `NEXT_PUBLIC_API_BASE_URL`.
- Local API auth/session requests do not fail with `tid.unref is not a function` from pooled database cleanup.
- Localhost auth/session requests remain stable under wrangler/workerd instead of hanging after `POST /v0/auth/clerk/exchange`.
- Validation passes:
- `npm run lint`
- `npm run test`
- `npm run typecheck`
Comment on lines +1536 to +1554

⚠️ Potential issue | 🟡 Minor

Renumber this backlog entry.

Feature 59 already exists on Line 320, and this file is meant to stay as an ordered one-feature-per-PR list. Reusing 59 here makes the PR/feature mapping ambiguous. Use the next available feature number and place the new entry in order.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/BACKLOG.md` around lines 1536 - 1554, This entry duplicates the "Feature
59" label; rename it to the next unused feature number and insert it in numeric
order so each feature is uniquely numbered (e.g., change the header "Feature 59:
Local auth safety on fixed dev ports" to the next available feature N) and
update any internal references to that feature header so the backlog remains
strictly ordered and unambiguous.

31 changes: 31 additions & 0 deletions packages/db/src/client.test.ts
@@ -0,0 +1,31 @@
import { afterEach, describe, expect, it, vi } from 'vitest';

import { runtimeSupportsTimerUnref } from './client';

describe('runtimeSupportsTimerUnref', () => {
afterEach(() => {
vi.unstubAllGlobals();
});

it('returns true when timers expose unref', () => {
const unref = vi.fn();
const timer = { unref };
const clearTimeoutMock = vi.fn();

vi.stubGlobal('setTimeout', vi.fn(() => timer) as unknown as typeof setTimeout);
vi.stubGlobal('clearTimeout', clearTimeoutMock as unknown as typeof clearTimeout);

expect(runtimeSupportsTimerUnref()).toBe(true);
expect(clearTimeoutMock).toHaveBeenCalledWith(timer);
});

it('returns false when timers do not expose unref', () => {
const clearTimeoutMock = vi.fn();

vi.stubGlobal('setTimeout', vi.fn(() => 1) as unknown as typeof setTimeout);
vi.stubGlobal('clearTimeout', clearTimeoutMock as unknown as typeof clearTimeout);

expect(runtimeSupportsTimerUnref()).toBe(false);
expect(clearTimeoutMock).toHaveBeenCalledWith(1);
});
});
Comment on lines +5 to +31

⚠️ Potential issue | 🟡 Minor

Cover the createPgPool call site, not just the detector.

This suite proves runtimeSupportsTimerUnref() in isolation, but the regression this PR is fixing lives in createPgPool() when allowExitOnIdle is toggled. One test around the pool-options path would protect against accidentally wiring the helper result incorrectly while these tests still pass.

Based on learnings: Applies to **/*.{test,spec}.{js,ts,tsx} : Add/update tests with each implementation change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/db/src/client.test.ts` around lines 5 - 31, Add a test that
exercises createPgPool (not just runtimeSupportsTimerUnref) by stubbing global
timers to simulate both cases (timer object with unref and numeric timeout id)
and then calling createPgPool with allowExitOnIdle set to true and false; assert
that createPgPool uses the detector result (i.e., clearTimeout is called with
the same value returned from setTimeout) and that the resulting pool's
configuration reflects the allowExitOnIdle parameter (reference createPgPool and
runtimeSupportsTimerUnref to locate the code paths to validate).

19 changes: 16 additions & 3 deletions packages/db/src/client.ts
@@ -33,6 +33,14 @@ type RuntimeDbEntry = {
db: ReturnType<typeof drizzle<typeof schema>>;
};

export const runtimeSupportsTimerUnref = (): boolean => {
const timer = setTimeout(() => undefined, 0) as ReturnType<typeof setTimeout> & {
unref?: () => unknown;
};
clearTimeout(timer);
return typeof timer === 'object' && timer !== null && typeof timer.unref === 'function';
};

const getRuntimeDbCache = (): Map<string, RuntimeDbEntry> => {
const globalCache = globalThis as typeof globalThis & {
__opencalendlyRuntimeDbCache__?: Map<string, RuntimeDbEntry>;
@@ -69,14 +77,19 @@ export const createPgPool = (
assertNeonDatabaseUrl(databaseUrl);
}

return new Pool({
const poolOptions: ConstructorParameters<typeof Pool>[0] = {
connectionString: databaseUrl,
ssl: { rejectUnauthorized: false },
max: 20,
idleTimeoutMillis: 30_000,
connectionTimeoutMillis: 5_000,
allowExitOnIdle: true,
});
};

if (runtimeSupportsTimerUnref()) {
poolOptions.allowExitOnIdle = true;
}

return new Pool(poolOptions);
};

export const createDb = (databaseUrl = getDatabaseUrl(), options: CreatePgClientOptions = {}) => {
26 changes: 26 additions & 0 deletions scripts/ensure-dev-port.mjs
@@ -0,0 +1,26 @@
import net from 'node:net';

const [rawPort, ...labelParts] = process.argv.slice(2);
const port = Number.parseInt(rawPort ?? '', 10);
const label = labelParts.join(' ').trim() || `port ${rawPort ?? 'unknown'}`;

if (!Number.isInteger(port) || port <= 0) {
console.error('Usage: node scripts/ensure-dev-port.mjs <port> [label]');
process.exit(1);
Comment on lines +3 to +9
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
node --input-type=module <<'EOF'
for (const raw of ['3000abc', '65536']) {
  const port = Number.parseInt(raw, 10);
  console.log({
    raw,
    parsed: port,
    passesCurrentGuard: Number.isInteger(port) && port > 0,
  });
}
EOF



🏁 Script executed:

head -15 scripts/ensure-dev-port.mjs



Validate the raw port token more strictly.

Line 4 uses Number.parseInt(), which accepts garbage suffixes like 3000abc (parsed as 3000), and the line 7 guard does not enforce the maximum valid port 65535. This allows malformed CLI input to bypass the intended usage check. Parse the raw token strictly and enforce the valid port range 1..65535.

Suggested tightening
-const [rawPort, ...labelParts] = process.argv.slice(2);
-const port = Number.parseInt(rawPort ?? '', 10);
+const [rawPort, ...labelParts] = process.argv.slice(2);
+const normalizedPort = rawPort?.trim() ?? '';
+const port = /^\d+$/.test(normalizedPort) ? Number(normalizedPort) : Number.NaN;
 const label = labelParts.join(' ').trim() || `port ${rawPort ?? 'unknown'}`;
 
-if (!Number.isInteger(port) || port <= 0) {
+if (!Number.isInteger(port) || port <= 0 || port > 65_535) {
   console.error('Usage: node scripts/ensure-dev-port.mjs <port> [label]');
   process.exit(1);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/ensure-dev-port.mjs` around lines 3 - 9, The current parsing of the
CLI token rawPort (and derived port) is too permissive; change the validation in
scripts/ensure-dev-port.mjs to strictly accept only a numeric token and enforce
the TCP port range 1..65535: first check that rawPort is a non-empty string of
only digits (e.g., /^\d+$/) before parsing, then parse to Number and verify it
is an integer between 1 and 65535 inclusive; if the check fails, keep the
existing usage error/exit path (which references label and port) so malformed
inputs like "3000abc" are rejected and out-of-range values are also caught.

}

const server = net.createServer();

server.once('error', (error) => {
if ((error && typeof error === 'object' && 'code' in error && error.code) === 'EADDRINUSE') {
console.error(`${label} is already in use. Stop the existing process before starting local dev.`);
process.exit(1);
}

console.error(`Unable to verify ${label}: ${error instanceof Error ? error.message : 'unknown error'}`);
process.exit(1);
});

server.listen(port, () => {
server.close(() => process.exit(0));
});