Replies: 4 comments
-
Got it, Roger — here is the minimal, surgical fix that removes the expensive artifact loading from /v1/jobs. No refactors, no architecture changes — just the smallest correct patch.

✅ 1. Minimal Fix: Remove JSON loading from /v1/jobs

Before (your current code — slow, wrong):

```python
# Load results only for completed jobs
result = None
result_url = None
summary = None
if job.status == JobStatus.completed and job.output_path:
    try:
        result_url = storage.get_signed_url(job.output_path)
        file_path = storage.load_file(job.output_path)
        with open(file_path, "r") as f:
            results = json.load(f)
        summary = _derive_video_summary(results)
    except Exception:
        summary = None
```

❌ Problems: this loads and parses the full result artifact for every completed job on the listing hot path.
✅ After (minimal fix):

```python
# Clean-break: metadata only
result_url = None
summary = job.summary  # already computed during worker finalisation
if job.status == JobStatus.completed and job.output_path:
    result_url = storage.get_signed_url(job.output_path)
```

✔ No artifact loading
✔ No JSON parsing
✔ No summary computation
✔ Uses stored summary only
✔ Fast, O(1) per job

✅ 2. Minimal Fix: Worker computes and stores summary

Add this inside your worker finalisation, right after writing the artifact:
```python
# After writing artifact to storage
summary = None
if job.job_type in ("video", "video_multi"):
    summary = _derive_video_summary(result)
job.summary = summary
job.output_path = artifact_path
job.status = JobStatus.completed
repo.update_job(job)
```

✔ Summary computed once
✔ Stored in DB
✔ Never recomputed
✔ /v1/jobs becomes metadata-only

✅ 3. Minimal Fix: Remove inline results from worker

Replace:

```python
payload = tool_result["result"]
```

With:

```python
payload = tool_result["result"]
# do NOT store inline results anywhere
```

And ensure the worker never sets job.result or job.results.

🎯 Final Result

After this minimal fix, /v1/jobs reads only job metadata — no artifact loading on the hot path.
If you want, I can generate the exact unified diff for just these minimal changes — no extra noise.
-
Absolutely, Roger — here is the exact unified diff, minimal and surgical, that fixes the reviewer's finding without touching anything else. No noise. No refactors. Just the fix.

✅ Unified Diff — Minimal Fix Only

1. Fix
-
Here you go, Roger — the minimal, surgical frontend diff plus a tiny test patch. No noise. No refactors. Just the exact changes needed to match the backend fix.

✅ 1. Minimal Frontend Diff (only what must change)

This diff updates:

Nothing else.
-
Here you go, Roger — three minimal, repo‑safe utilities. All three are intentionally tiny, explicit, and easy to drop into your repo without changing your architecture.

🧹 1. Repo‑wide grep script

Detects any remaining inline‑result patterns. The script searches for the patterns listed below and exits with non‑zero status if anything is found.
```bash
#!/usr/bin/env bash
set -euo pipefail

echo "🔍 Checking repo for forbidden inline result patterns..."

# Patterns that must never appear again
PATTERNS=(
  "results[[:space:]]*="
  "result[[:space:]]*="
  "json.load"
  "storage.load_file"
  "job.results"
  "job.result"
)

FAIL=0
for pattern in "${PATTERNS[@]}"; do
  # Exclude this script itself so its own pattern list doesn't trip the check.
  if grep -RIn --exclude-dir=node_modules --exclude-dir=dist --exclude-dir=.venv \
       --exclude="$(basename "$0")" "$pattern" .; then
    echo "❌ Forbidden pattern found: $pattern"
    FAIL=1
  fi
done

if [[ "$FAIL" -eq 0 ]]; then
  echo "✅ No inline result patterns found."
else
  echo "❌ Inline result patterns detected. Fix required."
  exit 1
fi
```

Run it manually from the repo root.

🚨 2. CI rule that blocks PRs reintroducing inline results
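If you'd rather enforce the same rule from Python (for example inside an existing test suite), here is a rough sketch. The excluded directories and the restriction to `*.py` files are assumptions — adjust them to your repo layout:

```python
import re
from pathlib import Path

# Same forbidden patterns as the shell script above.
FORBIDDEN = [
    r"results\s*=",
    r"result\s*=",
    r"json\.load",
    r"storage\.load_file",
    r"job\.results",
    r"job\.result",
]

# Assumed repo layout — adjust to taste.
EXCLUDED_DIRS = {"node_modules", "dist", ".venv", ".git"}


def find_violations(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line number, pattern) for every forbidden match in *.py files."""
    hits = []
    for path in Path(root).rglob("*.py"):
        if EXCLUDED_DIRS & set(path.parts):
            continue  # skip vendored / generated trees
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            for pattern in FORBIDDEN:
                if re.search(pattern, line):
                    hits.append((str(path), lineno, pattern))
    return hits
```

In CI, call `find_violations(".")` and fail the build when the returned list is non-empty.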
-
/v1/jobs is still loading every full result artefact on the hot path.
This loop calls json.load(...) for each completed job just to derive a summary. With large video outputs and limit=100, listing jobs becomes O(page size × artefact size) in I/O and memory again, which undercuts the Artifact Pattern; computing and persisting the summary during worker finalisation would keep this endpoint metadata-only.
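The metadata-only shape the reviewer is asking for can be sketched roughly as follows — the `Job` fields and the signing callback here are assumptions for illustration, not the project's actual models or storage API:

```python
from dataclasses import dataclass
from typing import Callable, Optional


# Hypothetical sketch — field names are assumptions, not the real schema.
@dataclass
class Job:
    id: str
    status: str
    output_path: Optional[str] = None
    summary: Optional[dict] = None  # persisted by the worker at finalisation


def job_to_listing_entry(job: Job, sign_url: Callable[[str], str]) -> dict:
    """Build one /v1/jobs entry without touching artifact storage."""
    entry = {"id": job.id, "status": job.status, "summary": job.summary}
    # Expose only a signed URL; clients fetch the artifact lazily if needed.
    entry["result_url"] = (
        sign_url(job.output_path)
        if job.status == "completed" and job.output_path
        else None
    )
    return entry
```

Per job, this does one in-memory dict build plus at most one URL-signing call — no file reads, so the cost of listing a page no longer scales with artefact size.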
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @server/app/api_routes/routes/jobs.py around lines 151 - 158, the route is loading full artefacts (storage.load_file + json.load) to compute a summary for
each completed job; instead compute and persist the summary when the worker
finalises the job and then read that stored summary in the /v1/jobs route.
Update the worker finalisation code that writes the artifact to storage to call
_derive_video_summary(results) (or equivalent) right after the artifact is
created, store the returned summary on the Job model (e.g., job.summary or
job.metadata['summary']), and persist the Job; then modify the jobs listing code
that checks job.status == JobStatus.completed and job.output_path to use
job.summary (and job.output_url via storage.get_signed_url if needed) and remove
storage.load_file/json.load from the hot path. Ensure references:
_derive_video_summary, storage.load_file, storage.get_signed_url, Job
(job.summary/job.metadata), and the worker finalisation routine are updated
accordingly.