backend/app/api/docs/llm/get_llm_call.md
-34 lines changed: 0 additions & 34 deletions
@@ -2,43 +2,9 @@ Retrieve the status and results of an LLM call job by job ID.
 
 This endpoint allows you to poll for the status and results of an asynchronous LLM call job that was previously initiated via the POST `/llm/call` endpoint.
 
-### Path Parameters
-
-**`job_id`** (required, UUID) - The unique identifier of the job returned when the LLM call was created.
-
-### Response
-
-The endpoint returns an `LLMJobPublic` object containing:
-
-**`job_id`** (UUID) - The unique identifier of the job
-**`status`** (string) - Current status of the job. Possible values:
-`PENDING` - Job has been created and is waiting to be processed
-`PROCESSING` - Job is currently being processed
-`SUCCESS` - Job completed successfully
-`FAILED` - Job failed during processing
-**`llm_response`** (object | null) - The complete LLM response when status is `SUCCESS`, containing:
-`response` - Normalized LLM response with provider_response_id, conversation_id, provider, model, and output
-`usage` - Token usage information (input_tokens, output_tokens, total_tokens)
-**`error_message`** (string | null) - Error details if the job failed, otherwise null
-**`job_inserted_at`** (datetime) - Timestamp when the job was created
-**`job_updated_at`** (datetime) - Timestamp when the job was last updated
-
-### Usage
-
-1. Create an LLM call using POST `/llm/call` to receive a `job_id`
-2. Use this endpoint to poll for the job status
-3. When the status is `SUCCESS`, the `llm_response` field will contain the complete LLM response
-4. When the status is `FAILED`, check the `error_message` field for details
-
-### Polling Strategy
-
-- Poll this endpoint periodically until `status` is either `SUCCESS` or `FAILED`
-- Use exponential backoff (e.g., 1s, 2s, 4s, 8s) to reduce server load
-- Stop polling when status is terminal (`SUCCESS` or `FAILED`)
 
 ### Notes
 
 - This endpoint returns both the job status AND the actual LLM response when complete
 - LLM responses are also delivered asynchronously via the callback URL (if provided)
 - Jobs can be queried at any time after creation
-- The endpoint returns a 404 error if the job_id does not exist
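The polling strategy described in the doc (poll until a terminal status, backing off 1s, 2s, 4s, 8s) can be sketched as a small client helper. This is a minimal sketch, not code from the PR: the function name `poll_job` and the `fetch_status` callable are our own inventions, standing in for an HTTP GET against the job endpoint so the example needs no live server; only the status names (`PENDING`, `PROCESSING`, `SUCCESS`, `FAILED`) come from the doc.

```python
import time


def poll_job(fetch_status, max_attempts=8, base_delay=1.0, sleep=time.sleep):
    """Poll until the job reaches a terminal status, with exponential backoff.

    `fetch_status` is any zero-argument callable returning a job dict with a
    "status" key (e.g. a wrapper around the GET job endpoint); it is injected
    here so the sketch does not depend on a live server.
    """
    delay = base_delay
    for _ in range(max_attempts):
        job = fetch_status()
        if job["status"] in ("SUCCESS", "FAILED"):  # terminal states per the doc
            return job
        sleep(delay)   # back off: 1s, 2s, 4s, 8s, ...
        delay *= 2
    raise TimeoutError("job did not reach a terminal status in time")


# Simulated job that succeeds on the third poll (sleep is stubbed out).
states = iter(["PENDING", "PROCESSING", "SUCCESS"])
result = poll_job(lambda: {"status": next(states)}, sleep=lambda s: None)
print(result["status"])  # SUCCESS
```

Injecting `fetch_status` and `sleep` keeps the backoff logic trivially testable; a real caller would pass a closure that performs the HTTP request and raises on a 404.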
backend/app/api/routes/llm.py
+6-4 lines changed: 6 additions & 4 deletions
@@ -75,7 +75,10 @@ def llm_call(
     if not job:
         raise HTTPException(status_code=404, detail="Job not found")
 
-    message = "Your response is being generated and will be delivered via callback."
+    if request.callback_url:
+        message = "Your response is being generated and will be delivered via callback."
+    else:
+        message = "Your response is being generated"
 
     job_response = LLMJobImmediatePublic(
         job_id=job.id,
@@ -85,8 +88,6 @@ def llm_call(
         job_updated_at=job.updated_at,
     )
 
-    # message = "Your response is being generated and will be delivered via callback." if request.callback_url else "Your response is being generated. Use the job_id to poll for results."
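The branch added above can be exercised in isolation. In this sketch the function name `build_status_message` is ours (in the route the logic is inline on `request.callback_url`); the message strings are copied from the diff.

```python
def build_status_message(callback_url):
    # Mirrors the branch added in llm_call: mention delivery via callback
    # only when a callback_url was supplied with the request.
    if callback_url:
        return "Your response is being generated and will be delivered via callback."
    return "Your response is being generated"


print(build_status_message(None))
print(build_status_message("https://example.com/hook"))
```

Extracting the choice into a pure function like this is one way to unit-test the new behavior without constructing a full request.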