
Conversation

@leehuwuj (Collaborator) commented Jun 3, 2025

Summary by CodeRabbit

  • New Features

    • Enabled file upload functionality in chat UI with a new configuration option across Python and TypeScript servers.
    • Added API endpoints to upload files and return metadata.
    • Provided new example workflows (agent and custom) demonstrating file upload handling, stateful memory, and streaming LLM responses.
    • Introduced utilities to extract and manage file attachments from chat messages.
    • Added detailed step-by-step example server startup instructions and usage notes.
    • Added a vendor agreement document as a new example file.
  • Documentation

    • Expanded and restructured documentation with step-by-step guides, example code snippets, and usage instructions for file uploads and workflows.
    • Added comprehensive README files for private file handling and new example workflows.
  • Bug Fixes

    • Enhanced validation and error handling for file uploads and file reading operations.
    • Improved message role validation in chat requests.
  • Refactor

    • Improved file service implementation with centralized file naming, path, and URL logic.
    • Updated data models for file metadata, annotations, and chat messages to support file attachments.
    • Adjusted imports and type annotations to reflect new file models.
    • Streamlined workflow factory signatures and usage patterns.
  • Tests

    • Refined tests for file service, removing obsolete assertions and aligning with updated file handling logic.


changeset-bot bot commented Jun 3, 2025

🦋 Changeset detected

Latest commit: 193298e

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 3 packages
Name                               Type
@create-llama/llama-index-server   Patch
@llamaindex/server                 Patch
llamaindex-server-examples         Patch



coderabbitai bot commented Jun 3, 2025

Walkthrough

This update introduces comprehensive support for user-uploaded file handling in both the TypeScript and Python LlamaIndex server implementations. It adds file upload endpoints, file storage utilities, new configuration options to enable file uploads, and documentation and example workflows demonstrating how to access and process uploaded files within chat and agent workflows.
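
As a quick orientation, here is a minimal sketch of opting into the feature on the Python server. The enable_file_upload flag is what this PR introduces; the import paths and constructor parameters below follow the package's documented usage pattern but are not taken verbatim from the diff:

from llama_index.server import LlamaIndexServer, UIConfig

def workflow_factory(chat_request):
    # Build and return the chat workflow here; with uploads enabled,
    # extracted file attachments are passed into the workflow run.
    ...

app = LlamaIndexServer(
    workflow_factory=workflow_factory,
    ui_config=UIConfig(enable_file_upload=True),
)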

Changes

  • packages/server/README.md, python/llama-index-server/README.md: Updated documentation to describe the new enableFileUpload/enable_file_upload UI config, file upload endpoints, and workflow input changes.
  • packages/server/examples/README.md: Expanded and clarified example running instructions, added notes on prerequisites.
  • packages/server/examples/private-file/README.md, python/llama-index-server/examples/private_file/README.md: Added detailed example documentation for handling uploaded files in workflows, including code snippets and usage notes.
  • packages/server/examples/private-file/agent-workflow.ts, packages/server/examples/private-file/custom-workflow.ts, packages/server/examples/private-file/index.ts: Added TypeScript example workflows and server setup for file upload handling.
  • python/llama-index-server/examples/private_file/agent-workflow.py, python/llama-index-server/examples/private_file/custom-workflow.py: Added Python example workflows and FastAPI server setup for file upload handling.
  • python/llama-index-server/examples/private_file/example.txt: Added sample text file for the file upload example.
  • packages/server/next/app/api/files/helpers.ts, packages/server/next/app/api/files/route.ts: Added file upload API endpoint and helper for storing files on disk and returning metadata.
  • packages/server/src/index.ts, packages/server/src/utils/file.ts: Added and exported utility function for constructing stored file paths.
  • packages/server/src/server.ts, packages/server/src/types.ts: Added enableFileUpload config option and logic to expose the upload API endpoint in the client config.
  • packages/server/src/utils/events.ts: Added schemas, types, and a utility for extracting file attachments from chat messages.
  • python/llama-index-server/llama_index/server/models/chat.py: Added models for server file metadata and file annotations; updated chat message/request models to support file attachments.
  • python/llama-index-server/llama_index/server/api/routers/chat.py: Added file upload endpoint and logic to extract and pass file attachments to the workflow.
  • python/llama-index-server/llama_index/server/api/utils/chat_attachments.py: Added utility to extract file attachments from chat messages.
  • python/llama-index-server/llama_index/server/server.py: Added enable_file_upload config option and logic to expose the upload API endpoint in the config.
  • python/llama-index-server/llama_index/server/services/file.py: Refactored file service for improved file handling, metadata, and base64 upload support.
  • python/llama-index-server/llama_index/server/services/llamacloud/index.py: Updated to use attribute-based access for the pipeline config.
  • python/llama-index-server/llama_index/server/tools/interpreter.py: Updated to use the new ServerFile model for file saving and metadata.
  • python/llama-index-server/examples/artifact/main.py: Updated import path for ArtifactWorkflow.
  • python/llama-index-server/tests/api/test_chat_api.py: Removed assertions on exact user message content in workflow call arguments.
  • python/llama-index-server/tests/services/test_file_service.py: Refactored tests for new file service logic and models; updated assertions and removed tests for deprecated functions.
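
As an illustration of the new attachment utility (the module path and function name come from the change list above; the surrounding request handling is assumed):

from llama_index.server.api.utils.chat_attachments import get_file_attachments

def collect_attachments(chat_request):
    # Scans user-message annotations and returns the uploaded files they
    # reference; per the review below, each file's local path is resolved
    # via FileService.get_file_path before the list is returned.
    return get_file_attachments(chat_request.messages)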

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant UI
    participant Server
    participant FileService
    participant Workflow

    User->>UI: Uploads file via chat UI
    UI->>Server: POST /api/files with file data
    Server->>FileService: storeFile(name, content)
    FileService->>Server: Returns ServerFile metadata
    Server->>UI: Returns file metadata
    User->>UI: Sends chat message referencing file
    UI->>Server: Sends chat message with file annotation
    Server->>Workflow: Passes messages and file attachments
    Workflow->>FileService: Reads file content as needed
    Workflow->>Server: Returns response
    Server->>UI: Streams/returns chat response
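
To make the first two arrows concrete, here is a hypothetical client call against the upload endpoint. The /api/files path comes from the diagram; the {name, base64} payload shape and the data-URL prefix are inferred from the route review further down and may differ from the actual contract:

import base64

import requests

with open("example.txt", "rb") as fh:
    encoded = base64.b64encode(fh.read()).decode()

response = requests.post(
    "http://localhost:8000/api/files",  # adjust host/port for your server
    json={"name": "example.txt", "base64": f"data:text/plain;base64,{encoded}"},
)
print(response.json())  # expected: ServerFile metadata (id, url, ...)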

Suggested reviewers

  • marcusschiesser

Poem

In burrows deep, where servers dwell,
Now rabbits upload files as well!
With helpers, routes, and models new,
The chat can read your docs right through.
So hop along and give a try—
File uploads now are rabbit-high!
🐇✨


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a0af27a and 193298e.

📒 Files selected for processing (1)
  • packages/server/project-config/package.json (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • packages/server/project-config/package.json
⏰ Context from checks skipped due to timeout of 90000ms (58)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: Unit Tests (windows-latest, 3.9)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: Type Check
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, streaming)
  • GitHub Check: lint
  • GitHub Check: Unit Tests (ubuntu-latest, 3.9)


leehuwuj added 2 commits June 3, 2025 15:42
- Introduced `main.py` to set up the FastAPI application with file upload capabilities.
- Created `workflow.py` to manage file reading and tool creation for uploaded files.
- Updated `server.py` to include upload API configuration.
- Modified chat router to handle file uploads and return server file metadata.
- Refactored chat models to support new file handling structure.
- Enhanced file service to manage private file storage and retrieval.
@leehuwuj force-pushed the lee/private-file branch from 9814524 to 07ab553 on June 4, 2025 05:48
A collaborator commented:

.md extension?

leehuwuj (Collaborator, Author) replied:

We'd need to update the chat UI for this: in the chat UI, an upload is treated as a document file, which only supports a few extensions.

@leehuwuj force-pushed the lee/private-file branch from 3106d2b to 095badc on June 4, 2025 08:30
@leehuwuj force-pushed the lee/private-file branch from 6d750e8 to ee3057e on June 5, 2025 02:22
@leehuwuj requested a review from thucpn June 5, 2025 07:23
@leehuwuj marked this pull request as ready for review June 5, 2025 07:47
@coderabbitai bot left a comment

Actionable comments posted: 10

🔭 Outside diff range comments (2)
packages/server/src/server.ts (1)

50-61: 🛠️ Refactor suggestion

Consider security documentation for file upload feature.

While the implementation is correct, consider adding security warnings in the documentation about:

  • File size limits
  • File type validation
  • Storage security
  • Potential for malicious file uploads

Consider implementing server-side file validation and size limits as this feature could be vulnerable to abuse if not properly secured.

python/llama-index-server/README.md (1)

81-167: 🛠️ Refactor suggestion

Consider adding security guidelines for file uploads.

While the feature documentation is comprehensive, consider adding a security section that covers:

  • Recommended file size limits
  • File type validation best practices
  • Storage security considerations
  • Example of secure file handling in workflows

File upload features require careful security considerations that should be documented alongside the feature usage.
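
As one possible shape for those guidelines, here is a small illustrative validation helper (the allowlist and size cap below are arbitrary examples, not part of this PR):

from pathlib import Path

ALLOWED_EXTENSIONS = {".txt", ".md", ".csv", ".pdf"}
MAX_UPLOAD_BYTES = 5 * 1024 * 1024  # 5 MB, an example limit

def validate_upload(file_name: str, content: bytes) -> None:
    # Reject unexpected file types before they reach the file service.
    if Path(file_name).suffix.lower() not in ALLOWED_EXTENSIONS:
        raise ValueError(f"Unsupported file type: {file_name}")
    # Cap the payload size to limit abuse of the endpoint.
    if len(content) > MAX_UPLOAD_BYTES:
        raise ValueError("File exceeds the maximum allowed size")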

🧰 Tools
🪛 LanguageTool

[uncategorized] ~85-~85: Loose punctuation mark.
Context: ...r custom UI components. - layout_dir: The directory for custom layout section...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~86-~86: Loose punctuation mark.
Context: ...details. - llamacloud_index_selector: Whether to show the LlamaCloud index se...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~87-~87: Loose punctuation mark.
Context: ...CLOUD_API_KEY` to be set. - `dev_mode`: When enabled, you can update workflow c...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~88-~88: Loose punctuation mark.
Context: ...ture enabled. - suggest_next_questions: Whether to suggest next questions after...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~89-~89: Loose punctuation mark.
Context: ...GGEST_NEXT_QUESTION_PROMPT`. - `verbose`: Enable verbose logging - `api_prefix`: ...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~128-~128: A punctuation mark might be missing here.
Context: ...unts the data and output folders at {server_url}{api_prefix}/files/data (default: /api/files/data) and `...

(AI_EN_LECTOR_MISSING_PUNCTUATION)


[uncategorized] ~130-~130: This verb may not be in the correct tense. Consider changing the tense to fit the context better.
Context: ...rkflows from create-llama (see below) are following this pattern. ### Chat UI When enable...

(AI_EN_LECTOR_REPLACEMENT_VERB_TENSE)


[uncategorized] ~166-~166: Loose punctuation mark.
Context: ...lowing default endpoints: - /api/chat: Chat interaction endpoint - `/api/chat/...

(UNLIKELY_OPENING_PUNCTUATION)

♻️ Duplicate comments (1)
python/llama-index-server/examples/private_file/agent-workflow.py (1)

16-44: Consider using FileService.get_file method instead.

Based on past review feedback, there might be a get_file method available that returns a ServerFile object, which would be more appropriate than using get_file_path and direct file I/O.

If available, consider this refactor:

 def read_file(file_id: str) -> str:
-    file_path = FileService.get_file_path(file_id)
     try:
-        with open(file_path, "r") as file:
-            return file.read()
+        server_file = FileService.get_file(file_id)
+        with open(server_file.path, "r") as file:
+            return file.read()
     except Exception as e:
-        return f"Error reading file {file_path}: {e}"
+        return f"Error reading file {file_id}: {e}"
🧹 Nitpick comments (10)
python/llama-index-server/examples/artifact/main.py (2)

1-2: Use direct import for ArtifactWorkflow
Switches to importing ArtifactWorkflow from code_workflow. Verify that this module resolution works in both script execution and package contexts, or consider adding an __init__.py for explicit relative imports.


5-5: Remove or clarify commented import
The commented-out import of ArtifactWorkflow from document_workflow may confuse readers. Either remove it or update the comment to explain when to use the document workflow instead of the code workflow.

packages/server/next/app/api/files/route.ts (1)

21-35: Improve base64 validation robustness.

The current validation assumes exactly one comma separator, but this could be more robust to handle edge cases.

-   const parts = base64.split(",");
-   if (parts.length !== 2) {
-     return NextResponse.json(
-       { error: "Invalid base64 format" },
-       { status: 400 },
-     );
-   }
-
-   const [header, content] = parts;
-   if (!header || !content) {
+   const commaIndex = base64.indexOf(",");
+   if (commaIndex === -1) {
      return NextResponse.json(
        { error: "Invalid base64 format" },
        { status: 400 },
      );
    }
+   
+   const header = base64.substring(0, commaIndex);
+   const content = base64.substring(commaIndex + 1);
+   if (!header || !content) {
+     return NextResponse.json(
+       { error: "Invalid base64 format" },
+       { status: 400 },
+     );
+   }
python/llama-index-server/llama_index/server/api/utils/chat_attachments.py (1)

24-31: Add error handling for file path operations.

While the logic is correct, consider adding error handling for the file path update operation in case FileService.get_file_path fails.

    for annotation in user_message_annotations:
        if isinstance(annotation, list):
            for item in annotation:
                if isinstance(item, FileAnnotation):
                    for file in item.data.files:
-                       file.path = FileService.get_file_path(file.id)
+                       try:
+                           file.path = FileService.get_file_path(file.id)
+                       except Exception:
+                           # Skip files with invalid paths rather than failing entirely
+                           continue
                    files.extend(item.data.files)
packages/server/examples/private-file/README.md (1)

21-21: Fix markdown heading formatting.

Remove the trailing colon from the heading to comply with markdown linting standards.

Apply this diff:

-## How to get the uploaded files in your workflow:
+## How to get the uploaded files in your workflow
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

21-21: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)

python/llama-index-server/examples/private_file/README.md (2)

13-13: Fix markdown heading formatting.

Remove trailing colons from headings to follow markdown best practices.

-## How to get the uploaded files in your workflow:
+## How to get the uploaded files in your workflow

-## Examples:
+## Examples

-### For agent workflow:
+### For agent workflow

-### For custom workflow:
+### For custom workflow

Also applies to: 28-28, 30-30, 75-75

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

13-13: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


31-31: Fix list indentation for consistency.

Adjust list item indentation to start at column 0 for better markdown formatting.

-   - We create a simple file reader tool that can read the uploaded file content.
+- We create a simple file reader tool that can read the uploaded file content.

-   - Check out the [agent-workflow.py](agent-workflow.py) for more details.
+- Check out the [agent-workflow.py](agent-workflow.py) for more details.

-   - You can run the agent workflow with file tool by running the following command:
+- You can run the agent workflow with the file tool by running the following command:

-   - The attachments are included in the `attachments` parameter of the `StartEvent` so you can easily access them in the workflow.
+- The attachments are included in the `attachments` parameter of the `StartEvent` so you can easily access them in the workflow.

-   - Check out the [custom-workflow.py](custom-workflow.py) for more details.
+- Check out the [custom-workflow.py](custom-workflow.py) for more details.

-   - You can run the custom workflow by running the following command:
+- You can run the custom workflow by running the following command:

Also applies to: 66-66, 68-68, 76-76, 95-95, 97-97

🧰 Tools
🪛 LanguageTool

[uncategorized] ~31-~31: This verb may not be in the correct tense. Consider changing the tense to fit the context better.
Context: ...mples: ### For agent workflow: - We create a simple file reader tool that can read...

(AI_EN_LECTOR_REPLACEMENT_VERB_TENSE)

🪛 markdownlint-cli2 (0.17.2)

31-31: Unordered list indentation
Expected: 0; Actual: 3

(MD007, ul-indent)

python/llama-index-server/llama_index/server/models/chat.py (2)

20-22: Remove redundant exclusion configuration

The exclude_api configuration option is not a standard Pydantic v2 configuration. The exclude=True parameter in the Field definition already handles exclusion from serialization.

-    class Config:
-        exclude_api = {"path"}
-        json_schema_extra = {
+    model_config = {
+        "json_schema_extra": {
🧰 Tools
🪛 Pylint (3.3.7)

[refactor] 20-20: Too few public methods (0/2)

(R0903)


56-56: Consider more specific typing for params field

The params: Any = None field is too generic. If this field has a known structure, consider creating a specific model or using a more restrictive type like Dict[str, Any] to improve type safety.
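
For example, a narrower annotation could look like the following sketch (the model and field names here are placeholders, not the PR's actual classes):

from typing import Any, Dict, Optional

from pydantic import BaseModel

class AnnotationData(BaseModel):
    # Dict[str, Any] still allows arbitrary keys but rules out scalar
    # values, catching one class of malformed annotations at parse time.
    params: Optional[Dict[str, Any]] = None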

python/llama-index-server/llama_index/server/services/file.py (1)

15-15: Consider making PRIVATE_STORE_PATH configurable

The storage path is hardcoded which might not be suitable for all deployment scenarios. Consider making this configurable through environment variables or settings.

-PRIVATE_STORE_PATH = str(Path("output", "private"))
+PRIVATE_STORE_PATH = os.getenv("FILE_STORAGE_PATH", str(Path("output", "private")))

Note that this requires an import os at the top of the module if one is not already present.
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a543a27 and 2929913.

📒 Files selected for processing (28)
  • packages/server/README.md (1 hunks)
  • packages/server/examples/README.md (1 hunks)
  • packages/server/examples/private-file/README.md (1 hunks)
  • packages/server/examples/private-file/agent-workflow.ts (1 hunks)
  • packages/server/examples/private-file/custom-workflow.ts (1 hunks)
  • packages/server/examples/private-file/index.ts (1 hunks)
  • packages/server/next/app/api/files/helpers.ts (1 hunks)
  • packages/server/next/app/api/files/route.ts (1 hunks)
  • packages/server/src/index.ts (1 hunks)
  • packages/server/src/server.ts (2 hunks)
  • packages/server/src/types.ts (1 hunks)
  • packages/server/src/utils/events.ts (2 hunks)
  • packages/server/src/utils/file.ts (1 hunks)
  • python/llama-index-server/README.md (3 hunks)
  • python/llama-index-server/examples/artifact/main.py (1 hunks)
  • python/llama-index-server/examples/private_file/README.md (1 hunks)
  • python/llama-index-server/examples/private_file/agent-workflow.py (1 hunks)
  • python/llama-index-server/examples/private_file/custom-workflow.py (1 hunks)
  • python/llama-index-server/examples/private_file/example.txt (1 hunks)
  • python/llama-index-server/llama_index/server/api/routers/chat.py (5 hunks)
  • python/llama-index-server/llama_index/server/api/utils/chat_attachments.py (1 hunks)
  • python/llama-index-server/llama_index/server/models/chat.py (2 hunks)
  • python/llama-index-server/llama_index/server/server.py (3 hunks)
  • python/llama-index-server/llama_index/server/services/file.py (3 hunks)
  • python/llama-index-server/llama_index/server/services/llamacloud/index.py (1 hunks)
  • python/llama-index-server/llama_index/server/tools/interpreter.py (3 hunks)
  • python/llama-index-server/tests/api/test_chat_api.py (0 hunks)
  • python/llama-index-server/tests/services/test_file_service.py (8 hunks)
💤 Files with no reviewable changes (1)
  • python/llama-index-server/tests/api/test_chat_api.py
🧰 Additional context used
🧬 Code Graph Analysis (7)
python/llama-index-server/examples/artifact/main.py (2)
python/llama-index-server/examples/artifact/document_workflow.py (1)
  • ArtifactWorkflow (55-326)
python/llama-index-server/examples/artifact/code_workflow.py (1)
  • ArtifactWorkflow (55-354)
packages/server/next/app/api/files/route.ts (2)
packages/server/next/app/api/files/helpers.ts (1)
  • storeFile (9-34)
packages/server/src/utils/events.ts (1)
  • FileAnnotation (235-235)
packages/server/examples/private-file/custom-workflow.ts (3)
packages/server/examples/private-file/agent-workflow.ts (1)
  • workflowFactory (8-34)
packages/create-llama/templates/components/ui/html/chat/chat-messages.tsx (1)
  • Message (6-10)
packages/server/src/utils/events.ts (1)
  • extractFileAttachments (241-261)
python/llama-index-server/llama_index/server/server.py (1)
python/llama-index-server/llama_index/server/settings.py (1)
  • api_url (24-25)
packages/server/next/app/api/files/helpers.ts (2)
packages/server/src/utils/events.ts (1)
  • ServerFile (237-239)
python/llama-index-server/llama_index/server/models/chat.py (1)
  • ServerFile (9-29)
packages/server/src/utils/events.ts (2)
packages/create-llama/templates/components/ui/html/chat/chat-messages.tsx (1)
  • Message (6-10)
packages/server/src/utils/file.ts (1)
  • getStoredFilePath (42-51)
python/llama-index-server/llama_index/server/services/file.py (2)
python/llama-index-server/llama_index/server/models/chat.py (1)
  • ServerFile (9-29)
python/llama-index-server/llama_index/server/settings.py (1)
  • file_server_url_prefix (20-21)
🪛 markdownlint-cli2 (0.17.2)
packages/server/examples/private-file/README.md

21-21: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)

python/llama-index-server/examples/private_file/README.md

13-13: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


28-28: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


30-30: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


31-31: Unordered list indentation
Expected: 0; Actual: 3

(MD007, ul-indent)


66-66: Unordered list indentation
Expected: 0; Actual: 3

(MD007, ul-indent)


68-68: Unordered list indentation
Expected: 0; Actual: 3

(MD007, ul-indent)


75-75: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


76-76: Unordered list indentation
Expected: 0; Actual: 3

(MD007, ul-indent)


95-95: Unordered list indentation
Expected: 0; Actual: 3

(MD007, ul-indent)


97-97: Unordered list indentation
Expected: 0; Actual: 3

(MD007, ul-indent)

🪛 Ruff (0.11.9)
python/llama-index-server/llama_index/server/api/routers/chat.py

116-116: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🪛 LanguageTool
python/llama-index-server/examples/private_file/README.md

[uncategorized] ~31-~31: This verb may not be in the correct tense. Consider changing the tense to fit the context better.
Context: ...mples: ### For agent workflow: - We create a simple file reader tool that can read...

(AI_EN_LECTOR_REPLACEMENT_VERB_TENSE)


[uncategorized] ~68-~68: You might be missing the article “the” here.
Context: ... - You can run the agent workflow with file tool by running the following command: ...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)

🪛 Pylint (3.3.7)
python/llama-index-server/examples/private_file/custom-workflow.py

[refactor] 21-21: Too few public methods (0/2)

(R0903)

python/llama-index-server/tests/services/test_file_service.py

[refactor] 140-140: Too many arguments (6/5)

(R0913)


[refactor] 140-140: Too many positional arguments (6/5)

(R0917)

python/llama-index-server/llama_index/server/models/chat.py

[refactor] 20-20: Too few public methods (0/2)

(R0903)


[refactor] 9-9: Too few public methods (0/2)

(R0903)


[refactor] 32-32: Too few public methods (0/2)

(R0903)


[refactor] 40-40: Too few public methods (0/2)

(R0903)


[refactor] 49-49: Too few public methods (0/2)

(R0903)


[refactor] 59-59: Too few public methods (0/2)

(R0903)


[refactor] 69-69: Too few public methods (0/2)

(R0903)

⏰ Context from checks skipped due to timeout of 90000ms (57)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: Unit Tests (ubuntu-latest, 3.9)
  • GitHub Check: Unit Tests (windows-latest, 3.9)
  • GitHub Check: lint
🔇 Additional comments (57)
packages/server/src/index.ts (1)

4-4: Export new utility for file path construction
Exposes the getStoredFilePath function in the public API to support file upload workflows. Verify that the utility's module path is correct and that consumers can import it without issues.

packages/server/src/types.ts (1)

21-21: Add UIConfig flag for file uploads
Introduces the optional enableFileUpload flag to toggle file upload support in the chat UI. This aligns with the new upload endpoints and storage utilities.

packages/server/README.md (1)

63-63: Document enableFileUpload config option
Adds enableFileUpload under uiConfig with a default of false, and links to the private-file example. This clearly communicates the new feature to users.

packages/server/examples/private-file/index.ts (3)

1-6: LGTM: Clean example setup with proper imports.

The import structure is well-organized and the commented alternative workflow import provides good flexibility for users to switch between different workflow types.


16-23: LGTM: Proper server configuration with file upload enabled.

The server configuration correctly enables file uploads through the enableFileUpload flag and uses standard settings. The port choice of 3000 is conventional for development.


8-14:

✅ Verification successful

Verify OpenAI model availability and pricing.

The model choices look reasonable, but please verify that these specific model names are currently available and consider the cost implications for users running this example.


🌐 Web query:

Are the OpenAI models "gpt-4o-mini" and "text-embedding-3-small" currently available and what are their current pricing tiers?

💡 Result:

Yes, OpenAI's models "gpt-4o-mini" and "text-embedding-3-small" are currently available.

GPT-4o Mini:

  • Availability: Launched on July 18, 2024, GPT-4o Mini is accessible to ChatGPT Free, Plus, and Team users. Enterprise users gained access shortly after the launch. (reuters.com)
  • Pricing: The API pricing is $0.15 per million input tokens and $0.60 per million output tokens. (reuters.com)

Text-Embedding-3-Small:

  • Availability: This model is available for use.
  • Pricing: The cost is $0.02 per thousand tokens. (markovate.com)

These models are designed to be cost-effective and efficient, catering to a wide range of applications.

Citations:


Confirm OpenAI model availability and pricing

I’ve verified that both models are currently offered by OpenAI:

  • gpt-4o-mini: Available since July 18, 2024; API cost is $0.15 per million input tokens and $0.60 per million output tokens.
  • text-embedding-3-small: Available now; API cost is $0.02 per thousand tokens.

No changes are required to the example code—just be aware of these rates when running the sample.

packages/server/src/server.ts (2)

50-50: LGTM: Appropriate default value for file upload feature.

Setting enableFileUpload to false by default is a good security practice, requiring explicit opt-in for file upload functionality.


60-61: LGTM: Clean conditional API exposure.

The implementation correctly exposes the upload API endpoint only when file upload is enabled, following the established configuration pattern.

packages/server/examples/README.md (2)

1-39: LGTM: Excellent documentation improvements.

The restructured README provides clear, comprehensive instructions for running examples. The step-by-step format with proper code blocks and environment setup guidance significantly improves usability for developers.


25-29:

✅ Verification successful

Verify the nodemon command syntax.

Please confirm that the pnpm nodemon --exec tsx <example>/index.ts command syntax is correct and that tsx is available in the project dependencies.


🏁 Script executed:

#!/bin/bash
# Description: Check if tsx is available as a dependency and verify nodemon usage

# Check if tsx is in package.json dependencies
fd "package.json" --exec cat {} | jq '.dependencies.tsx // .devDependencies.tsx // "not found"'

# Check for nodemon configuration or scripts
fd "package.json" --exec cat {} | jq '.scripts | to_entries[] | select(.value | contains("nodemon"))'

Length of output: 541


Verified nodemon command and tsx dependency
The syntax pnpm nodemon --exec tsx <example>/index.ts matches existing usage in our workspace and tsx is already declared in the project’s devDependencies. No changes needed.

python/llama-index-server/README.md (3)

81-81: LGTM: Clear documentation of new file upload configuration.

The documentation clearly explains the enable_file_upload option with appropriate default value and context.


105-115: LGTM: Comprehensive workflow parameter documentation.

The documentation effectively shows how to access uploaded files in workflows with a clear code example. The ServerFile type reference provides good context for developers.


167-167: LGTM: Accurate API endpoint documentation.

Correctly documents the conditional availability of the file upload endpoint based on the configuration flag.

packages/server/next/app/api/files/route.ts (3)

1-3: LGTM: Clean imports and proper typing.

The imports are well-organized and correctly typed, following Next.js conventions.


42-49: LGTM: Proper error handling and logging.

The error handling appropriately catches exceptions, logs them, and returns meaningful error responses with proper HTTP status codes.


37-41:

✅ Verification successful

Verify base64 decoding and file storage integration.

The base64 decoding and file storage logic looks correct. The storeFile helper handles filename sanitization according to the relevant code snippets.


🏁 Script executed:

#!/bin/bash
# Verify the storeFile function implementation and filename sanitization
ast-grep --pattern 'function storeFile($_, $_) {
  $$$
}'

Length of output: 59


🏁 Script executed:

#!/bin/bash
# Locate the storeFile implementation and verify filename sanitization logic
rg -n "storeFile" -C 3

Length of output: 4842


🏁 Script executed:

#!/bin/bash
# Display the storeFile function implementation for review
sed -n '1,200p' packages/server/next/app/api/files/helpers.ts

Length of output: 1577


No action needed: base64 decoding and file storage logic are correct.

– Buffer.from(content, "base64") yields a valid Buffer or throws on malformed input.
– storeFile in packages/server/next/app/api/files/helpers.ts
• splits and validates name and extension
• sanitizes the filename (sanitizeFileName)
• prevents absolute paths in saveFile
• writes the Buffer correctly and returns a usable URL.

python/llama-index-server/llama_index/server/api/utils/chat_attachments.py (3)

1-6: LGTM: Well-structured imports.

The imports are clean and follow Python conventions with proper typing.


8-17: Excellent documentation and function signature.

The function has a clear docstring with proper parameter and return type documentation.


18-22: LGTM: Correct filtering logic for user messages.

The list comprehension properly filters for user messages that have annotations, which is the expected behavior for extracting user-uploaded files.

python/llama-index-server/llama_index/server/tools/interpreter.py (3)

9-11: LGTM: Updated imports align with model refactoring.

The import changes correctly reflect the shift from DocumentFile to ServerFile model usage across the codebase.


92-92: LGTM: Return type updated consistently.

The return type change from DocumentFile to ServerFile aligns with the model refactoring.


122-122:

✅ Verification successful

Verify the impact of changing from .name to .id attribute.

The change from document_file.name to document_file.id means the filename will now show the generated file ID (which includes UUID) rather than the original filename. This may affect user experience if the original filename was more meaningful.

Consider whether users would prefer seeing the original filename or if the generated ID is acceptable for the interpreter output.


🏁 Script executed:

#!/bin/bash
# Check the ServerFile model structure to understand the difference between id and name attributes
ast-grep --pattern 'class ServerFile($_) {
  $$$
}'

Length of output: 53


🏁 Script executed:

#!/bin/bash
# Locate the ServerFile class definition and inspect its constructor parameters
rg -n "class ServerFile" -C20 --glob '*.py'

Length of output: 2596


🏁 Script executed:

#!/bin/bash
# Show context around the 'filename=document_file' usage in interpreter.py
sed -n '100,140p' python/llama-index-server/llama_index/server/tools/interpreter.py

# Show all occurrences of document_file in interpreter.py for attribute inspection
rg -n "document_file" -C3 python/llama-index-server/llama_index/server/tools/interpreter.py

Length of output: 2258


🏁 Script executed:

#!/bin/bash
# Locate the _save_to_disk method in interpreter.py and show its implementation
rg -n "def _save_to_disk" -C20 python/llama-index-server/llama_index/server/tools/interpreter.py

Length of output: 1856


🏁 Script executed:

#!/bin/bash
# Locate the FileService class and inspect its save_file method
rg -n "class FileService" -C20 python/llama-index-server/llama_index/server/services
rg -n "def save_file" -C20 python/llama-index-server/llama_index/server/services

Length of output: 7500


Using .id Is Correct for ServerFile
The ServerFile model only defines an id field (no name attribute), so document_file.id is the correct property here. Attempting to use .name would result in an undefined attribute.
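
For reference, here is a reconstruction of the ServerFile shape implied by this thread (assembled from the review comments, not copied from the model file):

from typing import Optional

from pydantic import BaseModel, Field

class ServerFile(BaseModel):
    id: str  # stored filename including the UUID, e.g. "report_<uuid>.txt"
    url: Optional[str] = None  # download URL served by the app
    # Server-local path, excluded from serialization so it never reaches clients.
    path: Optional[str] = Field(default=None, exclude=True)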

python/llama-index-server/llama_index/server/services/llamacloud/index.py (1)

95-102: LGTM: Proper migration to structured data access.

The change from dictionary-style access to attribute-style access aligns with the migration to structured Pydantic models. The snake_case naming (llama_cloud_pipeline) follows Python conventions and the null checking is preserved.

python/llama-index-server/llama_index/server/api/routers/chat.py (3)

28-28: LGTM: Import additions support new file handling functionality.

The new imports for get_file_attachments, ChatFile, MessageRole, and ServerFile are correctly added to support the file upload feature.

Also applies to: 30-30, 32-32


51-53: LGTM: Proper validation and file attachment extraction.

The code correctly extracts file attachments and validates that the last message is from a user, which is essential for proper workflow execution.


75-78: LGTM: Workflow integration with file attachments.

The workflow is properly updated to include the extracted attachments parameter, maintaining the existing chat history functionality while adding file support.
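
A sketch of the call described above: llama-index workflows turn extra run() kwargs into StartEvent fields, so attachments becomes available inside the workflow's start step (the variable names here are illustrative, not the router's actual code):

from typing import Any, List

def run_with_attachments(workflow: Any, user_msg: str, chat_history: List[Any], attachments: List[Any]) -> Any:
    # The extracted attachments ride along with the chat history into the run.
    return workflow.run(
        user_msg=user_msg,
        chat_history=chat_history,
        attachments=attachments,
    )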

packages/server/examples/private-file/agent-workflow.ts (2)

8-11: LGTM: Proper file extraction from messages.

The workflow factory correctly extracts files from chat messages using the utility function. The destructuring and variable naming are clear and appropriate.


27-34: LGTM: Agent configuration with appropriate context.

The agent is properly configured with the file reading tool and a system prompt that provides context about available file IDs, enabling the agent to assist users effectively.

packages/server/examples/private-file/custom-workflow.ts (3)

22-30: LGTM: Proper workflow initialization and file validation.

The workflow factory correctly extracts attachments and validates that exactly one file is uploaded, providing clear error messaging for the requirement.


33-38: LGTM: Effective use of stateful middleware.

The stateful middleware properly initializes chat memory and stores the uploaded file metadata, providing a clean way to maintain state across workflow events.


64-93: LGTM: Well-implemented streaming LLM response.

The file helper event handler effectively combines file content with user input in a clear prompt structure and properly streams the LLM response using the agent stream events.

packages/server/examples/private-file/README.md (1)

14-19: LGTM: Comprehensive documentation with clear examples.

The documentation effectively explains the file upload functionality, provides clear setup instructions, and demonstrates both workflow approaches with practical examples.

Also applies to: 25-29, 56-60, 62-68, 70-70

python/llama-index-server/llama_index/server/server.py (3)

19-19: Import organization is acceptable.

Moving the pydantic imports closer to their usage is a reasonable code organization choice.


31-34: Good implementation of the file upload configuration.

The new enable_file_upload field follows the established pattern of other boolean configuration options in the class, with appropriate default value and description.


53-57: Secure conditional API endpoint exposure.

The conditional exposure of the UPLOAD_API endpoint based on the enable_file_upload flag is a good security practice, ensuring the endpoint is only available when explicitly enabled.

python/llama-index-server/tests/services/test_file_service.py (4)

3-3: Good addition of type annotations.

The explicit type annotations for mock parameters improve code readability and help with IDE support.

Also applies to: 16-21, 51-56, 83-88, 113-118, 141-147, 180-185, 202-207


40-40: Verify the ServerFile.id format change.

The test now expects result.id to be the full filename string instead of just the UUID. This aligns with the updated FileService implementation.


44-44: URL path normalization is correct.

The updated assertions using replace(os.path.sep, "/") ensure cross-platform compatibility for URL path construction.

Also applies to: 75-75, 106-106, 133-133, 166-166


170-170: Updated error message is appropriate.

The error message "File name is not valid!" is more user-friendly than the previous technical validation message.

python/llama-index-server/examples/private_file/agent-workflow.py (2)

46-52: Workflow creation logic is well-structured.

The conditional tool creation and fallback LLM configuration provide a robust workflow setup.


55-64: Proper server configuration for file upload.

The app configuration correctly enables file upload and sets up the component directory for custom UI components.

packages/server/next/app/api/files/helpers.ts (5)

7-7: Verify upload directory consistency.

The upload folder is set to "output/uploaded" here, but the Python tests use "output/private". This inconsistency should be resolved for uniform file storage across implementations.


13-21: Robust file validation logic.

The file name and extension validation prevents potential security issues and ensures proper file handling.


38-40: Excellent security measure against path traversal.

The absolute path check prevents directory traversal attacks, which is a critical security consideration for file upload functionality.
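
For comparison, a generic Python guard against the same attack (not the helper's actual code; Path.is_relative_to requires Python 3.9+):

from pathlib import Path

def safe_join(base_dir: str, file_name: str) -> Path:
    # Resolve the candidate path and refuse anything that escapes base_dir.
    base = Path(base_dir).resolve()
    candidate = (base / file_name).resolve()
    if not candidate.is_relative_to(base):
        raise ValueError(f"Refusing path outside {base_dir}: {file_name}")
    return candidate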


55-57: Appropriate filename sanitization.

The sanitization function removes potentially problematic characters while preserving alphanumeric characters, underscores, and hyphens, which is a good balance between security and usability.


45-49: Proper content type handling.

The function correctly handles both string and Buffer content types, ensuring flexibility for different file upload scenarios.

packages/server/src/utils/events.ts (3)

3-3: LGTM! Good imports for the new functionality.

The imports are correctly added to support the new file attachment extraction feature.

Also applies to: 11-11


220-239: Well-structured schema definitions and types.

The Zod schemas provide proper validation for file annotations, and the TypeScript types are correctly derived. The ServerFile type extension with the path property is a good design choice.


241-261: Excellent implementation of file attachment extraction.

The function follows the established patterns in this file and correctly:

  • Filters for user messages with annotations
  • Validates annotations against the schema
  • Extracts file metadata and computes paths consistently
  • Returns a properly typed array of ServerFile objects
python/llama-index-server/examples/private_file/README.md (1)

1-102: Excellent documentation for file upload feature.

The README provides comprehensive guidance on using uploaded files in workflows, with clear examples for both agent and custom workflow patterns. The code examples are practical and well-explained.

🧰 Tools
🪛 LanguageTool

[uncategorized] ~31-~31: This verb may not be in the correct tense. Consider changing the tense to fit the context better.
Context: ...mples: ### For agent workflow: - We create a simple file reader tool that can read...

(AI_EN_LECTOR_REPLACEMENT_VERB_TENSE)


[uncategorized] ~68-~68: You might be missing the article “the” here.
Context: ... - You can run the agent workflow with file tool by running the following command: ...

(AI_EN_LECTOR_MISSING_DETERMINER_THE)

🪛 markdownlint-cli2 (0.17.2)

13-13: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


28-28: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


30-30: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


31-31: Unordered list indentation
Expected: 0; Actual: 3

(MD007, ul-indent)


66-66: Unordered list indentation
Expected: 0; Actual: 3

(MD007, ul-indent)


68-68: Unordered list indentation
Expected: 0; Actual: 3

(MD007, ul-indent)


75-75: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


76-76: Unordered list indentation
Expected: 0; Actual: 3

(MD007, ul-indent)


95-95: Unordered list indentation
Expected: 0; Actual: 3

(MD007, ul-indent)


97-97: Unordered list indentation
Expected: 0; Actual: 3

(MD007, ul-indent)

python/llama-index-server/examples/private_file/custom-workflow.py (5)

21-28: Good event class design.

The FileHelpEvent class properly encapsulates the data needed for the workflow with clear field names and documentation.

🧰 Tools
🪛 Pylint (3.3.7)

[refactor] 21-21: Too few public methods (0/2)

(R0903)


30-60: Excellent workflow implementation with proper validation.

The workflow correctly:

  • Validates that exactly one file is uploaded
  • Reads file content synchronously (appropriate for the example)
  • Returns structured event data

The error handling for incorrect file count is a good practice.


62-91: Good streaming implementation.

The help_user step properly formats the prompt and streams the LLM response using the correct event structure for the framework.


100-118: Clean application setup.

The FastAPI app configuration correctly enables file upload and sets up the workflow factory. The standalone run configuration is appropriate for development.


96-96:

❌ Incorrect review comment

Verify the OpenAI model name.

The model name "gpt-4.1-mini" doesn't appear to be a valid OpenAI model. Consider using a valid model like "gpt-4o-mini" or "gpt-3.5-turbo".


🌐 Web query:

What are the current available OpenAI model names, particularly for GPT-4 mini variants?

💡 Result:

OpenAI offers several models optimized for various tasks, including the GPT-4 mini variants. Here's an overview of the current available models:

GPT-4 Series:

  • GPT-4o: Released in May 2024, GPT-4o is a versatile, multimodal model capable of processing and generating text, images, and audio. It offers enhanced accuracy and responsiveness across multiple data types. (en.wikipedia.org)

  • GPT-4o mini: Introduced in July 2024, GPT-4o mini is a cost-effective and efficient version of GPT-4o. It maintains high performance while reducing computational requirements, making it suitable for applications with limited resources. (reuters.com)

  • GPT-4.1: Launched in April 2025, GPT-4.1 focuses on improved coding capabilities, instruction following, and long-context comprehension. It supports up to 1 million tokens, facilitating deeper understanding of large datasets. (reuters.com)

  • GPT-4.1 mini: Also released in April 2025, GPT-4.1 mini is a smaller and more cost-effective variant of GPT-4.1, designed to deliver impressive results in tasks such as coding, mathematics, and visual analysis. (reuters.com)

o-Series Models:

  • o1: Released in December 2024, o1 is OpenAI's first reasoning model, designed to handle complex tasks requiring step-by-step logical reasoning. It outperforms previous models in areas like competitive programming, mathematics, and scientific reasoning. (en.wikipedia.org)

  • o1-mini: A more efficient and cost-effective version of o1, o1-mini is suitable for tasks that require speed and lower resource consumption. (en.wikipedia.org)

  • o3: Released in April 2025, o3 is a reflective generative pre-trained transformer model designed to devote additional deliberation time when addressing questions that require step-by-step logical reasoning. (en.wikipedia.org)

  • o3-mini: A smaller and more efficient variant of o3, o3-mini is optimized for technical domains requiring precision and speed. (en.wikipedia.org)

These models are accessible through OpenAI's API and are integrated into various applications, including ChatGPT, to provide enhanced performance across a range of tasks.


No changes needed: “gpt-4.1-mini” is a valid model

According to the latest OpenAI releases, GPT-4.1 mini was introduced in April 2025 and is a supported, cost-effective variant of GPT-4.1. You can continue using:

llm=OpenAI(model="gpt-4.1-mini"),

Likely an incorrect or invalid review comment.

python/llama-index-server/examples/private_file/example.txt (1)

1-143: Good example document for testing file upload functionality.

The vendor agreement provides realistic, structured text content that will be useful for demonstrating file upload and processing capabilities in the examples.

python/llama-index-server/llama_index/server/services/file.py (1)

115-116:

✅ Verification successful

Verify protection against path traversal attacks

The method uses file_id directly in path construction. Ensure that file_id is properly validated to prevent path traversal attacks.


🏁 Script executed:

#!/bin/bash
# Description: Check if file_id is validated before being used in file operations

# Search for file_id validation or sanitization
rg -A 5 -B 5 "file_id|file_name" | grep -E "(validate|sanitize|check|verify)" 

# Check for any path traversal protection
rg -A 3 -B 3 "(\.\.|\.\./|path.*traversal)" 

# Look for where file_id is generated or received
ast-grep --pattern 'def $_($$$ file_id $$$) { $$$ }'

Length of output: 120779


🏁 Script executed:

#!/bin/bash
# Print the section around the join call to inspect sanitization logic
sed -n '80,150p' python/llama-index-server/llama_index/server/services/file.py

Length of output: 2656


File ID sanitization prevents path traversal

The file_id used in get_file_path is always generated by the service itself—first stripping out any non-alphanumeric or “.” characters from the original name and then appending a UUID and extension:

  • In file.py (lines 80–115), the upload logic runs
    name = re.sub(r"[^a-zA-Z0-9.]", "_", name)
    file_id = f"{name}_{uuid4()}.{extension}"
    ensuring no .., / or \ can appear in file_id.

Since all downstream operations (URL construction, os.path.join(save_dir, file_id), file reads) use only these service-generated IDs, there is no avenue for attacker-controlled path traversal. No further validation is required here.
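
To make that concrete, here is the quoted sanitization applied to a hostile filename (the UUID value is illustrative):

import re
from uuid import uuid4

name, extension = "../../etc/passwd", "txt"
name = re.sub(r"[^a-zA-Z0-9.]", "_", name)  # -> ".._.._etc_passwd"
file_id = f"{name}_{uuid4()}.{extension}"
# e.g. ".._.._etc_passwd_1b9d6bcd-....txt": no path separator survives,
# so os.path.join(save_dir, file_id) stays inside save_dir.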

… Update custom workflow to handle file attachments and modify chat router to remove unused attachment handling. Refactor create_workflow to pass attachments from chat request.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
python/llama-index-server/llama_index/server/api/routers/chat.py (1)

101-114: ⚠️ Potential issue

Address the unresolved issues from the previous review.

The file upload endpoint still has the same issues that were identified in the previous review:

  1. Exception handling doesn't follow the static analysis recommendation (line 113)
  2. Missing input validation for request fields
  3. Hard-coded save directory path

Apply this diff to address these issues:

 @router.post("/file")
 async def upload_file(request: ChatFile) -> ServerFile:
     """
     Upload a file to the server to be used in the chat session.
     """
     try:
+        if not request.name or not request.base64:
+            raise ValueError("File name and content are required")
+        
         save_dir = os.path.join("output", "private")
         content, _ = FileService._preprocess_base64_file(request.base64)
         file = FileService.save_file(content, request.name, save_dir)
         return file
-    except Exception:
-        raise HTTPException(status_code=500, detail="Error uploading file")
+    except ValueError as e:
+        raise HTTPException(status_code=400, detail=str(e))
+    except Exception as e:
+        raise HTTPException(status_code=500, detail="Error uploading file") from e
🧰 Tools
🪛 Ruff (0.11.9)

113-113: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 24d7e4c and e6183c7.

📒 Files selected for processing (3)
  • python/llama-index-server/README.md (2 hunks)
  • python/llama-index-server/examples/private_file/custom-workflow.py (1 hunks)
  • python/llama-index-server/llama_index/server/api/routers/chat.py (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • python/llama-index-server/README.md
🧰 Additional context used
🧬 Code Graph Analysis (2)
python/llama-index-server/llama_index/server/api/routers/chat.py (3)
python/llama-index-server/llama_index/server/models/chat.py (3)
  • ChatFile (49-56)
  • ChatRequest (106-128)
  • ServerFile (9-29)
python/llama-index-server/llama_index/server/services/file.py (3)
  • FileService (18-153)
  • _preprocess_base64_file (146-153)
  • save_file (24-74)
packages/server/src/utils/events.ts (1)
  • ServerFile (237-239)
python/llama-index-server/examples/private_file/custom-workflow.py (4)
python/llama-index-server/llama_index/server/api/utils/chat_attachments.py (1)
  • get_file_attachments (8-31)
python/llama-index-server/llama_index/server/api/routers/chat.py (1)
  • chat (44-99)
python/llama-index-server/llama_index/server/models/chat.py (2)
  • ChatRequest (106-128)
  • ServerFile (9-29)
python/llama-index-server/examples/private_file/agent-workflow.py (2)
  • create_workflow (45-51)
  • create_app (54-63)
🪛 Ruff (0.11.9)
python/llama-index-server/llama_index/server/api/routers/chat.py

113-113: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🪛 Pylint (3.3.7)
python/llama-index-server/examples/private_file/custom-workflow.py

[refactor] 23-23: Too few public methods (0/2)

(R0903)

⏰ Context from checks skipped due to timeout of 90000ms (57)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: typescript (22, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, ubuntu-22.04, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, windows-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, --no-files, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --llamacloud, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --llamacloud, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, llamaindexserver)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --example-file, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --example-file, streaming)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, llamaindexserver)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, llamaindexserver)
  • GitHub Check: Unit Tests (windows-latest, 3.9)
  • GitHub Check: Unit Tests (ubuntu-latest, 3.9)
  • GitHub Check: typescript (20, 3.11, macos-latest, nextjs, --no-files, streaming)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, --no-files, streaming)
  • GitHub Check: lint
🔇 Additional comments (9)
python/llama-index-server/llama_index/server/api/routers/chat.py (3)

9-9: LGTM on the new imports.

The added imports support the new file upload functionality and explicit message validation.

Also applies to: 29-29, 31-31


50-51: Good improvement on explicit validation.

Making the user role validation explicit rather than implicit improves code clarity and error handling.
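
The check in question reduces to a few lines; a sketch of the shape (the error message and variable names are assumptions, not the router's exact code):

from fastapi import HTTPException
from llama_index.core.llms import MessageRole

last_message = request.messages[-1]
if last_message.role != MessageRole.USER:
    # Reject early instead of letting a non-user message reach the workflow.
    raise HTTPException(status_code=400, detail="Last message must be a user message")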


73-73:

❓ Verification inconclusive

Verify workflow compatibility with raw message content.

The change from converting to llamaindex message format to passing raw content directly should be verified to ensure all workflows handle this correctly.


🏁 Script executed:

#!/bin/bash
# Description: Verify that workflows can handle raw string content in user_msg parameter
# Expected: Find workflow implementations that accept user_msg parameter

ast-grep --pattern $'async def run($$$, user_msg: $_, $$$):
  $$$'

echo "=== Also check for workflow.run calls with user_msg ==="
rg -A 3 'workflow\.run.*user_msg'

Length of output: 224


🏁 Script executed:

#!/bin/bash
set -e

echo "Locating chat.py file..."
CHAT_PATH=$(find . -type f | grep "routers/chat.py" | head -n1)
echo "Found: $CHAT_PATH"

echo
echo "===== chat.py (first 100 lines) ====="
sed -n '1,100p' "$CHAT_PATH"

echo
echo "===== Searching for workflow.run calls in chat.py ====="
rg --color=never -n "workflow\.run" "$CHAT_PATH"

echo
echo "===== Searching for Workflow classes across the repo ====="
rg --color=never -n "class .*Workflow" --glob "*.py"

echo
echo "===== Listing all run method definitions ====="
rg --color=never -n "def run" --glob "*.py" | head -n 100

echo
echo "===== Searching for user_msg usage across repo ====="
rg --color=never -n "user_msg"

Length of output: 15056


🏁 Script executed:

#!/bin/bash
set -e

echo "Locating llama-index-server chat.py..."
CHAT_PATH=$(find python/llama-index-server -type f -name chat.py | grep "llama_index/server/api/routers/chat.py" | head -n1)
echo "Chat router located at: $CHAT_PATH"

echo
echo "===== chat.py (lines 1-200) ====="
sed -n '1,200p' "$CHAT_PATH"

echo
echo "===== create_workflow import in chat.py ====="
grep -n "create_workflow" "$CHAT_PATH" || echo "No import found"

echo
echo "Locating create_workflow definition..."
CREATE_PATH=$(grep -Rnl "def create_workflow" python/llama-index-server)
echo "create_workflow defined in: $CREATE_PATH"

echo
echo "===== create_workflow definition (lines 1-200) ====="
sed -n '1,200p' "$CREATE_PATH"

echo
echo "===== run methods in create_workflow file ====="
grep -n "def run" "$CREATE_PATH" || echo "No run methods found in create_workflow file"

Length of output: 11378


🏁 Script executed:

#!/bin/bash
set -e

echo "===== Searching for include_router and chat_router usage ====="
rg --color=never -n "include_router" -A2 python/llama-index-server || true
rg --color=never -n "chat_router" -A3 python/llama-index-server || true

echo
echo "===== Locating Workflow class definition ====="
WF_FILE=$(find python/llama-index-server/llama_index/core/workflow -type f -name "workflow.py" | head -n1)
echo "Workflow file: $WF_FILE"

echo
echo "===== Workflow class and run method (lines 1-200) ====="
sed -n '1,200p' "$WF_FILE"

Length of output: 6742


🏁 Script executed:

#!/bin/bash
set -e

echo "===== Displaying test_chat_api.py (1-200) ====="
sed -n '1,200p' python/llama-index-server/tests/api/test_chat_api.py

echo
echo "===== Finding Workflow class file ====="
WF_PATH=$(find python/llama-index-server/llama_index -type f -name "workflow.py" | head -n1)
echo "Workflow file located at: $WF_PATH"

echo
echo "===== Showing Workflow class and run signature (1-200) ====="
sed -n '1,200p' "$WF_PATH"

Length of output: 9773


Verify core Workflow.run accepts raw user_msg strings

The router now passes the last user content directly as

workflow.run(user_msg=last_message.content, chat_history=…)

instead of a llama-index message object. Please manually confirm:

  • That llama_index.core.workflow.Workflow.run has a user_msg: str parameter and handles plain strings.
  • Agent and custom workflows correctly wrap or convert this string into the expected AgentInput event.
  • Existing tests cover this change (or add a test asserting run is invoked with user_msg and correct types; a skeletal sketch follows below).
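
A skeletal pytest-asyncio sketch of the assertion shape meant here; it exercises only a mock, so wiring it to the real router (via its test client and a stubbed workflow factory) is left to the reader:

from unittest.mock import AsyncMock

import pytest

@pytest.mark.asyncio
async def test_run_receives_raw_string():
    workflow = AsyncMock()
    # Simulate what the router now does with the last user message.
    await workflow.run(user_msg="Summarize the attached file", chat_history=[])
    workflow.run.assert_awaited_once()
    assert isinstance(workflow.run.await_args.kwargs["user_msg"], str)
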
python/llama-index-server/examples/private_file/custom-workflow.py (6)

1-21: LGTM on the imports and overall structure.

The imports are appropriate for a custom workflow implementation with file handling capabilities.


23-30: LGTM on the event class definition.

The FileHelpEvent class appropriately encapsulates the file content and user request for workflow processing.

🧰 Tools
🪛 Pylint (3.3.7)

[refactor] 23-23: Too few public methods (0/2)

(R0903)


52-53: Good validation for missing files.

The error handling correctly prevents the workflow from proceeding without uploaded files.


65-96: LGTM on the LLM integration and streaming logic.

The prompt template is well-structured and the streaming implementation correctly uses AgentStream events for real-time response delivery.


98-103: LGTM on the workflow factory implementation.

The factory correctly extracts file attachments and instantiates the workflow with appropriate configuration.


106-124: LGTM on the FastAPI app setup.

The application is properly configured with file upload enabled and includes appropriate development server setup.

…function to accept file attachments directly. Introduce a new ServerFileResponse model for better file response handling. Update chat router to utilize the new FileUpload model for file uploads. Clean up imports and ensure consistent file attachment processing across workflows.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
python/llama-index-server/llama_index/server/api/routers/chat.py (1)

106-119: Improve exception handling and add input validation.

The file upload endpoint has the same issues identified in previous reviews: missing input validation, broad exception handling without proper chaining, and hard-coded save directory path.

🧰 Tools
🪛 Ruff (0.11.9)

118-118: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

python/llama-index-server/examples/private_file/custom-workflow.py (1)

56-59: Improve file reading robustness.

The file reading logic has the same robustness issues identified in previous reviews: missing error handling, no encoding specification, and potential for I/O errors.
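
A small sketch of the more defensive read the comment asks for (the helper name is hypothetical):

def read_uploaded_file(path: str) -> str:
    # Read with an explicit encoding and turn I/O failures into an
    # error the workflow can surface to the user.
    try:
        with open(path, "r", encoding="utf-8") as f:
            return f.read()
    except (OSError, UnicodeDecodeError) as e:
        raise ValueError(f"Could not read uploaded file: {e}") from e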

🧹 Nitpick comments (1)
python/llama-index-server/llama_index/server/models/chat.py (1)

27-35: Consider adding validation for base64 content.

The FileUpload model looks good, but consider adding validation to ensure the base64 field contains a valid base64 data URL.

 class FileUpload(BaseModel):
     """
     The file to be uploaded to the chat.
     """

     name: str
-    base64: str
+    base64: str = Field(..., pattern=r'^data:[^;]+;base64,')
     params: Any = None
🧰 Tools
🪛 Pylint (3.3.7)

[refactor] 27-27: Too few public methods (0/2)

(R0903)

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e6183c7 and cd1ed64.

📒 Files selected for processing (8)
  • python/llama-index-server/examples/private_file/agent-workflow.py (1 hunks)
  • python/llama-index-server/examples/private_file/custom-workflow.py (1 hunks)
  • python/llama-index-server/llama_index/server/api/routers/chat.py (5 hunks)
  • python/llama-index-server/llama_index/server/api/utils/chat_attachments.py (1 hunks)
  • python/llama-index-server/llama_index/server/models/chat.py (2 hunks)
  • python/llama-index-server/llama_index/server/models/file.py (1 hunks)
  • python/llama-index-server/llama_index/server/services/file.py (3 hunks)
  • python/llama-index-server/llama_index/server/tools/interpreter.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • python/llama-index-server/llama_index/server/tools/interpreter.py
  • python/llama-index-server/llama_index/server/api/utils/chat_attachments.py
  • python/llama-index-server/examples/private_file/agent-workflow.py
  • python/llama-index-server/llama_index/server/services/file.py
🧰 Additional context used
🧬 Code Graph Analysis (2)
python/llama-index-server/llama_index/server/api/routers/chat.py (3)
python/llama-index-server/llama_index/server/models/chat.py (2)
  • ChatRequest (84-106)
  • FileUpload (27-34)
python/llama-index-server/llama_index/server/models/file.py (2)
  • ServerFileResponse (6-10)
  • to_server_file_response (20-26)
python/llama-index-server/llama_index/server/services/file.py (3)
  • FileService (18-153)
  • _preprocess_base64_file (146-153)
  • save_file (24-74)
python/llama-index-server/llama_index/server/models/chat.py (2)
python/llama-index-server/tests/test_llamaindex_server.py (1)
  • server (33-40)
python/llama-index-server/llama_index/server/models/file.py (1)
  • ServerFileResponse (6-10)
🪛 Pylint (3.3.7)
python/llama-index-server/examples/private_file/custom-workflow.py

[refactor] 24-24: Too few public methods (0/2)

(R0903)

python/llama-index-server/llama_index/server/models/chat.py

[refactor] 10-10: Too few public methods (0/2)

(R0903)


[refactor] 18-18: Too few public methods (0/2)

(R0903)


[refactor] 27-27: Too few public methods (0/2)

(R0903)


[refactor] 37-37: Too few public methods (0/2)

(R0903)


[refactor] 47-47: Too few public methods (0/2)

(R0903)

python/llama-index-server/llama_index/server/models/file.py

[refactor] 6-6: Too few public methods (0/2)

(R0903)


[refactor] 13-13: Too few public methods (1/2)

(R0903)

🪛 Ruff (0.11.9)
python/llama-index-server/llama_index/server/api/routers/chat.py

118-118: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

⏰ Context from checks skipped due to timeout of 90000ms (57)
🔇 Additional comments (8)
python/llama-index-server/llama_index/server/api/routers/chat.py (2)

55-56: LGTM! Message validation is correctly implemented.

The explicit validation ensures that only user messages can trigger workflow execution, which aligns with the workflow processing logic.


78-78: LGTM! Using raw content is appropriate here.

Using last_message.content directly is correct since the validation above ensures it's a user message, and the workflow expects the raw string content.

python/llama-index-server/llama_index/server/models/file.py (1)

6-26: Well-designed file models with clear separation of concerns.

The separation between ServerFile (internal with path) and ServerFileResponse (external without path) is excellent for security and API design. The conversion method provides a clean transformation between representations.

🧰 Tools
🪛 Pylint (3.3.7)

[refactor] 6-6: Too few public methods (0/2)

(R0903)


[refactor] 13-13: Too few public methods (1/2)

(R0903)
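
The split the comment praises can be sketched as follows (field names other than path are assumptions; the real models live in models/file.py):

from pydantic import BaseModel

class ServerFileResponse(BaseModel):
    # Public shape returned by the API: no filesystem details leak out.
    id: str
    type: str
    size: int
    url: str

class ServerFile(ServerFileResponse):
    # Internal shape: adds the absolute path used by workflows and tools.
    path: str

    def to_server_file_response(self) -> ServerFileResponse:
        return ServerFileResponse(**self.model_dump(exclude={"path"}))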

python/llama-index-server/examples/private_file/custom-workflow.py (2)

50-64: Good workflow structure for file handling.

The workflow design effectively demonstrates how to process uploaded files with proper event-driven architecture and streaming responses.


99-105: Well-structured workflow factory pattern.

The factory function properly extracts file attachments from the chat request and configures the workflow with appropriate dependencies.
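
In outline, the factory pattern being praised looks like this (the timeout value and the argument shape of get_file_attachments are assumptions; FileHelpWorkflow refers to the sketch earlier in this thread):

from llama_index.server.api.utils.chat_attachments import get_file_attachments
from llama_index.server.models.chat import ChatRequest

def create_workflow(chat_request: ChatRequest) -> FileHelpWorkflow:
    # Pull ServerFile attachments out of the request's message annotations
    # and hand them to a fresh workflow instance for this chat turn.
    attachments = get_file_attachments(chat_request.messages)
    return FileHelpWorkflow(files=attachments, timeout=120.0)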

python/llama-index-server/llama_index/server/models/chat.py (3)

10-25: Well-structured file annotation models.

The FileData and FileAnnotation models provide a clean, typed approach to handling file attachments in chat messages with proper Literal typing for the annotation type.

🧰 Tools
🪛 Pylint (3.3.7)

[refactor] 10-10: Too few public methods (0/2)

(R0903)


[refactor] 18-18: Too few public methods (0/2)

(R0903)
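
A sketch of the two models as the comment describes them, with ServerFile as sketched above (the Literal value is a guess; check models/chat.py for the real discriminator):

from typing import List, Literal
from pydantic import BaseModel

class FileData(BaseModel):
    files: List[ServerFile]

class FileAnnotation(BaseModel):
    # The Literal type narrows the discriminator so parsing can reject
    # annotation payloads that are not file attachments.
    type: Literal["document_file"]
    data: FileData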


47-57: Excellent use of field aliases for API compatibility.

The ChatData model properly handles camelCase conversion with both alias and serialization_alias, ensuring compatibility with JavaScript frontends.

🧰 Tools
🪛 Pylint (3.3.7)

[refactor] 47-47: Too few public methods (0/2)

(R0903)
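
The alias technique generalizes beyond this model; a minimal pydantic v2 sketch with a hypothetical field name:

from typing import Optional
from pydantic import BaseModel, ConfigDict, Field

class ChatData(BaseModel):
    # populate_by_name lets Python callers use snake_case too.
    model_config = ConfigDict(populate_by_name=True)

    # Accepts "nextQuestionPrompt" from the JS frontend and serializes it
    # back in camelCase, while Python code reads next_question_prompt.
    next_question_prompt: Optional[str] = Field(
        default=None,
        alias="nextQuestionPrompt",
        serialization_alias="nextQuestionPrompt",
    )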


63-70: Good type safety improvement with clear documentation.

The Union type for annotations maintains backwards compatibility while adding type safety for file annotations. The docstring clearly explains the conversion behavior.
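
A plausible shape for the annotated field (the message class name is hypothetical; FileAnnotation from the sketch above):

from typing import Any, List, Optional, Union
from pydantic import BaseModel

class ChatAPIMessage(BaseModel):
    role: str
    content: str
    # File annotations validate into FileAnnotation; anything else is kept
    # as raw data, preserving backwards compatibility with older clients.
    annotations: Optional[List[Union[FileAnnotation, Any]]] = None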

leehuwuj added 2 commits June 6, 2025 10:30
…es. Introduce a new `workflowFactory` structure to support file attachments, and improve the `extractFileAttachments` function for better clarity and usability. Update descriptions in tools to reflect changes in file ID handling.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
packages/server/examples/private-file/agent-workflow.ts (1)

15-31: ⚠️ Potential issue

Add error handling for file operations.

This issue was previously identified but still needs to be addressed. The readFileTool lacks error handling for file read operations, which could cause the agent to crash if a file doesn't exist or is inaccessible.

Apply this diff to add proper error handling:

 const readFileTool = tool(
-    ({ fileId }) => {
+    async ({ fileId }) => {
       if (!fileIds.includes(fileId)) {
         throw new Error(`File with id ${fileId} not found`);
       }

+      try {
         const filePath = getStoredFilePath({ id: fileId });
-        return fsPromises.readFile(filePath, "utf8");
+        return await fsPromises.readFile(filePath, "utf8");
+      } catch (error) {
+        return `Error reading file with id "${fileId}": ${error instanceof Error ? error.message : 'Unknown error'}`;
+      }
     },
packages/server/examples/private-file/README.md (1)

23-23: Update documentation link to match actual file location.

The link references chat_attachments.ts but the actual function is in events.ts.

Apply this diff to correct the documentation:

-In LlamaIndexServer, the uploaded file is included in chat message annotations. You can easily get the uploaded files from chat messages using the [extractFileAttachments](https://github.com/llamaindex/llamaindex/blob/main/packages/server/src/utils/chat_attachments.ts) function.
+In LlamaIndexServer, the uploaded file is included in chat message annotations. You can easily get the uploaded files from chat messages using the [extractFileAttachments](https://github.com/llamaindex/llamaindex/blob/main/packages/server/src/utils/events.ts) function.
🧹 Nitpick comments (4)
packages/server/README.md (2)

75-75: Fix grammar and clarity issues.

The sentence has multiple grammatical errors that affect readability.

Apply this diff to improve grammar and clarity:

-The `workflow` provided will be called for each chat request to initialize a new workflow instance. For advanced use cases, you can define workflowFactory with a chatBody which include list of UI messages in the request body.
+The `workflow` provided will be called for each chat request to initialize a new workflow instance. For advanced use cases, you can define a workflowFactory with a chatBody parameter which includes a list of UI messages in the request body.
🧰 Tools
🪛 LanguageTool

[uncategorized] ~75-~75: You might be missing the article “a” here.
Context: ... For advanced use cases, you can define workflowFactory with a chatBody which include list of U...

(AI_EN_LECTOR_MISSING_DETERMINER_A)


[uncategorized] ~75-~75: This verb does not appear to agree with the subject. Consider using a different form.
Context: ...e workflowFactory with a chatBody which include list of UI messages in the request body...

(AI_EN_LECTOR_REPLACEMENT_VERB_AGREEMENT)


[uncategorized] ~75-~75: You might be missing the article “a” here.
Context: ...owFactory with a chatBody which include list of UI messages in the request body. ``...

(AI_EN_LECTOR_MISSING_DETERMINER_A)


86-86: Fix possessive pronoun usage.

The sentence uses "it's" (contraction for "it is") instead of the possessive pronoun "its".

Apply this diff to correct the grammar:

-This means that the workflow must handle a `startAgentEvent` event, which is the entry point of the workflow and contains the following information in it's `data` property:
+This means that the workflow must handle a `startAgentEvent` event, which is the entry point of the workflow and contains the following information in its `data` property:
🧰 Tools
🪛 LanguageTool

[grammar] ~86-~86: Did you mean the possessive pronoun “its”?
Context: ...d contains the following information in it's data property: ```typescript { use...

(IT_S_ITS)

packages/server/examples/private-file/README.md (2)

37-37: Fix grammar in recommendation sentence.

The verb "recommend" should be followed by a gerund form.

Apply this diff to fix the grammar:

-If you are using AgentWorkflow, to provide file access to the agent, you can create a tool to read the file content. We recommend to use the `fileId` as the parameter of the tool instead of the `filePath` to avoid showing internal file path to the user.
+If you are using AgentWorkflow, to provide file access to the agent, you can create a tool to read the file content. We recommend using the `fileId` as the parameter of the tool instead of the `filePath` to avoid showing internal file path to the user.
🧰 Tools
🪛 LanguageTool

[grammar] ~37-~37: The verb ‘recommend’ is used with the gerund form.
Context: ...ate a tool to read the file content. We recommend to use the fileId as the parameter of the to...

(ADMIT_ENJOY_VB)


68-68: Add missing article for clarity.

The sentence is missing the article "the" before "custom workflow example".

Apply this diff to improve readability:

-> To run custom workflow example, update the `index.ts` file to use the `workflowFactory` from `custom-workflow.ts` instead of `agent-workflow.ts`.
+> To run the custom workflow example, update the `index.ts` file to use the `workflowFactory` from `custom-workflow.ts` instead of `agent-workflow.ts`.
🧰 Tools
🪛 LanguageTool

[uncategorized] ~68-~68: Possible missing article found.
Context: ...low.ts) for the full example. > To run custom workflow example, update the index.ts...

(AI_HYDRA_LEO_MISSING_THE)

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cd1ed64 and 5870403.

📒 Files selected for processing (6)
  • packages/server/README.md (2 hunks)
  • packages/server/examples/private-file/README.md (1 hunks)
  • packages/server/examples/private-file/agent-workflow.ts (1 hunks)
  • packages/server/examples/private-file/custom-workflow.ts (1 hunks)
  • packages/server/examples/private-file/index.ts (1 hunks)
  • packages/server/src/utils/events.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • packages/server/examples/private-file/index.ts
  • packages/server/examples/private-file/custom-workflow.ts
  • packages/server/src/utils/events.ts
🧰 Additional context used
🧬 Code Graph Analysis (1)
packages/server/examples/private-file/agent-workflow.ts (3)
packages/server/examples/private-file/custom-workflow.ts (1)
  • workflowFactory (23-98)
packages/create-llama/templates/components/ui/html/chat/chat-messages.tsx (1)
  • Message (6-10)
packages/server/src/utils/events.ts (1)
  • extractFileAttachments (271-279)
🪛 LanguageTool
packages/server/examples/private-file/README.md

[grammar] ~37-~37: The verb ‘recommend’ is used with the gerund form.
Context: ...ate a tool to read the file content. We recommend to use the fileId as the parameter of the to...

(ADMIT_ENJOY_VB)


[uncategorized] ~68-~68: Possible missing article found.
Context: ...low.ts) for the full example. > To run custom workflow example, update the index.ts...

(AI_HYDRA_LEO_MISSING_THE)

packages/server/README.md

[uncategorized] ~75-~75: You might be missing the article “a” here.
Context: ... For advanced use cases, you can define workflowFactory with a chatBody which include list of U...

(AI_EN_LECTOR_MISSING_DETERMINER_A)


[uncategorized] ~75-~75: This verb does not appear to agree with the subject. Consider using a different form.
Context: ...e workflowFactory with a chatBody which include list of UI messages in the request body...

(AI_EN_LECTOR_REPLACEMENT_VERB_AGREEMENT)


[uncategorized] ~75-~75: You might be missing the article “a” here.
Context: ...owFactory with a chatBody which include list of UI messages in the request body. ``...

(AI_EN_LECTOR_MISSING_DETERMINER_A)


[grammar] ~86-~86: Did you mean the possessive pronoun “its”?
Context: ...d contains the following information in it's data property: ```typescript { use...

(IT_S_ITS)

🪛 markdownlint-cli2 (0.17.2)
packages/server/examples/private-file/README.md

21-21: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)

⏰ Context from checks skipped due to timeout of 90000ms (57)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
packages/server/src/utils/file.ts (1)

54-70: Implementation addresses security concerns effectively.

The function correctly implements the suggestions from past review comments:

  • Uses path.join for cross-platform compatibility
  • Validates the id parameter to prevent path traversal attacks
  • Provides appropriate error handling for invalid inputs

The implementation is secure and well-structured.

🧹 Nitpick comments (4)
packages/server/src/utils/file.ts (1)

34-53: Remove duplicate documentation blocks.

There are two JSDoc comment blocks for the same function, which creates confusion and redundancy. The second block (lines 44-53) provides better detail about the implementation.

-/**
- * Returns the full path to a stored file given its id and optional save directory.
- * If saveDir is not provided, defaults to "output/uploaded".
- *
- * @param {Object} params - The parameters object.
- * @param {string} params.id - The file identifier.
- * @param {string} [params.saveDir] - Optional directory to save the file.
- * @returns {string} The full file path.
- */
-
/**
 * Constructs a stored file path from an ID and optional directory.
 * Uses path.join for cross-platform safety and validates the ID to prevent path traversal.
 *
 * @param {Object} params - The parameters object.
 * @param {string} params.id - The file identifier (must not contain path separators).
 * @param {string} [params.saveDir] - Optional directory to save the file. Defaults to "output/uploaded".
 * @returns {string} The full file path.
 * @throws {Error} If the id contains invalid path characters.
 */
packages/server/examples/private-file/README.md (3)

21-21: Remove trailing punctuation from heading.

Markdown headings should not end with punctuation for better consistency.

-## How to get the uploaded files in your workflow:
+## How to get the uploaded files in your workflow
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

21-21: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


37-37: Fix grammar: Use the gerund form with 'recommend'.

The verb "recommend" should be followed by the gerund "using" rather than the infinitive.

-If you are using AgentWorkflow, to provide file access to the agent, you can create a tool to read the file content. We recommend to use the `fileId` as the parameter of the tool instead of the `filePath` to avoid showing internal file path to the user.
+If you are using AgentWorkflow, to provide file access to the agent, you can create a tool to read the file content. We recommend using the `fileId` as the parameter of the tool instead of the `filePath` to avoid showing internal file path to the user.
🧰 Tools
🪛 LanguageTool

[grammar] ~37-~37: The verb ‘recommend’ is used with the gerund form.
Context: ...ate a tool to read the file content. We recommend to use the fileId as the parameter of the to...

(ADMIT_ENJOY_VB)


68-68: Add missing article before 'custom workflow'.

Add the article "the" before "custom workflow" for proper grammar.

-> To run custom workflow example, update the `index.ts` file to use the `workflowFactory` from `custom-workflow.ts` instead of `agent-workflow.ts`.
+> To run the custom workflow example, update the `index.ts` file to use the `workflowFactory` from `custom-workflow.ts` instead of `agent-workflow.ts`.
🧰 Tools
🪛 LanguageTool

[uncategorized] ~68-~68: Possible missing article found.
Context: ...low.ts) for the full example. > To run custom workflow example, update the index.ts...

(AI_HYDRA_LEO_MISSING_THE)

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5870403 and f71353f.

📒 Files selected for processing (2)
  • packages/server/examples/private-file/README.md (1 hunks)
  • packages/server/src/utils/file.ts (2 hunks)
🧰 Additional context used
🪛 LanguageTool
packages/server/examples/private-file/README.md

[grammar] ~37-~37: The verb ‘recommend’ is used with the gerund form.
Context: ...ate a tool to read the file content. We recommend to use the fileId as the parameter of the to...

(ADMIT_ENJOY_VB)


[uncategorized] ~68-~68: Possible missing article found.
Context: ...low.ts) for the full example. > To run custom workflow example, update the index.ts...

(AI_HYDRA_LEO_MISSING_THE)

🪛 markdownlint-cli2 (0.17.2)
packages/server/examples/private-file/README.md

21-21: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)

⏰ Context from checks skipped due to timeout of 90000ms (57)
🔇 Additional comments (2)
packages/server/src/utils/file.ts (1)

3-3: LGTM: Import addition is appropriate.

The path module import is correctly added to support the new getStoredFilePath function.

packages/server/examples/private-file/README.md (1)

50-50:

❌ Incorrect review comment

Verify variable scope for 'attachments'.

The variable attachments is referenced in the tool description but may not be in scope within this code example. Ensure the variable is properly defined or consider using a more generic description.


🏁 Script executed:

#!/bin/bash
# Verify if 'attachments' variable is properly defined in the context
rg -A 5 -B 5 "const attachments.*extractFileAttachments"

Length of output: 1807


No scope issue: attachments is defined before use
In packages/server/examples/private-file/README.md, attachments is declared at the top of workflowFactory:

const attachments = extractFileAttachments(reqBody.messages);

Since the tools array (and its description template) is created within the same function, attachments is in scope and can be safely referenced. No changes required.

Likely an incorrect or invalid review comment.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
.changeset/wide-queens-drop.md (1)

5-5: Refine changelog summary grammar.
Consider rephrasing to “Add support for chat file uploads” or “Support file uploads in the chat UI” for improved clarity.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f71353f and 70b808b.

📒 Files selected for processing (3)
  • .changeset/hot-moments-slide.md (1 hunks)
  • .changeset/wide-queens-drop.md (1 hunks)
  • packages/server/examples/private-file/custom-workflow.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • .changeset/hot-moments-slide.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/server/examples/private-file/custom-workflow.ts
⏰ Context from checks skipped due to timeout of 90000ms (57)
  • GitHub Check: typescript and python CI matrix (Node 20/22, Python 3.11) across ubuntu-22.04, windows-latest, and macos-latest with --llamacloud/--example-file/--no-files × llamaindexserver/streaming variants
  • GitHub Check: Unit Tests (windows-latest and ubuntu-latest, 3.9)
  • GitHub Check: lint

@leehuwuj leehuwuj merged commit 1ff6eaf into main Jun 6, 2025
60 checks passed