Merged
Commits
28 commits
f99d8ac
init private support for python BE
leehuwuj Jun 3, 2025
d64ce74
feat: Add private file handling and upload support in FastAPI
leehuwuj Jun 3, 2025
91a4173
add process base64 and update examples
leehuwuj Jun 3, 2025
e873b63
Merge remote-tracking branch 'origin/main' into lee/private-file
leehuwuj Jun 3, 2025
c94ef19
add readme example
leehuwuj Jun 3, 2025
99cef08
fix test
leehuwuj Jun 4, 2025
667da52
feat: Add file upload support to LlamaIndexServer TS
leehuwuj Jun 4, 2025
8ebafc9
add get_file to fileservice
marcusschiesser Jun 4, 2025
07ab553
refactor: Simplify file storage logic in helpers.ts
leehuwuj Jun 4, 2025
095badc
update example
leehuwuj Jun 4, 2025
ee3057e
attach file to user message
leehuwuj Jun 5, 2025
7d50b60
fix example, improve model
leehuwuj Jun 5, 2025
2105e26
Merge remote-tracking branch 'origin/main' into lee/private-file
leehuwuj Jun 5, 2025
89c4036
feat: Add file upload support and enhance chat workflow in LlamaIndex…
leehuwuj Jun 5, 2025
36cdd6d
remove redundant change
leehuwuj Jun 5, 2025
0198c00
support agent workflow for ts
leehuwuj Jun 5, 2025
656280f
Enhance README and add file upload examples for LlamaIndex Server. Up…
leehuwuj Jun 5, 2025
b617453
update doc
leehuwuj Jun 5, 2025
2929913
Merge remote-tracking branch 'origin/main' into lee/private-file
leehuwuj Jun 5, 2025
24d7e4c
update example
leehuwuj Jun 5, 2025
e6183c7
Enhance README with detailed instructions for file upload in chat UI.…
leehuwuj Jun 6, 2025
cd1ed64
Refactor file handling in workflows by updating the create_file_tool …
leehuwuj Jun 6, 2025
4fdaa8d
Enhance file handling in workflows by updating README and example fil…
leehuwuj Jun 6, 2025
5870403
fix unstoppable
leehuwuj Jun 6, 2025
f71353f
chore: fix issues
leehuwuj Jun 6, 2025
70b808b
add changeset
leehuwuj Jun 6, 2025
a0af27a
bump chat-ui
leehuwuj Jun 6, 2025
193298e
bump chat-ui for eject project
leehuwuj Jun 6, 2025
1 change: 1 addition & 0 deletions packages/server/README.md
@@ -60,6 +60,7 @@ The `LlamaIndexServer` accepts the following configuration options:
- `workflow`: A callable function that creates a workflow instance for each request. See [Workflow factory contract](#workflow-factory-contract) for more details.
- `uiConfig`: An object to configure the chat UI containing the following properties:
- `starterQuestions`: List of starter questions for the chat UI (default: `[]`)
- `enableFileUpload`: Whether to enable file upload in the chat UI (default: `false`). See [Upload file example](./examples/private-file/README.md) for more details.
- `componentsDir`: The directory for custom UI components rendering events emitted by the workflow. The default is undefined, which does not render custom UI components.
- `layoutDir`: The directory for custom layout sections. The default value is `layout`. See [Custom Layout](#custom-layout) for more details.
- `llamaCloudIndexSelector`: Whether to show the LlamaCloud index selector in the chat UI (requires `LLAMA_CLOUD_API_KEY` to be set in the environment variables) (default: `false`)
40 changes: 33 additions & 7 deletions packages/server/examples/README.md
@@ -1,12 +1,38 @@
# LlamaIndex Server Examples

This directory contains examples of how to use the LlamaIndex Server.
This directory provides example projects demonstrating how to use the LlamaIndex Server.

## Running the examples
## How to Run the Examples

```bash
export OPENAI_API_KEY=your_openai_api_key
pnpm run dev
```
1. **Install dependencies**

## Open browser at http://localhost:3000
In the root of this directory, run:

```bash
pnpm install
```

2. **Set your OpenAI API key**

Export your OpenAI API key as an environment variable:

```bash
export OPENAI_API_KEY=your_openai_api_key
```

3. **Start an example**

Replace `<example>` with the name of the example you want to run (e.g., `private-file`):

```bash
pnpm nodemon --exec tsx <example>/index.ts
```

4. **Open the application in your browser**

Visit [http://localhost:3000](http://localhost:3000) to interact with the running example.

## Notes

- Make sure you have [pnpm](https://pnpm.io/) installed.
- Each example may have its own specific instructions or requirements; check the individual example's `index.ts` for details.
70 changes: 70 additions & 0 deletions packages/server/examples/private-file/README.md
@@ -0,0 +1,70 @@
# Upload File Example

This example shows how to use files uploaded by the user (private files) in your workflow.

## Prerequisites

Please follow the setup instructions in the [examples README](../README.md).

You will also need:

- An OpenAI API key
- The `enableFileUpload` option in `uiConfig` set to `true`:

```typescript
new LlamaIndexServer({
// ... other options
uiConfig: { enableFileUpload: true },
}).start();
```

## How to get the uploaded files in your workflow

In LlamaIndexServer, uploaded files are included in the chat message annotations. You can extract them from the chat messages using the [extractFileAttachments](https://github.com/llamaindex/llamaindex/blob/main/packages/server/src/utils/chat_attachments.ts) function.

```typescript
import { extractFileAttachments } from "@llamaindex/server";

const attachments = extractFileAttachments(chatMessages);
```
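
Each attachment carries the stored file's metadata. As a minimal sketch (assuming the attachments follow the `ServerFile` shape used in this PR's `helpers.ts`: `id`, `size`, `type`, `url`, `path`), you can inspect what was uploaded like this:

```typescript
// Sketch: inspect the extracted attachments; the field names assume
// the ServerFile shape from this PR's helpers.ts.
for (const file of attachments) {
  console.log(`id=${file.id} type=${file.type} size=${file.size} path=${file.path}`);
}
```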

### AgentWorkflow

If you are using AgentWorkflow, you can provide the agent with file access by creating a tool that reads the file content.

```typescript
const readFileTool = tool(
({ filePath }) => {
return fsPromises.readFile(filePath, "utf8");
},
{
name: "read_file",
description: `Use this tool with the file path to read the file content. The available files are: [${files.map((file) => file.path).join(", ")}]`,
parameters: z.object({
filePath: z.string(),
}),
},
);
```

**Note:**

- You can put the attached files' information in either the tool description or the agent's system prompt.

- To avoid exposing internal file paths to the user, use the `getStoredFilePath` function to resolve the file path from the file id:

```typescript
import { getStoredFilePath } from "@llamaindex/server";

const filePath = getStoredFilePath({ id });
```

See [agent-workflow.ts](./agent-workflow.ts) for the full example.

### Custom Workflow

In a custom workflow, instead of defining a tool, you can use the helper functions (`extractFileAttachments` and `getStoredFilePath`) directly to work with file attachments.

See [custom-workflow.ts](./custom-workflow.ts) for the full example.

> To run the custom workflow example, update the `index.ts` file to use the `workflowFactory` from `custom-workflow.ts` instead of `agent-workflow.ts`, as shown below.
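
For reference, the swap mirrors the import lines already present in `index.ts`:

```typescript
// index.ts — pick the workflow factory the server should use
// import { workflowFactory } from "./agent-workflow";
import { workflowFactory } from "./custom-workflow";
```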
34 changes: 34 additions & 0 deletions packages/server/examples/private-file/agent-workflow.ts
@@ -0,0 +1,34 @@
import { extractFileAttachments, getStoredFilePath } from "@llamaindex/server";
import { agent } from "@llamaindex/workflow";
import { type Message } from "ai";
import { tool } from "llamaindex";
import { promises as fsPromises } from "node:fs";
import { z } from "zod";

export const workflowFactory = async (reqBody: { messages: Message[] }) => {
const { messages } = reqBody;
// Extract the files from the messages
const files = extractFileAttachments(messages);

// Define a tool to read the file content using the id
const readFileTool = tool(
({ id }) => {
const filePath = getStoredFilePath({ id });
return fsPromises.readFile(filePath, "utf8");
},
{
name: "read_file",
description: `Use this tool with the id of the file to read the file content.`,
parameters: z.object({
id: z.string(),
}),
},
);
return agent({
tools: [readFileTool],
systemPrompt: `
You are a helpful assistant that can help the user with their file.
Here are the available file ids: [${files.map((file) => file.id).join(", ")}]
`,
});
};
96 changes: 96 additions & 0 deletions packages/server/examples/private-file/custom-workflow.ts
@@ -0,0 +1,96 @@
import { extractFileAttachments } from "@llamaindex/server";
import { ChatMemoryBuffer, MessageContent, Settings } from "llamaindex";

import {
agentStreamEvent,
createStatefulMiddleware,
createWorkflow,
startAgentEvent,
workflowEvent,
} from "@llamaindex/workflow";
import { Message } from "ai";
import { promises as fsPromises } from "node:fs";

const fileHelperEvent = workflowEvent<{
userInput: MessageContent;
fileContent: string;
}>();

/**
 * This is a simple workflow to demonstrate how to use uploaded files in the workflow.
*/
export function workflowFactory(reqBody: { messages: Message[] }) {
const llm = Settings.llm;

// First, extract the uploaded file from the messages
const attachments = extractFileAttachments(reqBody.messages);

if (attachments.length !== 1) {
throw new Error("Please upload a file to start");
}

// Then, add the uploaded file info to the workflow state
const { withState, getContext } = createStatefulMiddleware(() => {
return {
memory: new ChatMemoryBuffer({ llm }),
uploadedFile: attachments[attachments.length - 1],
};
});
const workflow = withState(createWorkflow());

// Handle the start of the workflow: read the file content
workflow.handle([startAgentEvent], async ({ data }) => {
const { userInput } = data;
// Prepare chat history
const { state } = getContext();
if (!userInput) {
throw new Error("Missing user input to start the workflow");
}
state.memory.put({ role: "user", content: userInput });

// Read file content
const fileContent = await fsPromises.readFile(
state.uploadedFile.path,
"utf8",
);

return fileHelperEvent.with({
userInput,
fileContent,
});
});

// Use LLM to help the user with the file content
workflow.handle([fileHelperEvent], async ({ data }) => {
const { sendEvent } = getContext();

const prompt = `
You are a helpful assistant that can help the user with their file.

Here is the provided file content:
${data.fileContent}

Now, let's help the user with this request:
${data.userInput}
`;

const response = await llm.complete({
prompt,
stream: true,
});

// Stream the response
for await (const chunk of response) {
sendEvent(
agentStreamEvent.with({
delta: chunk.text,
response: chunk.text,
currentAgentName: "agent",
raw: chunk.raw,
}),
);
}
});

return workflow;
}
23 changes: 23 additions & 0 deletions packages/server/examples/private-file/index.ts
@@ -0,0 +1,23 @@
import { LlamaIndexServer } from "@llamaindex/server";
// import { workflowFactory } from "./agent-workflow";
// Uncomment the import above (and comment out the one below) to use the agent workflow instead
import { OpenAI, OpenAIEmbedding } from "@llamaindex/openai";
import { Settings } from "llamaindex";
import { workflowFactory } from "./custom-workflow";

Settings.llm = new OpenAI({
model: "gpt-4o-mini",
});

Settings.embedModel = new OpenAIEmbedding({
model: "text-embedding-3-small",
});

new LlamaIndexServer({
workflow: workflowFactory,
suggestNextQuestions: true,
uiConfig: {
enableFileUpload: true,
},
port: 3000,
}).start();
57 changes: 57 additions & 0 deletions packages/server/next/app/api/files/helpers.ts
@@ -0,0 +1,57 @@
import crypto from "node:crypto";
import fs from "node:fs";
import path from "node:path";

import { type ServerFile } from "@llamaindex/server";

export const UPLOADED_FOLDER = "output/uploaded";

export async function storeFile(
name: string,
fileBuffer: Buffer,
): Promise<ServerFile> {
// Split on the last dot so multi-dot names (e.g. "archive.tar.gz") keep their extension
const dotIndex = name.lastIndexOf(".");
const fileName = dotIndex === -1 ? name : name.slice(0, dotIndex);
const fileExt = dotIndex === -1 ? "" : name.slice(dotIndex + 1);
if (!fileName) {
throw new Error("File name is required");
}
if (!fileExt) {
throw new Error("File extension is required");
}

const id = crypto.randomUUID();
const fileId = `${sanitizeFileName(fileName)}_${id}.${fileExt}`;
const filepath = path.join(UPLOADED_FOLDER, fileId);
const fileUrl = await saveFile(filepath, fileBuffer);
return {
id: fileId,
size: fileBuffer.length,
type: fileExt,
url: fileUrl,
path: filepath,
};
}

// Save document to file server and return the file url
async function saveFile(filepath: string, content: string | Buffer) {
if (path.isAbsolute(filepath)) {
throw new Error("Absolute file paths are not allowed.");
}

const dirPath = path.dirname(filepath);
await fs.promises.mkdir(dirPath, { recursive: true });

if (typeof content === "string") {
await fs.promises.writeFile(filepath, content, "utf-8");
} else {
await fs.promises.writeFile(filepath, content);
}

const fileurl = `/api/files/${filepath}`;
return fileurl;
}

function sanitizeFileName(fileName: string) {
return fileName.replace(/[^a-zA-Z0-9_-]/g, "_");
}
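
As a rough illustration of what `storeFile` produces (assuming `buffer` is a `Buffer` with the file content; the UUID below is a placeholder, and the values follow the logic above):

```typescript
// Sketch: storing an uploaded PDF. The id embeds the sanitized name plus a UUID.
const file = await storeFile("my report.pdf", buffer);
// file.id   -> "my_report_<uuid>.pdf"
// file.path -> "output/uploaded/my_report_<uuid>.pdf"
// file.url  -> "/api/files/output/uploaded/my_report_<uuid>.pdf"
```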
49 changes: 49 additions & 0 deletions packages/server/next/app/api/files/route.ts
@@ -0,0 +1,49 @@
import { type FileAnnotation } from "@llamaindex/server";
import { NextRequest, NextResponse } from "next/server";
import { storeFile } from "./helpers";

export async function POST(request: NextRequest) {
try {
const {
name,
base64,
}: {
name: string;
base64: string;
} = await request.json();
if (!base64 || !name) {
return NextResponse.json(
{ error: "base64 and name is required in the request body" },
{ status: 400 },
);
}

const parts = base64.split(",");
if (parts.length !== 2) {
return NextResponse.json(
{ error: "Invalid base64 format" },
{ status: 400 },
);
}

const [header, content] = parts;
if (!header || !content) {
return NextResponse.json(
{ error: "Invalid base64 format" },
{ status: 400 },
);
}

const fileBuffer = Buffer.from(content, "base64");

const file = await storeFile(name, fileBuffer);

return NextResponse.json(file as FileAnnotation);
} catch (error) {
console.error("[Upload API]", error);
return NextResponse.json(
{ error: (error as Error).message },
{ status: 500 },
);
}
}
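
A minimal client-side sketch of calling this endpoint (assuming the server runs on `localhost:3000`; the `base64` field uses the `data:` URL format that the route splits on the comma):

```typescript
// Sketch: upload a small text file to the /api/files endpoint.
const content = Buffer.from("hello world").toString("base64");
const res = await fetch("http://localhost:3000/api/files", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    name: "hello.txt",
    base64: `data:text/plain;base64,${content}`,
  }),
});
const file = await res.json(); // ServerFile: { id, size, type, url, path }
```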
1 change: 1 addition & 0 deletions packages/server/src/index.ts
@@ -1,6 +1,7 @@
export * from "./server";
export * from "./types";
export * from "./utils/events";
export { getStoredFilePath } from "./utils/file";
export { generateEventComponent } from "./utils/gen-ui";
export * from "./utils/inline";
export * from "./utils/prompts";