Agent RAG Tutorial with Ollama/ReActAgent failing in tool call because of "missing" Input value #1599

Open
nea opened this issue Jan 16, 2025 · 0 comments
Labels
bug Something isn't working

Comments


nea commented Jan 16, 2025

Describe the bug
When working through the https://ts.llamaindex.ai/docs/llamaindex/guide/agents/4_agentic_rag tutorial to build an agentic RAG setup, not with OpenAI but with a local Ollama model (command-r7b), the example does not work: the model's tool call for the QueryEngineTool lacks the Input: keyword that extractToolUse in react.ts#L99 looks for.

In my tests, I was able to reproduce model output such as:

error: Could not extract tool use from input text: "Thought: I need to use a tool to help me answer the question.
Action: san_francisco_budget_tool ({"query": "What is the budget of San Francisco in 2023-2024?"})"

This output can be captured by making "Input:" optional in the pattern, for example: /\s*Thought: (.*?)\nAction: ([a-zA-Z0-9_]+).*?\.*[Input:]*.*?(\{.*?\})/s (see the quick check below).
This worked fine for me, and I do not see how it would break the previous logic. If you agree, I could create a PR, or you could change it yourself.
Or am I missing something?
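
For illustration, here is a minimal standalone check of the proposed pattern against the model output shown above. This is only a sketch, not the actual extractToolUse implementation:

// Quick check of the proposed pattern (illustration only, not the real extractToolUse)
const proposedPattern =
  /\s*Thought: (.*?)\nAction: ([a-zA-Z0-9_]+).*?\.*[Input:]*.*?(\{.*?\})/s;

const modelOutput = `Thought: I need to use a tool to help me answer the question.
Action: san_francisco_budget_tool ({"query": "What is the budget of San Francisco in 2023-2024?"})`;

const match = modelOutput.match(proposedPattern);
if (match) {
  const [, thought, action, input] = match;
  // action === "san_francisco_budget_tool"
  // input  === '{"query": "What is the budget of San Francisco in 2023-2024?"}'
  console.log({ thought, action, input });
} else {
  console.log("Could not extract tool use");
}

With the current pattern in react.ts, this same text falls into the "Could not extract tool use" error shown above.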

Thanks a lot in advance for your help

To Reproduce
Code to reproduce the behavior:

import {
  FunctionTool,
  HuggingFaceEmbedding,
  MetadataMode,
  Ollama,
  QueryEngineTool,
  ReActAgent,
  Settings,
  SimpleDirectoryReader,
  VectorStoreIndex,
  type NodeWithScore
} from "llamaindex";

Settings.llm = new Ollama({
  model: "command-r7b",
});

Settings.embedModel = new HuggingFaceEmbedding({
  modelType: "BAAI/bge-small-en-v1.5",
});

async function main() {
  Settings.callbackManager.on("llm-tool-call", (event) => {
    console.log(event.detail);
  });
  Settings.callbackManager.on("llm-tool-result", (event) => {
    console.log(event.detail);
  });

  // load our data and create a query engine
  const reader = new SimpleDirectoryReader();
  const documents = await reader.loadData("./data");
  const index = await VectorStoreIndex.fromDocuments(documents);
  const retriever = await index.asRetriever({
    similarityTopK: 10,
  });
  const queryEngine = await index.asQueryEngine({
    retriever,
  });

  // define the query engine as a tool
  const tools = [
    new QueryEngineTool({
      queryEngine: queryEngine,
      metadata: {
        name: "san_francisco_budget_tool",
        description: `This tool can answer detailed questions about the individual components of the budget of San Francisco in 2023-2024.`,    
      },
    }),
  ];

  // create the agent
  // const agent = new OllamaAgent({ tools });
  const agent = new ReActAgent({ tools });

  let response = await agent.chat({
    message: "What's the budget of San Francisco in 2023-2024?",
  });

  console.log(response);
}

main().catch(console.error);

Expected behavior
Tool calls are parsed and executed correctly with the input the model provides.

Desktop (please complete the following information):

  • OS: macOS
  • JS Runtime / Framework / Bundler (select all applicable)
  • Node.js
  • Deno
  • Bun
  • Next.js
  • ESBuild
  • Rollup
  • Webpack
  • Turbopack
  • Vite
  • Waku
  • Edge Runtime
  • AWS Lambda
  • Cloudflare Worker
  • Others (please elaborate on this)
  • Version [e.g. 22]
nea added the bug label on Jan 16, 2025