examples/langchain/rag.js (+5 −6)
@@ -1,13 +1,13 @@
 /**
  * @file examples/langchain/rag.js
- * @description This example demonstrates Retrieval-Augmented Generation (RAG) with custom models built using LLMInterface, which are compatible with LangChain.
+ * @description This example demonstrates Retrieval-Augmented Generation (RAG) with custom models built using LLMInterface, which are compatible with LangChain.js.
  *
  * To run this example, you need to install the required modules by executing:
  * "npm install langchain dotenv".
  *
  * This example showcases how to retrieve relevant documents from a local directory, generate embeddings using a custom model built with LLMInterface, identify the most relevant context for answering a question, and construct a prompt for a language model to generate a response.
  *
- * The workflow employs cosine similarity to determine document relevance and utilizes LangChain to format and process the final prompt. After completing the RAG process, a final direct query is sent to the provider, and the control answer is displayed for comparison.
+ * The workflow employs cosine similarity to determine document relevance and utilizes LangChain.js to format and process the final prompt. After completing the RAG process, a final direct query is sent to the provider, and the control answer is displayed for comparison.
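The cosine-similarity ranking described in the file header can be sketched in plain JavaScript. The `cosineSimilarity` helper below is illustrative only, not the example file's actual implementation:

```javascript
// Cosine similarity between two embedding vectors, as used to rank
// document relevance in the RAG workflow described above.
// Illustrative sketch; the example file's own helper may differ.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical direction scores 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

The document whose embedding scores highest against the question's embedding is selected as the most relevant context.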
-const description = `This example demonstrates the use of Retrieval-Augmented Generation (RAG) with custom models built using LLMInterface, which are compatible with LangChain. The process involves retrieving relevant documents from a local directory, generating embeddings, identifying the most pertinent context for answering a question, and constructing a prompt for a language model to generate a response.
+const description = `This example demonstrates the use of Retrieval-Augmented Generation (RAG) with custom models built using LLMInterface, which are compatible with LangChain.js. The process involves retrieving relevant documents from a local directory, generating embeddings, identifying the most pertinent context for answering a question, and constructing a prompt for a language model to generate a response.
 
-The workflow employs cosine similarity to determine the relevance of documents and utilizes LangChain to format and process the final prompt. After completing the RAG process, a final direct query is sent to the provider, and the control answer is displayed for comparison.`;
+The workflow employs cosine similarity to determine the relevance of documents and utilizes LangChain.js to format and process the final prompt. After completing the RAG process, a final direct query is sent to the provider, and the control answer is displayed for comparison.`;
 
 require('dotenv').config({path: '../../.env'});
@@ -112,7 +111,7 @@ async function exampleUsage(provider) {
 
   console.time('Timer');
   prettyText(
-    `\n${YELLOW}Use Langchain to create the PromptTemplate and invoke LLMChain${RESET}\n`,
+    `\n${YELLOW}Use Langchain.js to create the PromptTemplate and invoke LLMChain${RESET}\n`,
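The core behavior of a prompt template is substituting named `{variable}` placeholders into a template string. The `formatPrompt` helper below is a dependency-free, hypothetical stand-in for that idea, not LangChain.js's actual `PromptTemplate` API:

```javascript
// Illustrative stand-in for prompt-template formatting: replaces each
// {name} placeholder with the matching value from the values object.
// Hypothetical helper; the example itself uses LangChain.js's PromptTemplate.
function formatPrompt(template, values) {
  return template.replace(/\{(\w+)\}/g, (_, key) => values[key]);
}

const template =
  'Answer the question using only the context below.\n' +
  'Context: {context}\n' +
  'Question: {question}';

const prompt = formatPrompt(template, {
  context: 'Paris is the capital of France.',
  question: 'What is the capital of France?',
});
console.log(prompt);
```

In the example file, the formatted prompt is passed to the chain, which sends it to the configured provider and returns the generated answer.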