This project demonstrates the creation of LLM-powered autonomous agents using LangChain, LangGraph, and various AI tools. It showcases how to build a sophisticated agent system capable of planning, reflection, prompt engineering, and dynamic workflow orchestration. The system integrates multiple components such as vector stores, retrievers, and decision-making logic to answer complex queries effectively.
- LangChain - Framework for building LLM applications
- LangGraph - Workflow orchestration for LLM agents
- Groq - LLM model integration (llama-3.1-8b-instant)
- Google Generative AI Embeddings - Embedding generation
- Chroma - Vector store for document retrieval
- LangChain Community Tools - Document loaders, retriever tools
- Python 3.10+ and related dependencies (see `requirements.txt`)
This diagram illustrates the decision-making and retrieval flow of the autonomous agent:
- Data Loading: Loads web documents from URLs using `WebBaseLoader`.
- Text Splitting: Splits documents into token chunks for embedding.
- Embedding and Vector Store: Uses Google Generative AI embeddings and stores chunks in Chroma vector store.
- Retriever Tool: Creates a retriever tool specialized for LangChain blog posts.
- LLM Decision Maker: Implements logic to decide when to use tools or answer directly.
- Document Grading: Grades retrieved documents for relevance to user queries.
- Generator and Rewriter Nodes: Generates answers or rewrites queries based on grading.
- Workflow Orchestration: Uses LangGraph to orchestrate nodes and edges for agent workflow.
- Example Invocation: Demonstrates querying the agent with complex questions.
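The control flow among these nodes can be sketched in plain Python (an illustrative stand-in, not the LangGraph API; every function body below is an invented placeholder for the corresponding LLM-backed node described above):

```python
# Plain-Python sketch of the agent loop: decide -> retrieve -> grade -> generate or rewrite.
# The real notebook implements each step as a LangGraph node; these are toy stand-ins.

def decide_to_retrieve(query):
    # Stand-in for the LLM decision maker: retrieve only for LangChain-related questions.
    return "langchain" in query.lower()

def retrieve(query):
    # Stand-in for the Chroma-backed retriever tool.
    return ["LangChain blog excerpt about agents"]

def grade(query, docs):
    # Stand-in for the document-grading node: are the retrieved docs relevant?
    return any("agent" in d.lower() for d in docs)

def rewrite(query):
    # Stand-in for the query-rewriter node.
    return query + " (rephrased)"

def generate(query, docs=None):
    # Stand-in for the answer-generator node.
    return f"Answer to: {query}"

def run_agent(query, max_rewrites=2):
    # Mirrors LangGraph's conditional edges: answer directly, or loop
    # retrieve -> grade -> rewrite until the docs are judged relevant.
    for _ in range(max_rewrites + 1):
        if not decide_to_retrieve(query):
            return generate(query)
        docs = retrieve(query)
        if grade(query, docs):
            return generate(query, docs)
        query = rewrite(query)
    return generate(query)

print(run_agent("What are LangChain agents?"))
```

In the actual workflow, the rewrite loop serves the same purpose as the conditional edge back to the retriever: a query that grades poorly is rephrased and retried rather than answered from irrelevant context.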
- Clone the repository and navigate to the `Agentic-2.0/langgraph` directory.
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set up environment variables for API keys:
  - `GROQ_API_KEY` for Groq LLM access
  - `GOOGLE_API_KEY` for Google Generative AI embeddings

  You can create a `.env` file in the root directory with:

  ```
  GROQ_API_KEY=your_groq_api_key_here
  GOOGLE_API_KEY=your_google_api_key_here
  ```

- Run the notebook `langgrapg_class_5.ipynb` to execute the agent workflow.
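If you prefer not to depend on a package such as python-dotenv, a minimal loader for a simple `KEY=value` `.env` file can be sketched as follows (a hypothetical helper, not part of the project; it ignores quoting and `export` prefixes):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: one KEY=value per line, '#' comments skipped.

    Existing environment variables are not overwritten.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# load_env()  # call once at startup, before creating the LLM or embeddings clients
```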
```python
from langgraph.langgrapg_class_5 import app

response = app.invoke({
    "messages": [
        "What is LLM Powered Autonomous Agents? Explain planning, reflection, and prompt engineering in terms of agents and LangChain."
    ]
})
print(response)
```

```
Agentic-2.0/
└── langgraph/
    ├── langgrapg_class_5.ipynb            # Main notebook demonstrating the agent system
    ├── langgrapg_class_4.ipynb            # Previous class notebook
    ├── langgraph_class_3.ipynb            # Earlier class notebook
    ├── langgraph_class_6.ipynb            # Next class notebook
    ├── langgraph_class_6_multiagent.ipynb # Multi-agent example
    ├── langgraph_intro.ipynb              # Introduction notebook
    └── tools.ipynb                        # Tools used in the project
```
This project is licensed under the MIT License. See the LICENSE file for details.
This README was generated based on the content of langgrapg_class_5.ipynb to provide a comprehensive overview suitable for GitHub.
