sanhariharan/AutoLang
LLM Powered Autonomous Agents with LangChain and LangGraph

Project Description

This project demonstrates the creation of LLM-powered autonomous agents using LangChain, LangGraph, and various AI tools. It showcases how to build a sophisticated agent system capable of planning, reflection, prompt engineering, and dynamic workflow orchestration. The system integrates multiple components such as vector stores, retrievers, and decision-making logic to answer complex queries effectively.

Technologies and Libraries Used

  • LangChain and LangGraph for building and orchestrating the agent workflow
  • Groq LLM for decision-making, document grading, and answer generation
  • Google Generative AI embeddings for vectorizing document chunks
  • Chroma as the vector store
  • WebBaseLoader for loading web documents

Agent Workflow

This diagram illustrates the decision-making and retrieval flow of the autonomous agent:

(Agent Workflow Diagram)

Features and Components

  • Data Loading: Loads web documents from URLs using WebBaseLoader.
  • Text Splitting: Splits documents into token chunks for embedding.
  • Embedding and Vector Store: Uses Google Generative AI embeddings and stores chunks in Chroma vector store.
  • Retriever Tool: Creates a retriever tool specialized for LangChain blog posts.
  • LLM Decision Maker: Implements logic to decide when to use tools or answer directly.
  • Document Grading: Grades retrieved documents for relevance to user queries.
  • Generator and Rewriter Nodes: Generates answers or rewrites queries based on grading.
  • Workflow Orchestration: Uses LangGraph to orchestrate nodes and edges for agent workflow.
  • Example Invocation: Demonstrates querying the agent with complex questions.

Setup and Installation

  1. Clone the repository and navigate to the Agentic-2.0/langgraph directory.

  2. Install dependencies:

    pip install -r requirements.txt
  3. Set up environment variables for API keys:

    • GROQ_API_KEY for Groq LLM access
    • GOOGLE_API_KEY for Google Generative AI embeddings

    You can create a .env file in the root directory with:

    GROQ_API_KEY=your_groq_api_key_here
    GOOGLE_API_KEY=your_google_api_key_here
    
  4. Run the notebook langgrapg_class_5.ipynb to execute the agent workflow.
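Step 3 can be verified with a small script before running the notebook. The loader below is a minimal stdlib stand-in for `python-dotenv`'s `load_dotenv()` (which is likely what the notebook itself uses):

```python
# Minimal stand-in for python-dotenv's load_dotenv(): reads KEY=VALUE
# pairs from a .env file into os.environ without overwriting variables
# that are already set.
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    p = Path(path)
    if not p.exists():
        return
    for line in p.read_text().splitlines():
        line = line.strip()
        # Skip blank lines, comments, and malformed entries.
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

load_env()
for key in ("GROQ_API_KEY", "GOOGLE_API_KEY"):
    if not os.environ.get(key):
        print(f"Warning: {key} is not set")
```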

Usage Example

# Assumes the notebook's code has been exported to an importable module
# exposing the compiled graph as `app` (a .ipynb file cannot be imported
# directly, and a package named `langgraph` would shadow the installed
# LangGraph library).
from langgrapg_class_5 import app

response = app.invoke({
    "messages": [
        "What is LLM Powered Autonomous Agents? Explain planning, reflection, and prompt engineering in terms of agents and LangChain."
    ]
})

print(response)

Project Structure Overview

Agentic-2.0/
└── langgraph/
    ├── langgrapg_class_5.ipynb    # Main notebook demonstrating the agent system
    ├── langgrapg_class_4.ipynb    # Previous class notebook
    ├── langgraph_class_3.ipynb    # Earlier class notebook
    ├── langgraph_class_6.ipynb    # Next class notebook
    ├── langgraph_class_6_multiagent.ipynb # Multi-agent example
    ├── langgraph_intro.ipynb      # Introduction notebook
    └── tools.ipynb                # Tools used in the project

License

This project is licensed under the MIT License. See the LICENSE file for details.


This README was generated based on the content of langgrapg_class_5.ipynb to provide a comprehensive overview suitable for GitHub.

About

Autonomous validating RAG agent built with LangGraph.
