Co-creation-projects/LYGreen-agent-learning-helper/README.md
@@ -0,0 +1,110 @@
# AgentLearningHelper - Intelligent Learning Assistant

> This is a demo project.
> Project links:
> [Frontend](https://github.com/LYGreen/agent-learning-helper-frontend)
> [Backend](https://github.com/LYGreen/agent-learning-helper-backend)

## 📝 Project Overview

AgentLearningHelper is an intelligent learning assistant that can generate study courses, create practice questions, and answer users' questions.

### Core Features

- **Subject-driven course generation**: Uses an LLM to dynamically build a structured course outline from the subject or topic the user enters.

- **Adaptive question bank**: Generates matching practice questions for each chapter in real time, keeping difficulty aligned with the teaching content.

- **AI grading and feedback**: Uses AI for semantic analysis to point out knowledge gaps in the user's answers and suggest improvements.

## 🛠️ Tech Stack

- FastAPI (server)
- OpenAI API (LLM access)
- Vue (client)

## 🚀 Quick Start

### Install Dependencies

```bash
cd backend
pip install -r requirements.txt
cd ../frontend
npm install
```

### Configuration
- Backend:
```
MODEL=
BASE_URL=
OPENAI_API_KEY=
```

- Frontend:
```
VITE_BACKEND_URL=
```
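
For illustration only, a filled-in configuration might look like this (every value below is a placeholder; substitute your own model name, endpoint, and key):

```
# backend/.env  (placeholder values)
MODEL=gpt-4o-mini
BASE_URL=https://api.openai.com/v1
OPENAI_API_KEY=sk-...

# frontend/.env  (placeholder value)
VITE_BACKEND_URL=http://localhost:8000
```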

### Run the Project

- Run the frontend:
```bash
cd frontend
npm run dev
```

- Run the backend:
```bash
cd backend
uvicorn main:app --reload
```

## 📖 Usage Example

Open `http://localhost:5173` in your browser.

## 📂 Project Structure

```
agent-learning-helper/
├── backend/
│   ├── ...
│   ├── main.py              # main program
│   ├── requirements.txt     # dependency list
│   └── .env.example         # environment variable template
├── frontend/
│   ├── ...
│   └── .env.example         # environment variable template
├── img/
│   └── ...                  # images used in README.md
├── .gitignore               # Git ignore rules
└── README.md                # project documentation
```

## 🔧 Technical Implementation

- **Course generation agent**: Generates a course from the subject the user enters
- **Question generation agent**: Generates practice questions based on the user's current progress
- **Answer checking agent**: Judges whether the user's answer is correct
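
On the backend, each of these is a plain Python class built on a shared `Agent` base. A minimal sketch of how they chain together (the driver below is illustrative; the actual wiring lives in the backend's command router):

```python
import os
from alh.llm.openai_model import OpenAIModel
from alh.agents.schedule_generate_agent import ScheduleGenerateAgent
from alh.agents.question_generate_agent import QuestionGenerateAgent
from alh.agents.check_answer_agent import CheckAnswerAgent

# Credentials come from the backend .env described above.
model = OpenAIModel(os.getenv("BASE_URL"), os.getenv("OPENAI_API_KEY"), os.getenv("MODEL"))

plan = ScheduleGenerateAgent(model).run({"subject": "Calculus"})  # course outline
first = plan["steps"][0]                                          # assumes generation succeeded
question = QuestionGenerateAgent(model).run({"subject": "Calculus", **first})
feedback = CheckAnswerAgent(model).run({
    "subject": "Calculus",
    "question": question["question"],
    "user_answer": "...",
})
```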

## 📊 Sample Output

![](img/image0.png)

## 🚧 Future Improvements

- [ ] Add a knowledge base so generated courses are not misleading and answers become more accurate
- [ ] RAG retrieval over the knowledge base
- [ ] LaTeX rendering
- [ ] ...

## 🙏 Acknowledgements

Thanks to the [Datawhale](https://github.com/datawhalechina) community and the [Hello-Agents](https://github.com/datawhalechina/hello-agents) project!

## 📄 License

This project is licensed under the MIT License.

@@ -0,0 +1,3 @@
MODEL=
BASE_URL=
OPENAI_API_KEY=
@@ -0,0 +1,17 @@
from abc import ABC, abstractmethod
from alh.llm.llm_model import LLMModel

class Agent(ABC):
    """Base class for all agents: holds the LLM handle and defines the run contract."""

    def __init__(self, llm_model: LLMModel):
        self.llm_model = llm_model

    def run(self, data: dict = None):
        # Delegate to the subclass implementation.
        return self._execute(data)

    @abstractmethod
    def _execute(self, data: dict = None):
        pass
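
# Usage sketch (illustrative, not part of this diff): a concrete agent only
# needs to implement _execute; run() is inherited from Agent. EchoAgent is
# a hypothetical subclass.

class EchoAgent(Agent):
    def _execute(self, data: dict = None):
        # A real agent would call self.llm_model here.
        return {"echo": (data or {}).get("text", "")}

# agent = EchoAgent(llm_model)          # llm_model: any LLMModel instance
# print(agent.run({"text": "hello"}))   # -> {'echo': 'hello'}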
@@ -0,0 +1,69 @@
import json
import re

from alh.agents.agent import Agent
from alh.llm.llm_model import LLMModel
from pydantic import BaseModel

CHECK_ANSWER_AGENT_PROMPT = """
You are a learning assistant. Your task is to tutor the user: given a question and the user's answer, give feedback.
If the answer is essentially correct, set can_pass to True, otherwise to False; put the explanation in content.
If the answer is incorrect, set can_pass to False; content may give a hint.
Generate according to the JSON Schema below; you may insert \\n for line breaks:
{json_schema}
You must output JSON only, wrapped in ```json ```, and nothing else.

Now, please begin.

Question:
{question}

The user's answer:
{user_input}

"""

class FeedbackModel(BaseModel):
    content: str
    can_pass: bool

class CheckAnswerAgent(Agent):
    def __init__(self, llm_model: LLMModel):
        super().__init__(llm_model)
        self.prompt_template = CHECK_ANSWER_AGENT_PROMPT

    def _execute(self, data: dict = None):
        subject = data["subject"]
        question = data["question"]
        user_answer = data["user_answer"]

        max_steps = 10

        # Retry up to max_steps times in case the model emits malformed JSON.
        for step in range(max_steps):
            formatted_prompt = self.prompt_template.format(
                json_schema=FeedbackModel.model_json_schema(),
                question=f"Subject: {subject}\nQuestion: {question}",
                user_input=user_answer
            )

            streamer = self.llm_model.stream_talk(formatted_prompt)

            # Accumulate the streamed chunks into one response string.
            response = ""
            for chunk in streamer:
                response += chunk

            try:
                json_str = self._extract_json_str(response)
                return self._parse_json(json_str)
            except Exception as e:
                print("Error: " + str(e))
                continue

        return None

    def _extract_json_str(self, response: str):
        # Raises AttributeError when no ```json ``` block is found,
        # which makes _execute retry.
        return re.search(r"```json(.*?)```", response, re.DOTALL).group(1)

    def _parse_json(self, json_str: str):
        return json.loads(json_str)
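
# Hardening sketch (illustrative, not part of this diff): validate the parsed
# dict against FeedbackModel so wrong keys or types also count as a failure.
# parse_feedback is a hypothetical helper.

from pydantic import ValidationError

def parse_feedback(raw: dict):
    """Return the schema-checked feedback dict, or None if it does not match."""
    try:
        return FeedbackModel.model_validate(raw).model_dump()
    except ValidationError as e:
        print("Schema error: " + str(e))
        return None

# parse_feedback({"content": "Correct.", "can_pass": True})  # -> dict
# parse_feedback({"content": "missing flag"})                # -> None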
@@ -0,0 +1,63 @@
import json
import re

from alh.agents.agent import Agent
from alh.llm.llm_model import LLMModel
from pydantic import BaseModel

QUESTION_GENERATE_AGENT_PROMPT = """
You are a question generator. Your task is to generate one question from the user's study material.
Generate according to the JSON Schema below (you may use \\n for line breaks):
{json_schema}
You must output JSON only, wrapped in ```json ```, and nothing else.

Now, please begin.

User:
{user_input}

"""

class QuestionWritingModel(BaseModel):
    question: str

class QuestionGenerateAgent(Agent):
    def __init__(self, llm_model: LLMModel):
        super().__init__(llm_model)
        self.prompt_template = QUESTION_GENERATE_AGENT_PROMPT

    def _execute(self, data: dict = None):
        subject = data["subject"]
        title = data["title"]
        description = data["description"]
        content = data["content"]

        max_steps = 10

        # Retry up to max_steps times in case the model emits malformed JSON.
        for step in range(max_steps):
            formatted_prompt = self.prompt_template.format(
                json_schema=QuestionWritingModel.model_json_schema(),
                user_input=f"Subject: {subject}\nTitle: {title}\nDescription: {description}\nContent: {content}\n"
            )

            streamer = self.llm_model.stream_talk(formatted_prompt)

            # Accumulate the streamed chunks into one response string.
            response = ""
            for chunk in streamer:
                response += chunk

            try:
                json_str = self._extract_json_str(response)
                return self._parse_json(json_str)
            except Exception as e:
                print("Error: " + str(e))
                continue

        return None

    def _extract_json_str(self, response: str):
        # Raises AttributeError when no ```json ``` block is found,
        # which makes _execute retry.
        return re.search(r"```json(.*?)```", response, re.DOTALL).group(1)

    def _parse_json(self, json_str: str):
        return json.loads(json_str)
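
# Robustness sketch (illustrative, not part of this diff): the regex in
# _extract_json_str fails whenever the model omits the ```json fence. A more
# tolerant extractor could fall back to the outermost {...} span.

def extract_json_str_tolerant(response: str) -> str:
    match = re.search(r"```json(.*?)```", response, re.DOTALL)
    if match:
        return match.group(1)
    # Fallback: take the outermost brace-delimited span, if any.
    start, end = response.find("{"), response.rfind("}")
    if start != -1 and end > start:
        return response[start:end + 1]
    raise ValueError("no JSON found in response")

# json.loads(extract_json_str_tolerant('{"question": "What is 2 + 2?"}'))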
@@ -0,0 +1,69 @@
import json
import re

from alh.agents.agent import Agent
from alh.llm.llm_model import LLMModel
from typing import List
from pydantic import BaseModel

SCHEDULE_GENERATE_AGENT_PROMPT = """
You are a course generator. Your task is to design a course from the user's input; make it as extensive and specific as possible.
Generate a course according to the JSON Schema below (you may use \\n for line breaks):
{json_schema}
You must output JSON only, wrapped in ```json ```, and nothing else.

Now, please begin.

User input:
{user_input}

"""

class Step(BaseModel):
    title: str
    description: str
    content: str

class ScheduleGenerateModel(BaseModel):
    steps: List[Step]

class ScheduleGenerateAgent(Agent):
    def __init__(self, llm_model: LLMModel):
        super().__init__(llm_model)
        self.prompt_template = SCHEDULE_GENERATE_AGENT_PROMPT

    def _execute(self, data: dict = None):
        subject = data["subject"]

        max_steps = 10

        # Retry up to max_steps times to reduce parsing failures.
        for step in range(max_steps):
            formatted_prompt = self.prompt_template.format(
                json_schema=ScheduleGenerateModel.model_json_schema(),
                user_input=f"Generate a study plan for {subject}"
            )

            streamer = self.llm_model.stream_talk(formatted_prompt)

            # Accumulate the streamed chunks into one response string.
            response = ""
            for chunk in streamer:
                response += chunk

            try:
                json_str = self._extract_json_str(response)
                return self._parse_json(json_str)
            except Exception as e:
                print("Error: " + str(e))
                continue

        return None

    def _extract_json_str(self, response: str):
        # Raises AttributeError when no ```json ``` block is found,
        # which makes _execute retry.
        return re.search(r"```json(.*?)```", response, re.DOTALL).group(1)

    def _parse_json(self, json_str: str):
        return json.loads(json_str)
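
# Schema sketch (illustrative, not part of this diff): a hypothetical payload
# matching the schema the prompt embeds, validated with pydantic.

sample = {
    "steps": [
        {
            "title": "Limits",
            "description": "Intuition and formal definition of limits",
            "content": "Epsilon-delta definition, one-sided limits, ...",
        }
    ]
}

plan = ScheduleGenerateModel.model_validate(sample)
print(plan.steps[0].title)  # -> Limits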
@@ -0,0 +1,47 @@
import os
from dotenv import load_dotenv
from alh.llm.openai_model import OpenAIModel
from alh.agents.schedule_generate_agent import ScheduleGenerateAgent
from alh.agents.question_generate_agent import QuestionGenerateAgent
from alh.agents.check_answer_agent import CheckAnswerAgent

load_dotenv()

MODEL = os.getenv("MODEL")
BASE_URL = os.getenv("BASE_URL")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

class Command:
    """Singleton that routes slash commands to the matching agent."""

    _instance = None

    def __init__(self):
        raise RuntimeError("Command is a singleton; use Command.get_instance()")

    def _init_command(self):
        self.model = OpenAIModel(BASE_URL, OPENAI_API_KEY, MODEL)
        self.schedule_generate_agent = ScheduleGenerateAgent(self.model)
        self.question_generate_agent = QuestionGenerateAgent(self.model)
        self.check_answer_agent = CheckAnswerAgent(self.model)

    @classmethod
    def get_instance(cls):
        # __new__ bypasses the guard in __init__; initialize the shared
        # instance exactly once.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._init_command()
        return cls._instance

    def run(self, command: str, data: dict):
        if command == "/generate":
            return self.schedule_generate_agent.run({"subject": data["subject"]})
        elif command == "/generate_question":
            return self.question_generate_agent.run({
                "subject": data["subject"],
                "title": data["title"],
                "description": data["description"],
                "content": data["content"],
            })
        elif command == "/check_answer":
            return self.check_answer_agent.run({
                "subject": data["subject"],
                "question": data["question"],
                "user_answer": data["user_answer"],
            })
        else:
            # Unknown commands are a caller error; fail loudly instead of
            # silently returning None.
            raise ValueError(f"unknown command: {command}")
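
# Usage sketch (illustrative, not part of this diff): Command is a singleton,
# so every caller shares one set of agents. Subject and answer below are
# hypothetical.

cmd = Command.get_instance()

plan = cmd.run("/generate", {"subject": "Linear Algebra"})
if plan is not None:
    first = plan["steps"][0]
    question = cmd.run("/generate_question", {"subject": "Linear Algebra", **first})
    feedback = cmd.run("/check_answer", {
        "subject": "Linear Algebra",
        "question": question["question"],
        "user_answer": "A matrix is invertible iff its determinant is nonzero.",
    })
    print(feedback)  # e.g. {"content": "...", "can_pass": True}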