
Feature/personalized learning path#76

Open
sneha12jyoti wants to merge 2 commits into CoderUzumaki:main from sneha12jyoti:feature/personalized-learning-path

Conversation


sneha12jyoti commented Aug 9, 2025

Added event schema and design doc for AI-driven learning path.

Summary by CodeRabbit

  • New Features

    • Introduced a backend service to handle user learning events and provide personalized learning roadmaps.
    • Added endpoints for submitting user activity and retrieving individualized learning paths.
  • Documentation

    • Added a design document outlining the AI-driven personalized learning path feature, including goals, MVP scope, and data schema for user activity tracking.

@vercel

vercel bot commented Aug 9, 2025

@servis is attempting to deploy a commit to the coderuzumaki's projects Team on Vercel.

A member of the Team first needs to authorize it.

@coderabbitai

coderabbitai bot commented Aug 9, 2025

Walkthrough

A new design document for an AI-driven personalized learning path feature has been added. Additionally, a FastAPI backend service for personalization is introduced, including data models and endpoints to handle user learning events and retrieve user-specific roadmaps. The service uses in-memory data structures for event storage and roadmap retrieval.

Changes

Cohort / File(s) Change Summary
Design Documentation
docs/personalization_design.md
Added a design document outlining goals, MVP scope, and data schema for an AI-driven personalized learning path feature, including weakness detection and weekly personalized recommendations.
Personalization Service Implementation
services/personalization/app.py
Introduced a FastAPI service with Pydantic models (Event, Task, WeekPlan, Roadmap), in-memory event and roadmap storage, and endpoints for adding events and retrieving user roadmaps. Includes basic error handling and validation.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant API as Personalization Service (FastAPI)
    participant DB as In-Memory Store

    User->>API: POST /events (Event data)
    API->>DB: Append event to events list
    API-->>User: Confirmation with total events

    User->>API: GET /user/{user_id}/path
    API->>DB: Retrieve roadmap for user_id
    alt Roadmap exists
        API-->>User: Return Roadmap
    else No roadmap
        API-->>User: 404 error
    end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

A bunny hops with docs in paw,
New learning paths—oh, what it saw!
FastAPI burrows, events in tow,
Roadmaps ready, onward we go!
With every hop, the future’s bright,
Personalized journeys, just right.
🐇✨





@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (5)
docs/personalization_design.md (1)

23-35: MVP scope: add success metrics, non-goals, and privacy considerations

Adding these improves alignment and reduces churn later.

  • Success metrics: e.g., weekly active days ↑, completion rate of recommended tasks ↑, accuracy uplift on weak topics.
  • Non-goals: clarify what’s not covered in MVP (e.g., LLM-generated explanations, multi-device sync).
  • Privacy/PII: event retention policy, anonymization strategy, and user consent surface.
services/personalization/app.py (4)

39-41: Type the in-memory store for events; store models (or use model_dump for dicts)

Typed stores reduce mistakes. If you prefer dicts, use model_dump() for Pydantic v2.

Option A (store models):

-mock_events = []
+mock_events: list[Event] = []

And update the append call (see Lines 60-63).
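A minimal sketch of the two options (assuming Pydantic v2, where `model_dump()` supersedes the deprecated `dict()`; the `Event` model here is a trimmed stand-in, not the PR's full model):

```python
from datetime import datetime
from pydantic import BaseModel

class Event(BaseModel):
    user_id: str
    topic: str
    timestamp: datetime

# Option A — typed store of models: appending anything other than an
# Event is a mistake that mypy/pyright will flag.
events: list[Event] = []
e = Event(user_id="u1", topic="arrays", timestamp=datetime(2025, 8, 9))
events.append(e)

# Option B — store plain dicts: prefer model_dump() (Pydantic v2)
# over the deprecated dict().
as_dict = e.model_dump()
print(as_dict["topic"])  # arrays
```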


45-54: Nit: “topics” mixing topic and subtopic

The sample uses ["arrays", "two-pointer"]. Consider either:

  • a separate subtopics list, or
  • use a structured object: {topic, subtopics: []}

Example:

  • topics: ["arrays"]
  • subtopics: ["two-pointer"]
    Or:
  • topics: [{ "topic": "arrays", "subtopics": ["two-pointer"] }]
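The structured option can be sketched with a small dataclass (`TopicRef` is a hypothetical name, not something in the PR):

```python
from dataclasses import dataclass, field

@dataclass
class TopicRef:
    topic: str
    subtopics: list[str] = field(default_factory=list)

# Instead of mixing levels in one list (["arrays", "two-pointer"]),
# keep the hierarchy explicit:
week_topics = [TopicRef(topic="arrays", subtopics=["two-pointer"])]

# Flattening back to top-level topics stays trivial:
flat = [t.topic for t in week_topics]
print(flat)  # ['arrays']
```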

65-69: Endpoint naming and error messages

Minor API ergonomics.

  • Consider plural resource naming: GET /users/{user_id}/roadmap
  • Keep error details consistent with API style guide (e.g., use a code field: { "code": "ROADMAP_NOT_FOUND", "message": "..." })
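A sketch of the structured error detail (the `code` value is illustrative); FastAPI serializes whatever is passed as `detail`, so a small helper keeps the shape consistent across endpoints:

```python
def error_detail(code: str, message: str) -> dict:
    """Build a structured error payload; FastAPI returns it under 'detail'."""
    return {"code": code, "message": message}

# Usage: raise HTTPException(status_code=404,
#                            detail=error_detail("ROADMAP_NOT_FOUND", "..."))
payload = error_detail("ROADMAP_NOT_FOUND", "No roadmap found for this user")
print(payload["code"])  # ROADMAP_NOT_FOUND
```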

1-69: Auth, CORS, and persistence (future)

For MVP this is okay, but calling out next steps to avoid surprises.

  • Add CORS if this will be called by a web frontend.
  • Add basic auth/token checks before accepting events.
  • Persist events/roadmaps to a store (SQLite/Postgres) to avoid memory loss on restart or multi-worker divergences.
  • Add simple unit tests: POST /events happy path and validation failures; GET roadmap found/missing.

Happy to scaffold these if you want me to open follow-up issues.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f312c2c and e266beb.

📒 Files selected for processing (2)
  • docs/personalization_design.md (1 hunks)
  • services/personalization/app.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.12.2)
services/personalization/app.py

1-2: SyntaxError: Expected an expression


4-4: SyntaxError: Got unexpected token `


4-4: SyntaxError: Got unexpected token `


4-4: SyntaxError: Got unexpected token `

Comment on lines +37 to +65
\## Data Schema

\*\*Events\*\*

```json

{

&nbsp; "user\_id": "string",

&nbsp; "activity\_type": "quiz|coding|mock",

&nbsp; "topic": "arrays|dp|graphs|system-design",

&nbsp; "subtopic": "two-pointer",

&nbsp; "timestamp": "ISO8601",

&nbsp; "accuracy": 0.85,

&nbsp; "time\_taken\_seconds": 120,

&nbsp; "attempts": 1,

&nbsp; "difficulty": "easy|medium|hard",

&nbsp; "exercise\_id": "string"

}


🛠️ Refactor suggestion

Event schema: add constraints and a few operational fields

To make downstream validation and analytics simpler, specify constraints and add a few commonly used fields.

Recommended refinements:

  • Constraints:
    • accuracy: float in [0.0, 1.0]
    • time_taken_seconds: non-negative int
    • attempts: int ≥ 1
    • difficulty: enum {easy, medium, hard}
    • activity_type: enum {quiz, coding, mock}
  • Additional fields:
    • event_id (string UUID)
    • session_id (string)
    • source (e.g., web/app)
    • platform (e.g., ios/android/web)
    • language (e.g., python/java)
    • version (client build/app version)

If you want, I can update the sample JSON here and mirror the same constraints in the FastAPI/Pydantic models.
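One way to pin these down is a JSON Schema fragment (a sketch only — the operational fields such as `event_id` are the suggestions above, not part of the current doc):

```json
{
  "type": "object",
  "properties": {
    "user_id": {"type": "string"},
    "activity_type": {"enum": ["quiz", "coding", "mock"]},
    "accuracy": {"type": "number", "minimum": 0.0, "maximum": 1.0},
    "time_taken_seconds": {"type": "integer", "minimum": 0},
    "attempts": {"type": "integer", "minimum": 1},
    "difficulty": {"enum": ["easy", "medium", "hard"]},
    "event_id": {"type": "string", "format": "uuid"}
  },
  "required": ["user_id", "activity_type", "accuracy", "attempts"]
}
```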

🤖 Prompt for AI Agents
In docs/personalization_design.md around lines 37 to 65, the event schema lacks
explicit constraints and some useful operational fields. Update the JSON schema
to specify constraints such as accuracy being a float between 0.0 and 1.0,
time_taken_seconds as a non-negative integer, attempts as an integer greater or
equal to 1, and enums for difficulty and activity_type. Also, add additional
fields like event_id (UUID string), session_id, source, platform, language, and
version to improve validation and analytics clarity.

Comment on lines +41 to +65
```json

{

&nbsp; "user\_id": "string",

&nbsp; "activity\_type": "quiz|coding|mock",

&nbsp; "topic": "arrays|dp|graphs|system-design",

&nbsp; "subtopic": "two-pointer",

&nbsp; "timestamp": "ISO8601",

&nbsp; "accuracy": 0.85,

&nbsp; "time\_taken\_seconds": 120,

&nbsp; "attempts": 1,

&nbsp; "difficulty": "easy|medium|hard",

&nbsp; "exercise\_id": "string"

}


⚠️ Potential issue

Fix: Close the JSON code fence to avoid broken Markdown rendering

The JSON example block is opened (Line 41) but not closed, which will break rendering for everything that follows.

Apply this diff to close the code fence:

   "exercise_id": "string"
 }
 
-
+```
🤖 Prompt for AI Agents
In docs/personalization_design.md between lines 41 and 65, the JSON code block
is opened but not closed, causing broken Markdown rendering. Fix this by adding
a closing triple backtick (```) after the JSON example to properly close the
code fence.

Comment on lines +1 to +4
---

## **2. Backend Starter (FastAPI)** — `services/personalization/app.py`
```python


⚠️ Potential issue

Critical: File contains Markdown wrappers (front-matter and code fences) and won’t import

There’s ---, a Markdown header, and a ```python fence inside a .py file, which causes SyntaxError (matches the static analysis hints).

Apply this diff to strip Markdown from the Python file:

---- 
-
-## **2. Backend Starter (FastAPI)** — `services/personalization/app.py`
-```python
+"""
+Personalization Service (FastAPI)
+"""
@@
-    return mock_roadmaps[user_id]
-```
+    return mock_roadmaps[user_id]

Also applies to: 69-69

🧰 Tools
🪛 Ruff (0.12.2)

1-2: SyntaxError: Expected an expression


4-4: SyntaxError: Got unexpected token `


4-4: SyntaxError: Got unexpected token `


4-4: SyntaxError: Got unexpected token `

🤖 Prompt for AI Agents
In services/personalization/app.py at the very beginning (lines 1 to 4) and at
line 69, remove all Markdown-specific syntax including the front-matter '---'
and the triple backticks ```python and ``` that are wrapping the Python code.
These Markdown wrappers cause syntax errors in the Python file. Ensure the file
contains only valid Python code without any Markdown formatting.

Comment on lines +1 to +69
---

## **2. Backend Starter (FastAPI)** — `services/personalization/app.py`
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List
from datetime import datetime

app = FastAPI(title="Personalization Service")

# ---- Schemas ----
class Event(BaseModel):
    user_id: str
    activity_type: str
    topic: str
    subtopic: str
    timestamp: datetime
    accuracy: float
    time_taken_seconds: int
    attempts: int
    difficulty: str
    exercise_id: str

class Task(BaseModel):
    description: str
    completed: bool = False

class WeekPlan(BaseModel):
    week_no: int
    topics: List[str]
    tasks: List[Task]
    goal: str

class Roadmap(BaseModel):
    user_id: str
    weeks: List[WeekPlan]

# ---- Mock Database ----
mock_events = []
mock_roadmaps = {
    "u1": Roadmap(
        user_id="u1",
        weeks=[
            WeekPlan(
                week_no=1,
                topics=["arrays", "two-pointer"],
                tasks=[
                    Task(description="Solve 3 practice problems"),
                    Task(description="Watch 2 concept videos"),
                    Task(description="Take revision quiz")
                ],
                goal="Reach avg_accuracy ≥ 0.75"
            )
        ]
    )
}

# ---- Endpoints ----
@app.post("/events")
def add_event(event: Event):
    mock_events.append(event.dict())
    return {"message": "Event recorded", "total_events": len(mock_events)}

@app.get("/user/{user_id}/path", response_model=Roadmap)
def get_roadmap(user_id: str):
    if user_id not in mock_roadmaps:
        raise HTTPException(status_code=404, detail="No roadmap found for this user")
    return mock_roadmaps[user_id]


💡 Verification agent

🧩 Analysis chain

Verify and fix syntax errors flagged by static analysis

Confirm that the file no longer contains markdown fences/backticks and parses as Python.

Run:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Detecting markdown fences in services/personalization/app.py"
rg -n '^(---|```)' services/personalization/app.py || echo "No markdown fences detected."

echo "AST-parse the file to catch syntax errors"
python - << 'PY'
import ast, sys, pathlib
p = pathlib.Path("services/personalization/app.py")
try:
    ast.parse(p.read_text(encoding="utf-8"))
    print("AST parse OK: services/personalization/app.py")
except SyntaxError as e:
    print("SyntaxError:", e)
    sys.exit(1)
PY

Length of output: 373


Remove Markdown Fences Causing Syntax Errors

The file services/personalization/app.py still contains Markdown fences that break Python parsing. Please remove these lines (and any closing ``` fence) so the file is valid Python:

• services/personalization/app.py:1 – Remove the leading `---`
• services/personalization/app.py:4 – Remove the opening ```python fence

After stripping out all backticks/markdown fences, re-run your static analysis or ast.parse check to confirm there are no remaining syntax errors.

🧰 Tools
🪛 Ruff (0.12.2)

1-2: SyntaxError: Expected an expression


4-4: SyntaxError: Got unexpected token `


4-4: SyntaxError: Got unexpected token `


4-4: SyntaxError: Got unexpected token `

🤖 Prompt for AI Agents
In services/personalization/app.py at the beginning of the file (lines 1 to 4),
remove the leading markdown fences including the line with "---" and the opening
"```python" fence. Also ensure to remove any closing "```" fence if present
later in the file. This will make the file valid Python code and prevent syntax
errors during parsing or static analysis.

Comment on lines +6 to +9
from pydantic import BaseModel
from typing import List
from datetime import datetime



🛠️ Refactor suggestion

Strengthen imports for validation and enums

Use Pydantic Field for constraints and Enum for typed fields.

-from pydantic import BaseModel
-from typing import List
-from datetime import datetime
+from pydantic import BaseModel, Field
+from typing import List
+from datetime import datetime
+from enum import Enum
🤖 Prompt for AI Agents
In services/personalization/app.py around lines 6 to 9, the imports should be
enhanced by adding Pydantic's Field to enable validation constraints on model
fields and importing Enum from the enum module to define typed enumerations.
Update the import statements to include these for stronger data validation and
clearer type definitions.

Comment on lines +13 to +24
class Event(BaseModel):
    user_id: str
    activity_type: str
    topic: str
    subtopic: str
    timestamp: datetime
    accuracy: float
    time_taken_seconds: int
    attempts: int
    difficulty: str
    exercise_id: str



🛠️ Refactor suggestion

Schema hardening: enums and field constraints for Event

Constrain inputs to reduce bad data. Also future-proof for Pydantic v2 semantics.

-class Event(BaseModel):
-    user_id: str
-    activity_type: str
-    topic: str
-    subtopic: str
-    timestamp: datetime
-    accuracy: float
-    time_taken_seconds: int
-    attempts: int
-    difficulty: str
-    exercise_id: str
+class ActivityType(str, Enum):
+    quiz = "quiz"
+    coding = "coding"
+    mock = "mock"
+
+
+class Difficulty(str, Enum):
+    easy = "easy"
+    medium = "medium"
+    hard = "hard"
+
+
+class Event(BaseModel):
+    user_id: str
+    activity_type: ActivityType
+    topic: str
+    subtopic: str
+    timestamp: datetime
+    accuracy: float = Field(ge=0.0, le=1.0)
+    time_taken_seconds: int = Field(ge=0)
+    attempts: int = Field(ge=1)
+    difficulty: Difficulty
+    exercise_id: str
🤖 Prompt for AI Agents
In services/personalization/app.py around lines 13 to 24, the Event model fields
lack constraints and enums, which can lead to invalid data. Define enums for
fields like activity_type, difficulty, and possibly topic/subtopic if values are
known. Add field constraints such as min/max lengths for strings, positive
values for numeric fields, and use Pydantic's Field with appropriate validators.
This will harden the schema and prepare it for Pydantic v2 compatibility.

Comment on lines +60 to +63
@app.post("/events")
def add_event(event: Event):
    mock_events.append(event.dict())
    return {"message": "Event recorded", "total_events": len(mock_events)}


🛠️ Refactor suggestion

POST /events: return 201 and prefer model_dump() in Pydantic v2

Status 201 is more accurate for creation; model_dump avoids v2 deprecation warnings.

If you chose “store models” (Option A above):

-@app.post("/events")
-def add_event(event: Event):
-    mock_events.append(event.dict())
-    return {"message": "Event recorded", "total_events": len(mock_events)}
+@app.post("/events", status_code=201)
+def add_event(event: Event):
+    mock_events.append(event)
+    return {"message": "Event recorded", "total_events": len(mock_events)}

If you prefer storing dicts:

-@app.post("/events")
-def add_event(event: Event):
-    mock_events.append(event.dict())
+@app.post("/events", status_code=201)
+def add_event(event: Event):
+    mock_events.append(event.model_dump())
     return {"message": "Event recorded", "total_events": len(mock_events)}
🤖 Prompt for AI Agents
In services/personalization/app.py around lines 60 to 63, the POST /events
endpoint should return HTTP status 201 to indicate resource creation, and use
event.model_dump() instead of event.dict() to comply with Pydantic v2 standards
and avoid deprecation warnings. Update the return statement to include status
code 201 and replace event.dict() with event.model_dump() when appending to
mock_events.

@CoderUzumaki
Owner

Hey @sneha12jyoti,
Thank you for your contribution, but please use the provided PR template when raising a PR; it helps us better understand your changes and simplifies code review.

