First, run the development server:
```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```

Quiz generation can use retrieved course chunks from Snowflake when the dashboard sends a course id. If the following environment variables are set, the quiz API embeds the unit/topic query, retrieves the top-k chunks, and asks Gemini to base at least 80% of the answers on that material. If they are missing or retrieval fails, the app falls back to topic-only generation.
- `SNOWFLAKE_HOST` – Snowflake account host (e.g. `abc12345.snowflakecomputing.com`)
- `SNOWFLAKE_TOKEN` – Bearer token for the REST API
- `SNOWFLAKE_TOKEN_TYPE` – Optional (default `PROGRAMMATIC_ACCESS_TOKEN`)
- `SNOWFLAKE_DATABASE` – Database name (default `KNOT`)
- `SNOWFLAKE_WAREHOUSE` – Warehouse for queries
- `SNOWFLAKE_ROLE` – Optional role
Use the same database/schema as the ingestion pipeline (see `ingestion/README.md`).
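For local development, these variables can live in `.env.local`. A sketch with placeholder values (substitute your own account host, token, and warehouse):

```bash
# .env.local — placeholder values, replace with your own Snowflake details
SNOWFLAKE_HOST=abc12345.snowflakecomputing.com
SNOWFLAKE_TOKEN=your-programmatic-access-token
SNOWFLAKE_TOKEN_TYPE=PROGRAMMATIC_ACCESS_TOKEN   # optional; this is the default
SNOWFLAKE_DATABASE=KNOT                          # optional; this is the default
SNOWFLAKE_WAREHOUSE=COMPUTE_WH                   # placeholder warehouse name
SNOWFLAKE_ROLE=ANALYST                           # optional; placeholder role
```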
- Response debug – Every quiz response includes `_debug: { ragUsed: boolean, chunkCount: number, query?: string }`. In the browser: DevTools → Network → trigger a quiz → select the `generate` request → Response tab. If `ragUsed` is `true` and `chunkCount` ≥ 1, RAG retrieval ran and that many chunks were injected into the prompt.
- Chunk info in response – When RAG is used, the response includes a `sources` array: each item has `chunk_id`, `document_id`, `document_title`, `course_name`, `module_name`, `score`, and `text`. The client receives it via `generateQuiz()` (which returns `{ questions, sources? }`); the Quiz component keeps `sources` in state for citations or a "view source" feature later.
- Server logs – With `npm run dev`, when RAG is used you'll see a log like `[Quiz RAG] courseId=... unitId=... chunks=8 query="Unit: ..."`.
- Compare with/without – Use a course that has ingested chunks. Open a unit and start a quiz (the dashboard sends `courseId`). Then call the API manually without `courseId` (e.g. omit it from a curl body); the first run should show `ragUsed: true`, the second `ragUsed: false`.
- Content check – If your course material has distinctive terms or definitions, RAG-generated questions should reflect them; non-RAG questions may be more generic.