Supabase Auth + AI stack for Next.js 15 with SSR and React Server Components (RSC). A production-ready template combining Supabase SSR authentication with AI capabilities: document chat (RAG), web search, and multiple LLM support. Features include secure file storage, vector search (pgvector), and persistent chat history.


Supabase Auth with SSR + RAG + AI Web Search πŸ”

Project Showcase

Images

(Screenshots: front pages, protected page, sign-in page, password page, AI chat page, and RAG chat.)

Videos

Demo videos can be found inside the public folder!


Features

  • Robust and easy authentication: Utilize Supabase's auth capabilities alongside SSR for security.
  • Performance: Leverage server-side rendering for faster load times and improved user experience.
  • Next.js Integration: Specifically designed for easy integration with Next.js 15 projects.

Getting Started

Prerequisites

If you want to use the AI features, you will need API keys for your LLM provider (for example OpenAI) and for LlamaIndex Cloud; see the Environment Variables section below.

Installation

  1. Clone the Repository

    git clone https://github.com/ElectricCodeGuy/SupabaseAuthWithSSR.git
  2. Navigate to the Project Directory

    cd SupabaseAuthWithSSR
  3. Install Required Packages

    npm install

Database Setup

Before launching your application, you must configure the database schema within Supabase. Open the Supabase SQL Editor and run the following SQL queries to set up the schema.

  1. Create the Users Table
-- Create users table
create table users (
  id uuid references auth.users not null primary key,
  full_name text,
  email text
);

-- Enable Row Level Security (RLS)
alter table public.users enable row level security;

-- Create RLS policies for users table
create policy "Users can insert own data"
on public.users
for insert
to public
with check (id = auth.uid());

create policy "Users can update own data"
on public.users
for update
to public
using (id = auth.uid())
with check (id = auth.uid());

create policy "Users can view own data"
on public.users
for select
to public
using (id = auth.uid());

This SQL statement creates a users table with columns for storing user data such as id, full_name, and email. The id column is a foreign key referencing the auth.users table. It also enables RLS on the users table, with policies that allow users to read, insert, and update only their own data.

  2. Create a Trigger Function

    create function public.handle_new_user()
    returns trigger as $$
    begin
     insert into public.users (id, full_name, email)
     values (
       new.id,
       new.raw_user_meta_data->>'full_name',
       new.email
     );
     return new;
    end;
    $$ language plpgsql security definer;

This trigger function automatically inserts a new row into the public.users table when a new user signs up via Supabase Auth. It copies the id and email from auth.users, and the full_name from the user's metadata, into the corresponding columns of public.users.

  3. Create a Trigger

    create trigger on_auth_user_created
      after insert on auth.users
      for each row execute procedure public.handle_new_user();

This SQL statement creates a trigger named on_auth_user_created that executes the public.handle_new_user() function after each new user is inserted into the auth.users table.

  4. Sign Up for an Account
  • Navigate to http://localhost:3000/signup in your web browser.
  • Use the sign-up form to create an account. Ensure you use a valid email address that you have access to, as you'll need to verify it in the next step.
  5. Verify Your Email
  • After signing up, Supabase will send an email to the address you provided. Check your inbox for an email from Supabase or your application.
  • Open the email and click on the verification link to confirm your email address. This step is crucial for activating your account and ensuring that you can log in and access the application's features.
  6. Create the remaining tables, RLS policies, and the RPC function
  -- Chat Sessions Table
  create table
    public.chat_sessions (
      id uuid not null default extensions.uuid_generate_v4 (),
      user_id uuid not null,
      created_at timestamp with time zone not null default current_timestamp,
      updated_at timestamp with time zone not null default current_timestamp,
      chat_title text null,
      constraint chat_sessions_pkey primary key (id),
      constraint chat_sessions_user_id_fkey foreign key (user_id) references users (id)
    ) tablespace pg_default;

  create index if not exists idx_chat_sessions_user_id on public.chat_sessions using btree (user_id) tablespace pg_default;

  create index if not exists chat_sessions_created_at_idx on public.chat_sessions using btree (created_at) tablespace pg_default;

  -- Message Parts Table (NEW - Replaces chat_messages for incremental saving)
  -- This table stores individual message parts (text, tools, reasoning, etc.)
  -- allowing for incremental saving and proper ordering of AI responses
  create table public.message_parts (
    id uuid NOT NULL DEFAULT gen_random_uuid(),
    chat_session_id uuid NOT NULL,
    message_id uuid NOT NULL,
    role text NOT NULL,
    type text NOT NULL,
    "order" integer NOT NULL DEFAULT 0,
    created_at timestamp with time zone NOT NULL DEFAULT CURRENT_TIMESTAMP,

    -- Text part fields
    text_text text NULL,
    text_state text NULL DEFAULT 'done',

    -- Reasoning part fields
    reasoning_text text NULL,
    reasoning_state text NULL DEFAULT 'done',

    -- File part fields
    file_mediatype text NULL,
    file_filename text NULL,
    file_url text NULL,

    -- Source URL part fields
    source_url_id text NULL,
    source_url_url text NULL,
    source_url_title text NULL,

    -- Source Document part fields
    source_document_id text NULL,
    source_document_mediatype text NULL,
    source_document_title text NULL,
    source_document_filename text NULL,

    -- Tool: searchUserDocument fields
    tool_searchuserdocument_toolcallid uuid NULL,
    tool_searchuserdocument_state text NULL,
    tool_searchuserdocument_input jsonb NULL,
    tool_searchuserdocument_output jsonb NULL,
    tool_searchuserdocument_errortext text NULL,
    tool_searchuserdocument_providerexecuted boolean NULL,

    -- Tool: websiteSearchTool fields
    tool_websitesearchtool_toolcallid uuid NULL,
    tool_websitesearchtool_state text NULL,
    tool_websitesearchtool_input jsonb NULL,
    tool_websitesearchtool_output jsonb NULL,
    tool_websitesearchtool_errortext text NULL,
    tool_websitesearchtool_providerexecuted boolean NULL,

    -- Provider metadata
    providermetadata jsonb NULL,

    -- Constraints
    CONSTRAINT message_parts_pkey PRIMARY KEY (id),
    CONSTRAINT message_parts_chat_session_id_fkey FOREIGN KEY (chat_session_id)
      REFERENCES chat_sessions (id) ON DELETE CASCADE,
    CONSTRAINT message_parts_role_check CHECK (
      role = ANY (ARRAY['user'::text, 'assistant'::text, 'system'::text])
    )
  ) TABLESPACE pg_default;

  -- Create indexes for performance
  CREATE INDEX idx_message_parts_chat_session_id
    ON public.message_parts USING btree (chat_session_id) TABLESPACE pg_default;
  CREATE INDEX idx_message_parts_message_id
    ON public.message_parts USING btree (message_id) TABLESPACE pg_default;
  CREATE INDEX idx_message_parts_chat_session_message_order
    ON public.message_parts USING btree (chat_session_id, message_id, "order") TABLESPACE pg_default;
  CREATE INDEX idx_message_parts_created_at
    ON public.message_parts USING btree (created_at) TABLESPACE pg_default;
  CREATE INDEX idx_message_parts_type
    ON public.message_parts USING btree (type) TABLESPACE pg_default;
  CREATE INDEX idx_message_parts_message_order
    ON public.message_parts USING btree (message_id, "order") TABLESPACE pg_default;
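
Because each part of a message is stored as its own row, an assistant response can be persisted incrementally while it streams and reassembled by message_id and "order" when the chat history is loaded. As a rough sketch (not the repository's actual handler), assuming a server-side Supabase client created with the service role key from .env.local, saving a single text part could look like this:

import { createClient } from '@supabase/supabase-js';

// Server-side client; the service role key bypasses RLS, so never expose it to the browser.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Hypothetical helper: persist one text part of a streamed message.
export async function saveTextPart(
  chatSessionId: string,
  messageId: string,
  role: 'user' | 'assistant',
  text: string,
  order: number
) {
  const { error } = await supabase.from('message_parts').insert({
    chat_session_id: chatSessionId,
    message_id: messageId,
    role,
    type: 'text',
    order, // position of this part within the message
    text_text: text,
    text_state: 'done'
  });
  if (error) throw error;
}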

-- Enable the vector extension
CREATE EXTENSION IF NOT EXISTS vector WITH SCHEMA extensions;

-- Note: pgvector currently does not support indexing vectors with more than 2,000 dimensions. If you have hundreds of thousands of documents (and therefore vectors), use an embedding model that produces 2,000 dimensions or fewer so the index can be built.

# Vector Database Configuration for Efficient Similarity Search

When dealing with hundreds of thousands of document vectors, optimizing for both storage and retrieval speed is critical. Our system has been configured using the following best practices:


-- Create the user_documents table
CREATE TABLE public.user_documents (
  id uuid NOT NULL DEFAULT gen_random_uuid(),
  user_id uuid NOT NULL,
  title text NOT NULL,
  total_pages integer NOT NULL,
  ai_description text NULL,
  ai_keyentities text[] NULL,
  ai_maintopics text[] NULL,
  ai_title text NULL,
  file_path text NOT NULL,
  created_at timestamp with time zone NOT NULL DEFAULT CURRENT_TIMESTAMP,
  updated_at timestamp with time zone NULL DEFAULT CURRENT_TIMESTAMP,
  CONSTRAINT user_documents_pkey PRIMARY KEY (id),
  CONSTRAINT user_documents_user_title_unique UNIQUE (user_id, title),
  CONSTRAINT user_documents_user_id_fkey FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
) TABLESPACE pg_default;

-- Separate vector embeddings table
CREATE TABLE public.user_documents_vec (
  id uuid NOT NULL DEFAULT gen_random_uuid(),
  document_id uuid NOT NULL,
  text_content text NOT NULL,
  page_number integer NOT NULL,
  embedding extensions.vector(1024) NULL,
  CONSTRAINT user_documents_vec_pkey PRIMARY KEY (id),
  CONSTRAINT user_documents_vec_document_page_unique UNIQUE (document_id, page_number),
  CONSTRAINT user_documents_vec_document_id_fkey FOREIGN KEY (document_id) REFERENCES user_documents (id) ON DELETE CASCADE
) TABLESPACE pg_default;

ALTER TABLE public.user_documents ENABLE ROW LEVEL SECURITY;
ALTER TABLE public.user_documents_vec ENABLE ROW LEVEL SECURITY;

-- RLS policy for user_documents - users can only access their own documents
CREATE POLICY "Users can only access their own documents" ON public.user_documents
    FOR ALL
    TO public
    USING ((SELECT auth.uid()) = user_id);

CREATE POLICY "Users can only access their own document vectors" ON public.user_documents_vec
    FOR ALL
    TO public
    USING (
        EXISTS (
            SELECT 1 FROM user_documents
            WHERE user_documents.id = user_documents_vec.document_id
            AND user_documents.user_id = (SELECT auth.uid())
        )
    );
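
With these policies in place, application code does not need to filter by user_id: a query made through a client that carries the user's session only ever sees that user's rows. A minimal sketch, assuming an authenticated Supabase client (for example the SSR server client the template uses):

import type { SupabaseClient } from '@supabase/supabase-js';

// Lists the signed-in user's documents; RLS filters out everyone else's rows automatically.
export async function listMyDocuments(supabase: SupabaseClient) {
  const { data, error } = await supabase
    .from('user_documents')
    .select('id, title, ai_title, total_pages, created_at')
    .order('created_at', { ascending: false });

  if (error) throw error;
  return data;
}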


-- Create indexes for better performance
CREATE INDEX IF NOT EXISTS idx_user_documents_user_id
ON public.user_documents USING btree (user_id) TABLESPACE pg_default;


CREATE INDEX IF NOT EXISTS idx_user_documents_vec_document_id
ON public.user_documents_vec USING btree (document_id) TABLESPACE pg_default;

-- Create HNSW index for vector similarity search
-- (cosine distance, matching the <=> operator used in match_documents below)
CREATE INDEX IF NOT EXISTS user_documents_vec_embedding_idx
ON public.user_documents_vec
USING hnsw (embedding extensions.vector_cosine_ops)
WITH (m = 16, ef_construction = 64)
TABLESPACE pg_default;


## HNSW Index Configuration

The Hierarchical Navigable Small World (HNSW) index is configured with:

- **m = 16**: Maximum number of connections per layer
- **ef_construction = 64**: Size of the dynamic candidate list during construction

These parameters balance build time, index size, and query performance for our document volumes. The HNSW index drastically improves vector similarity search performance while maintaining high recall rates.

## Why These Parameters?

- **Dimension Size (1024)**: Our embedding model (voyage-3-large) produces 1024-dimensional vectors, well under the pgvector 2000-dimension limit
- **HNSW Algorithm**: Offers logarithmic search complexity, critical for large document collections
- **Cosine Similarity**: Best metric for normalized document embeddings

These optimizations enable sub-second query times even with hundreds of thousands of document vectors in the database.

Above 500k rows, consider increasing the index parameters to m = 32 and ef_construction = 128.

-- Enable RLS on the chat tables
ALTER TABLE public.chat_sessions ENABLE ROW LEVEL SECURITY;
ALTER TABLE public.message_parts ENABLE ROW LEVEL SECURITY;

-- Chat Sessions RLS Policies
CREATE POLICY "Users can view own chat sessions"
ON public.chat_sessions
AS PERMISSIVE
FOR ALL
TO public
USING (user_id = (SELECT auth.uid()));

-- Message Parts RLS Policies
CREATE POLICY "Users can view messages from their sessions"
ON public.message_parts
AS PERMISSIVE
FOR ALL
TO public
USING (
  chat_session_id IN (
      SELECT chat_sessions.id
      FROM chat_sessions
      WHERE chat_sessions.user_id = (SELECT auth.uid())
  )
);

-- Create the similarity search function
CREATE OR REPLACE FUNCTION match_documents(
  query_embedding vector(1024),
  match_count int,
  filter_user_id uuid,
  file_ids uuid[],
  similarity_threshold float DEFAULT 0.30
)
RETURNS TABLE (
  id uuid,
  text_content text,
  title text,
  doc_timestamp timestamp with time zone,
  ai_title text,
  ai_description text,
  ai_maintopics text[],
  ai_keyentities text[],
  page_number integer,
  total_pages integer,
  similarity float
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    vec.id,
    vec.text_content,
    doc.title,
    doc.created_at as doc_timestamp,
    doc.ai_title,
    doc.ai_description,
    doc.ai_maintopics,
    doc.ai_keyentities,
    vec.page_number,
    doc.total_pages,
    1 - (vec.embedding <=> query_embedding) as similarity
  FROM
    user_documents_vec vec
  INNER JOIN
    user_documents doc ON vec.document_id = doc.id
  WHERE
    doc.user_id = filter_user_id
    AND doc.id = ANY(file_ids)
    AND 1 - (vec.embedding <=> query_embedding) > similarity_threshold
  ORDER BY
    vec.embedding <=> query_embedding ASC
  LIMIT LEAST(match_count, 200);
END;
$$;
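
From application code the function above is available as a Supabase RPC. A hedged sketch of calling it, assuming a 1024-dimensional query embedding has already been computed and the user has selected one or more of their documents:

import type { SupabaseClient } from '@supabase/supabase-js';

// Runs the vector similarity search defined above and returns the matching chunks.
export async function searchUserDocuments(
  supabase: SupabaseClient,
  queryEmbedding: number[], // must be 1024-dimensional (voyage-3-large in this setup)
  userId: string,
  fileIds: string[],        // user_documents ids the user selected
  matchCount = 10
) {
  const { data, error } = await supabase.rpc('match_documents', {
    query_embedding: queryEmbedding,
    match_count: matchCount,
    filter_user_id: userId,
    file_ids: fileIds,
    similarity_threshold: 0.3
  });

  if (error) throw error;
  return data; // rows with text_content, title, page_number, similarity, ...
}

Results come back ordered by distance, so the first row is the closest match; the similarity column can be passed to the model as context metadata or used for additional filtering.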

Document Processing Setup

To enable document upload and chat functionality, you'll need an additional API key:

  1. LlamaIndex Cloud Setup
  • Visit LlamaIndex Cloud
  • Create an account and get your API key
  • Add to .env.local:
    LLAMA_CLOUD_API_KEY=your_api_key_here
    

Together with your embedding and LLM providers, this enables document processing, embedding storage, and semantic search in your chat interface.
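
The repository's ingestion route handles parsing (via LlamaIndex Cloud), embedding, and storage end to end; the simplified sketch below covers only the embedding-and-storage step. It assumes the Voyage AI embeddings endpoint with the voyage-3-large model mentioned earlier and a hypothetical VOYAGE_API_KEY environment variable:

import type { SupabaseClient } from '@supabase/supabase-js';

// Simplified sketch: embed one page of parsed text and store it in user_documents_vec.
// Batching, retries, and error handling are omitted.
export async function embedAndStorePage(
  supabase: SupabaseClient,
  documentId: string,
  pageNumber: number,
  textContent: string
) {
  // Voyage AI embeddings endpoint (assumed); voyage-3-large returns 1024-dimensional vectors.
  const res = await fetch('https://api.voyageai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.VOYAGE_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ input: [textContent], model: 'voyage-3-large' })
  });
  const { data } = await res.json();

  const { error } = await supabase.from('user_documents_vec').insert({
    document_id: documentId,
    page_number: pageNumber,
    text_content: textContent,
    embedding: data[0].embedding
  });
  if (error) throw error;
}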

Storage Setup and RLS

After setting up the basic database structure, you need to configure storage and its associated security policies in Supabase.

  1. Create Storage Bucket

First, create a storage bucket named 'userfiles' in your Supabase dashboard:

  • Go to Storage in your Supabase dashboard
  • Click "Create Bucket"
  • Name it "userfiles"
  • Set it to private
  2. Configure Storage RLS Policies

Add the following policies to secure your storage. These policies ensure users can only access their own files and folders.

-- Policy 1: Allow users to select their own files
create policy "User can select own files"
on storage.objects for select
using ((bucket_id = 'userfiles'::text) AND
       ((auth.uid())::text = (storage.foldername(name))[1]));

-- Policy 2: Allow users to insert their own files
create policy "User can insert own files"
on storage.objects for insert
with check ((bucket_id = 'userfiles'::text) AND
            ((auth.uid())::text = (storage.foldername(name))[1]));

-- Policy 3: Allow users to update their own files
create policy "User can update own files"
on storage.objects for update
using ((bucket_id = 'userfiles'::text) AND
       ((auth.uid())::text = (storage.foldername(name))[1]));

-- Policy 4: Allow users to delete their own files
create policy "User can delete own files"
on storage.objects for delete
using ((bucket_id = 'userfiles'::text) AND
       ((auth.uid())::text = (storage.foldername(name))[1]));

-- Policy 5: Allow public select access to objects
create policy "Allow public select access"
on storage.objects for select
using (true);

These policies accomplish the following:

  • Policies 1-4 ensure users can only manage (select, insert, update, delete) files within their own user directory
  • Policy 5 allows public select access to all objects, which is necessary for certain Supabase functionality

The (storage.foldername(name))[1] expression extracts the first folder in the file path, which must match the user's ID (see the upload sketch below).

  3. Verify Configuration

After setting up these policies:

  • Users can only access files in their own directory
  • Files are organized by user ID automatically
  • Public select access is maintained for system functionality
  • All other operations are restricted to file owners only
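
Because the policies key on the first folder in the object path, uploads must be placed under a folder named after the user's ID. A minimal sketch, assuming an authenticated Supabase client:

import type { SupabaseClient } from '@supabase/supabase-js';

// Uploads into the 'userfiles' bucket under "<user-id>/<file-name>" so the first
// path segment matches auth.uid() in the storage policies above.
export async function uploadUserFile(supabase: SupabaseClient, file: File) {
  const { data: { user } } = await supabase.auth.getUser();
  if (!user) throw new Error('Not signed in');

  const { data, error } = await supabase.storage
    .from('userfiles')
    .upload(`${user.id}/${file.name}`, file, { upsert: true });

  if (error) throw error;
  return data.path; // e.g. "<user-id>/report.pdf"
}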

Environment Variables

Configure your environment by renaming .env.local.example to .env.local and updating it with your Supabase project details:

  • SUPABASE_URL: Your Supabase project URL.
  • SUPABASE_ANON_KEY: Your Supabase anon (public) key.
  • SUPABASE_SERVICE_ROLE_KEY: Your Supabase service role key.

Document Processing:

  • LLAMA_CLOUD_API_KEY: Your LlamaIndex Cloud API key

For OpenAI:

  • OPENAI_API_KEY: Your OpenAI API key.
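
These variables are read by the Supabase SSR client. The repository ships its own client utilities, but a minimal sketch of a cookie-aware server client built with @supabase/ssr in Next.js 15 (where cookies() is async) looks roughly like this:

// lib/supabase/server.ts (illustrative path)
import { createServerClient } from '@supabase/ssr';
import { cookies } from 'next/headers';

export async function createClient() {
  const cookieStore = await cookies(); // async in Next.js 15

  return createServerClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_ANON_KEY!,
    {
      cookies: {
        getAll: () => cookieStore.getAll(),
        setAll: (cookiesToSet) => {
          try {
            cookiesToSet.forEach(({ name, value, options }) =>
              cookieStore.set(name, value, options)
            );
          } catch {
            // setAll was called from a Server Component; safe to ignore when
            // session refresh is handled in middleware.
          }
        }
      }
    }
  );
}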

πŸ“§ Email Templates

To ensure that the authentication flow works correctly with the API routes provided in this codebase, please update your email templates in the Supabase project settings according to the templates provided below:

Confirm Your Signup

When users sign up, they'll receive an email to confirm their account. The template should look like this:

<!DOCTYPE html>
<html>
  <head>
    <title>Confirm Your Signup</title>
    <!-- Add styles and head content here -->
  </head>
  <body>
    <div class="container">
      <div class="header">
        <h1>Welcome to Your Company Name</h1>
      </div>

      <h2>Confirm your signup</h2>
      <p>Follow this link to confirm your user:</p>
      <a
        href="{{ .SiteURL }}/api/auth/callback?token_hash={{ .TokenHash }}&type=email"
        >Confirm your email</a
      >
    </div>
  </body>
</html>

Invite User Email

When you invite new users to your platform, they should receive an invitation like this:

<h2>You have been invited</h2>
<p>
  You have been invited to create a user on {{ .SiteURL }}. Follow this link to
  accept the invite:
</p>
<a
  href="{{ .SiteURL }}/api/auth/callback?token_hash={{ .TokenHash }}&type=invite&next=/auth-password-update"
  >Accept the invite</a
>

Magic Link Email

For passwordless login, the magic link email template should be set as follows:

<h2>Magic Link</h2>
<p>Follow this link to login:</p>
<a
  href="{{ .SiteURL }}/api/auth/callback?token_hash={{ .TokenHash }}&type=email"
  >Log In</a
>

Confirm Email Change

When users need to confirm their new email, use the following template:

<h2>Confirm Change of Email</h2>
<p>
  Follow this link to confirm the update of your email from {{ .Email }} to {{
  .NewEmail }}:
</p>
<a href="{{ .ConfirmationURL }}">Change Email</a>

Reset Password Email

For users that have requested a password reset:

<h2>Reset Password</h2>
<p>Follow this link to reset the password for your user:</p>
<a
  href="{{ .SiteURL }}/api/auth/callback?token_hash={{ .TokenHash }}&type=recovery&next=/auth-password-update"
  >Reset Password</a
>
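
All of these templates link to /api/auth/callback with a token_hash and type. The repository includes its own route handler; as an illustrative sketch, such a handler verifies the token with supabase.auth.verifyOtp and redirects to the next path from the link (file path, helper import, and redirect targets below are assumptions):

// app/api/auth/callback/route.ts (illustrative)
import { NextResponse, type NextRequest } from 'next/server';
import type { EmailOtpType } from '@supabase/supabase-js';
import { createClient } from '@/lib/supabase/server'; // hypothetical helper, see Environment Variables

export async function GET(request: NextRequest) {
  const { searchParams } = new URL(request.url);
  const token_hash = searchParams.get('token_hash');
  const type = searchParams.get('type') as EmailOtpType | null;
  const next = searchParams.get('next') ?? '/';

  if (token_hash && type) {
    const supabase = await createClient();
    const { error } = await supabase.auth.verifyOtp({ type, token_hash });
    if (!error) {
      return NextResponse.redirect(new URL(next, request.url));
    }
  }

  // Token missing or invalid: send the user somewhere sensible (assumed route).
  return NextResponse.redirect(new URL('/signin?error=auth', request.url));
}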

Code Structure and Philosophy

Code Organization Over Design Patterns

While design patterns like the Factory Pattern and other "clean code" principles have their place, they often lead to overly complex, hard-to-understand codebases. Different developers have different coding styles and approaches - this is natural and okay. Instead of forcing a specific pattern, we focus on keeping related code together in the same folder, making it easier for everyone to understand and maintain.

Intentional Code Duplication Examples

  1. Shared Code (Minimal) Only truly universal utilities are shared:

    • getSession() for auth
    • Type definitions for database schema
    • Error boundary components
  2. Locality of Behavior Everything else stays with its feature:

    • Custom hooks live in feature directories
    • API route handlers stay with their features
    • State management is feature-specific
    • Types and interfaces specific to a feature stay in that feature's directory

This approach means:

  • Each feature directory is a complete, self-contained unit
  • No hunting through shared directories to understand a feature
  • Changes can be made confidently without side effects
  • New developers can understand features by looking in one place

The goal is maximum independence and clarity, even at the cost of some duplication. Rather than creating complex abstractions or following rigid design patterns, we prioritize keeping related code together and making it easy to understand at a glance. Shared code is limited to only the most basic, unchanging utilities that truly serve every part of the application.

Project Structure Visualization

Below is a comprehensive dependency graph showing how all components and modules in the project are interconnected. This visualization helps understand the project's architecture and component relationships:

(Dependency graph image: full-deps.svg, generated with the madge command below.)

This dependency graph illustrates:

  • Component hierarchies and their relationships
  • Module dependencies across the application
  • Import/Export relationships between files
  • The overall architectural structure of the project

Understanding this graph can help developers:

  • Navigate the codebase more effectively
  • Identify potential areas for refactoring
  • Understand component dependencies
  • Visualize the application's architecture

The dependency graph was generated using the following command:

npx madge \
  --image full-deps.svg \
  --extensions js,jsx,ts,tsx \
  --ts-config tsconfig.json \
  --exclude "node_modules|.next|public" \
  --warning \
  .

πŸ“œ License

πŸ”– Licensed under the MIT License. See LICENSE.md for details.

