Upgrade Vercel AI SDK to v6 (from v4) #238

@thomasdavis

Description

Overview

Upgrade from Vercel AI SDK v4 to v6 to access the latest features, performance improvements, and better React integration.

Current State

  • Current version: AI SDK v4.x
  • Target version: AI SDK v6.x (latest stable)
  • Usage: Cover letter generation, interview chat, resume suggestions, job matching

Benefits of AI SDK v6

  1. Better streaming: improved streaming APIs and error handling
  2. Enhanced tool calling: more powerful function/tool calling capabilities
  3. React Server Components: better integration with the Next.js App Router
  4. Type safety: stronger TypeScript support
  5. Performance: faster token processing and lower latency
  6. Improved hooks: refinements to useChat and useCompletion
  7. Multi-provider support: easier switching between OpenAI, Anthropic, etc.

Current AI Features in Registry

Files using AI SDK:

  • app/api/chat/route.js - AI chat for resume suggestions
  • app/api/letter/route.js - Cover letter generation
  • app/api/interview/route.js - Interview practice
  • app/api/decisions/evaluate/route.js - AI-powered job matching
  • app/api/suggestions.js - Resume improvement suggestions
  • app/components/AIChatEditor.js - Chat UI component
  • app/[username]/interview/ - Interview feature pages

Breaking Changes in v6

1. Import Paths Changed

```js
// v4
import { OpenAIStream } from 'ai';

// v6
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
```

2. Streaming API Refactored

```js
// v4
const response = await openai.createChatCompletion({
  model: 'gpt-4',
  messages,
  stream: true,
});
const stream = OpenAIStream(response);

// v6 -- streamText returns a result synchronously (no await); the response
// helper was renamed in v5 (toDataStreamResponse -> toUIMessageStreamResponse).
// Verify both against the v6 docs.
const result = streamText({
  model: openai('gpt-4-turbo'),
  messages,
});
return result.toUIMessageStreamResponse();
```

3. Tool/Function Calling Updated

```js
// v4
functions: [{
  name: 'getTool',
  description: 'Get a tool',
  parameters: { ... }
}]

// v6 -- tool schemas use inputSchema (renamed from parameters in v5;
// verify against the v6 docs)
tools: {
  getTool: tool({
    description: 'Get a tool',
    inputSchema: z.object({ ... }),
    execute: async (params) => { ... }
  })
}
```

4. React Hooks Enhanced

```js
// v4
const { messages, input, handleSubmit } = useChat({
  api: '/api/chat',
});

// v6 -- useChat now lives in @ai-sdk/react, input state is managed by the
// caller, and isLoading is replaced by status (names follow the v5 migration
// guide; verify against the v6 docs)
const { messages, sendMessage, status, error } = useChat({
  transport: new DefaultChatTransport({ api: '/api/chat' }),
  onFinish: ({ message }) => { ... },
  onError: (error) => { ... },
});
```

Migration Plan

Phase 1: Research and Planning (1 day)

  • Review AI SDK v6 documentation
  • Identify all files using AI SDK v4
  • Document breaking changes affecting our codebase
  • Test v6 in isolated branch

Phase 2: Dependency Updates (1 day)

  • Update ai package to v6
  • Add @ai-sdk/openai package (new modular approach)
  • Update related dependencies
  • Run pnpm install

Phase 3: Code Migration (3-4 days)

Priority order:

  1. API Routes (most critical):

    • /api/chat/route.js - Chat endpoint
    • /api/letter/route.js - Cover letter generation
    • /api/interview/route.js - Interview chat
    • /api/decisions/evaluate/route.js - Job matching AI
    • /api/suggestions.js - Resume suggestions
  2. React Components:

    • AIChatEditor.js - Update useChat hook
    • Interview components - Update chat interfaces
    • Any other AI-powered UI
  3. Utilities:

    • Shared AI helper functions
    • Prompt templates
    • Token counting utilities
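
For the shared token-counting utilities, a provider-agnostic heuristic is enough for budgeting and logging. A minimal sketch (the ~4-characters-per-token ratio and the per-message overhead are rough assumptions, not SDK values):

```javascript
// Rough token estimate: ~4 characters per token for English text.
// This is a budgeting/logging heuristic, not an exact tokenizer.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Estimate the total prompt size for a chat messages array,
// adding a small assumed per-message framing overhead.
function estimateMessageTokens(messages) {
  return messages.reduce(
    (sum, m) => sum + estimateTokens(m.content) + 4,
    0
  );
}
```

For exact counts, swap in a real tokenizer; the call sites stay the same.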

Phase 4: Testing (2-3 days)

  • Test chat functionality
  • Test cover letter generation
  • Test interview feature
  • Test AI job matching
  • Test resume suggestions
  • Performance testing (latency, token usage)
  • Error handling testing
  • Rate limiting testing

Phase 5: Documentation Updates

  • Update CLAUDE.md with v6 patterns
  • Document new AI SDK usage patterns
  • Add examples for tool calling
  • Update developer guides

Code Examples

Before (v4):

```js
// app/api/chat/route.js
import { Configuration, OpenAIApi } from 'openai-edge';
import { OpenAIStream, StreamingTextResponse } from 'ai';

const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(config);

export async function POST(req) {
  const { messages } = await req.json();

  const response = await openai.createChatCompletion({
    model: 'gpt-4',
    stream: true,
    messages,
  });

  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
```

After (v6):

```js
// app/api/chat/route.js
import { openai } from '@ai-sdk/openai';
import { convertToModelMessages, streamText } from 'ai';

export async function POST(req) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4-turbo'),
    // useChat sends UI messages; convert them to model messages
    // (v5+ pattern -- verify against the v6 docs)
    messages: convertToModelMessages(messages),
    maxOutputTokens: 1000, // renamed from maxTokens in v5
  });

  return result.toUIMessageStreamResponse();
}
```

Tool Calling (v6):

```js
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

const result = streamText({
  model: openai('gpt-4-turbo'),
  messages,
  tools: {
    getJobMatches: tool({
      description: 'Find matching jobs for a resume',
      // inputSchema replaced parameters in v5 -- verify against the v6 docs
      inputSchema: z.object({
        skills: z.array(z.string()),
        experience: z.number(),
      }),
      execute: async ({ skills, experience }) => {
        // Fetch matching jobs
        return await findMatchingJobs(skills, experience);
      },
    }),
  },
});
```

Testing Strategy

Unit Tests

  • Test streaming responses
  • Test error handling
  • Test tool calling
  • Test token limits
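
Streaming behavior can be unit-tested without hitting the API by substituting a mock async iterable for the SDK's text stream. A minimal sketch (`collectStream` and `mockStream` are hypothetical test helpers, not SDK exports):

```javascript
// Collect a streamed response into a single string. `textStream` is any
// async iterable of string chunks -- streamText's result.textStream has
// this shape, but tests substitute a mock so no API call is made.
async function collectStream(textStream) {
  let out = '';
  for await (const chunk of textStream) out += chunk;
  return out;
}

// Mock "model output" for unit tests: an async generator of chunks.
async function* mockStream(chunks) {
  for (const chunk of chunks) yield chunk;
}
```

Assertions on the collected string then cover ordering, empty streams, and error propagation without network access.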

Integration Tests

  • Test full chat flows
  • Test cover letter generation end-to-end
  • Test interview conversations
  • Test AI evaluation accuracy

Performance Tests

  • Measure response latency
  • Compare token usage (v4 vs v6)
  • Monitor API costs
  • Check memory usage

Rollback Plan

If issues are found:

  1. Keep the v4 code path as a fallback behind a feature flag
  2. Gradual rollout (10% → 50% → 100%)
  3. Monitor error rates and user feedback
  4. Quick revert capability via environment variable
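
The gradual rollout in step 2 can be driven by a deterministic hash of the user id, so each user stays on one code path throughout the rollout. A sketch, assuming a hypothetical `AI_SDK_V6_ROLLOUT_PERCENT` environment variable as the revert lever:

```javascript
// Deterministically bucket a user id into [0, 100) so the same user
// always lands on the same code path during the rollout.
function hashToBucket(userId) {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 100;
}

// Route to v6 when the user's bucket falls under the rollout percentage.
// Setting the env var to 0 is the instant revert to v4.
function shouldUseV6(
  userId,
  percent = Number(process.env.AI_SDK_V6_ROLLOUT_PERCENT ?? 0)
) {
  return hashToBucket(userId) < percent;
}
```

Raising the percentage 10 → 50 → 100 moves users over in stable cohorts rather than randomly per request.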

Success Criteria

  • ✅ All AI features working correctly
  • ✅ No increase in errors or failures
  • ✅ Performance equal or better than v4
  • ✅ Token usage optimized
  • ✅ All tests passing
  • ✅ Documentation updated
  • ✅ No regressions in user experience

Cost Considerations

Monitor token usage before and after:

  • Track total tokens consumed per request
  • Monitor API costs (OpenAI dashboard)
  • Optimize prompts if costs increase
  • Consider caching strategies
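
Per-request cost can be derived from the usage counts the SDK reports. A sketch with placeholder prices (the field names and rates here are assumptions; check the v6 docs for the actual usage shape and your provider's current pricing):

```javascript
// Hypothetical per-1K-token prices in USD -- replace with real rates.
const PRICE_PER_1K = { input: 0.01, output: 0.03 };

// `usage` is assumed to carry input/output token counts; map whatever
// shape the SDK actually reports onto these fields.
function requestCost(usage) {
  return (
    (usage.inputTokens / 1000) * PRICE_PER_1K.input +
    (usage.outputTokens / 1000) * PRICE_PER_1K.output
  );
}
```

Logging this per endpoint makes the v4-vs-v6 cost comparison a simple aggregation.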

Dependencies

Blocked by:

Blocks:

  • Future AI feature enhancements
  • Multi-model support (Anthropic, etc.)
  • Advanced tool calling features

References

Labels

enhancement, dependencies, ai-features, breaking-change
