Agent playground

A minimal, ready-to-use Anthropic agent.

Showcases core building blocks with no heavy library abstractions.

See a demo

Screenshot

Features

What's implemented:

  • Stream LLM responses to the client
  • Handle errors during streaming
  • Let users cancel the stream
  • Support different response types (like Text and Thinking)
  • See the high-level design below

Tech:

  • Next.js (15.3.4)
  • Anthropic SDK
  • shadcn/ui for styling
  • Docker for packaging

Design

The client keeps all state (messages) in memory to simplify development (refreshing the page resets the state).

It sends messages to the server via a POST request to get the LLM response.

Communication follows a one-request, one-response schema: no open WebSocket connections or Server-Sent Events push data outside the initial request.
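
As a sketch, the round trip is a single fetch per user turn; the /api/chat endpoint name is an assumption, not necessarily the project's actual route:

// Sketch: send the whole conversation in one POST and read the reply as a stream
async function sendMessages(messages: unknown[]): Promise<Response> {
  return fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages }),
  });
}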

The server forwards messages to Anthropic using the SDK.
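
A minimal sketch of that forwarding step, assuming the @anthropic-ai/sdk streaming helper; the model name and thinking budget are illustrative:

// Sketch: create a streamed Anthropic response for the conversation
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

function forwardToAnthropic(messages: Anthropic.Messages.MessageParam[]) {
  return anthropic.messages.stream({
    model: 'claude-sonnet-4-20250514',                   // assumed model name
    max_tokens: 2048,
    thinking: { type: 'enabled', budget_tokens: 1024 },  // assumed: enables ThinkingBlock output
    messages,
  });
}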

The Anthropic response message follows this schema:

Message:
  content:
    ThinkingBlock: {thinking: 'User asked me about ...'}
    TextBlock: {text: 'Hi, ...'}
    ... other blocks    
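
A simplified TypeScript view of the block shapes the client cares about (a sketch; the SDK's actual types carry more fields):

// Sketch: the two block shapes the UI renders, plus a catch-all for the rest
type ThinkingBlock = { type: 'thinking'; thinking: string };
type TextBlock = { type: 'text'; text: string };
type ContentBlock = ThinkingBlock | TextBlock | { type: string };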

The server streams the blocks back using 'Transfer-Encoding: chunked'. Each block is serialized into its own chunk and separated by "\n".
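
A sketch of that streaming step as a helper that turns any async-iterable sequence of blocks or events into a newline-delimited chunked response (names are illustrative):

// Sketch: serialize each item as JSON and terminate it with "\n"
export function toNdjsonResponse(items: AsyncIterable<unknown>): Response {
  const encoder = new TextEncoder();
  const body = new ReadableStream<Uint8Array>({
    async start(controller) {
      try {
        for await (const item of items) {
          controller.enqueue(encoder.encode(JSON.stringify(item) + '\n'));
        }
      } finally {
        controller.close();
      }
    },
  });
  // Returning a ReadableStream body makes Next.js send it with Transfer-Encoding: chunked
  return new Response(body, { headers: { 'Content-Type': 'application/x-ndjson' } });
}

Combined with the forwarding sketch above, a POST route handler could return toNdjsonResponse(...) over whatever block or event stream the server chooses to relay.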

The browser buffers incoming data (anywhere from a few KB to ~64 KB) before handing it to the client code, so the chunks the client sees may not match the server's original chunk boundaries.

Server chunks: "chunk1\n", "chunk2\n", "chunk3\n" 
Browser chunks: "chunk1\nchu",  "nk2\nchunk3\n"

The client reads the chunks, splits by "\n", and parses each piece as JSON.
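
A sketch of that read loop, carrying any incomplete trailing fragment over to the next read:

// Sketch: re-assemble newline-delimited JSON blocks from arbitrary browser chunk boundaries
async function readBlocks(response: Response, onBlock: (block: unknown) => void) {
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // A browser chunk can hold several blocks or end mid-block,
    // so keep the last (possibly incomplete) piece for the next iteration.
    const pieces = buffer.split('\n');
    buffer = pieces.pop() ?? '';
    for (const piece of pieces) {
      if (piece.trim()) onBlock(JSON.parse(piece));
    }
  }
}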

The client merges chunks in memory and periodically syncs them into the React state.

The client can't call setState for every incoming chunk: React gets overwhelmed and batches the updates, so the state stays stale and new chunks can't be merged into it correctly.
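
A sketch of that periodic sync: accumulate parsed blocks in a ref and flush them into React state on an interval (the 50 ms period and hook name are illustrative, not the project's actual values):

// Sketch: one setState per flush interval instead of one per incoming chunk
import { useEffect, useRef, useState } from 'react';

export function useBufferedBlocks(flushMs = 50) {
  const pending = useRef<unknown[]>([]);
  const [blocks, setBlocks] = useState<unknown[]>([]);

  useEffect(() => {
    const id = setInterval(() => {
      if (pending.current.length === 0) return;
      const batch = pending.current;
      pending.current = [];
      setBlocks((prev) => [...prev, ...batch]);
    }, flushMs);
    return () => clearInterval(id);
  }, [flushMs]);

  // Called from the stream read loop for every parsed block
  const push = (block: unknown) => pending.current.push(block);

  return { blocks, push };
}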

Development

Run the development server:

npm run dev

Add UI components:

# https://ui.shadcn.com/docs
npx shadcn@latest add button

The project was initialized with:

# initialize Next.js
npx create-next-app@latest

# https://ui.shadcn.com/docs/installation/next
npx shadcn@latest init

Deploy to GCP

Build image:

# Build image (GCP structure: project_id/artifact_registry_name/image_name)
docker build --platform=linux/amd64 --tag us-west1-docker.pkg.dev/ai-agent-playground-465901/ai-agent-playground/ai-agent-playground:latest .

# Run image locally
docker run --init \
--publish 3000:3000 \
--env ANTHROPIC_API_KEY=xxx \
us-west1-docker.pkg.dev/ai-agent-playground-465901/ai-agent-playground/ai-agent-playground:latest

Configure GCP:

  1. Enable GCP Artifact Registry
  2. Create the ai-agent-playground registry in the us-west1 region
  3. Assign roles: 'Artifact Registry Administrator', 'Cloud Run Developer'

Configure gcloud locally:

# Log in to the GCP account
gcloud auth login

# Configure Docker to authenticate with Artifact Registry
gcloud auth configure-docker us-west1-docker.pkg.dev

Deploy image to GCP Cloud Run:

# Upload image to GCP
docker push us-west1-docker.pkg.dev/ai-agent-playground-465901/ai-agent-playground/ai-agent-playground:latest

# Deploy image to GCP Cloud Run (scales to 0 instances when there are no requests)
gcloud run deploy ai-agent-playground \
--image=us-west1-docker.pkg.dev/ai-agent-playground-465901/ai-agent-playground/ai-agent-playground:latest \
--allow-unauthenticated \
--port=3000 \
--min-instances=0 \
--max-instances=1 \
--platform=managed \
--region=us-central1 \
--memory=512Mi \
--project=ai-agent-playground-465901 \
--set-env-vars "ANTHROPIC_API_KEY=XXX"
