Purpose
FastAPI backend to support:
- AI/LLM integration (architecture TBD - may be separate service)
- Galaxy tool execution
- NCBI cross-linking (see Link from NCBI to BRC Analytics #783)
- SRA metadata database integration
Initial Directory Structure
/backend/
├── app/
│ ├── api/v1/ # Feature routers (health, version, cache, etc.)
│ ├── core/ # Config, cache, dependencies, auth
│ └── main.py
├── Dockerfile
├── docker-compose.yml
├── nginx.conf # Reverse proxy for backend
└── pyproject.toml
**Rationale:** Start simple with a single service. We can refactor to a /backend/api-service/ structure later if we add more services.
Prod Deployment Architecture
Frontend: Static Next.js + CloudFront (existing, unchanged)
Backend: FastAPI + nginx
Tech Stack
FastAPI + uvicorn (async Python)
nginx - Backend reverse proxy
Redis - Caching (job status, API responses)
Docker + Docker Compose
Ruff - Linting
Key Features
Async operations throughout
Shared auth/caching across features
Single URL local dev
E2E test support
Version syncing
API Endpoints (Initial)
/health - Service monitoring
/version - API metadata
/cache/* - Cache management
Open Questions
AI service architecture: Router in this service vs. separate service?
- Depends on: desire for portability, scaling needs, deployment complexity
- Decision deferred until requirements are clearer
Next.js server requirement: Do we need a Next.js server for faster builds, image optimization, and NextAuth (Google login)?
- Consider: Are there alternative solutions?
- Trade-offs: Static site simplicity vs. server runtime complexity/cost
Success Criteria
FastAPI + nginx + Redis running in Docker
Health/version endpoints functional
E2E tests passing
Local dev environment working