
API proposal: Database configuration & transaction management; ORM later #181

@vim89

Description

Labels: api-proposal, component:database, component:orm, priority:critical, status:discussion


Important Context

Requires Scala 3.7+: This proposal uses scalasql's SimpleTable syntax for cleaner ergonomics. Cask-Framework is opinionated: we require newer Scala versions for a better developer experience.

Uses scalasql directly: scalasql explicitly states "Non-Goals: ORM/ActiveRecord". We respect this. This proposal focuses on auto-configuring scalasql and eliminating database setup boilerplate. Users write scalasql queries directly in routes.

Scope: This proposal is limited to database configuration and transaction management. ORM layer is out of scope for initial phase but may be reconsidered in future as Cask-Framework matures.


Problem

Building web apps with Cask + ScalaSql requires 60+ lines of boilerplate before your first endpoint:

// Manual database setup (15 lines)
val sqliteDataSource = new org.sqlite.SQLiteDataSource()
sqliteDataSource.setUrl(s"jdbc:sqlite:$tmpDb/file.db")
lazy val sqliteClient = new scalasql.DbClient.DataSource(...)

// Custom transaction decorator (11 lines - every app needs this!)
class transactional extends cask.RawDecorator {
  def wrapFunction(pctx: cask.Request, delegate: Delegate) = {
    sqliteClient.transaction { txn =>
      val res = delegate(pctx, Map("txn" -> txn))
      if (res.isInstanceOf[cask.router.Result.Error]) txn.rollback()
      res
    }
  }
}

// Explicit Txn parameter in every method
@transactional
@cask.get("/todos")
def list()(txn: Txn) = {
  txn.run(Todo.select)  // Manual transaction handling
}

This is roughly 10x more code than comparable frameworks require for basic CRUD, and it is a large part of why Scala web development feels hard.


Proposed solution

Auto-configure scalasql's DataSource/DbClient from application.conf and provide transaction management. Users write scalasql queries directly.

What we're building:

  • Auto-configured DbClient from configuration file
  • Transaction decorator (@cask.transactional)
  • Implicit database context (using ctx: DbClient.Txn)
  • Convention helpers (optional table naming)

Out of scope for initial phase:

  • ORM layer (may revisit after framework matures)
  • Active Record / Model abstraction
  • CRUD helper methods
  • Relationship management

API example

Before: Manual database setup (15 lines) + custom transaction decorator (11 lines) + explicit Txn parameters

After:

// application.conf (auto-loaded)
database {
  driver = "org.sqlite.JDBC"
  url = "jdbc:sqlite:./myapp.db"
  pool.maximum-size = 10
}

// Define table using SimpleTable (Scala 3.7+)
case class Todo(id: Int, checked: Boolean, text: String)
object Todo extends SimpleTable[Todo]

// Use in routes - database auto-configured
@cask.transactional
@cask.get("/todos/:id")
def show(id: Int)(using ctx: DbClient.Txn) = {
  ctx.run(Todo.select.filter(_.id === id)).headOption
    .getOrElse(cask.Abort(404))
}

@cask.transactional
@cask.get("/todos")
def list()(using ctx: DbClient.Txn) = {
  ctx.run(Todo.select.filter(_.checked === false))
}

@cask.transactional
@cask.post("/todos")
def create(text: String)(using ctx: DbClient.Txn) = {
  val newTodo = Todo(id = 0, checked = false, text = text)
  ctx.run(Todo.insert.values(newTodo).returning(_.id).single)
}

What we get:

  • No database setup code (auto-configured from application.conf)
  • No custom transaction decorator (built-in @cask.transactional; sketched after this list)
  • Clean SimpleTable syntax (no HKT T[_])
  • Direct ScalaSql usage (full query DSL available)
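
A rough sketch of how the built-in decorator could work internally, reusing the same cask.RawDecorator mechanism shown in the Problem section. Database.client is a hypothetical hook for the auto-configured DbClient, not an existing API, and whether the transaction reaches the endpoint as a named parameter or a contextual (using) parameter is still a design detail:

// Hypothetical framework-side implementation of @cask.transactional
// (shipped under the cask namespace); Database.client stands in for
// whatever hook exposes the auto-configured DbClient.
class transactional extends cask.RawDecorator {
  def wrapFunction(pctx: cask.Request, delegate: Delegate) = {
    Database.client.transaction { txn =>
      // Run the endpoint inside the transaction; roll back on routing errors
      val res = delegate(pctx, Map("txn" -> txn))
      if (res.isInstanceOf[cask.router.Result.Error]) txn.rollback()
      res
    }
  }
}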

Design architecture

flowchart TB
    CONFIG["application.conf<br/>(Database config)"]
    SETUP["Cask-Framework<br/>(Auto-configuration)"]
    ROUTES["User Routes<br/>(Direct ScalaSql queries)"]
    SCALASQL["ScalaSql<br/>(Type-safe SQL DSL)"]
    DB["Database"]

    CONFIG -->|"Load config"| SETUP
    SETUP -->|"Provides DbClient"| ROUTES
    ROUTES -->|"ctx.run(Todo.select)"| SCALASQL
    SCALASQL -->|"SQL Query"| DB
    DB -->|"Result Set"| SCALASQL
    SCALASQL -->|"Case class"| ROUTES

    style CONFIG fill:#f0e1ff,stroke:#8800cc,color:#000
    style SETUP fill:#e1ffe1,stroke:#2d7a2d,color:#000
    style ROUTES fill:#e1f5ff,stroke:#0066cc,color:#000
    style SCALASQL fill:#fff4e1,stroke:#cc8800,color:#000
    style DB fill:#f0e1ff,stroke:#8800cc,color:#000

Design: Configuration layer auto-configures ScalaSql, routes use ScalaSql directly. No ORM abstraction.
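
To make the configuration layer concrete, here is a minimal sketch of what the auto-configuration could look like, assuming HOCON via Lightbend Config and HikariCP for pooling (both still open questions below). The Database object and its client field are illustrative names, not settled API; the DbClient constructor follows scalasql's documented DataSource setup:

import com.typesafe.config.ConfigFactory
import com.zaxxer.hikari.{HikariConfig, HikariDataSource}
import scalasql.{Config, DbClient}

// Hypothetical framework object that reads the `database` block shown above
// and exposes a single shared DbClient for @cask.transactional to use.
object Database {
  private val conf = ConfigFactory.load().getConfig("database")

  private lazy val dataSource = {
    val hc = new HikariConfig()
    hc.setDriverClassName(conf.getString("driver"))
    hc.setJdbcUrl(conf.getString("url"))
    hc.setMaximumPoolSize(conf.getInt("pool.maximum-size"))
    new HikariDataSource(hc)
  }

  // Shared client; scalasql Config hooks (e.g. name mapping) plug in here
  lazy val client: DbClient = new DbClient.DataSource(dataSource, config = new Config {})
}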


Decision: Database configuration & transaction management

Context

Organizational priorities:

  • Cask-Framework must reduce boilerplate for web applications
  • Must integrate with Li Haoyi ecosystem (non-negotiable constraint)
  • Initial phase focus: essential infrastructure, avoid scope creep
  • Be opinionated: demand Scala 3.7+ for better DX

Current situation:

  • Cask + scalasql requires 60+ lines of setup boilerplate per application
  • Every app repeats same DataSource/DbClient/transaction configuration
  • scalasql provides excellent query DSL but no setup conventions
  • Building an ORM would be a huge undertaking (design, implement, document, maintain)

Decision

We will auto-configure scalasql and provide transaction management. Users write scalasql queries directly.

This means:

  • Auto-configured DbClient from application.conf
  • @cask.transactional decorator for automatic transaction wrapping
  • Implicit context (using ctx: DbClient.Txn)
  • Scala 3.7+ requirement for SimpleTable syntax
  • Direct scalasql usage (no ORM abstraction yet)

Why auto-configuration for initial phase:

  • Solves the real pain (database setup boilerplate)
  • Respects scalasql's "Non-Goal: ORM" design
  • Avoids huge ORM implementation/maintenance burden for initial release
  • Users already know scalasql (no new API to learn)
  • Keeps codebase focused on Cask-specific features
  • ORM layer would be convenient but can be reconsidered after framework matures

Why SimpleTable (Scala 3.7+):

  • Cleaner syntax (no HKT T[_]; see the comparison after this list)
  • Better developer experience
  • Opinionated framework approach (demand newer versions)
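
For comparison, a sketch of the two table encodings as documented by scalasql (the two definitions are alternatives, not meant to coexist; exact import paths may vary by scalasql version):

// Classic scalasql encoding: every column is wrapped in a higher-kinded T[_]
case class Todo[T[_]](id: T[Int], checked: T[Boolean], text: T[String])
object Todo extends scalasql.Table[Todo]

// SimpleTable encoding (scalasql on Scala 3.7+): plain case class, no T[_]
case class Todo(id: Int, checked: Boolean, text: String)
object Todo extends scalasql.simple.SimpleTable[Todo]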

Consequences

Positive:

  • Dramatically reduces boilerplate (60+ lines → ~10 lines)
  • No new persistence API to learn (just use ScalaSql)
  • Respects ecosystem design (ScalaSql as-is)
  • Minimal maintenance burden
  • Users get full ScalaSql query power

Negative:

  • No ORM conveniences (findById, etc.)
  • Users must write scalasql queries for every operation
  • More verbose than Active Record pattern (but cleaner than manual setup)

Mitigations:

  • scalasql query DSL is already expressive and type-safe
  • Documentation will provide common query patterns as examples
  • Optional convention helpers (table naming) can reduce boilerplate (see the sketch after this list)
  • ORM layer may be reconsidered after Cask-Framework core is mature
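
As a rough illustration of the convention helpers mentioned above, table naming could build on scalasql's Config-level name mapping rather than any new abstraction. The trait name and pluralization rule here are purely illustrative, and the exact Config hook should be checked against the scalasql version in use:

import scalasql.Config

// Illustrative convention: lowercase and pluralize class names (Todo -> "todos").
// Assumes scalasql's Config exposes a table-name mapping hook; the framework
// could mix this into the Config it builds during auto-configuration.
trait PluralTableNames extends Config {
  override def tableNameMapper(tableName: String): String =
    tableName.toLowerCase + "s"
}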

Follow-up decisions needed:

  • Configuration format details (HOCON vs YAML)
  • Transaction rollback strategy (explicit vs automatic)
  • Connection pool configuration (HikariCP settings)

Initial phase scope

What's included:

  • Auto-configured DbClient from application.conf
  • @cask.transactional decorator
  • Implicit transaction context (using ctx: DbClient.Txn)
  • Scala 3.7+ requirement (SimpleTable syntax)
  • Configuration documentation

Out of scope for initial phase:

  • ORM layer (may revisit after framework matures)
  • CRUD helper methods
  • Migrations
  • Relationships
  • Validation helpers

Open questions

  1. Configuration format? HOCON (Lightbend Config) vs YAML vs custom Scala DSL?

  2. Transaction rollback strategy? Should errors automatically rollback, or require explicit txn.rollback()?

  3. Connection pool? Use HikariCP by default, or make it configurable?

  4. Convention helpers? Should we provide optional traits for table naming conventions (Todo → "todos")?


Request for feedback

We need input on:

  1. Does auto-configuration solve the real pain point?
  2. Is Scala 3.7+ requirement acceptable for opinionated framework?
  3. Is direct ScalaSql usage (no ORM) the right choice for the initial phase?
  4. What configuration format feels most natural?
  5. What critical setup features are missing?

This is the foundation of Cask-Framework database integration.

