Migrations are managed by Kysely and, on the CLI, by Kysely-CTL. Kysely-CTL keeps the same API as Knex, so where its docs fall short you can use the Knex migration docs, too. Migrations are triggered in 3 ways:
- Manually, by calling `yarn kysely migrate:latest`, `yarn kysely migrate:up`, or `yarn kysely migrate:down`
- After the app is built, by calling `yarn predeploy` (this is run in prod, where the /migrations dir does not exist)
- In dev, by calling `yarn dev`, which uses PM2 to call `yarn kysely migrate:latest`
To create a new migration, run `yarn kysely migrate:make NAME`.
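For reference, a migration file exports `up` and `down` functions that receive a Kysely instance. A minimal sketch (the `person` table and its columns are made up for illustration):

```ts
import { Kysely } from 'kysely';

export async function up(db: Kysely<any>): Promise<void> {
  // create whatever this migration introduces
  await db.schema
    .createTable('person')
    .addColumn('id', 'serial', (col) => col.primaryKey())
    .addColumn('name', 'text', (col) => col.notNull())
    .execute();
}

export async function down(db: Kysely<any>): Promise<void> {
  // undo the change so migrate:down can roll it back
  await db.schema.dropTable('person').execute();
}
```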
Rebasing migrations means deleting all the migrations in /migrations and starting fresh. It is time to rebase migrations when one of the following is true:
- a migration has a dependency that you want to remove from the project
- there are too many migrations and they're slowing down CI/CD
To rebase:
- create a new DB in Postgres & change it in the .env, e.g. `POSTGRES_DB='init1'`
- run `yarn kysely migrate:latest` to build it
- go to pgAdmin, right-click the database and click Backup
  - General tab
    - Filename: `init.sql`
    - Encoding: UTF8
  - Data Options
    - Sections: Pre-data, Data, Post-data
    - Do not save: Owner
  - Query Options
    - Use INSERT commands
    - On conflict do nothing to INSERT command
  - Table options
    - Exclude Patterns: Tables: `_*` (excludes `_migration`, `_migrationLock`)
- run `yarn kysely migrate:make init` to create a new initial migration
- copy the contents of the old `_init.ts` migration to the new file you just created & just replace the SQL. At the beginning of the file, update the old `migrationTableName` so we delete the old migration table
- delete all old migrations
- increment the table version number in `kyselyMigrations.ts` for `migrationTableName`, e.g. `_migrationV3` (see the sketch after this list)
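A minimal sketch of what the `migrationTableName` bump could look like; the surrounding shape of `kyselyMigrations.ts` (export names, migration folder path, use of `FileMigrationProvider`) is an assumption, not the project's actual file:

```ts
import { promises as fs } from 'node:fs';
import * as path from 'node:path';
import { FileMigrationProvider, Kysely, Migrator } from 'kysely';

// bump the version suffix on every rebase so Kysely starts a fresh
// migration history table and the new init migration can drop the old one
export const migrationTableName = '_migrationV3';

export function createMigrator(db: Kysely<any>) {
  return new Migrator({
    db,
    migrationTableName,                      // e.g. '_migrationV3'
    migrationLockTableName: '_migrationLock',
    provider: new FileMigrationProvider({
      fs,
      path,
      migrationFolder: path.join(__dirname, '../migrations'),
    }),
  });
}
```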
- Pick a directory on the local filesystem to store the keys. Store the absolute path in the env var `POSTGRES_SSL_DIR`
- Add `root.crt` (CA), `postgresql.key`, and `postgresql.crt` to that directory.
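A minimal sketch of wiring those files into a node-postgres connection; the `Pool` setup and reading the files at startup are assumptions, only `POSTGRES_SSL_DIR` and the three file names come from the steps above:

```ts
import { readFileSync } from 'node:fs';
import * as path from 'node:path';
import { Pool } from 'pg';

const sslDir = process.env.POSTGRES_SSL_DIR!;

export const pool = new Pool({
  // host/port/user/password/database come from the usual PG* env vars
  ssl: {
    ca: readFileSync(path.join(sslDir, 'root.crt'), 'utf8'),
    key: readFileSync(path.join(sslDir, 'postgresql.key'), 'utf8'),
    cert: readFileSync(path.join(sslDir, 'postgresql.crt'), 'utf8'),
  },
});
```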
`pgm.db.query` isn't a true async function: node-pg-migrate runs all migrations in parallel, queues up every query, and then executes those queries in serial.
This is generally fine if you only use `pgm`. However, if you create one migration using `pgm` and a second migration using `pg.Client`, the second migration will run before `pgm` finishes, causing it to fail. What's more, if you call `pgm` to create a table and `pg.Client` to access that table, the table won't exist (see the sketch below).
We learned this after about 10 migrations. As a band-aid, we use `pgm.noTransaction()` on migrations that caused errors. Moving forward, we will only use `pg.Client`.
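A minimal sketch of the failure mode, for illustration only; the `widgets` table and the `DATABASE_URL` connection string are assumptions:

```ts
import { Client } from 'pg';
import type { MigrationBuilder } from 'node-pg-migrate';

export async function up(pgm: MigrationBuilder): Promise<void> {
  // pgm only queues this statement; it executes after every migration
  // file's up() has returned, not at this point in the code.
  pgm.createTable('widgets', {
    id: 'id',
    name: { type: 'text', notNull: true },
  });

  // pg.Client runs immediately, so 'widgets' does not exist yet here.
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    // fails with: relation "widgets" does not exist
    await client.query(`INSERT INTO widgets (name) VALUES ('example')`);
  } finally {
    await client.end();
  }
}
```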
The parameter count is capped at 16 bits, so if you're doing a bulk insert, you'll need to break it up. In other words, if (# rows × columns per row) > 65,535, you need to do it in batches.
`pg-protocol` shows this here: https://github.com/brianc/node-postgres/blob/master/packages/pg-protocol/src/serializer.ts#L155
Issue here: brianc/node-postgres#581
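A minimal sketch of batching under that cap with Kysely; the helper name, table argument, and row shape are assumptions:

```ts
import { Kysely } from 'kysely';

// Postgres' extended-query protocol uses an Int16 for the parameter count
const MAX_PARAMS = 65_535;

export async function bulkInsert(
  db: Kysely<any>,
  table: string,
  rows: Record<string, unknown>[],
): Promise<void> {
  if (rows.length === 0) return;
  // each row contributes one bind parameter per column
  const columnsPerRow = Object.keys(rows[0]).length;
  const rowsPerBatch = Math.floor(MAX_PARAMS / columnsPerRow);
  for (let i = 0; i < rows.length; i += rowsPerBatch) {
    await db
      .insertInto(table as any)
      .values(rows.slice(i, i + rowsPerBatch) as any)
      .execute();
  }
}
```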