@@ -14,8 +14,8 @@ in the database. We also store the original source document "as-is".
When making changes to the database structure, we also have a migration process, which takes care of upgrading the
database structures during an upgrade.

- However, in some cases, changing the database structure actually means to extract more information from documents and is
- currently stored in the database. Or information is extracted in a different way. This requires a re-processing of
+ However, in some cases, changing the database structure actually means extracting more information from documents than
+ is currently stored in the database, or extracting information in a different way. This requires a re-processing of
all documents affected by this change.

### Example
@@ -30,25 +30,51 @@ This ADR makes the following assumptions:
* All documents are stored in the storage
* It is expected that an upgrade is actually required
* Running such migrations is expected to take a long time
+ * The management of infrastructure (PostgreSQL) is out of scope for Trustify

Question: Do we want to support downgrades?

## Decision

- ### Option 1
+ During the migration of database structures (sea-orm), we also re-process all documents (if required). This would
+ run as part of the Helm chart's migration job and would have an impact on updates, as the rollout of newer-version
+ pods would be delayed until the data migration has finished.
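+
+ As a rough illustration, such a migration could combine the schema change and the data re-processing in a single
+ sea-orm-migration step. The `Sbom`/`NewField` identifiers and the `reprocess_all_documents` helper below are
+ placeholders, not existing Trustify code:
+
+ ```rust
+ use sea_orm_migration::prelude::*;
+
+ #[derive(DeriveMigrationName)]
+ pub struct Migration;
+
+ #[async_trait::async_trait]
+ impl MigrationTrait for Migration {
+     async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
+         // 1. Schema change: add the new column as nullable first.
+         manager
+             .alter_table(
+                 Table::alter()
+                     .table(Sbom::Table)
+                     .add_column(ColumnDef::new(Sbom::NewField).string().null())
+                     .to_owned(),
+             )
+             .await?;
+
+         // 2. Data migration: re-extract the missing information from every stored document.
+         reprocess_all_documents(manager).await?;
+
+         // 3. Every row now carries a value, so the column can become mandatory.
+         manager
+             .alter_table(
+                 Table::alter()
+                     .table(Sbom::Table)
+                     .modify_column(ColumnDef::new(Sbom::NewField).string().not_null())
+                     .to_owned(),
+             )
+             .await
+     }
+
+     async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
+         manager
+             .alter_table(Table::alter().table(Sbom::Table).drop_column(Sbom::NewField).to_owned())
+             .await
+     }
+ }
+
+ // Placeholder for the actual re-processing: stream the original documents from
+ // storage, run the new extraction over them and update the affected rows.
+ async fn reprocess_all_documents(_manager: &SchemaManager<'_>) -> Result<(), DbErr> {
+     Ok(())
+ }
+
+ #[derive(DeriveIden)]
+ enum Sbom {
+     Table,
+     NewField,
+ }
+ ```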

- During the migration of database structures (sea orm), we also re-process all documents (when required).
+ This would also require preventing users from creating new documents during that time. Otherwise, we would need to
+ re-process documents ingested while the migration is running. A way of doing this could be to leverage PostgreSQL's
+ ability to switch into read-only mode, having mutable operations fail with a 503 (Service Unavailable) error. This
+ would also allow for easy A/B (green/blue) database setups: switching the main one to read-only and having the other
+ one run the migration.
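+
+ For illustration: the switch itself could happen on the database side (for example
+ `ALTER DATABASE trustify SET default_transaction_read_only = on`, assuming the database is named `trustify`).
+ Writes then fail with PostgreSQL's SQLSTATE `25006` (`read_only_sql_transaction`), which the API layer could
+ translate into a 503 instead of a plain 500. A rough, framework-agnostic sketch:
+
+ ```rust
+ use http::StatusCode;
+ use sea_orm::DbErr;
+
+ /// Map a database error to the HTTP status the API should answer with.
+ /// Sketch only: how the SQLSTATE is surfaced depends on the driver; here we
+ /// simply look for PostgreSQL's read-only rejection (SQLSTATE 25006).
+ fn status_for_db_error(err: &DbErr) -> StatusCode {
+     let msg = err.to_string();
+     if msg.contains("25006") || msg.contains("read-only transaction") {
+         // The database is held read-only while the migration runs.
+         StatusCode::SERVICE_UNAVAILABLE
+     } else {
+         StatusCode::INTERNAL_SERVER_ERROR
+     }
+ }
+ ```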

- In order to report progress, we could write that state into a table and expose that information to the user via the UI .
+ We could provide an endpoint to the UI, reporting that the system is in read-only mode during a migration.
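+
+ One possible shape for such a status payload; the struct, its fields, and any endpoint path are hypothetical, not an
+ existing Trustify API:
+
+ ```rust
+ use serde::Serialize;
+
+ // Hypothetical response body for a "migration status" endpoint polled by the UI.
+ #[derive(Serialize)]
+ struct MigrationStatus {
+     /// True while the database is held in read-only mode.
+     read_only: bool,
+     /// True while schema/data migrations are still running.
+     migration_running: bool,
+     /// Optional progress indication: (documents re-processed, documents total).
+     progress: Option<(u64, u64)>,
+ }
+ ```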

- * 👎 Might serve inaccurate data for a while
- * 👎 Might block an upgrade if re-processing fails
* 👍 Can fully migrate database (create mandatory field as optional -> re-process -> make mandatory)
+ * 👍 Might allow for an out-of-band migration of data before running the upgrade (even on a staging env)
+ * 👍 Would allow continuing to serve data while the process is running
* 👎 Might be tricky to create a combined re-processing for multiple such migrations
+ * 👎 Might block an upgrade if re-processing fails
+
+ ### Approach 1
+
+ The "lazy" approach: the user just runs the migration (or the new version of the application with migrations
+ enabled). The process migrates schema and data. This might block the startup for a bit, but would be fast and
+ simple for small systems.
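+
+ A minimal sketch of that flow, assuming the migrator lives in a `migration` crate (the crate name and the connection
+ URL are placeholders):
+
+ ```rust
+ use sea_orm::Database;
+ use sea_orm_migration::MigratorTrait;
+
+ #[tokio::main]
+ async fn main() -> Result<(), Box<dyn std::error::Error>> {
+     let db = Database::connect("postgres://localhost/trustify").await?;
+
+     // Apply all pending schema *and* data migrations before serving requests;
+     // on a large instance this is exactly the part that blocks the startup.
+     migration::Migrator::up(&db, None).await?;
+
+     // ...start the HTTP server afterwards...
+     Ok(())
+ }
+ ```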
+
+ ### Approach 2
+
+ The user uses a green/blue deployment: the application is switched over to green while migrations run against blue.
+ Once the migrations are complete, the application is switched back to blue. Green will be read-only, and mutable API
+ calls will fail with a 503 error.
+
+ ## Open items
+
+ * [ ] How to handle unparsable or failing documents during migration?
+
+ ## Alternative approaches

### Option 2

- We create a similar module as for the importer. Running migrations after an upgrade. Accepting that in the meantime,
+ We create a module similar to the importer's, running migrations after an upgrade and accepting that, in the meantime,
we might serve inaccurate data.

* 👎 Might serve inaccurate data for a longer time
@@ -58,7 +84,7 @@ we might service inaccurate data.

### Option 3

- We change ingestion in a way to it is possible to just re-ingest every document. Meaning, we re-ingest from the
+ We change ingestion in a way that makes it possible to just re-ingest every document, meaning we re-ingest from the
original sources.

* 👎 Might serve inaccurate data for a longer time
@@ -68,14 +94,10 @@ original sources.
* 👎 Won't work for manual (API) uploads
* 👎 Would require removing optimizations for existing documents

- ## Open items
-
- …
-
- ## Alternative approaches
-
- …

## Consequences

- …
+ * The migration will block the upgrade process until it is finished
+ * Ansible and the operator will need to handle this as well
+ * The system will become read-only during a migration
+ * The UI needs to provide a page for monitoring the migration state. The backend needs to provide appropriate APIs.