Releases · Altinity/clickhouse-backup
v0.5.1
BUG FIXES
- Fixed an issue where some tables might not be restored correctly (thanks @yuzhichang)
IMPROVEMENTS
- Added the S3_DISABLE_CERT_VERIFICATION option (thanks @martenlindblad); see the example after this list
- Removed the default value for the GCS_CREDENTIALS_FILE option, which led to an issue when running on GCE (thanks @nikon72ru)
- View tables are now restored last
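A minimal sketch of passing the two options above as environment variables, assuming an S3 endpoint with a self-signed certificate and a GCS key file at an illustrative path; only the variable names come from the notes above:
    # Skip TLS certificate verification for a self-signed S3 endpoint
    # (backup name is illustrative):
    S3_DISABLE_CERT_VERIFICATION=true clickhouse-backup upload my_backup
    # With the GCS default removed, point to a credentials file explicitly
    # when needed (path is illustrative):
    GCS_CREDENTIALS_FILE=/etc/clickhouse-backup/gcs-key.json clickhouse-backup upload my_backup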
v0.5.0
IMPROVEMENTS
- Added support for Google Cloud Storage (thanks @przemekd)
- Tables are now processed in an orderly manner (thanks @yuzhichang)
- The config file location may now be defined via $CLICKHOUSE_BACKUP_CONFIG
BREAKING CHANGES
- The 'backups_to_keep_s3' config option was renamed to 'backups_to_keep_remote'
- The 'disable_progress_bar', 'backups_to_keep_local' and 'backups_to_keep_remote' settings were moved to the 'general' section of the config file
- The 'restore-schema' and 'restore-data' commands were merged into one command. You can still restore schema and data separately using the '--schema' and '--data' flags (thanks @dcastanier); see the sketch after this list
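A hedged sketch of these changes in practice; the config path, retention values, and the assumption that the merged command is named 'restore' are illustrative, while the section, key, flag, and variable names come from the notes above:
    # In the config file, the renamed/relocated settings now sit under 'general':
    #   general:
    #     disable_progress_bar: true
    #     backups_to_keep_local: 7
    #     backups_to_keep_remote: 31
    # Point the tool at that file via the new environment variable:
    CLICKHOUSE_BACKUP_CONFIG=/etc/clickhouse-backup/config.yml clickhouse-backup create
    # Restore schema and data separately with the merged command's flags
    # (command name 'restore' is an assumption):
    clickhouse-backup restore --schema <backup_name>
    clickhouse-backup restore --data <backup_name>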
v0.4.2
IMPROVEMENTS
- Support for ClickHouse v19.15 was added
- AWS Server-Side Encryption support was added (thanks @pelletier)
- The 'backup' folder is now created automatically on download (thanks @dcastanier)
v0.4.1
IMPROVEMENTS
- Restrictions defined by the '--tables' flag or CLICKHOUSE_SKIP_TABLES are now also applied to metadata (#43); see the example after the comparison table below
- The S3 part was rewritten using non-blocking pipes, so reading from disk, compression and uploading now run simultaneously
- The mholt/archiver library was updated to a version with multi-threaded 'gzip' support, and 'gzip' is now the default compression format (#14)
Comparison of uploading 31GiB of data, v0.4.0 vs v0.4.1:
+--------+----------------+----------------+
| Format | v0.4.0         | v0.4.1         |
+--------+----------------+----------------+
| tar    | Time: 8m14s    | Time: 7m50s    |
|        | Size: 30.8GB   | Size: 30.8GB   |
+--------+----------------+----------------+
| lz4    | Time: 22m50s   | Time: 15m15s   |
|        | Size: 24GB     | Size: 24.5GB   |
+--------+----------------+----------------+
| gzip   | Time: 13m21s   | Time: 6m54s    |
|        | Size: 21GB     | Size: 24.2GB   |
+--------+----------------+----------------+
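As a hedged illustration of the '--tables' / CLICKHOUSE_SKIP_TABLES restriction above, a backup scoped to a single table; the table pattern and backup names are illustrative, only the flag and variable names come from the notes:
    # Only 'db.events' is backed up; the same filter now also limits
    # which metadata is processed:
    clickhouse-backup create --tables=db.events scoped_backup
    # Or exclude tables via the environment variable instead:
    CLICKHOUSE_SKIP_TABLES=db.events clickhouse-backup create full_backup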
v0.4.0
IMPROVEMENTS
- Upload and download were rewritten using the official AWS library, which makes it possible to support more S3 cloud providers. In particular, Yandex.Cloud now works correctly (#30)
- Support for tables whose names contain special and national characters was added (#28)
- The default part size has been increased to 100MiB
- The app now retries uploads and downloads up to 30 times on network errors (#36)
DEPRECATIONS
- The 'tree' strategy and the 'dry-run' flag, which were marked as deprecated in v0.3.2, have been removed
BUG FIXES
- Fixed an error when creating backups of tables other than MergeTree (#44)
v0.3.7
IMPROVEMENTS
- A check that the table has been created was added before 'restore-data'
- Support for a comma-separated list in the '--tables' flag was added
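For example, a backup limited to two tables might be requested like this; the database and table names are illustrative, the comma-separated form is the feature described above:
    clickhouse-backup create --tables=db1.table1,db2.table2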
v0.3.6
IMPROVEMENTS
- Support for the newest versions of ClickHouse (above 19.4.4.33) was added
v0.3.5
IMPROVEMENTS
- The 'list' command can now print the latest and penultimate backup names, so its output can be used as parameters for other commands.
  For example, create a new backup and upload the diff:
    clickhouse-backup create
    clickhouse-backup upload --diff-from=$(clickhouse-backup list local penult) $(clickhouse-backup list local latest)
- The 'delete' command has been added. Now you can remove specific backups from local storage and S3:
    clickhouse-backup delete local <backup_name>
    clickhouse-backup delete s3 $(clickhouse-backup list s3 penult)
v0.3.4
Added AWS IAM support
v0.3.3
BUG FIXES
- Fixed a bug that caused the newest backups to be deleted instead of the oldest ones