
Commit 2844b1d: remove /en

1 parent 542809e

28 files changed, +60 -50 lines

.github/dependabot.yml

+2-2
@@ -6,13 +6,13 @@ updates:
     schedule:
       interval: "monthly"
   - package-ecosystem: "npm"
-    directory: "/docs/en/integrations"
+    directory: "/docs/integrations"
     schedule:
       interval: "monthly"
     # Disable version updates for dependencies in the code snippets
     open-pull-requests-limit: 0
   - package-ecosystem: "pip"
-    directory: "/docs/en/integrations"
+    directory: "/docs/integrations"
     schedule:
       interval: "monthly"
     # Disable version updates for dependencies in the code snippets

.github/pull_request_template.md

+1-1
@@ -4,4 +4,4 @@
 ## Checklist
 - [ ] Delete items not relevant to your PR
 - [ ] URL changes should add a redirect to the old URL via https://github.com/ClickHouse/clickhouse-docs/blob/main/docusaurus.config.js
-- [ ] If adding a new integration page, also add an entry to the integrations list here: https://github.com/ClickHouse/clickhouse-docs/blob/main/docs/en/integrations/index.mdx
+- [ ] If adding a new integration page, also add an entry to the integrations list here: https://github.com/ClickHouse/clickhouse-docs/blob/main/docs/integrations/index.mdx

images/knowledgebase/connection_timeout_remote_remoteSecure.md

+1-1
@@ -5,7 +5,7 @@ description: "When using the `remote` or `remoteSecure` table functions on a nod
 # Code: 279. DB::NetException: All connection tries failed.

 **Problem**
-[`remote()` or `remoteSecure()`](https://clickhouse.com/docs/en/sql-reference/table-functions/remote/) table function allows the access of remote table from another ClickHouse node.
+[`remote()` or `remoteSecure()`](https://clickhouse.com/docs/sql-reference/table-functions/remote/) table function allows the access of remote table from another ClickHouse node.

 When using these functions on a node that is located more than 100ms (latency wise) away from the remote node, it is common to encounter the following timeout error.
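For reference, the pattern this page documents, sketched with a placeholder host and credentials; the two settings below are the usual knobs for links slower than the defaults assume:

```sql
-- Raise the failover connect timeouts (defaults assume <100ms latency).
SELECT count()
FROM remoteSecure('remote-host:9440', 'default.my_table', 'my_user', '<password>')
SETTINGS connect_timeout_with_failover_ms = 1000,
         connect_timeout_with_failover_secure_ms = 1000;
```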

images/knowledgebase/delete-old-data.md

+3-3
@@ -20,7 +20,7 @@ TTL can also be used to move data not only to [/dev/null](https://en.wikipedia.o
 More details on [configuring TTL](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-ttl).

 ## DELETE FROM
-[DELETE FROM](/docs/en/sql-reference/statements/delete.md) allows standard DELETE queries to be run in ClickHouse. The rows targeted in the filter clause are marked as deleted, and removed from future result sets. Cleanup of the rows happens asynchronously.
+[DELETE FROM](/docs/sql-reference/statements/delete.md) allows standard DELETE queries to be run in ClickHouse. The rows targeted in the filter clause are marked as deleted, and removed from future result sets. Cleanup of the rows happens asynchronously.

 :::note
 DELETE FROM is an experimental feature and must be enabled with:
@@ -31,7 +31,7 @@ SET allow_experimental_lightweight_delete = true;

 ## ALTER DELETE {#alter-delete}

-ALTER DELETE removes rows using asynchronous batch operations. Unlike DELETE FROM, queries run after the ALTER DELETE and before the batch operations complete will include the rows targeted for deletion. For more details see the [ALTER DELETE](/docs/en/sql-reference/statements/alter/delete.md) docs.
+ALTER DELETE removes rows using asynchronous batch operations. Unlike DELETE FROM, queries run after the ALTER DELETE and before the batch operations complete will include the rows targeted for deletion. For more details see the [ALTER DELETE](/docs/sql-reference/statements/alter/delete.md) docs.

 `ALTER DELETE` can be issued to flexibly remove old data. If you need to do it regularly, the main downside will be the need to have an external system to submit the query. There are also some performance considerations since mutations rewrite complete parts even there is only a single row to be deleted.

@@ -49,4 +49,4 @@ More details on [manipulating partitions](../../sql-reference/statements/alter/p

 It’s rather radical to drop all data from a table, but in some cases it might be exactly what you need.

-More details on [table truncation](/docs/en/sql-reference/statements/truncate.md).
+More details on [table truncation](/docs/sql-reference/statements/truncate.md).
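For reference, the two deletion styles this page contrasts, sketched against a hypothetical `hits` table:

```sql
-- Lightweight delete: rows are masked immediately, cleaned up asynchronously.
SET allow_experimental_lightweight_delete = true;
DELETE FROM hits WHERE EventDate < '2022-01-01';

-- Mutation: rewrites affected parts in batch; rows stay visible until it completes.
ALTER TABLE hits DELETE WHERE EventDate < '2022-01-01';
```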
@@ -1,17 +1,17 @@
 # Execute SYSTEM statements on all nodes in ClickHouse Cloud

-In order to execute the same [query](url) on all nodes of a ClickHouse cloud service, we can use [clusterAllReplicas](https://clickhouse.com/docs/en/sql-reference/table-functions/cluster/).
+In order to execute the same [query](url) on all nodes of a ClickHouse cloud service, we can use [clusterAllReplicas](https://clickhouse.com/docs/sql-reference/table-functions/cluster/).
 E.g. in order to get entries from a (node-local) system table from all nodes, you can use:
 ```
 SELECT ... FROM clusterAllReplicas(default, system.TABLE) ...;
 ```

-Similarly, you can execute the same [SYSTEM statement](https://clickhouse.com/docs/en/sql-reference/statements/system/) on all nodes with a single statement, by using the [ON CLUSTER](https://clickhouse.com/docs/en/sql-reference/distributed-ddl/) clause:
+Similarly, you can execute the same [SYSTEM statement](https://clickhouse.com/docs/sql-reference/statements/system/) on all nodes with a single statement, by using the [ON CLUSTER](https://clickhouse.com/docs/sql-reference/distributed-ddl/) clause:
 ```
 SYSTEM ... ON CLUSTER default;
 ```

-For example for [dropping the filesystem cache](https://clickhouse.com/docs/en/sql-reference/statements/system/#drop-filesystem-cache) from all nodes, you can use:
+For example for [dropping the filesystem cache](https://clickhouse.com/docs/sql-reference/statements/system/#drop-filesystem-cache) from all nodes, you can use:
 ```
 SYSTEM DROP FILESYSTEM CACHE ON CLUSTER default;
 ```
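For reference, a concrete instance of the `clusterAllReplicas` pattern shown above, reading a node-local system table from every replica:

```sql
SELECT hostName() AS node, query_id, elapsed
FROM clusterAllReplicas(default, system.processes)
ORDER BY node;
```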

images/knowledgebase/file-export.md

+1-1
@@ -7,7 +7,7 @@ description: "Add an INTO OUTFILE clause to your query."

 ## Using INTO OUTFILE Clause {#using-into-outfile-clause}

-Add an [INTO OUTFILE](/docs/en/sql-reference/statements/select/into-outfile.md) clause to your query.
+Add an [INTO OUTFILE](/docs/sql-reference/statements/select/into-outfile.md) clause to your query.

 For example:

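For reference, a minimal example of the clause this page documents (the file name is a placeholder):

```sql
SELECT *
FROM system.numbers
LIMIT 100
INTO OUTFILE 'numbers.csv'
FORMAT CSVWithNames;
```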
images/knowledgebase/improve-map-performance.md

+2-2
@@ -1,13 +1,13 @@
 ---
 sidebar_position: 1
-description: "Map lookups such as `a['key']' works with linear complexity (mentioned [here](https://clickhouse.com/docs/en/sql-reference/data-types/map)) and can be inefficient."
+description: "Map lookups such as `a['key']' works with linear complexity (mentioned [here](https://clickhouse.com/docs/sql-reference/data-types/map)) and can be inefficient."
 ---

 # Improving Map performance

 **Problem**

-Map lookup such as `a['key']` works with linear complexity (mentioned [here](https://clickhouse.com/docs/en/sql-reference/data-types/map)) and can be inefficient. This is because selecting a value with a specific key from a table would require iterating through all keys (~M) across all rows (N) in the Map column, resulting in ~MxN lookups.
+Map lookup such as `a['key']` works with linear complexity (mentioned [here](https://clickhouse.com/docs/sql-reference/data-types/map)) and can be inefficient. This is because selecting a value with a specific key from a table would require iterating through all keys (~M) across all rows (N) in the Map column, resulting in ~MxN lookups.

 A lookup using Map can be 10x slower than a String column. The experiment below also shows ~10x slowdown for cold query, and difference in multiple magnitudes of data processed (7.21 MB vs 5.65 GB).
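For reference, one common mitigation for the problem this page describes, sketched with hypothetical `events`/`attributes` names: materialize a hot key into its own column so lookups stop scanning every Map key per row:

```sql
ALTER TABLE events
    ADD COLUMN user_agent String
    MATERIALIZED attributes['user_agent'];

-- Backfill the new column in existing parts:
ALTER TABLE events MATERIALIZE COLUMN user_agent;
```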
images/knowledgebase/ingest-parquet-files-in-s3.md

+4-4
@@ -10,7 +10,7 @@ normal login users usually don't work since they may have been configured with a
 The following is a very simple example that you can use to test the mechanics of accessing your parquet files successfully prior to applying to your actual data.

 If you need an example of creating a user and bucket, you can follow the first two sections (create user and create bucket):
-https://clickhouse.com/docs/en/guides/sre/configuring-s3-for-clickhouse-use/
+https://clickhouse.com/docs/guides/sre/configuring-s3-for-clickhouse-use/

 I used this sample file: https://github.com/Teradata/kylo/tree/master/samples/sample-data/parquet
 and uploaded it to my test bucket
@@ -41,7 +41,7 @@ You can set the policy something like this on the bucket:
 ```

 You can run queries with this type of syntax using the S3 table engine:
-https://clickhouse.com/docs/en/sql-reference/table-functions/s3/
+https://clickhouse.com/docs/sql-reference/table-functions/s3/

 ```
 clickhouse-cloud :) select count(*) from s3('https://mars-doc-test.s3.amazonaws.com/s3-parquet-test/userdata1.parquet','ABC123', 'abc+123', 'Parquet', 'first_name String');
@@ -59,7 +59,7 @@ Query id: fd4f1193-d604-4ac0-9a46-bdd2d5e14727
 ```

 The data types reference for parquet format are here:
-https://clickhouse.com/docs/en/interfaces/formats/#data-format-parquet
+https://clickhouse.com/docs/interfaces/formats/#data-format-parquet

 To bring in the data into a native ClickHouse table:

@@ -128,7 +128,7 @@ When you are ready to import your real data, you can use some special syntax lik
 I'd recommend to filter a few directories and files to test the import, maybe a certain year, a couple months and some date range to test first.

 besides the path options here, newly released is syntax `**` which specifies all subdirectories recursively.
-https://clickhouse.com/docs/en/sql-reference/table-functions/s3/
+https://clickhouse.com/docs/sql-reference/table-functions/s3/

 For example, assuming the paths and bucket structure is something like this:
 `https://your_s3_bucket.s3.amazonaws.com/<your_folder>/<year>/<month>/<day>/<filename>.parquet`
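For reference, a recursive-glob read matching the bucket layout sketched above (credentials and paths are placeholders):

```sql
-- '**' recurses through the <year>/<month>/<day> subdirectories.
SELECT count()
FROM s3('https://your_s3_bucket.s3.amazonaws.com/your_folder/**/*.parquet',
        '<key-id>', '<key-secret>', 'Parquet');
```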

images/knowledgebase/production.md

+1-1
@@ -60,5 +60,5 @@ Here is some guidance on how to choose between them:
 Many teams who initially think that `lts` is the way to go often switch to `stable` anyway because of some recent feature that’s important for their product.

 :::warning
-One more thing to keep in mind when upgrading ClickHouse: we’re always keeping an eye on compatibility across releases, but sometimes it’s not reasonable to keep and some minor details might change. So make sure you check the [changelog](/docs/en/whats-new/changelog/index.md) before upgrading to see if there are any notes about backward-incompatible changes.
+One more thing to keep in mind when upgrading ClickHouse: we’re always keeping an eye on compatibility across releases, but sometimes it’s not reasonable to keep and some minor details might change. So make sure you check the [changelog](/docs/whats-new/changelog/index.md) before upgrading to see if there are any notes about backward-incompatible changes.
 :::

images/knowledgebase/time-series.md

+1-1
@@ -11,7 +11,7 @@ ClickHouse is a generic data storage solution for [OLAP](../../faq/general/olap.

 First of all, there are **[specialized codecs](../../sql-reference/statements/create/table#specialized-codecs)** which make typical time-series. Either common algorithms like `DoubleDelta` and `Gorilla` or specific to ClickHouse like `T64`.

-Second, time-series queries often hit only recent data, like one day or one week old. It makes sense to use servers that have both fast NVMe/SSD drives and high-capacity HDD drives. ClickHouse [TTL](/docs/en/engines/table-engines/mergetree-family/mergetree.md/##table_engine-mergetree-multiple-volumes) feature allows to configure keeping fresh hot data on fast drives and gradually move it to slower drives as it ages. Rollup or removal of even older data is also possible if your requirements demand it.
+Second, time-series queries often hit only recent data, like one day or one week old. It makes sense to use servers that have both fast NVMe/SSD drives and high-capacity HDD drives. ClickHouse [TTL](/docs/engines/table-engines/mergetree-family/mergetree.md/##table_engine-mergetree-multiple-volumes) feature allows to configure keeping fresh hot data on fast drives and gradually move it to slower drives as it ages. Rollup or removal of even older data is also possible if your requirements demand it.

 Even though it’s against ClickHouse philosophy of storing and processing raw data, you can use [materialized views](../../sql-reference/statements/create/view.md) to fit into even tighter latency or costs requirements.
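For reference, the codec-plus-tiering pattern this page describes, as a sketch; the `tiered` storage policy and `cold` volume are assumed names that must exist in the server's storage configuration:

```sql
CREATE TABLE metrics
(
    ts    DateTime CODEC(DoubleDelta, LZ4),
    value Float64  CODEC(Gorilla)
)
ENGINE = MergeTree
ORDER BY ts
-- After a week, parts migrate from the fast default volume to 'cold':
TTL ts + INTERVAL 1 WEEK TO VOLUME 'cold'
SETTINGS storage_policy = 'tiered';
```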

knowledgebase/calculate_ratio_of_zero_sparse_serialization.mdx

+1-1
@@ -11,7 +11,7 @@ keywords: ['Empty/Zero Ratio', 'Calculate']

 ## How to calculate the ratio of empty/zero values in every column in a table

-If a column is sparse (empty or contains mostly zeros), ClickHouse can encode it in a sparse format and automatically optimize calculations - the data does not require full decompression during queries. In fact, if you know how sparse a column is, you can define its ratio using the [`ratio_of_defaults_for_sparse_serialization` setting](https://clickhouse.com/docs/en/operations/settings/merge-tree-settings#ratio_of_defaults_for_sparse_serialization) to optimize serialization.
+If a column is sparse (empty or contains mostly zeros), ClickHouse can encode it in a sparse format and automatically optimize calculations - the data does not require full decompression during queries. In fact, if you know how sparse a column is, you can define its ratio using the [`ratio_of_defaults_for_sparse_serialization` setting](https://clickhouse.com/docs/operations/settings/merge-tree-settings#ratio_of_defaults_for_sparse_serialization) to optimize serialization.

 This handy query can take a while, but it analyzes every row in your table and determines the ratio of values that are zero (or the default) in every column in the specified table:
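For reference, applying the setting this page links to looks roughly like this (table name and threshold are illustrative):

```sql
-- Columns whose values are >=90% defaults get sparse serialization.
CREATE TABLE sparse_demo (id UInt64, v UInt64)
ENGINE = MergeTree
ORDER BY id
SETTINGS ratio_of_defaults_for_sparse_serialization = 0.9;
```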

knowledgebase/certificate_verify_failed_error.mdx

+1-1
@@ -45,4 +45,4 @@ Here is an example configuration:

 ## Additional resources

-View [https://clickhouse.com/docs/en/interfaces/cli/#configuration_files](https://clickhouse.com/docs/en/interfaces/cli/#configuration_files)
+View [https://clickhouse.com/docs/interfaces/cli/#configuration_files](https://clickhouse.com/docs/en/interfaces/cli/#configuration_files)

knowledgebase/file-export.mdx

+1-1
@@ -11,7 +11,7 @@ keywords: ['Exporting Data', 'INTO OUTFILE', 'File Table Engine']

 ## Using INTO OUTFILE Clause {#using-into-outfile-clause}

-Add an [INTO OUTFILE](https://clickhouse.com/docs/en/sql-reference/statements/select/into-outfile) clause to your query.
+Add an [INTO OUTFILE](https://clickhouse.com/docs/sql-reference/statements/select/into-outfile) clause to your query.

 For example:

knowledgebase/how-to-check-my-clickhouse-cloud-sevice-state.mdx

+1-1
@@ -18,7 +18,7 @@ How do I check my ClickHouse Cloud Service state? I want to check if the Service
 The [ClickHouse Cloud API](/cloud/manage/api/api-overview) is great for checking the status of a cloud service. You need to create an API Key in your service before you can use the Cloud API. You can do this in ClickHouse Cloud [clickhouse.cloud](https://console.clickhouse.cloud):

 - [API Overview](/cloud/manage/api/api-overview)
-- [Swagger](https://clickhouse.com/docs/en/cloud/manage/api/swagger)
+- [Swagger](https://clickhouse.com/docs/cloud/manage/api/swagger)

 1. To check the status of a service, run the following. Make sure to replace `Key-ID` and `Key-Secret` with your respective details:

knowledgebase/how-to-increase-thread-pool-size.mdx

+1-1
@@ -31,4 +31,4 @@ You can also free up resources if your server has a lot of idle threads - using
 <max_thread_pool_free_size>2000</max_thread_pool_free_size>
 ```

-Check out the [docs](https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#max-thread-pool-size) for more details on the settings above and other settings that affect the Global Thread pool.
+Check out the [docs](https://clickhouse.com/docs/operations/server-configuration-parameters/settings#max-thread-pool-size) for more details on the settings above and other settings that affect the Global Thread pool.
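You can also read the effective values back over SQL rather than from `config.xml`; a sketch, assuming the `system.server_settings` table is available on your ClickHouse version:

```sql
SELECT name, value, changed
FROM system.server_settings
WHERE name LIKE 'max_thread_pool%';
```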

knowledgebase/how_to_use_parametrised_views.mdx

+1-1
@@ -87,4 +87,4 @@ Query id: 5731aae1-3e68-4e63-b57f-d50f29055744
 1 row in set. Elapsed: 0.004 sec. Processed 319.49 thousand rows, 319.49 KB (76.29 million rows/s., 76.29 MB/s.)
 ```

-For more info, please refer to https://clickhouse.com/docs/en/sql-reference/statements/create/view#parameterized-view
+For more info, please refer to https://clickhouse.com/docs/sql-reference/statements/create/view#parameterized-view
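For reference, the shape of the feature being linked (view and table names are hypothetical):

```sql
CREATE VIEW hits_on_day AS
SELECT count() AS c
FROM hits
WHERE EventDate = {day:Date};

-- Parameters are bound at query time:
SELECT * FROM hits_on_day(day = '2023-01-01');
```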

knowledgebase/json_extract_example.mdx

+1-1
@@ -11,7 +11,7 @@ keywords: ['JSON', 'extract base types']

 ## JSON Extract example

-This is just a short example that illustrates the use of [JSONExtract](https://clickhouse.com/docs/en/sql-reference/functions/json-functions) functions.
+This is just a short example that illustrates the use of [JSONExtract](https://clickhouse.com/docs/sql-reference/functions/json-functions) functions.

 Create a table:

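For a flavor of the functions this page covers:

```sql
SELECT
    JSONExtractString('{"name":"Alice","age":30}', 'name') AS name,
    JSONExtractUInt('{"name":"Alice","age":30}', 'age') AS age;
-- Returns: name = 'Alice', age = 30
```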
knowledgebase/kafka-clickhouse-json.mdx

+1-1
@@ -70,7 +70,7 @@ ENGINE = Kafka(
 );
 ```

-Note that we're using the [`JSONAsObject`](https://clickhouse.com/docs/en/interfaces/formats#jsonasobject) format, which will ensure that incoming messages are made available as a JSON object.
+Note that we're using the [`JSONAsObject`](https://clickhouse.com/docs/interfaces/formats#jsonasobject) format, which will ensure that incoming messages are made available as a JSON object.
 This format can only be parsed into a table that has a single column with the `JSON` type.

 Next, we'll create the underlying table to store the Wiki data:
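For reference, the single-JSON-column constraint mentioned above looks like this in practice; broker, topic, and consumer group are placeholders:

```sql
CREATE TABLE wiki_queue
(
    -- JSONAsObject parses each message into exactly one JSON-typed column.
    data JSON
)
ENGINE = Kafka('kafka:9092', 'wiki_events', 'ch_consumer', 'JSONAsObject');
```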

knowledgebase/mysql-to-parquet-csv-json.mdx

+1-1
@@ -13,7 +13,7 @@ keywords: ['MySQL', 'Parquet', 'CSV', 'JSON']

 The `clickhouse-local` tool makes it quick and easy to read data from MySQL and output the data into lots of different formats, including Parquet, CSV, and JSON. We are going to:

-- Use the [`mysql` table function](https://clickhouse.com/docs/en/sql-reference/table-functions/mysql) to read the data
+- Use the [`mysql` table function](https://clickhouse.com/docs/sql-reference/table-functions/mysql) to read the data
 - Use the `INTO OUTFILE _filename_ FORMAT` clause and specify the desired output format

 The `clickhouse-local` tool is a part of the ClickHouse binary. Download it using the following:
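Putting the two bullets together, a run inside `clickhouse-local` might look like this (host, database, table, and credentials are placeholders):

```sql
SELECT *
FROM mysql('mysql-host:3306', 'mydb', 'mytable', 'mysql_user', '<password>')
INTO OUTFILE 'mytable.parquet'
FORMAT Parquet;
```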

knowledgebase/olap.mdx

+1-1
@@ -38,7 +38,7 @@ All database management systems could be classified into two groups: OLAP (Onlin

 In practice OLAP and OLTP are not categories, it’s more like a spectrum. Most real systems usually focus on one of them but provide some solutions or workarounds if the opposite kind of workload is also desired. This situation often forces businesses to operate multiple storage systems integrated, which might be not so big deal but having more systems make it more expensive to maintain. So the trend of recent years is HTAP (**Hybrid Transactional/Analytical Processing**) when both kinds of the workload are handled equally well by a single database management system.

-Even if a DBMS started as a pure OLAP or pure OLTP, they are forced to move towards that HTAP direction to keep up with their competition. And ClickHouse is no exception, initially, it has been designed as [fast-as-possible OLAP system](https://clickhouse.com/docs/en/faq/general/why-clickhouse-is-so-fast) and it still does not have full-fledged transaction support, but some features like consistent read/writes and mutations for updating/deleting data had to be added.
+Even if a DBMS started as a pure OLAP or pure OLTP, they are forced to move towards that HTAP direction to keep up with their competition. And ClickHouse is no exception, initially, it has been designed as [fast-as-possible OLAP system](https://clickhouse.com/docs/faq/general/why-clickhouse-is-so-fast) and it still does not have full-fledged transaction support, but some features like consistent read/writes and mutations for updating/deleting data had to be added.

 The fundamental trade-off between OLAP and OLTP systems remains:

