docs: rewrite the data model and schema design docs #1590
base: main
Conversation
Deploying greptime-docs with Cloudflare Pages

- Latest commit: 3b37855
- Status: ✅ Deploy successful!
- Preview URL: https://2abc9680.greptime-docs.pages.dev
- Branch Preview URL: https://docs-data-model.greptime-docs.pages.dev
Great work! However, I think we should include some practical guidelines, particularly a series of "how-to" guides. For instance, we could cover topics such as:
- Best practices for designing a traces table
- Guidelines for structuring a log table
- How to design a table for Prometheus-like metrics
- Guidelines for IoT device sensors
- And similar common use cases
- Typically use short strings for tags, avoiding `FLOAT`, `DOUBLE`, `TIMESTAMP`.
- Never set high cardinality columns as tags if they change frequently.
  For example, `trace_id`, `span_id`, `user_id` must not be used as tags.
  GreptimeDB works well if you set them as fields instead of tags.
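For illustration, a minimal sketch of this recommendation (table and column names here are hypothetical, not from the PR): the low-cardinality `service_name` is the tag, while high-cardinality `trace_id` and `span_id` stay as plain fields.

```sql
-- Hypothetical traces table: the low-cardinality service_name is the tag;
-- high-cardinality trace_id/span_id are regular fields, not in PRIMARY KEY.
CREATE TABLE IF NOT EXISTS traces (
    ts TIMESTAMP,
    service_name STRING,
    trace_id STRING,
    span_id STRING,
    duration_ms DOUBLE,
    TIME INDEX (ts),
    PRIMARY KEY (service_name)
);
```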
Note: Creating indexes on fields and tags can be done at any time.
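As a hedged sketch of that point, assuming the hypothetical `traces` table above and the `ALTER TABLE ... SET INVERTED INDEX` support available in recent GreptimeDB versions:

```sql
-- Assumption: recent GreptimeDB releases allow adding an index to an
-- existing field column after the fact, without recreating the table.
ALTER TABLE traces MODIFY COLUMN trace_id SET INVERTED INDEX;
```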
We need to think about this. Is it the implementation, or the definition? Is it for now, or forever?
For example, for traces the most typical query is to retrieve all items by `trace_id`. Why isn't the data partitioned or sorted by `trace_id` in storage? And for traces, we will use `trace_id` + `span_id` + timestamp for deduplication; maybe we will do it on compaction or on read. Not having them as primary keys makes that impossible.
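To make the concern concrete, the typical query shape discussed here would be something like the following (hypothetical table and column names):

```sql
-- Fetch every span of one trace. If storage is not sorted or partitioned
-- by trace_id, this degenerates into a scan filtered only by time.
SELECT *
FROM traces
WHERE trace_id = 'abc123'
ORDER BY ts;
```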
Some content for reference: https://clickhouse.com/docs/use-cases/observability/schema-design#choosing-a-primary-ordering-key
Currently, this is an implementation limitation. Our primary key is a time-series and is similar to a stream in VictoriaLogs.
https://docs.victoriametrics.com/victorialogs/keyconcepts/#how-to-determine-which-fields-must-be-associated-with-log-streams
In the future, we can support flexible primary keys, but some optimizations are only possible when the cardinality is low.
`trace_id` + `span_id` is actually not a time-series. We may need to consider this in the future.
Before: Suppose we have a time-series table called `system_metrics` that monitors the resource usage of a standalone device:

After:

### Metrics

Suppose we have a table called `system_metrics` that monitors the resource usage of machines in data centers:

```sql
CREATE TABLE IF NOT EXISTS system_metrics (
    -- (column definitions truncated in the quoted diff)
```
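The quoted hunk truncates the statement. A plausible completion, modeled on the `system_metrics` example used elsewhere in the GreptimeDB docs (the column list is an assumption, not taken from this diff):

```sql
CREATE TABLE IF NOT EXISTS system_metrics (
    host STRING,
    idc STRING,
    cpu_util DOUBLE,
    memory_util DOUBLE,
    disk_util DOUBLE,
    ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP(),
    TIME INDEX (ts),
    PRIMARY KEY (host, idc)
);
```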
Is this SQL statement still suitable for the new data model where the tag columns are no longer recommended?
Tags are still helpful for querying if we set them properly. We can also use only `host` as the tag, or use `idc, host`.
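A sketch of the two alternatives mentioned, reusing the hypothetical columns from the completion above:

```sql
-- Option A: only host as the tag.
CREATE TABLE system_metrics_a (
    host STRING,
    idc STRING,
    cpu_util DOUBLE,
    ts TIMESTAMP,
    TIME INDEX (ts),
    PRIMARY KEY (host)
);

-- Option B: idc first, then host, so rows from the same data center
-- are stored next to each other.
CREATE TABLE system_metrics_b (
    host STRING,
    idc STRING,
    cpu_util DOUBLE,
    ts TIMESTAMP,
    TIME INDEX (ts),
    PRIMARY KEY (idc, host)
);
```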
This improves the locality of data with the same tags.
If there are no tag columns, GreptimeDB sorts rows by timestamp.
2. Identifying a unique time-series.
   When the table is not append-only, GreptimeDB can deduplicate rows by timestamp under the same time-series (primary key).
I feel this should be the definition of the primary key: primary key + timestamp forms a unique ID for data records. Even in append-only mode, we will do best-effort deduplication in the future.
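A small sketch of the semantics being proposed, assuming the default (non-append) merge behavior and the hypothetical `system_metrics` schema above:

```sql
-- With PRIMARY KEY (host, idc) and TIME INDEX (ts), two inserts sharing
-- the same (host, idc, ts) collapse into one row; the later write wins.
INSERT INTO system_metrics (host, idc, cpu_util, ts)
VALUES ('h1', 'idc0', 0.5, '2024-01-01 00:00:00');

INSERT INTO system_metrics (host, idc, cpu_util, ts)
VALUES ('h1', 'idc0', 0.9, '2024-01-01 00:00:00');

-- Returns a single row with cpu_util = 0.9.
SELECT * FROM system_metrics WHERE host = 'h1';
```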
Append mode is intended not to deduplicate; some data, like logs, can have duplicate keys.
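For contrast, a hedged sketch of an append-only log table, using the `append_mode` table option as in current GreptimeDB releases (table and column names are illustrative):

```sql
-- Append-only: duplicate rows are kept as-is, which suits raw logs.
CREATE TABLE IF NOT EXISTS app_logs (
    ts TIMESTAMP,
    level STRING,
    message STRING,
    TIME INDEX (ts)
) WITH ('append_mode' = 'true');
```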
I think we should at least keep deduplication semantics for entries. For performance, we can do best-effort dedup, for example on read or on compaction. Otherwise we will be in a position where, in theory, we cannot deal with logs generated by upstream retries.
In most analytical databases, the primary key is not unique.
If there are no tag columns, GreptimeDB sorts rows by timestamp.
2. Identifying a unique time-series.
   When the table is not append-only, GreptimeDB can deduplicate rows by timestamp under the same time-series (primary key).
3. Smoothing migration from other TSDBs that use tags or labels.
We need to mention the impact of the order of tags in the primary key.
The "When to use primary key" section explains this and has an example.
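For illustration, a hedged sketch of that impact, reusing the hypothetical `system_metrics` columns from above:

```sql
-- With PRIMARY KEY (idc, host), rows sort by idc, then host, then ts.
-- A filter on the leading tag can prune the scan range efficiently:
SELECT avg(cpu_util) FROM system_metrics WHERE idc = 'idc0';

-- A filter only on the trailing tag cannot exploit the sort prefix:
SELECT avg(cpu_util) FROM system_metrics WHERE host = 'h1';
```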
Recommendations for Tag columns:
Usually you don't need tag columns since ordering data by timestamp is sufficient for most use cases.
If you need deduplication, or your queries can benefit from the ordering, you can define tag columns.
Setting tags may also reduce disk space usage as it improves the locality of data.
> Usually you don't need tag columns since ordering data by timestamp is sufficient for most use cases.

I feel this is due to the limitation caused by the cardinality effect on the memtable. This should not be the definition of tags, at least not the final one.
The selection of tags determines:
- how we can dedup data
- how we can sort data to narrow down the scan range of the user's most typical queries
- how we can do effective partitioning; for small queries, fewer partitions should be involved (see the sketch below)
- sorted data brings better compression, as mentioned
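On the partitioning point in the list above, a hedged sketch using GreptimeDB's `PARTITION ON COLUMNS` syntax (the table, columns, and range boundaries are illustrative assumptions):

```sql
-- Partitioning on a tag column keeps a small query, e.g. one filtering
-- on a single device_id, inside a single partition.
CREATE TABLE sensor_readings (
    device_id STRING,
    reading DOUBLE,
    ts TIMESTAMP,
    TIME INDEX (ts),
    PRIMARY KEY (device_id)
)
PARTITION ON COLUMNS (device_id) (
    device_id < 'm',
    device_id >= 'm'
);
```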
Removing the limitation may be the final goal, but this document is a guide for the current version.
However, a high-cardinality primary key may still cause issues in some time-series workloads, like fetching the last rows or processing in a per-series manner. This may be related to the schema design.
Not only the memtable: our batch struct is also per-series.
https://github.com/GreptimeTeam/greptimedb/blob/c77ce958a3b936e1065abe51fa10832b0ea2ac99/src/mito2/src/read.rs#L63-L74
What's Changed in this PR
This PR rewrites the data model and schema design docs.
The new design guide covers
It also modifies the data model document to correct some concepts.
Checklist
- `sidebars.ts` matches the current document structure when you changed the document structure.