
docs: rewrite the data model and schema design docs #1590

Draft · wants to merge 8 commits into main

Conversation

@evenyag (Contributor) commented Mar 20, 2025

What's Changed in this PR

This PR rewrites the data model and schema design docs.

The new design guide covers

  • Column types and selection
  • Primary key and index selection
  • Deduplication
  • Distributed tables

It also modifies the data model document to correct some concepts.

Checklist

  • Please confirm that all corresponding versions of the documents have been revised.
  • Please ensure that the content in sidebars.ts matches the current document structure if you have changed the structure.
  • This change requires a follow-up update in the localized docs.

coderabbitai bot commented Mar 20, 2025

Review skipped: auto reviews are disabled on this repository. To trigger a single review, invoke the @coderabbitai review command.

cloudflare-workers-and-pages bot commented Mar 20, 2025

Deploying greptime-docs with Cloudflare Pages

Latest commit: 3b37855
Status: ✅ Deploy successful!
Preview URL: https://2abc9680.greptime-docs.pages.dev
Branch Preview URL: https://docs-data-model.greptime-docs.pages.dev
@killme2008 (Contributor) left a comment

Great work! However, I think we should include some practical guidelines, particularly a series of 'how-to' guides. For instance, we could cover topics such as:

  • Best practices for designing a traces table
  • Guidelines for structuring a log table
  • How to design a table for Prometheus-like metrics
  • Guidelines for IoT device sensors
  • And similar common use cases

- Typically use short strings for tags, avoiding `FLOAT`, `DOUBLE`, `TIMESTAMP`.
- Never set high cardinality columns as tags if they change frequently.
For example, `trace_id`, `span_id`, `user_id` must not be used as tags.
GreptimeDB works well if you set them as fields instead of tags.
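
For illustration only (a hypothetical schema, not taken from this PR), a trace table following this advice keeps the high-cardinality identifiers as fields and reserves the tag role for a low-cardinality column such as the service name:

```sql
CREATE TABLE IF NOT EXISTS app_traces (
  service STRING,       -- low-cardinality, used as a tag
  trace_id STRING,      -- high-cardinality, stored as a field
  span_id STRING,       -- high-cardinality, stored as a field
  duration_ms DOUBLE,
  ts TIMESTAMP TIME INDEX,
  PRIMARY KEY (service) -- only the low-cardinality column becomes a tag
);
```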
Contributor

Note: Creating indexes on fields and tags can be done at any time.
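
For example (a hedged sketch: the exact ALTER syntax is an assumption and may differ between GreptimeDB versions), an index could be added to an existing column long after the table was created:

```sql
-- Assumed syntax for adding an inverted index to an existing column
-- without recreating the table; check the docs for your version.
ALTER TABLE app_traces MODIFY COLUMN trace_id SET INVERTED INDEX;
```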

Member

We need to think about this. Is it the implementation, or the definition? Is it for now, or forever?

For example, for traces the most typical query is retrieving all items by trace_id. Why is the data not partitioned or sorted by trace_id in storage? And for traces, we will use trace_id + span_id + timestamp for deduplication; maybe we will do it on compaction or on read. Not having them as primary keys makes that impossible.


Contributor Author

Currently, this is an implementation limitation. Our primary key identifies a time-series and is similar to a stream in VictoriaLogs:
https://docs.victoriametrics.com/victorialogs/keyconcepts/#how-to-determine-which-fields-must-be-associated-with-log-streams

In the future, we can support flexible primary keys, but some optimizations are only possible when the cardinality is low.

As for trace_id + span_id, it is actually not a time-series. We may need to consider this in the future.

Suppose we have a time-series table called `system_metrics` that monitors the resource usage of a standalone device:
### Metrics

Suppose we have a table called `system_metrics` that monitors the resource usage of machines in data centers:

```sql
CREATE TABLE IF NOT EXISTS system_metrics (
  -- ...
```
Collaborator

Is this SQL statement still suitable for the new data model where the tag columns are no longer recommended?

@evenyag (Contributor Author) commented Mar 24, 2025

Tags are still helpful for querying if we set them properly. We can also use only `host` as a tag, or use `idc, host`, for example:
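
A sketch of what that could look like (hypothetical column names, since the full `system_metrics` schema is not shown in this excerpt):

```sql
CREATE TABLE IF NOT EXISTS system_metrics (
  idc STRING,
  host STRING,
  cpu_util DOUBLE,
  memory_util DOUBLE,
  ts TIMESTAMP TIME INDEX,
  PRIMARY KEY (idc, host)   -- or just PRIMARY KEY (host)
);
```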

This improves the locality of data with the same tags.
If there are no tag columns, GreptimeDB sorts rows by timestamp.
2. Identifying a unique time-series.
When the table is not append-only, GreptimeDB can deduplicate rows by timestamp under the same time-series (primary key).
Member

I feel this should be the definition of the primary key: primary key + timestamp forms a unique id for data records. Even in append-only mode, we will do best-effort de-duplication in the future.

Contributor Author

Append mode is intended not to dedup; some data, like logs, can have duplicate keys.
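
For context, a sketch of how a table opts into this behaviour (assuming the `append_mode` table option; the column names are hypothetical):

```sql
-- Hypothetical log table: duplicates are acceptable, so append mode is
-- enabled via a table option instead of relying on primary-key dedup.
CREATE TABLE IF NOT EXISTS app_logs (
  host STRING,
  log_level STRING,
  log_msg STRING,
  ts TIMESTAMP TIME INDEX,
  PRIMARY KEY (host, log_level)
) WITH ('append_mode' = 'true');
```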

Member

I think we should at least keep a semantic for deduplicating entries. For performance considerations, we can do best-effort dedup, for example dedup on read or on compaction. Otherwise we will be in a position where, in theory, we cannot deal with logs generated by upstream retries.

Contributor Author

In most analytical databases, the primary key is not unique.

If there are no tag columns, GreptimeDB sorts rows by timestamp.
2. Identifying a unique time-series.
When the table is not append-only, GreptimeDB can deduplicate rows by timestamp under the same time-series (primary key).
3. Smoothing migration from other TSDBs that use tags or labels.
Member

We need to mention the impact of the order of tags in the primary key.

Contributor Author

The When to use primary key section explains this and has an example.
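
A quick illustration of the ordering effect (hypothetical tags and columns, not the doc's own example):

```sql
-- Tag order defines the sort order. With
--   PRIMARY KEY (idc, host)
-- rows are sorted by idc first, then host, so a query like
SELECT avg(cpu_util) FROM system_metrics WHERE idc = 'idc0';
-- scans a contiguous range, while filtering only on host must look across
-- every idc. Reversing the order, PRIMARY KEY (host, idc), favors
-- host-first queries instead.
```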

Recommendations for Tag columns:
Usually you don't need tag columns since ordering data by timestamp is sufficient for most use cases.
If you need deduplication, or your queries can benefit from the ordering, you can define tag columns.
Setting tags may also reduce disk space usage as it improves the locality of data.
@sunng87 (Member) commented Mar 20, 2025

Usually you don't need tag columns since ordering data by timestamp is sufficient for most use cases.

I feel this is due to the limitation caused by the cardinality effect on the memtable. This should not be the definition of tags, at least not the final one.

The selection of tags determines:

  1. how we can dedup data
  2. how we can sort data to narrow down the scan range for the user's most typical queries
  3. how we can do effective partitioning; for small queries, fewer partitions should be involved
  4. sorted data brings better compression, as mentioned

Contributor Author

Removing the limitation may be the final goal, but this document is a guide for the current version.

However, a high-cardinality primary key may still cause issues in some time-series workloads, such as fetching the last rows or processing data in a per-series manner. This may be related to the schema design.

