5 changes: 5 additions & 0 deletions _partials/_not-supported-for-azure.mdx
@@ -0,0 +1,5 @@
<Highlight type="note">

This feature is on our roadmap for $CLOUD_LONG on Microsoft Azure. Stay tuned!

</Highlight>
5 changes: 4 additions & 1 deletion migrate/livesync-for-kafka.md
@@ -8,12 +8,13 @@ tags: [stream, connector]

import PrereqCloud from "versionContent/_partials/_prereqs-cloud-only.mdx";
import EarlyAccessNoRelease from "versionContent/_partials/_early_access.mdx";
import NotSupportedAzure from "versionContent/_partials/_not-supported-for-azure.mdx";

# Stream data from Kafka

You use the Kafka source connector in $CLOUD_LONG to stream events from Kafka into your $SERVICE_SHORT. $CLOUD_LONG connects to your Confluent Cloud Kafka cluster and Schema Registry using SASL/SCRAM authentication and service account–based API keys. Only the Avro format is currently supported [with some limitations][limitations].

This page explains how to connect $CLOUD_LONG to your Confluence Cloud Kafka cluster.
This page explains how to connect $CLOUD_LONG to your Confluent Cloud Kafka cluster.

<EarlyAccessNoRelease />: the Kafka source connector is not yet supported for production use.
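
For context on what the connector expects on the Confluent Cloud side, here is a minimal sketch of producing one Avro-encoded event with SASL/SCRAM credentials using the `confluent-kafka` Python client. The bootstrap server, Schema Registry URL, API keys, topic name, and schema are hypothetical placeholders, not values provided by the connector.

```python
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

# Hypothetical endpoints and service-account API keys; substitute your own.
producer = Producer({
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "SCRAM-SHA-256",
    "sasl.username": "KAFKA_API_KEY",
    "sasl.password": "KAFKA_API_SECRET",
})
schema_registry = SchemaRegistryClient({
    "url": "https://psrc-xxxxx.us-east-1.aws.confluent.cloud",
    "basic.auth.user.info": "SR_API_KEY:SR_API_SECRET",
})

# The connector reads Avro values whose schema is registered in Schema Registry.
schema_str = """
{
  "type": "record",
  "name": "Metric",
  "fields": [
    {"name": "device_id", "type": "string"},
    {"name": "temperature", "type": "double"}
  ]
}
"""
serialize = AvroSerializer(schema_registry, schema_str)

value = serialize(
    {"device_id": "sensor-1", "temperature": 21.5},
    SerializationContext("metrics", MessageField.VALUE),
)
producer.produce(topic="metrics", value=value)
producer.flush()
```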

@@ -24,6 +25,8 @@ This page explains how to connect $CLOUD_LONG to your Confluent Cloud Kafka clu
- [Sign up][confluence-signup] for Confluent Cloud.
- [Create][create-kafka-cluster] a Kafka cluster in Confluent Cloud.

<NotSupportedAzure />

## Access your Kafka cluster in Confluent Cloud

Take the following steps to prepare your Kafka cluster for connection to $CLOUD_LONG:
5 changes: 5 additions & 0 deletions migrate/livesync-for-s3.md
@@ -8,6 +8,7 @@ tags: [recovery, logical backup, replication]

import PrereqCloud from "versionContent/_partials/_prereqs-cloud-only.mdx";
import EarlyAccessNoRelease from "versionContent/_partials/_early_access.mdx";
import NotSupportedAzure from "versionContent/_partials/_not-supported-for-azure.mdx";

# Sync data from S3

@@ -62,6 +63,8 @@ The $S3_CONNECTOR continuously imports data from an Amazon S3 bucket into your d

- [Public anonymous user][credentials-public].

<NotSupportedAzure />

## Limitations

- **File naming**:
@@ -161,6 +164,8 @@ To sync data from your S3 bucket to your $SERVICE_LONG using $CONSOLE:
And that is it: you are now using the $S3_CONNECTOR to synchronize all the data, or specific files, from an S3 bucket to your
$SERVICE_LONG in real time.
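
If you plan to use the public anonymous user option, you can check that the bucket really allows unauthenticated reads before configuring the connector. This is a minimal sketch using `boto3` with unsigned requests; the bucket name and prefix are hypothetical placeholders.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous client: no credentials are sent, mirroring the public access option.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Hypothetical bucket and prefix; list the files the connector would sync.
response = s3.list_objects_v2(Bucket="my-public-metrics-bucket", Prefix="exports/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```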

[about-hypertables]: /use-timescale/:currentVersion:/hypertables/
[lives-sync-specify-tables]: /migrate/:currentVersion:/livesync-for-postgresql/#specify-the-tables-to-synchronize
[compression]: /use-timescale/:currentVersion:/compression/about-compression
10 changes: 9 additions & 1 deletion migrate/upload-file-using-console.md
@@ -7,6 +7,7 @@ keywords: [import]

import ImportPrerequisitesCloudNoConnection from "versionContent/_partials/_prereqs-cloud-no-connection.mdx";
import EarlyAccessGeneral from "versionContent/_partials/_early_access.mdx";
import NotSupportedAzure from "versionContent/_partials/_not-supported-for-azure.mdx";

# Upload a file into your $SERVICE_SHORT using $CONSOLE_LONG

@@ -24,6 +25,8 @@ $CONSOLE_LONG enables you to drag and drop files to upload from your local machi

<ImportPrerequisitesCloudNoConnection />

<NotSupportedAzure />

<Tabs label="Upload files from a local machine" persistKey="file-import">

<Tab title="From CSV" label="import-csv">
@@ -124,6 +127,8 @@ $CONSOLE_LONG enables you to upload CSV and Parquet files, including archives co
- [IAM Role][credentials-iam].
- [Public anonymous user][credentials-public].

<NotSupportedAzure />

<Tabs label="Import files from S3" persistKey="file-import">

<Tab title="From CSV" label="import-csv">
@@ -202,9 +207,12 @@ To import a Parquet file from an S3 bucket:

</Tabs>
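
If your source data is CSV but you want to upload the more compact Parquet format, you can convert the file locally first. This is a minimal sketch, assuming `pandas` with the `pyarrow` engine installed; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical file and column names; parse the time column so it keeps a
# timestamp type in the Parquet output.
df = pd.read_csv("metrics.csv", parse_dates=["time"])
df.to_parquet("metrics.parquet", index=False)  # uses the pyarrow engine if installed

print(df.dtypes)  # sanity-check column types before uploading
```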

And that is it: you have imported your data into your $SERVICE_LONG.

[credentials-iam]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html#roles-creatingrole-user-console
[credentials-public]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html#example-bucket-policies-anonymous-user
[console]: https://console.cloud.timescale.com/dashboard/services
6 changes: 6 additions & 0 deletions use-timescale/tigerlake.md
@@ -8,6 +8,7 @@ keywords: [data lake, lakehouse, s3, iceberg]

import IntegrationPrereqsCloud from "versionContent/_partials/_integration-prereqs-cloud-only.mdx";
import EarlyAccessGeneral from "versionContent/_partials/_early_access.mdx";
import NotSupportedAzure from "versionContent/_partials/_not-supported-for-azure.mdx";

# Integrate data lakes with $CLOUD_LONG

@@ -29,6 +30,8 @@ Tiger Lake is currently in private beta. Please contact us to request access.

<IntegrationPrereqsCloud/>

<NotSupportedAzure />

## Integrate a data lake with your $SERVICE_LONG

To connect a $SERVICE_LONG to your data lake:
@@ -333,6 +336,9 @@ data lake:
* Iceberg snapshots are pruned automatically when their number exceeds 2,500.
* The Iceberg namespace is hardcoded to `timescaledb`; support for a custom namespace is a work in progress. See the sketch after this list for how a data lake client references it.
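
Because the namespace is fixed, data lake clients always address synced tables under `timescaledb`. This is a minimal sketch using `pyiceberg`, assuming an AWS Glue catalog and a hypothetical `conditions` table; adjust the catalog configuration to match your own data lake.

```python
from pyiceberg.catalog import load_catalog

# Hypothetical catalog configuration; synced tables appear under the fixed
# `timescaledb` namespace.
catalog = load_catalog("lake", **{"type": "glue"})

print(catalog.list_tables("timescaledb"))

table = catalog.load_table("timescaledb.conditions")
print(table.scan(limit=10).to_arrow())
```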

[cmc]: https://console.aws.amazon.com/cloudformation/
[aws-athena]: https://aws.amazon.com/athena/
[apache-spark]: https://spark.apache.org/