diff --git a/docs/deployment.md b/docs/deployment.md index aff2e6c..b845461 100644 --- a/docs/deployment.md +++ b/docs/deployment.md @@ -1,10 +1,22 @@ # Supported MongoDB deployments -* Percona Link for MongoDB supports only Replica Set to Replica Set synchronization. The source and target replica sets can have different number of nodes. +{{pcsm.full_name}} supports the following deployment topologies: + +* **Replica Set to Replica Set**: The source and target replica sets can have different numbers of nodes. +* **Sharded cluster to Sharded cluster**: The source and target sharded clusters can have different numbers of shards. This functionality is in tech preview stage. See [Sharding support in {{pcsm.full_name}}](sharding.md) for details. + +## Version requirements + * You can synchronize Percona Server for MongoDB or MongoDB Community/Enterprise Advanced/Atlas within the same major versions - 6.0 to 6.0, 7.0 to 7.0, 8.0 to 8.0 -* Percona Link for MongoDB is supported on both ARM64 and x86_64 architectures. * Minimal supported MongoDB versions are: 6.0.17, 7.0.13, 8.0.0 -* You can connect the following MongoDB deployments: + +## Supported architectures + +* {{pcsm.full_name}} is supported on both ARM64 and x86_64 architectures. + +## Supported MongoDB deployments + +You can connect the following MongoDB deployments: | Source | Target | | --- | --- | diff --git a/docs/install/authentication.md b/docs/install/authentication.md index 0365585..d99905a 100644 --- a/docs/install/authentication.md +++ b/docs/install/authentication.md @@ -47,10 +47,23 @@ When you [install PLM from repositories](repos.md), the environment file is crea ### Example environment file -```{.text .no-copy} -PLM_SOURCE_URI="mongodb://source:mys3cretpAssword@mysource1:27017,mysource2:27017,mysource3:27017/" -PLM_TARGET_URI="mongodb://target:tops3cr3t@mytarget1:27017,mytarget2:27017,mytarget3:27017/" -``` +=== "Replica sets" + + List all replica set members of the source and target clusters in the respective MongoDB connection string URIs to ensure {{pcsm.short}} can reach each of them: + + ```{.text .no-copy} + PLM_SOURCE_URI="mongodb://source:mys3cretpAssword@mysource1:27017,mysource2:27017,mysource3:27017/" + PLM_TARGET_URI="mongodb://target:tops3cr3t@mytarget1:27017,mytarget2:27017,mytarget3:27017/" + ``` + +=== "Sharded clusters" + + {{pcsm.short}} communicates with the clusters via `mongos`. Therefore, specify the hostname and port of the `mongos` instances of the source and target clusters in the respective MongoDB connection string URIs. + + ```{.text .no-copy} + PCSM_SOURCE_URI="mongodb://source-user:password@mongos-source1:27017/admin" + PCSM_TARGET_URI="mongodb://target-user:password@mongos-target1:27017/admin" + ``` ### Passwords with special characters diff --git a/docs/install/usage.md b/docs/install/usage.md index 4077834..e389a78 100644 --- a/docs/install/usage.md +++ b/docs/install/usage.md @@ -1,18 +1,22 @@ -# Use {{plm.full_name}} +# Use {{pcsm.full_name}} -{{plm.full_name}} doesn't automatically start data replication after the startup. It has the `idle` status indicating that it is ready to accept requests. +{{pcsm.full_name}} doesn't automatically start data replication after startup. It has the `idle` status indicating that it is ready to accept requests. -You can interact with {{plm.full_name}} using the command-line interface or via the HTTP API. Read more about [PLM API](../api.md). +!!! 
tip "Understanding the workflow" + + For an overview of how {{pcsm.short}} works and the replication workflow stages, see [How {{pcsm.full_name}} works](../intro.md). + +You can interact with {{pcsm.full_name}} using the command-line interface or via the HTTP API. Read more about [{{pcsm.short}} HTTP API](../api.md). ## Before you start -Your target MongoDB cluster may be empty or contain data. PLM replicates data from the source to the target but doesn't manage the target's data. If the target already has the same data as the source, PLM overwrites it. However, if the target contains different data, PLM doesn't delete it during replication. This leads to inconsistencies between the source and target. To ensure consistency, manually delete any existing data from the target before starting replication. +Your target MongoDB cluster may be empty or contain data. {{pcsm.short}} replicates data from the source to the target but doesn't manage the target's data. If the target already has the same data as the source, {{pcsm.short}} overwrites it. However, if the target contains different data, {{pcsm.short}} doesn't delete it during replication. This leads to inconsistencies between the source and target. To ensure consistency, manually delete any existing data from the target before starting replication. ## Start the replication -Start the replication process between source and target clusters. PLM starts copying the data from the source to the target. First it does the initial sync by cloning the data and then applying all the changes that happened since the clone start. +Start the replication process between source and target clusters. {{pcsm.short}} starts copying the data from the source to the target. First it does the initial sync by cloning the data and then applying all the changes that happened since the clone start. -Then it uses the [change streams :octicons-link-external-16:](https://www.mongodb.com/docs/manual/changeStreams/) to track the changes to your data on the source and replicate them to the target. +Then it uses [change streams :octicons-link-external-16:](https://www.mongodb.com/docs/manual/changeStreams/) to track changes on the source and replicate them to the target. === "Command line" @@ -72,7 +76,7 @@ To include or exclude a specific database and all collections it includes, pass ## Pause the replication -You can pause the replication at any moment. PLM stops the replication, saves the timestamp and enters the `paused` state. PLM uses the saved timestamp after you [resume the replication](#resume-the-replication). +You can pause the replication at any moment. {{pcsm.short}} stops the replication, saves the timestamp, and enters the `paused` state. {{pcsm.short}} uses the saved timestamp after you [resume the replication](#resume-the-replication). === "Command line" @@ -90,7 +94,7 @@ You can pause the replication at any moment. PLM stops the replication, saves th ## Resume the replication -Resume the replication. PLM changes the state to `running` and copies the changes that occurred to the data from the timestamp it saved when you paused the replication. Then it continues monitoring the data changes and replicating them real-time. +Resume the replication. {{pcsm.short}} changes the state to `running` and copies the changes that occurred from the timestamp it saved when you paused the replication. Then it continues monitoring data changes and replicating them in real time. === "Command line" @@ -143,9 +147,9 @@ Check the current status of the replication process. 
$ curl http://localhost:2242/status ``` -# Finalize the replication +## Finalize the replication -When you no longer need / want to replicate data, finalize the replication. PLM stops replication, creates the required indexes on the target, and stops. This is a one-time operation. You cannot restart the replicaton after you finalized it. If you run the `start` command, PLM will start the replication anew, with the initial sync. +When you no longer need to replicate data, finalize the replication. {{pcsm.short}} stops replication, creates the required indexes on the target, and stops. This is a one-time operation. You cannot restart the replication after you finalized it. If you run the `start` command, {{pcsm.short}} will start the replication anew, with the initial sync. === "Command line" diff --git a/docs/intro.md b/docs/intro.md index 8facf5d..1abbe5e 100644 --- a/docs/intro.md +++ b/docs/intro.md @@ -1,49 +1,129 @@ -# How {{plm.full_name}} works +# How {{pcsm.full_name}} works -{{plm.full_name}} (PLM) is a binary process that replicates data between MongoDB deployments in real time until you manually finalize it. You can also make a one-time data migration from the source to the target with zero downtime. +{{pcsm.full_name}} is a binary process that replicates data between MongoDB deployments in real time until you manually finalize it. You can also make a one-time data migration from the source to the target with zero downtime. -You operate with {{plm.full_name}} using the [set of commands](plm-commands.md) or [API calls](api.md). Depending on the request it receives, {{plm.full_name}} has several states as shown in the following diagram: +You operate with {{pcsm.full_name}} using the [set of commands](pcsm-commands.md) or [API calls](api.md). Depending on the request it receives, {{pcsm.full_name}} has several states as shown in the following diagram: -![PLM states](_images/state-transition-flow.jpg) +![PCSM states](_images/state-transition-flow.jpg) -* **Idle**: PLM is up and running but not migrating data -* **Running**: PLM is replicating data from the source to the target. PLM enters the running state when you start and resume the replication -* **Paused**: PLM is not running and data is not replicated -* **Finalizing**: PLM stops the replication and is doing final checks, creates indexes +* **Idle**: {{pcsm.short}} is up and running but not migrating data +* **Running**: {{pcsm.short}} is replicating data from the source to the target. {{pcsm.short}} enters the running state when you start and resume the replication +* **Paused**: {{pcsm.short}} is not running and data is not replicated +* **Finalizing**: {{pcsm.short}} stops the replication and is doing final checks, creates indexes * **Finalized**: all checks are complete, data replication is stopped -* **Failed**: PLM encountered an error +* **Failed**: {{pcsm.short}} encountered an error -## Usage scenario +## Replication workflows -Now, let's use the data migration from MongoDB Atlas to Percona Server for MongoDB as an example to understand how PLM works. +The workflow for {{pcsm.short}} depends on your MongoDB deployment topology. Select the tab below that matches your setup: -You run a MongoDB Atlas 8.0.8 deployed as a replica set. You need to migrate to Percona Server for MongoDB 8.0.8-3, also a replica set. 
You have a strict requirement to migrate with zero downtime; therefore, using logical backups with [Percona Backup for MongoDB :octicons-link-external-16:](https://docs.percona.com/percona-backup-mongodb/features/logical.html) is a no-go.
+=== "Replica Sets"

-A solution is to use Percona Link for MongoDB. MongoDB Atlas is your source. An empty Percona Server for MongoDB replica set is your target. Data migration is a resource-intensive task. Therefore, we recommend installing PLM closest to the target to reduce the network lag as much as possible.
+    ### Usage scenario

-Create users for PLM in both MongoDB deployments. Start and connect PLM to your source and target using these user credentials. Now you are ready to start the migration.
+    Let's use a data migration from MongoDB Atlas to Percona Server for MongoDB as an example to understand how {{pcsm.short}} works with replica sets.

-To start the migration, call the `start` command. PLM starts copying the data from the source to the target. First it does the initial sync by cloning the data and then applying all the changes that happened since the clone start.
+    You run a MongoDB Atlas 8.0.8 deployed as a replica set. You need to migrate to Percona Server for MongoDB 8.0.8-3, also a replica set. You have a strict requirement to migrate with zero downtime; therefore, using logical backups with [Percona Backup for MongoDB :octicons-link-external-16:](https://docs.percona.com/percona-backup-mongodb/features/logical.html) is not an option.

-After the initial data sync, PLM monitors changes in the source and replicates them to the target at runtime. You don't have to stop your source deployment, it operates as usual, accepting client requests. PLM uses [change streams :octicons-link-external-16:](https://www.mongodb.com/docs/manual/changeStreams/) to track the changes to your data and replicate them to the target.
+    A solution is to use {{pcsm.full_name}}. MongoDB Atlas is your source. An empty Percona Server for MongoDB replica set is your target. Data migration is a resource-intensive task. Therefore, we recommend installing {{pcsm.short}} on a dedicated host as close to the target as possible to reduce network lag.

-You can `pause` the replication and `resume` it later. When paused, PLM saves the timestamp when it stops the replication. After you resume PLM, it copies the changes from the saved timestamp and continues real-time replication.
+    ### Workflow steps

-You can track the migration status in logs and using the `status` command. When the data migration is complete, call the `finalize` command. This makes PLM finalize the replication, create the required indexes on the target, and stop. Note that finalizing is a one-time operation. If you try to start PLM again, it will start data copy anew.
+    1. **Set up authentication**: Create users for {{pcsm.short}} in both MongoDB deployments. Start and connect {{pcsm.short}} to your source and target using these user credentials. See [Configure authentication in MongoDB](install/authentication.md) for details.

-Afterwards, you will only need to switch your clients to connect to Percona Server for MongoDB.
+    2. **Start the replication**: Call the `start` command. {{pcsm.short}} starts copying the data from the source to the target. First it does the initial sync by cloning the data and then applying all the changes that happened since the clone start. See [Start the replication](install/usage.md#start-the-replication) for command details.
+
+    3. 
**Real-time replication**: After the initial data sync, {{pcsm.short}} monitors changes in the source and replicates them to the target at runtime. You don't have to stop your source deployment—it operates as usual, accepting client requests. {{pcsm.short}} uses [change streams :octicons-link-external-16:](https://www.mongodb.com/docs/manual/changeStreams/) to track the changes to your data and replicate them to the target. + + 4. **Control replication**: You can `pause` the replication and `resume` it later. When paused, {{pcsm.short}} saves the timestamp when it stops the replication. After you resume {{pcsm.short}}, it starts watching for the changes from the moment when the replication was paused and continues real-time replication. See [Pause the replication](install/usage.md#pause-the-replication) and [Resume the replication](install/usage.md#resume-the-replication) for command details. + + 5. **Monitor progress**: Track the migration status in logs and using the `status` command. See [Check the replication status](install/usage.md#check-the-replication-status) for details. + + 6. **Finalize**: When the data migration is complete, call the `finalize` command. This makes {{pcsm.short}} finalize the replication, create the required indexes on the target, and stop. Note that finalizing is a one-time operation. If you try to start {{pcsm.short}} again, it will start data copy anew. See [Finalize the replication](install/usage.md#finalize-the-replication) for command details. + + 7. **Cutover**: Switch your clients to connect to Percona Server for MongoDB. + + For detailed instructions, see [Use {{pcsm.full_name}}](install/usage.md). + +=== "Sharded Clusters (Tech Preview)" + + ### Usage scenario + + Let's use a data migration between two sharded MongoDB clusters as an example to understand how {{pcsm.short}} works with sharded clusters. + + For example, you run a MongoDB Enterprise Advanced 8.0 sharded cluster with 3 shards as your source. You need to migrate to a self-hosted Percona Server for MongoDB 8.0 sharded cluster with 5 shards as your target. You need zero-downtime migration and cannot afford to disable the balancer on either cluster, which makes traditional migration methods challenging. + + A solution is to use {{pcsm.full_name}}. Since {{pcsm.short}} connects to `mongos` instances, the number of shards on source and target can differ. Install {{pcsm.short}} on a dedicated host closer to the target cluster to minimize network latency. + + ### Workflow steps + + 1. **Set up authentication**: Create users for {{pcsm.short}} in both MongoDB deployments. Configure connection strings using `mongos` hostname and port for both source and target clusters. See [Configure authentication in MongoDB](install/authentication.md) for details. + + 2. **Start the replication**: Call the `start` command. You don't have to disable the balancer on the target. Before starting the data copying, {{pcsm.short}} retrieves the information about the shard keys for collections on the source cluster and creates these collections on the target with the same shard key. Then {{pcsm.short}} starts copying all data from the source to the target. First it does the initial sync by cloning the data and then applying all the changes that happened since the clone start. See [Start the replication](install/usage.md#start-the-replication) for command details. + + 3. 
**Real-time replication**: During the replication stage, {{pcsm.short}} captures change stream events from the source cluster through `mongos` and applies them to the target cluster, ensuring real-time synchronization of data changes. The target cluster's balancer handles chunk distribution. For details about sharding-specific behavior, see [Sharding behavior](sharding.md#sharding-specific-behavior). + + 4. **Control replication**: You can `pause` the replication and `resume` it later, just like with replica sets. When paused, {{pcsm.short}} saves the timestamp when it stops the replication. See [Pause the replication](install/usage.md#pause-the-replication) and [Resume the replication](install/usage.md#resume-the-replication) for command details. + + 5. **Monitor progress**: Track the migration status in logs and using the `status` command. See [Check the replication status](install/usage.md#check-the-replication-status) for details. + + 6. **Finalize**: When the data migration is complete and you no longer need to run clusters in sync, call the `finalize` command to complete the migration. This makes {{pcsm.short}} finalize the replication, create the required indexes on the target, and stop. Note that finalizing is a one-time operation. If you try to start {{pcsm.short}} again, it will start data copy anew. See [Finalize the replication](install/usage.md#finalize-the-replication) for command details. + + 7. **Cutover**: Switch your clients to connect to the target Percona Server for MongoDB cluster. + + For detailed information about sharded cluster replication, see [Sharding support in {{pcsm.full_name}}](sharding.md). ## Filtered replication -You can replicate the whole dataset or only a specific subset of data, which is a filtered replication. You can use filtered replication for various use cases, such as: +You can replicate the whole dataset or only a specific subset of data, which is a filtered replication. Filtered replication works for both replica sets and sharded clusters. You can use filtered replication for various use cases, such as: -* Spin up a new development environment with a specific subset of data instead of the whole dataset. +* Spin up a new development environment with a specific subset of data instead of the whole dataset. * Optimize cloud storage costs for hybrid environments where your target MongoDB deployment runs in the cloud. -Specify what namespaces - databases and collections - to include and/or exclude from the replication when you start it. +Specify what namespaces—databases and collections—to include and/or exclude from the replication when you start it. See [Start the filtered replication](install/usage.md#start-the-filtered-replication) for details. + +## Index handling + +{{pcsm.short}} manages indexes throughout the replication process to ensure data consistency and query performance on the target cluster. + +### Replication stage + +During the replication stage, {{pcsm.short}} copies indexes from the source to the target cluster as follows: + +* **Unique indexes handling**: Unique indexes are copied as non-unique indexes during replication. This allows {{pcsm.short}} to handle potential duplicate data that may exist during the migration process. + +* **Hidden indexes**: {{pcsm.short}} copies hidden indexes as non-hidden. + +* **TTL index handling**: When copying TTL indexes, {{pcsm.short}} temporarily disables TTL expiration and saves the `expireAfterSeconds` property for later restoration. 
This way {{pcsm.short}} ensures documents won't expire while being copied.
+
+* **Index creation during sync**: If an index is created on the source cluster while the clusters are in sync, {{pcsm.short}} automatically creates the same index on the target cluster.
+
+* **Incomplete indexes**: If an index build is in progress during the replication stage, {{pcsm.short}} records it and tries to recreate the index during the finalization stage.
+
+* **Inconsistent indexes**: {{pcsm.short}} uses the `$indexStats` aggregation to count index occurrences across shards. If an index exists on fewer shards than the `_id` index, it is considered inconsistent. {{pcsm.short}} skips cloning inconsistent indexes during the replication stage.
+
+* **Failed index creation**: If {{pcsm.short}} cannot create an index during replication, it records the failure and proceeds with replication. The index creation is retried during the finalization stage.
+
+### Finalization stage
+
+During the finalization stage, {{pcsm.short}} finalizes index management on the target cluster to match the index configuration on the source:
+
+* **Unique index conversion**: Non-unique indexes that were originally unique on the source are converted back to unique indexes.
+
+* **Hidden indexes**: {{pcsm.short}} restores indexes as hidden on the target if they were hidden on the source.
+
+* **TTL indexes**: {{pcsm.short}} restores the original `expireAfterSeconds` value for TTL indexes on the target cluster so that documents expire according to the original configuration.
+
+* **Inconsistent indexes**: {{pcsm.short}} skips creating the inconsistent indexes detected during the replication stage and reports a warning about each such index in the logs. Check the logs for information about inconsistent indexes and recreate them manually on the target.
+
+* **Retry failed indexes**: {{pcsm.short}} retries creating the indexes that failed to be created during replication.
+
+* **Warning logs**: If {{pcsm.short}} cannot create an index even during finalization, it records a warning in the logs. Check the logs after finalization to identify any indexes that could not be created.
+
+This approach ensures that replication continues even if some indexes cannot be created immediately, while still attempting to create all indexes during finalization to match the source cluster's index configuration.

 ## Next steps

-Ready to try out PLM?
+Ready to try out {{pcsm.short}}?

 [Quickstart](installation.md){.md-button}
diff --git a/docs/limitations.md b/docs/limitations.md
index 1c96aa6..d81cc02 100644
--- a/docs/limitations.md
+++ b/docs/limitations.md
@@ -4,13 +4,24 @@ author: Radoslaw Szulgo
 ---
 # Known issues and limitations

-This page lists known limitations for using Percona Link for MongoDB
+This page lists known limitations for using {{pcsm.full_name}}.

 ## Versions and topology

-* Sharded clusters are not supported
 * MongoDB versions that reached End-of-Life are not supported
-* PLM connects only to the primary node in the replica set. You cannot force connection to secondary members using the [directConnection :octicons-link-external-16:](https://www.mongodb.com/docs/manual/reference/connection-string/#connection-string-formats) option. This option is ignored.
+* {{pcsm.short}} connects only to the primary node in the replica set. You cannot force connection to secondary members using the [directConnection :octicons-link-external-16:](https://www.mongodb.com/docs/manual/reference/connection-string/#connection-string-formats) option. This option is ignored.
+
+## Sharded clusters
+
+The following limitations apply specifically to sharded cluster replication:
+
+* {{pcsm.short}} replicates the data but doesn't replicate metadata. This means that the following information is not preserved from the source cluster:
+
+    * The primary shard name for a collection. The target cluster may have a different primary shard name.
+    * The chunk distribution information. The target cluster manages chunk distribution according to its own sharding configuration. See [Sharding support](sharding.md#limitations) for more information.
+    * The configuration of [zones for sharded data :octicons-link-external-16:](https://www.mongodb.com/docs/manual/core/zone-sharding/).
+
+* During data replication, the following commands are not supported: `movePrimary`, `reshardCollection`, `unshardCollection`, `refineCollectionShardKey`. Running them causes the replication to fail, and you must start it anew, from the initial data sync stage.

 ## Data types
@@ -22,7 +33,7 @@ This page lists known limitations for using Percona Link for MongoDB

 * Capped collections created or converted as the result of `cloneCollectionAsCapped` and `convertToCapped` commands are not supported. These operations don't change the event and are not captured by the change streams.
 * [Percona Memory Engine :octicons-link-external-16:](https://docs.percona.com/percona-server-for-mongodb/8.0/inmemory.html) is not supported
 * Persistent Query Settings (added in MongoDB 8) are not supported
-* documents that have [field names with periods and dollar signs :octicons-link-external-16:](https://www.mongodb.com/docs/manual/core/dot-dollar-considerations/) are not supported
+* Documents that have [field names with periods and dollar signs :octicons-link-external-16:](https://www.mongodb.com/docs/manual/core/dot-dollar-considerations/) are not supported

 ## Other
diff --git a/docs/sharding.md b/docs/sharding.md
new file mode 100644
index 0000000..f22daaf
--- /dev/null
+++ b/docs/sharding.md
@@ -0,0 +1,61 @@
+# Sharding support in {{pcsm.full_name}} (Technical Preview)
+
+!!! warning "Technical Preview"
+
+    Sharding support is available starting with {{pcsm.full_name}} 0.7.0 and is currently in technical preview stage. We encourage you to try it out and share your feedback. This will help us improve the feature in future releases.
+
+{{pcsm.full_name}} supports replication between sharded MongoDB clusters, enabling you to migrate or synchronize data from one sharded deployment to another. This capability allows you to migrate sharded clusters with minimal downtime, maintain disaster recovery setups across sharded environments, and synchronize data between sharded clusters for testing or development purposes.
+
+## Overview
+
+The workflow for sharded clusters is similar to replica sets. See [How {{pcsm.full_name}} works](intro.md#replication-workflows) for the complete workflow overview. The key difference is that {{pcsm.short}} connects to `mongos` instances on both the source and target clusters instead of replica set members.
+
+Since {{pcsm.short}} connects through `mongos`, the internal topology of each cluster doesn't matter. This means the source and target clusters can have different numbers of shards.
+
+Also, {{pcsm.short}} replicates data and not metadata. This means chunk distribution as well as the primary shard name for a collection may differ on source and target clusters.
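+
+For example, you can check how the same collection is distributed on each cluster after the migration. This is a minimal illustration: the hostnames, credentials, database, and collection names are placeholders, and the two outputs are expected to differ because each cluster balances chunks on its own:
+
+```{.bash .no-copy}
+# Show the per-shard data distribution for a collection on the source cluster
+mongosh "mongodb://source-user:password@mongos-source1:27017/" \
+  --eval 'db.getSiblingDB("mydb").mycollection.getShardDistribution()'
+
+# Show the distribution for the same collection on the target cluster;
+# a different chunk layout and primary shard here is expected behavior
+mongosh "mongodb://target-user:password@mongos-target1:27017/" \
+  --eval 'db.getSiblingDB("mydb").mycollection.getShardDistribution()'
+```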
+
+## Prerequisites
+
+* {{pcsm.full_name}} version 0.7.0 or later
+* Source and target clusters must be sharded MongoDB deployments
+* Both clusters must be running the same MongoDB version. Check [Version requirements](deployment.md#version-requirements) for more information about supported versions.
+
+## Connection string format
+
+When connecting to sharded clusters, use the standard MongoDB connection string format but specify the `mongos` hostname and port instead of replica set members:
+
+```{.text .no-copy}
+mongodb://user:pwd@mongos-host:port/[authdb]?[options]
+```
+
+Since {{pcsm.short}} connects through `mongos`, you don't need to specify individual shard members or config servers in the connection string. The `mongos` router handles routing to the appropriate shards.
+
+For detailed information about authentication and connection string configuration, see [Configure authentication in MongoDB](install/authentication.md).
+
+## Sharding-specific behavior
+
+### Initial sync preparation
+
+Before starting the initial sync, {{pcsm.short}} checks which collections are sharded on the source cluster and creates the corresponding sharded collections on the target cluster. The only sharding configuration preserved from the source cluster is the shard key; all other sharding details are handled internally by the target cluster.
+
+### Balancer operation
+
+{{pcsm.full_name}} connects to the source and target clusters via a `mongos` instance. Therefore, you do not need to disable the balancer on either the source or target cluster before starting replication. The target cluster's balancer continues to operate normally and manages chunk distribution according to its own sharding configuration and balancer settings.
+
+### Chunk distribution
+
+{{pcsm.short}} does not preserve chunk distribution information from the source cluster. The target cluster manages chunk distribution internally through its balancer. This means that after replication, chunks may be distributed differently on the target cluster compared to the source cluster, which is expected behavior.
+
+Since the target cluster already has information about which collections are sharded, it handles sharding internally. {{pcsm.short}} does not interfere with the target cluster's sharding configuration or chunk distribution.
+
+## Usage
+
+The commands and API endpoints for sharded cluster replication are the same as for replica set replication, and the workflow follows the same stages. See [How {{pcsm.full_name}} works](intro.md#replication-workflows) for the complete workflow overview and [Use {{pcsm.full_name}}](install/usage.md) for detailed command instructions.
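+
+For example, after the initial sync you can confirm that the sharded collections exist on the target with the same shard keys as on the source. This is a sketch, not part of the {{pcsm.short}} tooling: the hostnames and credentials are placeholders, and it simply queries the standard `config.collections` catalog through each cluster's `mongos`:
+
+```{.bash .no-copy}
+# List sharded collections and their shard keys on the source cluster
+mongosh "mongodb://source-user:password@mongos-source1:27017/" \
+  --eval 'db.getSiblingDB("config").collections.find({}, { _id: 1, key: 1 }).toArray()'
+
+# The same query on the target should report matching shard keys,
+# even though chunk placement and primary shards may differ
+mongosh "mongodb://target-user:password@mongos-target1:27017/" \
+  --eval 'db.getSiblingDB("config").collections.find({}, { _id: 1, key: 1 }).toArray()'
+```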
+ +## Next steps + +* [Install {{pcsm.full_name}}](installation.md) +* [Configure authentication](install/authentication.md) +* [Start replication](install/usage.md) +* [Monitor replication status](install/usage.md#check-the-replication-status) +* [Monitor PCSM performance with Percona Monitoring and Management](pmm-setup.md) diff --git a/mkdocs-base.yml b/mkdocs-base.yml index 808b19a..f503fa2 100644 --- a/mkdocs-base.yml +++ b/mkdocs-base.yml @@ -156,6 +156,8 @@ nav: - How PLM works: intro.md - compare.md - deployment.md + - sharding.md + - limitations.md - Get started: - Quickstart: installation.md - System requirements: system-requirements.md @@ -170,8 +172,7 @@ nav: - PLM commands: plm-commands.md - PLM HTTP API: api.md - Troubleshooting and debugging: - - Known limitations: limitations.md - - troubleshooting.md + - Troubleshooting guide: troubleshooting.md - oplog-sizing.md - logging.md - pmm-setup.md