Testing shows that increasing `NodeAgent` CPU can significantly improve backup and restore times.

[IMPORTANT]
====
It is not recommended to use Kopia without limits in production environments, on nodes that run production workloads, because Kopia consumes resources aggressively. However, running Kopia with limits that are too low results in CPU throttling and slow backup and restore operations.

You can tune your {product-title} environment based on your performance analysis and preference. Use CPU limits in the workloads when you use Kopia for file system backups.

If you do not use CPU limits on the pods, the pods can use excess CPU when it is available. If you specify CPU limits, the pods might be throttled when they exceed their limits. For this reason, CPU limits on pods are generally considered an anti-pattern; weigh this guidance against the need to contain Kopia's resource consumption on nodes that run production workloads.

Ensure that you specify CPU requests accurately so that pods can take advantage of excess CPU. Resource allocation is guaranteed based on CPU requests rather than CPU limits. For a sample `NodeAgent` resource configuration, see the example that follows this admonition.

Testing showed that running Kopia with 20 cores and 32 Gi memory supported backup and restore operations of over 100 GB of data, multiple namespaces, or over 2000 pods in a single namespace. Testing detected no CPU limiting or memory saturation with these resource specifications.
====
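
The following `DataProtectionApplication` (DPA) snippet is a minimal sketch of how you might express `NodeAgent` CPU and memory requests for Kopia file system backups. It assumes that your OADP version exposes the `spec.configuration.nodeAgent.podConfig.resourceAllocations` field; the name, namespace, and values are illustrative only, so size the requests from your own performance analysis. Whether you also set `limits` is the trade-off described in the preceding admonition.

[source,yaml]
----
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample            # illustrative name
  namespace: openshift-adp
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      podConfig:
        resourceAllocations:
          requests:           # accurate requests let pods use excess CPU without throttling
            cpu: "8"          # illustrative value; tune from your performance analysis
            memory: 16Gi
----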

In some environments, you might need to adjust Ceph MDS pod resources to avoid pod restarts, which occur when the default settings cause resource saturation. To set these limits, add the following lines to the storage cluster Custom Resource (CR):

[source,yaml]
----
resources:
  mds:
    limits:
      cpu: "3"
      memory: 128Gi
    requests:
      cpu: "3"
      memory: 8Gi
----
For more information about how to set resource limits on Ceph MDS pods, see link:https://docs.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.15/html/troubleshooting_openshift_data_foundation/changing-resources-for-the-openshift-data-foundation-components_rhodf#changing_the_cpu_and_memory_resources_on_the_rook_ceph_pods[Changing the CPU and memory resources on the rook-ceph pods].
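
For orientation, the following sketch shows where the preceding `resources` stanza sits in the storage cluster CR. It assumes the default `StorageCluster` name `ocs-storagecluster` in the `openshift-storage` namespace; verify the exact field path against the linked procedure before applying it.

[source,yaml]
----
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster    # default name; confirm in your cluster
  namespace: openshift-storage
spec:
  resources:
    mds:                      # Ceph MDS pods (rook-ceph-mds-*)
      limits:
        cpu: "3"
        memory: 128Gi
      requests:
        cpu: "3"
        memory: 8Gi
----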