diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc
index 8133ad9abe..b9f7d7b5c0 100644
--- a/modules/ROOT/nav.adoc
+++ b/modules/ROOT/nav.adoc
@@ -360,6 +360,7 @@ include::cli:partial$cbcli/nav.adoc[]
 **** xref:rest-api:rest-manage-cluster-connections.adoc[Managing Cluster Connections]
 **** xref:rest-api:rest-set-up-alternate-address.adoc[Managing Alternate Addresses]
 **** xref:rest-api:rest-cluster-email-notifications.adoc[Setting Alerts]
+ **** xref:rest-api:disk-usage-limits.adoc[]
 *** xref:rest-api:rest-status-and-events-overview.adoc[Status and Events]
 **** xref:rest-api:rest-get-cluster-tasks.adoc[Getting Cluster Tasks]
diff --git a/modules/introduction/partials/new-features-80.adoc b/modules/introduction/partials/new-features-80.adoc
index e609f68246..3aea2f25f8 100644
--- a/modules/introduction/partials/new-features-80.adoc
+++ b/modules/introduction/partials/new-features-80.adoc
@@ -107,6 +107,14 @@ The metric includes the first 32 characters sent by any clients up to the first
 and limits the number of metrics to 100.
 Additional information sent by clients at connection time can be found in the logs.
+[#section-new-feature-800-disk-limits]
+https://jira.issues.couchbase.com/browse/MB-59113[MB-59113] Prevent buckets from causing nodes to run out of disk space::
+ You can configure Couchbase Server to prevent writes to buckets from consuming all of the disk space on a node.
+ You set a minimum amount of space that every node must have free in the filesystem used by the Data Service.
+ If a node has less free space than this limit, Couchbase Server prevents writes to buckets.
+ Even if you do not set this limit, Couchbase Server now alerts you when a node starts to run out of disk space.
+ See xref:learn:buckets-memory-and-storage/storage-settings.adoc#filesystem-free-space-and-usage-limits[Filesystem Free Space and Usage Limits] for more information.
+
 [#section-new-feature-800-XDCR]
 === XDCR
diff --git a/modules/learn/pages/buckets-memory-and-storage/storage-settings.adoc b/modules/learn/pages/buckets-memory-and-storage/storage-settings.adoc
index cf7553c04f..4e15f39bf1 100644
--- a/modules/learn/pages/buckets-memory-and-storage/storage-settings.adoc
+++ b/modules/learn/pages/buckets-memory-and-storage/storage-settings.adoc
@@ -1,5 +1,5 @@
 = Storage Properties
-:description: Couchbase Server provides persistence, whereby certain items are stored on disk as well as in memory; and reliability is thereby enhanced.
+:description: Couchbase Server stores certain items on disk as well as in memory to provide persistence and enhance reliability.
 :page-aliases: understanding-couchbase:buckets-memory-and-storage/storage,architecture:storage-architecture,learn:buckets-memory-and-storage/storage.adoc
 [abstract]
@@ -8,153 +8,222 @@
 [#understanding-couchbase-storage]
 == Understanding Couchbase Storage
-Couchbase Server stores certain items in compressed form on disk; and, whenever required, removes them.
-This allows data-sets to exceed the size permitted by existing memory-resources; since undeleted items not currently in memory can be restored to memory from disk, as needed.
-It also facilitates backup-and-restore procedures.
+In addition to storing data in memory, Couchbase Server also stores data in Couchbase buckets on disk.
+Saving data to disk provides persistence so that data is not lost if a node restarts or fails.
+It also lets your data sets exceed the limits of the memory in your cluster.
+Couchbase Server restores data that's not in memory from disk when needed.
-Generally, a client's interactions with the server are not blocked during disk-access procedures.
-However, if a specific item is being restored from disk to memory, the item is not made available to the client until the item's restoration is complete.
+Ephemeral buckets and their items exist only in memory and are never written to disk.
+For more details, see xref:buckets-memory-and-storage/buckets.adoc[Buckets].
-Not all items are written to disk: _Ephemeral_ buckets and their items are maintained in memory only.
-See xref:buckets-memory-and-storage/buckets.adoc[Buckets] for information.
+Couchbase Server compresses the data it writes to disk.
+Compression reduces the amount of disk space used, which can help reduce costs.
+It also makes the backup and restore procedures easier.
+In addition to compressing data written to disk, Couchbase Server can also compress data in memory.
+See xref:buckets-memory-and-storage/compression.adoc[Compression] for more information.
-Items written to disk are always written in compressed form.
-Based on bucket configuration, items may be maintained in compressed form in memory also.
-See xref:buckets-memory-and-storage/compression.adoc[Compression] for information.
+Disk access does not interrupt most client interactions.
+However, if a client requests an item that's on disk, it must wait while Couchbase Server reads, decompresses, and copies the data into memory.
-Items can be removed from disk based on a configured point of expiration, referred to as _Time-To-Live_.
-See xref:data/expiration.adoc[Expiration] for information.
+You can remove items from disk based on a configured expiration time, called time to live.
+See xref:data/expiration.adoc[Expiration] for details.
-For illustrations of how Couchbase Server saves new and updates existing Couchbase-bucket items, thereby employing both memory and storage resources, see xref:buckets-memory-and-storage/memory-and-storage.adoc[Memory and Storage].
+To see how Couchbase Server saves new items and updates existing items in Couchbase buckets, using both memory and storage, see xref:buckets-memory-and-storage/memory-and-storage.adoc[Memory and Storage].
 [#threading]
 == Threading
-Synchronized, multi-threaded _readers_ and _writers_ provide simultaneous, high-performance operations for data on disk.
-Conflicts are avoided by assigning each thread (reader or writer) a specific subset of the 1024 vBuckets for each Couchbase bucket.
+Couchbase Server uses synchronized, multi-threaded readers and writers to provide high-performance, simultaneous operations for data on disk.
+Readers and writers each have their own set of threads.
+To prevent conflicts, each thread is responsible for reading or writing a subset of the vBuckets in a Couchbase bucket.
-Couchbase Server allows the number of threads allocated per node for reading and writing to be configured by the administrator.
-The maximum thread-allocation that can be specified for each is _64_, the minimum _1_.
+You can control the number of reader and writer threads.
+In the Couchbase Server Web Console, you can have Couchbase Server automatically choose a default value or a value that optimizes disk I/O.
+You can also manually set the number of threads per node to a value between 1 and 64.
+Using a higher number of threads may improve performance if your hardware supports it, such as when your CPU has a large number of cores.
+Increasing the number of writer threads helps optimize durable writes.
+For more information, see xref:learn:data/durability.adoc[Durability]. -A high thread-allocation may improve performance on systems whose hardware-resources are commensurately supportive (for example, where the number of CPU cores is high). -In particular, a high number of _writer_ threads on such systems may significantly optimize the performance of _durable writes_: see xref:learn:data/durability.adoc[Durability], for information. +Setting the number of threads higher than your hardware supports can reduce performance. +Test changes to the default thread allocation before applying them to production systems. +As a starting point, set the number of reader and writer threads to match the queue depth of your I/O subsystem. -Note, however, that a high thread-allocation might _impair_ some aspects of performance on less appropriately resourced nodes. -Consequently, changes to the default thread-allocation should not be made to production systems without prior testing. -A starting-point for experimentation is to establish the numbers for reader threads and writer threads as each equal to the _queue depth_ of the underlying I/O subsystem. +For details on setting reader and writer thread counts, see xref:manage:manage-settings/general-settings.adoc#data-settings[Data Settings]. -See the _General-Settings_ information on xref:manage:manage-settings/general-settings.adoc#data-settings[Data Settings], for details on how to establish appropriate numbers of reader and writer threads. +You can also configure thread counts for the NonIO and AuxIO thread pools. +The NonIO thread pool runs in-memory tasks, such as the durability timeout task. +The AuxIO thread pool runs auxiliary I/O tasks, such as the access log task. +Set the thread count for each between 1 and 64. -Note also that the number of threads can also be configured for the _NonIO_ and _AuxIO_ thread pools: +To view thread status, use the [.cmd]`cbstats` command with the [.param]`raw workload` option. +For more information, see xref:cli:cbstats-intro.adoc[cbstats]. -* The _NonIO_ thread pool is used to run _in memory_ tasks -- for example, the _durability timeout_ task. - -* The _AuxIO_ thread pool is used to run _auxiliary I/O_ tasks -- for example, the _access log_ task. - -Again, the maximum thread-allocation that can be specified for each is _64_, the minimum _1_. - -Thread-status can be viewed, by means of the [.cmd]`cbstats` command, specified with the [.param]`raw workload` option. -See xref:cli:cbstats-intro.adoc[cbstats] for information. - -For information on using the REST API to manage thread counts, see xref:rest-api:rest-reader-writer-thread-config.adoc[Setting Thread Allocations]. +To manage thread counts using the REST API, see xref:rest-api:rest-reader-writer-thread-config.adoc[Setting Thread Allocations]. [#deletion] == Deletion -Items can be deleted by a client application: either by immediate action, or by setting a _Time-To-Live_ (TTL) value: this value is established through accessing the `TTL` metadata field of the item, which establishes a future point-in-time for the item's _expiration_. -When the point-in-time is reached, Couchbase Server deletes the item. +You can delete items either explicitly or by setting a time to live (TTL) value. +When the TTL expires, Couchbase Server deletes the item. -Following deletion by either method, a _tombstone_ is maintained by Couchbase Server, as a record (see below). +After deletion, Couchbase Server keeps a tombstone as a record (see the next section for more information). 
-An item's TTL can be established either directly on the item itself, or via the bucket that contains the item. -For information, see xref:data/expiration.adoc[Expiration]. +You can set an item's TTL directly on the item or at the bucket level. +For more information, see xref:data/expiration.adoc[Expiration]. == Tombstones -A _tombstone_ is a record of an item that has been removed. -Tombstones are maintained in order to provide eventual consistency, between nodes and between clusters. +A tombstone records an item removed from the database. +Couchbase Server uses tombstones to maintain consistency between nodes and clusters. -Tombstones are created for the following: +Couchbase Server creates tombstones when you: -* _Individual documents_. -The tombstone is created when the document is _deleted_; and contains the former document's key and metadata. +* Delete an individual document. +Couchbase Server creates a tombstone that contains the document's key and metadata. -* _Collections_. -The tombstone is created when the collection is _dropped_; and contains information that includes the collection-id, the collection’s scope-id, and a manifest-id that records the dropping of the collection. +* Drop a collection. +Couchbase Server creates a tombstone that includes the collection ID, scope ID, and a manifest ID that records the drop event. + -All documents that were in the dropped collection are deleted when the collection is dropped. -No tombstones are maintained for such documents: moreover, any tombstones for deleted documents that existed in the collection prior to its dropping are themselves removed when the collection is dropped; and consequently, only a collection-tombstone remains, when a collection is dropped. -The collection-tombstone is replicated via DCP as a single message (ordered with respect to mutations occurring in the vBucket), to replicas and other DCP clients, to notify such recipients that the collection has indeed been dropped. -It is then the responsibility of each recipient to purge anything it still contains that belonged to the dropped collection. +When you drop a collection, Couchbase Server deletes all documents in it. +It does not maintain tombstones for those deleted documents. +Couchbase Server also deletes any document tombstones that were in the collection before you dropped it. +After you drop a collection, only the collection tombstone remains. +Couchbase Server replicates the collection tombstone as a single message (ordered with respect to mutations in the vBucket) to replicas and other DCP clients. +This message notifies recipients that you dropped the collection. +Each recipient is then responsible for purging anything it still contains from the dropped collection. -The _Metadata Purge Interval_ establishes the frequency with which Couchbase Server _purges_ itself of tombstones of both kinds: which means, removes them fully and finally. -The Metadata Purge Interval setting runs as part of auto-compaction (see xref:learn:buckets-memory-and-storage/storage.adoc#append-only-writes-and-auto-compaction[Append-Only Writes and Auto-Compaction], below). +The Metadata Purge Interval setting controls how often Couchbase Server purges tombstones of both kinds. +When Couchbase Server purges a tombstone, it removes it completely. +The Metadata Purge Interval runs as part of auto-compaction. +See xref:learn:buckets-memory-and-storage/storage.adoc#append-only-writes-and-auto-compaction[Append-Only Writes and Auto-Compaction] for more information. 
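+
+For example, the following `couchbase-cli` call adjusts the purge interval as part of the auto-compaction settings; this is a minimal sketch, and the cluster address, credentials, and the three-day interval are placeholder values:
+
+[source,bash]
+----
+# Set the tombstone (metadata) purge interval to 3 days.
+couchbase-cli setting-compaction \
+  --cluster http://127.0.0.1:8091 \
+  --username Administrator \
+  --password password \
+  --metadata-purge-interval 3
+----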
-For more information, see xref:data/expiration.adoc#post-expiration-purging[Post-Expiration Purging], in xref:data/expiration.adoc[Expiration].
+For more information, see xref:data/expiration.adoc#post-expiration-purging[Post-Expiration Purging] in xref:data/expiration.adoc[Expiration].
 [#disk-paths]
 == Disk Paths
-At node-initialization, Couchbase Server allows up to four custom paths to be established, for the saving of data to the filesystem: these are for the Data Service, the Index Service, the Analytics Service, and the Eventing Service. Note that the paths are node-specific: consequently, the data for any of these services may occupy a different filesystem-location, on each node.
+When you initialize a node, you choose where Couchbase Server stores data for most services.
+You can specify the location where Couchbase Server stores data on a node for the following services:
-For information on setting data-paths, see xref:manage:manage-nodes/initialize-node.adoc[Initialize a Node].
+* Data Service
+* Index Service
+* Analytics Service
+* Eventing Service
-[#append-only-writes-and-auto-compaction]
-== Append-Only Writes and Auto-Compaction
+In addition, you can use local paths for backup repositories.
+See xref:learn:services-and-indexes/services/backup-service.adoc#repositories[Repositories] for more information.
+
+Couchbase Server has a default storage location for logs that's platform-specific.
+For example, on Linux, the default location is `/opt/couchbase/var/lib/couchbase/logs`.
+
+For information about setting data paths, see xref:manage:manage-nodes/initialize-node.adoc[Initialize a Node].
+
+[[filesystem-free-space-and-usage-limits]]
+=== Filesystem Free Space and Usage Limits
+
+Running out of disk space on any filesystem can cause errors.
+In particular, running out of disk space on the filesystem containing the Data Service storage path can make recovery difficult.
+Recovery problems could lead to data loss.
+
+By default, Couchbase Server alerts you if the filesystem containing the following storage paths becomes more than 90% full:
+
+* Data Service
+* Index Service
+* `ns_log` and `audit_log` paths
+
+See xref:manage:manage-settings/configure-alerts.adoc[] for more information about alerts.
+You can change how full the disk becomes before triggering this alert by changing the xref:rest-api:rest-cluster-email-notifications.adoc#maxdatadiskusedperc[maxDataDiskUsedPerc] alert limit.
-Couchbase Server uses an _append-only_ file-write format; which helps to ensure files' internal consistency, and reduces the risk of corruption.
-Necessarily, this means that every change made to a file — whether an addition, a modification, or a deletion — results in a new entry being created at the end of the file: therefore, a file whose user-data is diminished by deletion actually grows in size.
+Beyond alerting you, Couchbase Server does not take any action to prevent the filesystem from becoming full.
-File-sizes should be periodically reduced by means of _compaction_.
-This operation can be performed either manually, on a specified bucket; or on an automated, scheduled basis, either for specified buckets, or for all buckets.
-For information on performing manual compaction with the CLI, see xref:cli:cbcli/couchbase-cli-bucket-compact.adoc[bucket-compact].
-For information on configuring auto-compaction with the CLI, see xref:cli:cbcli/couchbase-cli-setting-compaction.adoc[setting-compaction].
-For all information on using the REST API for compaction, see the xref:rest-api:compaction-rest-api.adoc[Compaction API].
+You can enable a feature to have Couchbase Server stop writing to the Data Service storage path when its filesystem reaches a certain percentage of disk usage.
+The default (and recommended) limit is 85%, which means Couchbase Server stops writing data when the filesystem is 85% or more full.
+Enabling this limit helps avoid potential issues with recovery.
-For information on configuring auto-compaction with Couchbase Web Console, see xref:manage:manage-settings/configure-compact-settings.adoc[Auto-Compaction].
+When you set a disk usage limit, Couchbase Server starts alerting you when the filesystem fills to within 10% of the threshold you set.
+This alert is in addition to the default alert when the filesystem is 90% full.
+For example, if you use the default limit of 85%, Couchbase Server alerts you when the filesystem reaches 75% full.
+When the filesystem reaches the limit, Couchbase Server stops writing to the Data Service storage path.
+Any attempt to write to the Data Service storage path results in an `EBucketDiskSpace` error.
+To re-enable writes, you must reduce the disk usage below the limit you set.
+
+To learn how to set the disk usage limit using the Couchbase Server Web Console, see xref:manage:manage-settings/general-settings.adoc#data-settings[Data Settings].
+To set the limit using the REST API, see xref:rest-api:disk-usage-limits.adoc[].
+
+[#append-only-writes-and-auto-compaction]
+== Append-Only Writes and Auto-Compaction
+
+When mutating data, Couchbase Server only appends to data files, instead of rewriting them.
+This approach helps maintain file consistency and reduces the risk of file corruption.
+Every time you add, modify, or delete data, Couchbase Server creates a new entry at the end of the data files.
+As a result, files grow in size even when you delete data.
+
+To prevent data files from growing too large, Couchbase Server periodically compacts them.
+Compaction rewrites the file, applying additions, modifications, and deletions before saving a new version of the file.
+You can change the schedule Couchbase Server follows to compact data.
+See xref:manage:manage-settings/configure-compact-settings.adoc[Auto-Compaction] for more information.
+For information about configuring auto-compaction with the command line, see xref:cli:cbcli/couchbase-cli-setting-compaction.adoc[setting-compaction].
+
+You can also perform compaction manually on a specific bucket.
+For information about performing manual compaction with the command line, see xref:cli:cbcli/couchbase-cli-bucket-compact.adoc[bucket-compact].
+
+For all information about using the REST API for compaction, see the xref:rest-api:compaction-rest-api.adoc[Compaction API].
 == Disk I/O Priority
-_Disk I/O_ — reading items from and writing them to disk — does not block client-interactions: disk I/O is thus considered a _background task_.
-The priority of disk I/O (along with that of other background tasks, such as item-paging and DCP stream-processing) is configurable _per bucket_.
-This means, for example, that one bucket's disk I/O can be granted priority over another's.
+Disk I/O means reading items from and writing them to disk.
+Disk I/O does not block client interactions because it runs as a background task.
+You can configure the priority of disk I/O and other background tasks, such as item paging and DCP stream processing, for each bucket.
+For example, you can give one bucket a higher disk I/O priority than another. For further information, see xref:manage:manage-buckets/create-bucket.adoc[Create a Bucket]. [#storage-settings-ejection-policy] == Ejection Policy -Ejection is the policy which Couchbase will adopt to prevent data loss due to memory exhaustion. The policies available depend on the type of bucket being created. +To improve performance, Couchbase Server tries to keep as much data as possible in memory. +When memory fills, Couchbase Server ejects data from memory to make room for new data. +Ejection policies control how Couchbase Server decides what data to remove. + +Ejection has a different effect on different bucket types. +In an ephemeral bucket, data that Couchbase Server ejects is lost, because it only exists in memory. +In Couchbase buckets, data is removed from memory but still exists on disk. +If the data is needed again, Couchbase Server can reload the data from disk back into memory. + +The available ejection policies depend on the bucket type, as shown in the following table. -Note that in _Capella_, Couchbase buckets are referred to as _Memory and Disk_ buckets; while Ephemeral buckets are referred to as _Memory Only_ buckets. .Ejection policies |=== |Policy |Bucket type |Description |No Ejection -|_Ephemeral_ -|If memory is exhausted then the buckets are set to read-only to prevent data loss. This is the default setting. +|Ephemeral +|When memory runs out, the bucket becomes read-only to prevent data loss. +This is the default setting. -|NRU{empty}footnote:[Not Recently Used] Ejection -|_Ephemeral_ -|The documents that have not been recently used are ejected from memory. +|Not Recently Used (NRU) Ejection +|Ephemeral +|The server removes from memory the documents that have not been used for the longest time. |Value Only Ejection -|_Couchbase_ -|In low memory situations, this policy wll eject values and data from memory, but keys and metadata will be retained. This is the default policy for _Couchbase_ buckets. +|Couchbase +|When memory is low, Couchbase Server ejects values and data from memory but keeps keys and metadata. +This is the default policy for Couchbase buckets. |Full Ejection -|_Couchbase_ -|Under this policy, data, keys and metadata are ejected from memory. - +|Couchbase +|The server ejects data, keys, and metadata from memory. |=== -The policy can be set using the xref:rest-api:rest-bucket-create.adoc#evictionpolicy[REST API] when the bucket is created. -For more information on ejection policies, read https://blog.couchbase.com/a-tale-of-two-ejection-methods-value-only-vs-full/ +You can set the policy using the xref:rest-api:rest-bucket-create.adoc#evictionpolicy[REST API] when you create the bucket. +For more information about ejection policies, read https://blog.couchbase.com/a-tale-of-two-ejection-methods-value-only-vs-full/ include::partial$full-ejection-note.adoc[] + +NOTE: In Capella, Couchbase buckets are called Memory and Disk buckets. +Ephemeral buckets are called Memory Only buckets. 
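+
+For example, the following REST call creates a Couchbase bucket that uses full ejection.
+This is a minimal sketch: the bucket name, RAM quota, node address, and credentials are placeholder values.
+
+[source,bash]
+----
+# Create a Couchbase bucket with the fullEviction ejection policy.
+curl -u Administrator:password -X POST \
+  'http://127.0.0.1:8091/pools/default/buckets' \
+  -d name=my-test-bucket \
+  -d bucketType=couchbase \
+  -d ramQuota=256 \
+  -d evictionPolicy=fullEviction
+----
+
+For Couchbase buckets, the accepted `evictionPolicy` values are `valueOnly` and `fullEviction`; for Ephemeral buckets, they are `noEviction` and `nruEviction`.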
diff --git a/modules/manage/assets/images/manage-settings/data-settings.png b/modules/manage/assets/images/manage-settings/data-settings.png
index f2f4b651e4..80cf366b6d 100644
Binary files a/modules/manage/assets/images/manage-settings/data-settings.png and b/modules/manage/assets/images/manage-settings/data-settings.png differ
diff --git a/modules/manage/pages/manage-settings/configure-alerts.adoc b/modules/manage/pages/manage-settings/configure-alerts.adoc
index 59beabb6b0..9faacdb506 100644
--- a/modules/manage/pages/manage-settings/configure-alerts.adoc
+++ b/modules/manage/pages/manage-settings/configure-alerts.adoc
@@ -122,7 +122,7 @@ The listed alerts are as follows.
 | The auto-failover system stops auto-failover when the maximum number of spare nodes available has been reached.
 | `auto_failover_maximum_reached`
-| Node wasn't auto-failed-over as other nodes are down at the same time
+| Node was not auto-failed-over as other nodes are down at the same time
 | Auto-failover does not take place if there is already a node down.
 | `auto_failover_other_nodes_down`
@@ -202,17 +202,30 @@ The size of the change history may need to be increased.
 For information, on establishing change-history size, see xref:rest-api:rest-bucket-create.adoc[Creating and Editing Buckets].
 | `history_size_warning`
-| Low Indexer Residence Percentage
+| Approaching Indexer low resident percentage
 | Warns that the Index Service is, on a given node, occupying a percentage of available memory that is below an established threshold, the default for which is `10`.
 | `indexer_low_resident_percentage`
 a| [#memcached-alert]
 Memcached connection threshold exceeded.
 | Trigger an alert if the number of `system` or `user` connections used by the data service exceeds a configurable percentage of the available connections{blank}xref:#memcached-alert-foonote[^1^].
-For information on setting the `memcached` alert thresholds, see xref:rest-api:rest-cluster-email-notifications.adoc#setting-memcache-alert-threshold[Setting alerts].
+For information about setting the `memcached` alert thresholds, see xref:rest-api:rest-cluster-email-notifications.adoc#setting-memcache-alert-threshold[Setting alerts].
 | `memcached_connections`
+| Rebalance stage appears stuck
+| An ongoing KV or index rebalance has not made progress during the timeout period set by the `stuckRebalanceThresholdIndex` and `stuckRebalanceThresholdKV` alert limits.
+The default value for the timeout period is 1800 seconds (30 minutes).
+| `stuck_rebalance`
+| Disk usage is within 10% of maximum for data service mutations
+| The used disk space on the filesystem containing the Data Service storage path is within 10% of the configured limit.
+This limit is set either through the Advanced Data Settings in the Couchbase Server Web Console, or by using the `/settings/resourceManagement` REST API endpoint.
+See xref:learn:buckets-memory-and-storage/storage-settings.adoc#filesystem-free-space-and-usage-limits[Filesystem Free Space and Usage Limits] for more information.
+| `disk_guardrail`
+
+| Index has diverging replicas
+| The indexer has detected inconsistencies between an index and its replicas.
+| `indexer_diverging_replicas`
 |===
diff --git a/modules/manage/pages/manage-settings/general-settings.adoc b/modules/manage/pages/manage-settings/general-settings.adoc
index c002144398..9286fe4cbd 100644
--- a/modules/manage/pages/manage-settings/general-settings.adoc
+++ b/modules/manage/pages/manage-settings/general-settings.adoc
@@ -156,35 +156,54 @@ For information, see xref:learn:clusters-and-availability/rebalance.adoc#limitin
 [#data-settings]
 === Data Settings
+The fields that appear when you expand the *Advanced Data Settings* section let you control filesystem use limits and I/O thread allocation.
-The settings in this area control the numbers of threads that are allocated _per node_ by Couchbase Server to the _reading_ and _writing_ of data, respectively.
-The maximum thread-allocation to each is _64_, the minimum _4_.
+image::manage-settings/data-settings.png["The Data Settings panel",align=center]
-A high thread-allocation may improve performance on systems whose hardware-resources are commensurately supportive (for example, where the number of CPU cores is high).
-In particular, a high number of _writer_ threads on such systems may significantly optimize the performance of _durable writes_: see xref:learn:data/durability.adoc[Durability], for information.
+*Prevent writes to buckets when storage becomes % full* controls whether Couchbase Server prevents the filesystem containing the data path from becoming full.
+This option is off by default.
+When selected, Couchbase Server prevents writes to buckets when the filesystem fills to the percentage you set in the *% full* field.
+The default value for this field is 85%.
-Note, however, that a high thread-allocation might _impair_ some aspects of system-performance on less appropriately resourced nodes.
-Consequently, changes to the default thread-allocation should not be made to production systems without prior testing.
+See xref:learn:buckets-memory-and-storage/storage-settings.adoc#filesystem-free-space-and-usage-limits[Filesystem Free Space and Usage Limits] for more information.
-Left-clicking on the *Advanced Data Settings* tab displays radio buttons for *Reader Thread Settings* and *Writer Thread Settings*:
+The *Reader Thread Settings* and *Writer Thread Settings* options let you control the number of threads the Data Service uses on each node to read and write data.
+Allocating more threads can improve performance.
+In particular, adding more writer threads can improve durable write performance.
+See xref:learn:data/durability.adoc[] for more information.
+However, setting the number of threads too high can reduce performance if the node is not capable of handling the additional threads.
-image::manage-settings/data-settings.png["The Data Settings panel",548,align=center]
-Each group has the same, three radio buttons, which are as follows:
+Both *Reader Thread Settings* and *Writer Thread Settings* offer the same options:
+Default::
+Couchbase Server sets the number of threads to a balanced value suitable for most workloads.
-* *Default*.
-The number of threads allocated is set to a balanced value which is reasonable for most workloads.
+Disk i/o optimized::
+Couchbase Server sets the number of threads equal to the number of CPU cores on the node.
+For buckets using the Magma storage engine, consider using this setting under the following conditions:
++
+--
+For Writes::
++
+* When reducing the latency of durable writes is more important to you than write throughput.
+* For write-intensive workloads where you want greater throughput and you find the SSD is not saturated using the default setting. -* *Disk i/o optimized*. -The number of threads allocated is equal to the number of CPU cores for the node. + -In order to get maximum performance from Magma for disk-oriented workloads, it is recommended to set the Writer Threads to 'Disk i/o optimized'. This setting will ensure there are enough threads to sustain high write rates. + -To Learn more about the Magma Storage Engine, see xref:learn:buckets-memory-and-storage/storage-engines.adoc#storage-engine-magma[Storage Engines -- Magma Storage Engine]. +For Reads:: ++ +* When you have low memory data residency, use this option for better throughput and latency. +* When your data is on a high-latency virtualized storage device such as EBS volumes on the cloud. +In this case, a larger I/O queue depth helps saturate the disk IOPS/bandwidth. -* *Fixed value*. -The number of threads allocated is equal to the value selected from the pull-down menu. +For more details, see xref:learn:buckets-memory-and-storage/storage-engines.adoc#storage-engine-magma[Magma]. +-- + +Fixed value:: +When you select this option, a field appears in which you can select the number of threads to use. + -NOTE: A good rule of thumb is to set each of readers and writers equal to the queue depth of the underlying IO subsystem (i.e. readers = queue_depth and writers = queue_depth). + -However, for best performance it is recommended to benchmark with different settings and pick the one that best meets the throughput and latency requirements in your environment. +NOTE: As a guideline, set the number of reader and writer threads equal to the queue depth of your IO subsystem (for example, readers = queue_depth and writers = queue_depth). +For best performance, benchmark different settings and choose the one that meets your throughput and latency requirements. + +See xref:learn:buckets-memory-and-storage/storage-settings.adoc#threading[Threading] for more information about reader and writer threads. [#query-settings] === Query Settings diff --git a/modules/rest-api/pages/disk-usage-limits.adoc b/modules/rest-api/pages/disk-usage-limits.adoc new file mode 100644 index 0000000000..1c01815a9e --- /dev/null +++ b/modules/rest-api/pages/disk-usage-limits.adoc @@ -0,0 +1,166 @@ += Set Data Disk Use Limits +:description: You can have Couchbase Server stop writing to the data storage path when it is a specific percentage full. This option helps prevent the data path from running out of disk space and making recovery difficult. +:keywords: storage, disk usage limits, disk space, data storage path + + +[abstract] +{description} + +== Description + +Allowing any filesystem on a node to become full can cause errors. +If the filesystem containing the data storage path becomes full, recovery can be difficult. +This endpoint allows you to set a limit on the percentage of disk space that can be used by the data storage path. +When the data storage path reaches this limit, Couchbase Server stops writing to it. +See xref:learn:buckets-memory-and-storage/storage-settings.adoc#filesystem-free-space-and-usage-limits[Filesystem Free Space and Usage Limits] for more information. + +== HTTP Methods + +This API endpoint supports the following methods: + +* <<#get-settings>> +* <<#set-usage-limit>> + + +[[get-settings]] +== Get Data Disk Use Limits + +Use this endpoint to get the current data disk use limit settings. 

.Get Limit Settings
----
GET /settings/resourceManagement
----

=== curl Syntax

[source,bash]
----
curl -u $USER:$PASSWORD -X GET \
  'http://{HOST}:{PORT}/settings/resourceManagement'
----

.Path Parameters
:priv-link: get-privs
include::partial$user-pw-host-port-params.adoc[]

[[get-privs]]
=== Required Privileges

You must have at least one of the following roles:

* xref:learn:security/roles.adoc#full-admin[Full Admin]
* xref:learn:security/roles.adoc#cluster-admin[Cluster Admin]
* xref:learn:security/roles.adoc#local-user-security-admin[Local User Admin]
* xref:learn:security/roles.adoc#security-admin[Security Admin]

=== Responses

`200 OK`::
Returns a JSON object containing the current data disk use limit settings.
See <> for the schema of the output.

`403 Forbidden`::
Returned if the user does not have one of the roles listed in <>.

[#get-settings-example]
=== Examples

The following gets the current settings for data disk use limits:

[source,bash]
----
curl -u Administrator:password \
  -X GET 'http://127.0.0.1:8091/settings/resourceManagement' | jq
----

The JSON returned by this command shows the current settings for data disk use limits:

[source,json]
----
{
  "diskUsage": {
    "enabled": false,
    "maximum": 85
  }
}
----

The result shows that the disk usage limit is not enabled, and the maximum disk usage is set to 85% (the default).

[[set-usage-limit]]
== Set Data Disk Use Limits

Use this endpoint to set the data disk use limit settings.

.Set Limits
----
POST /settings/resourceManagement
----

=== curl Syntax

[source,bash]
----
curl -u $USER:$PASSWORD -X POST \
  'http://{HOST}:{PORT}/settings/resourceManagement' \
  -H 'Content-Type: application/json' \
  -d '{"diskUsage": {"enabled": [true|false], "maximum": }}'
----

.Path Parameters
:priv-link: set-privs
include::partial$user-pw-host-port-params.adoc[]

.Data Parameters

`enabled` (Boolean)::
If `true`, enables the data disk use limit.
If `false`, disables the data disk use limit.

`maximum` (integer)::
The maximum percentage of disk space that can be used by the data storage path.
If the data storage path reaches this limit, Couchbase Server stops writing to it.
This value must be between 1 and 100.

[[set-privs]]
=== Required Privileges

You must have at least one of the following roles:

* xref:learn:security/roles.adoc#full-admin[Full Admin]
* xref:learn:security/roles.adoc#cluster-admin[Cluster Admin]
* xref:learn:security/roles.adoc#security-admin[Security Admin]

=== Responses

`200 OK`::
Returns a JSON object containing the current data disk use limit settings.
See <> for the schema of the output.

`403 Forbidden`::
Returned if the user does not have one of the roles listed in <>.

[#set-limit-example]
=== Examples

The following example enables data disk use limits and sets the maximum disk usage to 90%:

[source,bash]
----
curl -u Administrator:password -X POST \
  'http://127.0.0.1:8091/settings/resourceManagement' \
  -H "Content-Type: application/json" \
  -d '{"diskUsage": {"enabled": true, "maximum": 90}}' | jq
----

The JSON returned by this command shows the new settings for data disk use limits:

[source,json]
----
{
  "diskUsage": {
    "enabled": true,
    "maximum": 90
  }
}
----
diff --git a/modules/rest-api/pages/rest-cluster-email-notifications.adoc b/modules/rest-api/pages/rest-cluster-email-notifications.adoc
index 369d3f1e3a..b9fba4d192 100644
--- a/modules/rest-api/pages/rest-cluster-email-notifications.adoc
+++ b/modules/rest-api/pages/rest-cluster-email-notifications.adoc
@@ -69,6 +69,7 @@ curl -X POST http:///settings/alerts/limits
   -d certExpirationDays =
   -d historyWarningThreshold=
   -d lowIndexerResidentPerc=
+  -d maxDataDiskUsedPerc=
   -d maxDiskUsedPerc=
   -d maxIndexerRamPerc=
   -d maxOverheadPerc=
@@ -77,7 +78,8 @@ curl -X POST http:///settings/alerts/limits
   -d memoryCriticalThreshold=
   -d memcachedSystemConnectionWarningThreshold=
   -d memcachedUserConnectionWarningThreshold=
-
+  -d stuckRebalanceThresholdIndex=
+  -d stuckRebalanceThresholdKV=
 curl -X POST http://:8091/settings/alert/sendTestEmail
   -u :
@@ -152,6 +154,15 @@ See xref:rest-api:rest-bucket-create.adoc[Creating and Editing Buckets], for inf
 Warns that the Index Service is, on a given node, occupying a percentage of available memory that is below an established threshold, which is the value of `lowIndexerResidentPerc`.
 The default value is `10`.
+[[maxdatadiskusedperc]]
+* `maxDataDiskUsedPerc`.
+The percentage of used disk space that triggers an alert on the filesystem containing the Data Service, Index Service, or the `ns_log` or `audit_log` storage paths.
+This alert warns you that the disk is becoming full.
+It occurs even if data disk usage limits are not enabled.
+The value must be an integer between `1` and `100`, which is the percentage of disk space used.
+It defaults to `90`.
+See xref:learn:buckets-memory-and-storage/storage-settings.adoc#filesystem-free-space-and-usage-limits[Filesystem Free Space and Usage Limits] for more information.
+
 * `maxDiskUsedPerc`, `maxIndexerRamPerc`, and `maxOverheadPerc`.
 The maximum percentages for disk usage, memory consumption by the Index Service, and overhead.
 Values must be between `0` and `100`.
@@ -173,6 +184,12 @@ NOTE: If the node exceeds 90% of the available system connections, then please c
 * `memcachedUserConnectionWarningThreshold`.
 Trigger the `xref:manage:manage-settings/configure-alerts.adoc#memcached-alert[memcached_connections]` alert if the number of `user` connections in use exceeds the given percentage of connections available.
 (E.g., if this value is set to `90`, the system will trigger an alert if the number of user connections used by the data service exceeds 90% of the available connections.)
+* `stuckRebalanceThresholdIndex` and `stuckRebalanceThresholdKV`.
+Set the timeout thresholds after which an index rebalance or a data (KV) rebalance is considered stuck.
+If this period elapses and no progress has been made, Couchbase Server triggers an alert.
+The value must be an integer that represents a number of seconds.
+The default value is `1800` seconds (30 minutes).
+
 == Responses
 A successful call returns `200 OK`.
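+
+For example, the following call lowers the data-disk usage alert threshold and raises the stuck-rebalance timeouts.
+This is a minimal sketch: the node address, credentials, and the values shown are placeholders.
+
+[source,bash]
+----
+# Alert when a monitored filesystem is more than 80% full, and treat a
+# rebalance as stuck only after 3600 seconds without progress.
+curl -u Administrator:password -X POST \
+  'http://127.0.0.1:8091/settings/alerts/limits' \
+  -d maxDataDiskUsedPerc=80 \
+  -d stuckRebalanceThresholdKV=3600 \
+  -d stuckRebalanceThresholdIndex=3600
+----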
diff --git a/modules/rest-api/partials/user-pw-host-port-params.adoc b/modules/rest-api/partials/user-pw-host-port-params.adoc
new file mode 100644
index 0000000000..cdec18b81e
--- /dev/null
+++ b/modules/rest-api/partials/user-pw-host-port-params.adoc
@@ -0,0 +1,13 @@
+
+`USER`::
+The name of a user who has one of the roles listed in <<{priv-link}>>.
+
+`PASSWORD`::
+The password for `USER`.
+
+`HOST`::
+Hostname or IP address of a Couchbase Server node.
+
+`PORT`::
+Port number for the REST API.
+Defaults are 8091 for unencrypted and 18091 for encrypted connections.