[#release-727]
== Release 7.2.7 (February 2025)

Couchbase Server 7.2.7 was released in February 2025.
This maintenance release contains the following fixes.

=== Fixed Issues

==== Storage Engine

[#table-fixed-issues-727-storage-engine, cols="10,40,40"]
|===
|Issue | Description | Resolution

| https://jira.issues.couchbase.com/browse/MB-63261[MB-63261]
| A race condition in the index recovery code path could result in an item count mismatch and incorrect query results.
Prior to Release `7.6.0`, this issue could occur during an Indexer restart.
However, the file-based rebalance process introduced in `7.6.0` performs an index recovery after an index is moved, which increases the likelihood of hitting this race condition.

| The race condition has been addressed and the issue is resolved.

| https://jira.issues.couchbase.com/browse/MB-64742[MB-64742]
a|A bug in the plasma tracking statistics incorrectly marked stale recovery points in the recovery log as valid data. This caused two problems:

* At low mutation rates, the log cleaning process ran slowly and couldn't effectively trim recovery point history.
* At higher mutation rates, the system worked around this issue because mutations would increase the fragmentation ratio enough to trigger the log cleaner, which could then trim recovery point history despite the tracking statistics bug.

| The system now marks only the latest recovery point that exists in both the recovery log and data log. This change effectively limits the recovery point history list to a single entry in the recovery log. The plasma tracking statistics have been fixed to correctly identify older recovery points as stale data in the recovery log. These improvements allow the log cleaner to run efficiently even at low mutation rates.

|===

==== Data Service

[#table-fixed-issues-727-data-service, cols="10,40,40"]
|===
|Issue | Description | Resolution

| https://jira.issues.couchbase.com/browse/MB-63827[MB-63827]
| DCP connection metrics with connection names that did not conform to the server format were not exposed to Prometheus.

| The metrics are now aggregated and exposed with `connection_type="_unknown"`.

|===
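With this fix, DCP connections whose names do not conform to the server format are no longer missing from the metrics output; they are rolled up under a single label value. A hypothetical illustration of checking this on a node's Prometheus endpoint (the host, credentials, and the metric name shown in the comment are placeholders, not actual identifiers):

[source,shell]
----
# Query the node's Prometheus metrics endpoint (placeholder host and credentials).
curl -s -u Administrator:password http://127.0.0.1:8091/metrics \
  | grep 'connection_type="_unknown"'

# Aggregated metrics for non-conforming connection names appear under this
# label value, for example (the metric name here is a placeholder):
#   kv_dcp_example_metric{connection_type="_unknown"} 3
----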

==== XDCR

[#table-fixed-issues-727-xdcr, cols="10,40,40"]
|===
|Issue | Description | Resolution

| https://jira.issues.couchbase.com/browse/MB-64565[MB-64565]
| A replication was automatically deleted when its source or target bucket was deleted or recreated, with no way to prevent this automatic cleanup.

| Users who do not want a replication to be deleted automatically can now set the `skipReplSpecAutoGc` replication setting to `true` when creating the replication. In situations where the replication would otherwise have been deleted, it is automatically paused instead, and a persistent error message is logged for viewing in the UI. Users are then expected to perform the recovery action manually by deleting the replication and re-creating a new one, if necessary.

|===
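The `skipReplSpecAutoGc` setting is supplied when the replication is created. A hedged sketch using the XDCR `createReplication` REST endpoint (host, credentials, remote cluster reference, and bucket names are placeholders; whether the setting is passed exactly this way should be confirmed against the XDCR REST documentation):

[source,shell]
----
# Create a replication that is paused, rather than deleted, when its
# source or target bucket is deleted or recreated.
# Host, credentials, cluster reference, and bucket names are placeholders.
curl -X POST -u Administrator:password \
  http://127.0.0.1:8091/controller/createReplication \
  -d fromBucket=source-bucket \
  -d toCluster=remote-cluster \
  -d toBucket=target-bucket \
  -d replicationType=continuous \
  -d skipReplSpecAutoGc=true
----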