
schedule, server: harden scheduler config input validation#10311

Open
bufferflies wants to merge 1 commit into master from bugfix/ai

Conversation

@bufferflies
Contributor

@bufferflies bufferflies commented Mar 7, 2026

What problem does this PR solve?

Issue Number: None

What is changed and how does it work?

Improve scheduler API handler robustness by validating request payload types before casting to concrete types, and return clear 4xx errors for invalid input instead of panics or 5xx responses. Also harden scheduler internals against nil/wrong-type assumptions and add focused unit tests that assert invalid payloads no longer panic.
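The core pattern this PR applies can be sketched as follows. This is a minimal illustration, not PD's actual handler code; the `storeIDFromPayload` helper and its error messages are hypothetical. The point is to use the two-value form of a type assertion on decoded JSON so a wrong-typed field produces an error (which the handler can map to HTTP 400) instead of a panic:

```go
package main

import "fmt"

// storeIDFromPayload is a hypothetical helper showing the validate-before-cast
// pattern: a bare assertion like input["store_id"].(float64) panics on a wrong
// type, while the two-value form lets the caller return a 400-style error.
func storeIDFromPayload(input map[string]any) (uint64, error) {
	v, ok := input["store_id"]
	if !ok {
		return 0, fmt.Errorf("store_id is required")
	}
	f, ok := v.(float64) // encoding/json decodes JSON numbers into float64
	if !ok {
		return 0, fmt.Errorf("store_id must be a number, got %T", v)
	}
	return uint64(f), nil
}

func main() {
	if id, err := storeIDFromPayload(map[string]any{"store_id": 2.0}); err == nil {
		fmt.Println("accepted store id:", id)
	}
	if _, err := storeIDFromPayload(map[string]any{"store_id": "2"}); err != nil {
		fmt.Println("rejected:", err)
	}
}
```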

Check List

Tests

  • Unit test

Code changes

  • Has HTTP APIs changed

Release note

Fix scheduler API handlers to gracefully reject malformed config payloads and avoid panic paths in scheduler internals.

Summary by CodeRabbit

Release Notes

  • Bug Fixes

    • Improved error handling in scheduler configuration with appropriate HTTP status codes for invalid input scenarios.
    • Added nil-safety checks to prevent potential panics during scheduler operations.
    • Enhanced robustness of input validation for configuration updates across schedulers.
  • Tests

    • Extended test coverage for invalid input types and edge cases.

Signed-off-by: tongjian <[email protected]>
@ti-chi-bot
Contributor

ti-chi-bot bot commented Mar 7, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign bufferflies for approval. For more information see the Code Review Process.
Please ensure that each of them provides their approval before proceeding.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot ti-chi-bot bot added labels on Mar 7, 2026: release-note (Denotes a PR that will be considered when it comes time to generate release notes), dco-signoff: yes (Indicates the PR's author has signed the DCO), and size/XL (Denotes a PR that changes 500-999 lines, ignoring generated files).
@coderabbitai

coderabbitai bot commented Mar 7, 2026

📝 Walkthrough

Walkthrough

This PR introduces comprehensive input validation and type-safe handling across multiple scheduler components. Changes add validation for engine/rule/alias/timeout in balance_range, type-checking for ranges and store-leader-id in grant_leader/grant_hot_region/evict_leader, nil-safety checks, and standardization of HTTP error responses from 500 to 400 for invalid inputs.

Changes

Cohort / File(s) — Summary

  • Balance Range Scheduler — pkg/schedule/schedulers/balance_range.go, pkg/schedule/schedulers/balance_range_test.go
    Adds input validation for the engine, rule, alias, and optional timeout parameters; validates that engine is TiKV or TiFlash and that rule is one of LeaderScatter/PeerScatter/LearnerScatter; includes a nil-safety check in shouldFinished; adds a test for invalid field types and a missing StartTime.
  • Evict Leader Scheduler — pkg/schedule/schedulers/evict_leader.go, pkg/schedule/schedulers/evict_leader_test.go
    Inverts the logic in pauseLeaderTransferIfStoreNotExist (it now returns false for non-existence); expands ranges input handling to accept both []string and []any with type validation; tests validate string-array ranges and rejection of mixed-type ranges.
  • Grant Hot Region Scheduler — pkg/schedule/schedulers/grant_hot_region.go, pkg/schedule/schedulers/grant_hot_region_test.go
    Adds type-safe extraction of store-leader-id by validating it is a string before parsing; returns 400 on an invalid type; a test verifies rejection of non-string leader ID values.
  • Grant Leader Scheduler — pkg/schedule/schedulers/grant_leader.go, pkg/schedule/schedulers/grant_leader_test.go
    Enhances ranges handling to accept both []string and []any with conversion and validation; resumes leader transfer on an invalid type; tests validate string-array ranges and rejection of mixed-type ranges.
  • Transfer Witness Leader Scheduler — pkg/schedule/schedulers/transfer_witness_leader.go, pkg/schedule/schedulers/transfer_witness_leader_test.go
    Adds a safe type assertion in RecvRegionInfo that returns nil for non-matching scheduler types, preventing panics; a test verifies the nil return for incompatible scheduler types.
  • Scheduler Controller — pkg/schedule/schedulers/scheduler_controller.go
    Adds a nil-safety check in CheckTransferWitnessLeader by resolving the receiver channel once and validating it against nil before use in the select case.
  • API Layer — server/api/scheduler.go
    Standardizes error responses by changing invalid-input errors from 500 to 400; adds explicit error handling in CreateScheduler and handleSchedulerConfig; guards against a nil response dereference in the delete path.
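The Scheduler Controller change above leans on Go's channel semantics: a send or receive on a nil channel blocks forever, so a select case using an unguarded nil channel can silently never fire (or, with an unchecked type assertion, panic before the select is even reached). A minimal standalone sketch of the guard-then-select pattern (`trySend` is a hypothetical function, not PD code):

```go
package main

import "fmt"

// trySend demonstrates resolving a possibly-nil channel once and checking it
// before entering a select, rather than dereferencing it inside the case.
// A send on a nil channel would block forever; the explicit guard turns that
// into a clean "not delivered" result.
func trySend(ch chan<- int, v int) bool {
	if ch == nil {
		return false
	}
	select {
	case ch <- v:
		return true
	default: // channel full: report failure instead of blocking
		return false
	}
}

func main() {
	fmt.Println(trySend(nil, 1)) // nil channel is guarded, not blocked on

	buf := make(chan int, 1)
	fmt.Println(trySend(buf, 1)) // buffered send succeeds
	fmt.Println(trySend(buf, 2)) // buffer full, default branch taken
}
```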

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~35 minutes

Suggested labels

size/XXL, lgtm

Suggested reviewers

  • okJiang
  • JmPotato
  • rleungx

Poem

🐰 Through fields of code, a rabbit hops with glee,
Input guards and type checks, as safe as they can be!
No more panicked types, no nil that's unaware,
Four-hundred errors blooming everywhere!
Validation's dance—so graceful, so secure. ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 7.69%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check ✅ Passed — The title clearly and concisely summarizes the main change: hardening scheduler config input validation across the schedule and server packages.
  • Description check ✅ Passed — The description includes a commit message explaining the changes and covers most required sections (problem, changes, checklist, release note), though no issue number is provided.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch bugfix/ai

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@ti-chi-bot
Contributor

ti-chi-bot bot commented Mar 7, 2026

[FORMAT CHECKER NOTIFICATION]

Notice: To remove the do-not-merge/needs-linked-issue label, please provide the linked issue number on one line in the PR body, for example: Issue Number: close #123 or Issue Number: ref #456, multiple issues should use full syntax for each issue and be separated by a comma, like: Issue Number: close #123, ref #456.

📖 For more info, you can check the "Linking issues" section in the CONTRIBUTING.md.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
server/api/scheduler.go (1)

128-133: Consider using a different HTTP status code.

HTTP 406 (Not Acceptable) is typically used for content negotiation failures (Accept header mismatch). For "scheduler config handler is unavailable", HTTP 503 (Service Unavailable) or 500 might be more semantically appropriate.

However, this is a minor concern and doesn't affect functionality.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/api/scheduler.go` around lines 128 - 133, Replace the inappropriate
HTTP 406 responses in the scheduler config handler with a more semantically
correct status (use http.StatusServiceUnavailable (503) or
http.StatusInternalServerError (500)); update both h.r.JSON(w,
http.StatusNotAcceptable, err.Error()) and the subsequent h.r.JSON(w,
http.StatusNotAcceptable, "scheduler config handler is unavailable") calls in
the scheduler config handler to return the chosen 5xx status so errors and the
unavailable message use the correct HTTP code.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: cd95b201-f573-4109-8cc9-8fc69997e5f8

📥 Commits

Reviewing files that changed from the base of the PR and between 95cde21 and db0ecac.

📒 Files selected for processing (12)
  • pkg/schedule/schedulers/balance_range.go
  • pkg/schedule/schedulers/balance_range_test.go
  • pkg/schedule/schedulers/evict_leader.go
  • pkg/schedule/schedulers/evict_leader_test.go
  • pkg/schedule/schedulers/grant_hot_region.go
  • pkg/schedule/schedulers/grant_hot_region_test.go
  • pkg/schedule/schedulers/grant_leader.go
  • pkg/schedule/schedulers/grant_leader_test.go
  • pkg/schedule/schedulers/scheduler_controller.go
  • pkg/schedule/schedulers/transfer_witness_leader.go
  • pkg/schedule/schedulers/transfer_witness_leader_test.go
  • server/api/scheduler.go

Comment on lines +121 to +134
	if timeoutVal, exists := input["timeout"]; exists {
		timeoutStr, ok := timeoutVal.(string)
		if !ok {
			handler.rd.JSON(w, http.StatusBadRequest, "timeout must be a string")
			return
		}
		if len(timeoutStr) > 0 {
			timeout, err := time.ParseDuration(timeoutStr)
			if err != nil {
				handler.rd.JSON(w, http.StatusBadRequest, fmt.Sprintf("timeout:%s is invalid", timeoutStr))
				return
			}
			job.Timeout = timeout
		}
	}


⚠️ Potential issue | 🟠 Major

Reject non-positive timeout values too.

time.ParseDuration accepts "0s" and negative durations, so this still lets malformed payloads create jobs that immediately finish in shouldFinished(). Returning 400 unless timeout > 0 would keep the new validation behavior consistent.

Suggested fix
 	if timeoutVal, exists := input["timeout"]; exists {
 		timeoutStr, ok := timeoutVal.(string)
 		if !ok {
 			handler.rd.JSON(w, http.StatusBadRequest, "timeout must be a string")
 			return
 		}
 		if len(timeoutStr) > 0 {
 			timeout, err := time.ParseDuration(timeoutStr)
 			if err != nil {
 				handler.rd.JSON(w, http.StatusBadRequest, fmt.Sprintf("timeout:%s is invalid", timeoutStr))
 				return
 			}
+			if timeout <= 0 {
+				handler.rd.JSON(w, http.StatusBadRequest, "timeout must be greater than 0")
+				return
+			}
 			job.Timeout = timeout
 		}
 	}

As per coding guidelines, "HTTP handlers validate payloads and return proper status codes; avoid panics."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/schedule/schedulers/balance_range.go` around lines 121 - 134, The timeout
parsing branch in BalanceRange handler currently accepts "0s" and negative
durations; after parsing timeoutStr with time.ParseDuration (used to set
job.Timeout) add a check that the parsed duration is strictly positive and
return a 400 via handler.rd.JSON with an explanatory message (e.g., "timeout
must be > 0") if timeout <= 0; ensure this validation happens before assigning
job.Timeout so invalid values are rejected consistently.

Comment on lines 173 to 183
 func (conf *evictLeaderSchedulerConfig) pauseLeaderTransferIfStoreNotExist(id uint64) (bool, error) {
 	conf.RLock()
 	defer conf.RUnlock()
-	if _, exist := conf.StoreIDWithRanges[id]; !exist {
-		if err := conf.cluster.PauseLeaderTransfer(id, constant.In); err != nil {
-			return exist, err
-		}
+	if _, exist := conf.StoreIDWithRanges[id]; exist {
+		return true, nil
 	}
-	return true, nil
+	if err := conf.cluster.PauseLeaderTransfer(id, constant.In); err != nil {
+		return false, err
+	}
+	return false, nil
 }


⚠️ Potential issue | 🟠 Major

Track whether this request paused the store, not whether the store already exists.

pauseLeaderTransferIfStoreNotExist now returns true for an existing config, but all the 4xx cleanup paths still call resumeLeaderTransferIfExist(id), which resumes unconditionally. That means a bad update against an already-evicted store can clear the old pause and silently disable the live config.

Suggested fix
-	exist, err = handler.config.pauseLeaderTransferIfStoreNotExist(id)
+	exist, err = handler.config.pauseLeaderTransferIfStoreNotExist(id)
 	if err != nil {
 		handler.rd.JSON(w, http.StatusInternalServerError, err.Error())
 		return
 	}
+	pausedForUpdate := inputHasStoreID && !exist
-			handler.config.resumeLeaderTransferIfExist(id)
+			if pausedForUpdate {
+				handler.config.resumeLeaderTransferIfExist(id)
+			}

Apply the same guard to the other early-return cleanup branches in updateConfig.

As per coding guidelines, "HTTP handlers validate payloads and return proper status codes; avoid panics."

Also applies to: 431-450

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/schedule/schedulers/evict_leader.go` around lines 173 - 183, The current
pauseLeaderTransferIfStoreNotExist returns true for the case where the store
exists (so callers cannot tell if this request actually paused the store),
causing updateConfig to unconditionally call resumeLeaderTransferIfExist on 4xx
early returns and clear an existing pause; change the logic so
pauseLeaderTransferIfStoreNotExist (and its callers in updateConfig) track and
return whether this particular request performed the PauseLeaderTransfer call
(true only when PauseLeaderTransfer was invoked successfully), and update every
early-return cleanup branch in updateConfig to call
resumeLeaderTransferIfExist(id) only when that returned flag is true; ensure the
same guard is applied to the other referenced range (lines ~431-450) so resume
is not called for stores you didn’t pause.

Comment on lines +268 to 296
	rangesVal, hasRanges := input["ranges"]
	if hasRanges {
		var ranges []string
		switch val := rangesVal.(type) {
		case []string:
			ranges = val
		case []any:
			ranges = make([]string, 0, len(val))
			for _, item := range val {
				s, ok := item.(string)
				if !ok {
					handler.config.Lock()
					handler.config.cluster.ResumeLeaderTransfer(id, constant.Out)
					handler.config.Unlock()
					handler.rd.JSON(w, http.StatusBadRequest, errs.ErrSchedulerConfig.FastGenByArgs("ranges"))
					return
				}
				ranges = append(ranges, s)
			}
		default:
			handler.config.Lock()
			handler.config.cluster.ResumeLeaderTransfer(id, constant.Out)
			handler.config.Unlock()
			handler.rd.JSON(w, http.StatusBadRequest, errs.ErrSchedulerConfig.FastGenByArgs("ranges"))
			return
		}
		args = append(args, ranges...)
	} else if exists {
		args = append(args, handler.config.getRanges(id)...)


⚠️ Potential issue | 🟠 Major

Require a valid store_id before consuming ranges.

If store_id is missing or wrong-typed, this block still appends ranges into args, so buildWithArgs can treat the first range token as the store ID instead of rejecting the payload. That can update the wrong store on malformed requests.

Suggested fix
-	rangesVal, hasRanges := input["ranges"]
+	rangesVal, hasRanges := input["ranges"]
 	if hasRanges {
+		if !ok {
+			handler.rd.JSON(w, http.StatusBadRequest, errs.ErrSchedulerConfig.FastGenByArgs("id"))
+			return
+		}
 		var ranges []string
 		switch val := rangesVal.(type) {

As per coding guidelines, "HTTP handlers validate payloads and return proper status codes; avoid panics."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/schedule/schedulers/grant_leader.go` around lines 268 - 296, The code
consumes "ranges" before ensuring a valid store_id; update the handler in
grant_leader.go to first validate that input["store_id"] exists and is the
expected type (e.g., int/float64/string as your API expects) and return the same
ResumeLeaderTransfer + HTTP 400 path on failure, then only proceed to parse
rangesVal (refer to symbols rangesVal, input,
handler.config.ResumeLeaderTransfer, handler.config.getRanges) and append to
args; ensure you unlock the config and call handler.rd.JSON on the new error
path exactly like the existing ranges-type error branches so malformed payloads
cannot let a range token be misinterpreted as a store_id.

Comment on lines +276 to +291
			for _, item := range val {
				s, ok := item.(string)
				if !ok {
					handler.config.Lock()
					handler.config.cluster.ResumeLeaderTransfer(id, constant.Out)
					handler.config.Unlock()
					handler.rd.JSON(w, http.StatusBadRequest, errs.ErrSchedulerConfig.FastGenByArgs("ranges"))
					return
				}
				ranges = append(ranges, s)
			}
		default:
			handler.config.Lock()
			handler.config.cluster.ResumeLeaderTransfer(id, constant.Out)
			handler.config.Unlock()
			handler.rd.JSON(w, http.StatusBadRequest, errs.ErrSchedulerConfig.FastGenByArgs("ranges"))


⚠️ Potential issue | 🟠 Major

Only resume leader transfer when this request actually paused it.

For stores already present in StoreIDWithRanges, the earlier branch does not call PauseLeaderTransfer, but these new error paths still call ResumeLeaderTransfer. A bad update can therefore clear the existing pause and leave an active grant-leader config unenforced.

Suggested fix
-	var exists bool
+	var exists bool
+	var pausedForUpdate bool
 	var id uint64
 	idFloat, ok := input["store_id"].(float64)
 	if ok {
 		id = (uint64)(idFloat)
 		handler.config.RLock()
 		if _, exists = handler.config.StoreIDWithRanges[id]; !exists {
 			if err := handler.config.cluster.PauseLeaderTransfer(id, constant.Out); err != nil {
 				handler.config.RUnlock()
 				handler.rd.JSON(w, http.StatusInternalServerError, err.Error())
 				return
 			}
+			pausedForUpdate = true
 		}
 		handler.config.RUnlock()
 		args = append(args, strconv.FormatUint(id, 10))
 	}
-					handler.config.cluster.ResumeLeaderTransfer(id, constant.Out)
+					if pausedForUpdate {
+						handler.config.cluster.ResumeLeaderTransfer(id, constant.Out)
+					}
...
-			handler.config.cluster.ResumeLeaderTransfer(id, constant.Out)
+			if pausedForUpdate {
+				handler.config.cluster.ResumeLeaderTransfer(id, constant.Out)
+			}

As per coding guidelines, "HTTP handlers validate payloads and return proper status codes; avoid panics."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/schedule/schedulers/grant_leader.go` around lines 276 - 291, The current
update handler resumes leader transfer unconditionally on error paths even for
stores that were not paused by this request; modify the logic in the grant
leader update flow (inside the request parsing loop that handles
StoreIDWithRanges and the branch building ranges) to track whether you called
handler.config.cluster.PauseLeaderTransfer (e.g., a boolean like pausedByReq per
store or overall) and only call handler.config.cluster.ResumeLeaderTransfer(id,
constant.Out) when that flag is true; ensure the flag is set immediately after
calling PauseLeaderTransfer and cleared on successful commit so all error
returns (the JSON bad-request responses) only resume transfers for stores paused
by this request, while preserving the existing handler.config.Lock()/Unlock()
usage around those cluster calls.

@ti-chi-bot
Contributor

ti-chi-bot bot commented Mar 7, 2026

@bufferflies: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

  • Test name: pull-unit-test-next-gen-3
  • Commit: db0ecac
  • Details: link
  • Required: true
  • Rerun command: /test pull-unit-test-next-gen-3

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

  • dco-signoff: yes — Indicates the PR's author has signed the DCO.
  • do-not-merge/needs-linked-issue
  • release-note — Denotes a PR that will be considered when it comes time to generate release notes.
  • size/XL — Denotes a PR that changes 500-999 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant