OSDOCS#14369: Update the z-stream RNs for 4.18.10 #92422

Merged
merged 1 commit into openshift:enterprise-4.18 on Apr 23, 2025

Conversation

@tmalove (Contributor) commented Apr 22, 2025

Version(s):
4.18

Issue:
OSDOCS-14369

Link to docs preview:
4.18.10

QE review:

  • QE has approved this change.
    N/A for z-stream relnotes

Additional information:
The errata URLs will return 404 until the go-live date of 04/22/25.

@openshift-ci openshift-ci bot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Apr 22, 2025
@ocpdocs-previewbot commented Apr 22, 2025

🤖 Tue Apr 22 21:49:27 - Prow CI generated the docs preview:

https://92422--ocpdocs-pr.netlify.app/openshift-enterprise/latest/release_notes/ocp-4-18-release-notes.html

@tmalove (Contributor, Author) commented Apr 22, 2025

/retest

@tmalove (Contributor, Author) commented Apr 22, 2025

/retest-required

@tmalove tmalove force-pushed the OSDOCS-14369 branch 2 times, most recently from 9ec086f to b548915 on April 22, 2025 at 11:50
@tmalove (Contributor, Author) commented Apr 22, 2025

/label peer-review-needed

@openshift-ci openshift-ci bot added the peer-review-needed Signifies that the peer review team needs to review this PR label Apr 22, 2025
@lahinson lahinson added peer-review-in-progress Signifies that the peer review team is reviewing this PR and removed peer-review-needed Signifies that the peer review team needs to review this PR labels Apr 22, 2025
@lahinson (Contributor) left a comment

Just a few nits for your consideration. Otherwise, looks good!

[id="ocp-4-18-10-enhancements_{context}"]
==== Enhancements

* In the bootstrap phase of the installation process, the transport layer security (TLS) between the `metal3` `httpd` server and the node's Baseboard Management Controller (BMC) is enabled by default in {product-title} 4.18 and later. The `httpd` server is on port 6183 instead of port 6180 when TLS is enabled. Disable the TLS setting by adding 'disableVirtualMediaTLS: true' to the Provisioning custom resource (CR) file that is created on the disk. (link:https://issues.redhat.com/browse/OCPBUGS-39404[OCPBUGS-39404])

Suggested change
* In the bootstrap phase of the installation process, the transport layer security (TLS) between the `metal3` `httpd` server and the node's Baseboard Management Controller (BMC) is enabled by default in {product-title} 4.18 and later. The `httpd` server is on port 6183 instead of port 6180 when TLS is enabled. Disable the TLS setting by adding 'disableVirtualMediaTLS: true' to the Provisioning custom resource (CR) file that is created on the disk. (link:https://issues.redhat.com/browse/OCPBUGS-39404[OCPBUGS-39404])
* In the bootstrap phase of the installation process, the Transport Layer Security (TLS) between the `metal3` `httpd` server and the node's Baseboard Management Controller (BMC) is enabled by default in {product-title} 4.18 and later. The `httpd` server is on port 6183 instead of port 6180 when TLS is enabled. Disable the TLS setting by adding 'disableVirtualMediaTLS: true' to the Provisioning custom resource (CR) file that is created on the disk. (link:https://issues.redhat.com/browse/OCPBUGS-39404[OCPBUGS-39404])

In the phrase "...to the Provisioning custom resource (CR) file...", does "Provisioning" need to be capitalized?
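
For anyone reading this thread later, here is a minimal sketch of what that opt-out could look like in the Provisioning CR. The `apiVersion`, the singleton resource name, and the exact field placement are assumptions based on common cluster-baremetal-operator conventions, not something stated in this PR; only the `disableVirtualMediaTLS: true` setting itself comes from the note above.

```yaml
# Hypothetical sketch of the opt-out described in the release note.
# apiVersion and metadata.name are assumed, not taken from this PR.
apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration   # cluster-scoped singleton (assumed name)
spec:
  # Disables TLS for the metal3 httpd virtual-media server, reverting it
  # from port 6183 back to plain HTTP on port 6180.
  disableVirtualMediaTLS: true
```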

[id="ocp-4-18-10-bug-fixes_{context}"]
==== Bug fixes

* Previously, the Prometheus remote-write proxy configuration was not correctly applied to the Prometheus user workload custom resource (CR), which caused communication and data collection problems in the cluster. With this release, the user workload monitoring (UWM) Prometheus configurations, including user workload Prometheus, correctly inherits the proxy settings from the cluster proxy resource. (link:https://issues.redhat.com/browse/OCPBUGS-38655[OCPBUGS-38655])

Suggested change
* Previously, the Prometheus remote-write proxy configuration was not correctly applied to the Prometheus user workload custom resource (CR), which caused communication and data collection problems in the cluster. With this release, the user workload monitoring (UWM) Prometheus configurations, including user workload Prometheus, correctly inherits the proxy settings from the cluster proxy resource. (link:https://issues.redhat.com/browse/OCPBUGS-38655[OCPBUGS-38655])
* Previously, the Prometheus remote-write proxy configuration was not correctly applied to the Prometheus user workload custom resource (CR), which caused communication and data collection problems in the cluster. With this release, the user workload monitoring (UWM) Prometheus configurations, including user workload Prometheus, correctly inherit the proxy settings from the cluster proxy resource. (link:https://issues.redhat.com/browse/OCPBUGS-38655[OCPBUGS-38655])


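For context on the note above: the cluster proxy settings that the user workload monitoring Prometheus now inherits are defined in the cluster-wide `Proxy` resource. The shape below is the standard `config.openshift.io/v1` object; the hostnames are placeholders and nothing in this sketch is taken from the PR itself.

```yaml
# Standard cluster-wide proxy object; values are placeholders for illustration.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster          # the cluster proxy resource is a singleton named "cluster"
spec:
  httpProxy: http://proxy.example.com:3128
  httpsProxy: http://proxy.example.com:3128
  noProxy: .cluster.local,.svc,localhost
```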

* Previously, when running Red Hat Enterprise Linux CoreOS (RHCOS) in an active environment, the `rpm-ostree-fix-shadow-mode.service` that used to run caused the `rpm-ostree-fix-shadow-mode.service` to fail. With this release, the `rpm-ostree-fix-shadow-mode.service` does not activate when RHCOS does not run from an installed environment. (link:https://issues.redhat.com/browse/OCPBUGS-41625[OCPBUGS-41625])

Suggested change
* Previously, when running Red Hat Enterprise Linux CoreOS (RHCOS) in an active environment, the `rpm-ostree-fix-shadow-mode.service` that used to run caused the `rpm-ostree-fix-shadow-mode.service` to fail. With this release, the `rpm-ostree-fix-shadow-mode.service` does not activate when RHCOS does not run from an installed environment. (link:https://issues.redhat.com/browse/OCPBUGS-41625[OCPBUGS-41625])
* Previously, when running {op-system-first} in an active environment, the `rpm-ostree-fix-shadow-mode.service` that used to run caused the `rpm-ostree-fix-shadow-mode.service` to fail. With this release, the `rpm-ostree-fix-shadow-mode.service` does not activate when {op-system} does not run from an installed environment. (link:https://issues.redhat.com/browse/OCPBUGS-41625[OCPBUGS-41625])

If possible, add an appropriate noun after each instance of `rpm-ostree-fix-shadow-mode.service` to indicate what it is.


* Previously, an incorrect component import in `SimpleSelect.tsx` caused an undefined function `r` in `react-dom.production.min.js`. This component caused error messages on the *Dashboards* and *Metrics* pages related to dropdown lists. With this release, the dropdown lists on the affected pages function correctly, eliminating the error message. (link:https://issues.redhat.com/browse/OCPBUGS-42845[OCPBUGS-42845])

Suggested change
* Previously, an incorrect component import in `SimpleSelect.tsx` caused an undefined function `r` in `react-dom.production.min.js`. This component caused error messages on the *Dashboards* and *Metrics* pages related to dropdown lists. With this release, the dropdown lists on the affected pages function correctly, eliminating the error message. (link:https://issues.redhat.com/browse/OCPBUGS-42845[OCPBUGS-42845])
* Previously, an incorrect component import in the `SimpleSelect.tsx` file caused an undefined `r` function in the `react-dom.production.min.js` file. This component caused error messages on the *Dashboards* and *Metrics* pages related to dropdown lists. With this release, the dropdown lists on the affected pages function correctly. (link:https://issues.redhat.com/browse/OCPBUGS-42845[OCPBUGS-42845])

Feel free to replace "file" with a different word if needed.


* Previously, an error in the image pull secret controller's secret token rotation logic caused a temporary, invalid token for authentication, which caused disruptions in the image pull process. With this release, the updated image pull secret controller guarantees a smooth and continuous image pull process, because it eliminates the period when the token is not valid while the token rotates. (link:https://issues.redhat.com/browse/OCPBUGS-54304[OCPBUGS-54304])

Suggested change
* Previously, an error in the image pull secret controller's secret token rotation logic caused a temporary, invalid token for authentication, which caused disruptions in the image pull process. With this release, the updated image pull secret controller guarantees a smooth and continuous image pull process, because it eliminates the period when the token is not valid while the token rotates. (link:https://issues.redhat.com/browse/OCPBUGS-54304[OCPBUGS-54304])
* Previously, an error in the rotation logic of the image pull secret controller's secret token caused a temporary, invalid token for authentication. As a consequence, the image pull process was disrupted. With this release, the updated image pull secret controller eliminates the period when the token is not valid while the token rotates. As a result, the image pull process is smooth and continuous. (link:https://issues.redhat.com/browse/OCPBUGS-54304[OCPBUGS-54304])

I tried to break up some of the longer sentences to make them easier to read.


* Previously, an error occurred in {hcp-capital}-managed clusters because of the omission of `shutdown-watch-termination-grace-period` in the `kube-apiserver` configuration. This led to unstable shutdown of applications in {hcp}-managed clusters. With this release, an update improves the shutdown process of applications in {hcp}-managed clusters, providing a grace period for the `kube-apiserver` configuration. During a shutdown, the application stability is improved and potential errors are decreased. (link:https://issues.redhat.com/browse/OCPBUGS-53404[OCPBUGS-53404])

Suggested change
* Previously, an error occurred in {hcp-capital}-managed clusters because of the omission of `shutdown-watch-termination-grace-period` in the `kube-apiserver` configuration. This led to unstable shutdown of applications in {hcp}-managed clusters. With this release, an update improves the shutdown process of applications in {hcp}-managed clusters, providing a grace period for the `kube-apiserver` configuration. During a shutdown, the application stability is improved and potential errors are decreased. (link:https://issues.redhat.com/browse/OCPBUGS-53404[OCPBUGS-53404])
* Previously, an error occurred in {hcp}-managed clusters because of the omission of the `shutdown-watch-termination-grace-period` setting in the `kube-apiserver` configuration. This error led to the unstable shutdown of applications in {hcp}-managed clusters. With this release, an update improves the shutdown process of applications in {hcp}-managed clusters, providing a grace period for the `kube-apiserver` configuration. During a shutdown, the application stability is improved and potential errors are decreased. (link:https://issues.redhat.com/browse/OCPBUGS-53404[OCPBUGS-53404])


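To make the setting named above concrete: `shutdown-watch-termination-grace-period` is a standard kube-apiserver argument. The fragment below is purely illustrative; the duration value and the surrounding manifest layout are placeholders, since the actual hosted control plane manifests are generated by the control-plane operator and are not shown in this PR.

```yaml
# Illustrative pod spec fragment only; the value and layout are assumed.
spec:
  containers:
  - name: kube-apiserver
    args:
    - --shutdown-watch-termination-grace-period=15s   # grace period for in-flight watch requests during shutdown
```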

* Previously, an issue with the version of `github.com/sherine-k/catalog-filter`, `oc-mirror` stopped, caused instability in the mirroring process. With this release, the `github.com/sherine-k/catalog-filter` element in the `go.mod` file is updated, which solves the problem and ensures a stable and reliable mirroring process. (link:https://issues.redhat.com/browse/OCPBUGS-54727[OCPBUGS-54727])

Suggested change
* Previously, an issue with the version of `github.com/sherine-k/catalog-filter`, `oc-mirror` stopped, caused instability in the mirroring process. With this release, the `github.com/sherine-k/catalog-filter` element in the `go.mod` file is updated, which solves the problem and ensures a stable and reliable mirroring process. (link:https://issues.redhat.com/browse/OCPBUGS-54727[OCPBUGS-54727])
* Previously, an issue with the version of the `github.com/sherine-k/catalog-filter` element stopped, causing instability in the mirroring process. With this release, the `github.com/sherine-k/catalog-filter` element in the `go.mod` file is updated, which solves the problem and ensures a stable and reliable mirroring process. (link:https://issues.redhat.com/browse/OCPBUGS-54727[OCPBUGS-54727])

Feel free to change my revision if it isn't accurate. I wasn't sure how `oc-mirror` fits into the first sentence.


* Previously, an iteration counter increment omission in the `scrapeCache` led to an incorrect series count for subsequent scrapes. This resulted in interrupted monitoring and potential data loss during the Prometheus scrape process. With this release, an update ensures uninterrupted monitoring, because Prometheus continues scraping and processing data while parsing errors. (link:https://issues.redhat.com/browse/OCPBUGS-54940[OCPBUGS-54940])

Suggested change
* Previously, an iteration counter increment omission in the `scrapeCache` led to an incorrect series count for subsequent scrapes. This resulted in interrupted monitoring and potential data loss during the Prometheus scrape process. With this release, an update ensures uninterrupted monitoring, because Prometheus continues scraping and processing data while parsing errors. (link:https://issues.redhat.com/browse/OCPBUGS-54940[OCPBUGS-54940])
* Previously, an iteration counter increment omission in the `scrapeCache` setting led to an incorrect series count for subsequent scrapes. As a result, monitoring was interrupted and data could potentially be lost during the Prometheus scrape process. With this release, an update ensures uninterrupted monitoring, because Prometheus continues scraping and processing data while parsing errors. (link:https://issues.redhat.com/browse/OCPBUGS-54940[OCPBUGS-54940])

@lahinson lahinson added peer-review-done Signifies that the peer review team has reviewed this PR branch/enterprise-4.18 and removed peer-review-in-progress Signifies that the peer review team is reviewing this PR labels Apr 22, 2025
@lahinson lahinson added this to the Continuous Release milestone Apr 22, 2025

openshift-ci bot commented Apr 22, 2025

@tmalove: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@tmalove (Contributor, Author) commented Apr 23, 2025

/label merge-review-needed

@openshift-ci openshift-ci bot added the merge-review-needed Signifies that the merge review team needs to review this PR label Apr 23, 2025
@ShaunaDiaz ShaunaDiaz removed the merge-review-needed Signifies that the merge review team needs to review this PR label Apr 23, 2025
@ShaunaDiaz (Contributor) left a comment

/lgtm

@ShaunaDiaz ShaunaDiaz merged commit e7febcf into openshift:enterprise-4.18 Apr 23, 2025
2 checks passed
@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Apr 23, 2025