🌱 feat: implements nodeadm bootstrapping type #5700
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: … The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
testing with this manifest:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta2
    kind: AWSManagedControlPlane
    name: default-control-plane
  infrastructureRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta2
    kind: AWSManagedControlPlane
    name: default-control-plane
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: AWSManagedControlPlane
metadata:
  name: default-control-plane
spec:
  addons:
    - name: kube-proxy
      version: v1.32.0-eksbuild.2
  network:
    cni:
      cniIngressRules:
        - description: kube-proxy metrics
          fromPort: 10249
          protocol: tcp
          toPort: 10249
        - description: NVIDIA Data Center GPU Manager metrics
          fromPort: 9400
          protocol: tcp
          toPort: 9400
        - description: Prometheus node exporter metrics
          fromPort: 9100
          protocol: tcp
          toPort: 9100
  region: us-west-2
  sshKeyName: ""
  version: v1.33.0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachineTemplate
metadata:
  name: default
spec:
  template:
    spec:
      cloudInit:
        insecureSkipSecretsManager: true
      ami:
        eksLookupType: AmazonLinux2023
      instanceMetadataOptions:
        httpTokens: required
        httpPutResponseHopLimit: 2
      iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
      instanceType: m5a.16xlarge
      rootVolume:
        size: 80
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: NodeadmConfigTemplate
metadata:
  name: default
spec:
  template:
    spec: {}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: default
spec:
  clusterName: default
  replicas: 1
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
          kind: NodeadmConfigTemplate
          name: default
      clusterName: default
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: AWSMachineTemplate
        name: default
      version: v1.33.0
```
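Note that the `NodeadmConfigTemplate` in the manifest above is deliberately empty (`spec: {}`). As a hedged sketch of what a populated template could look like, here is a version that reuses the `kubelet` passthrough demonstrated by the `NodeadmConfig` example later in this thread; the authoritative field set is whatever the KEP and the CRD define:

```yaml
# Illustrative sketch only: the kubelet block mirrors the NodeadmConfig
# example shown later in this thread; consult the CRD for the real schema.
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: NodeadmConfigTemplate
metadata:
  name: default
spec:
  template:
    spec:
      kubelet:
        config:
          evictionHard:
            memory.available: "2000Mi"
```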
/retest
(2 similar comments)
force-pushed from 8f854bd to 81f3664
/test ?
@faiq: The following commands are available to trigger required jobs: …
The following commands are available to trigger optional jobs: …
Use …
In response to this: …
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/test pull-cluster-api-provider-aws-e2e-eks
Does this work with AWSManagedMachinePool?
force-pushed from 81f3664 to 59ecae0
@dsanders1234 try this:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: default
spec:
  clusterName: default
  template:
    spec:
      bootstrap:
        # dataSecretName: ""
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
          kind: NodeadmConfig
          name: default
      clusterName: default
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: AWSManagedMachinePool
        name: default
      version: v1.33.0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSManagedMachinePool
metadata:
  name: default
spec:
  roleName: "nodes.cluster-api-provider-aws.sigs.k8s.io"
  scaling:
    minSize: 1
    maxSize: 3
  amiType: CUSTOM
  awsLaunchTemplate:
    ami:
      eksLookupType: AmazonLinux2023
    instanceMetadataOptions:
      httpTokens: required
      httpPutResponseHopLimit: 2
    instanceType: "m5a.16xlarge"
    rootVolume:
      size: 80
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: NodeadmConfig
metadata:
  name: default
spec:
  kubelet:
    config:
      evictionHard:
        memory.available: "2000Mi"
```
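For context on what a `NodeadmConfig` like the one above presumably produces: nodeadm on AL2023 consumes a `NodeConfig` (`node.eks.aws/v1alpha1`) document in instance user data, so the rendered bootstrap data should look roughly like the sketch below. The endpoint and CA values are placeholders; in practice they come from the EKS control plane.

```yaml
# Rough sketch of the nodeadm NodeConfig such a bootstrap config could
# render into. apiServerEndpoint and certificateAuthority are placeholders.
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: default
    apiServerEndpoint: https://EXAMPLE.gr7.us-west-2.eks.amazonaws.com  # placeholder
    certificateAuthority: <base64-encoded CA bundle>                    # placeholder
    cidr: 10.96.0.0/12  # matches the services CIDR in the Cluster manifest
  kubelet:
    config:
      evictionHard:
        memory.available: "2000Mi"
```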
/test pull-cluster-api-provider-aws-e2e-eks
force-pushed from 10503ae to 476eb43
/test pull-cluster-api-provider-aws-e2e-eks
/retest
(3 similar comments)
force-pushed from 476eb43 to 234d905
/test pull-cluster-api-provider-aws-e2e-eks
force-pushed from 234d905 to 278eaba
/test pull-cluster-api-provider-aws-e2e-eks
force-pushed from 278eaba to a090977
force-pushed from a090977 to 4f06259
/retest
(1 similar comment)
@faiq: The following tests failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests: …
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
What type of PR is this?
/kind feature

What this PR does / why we need it:
This PR implements the nodeadm config type outlined in KEP #5678.

Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:
I'd like some guidance on how to change the following …

Checklist:

Release note: