test: wait for COSI controller to be ready #1014
base: main
Conversation
Force-pushed from ca25e3b to 32809e4
	ctx, input.WorkloadCluster.Namespace, input.WorkloadCluster.Name,
).GetClient()

WaitForDeploymentsAvailable(ctx, framework.WaitForDeploymentsAvailableInput{
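The diff excerpt above is truncated; presumably the new call follows the same WaitForDeploymentsAvailable pattern used by the existing CSI helper quoted below. A minimal sketch of what the full call could look like, where the COSI controller deployment name and namespace are assumed values for illustration, not taken from the diff:

WaitForDeploymentsAvailable(ctx, framework.WaitForDeploymentsAvailableInput{
	Getter: workloadClusterClient,
	Deployment: &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			// Assumed name/namespace for the COSI controller deployment.
			Name:      "container-object-storage-controller",
			Namespace: "container-object-storage-system",
		},
	},
}, input.deploymentIntervals...)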
nit:
This additional check is good.
Won't the HelmReleaseProxy's Status be in a "Failed" state if the deployment fails? I traced the code to https://github.com/helm/helm/blob/main/pkg/kube/wait.go#L63, where it checks for Ready before setting the status field on the HelmReleaseProxy.
context:
I am also looking into ways to report the status of addon upgrades back to the CAREN clients.
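As a reference point for that tracing, here is a minimal sketch of how a test could poll the HelmReleaseProxy instead of the Deployment. The CAAPH import path, the helper name, and the "deployed" status value are assumptions for illustration, not code from this PR:

package e2e

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	"sigs.k8s.io/controller-runtime/pkg/client"

	// Assumed import path for the CAAPH HelmReleaseProxy API.
	caaphv1 "sigs.k8s.io/cluster-api-addon-provider-helm/api/v1alpha1"
)

// waitForHelmReleaseProxyDeployed polls the named HelmReleaseProxy until its
// Helm release status is reported as "deployed" (which CAAPH sets only after
// Helm's own readiness wait). Hypothetical helper for illustration only.
func waitForHelmReleaseProxyDeployed(ctx context.Context, c client.Client, key client.ObjectKey) {
	Eventually(func(g Gomega) {
		hrp := &caaphv1.HelmReleaseProxy{}
		g.Expect(c.Get(ctx, key, hrp)).To(Succeed())
		g.Expect(hrp.Status.Status).To(Equal("deployed"))
	}, 5*time.Minute, 10*time.Second).Should(Succeed())
}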
Good find! I copied this from existing code:
cluster-api-runtime-extensions-nutanix/test/e2e/csi_helpers.go, lines 316 to 324 in 162915e
WaitForDeploymentsAvailable(ctx, framework.WaitForDeploymentsAvailableInput{
	Getter: workloadClusterClient,
	Deployment: &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "nutanix-csi-controller",
			Namespace: "ntnx-system",
		},
	},
}, input.deploymentIntervals...)
If it's not necessary, we can remove all of them in a future PR, WDYT?
sounds good.
I don't think we wait for HCPs right now so as to not block the CAAPH control loop.
We have this code to wait for the HCP to be ready: https://github.com/nutanix-cloud-native/cluster-api-runtime-extensions-nutanix/blob/main/pkg/handlers/generic/lifecycle/addons/helmaddon.go#L220C6-L220C19
It is only set to be invoked for the metallb addon.

Regarding "we don't wait for HCPs right now so as to not block the CAAPH control loop": is it because we don't want to wait/time out before returning from the AfterControlPlaneInitialization/BeforeClusterUpgrade hook? So a cluster upgrade will continue even if a critical addon deployment failed?
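For context, a rough sketch of what blocking on the HelmChartProxy's Ready condition before returning from a lifecycle hook could look like. This is an illustration under assumed names and import paths, not the code behind the link above:

package addons

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"

	// Assumed import path for the CAAPH HelmChartProxy API.
	caaphv1 "sigs.k8s.io/cluster-api-addon-provider-helm/api/v1alpha1"
)

// waitForHelmChartProxyReady polls the HelmChartProxy until its Ready condition
// is true or the timeout elapses. Hypothetical helper for illustration; the
// real handler may wait on different conditions or objects.
func waitForHelmChartProxyReady(
	ctx context.Context, c client.Client, key client.ObjectKey, timeout time.Duration,
) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			hcp := &caaphv1.HelmChartProxy{}
			if err := c.Get(ctx, key, hcp); err != nil {
				return false, fmt.Errorf("failed to get HelmChartProxy %s: %w", key, err)
			}
			for _, cond := range hcp.Status.Conditions {
				if cond.Type == clusterv1.ReadyCondition && cond.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		},
	)
}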
I am wondering if we should wait for both the HCP and the HRP to be ready for the upgrade-addons use case.
I think the changes in this PR are good to merge; I don't want to block it. I will take this discussion offline.
Force-pushed from 162915e to a42a3db
What problem does this PR solve?:
Missed this in #1008 since I thought that all HCPs were already checked.
Which issue(s) this PR fixes:
Fixes #
How Has This Been Tested?:
Special notes for your reviewer: