onboardingTicket: add uid to the ticket #2844
Conversation
I faced a storageCluster UID error while testing PR #2827. PR #2827 can be tested after this PR is merged.
controllers/util/k8sutil.go (outdated)

func GetStorageClusterInNamespace(ctx context.Context, cl client.Client, namespace string) (*ocsv1.StorageCluster, error) {
	storageClusterList := &ocsv1.StorageClusterList{}
	err := cl.List(ctx, storageClusterList, client.InNamespace(namespace))
Can you please cap this list operation to 1 result?
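For reference, a minimal sketch (not code from this PR) of what a capped list could look like; controller-runtime exposes this as the client.Limit list option, and the ocsv1 import path below is assumed:

package util

import (
	"context"
	"fmt"

	ocsv1 "github.com/red-hat-storage/ocs-operator/api/v4/v1" // assumed import path
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getAnyStorageClusterInNamespace is a hypothetical variant that asks the API
// server for at most one StorageCluster in the given namespace.
func getAnyStorageClusterInNamespace(ctx context.Context, cl client.Client, namespace string) (*ocsv1.StorageCluster, error) {
	storageClusterList := &ocsv1.StorageClusterList{}
	if err := cl.List(ctx, storageClusterList, client.InNamespace(namespace), client.Limit(1)); err != nil {
		return nil, err
	}
	if len(storageClusterList.Items) == 0 {
		return nil, fmt.Errorf("no StorageCluster found in namespace %q", namespace)
	}
	return &storageClusterList.Items[0], nil
}

As the rest of the thread points out, such a limit makes the result order-dependent whenever more than one StorageCluster exists in the namespace.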
I was going through #2837, and from what I understood we can have multiple storageClusters, but the other storageClusters' phase will be Ignored. I modified the function there to GetStorageClusterInNamespace.
You cannot have multiple storage clusters in a single namespace, and this lists only inside the namespace. So my ask of adding client.Count(1) is meaningful.
From what I remember, we can have multiple storageClusters created in the same namespace, but their phases will be Ignored.
@malayparida2000 @iamniting please correct me if I am wrong
We can have multiple storageClusters inside a single namespace; it's just that, except for one, all the others would be in the Ignored phase. Limiting the list to 1 result can easily give us the wrong result, i.e. a phase-Ignored storageCluster. I would suggest that listing all the items and then filtering that list based on the requirement is the better approach.
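A rough sketch of the list-then-filter approach described above; the literal phase string "Ignored" is an assumption here, since the repository may expose it as a named constant:

package util

import (
	"context"
	"fmt"

	ocsv1 "github.com/red-hat-storage/ocs-operator/api/v4/v1" // assumed import path
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getActiveStorageCluster lists every StorageCluster in the namespace and
// returns the first one that is not in the "Ignored" phase.
func getActiveStorageCluster(ctx context.Context, cl client.Client, namespace string) (*ocsv1.StorageCluster, error) {
	storageClusterList := &ocsv1.StorageClusterList{}
	if err := cl.List(ctx, storageClusterList, client.InNamespace(namespace)); err != nil {
		return nil, err
	}
	for i := range storageClusterList.Items {
		if storageClusterList.Items[i].Status.Phase != "Ignored" { // phase string assumed
			return &storageClusterList.Items[i], nil
		}
	}
	return nil, fmt.Errorf("no active StorageCluster found in namespace %q", namespace)
}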
Issue: #2877
Please excuse me for pitching in, and correct me if I'm not understanding this properly. Agreed, it's a user error, but if it did happen, a count of (1) has at least a 50% chance of going wrong.
Until we make the storageCluster in a namespace a singleton, I don't think we incur a lot of memory usage even if the runtime does a deep copy to hand us the result: the stack can grow up to MBs, and last time I checked a storageCluster is around 2KB (without managed fields); with pointers and all it'll be more than that, but not in the double-digit MBs.
However, I agree with Ohad that since we are returning a pointer to the caller, the storageCluster escapes to the heap, and so does the list queried in this function, increasing GC and heap usage. But kube clients outside of the manager aren't backed by a cache, so every list call hits the API server (with the limit, if set).
IMO, we should code for correctness and list all storageClusters to filter out the Ignored ones, unless we make storageCluster a singleton before merging this PR.
I completely agree with Leela. Whether or not we decide to enforce a single StorageCluster as mandatory is a separate discussion. However, currently, limiting the results to just one could lead to incorrect outcomes if we actually have more than one StorageCluster, so we cannot limit the result to count(1) as of now.
Regarding the idea of creating a webhook to prevent additional StorageCluster creation, let's consider the practical implications. In most cases, a customer would either have a single StorageCluster as intended, or they might mistakenly create a second one. It's unlikely that a customer would intentionally create hundreds of ignored StorageCluster CRs to overwhelm our cache. Given this, the overhead of implementing a webhook to restrict additional StorageCluster creation might outweigh listing an extra StorageCluster in the "Ignored" phase if one exists due to a customer error.
@leelavg The point is that the probability of that happening is very low, and we will report it in the log and instruct the user/support to delete the other StorageCluster. The code as it is now just ignores the additional storage cluster, which is the wrong thing to do! It should be an error.
Now, considering the number of places and flows where this method is invoked and the number of deep copies this method will create for a very big struct (StorageCluster), I am asking us to treat an additional storage cluster as an error, not as something that is just ignored.
@rewantsoni please make that change as requested
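A sketch of the behaviour being requested here, under the assumption that anything beyond a single StorageCluster in the namespace should surface as an error rather than be skipped:

package util

import (
	"context"
	"fmt"

	ocsv1 "github.com/red-hat-storage/ocs-operator/api/v4/v1" // assumed import path
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getSingleStorageCluster returns the lone StorageCluster in the namespace and
// errors out when zero or more than one are found.
func getSingleStorageCluster(ctx context.Context, cl client.Client, namespace string) (*ocsv1.StorageCluster, error) {
	storageClusterList := &ocsv1.StorageClusterList{}
	if err := cl.List(ctx, storageClusterList, client.InNamespace(namespace)); err != nil {
		return nil, err
	}
	switch len(storageClusterList.Items) {
	case 0:
		return nil, fmt.Errorf("no StorageCluster found in namespace %q", namespace)
	case 1:
		return &storageClusterList.Items[0], nil
	default:
		// A second StorageCluster means the system is misconfigured; report it
		// instead of silently picking one.
		return nil, fmt.Errorf("found %d StorageClusters in namespace %q, expected exactly one", len(storageClusterList.Items), namespace)
	}
}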
Given this, the overhead of implementing a webhook to restrict additional StorageCluster creation might outweigh listing an extra StorageCluster in the "Ignored" phase if one exists due to a customer error.

We can deliberate about the necessity for a webhook at a later time; that has nothing to do with the fact that the system is misconfigured and we just ignore it.
@rewantsoni I see the changes to the token done for the peertokens handler, but where are the changes to the peerclient handler? The introduction of a storageCluster UID into the token needs to happen there as well (it should be a mandatory field).
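For context, a hypothetical shape of a ticket payload carrying that UID; the field names here are illustrative and not the repository's actual definition:

package services

import (
	k8stypes "k8s.io/apimachinery/pkg/types"
)

// onboardingTicket is a hypothetical payload; the real struct lives in the
// ux-backend services package and may differ.
type onboardingTicket struct {
	ID             string `json:"id"`
	ExpirationDate int64  `json:"expirationDate,string"`
	// StorageCluster is the UID of the StorageCluster the ticket was generated
	// for; the review asks for it to be mandatory in every token flavour.
	StorageCluster k8stypes.UID `json:"storageCluster"`
}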
onboarding ticket should contain the storageCluster UID the token was generated for, and for that it requires the namespacedName of the storageCluster we want to generate the ticket for as input.

Signed-off-by: Rewant Soni <[email protected]>
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: nb-ohad, rewantsoni. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Merged 502f1dd into red-hat-storage:main
The onboarding ticket should contain the storageCluster UID the token was generated for; to get that UID, we fetch the storageCluster from the operator namespace.
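Putting it together, a rough sketch of how a handler might resolve that UID before minting a ticket, using the GetStorageClusterInNamespace helper from this PR (the util import path and the way the operator namespace is obtained are assumptions):

package handlers

import (
	"context"

	k8stypes "k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"

	"github.com/red-hat-storage/ocs-operator/v4/controllers/util" // assumed import path
)

// storageClusterUID fetches the StorageCluster from the operator namespace and
// returns its UID for embedding in the onboarding ticket.
func storageClusterUID(ctx context.Context, cl client.Client, operatorNamespace string) (k8stypes.UID, error) {
	sc, err := util.GetStorageClusterInNamespace(ctx, cl, operatorNamespace)
	if err != nil {
		return "", err
	}
	return sc.UID, nil
}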