
onboardingTicket: add uid to the ticket #2844

Merged
merged 2 commits into red-hat-storage:main on Nov 6, 2024

Conversation


@rewantsoni rewantsoni commented Oct 8, 2024

The onboarding ticket should contain the UID of the StorageCluster the token was generated for; to obtain that UID, we fetch the StorageCluster from the operator namespace.

@mrudraia1 commented:

I hit a StorageCluster UID error while testing PR #2827. PR #2827 can be tested only after this PR is merged.

services/ux-backend/main.go Outdated Show resolved Hide resolved
controllers/util/client.go Outdated Show resolved Hide resolved
controllers/util/k8sutil.go Outdated Show resolved Hide resolved
onboarding-validation-keys-generator/main.go Outdated Show resolved Hide resolved
services/ux-backend/main.go Show resolved Hide resolved
services/ux-backend/main.go Outdated Show resolved Hide resolved
services/ux-backend/main.go Outdated Show resolved Hide resolved
onboarding-validation-keys-generator/main.go Outdated Show resolved Hide resolved
@rewantsoni rewantsoni force-pushed the tokens branch 3 times, most recently from 0a6badd to 78d3681 Compare October 21, 2024 09:44

func GetStorageClusterInNamespace(ctx context.Context, cl client.Client, namespace string) (*ocsv1.StorageCluster, error) {
	storageClusterList := &ocsv1.StorageClusterList{}
	err := cl.List(ctx, storageClusterList, client.InNamespace(namespace))
Contributor:

Can you please cap this list operation to 1 result?

Member Author:

I was going through #2837, and from what I understood we can have multiple StorageClusters, but the other StorageClusters' phase will be Ignored. I modified the function there to GetStorageClusterInNamespace.

Contributor:

You cannot have multiple storage clusters in a single namespace, and this lists only inside the namespace, so my ask of adding client.Count(1) is meaningful.

Member Author:

From what I remember, we can have multiple StorageClusters created in the same namespace, but their phases will be Ignored.
@malayparida2000 @iamniting please correct me if I am wrong

Contributor:

We can have multiple StorageClusters inside a single namespace; it's just that all except one would be in phase Ignored. Limiting the list to 1 result can easily give us the wrong result: a phase-Ignored StorageCluster. I would suggest that listing all the items and then filtering that list based on the requirement is the better approach.

Member Author:

Issue: #2877

Contributor:

Please excuse me for pitching in, and correct me if I'm not understanding it properly. Agreed, it's a user error, but if it did happen, a count of 1 has at least a 50% chance of going wrong.

Until we make the StorageCluster in a namespace a singleton, I don't think we incur a lot of memory usage even if the runtime does a deep copy for us: the stack size can go up to MBs, and last time I checked a StorageCluster is around 2 KB (without managed fields); with pointers and all it'll be more than that, but not double-digit MBs.

However, I agree with Ohad that since we return a pointer to the caller, the StorageCluster escapes to the heap, as does the list queried in this function, increasing GC and heap usage. But kube clients outside of a manager aren't backed by a cache, so every list call hits the API server (with the limit, if set).

IMO, we should code for correctness and list all StorageClusters to filter out the Ignored ones, unless we make StorageCluster a singleton before merging this PR.

Contributor:

I completely agree with Leela. Whether or not we decide to enforce a single StorageCluster as mandatory is a separate discussion. However, currently, limiting the results to just one could lead to incorrect outcomes if we actually have more than one StorageCluster. So we cannot limit the result to count(1) as of now.

Regarding the idea of creating a webhook to prevent additional StorageCluster creation, let's consider the practical implications. In most cases, a customer would either have a single StorageCluster as intended, or they might mistakenly create a second one. It's unlikely that a customer would intentionally create hundreds of ignored StorageCluster CRs to overwhelm our cache. Given this, the overhead of implementing a webhook to restrict additional StorageCluster creation might outweigh listing an extra StorageCluster in the "ignored" phase if one exists due to a customer error.

Contributor:

@leelavg The point is that the probability of that happening is very low, and we will report it in the log and instruct the user/support to delete the other StorageCluster. The code as it is now just ignores the additional storage cluster, which is the wrong thing to do! It should be an error.

Now, considering the number of places and flows where this method is invoked, and the number of deep copies this method will create for a very big struct (StorageCluster), I am asking us to treat an additional storage cluster as an error, not as a thing that is just ignored.

@rewantsoni please make that change as requested

@nb-ohad (Contributor) commented Nov 6, 2024:

@malayparida2000

> Given this, the overhead of implementing a webhook to restrict additional StorageCluster creation might outweigh listing an extra StorageCluster in the "ignored" phase if one exists due to a customer error.

We can deliberate about the necessity of a webhook at a later time; that has nothing to do with the fact that the system is misconfigured and we just ignore it.
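To make the outcome of this thread concrete, here is a stdlib-only sketch of the error-on-extra-cluster behavior being requested: filter out phase-Ignored StorageClusters and surface a misconfiguration error instead of silently picking one. The types are simplified stand-ins for the ocsv1 API (the real code populates the list via cl.List with client.InNamespace); treat it as an illustration, not the merged implementation.

```go
package main

import "fmt"

// StorageCluster is a simplified stand-in for the ocsv1 type; only the
// fields relevant to this discussion are modeled.
type StorageCluster struct {
	Name  string
	Phase string
}

// getStorageCluster filters out phase-Ignored clusters and errors unless
// exactly one active StorageCluster remains, so a misconfigured namespace
// is reported rather than silently ignored.
func getStorageCluster(items []StorageCluster) (*StorageCluster, error) {
	var active []*StorageCluster
	for i := range items {
		if items[i].Phase == "Ignored" {
			continue
		}
		active = append(active, &items[i])
	}
	if len(active) == 0 {
		return nil, fmt.Errorf("no StorageCluster found")
	}
	if len(active) > 1 {
		// Misconfiguration: treat the extra cluster as an error.
		return nil, fmt.Errorf("expected exactly one StorageCluster, found %d", len(active))
	}
	return active[0], nil
}

func main() {
	items := []StorageCluster{
		{Name: "extra", Phase: "Ignored"},
		{Name: "ocs-storagecluster", Phase: "Ready"},
	}
	sc, err := getStorageCluster(items)
	if err != nil {
		panic(err)
	}
	fmt.Println(sc.Name) // ocs-storagecluster
}
```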

controllers/util/provider.go Outdated Show resolved Hide resolved
controllers/util/provider.go Show resolved Hide resolved
services/types.go Outdated Show resolved Hide resolved
controllers/util/k8sclient.go Outdated Show resolved Hide resolved
@nb-ohad (Contributor) left a comment:

@rewantsoni I see the changes to the token done for the peertokens handler, but where are the changes to the peerclient handler?
The introduction of a storagecluster UID into the token needs to happen there as well (it should be a mandatory field).

@rewantsoni (Member Author):

> @rewantsoni You are in the context of handling the message, please follow the proper way to write into the response stream (w.WriteHeader), and use a proper HTTP response code

@nb-ohad http.Error is a helper function that sets the error code on the header and writes the response. See here


@rewantsoni rewantsoni force-pushed the tokens branch 4 times, most recently from 7cb261c to f25ed85 Compare October 22, 2024 05:06
@rewantsoni rewantsoni requested a review from leelavg November 4, 2024 07:17
controllers/util/k8sutil.go Outdated Show resolved Hide resolved
onboarding ticket should contain the storageCluster UID the token
was generated for; for that, it requires the namespacedName of
the storageCluster we want to generate the ticket for as input

Signed-off-by: Rewant Soni <[email protected]>
Signed-off-by: Rewant Soni <[email protected]>

nb-ohad commented Nov 6, 2024

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Nov 6, 2024
Copy link
Contributor

openshift-ci bot commented Nov 6, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: nb-ohad, rewantsoni

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 6, 2024
@openshift-merge-bot openshift-merge-bot bot merged commit 502f1dd into red-hat-storage:main Nov 6, 2024
11 checks passed
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. lgtm Indicates that a PR is ready to be merged.
5 participants