Implement DRClusterConfig reconciler to create required ClusterClaims #1485
Conversation
Force-pushed the branch from cd24518 to 8b258ad
internal/controller/drclusters.go
Outdated
		return err
	}

	if _, err := mwu.FindManifestWork(mwu.BuildManifestWorkName(util.MWTypeDRCConfig), drcluster.GetName()); err == nil {
@ShyamsundarR it's the only place we are checking the manifest after deletion. If there is no error on line 248, the manifest will be deleted, so why do we need an additional check?
Good question. I want to ensure that the DRClusterConfig resource on the managed cluster is deleted before we proceed to delete the ManifestWork for the ramen components on the managed cluster. If we reach that point before the resource is actually deleted, the DRClusterConfig reconciler may no longer exist to finalize the resource locally.
Hence we delete and wait until it is actually deleted, and only then proceed with deletion of the ramen operators on the managed cluster.
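The delete-and-wait ordering described above can be sketched as follows. This is a simplified stand-in, not the actual Ramen code: `finder`, `deleteMW`, and `ensureDRCConfigDeleted` are illustrative names, with `finder` playing the role of `mwu.FindManifestWork` (nil error while the ManifestWork still exists).

```go
package main

import "fmt"

// finder is a stand-in for mwu.FindManifestWork: it returns nil while the
// ManifestWork still exists, and a not-found error once it is gone.
type finder func(name string) error

var errNotFound = fmt.Errorf("manifest work not found")

// ensureDRCConfigDeleted deletes the DRClusterConfig ManifestWork and reports
// whether the caller must requeue and wait before deleting the ramen operator
// ManifestWorks on the managed cluster.
func ensureDRCConfigDeleted(find finder, deleteMW func(name string) error, name string) (requeue bool, err error) {
	if err := deleteMW(name); err != nil {
		return false, err
	}
	// If the ManifestWork is still found, the DRClusterConfig resource on the
	// managed cluster may not be finalized yet; requeue instead of proceeding
	// to delete the operator that would finalize it.
	if err := find(name); err == nil {
		return true, nil
	}
	return false, nil
}

func main() {
	stillExists := func(string) error { return nil }
	gone := func(string) error { return errNotFound }
	noopDelete := func(string) error { return nil }

	r1, _ := ensureDRCConfigDeleted(stillExists, noopDelete, "drcconfig")
	r2, _ := ensureDRCConfigDeleted(gone, noopDelete, "drcconfig")
	fmt.Println(r1, r2) // requeue while still present; proceed once gone
}
```

The point of the shape is that deletion of the operator is gated on the find returning an error, which is exactly the `err == nil` check the comment thread is about.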
internal/controller/util/mw_util.go
Outdated
) error {
	rawObject, err := GetRawExtension(mw.Spec.Workload.Manifests, gvk)
	if err != nil {
		return fmt.Errorf("failed fetching MaintenanceMode from manifest %w", err)
Should this be "failed fetching resource from manifest"? It is not MaintenanceMode here.
Fixed
internal/controller/util/mw_util.go
Outdated
	err = json.Unmarshal(rawObject.Raw, object)
	if err != nil {
		return fmt.Errorf("failed unmarshaling MaintenanceMode from manifest %w", err)
Same as previous
Fixed
	if err := u.statusUpdate(); err != nil {
		u.log.Info("failed to update status", "failure", err)
	}

-	return ctrl.Result{Requeue: requeue || u.requeue}, reconcileError
+	return ctrl.Result{Requeue: requeue || u.requeue}, nil
If there is an error during fencing, we will not return an error?
If there is an error in fencing, we note it in the status and log it, and we set requeue to true (for future reconciles). Returning an error is not required at this point.
	// another DRPolicy
	added := map[string]bool{}

	for idx := range drpolicies.Items {
Is it possible that a policy is marked for deletion? Do we need to handle that here anyway?
Interestingly, there is a test case for this that passes; though based on the comment, I think it is possible that we may retain a stale schedule until a future DRCluster reconcile (triggered for any reason) is initiated. Will fix this by skipping deleted DRPolicies in the loop.
The parent PR that this is a part of is merged, hence this will be addressed as a separate PR.
Force-pushed the branch from e822464 to a1c09b4
LGTM
Force-pushed the branch from 62b1124 to 8f10323
* Add logger to DRClusterConfig reconciler
  Also, cleanup some scaffolding comments.
  Signed-off-by: Shyamsundar Ranganathan <[email protected]>
* Add initial reconcile for DRClusterConfig
  - Add finalizer to resource being reconciled
  - Remove on delete
  - Update reconciler to rate limit max exponential backoff to 5 minutes
  Signed-off-by: Shyamsundar Ranganathan <[email protected]>
* Add roles for various storage classes and cluster claims
  Signed-off-by: Shyamsundar Ranganathan <[email protected]>
* Add StorageClass listing and dummy functions for claim creation
  Building the scaffold for the overall functionality.
  Signed-off-by: Shyamsundar Ranganathan <[email protected]>
* Add ClusterClaims for detected StorageClasses
  Signed-off-by: Shyamsundar Ranganathan <[email protected]>
* Implement pruning of ClusterClaims
  For the classes listed, those that no longer need a ClusterClaim are deleted. Added a StorageClass watcher as well, to reconcile on changes to StorageClasses.
  Signed-off-by: Shyamsundar Ranganathan <[email protected]>
* Implement ClassClaims for VRClass and VSClass
  Signed-off-by: Shyamsundar Ranganathan <[email protected]>
Force-pushed the branch from 8f10323 to e3fe7f6
…RamenDR#1485)

* Add logger to DRClusterConfig reconciler
  Also, cleanup some scaffolding comments.
* Add initial reconcile for DRClusterConfig
  - Add finalizer to resource being reconciled
  - Remove on delete
  - Update reconciler to rate limit max exponential backoff to 5 minutes
* Add roles for various storage classes and cluster claims
* Add StorageClass listing and dummy functions for claim creation
  Building the scaffold for the overall functionality.
* Add ClusterClaims for detected StorageClasses
* Implement pruning of ClusterClaims
  For the classes listed, those that no longer need a ClusterClaim are deleted. Added a StorageClass watcher as well, to reconcile on changes to StorageClasses.
* Implement ClassClaims for VRClass and VSClass

Signed-off-by: Shyamsundar Ranganathan <[email protected]>
Part of #1403
This implements creation of ClusterClaims for the various classes detected on the ManagedCluster that is also marked, via a DRClusterConfig, as a DR peer.
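Creating claims for detected classes and pruning those no longer needed is essentially a set reconciliation. A minimal sketch, with illustrative names (`reconcileClaims` is not the actual reconciler code):

```go
package main

import (
	"fmt"
	"sort"
)

// reconcileClaims computes which ClusterClaims to create for newly detected
// classes (storage / volume-replication / volume-snapshot classes) and which
// existing claims to prune because their class is gone.
func reconcileClaims(detected, existing []string) (create, prune []string) {
	want := map[string]bool{}
	for _, c := range detected {
		want[c] = true
	}

	have := map[string]bool{}
	for _, c := range existing {
		have[c] = true
	}

	for c := range want {
		if !have[c] {
			create = append(create, c) // class detected, no claim yet
		}
	}

	for c := range have {
		if !want[c] {
			prune = append(prune, c) // claim exists, class is gone
		}
	}

	sort.Strings(create)
	sort.Strings(prune)

	return create, prune
}

func main() {
	create, prune := reconcileClaims(
		[]string{"rbd-sc", "cephfs-sc"}, // classes currently on the cluster (hypothetical names)
		[]string{"rbd-sc", "old-sc"},    // claims that already exist
	)
	fmt.Println(create, prune)
}
```

A watcher on the class resources (as the pruning commit describes for StorageClasses) then re-triggers this computation whenever the detected set changes.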
Future considerations:
- clusterID
- validation