diff --git a/docs/reference/Scheduler.md b/docs/reference/Scheduler.md
index d30f742a9..8c403cf0d 100644
--- a/docs/reference/Scheduler.md
+++ b/docs/reference/Scheduler.md
@@ -203,7 +203,8 @@ autoscaling:
        "metadata": {}
      }
    }
-  ]
+  ],
+  "pdbMaxUnavailable": "10%",
   "autoscaling": {
     "enabled": true,
     "min": 10,
@@ -234,6 +235,7 @@ forwarders: Forwarders
 autoscaling: Autoscaling
 spec: Spec
 annotation: Map
+pdbMaxUnavailable: String
 ```
 
 - **Name**: Scheduler name. This name is unique and will be the same name used for the kubernetes namespace. It's
@@ -254,6 +256,7 @@ annotation: Map
   used by them, limits and images. More info [here](#spec).
 - **annotations**: Allows annotations for the scheduler's game room. Know more about annotations on Kubernetes
   [here](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations)
+- **pdbMaxUnavailable**: Defines the disruption budget for game rooms. Optional; defaults to "10%". The value can be either a percentage string between 0% and 100% (e.g. "5%") or an absolute number of rooms (e.g. "100").
 
 ### PortRange
 The **PortRange** is used to select a random port for a GRU between **start** and **end**.
@@ -403,4 +406,13 @@ It is represented as:
 - **name**: Name of the port. Facilitates on recognition;
 - **protocol**: Port protocol. Can be UDP, TCP or SCTP.;
 - **port**: The port exposed.
-- **hostPortRange**: The [port range](#portrange) for the port to be allocated in the host. Mutually exclusive with the port range configured in the root structure.
\ No newline at end of file
+- **hostPortRange**: The [port range](#portrange) for the port to be allocated in the host. Mutually exclusive with the port range configured in the root structure.
+
+#### PDB Max Unavailable
+
+A string value that defines the disruption budget of Game Rooms for a specific scheduler.
+Maestro will create a [PDB Resource](https://kubernetes.io/docs/tasks/run-application/configure-pdb/)
+to prevent evictions from drastically impacting the availability of the Game Rooms.
+
+By default this value is set to 10%, so at worst the runtime can evict 10% of the pods. There is no way to control
+which pods will be evicted (e.g. preferring pending over ready ones).
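
For context on the change above, here is a sketch of how the new field is used and what it maps to. The first snippet is an abridged scheduler spec (structure inferred from the JSON example earlier in this reference; the surrounding fields and names are illustrative, not a complete spec), and the second is the kind of `PodDisruptionBudget` resource the docs say Maestro will create from it (the resource name and label selector are hypothetical placeholders):

```yaml
# Abridged scheduler spec; pdbMaxUnavailable is the new field.
name: example-scheduler        # hypothetical scheduler name
pdbMaxUnavailable: "5%"        # percentage string ("5%") or absolute room count ("100")
autoscaling:
  enabled: true
  min: 10
---
# Roughly the PDB Maestro would create for the scheduler above.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-scheduler-pdb  # hypothetical resource name
  namespace: example-scheduler # the scheduler name doubles as the namespace
spec:
  maxUnavailable: "5%"         # mirrors pdbMaxUnavailable
  selector:
    matchLabels:
      app: example-scheduler   # hypothetical label; the actual selector is up to Maestro
```

With this in place, a voluntary disruption (node drain, eviction API call) is blocked by Kubernetes whenever it would push more than 5% of the matching pods into an unavailable state.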