
If you try to run a resource-hungry application, especially on a cluster with autoscaling enabled, at some point this happens:

Well, this is true to some extent, but the answer is that it depends, and it all boils down to a crucial topic in Kubernetes cluster management.

The container's hard memory limit is, in this case, 16 GB, and we can't run our container.
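To make the scenario concrete, here is a minimal sketch of what such a limit looks like in a deployment spec. The deployment name, image, and request values are assumptions made for illustration; only the 16Gi hard limit comes from the scenario above. If no node offers that much allocatable memory, the scheduler leaves the pod Pending.

```yaml
# Hypothetical deployment illustrating a hard memory limit of 16Gi.
# If no node has 16Gi of allocatable memory, the pod stays Pending
# and the cluster autoscaler may not be able to add a big enough node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: video-converter            # name assumed for illustration
spec:
  replicas: 1
  selector:
    matchLabels:
      app: video-converter
  template:
    metadata:
      labels:
        app: video-converter
    spec:
      containers:
        - name: converter
          image: example.com/video-converter:latest   # placeholder image
          resources:
            requests:
              memory: "8Gi"        # what the scheduler reserves on a node
              cpu: "2"
            limits:
              memory: "16Gi"       # hard limit: exceeding it gets the container OOM-killed
              cpu: "4"
```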

This is the easiest and safest way to make sure that pod autoscaling and cluster scale-down do not affect overall solution stability, as long as the minimal set of containers protected by the disruption budget fits within the minimal cluster size and is enough to handle the bare minimum of requests.
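As a sketch, a disruption budget for the frontend might look like the following; the label selector and the minAvailable value are assumptions. The point is that voluntary disruptions, including node drains during cluster scale-down, cannot take the workload below the configured floor.

```yaml
# Hypothetical PodDisruptionBudget keeping a minimum number of frontend
# pods running during voluntary disruptions such as node drains or
# cluster scale-down. Values are illustrative.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: frontend-pdb
spec:
  minAvailable: 2                  # never evict below 2 ready frontend pods
  selector:
    matchLabels:
      app: frontend                # label assumed to match the frontend deployment
```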

Not really, because when there is no traffic on the UI (frontend and backend), it is possible to fit 20 video converters in the cluster at once, and by reserving capacity this way we artificially limit the deployment's ability to scale, as the sketch below illustrates.
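A horizontal autoscaler along these lines would let the converter fleet grow into whatever capacity the UI is not using. The metric and the replica bounds are assumptions, with 20 taken from the example above.

```yaml
# Hypothetical HorizontalPodAutoscaler letting the video-converter
# deployment scale out when spare capacity exists. Bounds and target
# utilization are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: video-converter-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: video-converter
  minReplicas: 1
  maxReplicas: 20                  # matches the "20 video converters" example
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```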
