Horizontal pod autoscaling in Kubernetes

I have a cluster that scales based on the CPU usage of my pods. The documentation states that I should prevent thrashing by not scaling too fast. I want to experiment with the autoscaling speed, but I can't seem to find where to apply the following flags:

  • --horizontal-pod-autoscaler-downscale-delay
  • --horizontal-pod-autoscaler-upscale-delay

My goal is to set the cooldown timer lower than 5m or 3m. Does anyone know how this is done, or where I can find documentation on how to configure it? Also, if this has to be configured in the HPA autoscaling YAML file, does anyone know what definition should be used for this, or where I can find documentation on how to configure the YAML? This is a link to the Kubernetes documentation about scaling cooldowns that I used.
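For reference, the delays in question are not fields of the HPA object itself; a minimal autoscaling/v1 HPA only declares the scale target, the replica bounds, and the CPU target. The resource names below are placeholders:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa          # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # placeholder Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```

The equivalent imperative form is `kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=50`; neither exposes the cooldown delays.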

The HPA controller is part of the controller manager, and you'll need to pass the flags to it; see also the docs. It is not something you'd do via kubectl: the controller manager is part of the control plane (master), so how you set the flags depends on how you set up Kubernetes and/or which offering you're using. For example, in GKE the control plane is not accessible; in Minikube you'd SSH into the node, etc.


Based on the discussion here, this is my experience and it's working for me; maybe it can help someone.

SSH to the master node and edit /etc/kubernetes/manifests/kube-controller-manager.manifest as below:

- /hyperkube
- controller-manager
- --kubeconfig=/etc/kubernetes/kube-controller-manager-kubeconfig.yaml
- --leader-elect=true
- --service-account-private-key-file=/etc/kubernetes/ssl/service-account-key.pem
- --root-ca-file=/etc/kubernetes/ssl/ca.pem
- --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem
- --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem
- --enable-hostpath-provisioner=false
- --node-monitor-grace-period=40s
- --node-monitor-period=5s
- --pod-eviction-timeout=5m0s
- --profiling=false
- --terminated-pod-gc-threshold=12500
- --horizontal-pod-autoscaler-downscale-delay=2m0s
- --horizontal-pod-autoscaler-upscale-delay=2m0s
- --v=2
- --use-service-account-credentials=true
- --feature-gates=Initializers=False,PersistentLocalVolumes=False,VolumeScheduling=False,MountPropagation=False

The two horizontal-pod-autoscaler delay flags are the parameters I added. The change is picked up without restarting the kubelet service.

If you don't see the value updated, you can restart the kubelet with systemctl restart kubelet.
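To confirm the flags made it into the manifest, a quick grep is enough. A sample file is recreated under /tmp here so the commands are self-contained; on a real master you'd point grep at the actual file under /etc/kubernetes/manifests instead:

```shell
# Write a sample manifest fragment mirroring the relevant args from the
# answer above (stand-in for the real static pod manifest on the master).
cat > /tmp/kube-controller-manager.manifest <<'EOF'
- --horizontal-pod-autoscaler-downscale-delay=2m0s
- --horizontal-pod-autoscaler-upscale-delay=2m0s
EOF

# Extract the delay flags to verify both are set as intended.
grep -oE 'horizontal-pod-autoscaler-[a-z]+scale-delay=[0-9ms]+' \
  /tmp/kube-controller-manager.manifest
```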

Note: I created an HA cluster using Kubespray.

Hope this saves someone some trouble.

Thank you!


If you set up the cluster using kubeadm, add those parameters to the kubeadm master configuration file under controllerManagerExtraArgs. A sample is given below:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: ${k8s_version}
cloudProvider: vsphere
api:
  advertiseAddress: ${k8s_master_ip}
  controlPlaneEndpoint: ${k8s_master_lb_hostname}
apiServerCertSANs:
  - ${k8s_master_lb_ip}
  - ${k8s_master_ip0}
  - ${k8s_master_ip1}
  - ${k8s_master_ip2}
  - ${k8s_master_lb_hostname}
apiServerExtraArgs:
  endpoint-reconciler-type: lease
  cloud-config: /etc/vsphere/config
  enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,Priority"
controllerManagerExtraArgs:
  horizontal-pod-autoscaler-use-rest-clients: "true"
  horizontal-pod-autoscaler-downscale-delay: 5m0s
  horizontal-pod-autoscaler-upscale-delay: 2m0s
etcd:
  endpoints:
    - https://${k8s_master_ip0}:2379
    - https://${k8s_master_ip1}:2379
    - https://${k8s_master_ip2}:2379
  caFile: /etc/kubernetes/pki/etcd/ca.pem
  certFile: /etc/kubernetes/pki/etcd/client.pem
  keyFile: /etc/kubernetes/pki/etcd/client-key.pem



  • You now have labels for custom metrics with Kubernetes 1.12 (Sept. 2018): stackoverflow.com/a/52565900/6309
  • I have tried accessing the kube-controller-manager through kubectl, but I can't seem to find the correct way. Do you know how to access the kube-controller-manager?
  • Not something you'd do via kubectl. It's part of the control plane (master) so depends on how you set up Kubernetes and/or which offering you're using. For example, in GKE the control plane is not accessible, in Minikube you'd ssh into the node, etc.
  • So it is not possible to do this when using GKE? Or should I SSH into the master node?
  • Not on GKE, no. You don't have access to the master there.
  • Thanks and maybe my (old) experiment here is still somehow useful: github.com/mhausenblas/k8s-autoscale
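Following up on the custom-metrics comment above: an autoscaling/v2beta1 HPA can target a per-pod custom metric instead of (or in addition to) CPU. The resource names, metric name, and target value below are made up for illustration; the metric must be exposed through a metrics adapter in your cluster:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa                    # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                      # placeholder Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: requests_per_second # hypothetical custom metric
      targetAverageValue: "100"       # scale out above 100 req/s per pod
```

Note that even with custom metrics, the upscale/downscale delays are still controller-manager flags, not part of this object.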