How to schedule pod restarts

Is it possible to restart pods automatically based on the time?

For example, I would like to restart the pods of my cluster every morning at 8.00 AM.

There's a specific resource for that: CronJob

Here is an example:

apiVersion: batch/v1  # use batch/v1beta1 on clusters older than 1.21; it was removed in 1.25
kind: CronJob
metadata:
  name: your-cron
spec:
  schedule: "*/20 8-19 * * 1-5"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: your-periodic-batch-job
        spec:
          containers:
          - name: my-image
            image: your-image
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
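
To try it, apply the manifest and watch the Jobs it spawns (the file name here is just a placeholder):

kubectl apply -f your-cron.yaml
kubectl get cronjob your-cron
kubectl get jobs --watch    # each scheduled run creates a Job, which in turn creates a pod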

Change spec.concurrencyPolicy to Replace if you want the new pod to replace the old one when it starts. With Forbid, creation of the new pod is skipped if the old pod is still running.
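
For reference, the replace-on-overlap behaviour only needs that one field changed; a minimal fragment of the same spec would be:

spec:
  schedule: "*/20 8-19 * * 1-5"
  concurrencyPolicy: Replace   # a job still running from the previous trigger is cancelled and replaced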

Another quick and dirty option for a pod with a restart policy of Always (which CronJobs are not supposed to handle; see the pod template notes under creating a cron job spec) is a livenessProbe that simply tests the time and fails on a specified schedule. For example: after startup, wait an hour, then check the hour every minute; if the hour is 3 (AM), fail the probe so the container restarts, otherwise pass.

livenessProbe:
  exec:
    command:
    - bash
    - -c
    # fail (exit 1) only when the current hour is 3 AM; any other hour passes (exit 0)
    - exit $(test $(date +%H) -eq 3 && echo 1 || echo 0)
  failureThreshold: 1
  initialDelaySeconds: 3600
  periodSeconds: 60

Time granularity is up to how you format the date and write the test ;) Of course, this does not work if you are already using the liveness probe as an actual liveness probe ¯\_(ツ)_/¯
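
Just to illustrate the exit codes the probe relies on, the same test can be run in any shell with GNU date; this is only an illustration, not part of the manifest:

# exits 1 (probe failure, so the kubelet restarts the container) only while the hour is 03
bash -c 'exit $(test $(date +%H) -eq 3 && echo 1 || echo 0)'
echo "probe exit status: $?"
# finer granularity just means a different format string and comparison, e.g. date +%H%M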

Use a CronJob, but not to run your pods; use it to schedule a Kubernetes API call that restarts the deployment every day (kubectl rollout restart). That way, if something goes wrong, the old pods will not be taken down or removed.

A rollout creates a new ReplicaSet and waits for it to be up before killing off the old pods and rerouting the traffic, so the Service continues uninterrupted.
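
If you want to watch that happen, something along these lines shows the new ReplicaSet scaling up while the old one scales down (the deployment name is a placeholder):

kubectl rollout restart deployment/<YOUR DEPLOYMENT NAME>
kubectl get replicasets --watch                            # a new ReplicaSet appears and scales up
kubectl rollout status deployment/<YOUR DEPLOYMENT NAME>   # blocks until the rollout has finished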

You have to set up RBAC so that the Kubernetes client running from inside the cluster has permission to make the needed calls to the Kubernetes API.

---
# Service account the client will use to reset the deployment,
# by default the pods running inside the cluster can do no such things.
kind: ServiceAccount
apiVersion: v1
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
---
# allow getting status and patching only the one deployment you want
# to restart
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
rules:
  - apiGroups: ["apps", "extensions"]
    resources: ["deployments"]
    resourceNames: ["<YOUR DEPLOYMENT NAME>"]
    verbs: ["get", "patch", "list", "watch"] # "list" and "watch" are only needed
                                             # if you want to use `rollout status`
---
# bind the role to the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-restart
subjects:
  - kind: ServiceAccount
    name: deployment-restart
    namespace: <YOUR NAMESPACE>

And the cronjob specification itself:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
spec:
  concurrencyPolicy: Forbid
  schedule: '0 8 * * *' # cron spec of time, here, 8 o'clock
  jobTemplate:
    spec:
      backoffLimit: 2 # this has a very low chance of failing, as all this does
                      # is prompt kubernetes to schedule new replica set for
                      # the deployment
      activeDeadlineSeconds: 600 # timeout, makes most sense with 
                                 # "waiting for rollout" variant specified below
      template:
        spec:
          serviceAccountName: deployment-restart # name of the service
                                                 # account configured above
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl # probably any kubectl image will do;
                                     # optionally specify a version, but this
                                     # should not be necessary, as long as the
                                     # version of kubectl is new enough to
                                     # have `rollout restart`
              command:
                - 'kubectl'
                - 'rollout'
                - 'restart'
                - 'deployment/<YOUR DEPLOYMENT NAME>'

Optionally, if you want the cronjob to wait for the deployment to roll out, change the cronjob command to:

command:
 - bash
 - -c
 - >-
   kubectl rollout restart deployment/<YOUR DEPLOYMENT NAME> &&
   kubectl rollout status deployment/<YOUR DEPLOYMENT NAME>
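
To test the whole setup without waiting for the schedule, you can trigger the CronJob by hand (the job name below is arbitrary):

kubectl create job --from=cronjob/deployment-restart deployment-restart-manual -n <YOUR NAMESPACE>
kubectl logs -f job/deployment-restart-manual -n <YOUR NAMESPACE>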

According to cronjob-in-kubernetes-to-restart-delete-the-pod-in-a-deployment, you could create a kind: CronJob whose jobTemplate runs your containers directly, with an activeDeadlineSeconds of one day (so the pod is terminated roughly when the next run starts). Following your example, the schedule would be 0 8 * * * for 8:00 AM.
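
A minimal sketch of that variant, assuming your workload can acceptably run as a Job pod instead of a Deployment (names and image are placeholders):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: your-daily-app
spec:
  schedule: "0 8 * * *"        # every day at 8:00 AM
  concurrencyPolicy: Replace   # yesterday's pod is terminated when the new one starts
  jobTemplate:
    spec:
      activeDeadlineSeconds: 86400   # one day; the Job and its pod are killed after this
      template:
        spec:
          containers:
          - name: app
            image: your-image
          restartPolicy: OnFailure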

Comments
  • It is not clear to me how this works. Does it deploy a new pod, and does Kubernetes then automatically remove one of the old pods?
  • Surely this approach would cause it to restart continually during the specified period, i.e. for a whole minute. Using precision to the second makes it possible to miss the check altogether. Maybe checking whether the uptime is greater than 24 hours would be simpler and more appropriate? (See the sketch after these comments.)
  • This approach does avoid restart storms by waiting an hour after startup before probing again (initialDelaySeconds), so anywhere between 3:00 and 3:01 it fails, and once it restarts it waits an hour before it starts checking the time again (with a startup time of ~25 seconds for a fairly large vert.x app, the first probes start between 4:01 and 4:02).
  • The liveness command cannot be passed to exec as one string the way a shell would run it; it has to be wrapped in a shell, i.e. - bash, - -c, and - exit $(test $(date +%H) -eq 3 && echo 1 || echo 0) on three separate lines, as shown above.
  • @MassoodKhaari you are correct: since the test runs inside the pod's container, the date / test / exit commands depend entirely on what the container's shell and image provide.
  • This approach has some downtime: after the liveness probe fails and before the container is restarted, the pod cannot accept traffic. If all containers happen to restart at the same time, there will be a service interruption.
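
For reference, a minimal sketch of the uptime-based variant suggested in the comments: fail the probe once the main process has been running for more than 24 hours. It assumes a procps-style ps is available in the image and, as noted above, that the liveness probe is not already needed for real health checking:

livenessProbe:
  exec:
    command:
    - bash
    - -c
    # ps -o etimes= prints the elapsed seconds of PID 1 (the container's main process);
    # fail once it exceeds 24 hours so the kubelet restarts the container
    - test $(ps -o etimes= -p 1) -lt 86400
  failureThreshold: 1
  periodSeconds: 60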