Kubernetes delete pod job

I wanted to know whether it is possible to have a job in Kubernetes that runs every hour and deletes certain pods. I need this as a temporary stopgap to fix an issue.

Yes, it's possible.

I think the easiest way is just to call the Kubernetes API directly from a job. Assuming RBAC is configured, something like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup
spec:
  backoffLimit: 4
  template:
    spec:
      # the service account belongs on the pod template, not on the Job spec
      serviceAccountName: service-account-that-has-access-to-api
      restartPolicy: Never
      containers:
      - name: cleanup
        image: image-that-has-curl
        command:
        - /bin/sh
        - -c
        # run through a shell so $(cat ...) reads the token file at runtime
        - >
          curl -ik -X DELETE
          -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
          https://kubernetes.default.svc.cluster.local/api/v1/namespaces/{namespace}/pods/{name}

You can also run a kubectl proxy sidecar so the job can reach the API server over localhost without handling tokens or TLS itself. More information is in the Kubernetes documentation on accessing the API from a Pod.
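
A minimal sketch of that sidecar pattern, assuming an image that ships kubectl (bitnami/kubectl stands in for it here) and reusing the placeholder names from the Job above:

apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-via-proxy
spec:
  template:
    spec:
      serviceAccountName: service-account-that-has-access-to-api
      restartPolicy: Never
      containers:
      # sidecar: authenticates with the pod's ServiceAccount and exposes the API on localhost
      - name: kubectl-proxy
        image: bitnami/kubectl             # assumption: any image that ships kubectl
        command: ["kubectl", "proxy", "--port=8001"]
      # main container: talks to the proxy, so no token handling is needed
      - name: cleanup
        image: image-that-has-curl
        command:
        - /bin/sh
        - -c
        # retry briefly in case the proxy is not up yet
        - curl --retry 5 --retry-connrefused -X DELETE http://localhost:8001/api/v1/namespaces/{namespace}/pods/{name}

Note that the proxy container keeps running after curl exits, so the pod never reaches Completed and the Job will not finish on its own; the token approach above avoids that.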

Running plain kubectl in a pod is also an option: see Kubernetes - How to run kubectl commands inside a container?

Use a CronJob to run the Job every hour. The Kubernetes API can be accessed from a Pod given the proper permissions. When a Pod is created, a default ServiceAccount is assigned to it. The default ServiceAccount has no RoleBinding, so neither the default ServiceAccount nor the Pod has permission to invoke the API.
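
A minimal RBAC sketch for such a ServiceAccount; all names and the namespace are placeholders, and the Role is namespaced, so it only allows acting on pods in that one namespace:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-cleaner
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-cleaner
  namespace: my-namespace
rules:
- apiGroups: [""]                # core API group, where Pods live
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-cleaner
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-cleaner
subjects:
- kind: ServiceAccount
  name: pod-cleaner
  namespace: my-namespace

The ServiceAccount name (pod-cleaner here) is what you would then reference from the Job's or CronJob's pod template.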

According to the docs, "Deleting a Job will cleanup the pods it created" - in practice this only appears to be true when deleting Jobs via kubectl. CronJobs run like cron tasks on a Linux or UNIX system. They are useful for creating periodic and recurring tasks, like running backups or sending emails, and can also schedule individual tasks for a specific time, such as a low-activity period. Cron jobs have limitations and idiosyncrasies.
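
Putting these pieces together, a hedged sketch of a CronJob that runs at the top of every hour and deletes pods by label; the namespace, label selector, and image are assumptions to adapt (on clusters older than 1.21 the API group is batch/v1beta1 rather than batch/v1):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hourly-pod-cleanup
  namespace: my-namespace
spec:
  schedule: "0 * * * *"              # top of every hour
  jobTemplate:
    spec:
      backoffLimit: 2
      template:
        spec:
          serviceAccountName: pod-cleaner
          restartPolicy: Never
          containers:
          - name: cleanup
            image: bitnami/kubectl   # assumption: any image that ships kubectl
            command:
            - /bin/sh
            - -c
            # kubectl picks up the in-cluster ServiceAccount credentials automatically
            - kubectl delete pods -n my-namespace -l app=my-app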

There is another possible workaround.

You could add a liveness probe (super easy if you have none already) that does not start checking until after one hour and then always fails.

livenessProbe:
  tcpSocket:
    port: 1234
  initialDelaySeconds: 3600

This will wait 3600 seconds (1 hour), then try to connect to port 1234; if that fails, the kubelet kills and restarts the container (not the pod!).

When the Job gets deleted, any created pods get deleted as well. @soltysh There doesn't appear to be any ownerReference set. This is the full YAML description of one of the pods:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2019-01-16T08:00:07Z"
  generateName: autoimport-1547625600-
  labels:
    controller-uid: bcf8d9b3-1964-11e9-8180-2a239420aa56
    job-name: autoimport-1547625600
  name: autoimport-1547625600-llh89
  namespace: foo-prod
  resourceVersion: …

A Job in Kubernetes is a supervisor for pods carrying out batch processes, that is, processes that run to completion. A Job and its pods can be removed manually, e.g. kubectl delete job countdown, which prints job "countdown" deleted. Is there a way to automatically remove completed Jobs besides making a CronJob to clean up completed Jobs? The K8s Job documentation states that the intended behavior of completed Jobs is for them to remain in a completed state until manually deleted.

You can perform a graceful pod deletion with kubectl delete pods <pod>. Using the garbage collector would only delete the pods, but the Job itself would still be in the system. If you don't want to delete the Job manually, you could write a little script that runs in your cluster, checks for completed Jobs, and deletes them (see the sketch below).
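
A sketch of such a script, assuming kubectl is available in the image and relying on the status.successful field selector that batch Jobs expose; the namespace and the one-hour interval are placeholders:

#!/bin/sh
# Minimal in-cluster cleanup loop (sketch): delete Jobs that have completed
# successfully, then sleep for an hour. NAMESPACE is a placeholder.
NAMESPACE="my-namespace"

while true; do
  kubectl delete jobs -n "$NAMESPACE" --field-selector status.successful=1
  sleep 3600
done

Run it from a long-lived workload (e.g. a Deployment) using a ServiceAccount that is allowed to list and delete Jobs.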

Deleting a Job will clean up the pods it created. This is important because Kubernetes will by default keep retrying a failing Job to get a success out of it. If the pods keep coming back, delete the workload that owns them instead: kubectl delete -n NAMESPACE deployment DEPLOYMENT, where NAMESPACE is the namespace it's in and DEPLOYMENT is the name of the deployment. In some cases the pods could also be running due to a Job or DaemonSet; check which one owns them and run the corresponding delete command (see the snippet below).
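
If you are unsure what owns a pod that keeps coming back, a quick way to check before picking the delete command (all names are placeholders):

# See what created the pod (look at the "Controlled By" line)
kubectl describe pod <pod> -n NAMESPACE | grep -i "controlled by"

# Then delete the owning workload rather than the pod itself, for example:
kubectl delete -n NAMESPACE deployment DEPLOYMENT
kubectl delete -n NAMESPACE job JOB
kubectl delete -n NAMESPACE daemonset DAEMONSET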

Comments
  • Why not run a cron outside of k8s to delete certain pods every hour? It would be much easier.
  • If the command returns a non-zero value, the kubelet kills the Container and restarts it. - kubernetes.io/docs/tasks/configure-pod-container/… - So, we are back to square one.
  • Yes, this is what is supposed to happen when a pod is deleted. Or is it not a part of a replicaset/deployment?
  • But would the pod, or the app inside it, still be usable in the meantime? The web application needs to keep serving traffic.
  • The OP was about Pod deletion. But it's not getting deleted in this solution.
  • @user1555190, no. As soon as it gets deleted it is taken out of any Service that wraps it. Once the container has been restarted it will be available in the Service again. If the purpose is for the pod to be unavailable, this won't work. If the purpose is for the app to restart, this will work perfectly.