Kubernetes: kubectl apply does not update pods when using "latest" tag


I'm using kubectl apply to update my Kubernetes pods:

kubectl apply -f /my-app/service.yaml
kubectl apply -f /my-app/deployment.yaml

Below is my service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    run: my-app
spec:
  type: NodePort
  selector:
    run: my-app 
  ports:
  - protocol: TCP
    port: 9000
    nodePort: 30769

Below is my deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:  
  selector:
    matchLabels:
      run: my-app
  replicas: 2
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
      - name: my-app
        image: dockerhubaccount/my-app-img:latest
        ports:
        - containerPort: 9000
          protocol: TCP
      imagePullSecrets:
      - name: my-app-img-credentials
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%

This works fine the first time, but on subsequent runs, my pods are not getting updated.

I have read the suggested workaround at https://github.com/kubernetes/kubernetes/issues/33664 which is:

kubectl patch deployment my-app -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"

I was able to run the above command, but it did not resolve the issue for me.

I know that I can trigger pod updates by manually changing the image tag from "latest" to another tag, but I want to make sure I get the latest image without having to check Docker Hub.

Any help would be greatly appreciated.

If nothing changes in the deployment spec, the pods will not be updated for you. This is one of many reasons it is not recommended to use :latest, as the other answer explains in more detail. The Deployment controller is very simple and essentially just does DeepEquals(old.Spec.Template, new.Spec.Template), so you need some actual change to the pod template, such as the one your PATCH call makes by setting a label to the current datetime.
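Since the controller only rolls out when the pod template changes, recent kubectl versions give you this trick built in: kubectl rollout restart (v1.15+) bumps a kubectl.kubernetes.io/restartedAt annotation on the pod template for you. A sketch, assuming the deployment name from this question and a configured cluster:

```shell
# "rollout restart" changes the pod template (via an annotation), which is
# exactly the kind of spec change the Deployment controller looks for.
kubectl rollout restart deployment/my-app

# Wait until the new pods are ready before relying on them.
kubectl rollout status deployment/my-app
```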


You're missing an imagePullPolicy in your deployment. Try this:

containers:
- name: my-app
  image: dockerhubaccount/my-app-img:latest
  imagePullPolicy: Always

The default policy is IfNotPresent, which is why yours is not updating.

I will incorporate two notes from the link:

Note: You should avoid using the :latest tag when deploying containers in production as it is harder to track which version of the image is running and more difficult to roll back properly.

Note: The caching semantics of the underlying image provider make even imagePullPolicy: Always efficient. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed.
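Building on the note about avoiding :latest, one alternative is to pin the Deployment to an immutable digest instead of a tag. A sketch, reusing the image and deployment names from this question and assuming docker and kubectl are available against a real registry and cluster:

```shell
# Pull the current "latest" locally, then resolve its immutable digest.
docker pull dockerhubaccount/my-app-img:latest
DIGEST=$(docker inspect --format '{{index .RepoDigests 0}}' dockerhubaccount/my-app-img:latest)

# Point the container at the digest; every apply/rollback is now reproducible.
kubectl set image deployment/my-app my-app="${DIGEST}"
```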


Turns out I misunderstood the workaround command I gave from the link.

I thought it was a one-time command that configured my deployment to treat all future kubectl apply commands as a trigger to update my pods.

I actually just had to run the command every time I wanted to update my pods:

kubectl patch deployment my-app -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
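To avoid retyping this, the command can be wrapped in a small helper script. A sketch, assuming the deployment name "my-app" from this question; the kubectl lines are commented out so the script is safe to dry-run without a cluster:

```shell
#!/bin/sh
# Rebuilds the date label on every run, so each invocation changes the
# pod template and triggers a fresh rolling update.
DEPLOYMENT="${1:-my-app}"
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"$(date +%s)\"}}}}}"
echo "Patching ${DEPLOYMENT} with: ${PATCH}"

# Uncomment when pointed at a real cluster:
# kubectl patch deployment "${DEPLOYMENT}" -p "${PATCH}"
# kubectl rollout status deployment "${DEPLOYMENT}"
```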

Many thanks to everyone who helped!


There are two things here that relate to the issue:

  1. It is suggested to use kubectl apply when creating a resource for the first time; after that, you can use kubectl replace, kubectl edit, or kubectl patch to update the live object in place.

  2. Once you create a Service using either kubectl apply or kubectl create, you cannot simply replace it from a YAML file. In other words, the Service is allocated a clusterIP that cannot be patched or replaced. The only way to recreate the Service is to delete it and create it again with the same name.

NOTE: When I tried to replace a Service using kubectl apply while building a backup-and-restore solution, it resulted in the error below.

kubectl apply -f replace-service.yaml -n restore-proj
The Service "test-q12" is invalid: spec.clusterIP: Invalid value: "10.102.x.x": provided IP is already allocated.
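The way around the allocated clusterIP is the delete-and-recreate sequence described above. A minimal sketch, reusing the Service name, manifest, and namespace from this error:

```shell
# Delete the Service holding the conflicting clusterIP, then recreate it
# from the same manifest. Note there is a brief window with no Service
# (and no NodePort) between the two commands.
kubectl delete service test-q12 -n restore-proj
kubectl apply -f replace-service.yaml -n restore-proj
```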


Comments
  • According to kubernetes official documentation, you can omit the imagePullPolicy and use :latest as the tag for the image. kubernetes.io/docs/concepts/containers/images/#updating-images
  • @PrafullLadha According to the same documentation, using latest is discouraged, and the default policy is IfNotPresent. Please see the update on my answer. In fact, the very first sentence on your link is the answer I gave above.
  • IfNotPresent is not the default when the tag is latest. It's not recommended you use that feature, but it is present to reduce unexpected behavior.
  • I see, fair enough, was not aware of that
  • @rath Thanks, I tried adding "imagePullPolicy: Always" to my deployment.yaml, but it did not work. I seem to have misunderstood the workaround command from the link I gave. It was not a one-time command that made future kubectl apply commands update pods, it was the actual command that updated the pods. So all I had to do was run the command whenever I wanted to update my pods. Will add this as an answer later. Thanks!
  • Ah, I see, yes, doing delete and create is perfectly fine instead of apply. You can use an Ingress to make the service accessible in a better way
  • @rath You can always use an Ingress, or if you are on a cloud provider you can use a load balancer auto-provisioned by the Service to route traffic to the Pods in round robin.