How to get history of Pods run on Kubernetes Node?

In our Kubernetes cluster, we are running into sporadic situations where a cluster node runs out of memory and Linux invokes the OOM killer. Looking at the logs, it appears that the Pods scheduled onto the Node are requesting more memory than the Node can allocate.

The issue is that, when the OOM killer is invoked, it prints out a list of processes and their memory usage. However, as all of our Docker containers are Java services, the "process name" just appears as "java", which doesn't let us track down which particular Pod is causing the problem.

How can I get the history of which Pods were scheduled to run on a particular Node and when?

We use Prometheus to monitor OOM events.

This expression should report the number of times that memory usage has reached the limits:

rate(container_memory_failcnt{pod_name!=""}[5m]) > 0
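
If you want to spot-check that expression outside of your dashboards or alerts, it can also be run ad hoc against the Prometheus HTTP API. A minimal sketch, assuming Prometheus is reachable at prometheus:9090 (hostname and port are assumptions):

# Ad-hoc query against the Prometheus HTTP API (endpoint is an assumption)
curl -sG 'http://prometheus:9090/api/v1/query' \
  --data-urlencode 'query=rate(container_memory_failcnt{pod_name!=""}[5m]) > 0'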

FYI: the next best thing to proper docs here is the code itself.

You can now use the kube-state-metrics metric kube_pod_container_status_terminated_reason to detect OOM events:

kube_pod_container_status_terminated_reason{reason="OOMKilled"}

kube_pod_container_status_terminated_reason{container="addon-resizer",endpoint="http-metrics",instance="100.125.128.3:8080",job="kube-state-metrics",namespace="monitoring",pod="kube-state-metrics-569ffcff95-t929d",reason="OOMKilled",service="kube-state-metrics"}
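
If kube-state-metrics is not scraped in your cluster yet, a rough equivalent is to inspect each container's last termination reason with kubectl. A sketch assuming jq is available (jq is not part of kubectl):

# List containers whose last termination reason was OOMKilled (requires jq)
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[]
      | select(any(.status.containerStatuses[]?; .lastState.terminated.reason? == "OOMKilled"))
      | "\(.metadata.namespace)/\(.metadata.name)"'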

I guess your pods don't have requests and limits set, or the values are not ideal.

If you set these up properly, when a Pod starts to use too much RAM, that Pod will be OOM-killed and you will be able to find out what is causing the issue.
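
A quick sketch of setting these from the command line; the deployment name my-service and the values are placeholders, and you can equally declare the same fields in the Pod spec:

# Set requests and limits on an existing deployment (name and values are placeholders)
kubectl set resources deployment my-service \
  --requests=cpu=100m,memory=256Mi \
  --limits=cpu=500m,memory=512Mi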

As for seeing all the Pods on a node, you can use kubectl get events or docker ps -a on the node, as mentioned in the other answers/comments.
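
For the events route, scheduling events record which Pod was assigned to which node, so something along these lines gives a rough recent history (the node name is a placeholder). Keep in mind events are only retained for a limited time, one hour by default:

# Recent "Successfully assigned <namespace>/<pod> to <node>" events, oldest first
kubectl get events --all-namespaces \
  --field-selector reason=Scheduled \
  --sort-by=.lastTimestamp | grep <node-name>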

One way is to look at the docker ps -a output on the node and correlate the container names with your Pods' containers.
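
A sketch of that correlation, assuming the node runs the Docker runtime: the kubelet names containers k8s_<container>_<pod>_<namespace>_<pod-uid>_<restart-count>, so the Pod name is embedded in each container name.

# On the node: kubelet-managed containers carry the Pod name in their Docker name
docker ps -a --filter name=k8s_ --format 'table {{.Names}}\t{{.Status}}\t{{.CreatedAt}}'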

Separately, now that field-selector support for this has been merged, kubectl get pods can list the Pods scheduled on a specific node directly.
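
A minimal sketch (the node name is a placeholder); note this lists the Pods currently bound to the node, not a historical record:

kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name>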

Comments
  • One way would be to check kubectl get events and get an idea of creation/deletion of various pods on different nodes.