Why is the Prometheus pod Pending after installing it with Helm on a Kubernetes cluster managed by Rancher server?

I installed Rancher server and two Rancher agents in Vagrant, then switched to the K8S environment from the Rancher server.

On the Rancher server host, I installed kubectl and helm, then installed Prometheus with Helm:

helm install stable/prometheus

Now, checking the status from the Kubernetes dashboard, there are 2 pods pending:

It says the PersistentVolumeClaim is not bound, so aren't the K8S storage components installed by default with Rancher server?
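
A quick way to check whether the cluster provides any StorageClass by default (class names depend on the setup):

kubectl get storageclass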

(another name, same issue)

Edit
> kubectl get pvc
NAME                                   STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
voting-prawn-prometheus-alertmanager   Pending                                                     6h
voting-prawn-prometheus-server         Pending                                                     6h
> kubectl get pv
No resources found.
Edit 2
$ kubectl describe pvc voting-prawn-prometheus-alertmanager
Name:          voting-prawn-prometheus-alertmanager
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        app=prometheus
               chart=prometheus-4.6.9
               component=alertmanager
               heritage=Tiller
               release=voting-prawn
Annotations:   <none>
Capacity:
Access Modes:
Events:
  Type    Reason         Age                From                         Message
  ----    ------         ----               ----                         -------
  Normal  FailedBinding  12s (x10 over 2m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

$ kubectl describe pvc voting-prawn-prometheus-server
Name:          voting-prawn-prometheus-server
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        app=prometheus
               chart=prometheus-4.6.9
               component=server
               heritage=Tiller
               release=voting-prawn
Annotations:   <none>
Capacity:
Access Modes:
Events:
  Type    Reason         Age                From                         Message
  ----    ------         ----               ----                         -------
  Normal  FailedBinding  12s (x14 over 3m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

PVs are cluster-scoped and PVCs are namespace-scoped. If your application runs in one namespace and the PVC lives in another, that can be an issue. If so, either use RBAC to grant the proper permissions, or put the app and the PVC in the same namespace.

Can you also make sure the PV being created from a StorageClass comes from the default StorageClass of the cluster?
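
If a StorageClass exists but is not marked as default, one way to let claims without an explicit class bind is the annotation from the Kubernetes docs; a sketch, where the class name standard is just a placeholder for whatever kubectl get storageclass shows:

kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'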

I found that I was missing a storage class and storage volumes. I fixed similar problems on my cluster by first creating a storage class:

kubectl apply -f storageclass.yaml

storageclass.yaml:
    {
      "kind": "StorageClass",
      "apiVersion": "storage.k8s.io/v1",
      "metadata": {
        "name": "local-storage",
        "annotations": {
          "storageclass.kubernetes.io/is-default-class": "true"
        }
      },
      "provisioner": "kubernetes.io/no-provisioner",
      "reclaimPolicy": "Delete"
    }

and then using that storage class when installing Prometheus with Helm (the chart exposes this as server.persistentVolume.storageClass):

helm install stable/prometheus --set server.persistentVolume.storageClass=local-storage
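
Since both the server and the alertmanager claims were Pending in the question, the alertmanager claim presumably needs a class as well; assuming the stable/prometheus chart exposes the analogous alertmanager value, something like:

helm install stable/prometheus \
  --set server.persistentVolume.storageClass=local-storage \
  --set alertmanager.persistentVolume.storageClass=local-storage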

I was also forced to create a volume for Prometheus to bind to:

kubectl apply -f prometheusVolume.yaml

prometheusVolume.yaml:
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: prometheus-volume
    spec:
      storageClassName: local-storage
      capacity:
        storage: 2Gi #Size of the volume
      accessModes:
        - ReadWriteOnce #type of access
      hostPath:
        path: "/mnt/data" #host location

You could use other storage classes; I found there are a lot to choose between, but then there might be other steps involved to get them working.

Comments
  • What is the output of kubectl get pvc,pv?
  • @Nickolay I added results of your command.
  • And please add output of kubectl describe pvc <pvc_name> as well
  • @Nickolay Added. Edit 2
  • @online: have you found a solution yet? Am facing the same issue :(
  • This helped me even on a bare-metal cluster without Rancher.
