Providing multiple health check URLs for Kubernetes probes

I am using container probes to check the health of the application running inside the container within a Kubernetes pod. For now, my example pod config looks like this:

"spec":{
   "containers":[
      {
        "image":"tomcat",
        "name":"tomcat",
        "livenessProbe":{
           "httpGet":{
              "port": 80
            },
            "initialDelaySeconds": 15,
            "periodSeconds": 10
        }
      }
   ]
}

In my case, I need to monitor two ports on the same container: 80 and 443. But I cannot find a way to specify both ports for the same container in the config file. Is there an alternative way of doing this?

If you have curl or wget in the container, you could just run an exec health check and do something like curl localhost:80 && curl -k https://localhost:443 (note the -k, since the certificate on port 443 is likely not valid for localhost).
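
A minimal sketch of that exec approach, assuming curl is present in the image (the stock tomcat image may not ship it). The -f flag makes curl exit non-zero on HTTP error responses, which is what marks the probe as failed, and -k skips certificate verification on the TLS port:

"livenessProbe": {
  "exec": {
    "command": [
      "/bin/sh",
      "-c",
      "curl -sf http://localhost:80/ && curl -skf https://localhost:443/"
    ]
  },
  "initialDelaySeconds": 15,
  "periodSeconds": 10
}

If either curl fails, the shell exits non-zero, the probe attempt counts as a failure, and after enough consecutive failures the kubelet restarts the container.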

For an HTTP probe, the kubelet sends an HTTP request to the specified path and port to perform the check. The kubelet sends the probe to the pod's IP address, unless the address is overridden by the optional host field in httpGet. Note that each httpGet (or tcpSocket) probe accepts exactly one port; allowing multiple ports in a single probe has been requested (see kubernetes/kubernetes#37218, linked below) but is not supported. See also "Configure Liveness, Readiness and Startup Probes" in the Kubernetes documentation.

It's not possible with a single probe; try to encapsulate the health check inside your application instead.

Ex: http://localhost:80/health_check?full => (proxy to) => http://localhost:443/health_check?full

This may help you: https://github.com/kubernetes/kubernetes/issues/37218
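
If you do expose such an aggregated endpoint, a single httpGet probe suffices. This is a sketch; /health_check?full is the hypothetical endpoint from the example above, and Kubernetes accepts a query string in httpGet.path:

"livenessProbe": {
  "httpGet": {
    "path": "/health_check?full",
    "port": 80
  },
  "initialDelaySeconds": 15,
  "periodSeconds": 10
}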

Note that Kubernetes does not expose the Docker HEALTHCHECK status in its API server, so internal system components cannot consume that information. Kubernetes also distinguishes liveness from readiness checks, so that other components can react differently (e.g., restarting the container vs. removing the pod from the list of endpoints for a Service), which the Docker HEALTHCHECK currently does not provide.
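
To make that distinction concrete, a container can declare both probe types side by side. The /healthz and /ready paths below are hypothetical; the point is that a failing livenessProbe restarts the container, while a failing readinessProbe only takes the pod out of Service endpoints:

"livenessProbe": {
  "httpGet": { "path": "/healthz", "port": 80 },
  "periodSeconds": 10
},
"readinessProbe": {
  "httpGet": { "path": "/ready", "port": 80 },
  "periodSeconds": 5
}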

This would be a very useful feature, but it is missing. As others mentioned earlier, you can use an exec script for the health check instead of httpGet and check both URLs in that script. Another option is to create a sidecar health container that monitors both URLs of the main container and takes action, as sketched below.
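
A sketch of the sidecar variant, using curlimages/curl as an example checker image: containers in a pod share the network namespace, so the sidecar reaches the main container via localhost, and since a pod is only Ready when all of its containers are ready, a failing readiness probe on the sidecar removes the whole pod from Service endpoints. (A liveness probe on the sidecar would only restart the sidecar itself, so readiness is the useful gate here.)

"containers": [
  {
    "image": "tomcat",
    "name": "tomcat"
  },
  {
    "image": "curlimages/curl",
    "name": "health-sidecar",
    "command": ["/bin/sh", "-c", "while true; do sleep 3600; done"],
    "readinessProbe": {
      "exec": {
        "command": [
          "/bin/sh",
          "-c",
          "curl -sf http://localhost:80/ && curl -skf https://localhost:443/"
        ]
      },
      "periodSeconds": 10
    }
  }
]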

Kubernetes uses liveness probes to know when to restart a container; if a container is, for example, deadlocked due to a multi-threading defect, restarting it can bring the application back. For a readiness probe, giving up means not routing traffic to the pod. Unlike a readiness probe, it is not idiomatic to check dependencies in a liveness probe. Liveness and readiness probes can greatly improve the robustness and resilience of your service, but if you do not carefully consider how these probes are used, especially under rare system dynamics, you risk harming the availability of the service (see the feature request "Multiple liveness checks", kubernetes/kubernetes#37218).

Use readiness and liveness probes for health checks. If the readiness probe fails, the endpoints controller removes the pod from the endpoints of all Services that match it, so it stops receiving traffic. If the container is unhealthy and fails its liveness check, Kubernetes restarts the container according to the pod's restart policy.

Kubernetes includes two different types of probes: liveness checks and readiness checks. HTTP checks monitor your deployment's health by sending an HTTP GET request to a specific URL path that you define.

The initialDelaySeconds parameter must be set to an appropriate value at which the health check probe should begin. Given that the /health endpoint runs on the same application server platform as other, more resource-consuming URLs, the initial delay must be long enough to ensure that the health check URL is active by the time the first probe fires; otherwise the liveness probe will fail before the application has started, it will never give a meaningful indication of the health of the container, and the container will be restarted repeatedly.
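
For a slow-starting servlet container like Tomcat, that might look like the sketch below. The values are illustrative, and failureThreshold adds further slack by requiring several consecutive failures before the kubelet restarts the container:

"livenessProbe": {
  "httpGet": {
    "path": "/health",
    "port": 80
  },
  "initialDelaySeconds": 60,
  "periodSeconds": 10,
  "failureThreshold": 3
}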

Comments
  • I tried the Docker HEALTHCHECK instruction while building the image. But even if the container is unhealthy, the pod description doesn't show those details; it just states that the container is in the Running phase.
  • I cannot touch my application; I need to do this from the outside. I am looking for some other solution.