How to use ConfigMap configuration with Helm NginX Ingress controller - Kubernetes


I've found documentation about how to configure the NginX ingress controller using a ConfigMap:

Unfortunately I couldn't find anywhere how to make my Ingress controller load that ConfigMap.

My ingress controller:

helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress

My config map:

kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-configmap
data:
  proxy-read-timeout: "86400s"
  client-max-body-size: "2g"
  use-http2: "false"

My ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    # (annotation key lost in formatting; only the value "HTTPS" survived)
spec:
  tls:
    - hosts:
      secretName: ingress-tls
  rules:
    - host:
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 443
          - path: /api
            backend:
              serviceName: api
              servicePort: 443

How do I make my Ingress load the configuration from the ConfigMap?

I've managed to display what YAML gets executed by Helm by adding the --dry-run --debug options at the end of the helm install command. Then I noticed that the controller is executed with the flag: --configmap={namespace-where-the-nginx-ingress-is-deployed}/{name-of-the-helm-chart}-nginx-ingress-controller. In order for your ConfigMap to be loaded, you need to name it the same way Helm does internally (and check the namespace).

kind: ConfigMap
apiVersion: v1
metadata:
  name: {name-of-the-helm-chart}-nginx-ingress-controller
  namespace: {namespace-where-the-nginx-ingress-is-deployed}
data:
  proxy-read-timeout: "86400"
  proxy-body-size: "2g"
  use-http2: "false"

The list of config properties can be found here.
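
To put the steps above together (release name "ingress" and namespace "ingress-nginx" are the ones from the question; substitute your own, and note that Helm 2's --dry-run still talks to Tiller, so this runs against your cluster):

```shell
# Re-render the chart without installing it and find the --configmap flag,
# which reveals the exact ConfigMap name the controller watches:
helm install --name ingress --namespace ingress-nginx \
  --dry-run --debug stable/nginx-ingress | grep -- '--configmap'

# Then create your ConfigMap under exactly that name and namespace:
kubectl apply -f ingress-configmap.yaml
```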


One can pass ConfigMap properties at installation time too:

helm install stable/nginx-ingress --name nginx-ingress --set controller.config.use-forwarded-headers='"true"'

NOTE: for non-string values I had to wrap the double quotes in single quotes to get it working.
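
The quoting trick works because the shell strips the outer single quotes and passes the inner double quotes through to helm, so the value arrives as a quoted string instead of being coerced into a YAML boolean. A quick local check of what the shell actually hands to helm (no cluster needed):

```shell
# The outer single quotes are removed by the shell; the inner double quotes
# survive, so helm receives the literal string "true" (quotes included).
printf '%s\n' controller.config.use-forwarded-headers='"true"'
# prints: controller.config.use-forwarded-headers="true"
```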


If you used helm install to install ingress-nginx and no explicit value was passed for which ConfigMap the nginx controller should look at, the default value seems to be {namespace}/{release-name}-nginx-ingress-controller. This is generated by the chart's templates (the links originally given here are dead).

To verify for yourself, find the command you installed the ingress-nginx chart with and add --dry-run --debug to it. This will show you the YAML files generated by Tiller to be applied to the cluster. The line # Source: nginx-ingress/templates/controller-deployment.yaml begins the controller Deployment, which has an arg of --configmap=. The value of this arg is the name the ConfigMap must have for the controller to notice it and use it to update its own .conf file. It can be passed explicitly, but if it is not, it gets a default value.

If a ConfigMap is created with the RIGHT name, the controller's logs will show that it picked up the configuration change and reloaded itself.

This can be verified with kubectl logs <pod-name-of-controller> -n <namespace-arg-if-not-in-default-namespace>. My log messages contained the text Configuration changes detected, backend reload required. These log messages will not be present if the ConfigMap name was wrong.

I believe the official documentation for this is unnecessarily lacking, but maybe I'm incorrect? I will try to submit a PR with these details. Someone who knows more should help flesh them out so people don't need to stumble on this unnecessarily.

Cheers, thanks for your post.

ConfigMap Resource, Using ConfigMap. Our installation instructions deploy an empty ConfigMap while the default installation manifests specify it in the command-line arguments of the Ingress controller. Create a ConfigMap file with the name nginx-config.yaml and set the values that make sense for your setup: ConfigMap: using a Configmap to set global configurations in NGINX. Annotations: use this if you want a specific configuration for a particular Ingress rule. Custom template: when more specific settings are required, like open_file_cache, adjust listen options as rcvbuf or when is not possible to change the configuration through the ConfigMap.

When you apply a ConfigMap with the needed key-value data, the Ingress controller picks up this information and inserts it into the nginx-ingress-controller Pod's configuration file /etc/nginx/nginx.conf. It is therefore easy to verify afterwards whether the ConfigMap's values have been successfully applied, by checking the actual nginx.conf inside the corresponding Pod.

You can also check the logs from the relevant nginx-ingress-controller Pod to see whether the ConfigMap data has been reloaded into the backend nginx.conf, and if not, to investigate the reason.
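
A concrete way to do both checks (pod name and namespace are placeholders; the grep target assumes the proxy-read-timeout key from the ConfigMap above, which nginx-ingress renders as proxy_read_timeout directives):

```shell
# Confirm the ConfigMap key was rendered into the controller's nginx.conf
kubectl exec -n ingress-nginx <controller-pod-name> -- \
  grep proxy_read_timeout /etc/nginx/nginx.conf

# Watch the controller logs for the reload triggered by the ConfigMap change
kubectl logs -n ingress-nginx <controller-pod-name> \
  | grep "Configuration changes detected"
```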


What you have is an Ingress yaml, not an Ingress controller deployment yaml. The Ingress controller is the Pod that actually does the work, and is usually an nginx container itself. An example of such a configuration can be found in the documentation you shared.


Using the example provided, you can also load config into nginx from a ConfigMap by mounting it into the controller's container:

        volumeMounts:
          - name: nginx-config
            mountPath: /etc/nginx/nginx.conf
            subPath: nginx.conf
      volumes:
        - name: nginx-config
          configMap:
            name: nginx-config
Here the ConfigMap nginx-config contains your nginx configuration under its nginx.conf key.
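
For context, a minimal sketch of where those fragments sit in a hand-rolled controller Deployment (the names and image here are illustrative, not from the chart):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/nginx.conf  # mount a single file, not the whole dir
              subPath: nginx.conf
      volumes:
        - name: nginx-config
          configMap:
            name: nginx-config  # ConfigMap holding an nginx.conf key
```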


  • is --configmap in a yaml somewhere? how do you see what --configmap is on a running deployment?
  • --configmap is not a recognized flag for helm. While I have no trouble creating a config map and nginx ingress, I am still clueless how to link the two together. The ingress is not picking up the properties from the config map.
  • Don't use the --configmap option; name your ConfigMap the same way Helm internally names it. If you read my answer again you will be able to spot it.
  • The name of the config map that is applied is {name-of-the-helm-chart}-ingress-nginx-ingress-controller and will be picked up from the namespace where the chart is deployed. Adding a comment just in case the edits in the answer are rejected. Thanks a lot for your help @NeverEndingQueue! Cheers!!!
  • Glad I could help. Thanks for your edit, I've adjusted it slightly. I think it's not: {name-of-the-helm-chart}-ingress-nginx-ingress-controller, but: {name-of-the-helm-chart}-nginx-ingress-controller. Is that right?
  • Thanks. Yes, the ConfigMap change nicely affects the nginx.conf inside. If someone wants to check whether the NginX config was affected from the outside (without going into the pod), you can set either server_tokens off or server_tokens on and notice whether NginX advertises itself in the HTTP headers.
  • what kind of logs should i see in the controller if a configmap was detected? because it seems like i followed everything here and i'm not sure if my .conf is updating
  • kubectl exec -ndefault nginx-ingress-controller-b545558d8-829dz -- cat /etc/nginx/nginx.conf | grep tokens for example.
  • As you've pointed out, the custom template is one way of configuring the NginX controller: custom-template. But the ConfigMap with its own key convention here: configmap is another way. Please note that the configmap provides configuration directly in data:. I am not looking for how to load a custom template from a ConfigMap, but how to load config from a ConfigMap directly.
  • I assume you mean the --configmap string flag ("Name of the ConfigMap containing custom global configurations for the controller."). I am actually using Helm; is there a way to load it? Helm does seem to support only controller.customTemplate.configMapName and controller.customTemplate.configMapKey, which are for a complete custom template.
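
The server_tokens check mentioned in the comments can be done from outside the cluster. A sketch, assuming a hypothetical host example.com served by this ingress:

```shell
# With server_tokens off, the header is just "Server: nginx";
# with it on, the version is appended, e.g. "Server: nginx/1.17.10".
curl -sI https://example.com/ | grep -i '^server:'
```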