SSH into Kubernetes nodes created through kops

I created a Kubernetes cluster through Kops. The configuration and the ssh keys were in a machine that I don't have access to anymore. Is it possible to ssh to the nodes through kops even if I have lost the key? I see there is a command -

kops get secrets

This gives me all the secrets. Can I use this to get ssh access to the nodes and how to do it?

I see the cluster state is stored in S3. Does it store the secret key as well?


You can't recover the private key, but you should be able to install a new public key by following this procedure:

kops delete secret --name <clustername> sshpublickey admin
kops create secret --name <clustername> sshpublickey admin -i ~/.ssh/newkey.pub
kops update cluster --yes                                 # reconfigure the auto-scaling groups
kops rolling-update cluster --name <clustername> --yes    # optional: immediately roll all the machines so they pick up the new key

Taken from this document:

https://github.com/kubernetes/kops/blob/master/docs/security.md#ssh-access
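
If you want to confirm that the new public key is registered in the state store, listing the SSH secrets should show it. The exact flags vary between kops releases; this assumes an older kops where SSH keys appear under kops get secrets:

kops get secrets --type sshpublickey admin --name <clustername>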

This gives me all the secrets. Can I use this to get ssh access to the nodes and how to do it?

Not really. These are the secrets used to access the kube-apiserver in the cluster, for example so that you can run kubectl commands.
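
For what it's worth, if you still have access to the S3 state store, those secrets are enough to regenerate a kubeconfig and regain kubectl access. The bucket name below is a placeholder, and depending on your kops version you may also need the --admin flag:

kops export kubecfg --name <clustername> --state s3://<your-state-bucket>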

I see the cluster state is stored in S3. Does it store the secret key as well?

The cluster state is stored in S3, but the state store doesn't contain the private key you need to SSH to the servers. The key pair itself is registered in AWS EC2 under 'Key Pairs'.

Unfortunately, AWS only lets you download the private key once, at the moment the key pair is created, so if you have lost it you are out of luck for the existing instances. If you have access to the AWS console, you could snapshot the root volume of each instance and recreate your nodes (or control plane) one by one with a different key pair whose private key you do have.
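
As a rough sketch, you can check which key pair each cluster instance was launched with from the AWS CLI. kops normally tags its instances with KubernetesCluster=<clustername>, but verify the tag names on your setup:

aws ec2 describe-instances \
    --filters "Name=tag:KubernetesCluster,Values=<clustername>" \
    --query "Reservations[].Instances[].[InstanceId,KeyName]" \
    --output table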

In my case, when I installed the cluster with kops, I ran ssh-keygen as below, which created the id_rsa public/private key pair and let me simply ssh to the nodes:

ssh-keygen
kops create secret --name ${KOPS_CLUSTER_NAME} sshpublickey admin -i ~/.ssh/id_rsa.pub

and then applied it to the cluster and connected with

kops update cluster --name ${KOPS_CLUSTER_NAME} --yes
ssh admin@ec2-13-59-4-99.us-east-2.compute.amazonaws.com
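
If ssh does not pick up the right identity by default, you can point it at the key explicitly. Note that the login user depends on the node OS image; this assumes the Debian-based images that use admin:

ssh -i ~/.ssh/id_rsa admin@ec2-13-59-4-99.us-east-2.compute.amazonaws.com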

You can run a new DaemonSet based on the gcr.io/google-containers/startup-script image to update the public key on all your nodes. This also covers the case where a new node is spun up, and it will replace the public key on all existing nodes.

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: startup-script
  labels:
    app: startup-script
spec:
  selector:
    matchLabels:
      app: startup-script
  template:
    metadata:
      labels:
        app: startup-script
    spec:
      hostPID: true
      containers:
        - name: startup-script
          image: gcr.io/google-containers/startup-script:v1
          imagePullPolicy: Always
          securityContext:
            privileged: true
          env:
          - name: STARTUP_SCRIPT
            value: |
              #! /bin/bash
              # Overwrite the node's authorized_keys with your public key.
              # Replace MYPUBLICKEY and, if needed, the username in the path.
              echo "MYPUBLICKEY" > /home/admin/.ssh/authorized_keys
              echo done
Replace MYPUBLICKEY with your public key, and adjust the username after /home/ (here admin) depending on which OS image your nodes use. This lets you regain SSH access to the nodes without changing or replacing your existing instances.

You can also add user-data to the instance group (kops edit ig nodes) with a small one-liner that appends your public key; a sketch follows below.
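
A minimal sketch of that instance group snippet, assuming your kops version supports the additionalUserData field and that your node image uses the admin user (adjust both for your setup):

spec:
  additionalUserData:
  - name: add-ssh-key.sh
    type: text/x-shellscript
    content: |
      #!/bin/sh
      # Append (rather than overwrite) your public key on boot
      echo "MYPUBLICKEY" >> /home/admin/.ssh/authorized_keys

After saving, apply it with kops update cluster --yes; since user-data only runs at boot, it takes effect on newly launched or rolled nodes.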
