Previously at work I built our own SaaS deployment management tool. One pain point in that project was running kubectl commands inside the container. Time for the project was limited back then, so I had to hack this part together by installing gcloud into the project container and authenticating with GCP at container boot time. Now I finally have some time to research how to do it right. This post is my summary of the process:
Step 1
Well, unfortunately we still need to use gcloud to authenticate the first time, so that the kubectl command is able to talk to your Kubernetes cluster API. The command is:
gcloud auth login
Step 2
Get the config file. Note that this config file is not the kubeconfig file we will use at the end.
gcloud container clusters get-credentials <cluster-name> --project <project-name>
After running this command in your shell, a config file should be created in your home directory at ~/.kube/config, which expires in one hour by default. Before you do the next step, you probably want to make sure your kubectl command actually works:
kubectl cluster-info
Step 3
Now let’s get the actual kubeconfig file by using the Kubernetes TLS certificates API. No worries if you are not familiar with this API; me neither. The Teleport team already wrote a shell script to do this job for us:
Note: this script requires the golang packages cfssl and cfssljson; install them if you don’t have them. Also make sure
$GOPATH/bin
is included in your shell PATH.
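If you don’t have cfssl and cfssljson yet, one way to get them is through the Go toolchain (a sketch, assuming a Go version new enough to support go install with a version suffix; on older toolchains go get was used instead):

```shell
# Install cfssl and cfssljson from CloudFlare's repository
# into $GOPATH/bin (defaults to $HOME/go/bin)
go install github.com/cloudflare/cfssl/cmd/cfssl@latest
go install github.com/cloudflare/cfssl/cmd/cfssljson@latest

# Make sure that directory is on your PATH so the script can find them
export PATH="$PATH:$(go env GOPATH)/bin"
```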
wget -O - https://raw.githubusercontent.com/gravitational/teleport/master/examples/gke-auth/get-kubeconfig.sh | bash
Step 4
After the command above runs for a few seconds, you should see the kubeconfig
file inside the ./build
directory. Now you can just use it with the kubectl command:
kubectl --kubeconfig ./build/kubeconfig get nodes
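If you don’t want to pass --kubeconfig on every invocation, you can instead point the KUBECONFIG environment variable at the generated file (the path below assumes you ran the script from your current directory):

```shell
# Use the long-lived kubeconfig for the rest of this shell session
export KUBECONFIG="$PWD/build/kubeconfig"

# Subsequent kubectl commands pick it up automatically
kubectl get nodes
```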
Reference: https://gravitational.com/blog/kubectl-gke/