Using Hyperkube
While you can run all the components as regular system daemons in unit files, you can also run the API server, the scheduler, and the controller-manager as containers. This is what kubeadm does.
Similar to minikube, there is a handy all-in-one binary named hyperkube, which is available as a container image (e.g. gcr.io/google_containers/hyperkube:v1.16.7). The image is hosted on Google's container registry rather than Docker Hub, so you reference the full registry path so Docker knows where to pull the image from. Updates and changes to the sub-commands are ongoing, so reference the help output of the version you are running for more information. You can find the current releases here: https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/hyperkube.
This method of installation consists of running the kubelet as a system daemon and configuring it to read in manifests that specify how to run the other components (i.e. the API server, the scheduler, etcd, and the controller-manager). In these manifests, the hyperkube image is used. The kubelet watches over them and makes sure the components get restarted if they die.
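As a rough sketch of how this fits together, the commands below create a minimal static Pod manifest for the API server in a directory the kubelet would be told to watch via its --pod-manifest-path flag. The directory, etcd endpoint, and service CIDR are placeholder assumptions; adjust them for your environment.
$ sudo mkdir -p /etc/kubernetes/manifests
$ cat <<EOF | sudo tee /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube:v1.16.7
    command:
    - /hyperkube
    - kube-apiserver
    - --etcd-servers=http://127.0.0.1:2379
    - --service-cluster-ip-range=10.0.0.0/16
EOF
The kubelet would then be started with --pod-manifest-path=/etc/kubernetes/manifests so it picks up this file, runs the container, and restarts it if it dies.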
To get a feel for this, you can simply download the hyperkube image and run a container to view the usage help for each component:
$ docker run --rm gcr.io/google_containers/hyperkube:v1.16.7 /hyperkube kube-apiserver --help
$ docker run --rm gcr.io/google_containers/hyperkube:v1.16.7 /hyperkube kube-scheduler --help
$ docker run --rm gcr.io/google_containers/hyperkube:v1.16.7 /hyperkube kube-controller-manager --help
This is also a very good way to start learning the various configuration flags.
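For example, you can filter the help output to just the flags you are interested in; the grep pattern below is only an illustration:
$ docker run --rm gcr.io/google_containers/hyperkube:v1.16.7 /hyperkube kube-apiserver --help | grep etcd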