Kubernetes is great for large-scale applications, providing security, scalability, and portability, which most organizations recognize. For large-scale applications, Kubernetes is usually deployed through a cloud provider such as Azure, AWS, or GCP with the full breadth of infrastructure options. But what about smaller local applications?
Whether you would like to experiment with an idea, get up to speed on Kubernetes, want to apply large-scale applications in IoT or on the Edge, or would like local clusters for tasks that require multiple configurations during your CI/CD end-to-end testing process, there are options available.
The three main choices are:

- minikube
- kind
- K3s
They all solve the same problem: they allow you to deploy, monitor, and execute local Kubernetes clusters quickly. If you want to dive deeper into the difference between the three, here’s a great article that compares them. Included below is a summary table (with some corrections and updates) from the article.
| | minikube | kind | K3s |
|---|---|---|---|
| Stars on GitHub (1000s) | 21.2 | 8.1 | 17.2 |
| Supported architectures | AMD64 | AMD64 | AMD64, ARMv7, ARM64 |
| Supported container runtimes | Docker, CRI-O, containerd, gVisor | Docker | Docker, containerd |
| Startup time, initial (following) | 5:19 (3:15) | 2:48 (1:06) | 0:15 (0:15) |
| Memory requirements | 2 GB | 8 GB (Windows, macOS) | 512 MB |
| Requires root | No | No | Yes (rootless is experimental) |
| Multi-cluster support | Yes | Yes | No (can be achieved using containers) |
Among the three, minikube is the most mature and widely adopted, according to our Kubernetes statistics, making it a safe bet for long-term support. Because minikube is maintained as part of the Kubernetes project, you can engage the support network directly through the #minikube channel in the Kubernetes Slack.
An introduction to minikube
minikube makes the core functions of Kubernetes, such as networking, dashboards, and security policies, easier to use, while allowing you to move beyond its limits when required. You will explore this soon by setting up a sample application in less than a minute. A number of add-ons are also available, making it simple to layer additional Kubernetes services on top of the core.
For example, there are add-ons for Google Cloud authentication (gcp-auth), metrics (metrics-server), the Helm package manager, NVIDIA GPUs, and an image registry. You can see the full list of what's supported by running `minikube addons list` after you install the service, or by browsing the GitHub repo.
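As a quick illustration (assuming minikube is already installed and running), you might inspect the add-on list and toggle the metrics-server add-on:

```shell
# Show all available add-ons and whether they are enabled
minikube addons list

# Enable the metrics-server add-on in the current profile
minikube addons enable metrics-server

# Disable it again when it is no longer needed
minikube addons disable metrics-server
```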
**minikube Add-ons List**
| ADD-ON NAME | PROFILE | STATUS |
|---|---|---|
| ambassador | minikube | disabled |
| auto-pause | minikube | disabled |
| csi-hostpath-driver | minikube | disabled |
| dashboard | minikube | disabled |
| default-storageclass | minikube | enabled |
| efk | minikube | disabled |
| freshpod | minikube | disabled |
| gcp-auth | minikube | disabled |
| gvisor | minikube | disabled |
| helm-tiller | minikube | disabled |
| ingress | minikube | disabled |
| ingress-dns | minikube | disabled |
| istio | minikube | disabled |
| istio-provisioner | minikube | disabled |
| kubevirt | minikube | disabled |
| logviewer | minikube | disabled |
| metallb | minikube | disabled |
| metrics-server | minikube | disabled |
| nvidia-driver-installer | minikube | disabled |
| nvidia-gpu-device-plugin | minikube | disabled |
| olm | minikube | disabled |
| pod-security-policy | minikube | disabled |
| registry | minikube | disabled |
| registry-aliases | minikube | disabled |
| registry-creds | minikube | disabled |
| storage-provisioner | minikube | enabled |
| storage-provisioner-gluster | minikube | disabled |
| volumesnapshots | minikube | disabled |
The platform runs on Linux, Windows, and macOS, making it easy to get started without worrying about compatibility. It also performs well and has a large base of developers, with over 650 contributors on GitHub. These contributions are not confined to minikube's creators: the community reported that, as of 2020, the number of non-Google commits had grown by more than seventy percent.
As the platform has matured, it has remained true to its core: it's easy to set up and deploy locally. minikube is particularly useful when you need multi-cluster support with a moderate memory footprint. It's less suitable when startup time is critical: K3s and kind both start faster, although kind requires more memory and K3s gives up multi-cluster support.
Setting up minikube
To set up minikube, you can use a hypervisor such as VirtualBox, VMware, Parallels, or HyperKit, or a container runtime such as Docker or Podman. You can also run minikube natively, without a virtual machine or container runtime, on a localhost shell by specifying:

```shell
minikube start --driver=none
```

Beware, however, that the `none` driver comes with possible security vulnerabilities, data loss on failure, and other issues, which are documented in detail here.
Once your hypervisor or container runtime is installed, install minikube (for Mac, see the instructions provided here) and start it.
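As one sketch of the installation, for Linux on AMD64 and assuming curl is available (other platforms should follow the linked instructions):

```shell
# Download the latest minikube binary for Linux AMD64
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

# Install the binary onto the PATH
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start a local cluster with the default driver
minikube start
```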
Once the cluster is up, list the pods across all namespaces with `kubectl get po -A`; you should see minikube's system pods running.

```shell
kubectl get pods --all-namespaces
```
If you don't have kubectl, minikube ships with its own, which it downloads on first use: run `minikube kubectl -- get po -A`, or set it up as a symlink. Following this, you can start your exploration through a dashboard by running `minikube dashboard`.
Once you have finished provisioning the Kubernetes cluster, deploy an application. Here, we deploy a simple nginx server.
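One way to do this with kubectl, assuming the cluster from the previous step is running (the deployment name and port follow the description below):

```shell
# Create a deployment named hello-minikube running an nginx image
kubectl create deployment hello-minikube --image=nginx

# Expose the deployment as a NodePort service on port 8080,
# forwarding to nginx's default port 80 inside the pod
kubectl expose deployment hello-minikube --type=NodePort --port=8080 --target-port=80
```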
In the command, you specify that you want to host the service called `hello-minikube`, with the specified image, and then expose it on port 8080. You can now access the service through `minikube service hello-minikube`. To manage the cluster, you can prompt minikube with commands like `minikube config set memory 16384` and `minikube delete --all`.
Similarities between Kubernetes and minikube
Now that you have an idea of how to deploy on minikube, we can dive deeper into how minikube compares to Kubernetes.
minikube is Kubernetes run as a local deployment. It creates a local endpoint that you can access through kubectl. You can choose the number of nodes in your cluster by specifying the `--nodes` flag when starting with your driver:

```shell
minikube start --nodes 4 -p hello-minikube
```
You can list your running nodes and check their status through kubectl.
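For example, assuming the multi-node profile started above is the active cluster:

```shell
# List the cluster's nodes and their status through the Kubernetes API
kubectl get nodes

# minikube's own view of the nodes in the current profile
minikube node list
```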
Logs can be accessed through `minikube logs` with the associated flags, such as `-f` to follow the output.
You can set up the logs to output during minikube’s interaction with a testing system. Telemetry is also supported through OpenTelemetry Tracing and Stackdriver.
Overall, the experience is very similar to Kubernetes, with reduced overhead and a local cluster option. In fact, you can even switch between a remote Kubernetes cluster and minikube fairly easily using kubectl contexts.
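A minimal sketch of context switching, assuming kubectl is already configured for both a remote cluster and a local minikube cluster:

```shell
# List the contexts kubectl knows about; the active one is starred
kubectl config get-contexts

# Point kubectl at the local minikube cluster
kubectl config use-context minikube
```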
minikube is a great alternative and/or complement to a full Kubernetes deployment, enabling lightweight local development. In light of these benefits, it's best to use minikube when you're looking to augment your current services with smaller-scale local deployment (such as rapid development environments), testing, or edge computing. It's also a great tool for prototyping your ideas, running a CI/CD automated test pipeline, and working with IoT and edge applications where cluster startup and shutdown times are important. As the community continues to grow, so will the number of applications for minikube.