Creating a Kubernetes cluster
You can create a Kubernetes cluster via the Cleura Cloud Management Panel using Gardener. This guide shows you how, and how to deploy a sample application on such a cluster.
Prerequisites
- If this is your first time using Gardener in Cleura Cloud, you need to activate the service from the Cleura Cloud Management Panel.
- To access the Kubernetes cluster from your computer, you will need to install kubectl on your machine.
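After installation, you can verify that kubectl is available. The command below prints only the client version, so it works before you have a cluster to connect to:

```bash
# Verify the kubectl installation; no cluster connection is required.
kubectl version --client
```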
Creating a Kubernetes cluster in Cleura Cloud Management Panel
To get started, navigate to https://cleura.cloud, and in the side panel choose Containers → Gardener. You will see a Gardener page, in which you can create and manage your clusters. Click Create Kubernetes cluster.
In Gardener terminology, a Kubernetes cluster is referred to as a Shoot (as in, new plant growth).
In the form that opens, fill in a name for the new shoot cluster and select a region to reveal the remaining options.
In the Worker Groups section, create at least one worker group. Pay attention to the following settings:
- Machine Type: The flavor your worker nodes will use; this determines the number of CPU cores and RAM allocated to them.
- Volume Size: The amount of local storage allocated to each worker node.
- Autoscaler Min: The minimum number of worker nodes to run in the cluster at any time.
- Autoscaler Max: The maximum number of worker nodes the cluster automatically scales to, in the event that the current number of nodes cannot handle the deployed workload.
- Max Surge: The maximum number of additional nodes that can be created temporarily, above the desired count, during a rolling update of the worker group.
For a test cluster, you can leave all values at their defaults, and click Create at the bottom.
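For reference, the worker group settings in the form correspond to fields on the Shoot resource that Gardener manages for you. The sketch below uses field names from the upstream core.gardener.cloud/v1beta1 API with illustrative values; the exact manifest Cleura generates may differ:

```yaml
# Sketch of a Shoot worker group (illustrative values, not Cleura's exact output).
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: my-shoot               # the cluster name entered in the form
spec:
  provider:
    workers:
      - name: worker-group-1
        machine:
          type: b.2c4gb        # Machine Type (flavor)
        volume:
          size: 50Gi           # Volume Size
        minimum: 1             # Autoscaler Min
        maximum: 3             # Autoscaler Max
        maxSurge: 1            # Max Surge
```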
In the list of clusters, you will see your new Gardener shoot bootstrapping. The icon on the left marks the progress. Creating the cluster can take several minutes.
A note on quotas
Your Gardener worker nodes are subject to the quotas of your Cleura Cloud project. Make sure that your choice of worker node flavor (which determines the virtual cores and RAM allocated to each node), the volume size, and the Autoscaler Max value cannot combine to exceed any quota.
For example, if your project is configured with the default quotas and you select the b.4c16gb flavor for your worker nodes, your cluster can run with at most 3 worker nodes: their total memory footprint would be 3×16 = 48 GiB, just short of the default 50 GiB limit, while a 4th node would push the total memory allocation to 64 GiB and violate your quota.
If necessary, be sure to request a quota increase via our Service Center.
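Since Cleura Cloud is OpenStack-based, you can also inspect your project's current quotas from the command line, assuming you have the OpenStack client configured for your project (the panel shows the same information):

```bash
# Show the quotas for the current project, including cores, RAM, and instances.
# Assumes OpenStack CLI credentials for your Cleura Cloud project are set up.
openstack quota show
```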
Interacting with your cluster
Once your new shoot cluster is operational, you can start interacting with it.
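For example, after downloading the cluster's kubeconfig file from the Gardener page, you can point kubectl at it and deploy a sample application. A minimal sketch follows; the kubeconfig path and application names are illustrative:

```bash
# Use the downloaded kubeconfig (path is an example).
export KUBECONFIG=~/Downloads/kubeconfig-my-shoot.yaml

# Verify that the worker nodes have registered and are Ready.
kubectl get nodes

# Deploy a sample application and expose it.
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80 --type=LoadBalancer

# Watch for the service to receive an external IP.
kubectl get service hello --watch
```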