In Part 1 and Part 2 of this TKG deployment blog series, we saw how to prepare the bootstrap VM and the vSphere environment respectively. If you have missed them, please review those posts before going through this one. In this blog post, we will see how to deploy a TKG management cluster.
Management Cluster:
A management cluster is the first cluster that you deploy when you create a TKG instance. It’s a Kubernetes cluster that performs the role of the primary management and operational center for the TKG instance. This is where Cluster API runs to create the Tanzu Kubernetes workload clusters in which your application workloads run.
How to deploy a management cluster:
There are two ways of deploying a management cluster.
- With the Installer Interface
- With the TKG CLI
The installer interface is the easier option, as it provides a GUI to select and configure the TKG deployment. If the prerequisites are met, running tkg init --ui launches the Tanzu Kubernetes Grid installer interface. We will use the TKG CLI for the deployment here.
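For reference, if your bootstrap VM is headless, you can still use the installer interface by binding it to a local address and tunneling to it over SSH. A minimal sketch, assuming the --bind and --browser options available in this tkg CLI release (verify with tkg init --help):
tkg init --ui --bind 127.0.0.1:8080 --browser none
You can then forward port 8080 from your workstation with ssh -L 8080:localhost:8080 ubuntu@cli-vm and open the UI in a local browser.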
Deploying a management cluster using the TKG CLI
While preparing the bootstrap VM, we ran the tkg get management-cluster command, which created the .tkg folder in the home directory. This directory contains the management cluster configuration file config.yaml and other configuration files. The initial content of config.yaml will be as below.
ubuntu@cli-vm:~/.tkg$ cat config.yaml
cert-manager-timeout: 30m0s
overridesFolder: /home/ubuntu/.tkg/overrides
BASTION_HOST_ENABLED: "true"
NODE_STARTUP_TIMEOUT: 20m
providers:
  - name: cluster-api
    url: /home/ubuntu/.tkg/providers/cluster-api/v0.3.10/core-components.yaml
    type: CoreProvider
  - name: aws
    url: /home/ubuntu/.tkg/providers/infrastructure-aws/v0.5.5/infrastructure-components.yaml
    type: InfrastructureProvider
  - name: vsphere
    url: /home/ubuntu/.tkg/providers/infrastructure-vsphere/v0.7.1/infrastructure-components.yaml
    type: InfrastructureProvider
  - name: azure
    url: /home/ubuntu/.tkg/providers/infrastructure-azure/v0.4.8/infrastructure-components.yaml
    type: InfrastructureProvider
  - name: tkg-service-vsphere
    url: /home/ubuntu/.tkg/providers/infrastructure-tkg-service-vsphere/v1.0.0/unused.yaml
    type: InfrastructureProvider
  - name: kubeadm
    url: /home/ubuntu/.tkg/providers/bootstrap-kubeadm/v0.3.10/bootstrap-components.yaml
    type: BootstrapProvider
  - name: kubeadm
    url: /home/ubuntu/.tkg/providers/control-plane-kubeadm/v0.3.10/control-plane-components.yaml
    type: ControlPlaneProvider
  - name: docker
    url: /home/ubuntu/.tkg/providers/infrastructure-docker/v0.3.10/infrastructure-components.yaml
    type: InfrastructureProvider
images:
  all:
    repository: registry.tkg.vmware.run/cluster-api
  cert-manager:
    repository: registry.tkg.vmware.run/cert-manager
    tag: v0.16.1_vmware.1
release:
  version: v1.2.0
Update the config.yaml file
Before we start the TKG management cluster deployment, we need to add the vSphere environment details to the .tkg/config.yaml file. If you are using the Tanzu Kubernetes Grid installer interface instead of the CLI, this becomes easier, as the config.yaml file gets updated automatically based on the input from the GUI.
ubuntu@cli-vm:~/.tkg$ cat config.yaml
cert-manager-timeout: 30m0s
overridesFolder: /home/ubuntu/.tkg/overrides
NODE_STARTUP_TIMEOUT: 20m
BASTION_HOST_ENABLED: "true"
providers:
  - name: cluster-api
    url: /home/ubuntu/.tkg/providers/cluster-api/v0.3.10/core-components.yaml
    type: CoreProvider
  - name: aws
    url: /home/ubuntu/.tkg/providers/infrastructure-aws/v0.5.5/infrastructure-components.yaml
    type: InfrastructureProvider
  - name: vsphere
    url: /home/ubuntu/.tkg/providers/infrastructure-vsphere/v0.7.1/infrastructure-components.yaml
    type: InfrastructureProvider
  - name: azure
    url: /home/ubuntu/.tkg/providers/infrastructure-azure/v0.4.8/infrastructure-components.yaml
    type: InfrastructureProvider
  - name: tkg-service-vsphere
    url: /home/ubuntu/.tkg/providers/infrastructure-tkg-service-vsphere/v1.0.0/unused.yaml
    type: InfrastructureProvider
  - name: kubeadm
    url: /home/ubuntu/.tkg/providers/bootstrap-kubeadm/v0.3.10/bootstrap-components.yaml
    type: BootstrapProvider
  - name: kubeadm
    url: /home/ubuntu/.tkg/providers/control-plane-kubeadm/v0.3.10/control-plane-components.yaml
    type: ControlPlaneProvider
  - name: docker
    url: /home/ubuntu/.tkg/providers/infrastructure-docker/v0.3.10/infrastructure-components.yaml
    type: InfrastructureProvider
images:
  all:
    repository: registry.tkg.vmware.run/cluster-api
  cert-manager:
    repository: registry.tkg.vmware.run/cert-manager
    tag: v0.16.1_vmware.1
release:
  version: v1.2.0
VSPHERE_SERVER: vcsa-01a.corp.local
VSPHERE_DATACENTER: /RegionA01
VSPHERE_NETWORK: DSwitch-Management
VSPHERE_CONTROL_PLANE_DISK_GIB: "20"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "2"
VSPHERE_CONTROL_PLANE_MEM_MIB: "2048"
VSPHERE_WORKER_MEM_MIB: "2048"
VSPHERE_PASSWORD: <encoded:Vk13YXJlMSE=>
VSPHERE_DATASTORE: /RegionA01/datastore/map-vol
VSPHERE_USERNAME: administrator@vsphere.local
VSPHERE_WORKER_DISK_GIB: "20"
VSPHERE_WORKER_NUM_CPUS: "2"
VSPHERE_HA_PROXY_DISK_GIB: "20"
SERVICE_CIDR: 100.64.0.0/13
CLUSTER_CIDR: 100.96.0.0/11
VSPHERE_RESOURCE_POOL: /RegionA01/host/RegionA01-MGMT/Resources/TKG-Mgmt
VSPHERE_FOLDER: /RegionA01/vm/TKG-Mgmt
# VSPHERE_TEMPLATE will be autodetected based on the kubernetes version. Please use VSPHERE_TEMPLATE only to override this behavior
VSPHERE_TEMPLATE: /RegionA01/vm/photon-3-kube-v1.19.1+vmware.2
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC5KYNeWQgVHrDHaEhBCLF1vIR0OAtUIJwjKYkY4E/5HhEu8fPFvBOIHPFTPrtkX4vzSiMFKE5WheKGQIpW3HHlRbmRPc9oe6nNKlsUfFAaJ7OKF146Gjpb7lWs/C34mjdtxSb1D/YcHSyqK5mxhyHAXPp8lcgi/55uUGxwiKDA6gQ+UA/xtrKk60s6MvYMzOxJiUQbWYr3MJ3NSz6PJVXMvlsAac6U+vX4U9eJP6/C1YDyBaiT96cb/B9TkvpLrhPwqMZdYVomVHsdY7YriJB93MRinKaDJor1aIE/HMsMpbgFCNA7mma9x5HS/57Imw== admin@corp.local
For more details on how to update the configuration parameters in the config.yaml file, refer to the VMware TKG documentation.
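Note that the VSPHERE_PASSWORD value above is stored base64 encoded inside the <encoded:...> wrapper. A quick way to check what an encoded value corresponds to, assuming the standard base64 utility on the bootstrap VM:
echo -n 'VMware1!' | base64        # prints Vk13YXJlMSE=
echo 'Vk13YXJlMSE=' | base64 -d    # prints VMware1!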
Deploy management cluster
tkg init --infrastructure vsphere --vsphere-controlplane-endpoint-ip 192.168.100.75 --plan dev
The tkg init logs start updating at /tmp/tkg-20210129T224833100162020.log
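You can follow the detailed progress from a second SSH session by tailing this log file:
tail -f /tmp/tkg-20210129T224833100162020.log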
As soon as you run the above tkg init command, you will see a Docker container created on the bootstrap VM. This is the kind bootstrap cluster coming up as a Docker container.
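You can confirm this from the bootstrap VM with docker ps (the exact container name is generated per run; in my experience with TKG 1.2 it typically starts with tkg-kind):
docker ps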

The progress of the deployment can be seen in the same session where you ran the tkg init command, as in the snippet below.
ubuntu@cli-vm:~$ tkg init --infrastructure vsphere --vsphere-controlplane-endpoint-ip 192.168.100.75 --plan dev
Logs of the command execution can also be found at: /tmp/tkg-20210129T224833100162020.log
Validating the pre-requisites...
vSphere 7.0 Environment Detected.
You have connected to a vSphere 7.0 environment which does not have vSphere with Tanzu enabled. vSphere with Tanzu includes
an integrated Tanzu Kubernetes Grid Service which turns a vSphere cluster into a platform for running Kubernetes workloads in dedicated
resource pools. Configuring Tanzu Kubernetes Grid Service is done through vSphere HTML5 client.
Tanzu Kubernetes Grid Service is the preferred way to consume Tanzu Kubernetes Grid in vSphere 7.0 environments. Alternatively you may
deploy a non-integrated Tanzu Kubernetes Grid instance on vSphere 7.0.
Do you want to configure vSphere with Tanzu? [y/N]: n
Would you like to deploy a non-integrated Tanzu Kubernetes Grid management cluster on vSphere 7.0? [y/N]: y
Deploying TKG management cluster on vSphere 7.0 ...
Setting up management cluster...
Validating configuration...
Using infrastructure provider vsphere:v0.7.1
Generating cluster configuration...
Setting up bootstrapper...
Bootstrapper created. Kubeconfig: /home/ubuntu/.kube-tkg/tmp/config_6dE79GcH
From the above init log output, we see that it's creating the bootstrapper. Make a note of the kubeconfig file (config_6dE79GcH in my case) that is created for the kind cluster; using this, we can connect to the kind cluster and see the Kubernetes resources, as in the image below.

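For example, to list all the pods in the kind cluster (using the kubeconfig path from the log above; your generated file name will differ):
kubectl --kubeconfig /home/ubuntu/.kube-tkg/tmp/config_6dE79GcH get pods -A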
If you see any of the pods in the kind cluster failing, check the logs of the pod to troubleshoot the failure, as in the sketch below. The kind cluster should be in a healthy state for the management cluster creation to complete successfully.
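A minimal troubleshooting sketch, assuming a failing pod in the capi-system namespace (substitute the pod name and namespace reported by the get pods output):
kubectl --kubeconfig /home/ubuntu/.kube-tkg/tmp/config_6dE79GcH -n capi-system describe pod <pod-name>
kubectl --kubeconfig /home/ubuntu/.kube-tkg/tmp/config_6dE79GcH -n capi-system logs <pod-name>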
Bootstrapper created. Kubeconfig: /home/ubuntu/.kube-tkg/tmp/config_6dE79GcH
Installing providers on bootstrapper...
Fetching providers
Installing cert-manager Version="v0.16.1"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.10" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.10" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.10" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.1" TargetNamespace="capv-system"
Start creating management cluster...
Saving management cluster kubeconfig into /home/ubuntu/.kube/config
You can see that the management cluster creation has now started. If you go to vCenter now, you will see the management cluster VMs getting created.

From the init logs above, you can also see the kubeconfig file created at /home/ubuntu/.kube/config for the management cluster that is being created. You can run kubectl commands using this kubeconfig file to connect and check the resources getting created for the management cluster.

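For example, to check the management cluster nodes and pods as they come up:
kubectl --kubeconfig /home/ubuntu/.kube/config get nodes
kubectl --kubeconfig /home/ubuntu/.kube/config get pods -A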
Let's look at the tkg init logs again.
Saving management cluster kubeconfig into /home/ubuntu/.kube/config
Installing providers on management cluster...
Fetching providers
Installing cert-manager Version="v0.16.1"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.10" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.10" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.10" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.1" TargetNamespace="capv-system"
Waiting for the management cluster to get ready for move...
Waiting for addons installation...
Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Context set for management cluster tkg-mgmt-vsphere-20210129224846 as 'tkg-mgmt-vsphere-20210129224846-admin@tkg-mgmt-vsphere-20210129224846'.
Management cluster created!
You can now create your first workload cluster by running the following:
tkg create cluster [name] --kubernetes-version=[version] --plan=[plan]
We can see that the management cluster creation has completed successfully. In the below screenshot, we see both the control plane node and the worker node that were created.

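Since tkg init also set the kubectl context for the new management cluster (see the log line above), you can verify and switch to it with standard kubectl commands; the context name below is the one from my run:
kubectl config get-contexts
kubectl config use-context tkg-mgmt-vsphere-20210129224846-admin@tkg-mgmt-vsphere-20210129224846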
You can run the below command to see the deployed TKG management cluster in the vSphere environment.
tkg get management-cluster

You can also run the below command to list the clusters, including the management cluster.
tkg get clusters --include-management-cluster

This completes the deployment of the TKG management cluster. In the next part, we will see how to deploy a workload cluster using the management cluster that we have created here.