Changing user cluster configurations

This page is for platform administrators.

This page describes how to modify your user cluster configuration by using Anthos Management Center Console or the API.

Update user clusters by using Management Center Console

You can update your user clusters by adding or removing machines as nodes. In version 1.8.1 and later, you can modify the control plane nodes as well as the cluster worker nodes.

  1. In Management Center Console, open the Clusters menu.
  2. In the list of clusters, click the cluster that you want to edit.
  3. Click Edit.
  4. Click Node pool details.
  5. In the Control plane nodes list, select the machines to run the system workload.
  6. In the Worker nodes list, select the machines for the cluster to run on.
  7. Click Update.

The newly added worker node machines are installed as part of the cluster, and removed nodes are drained of their workloads and then removed from the cluster. Removed worker nodes can be added to other clusters if needed, or reimaged.

Update user clusters by using the API

Update control plane nodes

  1. Get the existing Cluster configuration.

    kubectl --kubeconfig ADMIN_KUBECONFIG get clusters.baremetal.cluster.gke.io USER_CLUSTER_NAME -n cluster-USER_CLUSTER_NAME -o yaml > USER_CLUSTER_NAME.yaml
    
  2. In USER_CLUSTER_NAME.yaml, modify spec.controlPlane.nodePoolSpec.nodes to add new nodes or remove existing ones.

    ...
    spec:
      controlPlane:
        nodePoolSpec:
          nodes:
          - address: MACHINE_1_IP
          - address: MACHINE_2_IP
    ...
    
  3. Apply the changes.

    kubectl --kubeconfig ADMIN_KUBECONFIG apply -f USER_CLUSTER_NAME.yaml
    

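The edit in step 2 can also be scripted. The sketch below, using hypothetical file names and node IP addresses, removes one control plane node entry from the saved manifest and checks the result before the apply step; the sample manifest is created inline so the snippet is self-contained.

```shell
# Sketch of step 2 done non-interactively. The file name and the node
# IP addresses below are hypothetical stand-ins for your own values.
CLUSTER_YAML=user-cluster-1.yaml
NODE_TO_REMOVE=10.200.0.5

# Example of what the saved manifest's control plane section looks like.
cat > "${CLUSTER_YAML}" <<'EOF'
spec:
  controlPlane:
    nodePoolSpec:
      nodes:
      - address: 10.200.0.4
      - address: 10.200.0.5
EOF

# Drop the matching "- address:" entry from the nodes list.
sed -i "/- address: ${NODE_TO_REMOVE}\$/d" "${CLUSTER_YAML}"

# Sanity check before applying: the removed node must no longer appear.
! grep -q "${NODE_TO_REMOVE}" "${CLUSTER_YAML}" && echo "node removed"
# kubectl --kubeconfig ADMIN_KUBECONFIG apply -f "${CLUSTER_YAML}"
```

The apply command is left commented out because it targets a live admin cluster; run it only after the local check passes.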
Update worker nodes

  1. Get the existing NodePool configuration for the worker node pool.

    kubectl --kubeconfig ADMIN_KUBECONFIG get nodepools.baremetal.cluster.gke.io NODEPOOL_NAME -n cluster-USER_CLUSTER_NAME -o yaml > NODEPOOL_NAME.yaml
    
  2. In NODEPOOL_NAME.yaml, modify spec.nodes to add new nodes or remove existing ones.

    ...
    spec:
      nodes:
      - address: MACHINE_1_IP
      - address: MACHINE_2_IP
    ...
    
  3. Apply the changes.

    kubectl --kubeconfig ADMIN_KUBECONFIG apply -f NODEPOOL_NAME.yaml
    

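The reverse operation, adding a worker node to a saved NodePool manifest, can be sketched the same way. The file name and IP addresses below are hypothetical; because spec.nodes is the last key in the saved fragment, the new entry can simply be appended with the matching two-space indentation.

```shell
# Hypothetical sketch of step 2 for a worker pool: append a new node
# entry to spec.nodes in the saved NodePool manifest.
NODEPOOL_YAML=nodepool-1.yaml
NEW_NODE=10.200.0.7

# Example of what the saved manifest's nodes section looks like.
cat > "${NODEPOOL_YAML}" <<'EOF'
spec:
  nodes:
  - address: 10.200.0.6
EOF

# Append the new entry, matching the indentation of the existing list.
printf '  - address: %s\n' "${NEW_NODE}" >> "${NODEPOOL_YAML}"
# kubectl --kubeconfig ADMIN_KUBECONFIG apply -f "${NODEPOOL_YAML}"
```

For manifests where spec.nodes is not the last key, edit the list in place (for example with an editor or a YAML-aware tool) rather than appending.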
Adding node pools to user clusters

By default, one worker node pool is mapped to a user cluster. You can use the API to add more node pools to a user cluster.

  1. Create a new node pool YAML file, for example np2.yaml.

    apiVersion: baremetal.cluster.gke.io/v1
    kind: NodePool
    metadata:
      name: NODEPOOL_NAME
      namespace: cluster-USER_CLUSTER_NAME
    spec:
      clusterName: USER_CLUSTER_NAME
      nodes:
      - address: MACHINE_1_IP
      - address: MACHINE_2_IP
    

    Replace the following:

    • NODEPOOL_NAME: the name of the new node pool, for example nodepool-2.
    • USER_CLUSTER_NAME: the name of the user cluster that you want to create the node pool for.
    • MACHINE_1_IP, MACHINE_2_IP: the IP addresses of the machines. You can specify one or more machine IP addresses.
  2. Apply your node pool configuration to the admin cluster:

    kubectl --kubeconfig ADMIN_KUBECONFIG apply -f np2.yaml
    

    Replace ADMIN_KUBECONFIG with the path to the admin cluster kubeconfig file.
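The manifest in step 1 can also be generated from shell variables rather than edited by hand, which avoids indentation mistakes such as a mis-nested clusterName. All values in this sketch are hypothetical placeholders.

```shell
# Sketch: generate the node pool manifest from shell variables.
# Every value below is a hypothetical placeholder for your own values.
NODEPOOL_NAME=nodepool-2
USER_CLUSTER_NAME=user-cluster-1
MACHINE_1_IP=10.200.0.10
MACHINE_2_IP=10.200.0.11

cat > np2.yaml <<EOF
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: ${NODEPOOL_NAME}
  namespace: cluster-${USER_CLUSTER_NAME}
spec:
  clusterName: ${USER_CLUSTER_NAME}
  nodes:
  - address: ${MACHINE_1_IP}
  - address: ${MACHINE_2_IP}
EOF
# kubectl --kubeconfig ADMIN_KUBECONFIG apply -f np2.yaml
```

The heredoc expands the variables at generation time, so the same snippet can be reused for additional node pools by changing the variables.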