Changing IP pools
About changing IP pools
When using Calico IPAM, each workload is assigned an address from one of the configured IP pools. You may want to change the IP pools of a running cluster for one of the following reasons:
- To move to a larger CIDR that can accommodate more workloads.
- To move off of a CIDR that was used accidentally.
Purpose of this page
This page provides guidance on how to change from one IP pool to another on a running cluster.
This guide only applies if you are using Calico IPAM.
While Calico supports changing IP pools, not all orchestrators do. Be sure to consult the documentation of the orchestrator you are using to ensure it supports changing the workload CIDR.
For example, in Kubernetes, the cluster CIDR arguments passed to the control-plane components (such as --cluster-cidr on kube-controller-manager and kube-proxy) must be equal to, or contain, the Calico IP pool CIDRs.
OpenShift does not support changing the pod network CIDR (as per their documentation on the osm_cluster_network_cidr configuration field).
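For the Kubernetes case, a quick way to sanity-check that your pools fit inside the cluster CIDR is with Python's standard ipaddress module. The CIDR values below are hypothetical placeholders:

```python
from ipaddress import ip_network

# Hypothetical values: the cluster CIDR configured in the orchestrator
# and the Calico IP pool CIDRs in use.
cluster_cidr = ip_network("10.0.0.0/16")
pool_cidrs = [ip_network("10.0.0.0/18"), ip_network("10.0.128.0/18")]

# Each pool must be equal to, or contained within, the cluster CIDR.
for pool in pool_cidrs:
    assert pool.subnet_of(cluster_cidr), f"{pool} is outside {cluster_cidr}"
print("all pools fit inside", cluster_cidr)
```

If any pool falls outside the cluster CIDR, fix the orchestrator configuration (or choose a different pool CIDR) before proceeding.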
Application availability impact
This process will require the recreation of all Calico-networked workloads, which will have some impact on the availability of your applications.
Consequences of deleting an IP pool without following this migration procedure
Removing an IP pool without following this migration procedure can cause network connectivity disruptions in any running workloads with addresses from that IP pool. Namely:
- If IP-in-IP was enabled on the IP pool, those workloads will no longer have their traffic encapsulated.
- If nat-outgoing was enabled on the IP pool, those workloads will no longer have their traffic NAT’d.
- If using Calico BGP routing, routes to pods will no longer be aggregated.
Changing an IP pool
The basic process is as follows:
- Add a new IP pool.
- Disable the old IP pool. This prevents new IPAM allocations from the old IP pool without affecting the networking of existing workloads.
- Recreate all existing workloads that were assigned an address from the old IP pool.
- Remove the old IP pool.
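The four steps above can be sketched as a toy model of the pool state. This is only an illustration of the observable behavior (new allocations come only from enabled pools, and existing allocations survive until the workload is recreated), not Calico's actual IPAM logic:

```python
# Toy model: pool names match the worked example later in this page.
pools = {
    "default-ipv4-ippool": {"cidr": "192.0.0.0/16", "disabled": False},
    "new-pool": {"cidr": "10.0.0.0/16", "disabled": False},
}

def allocate():
    # Pick the first enabled pool (real IPAM is more sophisticated).
    for name, pool in pools.items():
        if not pool["disabled"]:
            return name
    raise RuntimeError("no enabled IP pools")

# Step 1 added new-pool above; step 2 disables the old pool.
pools["default-ipv4-ippool"]["disabled"] = True
# Step 3: recreated workloads now get addresses from the new pool.
assert allocate() == "new-pool"
# Step 4: once no workloads use it, the old pool can be removed.
del pools["default-ipv4-ippool"]
```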
In this example, we created a cluster with kubeadm. We wanted the pods to use IPs in the range 10.0.0.0/16, so we set --pod-network-cidr=10.0.0.0/16 when running kubeadm init. However, we installed Calico without setting the default IP pool to match. Running calicoctl get ippool -o wide shows that Calico created its default IP pool of 192.0.0.0/16.
Based on the output of calicoctl get wep --all-namespaces, we see that kube-dns has already been allocated an address from the wrong range.
Let’s get started.
Add a new IP pool:
```
calicoctl create -f -<<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: new-pool
spec:
  cidr: 10.0.0.0/16
  ipipMode: Always
  natOutgoing: true
EOF
```
We should now have two enabled IP pools, which we can see when running calicoctl get ippool -o wide.
Disable the old IP pool.
First save the IP pool definition to disk:
calicoctl get ippool -o yaml > pool.yaml
pool.yaml should look like this:
```
apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    name: default-ipv4-ippool
  spec:
    cidr: 192.0.0.0/16
    ipipMode: Always
    natOutgoing: true
- apiVersion: projectcalico.org/v3
  kind: IPPool
  metadata:
    name: new-pool
  spec:
    cidr: 10.0.0.0/16
    ipipMode: Always
    natOutgoing: true
```
Note: Some extra cluster-specific information has been redacted to improve readability.
Edit the file, adding disabled: true to the IP pool named default-ipv4-ippool:
```
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.0.0.0/16
  ipipMode: Always
  natOutgoing: true
  disabled: true
```
Apply the changes:
calicoctl apply -f pool.yaml
We should see the change reflected in the output of calicoctl get ippool -o wide.
Recreate all existing workloads using IPs from the disabled pool. In this example, kube-dns is the only workload networked by Calico:
kubectl delete pod -n kube-system kube-dns-6f4fd4bdf-8q7zp
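In larger clusters, you would need to find every workload still holding an address from the disabled pool. A sketch of that check using Python's standard ipaddress module; the pod names and IPs below are hypothetical, standing in for `calicoctl get wep --all-namespaces` output:

```python
from ipaddress import ip_address, ip_network

# The old (now disabled) pool from this example.
old_pool = ip_network("192.0.0.0/16")

# Hypothetical (pod, IP) pairs, as read from the wep output.
workloads = [
    ("kube-dns-6f4fd4bdf-8q7zp", "192.0.49.130"),
    ("frontend-7d4b9bdf9-x2x9k", "10.0.24.8"),
]

# Any pod still holding an address from the old pool must be recreated.
to_recreate = [pod for pod, ip in workloads if ip_address(ip) in old_pool]
print(to_recreate)
```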
Check that the new workload now has an address in the new IP pool by running calicoctl get wep --all-namespaces.
Delete the old IP pool:
calicoctl delete pool default-ipv4-ippool
For more information on the structure of the IP pool resource, see the IP pools reference.