Get started with VPP networking
Install Calico and enable the tech preview of the VPP dataplane.
Warning! The VPP dataplane is a tech preview and should not be used in production clusters. It has had limited testing and may contain bugs (please report these on the Calico Users Slack or GitHub). In addition, it does not support all the features of Calico and is currently missing some security features such as Host Endpoint policies.
The VPP dataplane mode has several advantages over standard Linux networking pipeline mode:
- Scales to higher throughput, especially with WireGuard encryption enabled
- Further improves encryption performance with IPsec
- Native support for Kubernetes services without needing kube-proxy, which:
  - Reduces first-packet latency for packets to services
  - Preserves external client source IP addresses all the way to the pod
The VPP dataplane is entirely compatible with the other Calico dataplanes, meaning you can have a cluster with VPP-enabled nodes along with regular nodes. This makes it easy to migrate a cluster from Linux or eBPF networking to VPP networking.
In the future, the VPP dataplane will offer additional features for network-intensive applications, such as providing memif userspace packet interfaces to the pods (instead of regular Linux network devices).
Trying out the tech preview will give you a taste of these benefits and an opportunity to give feedback to the VPP dataplane team.
This how-to guide uses the following Calico features:
- VPP dataplane
The Vector Packet Processor (VPP) is a high-performance, open-source userspace network dataplane written in C, developed under the fd.io umbrella. It supports many standard networking features (L2, L3 routing, NAT, encapsulations), and is easily extensible using plugins. The VPP dataplane uses plugins to efficiently implement Kubernetes services load balancing and Calico policies.
This guide details two ways to install Calico with the VPP dataplane:
- On a managed EKS cluster. This is the option that requires the least configuration.
- On any Kubernetes cluster
Install Calico with the VPP dataplane on an EKS cluster
For these instructions, we will use eksctl to provision the cluster. However, you can use any of the methods in Getting Started with Amazon EKS.
Before you get started, make sure you have downloaded and configured the necessary prerequisites.
Provision and configure the cluster
First, create an Amazon EKS cluster without any nodes.
eksctl create cluster --name my-calico-cluster --without-nodegroup
Since this cluster will use Calico for networking, you must delete the aws-node daemon set to disable AWS VPC networking for pods.
kubectl delete daemonset -n kube-system aws-node
Now that you have a cluster configured, you can install Calico.
kubectl apply -f https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v0.14.0-calicov3.19.0/yaml/generated/calico-vpp-eks.yaml
Alternatively, install using the optional DPDK driver for better networking performance:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v0.14.0-calicov3.19.0/yaml/generated/calico-vpp-eks-dpdk.yaml
Finally, add nodes to the cluster.
eksctl create nodegroup --cluster my-calico-cluster --node-type t3.medium --node-ami auto --max-pods-per-node 100
Tip: The --max-pods-per-node option above ensures that EKS does not limit the number of pods based on node type. For the full set of node group options, see
eksctl create nodegroup --help.
Install Calico with the VPP dataplane on any Kubernetes cluster
The VPP dataplane has the following requirements:
- A blank Kubernetes cluster, where no CNI was ever configured.
- These base requirements, except those related to the management of
Optional: For some hardware, the following hugepages configuration may enable VPP to use more efficient drivers:
- At least 128 x 2MB hugepages are available (
cat /proc/meminfo | grep HugePages_Free)
- The vfio-pci (vfio_pci on centos) or uio_pci_generic kernel module is loaded. For example:
echo "vfio-pci" > /etc/modules-load.d/95-vpp.conf
modprobe vfio-pci
echo "vm.nr_hugepages = 128" >> /etc/sysctl.conf
sysctl -p
# restart kubelet to take the changes into account; you may need to use a different command depending on how kubelet was installed
systemctl restart kubelet
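After applying the settings above, a quick sanity check can confirm they took effect. The script below is a sketch, not part of the official setup; it reads the hugepage count from /proc/meminfo and checks whether the vfio_pci module is loaded:

```shell
# Sketch: verify the optional hugepage / vfio-pci prerequisites took effect.
free_pages=$(grep HugePages_Free /proc/meminfo | awk '{print $2}')
echo "free 2MB hugepages: ${free_pages:-0}"
# lsmod may be absent or elsewhere on minimal systems, hence the guard
if lsmod 2>/dev/null | grep -q '^vfio_pci'; then
  echo "vfio-pci loaded"
else
  echo "vfio-pci not loaded"
fi
```

You want the hugepage count to be at least 128 and the module reported as loaded before picking a driver that depends on them.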
Configure nodes for VPP
Start by getting the appropriate yaml manifest for the Calico VPP dataplane:
# If you have configured hugepages on your machines
curl -o calico-vpp.yaml https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v0.14.0-calicov3.19.0/yaml/generated/calico-vpp.yaml

# If not, or if you're unsure
curl -o calico-vpp.yaml https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v0.14.0-calicov3.19.0/yaml/generated/calico-vpp-nohuge.yaml
Then configure these parameters in the calico-vpp-config ConfigMap in the yaml manifest.
- vpp_dataplane_interface is the primary interface that VPP will use. It must be the name of a Linux interface configured with an address, and that address must be the node address in Kubernetes (
kubectl get nodes -o wide).
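To map the node address back to the Linux interface that carries it, a one-liner like the following can help. This is a sketch; NODE_IP is a hypothetical placeholder for the address shown by kubectl get nodes -o wide:

```shell
# Sketch: print the interface that carries a given node IP.
# NODE_IP is a hypothetical example value; substitute your node's address.
NODE_IP=10.0.0.5
ip -o -4 addr show | awk -v ip="$NODE_IP" '$4 ~ ip"/" {print $2}'
```

The printed interface name is what goes into vpp_dataplane_interface.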
- service_prefix is the Kubernetes service CIDR. You can retrieve it by running:
kubectl cluster-info dump | grep -m 1 service-cluster-ip-range
If this command doesn't return anything, you can leave the default value of 10.96.0.0/12.
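The flag line returned by that command embeds the CIDR inside quotes; a small sed expression can pull out just the value. The sample line below is a stand-in for real kubectl output:

```shell
# Sketch: extract the CIDR from a (stand-in) cluster-info dump line.
line='"--service-cluster-ip-range=10.96.0.0/12",'
echo "$line" | sed -n 's/.*--service-cluster-ip-range=\([^"]*\).*/\1/p'
# prints 10.96.0.0/12
```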
- vpp_uplink_driver configures how VPP grabs the physical interface. Available values are:
  - "": automatically select and try drivers based on available resources, starting with the fastest
  - avf: use the native AVF driver
  - virtio: use the native virtio driver (requires hugepages)
  - af_xdp: use an AF_XDP socket to drive the interface (requires kernel 5.4 or newer)
  - af_packet: use an AF_PACKET socket to drive the interface (slow, but works everywhere)
  - none: do not configure connectivity automatically. This can be used when configuring the interface manually
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: calico-vpp-dataplane
data:
  service_prefix: 10.96.0.0/12
  vpp_dataplane_interface: eth1
  vpp_uplink_driver: ""
...
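If you prefer to set these values non-interactively, a couple of sed substitutions over the downloaded manifest work. This is a sketch: a demo file stands in for calico-vpp.yaml, and ens4 and 10.100.0.0/16 are hypothetical values chosen for illustration:

```shell
# Sketch: patch the ConfigMap values in the manifest with sed.
# Demo file stands in for calico-vpp.yaml; ens4 / 10.100.0.0/16 are hypothetical.
cat > /tmp/vpp-config-demo.yaml <<'EOF'
  service_prefix: 10.96.0.0/12
  vpp_dataplane_interface: eth1
  vpp_uplink_driver: ""
EOF
sed -i \
  -e 's#service_prefix: .*#service_prefix: 10.100.0.0/16#' \
  -e 's/vpp_dataplane_interface: .*/vpp_dataplane_interface: ens4/' \
  /tmp/vpp-config-demo.yaml
cat /tmp/vpp-config-demo.yaml
```

Run the same sed expressions against your real calico-vpp.yaml with your own interface name and service CIDR.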
Note: Calico uses 192.168.0.0/16 as the IP range for the pods by default. If this IP range is already used elsewhere in your environment, you should further customize the manifest to change it.
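One hedged way to change the pod range is a global replace over the manifest. In the sketch below a one-line demo file stands in for calico-vpp.yaml, and 10.244.0.0/16 is a hypothetical replacement range:

```shell
# Sketch: replace the default pod CIDR throughout the manifest.
# Demo file stands in for calico-vpp.yaml; 10.244.0.0/16 is hypothetical.
printf 'cidr: 192.168.0.0/16\n' > /tmp/pod-cidr-demo.yaml
sed -i 's#192\.168\.0\.0/16#10.244.0.0/16#g' /tmp/pod-cidr-demo.yaml
cat /tmp/pod-cidr-demo.yaml
# prints: cidr: 10.244.0.0/16
```

After editing, double-check every occurrence in the real manifest, since the CIDR can appear in more than one place.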
Apply the configuration
To apply the configuration, run:
kubectl apply -f calico-vpp.yaml
This will create all the resources necessary to connect your pods through VPP and configure Calico on the nodes.
- Install and configure calicoctl to manage and monitor your cluster.