Red Hat Enterprise Linux packaged install
These instructions will take you through a first-time install of Calico. If you are upgrading an existing system, please see Calico on OpenStack upgrade instead.
There are three sections to the install: installing etcd, adding Calico to OpenStack control nodes, and adding Calico to OpenStack compute nodes. Follow the Common steps on each node before moving on to the specific instructions in the control and compute sections. If you want to create a combined control and compute node, work through all three sections.
Before you begin
- Ensure that you meet the requirements.
- Confirm that you have SSH access to and root privileges on one or more Red Hat Enterprise Linux (RHEL) hosts.
- Make sure you have working DNS between the RHEL hosts (use /etc/hosts if you don’t have DNS on your network).
- Install OpenStack with Neutron and ML2 networking on the RHEL hosts.
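If you don’t have DNS and rely on /etc/hosts instead, one entry per host on every machine is enough. A minimal sketch, with hypothetical names and addresses:

```
192.0.2.10   control-1
192.0.2.20   compute-1
192.0.2.21   compute-2
```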
Common steps
Some steps need to be taken on all machines being installed with Calico. These steps are detailed in this section.
Add the EPEL repository. You may have already added this to install OpenStack.
Configure the Calico repository:
cat > /etc/yum.repos.d/calico.repo <<EOF
[calico]
name=Calico Repository
baseurl=https://binaries.projectcalico.org/rpm/calico-3.5/
enabled=1
skip_if_unavailable=0
gpgcheck=1
gpgkey=https://binaries.projectcalico.org/rpm/calico-3.5/key
priority=97
EOF
Etcd install
Calico operation requires an etcd v3 key/value store, which may be installed on a single machine or as a cluster. For production you will likely want multiple nodes for greater performance and reliability; please refer to the upstream etcd docs for detailed advice and setup. Here we present a sample recipe for a single-node cluster.
Install etcd, and ensure that it is initially not running:
yum install -y etcd
systemctl stop etcd
Place the following in /etc/etcd/etcd.conf, replacing <hostname>, <public_ip>, and <uuid> with the appropriate values for the machine.

ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME=<hostname>
ETCD_ADVERTISE_CLIENT_URLS="http://<public_ip>:2379,http://<public_ip>:4001"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://<public_ip>:2380"
ETCD_INITIAL_CLUSTER="<hostname>=http://<public_ip>:2380"
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER_TOKEN=<uuid>
You can obtain a <uuid> by running the uuidgen command, which returns a freshly generated UUID value. If the uuidgen tool is not installed, run yum install -y util-linux to install it.
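The substitution steps above can be sketched as a short shell script. The placeholder host name, IP address, and staging path below are assumptions; review the rendered file before copying it into place (conventionally /etc/etcd/etcd.conf on RHEL).

```shell
# Sketch: render the etcd config from placeholder values (hypothetical host/IP).
# Writes to a staging file in the current directory, not to /etc.
HOST=etcd-host-1             # replace with this machine's hostname
PUBLIC_IP=192.0.2.10         # replace with this machine's public IP
# Use the kernel's UUID generator if available, else fall back to uuidgen.
TOKEN=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || uuidgen)

cat > ./etcd.conf.staged <<EOF
ETCD_DATA_DIR=/var/lib/etcd
ETCD_NAME=${HOST}
ETCD_ADVERTISE_CLIENT_URLS="http://${PUBLIC_IP}:2379,http://${PUBLIC_IP}:4001"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${PUBLIC_IP}:2380"
ETCD_INITIAL_CLUSTER="${HOST}=http://${PUBLIC_IP}:2380"
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER_TOKEN=${TOKEN}
EOF

grep -c '^ETCD_' ./etcd.conf.staged
```

Every placeholder is expanded once at render time, so the staged file can be inspected with a plain diff before it replaces the live config.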
Launch etcd and set it to restart after a reboot:
systemctl start etcd
systemctl enable etcd
Control node install
On each control node, perform the following steps:
Delete all configured OpenStack state, in particular any instances, routers, subnets and networks (in that order) created by the install process referenced above. You can do this using the web dashboard or at the command line.
Tip: The Admin and Project sections of the web dashboard both have subsections for networks and routers. Some networks may need to be deleted from the Admin section.
Important: The Calico install will fail if incompatible state is left around.
Edit the /etc/neutron/neutron.conf file. In the [DEFAULT] section, find the line beginning with core_plugin and change it to read core_plugin = calico. Also remove any existing setting for service_plugins.

Install the calico-control package:

yum install -y calico-control
Restart the neutron server process:
service neutron-server restart
Compute node install
On each compute node, perform the following steps:
Open /etc/nova/nova.conf and remove the line from the [DEFAULT] section that reads:

linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver

Remove any lines from the [neutron] section that set service_metadata_proxy to True, if there are any. Additionally, if there is a line setting metadata_proxy_shared_secret, comment that line out as well.
Restart nova compute.
service openstack-nova-compute restart
If this node is also a controller, additionally restart nova-api.
service openstack-nova-api restart
If they’re running, stop the Open vSwitch services.
service neutron-openvswitch-agent stop
service openvswitch stop
Then, prevent the services running if you reboot.
chkconfig openvswitch off
chkconfig neutron-openvswitch-agent off
Then, on your control node, run neutron agent-list to find the agents that you just stopped.
Delete each of those agents with the following command on your control node, replacing <agent-id> with the ID of the agent.
neutron agent-delete <agent-id>
Install Neutron infrastructure code on the compute host.
yum install -y openstack-neutron
Edit /etc/neutron/neutron.conf. In the [oslo_concurrency] section, ensure that the lock_path variable is uncommented and set as follows.

lock_path = $state_path/lock

Add a [calico] section with the following content, where <ip> is the IP address of the etcd server.

[calico]
etcd_host = <ip>
Stop and disable the Neutron DHCP agent, and install the Calico DHCP agent (which uses etcd, allowing it to scale to higher numbers of hosts).
service neutron-dhcp-agent stop
chkconfig neutron-dhcp-agent off
yum install -y calico-dhcp-agent
Stop and disable any other routing/bridging agents such as the L3 routing agent or the Linux bridging agent. These conflict with Calico.
service neutron-l3-agent stop
chkconfig neutron-l3-agent off
Repeat for bridging agent and any others.
If this node is not a controller, install and start the Nova Metadata API. This step is not required on combined compute and controller nodes.
yum install -y openstack-nova-api
service openstack-nova-metadata-api restart
chkconfig openstack-nova-metadata-api on
Install the BIRD BGP client.
yum install -y bird bird6
Install the calico-compute package:

yum install -y calico-compute
Configure BIRD. By default Calico assumes that you will deploy a route reflector to avoid the need for a full BGP mesh. To this end, it includes configuration scripts to prepare a BIRD config file with a single peering to the route reflector. If that’s correct for your network, you can run either or both of the following commands.
For IPv4 connectivity between compute hosts:
calico-gen-bird-conf.sh <compute_node_ip> <route_reflector_ip> <bgp_as_number>
And/or for IPv6 connectivity between compute hosts:
calico-gen-bird6-conf.sh <compute_node_ipv4> <compute_node_ipv6> <route_reflector_ipv6> <bgp_as_number>
If you are configuring a full BGP mesh, you need to handle the BGP configuration appropriately on each compute host. The scripts above can be used to generate a sample configuration for BIRD by replacing <route_reflector_ip> with the IP of one other compute host. This generates the configuration for a single peer connection, which you can duplicate and update for each compute host in your mesh.
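As a sketch of the full-mesh case, the loop below emits one minimal BIRD 1.x peering stanza per other compute host. The peer addresses and AS number are hypothetical placeholders; in practice you would adapt the output of the calico-gen-bird-conf.sh script rather than hand-writing stanzas.

```shell
# Sketch: emit one BGP protocol stanza per mesh peer (BIRD 1.x syntax).
# All addresses and the AS number below are illustrative placeholders.
AS_NUMBER=64511
PEERS="192.0.2.11 192.0.2.12 192.0.2.13"   # the other compute hosts

: > ./bird-mesh-peers.conf
for peer in $PEERS; do
  # BIRD protocol names cannot contain dots, so derive a safe name.
  name=$(echo "$peer" | tr . _)
  cat >> ./bird-mesh-peers.conf <<EOF
protocol bgp mesh_${name} {
  local as ${AS_NUMBER};
  neighbor ${peer} as ${AS_NUMBER};
  import all;
  export all;
}
EOF
done

grep -c '^protocol bgp' ./bird-mesh-peers.conf
```

Because every host peers with every other host, each compute node runs this with its own peer list, which is why a route reflector is preferred once the mesh grows beyond a handful of nodes.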
To maintain connectivity between VMs if BIRD crashes or is upgraded, configure BIRD graceful restart. Edit the systemd unit file /usr/lib/systemd/system/bird.service (and bird6.service for IPv6):
- Add -R to the end of the ExecStart line.
- Add KillSignal=SIGKILL as a new line in the [Service] section.
- Run systemctl daemon-reload to tell systemd to reread that file.
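Assembled, the relevant part of the edited unit file might look like the fragment below. The exact ExecStart path and flags vary by package version, so append -R to whatever line your package ships rather than copying this verbatim.

```
[Service]
# Package's original ExecStart line, with -R (graceful restart) appended:
ExecStart=/usr/sbin/bird -d -c /etc/bird.conf -R
KillSignal=SIGKILL
```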
Ensure that BIRD (and/or BIRD 6 for IPv6) is running and starts on reboot.
service bird restart
service bird6 restart
chkconfig bird on
chkconfig bird6 on
Create the file /etc/calico/felix.cfg with the following content, where <ip> is the IP address of the etcd server.

[global]
DatastoreType = etcdv3
EtcdAddr = <ip>:2379
Restart the Felix service.
service calico-felix restart