Install an OpenShift 4 cluster with Calico


Big picture

Install an OpenShift 4 cluster with Calico.


This guide augments the applicable steps in the OpenShift documentation to install Calico.

How to

Before you begin

Create a configuration file for the OpenShift installer

First, create a staging directory for the installation. This directory will contain the configuration file, along with the cluster state files that the OpenShift installer creates:

mkdir openshift-tigera-install && cd openshift-tigera-install

Now run OpenShift installer to create a default configuration file:

openshift-install create install-config

Note: Refer to the OpenShift installer documentation for more information about the installer and any configuration changes required for your platform.

Once the installer has finished, your staging directory will contain the configuration file install-config.yaml.

Update the configuration file to use Calico

Override the OpenShift networking to use Calico. If you are installing on AWS, also review the instance types in install-config.yaml to make sure they meet the Calico system requirements:

sed -i 's/OpenShiftSDN/Calico/' install-config.yaml
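A quick way to sanity-check the override, shown here against a trimmed-down networking stanza rather than a full install-config.yaml (a sketch; the CIDRs are the OpenShift defaults):

```shell
# Demo file standing in for install-config.yaml; only the networking stanza.
cat > /tmp/install-config-demo.yaml <<'EOF'
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
EOF

# The same substitution as above, applied to the demo file.
sed -i 's/OpenShiftSDN/Calico/' /tmp/install-config-demo.yaml

# Confirm the change; the line should now read "networkType: Calico".
grep networkType /tmp/install-config-demo.yaml
```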

Generate the install manifests

Now generate the Kubernetes manifests using your configuration file:

openshift-install create manifests

Download the Calico manifests for OpenShift and add them to the generated manifests directory:

curl -o manifests/01-crd-installation.yaml
curl -o manifests/01-crd-tigerastatus.yaml
curl -o manifests/crd.projectcalico.org_bgpconfigurations.yaml
curl -o manifests/crd.projectcalico.org_bgppeers.yaml
curl -o manifests/crd.projectcalico.org_blockaffinities.yaml
curl -o manifests/crd.projectcalico.org_clusterinformations.yaml
curl -o manifests/crd.projectcalico.org_felixconfigurations.yaml
curl -o manifests/crd.projectcalico.org_globalnetworkpolicies.yaml
curl -o manifests/crd.projectcalico.org_globalnetworksets.yaml
curl -o manifests/crd.projectcalico.org_hostendpoints.yaml
curl -o manifests/crd.projectcalico.org_ipamblocks.yaml
curl -o manifests/crd.projectcalico.org_ipamconfigs.yaml
curl -o manifests/crd.projectcalico.org_ipamhandles.yaml
curl -o manifests/crd.projectcalico.org_ippools.yaml
curl -o manifests/crd.projectcalico.org_kubecontrollersconfigurations.yaml
curl -o manifests/crd.projectcalico.org_networkpolicies.yaml
curl -o manifests/crd.projectcalico.org_networksets.yaml
curl -o manifests/00-namespace-tigera-operator.yaml
curl -o manifests/02-rolebinding-tigera-operator.yaml
curl -o manifests/02-role-tigera-operator.yaml
curl -o manifests/02-serviceaccount-tigera-operator.yaml
curl -o manifests/02-configmap-calico-resources.yaml
curl -o manifests/02-tigera-operator.yaml
curl -o manifests/01-cr-installation.yaml
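The downloads above all follow one pattern, so they can also be scripted as a loop. The manifest base URL is omitted here because it depends on your Calico release; the sketch below substitutes a local demo directory served over file:// as a hypothetical stand-in, and downloads a single file to keep the example self-contained.

```shell
set -e

# Hypothetical stand-in for the real Calico manifests URL: a local directory
# containing one demo file, fetched via file://. Replace BASE_URL with the
# documented manifests location for your Calico release.
mkdir -p /tmp/manifest-src manifests
printf 'kind: Namespace\nmetadata:\n  name: tigera-operator\n' \
  > /tmp/manifest-src/00-namespace-tigera-operator.yaml
BASE_URL="file:///tmp/manifest-src"

# Fetch each manifest into the generated manifests/ directory.
for f in 00-namespace-tigera-operator.yaml; do
  curl -fsS -o "manifests/$f" "$BASE_URL/$f"
done
```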

Optionally provide additional configuration

You may want to provide Calico with additional configuration at install time, for example BGP configuration or peers. You can use a Kubernetes ConfigMap containing your desired Calico resources to set configuration as part of the installation. If you do not need to provide additional configuration, skip this section.

To include Calico resources during installation, edit manifests/02-configmap-calico-resources.yaml and add your own configuration.

Note: If you have a directory with the Calico resources, you can create the file with the command:

oc create configmap -n tigera-operator calico-resources \
  --from-file=<resource-directory> --dry-run -o yaml \
  > manifests/02-configmap-calico-resources.yaml

With recent versions of oc, it is necessary to have a kubeconfig configured or to add --server='', even though the server is not contacted.
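As an illustration, the resource directory passed via --from-file might contain a BGPConfiguration such as the following (a sketch; the AS number and other values are placeholders, not recommendations):

```shell
# Populate a resource directory for the configmap described above.
mkdir -p calico-resources

# Illustrative BGPConfiguration; adjust the values for your environment.
cat > calico-resources/bgpconfiguration.yaml <<'EOF'
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  logSeverityScreen: Info
  nodeToNodeMeshEnabled: true
  asNumber: 64512
EOF
```

You would then pass --from-file=calico-resources to the oc create configmap command shown earlier.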

Note: If you have provided a calico-resources configmap and the tigera-operator pod fails to come up with Init:CrashLoopBackOff, check the output of the init-container with oc logs -n tigera-operator -l k8s-app=tigera-operator -c create-initial-resources.

Create the cluster

Start the cluster creation with the following command and wait for it to complete.

openshift-install create cluster

Once the above command completes, verify that Calico is installed by checking that its components are available:

oc get tigerastatus

Note: To get more information, add -o yaml to the above command.

Optionally integrate with Operator Lifecycle Manager (OLM)

In OpenShift Container Platform, the Operator Lifecycle Manager helps cluster administrators manage the lifecycle of operators in their cluster. Managing the Calico operator with OLM gives administrators a single place to manage operators.

To register the running Calico operator with OLM, first create an OperatorGroup for the operator:

oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: tigera-operator
  namespace: tigera-operator
spec:
  targetNamespaces:
    - tigera-operator
EOF

Next, you will create a Subscription to the operator. By subscribing to the operator package, the Calico operator will be managed by OLM.

oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: tigera-operator
  namespace: tigera-operator
spec:
  channel: stable
  installPlanApproval: Manual
  name: tigera-operator
  source: certified-operators
  sourceNamespace: openshift-marketplace
  startingCSV: tigera-operator.v1.10.3
EOF

Finally, log in to the OpenShift console, navigate to the Installed Operators section and approve the Install Plan for the operator.

Note: This may trigger the operator deployment and all of its resources (pods, deployments, etc.) to be recreated.

The OpenShift console provides an interface for editing the operator installation, viewing the operator’s status, and more.

Next steps


Recommended - Networking

  • If you are using the default BGP networking with full-mesh node-to-node peering with no encapsulation, go to Configure BGP peering to get traffic flowing between pods.
  • If you are unsure about networking options, or want to implement encapsulation (overlay networking), see Determine best networking option.

Recommended - Security