# Deploying Calico and Kubernetes on Container Linux by CoreOS using Vagrant and VirtualBox
These instructions allow you to set up a Kubernetes cluster with Calico networking using Vagrant and the Calico CNI plugin. This guide does not set up TLS between Kubernetes components.
## 1. Deploy cluster using Vagrant

### 1.1 Install dependencies
- VirtualBox 5.0.0 or greater.
- Vagrant 1.7.4 or greater.
- Curl
### 1.2 Download the source files

```
curl -O https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/vagrant/Vagrantfile
curl -O https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/vagrant/master-config.yaml
curl -O https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/vagrant/node-config.yaml
```
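You should now have all three files in your working directory; a quick way to confirm:

```
ls -1 Vagrantfile master-config.yaml node-config.yaml
```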
### 1.3 Startup and SSH

Run:

```
vagrant up
```

Note: This will deploy a Kubernetes master and two Kubernetes nodes. To run more nodes, modify the value `num_instances` in the Vagrantfile before running `vagrant up`.
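For example, a minimal way to find and adjust that setting before bringing the cluster up (the exact variable spelling in the Vagrantfile is an assumption here; check the file itself):

```
# Locate the node-count setting in the Vagrantfile (variable name may differ slightly)
grep -n num_instances Vagrantfile
# Edit that line to the number of nodes you want, then start the VMs
vagrant up
```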
To connect to your servers:

- Linux/Mac OS X
  - run `vagrant ssh <hostname>`
- Windows
  - Follow instructions from https://github.com/nickryand/vagrant-multi-putty
  - run `vagrant putty <hostname>`
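For example, to open a shell on the master node:

```
vagrant ssh k8s-master
```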
### 1.4 Verify environment
You should now have three CoreOS Container Linux servers:
| Hostname    | IP            |
|-------------|---------------|
| k8s-master  | 172.18.18.101 |
| k8s-node-01 | 172.18.18.102 |
| k8s-node-02 | 172.18.18.103 |
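You can confirm that all three VMs are up from your workstation before logging in to any of them:

```
vagrant status
```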
At this point, it’s worth checking that your servers can ping each other.
From k8s-master:

```
ping 172.18.18.102
ping 172.18.18.103
```

From k8s-node-01:

```
ping 172.18.18.101
ping 172.18.18.103
```

From k8s-node-02:

```
ping 172.18.18.101
ping 172.18.18.102
```
If you see ping failures, the likely culprit is a problem with the VirtualBox network between the VMs. Check that each host is connected to the same virtual network adapter in VirtualBox; rebooting the host may also help. Remember to shut down the VMs with `vagrant halt` before you reboot.
You should also verify that each host can access etcd. The following will return an error if etcd is not available.

```
curl -L http://172.18.18.101:2379/version
```
Finally, check that Docker is running on each host by running:

```
docker ps
```
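If you prefer not to log in to each VM interactively, one way to run both checks from your workstation is with `vagrant ssh -c` (a convenience sketch, not required by this guide):

```
# Check etcd reachability and the Docker daemon on each VM in turn
for host in k8s-master k8s-node-01 k8s-node-02; do
  vagrant ssh "$host" -c "curl -sL http://172.18.18.101:2379/version && docker ps"
done
```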
## 2. Configuring the Cluster and kubectl

Prerequisite: `kubectl` installed.

Let's configure `kubectl` so you can access the cluster from your local machine.
```
kubectl config set-cluster vagrant-cluster --server=http://172.18.18.101:8080
kubectl config set-context vagrant-system --cluster=vagrant-cluster
kubectl config use-context vagrant-system
```
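To confirm that `kubectl` can reach the API server, you can list the cluster's nodes:

```
kubectl get nodes
```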
## 3. Install Addons

### Install Calico
Calico can be installed on Kubernetes using Kubernetes resources (DaemonSets, etc.).

The Calico self-hosted installation consists of three objects in the `kube-system` Namespace:

- A `ConfigMap` which contains the Calico configuration.
- A `DaemonSet` which installs the `calico/node` pod and CNI plugin.
- A `ReplicaSet` which installs the `calico/kube-policy-controller` pod.
Install the Calico manifest:

```
kubectl apply -f https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/hosted/calico.yaml
```
You should see the pods start in the `kube-system` Namespace:

```
$ kubectl get pods --namespace=kube-system
NAME                             READY     STATUS    RESTARTS   AGE
calico-node-1f4ih                2/2       Running   0          1m
calico-node-hor7x                2/2       Running   0          1m
calico-node-si5br                2/2       Running   0          1m
calico-policy-controller-so4gl   1/1       Running   0          1m

info: 1 completed object(s) was(were) not shown in pods list. Pass --show-all to see all objects.
```
### Install DNS

To install KubeDNS, use the provided manifest. This enables Kubernetes Service discovery.

```
kubectl apply -f https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/manifests/skydns.yaml
```
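As with Calico, you can confirm the DNS addon started by listing the pods in the `kube-system` Namespace and waiting for the DNS pod to reach `Running`:

```
kubectl get pods --namespace=kube-system
```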
## Next Steps
You should now have a fully functioning Kubernetes cluster using Calico for networking. You’re ready to use your cluster.
We recommend you try using Calico for Kubernetes NetworkPolicy.
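As a first experiment, here is a minimal policy that denies all ingress traffic to pods in the `default` Namespace. This is a sketch only; depending on your Kubernetes version, the NetworkPolicy resource may live in the `extensions/v1beta1` API group rather than `networking.k8s.io/v1`.

```
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  # An empty podSelector matches every pod in the namespace
  podSelector: {}
  policyTypes:
  - Ingress
EOF
```

Pods in the `default` Namespace will then reject all incoming connections until you add policies that allow specific traffic; deleting the policy restores the previous behaviour.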