Connecting to Tasks

When Calico is used to network Mesos tasks, each task is assigned a uniquely addressable IP address.

Since these IP addresses are dynamically assigned and may change when a task is redeployed, it is recommended to use one of the service discovery solutions available for Mesos.


Important: Modifying Mesos-DNS in DC/OS may break components which rely on Mesos-DNS resolving to Agent IPs. It is instead recommended to use Navstar DNS entries. Skip ahead for information on how to use this alternative DNS service.

Mesos-DNS is an open source DNS service for Mesos which gathers task information from the Mesos master and uses it to serve DNS entries that resolve to a task's IP. For tasks launched by Marathon, Mesos-DNS provides a DNS entry of the form:

  <task-name>.marathon.mesos

By default, Mesos-DNS is configured to return the IP address of the agent that is running the task. When using Calico, it must be reconfigured to return the task's container IP instead. To do this, set the IPSources field in the Mesos-DNS configuration so that a task's netinfo is preferred:

  "IPSources": ["netinfo", "host"]
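For reference, a minimal Mesos-DNS config.json with this setting might look as follows. This is a sketch: the ZooKeeper address and domain shown here are placeholder assumptions for your cluster; the IPSources line is the only change described above.

```json
{
  "zk": "zk://zookeeper.service:2181/mesos",
  "domain": "mesos",
  "port": 53,
  "IPSources": ["netinfo", "host"]
}
```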

For example, a Calico-networked task named nginx scaled to 3 instances will respond to DNS lookups with the Calico IPs:

[vagrant@calico-01 ~]$ nslookup nginx.marathon.mesos

Name:   nginx.marathon.mesos
Address: 192.168.84.196
Name:   nginx.marathon.mesos
Address: 192.168.84.197
Name:   nginx.marathon.mesos
Address: 192.168.50.70

DC/OS 1.8+ includes the Navstar and Spartan DNS services, which provide additional DNS entries that always resolve to a task's container IP.

The following address will resolve to a task's Calico IP:

  <task-name>.<framework-name>.containerip.dcos.thisdcos.directory

For example, the nginx task above can be resolved as nginx.marathon.containerip.dcos.thisdcos.directory.

Load Balancing / External Service Discovery

To access Calico-networked tasks from outside the cluster, Marathon-LB can be used to forward requests to them. The following diagram demonstrates how this works:


As per the official marathon-lb docs, select a HAPROXY_GROUP and optionally choose a HAPROXY_VHOST for your application in its Marathon application definition.
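As a sketch, the corresponding labels in the application definition might look like the following. The group name external and the hostname nginx.example.com are placeholder assumptions; Marathon-LB's per-port VHOST label takes the form HAPROXY_{n}_VHOST, where n is the port index.

```json
"labels": {
  "HAPROXY_GROUP": "external",
  "HAPROXY_0_VHOST": "nginx.example.com"
}
```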

Additionally, for Calico, you'll need to explicitly tell Marathon-LB which container port to load balance to. Set containerPort in your application's portMappings:

  {
    "id": "nginx",
    "instances": 3,
    "labels": {
      "HAPROXY_GROUP": "external"
    },
    "container": {
      "type": "DOCKER",
      "docker": {
        "image": "nginx",
        "network": "USER",
        "portMappings": [{"containerPort": 80}]
      }
    },
    "ipAddress": {
      "networkName": "calico"
    }
  }
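To deploy, the application definition can be saved to a file and POSTed to the Marathon API. Below is a minimal sketch; the definition is repeated so the snippet is self-contained, the HAPROXY_GROUP value external is an assumption, and the Marathon endpoint marathon.mesos:8080 is an assumed address for your cluster (the curl is shown commented out since it requires a live cluster).

```shell
# Save the application definition to nginx.json.
cat > nginx.json <<'EOF'
{
  "id": "nginx",
  "instances": 3,
  "labels": {
    "HAPROXY_GROUP": "external"
  },
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx",
      "network": "USER",
      "portMappings": [{"containerPort": 80}]
    }
  },
  "ipAddress": {
    "networkName": "calico"
  }
}
EOF

# Check the file is valid JSON before submitting (requires python3).
python3 -m json.tool nginx.json > /dev/null && echo "valid JSON"

# Submit to Marathon (assumed endpoint; requires a running cluster):
# curl -X POST -H "Content-Type: application/json" \
#      http://marathon.mesos:8080/v2/apps -d @nginx.json
```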

Requests made to the VHOST URL (from a host which resolves that address to the public slave) will be served by one of the Calico tasks backing it. We can simulate this resolution by passing the VHOST as a Host header with curl:

curl -H "Host: <vhost>" http://<public-slave-ip>

Note: Replace <vhost> with the HAPROXY_VHOST you configured, and <public-slave-ip> with the IP of the public slave where Marathon-LB is running.

A look into the haproxy.cfg generated by Marathon-LB shows that requests to this VHOST will be round-robined across the Calico IPs assigned to our nginx instances:

docker exec marathon-lb cat haproxy.cfg


backend nginx_10000
  balance roundrobin
  mode http
  option forwardfor
  http-request set-header X-Forwarded-Port %[dst_port]
  http-request add-header X-Forwarded-Proto https if { ssl_fc }
  server 172_24_197_101_192_168_84_196_80 192.168.84.196:80
  server 172_24_197_101_192_168_84_197_80 192.168.84.197:80
  server 172_24_197_102_192_168_50_70_80 192.168.50.70:80
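Note that the server names above appear to encode each backend's agent IP, Calico task IP, and port, with the dots of each IP written as underscores. As an illustrative sketch (this naming convention is inferred from the output above), a name can be decoded back in the shell:

```shell
# Decode a server name of the form <agent-ip>_<task-ip>_<port>,
# where the dots of each IP are written as underscores.
name="172_24_197_101_192_168_84_196_80"

agent_ip=$(echo "$name" | cut -d_ -f1-4 | tr _ .)   # agent (host) IP
task_ip=$(echo "$name" | cut -d_ -f5-8 | tr _ .)    # Calico task IP
port=$(echo "$name" | cut -d_ -f9)                  # container port

echo "agent=$agent_ip task=$task_ip port=$port"
# prints: agent=172.24.197.101 task=192.168.84.196 port=80
```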