
Roberto Carratalá

Linux Geek. Devops & Kubernetes enthusiast. Architect @ Red Hat.


What’s the difference between the Routers / IngressControllers of OpenShift 4 in cloud environments and in on-premise deployments? What are the main components involved?

Let’s take a look!

Overview and Example Scenario

This example applies to all on-premise UPI deployments of OpenShift 4 (bare metal, VMware UPI, RHV UPI, OSP UPI, etc.) and also to the RHV IPI, VMware IPI, and bare metal IPI deployments (with some differences that will be analyzed in upcoming blog posts).

In this example, an OpenShift 4.5 UPI cluster on top of OpenStack is used.

IngressControllers in On-premise Deployments

By default, when a UPI OpenShift 4 cluster is installed, a default IngressController is configured. Let’s take a look:

$ oc get ingresscontroller -n openshift-ingress-operator
NAME      AGE
default   3d16h

If we check the endpointPublishingStrategy in the definition of the ingresscontroller we can see:

$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml | yq .status.endpointPublishingStrategy
{
  "type": "HostNetwork"
}

As you noticed, the endpoint publishing strategy is configured as type: HostNetwork.

The HostNetwork endpoint publishing strategy publishes the Ingress Controller on node ports where the Ingress Controller is deployed.

An Ingress controller with the HostNetwork endpoint publishing strategy can have only one Pod replica per node.

If you want n replicas, you must use at least n nodes where those replicas can be scheduled. Because each Pod replica requests ports 80 and 443 on the node host where it is scheduled, a replica cannot be scheduled to a node if another Pod on the same node is using those ports.

  • HostNetwork spec
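That scheduling constraint is ordinary TCP socket behaviour: a second process on the same host cannot bind a port that is already taken. Here is a minimal, cluster-free Python sketch of the conflict (the port is OS-chosen here, nothing OpenShift-specific):

```python
import socket

# First "router replica": claims a port on the host, the way a
# hostNetwork pod claims ports 80/443 on its node.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))  # 0 = let the OS pick a free port
first.listen()
port = first.getsockname()[1]

# Second "replica" on the same host tries to claim the same port.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    result = "bound"
except OSError:
    result = "address already in use"
finally:
    second.close()
    first.close()

print(result)  # -> address already in use
```

This is why scheduling a second router replica onto the same node fails: the kubelet cannot give it host ports 80/443 while the first replica holds them.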

Interesting, right? But what is HostNetwork, and what does it apply to?

The hostNetwork setting applies to the Kubernetes pods. When a pod is configured with hostNetwork: true, the applications running in such a pod can directly see the network interfaces of the host machine where the pod was started. An application that is configured to listen on all network interfaces will in turn be accessible on all network interfaces of the host machine.
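That “listen on all interfaces” behaviour is plain socket semantics and can be reproduced without a cluster. In this small Python sketch, a process binding 0.0.0.0 becomes reachable through any address of the host, which is how an application inside a hostNetwork pod ends up exposed on every node interface (loopback is used here as the example interface):

```python
import socket

# Listen on all interfaces, as an app in a hostNetwork pod would.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 0))        # all interfaces, OS-chosen port
server.listen()
port = server.getsockname()[1]

# The same listener is now reachable via any host address,
# e.g. through the loopback interface:
client = socket.create_connection(("127.0.0.1", port), timeout=5)
conn, peer = server.accept()
conn.sendall(b"ok")

reply = b""
while len(reply) < 2:              # read until both bytes arrive
    reply += client.recv(2 - len(reply))

conn.close(); client.close(); server.close()
print(reply)  # -> b'ok'
```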

Creating a pod with hostNetwork: true on OpenShift is a privileged operation. For this reason, host networking is not a good way to make your applications accessible from outside of the cluster.

What is host networking good for? For cases where direct access to the host networking is required, as in our UPI installation of OpenShift, where the HAProxy pods need access to the host machine’s interfaces to expose the HAProxy ports and load-balance the traffic.

Let’s check the OpenShift Routers created by the IngressController, then.

OpenShift Routers in a UPI Deployment

First, check the router pods that were deployed:

$ oc get pod -n openshift-ingress -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE                                                   NOMINATED NODE   READINESS GATES
router-default-754bf5f974-62ntm   1/1     Running   0          41h   <none>           <none>
router-default-754bf5f974-qqmx5   1/1     Running   0          41h   <none>           <none>

Also, note that the IP comes from the host subnet and not from the pod network, as it would by default:

$ oc get clusternetworks
default     redhat/openshift-ovs-networkpolicy

$ oc get hostsubnets
NAME                                                   HOST                                                   HOST IP       SUBNET          EGRESS CIDRS   EGRESS IPS
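The actual CIDRs are cluster-specific. As a quick sanity check, Python’s ipaddress module can tell which network a given address belongs to (all values below are made-up examples, not from this cluster):

```python
import ipaddress

# Hypothetical values, for illustration only: a node host subnet
# and a typical OpenShift SDN pod network CIDR.
host_subnet = ipaddress.ip_network("192.168.10.0/24")
pod_network = ipaddress.ip_network("10.128.0.0/14")

router_pod_ip = ipaddress.ip_address("192.168.10.21")  # hypothetical

in_host = router_pod_ip in host_subnet   # the router pod's IP is a node IP
in_pods = router_pod_ip in pod_network   # ...not a pod-network IP
print(in_host, in_pods)  # -> True False
```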

Let’s inspect in detail one of the instances of the Openshift routers:

$ oc get pod router-default-754bf5f974-62ntm -n openshift-ingress -o json | jq -r '.spec.hostNetwork'
true

As we can see, hostNetwork is enabled, as expected.

There are more interesting things in the router’s YAML definition:

$ oc get pod router-default-754bf5f974-62ntm -n openshift-ingress -o json | jq -r '.spec.containers[].ports'
[
  {
    "containerPort": 80,
    "hostPort": 80,
    "name": "http",
    "protocol": "TCP"
  },
  {
    "containerPort": 443,
    "hostPort": 443,
    "name": "https",
    "protocol": "TCP"
  },
  {
    "containerPort": 1936,
    "hostPort": 1936,
    "name": "metrics",
    "protocol": "TCP"
  }
]

The hostPort setting applies to Kubernetes containers. The container port is exposed to the external network at hostIP:hostPort, where hostIP is the IP address of the Kubernetes node where the container is running, and hostPort is the port requested by the user. So, the hostPort feature allows you to expose a single container port on the host IP.

  • HostPort spec
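To make the hostIP:hostPort mapping concrete, here is a small sketch that takes the ports list shown above (inlined as JSON) and derives the externally reachable endpoints; the node IP is a hypothetical example:

```python
import json

# The .spec.containers[].ports list from the router pod, inlined.
ports = json.loads("""
[
  {"containerPort": 80,   "hostPort": 80,   "name": "http",    "protocol": "TCP"},
  {"containerPort": 443,  "hostPort": 443,  "name": "https",   "protocol": "TCP"},
  {"containerPort": 1936, "hostPort": 1936, "name": "metrics", "protocol": "TCP"}
]
""")

node_ip = "10.0.0.15"  # hypothetical node IP, for illustration only

# Each container port is reachable at hostIP:hostPort on the node.
endpoints = {p["name"]: f"{node_ip}:{p['hostPort']}" for p in ports}
print(endpoints["http"])   # -> 10.0.0.15:80
print(endpoints["https"])  # -> 10.0.0.15:443
```

These hostIP:hostPort pairs are exactly the backends you would configure in the external load balancer.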

What is the hostPort used for? As we saw, the OpenShift routers are deployed as a set of containers running on top of our OpenShift cluster. These containers are configured to use hostPorts 80 and 443 to allow inbound traffic on those ports from outside the OpenShift cluster (from an external load balancer, whether physical, such as an F5, or virtual, such as NGINX).

For this reason, each OpenShift router pod is located on a different OpenShift node: the pods cannot overlap, because of the hostPort constraint mentioned before.

If we check inside one of the OpenShift Router (HAProxy) pods:

$ oc exec -ti router-default-754bf5f974-62ntm -n openshift-ingress bash
bash-4.2$ ss -laputen | grep LISTEN | grep 443
tcp    LISTEN     0      128       *:443                   *:*                   uid:1000560000 ino:53555 sk:14 <->

bash-4.2$ ip ad | grep veth
11: vethb4826b01@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
14: veth5dc748f8@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
81: veth2190afa3@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
82: veth8b6082e1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default
84: vethf61d7613@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default

As we can see, the pod can see every host interface, because it is a privileged pod with hostNetwork: true.

Furthermore, if we check the services, we have a ClusterIP service that has the OpenShift router pods as its backends:

$ oc get svc -n openshift-ingress
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                   AGE
router-internal-default   ClusterIP   <none>        80/TCP,443/TCP,1936/TCP   3d14h

As we can see, the ports are 80, 443, and 1936 for http, https, and metrics:

$ oc get svc router-internal-default -o json | jq .spec.ports
[
  {
    "name": "http",
    "port": 80,
    "protocol": "TCP",
    "targetPort": "http"
  },
  {
    "name": "https",
    "port": 443,
    "protocol": "TCP",
    "targetPort": "https"
  },
  {
    "name": "metrics",
    "port": 1936,
    "protocol": "TCP",
    "targetPort": 1936
  }
]
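Note that targetPort is a name (http, https) for two of the ports and a plain number (1936) for the third. Kubernetes resolves a named targetPort by looking it up among the named container ports of the backend pods. Here is a small sketch of that resolution, using the values shown above:

```python
# Service ports (from the svc) and container ports (from the router pod).
service_ports = [
    {"name": "http",    "port": 80,   "targetPort": "http"},
    {"name": "https",   "port": 443,  "targetPort": "https"},
    {"name": "metrics", "port": 1936, "targetPort": 1936},
]
container_ports = [
    {"name": "http",    "containerPort": 80},
    {"name": "https",   "containerPort": 443},
    {"name": "metrics", "containerPort": 1936},
]

def resolve_target_port(target, container_ports):
    """Mimic Kubernetes: a numeric targetPort passes through,
    a named one is looked up among the pod's container ports."""
    if isinstance(target, int):
        return target
    for cp in container_ports:
        if cp["name"] == target:
            return cp["containerPort"]
    raise LookupError(f"no container port named {target!r}")

resolved = {sp["name"]: resolve_target_port(sp["targetPort"], container_ports)
            for sp in service_ports}
print(resolved)  # -> {'http': 80, 'https': 443, 'metrics': 1936}
```

Using names keeps the service stable even if the container ports are ever renumbered.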

And the backends are the two replicas of the router pods:

$ oc describe svc router-internal-default
Name:              router-internal-default
Namespace:         openshift-ingress
Annotations:       service.alpha.openshift.io/serving-cert-secret-name: router-metrics-certs-default
Type:              ClusterIP
Port:              http  80/TCP
TargetPort:        http/TCP
Port:              https  443/TCP
TargetPort:        https/TCP
Port:              metrics  1936/TCP
TargetPort:        1936/TCP
Session Affinity:  None
Events:            <none>

And that’s it! In the next blog post we will tweak the IngressController to deploy route sharding on the same hosts, changing the hostNetwork spec.

Stay tuned!

Happy Openshifting!