
How can you control and predict which source IP addresses your workloads use when they reach resources outside Kubernetes? How can you control and trace, in your firewalls and IPS/IDS, which workloads with specific source IPs are allowed and which are not?

Let’s dig in!

Overview

When workloads in your OpenShift cluster try to reach external hosts/resources, by default the cluster egress traffic gets NAT'ed to the IP of the node where your workload/pod is deployed.

As a result, external hosts (or any external firewall/IDS/IPS controlling and filtering the traffic in your networks) can't distinguish the traffic originating from your pods/workloads: the source IP is not stable, and depends on which OpenShift node happens to run the workload.

A diagram of the default cluster egress traffic could look like this:

But how can I reserve a fixed source IP for all egress traffic of my workloads in project X?

Let's introduce Egress IP in OpenShift!

Egress IP in OpenShift 4

Egress IP is an OpenShift feature that allows an IP address (the egress IP) to be assigned to a namespace, so that all outbound traffic from that namespace appears to originate from that IP address (technically, it is NATed with the specified IP).

So, in a nutshell, it gives an application or namespace the ability to use a static IP for egress traffic regardless of the node the workload is running on. This allows firewall openings, traffic whitelisting and other controls to be placed around traffic egressing the cluster.

The egress IP becomes the network identity of the namespace and all the applications running in it. Without egress IP, traffic from different namespaces would be indistinguishable because by default outbound traffic is NATed with the IP of the nodes, which are normally shared among projects.

While this process is slightly different from cloud vendor to vendor, Egress IP addresses are implemented as additional IP addresses on the primary network interface of the node and must be in the same subnet as the node’s primary IP address.

Depending on the SDN you are using, the implementation of the Egress IP is slightly different. In this blog post we will cover two SDN implementations that are shipped and fully supported in OpenShift (and are the most broadly used): OpenShift SDN and the OVN-Kubernetes CNI plugin.

Set up some prerequisites

But first we need to check the default behaviour and set up a scenario where we can debug and trace our workloads' source IPs, and the flow between the pods/containers of our workloads and the external resources outside of the cluster.

For tracing purposes, and to simulate an external resource being requested from workloads inside the OpenShift cluster, we will set up a simple httpd web server and monitor the source IPs in its access logs whenever we make requests from our workloads.

  • Deploy httpd with a static web page on port 8080 on the bastion host:
bastion # sudo yum install -y httpd

Make httpd listen on port 8080 by editing the Listen directive in /etc/httpd/conf/httpd.conf (the stock default is 80), and verify:

bastion # grep ^[^#] /etc/httpd/conf/httpd.conf | grep Listen
Listen 8080

bastion # cat > /var/www/html/index.html << EOF
<html>
<head/>
<body>OK</body>
</html>
EOF

bastion # sudo systemctl start httpd
  • Open the firewall port for httpd:
bastion # sudo firewall-cmd --zone=public --permanent --add-port=8080/tcp
bastion # sudo systemctl restart firewalld
bastion # hostname -I | awk '{print $1}'
10.1.8.72
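
  • Optionally, double-check that the rule is active by listing the open ports in the zone (the output below is what we'd expect, assuming no other ports were opened on this host):
bastion # sudo firewall-cmd --zone=public --list-ports
8080/tcp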

If we curl our brand new httpd from another host, we can check that we are effectively able to trace the source IP:

[root@helper ~]# curl 10.1.8.72:8080
<html>
<head/>
<body>OK</body>
</html>

bastion # tail -n1 /var/log/httpd/access_log
192.168.7.77 - - [22/Jul/2021:08:06:26 -0400] "GET / HTTP/1.1" 200 39 "-" "curl/7.29.0"

In this case, 192.168.7.77 is the host from which we made the request.

Default Egress Traffic Tests - No Egress IP

We can now test a workload request from inside our OCP cluster without an Egress IP:

  • Create a workload for testing (hello-world-nginx in this case):
# oc new-project no-egress-ip
# oc create deployment test-nginx --image quay.io/redhattraining/hello-world-nginx
  • Check the Pod IP and the Host IP of the node that is running the workload:
# oc get pod -n no-egress-ip -o custom-columns=NAME:.spec.containers[0].name,NODE:.spec.nodeName,POD_IP:.status.podIP,HOST_IP:.status.hostIP
NAME                NODE                     POD_IP        HOST_IP
hello-world-nginx   worker0.ocp4.rober.lab   10.254.4.22   192.168.7.11

In this case the POD_IP is 10.254.4.22 (a pod IP inside the SDN) and the HOST_IP is 192.168.7.11, which corresponds to worker0 of our cluster.

  • If we list the nodes, filtering for worker0, we can confirm that it has the same HOST_IP address:
# oc get nodes -o wide | grep worker0
worker0.ocp4.rober.lab   Ready    worker   7d     v1.20.0+c8905da   192.168.7.11   <none>        Red Hat Enterprise Linux CoreOS 47.83.202104090345-0 (Ootpa)   4.18.0-240.22.1.el8_3.x86_64   cri-o://1.20.2-6.rhaos4.7.gitf1d5201.el8
  • If we execute a curl from inside the OpenShift cluster, requesting the IP of our external resource (the web server from before):
# oc exec -ti -n no-egress-ip deploy/test-nginx -- curl http://10.1.8.72:8080
<html>
<head/>
<body>OK</body>
</html>

bastion # tail -n5 /var/log/httpd/access_log
192.168.7.11 - - [22/Jul/2021:08:31:48 -0400] "GET / HTTP/1.1" 200 39 "-" "curl/7.61.1"
192.168.7.11 - - [22/Jul/2021:08:31:48 -0400] "GET / HTTP/1.1" 200 39 "-" "curl/7.61.1"
192.168.7.11 - - [22/Jul/2021:08:31:49 -0400] "GET / HTTP/1.1" 200 39 "-" "curl/7.61.1"
192.168.7.11 - - [22/Jul/2021:08:31:50 -0400] "GET / HTTP/1.1" 200 39 "-" "curl/7.61.1"
192.168.7.11 - - [22/Jul/2021:08:31:50 -0400] "GET / HTTP/1.1" 200 39 "-" "curl/7.61.1"

We can see that the source IP used is effectively the HOST_IP of worker0, and not the POD_IP. Why? Because, as we saw in the first diagram, by default outbound traffic is NATed with the IP of the node, which is normally shared among projects.

If we moved the workload to another node, we would see a totally different source IP. This is not friendly for traditional firewall systems: there is no stable IP to whitelist or write firewall rules against in order to identify and control our workloads' egress traffic.
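
To see this for yourself, a quick sketch (assuming the cluster has enough schedulable workers for the replicas to spread out) is to scale the deployment and repeat the request from every pod; the access log will then show a mix of node IPs:

# oc -n no-egress-ip scale deployment test-nginx --replicas=3
# for pod in $(oc -n no-egress-ip get pods -o name); do oc -n no-egress-ip exec $pod -- curl -s http://10.1.8.72:8080; done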

Let’s see how the Egress IP can help us!

Egress IP with OpenShift SDN

As described before, you can configure the OpenShift SDN default Container Network Interface (CNI) network provider to assign one or more egress IP addresses to a project.

  • Create a new namespace to test the Egress IPs:
# oc new-project egress-test

You can assign egress IP addresses to namespaces by setting the egressIPs parameter of the NetNamespace object. After an egress IP is associated with a project, OpenShift SDN allows you to assign egress IPs to hosts in two ways:

  1. In the automatically assigned approach, an egress IP address range is assigned to a node.
  2. In the manually assigned approach, a list of one or more egress IP addresses is assigned to a node.

Namespaces that request an egress IP address are matched with nodes that can host those egress IP addresses, and then the egress IP addresses are assigned to those nodes.
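
For completeness, a minimal sketch of the automatically assigned approach (we won't use it here) is to set egressCIDRs on the HostSubnet of one or more nodes and let OpenShift pick which node hosts the namespace's egress IP; the CIDR below is just this lab's node subnet, used as an example:

# oc patch netnamespace egress-test --type=merge -p '{"egressIPs": ["192.168.7.200"]}'
# oc patch hostsubnet worker2.ocp4.rober.lab --type=merge -p '{"egressCIDRs": ["192.168.7.0/24"]}'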

  • In this case we will use the manually assigned approach: a list with a single IP address to assign to the node. Update the NetNamespace object by specifying the following JSON object with the desired IP address (in this case, 192.168.7.200):
# oc patch netnamespace egress-test --type=merge -p '{"egressIPs": ["192.168.7.200"]}'
netnamespace.network.openshift.io/egress-test patched
  • Check that the egress IP is successfully associated with the NetNamespace of the project we created before:
# oc get netnamespace egress-test -o yaml | grep -A2 egressIPs
egressIPs:
- 192.168.7.200
kind: NetNamespace
--
      f:egressIPs: {}
    manager: kubectl-patch
    operation: Update

When a namespace has multiple egress IP addresses, if the node hosting the first egress IP address is unreachable, OpenShift Container Platform will automatically switch to the next available egress IP address until the first one is reachable again (we'll sketch such a setup below, after the HostSubnet steps).

  • Check the HostSubnet of the worker that will hold the egressIP we assigned (in this case manually, but it could also be assigned automatically):
# oc get hostsubnet | grep worker2
worker2.ocp4.rober.lab   worker2.ocp4.rober.lab   192.168.7.13   10.254.5.0/24
  • After adding the egress IP to the NetNamespace, we need to add the same IP to the HostSubnet:
# oc patch hostsubnet worker2.ocp4.rober.lab --type=merge -p '{"egressIPs": ["192.168.7.200"]}'
hostsubnet.network.openshift.io/worker2.ocp4.rober.lab patched
  • If we check the HostSubnet again, we notice that the EGRESS IPS column now shows the IP assigned in the previous step:
# oc get hostsubnet | grep worker2
worker2.ocp4.rober.lab   worker2.ocp4.rober.lab   192.168.7.13   10.254.5.0/24                  ["192.168.7.200"]
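
As promised above, here is a sketch of a high-availability setup with multiple egress IPs (assuming a second worker, worker1.ocp4.rober.lab, and a second free IP, 192.168.7.201, both hypothetical in this lab):

# oc patch netnamespace egress-test --type=merge -p '{"egressIPs": ["192.168.7.200", "192.168.7.201"]}'
# oc patch hostsubnet worker2.ocp4.rober.lab --type=merge -p '{"egressIPs": ["192.168.7.200"]}'
# oc patch hostsubnet worker1.ocp4.rober.lab --type=merge -p '{"egressIPs": ["192.168.7.201"]}'
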
  • Now, on the node, we can see that OpenShift automatically added this IP to the primary interface:
# oc debug node/worker2.ocp4.rober.lab
Starting pod/worker2ocp4roberlab-debug ...
To use host binaries, run `chroot /host`
Pod IP: 192.168.7.13
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host bash
[root@worker2 /]#

[root@worker2 /]# ip a | grep -A5 ens3
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:e7:02:71 brd ff:ff:ff:ff:ff:ff
    inet 192.168.7.13/24 brd 192.168.7.255 scope global dynamic noprefixroute ens3
       valid_lft 12388sec preferred_lft 12388sec
    inet 192.168.7.200/24 brd 192.168.7.255 scope global secondary ens3:eip
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fee7:271/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

As we can see in the output of the ip address command, a second IP is now attached to the node's primary interface (in this case ens3, which also holds the HOST_IP used in the OpenShift cluster), and it will be used as the Egress IP for the egress-test project/namespace.

With this setup, all egress traffic for the project egress-test will be routed to the node hosting the specified egress IP (worker2), and then NATed to that IP address.
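
If you want to verify this routing at the packet level, one option (just an illustration; any capture tool will do) is to run tcpdump on the egress node while curling from the pod:

# oc debug node/worker2.ocp4.rober.lab
sh-4.4# chroot /host
[root@worker2 /]# tcpdump -i ens3 -nn host 10.1.8.72 and port 8080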

Tests with the OpenShift SDN

Now that we have the Egress IP configured and in place, let’s test it!

  • In the namespace/project where we configured the egress IP (egress-test), launch the example application that we used in the previous test:
# oc -n egress-test create deployment test-nginx --image quay.io/redhattraining/hello-world-nginx

  • Check the node running the workload and its IPs:
# oc get pod -n egress-test -o custom-columns=NAME:.spec.containers[0].name,NODE:.spec.nodeName,POD_IP:.status.podIP,HOST_IP:.status.hostIP
NAME                NODE                     POD_IP        HOST_IP
hello-world-nginx   worker0.ocp4.rober.lab   10.254.4.27   192.168.7.11

In this case our application is running on worker0 again, with a HOST_IP of 192.168.7.11. Note that this is not the node that holds the egress IP (worker2).

  • Let's perform a curl from inside our example workload to the web server that lives outside of our OpenShift cluster:
# oc exec -n egress-test -ti deploy/test-nginx -- curl http://10.1.8.72:8080
<html>
<head/>
<body>OK</body>
</html>
  • And in the access log we will see the Egress IP that we selected, NOT the HOST_IP:
bastion # tail -n5 /var/log/httpd/access_log
192.168.7.200 - - [22/Jul/2021:08:47:44 -0400] "GET / HTTP/1.1" 200 39 "-" "curl/7.61.1"
192.168.7.200 - - [22/Jul/2021:08:47:44 -0400] "GET / HTTP/1.1" 200 39 "-" "curl/7.61.1"
192.168.7.200 - - [22/Jul/2021:08:47:45 -0400] "GET / HTTP/1.1" 200 39 "-" "curl/7.61.1"
192.168.7.200 - - [22/Jul/2021:08:47:46 -0400] "GET / HTTP/1.1" 200 39 "-" "curl/7.61.1"
192.168.7.200 - - [22/Jul/2021:08:47:47 -0400] "GET / HTTP/1.1" 200 39 "-" "curl/7.61.1"

If the workload runs on other workers, or if it's scaled to more replicas, the source IP observed by our web server will still be the Egress IP.
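
For instance, the same scaling sketch as before (again assuming the replicas spread across several workers) should produce access-log entries that all show 192.168.7.200:

# oc -n egress-test scale deployment test-nginx --replicas=3
# for pod in $(oc -n egress-test get pods -o name); do oc -n egress-test exec $pod -- curl -s http://10.1.8.72:8080; done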

Remember that all egress traffic for the project egress-test is routed to the node hosting the specified egress IP (worker2), and then NATed to that IP address.

Check out the Egress IP using the OVN-Kubernetes CNI plugin blog post and see the differences between these two methods!

NOTE: Opinions expressed in this blog are my own and do not necessarily reflect that of the company I work for.

Happy OpenShifting!