James Sumners
2017-10-11 20:06:03 UTC
I lied. I'm not having fun.
I have a system with three NICs: eth0, eth1, eth2. This system is to be a
"load balancer," or, more accurately, a reverse proxy for many endpoints.
My desire is to make eth0 a "maintenance" NIC and bond eth1 & eth2 into the
primary service interface, bond0. I have three subnets in play: 10.0.1.0/24,
10.0.2.0/24, and 10.0.3.0/24. Pretend that 10.0.2/24 and 10.0.3/24 are
public, Internet-accessible subnets and 10.0.1/24 is private. The proxy
will serve endpoints on all three of these subnets.
Okay, so let's set up the interfaces:
```
$ echo -e "1 service\n2 bond" >> /etc/iproute2/rt_tables
$ ip link set eth0 up
$ ip addr add 10.0.1.2/24 dev eth0
$ ip rule add iif eth0 prio 0 table service
$ ip route add to 0.0.0.0/0 via 10.0.1.1 dev eth0 prio 10000 table main
$ ip route add default via 10.0.1.1 dev eth0 table service
$ ip route flush cache
$ ip link add dev bond0 address 00:00:00:aa:bb:cc type bond
$ echo balance-alb > /sys/class/net/bond0/bonding/mode
$ echo 100 > /sys/class/net/bond0/bonding/miimon
$ ip link set eth1 master bond0
$ ip link set eth2 master bond0
$ ip link set bond0 up
$ ip addr add 10.0.2.2/24 dev bond0 # see note 1 below
$ ip rule add iif bond0 prio 0 table bond
$ ip route add default via 10.0.2.1 dev bond0 table bond
$ ip route flush cache
```
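As a sanity check, the kernel's view of all of this can be dumped with the
usual read-only commands (nothing below changes state; it's just inspection):
```
$ cat /etc/iproute2/rt_tables     # confirm the "service" and "bond" table names
$ ip rule show                    # the iif rules alongside the default 0/32766/32767 rules
$ ip route show table main
$ ip route show table service
$ ip route show table bond
$ cat /proc/net/bonding/bond0     # bonding mode, MII status, active slaves
$ ip -d link show bond0           # bond details, including the forced MAC
```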
Cool. Now let's add an endpoint:
```
$ ip addr add 10.0.3.15/32 dev bond0
```
So, what's the problem? The switches see 10.0.3.15 as being associated with
eth0, so things don't work correctly. I can monitor the traffic on bond0
with tcpdump, ping 10.0.3.15, and watch the traffic come in, but the pinger
never gets a pong.
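Concretely, the capture on the proxy plus an ARP-level check would look
something like the following. The arping runs from a second host on
10.0.3.0/24; its interface name below is just a placeholder, and the ARP
captures are extra diagnostics rather than part of the setup above.
```
# On the proxy: watch the VIP's ICMP on the bond (the echo requests do show
# up here).
$ tcpdump -nni bond0 icmp and host 10.0.3.15

# Also on the proxy: watch which interface/MAC answers ARP for the VIP,
# since the switches apparently associate 10.0.3.15 with eth0.
$ tcpdump -nnei eth0 arp
$ tcpdump -nnei bond0 arp

# From another host on 10.0.3.0/24 (iputils arping; "eth0" here is that
# host's NIC, not the proxy's): force an ARP probe for the VIP.
$ arping -I eth0 10.0.3.15
```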
At this point I'm probably just going to say to hell with the maintenance
interface, put all traffic on the bond, and route it with the main table.
But I figured I'd see if anyone has any guesses about why this
configuration isn't working. To the best of my knowledge, the following
should be true (a quick way to check each case is sketched after the list):
1. Traffic originating on the system will be routed through 10.0.1.1 via
the eth0 interface, as per the "main" routing table.
2. Traffic originating remotely and arriving via 10.0.1.2 will route
through 10.0.1.1 via the eth0 interface, as per the "service" routing table.
3. Traffic originating remotely and arriving via 10.0.2.2 or 10.0.3.15 will
route through 10.0.2.1 via the bond0 interface, as per the "bond" routing
table.
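The quick check I mean is "ip route get", which reports the nexthop and
interface the kernel would actually pick; "iif DEV" can also be appended to
simulate a packet arriving on a given interface. The 192.0.2.10 below is
just an RFC 5737 placeholder for "some remote host", not part of my setup:
```
# Case 1: traffic originating on the system.
$ ip route get 192.0.2.10

# Cases 2 and 3: which route does traffic sourced from each local address use?
$ ip route get 192.0.2.10 from 10.0.1.2
$ ip route get 192.0.2.10 from 10.0.2.2
$ ip route get 192.0.2.10 from 10.0.3.15
```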
Note 1: this is actually a pair of systems configured for failover with
Ucarp as provided by https://github.com/jsumners/ucarp-rhel7 . Ucarp needs
a "master IP" to tie the VIPs to.
--
James Sumners
http://james.sumners.info/ (technical profile)
http://jrfom.com/ (personal site)
http://haplo.bandcamp.com/ (music)