Network interfaces, e.g. eth0 and eth1, traditionally represent actual hardware inside your machine. These hardware components are commonly referred to as network adapters, and they come in various shapes and sizes. Some network adapters have dual sockets.
CentOS assigns a user-friendly name to each of these sockets (aka interfaces), such as eth0, eth1, etc. You can view a full list of all the interfaces on your machine by running:
[root@localhost ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:d0:c3:19 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 86376sec preferred_lft 86376sec
    inet6 fe80::ec1:d80f:6948:87b6/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:ab:eb:2e brd ff:ff:ff:ff:ff:ff
    inet 172.28.128.3/24 brd 172.28.128.255 scope global dynamic enp0s8
       valid_lft 1176sec preferred_lft 1176sec
    inet6 fe80::a00:27ff:feab:eb2e/64 scope link
       valid_lft forever preferred_lft forever
4: enp0s9: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:75:e3:89 brd ff:ff:ff:ff:ff:ff
    inet 172.28.128.4/24 brd 172.28.128.255 scope global dynamic enp0s9
       valid_lft 1176sec preferred_lft 1176sec
    inet6 fe80::a00:27ff:fe75:e389/64 scope link
       valid_lft forever preferred_lft forever
5: virbr0: mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:9c:e9:92 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
6: virbr0-nic: mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:9c:e9:92 brd ff:ff:ff:ff:ff:ff
In CentOS it’s possible to configure two separate network interfaces (e.g. enp0s3 and enp0s8) to work together. This is handy in a number of ways:
- combined bandwidth – traffic can use both links, which means improved performance
- load balancing – so that one interface doesn't do all the work while the other sits idle
- fault tolerance – if one interface suffers a hardware failure, the other interface can take over
You can follow along with this article by using our Network Teaming Vagrant project on GitHub.
Combining two network interfaces like this is referred to as ‘Link Aggregation’.
There are two methods for setting up link aggregation:
- network bonding – this will eventually get replaced by network teaming
- network teaming
The ‘teaming’ method is new in RHEL 7 and has more features compared to the bonding method. Therefore in this guide we’ll focus on network teaming. First off, you need to install the teamd rpm:
$ yum install teamd
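If you want to quickly sanity-check that the package is installed, rpm can confirm it (this step isn't strictly necessary):

$ rpm -q teamd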
A team based link aggregation has 5 different operating modes:
- broadcast
- roundrobin
- activebackup
- loadbalance
- lacp
You can find more info about these modes in the teamd man pages:
$ man 5 teamd.conf
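As an illustration of how a mode is selected, the JSON snippet passed to nmcli simply names the runner you want. For instance, a loadbalance team could be requested like this (a hypothetical example; the connection name ExampleTeam0 is made up, and we stick with activebackup for the rest of this guide):

$ nmcli connection add type team con-name ExampleTeam0 ifname ExampleTeam0 config '{"runner": {"name":"loadbalance"}}'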
You can set up link aggregation using NetworkManager utilities. In our case we'll use nmcli.
$ nmcli connection add type team con-name CodingBeeTeam0 ifname CodingBeeTeam0 config '{"runner": {"name":"activebackup"}}'
Connection 'CodingBeeTeam0' (7a12239f-ca6f-4347-b9db-86122bb22078) successfully added.
This is quite a complex-looking command, but you can see examples of it in:
$ man nmcli-examples
$ man teamd.conf   # gives JSON examples
Here we have created a new connection called 'CodingBeeTeam0' as well as an 'emulated' interface, also called 'CodingBeeTeam0'. To view the connection you can do:
$ nmcli connection show CodingBeeTeam0
The corresponding file that’s been created is:
$ cat /etc/sysconfig/network-scripts/ifcfg-CodingBeeTeam0
DEVICE=CodingBeeTeam0
TEAM_CONFIG="{\"runner\": {\"name\":\"activebackup\"}}"
DEVICETYPE=Team
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=CodingBeeTeam0
UUID=7a12239f-ca6f-4347-b9db-86122bb22078
ONBOOT=yes
As for the ’emulated’ interface, you can view this using the ip command:
$ ip addr show CodingBeeTeam0
8: CodingBeeTeam0: mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:1e:05:19:50:0a brd ff:ff:ff:ff:ff:ff
This is more of a simulated interface, because it's not linked to any actual hardware. Instead, the CodingBeeTeam0 connection will manage traffic from other actual interfaces, such as eth0, enp0s3, wlan0, etc. Hence CodingBeeTeam0 can be thought of as a 'virtual interface' that carries data traffic to/from other real interfaces. This interface is often referred to as the team-interface.
This virtual interface uses something called a 'runner', which you can imagine as an army of ants that carries data traffic between the team-interface and the other real interfaces. You control the logic the runner uses when deciding which interface to send/retrieve traffic on by specifying the operating mode. In our case we have chosen activebackup. Note that '{"runner": {"name":"activebackup"}}' is actually written in JSON syntax.
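If you ever want to double-check which runner a team connection has been configured with, one simple way (assuming the connection exists) is to grep the team.config property out of nmcli's output; the exact formatting may vary slightly between NetworkManager versions:

$ nmcli connection show CodingBeeTeam0 | grep team.config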
In order for our new team connection, CodingBeeTeam0, to start receiving traffic, we need to (manually) assign it an IP address, which we can do like this:
$ nmcli connection modify CodingBeeTeam0 ipv4.addresses '192.168.2.50/24'
For this demo it doesn't matter which IP address you assign, as long as it falls inside one of the private network ranges (10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16).
This ends up adding the following lines to the file:
$ cat /etc/sysconfig/network-scripts/ifcfg-CodingBeeTeam0
DEVICE=CodingBeeTeam0
TEAM_CONFIG="{\"runner\": {\"name\":\"activebackup\"}}"
DEVICETYPE=Team
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=CodingBeeTeam0
UUID=7a12239f-ca6f-4347-b9db-86122bb22078
ONBOOT=yes
IPADDR=192.168.2.50
PREFIX=24
PEERDNS=yes
PEERROUTES=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
Since we have manually assigned the IP address (rather than letting DHCP assign one automatically), we need to disable DHCP for this connection before using it:
$ nmcli connection modify CodingBeeTeam0 ipv4.method manual
This changes BOOTPROTO from dhcp to none in the file:
$ cat /etc/sysconfig/network-scripts/ifcfg-CodingBeeTeam0
DEVICE=CodingBeeTeam0
TEAM_CONFIG="{\"runner\": {\"name\":\"activebackup\"}}"
DEVICETYPE=Team
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=CodingBeeTeam0
UUID=7a12239f-ca6f-4347-b9db-86122bb22078
ONBOOT=yes
IPADDR=192.168.2.50
PREFIX=24
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
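As an aside, if your team connection also needed a default gateway or DNS servers (ours doesn't for this lab), nmcli can set those on the same connection. The addresses below are only placeholders and won't appear in the file shown above:

$ nmcli connection modify CodingBeeTeam0 ipv4.gateway '192.168.2.1'
$ nmcli connection modify CodingBeeTeam0 ipv4.dns '192.168.2.1'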
We can now take a more detailed look at our team connection, CodingBeeTeam0, using the teamdctl command:
[root@localhost ~]# teamdctl CodingBeeTeam0 state
setup:
  runner: activebackup
runner:
  active port:
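teamdctl can also dump the team's runtime configuration as JSON, which is a handy way to confirm that the runner we asked for has actually been applied:

$ teamdctl CodingBeeTeam0 config dump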
However we still haven’t specified which actual interfaces (e.g. enp0s8 and enp0s9) the team connection (CodingBeeTeam0) should be routing traffic to. We can set this using the nmcli command, for example here’s the command to attach enp0s8 to CodingBeeTeam0:
$ nmcli connection add type team-slave con-name CodingBeeTeam0-port1 ifname enp0s8 master CodingBeeTeam0
Connection 'CodingBeeTeam0-port1' (ffefdb3c-d61b-4800-9495-de9af09e62e7) successfully added.
Here we are telling nmcli to "create a new team-slave connection using the existing interface, enp0s8, give this team-slave the name 'CodingBeeTeam0-port1', then attach that team-slave connection to its master team-connection, called 'CodingBeeTeam0'".
We need to attach at least 2 interfaces to our team connection, so let’s do the same thing with enp0s9:
$ nmcli connection add type team-slave con-name CodingBeeTeam0-port2 ifname enp0s9 master CodingBeeTeam0
Connection 'CodingBeeTeam0-port2' (b2f6d366-6ddf-471f-bc86-aee143b29b1b) successfully added.
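If you're curious, each team-slave connection gets its own ifcfg file too. The team-specific settings in it should look roughly like this (shown as an illustration only; the exact contents on your machine may differ):

$ grep -i team /etc/sysconfig/network-scripts/ifcfg-CodingBeeTeam0-port1
TEAM_MASTER=CodingBeeTeam0
DEVICETYPE=TeamPort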
Now if we check our team-connection’s state again, we see:
[root@localhost ~]# teamdctl CodingBeeTeam0 state
setup:
  runner: activebackup
runner:
  active port:
Nothing has changed. That's because we still need to activate both team-slave connections, as well as the team connection itself. To activate everything we do:
$ nmcli connection up CodingBeeTeam0
$ nmcli connection up CodingBeeTeam0-port1
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)
$ nmcli connection up CodingBeeTeam0-port2
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/8)
Now if we check the states again, we see:
$ teamdctl CodingBeeTeam0 state
setup:
  runner: activebackup
ports:
  enp0s8
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  enp0s9
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: enp0s8
This is also indicated when we run:
$ ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:d0:c3:19 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 79552sec preferred_lft 79552sec
    inet6 fe80::ec1:d80f:6948:87b6/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: mtu 1500 qdisc pfifo_fast master CodingBeeTeam0 state UP qlen 1000
    link/ether 08:00:27:ab:eb:2e brd ff:ff:ff:ff:ff:ff
4: enp0s9: mtu 1500 qdisc pfifo_fast master CodingBeeTeam0 state UP qlen 1000
    link/ether 08:00:27:ab:eb:2e brd ff:ff:ff:ff:ff:ff
5: virbr0: mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:9c:e9:92 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
6: virbr0-nic: mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:9c:e9:92 brd ff:ff:ff:ff:ff:ff
7: CodingBeeTeam0: mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 08:00:27:ab:eb:2e brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.50/24 brd 192.168.2.255 scope global CodingBeeTeam0
       valid_lft forever preferred_lft forever
    inet6 fe80::7b52:6256:17cd:a47d/64 scope link
       valid_lft forever preferred_lft forever
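As a shortcut, rather than scanning the full ip a output, you can ask ip to list only the interfaces enslaved to the team:

$ ip link show master CodingBeeTeam0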
Now we can take down an interface and see the failover in action:
$ nmcli connection down CodingBeeTeam0-port1
Connection 'CodingBeeTeam0-port1' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)
$ teamdctl CodingBeeTeam0 state
setup:
  runner: activebackup
ports:
  enp0s9
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: enp0s9
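Once you've seen the failover, you can bring the downed port back into the team the same way it was activated earlier, and then re-check the state:

$ nmcli connection up CodingBeeTeam0-port1
$ teamdctl CodingBeeTeam0 state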
As an alternative to using nmcli, you can also do all of this via the GUI. Make sure you are in the GNOME desktop, then start the GUI by running:
$ nm-connection-editor
Useful resources
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/ch-Configure_Network_Teaming.html
Recap
Link aggregation: combining two network interfaces to act like one. The benefits:
- double bandwidth
- load balancing
- redundancy / fault tolerance
The two methods for setting it up:
- network bonding
- network teaming
We use network teaming, since it's newer and has more features.
The team operating modes (runners) are:
- broadcast
- roundrobin
- activebackup
- loadbalance
- lacp
For more details on each mode:
$ man 5 teamd.conf
The overall procedure is:
1. Create a new connection of the type 'team'.
2. Manually assign an IP address (that falls in one of the private ranges) to our new team connection.
3. Completely disable DHCP for our new team connection. That's to avoid DHCP overriding our manually set IPv4 address.
4. Create a new connection of type team-slave for each of enp0s8 and enp0s9.
5. Activate both team-slave connections and the master connection.
6. Test that everything is working.
$ nmcli connection add type team con-name cbteam ifname cbteam config '{"runner": {"name":"activebackup"}}'
This command takes 4 arguments:
- type: team
- con-name: cbteam
- ifname: cbteam
- config: '{"runner": {"name":"activebackup"}}'
After running it:
- a new 'virtual' interface is created, which appears in 'ip addr show'
- a new connection file is created under /etc/sysconfig/network-scripts/
You can inspect the new team with:
$ teamdctl cbteam state
# and
$ ip address show cbteam
$ nmcli connection modify cbteam ipv4.addresses '192.168.2.50/24'
$ nmcli connection modify cbteam ipv4.method manual
$ nmcli connection add type team-slave con-name cbteam_enp0s8 ifname enp0s8 master cbteam
$ nmcli connection add type team-slave con-name cbteam_enp0s9 ifname enp0s9 master cbteam
Each of these commands takes 4 arguments:
1. connection type (team-slave)
2. connection name (e.g. cbteam_enp0s8)
3. interface name (enp0s8)
4. the master's name (cbteam)
$ nmcli connection up cbteam
$ nmcli connection up cbteam_enp0s8
$ nmcli connection up cbteam_enp0s9
$ teamdctl cbteam state
$ nm-connection-editor