Check if jumbo frames are working

This is a simple post showing how to test whether a large MTU (jumbo frames) is working in your network using the ping tool.

When jumbo frames are not enabled on all network devices:

# ping -s 8000 -M do -c 5 172.16.64.75
PING 172.16.64.75 (172.16.64.75) 8000(8028) bytes of data.
From 172.16.64.68 icmp_seq=1 Frag needed and DF set (mtu = 1500)
From 172.16.64.68 icmp_seq=1 Frag needed and DF set (mtu = 1500)
From 172.16.64.68 icmp_seq=1 Frag needed and DF set (mtu = 1500)
From 172.16.64.68 icmp_seq=1 Frag needed and DF set (mtu = 1500)
From 172.16.64.68 icmp_seq=1 Frag needed and DF set (mtu = 1500)

--- 172.16.64.75 ping statistics ---
0 packets transmitted, 0 received, +5 errors
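
Here -s 8000 sets the ICMP payload size and -M do forbids fragmentation (the Don't Fragment bit), so the 8028-byte packet (8000 bytes of payload plus 8 bytes of ICMP header and 20 bytes of IP header) either travels whole or fails with the error shown above. Before testing end to end you can also check what MTU each interface is actually configured with; the mtu value is printed on each interface's line:

$ ip -o link show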

And with jumbo frames enabled:

# ping -s 8000 -M do -c 5 172.16.64.75
PING 172.16.64.75 (172.16.64.75) 8000(8028) bytes of data.
8008 bytes from 172.16.64.75: icmp_seq=1 ttl=64 time=0.461 ms
8008 bytes from 172.16.64.75: icmp_seq=2 ttl=64 time=0.360 ms
8008 bytes from 172.16.64.75: icmp_seq=3 ttl=64 time=0.402 ms
8008 bytes from 172.16.64.75: icmp_seq=4 ttl=64 time=0.410 ms
8008 bytes from 172.16.64.75: icmp_seq=5 ttl=64 time=0.347 ms

--- 172.16.64.75 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3998ms
rtt min/avg/max/mdev = 0.347/0.396/0.461/0.040 ms
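
If you do not know what MTU to expect along the path, you can let ping search for the largest payload that still gets through with DF set. A rough bash sketch (the target address is the one from the examples above; a lost reply is counted the same as a fragmentation error):

#!/bin/bash
# Binary-search the largest ICMP payload that passes with DF set.
host=172.16.64.75
lo=1000          # known-good payload size
hi=9000          # upper bound to try
while [ $((hi - lo)) -gt 1 ]; do
    mid=$(( (lo + hi) / 2 ))
    if ping -c 1 -W 1 -s "$mid" -M do "$host" > /dev/null 2>&1; then
        lo=$mid   # got a reply, try bigger
    else
        hi=$mid   # failed (fragmentation needed or timeout), try smaller
    fi
done
echo "largest working payload: $lo bytes (path MTU is about $((lo + 28)) bytes)"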

Testing a 10G network with iperf


I am testing the real bandwidth I can get over a 10G network connection. My server has an Intel X540-AT2 network card with two 10G interfaces.

The server is configured to use bonding in balance-alb mode, but in this test only one interface comes into play because the iperf client connects to a single MAC address.

$ sudo cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:f9:3d:54
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:25:90:f9:3d:55
Slave queue ID: 0
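
For reference, a balance-alb bond like this one can be created on the fly with iproute2. This is only a minimal sketch using the eth0/eth1 names from the output above and the server address from the iperf tests below (the /24 netmask is an assumption); the real configuration normally lives in your distribution's network config files:

$ sudo ip link add bond0 type bond mode balance-alb miimon 100
$ sudo ip link set eth0 down
$ sudo ip link set eth1 down
$ sudo ip link set eth0 master bond0
$ sudo ip link set eth1 master bond0
$ sudo ip link set bond0 up
$ sudo ip addr add 172.17.16.78/24 dev bond0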


First, run iperf in server mode on the server:

$ sudo iperf -s
 ------------------------------------------------------------
 Server listening on TCP port 5001
 TCP window size: 85.3 KByte (default)
 ------------------------------------------------------------
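
If the machine has several addresses (for example one per bond or VLAN), the listener can be bound to a specific one with -B:

$ sudo iperf -s -B 172.17.16.78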


And from a client, run iperf connecting to the server IP.

The first test uses a maximum transmission unit (MTU) of 1500 bytes (the default):

$ sudo iperf -c 172.17.16.78 -i1 -t 10 -m
------------------------------------------------------------
Client connecting to 172.17.16.78, TCP port 5001
TCP window size: 92.9 KByte (default)
------------------------------------------------------------
[  3] local 172.17.16.79 port 52458 connected with 172.17.16.78 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   535 MBytes  4.49 Gbits/sec
[  3]  1.0- 2.0 sec   594 MBytes  4.98 Gbits/sec
[  3]  2.0- 3.0 sec   554 MBytes  4.64 Gbits/sec
[  3]  3.0- 4.0 sec   553 MBytes  4.64 Gbits/sec
[  3]  4.0- 5.0 sec   565 MBytes  4.74 Gbits/sec
[  3]  5.0- 6.0 sec   605 MBytes  5.07 Gbits/sec
[  3]  6.0- 7.0 sec   597 MBytes  5.01 Gbits/sec
[  3]  7.0- 8.0 sec   587 MBytes  4.92 Gbits/sec
[  3]  8.0- 9.0 sec   602 MBytes  5.05 Gbits/sec
[  3]  0.0-10.0 sec  5.67 GBytes  4.87 Gbits/sec
[  3] MSS size 1448 bytes (MTU 1500 bytes, ethernet)

With this test I barely get around 5 Gbits/sec, which seems too low for a 10G network card.
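
The MSS line hints at where the limit comes from: with the default Ethernet MTU each segment carries

1500 (MTU) - 20 (IP header) - 20 (TCP header) - 12 (TCP timestamps option) = 1448 bytes of payload

so for the same throughput the hosts have to build and process roughly six times as many packets as with a 9000-byte MTU, and that per-packet overhead is usually what keeps a single TCP stream well below line rate.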

In the next test I've changed the MTU to 9000 (jumbo frames) on all network devices (server, client and switch):


$ sudo iperf -c 172.17.16.78 -i1 -t 10 -m
------------------------------------------------------------
Client connecting to 172.17.16.78, TCP port 5001
TCP window size: 92.9 KByte (default)
------------------------------------------------------------
[  3] local 172.17.16.79 port 52101 connected with 172.17.16.78 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   971 MBytes  8.14 Gbits/sec
[  3]  1.0- 2.0 sec  1.09 GBytes  9.37 Gbits/sec
[  3]  2.0- 3.0 sec  1.05 GBytes  9.04 Gbits/sec
[  3]  3.0- 4.0 sec  1.06 GBytes  9.14 Gbits/sec
[  3]  4.0- 5.0 sec  1.10 GBytes  9.43 Gbits/sec
[  3]  5.0- 6.0 sec  1.11 GBytes  9.51 Gbits/sec
[  3]  6.0- 7.0 sec  1009 MBytes  8.46 Gbits/sec
[  3]  7.0- 8.0 sec  1.04 GBytes  8.94 Gbits/sec
[  3]  8.0- 9.0 sec  1.11 GBytes  9.56 Gbits/sec
[  3]  9.0-10.0 sec  1.07 GBytes  9.21 Gbits/sec
[  3]  0.0-10.0 sec  10.6 GBytes  9.08 Gbits/sec
[  3] MSS size 8948 bytes (MTU 8988 bytes, unknown interface)
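
The odd MTU value in the last line is nothing to worry about: with a 9000-byte MTU the MSS is 9000 - 20 - 20 - 12 = 8948 bytes, and since iperf does not seem to recognize that size it just reports MSS + 40 = 8988 bytes as "unknown interface". The interfaces themselves are really running at MTU 9000.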


And now it looks very different: I can saturate the network link.
To change the MTU, I changed it on both the server and the client, on all network devices: the bond interface and its slaves. The switch must also support it. You can change a device's MTU easily with this command:

$ sudo ip link set dev eth0 mtu 9000
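
In a bonded setup the new MTU has to end up on the bond and on its slaves. A minimal sketch using the names from this post (on many kernels setting it on the bond propagates to the slaves, but setting it explicitly does no harm); note that changes made with ip are lost on reboot, so they also have to go into the permanent network configuration:

$ sudo ip link set dev eth0 mtu 9000
$ sudo ip link set dev eth1 mtu 9000
$ sudo ip link set dev bond0 mtu 9000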