documentation:examples:aggregating_multiple_isp_links_with_mlvpn

{{:documentation:examples:bsdrp-lab-mlvpn-details.png|}}

===== Virtual Lab setup =====

This chapter describes how to start each router and how to configure the central routers.

More information on these BSDRP lab scripts is available on [[documentation:examples:How to build a BSDRP router lab]].
  
<code>
# ./tools/BSDRP-lab-bhyve.sh -n 6
BSD Router Project (http://bsdrp.net) - bhyve full-meshed lab script
Setting-up a virtual lab with 6 VM(s):
- Working directory: /root/BSDRP-VMs
- Each VM has a total of 1 (1 cores and 1 threads) and 512M RAM
- Emulated NIC: virtio-net
- Switch mode: bridge + tap
- 0 LAN(s) between all VM
- Full mesh Ethernet links between each VM
VM 1 has the following NIC:
- vtnet0 connected to VM 2
- vtnet1 connected to VM 3
- vtnet2 connected to VM 4
- vtnet3 connected to VM 5
- vtnet4 connected to VM 6
VM 2 has the following NIC:
- vtnet0 connected to VM 1
- vtnet1 connected to VM 3
- vtnet2 connected to VM 4
- vtnet3 connected to VM 5
- vtnet4 connected to VM 6
VM 3 has the following NIC:
- vtnet0 connected to VM 1
- vtnet1 connected to VM 2
- vtnet2 connected to VM 4
- vtnet3 connected to VM 5
- vtnet4 connected to VM 6
VM 4 has the following NIC:
- vtnet0 connected to VM 1
- vtnet1 connected to VM 2
- vtnet2 connected to VM 3
- vtnet3 connected to VM 5
- vtnet4 connected to VM 6
VM 5 has the following NIC:
- vtnet0 connected to VM 1
- vtnet1 connected to VM 2
- vtnet2 connected to VM 3
- vtnet3 connected to VM 4
- vtnet4 connected to VM 6
VM 6 has the following NIC:
- vtnet0 connected to VM 1
- vtnet1 connected to VM 2
- vtnet2 connected to VM 3
- vtnet3 connected to VM 4
- vtnet4 connected to VM 5
To connect to VM's serial console, you can use:
- VM 1 : cu -l /dev/nmdm-BSDRP.1B
- VM 2 : cu -l /dev/nmdm-BSDRP.2B
- VM 3 : cu -l /dev/nmdm-BSDRP.3B
- VM 4 : cu -l /dev/nmdm-BSDRP.4B
- VM 5 : cu -l /dev/nmdm-BSDRP.5B
- VM 6 : cu -l /dev/nmdm-BSDRP.6B
</code>
  
  
<code>
[root@VM1]~# setfib 2 ping -c 2 10.0.56.6
PING 10.0.56.6 (10.0.56.6): 56 data bytes
64 bytes from 10.0.56.6: icmp_seq=0 ttl=62 time=16.473 ms
64 bytes from 10.0.56.6: icmp_seq=1 ttl=62 time=20.017 ms

--- 10.0.56.6 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 16.473/18.245/20.017/1.772 ms
[root@VM1]~# setfib 3 ping -c 2 10.0.56.6
PING 10.0.56.6 (10.0.56.6): 56 data bytes
64 bytes from 10.0.56.6: icmp_seq=0 ttl=62 time=18.202 ms
64 bytes from 10.0.56.6: icmp_seq=1 ttl=62 time=11.193 ms

--- 10.0.56.6 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 11.193/14.698/18.202/3.504 ms
[root@VM1]~# setfib 4 ping -c 2 10.0.56.6
PING 10.0.56.6 (10.0.56.6): 56 data bytes
64 bytes from 10.0.56.6: icmp_seq=0 ttl=62 time=10.973 ms
64 bytes from 10.0.56.6: icmp_seq=1 ttl=62 time=14.465 ms

--- 10.0.56.6 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 10.973/12.719/14.465/1.746 ms
</code>
  
Then from the MLVPN client, test the bandwidth of each ISP link:
<code>
[root@VM1]~# setfib 2 iperf3 -c 10.0.56.6
(...)
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.5 MBytes  9.62 Mbits/sec    0             sender
[  5]   0.00-10.06  sec  11.4 MBytes  9.53 Mbits/sec                  receiver

[root@VM1]~# setfib 3 iperf3 -c 10.0.56.6
(...)
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.4 MBytes  9.57 Mbits/sec    3             sender
[  5]   0.00-10.06  sec  11.4 MBytes  9.47 Mbits/sec                  receiver

[root@VM1]~# setfib 4 iperf3 -c 10.0.56.6
Connecting to host 10.0.56.6, port 5201
(...)
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.5 MBytes  9.62 Mbits/sec    0             sender
[  5]   0.00-10.06  sec  11.4 MBytes  9.53 Mbits/sec                  receiver
</code>
  
MLVPN can be started in debug mode:
<code>
[root@VM1]~# mlvpn --debug -n mlvpn -u mlvpn --config /usr/local/etc/mlvpn/mlvpn.conf
2020-02-21T21:25:12 [INFO/config] new password set
2020-02-21T21:25:12 [INFO/config] dsl2 tunnel added
2020-02-21T21:25:12 [INFO/config] dsl3 tunnel added
2020-02-21T21:25:12 [INFO/config] dsl4 tunnel added
2020-02-21T21:25:12 [INFO] created interface `tun0'
2020-02-21T21:25:12 [INFO] dsl2 bind to 10.0.12.1
2020-02-21T21:25:12 [INFO] dsl3 bind to 10.0.13.1
2020-02-21T21:25:12 [INFO] dsl4 bind to 10.0.14.1
2020-02-21T21:25:12 [INFO/protocol] dsl2 authenticated
2020-02-21T21:25:12 [INFO/protocol] dsl3 authenticated
2020-02-21T21:25:12 [INFO/protocol] dsl4 authenticated
</code>
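The debug output above implies a client-side configuration with a [general] section and three tunnel sections. Below is a minimal sketch of what /usr/local/etc/mlvpn/mlvpn.conf could look like; the remote addresses, ports and password are illustrative placeholders (only the bindhost values and tunnel names are taken from the log), and option names should be checked against the example configuration shipped with MLVPN:

<code>
[general]
mode = "client"
interface_name = "tun0"
password = "a-shared-secret"   # placeholder, must match the server side

# One section per ISP link; bindhost matches the "bind to" log lines.
# Remote addresses and ports below are illustrative placeholders.
[dsl2]
bindhost = "10.0.12.1"
remotehost = "10.0.12.2"
remoteport = 5080

[dsl3]
bindhost = "10.0.13.1"
remotehost = "10.0.13.3"
remoteport = 5081

[dsl4]
bindhost = "10.0.14.1"
remotehost = "10.0.14.4"
remoteport = 5082
</code>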
  
The tun interface needs to be checked for a correct IP address and a non-1500 MTU:
<code>
[root@VM1]~# ifconfig tun0
tun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> metric 0 mtu 1452
        options=80000<LINKSTATE>
        inet6 fe80::5a9c:fcff:fe01:201%tun0 prefixlen 64 scopeid 0x9
        inet 10.0.16.1 --> 10.0.16.6 netmask 0xfffffffc
        groups: tun
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        Opened by PID 92891
</code>
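The 1452-byte MTU reflects the tunnel encapsulation: each packet is carried inside an outer IPv4 and UDP header plus MLVPN's own per-packet header. A quick sanity check of the arithmetic (the 20-byte MLVPN header size here is an assumption deduced from the numbers, not taken from the MLVPN documentation):

<code bash>
# 1500 (link MTU) - 20 (outer IPv4) - 8 (UDP) - 20 (assumed MLVPN header)
echo $((1500 - 20 - 8 - 20))
</code>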
  
And a static route via the tunnel needs to be installed (10.6.6.6/32 in this example):
<code>
[root@VM1]~# route get 10.6.6.6
   route to: 10.6.6.6
destination: 10.6.6.6
       mask: 255.255.255.255
    gateway: 10.0.16.6
        fib: 0
  interface: tun0
      flags: <UP,GATEWAY,DONE,STATIC>
 recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
                       0              1452                 0
</code>
==== Aggregated bandwidth ====
  
<code>
[root@VM1]~# iperf3 -B 10.1.1.1 -c 10.6.6.6
(...)
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  7.89 MBytes  6.62 Mbits/sec  428             sender
[  5]   0.00-10.01  sec  7.85 MBytes  6.58 Mbits/sec                  receiver
</code>

Ouch, this is not the expected performance!
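For scale: each of the three links tested individually at roughly 9.6 Mbit/s, so an ideal aggregation would approach three times that, while the 6.62 Mbit/s measured above is below even a single link. A back-of-the-envelope ceiling, ignoring tunnel and TCP overhead:

<code bash>
# Ideal aggregate of three ~9.6 Mbit/s links (no tunnel/TCP overhead)
awk 'BEGIN { print 3 * 9.6 " Mbit/s" }'
</code>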
documentation/examples/aggregating_multiple_isp_links_with_mlvpn.txt · Last modified: 2020/02/21 21:42 by olivier

Except where otherwise noted, content on this wiki is licensed under the following license: BSD 2-Clause