====== Multicast with PIM-SM ======
This lab shows a multicast routing example using PIM in Sparse Mode.
===== Presentation =====
Start the lab with 4 routers emulating an e1000 NIC (vtnet interfaces don't support mcast routing on FreeBSD):
<code>
tools/BSDRP-lab-bhyve.sh -n 4 -e -i BSDRP-1.96-full-amd64-serial.img.xz
BSD Router Project (http://
Setting-up a virtual lab with 4 VM(s):
- Working directory: /root/BSDRP-VMs
- Each VM has 1 core and 512M RAM
- Emulated NIC: e1000
- Switch mode: bridge + tap
- 0 LAN(s) between all VM
- Full mesh Ethernet links between each VM
VM 1 has the following NIC:
- em0 connected to VM 2
- em1 connected to VM 3
- em2 connected to VM 4
VM 2 has the following NIC:
- em0 connected to VM 1
- em1 connected to VM 3
- em2 connected to VM 4
VM 3 has the following NIC:
- em0 connected to VM 1
- em1 connected to VM 2
- em2 connected to VM 4
VM 4 has the following NIC:
- em0 connected to VM 1
- em1 connected to VM 2
- em2 connected to VM 3
To connect to the VM serial console:
- VM 1 : cu -l /dev/nmdm-BSDRP.1B
- VM 2 : cu -l /dev/nmdm-BSDRP.2B
- VM 3 : cu -l /dev/nmdm-BSDRP.3B
- VM 4 : cu -l /dev/nmdm-BSDRP.4B
</code>
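A full mesh between 4 VMs gives C(4,2) = 6 point-to-point links, of which this lab only addresses the three it routes over (10.0.12.0/24, 10.0.23.0/24 and 10.0.34.0/24). A minimal sketch of the naming pattern, assuming the 10.0.XY.0/24 convention extends to the unused pairs:
<code>
#!/bin/sh
# Enumerate the 6 full-mesh links between VM pairs and the assumed
# 10.0.XY.0/24 transit subnet on each (X < Y are the VM numbers).
links=""
i=1
while [ "$i" -le 4 ]; do
  j=$((i + 1))
  while [ "$j" -le 4 ]; do
    echo "VM$i <-> VM$j : 10.0.$i$j.0/24"
    links="$links 10.0.$i$j.0/24"
    j=$((j + 1))
  done
  i=$((i + 1))
done
</code>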
==== Router 1 ====
Configuration:
<code>
sysrc hostname=VM1 \
      gateway_enable=NO \
      ipv6_gateway_enable=NO \
      ifconfig_em0="inet 10.0.12.1/24" \
      defaultrouter="10.0.12.254"
service hostname restart
service netif restart
service routing restart
</code>
==== Router 2 ====
VM2 is a PIM router that announces itself (10.0.23.2) as Candidate RP with an advertisement period of 10 seconds and a high priority.
<code>
sysrc hostname=VM2 \
      ifconfig_em0="inet 10.0.12.254/24" \
      ifconfig_em1="inet 10.0.23.2/24" \
      defaultrouter="10.0.23.3" \
      pimd_enable=YES
cat > /
rp-candidate 10.0.23.2 time 10 priority 1
#rp-address 10.0.23.2
EOF
service hostname restart
service netif restart
service routing restart
</code>
==== Router 3 ====
We want VM3 to announce itself (10.0.23.3) as a Candidate BootStrap Router with a high priority.
<code>
sysrc hostname=VM3 \
      ifconfig_em1="inet 10.0.23.3/24" \
      ifconfig_em2="inet 10.0.34.254/24" \
      defaultrouter="10.0.23.2" \
      pimd_enable=YES
cat > /
bsr-candidate 10.0.23.3 priority 1
#rp-address 10.0.23.2
EOF
service hostname restart
service netif restart
service routing restart
</code>
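Taken together, the pimd.conf directives used on VM2 and VM3 drive the dynamic RP election. An annotated recap (the comments are this page's reading of the options, not pimd output; see pimd's sample configuration file for the authoritative syntax):
<code>
# VM2: candidate Rendez-vous Point. Advertise 10.0.23.2 to the BSR
# every 10 seconds with priority 1.
rp-candidate 10.0.23.2 time 10 priority 1

# VM3: candidate BootStrap Router. The elected BSR collects C-RP
# advertisements and floods the resulting RP-set to all PIM routers.
bsr-candidate 10.0.23.3 priority 1

# Static alternative to the dynamic election above (kept commented out):
#rp-address 10.0.23.2
</code>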
==== Router 4 ====
<code>
sysrc hostname=VM4 \
      gateway_enable=NO \
      ipv6_gateway_enable=NO \
      ifconfig_em2="inet 10.0.34.1/24" \
      defaultrouter="10.0.34.254"
service hostname restart
service netif restart
service routing restart
</code>
===== Checking NIC drivers and Bhyve compatibility with multicast =====

Before starting with the advanced routing setup, first test simple multicast between two directly connected hosts: some NICs (vtnet) or some hypervisor network setups don't cope with even very simple multicast.

On VM1, start a mcast generator (a client emitting mcast):
<code>
[root@VM1]~# iperf -c 239.1.1.1 -u -T 32 -t 3000 -i 1
------------------------------------------------------------
Client connecting to 239.1.1.1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
Setting multicast TTL to 32
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.12.1 port 46636 connected with 239.1.1.1 port 5001
[ ID] Interval
[ 3] 0.0- 1.0 sec 131 KBytes
[ 3] 1.0- 2.0 sec 128 KBytes
[ 3] 2.0- 3.0 sec 128 KBytes
[ 3] 0.0- 3.5 sec 446 KBytes
[ 3] Sent 311 datagrams
(...)
</code>

On the directly connected VM2, check whether, in non-promiscuous mode, it sees the mcast packets coming:
<code>
[root@VM2]~#
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on em0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:
15:
2 packets captured
2 packets received by filter
0 packets dropped by kernel
</code>

=> VM2 is receiving mcast packets from 10.0.12.1 to mcast group 239.1.1.1.
Now, on VM2, start a mcast listener (a server receiving); it should receive the multicast flow:

<code>
[root@VM2]~# iperf -s -u -B 239.1.1.1 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 239.1.1.1
Joining multicast group 239.1.1.1
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
------------------------------------------------------------
[ 3] local 239.1.1.1 port 5001 connected with 192.168.100.149 port 35181
[ ID] Interval
[ 3] 0.0- 1.0 sec 129 KBytes
[ 3] 1.0- 2.0 sec 128 KBytes
[ 3] 2.0- 3.0 sec 128 KBytes
[ 3] 3.0- 4.0 sec 128 KBytes
[ 3] 4.0- 5.0 sec 128 KBytes
[ 3] 5.0- 6.0 sec 129 KBytes
[ 3] 6.0- 7.0 sec 128 KBytes
(...)
</code>
=> Notice the mcast receiver is correctly receiving at about 1 Mb/s.

Here is a non-working example:
<code>
[root@VM2]~# iperf -s -u -B 239.1.1.1 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 239.1.1.1
Joining multicast group 239.1.1.1
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
------------------------------------------------------------
(...)
</code>

=> Here it doesn't receive the mcast flow: no bandwidth report is ever displayed.
===== Checking pimd behavior =====
<code>
[root@VM2]~# pimd -r
Virtual Interface Table ======================================================
Vif Local Address
--- ---------------
  0 10.0.12.254
  1 10.0.23.2
  2 10.0.12.254

Multicast Routing Table ======================================================
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 0
Number of Cache MIRRORs: 0
------------------------------------------------------------------------------
</code>
=> VM2 sees VM3 as PIM neighbor.
<code>
[root@VM3]~# pimd -r
Virtual Interface Table ======================================================
Vif Local Address
--- ---------------
  0 10.0.23.3
  1 10.0.34.254
  2 10.0.23.3

Multicast Routing Table ======================================================
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 0
Number of Cache MIRRORs: 0
------------------------------------------------------------------------------
</code>
=> VM3 sees VM2 as PIM Designated Router neighbor.
==== Does PIM daemon locally register to PIM mcast group ? ====
<code>
[root@VM2]~# ifmcstat
em0:
inet 10.0.12.2
(...)
</code>
<code>
[root@VM3]~# ifmcstat
em0:
em1:
(...)
</code>
===== Testing =====
==== 1. Start a mcast generator (IPerf client) on VM1 ====
Start an iperf client to 239.1.1.1.
<code>
[root@VM1]~# iperf -c 239.1.1.1 -u -T 32 -t 3000 -i 1
------------------------------------------------------------
Client connecting to 239.1.1.1, UDP port 5001
(...)
</code>
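The `-T 32` option sets the multicast TTL of the datagrams. This matters because the default multicast TTL is 1 and each mrouter on the path (VM2, then VM3) decrements it before forwarding, so anything below 3 would never reach VM4 (assuming pimd's default vif TTL threshold of 1). A back-of-envelope check:
<code>
#!/bin/sh
# TTL budget from VM1 to VM4: one decrement per multicast router crossed.
ttl=32
for hop in VM2 VM3; do
    ttl=$((ttl - 1))
    echo "after $hop: TTL=$ttl"
done
# With the default TTL of 1 the datagrams would never be forwarded past VM2.
</code>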
==== 2. Check VM2 updates its mrouting table with a discovered mcast source ====
PIM daemon should be updated:
<code>
[root@VM2]~# pimd -r
Virtual Interface Table ======================================================
Vif Local Address
--- ---------------
  0 10.0.12.254
  1 10.0.23.2
  2 10.0.12.254
(...)
Source
---------------
10.0.12.1
Joined
Pruned
Leaves
Asserted oifs: ...
Outgoing oifs: ...
Incoming
TIMERS:
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 1
Number of Cache MIRRORs:
------------------------------------------------------------------------------
</code>
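pimd prints the Joined/Pruned/Leaves/Outgoing rows as per-vif flag strings: one character per vif, where `.` means the vif is not in that set and a letter means it is (this is this page's reading of pimd's output format). A small decoder sketch:
<code>
#!/bin/sh
# List the vif numbers that are set in a pimd per-vif flag string
# ('.' = not set, any other character = set).
oifs_vifs() {
    s=$1; vif=0; out=""
    while [ -n "$s" ]; do
        c=${s%"${s#?}"}                  # first character of the string
        [ "$c" != "." ] && out="$out $vif"
        s=${s#?}                         # drop the first character
        vif=$((vif + 1))
    done
    echo "${out# }"
}
oifs_vifs "..o"    # vif 2 is an outgoing interface
oifs_vifs "..."    # no outgoing interface yet
</code>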
<code>
[root@VM2]~# netstat -g
IPv4 Virtual Interface Table
 0
 1
 2
IPv4 Multicast Forwarding Table
(...)
IPv6 Multicast Forwarding Table is empty
</code>
VM2 has updated its mroute table, adding a source for group 239.1.1.1 coming from 'em0' (toward VM1).
==== 3. Start a mcast receiver (IPerf server) on VM4 ====
The IPerf server will subscribe to the 239.1.1.1 multicast group and receive the mcast traffic:
<code>
[root@VM4]~# iperf -s -u -B 239.1.1.1 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
(...)
</code>
==== 4. Check VM3 correctly notices this mcast subscriber ====
Now the mrouting table of VM3 is updated and it knows it has a subscriber:
<code>
[root@VM3]~# pimd -r
Virtual Interface Table ======================================================
Vif Local Address
--- ---------------
  0 10.0.23.3
  1 10.0.34.254
  2 10.0.23.3
(...)
TIMERS:
----------------------------------- (S,G) ------------------------------------
Source
(...)
TIMERS:
200 55 0
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 1
(...)
</code>
<code>
[root@VM3]~# netstat -g
IPv4 Virtual Interface Table
 0
 1
 2
IPv4 Multicast Forwarding Table
(...)
IPv6 Multicast Forwarding Table is empty
</code>
VM3 correctly learns that there is a subscriber to group 239.1.1.1 on interface vif1 (toward VM4).
documentation/examples/multicast_with_pim-sm.txt · Last modified: 2019/11/08 19:22 by olivier