====== Multicast with PIM-SM ======

This lab shows a multicast routing example using BSDRP 1.59 (FreeBSD 10.2 and [[https://github.com/troglobit/pimd|pimd]] 2.3.2).

===== Presentation =====

==== Network diagram ====

Here is the logical and physical view:

{{:documentation:examples:bsdrp.lab.pim-sm.png|}}

===== Setting up the lab =====

==== Downloading BSD Router Project images ====

Download the BSDRP serial image from SourceForge (the serial image avoids the need for an X display).

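
For example, the image can be fetched directly from a FreeBSD host. The URL below is only illustrative (check the BSDRP download area on SourceForge for the current release and architecture):

<code>
# Illustrative path only: adjust it to the release and architecture you want
fetch https://sourceforge.net/projects/bsdrp/files/BSD_Router_Project/1.702/amd64/BSDRP-1.702-full-amd64-serial.img.xz
</code>
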
==== Download lab scripts ====

More information on these BSDRP lab scripts is available on [[documentation:examples:How to build a BSDRP router lab]].

Start the lab with 4 routers using emulated e1000 NICs (the default virtio vtnet interfaces do not support multicast routing on FreeBSD):
<code>
BSDRP-lab-bhyve.sh -n 4 -e -i BSDRP-1.702-full-amd64-serial.img.xz
BSD Router Project (http://bsdrp.net) - bhyve full-meshed lab script
Setting-up a virtual lab with 4 VM(s):
- Working directory: /tmp/BSDRP
- Each VM have 1 core(s) and 256M RAM
- Emulated NIC: e1000
- Switch mode: bridge + tap
- 0 LAN(s) between all VM
- Full mesh Ethernet links between each VM
VM 1 have the following NIC:
- vtnet0 connected to VM 2
- vtnet1 connected to VM 3
- vtnet2 connected to VM 4
VM 2 have the following NIC:
- vtnet0 connected to VM 1
- vtnet1 connected to VM 3
- vtnet2 connected to VM 4
VM 3 have the following NIC:
- vtnet0 connected to VM 1
- vtnet1 connected to VM 2
- vtnet2 connected to VM 4
VM 4 have the following NIC:
- vtnet0 connected to VM 1
- vtnet1 connected to VM 2
- vtnet2 connected to VM 3
For connecting to VM'serial console, you can use:
- VM 1 : cu -l /dev/nmdm1B
- VM 2 : cu -l /dev/nmdm2B
- VM 3 : cu -l /dev/nmdm3B
- VM 4 : cu -l /dev/nmdm4B
</code>
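
For instance, to reach the first router's console and confirm which NICs the guest sees (just a suggested check built on the commands printed above; the interface list depends on the emulation selected):

<code>
# Attach to the serial console of VM 1 (leave cu with "~.")
cu -l /dev/nmdm1B
# Once logged in, list the network interfaces detected by the guest
ifconfig -l
</code>
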

===== Routers configuration =====

==== Router 1 ====

R1 acts as the multicast source: it is configured as a simple host (IP forwarding disabled) with a default route toward R2.

Configuration:
<code>
sysrc hostname=R1
sysrc gateway_enable=NO
sysrc ipv6_gateway_enable=NO
sysrc ifconfig_em0="inet 10.0.12.1/24"
sysrc defaultrouter="10.0.12.2"
hostname R1
service netif restart
service routing restart
config save
</code>
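
An optional sanity check, not part of the original lab transcript, to confirm R1's addressing and default route:

<code>
# Address configured on em0
ifconfig em0
# IPv4 routing table: look for the default entry via 10.0.12.2
netstat -rn -f inet
</code>
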
==== Router 2 ====

R2 is a PIM router that acts as the rendezvous point (RP) for the multicast domain.

<code>
sysrc hostname=R2
sysrc ifconfig_em0="inet 10.0.12.2/24"
sysrc ifconfig_em1="inet 10.0.23.2/24"
sysrc defaultrouter="10.0.23.3"
sysrc pimd_enable=YES

cat > /usr/local/etc/pimd.conf <<EOF
#rp-candidate 10.0.23.2 time 10 priority 1
rp-address 10.0.23.2

EOF

hostname R2
service netif restart
service routing restart
service pimd start
config save
</code>
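
As a side note, pimd's rp-address statement can also take an optional group prefix if the static RP should only apply to a given range. This variant is shown purely as an illustration (check the pimd.conf documentation of your version); the lab keeps the simple form above:

<code>
# Illustrative pimd.conf variant: static RP limited to 239.0.0.0/8
rp-address 10.0.23.2 239.0.0.0/8
</code>
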
==== Router 3 ====

R3 is a PIM router configured with R2 as the rendezvous point.

<code>
sysrc hostname=R3
sysrc ifconfig_em1="inet 10.0.23.3/24"
sysrc ifconfig_em2="inet 10.0.34.3/24"
sysrc defaultrouter="10.0.23.2"
sysrc pimd_enable=YES

cat > /usr/local/etc/pimd.conf <<EOF
#bsr-candidate 10.0.23.3 priority 1
rp-address 10.0.23.2
EOF

hostname R3
service netif restart
service routing restart
service pimd start
config save
</code>
==== Router 4 ====

R4 acts as the multicast receiver; like R1, it is a simple host with a default route toward R3.

<code>
sysrc hostname=R4
sysrc gateway_enable=NO
sysrc ipv6_gateway_enable=NO
sysrc ifconfig_em2="inet 10.0.34.4/24"
sysrc defaultrouter="10.0.34.3"
hostname R4
service netif restart
service routing restart
config save
</code>
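
Before testing multicast, it is worth confirming that plain unicast forwarding works across the static default routes (an optional sanity check, not in the original transcript):

<code>
# From R4, the future multicast source R1 should answer via R3 and R2
ping -c 3 10.0.12.1
</code>
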

===== Checking pimd behavior =====

==== PIM neighbors ====

Do the PIM routers see each other?

<code>
[root@R2]~# pimd -r
Virtual Interface Table
 Vif  Local address    Subnet                Thresh  Flags          Neighbors
   0  10.0.12.2        10.0.12/24            1       DR NO-NBR
   1  10.0.23.2        10.0.23/24            1       PIM            10.0.23.3
   2  10.0.12.2        register_vif0         1

Multicast Routing Table
 Source          Group           RP addr         Flags
--------------------------(*,*,RP)--------------------------
Number of Groups: 0
Number of Cache MIRRORs: 0
</code>

=> R2 sees R3.

<code>
[root@R3]~# pimd -r
Virtual Interface Table
 Vif  Local address    Subnet                Thresh  Flags          Neighbors
   0  10.0.23.3        10.0.23/24            1       DR PIM         10.0.23.2
   1  10.0.34.3        10.0.34/24            1       DR NO-NBR
   2  10.0.23.3        register_vif0         1

Multicast Routing Table
 Source          Group           RP addr         Flags
--------------------------(*,*,RP)--------------------------
Number of Groups: 0
Number of Cache MIRRORs: 0
</code>

=> R3 sees R2.

==== Does the PIM daemon locally register to the PIM mcast group? ====

PIM routers need to join the ALL-PIM-ROUTERS group 224.0.0.13. Check that all PIM routers display this group on their PIM-enabled interfaces:

<code>
[root@R2]~# ifmcstat
em0:
        inet 10.0.12.2
        igmpv2
                group 224.0.0.22 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:16 refcnt 1
                group 224.0.0.2 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:02 refcnt 1
                group 224.0.0.13 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:0d refcnt 1
                group 224.0.0.1 refcnt 1 state silent mode exclude
                        mcast-macaddr 01:00:5e:00:00:01 refcnt 1
        inet6 fe80:1::a8aa:ff:fe00:212
        mldv2 flags=2<USEALLOW> rv 2 qi 125 qri 10 uri 3
                group ff01:1::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:1::2:54c6:805c refcnt 1
                        mcast-macaddr 33:33:54:c6:80:5c refcnt 1
                group ff02:1::2:ff54:c680 refcnt 1
                        mcast-macaddr 33:33:ff:54:c6:80 refcnt 1
                group ff02:1::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:1::1:ff00:212 refcnt 1
                        mcast-macaddr 33:33:ff:00:02:12 refcnt 1
em1:
        inet 10.0.23.2
        igmpv2
                group 224.0.0.22 refcnt 1 state sleeping mode exclude
                        mcast-macaddr 01:00:5e:00:00:16 refcnt 1
                group 224.0.0.2 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:02 refcnt 1
                group 224.0.0.13 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:0d refcnt 1
                group 224.0.0.1 refcnt 1 state silent mode exclude
                        mcast-macaddr 01:00:5e:00:00:01 refcnt 1
        inet6 fe80:2::a8aa:ff:fe02:202
        mldv2 flags=2<USEALLOW> rv 2 qi 125 qri 10 uri 3
                group ff01:2::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:2::2:54c6:805c refcnt 1
                        mcast-macaddr 33:33:54:c6:80:5c refcnt 1
                group ff02:2::2:ff54:c680 refcnt 1
                        mcast-macaddr 33:33:ff:54:c6:80 refcnt 1
                group ff02:2::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:2::1:ff02:202 refcnt 1
                        mcast-macaddr 33:33:ff:02:02:02 refcnt 1
</code>
<code>
[root@R3]~# ifmcstat
em0:
em1:
        inet 10.0.23.3
        igmpv2
                group 224.0.0.22 refcnt 1 state sleeping mode exclude
                        mcast-macaddr 01:00:5e:00:00:16 refcnt 1
                group 224.0.0.2 refcnt 1 state sleeping mode exclude
                        mcast-macaddr 01:00:5e:00:00:02 refcnt 1
                group 224.0.0.13 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:0d refcnt 1
                group 224.0.0.1 refcnt 1 state silent mode exclude
                        mcast-macaddr 01:00:5e:00:00:01 refcnt 1
        inet6 fe80:2::a8aa:ff:fe00:323
        mldv2 flags=2<USEALLOW> rv 2 qi 125 qri 10 uri 3
                group ff01:2::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:2::2:1124:9296 refcnt 1
                        mcast-macaddr 33:33:11:24:92:96 refcnt 1
                group ff02:2::2:ff11:2492 refcnt 1
                        mcast-macaddr 33:33:ff:11:24:92 refcnt 1
                group ff02:2::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:2::1:ff00:323 refcnt 1
                        mcast-macaddr 33:33:ff:00:03:23 refcnt 1
em2:
        inet 10.0.34.3
        igmpv2
                group 224.0.0.22 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:16 refcnt 1
                group 224.0.0.2 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:02 refcnt 1
                group 224.0.0.13 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:0d refcnt 1
                group 224.0.0.1 refcnt 1 state silent mode exclude
                        mcast-macaddr 01:00:5e:00:00:01 refcnt 1
        inet6 fe80:3::a8aa:ff:fe03:303
        mldv2 flags=2<USEALLOW> rv 2 qi 125 qri 10 uri 3
                group ff01:3::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:3::2:1124:9296 refcnt 1
                        mcast-macaddr 33:33:11:24:92:96 refcnt 1
                group ff02:3::2:ff11:2492 refcnt 1
                        mcast-macaddr 33:33:ff:11:24:92 refcnt 1
                group ff02:3::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:3::1:ff03:303 refcnt 1
                        mcast-macaddr 33:33:ff:03:03:03 refcnt 1

</code>

We correctly see the mcast group 224.0.0.13 subscribed on the PIM-enabled interfaces.

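
PIM Hello and Join/Prune packets can also be watched directly on the wire with tcpdump (an extra check; the router and interface here are just an example):

<code>
# On R2, watch PIM control traffic on the link toward R3
tcpdump -ni em1 pim
</code>
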
===== Testing =====

==== 1. Start a mcast generator (IPerf client) on R1 ====

Start an iperf client sending UDP traffic to the multicast group 239.1.1.1 (-T 32 sets the multicast TTL, -t 3000 keeps it running for 3000 seconds, -i 1 reports every second):

<code>
[root@R1]~# iperf -c 239.1.1.1 -u -T 32 -t 3000 -i 1
------------------------------------------------------------
Client connecting to 239.1.1.1, UDP port 5001
Sending 1470 byte datagrams
Setting multicast TTL to 32
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[  3] local 10.0.12.1 port 41484 connected with 239.1.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   129 KBytes  1.06 Mbits/sec
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  3.0- 4.0 sec   128 KBytes  1.05 Mbits/sec
</code>

==== 2. Check that R2 updates its mrouting table with the discovered mcast source ====

The PIM daemon on R2 (the RP) should now show an (S,G) entry for this source:
<code>
[root@R2]~# pimd -r
Virtual Interface Table ======================================================
Vif  Local Address    Subnet              Thresh  Flags      Neighbors
---  ---------------  ------------------  ------  ---------  -----------------
  0  10.0.12.2        10.0.12/24               1  DR NO-NBR
  1  10.0.23.2        10.0.23/24               1  PIM        10.0.23.3
  2  10.0.12.2        register_vif0            1

 Vif  SSM Group        Sources

Multicast Routing Table ======================================================
----------------------------------- (S,G) ------------------------------------
Source           Group            RP Address       Flags
---------------  ---------------  ---------------  ---------------------------
10.0.12.1        239.1.1.1        10.0.23.2        CACHE SG
Joined   oifs: ..j
Pruned   oifs: ...
Leaves   oifs: ...
Asserted oifs: ...
Outgoing oifs: ..o
Incoming     : I..

TIMERS:  Entry    JP    RS  Assert VIFS:  0  1  2
           180    35     0       0        0  0  0
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 1
Number of Cache MIRRORs: 1
------------------------------------------------------------------------------
</code>

In the oifs bitmaps shown by pimd, each column is a vif: the flow enters on vif0 (I..) and, for now, leaves only through the register vif (..o). The kernel multicast forwarding table reflects the same state:

<code>
[root@R2]~# netstat -g

IPv4 Virtual Interface Table
 Vif   Thresh   Local-Address   Remote-Address    Pkts-In   Pkts-Out
  0         1   10.0.12.2                            8013          0
  1         1   10.0.23.2                               0          0
  2         1   10.0.12.2                               0       8013

IPv4 Multicast Forwarding Table
 Origin          Group             Packets In-Vif  Out-Vifs:Ttls
 10.0.12.1       239.1.1.1            8013    0    2:1


IPv6 Multicast Interface Table is empty

IPv6 Multicast Forwarding Table is empty
</code>

R2 has updated its mroute table, adding a source entry for group 239.1.1.1 coming in on vif0 (toward R1). Nothing is forwarded toward R3 yet (vif1 shows 0 packets out) because no receiver has joined the group.

==== 3. Start a mcast receiver (IPerf server) on R4 ====

The IPerf server subscribes to the 239.1.1.1 multicast group (triggering an IGMP join on em2) and starts receiving the mcast traffic. The large loss count in the first interval below simply corresponds to the datagrams sent before R4 joined the group:

<code>
[root@R4]~# iperf -s -u -B 239.1.1.1 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 239.1.1.1
Joining multicast group  239.1.1.1
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
------------------------------------------------------------
[  3] local 239.1.1.1 port 5001 connected with 10.0.12.1 port 41484
[ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
[  3]  0.0- 1.0 sec   128 KBytes  1.05 Mbits/sec   0.313 ms 16336/16425 (99%)
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec   0.250 ms    0/   89 (0%)
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec   0.307 ms    0/   89 (0%)
[  3]  3.0- 4.0 sec   128 KBytes  1.05 Mbits/sec   0.262 ms    0/   89 (0%)
[  3]  4.0- 5.0 sec   128 KBytes  1.05 Mbits/sec   0.188 ms    0/   89 (0%)
[  3]  5.0- 6.0 sec   129 KBytes  1.06 Mbits/sec   0.347 ms    0/   90 (0%)
[  3]  6.0- 7.0 sec   128 KBytes  1.05 Mbits/sec   0.238 ms    0/   89 (0%)
[  3]  7.0- 8.0 sec   128 KBytes  1.05 Mbits/sec   0.234 ms    0/   89 (0%)
[  3]  8.0- 9.0 sec   128 KBytes  1.05 Mbits/sec   0.241 ms    0/   89 (0%)
[  3]  9.0-10.0 sec   128 KBytes  1.05 Mbits/sec   0.210 ms    0/   89 (0%)
[  3] 10.0-11.0 sec   128 KBytes  1.05 Mbits/sec   0.289 ms    0/   89 (0%)
[  3] 11.0-12.0 sec   129 KBytes  1.06 Mbits/sec   0.309 ms    0/   90 (0%)
</code>

==== 4. Check that R3 correctly notices this mcast subscriber ====

The mrouting table of R3 is now updated and shows that the group has a receiver:

<code>
[root@R3]~# pimd -r
Virtual Interface Table ======================================================
Vif  Local Address    Subnet              Thresh  Flags      Neighbors
---  ---------------  ------------------  ------  ---------  -----------------
  0  10.0.23.3        10.0.23/24               1  DR PIM     10.0.23.2
  1  10.0.34.3        10.0.34/24               1  DR NO-NBR
  2  10.0.23.3        register_vif0            1

 Vif  SSM Group        Sources

Multicast Routing Table ======================================================
----------------------------------- (*,G) ------------------------------------
Source           Group            RP Address       Flags
---------------  ---------------  ---------------  ---------------------------
INADDR_ANY       239.1.1.1        10.0.23.2        WC RP CACHE
Joined   oifs: ...
Pruned   oifs: ...
Leaves   oifs: .l.
Asserted oifs: ...
Outgoing oifs: .o.
Incoming     : I..

TIMERS:  Entry    JP    RS  Assert VIFS:  0  1  2
             0    60     0       0        0  0  0
----------------------------------- (S,G) ------------------------------------
Source           Group            RP Address       Flags
---------------  ---------------  ---------------  ---------------------------
10.0.12.1        239.1.1.1        10.0.23.2        SG
Joined   oifs: ...
Pruned   oifs: ...
Leaves   oifs: .l.
Asserted oifs: ...
Outgoing oifs: .o.
Incoming     : I..

TIMERS:  Entry    JP    RS  Assert VIFS:  0  1  2
           195    50     0       0        0  0  0
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 1
Number of Cache MIRRORs: 1
------------------------------------------------------------------------------
</code>

And its kernel multicast forwarding table is updated too:

<code>
[root@R3]~# netstat -g

IPv4 Virtual Interface Table
 Vif   Thresh   Local-Address   Remote-Address    Pkts-In   Pkts-Out
  0         1   10.0.23.3                            7882          0
  1         1   10.0.34.3                               0       7882
  2         1   10.0.23.3                               0          0

IPv4 Multicast Forwarding Table
 Origin          Group             Packets In-Vif  Out-Vifs:Ttls
 10.0.12.1       239.1.1.1            7882    0    1:1


IPv6 Multicast Interface Table is empty

IPv6 Multicast Forwarding Table is empty

</code>

R3 has correctly learned that there is a subscriber to group 239.1.1.1 on vif1 (toward R4), and the multicast stream is now forwarded from vif0 out of vif1.
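
As a final optional check (not part of the original lab output), the forwarded stream can be observed leaving R3 toward R4:

<code>
# On R3, capture a few of the multicast datagrams heading to the receiver LAN
tcpdump -ni em2 -c 5 udp and dst host 239.1.1.1
</code>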