Multicast with PIM-SM

This lab shows a multicast routing example using PIM in Sparse Mode (PIM-SM).

Presentation

Network diagram

Here is the logical and physical view:

Setting-up the lab

Downloading BSD Router Project images

Download the BSDRP serial image (this avoids needing an X display) from SourceForge.

Download Lab scripts

More information on these BSDRP lab scripts is available in How to build a BSDRP router lab.
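The two downloads above can be scripted; this is a hedged sketch only, since the exact SourceForge URL, release number, and repository location are assumptions to adjust for the current release:

```shell
# Fetch the serial (console-only) disk image; version and URL path are assumptions
fetch "https://sourceforge.net/projects/bsdrp/files/BSD_Router_Project/1.96/amd64/BSDRP-1.96-full-amd64-serial.img.xz"

# The lab scripts ship with the BSDRP sources (assumed repository location)
git clone https://github.com/ocochard/BSDRP.git
ls BSDRP/tools/BSDRP-lab-bhyve.sh
```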

Start the lab with 4 routers emulating e1000 NICs (vtnet interfaces do not support multicast routing on FreeBSD):

tools/BSDRP-lab-bhyve.sh -n 4 -e -i BSDRP-1.96-full-amd64-serial.img.xz
BSD Router Project (http://bsdrp.net) - bhyve full-meshed lab script
Setting-up a virtual lab with 4 VM(s):
- Working directory: /root/BSDRP-VMs
- Each VM has 1 core and 512M RAM
- Emulated NIC: e1000
- Switch mode: bridge + tap
- 0 LAN(s) between all VM
- Full mesh Ethernet links between each VM
VM 1 has the following NIC:
- em0 connected to VM 2
- em1 connected to VM 3
- em2 connected to VM 4
VM 2 has the following NIC:
- em0 connected to VM 1
- em1 connected to VM 3
- em2 connected to VM 4
VM 3 has the following NIC:
- em0 connected to VM 1
- em1 connected to VM 2
- em2 connected to VM 4
VM 4 has the following NIC:
- em0 connected to VM 1
- em1 connected to VM 2
- em2 connected to VM 3
To connect VM'serial console, you can use:
- VM 1 : cu -l /dev/nmdm-BSDRP.1B
- VM 2 : cu -l /dev/nmdm-BSDRP.2B
- VM 3 : cu -l /dev/nmdm-BSDRP.3B
- VM 4 : cu -l /dev/nmdm-BSDRP.4B

Routers configuration

Router 1

Configuration:

sysrc hostname=VM1 \
 gateway_enable=no \
 ipv6_gateway_enable=no \
 ifconfig_em0="inet 10.0.12.1/24" \
 defaultrouter=10.0.12.254
service hostname restart
service netif restart
service routing restart
config save

Router 2

VM2 is a PIM router that announces itself (10.0.23.2) as a Candidate RP, with an advertisement period of 10 seconds and a high priority (it will be the rendezvous point).

sysrc hostname=VM2 \
 ifconfig_em0="inet 10.0.12.254/24" \
 ifconfig_em1="inet 10.0.23.2/24" \
 defaultrouter=10.0.23.3 \
 pimd_enable=yes

cat > /usr/local/etc/pimd.conf <<EOF
rp-candidate 10.0.23.2 time 10 priority 1
#rp-address 10.0.23.2

EOF

service hostname restart
service netif restart
service routing restart
service pimd start
config save

Router 3

We want VM3 to announce itself (10.0.23.3) as a Candidate BootStrap Router (BSR) with a high priority.

sysrc hostname=VM3 \
 ifconfig_em1="inet 10.0.23.3/24" \
 ifconfig_em2="inet 10.0.34.254/24" \
 defaultrouter=10.0.23.2 \
 pimd_enable=yes

cat > /usr/local/etc/pimd.conf <<EOF
bsr-candidate 10.0.23.3 priority 1
#rp-address 10.0.23.2
EOF

service hostname restart
service netif restart
service routing restart
service pimd start
config save

Router 4

sysrc hostname=VM4 \
 gateway_enable=no \
 ipv6_gateway_enable=no \
 ifconfig_em2="inet 10.0.34.4/24" \
 defaultrouter=10.0.34.254
service hostname restart
service netif restart
service routing restart
config save
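Once all four routers are configured, it can help to confirm plain unicast reachability before debugging multicast. A minimal sketch run from VM1, using the addresses from the configurations above:

```shell
# From VM1: check each hop of the unicast path that the multicast flow will follow
ping -c 3 10.0.12.254   # VM2, directly connected
ping -c 3 10.0.23.3     # VM3, routed via VM2
ping -c 3 10.0.34.4     # VM4, routed via VM2 then VM3
```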

Checking NIC drivers and Bhyve compatibility with multicast

Before starting the advanced routing setup, first test simple multicast between two directly connected hosts: some NICs (vtnet) or some hypervisor network setups do not handle even basic multicast.

On VM1, start a multicast generator (a client emitting multicast traffic):

[root@VM1]~# iperf -c 239.1.1.1 -u -T 32 -t 3000 -i 1
------------------------------------------------------------
Client connecting to 239.1.1.1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
Setting multicast TTL to 32
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[  3] local 10.0.12.1 port 46636 connected with 239.1.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   131 KBytes  1.07 Mbits/sec
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  0.0- 3.5 sec   446 KBytes  1.05 Mbits/sec
[  3] Sent 311 datagrams
(...)

On the directly connected VM2, check whether it sees multicast packets arriving in non-promiscuous mode:

[root@VM2]~# tcpdump -pni em0 -c 2
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on em0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:22:32.517270 IP 10.0.12.1.33482 > 239.1.1.1.5001: UDP, length 1470
15:22:32.528668 IP 10.0.12.1.33482 > 239.1.1.1.5001: UDP, length 1470
2 packets captured
2 packets received by filter
0 packets dropped by kernel

⇒ VM2 is receiving multicast packets from 10.0.12.1 to multicast group 239.1.1.1. Now start a multicast listener (a receiving server) on VM2; it should receive the multicast flow:

[root@VM2]~# iperf -s -u -B 239.1.1.1%em0 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 239.1.1.1
Joining multicast group  239.1.1.1
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
------------------------------------------------------------
[  3] local 239.1.1.1 port 5001 connected with 192.168.100.149 port 35181
[ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
[  3]  0.0- 1.0 sec   129 KBytes  1.06 Mbits/sec   0.038 ms  107/  197 (54%)
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec   0.054 ms    0/   89 (0%)
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec   0.021 ms    0/   89 (0%)
[  3]  3.0- 4.0 sec   128 KBytes  1.05 Mbits/sec   0.025 ms    0/   89 (0%)
[  3]  4.0- 5.0 sec   128 KBytes  1.05 Mbits/sec   0.024 ms    0/   89 (0%)
[  3]  5.0- 6.0 sec   129 KBytes  1.06 Mbits/sec   0.024 ms    0/   90 (0%)
[  3]  6.0- 7.0 sec   128 KBytes  1.05 Mbits/sec   0.024 ms    0/   89 (0%)
(...)

⇒ Notice the multicast receiver is correctly receiving at about 1 Mb/s.

Here is a non-working example (the source interface is not specified, so iperf binds to the wrong one):

[root@VM2]~# iperf -s -u -B 239.1.1.1 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 239.1.1.1
Joining multicast group  239.1.1.1
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
------------------------------------------------------------
(...)

⇒ Here it doesn't receive any traffic and stays in “waiting” mode forever.

Checking pimd behavior

PIM neighbors

Do the PIM routers see each other?

[root@VM2]~# pimd -r
Virtual Interface Table ======================================================
Vif  Local Address    Subnet              Thresh  Flags      Neighbors
---  ---------------  ------------------  ------  ---------  -----------------
  0  10.0.12.254      10.0.12/24               1  DR NO-NBR
  1  10.0.23.2        10.0.23/24               1  PIM        10.0.23.3
  2  10.0.12.254      register_vif0            1

 Vif  SSM Group        Sources

Multicast Routing Table ======================================================
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 0
Number of Cache MIRRORs: 0
------------------------------------------------------------------------------

⇒ VM2 sees VM3 as a PIM neighbor.

[root@VM3]~# pimd -r
Virtual Interface Table ======================================================
Vif  Local Address    Subnet              Thresh  Flags      Neighbors
---  ---------------  ------------------  ------  ---------  -----------------
  0  10.0.23.3        10.0.23/24               1  DR PIM     10.0.23.2
  1  10.0.34.254      10.0.34/24               1  DR NO-NBR
  2  10.0.23.3        register_vif0            1

 Vif  SSM Group        Sources

Multicast Routing Table ======================================================
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 0
Number of Cache MIRRORs: 0
------------------------------------------------------------------------------

⇒ VM3 sees VM2 as PIM Designated Router neighbor.

Does the PIM daemon locally register to the PIM multicast group?

PIM routers need to register to the 224.0.0.13 multicast group; check that all PIM routers correctly display this group on their PIM-enabled interfaces:

[root@VM2]~# ifmcstat
em0:
        inet 10.0.12.2
        igmpv2
                group 224.0.0.22 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:16 refcnt 1
                group 224.0.0.2 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:02 refcnt 1
                group 224.0.0.13 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:0d refcnt 1
                group 224.0.0.1 refcnt 1 state silent mode exclude
                        mcast-macaddr 01:00:5e:00:00:01 refcnt 1
        inet6 fe80:1::a8aa:ff:fe00:212
        mldv2 flags=2<USEALLOW> rv 2 qi 125 qri 10 uri 3
                group ff01:1::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:1::2:54c6:805c refcnt 1
                        mcast-macaddr 33:33:54:c6:80:5c refcnt 1
                group ff02:1::2:ff54:c680 refcnt 1
                        mcast-macaddr 33:33:ff:54:c6:80 refcnt 1
                group ff02:1::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:1::1:ff00:212 refcnt 1
                        mcast-macaddr 33:33:ff:00:02:12 refcnt 1
em1:
        inet 10.0.23.2
        igmpv2
                group 224.0.0.22 refcnt 1 state sleeping mode exclude
                        mcast-macaddr 01:00:5e:00:00:16 refcnt 1
                group 224.0.0.2 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:02 refcnt 1
                group 224.0.0.13 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:0d refcnt 1
                group 224.0.0.1 refcnt 1 state silent mode exclude
                        mcast-macaddr 01:00:5e:00:00:01 refcnt 1
        inet6 fe80:2::a8aa:ff:fe02:202
        mldv2 flags=2<USEALLOW> rv 2 qi 125 qri 10 uri 3
                group ff01:2::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:2::2:54c6:805c refcnt 1
                        mcast-macaddr 33:33:54:c6:80:5c refcnt 1
                group ff02:2::2:ff54:c680 refcnt 1
                        mcast-macaddr 33:33:ff:54:c6:80 refcnt 1
                group ff02:2::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:2::1:ff02:202 refcnt 1
                        mcast-macaddr 33:33:ff:02:02:02 refcnt 1
[root@VM3]~# ifmcstat
em0:
em1:
        inet 10.0.23.3
        igmpv2
                group 224.0.0.22 refcnt 1 state sleeping mode exclude
                        mcast-macaddr 01:00:5e:00:00:16 refcnt 1
                group 224.0.0.2 refcnt 1 state sleeping mode exclude
                        mcast-macaddr 01:00:5e:00:00:02 refcnt 1
                group 224.0.0.13 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:0d refcnt 1
                group 224.0.0.1 refcnt 1 state silent mode exclude
                        mcast-macaddr 01:00:5e:00:00:01 refcnt 1
        inet6 fe80:2::a8aa:ff:fe00:323
        mldv2 flags=2<USEALLOW> rv 2 qi 125 qri 10 uri 3
                group ff01:2::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:2::2:1124:9296 refcnt 1
                        mcast-macaddr 33:33:11:24:92:96 refcnt 1
                group ff02:2::2:ff11:2492 refcnt 1
                        mcast-macaddr 33:33:ff:11:24:92 refcnt 1
                group ff02:2::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:2::1:ff00:323 refcnt 1
                        mcast-macaddr 33:33:ff:00:03:23 refcnt 1
em2:
        inet 10.0.34.3
        igmpv2
                group 224.0.0.22 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:16 refcnt 1
                group 224.0.0.2 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:02 refcnt 1
                group 224.0.0.13 refcnt 1 state lazy mode exclude
                        mcast-macaddr 01:00:5e:00:00:0d refcnt 1
                group 224.0.0.1 refcnt 1 state silent mode exclude
                        mcast-macaddr 01:00:5e:00:00:01 refcnt 1
        inet6 fe80:3::a8aa:ff:fe03:303
        mldv2 flags=2<USEALLOW> rv 2 qi 125 qri 10 uri 3
                group ff01:3::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:3::2:1124:9296 refcnt 1
                        mcast-macaddr 33:33:11:24:92:96 refcnt 1
                group ff02:3::2:ff11:2492 refcnt 1
                        mcast-macaddr 33:33:ff:11:24:92 refcnt 1
                group ff02:3::1 refcnt 1
                        mcast-macaddr 33:33:00:00:00:01 refcnt 1
                group ff02:3::1:ff03:303 refcnt 1
                        mcast-macaddr 33:33:ff:03:03:03 refcnt 1

We correctly see multicast group 224.0.0.13 subscribed on the PIM-enabled interfaces.
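To check this quickly on each router without reading the full output, a small sketch:

```shell
# Count how many interfaces joined the PIM all-routers group 224.0.0.13
ifmcstat | grep -c '224\.0\.0\.13'
```

On VM2 this should print 2 (em0 and em1); on VM3 it should print 3 (em0, em1 and em2).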

Testing

1. Start a mcast generator (iperf client) on VM1

Start an iperf client to 239.1.1.1.

[root@VM1]~# iperf -c 239.1.1.1 -u -T 32 -t 3000 -i 1
------------------------------------------------------------
Client connecting to 239.1.1.1, UDP port 5001
Sending 1470 byte datagrams
Setting multicast TTL to 32
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[  3] local 10.0.12.1 port 41484 connected with 239.1.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   129 KBytes  1.06 Mbits/sec
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  3.0- 4.0 sec   128 KBytes  1.05 Mbits/sec

2. Check that VM2 updates its mrouting table with the discovered mcast source

The PIM daemon's view should be updated:

[root@VM2]~# pimd -r
Virtual Interface Table ======================================================
Vif  Local Address    Subnet              Thresh  Flags      Neighbors
---  ---------------  ------------------  ------  ---------  -----------------
  0  10.0.12.254      10.0.12/24               1  DR NO-NBR
  1  10.0.23.2        10.0.23/24               1  PIM        10.0.23.3
  2  10.0.12.254      register_vif0            1

 Vif  SSM Group        Sources

Multicast Routing Table ======================================================
----------------------------------- (S,G) ------------------------------------
Source           Group            RP Address       Flags
---------------  ---------------  ---------------  ---------------------------
10.0.12.1        239.1.1.1        10.0.23.2        SG
Joined   oifs: ...
Pruned   oifs: ...
Leaves   oifs: ...
Asserted oifs: ...
Outgoing oifs: ...
Incoming     : I..

TIMERS:  Entry    JP    RS  Assert VIFS:  0  1  2
             0     0     0       0        0  0  0
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 1
Number of Cache MIRRORs: 0
------------------------------------------------------------------------------

And the kernel multicast routing table too:

[root@VM2]~# netstat -g

IPv4 Virtual Interface Table
 Vif   Thresh   Local-Address   Remote-Address    Pkts-In   Pkts-Out
  0         1   10.0.12.254                             0          0
  1         1   10.0.23.2                               0          0
  2         1   10.0.12.254                             0          0

IPv4 Multicast Forwarding Table
 Origin          Group             Packets In-Vif  Out-Vifs:Ttls
 10.0.12.1       239.1.1.1               0  65535


IPv6 Multicast Interface Table is empty

IPv6 Multicast Forwarding Table is empty

VM2 has updated its mroute table, adding an entry for source 10.0.12.1 and group 239.1.1.1. The In-Vif value of 65535 likely means that no incoming vif has been resolved yet, since there is no subscriber to the group so far.

3. Start a mcast receiver (IPerf server) on VM4

The iperf server will subscribe to the 239.1.1.1 multicast group and receive the multicast traffic:

[root@VM4]~# iperf -s -u -B 239.1.1.1 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 239.1.1.1
Joining multicast group  239.1.1.1
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
------------------------------------------------------------
[  3] local 239.1.1.1 port 5001 connected with 10.0.12.1 port 41484
[ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
[  3]  0.0- 1.0 sec   128 KBytes  1.05 Mbits/sec   0.313 ms 16336/16425 (99%)
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec   0.250 ms    0/   89 (0%)
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec   0.307 ms    0/   89 (0%)
[  3]  3.0- 4.0 sec   128 KBytes  1.05 Mbits/sec   0.262 ms    0/   89 (0%)
[  3]  4.0- 5.0 sec   128 KBytes  1.05 Mbits/sec   0.188 ms    0/   89 (0%)
[  3]  5.0- 6.0 sec   129 KBytes  1.06 Mbits/sec   0.347 ms    0/   90 (0%)
[  3]  6.0- 7.0 sec   128 KBytes  1.05 Mbits/sec   0.238 ms    0/   89 (0%)
[  3]  7.0- 8.0 sec   128 KBytes  1.05 Mbits/sec   0.234 ms    0/   89 (0%)
[  3]  8.0- 9.0 sec   128 KBytes  1.05 Mbits/sec   0.241 ms    0/   89 (0%)
[  3]  9.0-10.0 sec   128 KBytes  1.05 Mbits/sec   0.210 ms    0/   89 (0%)
[  3] 10.0-11.0 sec   128 KBytes  1.05 Mbits/sec   0.289 ms    0/   89 (0%)
[  3] 11.0-12.0 sec   129 KBytes  1.06 Mbits/sec   0.309 ms    0/   90 (0%)

4. Check that VM3 correctly notices this mcast subscriber

Now the mrouting table of VM3 is updated, and it knows it has a subscriber:

[root@VM3]~# pimd -r
Virtual Interface Table ======================================================
Vif  Local Address    Subnet              Thresh  Flags      Neighbors
---  ---------------  ------------------  ------  ---------  -----------------
  0  10.0.23.3        10.0.23/24               1  DR PIM     10.0.23.2
  1  10.0.34.254      10.0.34/24               1  DR NO-NBR
  2  10.0.23.3        register_vif0            1

 Vif  SSM Group        Sources

Multicast Routing Table ======================================================
----------------------------------- (*,G) ------------------------------------
Source           Group            RP Address       Flags
---------------  ---------------  ---------------  ---------------------------
INADDR_ANY       239.1.1.1        10.0.23.2        WC RP CACHE
Joined   oifs: ...
Pruned   oifs: ...
Leaves   oifs: .l.
Asserted oifs: ...
Outgoing oifs: .o.
Incoming     : I..

TIMERS:  Entry    JP    RS  Assert VIFS:  0  1  2
             0    50     0       0        0  0  0
----------------------------------- (S,G) ------------------------------------
Source           Group            RP Address       Flags
---------------  ---------------  ---------------  ---------------------------
10.0.12.1        239.1.1.1        10.0.23.2        SG
Joined   oifs: ...
Pruned   oifs: ...
Leaves   oifs: .l.
Asserted oifs: ...
Outgoing oifs: .o.
Incoming     : I..

TIMERS:  Entry    JP    RS  Assert VIFS:  0  1  2
           200    55     0       0        0  0  0
--------------------------------- (*,*,G) ------------------------------------
Number of Groups: 1
Number of Cache MIRRORs: 1
------------------------------------------------------------------------------

And its kernel multicast routing table is updated too:

[root@VM3]~# netstat -g

IPv4 Virtual Interface Table
 Vif   Thresh   Local-Address   Remote-Address    Pkts-In   Pkts-Out
  0         1   10.0.23.3                             157          0
  1         1   10.0.34.254                             0        157
  2         1   10.0.23.3                               0          0

IPv4 Multicast Forwarding Table
 Origin          Group             Packets In-Vif  Out-Vifs:Ttls
 10.0.12.1       239.1.1.1             157    0    1:1


IPv6 Multicast Interface Table is empty

IPv6 Multicast Forwarding Table is empty

VM3 correctly learns that there is a subscriber to group 239.1.1.1 on vif1 (toward VM4) and that the source is reached via vif0 (toward VM2).
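As a final sanity check (a hedged sketch), you can also capture on VM3's em2 interface to confirm the flow is actually being forwarded toward VM4:

```shell
# On VM3: expect UDP datagrams from 10.0.12.1 to 239.1.1.1 leaving toward VM4
tcpdump -pni em2 -c 3 'dst host 239.1.1.1'
```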

documentation/examples/multicast_with_pim-sm.txt · Last modified: 2019/11/08 19:22 by olivier