OVN Tutorial
============

This tutorial is intended to give you a tour of the basic OVN features using
`ovs-sandbox` as a simulated test environment.  It’s assumed that you have an
understanding of OVS before going through this tutorial.  Details about OVN are
covered in `ovn-architecture(7)`, but this tutorial lets you quickly see it in
action.

Getting Started
---------------

For some general information about `ovs-sandbox`, see the “Getting Started”
section of [Tutorial.md].

`ovs-sandbox` does not include OVN support by default.  To enable OVN, you must
pass the `--ovn` flag.  For example, if running it straight from the ovs git
tree you would run:

    $ make sandbox SANDBOXFLAGS="--ovn"

Running the sandbox with OVN enabled does the following additional steps to the
environment:

  1. Creates the `OVN_Northbound` and `OVN_Southbound` databases as described in
     `ovn-nb(5)` and `ovn-sb(5)`.

  2. Creates the `hardware_vtep` database as described in `vtep(5)`.

  3. Runs the `ovn-northd`, `ovn-controller`, and `ovn-controller-vtep` daemons.

  4. Makes OVN and VTEP utilities available for use in the environment,
     including `vtep-ctl`, `ovn-nbctl`, and `ovn-sbctl`.

Note that each of these demos assumes you start with a fresh sandbox
environment.  Re-run `ovs-sandbox` before starting each section.

1) Simple two-port setup
------------------------

This first environment is the simplest OVN example.  It demonstrates using OVN
with a single logical switch that has two logical ports, both residing on the
same hypervisor.

Start by running the setup script for this environment.

[View ovn/env1/setup.sh][env1setup].

    $ ovn/env1/setup.sh
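
The setup script is short.  If you want a feel for what it is doing, the logical
topology above can be created with a few `ovn-nbctl` commands, and each logical
port is bound to a local OVS port by setting `external-ids:iface-id`.  This is a
sketch of that pattern, assuming the early `lswitch`/`lport` command names used
elsewhere in this tutorial and illustrative interface names (`lport1`,
`lport2`); see the script itself for the exact commands:

```shell
# Create a logical switch with two logical ports and their MAC addresses.
ovn-nbctl lswitch-add sw0
ovn-nbctl lport-add sw0 sw0-port1
ovn-nbctl lport-set-macs sw0-port1 00:00:00:00:00:01
ovn-nbctl lport-add sw0 sw0-port2
ovn-nbctl lport-set-macs sw0-port2 00:00:00:00:00:02

# Bind each logical port to a local OVS interface.  ovn-controller watches
# external-ids:iface-id to match interfaces to logical ports.
ovs-vsctl add-port br-int lport1 -- \
    set Interface lport1 external-ids:iface-id=sw0-port1
ovs-vsctl add-port br-int lport2 -- \
    set Interface lport2 external-ids:iface-id=sw0-port2
```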

You can use the `ovn-nbctl` utility to see an overview of the logical topology.

    $ ovn-nbctl show
    lswitch 78687d53-e037-4555-bcd3-f4f8eaf3f2aa (sw0)
        lport sw0-port1
            macs: 00:00:00:00:00:01
        lport sw0-port2
            macs: 00:00:00:00:00:02

The `ovn-sbctl` utility can be used to see into the state stored in the
`OVN_Southbound` database.  The `show` command shows that there is a single
chassis with two logical ports bound to it.  In a more realistic
multi-hypervisor environment, this would list all hypervisors and where all
logical ports are located.

    $ ovn-sbctl show
    Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "sw0-port1"
        Port_Binding "sw0-port2"

OVN creates logical flows to describe how the network should behave in logical
space.  Each chassis then creates OpenFlow flows based on those logical flows
that reflect its own local view of the network.  The `ovn-sbctl` command can
show the logical flows.

    $ ovn-sbctl lflow-list
    Datapath: d3466847-2b3a-4f17-8eb2-34f5b0727a70  Pipeline: ingress
      table=0(port_sec), priority=  100, match=(eth.src[40]), action=(drop;)
      table=0(port_sec), priority=  100, match=(vlan.present), action=(drop;)
      table=0(port_sec), priority=   50, match=(inport == "sw0-port1" && eth.src == {00:00:00:00:00:01}), action=(next;)
      table=0(port_sec), priority=   50, match=(inport == "sw0-port2" && eth.src == {00:00:00:00:00:02}), action=(next;)
      table=1(     acl), priority=    0, match=(1), action=(next;)
      table=2( l2_lkup), priority=  100, match=(eth.dst[40]), action=(outport = "_MC_flood"; output;)
      table=2( l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:01), action=(outport = "sw0-port1"; output;)
      table=2( l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:02), action=(outport = "sw0-port2"; output;)
    Datapath: d3466847-2b3a-4f17-8eb2-34f5b0727a70  Pipeline: egress
      table=0(     acl), priority=    0, match=(1), action=(next;)
      table=1(port_sec), priority=  100, match=(eth.dst[40]), action=(output;)
      table=1(port_sec), priority=   50, match=(outport == "sw0-port1" && eth.dst == {00:00:00:00:00:01}), action=(output;)
      table=1(port_sec), priority=   50, match=(outport == "sw0-port2" && eth.dst == {00:00:00:00:00:02}), action=(output;)

Now we can start taking a closer look at how `ovn-controller` has programmed the
local switch.  Before looking at the flows, we can use `ovs-ofctl` to verify the
OpenFlow port numbers for each of the logical ports on the switch.  The output
shows that `lport1`, which corresponds with our logical port `sw0-port1`, has an
OpenFlow port number of `1`.  Similarly, `lport2` has an OpenFlow port number of
`2`.

    $ ovs-ofctl show br-int
    OFPT_FEATURES_REPLY (xid=0x2): dpid:00003e1ba878364d
    n_tables:254, n_buffers:256
    capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
    actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
     1(lport1): addr:aa:55:aa:55:00:07
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     2(lport2): addr:aa:55:aa:55:00:08
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     LOCAL(br-int): addr:3e:1b:a8:78:36:4d
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
    OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

Finally, use `ovs-ofctl` to see the OpenFlow flows for `br-int`.  Note that some
fields have been omitted for brevity.

    $ ovs-ofctl -O OpenFlow13 dump-flows br-int
    OFPST_FLOW reply (OF1.3) (xid=0x2):
     table=0, priority=100,in_port=1 actions=set_field:0x1->metadata,set_field:0x1->reg6,resubmit(,16)
     table=0, priority=100,in_port=2 actions=set_field:0x1->metadata,set_field:0x2->reg6,resubmit(,16)
     table=16, priority=100,metadata=0x1,dl_src=01:00:00:00:00:00/01:00:00:00:00:00 actions=drop
     table=16, priority=100,metadata=0x1,vlan_tci=0x1000/0x1000 actions=drop
     table=16, priority=50,reg6=0x1,metadata=0x1,dl_src=00:00:00:00:00:01 actions=resubmit(,17)
     table=16, priority=50,reg6=0x2,metadata=0x1,dl_src=00:00:00:00:00:02 actions=resubmit(,17)
     table=17, priority=0,metadata=0x1 actions=resubmit(,18)
     table=18, priority=100,metadata=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=set_field:0xffff->reg7,resubmit(,32)
     table=18, priority=50,metadata=0x1,dl_dst=00:00:00:00:00:01 actions=set_field:0x1->reg7,resubmit(,32)
     table=18, priority=50,metadata=0x1,dl_dst=00:00:00:00:00:02 actions=set_field:0x2->reg7,resubmit(,32)
     table=32, priority=0 actions=resubmit(,33)
     table=33, priority=100,reg7=0x1,metadata=0x1 actions=resubmit(,34)
     table=33, priority=100,reg7=0xffff,metadata=0x1 actions=set_field:0x2->reg7,resubmit(,34),set_field:0x1->reg7,resubmit(,34)
     table=33, priority=100,reg7=0x2,metadata=0x1 actions=resubmit(,34)
     table=34, priority=100,reg6=0x1,reg7=0x1,metadata=0x1 actions=drop
     table=34, priority=100,reg6=0x2,reg7=0x2,metadata=0x1 actions=drop
     table=34, priority=0 actions=set_field:0->reg0,set_field:0->reg1,set_field:0->reg2,set_field:0->reg3,set_field:0->reg4,set_field:0->reg5,resubmit(,48)
     table=48, priority=0,metadata=0x1 actions=resubmit(,49)
     table=49, priority=100,metadata=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,64)
     table=49, priority=50,reg7=0x1,metadata=0x1,dl_dst=00:00:00:00:00:01 actions=resubmit(,64)
     table=49, priority=50,reg7=0x2,metadata=0x1,dl_dst=00:00:00:00:00:02 actions=resubmit(,64)
     table=64, priority=100,reg7=0x1,metadata=0x1 actions=output:1
     table=64, priority=100,reg7=0x2,metadata=0x1 actions=output:2

The `ovs-appctl` command can be used to generate an OpenFlow trace of how a
packet would be processed in this configuration.  This first trace shows a
packet from `sw0-port1` to `sw0-port2`.  The packet arrives from port `1` and
should be output to port `2`.

[View ovn/env1/packet1.sh][env1packet1].

    $ ovn/env1/packet1.sh
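
The packet scripts in this tutorial are thin wrappers around
`ovs-appctl ofproto/trace`.  As a rough sketch (the flow fields here are
inferred from the description above; see the script for the exact invocation),
tracing a unicast packet from `sw0-port1` to `sw0-port2` looks like:

```shell
# Trace a packet arriving on OpenFlow port 1 with sw0-port1's source MAC
# and sw0-port2's destination MAC.
ovs-appctl ofproto/trace br-int \
    in_port=1,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02
```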

Trace a broadcast packet from `sw0-port1`.  The packet arrives from port `1` and
should be output to port `2`.

[View ovn/env1/packet2.sh][env1packet2].

    $ ovn/env1/packet2.sh

You can extend this setup by adding additional ports.  For example, to add a
third port, run this command:

[View ovn/env1/add-third-port.sh][env1thirdport].

    $ ovn/env1/add-third-port.sh
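
Adding a port follows the same pattern as the setup script.  A hedged sketch,
reusing the tutorial's naming convention (`sw0-port3` with MAC
`00:00:00:00:00:03` and a hypothetical interface name `lport3`):

```shell
# Add a third logical port to sw0 and bind it to a new local interface.
ovn-nbctl lport-add sw0 sw0-port3
ovn-nbctl lport-set-macs sw0-port3 00:00:00:00:00:03
ovs-vsctl add-port br-int lport3 -- \
    set Interface lport3 external-ids:iface-id=sw0-port3
```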

Now if you do another trace of a broadcast packet from `sw0-port1`, you will see
that it is output to both ports `2` and `3`.

    $ ovn/env1/packet2.sh

2) 2 switches, 4 ports
----------------------

This environment is an extension of the last example.  The previous example
showed two ports on a single logical switch.  In this environment we add a
second logical switch that also has two ports.  This lets you start to see how
`ovn-controller` creates flows for isolated networks to co-exist on the same
switch.

[View ovn/env2/setup.sh][env2setup].

    $ ovn/env2/setup.sh

View the logical topology with `ovn-nbctl`.

    $ ovn-nbctl show
    lswitch e3190dc2-89d1-44ed-9308-e7077de782b3 (sw0)
        lport sw0-port1
            macs: 00:00:00:00:00:01
        lport sw0-port2
            macs: 00:00:00:00:00:02
    lswitch c8ed4c5f-9733-43f6-93da-795b1aabacb1 (sw1)
        lport sw1-port1
            macs: 00:00:00:00:00:03
        lport sw1-port2
            macs: 00:00:00:00:00:04

Physically, all ports reside on the same chassis.

    $ ovn-sbctl show
    Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "sw1-port2"
        Port_Binding "sw0-port2"
        Port_Binding "sw0-port1"
        Port_Binding "sw1-port1"

OVN creates separate logical flows for each logical switch.

    $ ovn-sbctl lflow-list
    Datapath: 5aa8be0b-8369-49e2-a878-f68872a8d211  Pipeline: ingress
      table=0(port_sec), priority=  100, match=(eth.src[40]), action=(drop;)
      table=0(port_sec), priority=  100, match=(vlan.present), action=(drop;)
      table=0(port_sec), priority=   50, match=(inport == "sw0-port1" && eth.src == {00:00:00:00:00:01}), action=(next;)
      table=0(port_sec), priority=   50, match=(inport == "sw0-port2" && eth.src == {00:00:00:00:00:02}), action=(next;)
      table=1(     acl), priority=    0, match=(1), action=(next;)
      table=2( l2_lkup), priority=  100, match=(eth.dst[40]), action=(outport = "_MC_flood"; output;)
      table=2( l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:01), action=(outport = "sw0-port1"; output;)
      table=2( l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:02), action=(outport = "sw0-port2"; output;)
    Datapath: 5aa8be0b-8369-49e2-a878-f68872a8d211  Pipeline: egress
      table=0(     acl), priority=    0, match=(1), action=(next;)
      table=1(port_sec), priority=  100, match=(eth.dst[40]), action=(output;)
      table=1(port_sec), priority=   50, match=(outport == "sw0-port1" && eth.dst == {00:00:00:00:00:01}), action=(output;)
      table=1(port_sec), priority=   50, match=(outport == "sw0-port2" && eth.dst == {00:00:00:00:00:02}), action=(output;)
    Datapath: 631fb3c9-b0a3-4e56-bac3-1717c8cbb826  Pipeline: ingress
      table=0(port_sec), priority=  100, match=(eth.src[40]), action=(drop;)
      table=0(port_sec), priority=  100, match=(vlan.present), action=(drop;)
      table=0(port_sec), priority=   50, match=(inport == "sw1-port1" && eth.src == {00:00:00:00:00:03}), action=(next;)
      table=0(port_sec), priority=   50, match=(inport == "sw1-port2" && eth.src == {00:00:00:00:00:04}), action=(next;)
      table=1(     acl), priority=    0, match=(1), action=(next;)
      table=2( l2_lkup), priority=  100, match=(eth.dst[40]), action=(outport = "_MC_flood"; output;)
      table=2( l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:03), action=(outport = "sw1-port1"; output;)
      table=2( l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:04), action=(outport = "sw1-port2"; output;)
    Datapath: 631fb3c9-b0a3-4e56-bac3-1717c8cbb826  Pipeline: egress
      table=0(     acl), priority=    0, match=(1), action=(next;)
      table=1(port_sec), priority=  100, match=(eth.dst[40]), action=(output;)
      table=1(port_sec), priority=   50, match=(outport == "sw1-port1" && eth.dst == {00:00:00:00:00:03}), action=(output;)
      table=1(port_sec), priority=   50, match=(outport == "sw1-port2" && eth.dst == {00:00:00:00:00:04}), action=(output;)

In this setup, `sw0-port1` and `sw0-port2` can send packets to each other, but
not to either of the ports on `sw1`.  This first trace shows a packet from
`sw0-port1` to `sw0-port2`.  You should see the packet arrive on OpenFlow port
`1` and output to OpenFlow port `2`.

[View ovn/env2/packet1.sh][env2packet1].

    $ ovn/env2/packet1.sh

This next example shows a packet from `sw0-port1` with a destination MAC address
of `00:00:00:00:00:03`, which is the MAC address for `sw1-port1`.  Since these
ports are not on the same logical switch, the packet should just be dropped.

[View ovn/env2/packet2.sh][env2packet2].

    $ ovn/env2/packet2.sh

3) Two Hypervisors
------------------

The first two examples started by showing OVN on a single hypervisor.  A more
realistic deployment of OVN would span multiple hypervisors.  This example
creates a single logical switch with 4 logical ports.  It then simulates having
two hypervisors with two of the logical ports bound to each hypervisor.

[View ovn/env3/setup.sh][env3setup].

    $ ovn/env3/setup.sh
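
Normally each hypervisor's `ovn-controller` registers its own chassis.  Since
the sandbox only has one real chassis, the script fakes the second one directly
in the southbound database.  A sketch of that idea, assuming `ovn-sbctl`
commands of this era (see the script for the exact commands):

```shell
# Register a fake chassis reachable over geneve, then bind two of the
# logical ports to it.
ovn-sbctl chassis-add fakechassis geneve 127.0.0.1
ovn-sbctl lport-bind sw0-port3 fakechassis
ovn-sbctl lport-bind sw0-port4 fakechassis
```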

You can start by viewing the logical topology with `ovn-nbctl`.

    $ ovn-nbctl show
    lswitch b977dc03-79a5-41ba-9665-341a80e1abfd (sw0)
        lport sw0-port1
            macs: 00:00:00:00:00:01
        lport sw0-port2
            macs: 00:00:00:00:00:02
        lport sw0-port4
            macs: 00:00:00:00:00:04
        lport sw0-port3
            macs: 00:00:00:00:00:03

Using `ovn-sbctl` to view the state of the system, we can see that there are two
chassis: one local that we can interact with, and a fake remote chassis.  Two
logical ports are bound to each.  Both chassis have an IP address of localhost,
but in a realistic deployment that would be the IP address used for tunnels to
that chassis.

    $ ovn-sbctl show
    Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "sw0-port2"
        Port_Binding "sw0-port1"
    Chassis fakechassis
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "sw0-port4"
        Port_Binding "sw0-port3"

Packets between `sw0-port1` and `sw0-port2` behave just like the previous
examples.  Packets to ports on a remote chassis are the interesting part of this
example.  You may have noticed before that OVN’s logical flows are broken up
into ingress and egress tables.  Given a packet from `sw0-port1` on the local
chassis to `sw0-port3` on the remote chassis, the ingress pipeline is executed
on the local switch.  OVN then determines that it must forward the packet over a
geneve tunnel.  When it arrives at the remote chassis, the egress pipeline will
be executed there.

This first packet trace shows the first part of this example.  It’s a packet
from `sw0-port1` to `sw0-port3` from the perspective of the local chassis.
`sw0-port1` is OpenFlow port `1`.  The tunnel to the fake remote chassis is
OpenFlow port `3`.  You should see the ingress pipeline being executed and then
the packet output to port `3`, the geneve tunnel.

[View ovn/env3/packet1.sh][env3packet1].

    $ ovn/env3/packet1.sh

To simulate what would happen when that packet arrives at the remote chassis we
can flip this example around.  Consider a packet from `sw0-port3` to
`sw0-port1`.  This trace shows what would happen when that packet arrives at the
local chassis.  The packet arrives on OpenFlow port `3` (the tunnel).  You should
then see the egress pipeline get executed and the packet output to OpenFlow port
`1`.

[View ovn/env3/packet2.sh][env3packet2].

    $ ovn/env3/packet2.sh

4) Locally attached networks
----------------------------

While OVN is generally focused on the implementation of logical networks using
overlays, it’s also possible to use OVN as a control plane to manage direct
connectivity to networks that are locally accessible to each chassis.

This example includes two hypervisors.  Both hypervisors have two ports on them.
We want to use OVN to manage the connectivity of these ports to a network
attached to each hypervisor that we will call “physnet1”.

This scenario requires some additional configuration of `ovn-controller`.  We
must configure a mapping between `physnet1` and a local OVS bridge that provides
connectivity to that network.  We call these “bridge mappings”.  For our
example, the following script creates a bridge called `br-eth1` and then
configures `ovn-controller` with a bridge mapping from `physnet1` to `br-eth1`.

[View ovn/env4/setup1.sh][env4setup1].

    $ ovn/env4/setup1.sh
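
The bridge mapping itself is just an `external-ids` key in the local Open
vSwitch database that `ovn-controller` reads.  A minimal sketch of the
configuration the script performs:

```shell
# Create the bridge that provides connectivity to physnet1, then tell
# ovn-controller about the physnet1 -> br-eth1 mapping.
ovs-vsctl add-br br-eth1
ovs-vsctl set open_vswitch . \
    external-ids:ovn-bridge-mappings=physnet1:br-eth1
```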

At this point we should be able to see that `ovn-controller` has automatically
created patch ports between `br-int` and `br-eth1`.

    $ ovs-vsctl show
    aea39214-ebec-4210-aa34-1ae7d6921720
        Bridge br-int
            fail_mode: secure
            Port "patch-br-int-to-br-eth1"
                Interface "patch-br-int-to-br-eth1"
                    type: patch
                    options: {peer="patch-br-eth1-to-br-int"}
            Port br-int
                Interface br-int
                    type: internal
        Bridge "br-eth1"
            Port "br-eth1"
                Interface "br-eth1"
                    type: internal
            Port "patch-br-eth1-to-br-int"
                Interface "patch-br-eth1-to-br-int"
                    type: patch
                    options: {peer="patch-br-int-to-br-eth1"}

Now we can move on to the next setup phase for this example.  We want to create
a fake second chassis and then create the topology that tells OVN we want both
ports on both hypervisors connected to `physnet1`.  The way this is modeled in
OVN is by creating a logical switch for each port.  The logical switch has the
regular VIF port and a `localnet` port.

[View ovn/env4/setup2.sh][env4setup2].

    $ ovn/env4/setup2.sh
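
For one of the four switches, the model described above can be sketched like
this (early `ovn-nbctl` command names, as used elsewhere in this tutorial; the
`localnet` type and `network_name=physnet1` option match the values queried
below):

```shell
# One logical switch per VIF: a regular port plus a localnet port that
# represents the connection to physnet1.
ovn-nbctl lswitch-add provnet1-1
ovn-nbctl lport-add provnet1-1 provnet1-1-port1
ovn-nbctl lport-set-macs provnet1-1-port1 00:00:00:00:00:01
ovn-nbctl lport-add provnet1-1 provnet1-1-physnet1
ovn-nbctl lport-set-macs provnet1-1-physnet1 unknown
ovn-nbctl lport-set-type provnet1-1-physnet1 localnet
ovn-nbctl lport-set-options provnet1-1-physnet1 network_name=physnet1
```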

The logical topology from `ovn-nbctl` should look like this.

    $ ovn-nbctl show
        lswitch 5a652488-cfba-4f3e-929d-00010cdfde40 (provnet1-2)
            lport provnet1-2-physnet1
                macs: unknown
            lport provnet1-2-port1
                macs: 00:00:00:00:00:02
        lswitch 5829b60a-eda8-4d78-94f6-7017ff9efcf0 (provnet1-4)
            lport provnet1-4-port1
                macs: 00:00:00:00:00:04
            lport provnet1-4-physnet1
                macs: unknown
        lswitch 06cbbcb6-38e3-418d-a81e-634ec9b54ad6 (provnet1-1)
            lport provnet1-1-port1
                macs: 00:00:00:00:00:01
            lport provnet1-1-physnet1
                macs: unknown
        lswitch 9cba3b3b-59ae-4175-95f5-b6f1cd9c2afb (provnet1-3)
            lport provnet1-3-physnet1
                macs: unknown
            lport provnet1-3-port1
                macs: 00:00:00:00:00:03

`port1` on each logical switch represents a regular logical port for a VIF on a
hypervisor.  `physnet1` on each logical switch is the special `localnet` port.
You can use `ovn-nbctl` to see that this port has a `type` and `options` set.

    $ ovn-nbctl lport-get-type provnet1-1-physnet1
    localnet

    $ ovn-nbctl lport-get-options provnet1-1-physnet1
    network_name=physnet1

The physical topology should reflect that there are two regular ports on each
chassis.

    $ ovn-sbctl show
    Chassis fakechassis
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "provnet1-3-port1"
        Port_Binding "provnet1-4-port1"
    Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "provnet1-2-port1"
        Port_Binding "provnet1-1-port1"

All four of our ports should be able to communicate with each other, but they do
so through `physnet1`.  A packet from any of these ports to any destination
should be output to the OpenFlow port number that corresponds to the patch port
to `br-eth1`.

This example assumes the following OpenFlow port number mappings:

* 1 = patch port to `br-eth1`
* 2 = tunnel to the fake second chassis
* 3 = lport1, which is the logical port named `provnet1-1-port1`
* 4 = lport2, which is the logical port named `provnet1-2-port1`

We get those port numbers using `ovs-ofctl`:

    $ ovs-ofctl show br-int
    OFPT_FEATURES_REPLY (xid=0x2): dpid:0000765054700040
    n_tables:254, n_buffers:256
    capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
    actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
     1(patch-br-int-to): addr:de:29:14:95:8a:b8
         config:     0
         state:      0
         speed: 0 Mbps now, 0 Mbps max
     2(ovn-fakech-0): addr:aa:55:aa:55:00:08
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     3(lport1): addr:aa:55:aa:55:00:09
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     4(lport2): addr:aa:55:aa:55:00:0a
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     LOCAL(br-int): addr:76:50:54:70:00:40
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
    OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

This first trace shows a packet from `provnet1-1-port1` with a destination MAC
address of `provnet1-2-port1`.  Despite both of these ports being on the same
local switch (`lport1` and `lport2`), we expect all packets to be sent out to
`br-eth1` (OpenFlow port 1).  We then expect the network to handle getting the
packet to its destination.  In practice, this will be optimized at `br-eth1` and
the packet won’t actually go out and back on the network.

[View ovn/env4/packet1.sh][env4packet1].

    $ ovn/env4/packet1.sh

This next trace is a continuation of the previous one.  This shows the packet
coming back into `br-int` from `br-eth1`.  We now expect the packet to be output
to `provnet1-2-port1`, which is OpenFlow port 4.

[View ovn/env4/packet2.sh][env4packet2].

    $ ovn/env4/packet2.sh

This next trace shows an example of a packet being sent to a destination on
another hypervisor.  The source is `provnet1-2-port1`, but the destination is
`provnet1-3-port1`, which is on the other fake chassis.  As usual, we expect the
output to be to OpenFlow port 1, the patch port to `br-eth1`.

[View ovn/env4/packet3.sh][env4packet3].

    $ ovn/env4/packet3.sh

This next test shows a broadcast packet.  The destination should still only be
OpenFlow port 1.

[View ovn/env4/packet4.sh][env4packet4].

    $ ovn/env4/packet4.sh

Finally, this last trace shows what happens when a broadcast packet arrives
from the network.  In this case, it simulates a broadcast that originated from a
port on the remote fake chassis and arrived at the local chassis via `br-eth1`.
We should see it output to both local ports that are attached to this network
(OpenFlow ports 3 and 4).

[View ovn/env4/packet5.sh][env4packet5].

    $ ovn/env4/packet5.sh

5) Locally attached networks with VLANs
---------------------------------------

This example is an extension of the previous one.  We take the same setup and
add two more ports to each hypervisor.  Instead of having the new ports directly
connected to `physnet1` as before, we indicate that we want them on VLAN 101 of
`physnet1`.  This shows how `localnet` ports can be used to provide connectivity
to either a flat network or a VLAN on that network.

[View ovn/env5/setup.sh][env5setup].

    $ ovn/env5/setup.sh
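
The new switches follow the same pattern as in the previous example, except
that the `localnet` port also carries a VLAN tag.  A hedged sketch for one of
them, assuming the optional parent/tag arguments to `lport-add` (which would
match the `parent: , tag:101` shown in the output below); see the script for
the exact commands:

```shell
# A localnet port on VLAN 101 of physnet1: the empty string is the (unused)
# parent port and 101 is the VLAN tag.
ovn-nbctl lswitch-add provnet1-5-101
ovn-nbctl lport-add provnet1-5-101 provnet1-5-101-port1
ovn-nbctl lport-set-macs provnet1-5-101-port1 00:00:00:00:00:05
ovn-nbctl lport-add provnet1-5-101 provnet1-5-physnet1-101 "" 101
ovn-nbctl lport-set-macs provnet1-5-physnet1-101 unknown
ovn-nbctl lport-set-type provnet1-5-physnet1-101 localnet
ovn-nbctl lport-set-options provnet1-5-physnet1-101 network_name=physnet1
```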

The logical topology shown by `ovn-nbctl` is similar to `env4`, except we now
have 8 regular VIF ports connected to `physnet1` instead of 4.  The additional 4
ports we have added are all on VLAN 101 of `physnet1`.  Note that the `localnet`
ports representing connectivity to VLAN 101 of `physnet1` have the `tag` field
set to `101`.

    $ ovn-nbctl show
        lswitch 12ea93d0-694b-48e9-adef-d0ddd3ec4ac9 (provnet1-7-101)
            lport provnet1-7-physnet1-101
                parent: , tag:101
                macs: unknown
            lport provnet1-7-101-port1
                macs: 00:00:00:00:00:07
        lswitch c9a5ce3a-15ec-48ea-a898-416013463589 (provnet1-4)
            lport provnet1-4-port1
                macs: 00:00:00:00:00:04
            lport provnet1-4-physnet1
                macs: unknown
        lswitch e07d4f7a-2085-4fbb-9937-d6192b79a397 (provnet1-1)
            lport provnet1-1-physnet1
                macs: unknown
            lport provnet1-1-port1
                macs: 00:00:00:00:00:01
        lswitch 6c098474-0509-4219-bc9b-eb4e28dd1aeb (provnet1-2)
            lport provnet1-2-physnet1
                macs: unknown
            lport provnet1-2-port1
                macs: 00:00:00:00:00:02
        lswitch 723c4684-5d58-4202-b8e3-4ba99ad5ed9e (provnet1-8-101)
            lport provnet1-8-101-port1
                macs: 00:00:00:00:00:08
            lport provnet1-8-physnet1-101
                parent: , tag:101
                macs: unknown
        lswitch 8444e925-ceb2-4b02-ac20-eb2e4cfb954d (provnet1-6-101)
            lport provnet1-6-physnet1-101
                parent: , tag:101
                macs: unknown
            lport provnet1-6-101-port1
                macs: 00:00:00:00:00:06
        lswitch e11e5605-7c46-4395-b28d-cff57451fc7e (provnet1-3)
            lport provnet1-3-port1
                macs: 00:00:00:00:00:03
            lport provnet1-3-physnet1
                macs: unknown
        lswitch 0706b697-6c92-4d54-bc0a-db5bababb74a (provnet1-5-101)
            lport provnet1-5-101-port1
                macs: 00:00:00:00:00:05
            lport provnet1-5-physnet1-101
                parent: , tag:101
                macs: unknown

The physical topology shows that we have 4 regular VIF ports on each simulated
hypervisor.

    $ ovn-sbctl show
    Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "provnet1-6-101-port1"
        Port_Binding "provnet1-1-port1"
        Port_Binding "provnet1-2-port1"
        Port_Binding "provnet1-5-101-port1"
    Chassis fakechassis
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "provnet1-4-port1"
        Port_Binding "provnet1-3-port1"
        Port_Binding "provnet1-8-101-port1"
        Port_Binding "provnet1-7-101-port1"

All of the traces from the previous example, `env4`, should work in this
environment and provide the same result.  Now we can show what happens for the
ports connected to VLAN 101.  This first example shows a packet originating from
`provnet1-5-101-port1`, which is OpenFlow port 5.  We should see VLAN tag 101
pushed on the packet and then output to OpenFlow port 1, the patch port to
`br-eth1` (the bridge providing connectivity to `physnet1`).

[View ovn/env5/packet1.sh][env5packet1].

    $ ovn/env5/packet1.sh

If we look at a broadcast packet arriving on VLAN 101 of `physnet1`, we should
see it output to OpenFlow ports 5 and 6 only.

[View ovn/env5/packet2.sh][env5packet2].

    $ ovn/env5/packet2.sh


[Tutorial.md]:./Tutorial.md
[env1setup]:./ovn/env1/setup.sh
[env1packet1]:./ovn/env1/packet1.sh
[env1packet2]:./ovn/env1/packet2.sh
[env1thirdport]:./ovn/env1/add-third-port.sh
[env2setup]:./ovn/env2/setup.sh
[env2packet1]:./ovn/env2/packet1.sh
[env2packet2]:./ovn/env2/packet2.sh
[env3setup]:./ovn/env3/setup.sh
[env3packet1]:./ovn/env3/packet1.sh
[env3packet2]:./ovn/env3/packet2.sh
[env4setup1]:./ovn/env4/setup1.sh
[env4setup2]:./ovn/env4/setup2.sh
[env4packet1]:./ovn/env4/packet1.sh
[env4packet2]:./ovn/env4/packet2.sh
[env4packet3]:./ovn/env4/packet3.sh
[env4packet4]:./ovn/env4/packet4.sh
[env4packet5]:./ovn/env4/packet5.sh
[env5setup]:./ovn/env5/setup.sh
[env5packet1]:./ovn/env5/packet1.sh
[env5packet2]:./ovn/env5/packet2.sh