OVN Tutorial
============

This tutorial is intended to give you a tour of the basic OVN features using
`ovs-sandbox` as a simulated test environment.  It’s assumed that you have an
understanding of OVS before going through this tutorial.  OVN is described in
detail in [ovn-architecture(7)], but this tutorial lets you quickly see it in
action.

Getting Started
---------------

For some general information about `ovs-sandbox`, see the “Getting Started”
section of [Tutorial.md].

`ovs-sandbox` does not include OVN support by default.  To enable OVN, you must
pass the `--ovn` flag.  For example, if running it straight from the ovs git
tree you would run:

    $ make sandbox SANDBOXFLAGS="--ovn"

Running the sandbox with OVN enabled performs the following additional setup
steps:

  1. Creates the `OVN_Northbound` and `OVN_Southbound` databases as described in
     [ovn-nb(5)] and [ovn-sb(5)].

  2. Creates the `hardware_vtep` database as described in [vtep(5)].

  3. Runs the [ovn-northd(8)], [ovn-controller(8)], and [ovn-controller-vtep(8)]
     daemons.

  4. Makes OVN and VTEP utilities available for use in the environment,
     including [vtep-ctl(8)], [ovn-nbctl(8)], and [ovn-sbctl(8)].

Note that each of these demos assumes you start with a fresh sandbox
environment. **Re-run `ovs-sandbox` before starting each section.**

Using GDB
---------

GDB support is not required to go through the tutorial. See the “Using GDB”
section of [Tutorial.md] for more info. Additional flags exist for launching
the debugger for the OVN programs:

  --gdb-ovn-northd
  --gdb-ovn-controller
  --gdb-ovn-controller-vtep


1) Simple two-port setup
------------------------

This first environment is the simplest OVN example.  It demonstrates using OVN
with a single logical switch that has two logical ports, both residing on the
same hypervisor.

Start by running the setup script for this environment.

[View ovn/env1/setup.sh][env1setup].

    $ ovn/env1/setup.sh

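If you are curious what the setup script boils down to, it creates the logical
switch and its ports in the `OVN_Northbound` database and then binds local OVS
interfaces to those logical ports.  The sketch below uses the names from this
environment, but it is only an approximation of the script, not its exact
contents.

    # Create the logical switch and its two logical ports.
    ovn-nbctl ls-add sw0
    ovn-nbctl lsp-add sw0 sw0-port1
    ovn-nbctl lsp-set-addresses sw0-port1 00:00:00:00:00:01
    ovn-nbctl lsp-add sw0 sw0-port2
    ovn-nbctl lsp-set-addresses sw0-port2 00:00:00:00:00:02

    # Bind local OVS interfaces to the logical ports by setting
    # external_ids:iface-id to the logical port name.
    ovs-vsctl add-port br-int lport1 -- set Interface lport1 external_ids:iface-id=sw0-port1
    ovs-vsctl add-port br-int lport2 -- set Interface lport2 external_ids:iface-id=sw0-port2
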
You can use the `ovn-nbctl` utility to see an overview of the logical topology.

    $ ovn-nbctl show
    switch 78687d53-e037-4555-bcd3-f4f8eaf3f2aa (sw0)
        port sw0-port1
            addresses: 00:00:00:00:00:01
        port sw0-port2
            addresses: 00:00:00:00:00:02

The `ovn-sbctl` utility can be used to see into the state stored in the
`OVN_Southbound` database.  The `show` command shows that there is a single
chassis with two logical ports bound to it.  In a more realistic
multi-hypervisor environment, this would list all hypervisors and where all
logical ports are located.

    $ ovn-sbctl show
    Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "sw0-port1"
        Port_Binding "sw0-port2"

OVN creates logical flows to describe how the network should behave in logical
space.  Each chassis then creates OpenFlow flows based on those logical flows
that reflect its own local view of the network.  The `ovn-sbctl` command can
show the logical flows.

    $ ovn-sbctl lflow-list
    Datapath: d3466847-2b3a-4f17-8eb2-34f5b0727a70  Pipeline: ingress
      table=0(ls_in_port_sec_l2), priority=  100, match=(eth.src[40]), action=(drop;)
      table=0(ls_in_port_sec_l2), priority=  100, match=(vlan.present), action=(drop;)
      table=0(ls_in_port_sec_l2), priority=   50, match=(inport == "sw0-port1" && eth.src == {00:00:00:00:00:01}), action=(next;)
      table=0(ls_in_port_sec_l2), priority=   50, match=(inport == "sw0-port2" && eth.src == {00:00:00:00:00:02}), action=(next;)
      table=1(ls_in_port_sec_ip), priority=    0, match=(1), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   90, match=(inport == "sw0-port1" && eth.src == 00:00:00:00:00:01 && arp.sha == 00:00:00:00:00:01), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   90, match=(inport == "sw0-port1" && eth.src == 00:00:00:00:00:01 && ip6 && nd && ((nd.sll == 00:00:00:00:00:00 || nd.sll == 00:00:00:00:00:01) || ((nd.tll == 00:00:00:00:00:00 || nd.tll == 00:00:00:00:00:01)))), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   90, match=(inport == "sw0-port2" && eth.src == 00:00:00:00:00:02 && arp.sha == 00:00:00:00:00:02), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   90, match=(inport == "sw0-port2" && eth.src == 00:00:00:00:00:02 && ip6 && nd && ((nd.sll == 00:00:00:00:00:00 || nd.sll == 00:00:00:00:00:02) || ((nd.tll == 00:00:00:00:00:00 || nd.tll == 00:00:00:00:00:02)))), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   80, match=(inport == "sw0-port1" && (arp || nd)), action=(drop;)
      table=2(ls_in_port_sec_nd), priority=   80, match=(inport == "sw0-port2" && (arp || nd)), action=(drop;)
      table=2(ls_in_port_sec_nd), priority=    0, match=(1), action=(next;)
      table=3(   ls_in_pre_acl), priority=    0, match=(1), action=(next;)
      table=4(       ls_in_acl), priority=    0, match=(1), action=(next;)
      table=5(   ls_in_arp_rsp), priority=    0, match=(1), action=(next;)
      table=6(   ls_in_l2_lkup), priority=  100, match=(eth.mcast), action=(outport = "_MC_flood"; output;)
      table=6(   ls_in_l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:01), action=(outport = "sw0-port1"; output;)
      table=6(   ls_in_l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:02), action=(outport = "sw0-port2"; output;)
    Datapath: d3466847-2b3a-4f17-8eb2-34f5b0727a70  Pipeline: egress
      table=0(  ls_out_pre_acl), priority=    0, match=(1), action=(next;)
      table=1(      ls_out_acl), priority=    0, match=(1), action=(next;)
      table=2(ls_out_port_sec_ip), priority=    0, match=(1), action=(next;)
      table=3(ls_out_port_sec_l2), priority=  100, match=(eth.mcast), action=(output;)
      table=3(ls_out_port_sec_l2), priority=   50, match=(outport == "sw0-port1" && eth.dst == {00:00:00:00:00:01}), action=(output;)
      table=3(ls_out_port_sec_l2), priority=   50, match=(outport == "sw0-port2" && eth.dst == {00:00:00:00:00:02}), action=(output;)

Now we can start taking a closer look at how `ovn-controller` has programmed the
local switch.  Before looking at the flows, we can use `ovs-ofctl` to verify the
OpenFlow port numbers for each of the logical ports on the switch.  The output
shows that `lport1`, which corresponds with our logical port `sw0-port1`, has an
OpenFlow port number of `1`.  Similarly, `lport2` has an OpenFlow port number of
`2`.

    $ ovs-ofctl show br-int
    OFPT_FEATURES_REPLY (xid=0x2): dpid:00003e1ba878364d
    n_tables:254, n_buffers:256
    capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
    actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
     1(lport1): addr:aa:55:aa:55:00:07
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     2(lport2): addr:aa:55:aa:55:00:08
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     LOCAL(br-int): addr:3e:1b:a8:78:36:4d
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
    OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

Finally, use `ovs-ofctl` to see the OpenFlow flows for `br-int`.  Note that some
fields have been omitted for brevity.

    $ ovs-ofctl -O OpenFlow13 dump-flows br-int
    OFPST_FLOW reply (OF1.3) (xid=0x2):
     table=0, priority=100,in_port=1 actions=set_field:0x1->reg5,set_field:0x1->metadata,set_field:0x1->reg6,resubmit(,16)
     table=0, priority=100,in_port=2 actions=set_field:0x2->reg5,set_field:0x1->metadata,set_field:0x2->reg6,resubmit(,16)
     table=16, priority=100,metadata=0x1,vlan_tci=0x1000/0x1000 actions=drop
     table=16, priority=100,metadata=0x1,dl_src=01:00:00:00:00:00/01:00:00:00:00:00 actions=drop
     table=16, priority=50,reg6=0x1,metadata=0x1,dl_src=00:00:00:00:00:01 actions=resubmit(,17)
     table=16, priority=50,reg6=0x2,metadata=0x1,dl_src=00:00:00:00:00:02 actions=resubmit(,17)
     table=17, priority=0,metadata=0x1 actions=resubmit(,18)
     table=18, priority=90,icmp6,reg6=0x2,metadata=0x1,dl_src=00:00:00:00:00:02,icmp_type=136,icmp_code=0,nd_tll=00:00:00:00:00:00 actions=resubmit(,19)
     table=18, priority=90,icmp6,reg6=0x2,metadata=0x1,dl_src=00:00:00:00:00:02,icmp_type=136,icmp_code=0,nd_tll=00:00:00:00:00:02 actions=resubmit(,19)
     table=18, priority=90,icmp6,reg6=0x1,metadata=0x1,dl_src=00:00:00:00:00:01,icmp_type=136,icmp_code=0,nd_tll=00:00:00:00:00:00 actions=resubmit(,19)
     table=18, priority=90,icmp6,reg6=0x1,metadata=0x1,dl_src=00:00:00:00:00:01,icmp_type=136,icmp_code=0,nd_tll=00:00:00:00:00:01 actions=resubmit(,19)
     table=18, priority=90,icmp6,reg6=0x1,metadata=0x1,dl_src=00:00:00:00:00:01,icmp_type=135,icmp_code=0,nd_sll=00:00:00:00:00:01 actions=resubmit(,19)
     table=18, priority=90,icmp6,reg6=0x1,metadata=0x1,dl_src=00:00:00:00:00:01,icmp_type=135,icmp_code=0,nd_sll=00:00:00:00:00:00 actions=resubmit(,19)
     table=18, priority=90,icmp6,reg6=0x2,metadata=0x1,dl_src=00:00:00:00:00:02,icmp_type=135,icmp_code=0,nd_sll=00:00:00:00:00:00 actions=resubmit(,19)
     table=18, priority=90,icmp6,reg6=0x2,metadata=0x1,dl_src=00:00:00:00:00:02,icmp_type=135,icmp_code=0,nd_sll=00:00:00:00:00:02 actions=resubmit(,19)
     table=18, priority=90,arp,reg6=0x1,metadata=0x1,dl_src=00:00:00:00:00:01,arp_sha=00:00:00:00:00:01 actions=resubmit(,19)
     table=18, priority=90,arp,reg6=0x2,metadata=0x1,dl_src=00:00:00:00:00:02,arp_sha=00:00:00:00:00:02 actions=resubmit(,19)
     table=18, priority=80,icmp6,reg6=0x2,metadata=0x1,icmp_type=136,icmp_code=0 actions=drop
     table=18, priority=80,icmp6,reg6=0x1,metadata=0x1,icmp_type=136,icmp_code=0 actions=drop
     table=18, priority=80,icmp6,reg6=0x1,metadata=0x1,icmp_type=135,icmp_code=0 actions=drop
     table=18, priority=80,icmp6,reg6=0x2,metadata=0x1,icmp_type=135,icmp_code=0 actions=drop
     table=18, priority=80,arp,reg6=0x2,metadata=0x1 actions=drop
     table=18, priority=80,arp,reg6=0x1,metadata=0x1 actions=drop
     table=18, priority=0,metadata=0x1 actions=resubmit(,19)
     table=19, priority=0,metadata=0x1 actions=resubmit(,20)
     table=20, priority=0,metadata=0x1 actions=resubmit(,21)
     table=21, priority=0,metadata=0x1 actions=resubmit(,22)
     table=22, priority=100,metadata=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=set_field:0xffff->reg7,resubmit(,32)
     table=22, priority=50,metadata=0x1,dl_dst=00:00:00:00:00:01 actions=set_field:0x1->reg7,resubmit(,32)
     table=22, priority=50,metadata=0x1,dl_dst=00:00:00:00:00:02 actions=set_field:0x2->reg7,resubmit(,32)
     table=32, priority=0 actions=resubmit(,33)
     table=33, priority=100,reg7=0x1,metadata=0x1 actions=set_field:0x1->reg5,resubmit(,34)
     table=33, priority=100,reg7=0xffff,metadata=0x1 actions=set_field:0x2->reg5,set_field:0x2->reg7,resubmit(,34),set_field:0x1->reg5,set_field:0x1->reg7,resubmit(,34),set_field:0xffff->reg7
     table=33, priority=100,reg7=0x2,metadata=0x1 actions=set_field:0x2->reg5,resubmit(,34)
     table=34, priority=100,reg6=0x1,reg7=0x1,metadata=0x1 actions=drop
     table=34, priority=100,reg6=0x2,reg7=0x2,metadata=0x1 actions=drop
     table=34, priority=0 actions=set_field:0->reg0,set_field:0->reg1,set_field:0->reg2,set_field:0->reg3,set_field:0->reg4,resubmit(,48)
     table=48, priority=0,metadata=0x1 actions=resubmit(,49)
     table=49, priority=0,metadata=0x1 actions=resubmit(,50)
     table=50, priority=0,metadata=0x1 actions=resubmit(,51)
     table=51, priority=100,metadata=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,64)
     table=51, priority=50,reg7=0x2,metadata=0x1,dl_dst=00:00:00:00:00:02 actions=resubmit(,64)
     table=51, priority=50,reg7=0x1,metadata=0x1,dl_dst=00:00:00:00:00:01 actions=resubmit(,64)
     table=64, priority=100,reg7=0x1,metadata=0x1 actions=output:1
     table=64, priority=100,reg7=0x2,metadata=0x1 actions=output:2

The `ovs-appctl` command can be used to generate an OpenFlow trace of how a
packet would be processed in this configuration.  This first trace shows a
packet from `sw0-port1` to `sw0-port2`.  The packet arrives from port `1` and
should be output to port `2`.

[View ovn/env1/packet1.sh][env1packet1].

    $ ovn/env1/packet1.sh

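The script is a thin wrapper around `ovs-appctl ofproto/trace`.  If you want to
run an equivalent trace by hand, it looks roughly like this (the exact flow
fields used by the script may differ slightly):

    # Trace a packet from sw0-port1 (OpenFlow port 1) to the MAC address of
    # sw0-port2, letting OVS generate a dummy packet for the trace.
    ovs-appctl ofproto/trace br-int \
        in_port=1,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02 -generate
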
Trace a broadcast packet from `sw0-port1`.  The packet arrives from port `1` and
should be output to port `2`.

[View ovn/env1/packet2.sh][env1packet2].

    $ ovn/env1/packet2.sh

You can extend this setup by adding additional ports.  For example, to add a
third port, run this command:

[View ovn/env1/add-third-port.sh][env1thirdport].

    $ ovn/env1/add-third-port.sh

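Adding the third port follows the same pattern as the setup script: create the
logical port and then bind a local OVS interface to it.  Roughly (a sketch
rather than the script's exact contents):

    # Add a third logical port to sw0 and give it a MAC address.
    ovn-nbctl lsp-add sw0 sw0-port3
    ovn-nbctl lsp-set-addresses sw0-port3 00:00:00:00:00:03

    # Bind a local OVS interface to the new logical port.
    ovs-vsctl add-port br-int lport3 -- set Interface lport3 external_ids:iface-id=sw0-port3
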
Now if you do another trace of a broadcast packet from `sw0-port1`, you will see
that it is output to both ports `2` and `3`.

    $ ovn/env1/packet2.sh

2) 2 switches, 4 ports
----------------------

This environment is an extension of the last example.  The previous example
showed two ports on a single logical switch.  In this environment we add a
second logical switch that also has two ports.  This lets you start to see how
`ovn-controller` creates flows that allow isolated networks to co-exist on the
same switch.

[View ovn/env2/setup.sh][env2setup].

    $ ovn/env2/setup.sh

View the logical topology with `ovn-nbctl`.

    $ ovn-nbctl show
    switch e3190dc2-89d1-44ed-9308-e7077de782b3 (sw0)
        port sw0-port1
            addresses: 00:00:00:00:00:01
        port sw0-port2
            addresses: 00:00:00:00:00:02
    switch c8ed4c5f-9733-43f6-93da-795b1aabacb1 (sw1)
        port sw1-port1
            addresses: 00:00:00:00:00:03
        port sw1-port2
            addresses: 00:00:00:00:00:04

Physically, all ports reside on the same chassis.

    $ ovn-sbctl show
    Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "sw1-port2"
        Port_Binding "sw0-port2"
        Port_Binding "sw0-port1"
        Port_Binding "sw1-port1"

OVN creates separate logical flows for each logical switch.

    $ ovn-sbctl lflow-list
    Datapath: 5aa8be0b-8369-49e2-a878-f68872a8d211  Pipeline: ingress
      table=0(ls_in_port_sec_l2), priority=  100, match=(eth.src[40]), action=(drop;)
      table=0(ls_in_port_sec_l2), priority=  100, match=(vlan.present), action=(drop;)
      table=0(ls_in_port_sec_l2), priority=   50, match=(inport == "sw1-port1" && eth.src == {00:00:00:00:00:03}), action=(next;)
      table=0(ls_in_port_sec_l2), priority=   50, match=(inport == "sw1-port2" && eth.src == {00:00:00:00:00:04}), action=(next;)
      table=1(ls_in_port_sec_ip), priority=    0, match=(1), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   90, match=(inport == "sw1-port1" && eth.src == 00:00:00:00:00:03 && arp.sha == 00:00:00:00:00:03), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   90, match=(inport == "sw1-port1" && eth.src == 00:00:00:00:00:03 && ip6 && nd && ((nd.sll == 00:00:00:00:00:00 || nd.sll == 00:00:00:00:00:03) || ((nd.tll == 00:00:00:00:00:00 || nd.tll == 00:00:00:00:00:03)))), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   90, match=(inport == "sw1-port2" && eth.src == 00:00:00:00:00:04 && arp.sha == 00:00:00:00:00:04), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   90, match=(inport == "sw1-port2" && eth.src == 00:00:00:00:00:04 && ip6 && nd && ((nd.sll == 00:00:00:00:00:00 || nd.sll == 00:00:00:00:00:04) || ((nd.tll == 00:00:00:00:00:00 || nd.tll == 00:00:00:00:00:04)))), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   80, match=(inport == "sw1-port1" && (arp || nd)), action=(drop;)
      table=2(ls_in_port_sec_nd), priority=   80, match=(inport == "sw1-port2" && (arp || nd)), action=(drop;)
      table=2(ls_in_port_sec_nd), priority=    0, match=(1), action=(next;)
      table=3(   ls_in_pre_acl), priority=    0, match=(1), action=(next;)
      table=4(       ls_in_acl), priority=    0, match=(1), action=(next;)
      table=5(   ls_in_arp_rsp), priority=    0, match=(1), action=(next;)
      table=6(   ls_in_l2_lkup), priority=  100, match=(eth.mcast), action=(outport = "_MC_flood"; output;)
      table=6(   ls_in_l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:03), action=(outport = "sw1-port1"; output;)
      table=6(   ls_in_l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:04), action=(outport = "sw1-port2"; output;)
    Datapath: 5aa8be0b-8369-49e2-a878-f68872a8d211  Pipeline: egress
      table=0(  ls_out_pre_acl), priority=    0, match=(1), action=(next;)
      table=1(      ls_out_acl), priority=    0, match=(1), action=(next;)
      table=2(ls_out_port_sec_ip), priority=    0, match=(1), action=(next;)
      table=3(ls_out_port_sec_l2), priority=  100, match=(eth.mcast), action=(output;)
      table=3(ls_out_port_sec_l2), priority=   50, match=(outport == "sw1-port1" && eth.dst == {00:00:00:00:00:03}), action=(output;)
      table=3(ls_out_port_sec_l2), priority=   50, match=(outport == "sw1-port2" && eth.dst == {00:00:00:00:00:04}), action=(output;)
    Datapath: 631fb3c9-b0a3-4e56-bac3-1717c8cbb826  Pipeline: ingress
      table=0(ls_in_port_sec_l2), priority=  100, match=(eth.src[40]), action=(drop;)
      table=0(ls_in_port_sec_l2), priority=  100, match=(vlan.present), action=(drop;)
      table=0(ls_in_port_sec_l2), priority=   50, match=(inport == "sw0-port1" && eth.src == {00:00:00:00:00:01}), action=(next;)
      table=0(ls_in_port_sec_l2), priority=   50, match=(inport == "sw0-port2" && eth.src == {00:00:00:00:00:02}), action=(next;)
      table=1(ls_in_port_sec_ip), priority=    0, match=(1), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   90, match=(inport == "sw0-port1" && eth.src == 00:00:00:00:00:01 && arp.sha == 00:00:00:00:00:01), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   90, match=(inport == "sw0-port1" && eth.src == 00:00:00:00:00:01 && ip6 && nd && ((nd.sll == 00:00:00:00:00:00 || nd.sll == 00:00:00:00:00:01) || ((nd.tll == 00:00:00:00:00:00 || nd.tll == 00:00:00:00:00:01)))), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   90, match=(inport == "sw0-port2" && eth.src == 00:00:00:00:00:02 && arp.sha == 00:00:00:00:00:02), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   90, match=(inport == "sw0-port2" && eth.src == 00:00:00:00:00:02 && ip6 && nd && ((nd.sll == 00:00:00:00:00:00 || nd.sll == 00:00:00:00:00:02) || ((nd.tll == 00:00:00:00:00:00 || nd.tll == 00:00:00:00:00:02)))), action=(next;)
      table=2(ls_in_port_sec_nd), priority=   80, match=(inport == "sw0-port1" && (arp || nd)), action=(drop;)
      table=2(ls_in_port_sec_nd), priority=   80, match=(inport == "sw0-port2" && (arp || nd)), action=(drop;)
      table=2(ls_in_port_sec_nd), priority=    0, match=(1), action=(next;)
      table=3(   ls_in_pre_acl), priority=    0, match=(1), action=(next;)
      table=4(       ls_in_acl), priority=    0, match=(1), action=(next;)
      table=5(   ls_in_arp_rsp), priority=    0, match=(1), action=(next;)
      table=6(   ls_in_l2_lkup), priority=  100, match=(eth.mcast), action=(outport = "_MC_flood"; output;)
      table=6(   ls_in_l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:01), action=(outport = "sw0-port1"; output;)
      table=6(   ls_in_l2_lkup), priority=   50, match=(eth.dst == 00:00:00:00:00:02), action=(outport = "sw0-port2"; output;)
    Datapath: 631fb3c9-b0a3-4e56-bac3-1717c8cbb826  Pipeline: egress
      table=0(  ls_out_pre_acl), priority=    0, match=(1), action=(next;)
      table=1(      ls_out_acl), priority=    0, match=(1), action=(next;)
      table=2(ls_out_port_sec_ip), priority=    0, match=(1), action=(next;)
      table=3(ls_out_port_sec_l2), priority=  100, match=(eth.mcast), action=(output;)
      table=3(ls_out_port_sec_l2), priority=   50, match=(outport == "sw0-port1" && eth.dst == {00:00:00:00:00:01}), action=(output;)
      table=3(ls_out_port_sec_l2), priority=   50, match=(outport == "sw0-port2" && eth.dst == {00:00:00:00:00:02}), action=(output;)


In this setup, `sw0-port1` and `sw0-port2` can send packets to each other, but
not to either of the ports on `sw1`.  This first trace shows a packet from
`sw0-port1` to `sw0-port2`.  You should see the packet arrive on OpenFlow port
`1` and output to OpenFlow port `2`.

[View ovn/env2/packet1.sh][env2packet1].

    $ ovn/env2/packet1.sh

This next example shows a packet from `sw0-port1` with a destination MAC address
of `00:00:00:00:00:03`, which is the MAC address for `sw1-port1`.  Since these
ports are not on the same logical switch, the packet should just be dropped.

[View ovn/env2/packet2.sh][env2packet2].

    $ ovn/env2/packet2.sh

3) Two Hypervisors
------------------

The first two examples started by showing OVN on a single hypervisor.  A more
realistic deployment of OVN would span multiple hypervisors.  This example
creates a single logical switch with 4 logical ports.  It then simulates having
two hypervisors with two of the logical ports bound to each hypervisor.

[View ovn/env3/setup.sh][env3setup].

    $ ovn/env3/setup.sh

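The interesting part of this setup is how the second hypervisor is simulated.
Since there is no second `ovn-controller` running, the remote chassis and its
port bindings have to be created directly with `ovn-sbctl`, which is roughly
what the script amounts to.  A sketch (the real script may differ in details):

    # Register a fake remote chassis with a geneve encapsulation.  In a real
    # deployment, ovn-controller on that hypervisor registers the chassis
    # itself.
    ovn-sbctl chassis-add fakechassis geneve 127.0.0.1

    # Claim two of the logical ports for the fake chassis.
    ovn-sbctl lsp-bind sw0-port3 fakechassis
    ovn-sbctl lsp-bind sw0-port4 fakechassis
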
You can start by viewing the logical topology with `ovn-nbctl`.

    $ ovn-nbctl show
    switch b977dc03-79a5-41ba-9665-341a80e1abfd (sw0)
        port sw0-port1
            addresses: 00:00:00:00:00:01
        port sw0-port2
            addresses: 00:00:00:00:00:02
        port sw0-port4
            addresses: 00:00:00:00:00:04
        port sw0-port3
            addresses: 00:00:00:00:00:03

Using `ovn-sbctl` to view the state of the system, we can see that there are two
chassis: one local that we can interact with, and a fake remote chassis.  Two
logical ports are bound to each.  Both chassis have an IP address of localhost,
but in a realistic deployment that would be the IP address used for tunnels to
that chassis.

    $ ovn-sbctl show
    Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "sw0-port2"
        Port_Binding "sw0-port1"
    Chassis fakechassis
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "sw0-port4"
        Port_Binding "sw0-port3"

Packets between `sw0-port1` and `sw0-port2` behave just like the previous
examples.  Packets to ports on a remote chassis are the interesting part of this
example.  You may have noticed before that OVN’s logical flows are broken up
into ingress and egress tables.  Given a packet from `sw0-port1` on the local
chassis to `sw0-port3` on the remote chassis, the ingress pipeline is executed
on the local switch.  OVN then determines that it must forward the packet over a
geneve tunnel.  When it arrives at the remote chassis, the egress pipeline will
be executed there.

This first packet trace shows the first part of this example.  It’s a packet
from `sw0-port1` to `sw0-port3` from the perspective of the local chassis.
`sw0-port1` is OpenFlow port `1`.  The tunnel to the fake remote chassis is
OpenFlow port `3`.  You should see the ingress pipeline being executed and then
the packet output to port `3`, the geneve tunnel.

[View ovn/env3/packet1.sh][env3packet1].

    $ ovn/env3/packet1.sh

To simulate what would happen when that packet arrives at the remote chassis we
can flip this example around.  Consider a packet from `sw0-port3` to
`sw0-port1`.  This trace shows what would happen when that packet arrives at the
local chassis.  The packet arrives on OpenFlow port `3` (the tunnel).  You should
then see the egress pipeline get executed and the packet output to OpenFlow port
`1`.

[View ovn/env3/packet2.sh][env3packet2].

    $ ovn/env3/packet2.sh

4) Locally attached networks
----------------------------

While OVN is generally focused on the implementation of logical networks using
overlays, it’s also possible to use OVN as a control plane to manage logically
direct connectivity to networks that are locally accessible to each chassis.

This example includes two hypervisors.  Both hypervisors have two ports on them.
We want to use OVN to manage the connectivity of these ports to a network
attached to each hypervisor that we will call “physnet1”.

This scenario requires some additional configuration of `ovn-controller`.  We
must configure a mapping between `physnet1` and a local OVS bridge that provides
connectivity to that network.  We call these “bridge mappings”.  For our
example, the following script creates a bridge called `br-eth1` and then
configures `ovn-controller` with a bridge mapping from `physnet1` to `br-eth1`.

[View ovn/env4/setup1.sh][env4setup1].

    $ ovn/env4/setup1.sh

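Bridge mappings are stored as the `ovn-bridge-mappings` key in the local
`Open_vSwitch` database, which is where `ovn-controller` reads its
configuration.  The core of the script looks roughly like this (a sketch, not
the exact contents of setup1.sh):

    # Create the bridge that provides connectivity to physnet1.
    ovs-vsctl add-br br-eth1

    # Tell ovn-controller that the network named "physnet1" is reachable
    # through the local bridge br-eth1.
    ovs-vsctl set open_vswitch . external-ids:ovn-bridge-mappings=physnet1:br-eth1
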
At this point we should be able to see that `ovn-controller` has automatically
created patch ports between `br-int` and `br-eth1`.

    $ ovs-vsctl show
    aea39214-ebec-4210-aa34-1ae7d6921720
        Bridge br-int
            fail_mode: secure
            Port "patch-br-int-to-br-eth1"
                Interface "patch-br-int-to-br-eth1"
                    type: patch
                    options: {peer="patch-br-eth1-to-br-int"}
            Port br-int
                Interface br-int
                    type: internal
        Bridge "br-eth1"
            Port "br-eth1"
                Interface "br-eth1"
                    type: internal
            Port "patch-br-eth1-to-br-int"
                Interface "patch-br-eth1-to-br-int"
                    type: patch
                    options: {peer="patch-br-int-to-br-eth1"}

Now we can move on to the next setup phase for this example.  We want to create
a fake second chassis and then create the topology that tells OVN we want both
ports on both hypervisors connected to `physnet1`.  The way this is modeled in
OVN is by creating a logical switch for each port.  The logical switch has the
regular VIF port and a `localnet` port.

[View ovn/env4/setup2.sh][env4setup2].

    $ ovn/env4/setup2.sh

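Each of these logical switches is built the same way: one regular VIF port plus
one `localnet` port that represents the connection to `physnet1`.  For one of
the switches, the commands look roughly like this (a sketch; the script
presumably repeats the same pattern for each of the four switches):

    # A logical switch with a normal VIF port...
    ovn-nbctl ls-add provnet1-1
    ovn-nbctl lsp-add provnet1-1 provnet1-1-port1
    ovn-nbctl lsp-set-addresses provnet1-1-port1 00:00:00:00:00:01

    # ...plus a localnet port that connects the switch to physnet1.
    ovn-nbctl lsp-add provnet1-1 provnet1-1-physnet1
    ovn-nbctl lsp-set-addresses provnet1-1-physnet1 unknown
    ovn-nbctl lsp-set-type provnet1-1-physnet1 localnet
    ovn-nbctl lsp-set-options provnet1-1-physnet1 network_name=physnet1
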
The logical topology from `ovn-nbctl` should look like this.

    $ ovn-nbctl show
        switch 5a652488-cfba-4f3e-929d-00010cdfde40 (provnet1-2)
            port provnet1-2-physnet1
                addresses: unknown
            port provnet1-2-port1
                addresses: 00:00:00:00:00:02
        switch 5829b60a-eda8-4d78-94f6-7017ff9efcf0 (provnet1-4)
            port provnet1-4-port1
                addresses: 00:00:00:00:00:04
            port provnet1-4-physnet1
                addresses: unknown
        switch 06cbbcb6-38e3-418d-a81e-634ec9b54ad6 (provnet1-1)
            port provnet1-1-port1
                addresses: 00:00:00:00:00:01
            port provnet1-1-physnet1
                addresses: unknown
        switch 9cba3b3b-59ae-4175-95f5-b6f1cd9c2afb (provnet1-3)
            port provnet1-3-physnet1
                addresses: unknown
            port provnet1-3-port1
                addresses: 00:00:00:00:00:03

`port1` on each logical switch represents a regular logical port for a VIF on a
hypervisor.  `physnet1` on each logical switch is the special `localnet` port.
You can use `ovn-nbctl` to see that this port has a `type` and `options` set.

    $ ovn-nbctl lsp-get-type provnet1-1-physnet1
    localnet

    $ ovn-nbctl lsp-get-options provnet1-1-physnet1
    network_name=physnet1

The physical topology should reflect that there are two regular ports on each
chassis.

    $ ovn-sbctl show
    Chassis fakechassis
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "provnet1-3-port1"
        Port_Binding "provnet1-4-port1"
    Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "provnet1-2-port1"
        Port_Binding "provnet1-1-port1"

All four of our ports should be able to communicate with each other, but they do
so through `physnet1`.  A packet from any of these ports to any destination
should be output to the OpenFlow port number that corresponds to the patch port
to `br-eth1`.

This example assumes the following OpenFlow port number mappings:

* 1 = patch port to `br-eth1`
* 2 = tunnel to the fake second chassis
* 3 = lport1, which is the logical port named `provnet1-1-port1`
* 4 = lport2, which is the logical port named `provnet1-2-port1`

We get those port numbers using `ovs-ofctl`:

    $ ovs-ofctl show br-int
    OFPT_FEATURES_REPLY (xid=0x2): dpid:0000765054700040
    n_tables:254, n_buffers:256
    capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
    actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src
    mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
     1(patch-br-int-to): addr:de:29:14:95:8a:b8
         config:     0
         state:      0
         speed: 0 Mbps now, 0 Mbps max
     2(ovn-fakech-0): addr:aa:55:aa:55:00:08
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     3(lport1): addr:aa:55:aa:55:00:09
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     4(lport2): addr:aa:55:aa:55:00:0a
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
     LOCAL(br-int): addr:76:50:54:70:00:40
         config:     PORT_DOWN
         state:      LINK_DOWN
         speed: 0 Mbps now, 0 Mbps max
    OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

This first trace shows a packet from `provnet1-1-port1` with a destination MAC
address of `provnet1-2-port1`.  Despite both of these ports being on the same
local switch (`lport1` and `lport2`), we expect all packets to be sent out to
`br-eth1` (OpenFlow port 1).  We then expect the network to handle getting the
packet to its destination.  In practice, this will be optimized at `br-eth1` and
the packet won’t actually go out and back on the network.

[View ovn/env4/packet1.sh][env4packet1].

    $ ovn/env4/packet1.sh

This next trace is a continuation of the previous one.  This shows the packet
coming back into `br-int` from `br-eth1`.  We now expect the packet to be output
to `provnet1-2-port1`, which is OpenFlow port 4.

[View ovn/env4/packet2.sh][env4packet2].

    $ ovn/env4/packet2.sh

This next trace shows an example of a packet being sent to a destination on
another hypervisor.  The source is `provnet1-2-port1`, but the destination is
`provnet1-3-port1`, which is on the other fake chassis.  As usual, we expect the
output to be to OpenFlow port 1, the patch port to `br-eth1`.

[View ovn/env4/packet3.sh][env4packet3].

    $ ovn/env4/packet3.sh

This next test shows a broadcast packet.  The destination should still only be
OpenFlow port 1.

[View ovn/env4/packet4.sh][env4packet4]

    $ ovn/env4/packet4.sh

Finally, this last trace shows what happens when a broadcast packet arrives
from the network.  In this case, it simulates a broadcast that originated from a
port on the remote fake chassis and arrived at the local chassis via `br-eth1`.
We should see it output to both local ports that are attached to this network
(OpenFlow ports 3 and 4).

[View ovn/env4/packet5.sh][env4packet5]

    $ ovn/env4/packet5.sh

5) Locally attached networks with VLANs
---------------------------------------

This example is an extension of the previous one.  We take the same setup and
add two more ports to each hypervisor.  Instead of having the new ports directly
connected to `physnet1` as before, we indicate that we want them on VLAN 101 of
`physnet1`.  This shows how `localnet` ports can be used to provide connectivity
to either a flat network or a VLAN on that network.

[View ovn/env5/setup.sh][env5setup]

    $ ovn/env5/setup.sh

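The only new ingredient compared to `env4` is the VLAN tag on the new `localnet`
ports.  A `localnet` port is tied to a VLAN by setting its `tag` column, so the
setup for one of the new switches looks roughly like this (a sketch using the
names from this environment):

    # localnet port for VLAN 101 of physnet1: same type and options as in the
    # flat case, plus a tag.
    ovn-nbctl lsp-add provnet1-5-101 provnet1-5-physnet1-101
    ovn-nbctl lsp-set-addresses provnet1-5-physnet1-101 unknown
    ovn-nbctl lsp-set-type provnet1-5-physnet1-101 localnet
    ovn-nbctl lsp-set-options provnet1-5-physnet1-101 network_name=physnet1
    ovn-nbctl set Logical_Switch_Port provnet1-5-physnet1-101 tag=101
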
The logical topology shown by `ovn-nbctl` is similar to `env4`, except we now
have 8 regular VIF ports connected to `physnet1` instead of 4.  The additional 4
ports we have added are all on VLAN 101 of `physnet1`.  Note that the `localnet`
ports representing connectivity to VLAN 101 of `physnet1` have the `tag` field
set to `101`.

    $ ovn-nbctl show
        switch 12ea93d0-694b-48e9-adef-d0ddd3ec4ac9 (provnet1-7-101)
            port provnet1-7-physnet1-101
                parent: , tag:101
                addresses: unknown
            port provnet1-7-101-port1
                addresses: 00:00:00:00:00:07
        switch c9a5ce3a-15ec-48ea-a898-416013463589 (provnet1-4)
            port provnet1-4-port1
                addresses: 00:00:00:00:00:04
            port provnet1-4-physnet1
                addresses: unknown
        switch e07d4f7a-2085-4fbb-9937-d6192b79a397 (provnet1-1)
            port provnet1-1-physnet1
                addresses: unknown
            port provnet1-1-port1
                addresses: 00:00:00:00:00:01
        switch 6c098474-0509-4219-bc9b-eb4e28dd1aeb (provnet1-2)
            port provnet1-2-physnet1
                addresses: unknown
            port provnet1-2-port1
                addresses: 00:00:00:00:00:02
        switch 723c4684-5d58-4202-b8e3-4ba99ad5ed9e (provnet1-8-101)
            port provnet1-8-101-port1
                addresses: 00:00:00:00:00:08
            port provnet1-8-physnet1-101
                parent: , tag:101
                addresses: unknown
        switch 8444e925-ceb2-4b02-ac20-eb2e4cfb954d (provnet1-6-101)
            port provnet1-6-physnet1-101
                parent: , tag:101
                addresses: unknown
            port provnet1-6-101-port1
                addresses: 00:00:00:00:00:06
        switch e11e5605-7c46-4395-b28d-cff57451fc7e (provnet1-3)
            port provnet1-3-port1
                addresses: 00:00:00:00:00:03
            port provnet1-3-physnet1
                addresses: unknown
        switch 0706b697-6c92-4d54-bc0a-db5bababb74a (provnet1-5-101)
            port provnet1-5-101-port1
                addresses: 00:00:00:00:00:05
            port provnet1-5-physnet1-101
                parent: , tag:101
                addresses: unknown

The physical topology shows that we have 4 regular VIF ports on each simulated
hypervisor.

    $ ovn-sbctl show
    Chassis "56b18105-5706-46ef-80c4-ff20979ab068"
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "provnet1-6-101-port1"
        Port_Binding "provnet1-1-port1"
        Port_Binding "provnet1-2-port1"
        Port_Binding "provnet1-5-101-port1"
    Chassis fakechassis
        Encap geneve
            ip: "127.0.0.1"
        Port_Binding "provnet1-4-port1"
        Port_Binding "provnet1-3-port1"
        Port_Binding "provnet1-8-101-port1"
        Port_Binding "provnet1-7-101-port1"

All of the traces from the previous example, `env4`, should work in this
environment and provide the same result.  Now we can show what happens for the
ports connected to VLAN 101.  This first example shows a packet originating from
`provnet1-5-101-port1`, which is OpenFlow port 5.  We should see VLAN tag 101
pushed on the packet and then output to OpenFlow port 1, the patch port to
`br-eth1` (the bridge providing connectivity to `physnet1`).

[View ovn/env5/packet1.sh][env5packet1].

    $ ovn/env5/packet1.sh

If we look at a broadcast packet arriving on VLAN 101 of `physnet1`, we should
see it output to OpenFlow ports 5 and 6 only.

[View ovn/env5/packet2.sh][env5packet2].

    $ ovn/env5/packet2.sh


6) Stateful ACLs
----------------

ACLs provide a way to do distributed packet filtering for OVN networks.  For
example, OpenStack Neutron uses OVN ACLs to implement security groups.  ACLs are
implemented using conntrack integration with OVS.

Start with a simple logical switch with 2 logical ports.

[View ovn/env6/setup.sh][env6setup].

    $ ovn/env6/setup.sh

A common use case would be the following policy applied to `sw0-port1`:

* Allow outbound IP traffic and associated return traffic.
* Allow incoming ICMP requests and associated return traffic.
* Allow incoming SSH connections and associated return traffic.
* Drop other incoming IP traffic.

The following script applies this policy to our environment.

[View ovn/env6/add-acls.sh][env6acls].

    $ ovn/env6/add-acls.sh

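The policy maps directly onto `ovn-nbctl acl-add` calls, one per rule.  The
script does roughly the following (direction, priority, match, and verb mirror
the `acl-list` output shown below):

    # Allow outbound IP traffic from sw0-port1 and associated return traffic.
    ovn-nbctl acl-add sw0 from-lport 1002 'inport == "sw0-port1" && ip' allow-related

    # Allow incoming ICMP and SSH to sw0-port1 and associated return traffic.
    ovn-nbctl acl-add sw0 to-lport 1002 'outport == "sw0-port1" && ip && icmp' allow-related
    ovn-nbctl acl-add sw0 to-lport 1002 'outport == "sw0-port1" && ip && tcp && tcp.dst == 22' allow-related

    # Drop all other IP traffic directed at sw0-port1.
    ovn-nbctl acl-add sw0 to-lport 1001 'outport == "sw0-port1" && ip' drop
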
We can view the configured ACLs on this network using the `ovn-nbctl` command.

    $ ovn-nbctl acl-list sw0
    from-lport  1002 (inport == "sw0-port1" && ip) allow-related
      to-lport  1002 (outport == "sw0-port1" && ip && icmp) allow-related
      to-lport  1002 (outport == "sw0-port1" && ip && tcp && tcp.dst == 22) allow-related
      to-lport  1001 (outport == "sw0-port1" && ip) drop

Now that we have ACLs configured, there are new entries in the logical flow
table in the stages `switch_in_pre_acl`, `switch_in_acl`, `switch_out_pre_acl`,
and `switch_out_acl`.

    $ ovn-sbctl lflow-list

Let’s look more closely at `switch_out_pre_acl` and `switch_out_acl`.

In `switch_out_pre_acl`, we match IP traffic and put it through the connection
tracker.  This populates the connection state fields so that we can apply policy
as appropriate.

    table=0(switch_out_pre_acl), priority=  100, match=(ip), action=(ct_next;)
    table=1(switch_out_pre_acl), priority=    0, match=(1), action=(next;)

In `switch_out_acl`, we allow packets associated with existing connections.  We
drop packets that are deemed to be invalid (such as a non-SYN TCP packet that is
not associated with an existing connection).

    table=1(switch_out_acl), priority=65535, match=(!ct.est && ct.rel && !ct.new && !ct.inv), action=(next;)
    table=1(switch_out_acl), priority=65535, match=(ct.est && !ct.rel && !ct.new && !ct.inv), action=(next;)
    table=1(switch_out_acl), priority=65535, match=(ct.inv), action=(drop;)

For new connections, we apply our configured ACL policy to decide whether to
allow the connection or not.  In this case, we’ll allow ICMP or SSH.  Otherwise,
we’ll drop the packet.

    table=1(switch_out_acl), priority= 2002, match=(ct.new && (outport == "sw0-port1" && ip && icmp)), action=(ct_commit; next;)
    table=1(switch_out_acl), priority= 2002, match=(ct.new && (outport == "sw0-port1" && ip && tcp && tcp.dst == 22)), action=(ct_commit; next;)
    table=1(switch_out_acl), priority= 2001, match=(outport == "sw0-port1" && ip), action=(drop;)

When using ACLs, the default policy is to allow and track IP connections.  Based
on our above policy, IP traffic directed at `sw0-port1` will never hit this flow
at priority 1.

    table=1(switch_out_acl), priority=    1, match=(ip), action=(ct_commit; next;)
    table=1(switch_out_acl), priority=    0, match=(1), action=(next;)

Note that conntrack integration is not yet supported in ovs-sandbox, so the
OpenFlow flows will not represent what you’d see in a real environment.  The
logical flows described above give a very good idea of what the flows look like,
though.

[This blog post][openstack-ovn-acl-blog] discusses OVN ACLs from an OpenStack
perspective and also provides an example of what the resulting OpenFlow flows
look like.

7) Container Ports
------------------

OVN supports containers running directly on hypervisors as well as containers
running inside VMs.  This example shows how OVN provides network virtualization
to containers running inside VMs.  Details about how to use docker containers
in OVS can be found [here][openvswitch-docker].

To support container traffic created inside a VM and to distinguish network
traffic coming from different container vifs, a logical port needs to be created
for each container, with its parent name set to the VM's logical port and its
tag set to the VLAN tag of the container vif.

Start with a simple logical switch with 3 logical ports.

[View ovn/env7/setup.sh][env7setup].

    $ ovn/env7/setup.sh

Let’s create a container vif attached to the logical port 'sw0-port1' and
another container vif attached to the logical port 'sw0-port2'.

[View ovn/env7/add-container-ports.sh][env7contports]

    $ ovn/env7/add-container-ports.sh

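A container port is just a regular logical port that is created with a parent
port name and a VLAN tag.  For the first container, the script does roughly the
following (a sketch; the MAC address shown is only illustrative):

    # Logical switch for the container network.
    ovn-nbctl ls-add csw0

    # Container port whose parent is the VM's logical port sw0-port1 and whose
    # traffic leaves the VM tagged with VLAN 42.
    ovn-nbctl lsp-add csw0 csw0-cport1 sw0-port1 42
    ovn-nbctl lsp-set-addresses csw0-cport1 00:00:00:00:00:10
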
Run the `ovn-nbctl` command to see the logical ports.

    $ ovn-nbctl show


As you can see, a logical port 'csw0-cport1' has been created on a logical
switch 'csw0' with its parent set to 'sw0-port1' and its tag set to 42.
Similarly, a logical port 'csw0-cport2' has been created on the logical switch
'csw0' with its parent set to 'sw0-port2' and its tag set to 43.

Bridge 'br-vmport1' represents the OVS bridge running inside the VM connected to
the logical port 'sw0-port1'.  In this tutorial the OVS port for 'sw0-port1' is
created as a patch port with its peer connected to the OVS bridge 'br-vmport1'.
An OVS port 'cport1' is added to 'br-vmport1', representing the container
interface connected to the OVS bridge, with its VLAN tag set to 42.  Similarly,
'br-vmport2' represents the OVS bridge for the logical port 'sw0-port2', with
'cport2' connected to 'br-vmport2' and its VLAN tag set to 43.

This first trace shows a packet from 'csw0-cport1' with a destination MAC
address of 'csw0-cport2'.  The OVS bridge inside the VM, 'br-vmport1', tags the
traffic with VLAN id 42, and the traffic reaches br-int through the patch port.
As you can see below, `ovn-controller` has added a flow to strip the VLAN tag
and set reg6 and the metadata appropriately.

    $ ovs-ofctl -O OpenFlow13 dump-flows br-int
    OFPST_FLOW reply (OF1.3) (xid=0x2):
    cookie=0x0, duration=2767.032s, table=0, n_packets=0, n_bytes=0, priority=150,in_port=3,dl_vlan=42 actions=pop_vlan,set_field:0x3->reg5,set_field:0x2->metadata,set_field:0x1->reg6,resubmit(,16)
    cookie=0x0, duration=2767.002s, table=0, n_packets=0, n_bytes=0, priority=150,in_port=4,dl_vlan=43 actions=pop_vlan,set_field:0x4->reg5,set_field:0x2->metadata,set_field:0x2->reg6,resubmit(,16)
    cookie=0x0, duration=2767.032s, table=0, n_packets=0, n_bytes=0, priority=100,in_port=3 actions=set_field:0x1->reg5,set_field:0x1->metadata,set_field:0x1->reg6,resubmit(,16)
    cookie=0x0, duration=2767.001s, table=0, n_packets=0, n_bytes=0, priority=100,in_port=4 actions=set_field:0x2->reg5,set_field:0x1->metadata,set_field:0x2->reg6,resubmit(,16)

[View ovn/env7/packet1.sh][env7packet1].

    $ ovn/env7/packet1.sh


The second trace shows a packet from 'csw0-cport2' to 'csw0-cport1'.

[View ovn/env7/packet2.sh][env7packet2].

    $ ovn/env7/packet2.sh

You can extend this setup by adding additional container ports across two
hypervisors.  See example 3 above for the two-hypervisor setup.

[ovn-architecture(7)]:http://openvswitch.org/support/dist-docs/ovn-architecture.7.html
[Tutorial.md]:https://github.com/openvswitch/ovs/blob/master/tutorial/Tutorial.md
[ovn-nb(5)]:http://openvswitch.org/support/dist-docs/ovn-nb.5.html
[ovn-sb(5)]:http://openvswitch.org/support/dist-docs/ovn-sb.5.html
[vtep(5)]:http://openvswitch.org/support/dist-docs/vtep.5.html
[ovn-northd(8)]:http://openvswitch.org/support/dist-docs/ovn-northd.8.html
[ovn-controller(8)]:http://openvswitch.org/support/dist-docs/ovn-controller.8.html
[ovn-controller-vtep(8)]:http://openvswitch.org/support/dist-docs/ovn-controller-vtep.8.html
[vtep-ctl(8)]:http://openvswitch.org/support/dist-docs/vtep-ctl.8.html
[ovn-nbctl(8)]:http://openvswitch.org/support/dist-docs/ovn-nbctl.8.html
[ovn-sbctl(8)]:http://openvswitch.org/support/dist-docs/ovn-sbctl.8.html
[env1setup]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env1/setup.sh
[env1packet1]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env1/packet1.sh
[env1packet2]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env1/packet2.sh
[env1thirdport]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env1/add-third-port.sh
[env2setup]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env2/setup.sh
[env2packet1]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env2/packet1.sh
[env2packet2]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env2/packet2.sh
[env3setup]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env3/setup.sh
[env3packet1]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env3/packet1.sh
[env3packet2]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env3/packet2.sh
[env4setup1]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env4/setup1.sh
[env4setup2]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env4/setup2.sh
[env4packet1]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env4/packet1.sh
[env4packet2]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env4/packet2.sh
[env4packet3]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env4/packet3.sh
[env4packet4]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env4/packet4.sh
[env4packet5]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env4/packet5.sh
[env5setup]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env5/setup.sh
[env5packet1]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env5/packet1.sh
[env5packet2]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env5/packet2.sh
[env6setup]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env6/setup.sh
[env6acls]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env6/add-acls.sh
[env7setup]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env7/setup.sh
[env7contports]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env7/add-container-ports.sh
[env7packet1]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env7/packet1.sh
[env7packet2]:https://github.com/openvswitch/ovs/blob/master/tutorial/ovn/env7/packet2.sh
[openstack-ovn-acl-blog]:http://blog.russellbryant.net/2015/10/22/openstack-security-groups-using-ovn-acls/
[openvswitch-docker]:http://openvswitch.org/support/dist-docs/INSTALL.Docker.md.txt