+ <p><em>Architectural Logical Life Cycle of a Packet</em></p>
+
+ <p>
+ The following description focuses on the life cycle of a packet through
+ a logical datapath, ignoring physical details of the implementation.
+ Please refer to <em>Architectural Physical Life Cycle of a Packet</em> in
+ <code>ovn-architecture</code>(7) for the physical information.
+ </p>
+
+ <p>
+ The description here is written as if OVN itself executes these steps,
+ but in fact OVN (that is, <code>ovn-controller</code>) programs Open
+ vSwitch, via OpenFlow and OVSDB, to execute them on its behalf.
+ </p>
+
+ <p>
+ At a high level, OVN passes each packet through the logical datapath's
+ logical ingress pipeline, which may output the packet to one or more
+ logical ports or logical multicast groups. For each such logical output
+ port, OVN passes the packet through the datapath's logical egress
+ pipeline, which may either drop the packet or deliver it to the
+ destination. Between the two pipelines, outputs to logical multicast
+ groups are expanded into logical ports, so that the egress pipeline only
+ processes a single logical output port at a time. This is also the
+ point at which, when necessary, OVN encapsulates a packet in a tunnel
+ (or tunnels) to transmit it to remote hypervisors.
+ </p>
+
+ <p>
+ In more detail, to start, OVN searches the <ref table="Logical_Flow"/>
+ table for a row with the correct <ref column="logical_datapath"/>, a <ref
+ column="pipeline"/> of <code>ingress</code>, a <ref column="table_id"/>
+ of 0, and a <ref column="match"/> that is true for the packet. If none
+ is found, OVN drops the packet. If OVN finds more than one, it chooses
+ the row with the highest <ref column="priority"/>. Then OVN executes
+ each of the actions specified in the row's <ref column="actions"/> column,
+ in the order specified. Some actions, such as those to modify packet
+ headers, require no further details. The <code>next</code> and
+ <code>output</code> actions are special.
+ </p>
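+
+ <p>
+ For illustration, here is a hedged sketch of two such rows, shown in
+ an abbreviated, hypothetical rendering of the columns (the <ref
+ column="logical_datapath"/> column is omitted, and the port name
+ <code>lp1</code> and the MAC address are invented for the example):
+ </p>
+ <ul>
+ <li>
+ <code>pipeline=ingress, table_id=0, priority=100, match=(inport == "lp1" &amp;&amp; eth.src == 00:00:00:00:00:01), actions=(next;)</code>
+ </li>
+ <li>
+ <code>pipeline=ingress, table_id=0, priority=0, match=(1), actions=(drop;)</code>
+ </li>
+ </ul>
+ <p>
+ A packet arriving on <code>lp1</code> from that source address matches
+ both rows, so OVN executes the higher-priority row's
+ <code>next;</code> action; any other packet matches only the
+ priority-0 row and is dropped.
+ </p>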
+
+ <p>
+ The <code>next</code> action causes the above process to be repeated
+ recursively, except that OVN searches for a <ref column="table_id"/> of 1
+ instead of 0. Similarly, any <code>next</code> action in a row found in
+ that table would cause a further search for a <ref column="table_id"/> of
+ 2, and so on. When recursive processing completes, flow control returns
+ to the action following <code>next</code>.
+ </p>
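+
+ <p>
+ Continuing the sketch, a table 1 row reached via <code>next;</code>
+ might look like this (again with an invented port name and address):
+ </p>
+ <ul>
+ <li>
+ <code>pipeline=ingress, table_id=1, priority=50, match=(eth.dst == 00:00:00:00:00:02), actions=(outport = "lp2"; output;)</code>
+ </li>
+ </ul>
+ <p>
+ If <code>next;</code> were followed by further actions in the table 0
+ row, those actions would execute after this table 1 processing
+ completes.
+ </p>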
+
+ <p>
+ The <code>output</code> action also introduces recursion. Its effect
+ depends on the current value of the <code>outport</code> field. Suppose
+ <code>outport</code> designates a logical port. First, OVN compares
+ <code>inport</code> to <code>outport</code>; if they are equal, it treats
+ the <code>output</code> as a no-op. In the common case, where they are
+ different, the packet enters the egress pipeline. This transition to
+ the egress pipeline discards register data, e.g. <code>reg0</code> ...
+ <code>reg4</code>, and connection tracking state, to achieve uniform
+ behavior regardless of whether the egress pipeline is on a different
+ hypervisor (because registers aren't preserved across tunnel
+ encapsulation).
+ </p>
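+
+ <p>
+ In the hypothetical table 1 row above, <code>outport = "lp2";
+ output;</code> sets the logical output port and then outputs. OVN
+ compares <code>inport</code> (here <code>"lp1"</code>) to
+ <code>outport</code> (<code>"lp2"</code>); since they differ, the
+ packet enters <code>lp2</code>'s egress pipeline with its registers
+ and connection tracking state discarded.
+ </p>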
+
+ <p>
+ To execute the egress pipeline, OVN again searches the <ref
+ table="Logical_Flow"/> table for a row with correct <ref
+ column="logical_datapath"/>, a <ref column="table_id"/> of 0, a <ref
+ column="match"/> that is true for the packet, but now looking for a <ref
+ column="pipeline"/> of <code>egress</code>. If no matching row is found,
+ the output becomes a no-op. Otherwise, OVN executes the actions for
+ the matching row (if more than one row matches, the one with the
+ highest priority is chosen, as already described).
+ </p>
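+
+ <p>
+ As a minimal hedged sketch, an egress pipeline with a single table
+ might consist of just one catch-all row that delivers every packet:
+ </p>
+ <ul>
+ <li>
+ <code>pipeline=egress, table_id=0, priority=0, match=(1), actions=(output;)</code>
+ </li>
+ </ul>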
+
+ <p>
+ In the <code>egress</code> pipeline, the <code>next</code> action acts as
+ already described, except that it, of course, searches for
+ <code>egress</code> flows. The <code>output</code> action, however, now
+ directly outputs the packet to the output port (which is now fixed,
+ because <code>outport</code> is read-only within the egress pipeline).
+ </p>
+
+ <p>
+ The description earlier assumed that <code>outport</code> referred to a
+ logical port. If it instead designates a logical multicast group, then
+ the description above still applies, with the addition of fan-out from
+ the logical multicast group to each logical port in the group. For each
+ member of the group, OVN executes the logical pipeline as described, with
+ the logical output port replaced by the group member.
+ </p>
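+
+ <p>
+ As a hedged illustration, an ingress flow that floods broadcast
+ traffic might set <code>outport</code> to the name of a row in the
+ <ref table="Multicast_Group"/> table, such as a hypothetical
+ <code>_MC_flood</code> group containing every port in the logical
+ switch:
+ </p>
+ <ul>
+ <li>
+ <code>pipeline=ingress, table_id=1, priority=100, match=(eth.dst == ff:ff:ff:ff:ff:ff), actions=(outport = "_MC_flood"; output;)</code>
+ </li>
+ </ul>
+ <p>
+ OVN then runs the egress pipeline once per logical port in the group,
+ each time with <code>outport</code> set to that member.
+ </p>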
+
+ <p><em>Pipeline Stages</em></p>
+
+ <p>
+ <code>ovn-northd</code> is responsible for populating the
+ <ref table="Logical_Flow"/> table, so the stages are an
+ implementation detail and subject to change. This section
+ describes the current logical flow table.
+ </p>
+
+ <p>
+ The ingress pipeline consists of the following stages:
+ </p>
+ <ul>
+ <li>
+ Port Security (Table 0): Validates the source address, drops packets
+ with a VLAN tag, and, if port security is configured, verifies that
+ the logical port is allowed to send from the source address (see the
+ sketch after this list).
+ </li>
+
+ <li>
+ L2 Destination Lookup (Table 1): Forwards packets with known unicast
+ destination addresses to the appropriate logical port. Unicast
+ packets to unknown hosts are forwarded to logical ports configured
+ with the special <code>unknown</code> MAC address. Broadcast and
+ multicast packets are flooded to all ports in the logical switch.
+ </li>
+ </ul>
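+
+ <p>
+ As a hedged sketch tying these stages to concrete flows (port names
+ and addresses invented for illustration):
+ </p>
+ <ul>
+ <li>
+ <code>pipeline=ingress, table_id=0, priority=100, match=(vlan.present), actions=(drop;)</code>
+ </li>
+ <li>
+ <code>pipeline=ingress, table_id=0, priority=50, match=(inport == "lp1" &amp;&amp; eth.src == 00:00:00:00:00:01), actions=(next;)</code>
+ </li>
+ <li>
+ <code>pipeline=ingress, table_id=1, priority=50, match=(eth.dst == 00:00:00:00:00:02), actions=(outport = "lp2"; output;)</code>
+ </li>
+ </ul>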
+
+ <p>
+ The egress pipeline consists of the following stages:
+ </p>
+ <ul>
+ <li>
+ ACL (Table 0): Applies any specified access control lists (see the
+ sketch after this list).
+ </li>
+
+ <li>
+ Port Security (Table 1): If configured, verifies that the
+ logical port is allowed to receive packets with the destination
+ address.
+ </li>
+ </ul>
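+
+ <p>
+ Similarly, a hedged sketch of the egress stages, assuming a
+ hypothetical ACL that blocks traffic to TCP port 80:
+ </p>
+ <ul>
+ <li>
+ <code>pipeline=egress, table_id=0, priority=100, match=(tcp.dst == 80), actions=(drop;)</code>
+ </li>
+ <li>
+ <code>pipeline=egress, table_id=0, priority=0, match=(1), actions=(next;)</code>
+ </li>
+ <li>
+ <code>pipeline=egress, table_id=1, priority=50, match=(outport == "lp2" &amp;&amp; eth.dst == 00:00:00:00:00:02), actions=(output;)</code>
+ </li>
+ </ul>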
+