1 <?xml version="1.0" encoding="utf-8"?>
2 <manpage program="ovn-architecture" section="7" title="OVN Architecture">
4 <p>ovn-architecture -- Open Virtual Network architecture</p>
9 OVN, the Open Virtual Network, is a system to support virtual network
10 abstraction. OVN complements the existing capabilities of OVS to add
11 native support for virtual network abstractions, such as virtual L2 and L3
12 overlays and security groups. Services such as DHCP are also desirable
13 features. Just like OVS, OVN's design goal is to have a production-quality
14 implementation that can operate at significant scale.
18 An OVN deployment consists of several components:
24 A <dfn>Cloud Management System</dfn> (<dfn>CMS</dfn>), which is
25 OVN's ultimate client (via its users and administrators). OVN
26 integration requires installing a CMS-specific plugin and
related software (see below). OVN initially targets OpenStack.
32 We generally speak of ``the'' CMS, but one can imagine scenarios in
33 which multiple CMSes manage different parts of an OVN deployment.
38 An OVN Database physical or virtual node (or, eventually, cluster)
39 installed in a central location.
43 One or more (usually many) <dfn>hypervisors</dfn>. Hypervisors must run
44 Open vSwitch and implement the interface described in
45 <code>IntegrationGuide.md</code> in the OVS source tree. Any hypervisor
46 platform supported by Open vSwitch is acceptable.
51 Zero or more <dfn>gateways</dfn>. A gateway extends a tunnel-based
52 logical network into a physical network by bidirectionally forwarding
53 packets between tunnels and a physical Ethernet port. This allows
54 non-virtualized machines to participate in logical networks. A gateway
55 may be a physical host, a virtual machine, or an ASIC-based hardware
56 switch that supports the <code>vtep</code>(5) schema. (Support for the
latter will come later in the OVN implementation.)
Hypervisors and gateways are together called <dfn>transport nodes</dfn>
or <dfn>chassis</dfn>.
68 The diagram below shows how the major components of OVN and related
69 software interact. Starting at the top of the diagram, we have:
74 The Cloud Management System, as defined above.
79 The <dfn>OVN/CMS Plugin</dfn> is the component of the CMS that
80 interfaces to OVN. In OpenStack, this is a Neutron plugin.
81 The plugin's main purpose is to translate the CMS's notion of logical
82 network configuration, stored in the CMS's configuration database in a
CMS-specific format, into an intermediate representation understood by OVN.
88 This component is necessarily CMS-specific, so a new plugin needs to be
89 developed for each CMS that is integrated with OVN. All of the
90 components below this one in the diagram are CMS-independent.
96 The <dfn>OVN Northbound Database</dfn> receives the intermediate
97 representation of logical network configuration passed down by the
98 OVN/CMS Plugin. The database schema is meant to be ``impedance
99 matched'' with the concepts used in a CMS, so that it directly supports
100 notions of logical switches, routers, ACLs, and so on. See
101 <code>ovn-nb</code>(5) for details.
105 The OVN Northbound Database has only two clients: the OVN/CMS Plugin
106 above it and <code>ovn-northd</code> below it.
111 <code>ovn-northd</code>(8) connects to the OVN Northbound Database
112 above it and the OVN Southbound Database below it. It translates the
113 logical network configuration in terms of conventional network
114 concepts, taken from the OVN Northbound Database, into logical
115 datapath flows in the OVN Southbound Database below it.
120 The <dfn>OVN Southbound Database</dfn> is the center of the system.
121 Its clients are <code>ovn-northd</code>(8) above it and
122 <code>ovn-controller</code>(8) on every transport node below it.
126 The OVN Southbound Database contains three kinds of data: <dfn>Physical
127 Network</dfn> (PN) tables that specify how to reach hypervisor and
128 other nodes, <dfn>Logical Network</dfn> (LN) tables that describe the
129 logical network in terms of ``logical datapath flows,'' and
130 <dfn>Binding</dfn> tables that link logical network components'
131 locations to the physical network. The hypervisors populate the PN and
Port_Binding tables, whereas <code>ovn-northd</code>(8) populates the LN
tables.
137 OVN Southbound Database performance must scale with the number of
138 transport nodes. This will likely require some work on
139 <code>ovsdb-server</code>(1) as we encounter bottlenecks.
140 Clustering for availability may be needed.
146 The remaining components are replicated onto each hypervisor:
151 <code>ovn-controller</code>(8) is OVN's agent on each hypervisor and
152 software gateway. Northbound, it connects to the OVN Southbound
153 Database to learn about OVN configuration and status and to
154 populate the PN table and the <code>Chassis</code> column in
the <code>Binding</code> table with the hypervisor's status.
156 Southbound, it connects to <code>ovs-vswitchd</code>(8) as an
157 OpenFlow controller, for control over network traffic, and to the
158 local <code>ovsdb-server</code>(1) to allow it to monitor and
159 control Open vSwitch configuration.
163 <code>ovs-vswitchd</code>(8) and <code>ovsdb-server</code>(1) are
164 conventional components of Open vSwitch.
<pre fixed="yes">
                                  CMS
                                   |
                                   |
                       +-----------|-----------+
                       |           |           |
                       |     OVN/CMS Plugin    |
                       |           |           |
                       |           |           |
                       |   OVN Northbound DB   |
                       |           |           |
                       |           |           |
                       |       ovn-northd      |
                       |           |           |
                       +-----------|-----------+
                                   |
                                   |
                         +-------------------+
                         | OVN Southbound DB |
                         +-------------------+
                                   |
                                   |
                +------------------+------------------+
                |                  |                  |
HV 1            |                  |                  |           HV n
+---------------|---------------+  .  +---------------|---------------+
|               |               |  .  |               |               |
|        ovn-controller         |  .  |        ovn-controller         |
|         |          |          |  .  |         |          |          |
|         |          |          |     |         |          |          |
|  ovs-vswitchd   ovsdb-server  |     |  ovs-vswitchd   ovsdb-server  |
|                               |     |                               |
+-------------------------------+     +-------------------------------+
</pre>
203 <h2>Chassis Setup</h2>
206 Each chassis in an OVN deployment must be configured with an Open vSwitch
207 bridge dedicated for OVN's use, called the <dfn>integration bridge</dfn>.
208 System startup scripts may create this bridge prior to starting
209 <code>ovn-controller</code> if desired. If this bridge does not exist when
<code>ovn-controller</code> starts, it will be created automatically with the default
211 configuration suggested below. The ports on the integration bridge include:
216 On any chassis, tunnel ports that OVN uses to maintain logical network
connectivity. <code>ovn-controller</code> adds, updates, and removes these
tunnel ports.
222 On a hypervisor, any VIFs that are to be attached to logical networks.
223 The hypervisor itself, or the integration between Open vSwitch and the
224 hypervisor (described in <code>IntegrationGuide.md</code>) takes care of
225 this. (This is not part of OVN or new to OVN; this is pre-existing
integration work that has already been done on hypervisors that support OVS.)
231 On a gateway, the physical port used for logical network connectivity.
232 System startup scripts add this port to the bridge prior to starting
233 <code>ovn-controller</code>. This can be a patch port to another bridge,
234 instead of a physical port, in more sophisticated setups.
239 Other ports should not be attached to the integration bridge. In
240 particular, physical ports attached to the underlay network (as opposed to
241 gateway ports, which are physical ports attached to logical networks) must
242 not be attached to the integration bridge. Underlay physical ports should
243 instead be attached to a separate Open vSwitch bridge (they need not be
244 attached to any bridge at all, in fact).
248 The integration bridge should be configured as described below.
249 The effect of each of these settings is documented in
250 <code>ovs-vswitchd.conf.db</code>(5):
253 <!-- Keep the following in sync with create_br_int() in
254 ovn/controller/ovn-controller.c. -->
256 <dt><code>fail-mode=secure</code></dt>
258 Avoids switching packets between isolated logical networks before
259 <code>ovn-controller</code> starts up. See <code>Controller Failure
260 Settings</code> in <code>ovs-vsctl</code>(8) for more information.
263 <dt><code>other-config:disable-in-band=true</code></dt>
265 Suppresses in-band control flows for the integration bridge. It would be
266 unusual for such flows to show up anyway, because OVN uses a local
267 controller (over a Unix domain socket) instead of a remote controller.
268 It's possible, however, for some other bridge in the same system to have
269 an in-band remote controller, and in that case this suppresses the flows
270 that in-band control would ordinarily set up. See <code>In-Band
271 Control</code> in <code>DESIGN.md</code> for more information.
276 The customary name for the integration bridge is <code>br-int</code>, but
277 another name may be used.
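<p>
  For example, a suitable integration bridge might be created ahead of time
  with <code>ovs-vsctl</code>(8).  This is only a sketch of the settings
  listed above; <code>ovn-controller</code> applies the same configuration
  itself if the bridge is absent:
</p>

<pre fixed="yes">
$ ovs-vsctl --may-exist add-br br-int \
    -- set Bridge br-int fail-mode=secure \
    -- set Bridge br-int other-config:disable-in-band=true
</pre>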
280 <h2>Logical Networks</h2>
A <dfn>logical network</dfn> implements the same concepts as a physical
network, but it is insulated from the physical network with tunnels or
285 other encapsulations. This allows logical networks to have separate IP and
286 other address spaces that overlap, without conflicting, with those used for
287 physical networks. Logical network topologies can be arranged without
288 regard for the topologies of the physical networks on which they run.
292 Logical network concepts in OVN include:
297 <dfn>Logical switches</dfn>, the logical version of Ethernet switches.
301 <dfn>Logical routers</dfn>, the logical version of IP routers. Logical
302 switches and routers can be connected into sophisticated topologies.
306 <dfn>Logical datapaths</dfn> are the logical version of an OpenFlow
switch. Logical switches and routers are both implemented as logical
datapaths.
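<p>
  As an illustrative sketch only (a CMS normally drives this through its
  plugin rather than by hand), a logical switch with two ports might be
  created directly with <code>ovn-nbctl</code>(8).  The switch name, port
  names, and MAC addresses below are invented:
</p>

<pre fixed="yes">
$ ovn-nbctl ls-add sw0
$ ovn-nbctl lsp-add sw0 sw0-port1
$ ovn-nbctl lsp-set-addresses sw0-port1 00:00:00:00:00:01
$ ovn-nbctl lsp-add sw0 sw0-port2
$ ovn-nbctl lsp-set-addresses sw0-port2 00:00:00:00:00:02
</pre>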
312 <h2>Life Cycle of a VIF</h2>
315 Tables and their schemas presented in isolation are difficult to
316 understand. Here's an example.
A VIF on a hypervisor is a virtual network interface attached either
to a VM or a container running directly on that hypervisor (this is
different from the interface of a container running inside a VM).
326 The steps in this example refer often to details of the OVN and OVN
327 Northbound database schemas. Please see <code>ovn-sb</code>(5) and
<code>ovn-nb</code>(5), respectively, for the full story on these
databases.
334 A VIF's life cycle begins when a CMS administrator creates a new VIF
335 using the CMS user interface or API and adds it to a switch (one
336 implemented by OVN as a logical switch). The CMS updates its own
configuration. This includes associating a unique, persistent identifier
<var>vif-id</var> and an Ethernet address <var>mac</var> with the VIF.
342 The CMS plugin updates the OVN Northbound database to include the new
343 VIF, by adding a row to the <code>Logical_Switch_Port</code>
344 table. In the new row, <code>name</code> is <var>vif-id</var>,
345 <code>mac</code> is <var>mac</var>, <code>switch</code> points to
346 the OVN logical switch's Logical_Switch record, and other columns
347 are initialized appropriately.
351 <code>ovn-northd</code> receives the OVN Northbound database update. In
352 turn, it makes the corresponding updates to the OVN Southbound database,
353 by adding rows to the OVN Southbound database <code>Logical_Flow</code>
table to reflect the new port, e.g. adding a flow to recognize that packets
destined to the new port's MAC address should be delivered to it, and
updating the flow that delivers broadcast and multicast packets to include
357 the new port. It also creates a record in the <code>Binding</code> table
358 and populates all its columns except the column that identifies the
359 <code>chassis</code>.
363 On every hypervisor, <code>ovn-controller</code> receives the
364 <code>Logical_Flow</code> table updates that <code>ovn-northd</code> made
365 in the previous step. As long as the VM that owns the VIF is powered
366 off, <code>ovn-controller</code> cannot do much; it cannot, for example,
367 arrange to send packets to or receive packets from the VIF, because the
368 VIF does not actually exist anywhere.
372 Eventually, a user powers on the VM that owns the VIF. On the hypervisor
373 where the VM is powered on, the integration between the hypervisor and
374 Open vSwitch (described in <code>IntegrationGuide.md</code>) adds the VIF
375 to the OVN integration bridge and stores <var>vif-id</var> in
376 <code>external-ids</code>:<code>iface-id</code> to indicate that the
377 interface is an instantiation of the new VIF. (None of this code is new
378 in OVN; this is pre-existing integration work that has already been done
379 on hypervisors that support OVS.)
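<p>
  Outside of a full hypervisor integration, the same effect can be sketched
  by hand with <code>ovs-vsctl</code>(8).  The interface name and the
  <code>iface-id</code> value below are examples only (an internal port
  stands in for a real VIF):
</p>

<pre fixed="yes">
$ ovs-vsctl add-port br-int vif1 \
    -- set Interface vif1 type=internal \
       external_ids:iface-id=sw0-port1
</pre>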
383 On the hypervisor where the VM is powered on, <code>ovn-controller</code>
384 notices <code>external-ids</code>:<code>iface-id</code> in the new
385 Interface. In response, in the OVN Southbound DB, it updates the
386 <code>Binding</code> table's <code>chassis</code> column for the
387 row that links the logical port from <code>external-ids</code>:<code>
388 iface-id</code> to the hypervisor. Afterward, <code>ovn-controller</code>
389 updates the local hypervisor's OpenFlow tables so that packets to and from
390 the VIF are properly handled.
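<p>
  At this point, an administrator can confirm the binding on the central
  node, for example with <code>ovn-sbctl</code>(8), which summarizes each
  chassis and the logical ports bound to it:
</p>

<pre fixed="yes">
$ ovn-sbctl show
</pre>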
394 Some CMS systems, including OpenStack, fully start a VM only when its
395 networking is ready. To support this, <code>ovn-northd</code> notices
the <code>chassis</code> column updated for the row in the
<code>Binding</code> table and pushes this upward by updating the
398 <ref column="up" table="Logical_Switch_Port" db="OVN_NB"/> column
399 in the OVN Northbound database's <ref table="Logical_Switch_Port"
400 db="OVN_NB"/> table to indicate that the VIF is now up. The CMS,
401 if it uses this feature, can then react by allowing the VM's
402 execution to proceed.
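<p>
  As a sketch, a CMS or administrator might block until the port comes up
  using <code>ovn-nbctl</code>'s generic database commands, assuming the
  example port name used earlier:
</p>

<pre fixed="yes">
$ ovn-nbctl wait-until Logical_Switch_Port sw0-port1 up=true
</pre>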
406 On every hypervisor but the one where the VIF resides,
407 <code>ovn-controller</code> notices the completely populated row in the
408 <code>Binding</code> table. This provides <code>ovn-controller</code>
409 the physical location of the logical port, so each instance updates the
410 OpenFlow tables of its switch (based on logical datapath flows in the OVN
411 DB <code>Logical_Flow</code> table) so that packets to and from the VIF
412 can be properly handled via tunnels.
416 Eventually, a user powers off the VM that owns the VIF. On the
hypervisor where the VM was powered off, the VIF is deleted from the OVN
integration bridge.
422 On the hypervisor where the VM was powered off,
423 <code>ovn-controller</code> notices that the VIF was deleted. In
response, it clears the <code>chassis</code> column content in the
<code>Binding</code> table row for the logical port.
429 On every hypervisor, <code>ovn-controller</code> notices the empty
<code>chassis</code> column in the <code>Binding</code> table's row
431 for the logical port. This means that <code>ovn-controller</code> no
432 longer knows the physical location of the logical port, so each instance
433 updates its OpenFlow table to reflect that.
437 Eventually, when the VIF (or its entire VM) is no longer needed by
438 anyone, an administrator deletes the VIF using the CMS user interface or
439 API. The CMS updates its own configuration.
443 The CMS plugin removes the VIF from the OVN Northbound database,
444 by deleting its row in the <code>Logical_Switch_Port</code> table.
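<p>
  Done by hand, this corresponds to an <code>ovn-nbctl</code>(8) invocation
  such as the following (port name as in the earlier sketches):
</p>

<pre fixed="yes">
$ ovn-nbctl lsp-del sw0-port1
</pre>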
448 <code>ovn-northd</code> receives the OVN Northbound update and in turn
449 updates the OVN Southbound database accordingly, by removing or updating
450 the rows from the OVN Southbound database <code>Logical_Flow</code> table
and <code>Binding</code> table that were related to the now-destroyed VIF.
456 On every hypervisor, <code>ovn-controller</code> receives the
457 <code>Logical_Flow</code> table updates that <code>ovn-northd</code> made
458 in the previous step. <code>ovn-controller</code> updates OpenFlow
459 tables to reflect the update, although there may not be much to do, since
460 the VIF had already become unreachable when it was removed from the
461 <code>Binding</code> table in a previous step.
465 <h2>Life Cycle of a Container Interface Inside a VM</h2>
468 OVN provides virtual network abstractions by converting information
written in the OVN_NB database to OpenFlow flows in each hypervisor. Secure
470 virtual networking for multi-tenants can only be provided if OVN controller
471 is the only entity that can modify flows in Open vSwitch. When the
472 Open vSwitch integration bridge resides in the hypervisor, it is a
473 fair assumption to make that tenant workloads running inside VMs cannot
474 make any changes to Open vSwitch flows.
478 If the infrastructure provider trusts the applications inside the
479 containers not to break out and modify the Open vSwitch flows, then
480 containers can be run in hypervisors. This is also the case when
containers are run inside the VMs and the Open vSwitch integration bridge
482 with flows added by OVN controller resides in the same VM. For both
483 the above cases, the workflow is the same as explained with an example
484 in the previous section ("Life Cycle of a VIF").
488 This section talks about the life cycle of a container interface (CIF)
489 when containers are created in the VMs and the Open vSwitch integration
490 bridge resides inside the hypervisor. In this case, even if a container
491 application breaks out, other tenants are not affected because the
492 containers running inside the VMs cannot modify the flows in the
493 Open vSwitch integration bridge.
497 When multiple containers are created inside a VM, there are multiple
498 CIFs associated with them. The network traffic associated with these
CIFs needs to reach the Open vSwitch integration bridge running in the
500 hypervisor for OVN to support virtual network abstractions. OVN should
501 also be able to distinguish network traffic coming from different CIFs.
502 There are two ways to distinguish network traffic of CIFs.
506 One way is to provide one VIF for every CIF (1:1 model). This means that
507 there could be a lot of network devices in the hypervisor. This would slow
508 down OVS because of all the additional CPU cycles needed for the management
509 of all the VIFs. It would also mean that the entity creating the
510 containers in a VM should also be able to create the corresponding VIFs in
515 The second way is to provide a single VIF for all the CIFs (1:many model).
516 OVN could then distinguish network traffic coming from different CIFs via
517 a tag written in every packet. OVN uses this mechanism and uses VLAN as
518 the tagging mechanism.
523 A CIF's life cycle begins when a container is spawned inside a VM by
either the same CMS that created the VM, a tenant that owns that VM,
or even a container orchestration system that is different from the CMS
that initially created the VM. Whoever the entity is, it will need to
know the <var>vif-id</var> that is associated with the network interface
of the VM through which the container interface's network traffic is
expected to go. The entity that creates the container interface
will also need to choose an unused VLAN inside that VM.
534 The container spawning entity (either directly or through the CMS that
535 manages the underlying infrastructure) updates the OVN Northbound
536 database to include the new CIF, by adding a row to the
537 <code>Logical_Switch_Port</code> table. In the new row,
538 <code>name</code> is any unique identifier,
539 <code>parent_name</code> is the <var>vif-id</var> of the VM
through which the CIF's network traffic is expected to go,
and <code>tag</code> is the VLAN tag that identifies the
network traffic of that CIF.
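<p>
  <code>ovn-nbctl</code>(8) can express this by passing a parent port and a
  tag to <code>lsp-add</code>; the switch, port, and parent names and the
  VLAN tag below are examples only:
</p>

<pre fixed="yes">
$ ovn-nbctl lsp-add sw0 cif1 sw0-port1 42
</pre>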
546 <code>ovn-northd</code> receives the OVN Northbound database update. In
547 turn, it makes the corresponding updates to the OVN Southbound database,
548 by adding rows to the OVN Southbound database's <code>Logical_Flow</code>
549 table to reflect the new port and also by creating a new row in the
550 <code>Binding</code> table and populating all its columns except the
551 column that identifies the <code>chassis</code>.
555 On every hypervisor, <code>ovn-controller</code> subscribes to the
556 changes in the <code>Binding</code> table. When a new row is created
557 by <code>ovn-northd</code> that includes a value in
the <code>parent_port</code> column of the <code>Binding</code> table, the
<code>ovn-controller</code> in the hypervisor whose OVN integration bridge
has an interface with that same <var>vif-id</var> value in
<code>external-ids</code>:<code>iface-id</code>
updates the local hypervisor's OpenFlow tables so that packets to and
from the VIF with the particular VLAN <code>tag</code> are properly
handled. Afterward it updates the <code>chassis</code> column of
the <code>Binding</code> table to reflect the physical location.
569 One can only start the application inside the container after the
570 underlying network is ready. To support this, <code>ovn-northd</code>
571 notices the updated <code>chassis</code> column in <code>Binding</code>
572 table and updates the <ref column="up" table="Logical_Switch_Port"
573 db="OVN_NB"/> column in the OVN Northbound database's
574 <ref table="Logical_Switch_Port" db="OVN_NB"/> table to indicate that the
CIF is now up. The entity responsible for starting the container application
576 queries this value and starts the application.
Eventually, the entity that created and started the container stops it.
The entity, through the CMS (or directly), deletes its row in the
582 <code>Logical_Switch_Port</code> table.
586 <code>ovn-northd</code> receives the OVN Northbound update and in turn
587 updates the OVN Southbound database accordingly, by removing or updating
588 the rows from the OVN Southbound database <code>Logical_Flow</code> table
589 that were related to the now-destroyed CIF. It also deletes the row in
590 the <code>Binding</code> table for that CIF.
594 On every hypervisor, <code>ovn-controller</code> receives the
595 <code>Logical_Flow</code> table updates that <code>ovn-northd</code> made
596 in the previous step. <code>ovn-controller</code> updates OpenFlow
597 tables to reflect the update.
601 <h2>Architectural Physical Life Cycle of a Packet</h2>
604 This section describes how a packet travels from one virtual machine or
605 container to another through OVN. This description focuses on the physical
606 treatment of a packet; for a description of the logical life cycle of a
607 packet, please refer to the <code>Logical_Flow</code> table in
608 <code>ovn-sb</code>(5).
This section mentions several data and metadata fields, for clarity
summarized here:
619 When OVN encapsulates a packet in Geneve or another tunnel, it attaches
620 extra data to it to allow the receiving OVN instance to process it
621 correctly. This takes different forms depending on the particular
622 encapsulation, but in each case we refer to it here as the ``tunnel
623 key.'' See <code>Tunnel Encapsulations</code>, below, for details.
626 <dt>logical datapath field</dt>
A field that denotes the logical datapath through which a packet is being
processed.
630 <!-- Keep the following in sync with MFF_LOG_DATAPATH in
631 ovn/lib/logical-fields.h. -->
632 OVN uses the field that OpenFlow 1.1+ simply (and confusingly) calls
633 ``metadata'' to store the logical datapath. (This field is passed across
634 tunnels as part of the tunnel key.)
637 <dt>logical input port field</dt>
640 A field that denotes the logical port from which the packet
641 entered the logical datapath.
642 <!-- Keep the following in sync with MFF_LOG_INPORT in
643 ovn/lib/logical-fields.h. -->
644 OVN stores this in Nicira extension register number 14.
648 Geneve and STT tunnels pass this field as part of the tunnel key.
649 Although VXLAN tunnels do not explicitly carry a logical input port,
OVN only uses VXLAN to communicate with gateways that, from OVN's
perspective, consist of only a single logical port, so that OVN can set
the logical input port field to this one on ingress to the OVN logical
pipeline.
657 <dt>logical output port field</dt>
660 A field that denotes the logical port from which the packet will
661 leave the logical datapath. This is initialized to 0 at the
662 beginning of the logical ingress pipeline.
663 <!-- Keep the following in sync with MFF_LOG_OUTPORT in
664 ovn/lib/logical-fields.h. -->
665 OVN stores this in Nicira extension register number 15.
669 Geneve and STT tunnels pass this field as part of the tunnel key.
670 VXLAN tunnels do not transmit the logical output port field.
674 <dt>conntrack zone field for logical ports</dt>
676 A field that denotes the connection tracking zone for logical ports.
677 The value only has local significance and is not meaningful between
678 chassis. This is initialized to 0 at the beginning of the logical
679 <!-- Keep the following in sync with MFF_LOG_CT_ZONE in
680 ovn/lib/logical-fields.h. -->
ingress pipeline. OVN stores this in Nicira extension register number 13.
685 <dt>conntrack zone fields for Gateway router</dt>
687 Fields that denote the connection tracking zones for Gateway routers.
688 These values only have local significance (only on chassis that have
Gateway routers instantiated) and are not meaningful between
690 chassis. OVN stores the zone information for DNATting in Nicira
691 <!-- Keep the following in sync with MFF_LOG_DNAT_ZONE and
692 MFF_LOG_SNAT_ZONE in ovn/lib/logical-fields.h. -->
693 extension register number 11 and zone information for SNATing in Nicira
694 extension register number 12.
699 The VLAN ID is used as an interface between OVN and containers nested
inside a VM (see <code>Life Cycle of a Container Interface Inside a
VM</code>, above, for more information).
706 Initially, a VM or container on the ingress hypervisor sends a packet on a
707 port attached to the OVN integration bridge. Then:
713 OpenFlow table 0 performs physical-to-logical translation. It matches
714 the packet's ingress port. Its actions annotate the packet with
715 logical metadata, by setting the logical datapath field to identify the
716 logical datapath that the packet is traversing and the logical input
717 port field to identify the ingress port. Then it resubmits to table 16
718 to enter the logical ingress pipeline.
722 Packets that originate from a container nested within a VM are treated
723 in a slightly different way. The originating container can be
724 distinguished based on the VIF-specific VLAN ID, so the
725 physical-to-logical translation flows additionally match on VLAN ID and
726 the actions strip the VLAN header. Following this step, OVN treats
727 packets from containers just like any other packets.
731 Table 0 also processes packets that arrive from other chassis. It
732 distinguishes them from other packets by ingress port, which is a
733 tunnel. As with packets just entering the OVN pipeline, the actions
734 annotate these packets with logical datapath and logical ingress port
735 metadata. In addition, the actions set the logical output port field,
736 which is available because in OVN tunneling occurs after the logical
737 output port is known. These three pieces of information are obtained
738 from the tunnel encapsulation metadata (see <code>Tunnel
739 Encapsulations</code> for encoding details). Then the actions resubmit
740 to table 33 to enter the logical egress pipeline.
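<p>
  For illustration only, using the register assignments described above,
  the physical-to-logical flow for a local VIF is of roughly the following
  shape.  The OpenFlow port number and the logical datapath and port
  identifiers are hypothetical, and unrelated fields are trimmed from the
  output:
</p>

<pre fixed="yes">
$ ovs-ofctl dump-flows br-int table=0
 ... table=0, priority=100,in_port=5
     actions=set_field:0x7->metadata,set_field:0x1->reg14,resubmit(,16)
</pre>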
746 OpenFlow tables 16 through 31 execute the logical ingress pipeline from
747 the <code>Logical_Flow</code> table in the OVN Southbound database.
748 These tables are expressed entirely in terms of logical concepts like
749 logical ports and logical datapaths. A big part of
750 <code>ovn-controller</code>'s job is to translate them into equivalent
751 OpenFlow (in particular it translates the table numbers:
<code>Logical_Flow</code> tables 0 through 15 become OpenFlow tables 16
through 31).
757 Most OVN actions have fairly obvious implementations in OpenFlow (with
758 OVS extensions), e.g. <code>next;</code> is implemented as
759 <code>resubmit</code>, <code><var>field</var> =
760 <var>constant</var>;</code> as <code>set_field</code>. A few are worth
761 describing in more detail:
765 <dt><code>output:</code></dt>
767 Implemented by resubmitting the packet to table 32. If the pipeline
768 executes more than one <code>output</code> action, then each one is
769 separately resubmitted to table 32. This can be used to send
770 multiple copies of the packet to multiple ports. (If the packet was
771 not modified between the <code>output</code> actions, and some of the
772 copies are destined to the same hypervisor, then using a logical
773 multicast output port would save bandwidth between hypervisors.)
776 <dt><code>get_arp(<var>P</var>, <var>A</var>);</code></dt>
779 Implemented by storing arguments into OpenFlow fields, then
780 resubmitting to table 65, which <code>ovn-controller</code>
781 populates with flows generated from the <code>MAC_Binding</code>
782 table in the OVN Southbound database. If there is a match in table
783 65, then its actions store the bound MAC in the Ethernet
784 destination address field.
788 (The OpenFlow actions save and restore the OpenFlow fields used for
the arguments, so that the OVN actions do not have to be aware of this
temporary use.)
794 <dt><code>put_arp(<var>P</var>, <var>A</var>, <var>E</var>);</code></dt>
797 Implemented by storing the arguments into OpenFlow fields, then
798 outputting a packet to <code>ovn-controller</code>, which updates
799 the <code>MAC_Binding</code> table.
803 (The OpenFlow actions save and restore the OpenFlow fields used for
the arguments, so that the OVN actions do not have to be aware of this
temporary use.)
813 OpenFlow tables 32 through 47 implement the <code>output</code> action
814 in the logical ingress pipeline. Specifically, table 32 handles
815 packets to remote hypervisors, table 33 handles packets to the local
816 hypervisor, and table 34 discards packets whose logical ingress and
817 egress port are the same.
821 Logical patch ports are a special case. Logical patch ports do not
822 have a physical location and effectively reside on every hypervisor.
823 Thus, flow table 33, for output to ports on the local hypervisor,
824 naturally implements output to unicast logical patch ports too.
825 However, applying the same logic to a logical patch port that is part
826 of a logical multicast group yields packet duplication, because each
827 hypervisor that contains a logical port in the multicast group will
828 also output the packet to the logical patch port. Thus, multicast
829 groups implement output to logical patch ports in table 32.
833 Each flow in table 32 matches on a logical output port for unicast or
834 multicast logical ports that include a logical port on a remote
835 hypervisor. Each flow's actions implement sending a packet to the port
836 it matches. For unicast logical output ports on remote hypervisors,
837 the actions set the tunnel key to the correct value, then send the
838 packet on the tunnel port to the correct hypervisor. (When the remote
839 hypervisor receives the packet, table 0 there will recognize it as a
840 tunneled packet and pass it along to table 33.) For multicast logical
841 output ports, the actions send one copy of the packet to each remote
842 hypervisor, in the same way as for unicast destinations. If a
843 multicast group includes a logical port or ports on the local
844 hypervisor, then its actions also resubmit to table 33. Table 32 also
includes a fallback flow that resubmits to table 33 if there is no other
match.
850 Flows in table 33 resemble those in table 32 but for logical ports that
851 reside locally rather than remotely. For unicast logical output ports
852 on the local hypervisor, the actions just resubmit to table 34. For
853 multicast output ports that include one or more logical ports on the
854 local hypervisor, for each such logical port <var>P</var>, the actions
change the logical output port to <var>P</var>, then resubmit to table 34.
860 A special case is that when a localnet port exists on the datapath,
a remote port is reached by switching through the localnet port. In this
case, instead of adding a flow in table 32 to reach the remote port, a
flow is added in table 33 to switch the logical output port to the
localnet port and resubmit to table 33, as if the packet were unicast to
a logical port on the local hypervisor.
869 Table 34 matches and drops packets for which the logical input and
870 output ports are the same. It resubmits other packets to table 48.
876 OpenFlow tables 48 through 63 execute the logical egress pipeline from
877 the <code>Logical_Flow</code> table in the OVN Southbound database.
878 The egress pipeline can perform a final stage of validation before
879 packet delivery. Eventually, it may execute an <code>output</code>
880 action, which <code>ovn-controller</code> implements by resubmitting to
881 table 64. A packet for which the pipeline never executes
882 <code>output</code> is effectively dropped (although it may have been
883 transmitted through a tunnel across a physical network).
The egress pipeline cannot change the logical output port or cause further
tunneling.
894 OpenFlow table 64 performs logical-to-physical translation, the
895 opposite of table 0. It matches the packet's logical egress port. Its
896 actions output the packet to the port attached to the OVN integration
897 bridge that represents that logical port. If the logical egress port
is a container nested within a VM, then before sending the packet the
899 actions push on a VLAN header with an appropriate VLAN ID.
903 If the logical egress port is a logical patch port, then table 64
904 outputs to an OVS patch port that represents the logical patch port.
905 The packet re-enters the OpenFlow flow table from the OVS patch port's
906 peer in table 0, which identifies the logical datapath and logical
907 input port based on the OVS patch port's OpenFlow port number.
912 <h2>Life Cycle of a VTEP gateway</h2>
915 A gateway is a chassis that forwards traffic between the OVN-managed
916 part of a logical network and a physical VLAN, extending a
917 tunnel-based logical network into a physical network.
921 The steps below refer often to details of the OVN and VTEP database
922 schemas. Please see <code>ovn-sb</code>(5), <code>ovn-nb</code>(5)
and <code>vtep</code>(5), respectively, for the full story on these
databases.
929 A VTEP gateway's life cycle begins with the administrator registering
930 the VTEP gateway as a <code>Physical_Switch</code> table entry in the
931 <code>VTEP</code> database. The <code>ovn-controller-vtep</code>
connected to this VTEP database will recognize the new VTEP gateway
933 and create a new <code>Chassis</code> table entry for it in the
934 <code>OVN_Southbound</code> database.
938 The administrator can then create a new <code>Logical_Switch</code>
table entry, and bind a particular VLAN on a VTEP gateway's port to
940 any VTEP logical switch. Once a VTEP logical switch is bound to
941 a VTEP gateway, the <code>ovn-controller-vtep</code> will detect
942 it and add its name to the <var>vtep_logical_switches</var>
943 column of the <code>Chassis</code> table in the <code>
OVN_Southbound</code> database. Note that the <var>tunnel_key</var>
column of the VTEP logical switch is not filled in at creation. The
<code>ovn-controller-vtep</code> will set the column when the
corresponding VTEP logical switch is bound to an OVN logical network.
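<p>
  With <code>vtep-ctl</code>(8), such a binding might look like the
  following; the physical switch, port, VLAN, and logical switch names are
  examples only:
</p>

<pre fixed="yes">
$ vtep-ctl add-ls ls0
$ vtep-ctl bind-ls br-vtep p0 101 ls0
</pre>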
951 Now, the administrator can use the CMS to add a VTEP logical switch
952 to the OVN logical network. To do that, the CMS must first create a
953 new <code>Logical_Switch_Port</code> table entry in the <code>
954 OVN_Northbound</code> database. Then, the <var>type</var> column
955 of this entry must be set to "vtep". Next, the <var>
956 vtep-logical-switch</var> and <var>vtep-physical-switch</var> keys
957 in the <var>options</var> column must also be specified, since
958 multiple VTEP gateways can attach to the same VTEP logical switch.
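<p>
  Sketched with <code>ovn-nbctl</code>(8), reusing the example names from
  the previous step:
</p>

<pre fixed="yes">
$ ovn-nbctl lsp-add sw0 sw0-vtep
$ ovn-nbctl lsp-set-type sw0-vtep vtep
$ ovn-nbctl lsp-set-options sw0-vtep vtep-physical-switch=br-vtep \
                                     vtep-logical-switch=ls0
</pre>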
962 The newly created logical port in the <code>OVN_Northbound</code>
963 database and its configuration will be passed down to the <code>
964 OVN_Southbound</code> database as a new <code>Port_Binding</code>
965 table entry. The <code>ovn-controller-vtep</code> will recognize the
966 change and bind the logical port to the corresponding VTEP gateway
chassis. Binding the same VTEP logical switch to different OVN
logical networks is not allowed, and a warning will be
969 generated in the log.
Besides binding to the VTEP gateway chassis, the <code>
974 ovn-controller-vtep</code> will update the <var>tunnel_key</var>
975 column of the VTEP logical switch to the corresponding <code>
976 Datapath_Binding</code> table entry's <var>tunnel_key</var> for the
977 bound OVN logical network.
981 Next, the <code>ovn-controller-vtep</code> will keep reacting to the
configuration change in the <code>Port_Binding</code> table in the
<code>OVN_Southbound</code> database, and updating the
984 <code>Ucast_Macs_Remote</code> table in the <code>VTEP</code> database.
985 This allows the VTEP gateway to understand where to forward the unicast
986 traffic coming from the extended external network.
990 Eventually, the VTEP gateway's life cycle ends when the administrator
991 unregisters the VTEP gateway from the <code>VTEP</code> database.
992 The <code>ovn-controller-vtep</code> will recognize the event and
993 remove all related configurations (<code>Chassis</code> table entry
994 and port bindings) in the <code>OVN_Southbound</code> database.
998 When the <code>ovn-controller-vtep</code> is terminated, all related
999 configurations in the <code>OVN_Southbound</code> database and
the <code>VTEP</code> database will be cleaned up, including
1001 <code>Chassis</code> table entries for all registered VTEP gateways
1002 and their port bindings, and all <code>Ucast_Macs_Remote</code> table
1003 entries and the <code>Logical_Switch</code> tunnel keys.
1007 <h1>Design Decisions</h1>
1009 <h2>Tunnel Encapsulations</h2>
1012 OVN annotates logical network packets that it sends from one hypervisor to
1013 another with the following three pieces of metadata, which are encoded in
1014 an encapsulation-specific fashion:
1019 24-bit logical datapath identifier, from the <code>tunnel_key</code>
1020 column in the OVN Southbound <code>Datapath_Binding</code> table.
1024 15-bit logical ingress port identifier. ID 0 is reserved for internal
1025 use within OVN. IDs 1 through 32767, inclusive, may be assigned to
1026 logical ports (see the <code>tunnel_key</code> column in the OVN
1027 Southbound <code>Port_Binding</code> table).
1031 16-bit logical egress port identifier. IDs 0 through 32767 have the same
1032 meaning as for logical ingress ports. IDs 32768 through 65535,
1033 inclusive, may be assigned to logical multicast groups (see the
1034 <code>tunnel_key</code> column in the OVN Southbound
1035 <code>Multicast_Group</code> table).
1040 For hypervisor-to-hypervisor traffic, OVN supports only Geneve and STT
1041 encapsulations, for the following reasons:
1046 Only STT and Geneve support the large amounts of metadata (over 32 bits
1047 per packet) that OVN uses (as described above).
STT and Geneve use randomized UDP or TCP source ports, allowing
efficient distribution among multiple paths in environments that use ECMP
in their underlay.
NICs are available to offload STT and Geneve encapsulation and
decapsulation.
1063 Due to its flexibility, the preferred encapsulation between hypervisors is
1064 Geneve. For Geneve encapsulation, OVN transmits the logical datapath
1065 identifier in the Geneve VNI.
1067 <!-- Keep the following in sync with ovn/controller/physical.h. -->
1068 OVN transmits the logical ingress and logical egress ports in a TLV with
class 0x0102, type 0, and a 32-bit value encoded as follows, from MSB to
LSB:
1075 <bits name="rsv" above="1" below="0" width=".25"/>
1076 <bits name="ingress port" above="15" width=".75"/>
1077 <bits name="egress port" above="16" width=".75"/>
1082 Environments whose NICs lack Geneve offload may prefer STT encapsulation
1083 for performance reasons. For STT encapsulation, OVN encodes all three
pieces of logical metadata in the STT 64-bit tunnel ID as follows, from MSB
to LSB:
1090 <bits name="reserved" above="9" below="0" width=".5"/>
1091 <bits name="ingress port" above="15" width=".75"/>
1092 <bits name="egress port" above="16" width=".75"/>
1093 <bits name="datapath" above="24" width="1.25"/>
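<p>
  As a quick arithmetic sketch of the encodings above, using made-up
  identifiers (logical datapath 7, logical ingress port 5, logical egress
  port 0x8001, a multicast group) and bash arithmetic:
</p>

<pre fixed="yes">
# Geneve: VNI carries the datapath; the option packs ingress and egress.
$ printf 'Geneve VNI 0x%06x, option 0x%08x\n' 7 $(( 5 * 65536 + 0x8001 ))
Geneve VNI 0x000007, option 0x00058001
# STT: one 64-bit tunnel ID packs ingress, egress, and datapath.
$ printf 'STT tunnel ID 0x%016x\n' $(( 5 * 2**40 + 0x8001 * 2**24 + 7 ))
STT tunnel ID 0x0000058001000007
</pre>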
1098 For connecting to gateways, in addition to Geneve and STT, OVN supports
1099 VXLAN, because only VXLAN support is common on top-of-rack (ToR) switches.
1100 Currently, gateways have a feature set that matches the capabilities as
1101 defined by the VTEP schema, so fewer bits of metadata are necessary. In
1102 the future, gateways that do not support encapsulations with large amounts
1103 of metadata may continue to have a reduced feature set.