X-Git-Url: http://git.cascardo.eti.br/?a=blobdiff_plain;f=ovn%2Fovn-architecture.7.xml;h=318555b6292748e8900bc31045e0dbc18e8497a5;hb=1d7b2eceaeb059e42c1e1cd3d32c192e2ab22271;hp=6971d69d96e044c20cc753fff9821b8d30085951;hpb=ca1564ec8f27ec0f56104a5f76bbe1c29f07e53f;p=cascardo%2Fovs.git

diff --git a/ovn/ovn-architecture.7.xml b/ovn/ovn-architecture.7.xml
index 6971d69d9..318555b62 100644
--- a/ovn/ovn-architecture.7.xml
+++ b/ovn/ovn-architecture.7.xml
@@ -48,18 +48,18 @@
- Zero or more gateways. A gateway extends a tunnel-based
- logical network into a physical network by bidirectionally forwarding
- packets between tunnels and a physical Ethernet port. This allows
- non-virtualized machines to participate in logical networks. A gateway
- may be a physical host, a virtual machine, or an ASIC-based hardware
- switch that supports the vtep(5) schema. (Support for the
- latter will come later in OVN implementation.)
+ Zero or more gateways. A gateway extends a tunnel-based
+ logical network into a physical network by bidirectionally forwarding
+ packets between tunnels and a physical Ethernet port. This allows
+ non-virtualized machines to participate in logical networks. A gateway
+ may be a physical host, a virtual machine, or an ASIC-based hardware
+ switch that supports the vtep(5) schema. (Support for the
+ latter will come later in OVN implementation.)
- Hypervisors and gateways are together called transport node
- or chassis.
+ Hypervisors and gateways are together called transport node
+ or chassis.
- The OVN/CMS Plugin is the component of the CMS that
- interfaces to OVN. In OpenStack, this is a Neutron plugin.
- The plugin's main purpose is to translate the CMS's notion of logical
- network configuration, stored in the CMS's configuration database in a
- CMS-specific format, into an intermediate representation understood by
- OVN.
+ The OVN/CMS Plugin is the component of the CMS that
+ interfaces to OVN. In OpenStack, this is a Neutron plugin.
+ The plugin's main purpose is to translate the CMS's notion of logical
+ network configuration, stored in the CMS's configuration database in a
+ CMS-specific format, into an intermediate representation understood by
+ OVN.
- This component is necessarily CMS-specific, so a new plugin needs to be
- developed for each CMS that is integrated with OVN. All of the
- components below this one in the diagram are CMS-independent.
+ This component is necessarily CMS-specific, so a new plugin needs to be
+ developed for each CMS that is integrated with OVN. All of the
+ components below this one in the diagram are CMS-independent.
- The OVN Northbound Database receives the intermediate
- representation of logical network configuration passed down by the
- OVN/CMS Plugin. The database schema is meant to be ``impedance
- matched'' with the concepts used in a CMS, so that it directly supports
- notions of logical switches, routers, ACLs, and so on. See
- ovs-nb(5) for details.
+ The OVN Northbound Database receives the intermediate
+ representation of logical network configuration passed down by the
+ OVN/CMS Plugin. The database schema is meant to be ``impedance
+ matched'' with the concepts used in a CMS, so that it directly supports
+ notions of logical switches, routers, ACLs, and so on. See
+ ovn-nb(5) for details.
- The OVN Northbound Database has only two clients: the OVN/CMS Plugin
- above it and ovn-nbd below it.
+ The OVN Northbound Database has only two clients: the OVN/CMS Plugin
+ above it and ovn-northd below it.
- ovn-nbd(8) connects to the OVN Northbound Database above it
- and the OVN Database below it. It translates the logical network
- configuration in terms of conventional network concepts, taken from the
- OVN Northbound Database, into logical datapath flows in the OVN Database
- below it.
+ ovn-northd(8) connects to the OVN Northbound Database
+ above it and the OVN Southbound Database below it. It translates the
+ logical network configuration in terms of conventional network
+ concepts, taken from the OVN Northbound Database, into logical
+ datapath flows in the OVN Southbound Database below it.
- The OVN Database is the center of the system. Its clients
- are ovn-nbd(8) above it and ovn-controller(8)
- on every transport node below it.
+ The OVN Southbound Database is the center of the system.
+ Its clients are ovn-northd(8) above it and
+ ovn-controller(8) on every transport node below it.
- The OVN Database contains three kinds of data: Physical
- Network (PN) tables that specify how to reach hypervisor and
- other nodes, Logical Network (LN) tables that describe the
- logical network in terms of ``logical datapath flows,'' and
- Binding tables that link logical network components'
- locations to the physical network. The hypervisors populate the PN and
- Binding tables, whereas ovn-nbd(8) populates the LN
- tables.
+ The OVN Southbound Database contains three kinds of data: Physical
+ Network (PN) tables that specify how to reach hypervisor and
+ other nodes, Logical Network (LN) tables that describe the
+ logical network in terms of ``logical datapath flows,'' and
+ Binding tables that link logical network components'
+ locations to the physical network. The hypervisors populate the PN and
+ Port_Binding tables, whereas ovn-northd(8) populates the
+ LN tables.
- OVN Database performance must scale with the number of transport nodes.
- This will likely require some work on ovsdb-server(1) as
- we encounter bottlenecks. Clustering for availability may be needed.
+ OVN Southbound Database performance must scale with the number of
+ transport nodes. This will likely require some work on
+ ovsdb-server(1) as we encounter bottlenecks.
+ Clustering for availability may be needed.
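For a sense of what these tables hold, one way to peek at a running Southbound
database is ovsdb-client(1); this is only an illustration, and the socket path
below is installation-dependent:

    # Dump every table in the OVN Southbound database (example path).
    ovsdb-client dump unix:/var/run/openvswitch/ovnsb_db.sock OVN_Southbound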
ovn-controller(8) is OVN's agent on each hypervisor and
- software gateway. Northbound, it connects to the OVN Database to learn
- about OVN configuration and status and to populate the PN and Bindings
- tables with the hypervisor's status. Southbound, it connects to
- ovs-vswitchd(8) as an OpenFlow controller, for control over
- network traffic, and to the local ovsdb-server(1) to allow
- it to monitor and control Open vSwitch configuration.
+ software gateway. Northbound, it connects to the OVN Southbound
+ Database to learn about OVN configuration and status and to
+ populate the PN table and the Chassis column in
+ Binding table with the hypervisor's status.
+ Southbound, it connects to ovs-vswitchd(8) as an
+ OpenFlow controller, for control over network traffic, and to the
+ local ovsdb-server(1) to allow it to monitor and
+ control Open vSwitch configuration.
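As an illustration of how a chassis is pointed at the Southbound database, the
connection and encapsulation settings are usually recorded as external-ids keys
in the local Open vSwitch database; the key names below follow common OVN usage
rather than this document, and the addresses are placeholders:

    # Tell ovn-controller where the OVN Southbound database lives and how
    # this chassis should encapsulate logical network traffic.
    ovs-vsctl set Open_vSwitch . \
        external-ids:ovn-remote="tcp:192.168.0.10:6642" \
        external-ids:ovn-encap-type=geneve \
        external-ids:ovn-encap-ip=192.168.0.11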
Each chassis in an OVN deployment must be configured with an Open vSwitch
bridge dedicated for OVN's use, called the integration bridge.
- System startup scripts create this bridge prior to starting
- ovn-controller. The ports on the integration bridge include:
+ System startup scripts may create this bridge prior to starting
+ ovn-controller if desired. If this bridge does not exist when
+ ovn-controller starts, it will be created automatically with the default
+ configuration suggested below. The ports on the integration bridge include:
- The integration bridge must be configured with failure mode ``secure'' to
- avoid switching packets between isolated logical networks before
- ovn-controller starts up. See Controller Failure
- Settings in ovs-vsctl(8) for more information.
+ The integration bridge should be configured as described below.
+ The effect of each of these settings is documented in
+ ovs-vswitchd.conf.db(5):
+
+ fail-mode=secure
+   Avoids switching packets between isolated logical networks before
+   ovn-controller starts up. See Controller Failure
+   Settings in ovs-vsctl(8) for more information.
+
+ other-config:disable-in-band=true
+   See In-Band Control in DESIGN.md for more information.
+
The customary name for the integration bridge is br-int, but
another name may be used.
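As a minimal sketch of the configuration described above, assuming the
customary bridge name:

    # Create br-int with the recommended settings; as noted above,
    # ovn-controller can also create the bridge automatically.
    ovs-vsctl --may-exist add-br br-int \
        -- set Bridge br-int fail-mode=secure other-config:disable-in-band=true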
+
+ A logical network implements the same concepts as physical networks, but
+ they are insulated from the physical network with tunnels or other
+ encapsulations. This allows logical networks to have separate IP and other
+ address spaces that overlap, without conflicting, with those used for
+ physical networks. Logical network topologies can be arranged without
+ regard for the topologies of the physical networks on which they run.
+
+ Logical network concepts in OVN include:
+
@@ -258,9 +316,15 @@ understand. Here's an example.
+
+ A VIF on a hypervisor is a virtual network interface attached either
+ to a VM or a container running directly on that hypervisor (This is
+ different from the interface of a container running inside a VM).
+
The steps in this example refer often to details of the OVN and OVN
- Northbound database schemas. Please see ovn(5) and
+ Northbound database schemas. Please see ovn-sb(5) and
ovn-nb(5), respectively, for the full story on these
databases.
- ovs-nbd receives the OVN Northbound database update. In
- turn, it makes the corresponding updates to the OVN database, by adding
- rows to the OVN database Pipeline table to reflect the new
- port, e.g. add a flow to recognize that packets destined to the new
- port's MAC address should be delivered to it, and update the flow that
- delivers broadcast and multicast packets to include the new port.
+ ovn-northd receives the OVN Northbound database update. In
+ turn, it makes the corresponding updates to the OVN Southbound database,
+ by adding rows to the OVN Southbound database Logical_Flow
+ table to reflect the new port, e.g. add a flow to recognize that packets
+ destined to the new port's MAC address should be delivered to it, and
+ update the flow that delivers broadcast and multicast packets to include
+ the new port. It also creates a record in the Binding table
+ and populates all its columns except the column that identifies the
+ chassis.
ovn-controller receives the
- Pipeline table updates that ovs-nbd made in the
- previous step. As long as the VM that owns the VIF is powered off,
- ovn-controller cannot do much; it cannot, for example,
+ Logical_Flow table updates that ovn-northd made
+ in the previous step. As long as the VM that owns the VIF is powered
+ off, ovn-controller cannot do much; it cannot, for example,
arrange to send packets to or receive packets from the VIF, because the
VIF does not actually exist anywhere.
external-ids:iface-id in the new
Interface. In response, it updates the local hypervisor's OpenFlow
tables so that packets to and from the VIF are properly handled.
- Afterward, it updates the Bindings table in the OVN DB,
- adding a row that links the logical port from
+ Afterward, in the OVN Southbound DB, it updates the
+ Binding table's chassis column for the
+ row that links the logical port from
external-ids:iface-id to the hypervisor.
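For illustration, the integration layer that plugs the VIF into the integration
bridge typically records the logical port name in exactly this way (the
interface and port names below are placeholders):

    # Attach the VIF and tag it with the OVN logical port it implements.
    ovs-vsctl add-port br-int tap0 \
        -- set Interface tap0 external-ids:iface-id=lport0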
- ovn-nbd notices the
- new row in the Bindings table, and pushes this upward by
- updating the up column
- in the OVN Northbound database's Logical_Port
- table to indicate that the VIF is now up. The CMS, if it uses this
- feature, can then react by allowing the VM's execution to proceed.
+ networking is ready. To support this, ovn-northd notices
+ the chassis column updated for the row in
+ Binding table and pushes this upward by updating the
+ up column in the OVN
+ Northbound database's Logical_Port table to
+ indicate that the VIF is now up. The CMS, if it uses this feature, can
+ then react by allowing the VM's execution to proceed.
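A CMS that uses this feature could, for example, poll the port's up state with
ovn-nbctl's generic database commands; this is only a sketch (real CMS plugins
usually read the Northbound database directly, and the port name is invented):

    # Reports the logical port's "up" state once ovn-northd has set it.
    ovn-nbctl get Logical_Port lport0 up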
- ovn-controller notices the new row in the
- Bindings table. This provides ovn-controller
+ ovn-controller notices the completely populated row in the
+ Binding table. This provides ovn-controller
the physical location of the logical port, so each instance updates the
OpenFlow tables of its switch (based on logical datapath flows in the OVN
- DB Pipeline table) so that packets to and from the VIF can
- be properly handled via tunnels.
+ DB Logical_Flow table) so that packets to and from the VIF
+ can be properly handled via tunnels.
ovn-controller notices that the VIF was deleted. In
- response, it removes the logical port's row from the
- Bindings table.
+ response, it removes the Chassis column content in the
+ Binding table for the logical port.
ovn-controller notices the row removed
- from the Bindings table. This means that
- ovn-controller no longer knows the physical location of the
- logical port, so each instance updates its OpenFlow table to reflect
- that.
+ On every hypervisor, ovn-controller notices the empty
+ Chassis column in the Binding table's row
+ for the logical port. This means that ovn-controller no
+ longer knows the physical location of the logical port, so each instance
+ updates its OpenFlow table to reflect that.
- ovs-nbd receives the OVN Northbound update and in turn
- updates the OVN database accordingly, by removing or updating the
- rows from the OVN database Pipeline table that were related
- to the now-destroyed VIF.
+ ovn-northd receives the OVN Northbound update and in turn
+ updates the OVN Southbound database accordingly, by removing or updating
+ the rows from the OVN Southbound database Logical_Flow table
+ and Binding table that were related to the now-destroyed
+ VIF.
+ ovn-controller receives the
+ Logical_Flow table updates that ovn-northd made
+ in the previous step. ovn-controller updates OpenFlow
+ tables to reflect the update, although there may not be much to do, since
+ the VIF had already become unreachable when it was removed from the
+ Binding table in a previous step.
+ + OVN provides virtual network abstractions by converting information + written in OVN_NB database to OpenFlow flows in each hypervisor. Secure + virtual networking for multi-tenants can only be provided if OVN controller + is the only entity that can modify flows in Open vSwitch. When the + Open vSwitch integration bridge resides in the hypervisor, it is a + fair assumption to make that tenant workloads running inside VMs cannot + make any changes to Open vSwitch flows. +
+ ++ If the infrastructure provider trusts the applications inside the + containers not to break out and modify the Open vSwitch flows, then + containers can be run in hypervisors. This is also the case when + containers are run inside the VMs and Open vSwitch integration bridge + with flows added by OVN controller resides in the same VM. For both + the above cases, the workflow is the same as explained with an example + in the previous section ("Life Cycle of a VIF"). +
+ ++ This section talks about the life cycle of a container interface (CIF) + when containers are created in the VMs and the Open vSwitch integration + bridge resides inside the hypervisor. In this case, even if a container + application breaks out, other tenants are not affected because the + containers running inside the VMs cannot modify the flows in the + Open vSwitch integration bridge. +
+ ++ When multiple containers are created inside a VM, there are multiple + CIFs associated with them. The network traffic associated with these + CIFs need to reach the Open vSwitch integration bridge running in the + hypervisor for OVN to support virtual network abstractions. OVN should + also be able to distinguish network traffic coming from different CIFs. + There are two ways to distinguish network traffic of CIFs. +
+ ++ One way is to provide one VIF for every CIF (1:1 model). This means that + there could be a lot of network devices in the hypervisor. This would slow + down OVS because of all the additional CPU cycles needed for the management + of all the VIFs. It would also mean that the entity creating the + containers in a VM should also be able to create the corresponding VIFs in + the hypervisor. +
+ ++ The second way is to provide a single VIF for all the CIFs (1:many model). + OVN could then distinguish network traffic coming from different CIFs via + a tag written in every packet. OVN uses this mechanism and uses VLAN as + the tagging mechanism. +
+ +Logical_Port
table. In the new row, name is any unique identifier,
+ parent_name is the vif-id of the VM through which the CIF's
+ network traffic is expected to go through and the tag is the
+ VLAN tag that identifies the network traffic of that CIF.
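A hypothetical sketch of this step using ovn-nbctl's generic database commands,
if available in your version; a real CMS writes the Northbound database
directly, and the switch, port, and tag names here are invented:

    # Create a CIF port nested under the VM's VIF "vif0", distinguished by
    # VLAN tag 42, and reference it from logical switch "sw0".
    ovn-nbctl -- --id=@cif create Logical_Port name=cif0 parent_name=vif0 tag=42 \
              -- add Logical_Switch sw0 ports @cif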
+ ovn-northd receives the OVN Northbound database update. In
+ turn, it makes the corresponding updates to the OVN Southbound database,
+ by adding rows to the OVN Southbound database's Logical_Flow
+ table to reflect the new port and also by creating a new row in the
+ Binding table and populating all its columns except the
+ column that identifies the chassis.
+ ovn-controller
subscribes to the
+ changes in the Binding
table. When a new row is created
+ by ovn-northd
that includes a value in
+ parent_port
column of Binding
table, the
+ ovn-controller
in the hypervisor whose OVN integration bridge
+ has that same value in vif-id in
+ external-ids
:iface-id
+ updates the local hypervisor's OpenFlow tables so that packets to and
+ from the VIF with the particular VLAN tag
are properly
+ handled. Afterward it updates the chassis
column of
+ the Binding
to reflect the physical location.
+ ovn-northd
+ notices the updated chassis
column in Binding
+ table and updates the column in the OVN Northbound database's
+ table to indicate that the
+ CIF is now up. The entity responsible to start the container application
+ queries this value and starts the application.
+ Logical_Port
table.
+ ovn-northd
receives the OVN Northbound update and in turn
+ updates the OVN Southbound database accordingly, by removing or updating
+ the rows from the OVN Southbound database Logical_Flow
table
+ that were related to the now-destroyed CIF. It also deletes the row in
+ the Binding
table for that CIF.
ovn-controller receives the
- Pipeline table updates that ovs-nbd made in the
- previous step. ovn-controller updates OpenFlow tables to
- reflect the update, although there may not be much to do, since the VIF
- had already become unreachable when it was removed from the
- Bindings table in a previous step.
+ Logical_Flow table updates that ovn-northd made
+ in the previous step. ovn-controller updates OpenFlow
+ tables to reflect the update.
+ This section describes how a packet travels from one virtual machine or
+ container to another through OVN. This description focuses on the physical
+ treatment of a packet; for a description of the logical life cycle of a
+ packet, please refer to the Logical_Flow
table in
+ ovn-sb
(5).
+
+ This section mentions several data and metadata fields, for clarity + summarized here: +
+ +Tunnel Encapsulations
, below, for details.
+ + A field that denotes the logical port from which the packet + entered the logical datapath. + + OVN stores this in Nicira extension register number 6. +
+ ++ Geneve and STT tunnels pass this field as part of the tunnel key. + Although VXLAN tunnels do not explicitly carry a logical input port, + OVN only uses VXLAN to communicate with gateways that from OVN's + perspective consist of only a single logical port, so that OVN can set + the logical input port field to this one on ingress to the OVN logical + pipeline. +
++ A field that denotes the logical port from which the packet will + leave the logical datapath. This is initialized to 0 at the + beginning of the logical ingress pipeline. + + OVN stores this in Nicira extension register number 7. +
+ ++ Geneve and STT tunnels pass this field as part of the tunnel key. + VXLAN tunnels do not transmit the logical output port field. +
+Life Cycle of a container interface inside a
+ VM
, above, for more information).
+ + Initially, a VM or container on the ingress hypervisor sends a packet on a + port attached to the OVN integration bridge. Then: +
+ ++ OpenFlow table 0 performs physical-to-logical translation. It matches + the packet's ingress port. Its actions annotate the packet with + logical metadata, by setting the logical datapath field to identify the + logical datapath that the packet is traversing and the logical input + port field to identify the ingress port. Then it resubmits to table 16 + to enter the logical ingress pipeline. +
+ +
+ It's possible that a single ingress physical port maps to multiple
+ logical ports with a type of localnet
. The logical datapath
+ and logical input port fields will be reset and the packet will be
+ resubmitted to table 16 multiple times.
+
+ Packets that originate from a container nested within a VM are treated + in a slightly different way. The originating container can be + distinguished based on the VIF-specific VLAN ID, so the + physical-to-logical translation flows additionally match on VLAN ID and + the actions strip the VLAN header. Following this step, OVN treats + packets from containers just like any other packets. +
+ +
+ Table 0 also processes packets that arrive from other chassis. It
+ distinguishes them from other packets by ingress port, which is a
+ tunnel. As with packets just entering the OVN pipeline, the actions
+ annotate these packets with logical datapath and logical ingress port
+ metadata. In addition, the actions set the logical output port field,
+ which is available because in OVN tunneling occurs after the logical
+ output port is known. These three pieces of information are obtained
+ from the tunnel encapsulation metadata (see Tunnel
+ Encapsulations
for encoding details). Then the actions resubmit
+ to table 33 to enter the logical egress pipeline.
+
+ OpenFlow tables 16 through 31 execute the logical ingress pipeline from
+ the Logical_Flow
table in the OVN Southbound database.
+ These tables are expressed entirely in terms of logical concepts like
+ logical ports and logical datapaths. A big part of
+ ovn-controller
's job is to translate them into equivalent
+ OpenFlow (in particular it translates the table numbers:
+ Logical_Flow
tables 0 through 15 become OpenFlow tables 16
+ through 31). For a given packet, the logical ingress pipeline
+ eventually executes zero or more output
actions:
+
output
actions at all, the
+ packet is effectively dropped.
+ output
action,
+ which ovn-controller
implements by resubmitting the
+ packet to table 32.
+ output
action,
+ then each one is separately resubmitted to table 32. This can be
+ used to send multiple copies of the packet to multiple ports. (If
+ the packet was not modified between the output
actions,
+ and some of the copies are destined to the same hypervisor, then
+ using a logical multicast output port would save bandwidth between
+ hypervisors.)
+
+ OpenFlow tables 32 through 47 implement the output
action
+ in the logical ingress pipeline. Specifically, table 32 handles
+ packets to remote hypervisors, table 33 handles packets to the local
+ hypervisor, and table 34 discards packets whose logical ingress and
+ egress port are the same.
+
+ Logical patch ports are a special case. Logical patch ports do not + have a physical location and effectively reside on every hypervisor. + Thus, flow table 33, for output to ports on the local hypervisor, + naturally implements output to unicast logical patch ports too. + However, applying the same logic to a logical patch port that is part + of a logical multicast group yields packet duplication, because each + hypervisor that contains a logical port in the multicast group will + also output the packet to the logical patch port. Thus, multicast + groups implement output to logical patch ports in table 32. +
+ ++ Each flow in table 32 matches on a logical output port for unicast or + multicast logical ports that include a logical port on a remote + hypervisor. Each flow's actions implement sending a packet to the port + it matches. For unicast logical output ports on remote hypervisors, + the actions set the tunnel key to the correct value, then send the + packet on the tunnel port to the correct hypervisor. (When the remote + hypervisor receives the packet, table 0 there will recognize it as a + tunneled packet and pass it along to table 33.) For multicast logical + output ports, the actions send one copy of the packet to each remote + hypervisor, in the same way as for unicast destinations. If a + multicast group includes a logical port or ports on the local + hypervisor, then its actions also resubmit to table 33. Table 32 also + includes a fallback flow that resubmits to table 33 if there is no + other match. +
+ ++ Flows in table 33 resemble those in table 32 but for logical ports that + reside locally rather than remotely. For unicast logical output ports + on the local hypervisor, the actions just resubmit to table 34. For + multicast output ports that include one or more logical ports on the + local hypervisor, for each such logical port P, the actions + change the logical output port to P, then resubmit to table + 34. +
+ ++ Table 34 matches and drops packets for which the logical input and + output ports are the same. It resubmits other packets to table 48. +
+
+ OpenFlow tables 48 through 63 execute the logical egress pipeline from
+ the Logical_Flow
table in the OVN Southbound database.
+ The egress pipeline can perform a final stage of validation before
+ packet delivery. Eventually, it may execute an output
+ action, which ovn-controller
implements by resubmitting to
+ table 64. A packet for which the pipeline never executes
+ output
is effectively dropped (although it may have been
+ transmitted through a tunnel across a physical network).
+
+ The egress pipeline cannot change the logical output port or cause + further tunneling. +
++ OpenFlow table 64 performs logical-to-physical translation, the + opposite of table 0. It matches the packet's logical egress port. Its + actions output the packet to the port attached to the OVN integration + bridge that represents that logical port. If the logical egress port + is a container nested with a VM, then before sending the packet the + actions push on a VLAN header with an appropriate VLAN ID. +
+ ++ If the logical egress port is a logical patch port, then table 64 + outputs to an OVS patch port that represents the logical patch port. + The packet re-enters the OpenFlow flow table from the OVS patch port's + peer in table 0, which identifies the logical datapath and logical + input port based on the OVS patch port's OpenFlow port number. +
++ A gateway is a chassis that forwards traffic between the OVN-managed + part of a logical network and a physical VLAN, extending a + tunnel-based logical network into a physical network. +
+ +
+ The steps below refer often to details of the OVN and VTEP database
+ schemas. Please see ovn-sb
(5), ovn-nb
(5)
+ and vtep
(5), respectively, for the full story on these
+ databases.
+
Physical_Switch
table entry in the
+ VTEP
database. The ovn-controller-vtep
+ connected to this VTEP database, will recognize the new VTEP gateway
+ and create a new Chassis
table entry for it in the
+ OVN_Southbound
database.
+ Logical_Switch
+ table entry, and bind a particular vlan on a VTEP gateway's port to
+ any VTEP logical switch. Once a VTEP logical switch is bound to
+ a VTEP gateway, the ovn-controller-vtep
will detect
+ it and add its name to the vtep_logical_switches
+ column of the Chassis
table in the
+ OVN_Southbound
database. Note, the tunnel_key
+ column of VTEP logical switch is not filled at creation. The
+ ovn-controller-vtep
will set the column when the
+ correponding vtep logical switch is bound to an OVN logical network.
+ Logical_Port
table entry in the
+ OVN_Northbound
database. Then, the type column
+ of this entry must be set to "vtep". Next, the
+ vtep-logical-switch and vtep-physical-switch keys
+ in the options column must also be specified, since
+ multiple VTEP gateways can attach to the same VTEP logical switch.
+ OVN_Northbound
+ database and its configuration will be passed down to the
+ OVN_Southbound
database as a new Port_Binding
+ table entry. The ovn-controller-vtep
will recognize the
+ change and bind the logical port to the corresponding VTEP gateway
+ chassis. Configuration of binding the same VTEP logical switch to
+ a different OVN logical networks is not allowed and a warning will be
+ generated in the log.
+
+ ovn-controller-vtep
will update the tunnel_key
+ column of the VTEP logical switch to the corresponding
+ Datapath_Binding
table entry's tunnel_key for the
+ bound OVN logical network.
+ ovn-controller-vtep
will keep reacting to the
+ configuration change in the Port_Binding
in the
+ OVN_Northbound
database, and updating the
+ Ucast_Macs_Remote
table in the VTEP
database.
+ This allows the VTEP gateway to understand where to forward the unicast
+ traffic coming from the extended external network.
+ VTEP
database.
+ The ovn-controller-vtep
will recognize the event and
+ remove all related configurations (Chassis
table entry
+ and port bindings) in the OVN_Southbound
database.
+ ovn-controller-vtep
is terminated, all related
+ configurations in the OVN_Southbound
database and
+ the VTEP
database will be cleaned, including
+ Chassis
table entries for all registered VTEP gateways
+ and their port bindings, and all Ucast_Macs_Remote
table
+ entries and the Logical_Switch
tunnel keys.
+ + OVN annotates logical network packets that it sends from one hypervisor to + another with the following three pieces of metadata, which are encoded in + an encapsulation-specific fashion: +
+ +tunnel_key
+ column in the OVN Southbound Datapath_Binding
table.
+ tunnel_key
column in the OVN
+ Southbound Port_Binding
table).
+ tunnel_key
column in the OVN Southbound
+ Multicast_Group
table).
+ + For hypervisor-to-hypervisor traffic, OVN supports only Geneve and STT + encapsulations, for the following reasons: +
+ ++ Due to its flexibility, the preferred encapsulation between hypervisors is + Geneve. For Geneve encapsulation, OVN transmits the logical datapath + identifier in the Geneve VNI. + + + OVN transmits the logical ingress and logical egress ports in a TLV with + class 0xffff, type 0, and a 32-bit value encoded as follows, from MSB to + LSB: +
+ ++ Environments whose NICs lack Geneve offload may prefer STT encapsulation + for performance reasons. For STT encapsulation, OVN encodes all three + pieces of logical metadata in the STT 64-bit tunnel ID as follows, from MSB + to LSB: +
+ ++ For connecting to gateways, in addition to Geneve and STT, OVN supports + VXLAN, because only VXLAN support is common on top-of-rack (ToR) switches. + Currently, gateways have a feature set that matches the capabilities as + defined by the VTEP schema, so fewer bits of metadata are necessary. In + the future, gateways that do not support encapsulations with large amounts + of metadata may continue to have a reduced feature set. +
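Purely as an illustration of packing the three pieces of metadata into a single
tunnel key, here is a sketch for the STT case. The exact bit layout used below
(high bits reserved, then a 15-bit logical ingress port, a 16-bit logical
egress port, and a 24-bit logical datapath identifier) is an assumption for
this example only; consult ovn-sb(5) for the authoritative encoding.

    # Example values; prints the 64-bit STT tunnel key they would occupy
    # under the assumed layout.
    ingress=5 egress=7 datapath=3
    printf 'STT tunnel key: 0x%016x\n' $(( (ingress << 40) | (egress << 24) | datapath ))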