X-Git-Url: http://git.cascardo.eti.br/?a=blobdiff_plain;f=ovn%2Fovn-sb.xml;h=e674f3a96fd1f97a50d8fbe01236437e1c1ee8e7;hb=bda5a056ba652c4e399412a0853313af46b35379;hp=fdf59f05d5f11168066a051a05364a3cf3faa17d;hpb=713322317cf123eaab403c3c39b0faaef1968ceb;p=cascardo%2Fovs.git diff --git a/ovn/ovn-sb.xml b/ovn/ovn-sb.xml index fdf59f05d..e674f3a96 100644 --- a/ovn/ovn-sb.xml +++ b/ovn/ovn-sb.xml @@ -10,8 +10,8 @@ The OVN Southbound database sits at the center of the OVN architecture. It is the one component that speaks both southbound directly to all the hypervisors and gateways, via - ovn-controller, and northbound to the Cloud Management - System, via ovn-northd: + ovn-controller/ovn-controller-vtep, and + northbound to the Cloud Management System, via ovn-northd:

Database Structure

@@ -35,8 +35,7 @@

- The Chassis and Gateway tables comprise the - PN tables. + The Chassis table comprises the PN tables.

Logical Network (LN) data

The LN is a slave of the cloud management system running northbound of OVN. That CMS determines the entire OVN logical configuration and therefore the LN's content at any given time is a deterministic function of the CMS's - configuration, although that happens indirectly via the OVN Northbound DB - and ovn-northd. + configuration, although that happens indirectly via the OVN_Northbound + database and ovn-northd.

@@ -74,15 +73,17 @@

- The Logical_Flow table is currently the only LN table. + Logical_Flow and Multicast_Group contain LN + data.

Bindings data

- The Binding tables contain the current placement of logical components - (such as VMs and VIFs) onto chassis and the bindings between logical ports - and MACs. + Bindings data link logical and physical components. They show the current + placement of logical components (such as VMs and VIFs) onto chassis, and + map logical entities to the values that represent them in tunnel + encapsulations.

@@ -98,15 +99,40 @@

- The Binding table is currently the only binding data. + The Port_Binding and Datapath_Binding tables + contain binding data.

+

Common Columns

+ +

+ Some tables contain a special column named external_ids. This + column has the same form and purpose each place that it appears, so we + describe it here to save space later. +

+ +
+
external_ids: map of string-string pairs
+
+ Key-value pairs for use by the software that manages the OVN Southbound + database rather than by + ovn-controller/ovn-controller-vtep. In + particular, ovn-northd can use key-value pairs in this + column to relate entities in the southbound database to higher-level + entities (such as entities in the OVN Northbound database). Individual + key-value pairs in this column may be documented in some cases to aid + in understanding and troubleshooting, but the reader should not mistake + such documentation as comprehensive. +
+
+

Each row in this table represents a hypervisor or gateway (a chassis) in the physical network (PN). Each chassis, via - ovn-controller, adds and updates its own row, and keeps a - copy of the remaining rows to determine how to reach other hypervisors. + ovn-controller/ovn-controller-vtep, adds + and updates its own row, and keeps a copy of the remaining rows to + determine how to reach other hypervisors.

@@ -138,20 +164,26 @@ - -

- A gateway is a chassis that forwards traffic between a - logical network and a physical VLAN. Gateways are typically dedicated - nodes that do not host VMs. + +

+ A gateway is a chassis that forwards traffic between the + OVN-managed part of a logical network and a physical VLAN, extending a + tunnel-based logical network into a physical network. Gateways are + typically dedicated nodes that do not host VMs and will be controlled + by ovn-controller-vtep.

- - Maps from the name of a gateway port, which is typically a physical - port (e.g. eth1) or an Open vSwitch patch port, to a Gateway record that describes the details of the gatewaying - function. + + Stores all VTEP logical switch names connected by this gateway + chassis. The Port_Binding table entry with + options:vtep-physical-switch + equal to this Chassis's name, and + options:vtep-logical-switch + value in its + vtep_logical_switches column, will be + associated with this Chassis. - +
The encaps column in the Chassis table refers to rows in this table to identify how OVN may transmit logical dataplane packets to this chassis. - Each chassis, via ovn-controller(8), adds and updates - its own rows and keeps a copy of the remaining rows to determine - how to reach other chassis. + Each chassis, via ovn-controller(8) or + ovn-controller-vtep(8), adds and updates its own rows + and keeps a copy of the remaining rows to determine how to reach + other chassis.

@@ -181,36 +214,15 @@
- -

- The gateway_ports column in the Chassis table refers to rows in this table to connect a chassis - port to a gateway function. Each row in this table describes the logical - networks to which a gateway port is attached. Each chassis, via - ovn-controller(8), adds and updates its own rows, if any - (since most chassis are not gateways), and keeps a copy of the remaining - rows to determine how to reach other chassis. -

- - - Maps from a VLAN ID to a logical port name. Thus, each named logical - port corresponds to one VLAN on the gateway port. - - - - The name of the gateway port in the chassis's Open vSwitch integration - bridge. - -
- - +

- Each row in this table represents one logical flow. The cloud management - system, via its OVN integration, populates this table with logical flows - that implement the L2 and L3 topology specified in the CMS configuration. - Each hypervisor, via ovn-controller, translates the logical - flows into OpenFlow flows specific to its hypervisor and installs them - into Open vSwitch. + Each row in this table represents one logical flow. + ovn-northd populates this table with logical flows + that implement the L2 and L3 topologies specified in the + OVN_Northbound database. Each hypervisor, via + ovn-controller, translates the logical flows into + OpenFlow flows specific to its hypervisor and installs them into + Open vSwitch.

@@ -219,23 +231,160 @@ flows are written in terms of logical ports and logical datapaths instead of physical ports and physical datapaths. Translation between logical and physical flows helps to ensure isolation between logical datapaths. - (The logical flow abstraction also allows the CMS to do less work, since - it does not have to separately compute and push out physical flows to each - chassis.) + (The logical flow abstraction also allows the OVN centralized + components to do less work, since they do not have to separately + compute and push out physical flows to each chassis.)

The default action when no flow matches is to drop packets.
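+
+      For illustration (the table number, MAC address, and port name here
+      are invented), a logical flow that forwards packets for a known
+      destination MAC might contain:
+
+table_id=1, priority=50
+match:   eth.dst == 00:00:00:00:00:01
+actions: outport = "vif1"; output;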

+

Architectural Logical Life Cycle of a Packet

+ +

+ The following description focuses on the life cycle of a packet through + a logical datapath, ignoring physical details of the implementation. + Please refer to Architectural Physical Life Cycle of a Packet in + ovn-architecture(7) for the physical information. +

+ +

+ The description here is written as if OVN itself executes these steps, + but in fact OVN (that is, ovn-controller) programs Open + vSwitch, via OpenFlow and OVSDB, to execute them on its behalf. +

+ +

+ At a high level, OVN passes each packet through the logical datapath's + logical ingress pipeline, which may output the packet to one or more + logical ports or logical multicast groups. For each such logical output + port, OVN passes the packet through the datapath's logical egress + pipeline, which may either drop the packet or deliver it to the + destination. Between the two pipelines, outputs to logical multicast + groups are expanded into logical ports, so that the egress pipeline only + processes a single logical output port at a time. Between the two + pipelines is also where, when necessary, OVN encapsulates a packet in a + tunnel (or tunnels) to transmit to remote hypervisors. +

+ +

+ In more detail, to start, OVN searches the Logical_Flow + table for a row with the correct logical_datapath, a pipeline of ingress, a + table_id of 0, and a match that is true for the packet. If none + is found, OVN drops the packet. If OVN finds more than one, it chooses + the match with the highest priority. Then OVN executes + each of the actions specified in the row's actions column, + in the order specified. Some actions, such as those to modify packet + headers, require no further details. The next and + output actions are special. +

+ +

+ The next action causes the above process to be repeated + recursively, except that OVN searches for a table_id of 1 + instead of 0. Similarly, any next action in a row found in + that table would cause a further search for a table_id of + 2, and so on. When recursive processing completes, flow control returns + to the action following next. +

+ +

+ The output action also introduces recursion. Its effect + depends on the current value of the outport field. Suppose + outport designates a logical port. First, OVN compares + inport to outport; if they are equal, it treats + the output as a no-op. In the common case, where they are + different, the packet enters the egress pipeline. This transition to the + egress pipeline discards register data, e.g. reg0 ... + reg4 and connection tracking state, to achieve + uniform behavior regardless of whether the egress pipeline is on a + different hypervisor (because registers aren't preserved across + tunnel encapsulations). +

+ +

+ To execute the egress pipeline, OVN again searches the Logical_Flow table for a row with the correct logical_datapath, a table_id of 0, a match that is true for the packet, but now looking for a pipeline of egress. If no matching row is found, + the output becomes a no-op. Otherwise, OVN executes the actions for the + matching flow (which is chosen from multiple, if necessary, as already + described). +

+ +

+ In the egress pipeline, the next action acts as + already described, except that it, of course, searches for + egress flows. The output action, however, now + directly outputs the packet to the output port (which is now fixed, + because outport is read-only within the egress pipeline). +

+ +

+ The description earlier assumed that outport referred to a + logical port. If it instead designates a logical multicast group, then + the description above still applies, with the addition of fan-out from + the logical multicast group to each logical port in the group. For each + member of the group, OVN executes the logical pipeline as described, with + the logical output port replaced by the group member. +

+ +

Pipeline Stages

+ +

+ ovn-northd is responsible for populating the + Logical_Flow table, so the stages are an + implementation detail and subject to change. This section + describes the current logical flow table. +

+ +

+ The ingress pipeline consists of the following stages: +

+ + +

+ The egress pipeline consists of the following stages: +

+ + - The logical datapath to which the logical flow belongs. A logical - datapath implements a logical pipeline among the ports in the Binding table associated with it. (No table represents a - logical datapath.) In practice, the pipeline in a given logical datapath - implements either a logical switch or a logical router, and - ovn-northd reuses the UUIDs for those logical entities from - the OVN_Northbound for logical datapaths. + The logical datapath to which the logical flow belongs. + + +

+ The primary flows used for deciding on a packet's destination are the + ingress flows. The egress flows implement + ACLs. See Logical Life Cycle of a Packet, above, for details. +

@@ -454,11 +603,7 @@

String constants have the same syntax as quoted strings in JSON (thus, - they are Unicode strings). String constants are used for naming - logical ports. Thus, the useful values are names from the and - tables in a logical flow's . + they are Unicode strings).

@@ -529,12 +674,19 @@

Symbols

+

+ Most of the symbols below have integer type. Only inport + and outport have string type. inport names a + logical port. Thus, its value is a name + from the Port_Binding table. outport may + name a logical port, as inport, or a logical multicast + group defined in the Multicast_Group table. For both + symbols, only names within the flow's logical datapath may be used. +
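+
+      For example, assuming this datapath has logical ports named
+      vif1 and vif2 (invented names), the following
+      are valid matches:
+
+inport == "vif1" && ip4
+outport == "vif2" && eth.src == 00:00:00:00:00:01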

+ +

+ The following predicates are supported: +

+ +

- Logical datapath actions, to be executed when the logical flow - represented by this row is the highest-priority match. + Logical datapath actions, to be executed when the logical flow + represented by this row is the highest-priority match.

- Actions share lexical syntax with the match column. An - empty set of actions (or one that contains just white space or - comments), or a set of actions that consists of just - drop;, causes the matched packets to be dropped. - Otherwise, the column should contain a sequence of actions, each - terminated by a semicolon. + Actions share lexical syntax with the match column. An + empty set of actions (or one that contains just white space or + comments), or a set of actions that consists of just + drop;, causes the matched packets to be dropped. + Otherwise, the column should contain a sequence of actions, each + terminated by a semicolon.

- The following actions will be initially supported: + The following actions are defined:

output;
- Outputs the packet to the logical port current designated by - outport. Output to the ingress port is implicitly - dropped, that is, output becomes a no-op if - outport == inport. -
+

+ In the ingress pipeline, this action executes the + egress pipeline as a subroutine. If + outport names a logical port, the egress pipeline + executes once; if it is a multicast group, the egress pipeline runs + once for each logical port in the group. +

+ +

+ In the egress pipeline, this action performs the actual + output to the outport logical port. (In the egress + pipeline, outport never names a multicast group.) +

+ +

+ Output to the input port is implicitly dropped, that is, + output becomes a no-op if outport == + inport. Occasionally it may be useful to override + this behavior, e.g. to send an ARP reply to an ARP request; to do + so, use inport = ""; to set the logical input port to + an empty string (which should not be used as the name of any + logical port). +
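+
+      For example, a flow that answers a request out the same port it
+      arrived on might end with:
+
+outport = inport; inport = ""; output;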

+
next;
+
next(table);
- Executes the next logical datapath table as a subroutine. -
+ Executes another logical datapath table as a subroutine. By default, + the table after the current one is executed. Specify + table to jump to a specific table in the same pipeline. +
field = constant;
- Sets data or metadata field field to constant value - constant, e.g. outport = "vif0"; to set the - logical output port. Assigning to a field with prerequisites - implicitly adds those prerequisites to match; thus, - for example, a flow that sets tcp.dst applies only to - TCP flows, regardless of whether its match mentions - any TCP field. To set only a subset of bits in a field, - field may be a subfield or constant may be - masked, e.g. vlan.pcp[2] = 1; and vlan.pcp = - 4/4; both set the most sigificant bit of the VLAN PCP. Not - all fields are modifiable (e.g. eth.type and - ip.proto are read-only), and not all modifiable fields - may be partially modified (e.g. ip.ttl must assigned as - a whole). -
+

+ Sets data or metadata field field to constant value + constant, e.g. outport = "vif0"; to set the + logical output port. To set only a subset of bits in a field, + specify a subfield for field or a masked + constant, e.g. one may use vlan.pcp[2] = 1; + or vlan.pcp = 4/4; to set the most significant bit of + the VLAN PCP. +

+ +

+ Assigning to a field with prerequisites implicitly adds those + prerequisites to match; thus, for example, a flow + that sets tcp.dst applies only to TCP flows, + regardless of whether its match mentions any TCP + field. +

+ +

+ Not all fields are modifiable (e.g. eth.type and + ip.proto are read-only), and not all modifiable fields + may be partially modified (e.g. ip.ttl must be assigned + as a whole). The outport field is modifiable in the + ingress pipeline but not in the egress + pipeline. +
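+
+      A few sketched assignments (the port name is invented; each implicitly
+      adds its prerequisites to the flow's match):
+
+outport = "vif0";
+vlan.pcp = 4/4;
+ip.ttl = 64;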

+ + +
field1 = field2;
+
+

+ Sets data or metadata field field1 to the value of data + or metadata field field2, e.g. reg0 = + ip4.src; copies ip4.src into reg0. + To modify only a subset of a field's bits, specify a subfield for + field1 or field2 or both, e.g. vlan.pcp + = reg0[0..2]; copies the least-significant bits of + reg0 into the VLAN PCP. +

+ +

+ field1 and field2 must be the same type, + either both string or both integer fields. If they are both + integer fields, they must have the same width. +

+ +

+ If field1 or field2 has prerequisites, they + are added implicitly to match. It is possible to + write an assignment with contradictory prerequisites, such as + ip4.src = ip6.src[0..31];, but the contradiction means + that a logical flow with such an assignment will never be matched. +
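+
+      For example, a flow might stash the IPv4 destination in a register
+      for use in a later table (a sketch, not a flow that
+      ovn-northd necessarily generates):
+
+match:   ip4
+actions: reg0 = ip4.dst; next;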

+
+ +
field1 <-> field2;
+
+

+ Similar to field1 = field2; + except that the two values are exchanged instead of copied. Both + field1 and field2 must be modifiable. +

+
+ +
ip.ttl--;
+
+

+ Decrements the IPv4 or IPv6 TTL. If this would make the TTL zero + or negative, then processing of the packet halts; no further + actions are processed. (To properly handle such cases, a + higher-priority flow should match on + ip.ttl == {0, 1}.) +

+ +

Prerequisite: ip

+
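+
+      For example, a routing stage might pair the decrement with a
+      higher-priority guard flow (the priorities here are illustrative):
+
+priority=100  match: ip.ttl == {0, 1}  actions: drop;
+priority=0    match: ip                actions: ip.ttl--;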
+ +
ct_next;
+
+

+ Apply connection tracking to the flow, initializing + ct_state for matching in later tables. + Automatically moves on to the next table, as if followed by + next. +

+ +

+ As a side effect, IP fragments will be reassembled for matching. + If a fragmented packet is output, then it will be sent with any + overlapping fragments squashed. The connection tracking state is + scoped by the logical port, so overlapping addresses may be used. + To allow traffic related to the matched flow, execute + ct_commit. +

+ +

+ It is possible to have actions follow ct_next, + but they will not have access to any of its side effects, and + this is not generally useful. +

+
+ +
ct_commit;
+
+ Commit the flow to the connection tracking entry associated + with it by a previous call to ct_next. +
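+
+      A sketch of a stateful stage built from these actions (table numbers
+      invented): one table sends IPv4 traffic through the connection
+      tracker, and the next commits it:
+
+table_id=1  match: ip4  actions: ct_next;
+table_id=2  match: ip4  actions: ct_commit; next;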

- The following actions will likely be useful later, but they have not - been thought out carefully. + The following actions will likely be useful later, but they have not + been thought out carefully.

-
field1 = field2;
-
- Extends the assignment action to allow copying between fields. -
-
learn
+
arp { action; ... };
+
+

+ Temporarily replaces the IPv4 packet being processed by an ARP + packet and executes each nested action on the ARP + packet. Actions following the arp action, if any, apply + to the original, unmodified packet. +

+ +

+ The ARP packet that this action operates on is initialized based on + the IPv4 packet being processed, as follows. These are default + values that the nested actions will probably want to change: +

-
conntrack
+
    +
  • eth.src unchanged
  • eth.dst unchanged
  • eth.type = 0x0806
  • arp.op = 1 (ARP request)
  • arp.sha copied from eth.src
  • arp.spa copied from ip4.src
  • arp.tha = 00:00:00:00:00:00
  • arp.tpa copied from ip4.dst
+ +

Prerequisite: ip4
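+
+      For example, a hypothetical flow might turn an IPv4 packet into a
+      broadcast ARP request for its destination, relying on the defaults
+      above for the other fields (and assuming outport was set
+      earlier in the pipeline):
+
+arp { eth.dst = ff:ff:ff:ff:ff:ff; output; };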

+ -
dec_ttl { action, ... } { action; ...};
+
icmp4 { action; ... };
- decrement TTL; execute first set of actions if - successful, second set if TTL decrement fails +

+ Temporarily replaces the IPv4 packet being processed by an ICMPv4 + packet and executes each nested action on the ICMPv4 + packet. Actions following the icmp4 action, if any, + apply to the original, unmodified packet. +

+ +

+ The ICMPv4 packet that this action operates on is initialized based + on the IPv4 packet being processed, as follows. These are default + values that the nested actions will probably want to change. + Ethernet and IPv4 fields not listed here are not changed: +

+ +
    +
  • ip.proto = 1 (ICMPv4)
  • ip.frag = 0 (not a fragment)
  • icmp4.type = 3 (destination unreachable)
  • icmp4.code = 1 (host unreachable)
+ +

+ Details TBD. +

+ +

Prerequisite: ip4

-
icmp_reply { action, ... };
-
generate ICMP reply from packet, execute actions
+
tcp_reset;
+
+

+ This action transforms the current TCP packet according to the + following pseudocode: +

+ +
+if (tcp.ack) {
+        tcp.seq = tcp.ack;
+} else {
+        tcp.ack = tcp.seq + length(tcp.payload);
+        tcp.seq = 0;
+}
+tcp.flags = RST;
+
-
arp { action, ... }
-
generate ARP from packet, execute actions
+

+ Then, the action drops all TCP options and payload data, and + updates the TCP checksum. +

+ +

+ Details TBD. +

+ +

Prerequisite: tcp

+
+ + + Human-readable name for this flow's stage in the pipeline. + + + + The overall purpose of these columns is described under Common + Columns at the beginning of this document. + + +
- +

- Each row in this table identifies the physical location of a logical - port. + The rows in this table define multicast groups of logical ports. + Multicast groups allow a single packet transmitted over a tunnel to a + hypervisor to be delivered to multiple VMs on that hypervisor, which + uses bandwidth more efficiently. +

+ +

+ Each row in this table defines a logical multicast group numbered by tunnel_key within its datapath, whose logical + ports are listed in the ports column. +

+ + + The logical datapath in which the multicast group resides. + + + + The value used to designate this logical egress port in tunnel + encapsulations. An index forces the key to be unique within the datapath. The unusual range ensures that multicast group IDs + do not overlap with logical port IDs. + + + +

+ The logical multicast group's name. An index forces the name to be + unique within the datapath. Logical flows in the + ingress pipeline may output to the group just as for individual logical + ports, by assigning the group's name to outport and + executing an output action. +

+ +

+ Multicast group names and logical port names share a single namespace + and thus should not overlap (but the database schema cannot enforce + this). To try to avoid conflicts, ovn-northd uses names + that begin with _MC_. +
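+
+      For example, an ingress flow can flood a packet by assigning a
+      multicast group name to outport (the group name here is
+      invented, following the _MC_ convention):
+
+match:   eth.mcast
+actions: outport = "_MC_flood"; output;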

+
+ + + The logical ports included in the multicast group. All of these ports + must be in the multicast group's datapath logical datapath (but the + database schema cannot enforce this). +
+ + +

+ Each row in this table identifies physical bindings of a logical + datapath. A logical datapath implements a logical pipeline among the + ports in the Port_Binding table associated with it. In + practice, the pipeline in a given logical datapath implements either a + logical switch or a logical router. +

+ + + The tunnel key value to which the logical datapath is bound. + The Tunnel Encapsulation section in + ovn-architecture(7) describes how tunnel keys are + constructed for each supported encapsulation. + + + +

+ Each row in Datapath_Binding is associated with some + logical datapath. ovn-northd uses these keys to track the + association of a logical datapath with concepts in the OVN_Northbound database. +

+ + + For a logical datapath that represents a logical switch, + ovn-northd stores in this key the UUID of the + corresponding Logical_Switch row in + the OVN_Northbound database. + + + + For a logical datapath that represents a logical router, + ovn-northd stores in this key the UUID of the + corresponding Logical_Router row in + the OVN_Northbound database. +
+ + + The overall purpose of these columns is described under Common + Columns at the beginning of this document. + + + +
+ + +

+ Most rows in this table identify the physical location of a logical port. + (The exceptions are logical patch ports, which do not have any physical + location.)

@@ -647,88 +1129,204 @@

- ovn-controller populates the chassis column - for the records that identify the logical ports that are located on its - hypervisor, which ovn-controller in turn finds out by - monitoring the local hypervisor's Open_vSwitch database, which - identifies logical ports via the conventions described in - IntegrationGuide.md. + ovn-controller/ovn-controller-vtep + populates the chassis column for the records that + identify the logical ports that are located on its hypervisor/gateway, + which ovn-controller/ovn-controller-vtep in + turn finds out by monitoring the local hypervisor's Open_vSwitch + database, which identifies logical ports via the conventions described + in IntegrationGuide.md.

- When a chassis shuts down gracefully, it should cleanup the + When a chassis shuts down gracefully, it should clean up the chassis column that it previously had populated. (This is not critical because resources hosted on the chassis are equally unreachable regardless of whether their rows are present.) To handle the case where a VM is shut down abruptly on one chassis, then brought up - again on a different one, ovn-controller must overwrite the - chassis column with new information. + again on a different one, + ovn-controller/ovn-controller-vtep must + overwrite the chassis column with new information.

- - The logical datapath to which the logical port belongs. A logical - datapath implements a logical pipeline via logical flows in the Logical_Flow table. (No table represents a logical datapath.) - + + + The logical datapath to which the logical port belongs. + - - A logical port, taken from name in the OVN_Northbound database's - Logical_Port table. OVN does not - prescribe a particular format for the logical port ID. - + + A logical port, taken from name in the OVN_Northbound database's Logical_Port table. OVN does not + prescribe a particular format for the logical port ID. + - + + The physical location of the logical port. To successfully identify a + chassis, this column must be a Chassis record. This is + populated by + ovn-controller/ovn-controller-vtep. + + +

+ A number that represents the logical port in the key (e.g. STT key or + Geneve TLV) field carried within tunnel protocol packets. +

+ +

+ The tunnel ID must be unique within the scope of a logical datapath. +

+
+ + +

+ The Ethernet address or addresses used as a source address on the + logical port, each in the form + xx:xx:xx:xx:xx:xx. + The string unknown is also allowed to indicate that the + logical port has an unknown set of (additional) source addresses. +

+ +

+ A VM interface would ordinarily have a single Ethernet address. A + gateway port might initially only have unknown, and then + add MAC addresses to the set as it learns new source addresses. +

+
+ + +

+ A type for this logical port. Logical ports can be used to model other + types of connectivity into an OVN logical switch. The following types + are defined: +

+ +
+
(empty string)
+
VM (or VIF) interface.
+ +
patch
+
+ One of a pair of logical ports that act as if connected by a patch + cable. Useful for connecting two logical datapaths, e.g. to connect + a logical router to a logical switch or to another logical router. +
+ +
localnet
+
+ A connection to a locally accessible network from each + ovn-controller instance. A logical switch can only + have a single localnet port attached and at most one + regular logical port. This is used to model direct connectivity to + an existing network. +
+ +
vtep
+
+ A port to a logical switch on a VTEP gateway chassis. In order to + get this port correctly recognized by the OVN controller, the options:vtep-physical-switch and options:vtep-logical-switch keys must also + be defined. +
+
+
+
+ +

- A number that represents the logical port in the key (e.g. VXLAN VNI or - STT key) field carried within tunnel protocol packets. (This avoids - wasting space for a whole UUID in tunneled packets. It also allows OVN - to support encapsulations that cannot fit an entire UUID in their - tunnel keys.) + These options apply to logical ports with a type of + patch.

+ + The logical_port in the + Port_Binding record for the other side of the patch. The named logical_port must specify this logical_port + in its own peer option. That is, the two patch logical + ports must have reversed logical_port and + peer values. +
+ +

- Tunnel ID 0 is reserved for internal use within OVN. + These options apply to logical ports with a type of + localnet.

- - - For containers created inside a VM, this is taken from - parent_name - in the OVN_Northbound database's Logical_Port table. It is left empty if - logical_port belongs to a VM or a container created - in the hypervisor. - - - When logical_port identifies the interface of a container - spawned inside a VM, this column identifies the VLAN tag in - the network traffic associated with that container's network interface. - It is left empty if logical_port belongs to a VM or a - container created in the hypervisor. - + + Required. ovn-controller uses the configuration entry + ovn-bridge-mappings to determine how to connect to this + network. ovn-bridge-mappings is a list of network names + mapped to a local OVS bridge that provides access to that network. An + example of configuring ovn-bridge-mappings would be: +
$ ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-eth0,physnet2:br-eth1
+ +

+ When a logical switch has a localnet port attached, + every chassis that may have a local vif attached to that logical + switch must have a bridge mapping configured to reach that + localnet. Traffic that arrives on a + localnet port is never forwarded over a tunnel to + another chassis. +

+
- - The physical location of the logical port. To successfully identify a - chassis, this column must be a Chassis record. This is - populated by ovn-controller. - + + If set, indicates that the port represents a connection to a specific + VLAN on a locally accessible network. The VLAN ID is used to match + incoming traffic and is also added to outgoing traffic. +
- +

- The Ethernet address or addresses used as a source address on the - logical port, each in the form - xx:xx:xx:xx:xx:xx. - The string unknown is also allowed to indicate that the - logical port has an unknown set of (additional) source addresses. + These options apply to logical ports with a type of + vtep.

+ + Required. The name of the VTEP gateway. + + + + Required. A logical switch name connected by the VTEP gateway. Must + be set when type is vtep. +
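+
+      For example (the names are invented), a vtep port's
+      options column might contain:
+
+vtep-physical-switch=ps1, vtep-logical-switch=ls0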
+ +

- A VM interface would ordinarily have a single Ethernet address. A - gateway port might initially only have unknown, and then - add MAC addresses to the set as it learns new source addresses. + These columns support containers nested within a VM. Specifically, + they are used when type is empty and logical_port identifies the interface of a container spawned + inside a VM. They are empty for containers or VMs that run directly on + a hypervisor.

-
+ + + This is taken from + parent_name + in the OVN_Northbound database's Logical_Port table. + + +

+ Identifies the VLAN tag in the network traffic associated with that + container's network interface. +

+ +

+ This column is used for a different purpose when type + is localnet (see Localnet Options, above). +

+
+