1 <?xml version="1.0" encoding="utf-8"?>
2 <manpage program="ovn-architecture" section="7" title="OVN Architecture">
4 <p>ovn-architecture -- Open Virtual Network architecture</p>
9 OVN, the Open Virtual Network, is a system to support virtual network
10 abstraction. OVN complements the existing capabilities of OVS to add
11 native support for virtual network abstractions, such as virtual L2 and L3
12 overlays and security groups. Services such as DHCP are also desirable
13 features. Just like OVS, OVN's design goal is to have a production-quality
14 implementation that can operate at significant scale.
18 An OVN deployment consists of several components:
24 A <dfn>Cloud Management System</dfn> (<dfn>CMS</dfn>), which is
25 OVN's ultimate client (via its users and administrators). OVN
26 integration requires installing a CMS-specific plugin and
related software (see below). OVN initially targets OpenStack.
32 We generally speak of ``the'' CMS, but one can imagine scenarios in
33 which multiple CMSes manage different parts of an OVN deployment.
38 An OVN Database physical or virtual node (or, eventually, cluster)
39 installed in a central location.
43 One or more (usually many) <dfn>hypervisors</dfn>. Hypervisors must run
44 Open vSwitch and implement the interface described in
45 <code>IntegrationGuide.md</code> in the OVS source tree. Any hypervisor
46 platform supported by Open vSwitch is acceptable.
51 Zero or more <dfn>gateways</dfn>. A gateway extends a tunnel-based
52 logical network into a physical network by bidirectionally forwarding
53 packets between tunnels and a physical Ethernet port. This allows
54 non-virtualized machines to participate in logical networks. A gateway
55 may be a physical host, a virtual machine, or an ASIC-based hardware
56 switch that supports the <code>vtep</code>(5) schema. (Support for the
57 latter will come later in OVN implementation.)
Hypervisors and gateways are together called <dfn>transport nodes</dfn>
or <dfn>chassis</dfn>.
68 The diagram below shows how the major components of OVN and related
69 software interact. Starting at the top of the diagram, we have:
74 The Cloud Management System, as defined above.
79 The <dfn>OVN/CMS Plugin</dfn> is the component of the CMS that
80 interfaces to OVN. In OpenStack, this is a Neutron plugin.
81 The plugin's main purpose is to translate the CMS's notion of logical
82 network configuration, stored in the CMS's configuration database in a
CMS-specific format, into an intermediate representation understood by OVN.
88 This component is necessarily CMS-specific, so a new plugin needs to be
89 developed for each CMS that is integrated with OVN. All of the
90 components below this one in the diagram are CMS-independent.
96 The <dfn>OVN Northbound Database</dfn> receives the intermediate
97 representation of logical network configuration passed down by the
98 OVN/CMS Plugin. The database schema is meant to be ``impedance
99 matched'' with the concepts used in a CMS, so that it directly supports
100 notions of logical switches, routers, ACLs, and so on. See
<code>ovn-nb</code>(5) for details.
105 The OVN Northbound Database has only two clients: the OVN/CMS Plugin
106 above it and <code>ovn-northd</code> below it.
111 <code>ovn-northd</code>(8) connects to the OVN Northbound Database
112 above it and the OVN Southbound Database below it. It translates the
113 logical network configuration in terms of conventional network
114 concepts, taken from the OVN Northbound Database, into logical
115 datapath flows in the OVN Southbound Database below it.
120 The <dfn>OVN Southbound Database</dfn> is the center of the system.
121 Its clients are <code>ovn-northd</code>(8) above it and
122 <code>ovn-controller</code>(8) on every transport node below it.
126 The OVN Southbound Database contains three kinds of data: <dfn>Physical
127 Network</dfn> (PN) tables that specify how to reach hypervisor and
128 other nodes, <dfn>Logical Network</dfn> (LN) tables that describe the
129 logical network in terms of ``logical datapath flows,'' and
130 <dfn>Binding</dfn> tables that link logical network components'
131 locations to the physical network. The hypervisors populate the PN and
Binding tables, whereas <code>ovn-northd</code>(8) populates the LN tables.
The OVN Southbound Database's performance must scale with the number of
138 transport nodes. This will likely require some work on
139 <code>ovsdb-server</code>(1) as we encounter bottlenecks.
140 Clustering for availability may be needed.
146 The remaining components are replicated onto each hypervisor:
151 <code>ovn-controller</code>(8) is OVN's agent on each hypervisor and
152 software gateway. Northbound, it connects to the OVN Southbound
153 Database to learn about OVN configuration and status and to
populate the PN table and the <code>chassis</code> column in the
<code>Bindings</code> table with the hypervisor's status.
156 Southbound, it connects to <code>ovs-vswitchd</code>(8) as an
157 OpenFlow controller, for control over network traffic, and to the
158 local <code>ovsdb-server</code>(1) to allow it to monitor and
159 control Open vSwitch configuration.
163 <code>ovs-vswitchd</code>(8) and <code>ovsdb-server</code>(1) are
164 conventional components of Open vSwitch.
172 +-----------|-----------+
177 | OVN Northbound DB |
182 +-----------|-----------+
185 +-------------------+
186 | OVN Southbound DB |
187 +-------------------+
190 +------------------+------------------+
193 +---------------|---------------+ . +---------------|---------------+
195 | ovn-controller | . | ovn-controller |
198 | ovs-vswitchd ovsdb-server | | ovs-vswitchd ovsdb-server |
200 +-------------------------------+ +-------------------------------+
203 <h2>Chassis Setup</h2>
206 Each chassis in an OVN deployment must be configured with an Open vSwitch
207 bridge dedicated for OVN's use, called the <dfn>integration bridge</dfn>.
208 System startup scripts create this bridge prior to starting
209 <code>ovn-controller</code>. The ports on the integration bridge include:
214 On any chassis, tunnel ports that OVN uses to maintain logical network
connectivity. <code>ovn-controller</code> adds, updates, and removes
these tunnel ports.
220 On a hypervisor, any VIFs that are to be attached to logical networks.
221 The hypervisor itself, or the integration between Open vSwitch and the
222 hypervisor (described in <code>IntegrationGuide.md</code>) takes care of
223 this. (This is not part of OVN or new to OVN; this is pre-existing
integration work that has already been done on hypervisors that support OVS.)
229 On a gateway, the physical port used for logical network connectivity.
230 System startup scripts add this port to the bridge prior to starting
231 <code>ovn-controller</code>. This can be a patch port to another bridge,
232 instead of a physical port, in more sophisticated setups.
237 Other ports should not be attached to the integration bridge. In
238 particular, physical ports attached to the underlay network (as opposed to
239 gateway ports, which are physical ports attached to logical networks) must
240 not be attached to the integration bridge. Underlay physical ports should
241 instead be attached to a separate Open vSwitch bridge (they need not be
242 attached to any bridge at all, in fact).
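For example, a startup script might attach a hypothetical underlay NIC
<code>eth0</code> to a separate bridge (the bridge name
<code>br-phys</code> is illustrative, not mandated by OVN):

  # Keep underlay connectivity off the integration bridge.
  ovs-vsctl add-br br-phys
  ovs-vsctl add-port br-phys eth0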
246 The integration bridge must be configured with failure mode ``secure'' to
247 avoid switching packets between isolated logical networks before
248 <code>ovn-controller</code> starts up. See <code>Controller Failure
249 Settings</code> in <code>ovs-vsctl</code>(8) for more information.
253 The customary name for the integration bridge is <code>br-int</code>, but
254 another name may be used.
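As a concrete illustration, a system startup script might set up the
integration bridge as follows (a minimal sketch; on a gateway, it would also
attach the physical port used for logical network connectivity, here the
hypothetical <code>eth1</code>):

  # Create the integration bridge with failure mode "secure" before
  # ovn-controller starts, so that no packets are switched meanwhile.
  ovs-vsctl -- --may-exist add-br br-int \
            -- set-fail-mode br-int secure

  # On a gateway only: the physical port used for logical networks.
  ovs-vsctl add-port br-int eth1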
257 <h2>Life Cycle of a VIF</h2>
260 Tables and their schemas presented in isolation are difficult to
261 understand. Here's an example.
A VIF on a hypervisor is a virtual network interface attached either
to a VM or a container running directly on that hypervisor (this is
different from the interface of a container running inside a VM).
The steps in this example refer often to details of the OVN Southbound
and OVN Northbound database schemas. Please see <code>ovn-sb</code>(5)
and <code>ovn-nb</code>(5), respectively, for the full story on these
databases.
279 A VIF's life cycle begins when a CMS administrator creates a new VIF
280 using the CMS user interface or API and adds it to a switch (one
281 implemented by OVN as a logical switch). The CMS updates its own
configuration. This includes associating a unique, persistent identifier
<var>vif-id</var> and an Ethernet address <var>mac</var> with the VIF.
287 The CMS plugin updates the OVN Northbound database to include the new
288 VIF, by adding a row to the <code>Logical_Port</code> table. In the new
289 row, <code>name</code> is <var>vif-id</var>, <code>mac</code> is
290 <var>mac</var>, <code>switch</code> points to the OVN logical switch's
291 Logical_Switch record, and other columns are initialized appropriately.
295 <code>ovn-northd</code> receives the OVN Northbound database update.
296 In turn, it makes the corresponding updates to the OVN Southbound
297 database, by adding rows to the OVN Southbound database
<code>Pipeline</code> table to reflect the new port, e.g., adding a
flow to recognize that packets destined to the new port's MAC
address should be delivered to it, and updating the flow that
delivers broadcast and multicast packets to include the new port.
302 It also creates a record in the <code>Bindings</code> table and
303 populates all its columns except the column that identifies the
304 <code>chassis</code>.
308 On every hypervisor, <code>ovn-controller</code> receives the
309 <code>Pipeline</code> table updates that <code>ovn-northd</code> made
310 in the previous step. As long as the VM that owns the VIF is powered off,
311 <code>ovn-controller</code> cannot do much; it cannot, for example,
312 arrange to send packets to or receive packets from the VIF, because the
313 VIF does not actually exist anywhere.
317 Eventually, a user powers on the VM that owns the VIF. On the hypervisor
318 where the VM is powered on, the integration between the hypervisor and
319 Open vSwitch (described in <code>IntegrationGuide.md</code>) adds the VIF
320 to the OVN integration bridge and stores <var>vif-id</var> in
321 <code>external-ids</code>:<code>iface-id</code> to indicate that the
interface is an instantiation of the new VIF, as sketched at the end of
this section. (None of this code is new
323 in OVN; this is pre-existing integration work that has already been done
324 on hypervisors that support OVS.)
328 On the hypervisor where the VM is powered on, <code>ovn-controller</code>
329 notices <code>external-ids</code>:<code>iface-id</code> in the new
330 Interface. In response, it updates the local hypervisor's OpenFlow
331 tables so that packets to and from the VIF are properly handled.
332 Afterward, in the OVN Southbound DB, it updates the
333 <code>Bindings</code> table's <code>chassis</code> column for the
334 row that links the logical port from
335 <code>external-ids</code>:<code>iface-id</code> to the hypervisor.
339 Some CMS systems, including OpenStack, fully start a VM only when its
340 networking is ready. To support this, <code>ovn-northd</code> notices
the <code>chassis</code> column updated for the row in the
<code>Bindings</code> table and pushes this upward by updating the
343 <ref column="up" table="Logical_Port" db="OVN_NB"/> column in the OVN
344 Northbound database's <ref table="Logical_Port" db="OVN_NB"/> table to
345 indicate that the VIF is now up. The CMS, if it uses this feature, can
347 react by allowing the VM's execution to proceed.
351 On every hypervisor but the one where the VIF resides,
352 <code>ovn-controller</code> notices the completely populated row in the
353 <code>Bindings</code> table. This provides <code>ovn-controller</code>
354 the physical location of the logical port, so each instance updates the
355 OpenFlow tables of its switch (based on logical datapath flows in the OVN
356 DB <code>Pipeline</code> table) so that packets to and from the VIF can
357 be properly handled via tunnels.
361 Eventually, a user powers off the VM that owns the VIF. On the
hypervisor where the VM was powered off, the VIF is deleted from the
OVN integration bridge.
367 On the hypervisor where the VM was powered off,
368 <code>ovn-controller</code> notices that the VIF was deleted. In
response, it removes the content of the <code>chassis</code> column in
the <code>Bindings</code> table row for the logical port.
374 On every hypervisor, <code>ovn-controller</code> notices the empty
<code>chassis</code> column in the <code>Bindings</code> table's row
376 for the logical port. This means that <code>ovn-controller</code> no
377 longer knows the physical location of the logical port, so each instance
378 updates its OpenFlow table to reflect that.
382 Eventually, when the VIF (or its entire VM) is no longer needed by
383 anyone, an administrator deletes the VIF using the CMS user interface or
384 API. The CMS updates its own configuration.
388 The CMS plugin removes the VIF from the OVN Northbound database,
389 by deleting its row in the <code>Logical_Port</code> table.
393 <code>ovn-northd</code> receives the OVN Northbound update and in turn
394 updates the OVN Southbound database accordingly, by removing or
395 updating the rows from the OVN Southbound database
396 <code>Pipeline</code> table and <code>Bindings</code> table that
397 were related to the now-destroyed VIF.
401 On every hypervisor, <code>ovn-controller</code> receives the
402 <code>Pipeline</code> table updates that <code>ovn-northd</code> made
403 in the previous step. <code>ovn-controller</code> updates OpenFlow tables
404 to reflect the update, although there may not be much to do, since the VIF
405 had already become unreachable when it was removed from the
406 <code>Bindings</code> table in a previous step.
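To make the power-on step above concrete: the effect of the hypervisor
integration is roughly equivalent to the following (a hedged sketch; the
interface name <code>tap0</code> is hypothetical, and real integrations do
this through the hypervisor's own machinery rather than by hand):

  # Add the VIF's local instantiation to the integration bridge and
  # mark it with the OVN logical port name from the CMS.
  ovs-vsctl add-port br-int tap0 -- \
      set Interface tap0 external-ids:iface-id=vif-id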
<h2>Life Cycle of a Container Interface inside a VM</h2>
OVN provides virtual network abstractions by converting information
written in the OVN_NB database to OpenFlow flows in each hypervisor. Secure
virtual networking for multiple tenants can only be provided if
<code>ovn-controller</code> is the only entity that can modify flows in
Open vSwitch. When the Open vSwitch integration bridge resides in the
hypervisor, it is a fair assumption that tenant workloads running inside
VMs cannot make any changes to Open vSwitch flows.
If the infrastructure provider trusts the applications inside the
containers not to break out and modify the Open vSwitch flows, then
containers can be run directly on hypervisors. This is also the case when
containers are run inside VMs and the Open vSwitch integration bridge,
with flows added by <code>ovn-controller</code>, resides in the same VM.
In both of these cases, the workflow is the same as the example in the
previous section (``Life Cycle of a VIF'').
433 This section talks about the life cycle of a container interface (CIF)
434 when containers are created in the VMs and the Open vSwitch integration
435 bridge resides inside the hypervisor. In this case, even if a container
436 application breaks out, other tenants are not affected because the
437 containers running inside the VMs cannot modify the flows in the
438 Open vSwitch integration bridge.
442 When multiple containers are created inside a VM, there are multiple
CIFs associated with them. The network traffic associated with these
CIFs needs to reach the Open vSwitch integration bridge running in the
445 hypervisor for OVN to support virtual network abstractions. OVN should
446 also be able to distinguish network traffic coming from different CIFs.
447 There are two ways to distinguish network traffic of CIFs.
451 One way is to provide one VIF for every CIF (1:1 model). This means that
452 there could be a lot of network devices in the hypervisor. This would slow
453 down OVS because of all the additional CPU cycles needed for the management
of all the VIFs. It would also mean that the entity creating the
containers in a VM should also be able to create the corresponding VIFs in
the hypervisor.
460 The second way is to provide a single VIF for all the CIFs (1:many model).
461 OVN could then distinguish network traffic coming from different CIFs via
a tag written in every packet. OVN uses this model, with VLAN as
the tagging mechanism.
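To illustrate the 1:many model, one hypothetical way for the entity inside
the VM to tag a CIF's traffic is a VLAN subinterface of the VM's single VIF
(OVN does not mandate this particular mechanism; the names and the VLAN ID
are illustrative):

  # Inside the VM: send the container's traffic out the VM's VIF
  # (eth0) tagged with VLAN 42.
  ip link add link eth0 name eth0.42 type vlan id 42
  ip link set eth0.42 up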
A CIF's life cycle begins when a container is spawned inside a VM by
either the same CMS that created the VM, a tenant that owns that VM,
or even a container orchestration system different from the CMS that
initially created the VM. Whichever entity it is, it will need to
know the <var>vif-id</var> associated with the network interface
of the VM through which the container interface's network traffic is
expected to flow. The entity that creates the container interface
will also need to choose an unused VLAN inside that VM.
479 The container spawning entity (either directly or through the CMS that
480 manages the underlying infrastructure) updates the OVN Northbound
481 database to include the new CIF, by adding a row to the
482 <code>Logical_Port</code> table. In the new row, <code>name</code> is
any unique identifier, <code>parent_name</code> is the <var>vif-id</var>
of the VM through which the CIF's network traffic is expected to flow,
and <code>tag</code> is the VLAN tag that identifies the network
traffic of that CIF (see the sketch at the end of this section).
490 <code>ovn-northd</code> receives the OVN Northbound database update.
491 In turn, it makes the corresponding updates to the OVN Southbound
492 database, by adding rows to the OVN Southbound database's
493 <code>Pipeline</code> table to reflect the new port and also by
494 creating a new row in the <code>Bindings</code> table and
495 populating all its columns except the column that identifies the
496 <code>chassis</code>.
500 On every hypervisor, <code>ovn-controller</code> subscribes to the
501 changes in the <code>Bindings</code> table. When a new row is created
502 by <code>ovn-northd</code> that includes a value in
<code>parent_port</code> column of the <code>Bindings</code> table, the
<code>ovn-controller</code> in the hypervisor whose OVN integration bridge
has an interface with that same <var>vif-id</var> in
<code>external-ids</code>:<code>iface-id</code>
updates the local hypervisor's OpenFlow tables so that packets to and
from the VIF with the particular VLAN <code>tag</code> are properly
handled. Afterward it updates the <code>chassis</code> column of
the <code>Bindings</code> row to reflect the physical location.
The application inside the container can start only after the
underlying network is ready. To support this, <code>ovn-northd</code>
516 notices the updated <code>chassis</code> column in <code>Bindings</code>
517 table and updates the <ref column="up" table="Logical_Port"
518 db="OVN_NB"/> column in the OVN Northbound database's
519 <ref table="Logical_Port" db="OVN_NB"/> table to indicate that the
CIF is now up. The entity responsible for starting the container
application queries this value and starts the application.
Eventually, the entity that created and started the container stops it.
The entity, through the CMS (or directly), deletes its row in the
527 <code>Logical_Port</code> table.
531 <code>ovn-northd</code> receives the OVN Northbound update and in turn
532 updates the OVN Southbound database accordingly, by removing or
533 updating the rows from the OVN Southbound database
534 <code>Pipeline</code> table that were related to the now-destroyed
CIF. It also deletes the row in the <code>Bindings</code> table for
that CIF.
540 On every hypervisor, <code>ovn-controller</code> receives the
541 <code>Pipeline</code> table updates that <code>ovn-northd</code> made
542 in the previous step. <code>ovn-controller</code> updates OpenFlow tables
543 to reflect the update.
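For reference, the CIF's <code>Logical_Port</code> row from the first step
of this life cycle might be created with a raw OVSDB transaction such as
the following (a hedged sketch: <code>$OVN_NB_DB</code> stands for the
northbound database's connection address, the names and VLAN tag are
hypothetical, and linking the new port to its <code>Logical_Switch</code>
row is omitted for brevity):

  # Insert a CIF port whose traffic arrives on the parent VIF with tag 42.
  ovsdb-client transact "$OVN_NB_DB" '["OVN_Northbound",
    {"op":    "insert",
     "table": "Logical_Port",
     "row":   {"name":        "cif1",
               "parent_name": "vif-id",
               "tag":         42}}]'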