<?xml version="1.0" encoding="utf-8"?>
<manpage program="ovn-architecture" section="7" title="OVN Architecture">
<p>ovn-architecture -- Open Virtual Network architecture</p>
OVN, the Open Virtual Network, is a system to support virtual network
abstraction. OVN complements the existing capabilities of OVS to add
native support for virtual network abstractions, such as virtual L2 and L3
overlays and security groups. Services such as DHCP are also desirable
features. Just like OVS, OVN's design goal is to have a production-quality
implementation that can operate at significant scale.
An OVN deployment consists of several components:
A <dfn>Cloud Management System</dfn> (<dfn>CMS</dfn>), which is
OVN's ultimate client (via its users and administrators). OVN
integration requires installing a CMS-specific plugin and
related software (see below). OVN initially targets OpenStack.
We generally speak of ``the'' CMS, but one can imagine scenarios in
which multiple CMSes manage different parts of an OVN deployment.
A physical or virtual node (or, eventually, a cluster) that runs the
OVN Database, installed in a central location.
One or more (usually many) <dfn>hypervisors</dfn>. Hypervisors must run
Open vSwitch and implement the interface described in
<code>IntegrationGuide.md</code> in the OVS source tree. Any hypervisor
platform supported by Open vSwitch is acceptable.
Zero or more <dfn>gateways</dfn>. A gateway extends a tunnel-based
logical network into a physical network by bidirectionally forwarding
packets between tunnels and a physical Ethernet port. This allows
non-virtualized machines to participate in logical networks. A gateway
may be a physical host, a virtual machine, or an ASIC-based hardware
switch that supports the <code>vtep</code>(5) schema. (Support for the
latter will come later in the OVN implementation.)
Hypervisors and gateways are together called <dfn>transport nodes</dfn>
or <dfn>chassis</dfn>.
The diagram below shows how the major components of OVN and related
software interact. Starting at the top of the diagram, we have:
The Cloud Management System, as defined above.
The <dfn>OVN/CMS Plugin</dfn> is the component of the CMS that
interfaces to OVN. In OpenStack, this is a Neutron plugin.
The plugin's main purpose is to translate the CMS's notion of logical
network configuration, stored in the CMS's configuration database in a
CMS-specific format, into an intermediate representation understood by
OVN.

This component is necessarily CMS-specific, so a new plugin needs to be
developed for each CMS that is integrated with OVN. All of the
components below this one in the diagram are CMS-independent.
The <dfn>OVN Northbound Database</dfn> receives the intermediate
representation of logical network configuration passed down by the
OVN/CMS Plugin. The database schema is meant to be ``impedance
matched'' with the concepts used in a CMS, so that it directly supports
notions of logical switches, routers, ACLs, and so on. See
<code>ovn-nb</code>(5) for details.
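<p>
As an illustration, the OVN/CMS Plugin's writes to the Northbound
database amount to ordinary OVSDB transactions. The following sketch
creates a logical switch from the command line; the database name
(``OVN_Northbound''), the server address, and the switch name are
assumptions made here for illustration, and a real plugin would use an
OVSDB client library instead:
</p>

<pre>
# Sketch only: insert a Logical_Switch row named "sw0".  The database
# name, socket address, and column usage are assumptions.
ovsdb-client transact tcp:127.0.0.1:6641 '["OVN_Northbound",
  {"op":    "insert",
   "table": "Logical_Switch",
   "row":   {"name": "sw0"}}]'
</pre>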
The OVN Northbound Database has only two clients: the OVN/CMS Plugin
above it and <code>ovn-nbd</code> below it.
<code>ovn-nbd</code>(8) connects to the OVN Northbound Database above it
and the OVN Database below it. It translates the logical network
configuration in terms of conventional network concepts, taken from the
OVN Northbound Database, into logical datapath flows in the OVN Database
below it.
The <dfn>OVN Database</dfn> is the center of the system. Its clients
are <code>ovn-nbd</code>(8) above it and <code>ovn-controller</code>(8)
on every transport node below it.
The OVN Database contains three kinds of data: <dfn>Physical
Network</dfn> (PN) tables that specify how to reach hypervisor and
other nodes, <dfn>Logical Network</dfn> (LN) tables that describe the
logical network in terms of ``logical datapath flows,'' and
<dfn>Binding</dfn> tables that link logical network components'
locations to the physical network. The hypervisors populate the PN and
Binding tables, whereas <code>ovn-nbd</code>(8) populates the LN
tables.
OVN Database performance must scale with the number of transport nodes.
This will likely require some work on <code>ovsdb-server</code>(1) as
we encounter bottlenecks. Clustering for availability may be needed.
The remaining components are replicated onto each hypervisor:
<code>ovn-controller</code>(8) is OVN's agent on each hypervisor and
software gateway. Northbound, it connects to the OVN Database to learn
about OVN configuration and status and to populate the PN and
<code>Bindings</code> tables with the hypervisor's status. Southbound,
it connects to <code>ovs-vswitchd</code>(8) as an OpenFlow controller,
for control over network traffic, and to the local
<code>ovsdb-server</code>(1) to allow it to monitor and control Open
vSwitch configuration.
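<p>
Both southbound connections can be observed with the standard Open
vSwitch tools. For example, assuming the integration bridge is named
<code>br-int</code> (see ``Chassis Setup'' below), the flows that
<code>ovn-controller</code> has programmed and the configuration it
manages can be inspected as follows:
</p>

<pre>
# List the OpenFlow flows that ovn-controller has installed on the
# integration bridge.
ovs-ofctl dump-flows br-int

# Show the Open vSwitch configuration that ovn-controller monitors and
# modifies through the local ovsdb-server.
ovs-vsctl show
</pre>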
<code>ovs-vswitchd</code>(8) and <code>ovsdb-server</code>(1) are
conventional components of Open vSwitch.
                                  CMS
                                   |
                                   |
                       +-----------|-----------+
                       |           |           |
                       |    OVN/CMS Plugin     |
                       |           |           |
                       |           |           |
                       |   OVN Northbound DB   |
                       |           |           |
                       |           |           |
                       |        ovn-nbd        |
                       |           |           |
                       +-----------|-----------+
                                   |
                                   |
                              +---------+
                              | OVN DB  |
                              +---------+
                                   |
                                   |
                +------------------+------------------+
                |                                     |
                |                                     |
     HV 1       |                          HV n       |
+---------------|---------------+  .  +---------------|---------------+
|               |               |  .  |               |               |
|        ovn-controller         |  .  |        ovn-controller         |
|         |          |          |  .  |         |          |          |
|         |          |          |     |         |          |          |
|  ovs-vswitchd   ovsdb-server  |     |  ovs-vswitchd   ovsdb-server  |
|                               |     |                               |
+-------------------------------+     +-------------------------------+
<h2>Chassis Setup</h2>
Each chassis in an OVN deployment must be configured with an Open vSwitch
bridge dedicated for OVN's use, called the <dfn>integration bridge</dfn>.
System startup scripts create this bridge prior to starting
<code>ovn-controller</code>. The ports on the integration bridge include:
On any chassis, tunnel ports that OVN uses to maintain logical network
connectivity. <code>ovn-controller</code> adds, updates, and removes
these tunnel ports. (A setup sketch follows this list.)
On a hypervisor, any VIFs that are to be attached to logical networks.
The hypervisor itself, or the integration between Open vSwitch and the
hypervisor (described in <code>IntegrationGuide.md</code>), takes care of
this. (This is not part of OVN or new to OVN; this is pre-existing
integration work that has already been done on hypervisors that support
OVS.)
On a gateway, the physical port used for logical network connectivity.
System startup scripts add this port to the bridge prior to starting
<code>ovn-controller</code>. This can be a patch port to another bridge,
instead of a physical port, in more sophisticated setups.
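<p>
The following sketch shows how such a bridge might be assembled by hand.
The bridge name is the customary one, but the tunnel port name, tunnel
type, remote IP address, and gateway port name are hypothetical; in a
real deployment, the startup scripts and <code>ovn-controller</code>
perform the equivalent steps:
</p>

<pre>
# Create the integration bridge (normally done by startup scripts).
ovs-vsctl add-br br-int

# A tunnel port of the kind ovn-controller maintains toward another
# chassis.  Port name, tunnel type, and remote IP are hypothetical.
ovs-vsctl add-port br-int ovn-hv2 -- \
    set Interface ovn-hv2 type=geneve \
        options:remote_ip=192.168.0.2 options:key=flow

# On a gateway only: the physical port used for logical network
# connectivity (normally added by startup scripts).
ovs-vsctl add-port br-int eth1
</pre>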
Other ports should not be attached to the integration bridge. In
particular, physical ports attached to the underlay network (as opposed to
gateway ports, which are physical ports attached to logical networks) must
not be attached to the integration bridge. Underlay physical ports should
instead be attached to a separate Open vSwitch bridge (they need not be
attached to any bridge at all, in fact).
The integration bridge must be configured with failure mode ``secure'' to
avoid switching packets between isolated logical networks before
<code>ovn-controller</code> starts up. See <code>Controller Failure
Settings</code> in <code>ovs-vsctl</code>(8) for more information.
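<p>
For example, startup scripts might set and verify the failure mode as
follows, assuming the customary bridge name:
</p>

<pre>
# Prevent any packet switching until ovn-controller installs flows.
ovs-vsctl set-fail-mode br-int secure

# Confirm the setting.
ovs-vsctl get-fail-mode br-int
</pre>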
The customary name for the integration bridge is <code>br-int</code>, but
another name may be used.
<h2>Life Cycle of a VIF</h2>
Tables and their schemas presented in isolation are difficult to
understand. Here's an example.
The steps in this example often refer to details of the OVN and OVN
Northbound database schemas. Please see <code>ovn</code>(5) and
<code>ovn-nb</code>(5), respectively, for the full story on these
databases.
A VIF's life cycle begins when a CMS administrator creates a new VIF
using the CMS user interface or API and adds it to a switch (one
implemented by OVN as a logical switch). The CMS updates its own
configuration. This includes associating a unique, persistent identifier
<var>vif-id</var> and an Ethernet address <var>mac</var> with the VIF.
The CMS plugin updates the OVN Northbound database to include the new
VIF by adding a row to the <code>Logical_Port</code> table. In the new
row, <code>name</code> is <var>vif-id</var>, <code>mac</code> is
<var>mac</var>, <code>switch</code> points to the OVN logical switch's
<code>Logical_Switch</code> record, and other columns are initialized
appropriately.
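<p>
As a sketch, the plugin's update corresponds to an OVSDB transaction
along these lines. The database name, server address, UUIDs, and MAC
address are hypothetical, and a real plugin would use an OVSDB client
library rather than the command line:
</p>

<pre>
# Hypothetical values throughout.  "switch" refers to the Logical_Switch
# row created earlier by its UUID.
ovsdb-client transact tcp:127.0.0.1:6641 '["OVN_Northbound",
  {"op":    "insert",
   "table": "Logical_Port",
   "row":   {"name":   "3a63ceb0-3b2a-4c5e-9c0f-5f6e4a2b8d11",
             "mac":    "00:01:02:03:04:05",
             "switch": ["uuid", "5b2e01f0-8f9a-4a3b-b6c1-2d9e0c4a7f33"]}}]'
</pre>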
<code>ovn-nbd</code> receives the OVN Northbound database update. In
turn, it makes the corresponding updates to the OVN database by adding
rows to the OVN database <code>Pipeline</code> table to reflect the new
port, e.g., adding a flow to recognize that packets destined to the new
port's MAC address should be delivered to it, and updating the flow that
delivers broadcast and multicast packets to include the new port.
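<p>
One way to observe these logical datapath flows (a sketch; the OVN
database's name and socket address are assumptions here) is to dump the
database and inspect the <code>Pipeline</code> table before and after
the update:
</p>

<pre>
# Dump the OVN database; the Pipeline table shows the logical datapath
# flows that ovn-nbd generated for the new port.  Database name and
# address are assumptions.
ovsdb-client dump tcp:127.0.0.1:6642 OVN
</pre>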
On every hypervisor, <code>ovn-controller</code> receives the
<code>Pipeline</code> table updates that <code>ovn-nbd</code> made in the
previous step. As long as the VM that owns the VIF is powered off,
<code>ovn-controller</code> cannot do much; it cannot, for example,
arrange to send packets to or receive packets from the VIF, because the
VIF does not actually exist anywhere.
Eventually, a user powers on the VM that owns the VIF. On the hypervisor
where the VM is powered on, the integration between the hypervisor and
Open vSwitch (described in <code>IntegrationGuide.md</code>) adds the VIF
to the OVN integration bridge and stores <var>vif-id</var> in
<code>external-ids</code>:<code>iface-id</code> to indicate that the
interface is an instantiation of the new VIF. (None of this code is new
in OVN; this is pre-existing integration work that has already been done
on hypervisors that support OVS.)
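<p>
In sketch form, the integration does the equivalent of the following,
with a hypothetical interface name and <var>vif-id</var>:
</p>

<pre>
# Attach the VM's VIF to the integration bridge and record which
# logical port it instantiates.  Names are hypothetical.
ovs-vsctl add-port br-int tap-vif1 -- \
    set Interface tap-vif1 \
        external-ids:iface-id=3a63ceb0-3b2a-4c5e-9c0f-5f6e4a2b8d11
</pre>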
On the hypervisor where the VM is powered on, <code>ovn-controller</code>
notices <code>external-ids</code>:<code>iface-id</code> in the new
Interface. In response, it updates the local hypervisor's OpenFlow
tables so that packets to and from the VIF are properly handled.
Afterward, it updates the <code>Bindings</code> table in the OVN DB,
adding a row that links the logical port from
<code>external-ids</code>:<code>iface-id</code> to the hypervisor.
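<p>
These updates can be watched as they happen with an OVSDB monitor, much
as <code>ovn-nbd</code> and the other <code>ovn-controller</code>
instances observe them (a sketch; database name and address are
assumptions):
</p>

<pre>
# Watch rows come and go in the Bindings table.  Database name and
# address are assumptions.
ovsdb-client monitor tcp:127.0.0.1:6642 OVN Bindings
</pre>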
Some CMS systems, including OpenStack, fully start a VM only when its
networking is ready. To support this, <code>ovn-nbd</code> notices the
new row in the <code>Bindings</code> table, and pushes this upward by
updating the <ref column="up" table="Logical_Port" db="OVN_NB"/> column
in the OVN Northbound database's <ref table="Logical_Port" db="OVN_NB"/>
table to indicate that the VIF is now up. The CMS, if it uses this
feature, can then react by allowing the VM's execution to proceed.
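<p>
A CMS that uses this feature could, for instance, block until the port
comes up by issuing an OVSDB ``wait'' operation against the Northbound
database. This is only a sketch: the database name, address, and
<var>vif-id</var> are hypothetical, and a real CMS would use its OVSDB
client library:
</p>

<pre>
# Block until the Logical_Port row for the VIF has up == true.
# (No "timeout" member, so the wait is indefinite.)
ovsdb-client transact tcp:127.0.0.1:6641 '["OVN_Northbound",
  {"op":      "wait",
   "table":   "Logical_Port",
   "where":   [["name", "==", "3a63ceb0-3b2a-4c5e-9c0f-5f6e4a2b8d11"]],
   "columns": ["up"],
   "until":   "==",
   "rows":    [{"up": true}]}]'
</pre>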
On every hypervisor but the one where the VIF resides,
<code>ovn-controller</code> notices the new row in the
<code>Bindings</code> table. This provides <code>ovn-controller</code>
with the physical location of the logical port, so each instance updates
the OpenFlow tables of its switch (based on logical datapath flows in the
OVN DB <code>Pipeline</code> table) so that packets to and from the VIF
can be properly handled via tunnels.
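<p>
The flows installed on a remote hypervisor are, in general spirit, of
the following kind. This is purely illustrative: the table layout,
tunnel key, and port numbers are hypothetical, and
<code>ovn-controller</code> computes the real flows from the
<code>Pipeline</code> and <code>Bindings</code> tables rather than from
a static rule:
</p>

<pre>
# Illustrative only: send traffic for the VIF's MAC into the tunnel
# toward the chassis where the VIF resides, tagging it with a
# hypothetical tunnel key.
ovs-ofctl add-flow br-int \
    "dl_dst=00:01:02:03:04:05,actions=set_field:5->tun_id,output:1"
</pre>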
Eventually, a user powers off the VM that owns the VIF. On the
hypervisor where the VM was powered on, the VIF is deleted from the OVN
integration bridge.
On the hypervisor where the VM was powered on,
<code>ovn-controller</code> notices that the VIF was deleted. In
response, it removes the logical port's row from the
<code>Bindings</code> table.
On every hypervisor, <code>ovn-controller</code> notices the row removed
from the <code>Bindings</code> table. This means that
<code>ovn-controller</code> no longer knows the physical location of the
logical port, so each instance updates its OpenFlow tables to reflect
the change.
Eventually, when the VIF (or its entire VM) is no longer needed by
anyone, an administrator deletes the VIF using the CMS user interface or
API. The CMS updates its own configuration.
The CMS plugin removes the VIF from the OVN Northbound database
by deleting its row in the <code>Logical_Port</code> table.
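<p>
As a sketch, this corresponds to an OVSDB ``delete'' operation, with the
same hypothetical database name, address, and <var>vif-id</var> as
above:
</p>

<pre>
# Remove the logical port's row from the Northbound database.
ovsdb-client transact tcp:127.0.0.1:6641 '["OVN_Northbound",
  {"op":    "delete",
   "table": "Logical_Port",
   "where": [["name", "==", "3a63ceb0-3b2a-4c5e-9c0f-5f6e4a2b8d11"]]}]'
</pre>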
<code>ovn-nbd</code> receives the OVN Northbound update and in turn
updates the OVN database accordingly, by removing or updating the
rows from the OVN database <code>Pipeline</code> table that were related
to the now-destroyed VIF.
On every hypervisor, <code>ovn-controller</code> receives the
<code>Pipeline</code> table updates that <code>ovn-nbd</code> made in the
previous step. <code>ovn-controller</code> updates OpenFlow tables to
reflect the update, although there may not be much to do, since the VIF
had already become unreachable when it was removed from the
<code>Bindings</code> table in a previous step.