* Flow match expression handling library.

ovn-controller is the primary user of flow match expressions, but
the same syntax (and, I imagine, the same code) ought to be useful
in ovn-nbd for ACL match expressions.

** Definition of data structures to represent a match expression as a syntax tree.

** Definition of data structures to represent variables (fields).

Fields need names and prerequisites. Most fields are numeric and
thus need widths. We also need a way to represent nominal fields
(currently just logical port names). It might be appropriate to
associate fields directly with OXM/NXM code points; we have to
decide whether we want OVN to use the OVS flow structure or work
with OXM more directly.

These data structures should probably be defined so that they are
also useful for references to fields in action parsing.

** Parsing into syntax tree.

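A toy recursive-descent parser shows the shape of this step; the
grammar and token set here are deliberate simplifications, not the
syntax OVN will actually accept:

```python
# Sketch: parse "f == v" comparisons combined with &&, ||, !, and
# parentheses into nested tuples ("and", a, b), ("or", a, b),
# ("not", a), ("cmp", field, relop, value).
import re

_TOKEN = re.compile(r'\s*(&&|\|\||==|!=|[()!]|[A-Za-z_][\w.]*|0x[0-9a-fA-F]+|\d+)')

def tokenize(s):
    toks, i = [], 0
    s = s.rstrip()
    while i < len(s):
        m = _TOKEN.match(s, i)
        if not m:
            raise SyntaxError("bad input at offset %d" % i)
        toks.append(m.group(1))
        i = m.end()
    return toks

class Parser:
    def __init__(self, tokens):
        self.toks, self.pos = tokens, 0
    def peek(self):
        return self.toks[self.pos] if self.pos < len(self.toks) else None
    def take(self, tok=None):
        t = self.peek()
        if t is None or (tok is not None and t != tok):
            raise SyntaxError("expected %r, got %r" % (tok, t))
        self.pos += 1
        return t
    def expr(self):                      # expr := term ('||' term)*
        node = self.term()
        while self.peek() == "||":
            self.take()
            node = ("or", node, self.term())
        return node
    def term(self):                      # term := factor ('&&' factor)*
        node = self.factor()
        while self.peek() == "&&":
            self.take()
            node = ("and", node, self.factor())
        return node
    def factor(self):                    # factor := '!' factor | '(' expr ')' | cmp
        if self.peek() == "!":
            self.take()
            return ("not", self.factor())
        if self.peek() == "(":
            self.take()
            node = self.expr()
            self.take(")")
            return node
        field, relop, value = self.take(), self.take(), self.take()
        if relop not in ("==", "!="):
            raise SyntaxError("bad relational operator %r" % relop)
        return ("cmp", field, relop, value)

def parse(s):
    p = Parser(tokenize(s))
    node = p.expr()
    if p.peek() is not None:
        raise SyntaxError("trailing tokens")
    return node
```
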
** Semantic checking against variable definitions.

** Applying prerequisites.

** Simplification into conjunction-of-disjunctions (CoD) form.

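The core transformation is standard: push negations inward (De
Morgan, plus flipping relational operators) and distribute OR over
AND. A minimal sketch, operating on the nested-tuple trees above:

```python
# Sketch: convert a syntax tree into CoD form, i.e. a list of clauses
# where each clause is a list of comparisons (a disjunction) and the
# whole list is their conjunction.
def negate(node):
    tag = node[0]
    if tag == "not":
        return node[1]
    if tag == "and":
        return ("or", negate(node[1]), negate(node[2]))
    if tag == "or":
        return ("and", negate(node[1]), negate(node[2]))
    # cmp: flip the relational operator instead of keeping a NOT node.
    flip = {"==": "!=", "!=": "=="}
    return ("cmp", node[1], flip[node[2]], node[3])

def to_cod(node):
    tag = node[0]
    if tag == "not":
        return to_cod(negate(node[1]))
    if tag == "and":
        return to_cod(node[1]) + to_cod(node[2])
    if tag == "or":
        # (A1 & .. & An) | (B1 & .. & Bm)  ==  AND over all (Ai | Bj).
        left, right = to_cod(node[1]), to_cod(node[2])
        return [a + b for a in left for b in right]
    return [[node]]   # a single comparison: one one-literal clause
```
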
** Transformation from CoD form into OXM matches.

** Flow table handling in ovn-controller.

ovn-controller has to transform logical datapath flows from the
database into OpenFlow flows.

*** Definition (or choice) of data structure for flows and flow table.

It would be natural enough to use "struct flow" and "struct
classifier" for this. Maybe that is what we should do. However,
"struct classifier" is optimized for searches based on packet
headers, whereas everything we care about here can be implemented
with a hash table. Also, we may want to make it easy to add and
remove support for fields without recompiling, which is not
possible with "struct flow" or "struct classifier".

On the other hand, we may find that it is difficult to decide
whether two OXM flow matches are identical (that is, to normalize
them) without a lot of domain-specific knowledge that is already
embedded in "struct flow". It's also going to be a pain to come up
with a way to make anything other than "struct flow" work with the
ofputil_*() functions for encoding and decoding OpenFlow.

It's also possible we could use "struct flow" without "struct
classifier".

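The hash-table option might look roughly like this (a Python sketch
with made-up names; the normalization shown, sorting field/value
pairs, is far cruder than what real OXM matches would need):

```python
# Sketch: a flow table as a plain hash table keyed on a normalized
# match, rather than a packet classifier.
class FlowTable:
    def __init__(self):
        self.flows = {}   # (table_id, priority, normalized match) -> actions

    @staticmethod
    def _key(table_id, priority, match):
        # Normalize by sorting the match's field/value pairs so that two
        # equivalent matches written in different orders hash identically.
        # (Real normalization needs more domain knowledge, e.g. masks.)
        return (table_id, priority, tuple(sorted(match.items())))

    def add(self, table_id, priority, match, actions):
        self.flows[self._key(table_id, priority, match)] = actions

    def lookup(self, table_id, priority, match):
        return self.flows.get(self._key(table_id, priority, match))
```
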
*** Assembling conjunctive flows from flow match expressions.

This transformation explodes logical datapath flows into multiple
OpenFlow flow table entries, since a flow match expression in CoD
form generally requires several OpenFlow flow table entries. It
also requires merging OpenFlow flow table entries that contain
"conjunction" actions (really just concatenating their actions).

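A sketch of the explosion and the merge step, with matches reduced
to plain strings and the flow encoding purely illustrative:

```python
# Sketch: each literal of an n-clause CoD expression becomes a flow
# whose action is conjunction(conj_id, k/n); flows with identical
# matches are merged by concatenating their conjunction actions, and
# one final flow matching conj_id carries the real actions.
def conjunctive_flows(conj_id, clauses, actions):
    n = len(clauses)
    flows = {}   # match -> list of actions
    for k, clause in enumerate(clauses, 1):
        for match in clause:
            flows.setdefault(match, []).append(("conjunction", conj_id, k, n))
    flows["conj_id == %d" % conj_id] = actions
    return flows
```
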
*** Translating logical datapath port names into port numbers.

Logical ports are specified by name in logical datapath flows, but
OpenFlow works only in terms of port numbers.

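The translation itself is a simple table lookup over the match
literals; the field names and the shape of the mapping here are
assumptions for illustration:

```python
# Sketch: rewrite literals that reference logical port names into
# OpenFlow port numbers, leaving other literals untouched.
def resolve_port_names(match, port_numbers):
    field, relop, value = match
    if field in ("inport", "outport"):
        # KeyError here would mean the port is not (yet) bound locally.
        return (field, relop, port_numbers[value])
    return match
```
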
*** Translating logical datapath actions into OpenFlow actions.

Some of the logical datapath actions do not have natural
representations as OpenFlow actions: they require
packet-in/packet-out round trips through ovn-controller. The
trickiest part is going to be making sure that the packet-out
resumes the control flow that was broken off by the packet-in.
We'll probably have to either restrict control flow or add OVS
features to make resuming possible in general; it's not yet clear
which is better.

*** OpenFlow flow table synchronization.

The internal representation of the OpenFlow flow table has to be
synced across the controller connection to OVS. This probably
boils down to the "flow monitoring" feature of OF1.4, which was
later made available as a "standard extension" to OF1.3. (OVS
hasn't implemented this for OF1.4 yet, but the feature is based on
an OVS extension to OF1.0, so it should be straightforward to add.)

We probably need some way to catch cases where OVS and OVN don't
see eye to eye on what exactly constitutes a flow, so that OVN
doesn't waste a lot of CPU time hammering at OVS trying to install
something that OVS is not going to accept.

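The synchronization step reduces to a diff between the desired and
installed tables; a sketch, including a "rejected" set as one
possible way to avoid retrying flows OVS has refused (all names
hypothetical):

```python
# Sketch: compute the adds, modifications, and deletions needed to
# make 'installed' match 'desired'; both map flow keys to actions.
def sync_flows(desired, installed, rejected=frozenset()):
    adds = {k: v for k, v in desired.items()
            if k not in installed and k not in rejected}
    mods = {k: v for k, v in desired.items()
            if k in installed and installed[k] != v}
    dels = sorted(k for k in installed if k not in desired)
    return adds, mods, dels
```
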
*** Logical/physical translation.

When a packet enters the integration bridge, the first stage of
processing needs to translate it from a physical to a logical
context. When a packet leaves the integration bridge, the final
stage of processing needs to translate it back into a physical
context. ovn-controller needs to populate the OpenFlow flow
tables to do these translations.

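One way to picture those two translation stages (the metadata and
register choices here are purely illustrative, not a decided
encoding):

```python
# Sketch: ingress flows stamp the logical context (datapath, logical
# inport) into metadata; egress flows map a logical outport back to a
# physical ofport.
def translation_flows(bindings):
    # bindings: physical ofport -> (logical datapath, logical port)
    ingress = {("in_port", ofport):
                   [("set_metadata", dp), ("set_reg", "lport", lp),
                    ("resubmit", 16)]
               for ofport, (dp, lp) in bindings.items()}
    egress = {("outport", lp): [("output", ofport)]
              for ofport, (_, lp) in bindings.items()}
    return ingress, egress
```
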
*** Determine how to split logical pipeline across physical nodes.

From the original OVN architecture document:

The pipeline processing is split between the ingress and egress
transport nodes. In particular, the logical egress processing may
occur at either hypervisor. Processing the logical egress on the
ingress hypervisor requires more state about the egress vif's
policies, but reduces traffic on the wire that would eventually be
dropped. Whereas, processing on the egress hypervisor can reduce
broadcast traffic on the wire by doing local replication. We
initially plan to process logical egress on the egress hypervisor
so that less state needs to be replicated. However, we may change
this behavior once we gain some experience writing the logical
flows.

The pipeline processing split will influence how tunnel keys are
encoded.

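For example, a tunnel key might pack both the logical datapath and
the logical egress port; the 24/8 bit split below is an assumption
for illustration, not a decided format:

```python
# Sketch: pack a logical datapath id and a logical egress port id
# into a single 32-bit tunnel key, and unpack it on receipt.
DP_BITS, PORT_BITS = 24, 8   # hypothetical split

def encode_tunnel_key(datapath, port):
    assert 0 <= datapath < (1 << DP_BITS) and 0 <= port < (1 << PORT_BITS)
    return (datapath << PORT_BITS) | port

def decode_tunnel_key(key):
    return key >> PORT_BITS, key & ((1 << PORT_BITS) - 1)
```
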
** Interaction with Open_vSwitch and OVN databases:

*** Monitor Chassis table in OVN.

Populate Port records for tunnels to other chassis into the
Open_vSwitch database. As a scale optimization later on, one could
populate only records for tunnels to chassis that have logical
networks in common with this one.

*** Monitor Pipeline table in OVN, trigger flow table recomputation on change.

** ovn-controller parameters and configuration.

*** Tunnel encapsulation to publish.

Default: VXLAN? Geneve?

*** SSL configuration.

We can probably get this from the Open_vSwitch database.

* ovn-nbd

** Monitor OVN_Northbound database, trigger Pipeline recomputation on change.

** Translate each OVN_Northbound entity into Pipeline logical datapath flows.

We have to first sit down and figure out what the general
translation of each entity is. The original OVN architecture
document at
http://openvswitch.org/pipermail/dev/2015-January/050380.html had
some sketches of these, but they need to be completed and refined.

Initially, the simplest way to do this is probably to write
straight C code to do a full translation of the entire
OVN_Northbound database into the format for the Pipeline table in
the OVN Southbound database. As scale increases, this will probably
be too inefficient, since a small change in OVN_Northbound would
require a full recomputation. At that point, we probably want to
adopt a more systematic approach, such as something akin to the
"nlog" system used in NVP (see Koponen et al., "Network
Virtualization in Multi-tenant Datacenters", NSDI 2014).

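The naive full-translation approach amounts to a single pass over
the whole northbound snapshot; a toy sketch with a made-up schema
and made-up flows, just to show the shape (the real translation is
exactly what still needs to be designed):

```python
# Sketch: walk every logical switch in a toy OVN_Northbound snapshot
# and emit Pipeline rows (datapath, table, priority, match, action).
def full_translation(northbound):
    rows = []
    for ls_name, ls in northbound.items():
        for prio, match, action in ls.get("acls", []):
            rows.append((ls_name, 0, prio, match, action))
        for port in ls["ports"]:
            rows.append((ls_name, 1, 50, 'outport == "%s"' % port, "output"))
    return rows
```

Any change to the input requires rerunning the whole pass, which is
the inefficiency the "nlog"-style incremental approach would address.
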
** Push logical datapath flows to Pipeline table.

** Monitor OVN Southbound database Bindings table.

Sync rows in the OVN Bindings table to the "up" column in the
OVN_Northbound database.

* ovsdb-server

ovsdb-server should have adequate features for OVN, but it probably
needs work for scale and possibly for availability as deployments
grow. Here are some thoughts.

Andy Zhou is looking at these issues.

** Scaling the number of connections.

In typical use today a given ovsdb-server has only a single-digit
number of simultaneous connections. The OVN Southbound database
will have a connection from every hypervisor. This use case needs
testing and probably coding work. Here are some possible
improvements.

*** Reducing the amount of data sent to clients.

Currently, whenever a row monitored by a client changes,
ovsdb-server sends the client every monitored column in the row,
even if only one column changes. It might be valuable to reduce
this to only the columns that changed.

Also, whenever a column changes, ovsdb-server sends the entire
contents of the column. It might be valuable, for columns that
are sets or maps, to send only the added or removed values or
key-value pairs.

Currently, clients monitor the entire contents of a table. It
might make sense to allow clients to monitor only rows that
satisfy specific criteria, e.g. to allow an ovn-controller to
receive only Pipeline rows for logical networks on its hypervisor.

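The first two ideas reduce to simple diffing; a sketch (function
names are illustrative, not ovsdb-server's API):

```python
# Sketch: send only columns whose values actually differ, and for a
# set-valued column send only the inserted and deleted elements.
def changed_columns(old_row, new_row):
    return {col: val for col, val in new_row.items()
            if old_row.get(col) != val}

def set_delta(old_set, new_set):
    return sorted(new_set - old_set), sorted(old_set - new_set)
```
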
*** Reducing redundant data and code within ovsdb-server.

Currently, ovsdb-server separately composes database update
information to send to each of its clients. This is fine for a
small number of clients, but it wastes time and memory when
hundreds of clients all want the same updates (as will be the case
in OVN).

(This is somewhat at odds with the idea of letting a client monitor
only some rows in a table, since that would increase the diversity
among clients.)

*** Multithreading.

If it turns out that other changes don't let ovsdb-server scale
adequately, we can multithread ovsdb-server. Initially we might
break only protocol handling into separate threads, leaving the
actual database work serialized through a lock.

** Increasing availability.

Database availability might become an issue. The OVN system
shouldn't grind to a halt if the database becomes unavailable, but
without the database it would become impossible to bring VIFs up
or down, etc.

My current thought on how to increase availability is to add
clustering to ovsdb-server, probably via the Raft consensus
algorithm. As an experiment, I wrote an implementation of Raft
for Open vSwitch that you can clone from:

https://github.com/blp/ovs-reviews.git raft

** Reducing startup time.

As-is, if ovsdb-server restarts, every client will fetch a fresh
copy of the part of the database that it cares about. With
hundreds of clients, this could cause heavy CPU load on
ovsdb-server and use excessive network bandwidth. It would be
better to allow incremental updates even across connection loss.
One way might be to use "Difference Digests" as described in
Eppstein et al., "What's the Difference? Efficient Set
Reconciliation Without Prior Context". (I'm not yet aware of
previous non-academic use of this technique.)

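A far cruder scheme than Difference Digests still shows the shape of
the idea: on reconnect the client sends a per-row digest, and the
server replies only with the rows that differ (a sketch with
hypothetical names; it still costs O(n) digests per reconnect, which
is what the set-reconciliation approach would avoid):

```python
# Sketch: digest-based incremental resync after a reconnect.
import hashlib

def row_digest(row):
    # A stable digest of a row's contents (illustrative only).
    return hashlib.sha256(repr(sorted(row.items())).encode()).hexdigest()

def incremental_resync(server_rows, client_digests):
    # client_digests: {row id: digest} as sent by the reconnecting client.
    updates = {rid: row for rid, row in server_rows.items()
               if client_digests.get(rid) != row_digest(row)}
    deletes = sorted(rid for rid in client_digests
                     if rid not in server_rows)
    return updates, deletes
```
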
** Write ovn-nbctl utility.

The idea here is that we need a utility to act on the OVN_Northbound
database in a way similar to a CMS, so that we can do some testing
without an actual CMS in the picture.

** Init scripts for ovn-controller (on HVs), ovn-nbd, OVN DB server.

** Distribution packaging.

The OVN Neutron plugin is being developed on OpenStack's
development infrastructure, alongside most of the other Neutron
plugins:

http://git.openstack.org/cgit/stackforge/networking-ovn

Its own TODO list is at:

http://git.openstack.org/cgit/stackforge/networking-ovn/tree/doc/source/todo.rst