* ovn-controller

** ovn-controller parameters and configuration.

*** SSL configuration.

Can probably get this from the Open_vSwitch database.

** Security

*** Limiting the impact of a compromised chassis.

Every instance of ovn-controller has the same full access to the central
OVN_Southbound database.  This means that a compromised chassis can
interfere with the normal operation of the rest of the deployment.  Some
specific examples include writing to the logical flow table to alter
traffic handling, or updating the port binding table to claim ports that
are actually present on a different chassis.  In practice, the
compromised host would be fighting against ovn-northd and the other
instances of ovn-controller, which would be trying to restore the
correct state.  The impact could include at least temporarily
redirecting traffic (so the compromised host could receive traffic that
it shouldn't) and potentially a more general denial of service.

There are different potential improvements in this area.  The first
would be to add some sort of ACL scheme to ovsdb-server.  A proposal for
this should start by defining the ACL policy that ovn-controller needs.
An example policy would be to make Logical_Flow read-only.  Table-level
control is needed, but it is not enough.  For example, ovn-controller
must be able to update the Chassis and Encap tables, but it should only
be able to modify the rows associated with its own chassis and no
others.

A more complex example is the Port_Binding table.  Currently,
ovn-controller is the source of truth about where a port is located.
There seems to be no policy that can prevent malicious use of this table
by a compromised host.

An alternative scheme for port bindings would be to provide an optional
mode where an external entity controls port bindings and they are
read-only to ovn-controller.  This is actually how OpenStack works
today, for example.  The part of OpenStack that manages VMs (Nova) tells
the networking component (Neutron) where a port will be located, as
opposed to the networking component discovering it.

* ovsdb-server

ovsdb-server should have adequate features for OVN, but it probably
needs work for scale and possibly for availability as deployments grow.
Here are some thoughts.

Andy Zhou is looking at these issues.

** Reducing amount of data sent to clients.

Currently, whenever a row monitored by a client changes, ovsdb-server
sends the client every monitored column in the row, even if only one
column changes.  It might be valuable to reduce this to only the columns
that actually changed.  Also, whenever a column changes, ovsdb-server
sends the entire contents of the column.  It might be valuable, for
columns that are sets or maps, to send only the added or removed values
or key-value pairs.

Currently, clients monitor the entire contents of a table.  It might
make sense to allow clients to monitor only rows that satisfy specific
criteria, e.g. to allow an ovn-controller to receive only Logical_Flow
rows for logical networks on its hypervisor.
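To make the first point concrete, here is a rough sketch (Python used
only to build the JSON) of the kind of RFC 7047 "update" notification
ovsdb-server sends today when a single column of a monitored
Port_Binding row changes: "old" carries only the changed column, but
"new" repeats every monitored column.  The UUIDs, column values, and the
particular set of monitored columns are invented for illustration.

    import json

    # Sketch of today's behavior (per RFC 7047): a one-column change to
    # a monitored row still carries every monitored column in "new".
    # UUIDs and values are illustrative only.
    update_notification = {
        "method": "update",
        "params": [
            None,   # monitor id chosen by the client in its "monitor" request
            {
                "Port_Binding": {
                    "11111111-2222-3333-4444-555555555555": {
                        # Only the prior values of changed columns appear
                        # in "old" (here: the port was previously unbound)...
                        "old": {"chassis": ["set", []]},
                        # ...but "new" repeats all monitored columns,
                        # changed or not.
                        "new": {
                            "logical_port": "lport1",
                            "mac": ["set", ["00:00:00:00:00:01 192.168.0.2"]],
                            "chassis": ["uuid",
                                        "66666666-7777-8888-9999-000000000000"],
                            "tunnel_key": 5,
                        },
                    },
                },
            },
        ],
        "id": None,     # notification: no reply expected
    }

    print(json.dumps(update_notification))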
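The second idea above, monitoring only selected rows, might look roughly
like this on the wire.  The "columns" and "select" members below are
standard RFC 7047; the "where" member is an assumed extension that does
not exist today, expressing "only Logical_Flow rows for datapaths
instantiated on this chassis".  The datapath UUID is a placeholder.

    import json

    # Hypothetical: UUID of a logical datapath present on this hypervisor.
    local_datapath = ["uuid", "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"]

    monitor_request = {
        "id": 1,
        "method": "monitor",
        "params": [
            "OVN_Southbound",
            None,   # monitor id echoed back in "update" notifications
            {
                "Logical_Flow": {
                    # Standard RFC 7047: restrict the columns delivered...
                    "columns": ["logical_datapath", "pipeline", "table_id",
                                "priority", "match", "actions"],
                    # ...and the kinds of updates delivered.
                    "select": {"initial": True, "insert": True,
                               "delete": True, "modify": True},
                    # Assumed extension (not in RFC 7047): deliver only
                    # rows whose logical_datapath is local to this chassis.
                    "where": [["logical_datapath", "==", local_datapath]],
                },
            },
        ],
    }

    print(json.dumps(monitor_request))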
** Reducing redundant data and code within ovsdb-server.

Currently, ovsdb-server separately composes database update information
to send to each of its clients.  This is fine for a small number of
clients, but it wastes time and memory when hundreds of clients all want
the same updates (as will be the case in OVN).

(This is somewhat opposed to the idea of letting a client monitor only
some rows in a table, since that would increase the diversity among
clients.)

** Multithreading.

If it turns out that other changes don't let ovsdb-server scale
adequately, we can multithread ovsdb-server.  Initially, one might break
only protocol handling into separate threads, leaving the actual
database work serialized through a lock.

** Increasing availability.

Database availability might become an issue.  The OVN system shouldn't
grind to a halt if the database becomes unavailable, but it would become
impossible to bring VIFs up or down, etc.

My current thought on how to increase availability is to add clustering
to ovsdb-server, probably via the Raft consensus algorithm.  As an
experiment, I wrote an implementation of Raft for Open vSwitch that you
can clone from the "raft" branch of
https://github.com/blp/ovs-reviews.git.

** Reducing startup time.

As-is, if ovsdb-server restarts, every client will fetch a fresh copy of
the part of the database that it cares about.  With hundreds of clients,
this could cause heavy CPU load on ovsdb-server and use excessive
network bandwidth.  It would be better to allow incremental updates even
across connection loss.  One way might be to use "Difference Digests" as
described in Eppstein et al., "What's the Difference?  Efficient Set
Reconciliation Without Prior Context".  (I'm not yet aware of previous
non-academic use of this technique.)

* ovn-controller-vtep

** Support multiple tunnel encapsulations in Chassis.

So far, both ovn-controller and ovn-controller-vtep only allow a chassis
to have one tunnel encapsulation entry.  We should extend the
implementation to support multiple tunnel encapsulations.

** Update learned MAC addresses from VTEP to OVN

The VTEP gateway stores all MAC addresses learned from its physical
interfaces in the 'Ucast_Macs_Local' and 'Mcast_Macs_Local' tables.
ovn-controller-vtep should be able to propagate that information back to
the ovn-sb database, so that other chassis know where to send packets
destined to the extended external network instead of broadcasting them.

** Translate ovn-sb Multicast_Group table into VTEP config

The ovn-controller-vtep daemon should be able to translate Multicast_Group
table entries in the ovn-sb database into Mcast_Macs_Remote table
configuration in the VTEP database.

* Use BFD as tunnel monitor.

Both ovn-controller and ovn-controller-vtep should use BFD to monitor
tunnel liveness.  Both the ovs-vswitchd schema and the VTEP schema
support BFD.
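As a rough illustration of what enabling BFD amounts to on the
ovs-vswitchd side, here is a minimal OVSDB "transact" request (again
sketched in Python) that sets enable=true in the "bfd" map column of a
tunnel Interface.  The interface name is illustrative; whether
ovn-controller and ovn-controller-vtep would set this directly or
through some other path is an open design question.

    import json

    # Illustrative tunnel interface name; ovn-controller would use the
    # name of the tunnel port it created toward the remote chassis.
    TUNNEL_IFACE = "ovn-chassis2-0"

    bfd_enable = {
        "id": 0,
        "method": "transact",
        "params": [
            "Open_vSwitch",
            {
                "op": "update",
                "table": "Interface",
                "where": [["name", "==", TUNNEL_IFACE]],
                # "bfd" is a string-to-string map column; enable=true
                # starts a BFD session over the tunnel, and the
                # bfd_status column then reports liveness.
                "row": {"bfd": ["map", [["enable", "true"]]]},
            },
        ],
    }

    print(json.dumps(bfd_enable))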