mdsal clustering
Michał Skalski <michal@...>
Hi,
I'm trying to use the OpenDaylight OVSDB integration with OpenStack Juno. With a single ODL controller it works fine, but I have problems when I try to activate the odl-mdsal-clustering feature.

In my environment I have 3 ODL controllers with IP addresses 192.168.0.6, 192.168.0.7 and 192.168.0.8, and on all of them I have a configuration similar to this gist: https://gist.github.com/michalskalski/36f2c21d52d28f7bf107

I put the OVSDB plugin of all controllers behind haproxy, which listens on the VIP address 192.168.0.2 (a simplified sketch of that haproxy part is in the P.S. below). When I want to connect OVS to ODL I use this command:

    ovs-vsctl set-manager tcp:192.168.0.2:6640

so I use the haproxy address and the connection is forwarded to one of the controllers. Of the 5 vswitches I tried to connect, the br-int bridge was created by ODL only on some of them, for example here:

    root@node-32:~# ovs-vsctl show
    61ca43fb-1b76-4b38-8c46-04d11536ae54
        Manager "tcp:192.168.0.2:6640"
            is_connected: true
        Bridge br-int
            Controller "tcp:192.168.0.8:6653"
                is_connected: true
            fail_mode: secure
            Port br-int
                Interface br-int
                    type: internal
        ovs_version: "2.3.1"

When I want to add a new network or attach a VM, the interfaces are not created on OVS.

Does anyone have experience with mdsal clustering and the OVSDB plugin and can share example configurations? Can we put the OVSDB manager behind haproxy like I did? What about the OpenFlow controller address for the OVS bridges? The OVSDB plugin only adds one controller address; should I manually add the rest of the controller addresses? Maybe it is possible to put the OpenFlow controllers behind haproxy as well, but then how do I tell the OVSDB manager to set that specific address on OVS?

Regards
Michal
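P.S. The OVSDB-manager part of my haproxy config looks roughly like this. It is only a simplified sketch: the frontend/backend/server names and the balance mode here are placeholders, and the exact config is in the gist.

    # plain TCP passthrough of the OVSDB manager port on the VIP
    frontend ovsdb-manager
        bind 192.168.0.2:6640
        mode tcp
        default_backend odl-ovsdb

    # all three ODL controllers as backend servers
    backend odl-ovsdb
        mode tcp
        balance source
        server odl1 192.168.0.6:6640 check
        server odl2 192.168.0.7:6640 check
        server odl3 192.168.0.8:6640 check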
Natarajan_Dhiraviam@...
Hi Michal,
I was trying an OpenStack (Kilo) – ODL (Lithium) integration and could correlate a few of the issues you are facing with mine.
My 2 cents:
- To check whether your config / setup achieves what you intend, try manually configuring the controller addresses on the OVS (see the example command below this list); ideally you should not have to.
- AFAIK, OVSDB clustering (HA support) is yet to be added.
- Could you create and share a simple block diagram? I am not sure I visualize your blocks and the sequencing correctly (as in "ovsdb plugin from all controllers behind haproxy").
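For the manual test in the first point, one command on a vswitch should be enough to point br-int at all three controllers. Treat it as a sketch, assuming the default OpenFlow port 6653 that already shows up in your ovs-vsctl output:

    ovs-vsctl set-controller br-int tcp:192.168.0.6:6653 tcp:192.168.0.7:6653 tcp:192.168.0.8:6653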
Regards Natarajan
Anil Vishnoi
Hi Michal,

Yes, clustering support for OVSDB is planned for the Beryllium release. Once we make OVSDB cluster aware, I believe you won't have to put any load balancer between the switches and the controller instances.
Michał Skalski <michal@...>
Hi Natarajan and Anil,
In my lab I use 3 OpenStack controllers and 2 computes. On each OpenStack controller I have installed ODL Lithium. I tried to show it on this diagram: https://gist.githubusercontent.com/michalskalski/36f2c21d52d28f7bf107/raw/cf783e97e95806633bfc0d96d8a81e99948c5143/diagram

On each controller there is an instance of haproxy which operates in a dedicated namespace and puts the frontend of the OVSDB manager on the VIP (192.168.0.2). On the backend I have all ODL controllers: https://gist.github.com/michalskalski/36f2c21d52d28f7bf107#file-haproxy-ovsdb-manager-cfg

Then I use the VIP as the manager address:

    ovs-vsctl set-manager tcp:192.168.0.2:6640

If the br-int bridge gets created, it has its controller address set to the IP address of one of the ODL controllers. I could try pointing 'of.address' inside the custom.properties file at the VIP address and adding another haproxy balancer for the OpenFlow controller, but I am not sure it makes sense (I sketched what I mean in the P.S. below).

Do you know how clustering will work in the Beryllium release? Will I be able, for example, to add all ODL managers to OVS, and will it also set more than one controller on the bridges? In your opinion, is it worth trying the approach with a load balancer in front of the ODL controllers in the Lithium release, or will this simply not work?

Thanks
Michal
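P.S. To make the OpenFlow idea above more concrete, this is the kind of change I have in mind. It is only a sketch I have not tried yet: the frontend/backend/server names and the balance mode are placeholders.

    # custom.properties on each ODL controller: point of.address at the VIP
    of.address=192.168.0.2

    # second haproxy frontend/backend pair, analogous to the OVSDB-manager one,
    # for the OpenFlow port
    frontend openflow
        bind 192.168.0.2:6653
        mode tcp
        default_backend odl-openflow

    backend odl-openflow
        mode tcp
        balance source
        server odl1 192.168.0.6:6653 check
        server odl2 192.168.0.7:6653 check
        server odl3 192.168.0.8:6653 check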
Sam Hague
Michal,
toggle quoted message
Show quoted text
the design for clustering is very early and we are just drawing up how it will look, so there isn't much detail yet. That should firm up over the next couple of weeks.

Using an LB in front of ODL, I suspect, just won't work well. The main issue is that the OVSDB node will eventually connect to a single ODL node, and there is no sharing of information between the ODL nodes. The ODL southbound is mdsal aware, but nothing behind it to share data has been worked out. The neutron side has the same issue. So it just won't work the way you'd expect. If the OVSDB node connects to a different ODL in the cluster, that will likely just be viewed as a new connection and the process starts all over.

Thanks,
Sam
From: "Michał Skalski" <michal@...> |