Re: Openstack-ODL integration issues in stable/kilo + Lithium 0.3.0

Sam Hague


What all does the all-in-one setup do besides stacking and starting ODL? Specifically, does it try to create any bridges or networks?

Could you modify the all-in-one setup to add a 60s or 90s sleep between when the OVSDB node manager is set and when ODL starts? Or switch from all-in-one to external? I think what is happening is that ODL is starting, but it takes a while to reach a resting state. netvirt is the last service to run, since it depends on neutron, openflowplugin, and the southbound. Those can take a long time to start, and once netvirt finally starts it can take another 30s or so to finish coming up. So during that window, if the stack is trying to connect to ODL, there can be issues.
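A sketch of that idea, polling for readiness instead of a fixed sleep (the host, port, and timeout values here are placeholders, not anything from the actual setup; 6653 is the OpenFlow port mentioned later in this thread):

```shell
# Poll until a TCP port is reachable, instead of sleeping a fixed 60-90s.
# Usage: wait_for_port <host> <port> [timeout_seconds]
wait_for_port() {
    host=$1; port=$2; timeout=${3:-120}; waited=0
    # The /dev/tcp probe works in bash; on other shells the open simply
    # fails, so the loop still runs until the timeout is hit.
    until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
        sleep 2
        waited=$((waited + 2))
        if [ "$waited" -ge "$timeout" ]; then
            echo "timed out waiting for $host:$port" >&2
            return 1
        fi
    done
    return 0
}

# Example (placeholder hostname): block stacking until ODL is listening.
# wait_for_port odl-controller 6653 180
```

Polling like this avoids guessing how long netvirt needs, though it only confirms the port is open, not that netvirt has finished initializing.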

Thanks, Sam

----- Original Message -----
From: "Ravi Sabapathy" <Ravi_Sabapathy@...>
To: "Natarajan Dhiraviam" <Natarajan_Dhiraviam@...>, ffernand@..., shague@..., "Mohnish Anumala"
<Mohnish_Anumala@...>, ovsdb-dev@..., neutron-dev@...
Cc: "C Venkataraghavan" <C_Venkataraghavan@...>
Sent: Thursday, July 9, 2015 12:23:43 PM
Subject: RE: Openstack-ODL integration issues in stable/kilo + Lithium 0.3.0


From: Dhiraviam, Natarajan
Sent: Thursday, July 09, 2015 11:56 AM
To: Flavio Fernandes <ffernand@...> (ffernand@...); Sam Hague;
Anumala, Mohnish; ovsdb-dev@...;
Cc: Sabapathy, Ravi
Subject: Openstack-ODL integration issues in stable/kilo + Lithium 0.3.0

Hi Flavio, Sam & All,

We were testing the latest neutron / Lithium ODL / stable/kilo devstack
combination in all-in-one mode a few days back, using a modified version
of Flavio's Vagrant setup, and we faced the issues below.

1. On stacking - unstacking - stacking (with the manager set appropriately),
br-int is *NOT* getting created consistently.

2. Even at times when br-int was successfully created, the OpenFlow
connection from the vSwitch on the control node to the controller is
not set up consistently.

Has anybody else faced similar issues?
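For what it's worth, a few OVS-side checks that can help narrow down issues 1 and 2 (bridge and table names here follow the usual ODL integration defaults and may differ in a given setup):

```shell
# Is the OVSDB manager set and connected? (look for "is_connected: true")
sudo ovs-vsctl show

# Was br-int created at all?
sudo ovs-vsctl br-exists br-int && echo "br-int exists" || echo "no br-int"

# Which controller target is configured on br-int, and is it connected?
sudo ovs-vsctl get-controller br-int
sudo ovs-vsctl list Controller

# OpenFlow-side view: does the switch answer on br-int?
sudo ovs-ofctl show br-int
```

Comparing `is_connected` in the Manager and Controller tables separates "OVSDB manager never connected" failures from "br-int created but OpenFlow session never established" failures, which are the two distinct symptoms reported above.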

We created br-int and the OpenFlow connection to ODL on port 6653
manually in the above cases and could see that the default pipeline
flows are getting programmed on both the control and compute nodes;
however, the VXLAN tunnels weren't getting created, so we programmed
them manually in OVS. Even then, a ping from Tenant1-VM1 on the control
node to Tenant1-VM2 on the compute node fails. The compute node receives
the broadcast ARP request and forwards it to Tenant1-VM2 as well, but
the tenant VM does not respond to the ARP request. Unfortunately we
aren't able to dump / analyze packets on Tenant1-VM2...
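For reference, the manual workaround described above can be sketched roughly as follows; the IP addresses, port name, and option values are placeholders and assumptions, not the exact commands we ran:

```shell
# Create br-int (if missing) and point it at ODL's OpenFlow port 6653.
# 192.0.2.10 stands in for the ODL controller IP.
sudo ovs-vsctl --may-exist add-br br-int
sudo ovs-vsctl set-controller br-int tcp:192.0.2.10:6653

# Manually add a VXLAN tunnel port toward the other node.
# local_ip/remote_ip are the tunnel endpoint IPs of this node and the
# peer node; key=flow lets the flow table set the VNI per packet.
sudo ovs-vsctl add-port br-int vxlan-peer \
    -- set interface vxlan-peer type=vxlan \
       options:local_ip=192.0.2.11 options:remote_ip=192.0.2.12 \
       options:key=flow
```

With `key=flow`, the tunnel only carries traffic if the pipeline flows actually set the tunnel ID, so a manually added port can still drop packets if the netvirt-programmed flows don't match it, which may be relevant to the failing ARP reply described above.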

Tenant / VM definitions are with reference to the diagram below.


Natarajan & Ravi
