Triggering off the OF port is sometimes still too early. Look for this log instead:
2015-07-02 11:13:11,589 | INFO | config-pusher | SouthboundHandler | 284 - org.opendaylight.ovsdb.openstack.net-virt - 1.1.0.Lithium | triggerUpdates
We have recently added a more deterministic method where you can do a GET for a certain node. If you GET the below URL and receive a 200 OK, that means netvirt is up and ready. So you keep polling for 200 OK. http://$
This code is in the stable/lithium branch and master. I see Lithium 0.3.0 below, so I am not sure whether you have the official integration build or the latest off stable/lithium. The official build from a couple of weeks ago does not have this URL code.
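The readiness poll described above can be sketched roughly as follows. This is a hedged sketch, not code from the thread: the node URL was elided in the original mail ("http://$"), so `url` below stands for whatever endpoint your netvirt build exposes, and the timeout/interval values are arbitrary choices.

```python
import time
import urllib.request
import urllib.error

def wait_for_netvirt(url, timeout=300, interval=5):
    """Poll `url` with GET until it answers 200 OK or `timeout` seconds pass."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if resp.status == 200:
                    return True  # netvirt is up and ready
        except (urllib.error.URLError, OSError):
            pass  # controller not listening yet; keep polling
        time.sleep(interval)
    return False  # gave up after `timeout` seconds
```

Once this returns True, it should be safe to proceed with `ovs-vsctl set-manager` and stacking, per the approach described above.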
----- Original Message -----
From: "Natarajan Dhiraviam" <Natarajan_Dhiraviam@...>
To: shague@..., ffernand@..., "Ravi Sabapathy" <Ravi_Sabapathy@...>
Cc: "Mohnish Anumala" <Mohnish_Anumala@...>, ovsdb-dev@...,
neutron-dev@..., "C Venkataraghavan" <C_Venkataraghavan@...>
Sent: Friday, July 10, 2015 9:42:54 AM
Subject: RE: Openstack-ODL integration issues in stable/kilo + Lithium 0.3.0
Thanks for the inputs.
This all-in-one setup creates the bridge, network & tenants.
We tried setting ODL_BOOT_WAIT to 300, and in one of two attempts the br-int
and OF port connection were all fine.
We are trying to get this set-up consistent...
We wait for the logs below before issuing an ovs-vsctl set-manager. Hope these
are the ones to look for before doing a set-manager?
Starting point is this
| INFO | Event Dispatcher | FeaturesServiceImpl | 20 -
| org.apache.karaf.features.core - 3.0.3 | Installing feature
| odl-ovsdb-openstack 1.2.0-SNAPSHOT
OVSDB socket is active
| INFO | entLoopGroup-7-1 | LoggingHandler | 106 -
| io.netty.common - 4.0.26.Final | [id: 0xc3b7928a,
| /0:0:0:0:0:0:0:0:6640] ACTIVE
OF socket is ready for listening
INFO | Thread-59 | TcpHandler | 256 -
org.opendaylight.openflowjava.openflow-protocol-impl - 0.6.0.SNAPSHOT |
Switch listener started and ready to accept incoming tcp/tls connections on
The time from start to the OF socket being ready is typically a little over 3
minutes in our OS-ODL setup (4 GB RAM each in the control/compute nodes).
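One rough way to automate the wait Natarajan describes is to poll the karaf log for the readiness lines before issuing set-manager. This is a sketch only: the log path and the pattern shown are assumptions based on the excerpts above, not something prescribed in the thread.

```python
import re
import time

def wait_for_pattern(log_path, pattern, timeout=300, interval=2):
    """Re-scan `log_path` until a line matching `pattern` appears,
    or give up after `timeout` seconds."""
    regex = re.compile(pattern)
    deadline = time.time() + timeout
    while True:
        try:
            with open(log_path, errors="replace") as f:
                if any(regex.search(line) for line in f):
                    return True  # readiness line has been logged
        except FileNotFoundError:
            pass  # log file not created yet
        if time.time() >= deadline:
            return False
        time.sleep(interval)

# Example (path and pattern are assumptions, adjust to your install):
# wait_for_pattern("data/log/karaf.log",
#                  r"ready to accept incoming tcp/tls connections")
```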
From: Sam Hague [mailto:shague@...]
Sent: Thursday, July 09, 2015 10:33 PM
To: Sabapathy, Ravi
Cc: Dhiraviam, Natarajan; ffernand@...; Anumala, Mohnish;
Subject: Re: Openstack-ODL integration issues in stable/kilo + Lithium 0.3.0
What all does the allinone setup do besides stack and start ODL? Meaning, does
it try to create any bridges or networks?
Could you modify the allinone to add a 60s or 90s sleep between when the
ovsdb node manager is set and when ODL starts? Or switch from all-in-one to
external? I think what is happening is that ODL is starting but it takes a
while to get to a resting state. netvirt is the last service to run since it
depends on neutron, openflowplugin and the southbound. Those can take a long
time to start, and then when netvirt finally starts it can take another 30s
or so. So during that time, if the stack is trying to connect to ODL,
there can be issues.
----- Original Message -----
From: "Ravi Sabapathy"
To: "Natarajan Dhiraviam" , ffernand@..., shague@...,
Cc: "C Venkataraghavan"
Sent: Thursday, July 9, 2015 12:23:43 PM
Subject: RE: Openstack-ODL integration issues in stable/kilo + Lithium
From: Dhiraviam, Natarajan
Sent: Thursday, July 09, 2015 11:56 AM
To: Flavio Fernandes (ffernand@...); Sam
Hague; Anumala, Mohnish; ovsdb-dev@...;
Cc: Sabapathy, Ravi
Subject: Openstack-ODL integration issues in stable/kilo + Lithium
Hi Flavio, Sam & All,
We were testing the latest neutron/Lithium ODL / stable kilo devstack
combo in all-in-one mode a few days back, using a modified version of Flavio's
setup, and we faced the issues below.
1. On stacking - unstacking - stacking (with the manager set appropriately),
br-int is *NOT* getting created consistently.
2. Even in cases when br-int was successfully created, the OpenFlow connection
to the controller from the vSwitch on the control node is not set up consistently.
Has anybody else faced similar issues?
We created br-int and the OF connection to ODL on port 6653 manually in the above
cases and could see that the default pipeline flows are getting
programmed on both the control & compute nodes; however, the vxlan tunnels
weren't getting created, so we programmed them manually in OVS.
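For reference, the manual workaround described above (create br-int, point it at the controller on 6653, add a vxlan tunnel port) might look roughly like the commands built below. The bridge/port names, IPs, and vxlan options are placeholders, not values from the thread; with `dry_run=True` the commands are only returned, not executed.

```python
import subprocess

def manual_netvirt_setup(controller_ip, local_ip, remote_ip, dry_run=True):
    """Build (and optionally run) the ovs-vsctl commands for the manual
    br-int / OF / vxlan workaround. All parameter values are placeholders."""
    cmds = [
        # Create the integration bridge if it does not already exist.
        ["ovs-vsctl", "--may-exist", "add-br", "br-int"],
        # Point the bridge at the ODL OpenFlow controller on port 6653.
        ["ovs-vsctl", "set-controller", "br-int", f"tcp:{controller_ip}:6653"],
        # Add the vxlan tunnel port that was not created automatically.
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "vxlan0", "--",
         "set", "interface", "vxlan0", "type=vxlan",
         f"options:local_ip={local_ip}", f"options:remote_ip={remote_ip}",
         "options:key=flow"],
    ]
    if not dry_run:
        for cmd in cmds:
            subprocess.check_call(cmd)
    return cmds
```

Running this for real requires OVS on the host; dry-run mode just lets you inspect the command list first.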
However, a ping from Tenant1-VM1 on the control node to Tenant1-VM2 on the
compute node fails. The compute node receives the broadcast ARP
request and sends it out to Tenant1-VM2 as well; however, the
tenant VM is not responding to the ARP request. Unfortunately we
aren't able to dump / analyze packets on Tenant1-VM2...
Tenant / VM definitions are with respect to the diagram below.
Natarajan & Ravi