I think we should look at why OVS is getting disconnected during the GC. Is it because of the echo timeout? Tuning GC will help, but I don't think it will fix the root cause. If we can increase the echo timeout, the disconnections probably won't happen, at least not because of GC.
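For what it's worth, a hedged sketch of where that timeout can be raised. The property below is the one exposed in later openflowplugin releases via etc/openflowplugin.cfg; the name and default should be verified against your release (Boron-era builds configured this through the config subsystem XML instead):

```
# etc/openflowplugin.cfg (later releases -- verify the key for your build)
# Echo reply timeout in milliseconds. Raising it means a long GC pause is
# less likely to be mistaken for a dead switch connection:
echo-reply-timeout=10000
```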
On Tue, Jan 17, 2017 at 1:15 AM, Sela, Guy <guy.sela@...> wrote:
So a couple of questions:
Did you reach Full GC? If so, did the OVSs disconnect, and did everything continue working smoothly afterwards?
Do you have a script or mechanism you can share to quickly count the number of flows in the datastore?
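Not from the thread, but here is a minimal sketch of such a counter. It assumes the inventory JSON shape used by openflowplugin over RESTCONF (nodes -> node -> flow-node-inventory:table -> flow) and the default endpoint and credentials, all of which should be verified for your release:

```python
# Hedged sketch: count flow entries in a dump of the operational datastore.
# Grab the dump with e.g. (default port/credentials assumed):
#   curl -u admin:admin \
#     http://<odl>:8181/restconf/operational/opendaylight-inventory:nodes \
#     -o inventory.json
import json


def count_flows(inventory: dict) -> int:
    """Total number of flow entries across all nodes and tables."""
    return sum(
        len(table.get("flow", []))
        for node in inventory.get("nodes", {}).get("node", [])
        for table in node.get("flow-node-inventory:table", [])
    )


def count_flows_file(path: str) -> int:
    """Convenience wrapper for a saved RESTCONF response."""
    with open(path) as f:
        return count_flows(json.load(f))
```

Running `count_flows_file("inventory.json")` on a saved response gives a quick flow count without walking the datastore by hand.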
Yeah, we had used similar heap and GC settings when testing ITM, which added OVSDB and all the netvirt models to the mix; I can't recall exactly what they were.
We had focused mainly on the baseline features of OFPlugin, so some of our changes specific to that drive-test may not be applicable to Guy's case. However, increasing the heap
and using G1GC are things he has already accounted for.
For the scenario we were chasing (only openflowplugin + a load-driver app, bulk-o-matic), we used the settings mentioned in the last reply of this bug.
There are a few more tweaks in openflowplugin, but they are all related to specifics of OFPlugin (Helium).
I believe we did have to do some tweaks with heap size, GC settings, etc., right? Do you recall?
Did you manage to survive Full GCs at all?
If I don’t avoid it, a Full GC causes all OVSs to disconnect from the ODL, which results in a bit of chaos. Is there any way around this other than avoiding
Full GC? I managed to avoid it in my testing using a 16G heap size and the G1 collector.
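For concreteness, a minimal sketch of that setup, assuming a Karaf-based ODL distribution where options are exported before start (e.g. from bin/setenv); the flag names are standard HotSpot options, and the pause target is an illustrative choice, not something from this thread:

```shell
# Minimal sketch: 16G heap with the G1 collector, as described above.
# Assumes a Karaf-based distribution; EXTRA_JAVA_OPTS is honoured by bin/karaf.
export JAVA_MAX_MEM=16g
export EXTRA_JAVA_OPTS="-Xms16g -Xmx16g -XX:+UseG1GC -XX:MaxGCPauseMillis=500"
```

Pinning -Xms to -Xmx avoids heap resizing pauses during the provisioning peak.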
We tested not just OVSDB but OVSDB+Netvirt/VPNService, at a scale of about 80 OVSs at the time, with a full mesh. Scale limits come more from the size of the
datastore than anything else, so how far you can scale depends on the extent of the features you're testing. Is it just OVSDB, or Netvirt with multiple VMs per compute across multiple networks?
If you’re running into memory issues, it would be good to increase memory and capture memory usage. While provisioning you may hit a high peak, but it will
come down once provisioning is done. I’ll check if I can get details of the numbers we tested; they should be lying somewhere in archived mails.
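One way to capture that usage over time, a sketch assuming a JDK is available on the controller host (`<pid>` is the controller's Java process id):

```shell
# Sample GC/heap utilisation every 5 seconds while provisioning runs.
jstat -gcutil <pid> 5000
# Snapshot the biggest heap consumers after the peak.
# Note: -histo:live forces a full GC itself, so use it sparingly.
jmap -histo:live <pid> | head -20
```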
From: Anil Vishnoi [mailto:vishnoianil@...]
Sent: 17 January 2017 12:09
To: Jamo Luhrsen <jluhrsen@...>
Cc: Pearl, Tomer <tomer.pearl@...>;
ovsdb-dev@....org; Sela, Guy <guy.sela@...>; Vishal Thapar <vishal.thapar@...>
Subject: Re: [openflowjava-dev] [ovsdb-dev] OVSDB scale
I believe a team from Ericsson also did some testing with it, and we made some more performance improvements in Boron.
@Vishal: do you have any numbers from your OVSDB testing?
On Tue, Jan 3, 2017 at 10:05 PM, Jamo Luhrsen <jluhrsen@...> wrote:
Back in Beryllium there was a performance report released. You can see on page 31 that we
saw OVSDB scale up to 1800 nodes. There may be more recent tests done, and I think Marcus
may have some idea. But I think your 200 number should be achievable.
On 01/02/2017 06:02 AM, Pearl, Tomer wrote:
> I’m trying to bring up a setup with one ODL controller and 200+ OVSs.
> I’m testing with Boron SR1 code.
> Are there any reports about ODL scale tests that I can look at?
> Is 200 OVSs an amount that is supposed to work?
> Tomer P.