Re: [controller-dev] Continuing the BUG-2825 discussion


Edward Warnicke <hagbard@...>
 

Looping in ovsdb, openflowplugin, groupbasedpolicy...

Ed

On Fri, Apr 24, 2015 at 3:34 PM, Tony Tkacik -X (ttkacik - Pantheon Technologies SRO at Cisco) <ttkacik@...> wrote:

Note this is merged into post-lithium yangtools - 0.8.0-SNAPSHOT

https://git.opendaylight.org/gerrit/#/c/18922/

 

Merging to Lithium could not be done yet since, as Anton pointed out, some projects rely on the buggy behaviour.

 

I ran a build of this patch against all ODL projects, and the following had failing unit tests:

 

 

- ovsdb

- gbp

- openflowplugin

These failed because they do not properly pass Ipv4 / Ipv6 values.

 

Tony

 

 

From: controller-dev-bounces@... [mailto:controller-dev-bounces@...] On Behalf Of Anton Ivanov
Sent: Friday, April 24, 2015 10:52 PM
To: Edward Warnicke
Cc: controller-dev@...
Subject: Re: [controller-dev] Continuing the BUG-2825 discussion

 

On 24/04/15 21:13, Edward Warnicke wrote:

Also... does this fix 3051:

 


I need to see what it generates which I will do tomorrow. Too late today (on this side of the pond).

We should have announced it on weather before merging. The mayhem will be complete - people have been "using" this bug across the board. Pretty much every project will stop building.

A.


 

Ed

 

On Fri, Apr 24, 2015 at 1:09 PM, Edward Warnicke <hagbard@...> wrote:

Your top line finding is roughly in line with what I expected given past profiling and examination of performance issues.

 

The more nuanced stuff you are looking at is interesting though.

 

Would it perhaps make more sense to dig into the 'bigger fish' on performance?

 

Ed

 

On Fri, Apr 24, 2015 at 12:15 PM, Anton Ivanov <anton.ivanov@...> wrote:

On 22/04/15 14:34, Edward Warnicke wrote:

Sure... but that doesn't really tell us anything about the impact on overall performance.  Chasing microseconds that don't matter is a classic performance tuning error (note: not saying that's what you are doing here... just that we really can't know until we look at its end to end impact).


Hi Edward, hi list,

I ran a full set of benchmarks today and the short story is: there is an impact, but it does not yet make a lot of difference.

Basically, we are stuck/stalled somewhere else.

As I add the faster binary interfaces into the game, I retain a comparable (a few percent higher, but nothing to shout about) average number of responses on CBENCH. While 5-6% is an improvement, it is not particularly big.

What is interesting is that the standard deviation increases (once all bells and whistles are added) by 4.6 times. This is at default settings, which means that the machine running the controller can clock up and down. Investigation shows that it indeed does.

This effect disappears if I turn the dynamic clock off and pin the machine frequency to its highest setting. The average remains pretty much the same (despite the clock running only at max frequency), and the std dev becomes very low.

What this says to me is that there is clearly a CPU saving from the patches, and it is significant to the point where the kernel decides that a particular core's CPU usage is low enough to drop the clock. It also drops it low enough to show up in response measurements. However, this CPU saving cannot be leveraged into an increase in the headline number, because we are constrained elsewhere, and whatever that constraint is, it is pretty "hard".

I am going to set up the benchmark differently next week to measure actual CPU and/or frequency stats, to see exactly what the gain is, as well as look for the "wall" (whatever it is) we are hitting here.

Brgds,

A.

P.S. In order to get to this point I had to fix quite a few 3051/3053 derivatives. That breakage is across the board - pretty much everywhere and in every project.

Gerrits for these and an updated gerrit for the patchset (the current one has a bug) will follow next week.

A.



 

Ed

 

On Wed, Apr 22, 2015 at 4:42 AM, Anton Ivanov <anton.ivanov@...> wrote:

On 22/04/15 12:34, Anton Ivanov wrote:

On 21/04/15 22:56, Edward Warnicke wrote:

Anton,

 

Did you ever do end to end testing to see the impact of these proposed changes on performance?


No, because I keep being dragged off to firefight all kinds of stuff.

I am just about done with setting up the rig.


I'm curious :)


So am I, but it will probably take a day or two to reach the point where the test setup will satisfy my curiosity :(


Just to be clear - I have benchmarked the object instantiation with the existing code and with the new code quite extensively so the differences there are now known.

We have a guaranteed extra latency of 40-100 microseconds on anything and everything we do because of the current code.

A.





A.


 

Ed

 

On Tue, Apr 21, 2015 at 2:52 PM, Anton Ivanov <anton.ivanov@...> wrote:

Hi all,

I found one more item going through my notes from the call - the claimed estimate of the performance impact of storing network-sourced data into the datastore is wrong.

The given example was BGP which stores all data into the data store so supposedly if you change ser-des to use binary forms you are just moving the CPU load elsewhere. There should apparently be no difference.

So is it really "moving the CPU elsewhere"?

Current design:

    Sum = Deserialize binary from network + Convert to String + Pass to regexp for validation + instantiate object + move into store

Proposed fix assuming we do not fix the data store and data store is "as is":

    Sum = Deserialize + instantiate object + Convert to String + move into store


The regexp is not present in the second case - so even if the BI continues to use the string form and has no efficiency improvements, you are looking at a gain of roughly 40 microseconds per IPv4 prefix and 100+ per IPv6 prefix.
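The difference between the two paths can be sketched in Java. This is purely illustrative - the class and method names are hypothetical, not the yangtools API, and the regex is only an approximation of the kind of pattern generated from ietf-inet-types:

```java
import java.util.regex.Pattern;

// Hypothetical sketch of the two validation paths under discussion.
class Ipv4ValidationSketch {

    // A pattern of roughly the shape generated from the yang model's
    // ipv4-address string restriction (approximation, not the real one).
    private static final Pattern IPV4 = Pattern.compile(
        "(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}"
        + "([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])");

    // Current design: wire bytes were converted to a String, which is
    // then pushed through the regexp for validation.
    static boolean validateViaRegex(String s) {
        return IPV4.matcher(s).matches();
    }

    // Proposed path: a 4-byte value from the wire is an IPv4 address by
    // construction; no string conversion or regexp is needed to validate it.
    static boolean validateBinary(byte[] wire) {
        return wire != null && wire.length == 4;
    }

    // The canonical string form is derived from the bytes only when the
    // string representation is actually required.
    static String toCanonicalString(byte[] wire) {
        return (wire[0] & 0xff) + "." + (wire[1] & 0xff) + "."
             + (wire[2] & 0xff) + "." + (wire[3] & 0xff);
    }
}
```

The per-object regexp match in the first path is where the microseconds Anton measured are spent; the second path replaces it with a length check.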

The situation with other use cases is similar. So with all due respect, Robert - you are quite wrong on this one. If we use any of the acceleration interfaces, it is not moving CPU elsewhere; it is a significant CPU saving.

Brgds,

A.



On 21/04/15 18:39, Anton Ivanov wrote:

Hi all,

First of all, continuing from the call - if there is a general feeling that we should not lock ourselves into the additional binary APIs, then for Li we should simply cut the issue down to bug-fixing and make the classes expose only backwards-compatible interfaces.

The reality is - we are buggy and non-v6 compliant. We can release the controller with a "Legacy Equipment, v4 only" sticker on it, but this still leaves the corner cases of v4 prefix and MAC. So it will still be buggy.

I am going to go back to Robert's "Solve it at BI" suggestion.

I am probably missing something.

1. If there is a "union" like structure that sits behind an object (similar to current code generated from union) it has to use the _SAME_ backend data and the backend data should be in the canonical form and stored canonically on instantiation. How is this going to work out as code?

The current code generated to represent a yang union does not do that - it provides alternatives, each of which uses its own storage. If one part is generated and another one is not, how does the generated part talk to the manual one, and how do they use the same storage? Specifically, how does the string part (as expected by the current core components) talk to the canonicalized backend?

The deeper you go into this, the more it looks like zero generated code - all manual (for the 5 affected types; there are 5 types in total under discussion). Even then, I cannot form that code in my head (just yet), which is why I am proposing something that is BA-only.
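One way the "same backend, canonical on instantiation" idea from point 1 could look as hand-written code is sketched below. All names here are hypothetical (this is not generated code and not the yangtools API); it only illustrates a single byte[] backend from which the string member of the union is derived, so the two views can never disagree:

```java
// Hypothetical hand-written union with one shared, canonical backend.
final class IpAddressUnion {
    // The single backing store: 4 bytes for v4, 16 bytes for v6.
    private final byte[] canonical;

    private IpAddressUnion(byte[] canonical) {
        this.canonical = canonical;
    }

    // Instantiation canonicalizes immediately. Note: real code would first
    // verify the input is an address literal, since getByName() would
    // otherwise attempt a DNS lookup on a hostname.
    static IpAddressUnion fromString(String literal) {
        try {
            return new IpAddressUnion(
                java.net.InetAddress.getByName(literal).getAddress());
        } catch (java.net.UnknownHostException e) {
            throw new IllegalArgumentException("not an IP address: " + literal, e);
        }
    }

    // Binary ingress path: already canonical, only the length is checked.
    static IpAddressUnion fromBytes(byte[] wire) {
        if (wire.length != 4 && wire.length != 16) {
            throw new IllegalArgumentException("expected 4 or 16 bytes");
        }
        return new IpAddressUnion(wire.clone());
    }

    // The string "member" of the union is computed from the shared storage,
    // which is how the string-expecting core components would talk to the
    // canonicalized backend.
    String getValue() {
        try {
            return java.net.InetAddress.getByAddress(canonical).getHostAddress();
        } catch (java.net.UnknownHostException e) {
            throw new IllegalStateException(e); // length already validated
        }
    }

    byte[] getBytes() {
        return canonical.clone();
    }
}
```

This is exactly the all-manual shape described above - nothing here resembles the alternatives-with-separate-storage code the generator currently emits.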

2. We are talking about a grand total of 5 types here, all in the core yang data models. They predate yang by a very long time and have their unique canonicalization specifics described in earlier RFCs (or IEEE documents). Their canonical representation does not match the data in the yang model and is described only in the comments:

    MacAddress
    Ipv4Address - optional, performance only
    Ipv4Prefix
    Ipv6Address
    Ipv6Prefix

A generic solution to generate polymorphic code to represent any type with a binary representation is nice. I am all for it. But is it worth doing for 5 types? Also, these 5 types are special - you are not writing any network application without touching at least one of them.

So even if they end up being special, that may be worth it. In fact, the easiest approach is to make them special in BA, special in BI, and be done with it - it is probably a fraction of a percent of the effort required to fix generation.

A.
_______________________________________________
controller-dev mailing list
controller-dev@...
https://lists.opendaylight.org/mailman/listinfo/controller-dev



 





 



 

 

 

 

 

