Queue usage


Anton Ivanov
 

Hi all,

I was going through the source and I noted something.

We consistently declare queues as blocking, yet we never use the fact that they can block. We effectively use blocking queues as non-blocking ones.

First of all, am I missing something here? If not, we can improve on what we have at present by either actually blocking or by adopting standard coding patterns for non-blocking queues.

The improvement is not huge (~3%), but IMHO performance is still being limited elsewhere, so the real gain may be larger (if we manage to find where the big performance wall is which masks most attempts at performance improvement):

Example:

/exports/src/ODL-instructions/openflowjava/openflow-protocol-impl/src/main/java/org/opendaylight/openflowjava/protocol/impl/connection/ChannelOutboundQueue.java

This code enqueues by default, spawns a flush task whenever something is enqueued, and lets the task lapse after the queue drains. There are two ways of improving on it:

1. Do not enqueue at all if the channel is writable and the queue is empty - an immediate 3% improvement in latency right there. This is the fast/slow or cut-through/queue pattern, which is quite common across the packet-processing world. Nearly everyone uses it and it works.
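A minimal sketch of what I mean, with a stand-in Channel interface (the names here are illustrative, not the actual Netty or ChannelOutboundQueue API):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of the cut-through/queue (fast path / slow path) pattern:
// bypass the queue entirely when nothing is pending and the channel
// can take the write immediately; otherwise fall back to enqueueing.
public class CutThroughSketch {
    // Stand-in for a Netty-style channel; illustrative only.
    interface Channel {
        boolean isWritable();
        void write(Object msg);
    }

    static final class CutThroughQueue {
        private final Queue<Object> queue = new ConcurrentLinkedQueue<>();
        private final Channel channel;

        CutThroughQueue(Channel channel) { this.channel = channel; }

        void submit(Object msg) {
            if (queue.isEmpty() && channel.isWritable()) {
                channel.write(msg);   // fast path: no queueing overhead at all
            } else {
                queue.add(msg);       // slow path: defer to the flusher
            }
        }

        int pending() { return queue.size(); }
    }

    public static void main(String[] args) {
        final int[] directWrites = {0};
        Channel ch = new Channel() {
            private boolean writable = true;
            public boolean isWritable() { return writable; }
            public void write(Object msg) {
                directWrites[0]++;
                writable = false;     // simulate the channel filling up
            }
        };
        CutThroughQueue q = new CutThroughQueue(ch);
        q.submit("a");  // writable + empty queue -> written directly
        q.submit("b");  // channel no longer writable -> queued
        System.out.println(directWrites[0] + " direct, " + q.pending() + " queued");
    }
}
```

The ordering guarantee is preserved because the fast path is only taken when the queue is empty; as soon as anything is pending, all subsequent messages go through the queue.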

2. Do not try to synchronize the flusher, and do not let it lapse. If the flush thread has been launched once, we can use BlockingQueue.take() inside the flusher, which will block and wait for an element to become available (hopefully in a reasonably efficient manner - I have not looked at the Java implementation). I still need to get my head around how to organize the cooperation with the Netty write() in blocking mode so that we do not check channel.isWritable() unless it is necessary, but that is also doable. The end result will use less locking, less synchronization and, most importantly, fewer invocations of execute() - so it should be more efficient than the current implementation.
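A rough sketch of the permanent-flusher idea, assuming a LinkedBlockingQueue and a poison-pill shutdown (all names are illustrative; this is not the actual ChannelOutboundQueue code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of a long-lived flusher thread that parks on BlockingQueue.take()
// instead of being re-launched via execute() whenever the queue goes
// non-empty. No lapse/relaunch logic and no polling are needed.
public class BlockingFlusherSketch {
    private static final String POISON = "POISON"; // illustrative shutdown sentinel

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        StringBuilder flushed = new StringBuilder();

        Thread flusher = new Thread(() -> {
            try {
                while (true) {
                    // take() blocks until an element is available,
                    // so the thread never lapses and never spins.
                    String msg = queue.take();
                    if (POISON.equals(msg)) {
                        break;
                    }
                    // In the real code this would be the Netty write;
                    // here we just record the message.
                    flushed.append(msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        flusher.start();

        queue.put("a");
        queue.put("b");
        queue.put(POISON);
        flusher.join(5000);

        System.out.println(flushed);
    }
}
```

Note that take() is defined on BlockingQueue, not on Queue, so the field would need to be declared as (or cast to) a BlockingQueue for this to work.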

A.

P.S.

IMHO there are other places (in the plugin itself) which can benefit from similar improvements. None of these is a lot by itself (a few percent each); however, I hope that as we look through them we will finally find the culprit for the overall slowness.

A.