Vaishali Mithbaokar (vmithbao) <vmithbao@...>
<including the integration-dev@... mailer>
As per the attached email, Chris recently put together a nightly performance test against the OpenFlowPlugin distribution (not the base controller directly). It doesn't have the issue where test results drop to 0 after some time.
It may be a good idea to share the cbench configuration you are using.
BTW, in the above setup Chris has the controller and cbench running in separate VMs.
Hello Greg :-)
I built the controller from the master branch about 3 days ago; I didn't run the test on the Hydrogen release. I used git clone ssh://<username>@git.opendaylight.org:29418/controller.git, so I think it is Base :-)
So is it a bug? After running several cbench tests, the results are all 0. Or do I have the wrong configuration? How can I configure it to get better performance?
By the way, have you run a similar performance test? What was the result? Sorry for flooding questions :-)
best wishes!
rainmeter
On Apr 19, 2014, at 10:44 PM, Greg Hall <ghall@...> wrote:
Hello Perf tester :-)
What build/date was your controller?
Hydrogen release or a recent build?
Base or SP? A lot of issues fixed since Hydrogen.
Memory exhaustion is a prime suspect for your apparent hang.
If it's a recent build, the max memory setting -Xmx is 1GB as of a recent change. You'll see a clear message in the console stating this at startup.
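[Editor's note: if the 1GB ceiling turns out to be the bottleneck, the heap can be raised at startup. A command sketch; the -Xms/-Xmx flags are the same ones used in the test procedure later in this thread, assuming run.sh forwards them to the JVM:]

```shell
# Start the controller with a larger heap; run.sh forwards the JVM flags.
# Adjust -Xmx to what your VM can actually spare.
./run.sh -Xms1g -Xmx4g
```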
Greg
On Apr 19, 2014, at 6:40 AM, huangxufu <huangxufu@...> wrote:
Hello all:
This is my first time asking for help from the opendaylight controller dev group.
Recently I have been doing some research on opendaylight controller performance. I want to test the controller's packet_in throughput and latency. I used a VM with 8 CPUs and 8 GB of memory for this experiment. At the beginning, I tested throughput with the cbench tool, but the result was poor: about 20K~30K responses per second. Has anyone run a similar opendaylight performance test with cbench? What was the result?
Another issue is that after using cbench to test performance several times, the cbench results are always 0 and the OSGi console becomes very slow, or even stops responding entirely. So what is wrong with opendaylight? Is this a bug, or due to a wrong configuration on my side (in fact I didn't configure anything; I just ran the run.sh file to start the controller)?
Thank you to anyone who can give me suggestions or explanations for these two problems.
Best wishes to all :)
rainmeter
_______________________________________________
controller-dev mailing list
controller-dev@...
https://lists.opendaylight.org/mailman/listinfo/controller-dev
huangxufu <huangxufu@...>
Hello Vaishali,
Thanks for your suggestion; I will try testing performance again with the OpenFlowPlugin distribution.
Because I run the ODL controller and cbench in the same VM, I use cbench with the following command: $ cbench -t -s 10 (everything else is default).
best wishes!
rainmeter
Christopher O'SHEA <christopher.o.shea@...>
Hi,
It's recommended that you use a different VM for cbench and the controller, because both will put a heavy load on the CPU.
Also, please check that you are using the right settings for cbench, especially enabling the reactive forwarding module in the OF plugin:
BR/Luis
Muthukumaran Kothandaraman <mkothand@...>
Hi Chris,
>> It's recommended you use a different vm for cbench and the controller
.. as long as it is ensured that network latency / bandwidth constraints do not skew the measurements.
While running the controller and cbench on the same bare-metal multi-core machine over loopback (not in different VMs), CPU pinning can help minimize stomping and eliminate possible measurement skews due to the network:
- http://archive.openflow.org/wk/index.php/Controller_Performance_Comparisons
- Section 6 of http://yuba.stanford.edu/~derickso/docs/hotsdn15-erickson.pdf
Agreed, this is not a real-deployment scenario, but it gives a good baseline to compare against when deploying over a network. If performance degrades between the loopback and network environments, the first target of suspicion and troubleshooting would be none other than the network itself.
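[Editor's note: the pinning idea above can be sketched with taskset. The core split and the run.sh path are illustrative assumptions for an 8-CPU box, not from the thread:]

```shell
# Pin the controller and cbench to disjoint cores so they do not stomp on
# each other while talking over loopback. Core lists are illustrative.
taskset -c 0-5 ./run.sh &                            # controller on cores 0-5
taskset -c 6-7 cbench -c localhost -p 6633 -t -s 10  # cbench on cores 6-7
```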
Regards
Muthukumaran (Muthu)
P(existence at t) = 1- 1/P(existence at t-1)
From:
"Christopher O'SHEA"
<christopher.o.shea@...>
To:
huangxufu <huangxufu@...>
Cc:
"controller-dev@..."
<controller-dev@...>, "integration-dev@..."
<integration-dev@...>
Date:
04/20/2014 12:59 PM
Subject:
Re: [controller-dev]
opendaylight controller performance test
problem
Sent by:
controller-dev-bounces@...
Hi,
It's recommended you use a different
vm for cbench and the controller.
This is because both will put heavy
load on the CPU.
On 19 Apr 2014, at 10:13 pm, "huangxufu" <huangxufu@...>
wrote:
Hello Vaishali,
Thanks for your suggestion, I will try
using OpenFlowPlugin distribution to test performance again.
Because i run ODL controller and cbench
at the same VM. so I use cbench with following commend:
$ cbench -t -s 10
others are all default.
best wishes!
rainmeter
在 2014年4月20日,上午1:53,Vaishali
Mithbaokar (vmithbao) <vmithbao@...>
写道:
<including integration-dev@...
mailer >
As per attached email , Chris recently
put together nightly performance test against OpenFlowPlugin distribution
(not the base controller directly). It doesn't have the issue where
test results are 0 after some time.
It may be good idea if you can share
the cbench configuration you are using?
BTW in the above set up Chris has controller
and clench running in separate VM though.
Thanks,
Vaishali
From: huangxufu <huangxufu@...>
Date: Saturday, April 19, 2014 8:01 AM
To: Greg Hall <ghall@...>
Cc: "controller-dev@..."
<controller-dev@...>
Subject: Re: [controller-dev] opendaylight controller performance test
problem
Hello Greg :-)
I build controller with master branch
about 3 days ago, I didn’t run test on the Hydrogen release and i build
recently at maybe at Base. because i use git clone
ssh://<username>@git.opendaylight.org:29418/controller.git
so i think it is Base :-)
So is it a bug? Because after running
several cbench tests, the result is all 0.. or i had the wrong configuration
? How can I configure to get better performance ?
By the way, did you have similar performance
test? What was the result? Sorry for flooding question :-)
best wishes!
rainmeter
在 2014年4月19日,下午10:44,Greg
Hall <ghall@...>
写道:
Hello Perf tester :-)
What build/date was your controller?
Hydrogen release or a recent build?
Base or SP? A lot of issues fixed since Hydrogen.
Memory exhaustion is a prime suspect for your apparent hang.
If it’s a recent build then the Max memory setting -Xmx is 1GB as of
a recent change. You’ll see a clear message in the console
stating this at startup.
Greg
On Apr 19, 2014, at 6:40 AM, huangxufu <huangxufu@...>
wrote:
Hello all:
This is my first time for asking for help from opendaylight controller
dev group.
Recently i am doing some research work about opendaylight controller performance.
I want to test about the controller’s packet_in throughput and
the latency. I used a VM with 8 CPUs and 8 Gb memory to do this experiment.
At the beginning, I tested the throughput with clench tool, but the
result was poor about 20K~30K responses per second. Did someone have similar
test about opendaylight performance with clench ? And how about the result
?
Another issue is that after using clench to test performance for several
times, the clench result is always 0 and the osgi console response become
very slow even can’t response. So what is wrong with the opendaylight?
Is this a bug or due to my wrong configuration( in fact i didn’tconfigure
anything, i just run the run.sh file to start the controller) ?
Thank you for anyone who can give me some suggestion or explanation about
these two problems.
Best wishes to all :)
rainmeter
_______________________________________________
controller-dev mailing list
controller-dev@...
https://lists.opendaylight.org/mailman/listinfo/controller-dev
1
attachments
[integration-dev] Automation
CBench test at Ericsson lab.eml(9K)
download
<邮件附件.eml>
_______________________________________________
controller-dev mailing list
controller-dev@...
https://lists.opendaylight.org/mailman/listinfo/controller-dev_______________________________________________
controller-dev mailing list
controller-dev@...
https://lists.opendaylight.org/mailman/listinfo/controller-dev
|
|
huangxufu <huangxufu@...>
Hello all,
Thanks for all your kind reminders. First, I agree with Muthukumaran. I had read these two references before, which is why I chose to run cbench and the controller on the same VM.
My test steps:
1. Download the base distribution artifact and the OF plugin reactive forwarding bundle.
2. Delete the two AD-SAL bundles (simple forwarding and arp handler) that interfere with MD-SAL cbench measurements.
3. Add the OF plugin reactive forwarding bundle to opendaylight/plugins.
4. Set the controller log level to ERROR.
5. Start the controller with the recommended options: run.sh -of13 -Xms1g -Xmx4g
6. Turn on the data store drop test from the controller's OSGi console: > dropAllPackets on
7. Start cbench with this command: $ taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 100000 -t -i 50 -I 5 -s 10
Then I got the following result:
15:08:28.696  10 switches: flows/sec: 52 99 79 40 55 46 63 31 38 106  total = 0.060895 per ms
15:08:38.816  10 switches: flows/sec: 43 16 35 45 52 24 25 39 46 28  total = 0.035231 per ms
15:08:49.534  10 switches: flows/sec: 38 25 24 20 12 20 4 2 29 26  total = 0.018836 per ms
15:09:00.625  10 switches: flows/sec: 31 3 49 23 26 42 20 25 15 19  total = 0.023019 per ms
15:09:11.015  10 switches: flows/sec: 15 24 9 12 35 14 5 1 5 6  total = 0.012245 per ms
15:09:21.322  10 switches: flows/sec: 0 10 0 0 0 0 0 13 8 4  total = 0.003429 per ms
15:09:32.313  10 switches: flows/sec: 0 0 13 0 9 1 5 0 12 0  total = 0.003673 per ms
15:09:42.416  10 switches: flows/sec: 0 0 0 0 8 0 0 7 0 0  total = 0.001500 per ms
15:09:53.393  10 switches: flows/sec: 0 0 25 0 8 0 9 0 8 0  total = 0.004597 per ms
15:10:03.503  10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0  total = 0.000000 per ms
15:10:14.239  10 switches: flows/sec: 0 0 0 0 7 0 6 14 0 0  total = 0.002539 per ms
15:10:24.535  10 switches: flows/sec: 0 0 9 0 0 0 0 0 7 1  total = 0.001667 per ms
15:10:34.645  10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0  total = 0.000000 per ms
RESULT: 10 switches 10 tests min/max/avg/stdev = 0.00/23.02/5.27/6.78 responses/s
There are still many 0s, and even when the result is not 0 it is very poor.
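[Editor's note: the per-iteration totals above can be summarized with a small awk helper. This is an addition, not part of cbench; the field positions assume the exact line format shown above:]

```shell
# Average the "total = X per ms" figures across cbench iterations.
# Sample input is taken from the output above; in practice, pipe in a
# saved cbench log instead of the here-doc.
awk '/total =/ { sum += $(NF-2); n++ }
     END { if (n) printf "average = %.6f per ms over %d iterations\n", sum/n, n }' <<'EOF'
15:08:28.696 10 switches: flows/sec: 52 99 79 40 55 46 63 31 38 106 total = 0.060895 per ms
15:08:38.816 10 switches: flows/sec: 43 16 35 45 52 24 25 39 46 28 total = 0.035231 per ms
EOF
```

On the two sample lines this prints: average = 0.048063 per ms over 2 iterations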
After the test shown above, I ran the same cbench test against the OF plugin edition and got a similar result.
In contrast, I also ran the beacon and floodlight controllers on the same VM with the same cbench command: $ taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 100000 -t -i 50 -I 5 -s 10
The beacon controller's result is as follows:
cbench: controller benchmarking tool
   running in mode 'throughput'
   connecting to controller at localhost:6633
   faking 10 switches offset 1 :: 13 tests each; 10000 ms per test
   with 100000 unique source MACs per switch
   learning destination mac addresses before the test
   starting test with 0 ms delay after features_reply
   ignoring first 3 "warmup" and last 0 "cooldown" loops
   connection delay of 50ms per 5 switch(es)
   debugging info is off
15:19:20.681  10 switches: flows/sec: 1211932 1211936 1211136 1211135 1209537 1208740 1207942 1207144 1209540 1209538  total = 1209.773558 per ms
15:19:30.787  10 switches: flows/sec: 1332517 1334911 1334910 1333315 1332516 1334911 1334914 1334115 1331717 1331717  total = 1333.339899 per ms
15:19:40.890  10 switches: flows/sec: 1338902 1341296 1341296 1339701 1338902 1341296 1341296 1341299 1340500 1338103  total = 1340.170783 per ms
15:19:50.993  10 switches: flows/sec: 1338902 1341296 1341296 1339701 1338902 1341296 1341298 1340500 1338103 1338103  total = 1339.813624 per ms
15:20:01.098  10 switches: flows/sec: 1367644 1369243 1369242 1367644 1367646 1368444 1366845 1366845 1366845 1366845  total = 1367.583576 per ms
15:20:11.201  10 switches: flows/sec: 1370837 1373232 1373232 1371636 1370837 1373232 1373234 1372435 1370038 1370038  total = 1371.747665 per ms
15:20:21.304  10 switches: flows/sec: 1348482 1350877 1350877 1349281 1348482 1350877 1350879 1350080 1347683 1347683  total = 1349.390289 per ms
15:20:31.407  10 switches: flows/sec: 1325329 1325329 1325329 1325331 1323733 1325329 1325329 1325329 1325329 1324532  total = 1325.071747 per ms
15:20:41.511  10 switches: flows/sec: 1332515 1334909 1334909 1333314 1332515 1334909 1334912 1334113 1331716 1331716  total = 1333.284810 per ms
15:20:51.615  10 switches: flows/sec: 1370837 1373232 1373232 1371636 1370837 1373232 1373234 1372435 1370038 1370038  total = 1371.601603 per ms
15:21:01.718  10 switches: flows/sec: 1361257 1363651 1363651 1362056 1361257 1363651 1363651 1363651 1362855 1360458  total = 1362.610257 per ms
15:21:11.822  10 switches: flows/sec: 1357265 1357265 1357264 1357267 1355669 1357264 1357264 1357264 1357264 1356468  total = 1356.793931 per ms
15:21:21.924  10 switches: flows/sec: 1341297 1341297 1341296 1341299 1339701 1341296 1341296 1341296 1341299 1339701  total = 1340.961574 per ms
RESULT: 10 switches 10 tests min/max/avg/stdev = 1325071.75/1371747.66/1351885.91/15827.27 responses/s
The floodlight controller has similar performance.
So I think the reason for the difference is the controller itself rather than cbench. The configuration seems right, so maybe something can be done on the controller side to improve performance.
Does anyone have suggestions or ideas for how to improve the performance?
best wishes!
rainmeter
toggle quoted message
Show quoted text
Hi Chris,
>>It's
recommended you use a different vm for cbench and the controller
.. as long as it is ensured that network
latency / bandwidth constraints do not skew the measurements
While running controller and cbench
on the same bare-metal multi-core machine via loopback (not different VMs),
CPU-pinning can help minimizing stomping and eliminate possible measurement-skews
due to network
- http://archive.openflow.org/wk/index.php/Controller_Performance_Comparisons
and
- Section 6 ref of http://yuba.stanford.edu/~derickso/docs/hotsdn15-erickson.pdf
Agreed, this is not real-deployment
scenario. But this gives a good baseline to compare when deployed over
network. If performance degrades between loopback and network environment,
first target of suspicion and
troubleshooting would be none other
than network itself
Regards
Muthukumaran (Muthu)
P(existence at t) = 1- 1/P(existence at t-1)
From:
"Christopher O'SHEA"
<christopher.o.shea@...>
To:
huangxufu <huangxufu@...>
Cc:
"controller-dev@..."
<controller-dev@...>, "integration-dev@..."
<integration-dev@...>
Date:
04/20/2014 12:59 PM
Subject:
Re: [controller-dev]
opendaylight controller performance test
problem
Sent by:
controller-dev-bounces@...
Hi,
It's recommended you use a different
vm for cbench and the controller.
This is because both will put heavy
load on the CPU.
On 19 Apr 2014, at 10:13 pm, "huangxufu" <huangxufu@...>
wrote:
Hello Vaishali,
Thanks for your suggestion, I will try
using OpenFlowPlugin distribution to test performance again.
Because i run ODL controller and cbench
at the same VM. so I use cbench with following commend:
$ cbench -t -s 10
others are all default.
best wishes!
rainmeter
在 2014年4月20日,上午1:53,Vaishali
Mithbaokar (vmithbao) <vmithbao@...>
写道:
<including integration-dev@...
mailer >
As per attached email , Chris recently
put together nightly performance test against OpenFlowPlugin distribution
(not the base controller directly). It doesn't have the issue where
test results are 0 after some time.
It may be good idea if you can share
the cbench configuration you are using?
BTW in the above set up Chris has controller
and clench running in separate VM though.
Thanks,
Vaishali
From: huangxufu <huangxufu@...>
Date: Saturday, April 19, 2014 8:01 AM
To: Greg Hall <ghall@...>
Cc: "controller-dev@..."
<controller-dev@...>
Subject: Re: [controller-dev] opendaylight controller performance test
problem
Hello Greg :-)
I build controller with master branch
about 3 days ago, I didn’t run test on the Hydrogen release and i build
recently at maybe at Base. because i use git clone
ssh://<username>@git.opendaylight.org:29418/controller.git
so i think it is Base :-)
So is it a bug? Because after running
several cbench tests, the result is all 0.. or i had the wrong configuration
? How can I configure to get better performance ?
By the way, did you have similar performance
test? What was the result? Sorry for flooding question :-)
best wishes!
rainmeter
在 2014年4月19日,下午10:44,Greg
Hall <ghall@...>
写道:
Hello Perf tester :-)
What build/date was your controller?
Hydrogen release or a recent build?
Base or SP? A lot of issues fixed since Hydrogen.
Memory exhaustion is a prime suspect for your apparent hang.
If it’s a recent build then the Max memory setting -Xmx is 1GB as of
a recent change. You’ll see a clear message in the console
stating this at startup.
Greg
On Apr 19, 2014, at 6:40 AM, huangxufu <huangxufu@...>
wrote:
Hello all:
This is my first time for asking for help from opendaylight controller
dev group.
Recently i am doing some research work about opendaylight controller performance.
I want to test about the controller’s packet_in throughput and
the latency. I used a VM with 8 CPUs and 8 Gb memory to do this experiment.
At the beginning, I tested the throughput with clench tool, but the
result was poor about 20K~30K responses per second. Did someone have similar
test about opendaylight performance with clench ? And how about the result
?
Another issue is that after using clench to test performance for several
times, the clench result is always 0 and the osgi console response become
very slow even can’t response. So what is wrong with the opendaylight?
Is this a bug or due to my wrong configuration( in fact i didn’tconfigure
anything, i just run the run.sh file to start the controller) ?
Thank you for anyone who can give me some suggestion or explanation about
these two problems.
Best wishes to all :)
rainmeter
_______________________________________________
controller-dev mailing list
controller-dev@...
https://lists.opendaylight.org/mailman/listinfo/controller-dev
1
attachments
[integration-dev] Automation
CBench test at Ericsson lab.eml(9K)
download
<邮件附件.eml>
_______________________________________________
controller-dev mailing list
controller-dev@...
https://lists.opendaylight.org/mailman/listinfo/controller-dev_______________________________________________
controller-dev mailing list
controller-dev@...
https://lists.opendaylight.org/mailman/listinfo/controller-dev
|
|
huangxufu <huangxufu@...>
Hello all,
Thanks for all your kind reminding. First, I agree with Muthukumaran. I have read these two references before, so i chose cbench and controller running on the same VM.
1.Downloading base distribution artifact and OF plugin reactive forwarding bundle. 2.Delete two AD-SAL bundles simple forwarding and arp handler that interfere with MD-SAL Cbench measurements 3.Add OF plugin reactive forwarding bundle to opendayligt/plugins. 4.Set controller Log level to ERROR 5.Start controller with recommended options: run.sh -of13 -Xms1g -Xmx4g 6.Turn on the data store drop test, type from the controller’s OSGI console: > dropAllPackets on 7.Then i started cbench with this command : $ taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 100000 -t -i 50 -I 5 -s 10 then i got the following result: 15:08:28.696 10 switches: flows/sec: 52 99 79 40 55 46 63 31 38 106 total = 0.060895 per ms 15:08:38.816 10 switches: flows/sec: 43 16 35 45 52 24 25 39 46 28 total = 0.035231 per ms 15:08:49.534 10 switches: flows/sec: 38 25 24 20 12 20 4 2 29 26 total = 0.018836 per ms 15:09:00.625 10 switches: flows/sec: 31 3 49 23 26 42 20 25 15 19 total = 0.023019 per ms 15:09:11.015 10 switches: flows/sec: 15 24 9 12 35 14 5 1 5 6 total = 0.012245 per ms 15:09:21.322 10 switches: flows/sec: 0 10 0 0 0 0 0 13 8 4 total = 0.003429 per ms 15:09:32.313 10 switches: flows/sec: 0 0 13 0 9 1 5 0 12 0 total = 0.003673 per ms 15:09:42.416 10 switches: flows/sec: 0 0 0 0 8 0 0 7 0 0 total = 0.001500 per ms 15:09:53.393 10 switches: flows/sec: 0 0 25 0 8 0 9 0 8 0 total = 0.004597 per ms 15:10:03.503 10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0 total = 0.000000 per ms 15:10:14.239 10 switches: flows/sec: 0 0 0 0 7 0 6 14 0 0 total = 0.002539 per ms 15:10:24.535 10 switches: flows/sec: 0 0 9 0 0 0 0 0 7 1 total = 0.001667 per ms 15:10:34.645 10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0 total = 0.000000 per ms RESULT: 10 switches 10 tests min/max/avg/stdev = 0.00/23.02/5.27/6.78 responses/s There are still many 0 and even when it is not 0 the result is very poor.
After the test as show above, i use OF plugin edition for the same cbench test, getting simile result.
By contrast, I also ran the Beacon and Floodlight controllers on the same VM, using the same cbench command:
$ taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 100000 -t -i 50 -I 5 -s 10
The Beacon controller's result is as follows:
cbench: controller benchmarking tool
running in mode 'throughput'
connecting to controller at localhost:6633
faking 10 switches offset 1 :: 13 tests each; 10000 ms per test
with 100000 unique source MACs per switch
learning destination mac addresses before the test
starting test with 0 ms delay after features_reply
ignoring first 3 "warmup" and last 0 "cooldown" loops
connection delay of 50ms per 5 switch(es)
debugging info is off
15:19:20.681 10 switches: flows/sec: 1211932 1211936 1211136 1211135 1209537 1208740 1207942 1207144 1209540 1209538 total = 1209.773558 per ms
15:19:30.787 10 switches: flows/sec: 1332517 1334911 1334910 1333315 1332516 1334911 1334914 1334115 1331717 1331717 total = 1333.339899 per ms
15:19:40.890 10 switches: flows/sec: 1338902 1341296 1341296 1339701 1338902 1341296 1341296 1341299 1340500 1338103 total = 1340.170783 per ms
15:19:50.993 10 switches: flows/sec: 1338902 1341296 1341296 1339701 1338902 1341296 1341298 1340500 1338103 1338103 total = 1339.813624 per ms
15:20:01.098 10 switches: flows/sec: 1367644 1369243 1369242 1367644 1367646 1368444 1366845 1366845 1366845 1366845 total = 1367.583576 per ms
15:20:11.201 10 switches: flows/sec: 1370837 1373232 1373232 1371636 1370837 1373232 1373234 1372435 1370038 1370038 total = 1371.747665 per ms
15:20:21.304 10 switches: flows/sec: 1348482 1350877 1350877 1349281 1348482 1350877 1350879 1350080 1347683 1347683 total = 1349.390289 per ms
15:20:31.407 10 switches: flows/sec: 1325329 1325329 1325329 1325331 1323733 1325329 1325329 1325329 1325329 1324532 total = 1325.071747 per ms
15:20:41.511 10 switches: flows/sec: 1332515 1334909 1334909 1333314 1332515 1334909 1334912 1334113 1331716 1331716 total = 1333.284810 per ms
15:20:51.615 10 switches: flows/sec: 1370837 1373232 1373232 1371636 1370837 1373232 1373234 1372435 1370038 1370038 total = 1371.601603 per ms
15:21:01.718 10 switches: flows/sec: 1361257 1363651 1363651 1362056 1361257 1363651 1363651 1363651 1362855 1360458 total = 1362.610257 per ms
15:21:11.822 10 switches: flows/sec: 1357265 1357265 1357264 1357267 1355669 1357264 1357264 1357264 1357264 1356468 total = 1356.793931 per ms
15:21:21.924 10 switches: flows/sec: 1341297 1341297 1341296 1341299 1339701 1341296 1341296 1341296 1341299 1339701 total = 1340.961574 per ms
RESULT: 10 switches 10 tests min/max/avg/stdev = 1325071.75/1371747.66/1351885.91/15827.27 responses/s
The Floodlight controller has similar performance.
So I think the reason for the difference is the controller itself rather than cbench. The configuration seems right, so perhaps something can be done on the controller side to improve performance.
Does anyone have suggestions or ideas for how to improve the performance?
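As a side note for anyone comparing such runs: the min/max/avg figures in cbench's RESULT line can be recomputed from the per-test "total = ... per ms" lines. The helper below is a hypothetical sketch (not part of cbench) that skips the warmup loops the same way the `-w` option does:

```python
import re
import statistics

def summarize(lines, warmup=3):
    """Recompute min/max/avg responses/s from cbench per-test output,
    ignoring the first `warmup` loops (as cbench's -w option does)."""
    totals = []
    for line in lines:
        m = re.search(r"total = ([\d.]+) per ms", line)
        if m:
            # cbench prints per-millisecond totals; scale to per-second
            totals.append(float(m.group(1)) * 1000)
    counted = totals[warmup:]
    return min(counted), max(counted), statistics.mean(counted)
```

Feeding it the 13 Beacon test lines above reproduces the reported min/max/avg of roughly 1325071.75 / 1371747.66 / 1351885.91 responses/s.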
best wishes!
rainmeter
On April 21, 2014, at 1:27 PM, Muthukumaran Kothandaraman <mkothand@...> wrote:
Hi Chris,
>> It's recommended you use a different vm for cbench and the controller
.. as long as it is ensured that network latency / bandwidth constraints do not skew the measurements.
While running the controller and cbench on the same bare-metal multi-core machine over loopback (not in different VMs), CPU pinning can help minimize the two processes stomping on each other and eliminate possible measurement skew due to the network:
- http://archive.openflow.org/wk/index.php/Controller_Performance_Comparisons
- Section 6 of http://yuba.stanford.edu/~derickso/docs/hotsdn15-erickson.pdf
Agreed, this is not a real-deployment scenario, but it gives a good baseline to compare against when deploying over a network. If performance degrades between the loopback and network environments, the first target of suspicion and troubleshooting would be none other than the network itself.
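Pinning can be done with `taskset` on the command line or programmatically; a minimal sketch (Linux only; the core IDs are illustrative) using Python's scheduler-affinity API:

```python
import os

# Restrict the current process (pid 0 = self) to core 0, mirroring
# what `taskset -c 0 cbench ...` does for the benchmark process.
# Core 0 is just an illustration: in practice, pin the controller and
# cbench to disjoint core sets so they do not contend for the same CPUs.
os.sched_setaffinity(0, {0})

# The kernel will now schedule this process only on core 0.
print(os.sched_getaffinity(0))
```

The same call made before launching cbench (or simply prefixing each command with `taskset -c <cores>`) keeps the benchmark and the controller on disjoint cores.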
Regards
Muthukumaran (Muthu)
P(existence at t) = 1- 1/P(existence at t-1)
From: "Christopher O'SHEA" <christopher.o.shea@...>
To: huangxufu <huangxufu@...>
Cc: "controller-dev@..." <controller-dev@...>, "integration-dev@..." <integration-dev@...>
Date: 04/20/2014 12:59 PM
Subject: Re: [controller-dev] opendaylight controller performance test problem
Sent by: controller-dev-bounces@...
Hi,
It's recommended you use a different VM for cbench and the controller, because both will put a heavy load on the CPU.
On 19 Apr 2014, at 10:13 pm, "huangxufu" <huangxufu@...> wrote:
Hello Vaishali,
Thanks for your suggestion; I will try the OpenFlowPlugin distribution and test performance again.
Because I run the ODL controller and cbench on the same VM, I used cbench with the following command:
$ cbench -t -s 10
All other options are left at their defaults.
best wishes!
rainmeter
_______________________________________________
controller-dev mailing list
controller-dev@...
https://lists.opendaylight.org/mailman/listinfo/controller-dev
Attachment: [integration-dev] Automation CBench test at Ericsson lab.eml (9K)
Hi, can you please rerun your test with the OSGi setting: dropAllPacketsRpc on
This should be in theory
Hi Jan,
We have this and other performance results collected by Chris O'Shea; it would be nice if you could join the performance discussion this afternoon at 5:30 PM PST.
BR/Luis
Subject: Re: [controller-dev] opendaylight controller performance test problem
Date: April 21, 2014 at 1:16:10 AM PDT
Christopher O'SHEA <christopher.o.shea@...>
Hi Huang,
I agree the performance isn't up to par with other controllers.
Here are the results from last night's test run.
Chris
Hello all,
Thanks for all your kind reminders.
First, I agree with Muthukumaran. I have read these two references before, so I chose to run cbench and the controller on the same VM.
1. Download the base distribution artifact and the OF plugin reactive forwarding bundle.
2. Delete the two AD-SAL bundles (simple forwarding and arp handler) that interfere with MD-SAL cbench measurements.
3. Add the OF plugin reactive forwarding bundle to opendaylight/plugins.
4. Set the controller log level to ERROR.
5. Start the controller with the recommended options: run.sh -of13 -Xms1g -Xmx4g
6. Turn on the data store drop test; from the controller's OSGi console, type: > dropAllPackets on
7. Then I started cbench with this command:
$ taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 100000 -t -i 50 -I 5 -s 10
Then I got the following result:
15:08:28.696 10 switches: flows/sec: 52 99 79 40 55 46 63 31 38 106 total = 0.060895 per ms
15:08:38.816 10 switches: flows/sec: 43 16 35 45 52 24 25 39 46 28 total = 0.035231 per ms
15:08:49.534 10 switches: flows/sec: 38 25 24 20 12 20 4 2 29 26 total = 0.018836 per ms
15:09:00.625 10 switches: flows/sec: 31 3 49 23 26 42 20 25 15 19 total = 0.023019 per ms
15:09:11.015 10 switches: flows/sec: 15 24 9 12 35 14 5 1 5 6 total = 0.012245 per ms
15:09:21.322 10 switches: flows/sec: 0 10 0 0 0 0 0 13 8 4 total = 0.003429 per ms
15:09:32.313 10 switches: flows/sec: 0 0 13 0 9 1 5 0 12 0 total = 0.003673 per ms
15:09:42.416 10 switches: flows/sec: 0 0 0 0 8 0 0 7 0 0 total = 0.001500 per ms
15:09:53.393 10 switches: flows/sec: 0 0 25 0 8 0 9 0 8 0 total = 0.004597 per ms
15:10:03.503 10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0 total = 0.000000 per ms
15:10:14.239 10 switches: flows/sec: 0 0 0 0 7 0 6 14 0 0 total = 0.002539 per ms
15:10:24.535 10 switches: flows/sec: 0 0 9 0 0 0 0 0 7 1 total = 0.001667 per ms
15:10:34.645 10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0 total = 0.000000 per ms
RESULT: 10 switches 10 tests min/max/avg/stdev = 0.00/23.02/5.27/6.78 responses/s
There are still many zeros, and even when the result is not zero it is very poor.
After the test shown above, I used the OF plugin edition for the same cbench test and got a similar result.
However on the contrary, I also run beacon, floodlight controller on the same VM. I run the same cbench command:
$ taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 100000 -t -i 50 -I 5 -s 10
the beacon controller’s result is as follows:
cbench: controller benchmarking tool
running in mode 'throughput'
connecting to controller at localhost:6633
faking 10 switches offset 1 :: 13 tests each; 10000 ms per test
with 100000 unique source MACs per switch
learning destination mac addresses before the test
starting test with 0 ms delay after features_reply
ignoring first 3 "warmup" and last 0 "cooldown" loops
connection delay of 50ms per 5 switch(es)
debugging info is off
15:19:20.681 10 switches: flows/sec: 1211932 1211936 1211136 1211135 1209537 1208740 1207942 1207144 1209540 1209538 total = 1209.773558 per ms
15:19:30.787 10 switches: flows/sec: 1332517 1334911 1334910 1333315 1332516 1334911 1334914 1334115 1331717 1331717 total = 1333.339899 per ms
15:19:40.890 10 switches: flows/sec: 1338902 1341296 1341296 1339701 1338902 1341296 1341296 1341299 1340500 1338103 total = 1340.170783 per ms
15:19:50.993 10 switches: flows/sec: 1338902 1341296 1341296 1339701 1338902 1341296 1341298 1340500 1338103 1338103 total = 1339.813624 per ms
15:20:01.098 10 switches: flows/sec: 1367644 1369243 1369242 1367644 1367646 1368444 1366845 1366845 1366845 1366845 total = 1367.583576 per ms
15:20:11.201 10 switches: flows/sec: 1370837 1373232 1373232 1371636 1370837 1373232 1373234 1372435 1370038 1370038 total = 1371.747665 per ms
15:20:21.304 10 switches: flows/sec: 1348482 1350877 1350877 1349281 1348482 1350877 1350879 1350080 1347683 1347683 total = 1349.390289 per ms
15:20:31.407 10 switches: flows/sec: 1325329 1325329 1325329 1325331 1323733 1325329 1325329 1325329 1325329 1324532 total = 1325.071747 per ms
15:20:41.511 10 switches: flows/sec: 1332515 1334909 1334909 1333314 1332515 1334909 1334912 1334113 1331716 1331716 total = 1333.284810 per ms
15:20:51.615 10 switches: flows/sec: 1370837 1373232 1373232 1371636 1370837 1373232 1373234 1372435 1370038 1370038 total = 1371.601603 per ms
15:21:01.718 10 switches: flows/sec: 1361257 1363651 1363651 1362056 1361257 1363651 1363651 1363651 1362855 1360458 total = 1362.610257 per ms
15:21:11.822 10 switches: flows/sec: 1357265 1357265 1357264 1357267 1355669 1357264 1357264 1357264 1357264 1356468 total = 1356.793931 per ms
15:21:21.924 10 switches: flows/sec: 1341297 1341297 1341296 1341299 1339701 1341296 1341296 1341296 1341299 1339701 total = 1340.961574 per ms
RESULT: 10 switches 10 tests min/max/avg/stdev = 1325071.75/1371747.66/1351885.91/15827.27 responses/s
The floodlight controller has similar performance.
So I think the reason for the difference is the controller itself, rather than cbench. The configuration seems correct, so maybe something can be done on the controller side to improve performance.
Does anyone have some suggestions or ideas for how to improve the performance?
best wishes!
rainmeter
On Apr 21, 2014, at 1:27 PM, Muthukumaran Kothandaraman <mkothand@...> wrote:
Hi Chris,
>> It's recommended you use a different VM for cbench and the controller
... as long as it is ensured that network latency / bandwidth constraints do not skew the measurements.
When running the controller and cbench on the same bare-metal multi-core machine over loopback (not in different VMs), CPU pinning can help minimize stomping and eliminate possible measurement skews due to the network:
- http://archive.openflow.org/wk/index.php/Controller_Performance_Comparisons
and
- Section 6 of http://yuba.stanford.edu/~derickso/docs/hotsdn15-erickson.pdf
Agreed, this is not a real-deployment scenario, but it gives a good baseline to compare against when deployed over a network. If performance degrades between the loopback and network environments, the first target of suspicion and troubleshooting would be none other than the network itself.
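The loopback-plus-pinning setup described above can be sketched as follows. The core assignments, the run.sh path, and the `pin` helper are illustrative assumptions, not something specified in this thread; the helper only prints the pinned command so the sketch has no side effects.

```shell
# Sketch: keep the controller and cbench on disjoint cores of one host so
# they do not stomp on each other (core lists and run.sh path are assumptions).
pin() {
    # Build a taskset-prefixed command line pinning to the given core list.
    cores="$1"
    shift
    echo "taskset -c $cores $*"
}

# Controller JVM on cores 0-5, cbench alone on core 6, traffic over loopback:
pin 0-5 ./run.sh -of13
pin 6 cbench -c localhost -p 6633 -t -s 10
```

Running `sh -c "$(pin 6 cbench -c localhost -p 6633 -t -s 10)"` would actually execute the pinned command; printing it first keeps the example inspectable.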
Regards
Muthukumaran (Muthu)
P(existence at t) = 1- 1/P(existence at t-1)
From: "Christopher O'SHEA" <christopher.o.shea@...>
To: huangxufu <huangxufu@...>
Cc: "controller-dev@..." <controller-dev@...>,
"integration-dev@..." <integration-dev@...>
Date: 04/20/2014 12:59 PM
Subject: Re: [controller-dev] opendaylight controller performance test problem
Sent by: controller-dev-bounces@...
Hi,
It's recommended you use a different vm for cbench and the controller.
This is because both will put heavy load on the CPU.
On 19 Apr 2014, at 10:13 pm, "huangxufu" <huangxufu@...> wrote:
Hello Vaishali,
Thanks for your suggestion, I will try using OpenFlowPlugin distribution to test performance again.
Because I run the ODL controller and cbench on the same VM, I use cbench with the following command:
$ cbench -t -s 10
All other options are left at their defaults.
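For comparison, the fuller invocation used elsewhere in this thread can be annotated flag by flag. The meanings below are inferred from cbench's own startup banner as echoed in this thread, not from cbench documentation, so treat them as assumptions:

```shell
# Annotated version of the tuned invocation used elsewhere in this thread;
# flag meanings are inferred from cbench's startup banner:
#
#   -c localhost -p 6633   controller address and OpenFlow listen port
#   -t                     throughput mode (latency mode when omitted)
#   -s 10                  fake 10 switches
#   -m 10000               10000 ms per test loop
#   -l 13                  13 test loops
#   -w 3                   ignore the first 3 loops as warmup
#   -M 100000              100000 unique source MACs per switch
#   -i 50 -I 5             connection delay of 50 ms per 5 switches
taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 100000 -t -i 50 -I 5 -s 10
```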
best wishes!
rainmeter
On Apr 20, 2014, at 1:53 AM, Vaishali Mithbaokar (vmithbao) <vmithbao@...> wrote:
<including integration-dev@... mailer >
As per the attached email, Chris recently put together a nightly performance test against the OpenFlowPlugin distribution (not the base controller directly). It doesn't have the issue where test results are 0 after some time.
It may be a good idea to share the cbench configuration you are using.
BTW, in the above setup Chris has the controller and cbench running in separate VMs.
Thanks,
Vaishali
From: huangxufu <huangxufu@...>
Date: Saturday, April 19, 2014 8:01 AM
To: Greg Hall <ghall@...>
Cc: "controller-dev@..." <controller-dev@...>
Subject: Re: [controller-dev] opendaylight controller performance test problem
Hello Greg :-)
I built the controller from the master branch about 3 days ago. I didn’t run the test on the Hydrogen release; I built recently, probably Base, because I use
git clone ssh://<username>@git.opendaylight.org:29418/controller.git
so I think it is Base :-)
So is it a bug? After running several cbench tests, the results are all 0. Or did I have the wrong configuration? How can I configure it to get better performance?
By the way, did you run a similar performance test? What was the result? Sorry for flooding you with questions :-)
best wishes!
rainmeter
On Apr 19, 2014, at 10:44 PM, Greg Hall <ghall@...> wrote:
Hello Perf tester :-)
What build/date was your controller?
Hydrogen release or a recent build?
Base or SP? A lot of issues fixed since Hydrogen.
Memory exhaustion is a prime suspect for your apparent hang.
If it’s a recent build then the max memory setting -Xmx is 1 GB as of a recent change. You’ll see a clear message in the console stating this at startup.
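If the 1 GB default is the bottleneck, the heap cap can be raised when starting the controller. This sketch uses the exact options that appear later in this thread; whether run.sh forwards them to the JVM may vary by build:

```shell
# Raise the JVM heap above the recent 1 GB default when starting the
# controller (options as used later in this thread; passthrough behavior
# may vary by build):
./run.sh -of13 -Xms1g -Xmx4g
```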
Greg
On Apr 19, 2014, at 6:40 AM, huangxufu <huangxufu@...> wrote:
Hello all:
This is my first time asking for help from the OpenDaylight controller dev group.
Recently I have been doing some research on OpenDaylight controller performance. I want to test the controller’s packet_in throughput and latency. I used a VM with 8 CPUs and 8 GB of memory for this experiment. At the beginning, I tested throughput with the cbench tool, but the result was poor, about 20K~30K responses per second. Has anyone run a similar OpenDaylight performance test with cbench? What was the result?
Another issue is that after using cbench to test performance several times, the cbench result is always 0 and the OSGi console becomes very slow, or even stops responding. So what is wrong with OpenDaylight? Is this a bug, or due to my wrong configuration (in fact I didn’t configure anything; I just ran the run.sh file to start the controller)?
Thanks to anyone who can offer suggestions or explanations for these two problems.
Best wishes to all :)
rainmeter
_______________________________________________
controller-dev mailing list
controller-dev@...
https://lists.opendaylight.org/mailman/listinfo/controller-dev
1 attachment: [integration-dev] Automation CBench test at Ericsson lab.eml (9K)
Jan Medved (jmedved) <jmedved@...>
I am running the tests as we speak, will join this afternoon.
/Jan
Hi Jan,
We have this and other performance results collected by Chris O’Shea; it would be nice if you could join the performance discussion this afternoon at 5:30 PM PST.
BR/Luis
Begin forwarded message:
Subject:
Re: [controller-dev] opendaylight controller performance test problem
Date: April 21, 2014 at 1:16:10 AM PDT
Hello all,
Thanks for all your kind reminders.
First, I agree with Muthukumaran. I had read these two references before, so I chose to run cbench and the controller on the same VM.
1. Download the base distribution artifact and the OF plugin reactive forwarding bundle.
2. Delete the two AD-SAL bundles (simple forwarding and ARP handler) that interfere with MD-SAL cbench measurements.
3. Add the OF plugin reactive forwarding bundle to opendaylight/plugins.
4. Set the controller log level to ERROR.
5. Start the controller with the recommended options: run.sh -of13 -Xms1g -Xmx4g
6. Turn on the data store drop test; from the controller’s OSGi console type: > dropAllPackets on
7. Start cbench with this command:
$ taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 100000 -t -i 50 -I 5 -s 10
Then I got the following result:
15:08:28.696 10 switches: flows/sec: 52 99 79 40 55 46 63 31 38 106 total = 0.060895 per ms
15:08:38.816 10 switches: flows/sec: 43 16 35 45 52 24 25 39 46 28 total = 0.035231 per ms
15:08:49.534 10 switches: flows/sec: 38 25 24 20 12 20 4 2 29 26 total = 0.018836 per ms
15:09:00.625 10 switches: flows/sec: 31 3 49 23 26 42 20 25 15 19 total = 0.023019 per ms
15:09:11.015 10 switches: flows/sec: 15 24 9 12 35 14 5 1 5 6 total = 0.012245 per ms
15:09:21.322 10 switches: flows/sec: 0 10 0 0 0 0 0 13 8 4 total = 0.003429 per ms
15:09:32.313 10 switches: flows/sec: 0 0 13 0 9 1 5 0 12 0 total = 0.003673 per ms
15:09:42.416 10 switches: flows/sec: 0 0 0 0 8 0 0 7 0 0 total = 0.001500 per ms
15:09:53.393 10 switches: flows/sec: 0 0 25 0 8 0 9 0 8 0 total = 0.004597 per ms
15:10:03.503 10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0 total = 0.000000 per ms
15:10:14.239 10 switches: flows/sec: 0 0 0 0 7 0 6 14 0 0 total = 0.002539 per ms
15:10:24.535 10 switches: flows/sec: 0 0 9 0 0 0 0 0 7 1 total = 0.001667 per ms
15:10:34.645 10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0 total = 0.000000 per ms
RESULT: 10 switches 10 tests min/max/avg/stdev = 0.00/23.02/5.27/6.78 responses/s
There are still many 0s, and even when the result is not 0 it is very poor.
After the test shown above, I used the OF plugin edition for the same cbench test, getting similar results.
huangxufu <huangxufu@...>
Hi Luis,
Here is the result after configuring the OSGi setting: dropAllPacketsRpc on.
Cbench throughput mode:
$ taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 1000 -t -i 50 -I 5 -s 10
cbench: controller benchmarking tool
running in mode 'throughput'
connecting to controller at localhost:6633
faking 10 switches offset 1 :: 13 tests each; 10000 ms per test
with 1000 unique source MACs per switch
learning destination mac addresses before the test
starting test with 0 ms delay after features_reply
ignoring first 3 "warmup" and last 0 "cooldown" loops
connection delay of 50ms per 5 switch(es)
debugging info is off
15:44:38.152 10 switches: flows/sec: 6752 4630 4990 6210 6329 4586 2144 6309 4779 6014 total = 5.274107 per ms
15:44:48.260 10 switches: flows/sec: 8207 7238 7490 8042 6928 6443 9771 8228 8264 7496 total = 7.804764 per ms
15:44:58.388 10 switches: flows/sec: 3471 2911 3801 3601 6533 4855 19294 17180 6161 4640 total = 7.224830 per ms
15:45:09.330 10 switches: flows/sec: 7889 8882 6725 7456 8531 7993 5338 7429 7169 7847 total = 6.941944 per ms
15:45:19.598 10 switches: flows/sec: 9404 11875 9218 9654 9861 11810 12022 10778 10845 10495 total = 10.421092 per ms
15:45:30.681 10 switches: flows/sec: 2484 2534 5084 5739 3457 2673 3739 5946 5076 2096 total = 3.535304 per ms
15:45:41.048 10 switches: flows/sec: 2614 1613 0 0 3207 2980 0 0 0 2735 total = 1.280721 per ms
15:45:51.570 10 switches: flows/sec: 0 0 3170 2477 0 0 3721 628 1686 264 total = 1.146252 per ms
15:46:01.915 10 switches: flows/sec: 2192 3282 0 0 2042 3069 0 1031 0 0 total = 1.133841 per ms
15:46:12.671 10 switches: flows/sec: 0 0 2370 1952 0 0 2881 621 2476 1503 total = 1.107656 per ms
15:46:22.823 10 switches: flows/sec: 1710 1526 0 0 2381 2269 0 2350 0 0 total = 1.018325 per ms
15:46:33.670 10 switches: flows/sec: 0 0 2506 0 0 0 0 0 2008 1113 total = 0.523598 per ms
15:46:44.178 10 switches: flows/sec: 535 0 0 3428 0 0 2529 1836 263 0 total = 0.825439 per ms
RESULT: 10 switches 10 tests min/max/avg/stdev = 523.60/10421.09/2793.42/3141.90 responses/s
And the latency mode:
cbench: controller benchmarking tool
running in mode 'latency'
connecting to controller at localhost:6633
faking 10 switches offset 1 :: 13 tests each; 10000 ms per test
with 1000 unique source MACs per switch
learning destination mac addresses before the test
starting test with 0 ms delay after features_reply
ignoring first 3 "warmup" and last 0 "cooldown" loops
connection delay of 50ms per 5 switch(es)
debugging info is off
15:48:18.983 10 switches: flows/sec: 950 916 839 858 1675 1402 914 652 670 957 total = 0.983299 per ms
15:48:29.084 10 switches: flows/sec: 1034 1269 1000 1004 817 823 1045 1080 754 1016 total = 0.984198 per ms
15:48:39.184 10 switches: flows/sec: 1163 580 1293 613 1213 910 1146 879 1120 882 total = 0.979898 per ms
15:48:49.284 10 switches: flows/sec: 1154 415 1596 736 524 172 838 1037 1676 1315 total = 0.946299 per ms
15:48:59.385 10 switches: flows/sec: 832 1199 1336 881 730 1131 1303 469 254 1147 total = 0.928200 per ms
15:49:09.485 10 switches: flows/sec: 813 867 152 815 974 1033 991 1646 919 830 total = 0.903999 per ms
15:49:19.585 10 switches: flows/sec: 155 906 1078 0 992 1424 1119 966 1211 1064 total = 0.891499 per ms
15:49:29.686 10 switches: flows/sec: 807 1046 881 815 651 686 1424 804 700 839 total = 0.865300 per ms
15:49:39.786 10 switches: flows/sec: 702 1091 1283 708 899 999 751 1135 863 49 total = 0.848000 per ms
15:49:49.886 10 switches: flows/sec: 566 1168 688 729 1053 894 796 969 687 697 total = 0.824700 per ms
15:49:59.986 10 switches: flows/sec: 694 800 816 1252 650 706 760 882 974 556 total = 0.809000 per ms
15:50:10.087 10 switches: flows/sec: 718 930 639 899 674 690 1101 701 827 711 total = 0.788999 per ms
15:50:20.187 10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0 total = 0.000000 per ms
RESULT: 10 switches 10 tests min/max/avg/stdev = 0.00/946.30/780.60/264.70 responses/s
Both tests are based on the base distribution, as before.
The results are much better than the last ones, but still not good enough.
I find that when cbench runs in throughput mode, the results get worse after a while, even dropping to 0. So I think something may be blocking on the controller.
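One way to probe the blocking hypothesis is to watch the controller JVM's garbage collection while cbench runs; `jstat` ships with the JDK. The `pgrep` pattern below is an assumption about how the controller process appears, not something confirmed in this thread:

```shell
# Sample GC utilization of the controller JVM every 5 seconds while cbench
# runs; old-generation occupancy and full-GC counts climbing as the cbench
# numbers fall would support the memory-exhaustion suspicion raised earlier.
CTRL_PID=$(pgrep -f 'java.*opendaylight' | head -n 1)   # assumed process pattern
jstat -gcutil "$CTRL_PID" 5000
```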
Best Regards :) Huang
Hi, can you please rerun your test with the OSGi setting: dropAllPacketsRpc on
This should be in theory
Hi Huang,
CBench throughput mode does not work well with the controller's new OF plugin due to a memory issue currently being investigated by the OF plugin project.
For latency mode you are getting a little less than we get, if you look at Chris's mail from yesterday. Also, in latency mode we perform the same as Floodlight according to the OF plugin devs' tests.
BR/Luis
Hi Luis, here is the result after configuring the OSGi setting dropAllPacketsRpc on.

Cbench throughput mode:

$ taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 1000 -t -i 50 -I 5 -s 10
cbench: controller benchmarking tool
running in mode 'throughput'
connecting to controller at localhost:6633
faking 10 switches offset 1 :: 13 tests each; 10000 ms per test
with 1000 unique source MACs per switch
learning destination mac addresses before the test
starting test with 0 ms delay after features_reply
ignoring first 3 "warmup" and last 0 "cooldown" loops
connection delay of 50ms per 5 switch(es)
debugging info is off
15:44:38.152 10 switches: flows/sec: 6752 4630 4990 6210 6329 4586 2144 6309 4779 6014 total = 5.274107 per ms
15:44:48.260 10 switches: flows/sec: 8207 7238 7490 8042 6928 6443 9771 8228 8264 7496 total = 7.804764 per ms
15:44:58.388 10 switches: flows/sec: 3471 2911 3801 3601 6533 4855 19294 17180 6161 4640 total = 7.224830 per ms
15:45:09.330 10 switches: flows/sec: 7889 8882 6725 7456 8531 7993 5338 7429 7169 7847 total = 6.941944 per ms
15:45:19.598 10 switches: flows/sec: 9404 11875 9218 9654 9861 11810 12022 10778 10845 10495 total = 10.421092 per ms
15:45:30.681 10 switches: flows/sec: 2484 2534 5084 5739 3457 2673 3739 5946 5076 2096 total = 3.535304 per ms
15:45:41.048 10 switches: flows/sec: 2614 1613 0 0 3207 2980 0 0 0 2735 total = 1.280721 per ms
15:45:51.570 10 switches: flows/sec: 0 0 3170 2477 0 0 3721 628 1686 264 total = 1.146252 per ms
15:46:01.915 10 switches: flows/sec: 2192 3282 0 0 2042 3069 0 1031 0 0 total = 1.133841 per ms
15:46:12.671 10 switches: flows/sec: 0 0 2370 1952 0 0 2881 621 2476 1503 total = 1.107656 per ms
15:46:22.823 10 switches: flows/sec: 1710 1526 0 0 2381 2269 0 2350 0 0 total = 1.018325 per ms
15:46:33.670 10 switches: flows/sec: 0 0 2506 0 0 0 0 0 2008 1113 total = 0.523598 per ms
15:46:44.178 10 switches: flows/sec: 535 0 0 3428 0 0 2529 1836 263 0 total = 0.825439 per ms
RESULT: 10 switches 10 tests min/max/avg/stdev = 523.60/10421.09/2793.42/3141.90 responses/s
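As a cross-check on that RESULT line: the min/max/avg/stdev are taken over the per-loop totals, excluding the 3 warmup loops, and the stdev matches the population form. A small Python sketch (an editorial illustration, not part of cbench) reproducing the figure from the totals above:

```python
import statistics

# Per-loop totals (responses/ms) from the 13 throughput loops above.
totals_per_ms = [5.274107, 7.804764, 7.224830, 6.941944, 10.421092,
                 3.535304, 1.280721, 1.146252, 1.133841, 1.107656,
                 1.018325, 0.523598, 0.825439]

warmup = 3  # cbench ignored the first 3 "warmup" loops
rps = [t * 1000 for t in totals_per_ms[warmup:]]  # convert to responses/s

print("min/max/avg/stdev = %.2f/%.2f/%.2f/%.2f responses/s"
      % (min(rps), max(rps), statistics.mean(rps), statistics.pstdev(rps)))
# -> min/max/avg/stdev = 523.60/10421.09/2793.42/3141.90 responses/s
```

This matches the RESULT line exactly, which confirms the warmup loops are excluded from the summary statistics.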
And the latency mode:

cbench: controller benchmarking tool
running in mode 'latency'
connecting to controller at localhost:6633
faking 10 switches offset 1 :: 13 tests each; 10000 ms per test
with 1000 unique source MACs per switch
learning destination mac addresses before the test
starting test with 0 ms delay after features_reply
ignoring first 3 "warmup" and last 0 "cooldown" loops
connection delay of 50ms per 5 switch(es)
debugging info is off
15:48:18.983 10 switches: flows/sec: 950 916 839 858 1675 1402 914 652 670 957 total = 0.983299 per ms
15:48:29.084 10 switches: flows/sec: 1034 1269 1000 1004 817 823 1045 1080 754 1016 total = 0.984198 per ms
15:48:39.184 10 switches: flows/sec: 1163 580 1293 613 1213 910 1146 879 1120 882 total = 0.979898 per ms
15:48:49.284 10 switches: flows/sec: 1154 415 1596 736 524 172 838 1037 1676 1315 total = 0.946299 per ms
15:48:59.385 10 switches: flows/sec: 832 1199 1336 881 730 1131 1303 469 254 1147 total = 0.928200 per ms
15:49:09.485 10 switches: flows/sec: 813 867 152 815 974 1033 991 1646 919 830 total = 0.903999 per ms
15:49:19.585 10 switches: flows/sec: 155 906 1078 0 992 1424 1119 966 1211 1064 total = 0.891499 per ms
15:49:29.686 10 switches: flows/sec: 807 1046 881 815 651 686 1424 804 700 839 total = 0.865300 per ms
15:49:39.786 10 switches: flows/sec: 702 1091 1283 708 899 999 751 1135 863 49 total = 0.848000 per ms
15:49:49.886 10 switches: flows/sec: 566 1168 688 729 1053 894 796 969 687 697 total = 0.824700 per ms
15:49:59.986 10 switches: flows/sec: 694 800 816 1252 650 706 760 882 974 556 total = 0.809000 per ms
15:50:10.087 10 switches: flows/sec: 718 930 639 899 674 690 1101 701 827 711 total = 0.788999 per ms
15:50:20.187 10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0 total = 0.000000 per ms
RESULT: 10 switches 10 tests min/max/avg/stdev = 0.00/946.30/780.60/264.70 responses/s
Both tests are based on the base distribution, as before.
The results are much better than last time, but still not good enough.
I find that when cbench runs in throughput mode, the results degrade after a while, even dropping to 0. So I think something may be blocking on the controller side.
Best Regards :) Huang

Hi, can you please rerun your test with the OSGi setting: dropAllPacketsRpc on
This should be in theory

Hello all,
Thanks for all your kind reminders. First, I agree with Muthukumaran. I had read those two references before, so I chose to run cbench and the controller on the same VM.

Here is my procedure:
1. Download the base distribution artifact and the OF plugin reactive forwarding bundle.
2. Delete the two AD-SAL bundles (simple forwarding and arp handler) that interfere with MD-SAL cbench measurements.
3. Add the OF plugin reactive forwarding bundle to opendaylight/plugins.
4. Set the controller log level to ERROR.
5. Start the controller with the recommended options: run.sh -of13 -Xms1g -Xmx4g
6. Turn on the data store drop test; from the controller’s OSGi console type: > dropAllPackets on
7. Start cbench with this command:
$ taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 100000 -t -i 50 -I 5 -s 10

Then I got the following result:
15:08:28.696 10 switches: flows/sec: 52 99 79 40 55 46 63 31 38 106 total = 0.060895 per ms
15:08:38.816 10 switches: flows/sec: 43 16 35 45 52 24 25 39 46 28 total = 0.035231 per ms
15:08:49.534 10 switches: flows/sec: 38 25 24 20 12 20 4 2 29 26 total = 0.018836 per ms
15:09:00.625 10 switches: flows/sec: 31 3 49 23 26 42 20 25 15 19 total = 0.023019 per ms
15:09:11.015 10 switches: flows/sec: 15 24 9 12 35 14 5 1 5 6 total = 0.012245 per ms
15:09:21.322 10 switches: flows/sec: 0 10 0 0 0 0 0 13 8 4 total = 0.003429 per ms
15:09:32.313 10 switches: flows/sec: 0 0 13 0 9 1 5 0 12 0 total = 0.003673 per ms
15:09:42.416 10 switches: flows/sec: 0 0 0 0 8 0 0 7 0 0 total = 0.001500 per ms
15:09:53.393 10 switches: flows/sec: 0 0 25 0 8 0 9 0 8 0 total = 0.004597 per ms
15:10:03.503 10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0 total = 0.000000 per ms
15:10:14.239 10 switches: flows/sec: 0 0 0 0 7 0 6 14 0 0 total = 0.002539 per ms
15:10:24.535 10 switches: flows/sec: 0 0 9 0 0 0 0 0 7 1 total = 0.001667 per ms
15:10:34.645 10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0 total = 0.000000 per ms
RESULT: 10 switches 10 tests min/max/avg/stdev = 0.00/23.02/5.27/6.78 responses/s

There are still many 0s, and even when the result is not 0 it is very poor.
After the test shown above, I ran the same cbench test against the OF plugin edition and got similar results.
For comparison, I also ran the Beacon and Floodlight controllers on the same VM, with the same cbench command:
$ taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 100000 -t -i 50 -I 5 -s 10

The Beacon controller’s result is as follows:
cbench: controller benchmarking tool
running in mode 'throughput'
connecting to controller at localhost:6633
faking 10 switches offset 1 :: 13 tests each; 10000 ms per test
with 100000 unique source MACs per switch
learning destination mac addresses before the test
starting test with 0 ms delay after features_reply
ignoring first 3 "warmup" and last 0 "cooldown" loops
connection delay of 50ms per 5 switch(es)
debugging info is off
15:19:20.681 10 switches: flows/sec: 1211932 1211936 1211136 1211135 1209537 1208740 1207942 1207144 1209540 1209538 total = 1209.773558 per ms
15:19:30.787 10 switches: flows/sec: 1332517 1334911 1334910 1333315 1332516 1334911 1334914 1334115 1331717 1331717 total = 1333.339899 per ms
15:19:40.890 10 switches: flows/sec: 1338902 1341296 1341296 1339701 1338902 1341296 1341296 1341299 1340500 1338103 total = 1340.170783 per ms
15:19:50.993 10 switches: flows/sec: 1338902 1341296 1341296 1339701 1338902 1341296 1341298 1340500 1338103 1338103 total = 1339.813624 per ms
15:20:01.098 10 switches: flows/sec: 1367644 1369243 1369242 1367644 1367646 1368444 1366845 1366845 1366845 1366845 total = 1367.583576 per ms
15:20:11.201 10 switches: flows/sec: 1370837 1373232 1373232 1371636 1370837 1373232 1373234 1372435 1370038 1370038 total = 1371.747665 per ms
15:20:21.304 10 switches: flows/sec: 1348482 1350877 1350877 1349281 1348482 1350877 1350879 1350080 1347683 1347683 total = 1349.390289 per ms
15:20:31.407 10 switches: flows/sec: 1325329 1325329 1325329 1325331 1323733 1325329 1325329 1325329 1325329 1324532 total = 1325.071747 per ms
15:20:41.511 10 switches: flows/sec: 1332515 1334909 1334909 1333314 1332515 1334909 1334912 1334113 1331716 1331716 total = 1333.284810 per ms
15:20:51.615 10 switches: flows/sec: 1370837 1373232 1373232 1371636 1370837 1373232 1373234 1372435 1370038 1370038 total = 1371.601603 per ms
15:21:01.718 10 switches: flows/sec: 1361257 1363651 1363651 1362056 1361257 1363651 1363651 1363651 1362855 1360458 total = 1362.610257 per ms
15:21:11.822 10 switches: flows/sec: 1357265 1357265 1357264 1357267 1355669 1357264 1357264 1357264 1357264 1356468 total = 1356.793931 per ms
15:21:21.924 10 switches: flows/sec: 1341297 1341297 1341296 1341299 1339701 1341296 1341296 1341296 1341299 1339701 total = 1340.961574 per ms
RESULT: 10 switches 10 tests min/max/avg/stdev = 1325071.75/1371747.66/1351885.91/15827.27 responses/s
The Floodlight controller has similar performance.
So I think the reason for the difference is the controller itself, rather than cbench. The configuration seems right, so maybe something can be done on the controller side to improve performance.
Does anyone have suggestions or ideas for how to improve it?
best wishes!
rainmeter

On Apr 21, 2014, at 1:27 PM, Muthukumaran Kothandaraman <mkothand@...> wrote:

Hi Chris,
>> It's recommended you use a different vm for cbench and the controller
.. as long as it is ensured that network latency / bandwidth constraints do not skew the measurements.
While running the controller and cbench on the same bare-metal multi-core machine via loopback (not different VMs), CPU-pinning can help minimize stomping and eliminate possible measurement skews due to the network:
- http://archive.openflow.org/wk/index.php/Controller_Performance_Comparisons
- Section 6 of http://yuba.stanford.edu/~derickso/docs/hotsdn15-erickson.pdf
Agreed, this is not a real-deployment scenario, but it gives a good baseline to compare against when deploying over a network. If performance degrades between the loopback and network environments, the first target of suspicion and troubleshooting would be none other than the network itself.
Regards
Muthukumaran (Muthu)
P(existence at t) = 1- 1/P(existence at t-1)
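Muthu's loopback-plus-pinning setup can be sketched roughly like this (an editorial sketch: the core layout and startup wait are illustrative, and the run.sh flags are the ones recommended earlier in the thread):

```shell
# Run controller and cbench on one bare-metal box over loopback,
# pinned to disjoint cores so they do not stomp on each other.
# Illustrative layout: controller on cores 2-7, cbench on core 0.

taskset -c 2-7 ./run.sh -of13 -Xms1g -Xmx4g &

# Give the controller time to finish bundle startup before benchmarking.
sleep 60

taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 \
  -M 100000 -t -i 50 -I 5 -s 10
```

With both processes on one host, any later slowdown seen over a real network points at the network rather than the controller.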
From: "Christopher O'SHEA" <christopher.o.shea@...>
To: huangxufu <huangxufu@...>
Cc: "controller-dev@..." <controller-dev@...>, "integration-dev@..." <integration-dev@...>
Date: 04/20/2014 12:59 PM
Subject: Re: [controller-dev] opendaylight controller performance test problem
Sent by: controller-dev-bounces@...
Hi,
It's recommended you use a different VM for cbench and the controller, because both will put heavy load on the CPU.
On 19 Apr 2014, at 10:13 pm, "huangxufu" <huangxufu@...>
wrote:
Hello Vaishali,
Thanks for your suggestion, I will try using the OpenFlowPlugin distribution to test performance again.
Because I run the ODL controller and cbench on the same VM, I use cbench with the following command:
$ cbench -t -s 10
All other options are defaults.
best wishes!
rainmeter

On Apr 20, 2014, at 1:53 AM, Vaishali Mithbaokar (vmithbao) <vmithbao@...> wrote:
<including integration-dev@... mailer>
As per the attached email, Chris recently put together a nightly performance test against the OpenFlowPlugin distribution (not the base controller directly). It doesn't have the issue where test results drop to 0 after some time.
It may be a good idea to share the cbench configuration you are using.
BTW, in the setup above Chris has the controller and cbench running in separate VMs.
Thanks,
Vaishali
From: huangxufu <huangxufu@...>
Date: Saturday, April 19, 2014 8:01 AM
To: Greg Hall <ghall@...>
Cc: "controller-dev@..." <controller-dev@...>
Subject: Re: [controller-dev] opendaylight controller performance test problem
huangxufu <huangxufu@...>
Hi Luis,
I agree with you. When I run cbench in throughput mode, I see memory errors in the log; but in latency mode everything is fine and the performance is even better than Beacon. Now I am going to find out why this happens. Do you have any suggestions on how to quickly find the root cause of this problem?
Best Regards :) Huang
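One quick way to confirm the memory suspicion is to watch the controller's JVM with the standard JDK tools while cbench runs (a sketch assuming jps/jstat/jmap are on the PATH; the process-name pattern is a guess and is environment-specific):

```shell
# Find the controller JVM's pid (adjust the pattern to your launcher).
PID=$(jps -l | awk '/equinox|launcher/ {print $1}')

# Sample GC utilization every 5 s while cbench runs; an old generation (O)
# pinned near 100% with a climbing full-GC (FGC) count indicates heap exhaustion.
jstat -gcutil "$PID" 5000

# Once throughput drops to 0, a live heap histogram shows what is piling up.
jmap -histo:live "$PID" | head -n 30
```

If the heap is exhausted, the histogram's top classes usually point at the leaking component.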
Hi Huang,
I do not know much more about this issue, but since you want to help with it I am cc'ing the openflowplugin-dev list so they can give you more information.
BR/Luis
toggle quoted message
Show quoted text
Hi Luis,
I agree with you. when i test cbench at throughput mode, i got log about memory error. But when i test at latency mode, everything is ok and the performance even better than beacon. Now i am going to find why this happen. Do you have some suggestions about how to quickly find the root of the this problem?
Best Regards:) Huang Hi Huang,
CBench throughput mode does not work well in controller with new OF plugin due to a memory issue currently being investigated by OF plugin project.
For latency mode you are getting a little below than we get if you see Chris mail from yesterday. Also for latency mode we perform same as Floodlight according to OF plugin devs test.
BR/Luis
Hi Luis,
Here is the result with the OSGi setting dropAllPacketsRpc turned on.

Cbench throughput mode:
$ taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 1000 -t -i 50 -I 5 -s 10
cbench: controller benchmarking tool
running in mode 'throughput'
connecting to controller at localhost:6633
faking 10 switches offset 1 :: 13 tests each; 10000 ms per test
with 1000 unique source MACs per switch
learning destination mac addresses before the test
starting test with 0 ms delay after features_reply
ignoring first 3 "warmup" and last 0 "cooldown" loops
connection delay of 50ms per 5 switch(es)
debugging info is off
15:44:38.152 10 switches: flows/sec: 6752 4630 4990 6210 6329 4586 2144 6309 4779 6014 total = 5.274107 per ms
15:44:48.260 10 switches: flows/sec: 8207 7238 7490 8042 6928 6443 9771 8228 8264 7496 total = 7.804764 per ms
15:44:58.388 10 switches: flows/sec: 3471 2911 3801 3601 6533 4855 19294 17180 6161 4640 total = 7.224830 per ms
15:45:09.330 10 switches: flows/sec: 7889 8882 6725 7456 8531 7993 5338 7429 7169 7847 total = 6.941944 per ms
15:45:19.598 10 switches: flows/sec: 9404 11875 9218 9654 9861 11810 12022 10778 10845 10495 total = 10.421092 per ms
15:45:30.681 10 switches: flows/sec: 2484 2534 5084 5739 3457 2673 3739 5946 5076 2096 total = 3.535304 per ms
15:45:41.048 10 switches: flows/sec: 2614 1613 0 0 3207 2980 0 0 0 2735 total = 1.280721 per ms
15:45:51.570 10 switches: flows/sec: 0 0 3170 2477 0 0 3721 628 1686 264 total = 1.146252 per ms
15:46:01.915 10 switches: flows/sec: 2192 3282 0 0 2042 3069 0 1031 0 0 total = 1.133841 per ms
15:46:12.671 10 switches: flows/sec: 0 0 2370 1952 0 0 2881 621 2476 1503 total = 1.107656 per ms
15:46:22.823 10 switches: flows/sec: 1710 1526 0 0 2381 2269 0 2350 0 0 total = 1.018325 per ms
15:46:33.670 10 switches: flows/sec: 0 0 2506 0 0 0 0 0 2008 1113 total = 0.523598 per ms
15:46:44.178 10 switches: flows/sec: 535 0 0 3428 0 0 2529 1836 263 0 total = 0.825439 per ms
RESULT: 10 switches 10 tests
min/max/avg/stdev = 523.60/10421.09/2793.42/3141.90 responses/s
and the latency mode:
cbench: controller benchmarking tool
running in mode 'latency'
connecting to controller at localhost:6633
faking 10 switches offset 1 :: 13 tests each; 10000 ms per test
with 1000 unique source MACs per switch
learning destination mac addresses before the test
starting test with 0 ms delay after features_reply
ignoring first 3 "warmup" and last 0 "cooldown" loops
connection delay of 50ms per 5 switch(es)
debugging info is off
15:48:18.983 10 switches: flows/sec: 950 916 839 858 1675 1402 914 652 670 957 total = 0.983299 per ms
15:48:29.084 10 switches: flows/sec: 1034 1269 1000 1004 817 823 1045 1080 754 1016 total = 0.984198 per ms
15:48:39.184 10 switches: flows/sec: 1163 580 1293 613 1213 910 1146 879 1120 882 total = 0.979898 per ms
15:48:49.284 10 switches: flows/sec: 1154 415 1596 736 524 172 838 1037 1676 1315 total = 0.946299 per ms
15:48:59.385 10 switches: flows/sec: 832 1199 1336 881 730 1131 1303 469 254 1147 total = 0.928200 per ms
15:49:09.485 10 switches: flows/sec: 813 867 152 815 974 1033 991 1646 919 830 total = 0.903999 per ms
15:49:19.585 10 switches: flows/sec: 155 906 1078 0 992 1424 1119 966 1211 1064 total = 0.891499 per ms
15:49:29.686 10 switches: flows/sec: 807 1046 881 815 651 686 1424 804 700 839 total = 0.865300 per ms
15:49:39.786 10 switches: flows/sec: 702 1091 1283 708 899 999 751 1135 863 49 total = 0.848000 per ms
15:49:49.886 10 switches: flows/sec: 566 1168 688 729 1053 894 796 969 687 697 total = 0.824700 per ms
15:49:59.986 10 switches: flows/sec: 694 800 816 1252 650 706 760 882 974 556 total = 0.809000 per ms
15:50:10.087 10 switches: flows/sec: 718 930 639 899 674 690 1101 701 827 711 total = 0.788999 per ms
15:50:20.187 10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0 total = 0.000000 per ms
RESULT: 10 switches 10 tests
min/max/avg/stdev = 0.00/946.30/780.60/264.70 responses/s
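For readers following along, the cbench invocation used in both runs above can be unpacked flag by flag. This is a hedged annotation based on cbench's usage text; all values are the ones from this thread.

```shell
# The cbench command used above, annotated (flag meanings per the cbench
# usage text; all values are the ones from this thread):
#
#   taskset -c 0 \            # pin cbench to CPU core 0
#     cbench -c localhost \   # controller host
#       -p 6633 \             # OpenFlow listen port
#       -m 10000 \            # 10000 ms per test loop
#       -l 13 \               # 13 loops in total
#       -w 3 \                # ignore the first 3 loops as warmup
#       -M 1000 \             # 1000 unique source MACs per switch
#       -t \                  # throughput mode (omit -t for latency mode)
#       -i 50 -I 5 \          # 50 ms connection delay per group of 5 switches
#       -s 10                 # emulate 10 switches
#
# Sanity check on the reported run: 13 loops minus 3 warmup loops leaves
# 10 measured tests, matching "10 switches 10 tests" in the RESULT line.
loops=13; warmup=3; ms_per_test=10000
echo "measured loops: $((loops - warmup))"            # → measured loops: 10
echo "approx runtime: $((loops * ms_per_test / 1000)) s"  # → approx runtime: 130 s
```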
Both tests are based on the base distribution, as before.
The results are much better than last time, but still not good enough.
I find that when cbench runs in throughput mode, after a while the results get worse, even dropping to 0. So I think something may be blocking on the controller.
Best Regards :)
Huang

Hi, can you please rerun your test with the OSGi setting: dropAllPacketsRpc on
This should be in theory

Hello all,
Thanks for all your kind reminders. First, I agree with Muthukumaran. I had read these two references before, so I chose to run cbench and the controller on the same VM.
1. Download the base distribution artifact and the OF plugin reactive forwarding bundle.
2. Delete the two AD-SAL bundles, simple forwarding and arp handler, that interfere with MD-SAL cbench measurements.
3. Add the OF plugin reactive forwarding bundle to opendaylight/plugins.
4. Set the controller log level to ERROR.
5. Start the controller with the recommended options: run.sh -of13 -Xms1g -Xmx4g
6. Turn on the data store drop test; from the controller's OSGi console type: > dropAllPackets on
7. Then I started cbench with this command:
$ taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 100000 -t -i 50 -I 5 -s 10
and I got the following result:
15:08:28.696 10 switches: flows/sec: 52 99 79 40 55 46 63 31 38 106 total = 0.060895 per ms
15:08:38.816 10 switches: flows/sec: 43 16 35 45 52 24 25 39 46 28 total = 0.035231 per ms
15:08:49.534 10 switches: flows/sec: 38 25 24 20 12 20 4 2 29 26 total = 0.018836 per ms
15:09:00.625 10 switches: flows/sec: 31 3 49 23 26 42 20 25 15 19 total = 0.023019 per ms
15:09:11.015 10 switches: flows/sec: 15 24 9 12 35 14 5 1 5 6 total = 0.012245 per ms
15:09:21.322 10 switches: flows/sec: 0 10 0 0 0 0 0 13 8 4 total = 0.003429 per ms
15:09:32.313 10 switches: flows/sec: 0 0 13 0 9 1 5 0 12 0 total = 0.003673 per ms
15:09:42.416 10 switches: flows/sec: 0 0 0 0 8 0 0 7 0 0 total = 0.001500 per ms
15:09:53.393 10 switches: flows/sec: 0 0 25 0 8 0 9 0 8 0 total = 0.004597 per ms
15:10:03.503 10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0 total = 0.000000 per ms
15:10:14.239 10 switches: flows/sec: 0 0 0 0 7 0 6 14 0 0 total = 0.002539 per ms
15:10:24.535 10 switches: flows/sec: 0 0 9 0 0 0 0 0 7 1 total = 0.001667 per ms
15:10:34.645 10 switches: flows/sec: 0 0 0 0 0 0 0 0 0 0 total = 0.000000 per ms
RESULT: 10 switches 10 tests min/max/avg/stdev = 0.00/23.02/5.27/6.78 responses/s
There are still many 0s, and even when the result is not 0 it is very poor.
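Steps 2-3 of the setup above can be sketched as shell commands. The jar filenames here are illustrative guesses (check the exact names in your distribution), and the sketch runs against a scratch copy of the plugins/ directory rather than a real installation.

```shell
# Hedged sketch of steps 2-3 above: remove the two AD-SAL bundles that
# interfere with the MD-SAL cbench measurement, then drop in the OF
# plugin reactive forwarding bundle. Jar filenames are illustrative
# guesses; check the exact names in your distribution. Demonstrated
# against a scratch copy of the plugins/ directory:
plugins=$(mktemp -d)
touch "$plugins/org.opendaylight.controller.samples.simpleforwarding-0.4.1.jar"
touch "$plugins/org.opendaylight.controller.arphandler-0.5.1.jar"
touch "$plugins/org.opendaylight.controller.switchmanager-0.7.0.jar"

# Step 2: delete the interfering AD-SAL bundles.
rm -f "$plugins"/*simpleforwarding*.jar "$plugins"/*arphandler*.jar

# Step 3: add the reactive forwarding bundle (name/source is hypothetical).
touch "$plugins/org.opendaylight.openflowplugin.reactive-forwarding-0.0.1.jar"

ls "$plugins"   # the two AD-SAL bundles are gone; the new bundle is present
```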
After the test shown above, I used the OF plugin edition for the same cbench test and got a similar result.
However, I also ran the beacon and floodlight controllers on the same VM, with the same cbench command:
$ taskset -c 0 cbench -c localhost -p 6633 -m 10000 -l 13 -w 3 -M 100000 -t -i 50 -I 5 -s 10
The beacon controller's result is as follows:
cbench: controller benchmarking tool
running in mode 'throughput'
connecting to controller at localhost:6633
faking 10 switches offset 1 :: 13 tests each; 10000 ms per test
with 100000 unique source MACs per switch
learning destination mac addresses before the test
starting test with 0 ms delay after features_reply
ignoring first 3 "warmup" and last 0 "cooldown" loops
connection delay of 50ms per 5 switch(es)
debugging info is off
15:19:20.681 10 switches: flows/sec: 1211932 1211936 1211136 1211135 1209537 1208740 1207942 1207144 1209540 1209538 total = 1209.773558 per ms
15:19:30.787 10 switches: flows/sec: 1332517 1334911 1334910 1333315 1332516 1334911 1334914 1334115 1331717 1331717 total = 1333.339899 per ms
15:19:40.890 10 switches: flows/sec: 1338902 1341296 1341296 1339701 1338902 1341296 1341296 1341299 1340500 1338103 total = 1340.170783 per ms
15:19:50.993 10 switches: flows/sec: 1338902 1341296 1341296 1339701 1338902 1341296 1341298 1340500 1338103 1338103 total = 1339.813624 per ms
15:20:01.098 10 switches: flows/sec: 1367644 1369243 1369242 1367644 1367646 1368444 1366845 1366845 1366845 1366845 total = 1367.583576 per ms
15:20:11.201 10 switches: flows/sec: 1370837 1373232 1373232 1371636 1370837 1373232 1373234 1372435 1370038 1370038 total = 1371.747665 per ms
15:20:21.304 10 switches: flows/sec: 1348482 1350877 1350877 1349281 1348482 1350877 1350879 1350080 1347683 1347683 total = 1349.390289 per ms
15:20:31.407 10 switches: flows/sec: 1325329 1325329 1325329 1325331 1323733 1325329 1325329 1325329 1325329 1324532 total = 1325.071747 per ms
15:20:41.511 10 switches: flows/sec: 1332515 1334909 1334909 1333314 1332515 1334909 1334912 1334113 1331716 1331716 total = 1333.284810 per ms
15:20:51.615 10 switches: flows/sec: 1370837 1373232 1373232 1371636 1370837 1373232 1373234 1372435 1370038 1370038 total = 1371.601603 per ms
15:21:01.718 10 switches: flows/sec: 1361257 1363651 1363651 1362056 1361257 1363651 1363651 1363651 1362855 1360458 total = 1362.610257 per ms
15:21:11.822 10 switches: flows/sec: 1357265 1357265 1357264 1357267 1355669 1357264 1357264 1357264 1357264 1356468 total = 1356.793931 per ms
15:21:21.924 10 switches: flows/sec: 1341297 1341297 1341296 1341299 1339701 1341296 1341296 1341296 1341299 1339701 total = 1340.961574 per ms
RESULT: 10 switches 10 tests min/max/avg/stdev = 1325071.75/1371747.66/1351885.91/15827.27 responses/s
The floodlight controller has similar performance.
So I think the reason for the difference is the controller itself, rather than cbench. The configuration seems right, so maybe something can be done on the controller side to improve performance.
Does anyone have suggestions or ideas for how to improve the performance?
best wishes!
rainmeter

On Apr 21, 2014, at 1:27 PM, Muthukumaran Kothandaraman <mkothand@...> wrote:

Hi Chris,
>> It's recommended you use a different vm for cbench and the controller
.. as long as it is ensured that network latency / bandwidth constraints do not skew the measurements.

While running the controller and cbench on the same bare-metal multi-core machine via loopback (not different VMs), CPU-pinning can help minimize stomping and eliminate possible measurement skews due to the network:
- http://archive.openflow.org/wk/index.php/Controller_Performance_Comparisons
- Section 6 of http://yuba.stanford.edu/~derickso/docs/hotsdn15-erickson.pdf

Agreed, this is not a real-deployment scenario. But it gives a good baseline to compare against when deploying over a network. If performance degrades between the loopback and network environments, the first target of suspicion and troubleshooting would be none other than the network itself.
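The CPU-pinning suggestion above can be sketched with taskset. Core assignments here are illustrative, assuming an 8-CPU machine like the original poster's VM; the cbench line matches the `taskset -c 0 cbench ...` commands quoted earlier in the thread.

```shell
# Hedged sketch of the CPU-pinning advice above: keep the controller and
# cbench on disjoint cores so they do not stomp on each other. Core
# numbers are illustrative, assuming an 8-CPU machine like the OP's VM.
#
#   taskset -c 1-7 ./run.sh -of13                 # controller on cores 1-7
#   taskset -c 0 cbench -c localhost -p 6633 -t -s 10   # cbench on core 0
#
# taskset also accepts hex affinity masks; the mask for cores 1-7 is:
mask=0
for cpu in 1 2 3 4 5 6 7; do
  mask=$((mask | (1 << cpu)))
done
printf 'cores 1-7 mask: 0x%x\n' "$mask"   # → cores 1-7 mask: 0xfe
```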
Regards
Muthukumaran (Muthu)
P(existence at t) = 1- 1/P(existence at t-1)
From: "Christopher O'SHEA" <christopher.o.shea@...>
To: huangxufu <huangxufu@...>
Cc: "controller-dev@..." <controller-dev@...>, "integration-dev@..." <integration-dev@...>
Date: 04/20/2014 12:59 PM
Subject: Re: [controller-dev] opendaylight controller performance test problem
Sent by: controller-dev-bounces@...
Hi,
It's recommended you use a different VM for cbench and the controller, because both will put heavy load on the CPU.
On 19 Apr 2014, at 10:13 pm, "huangxufu" <huangxufu@...> wrote:
Hello Vaishali,
Thanks for your suggestion; I will try using the OpenFlowPlugin distribution to test performance again.
Because I run the ODL controller and cbench on the same VM, I use cbench with the following command:
$ cbench -t -s 10
Everything else is at the defaults.
best wishes!
rainmeter
On Apr 20, 2014, at 1:53 AM, Vaishali Mithbaokar (vmithbao) <vmithbao@...> wrote:
<including integration-dev@... mailer>
As per the attached email, Chris recently put together a nightly performance test against the OpenFlowPlugin distribution (not the base controller directly). It doesn't have the issue where test results drop to 0 after some time.
It may be a good idea to share the cbench configuration you are using.
BTW, in the above setup Chris has the controller and cbench running in separate VMs.
Thanks,
Vaishali
From: huangxufu <huangxufu@...>
Date: Saturday, April 19, 2014 8:01 AM
To: Greg Hall <ghall@...>
Cc: "controller-dev@..." <controller-dev@...>
Subject: Re: [controller-dev] opendaylight controller performance test problem
Hello Greg :-)
I built the controller from the master branch about 3 days ago; I didn't run the test on the Hydrogen release. Because I use git clone ssh://<username>@git.opendaylight.org:29418/controller.git, I think it is Base :-)
So is it a bug? After running several cbench tests, the results are all 0. Or did I have the wrong configuration? How can I configure it to get better performance?
By the way, did you run a similar performance test? What was the result? Sorry for flooding you with questions :-)
best wishes!
rainmeter
On Apr 19, 2014, at 10:44 PM, Greg Hall <ghall@...> wrote:
Hello Perf tester :-)
What build/date was your controller?
Hydrogen release or a recent build?
Base or SP? A lot of issues have been fixed since Hydrogen.
Memory exhaustion is a prime suspect for your apparent hang.
If it's a recent build then the max memory setting -Xmx is 1 GB as of a recent change. You'll see a clear message in the console stating this at startup.
Greg
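Greg's point about the 1 GB default cap suggests raising the heap for throughput runs. A minimal sketch, not an official recommendation: run.sh forwards -Xms/-Xmx to the JVM, as in the setup steps quoted elsewhere in this thread.

```shell
# A minimal sketch, assuming a build whose run.sh forwards JVM flags
# (the setup steps elsewhere in this thread use the same form): raise
# the 1 GB default heap cap before a throughput run, e.g.
#
#   ./run.sh -of13 -Xms1g -Xmx4g
#
# Rough sizing for the OP's 8 GB VM: a 4 GB heap leaves headroom for
# the rest of the JVM, cbench, and the OS when everything shares one box.
vm_gb=8; heap_gb=4
echo "headroom: $((vm_gb - heap_gb)) GB"   # → headroom: 4 GB
```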
On Apr 19, 2014, at 6:40 AM, huangxufu <huangxufu@...> wrote:
Hello all:
This is my first time asking for help from the opendaylight controller dev group.
Recently I have been doing some research on opendaylight controller performance. I want to test the controller's packet_in throughput and latency. I used a VM with 8 CPUs and 8 GB of memory for this experiment.
At the beginning, I tested the throughput with the cbench tool, but the result was poor: about 20K~30K responses per second. Has anyone run a similar opendaylight performance test with cbench? And what was the result?
Another issue is that after using cbench to test performance several times, the cbench result is always 0 and the osgi console becomes very slow to respond, or stops responding entirely. So what is wrong with opendaylight? Is this a bug, or due to my configuration (in fact I didn't configure anything; I just ran the run.sh file to start the controller)?
Thank you to anyone who can give me suggestions or explanations for these two problems.
Best wishes to all :)
rainmeter
_______________________________________________
controller-dev mailing list
controller-dev@...
https://lists.opendaylight.org/mailman/listinfo/controller-dev
Attachment: [integration-dev] Automation CBench test at Ericsson lab.eml (9K)