Luis Gomez <luis.gomez@...>
Hi guys,
Congratulations, I downloaded the Python scripts to the test tools VMs, changed the controller IP and ran the system test with no issues. I saw you coded all the REST requests very well, so this should be good input for the Robot framework.
BR/Luis
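For reference, the kind of REST call these scripts exercise looks roughly like the sketch below. This is only an illustration, not the actual scripts: the 8080 northbound port, the admin/admin credentials and the switchmanager path are assumed controller defaults and may differ in a given setup.

import requests

CONTROLLER_IP = "127.0.0.1"  # change to the controller VM address, as done for the run above
BASE_URL = "http://%s:8080/controller/nb/v2" % CONTROLLER_IP  # assumed default northbound port
AUTH = ("admin", "admin")  # assumed default credentials

def list_nodes():
    # Ask the Switch Manager northbound API for the nodes in the default container.
    resp = requests.get(BASE_URL + "/switchmanager/default/nodes", auth=AUTH)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(list_nodes())

Pointing the same call at another controller only requires changing CONTROLLER_IP, which is what makes these scripts easy to reuse from Robot.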
From: Baohua Yang [mailto:yangbaohua@...]
Sent: Tuesday, November 05, 2013 1:10 AM
To: Luis Gomez
Cc: Moiz Raja; integration-dev@...
Subject: Re: [integration-dev] System Test Plan discussion
Hi Luis
Thanks for your willingness to help test the code.
Currently, we've finished tests of the main functions of all modules in the base edition.
We believe there's still lots of work to do and welcome any feedback!
Denghui and I have discussed the development plan at length, but we think there will be more power if we do this community-wide.
Every member, please do not hesitate to drop us a line.
On Tue, Nov 5, 2013 at 12:04 PM, Luis Gomez <luis.gomez@...> wrote:
Hi Moiz,
See my answers inline:
From: Moiz Raja [mailto:moraja@...]
Sent: Monday, November 04, 2013 5:59 PM
To: Luis Gomez
Cc: Gmail; integration-dev@...
Subject: Re: [integration-dev] System Test Plan discussion
Hi Guys,
A couple of questions on the System test.
a. Will the System Test be integrated with the build? The system test will not run with the build, at least not the one based on Robot/Python. The idea is to trigger a job so that the controller VM (separate from the build server) fetches the latest release vehicle from Jenkins and runs it. After this we will trigger the test case execution in Robot (see the sketch right after this message).
b. What framework are we going to use to deploy the built artifacts? Is it going to be something like capistrano or custom bash scripts?
The test code (Robot or Python) does not need to be built, so I do not think we are going to have release artifacts as we do in Java. Instead, we will store the test code in our git and Robot will fetch it from there.
c. Will the python/robot tests live in the integration repository? Any tests that I can look at?
Yes, that is the idea, although nothing has been uploaded to the repo yet. So far we have two things: Python scripts created by the China team and stored in an external repo, and the Robot framework installed in the Open Lab at Ericsson. Both are now described (Carol updated the Robot section today) in
https://wiki.opendaylight.org/view/CrossProject:Integration_Group:Test_Tools
I remind everybody that this week is for getting familiar with these tools and seeing how we can best use them, so yes, you and everybody else are invited to take a look.
I will personally try to get the Python scripts working in the Open Lab at Ericsson tomorrow at the latest.
BR/Luis
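As a rough illustration of the flow described in answer (a), the triggered job could do something like the sketch below. The Jenkins artifact URL, the unpacked run script path and the Robot suite name are placeholders, not the project's real job layout.

import subprocess
import time
import zipfile

import requests

# Placeholder URL: the real Jenkins job and artifact path will differ.
ARTIFACT_URL = "https://jenkins.example.org/job/controller-merge/lastSuccessfulBuild/artifact/distribution.zip"

def fetch_distribution(dest="distribution.zip"):
    # Download and unpack the latest release vehicle produced by the build job.
    with open(dest, "wb") as out:
        out.write(requests.get(ARTIFACT_URL).content)
    zipfile.ZipFile(dest).extractall("distribution")

def run_system_test():
    fetch_distribution()
    # Start the controller in the background (the run script path is a placeholder).
    controller = subprocess.Popen(["./distribution/run.sh"])
    time.sleep(60)  # crude wait; a real job would poll the controller's REST API instead
    try:
        # Hand over to Robot Framework once the controller is up.
        subprocess.check_call(["pybot", "base_edition_suite.robot"])
    finally:
        controller.terminate()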
OK, I changed the test case order so that we start with the most basic services and end with the most abstracted services:
- Host Tracker & Simple Forwarding
Note that a basic service does not necessarily mean it is simple to test, so maybe I was not very precise when I said simple-to-complex. What I really meant was something like basic-to-extra functions.
I also added the required steps in every area so that every test case above is self-contained. Please review the test plan and let me know if you agree with it.
As for your question, yes, we can categorize the services in different ways, for example: basic network functions (Switch Mgr, Topology Mgr, FRM, Stats Mgr), extra network functions (Host Tracker, Simple Forwarding, ARP Handler, Forward Mgr), basic node functions (Configuration, User Mgr, Connection Mgr) and extra node functions (Container Mgr, Cluster Mgr). This is just one idea and there could be more; in any case, beyond the classification, the important thing is that we do not leave any feature/module without a test.
Sure, Luis. This is a valuable question!
IMHO, the simple-to-complex order is good.
However, we might also keep test cases independent from each other, i.e., each test case should be self-contained, because we may sometimes want to test the function of an individual module instead of the entire platform.
Besides, we may even categorize the tested modules based on their functions, test complexity, etc., for example: state collection modules, basic forwarding modules, QoS modules, and so on.
Each category can then be tested separately.
BTW, I think I would prefer some logic in the order (like simple to complex) and independent modules, even if this means more TCs.
I have also observed that the list of TCs in the Base Test Plan has been reordered alphabetically. This is fine, but I just want to explain the reasoning behind the previous order:
1) I started with simpler (less abstraction) test case modules like Switch Mgr, Topology Mgr and FRM, and finished with more complex (more abstraction) ones like Host Tracker, ARP Handler, Container Mgr, etc. The reason for this is that if we start testing simple things and these fail, there is no point in continuing with the complex ones. Also, if the test starts with a complex case and fails, it is more difficult to debug.
2) I also combined some modules. For example, if I need to have some flows (FRM) in order to check statistics (Stats Mgr), I create the flows (FRM1), then I check the statistics (Stats Mgr) and then I clear the flows (FRM2). This way I do not need to create flows twice, once for FRM and again for Stats Mgr (see the sketch below).
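A sketch of that FRM1 -> Stats Mgr -> FRM2 sequence is shown below, assuming the Hydrogen-era flowprogrammer and statistics northbound paths, default credentials and a sample OpenFlow node id; the exact paths and flow attributes should be checked against the controller's REST API documentation.

import requests

BASE = "http://127.0.0.1:8080/controller/nb/v2"  # assumed controller address and northbound port
AUTH = ("admin", "admin")                        # assumed default credentials
NODE = "00:00:00:00:00:00:00:01"                 # example OpenFlow node id
FLOW_NAME = "tc_stats_flow"
FLOW = {"name": FLOW_NAME, "node": {"id": NODE, "type": "OF"},
        "priority": "500", "etherType": "0x800",
        "nwDst": "10.0.0.1/32", "actions": ["DROP"]}

def frm_stats_sequence():
    flow_url = "%s/flowprogrammer/default/node/OF/%s/staticFlow/%s" % (BASE, NODE, FLOW_NAME)
    # FRM1: create the flow that the statistics check depends on.
    assert requests.put(flow_url, json=FLOW, auth=AUTH).status_code in (200, 201)
    # Stats Mgr: the node's flow statistics should now be readable
    # (a real test would also verify the flow's match fields in the response).
    stats = requests.get("%s/statistics/default/flow/node/OF/%s" % (BASE, NODE), auth=AUTH)
    assert stats.status_code == 200
    # FRM2: clean up so later test cases do not inherit this flow.
    assert requests.delete(flow_url, auth=AUTH).status_code == 200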
The thing is, the existing TC flow does not work the way I wrote it: there are some modules (e.g. Stats Mgr) that need certain conditions created by previous modules (FRM). Anyway, instead of fixing this I would like to open a discussion (this is the right time) on how we want to write, and later execute, the test plan:
1) Module order: do we want to follow some logical order (simple to complex or something else) or do we just go alphabetical?
2) Module dependencies: do we want modules depending on the results of previous modules (more test efficient, fewer TCs) or totally independent modules (less efficient but very flexible if, for example, I want to run just one module's tests)?
Please help verify and add more details, or suggest a better place to move this to.
Denghui and I are working hard on writing the code these days, and we plan to release a workable version for the base edition this week.
Currently, simple list_nodes tests on ARP Handler, Host Tracker and Switch Manager have been provided.
Anyone is welcome to contribute to the code, documentation or bug fixing.
It has been a busy day, but I finally got some time to summarize our afternoon discussion around the system test:
- Python scripts: Baohua will write a guide in the wiki on how to use the scripts. These can be very useful for debugging test cases.
- Robot framework: It is already installed in the Ericsson lab; Carol will write a quick guide on how to use it. Denghui, you also have Robot experience, so you can help Carol with this (see the keyword library sketch after this summary).
- TestON framework: We have a meeting next week to learn more about it (especially its APIs and driver support); I will share with you some documentation I have already received from Swaraj. From the TSC call today, it looks like we are interested in collaborating with the ON.Lab people, so this could be a good opportunity.
- System Test Plan: We need to continue working on this; everybody is invited to contribute in the wiki. I am going to meet Madhu next Monday to talk about OVSDB inclusion in the base release, and I will also work on some test cases around the OVSDB plugin. Punal is going to take a look at the VTN project (Virtualization release) as they have everything very well documented in the wiki. For the rest of the projects we will need to ask for information as we write the test plan.
So, the plan for next week is to get familiar with the Robot framework and the Python scripts created by the China team, evaluate the TestON framework, and continue filling in the test plan.
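On reusing the Python scripts from Robot, one low-effort path would be to wrap the existing REST helpers as a Robot keyword library, roughly as sketched below. The class name, endpoint and credentials here are illustrative assumptions, not the actual scripts.

import requests

class ControllerLibrary(object):
    # Robot Framework exposes the public methods of this class as keywords,
    # e.g. "Get Nodes" and "Controller Should See Node".

    def __init__(self, controller_ip="127.0.0.1"):
        self.base = "http://%s:8080/controller/nb/v2" % controller_ip  # assumed default northbound port
        self.auth = ("admin", "admin")                                 # assumed default credentials

    def get_nodes(self):
        resp = requests.get(self.base + "/switchmanager/default/nodes", auth=self.auth)
        resp.raise_for_status()
        return resp.json()

    def controller_should_see_node(self, node_id):
        if node_id not in str(self.get_nodes()):
            raise AssertionError("Node %s not found in Switch Manager" % node_id)

A suite would then import the file with "Library    ControllerLibrary.py    ${CONTROLLER_IP}" and call the keywords directly, so the same REST code serves both the standalone scripts and the Robot test cases.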
_______________________________________________
integration-dev mailing list
integration-dev@...
https://lists.opendaylight.org/mailman/listinfo/integration-dev
--
Best wishes!
Baohua
Moiz Raja <moraja@...>
How about just adding the Python scripts to the integration repo instead of using GitHub? It doesn't hurt.
-Moiz
Luis Gomez <luis.gomez@...>
Hi Moiz, just a question: our repo, as it is configured now, shouldn't it trigger a verify build job whenever someone tries to push something to it (i.e. with git push)? Or is there another way to do this?
Moiz Raja <moraja@...>
When you push code into the repository it will trigger a verify build. However, if your integration tests are not integrated into the Maven build, they will not be run.
Was your concern that pushing the code would automatically trigger a test run? Because it won't.
I suggest that we simply add whatever is in the GitHub repo to the ODL integration repo. I can do it if you like.
-Moiz
Baohua Yang <yangbaohua@...>
Thanks for your feedback, Luis. It would be of great help if wide, thorough testing could be done in various environments before we declare the code stable. As we agreed before, I will announce every stable release, and then we can consider deploying it to our OpenLab.
Maybe we should discuss at tomorrow's meeting the integration into the Robot framework and pulling the code into our integration repo; the two can then sync with each other periodically.
Can you help check where a suitable place in our integration repo would be? Thanks!
--
Best wishes!
Baohua
Luis Gomez <luis.gomez@...>
Hi Moiz,
My only concern was that the existing verify job will build the release vehicles every time someone changes something in the test code. I do not know how much effort/time the build requires (maybe very little), but it is something I would like to optimize in Jenkins, as per the mail I sent this morning to Andy and the rest of the team.
That said, yes, sooner or later the scripts will be in the git, so please feel free to upload them already. Also, Baohua and Denghui, as the contributors of the scripts, are you familiar with the pull/push process in OpenDaylight?
Thanks/Luis
Moiz Raja <moraja@...>
OK, I understand your concern now.
I'll let Baohua and Denghui push to the ODL repo since they have done the work to create these tests :)
-Moiz
toggle quoted message
Show quoted text
Hi Moiz, My only concern was that the existing verify job will build the release vehicles every time someone changes something in the test code, I do not know how much effort/time is required for the build (maybe it is very less) but it is something I would like to optimize in Jenkins like the mail I sent this morning to Andy and the rest of the team. Said that, yes, sooner or later the scripts will be in the git so please feel free to upload them already. Also Baohua and Dengui, being the contributors for the scripts, are you familiar with pull/push process in OpenDaylight? Thanks/Luis
When you push code into the repository it will trigger a verify build. However if you do not have your integration tests integrated into the maven build then they will not be run. Was your concern that pushing the code will automatically trigger a test run? Because it won't. I suggest that we simply add whatever is in the Github repo to the ODL integration repo. I can do it if you like.
Hi Moiz, just a question: as our repo is configured now, shouldn't it trigger a verify build job whenever someone tries to put something on it (i.e. with git push)? Or is there another way to do this?
Baohua Yang <yangbaohua@...>
Yes, Luis, definitely. My concern is whether we should put the code in now or wait until it is more stable, as Denghui and I are still fixing one small bug. After that, we can publish a basic release of the tests for the base edition.
Should we put the code into a new directory like "tool"? Any other suggestions?
Thanks!
On Wed, Nov 6, 2013 at 10:12 AM, Luis Gomez <luis.gomez@...> wrote:
Hi Moiz,
My only concern was that the existing verify job will build the release vehicles every time someone changes something in the test code. I do not know how much effort/time the build requires (maybe very little), but it is something I would like to optimize in Jenkins, as in the mail I sent this morning to Andy and the rest of the team.
That said, yes, sooner or later the scripts will be in our git, so please feel free to upload them already. Also, Baohua and Denghui, as the contributors of the scripts, are you familiar with the pull/push process in OpenDaylight?
Thanks/Luis
From: Moiz Raja [mailto:moraja@...]
Sent: Tuesday, November 05, 2013 5:51 PM
When you push code into the repository it will trigger a verify build. However, if your integration tests are not integrated into the Maven build, they will not be run.
Was your concern that pushing the code would automatically trigger a test run? Because it won't.
I suggest that we simply add whatever is in the GitHub repo to the ODL integration repo. I can do it if you like.
Hi Moiz, just a question: as our repo is configured now, shouldn't it trigger a verify build job whenever someone tries to put something on it (i.e. with git push)? Or is there another way to do this?
How about just adding the python scripts into the integration repo instead of using GitHub? It doesn't hurt.
Sure, Luis. This is a valuable question!
IMHO, the simple-to-complex order is good.
However, we might also keep test cases independent from each other, i.e., each test case should be self-complete, because we may sometimes want to test the function of an individual module instead of the entire platform.
Besides, we may even categorize the tested modules based on their functions, test complexity, etc. (for example, state collection modules, basic forwarding modules, QoS modules), and each category can be tested separately.
BTW, I would prefer some logic in the order (like simple to complex) and independent modules, even if this means more TCs.
I have also observed that the list of TCs in the Base Test Plan has been reordered alphabetically. This is fine, but I just want to explain the reasoning behind the previous order:
1) I started with simpler (less abstraction) modules like Switch Mgr, Topology Mgr and FRM, and finished with more complex (more abstraction) ones like Host Tracker, ARP Handler, Container Mgr, etc. The reason is that if we start testing simple things and they fail, there is no point in continuing with complex ones. Also, if the test starts with a complex case and fails, it is more difficult to debug.
2) I also combined some modules. For example, since I need some flows (FRM) in place in order to check statistics (Stats Mgr), I create the flows (FRM1), then check the statistics (Stats Mgr), and then clear the flows (FRM2). This way I do not need to create the flows twice, once for FRM and once for Stats Mgr.
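To make the FRM1 / Stats Mgr / FRM2 sequence in point 2 concrete, here is a minimal Python sketch in the style of the CSIT scripts, using the requests library directly. The controller address, credentials, node id, NB URL paths and flow body below are illustrative assumptions and may need to be adapted to the actual controller release.

import json
import requests

CONTROLLER = "http://127.0.0.1:8080"            # assumed controller address
AUTH = ("admin", "admin")                       # assumed default credentials
HEADERS = {"Content-Type": "application/json"}
NODE = "00:00:00:00:00:00:00:01"                # assumed OpenFlow node id

# Assumed NB endpoints (flow programmer and statistics); adjust to the real release.
FLOW_URL = (CONTROLLER + "/controller/nb/v2/flowprogrammer/default/node/OF/"
            + NODE + "/staticFlow/test-flow-1")
STATS_URL = CONTROLLER + "/controller/nb/v2/statistics/default/flow"

# Illustrative static flow body.
flow = {
    "name": "test-flow-1",
    "node": {"id": NODE, "type": "OF"},
    "priority": "500",
    "etherType": "0x800",
    "nwDst": "10.0.0.2/32",
    "actions": ["OUTPUT=2"],
}

# FRM1: create the flow.
resp = requests.put(FLOW_URL, data=json.dumps(flow), headers=HEADERS, auth=AUTH)
assert resp.status_code in (200, 201), resp.text

# Stats Mgr: the new flow should now appear in the flow statistics.
resp = requests.get(STATS_URL, auth=AUTH)
assert resp.status_code == 200, resp.text
assert "test-flow-1" in resp.text

# FRM2: clear the flow so the next module starts from a clean state.
resp = requests.delete(FLOW_URL, auth=AUTH)
assert resp.status_code == 200, resp.text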
The thing is, the existing TC flow does not work that way: as I wrote it, some modules (e.g. Stats Mgr) need certain conditions created by previous modules (FRM). Anyway, instead of fixing this I would like to open a discussion (this is the right time) on how we want to write and later execute the test plan:
1) Module order: do we want to follow a logical order (simple to complex or something else), or do we just go alphabetical?
2) Module dependencies: do we want modules to depend on the result of previous modules (more test efficient, fewer TCs), or have totally independent modules (less efficient but very flexible if, for example, I want to run just one module test)?
Please help verify and add more details, or suggest a better place to move this to.
Denghui and I are working hard on the code these days, and we plan to release a workable version for the base edition this week.
Currently, simple list_nodes tests on ARP Handler, Host Tracker and Switch Manager have been provided.
Anyone is welcome to contribute to the code, the documents or bug fixes.
It has been a busy day, but finally I got some time to summarize our afternoon discussion around the system test:
- Python scripts: Baohua will write a guide in the wiki on how to use the scripts. These can be very useful to debug test cases.
- Robot framework: It is already installed in the Ericsson Lab; Carol will write a quick guide on how to use it. Denghui, you also have Robot experience, so you can help Carol with this.
- TestON framework: We have a meeting next week to learn more about it (especially APIs and driver support); I will share with you some documentation I have already got from Swaraj. From the TSC call today, it looks like we are interested in collaborating with ON.Lab people, so this could be a good opportunity.
- System Test Plan: We need to continue working on this; everybody is invited to contribute in the wiki. I am going to meet Madhu next Monday to talk about OVSDB inclusion in the base release, and I will also work on some test cases around the OVSDB plugin. Punal is going to take a look at the VTN project (Virtualization release), as they have everything very well documented in the wiki. For the rest of the projects, we will need to ask for information as we write the test plan.
So, the plan for next week is to get familiar with the Robot framework and the python scripts created by the China team, evaluate the TestON framework, and continue filling in the test plan.
-- Best wishes! Baohua
Luis Gomez <luis.gomez@...>
Hi Baohua,
I have not really thought much about the name, but yes, we need a folder for everything that is test code (python, robot, etc.), so one idea is to have the following structure under integration:

Integration
    distributions
    packaging
    testcode
        tool
        robot
        teston

Or maybe have tool, robot and teston directly in the root; we can discuss more tomorrow during the meeting if you or someone else has more ideas.
BR/Luis
Baohua Yang <yangbaohua@...>
Sure, Luis, how about the following?
Integration
    distributions
    packaging
    test
        tool
        CSIT_test
        robot
        teston
-- Best wishes! Baohua
Punal Patel <Punal.Patel@...>
Hi Team,
We can use robotframework-requests:
https://github.com/bulkan/robotframework-requests
We are using the plain "requests" library right now in the CSIT test tools, but robotframework-requests would integrate the CSIT test tools with Robot Framework.
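For comparison, here is roughly what a simple NB call looks like today with the plain requests library (the style used in the CSIT test tool); with robotframework-requests the same steps would be written as Robot keywords (e.g. Create Session, then Get Request) directly in a .robot file. The controller address, credentials and switch manager path below are illustrative assumptions.

import requests

CONTROLLER = "http://127.0.0.1:8080"     # assumed controller address
AUTH = ("admin", "admin")                # assumed default credentials

# Assumed switch manager NB endpoint (list nodes in the default container).
NODES_URL = CONTROLLER + "/controller/nb/v2/switchmanager/default/nodes"

# Plain-requests version; robotframework-requests would wrap roughly the same call
# as keywords (Create Session on CONTROLLER, then Get Request on the path).
response = requests.get(NODES_URL, auth=AUTH)
response.raise_for_status()
print(response.json())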
Any thoughts, comments?
Thank You,
Punal Patel
Luis Gomez <luis.gomez@...>
Can anyone try robotframework-requests vs. the CSIT-test-tool requests and see what the differences are?
I would like to try it in the Ericsson Lab today if I have time.
Baohua Yang <yangbaohua@...>
Thanks, Punal and Luis! It is a good library that uses the python requests library and provides an API for the Robot framework.
We evaluated this library before trying to find other solutions, and we think it is a nice general packaging.
Before planning to write our own tool, we also tried similar ideas but found several difficulties in handling our scenarios that way:
1) The result validation may not be that straightforward when directly calling the NB API, because many responses must be filtered and analyzed. For example, in the switch manager, the list function returns a large response that includes timestamps, so we cannot pre-set the expected answer before getting the data in real time. Putting more of these actions into the test cases, however, would reduce the readability of the Robot scripts. So we want to keep more flexibility and scalability here, while keeping the Robot side simple (a sketch of this kind of filtering follows after this list).
2) Another problem is the dynamics. Some functionality must be evaluated by combining several NB APIs, such as adding an entry and then removing it, and there will be even more complicated scenarios in the future.
3) Our further aim is to load the network config dynamically and help set up the environment automatically. As far as we know, additional code is needed to do this, even with the help of similar libraries and the Robot tool.
IMHO, an appropriate solution may be to keep Robot itself simple and readable, while guaranteeing enough scalability and flexibility for users and future development.
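As an illustration of the filtering mentioned in point 1, the sketch below strips volatile properties (such as timestamps) from an assumed switch manager response before comparing it against a pre-set answer. The URL path, response structure, property names and expected node id are illustrative assumptions.

import requests

CONTROLLER = "http://127.0.0.1:8080"     # assumed controller address
AUTH = ("admin", "admin")                # assumed default credentials
NODES_URL = CONTROLLER + "/controller/nb/v2/switchmanager/default/nodes"  # assumed path

VOLATILE = {"timeStamp", "timeStampName"}     # illustrative volatile property names

def stable_view(payload):
    """Drop volatile properties so the result can be compared to a fixed expectation."""
    view = []
    for entry in payload.get("nodeProperties", []):   # assumed response structure
        props = {k: v for k, v in entry.get("properties", {}).items()
                 if k not in VOLATILE}
        view.append({"node": entry.get("node"), "properties": props})
    return view

response = requests.get(NODES_URL, auth=AUTH)
response.raise_for_status()
nodes = stable_view(response.json())

# With the volatile fields removed, a fixed expectation becomes meaningful,
# e.g. the node ids of the switches we started in the test topology.
expected_ids = {"00:00:00:00:00:00:00:01"}    # illustrative expectation
assert expected_ids <= {n["node"]["id"] for n in nodes}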
BTW, one big advantage of the library that we can see is the session support, which could also be implemented easily in the current CSIT tool.
However, we have left this undecided, as we are still investigating whether the caching and persistence that a session provides is necessary for our tests (everything currently completes in seconds), and what the advantages and disadvantages of using sessions would be in this specific controller test.
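For reference, the session mechanism being discussed is essentially the following (a sketch; the address, credentials and paths are placeholders):

import requests

session = requests.Session()
session.auth = ("admin", "admin")                  # assumed default credentials
BASE = "http://127.0.0.1:8080/controller/nb/v2"    # assumed NB base URL

# A Session reuses one underlying connection (and any cookies) across calls,
# instead of reconnecting and re-authenticating for every single request.
nodes = session.get(BASE + "/switchmanager/default/nodes")
stats = session.get(BASE + "/statistics/default/flow")
nodes.raise_for_status()
stats.raise_for_status()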
I would very much appreciate it if anyone could drop some comments on this.
We would welcome more evaluation feedback as soon as possible.
Thanks!
toggle quoted message
Show quoted text
Luis Gomez <luis.gomez@...>
I like the word “test” in the integration root; now, what are the things we will have under test?
Maybe tools that can be used by any test (like your scripts) and then the tests themselves (CSIT, E2EST, etc.) written in Robot or TestON (we will see), so we can
do something like this:
Integration --
distributions
packaging
test --
tools
csit --
robot
teston
What do you think?
Luis Gomez <luis.gomez@...>
Hi all,
I have tried a simple “get topology” test case with the Robot requests library and it also works pretty neatly:
*** Settings ***
Library           Collections
Library           RequestsLibrary

*** Test Cases ***
Get Request
    ${auth}=    Create List    admin    admin
    Create Session    controller    http://10.125.136.52:8080    auth=${auth}
    ${resp}=    Get    controller    /controller/nb/v2/topology/default
    Log    ${resp}
    Should Be Equal As Strings    ${resp.status_code}    200
However, I am only doing a simple match on the “200 OK” response, while in Baohua's Python scripts we are doing much more:
def get_topology(self):
    """
    The name is suggested to match the NB API.
    Show the topology
    >>> TopologyManager().get_topology()
    True
    """
    r = super(self.__class__, self).read()
    if r:
        v = r['edgeProperties']
        for i in range(0, len(r), 2):
            nc = v[i]['edge']
            if nc[u'tailNodeConnector'] == {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:03'},
                                            u'type': u'OF', u'id': u'3'}:
                if nc[u'headNodeConnector'] != {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:01'},
                                                u'type': u'OF', u'id': u'2'}:
                    print False
            elif nc[u'tailNodeConnector'] == {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:02'},
                                              u'type': u'OF', u'id': u'3'}:
                if nc[u'headNodeConnector'] != {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:01'},
                                                u'type': u'OF', u'id': u'1'}:
                    print False
            elif nc[u'tailNodeConnector'] == {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:01'},
                                              u'type': u'OF', u'id': u'1'}:
                if nc[u'headNodeConnector'] != {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:02'},
                                                u'type': u'OF', u'id': u'3'}:
                    print False
            elif nc[u'tailNodeConnector'] == {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:01'},
                                              u'type': u'OF', u'id': u'2'}:
                if nc[u'headNodeConnector'] != {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:03'},
                                                u'type': u'OF', u'id': u'3'}:
                    print False
            else:
                print False
        print True
So I believe this is what Baohua means when he says we can be more flexible using our own library, right?
BR/Luis
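Just as food for thought, here is a minimal sketch (not part of the CSIT tool) of a table-driven version of the same check, assuming the edgeProperties structure shown above; the expected tail-to-head pairs live in one dict, so adding links to the test topology only means adding entries:

# Expected (tail node id, tail connector id) -> (head node id, head connector id)
EXPECTED_EDGES = {
    ('00:00:00:00:00:00:00:03', '3'): ('00:00:00:00:00:00:00:01', '2'),
    ('00:00:00:00:00:00:00:02', '3'): ('00:00:00:00:00:00:00:01', '1'),
    ('00:00:00:00:00:00:00:01', '1'): ('00:00:00:00:00:00:00:02', '3'),
    ('00:00:00:00:00:00:00:01', '2'): ('00:00:00:00:00:00:00:03', '3'),
}

def topology_matches(edge_properties):
    """Return True only if every edge in the NB response is one we expect."""
    for item in edge_properties:
        edge = item['edge']
        tail = (edge['tailNodeConnector']['node']['id'], edge['tailNodeConnector']['id'])
        head = (edge['headNodeConnector']['node']['id'], edge['headNodeConnector']['id'])
        if EXPECTED_EDGES.get(tail) != head:
            return False
    return True

Returning a boolean (or raising an AssertionError) instead of printing would also make a check like this easier to call from Robot later.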
Luis Gomez <luis.gomez@...>
In general I think both ways have their pros and cons: custom libraries are very powerful, but I also feel we lose the test definition concept in Robot, i.e.
the whole test is defined in Python and we need to actually read the Python code to understand the test case. Anyway, I will check tomorrow with Carol and some other people joining the Thursday test tools discussion what the Robot limitations are with
regard to our system test and how we can better use the CSIT_test_tools scripts to work around them.
BR/Luis
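One possible middle ground, sketched below purely as an illustration (the module and keyword names are hypothetical, and the NB path/credentials are the ones used earlier in the thread): keep the validation logic in a small Python class and expose it to Robot as a keyword library, so the Robot file still states the intent of the test in plain keywords.

# csit_keywords.py -- hypothetical keyword library wrapping CSIT-style checks.
import requests

class TopologyKeywords(object):
    """Public methods become Robot keywords via: Library    csit_keywords.TopologyKeywords"""

    def __init__(self, base_url='http://127.0.0.1:8080', user='admin', password='admin'):
        self.base_url = base_url
        self.auth = (user, password)

    def get_topology_edges(self, container='default'):
        """Return the edge list reported by the topology NB API."""
        url = '%s/controller/nb/v2/topology/%s' % (self.base_url, container)
        resp = requests.get(url, auth=self.auth)
        resp.raise_for_status()
        return resp.json().get('edgeProperties', [])

    def topology_should_contain_edge(self, tail_node, head_node, container='default'):
        """Fail unless an edge from tail_node to head_node is present."""
        for item in self.get_topology_edges(container):
            edge = item['edge']
            if (edge['tailNodeConnector']['node']['id'] == tail_node and
                    edge['headNodeConnector']['node']['id'] == head_node):
                return
        raise AssertionError('No edge from %s to %s found' % (tail_node, head_node))

A Robot test could then simply call Topology Should Contain Edge with the two node ids, keeping the test case readable while the filtering stays in Python.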
Baohua Yang <yangbaohua@...>
Dear team members, I just created the test directory as discussed and added the first version of the stable code.
Besides, I noticed that pushing code into the integration repo triggers a full rebuild of the projects. Maybe this is not necessary currently. Thanks!
On Thu, Nov 7, 2013 at 11:32 AM, Luis Gomez <luis.gomez@...> wrote:
I like the word “test” on integration root, now what are the things we will have under test?
Maybe tools that can be used by any test (like your scripts) and then the test themselves (CSIT, E2EST, etc…) written in Robot or TestOn (will see) so we can
do something like this:
Integration --
distributions
packaging
test --
tools
csit --
robot
teston
What do you think?
From: Baohua Yang [mailto:yangbaohua@...]
Sent: Tuesday, November 05, 2013 9:03 PM
Sure, luis, how about the following?
Integration --
distributions
packaging
test --
tool
CSIT_test
robot
teston
On Wed, Nov 6, 2013 at 10:31 AM, Luis Gomez <luis.gomez@...> wrote:
Hi Baohua,
I have not really thought much on the name but yes we need a folder for all that is test code (python,
robot, etc..), so one idea is to have the following structure under integration:
Integration – distributions
packaging
testcode – tool
robot
teston
Or maybe have tool, robot and teston directly in the root, we can discuss more tomorrow during the
meeting if you or someone else has more ideas.
BR/Luis
From: Baohua Yang [mailto:yangbaohua@...]
Sent: Tuesday, November 05, 2013 6:20 PM
To: Luis Gomez
Cc: Moiz Raja; huang denghui (huangdenghui@...);
integration-dev@...
Subject: Re: [integration-dev] CSIT test tools
Yes, luis. Definitely.
What I concern is should we put the code now or wait until the code is more stable?
As Deng hui and I are still fixing one little bug.
After that, we can see the basic release on the tests of the base edition.
Should we put the code into a new directory like "tool"? Or other comment?
On Wed, Nov 6, 2013 at 10:12 AM, Luis Gomez <luis.gomez@...> wrote:
Hi Moiz,
My only concern was that the existing verify job will build the release vehicles every time someone
changes something in the test code, I do not know how much effort/time is required for the build (maybe it is very less) but it is something I would like to optimize in Jenkins like the mail I sent this morning to Andy and the rest of the team.
Said that, yes, sooner or later the scripts will be in the git so please feel free to upload them
already. Also Baohua and Dengui, being the contributors for the scripts, are you familiar with pull/push process in OpenDaylight?
Thanks/Luis
From: Moiz Raja [mailto:moraja@...]
Sent: Tuesday, November 05, 2013 5:51 PM
When you push code into the repository it will trigger a verify build. However if you do not have your integration tests integrated into the maven build then they will not be run.
Was your concern that pushing the code will automatically trigger a test run? Because it won't.
I suggest that we simply add whatever is in the Github repo to the ODL integration repo. I can do it if you like.
Hi Moiz, just a question: our repo as it is configured now, should not trigger a verify build job
whenever someone tries to put something on it (i.e. with git push)? or is there another way to do this?
How about just adding the python scripts into the integration repo instead of using github. It doesn't hurt.
Congratulations, I downloaded the python scripts to the test tools VMs, changed the controller IP
and run the system test with no issues. I saw you coded very well all the REST requests so this should be a good input for Robot framework.
From: Baohua Yang [mailto:yangbaohua@gmail.com]
Sent: Tuesday, November 05, 2013 1:10 AM
To: Luis Gomez
Cc: Moiz Raja; integration-dev@...
Subject: Re: [integration-dev] System Test Plan discussion
Thanks for your willing to help test the code.
Currently, we've finished the tests on main functions of all modules in the base edition.
We believe there's still lots of work to do and welcome for any feedback!
Denghui and I have discussed a lot on the development plan, but we think there will be more power community-widely.
Every member, please do not hesitate to drop lines.
From: Moiz Raja [mailto:moraja@...]
Sent: Monday, November 04, 2013 5:59 PM
To: Luis Gomez
Cc: Gmail; integration-dev@...
Subject: Re: [integration-dev] System Test Plan discussion
A couple of questions on the System test.
a. Will the System Test be integrated with the build? The system test will not run with the build,
at least the one based on Robot/Phyton. The idea is to trigger a job so that the controller VM (separated from the build server) fetches the latest release vehicle from Jenkins and runs it. After this we will trigger the test case execution in Robot.
b. What framework are we going to use to deploy the built artifacts? Is it going to be something like capistrano or custom bash scripts?
The test code (Robot or Python) does not need to be built so I do not think we are going to have
release artifacts as we have in Java. Instead we will have the test code stored in our git and then Robot will fetch the code from there.
c. Will the python/robot tests live in the integration repository? Anything tests that I can look at?
I remind everybody this week is to get familiar with these tools and see how we can better use them so yes you and everybody is invited to take a look.
I will personally try to get the python scripts to work in the Open Lab at Ericsson tomorrow the latest.
OK, I changed the test case order so that we start with the most basic services and end with the most abstracted services:
- Host Tracker & Simple Forwarding
Note that a basic service does not necessarily mean simple to test, so maybe I was not very precise
when I said simple-to-complex. What I really meant was something like basic-to-extra functions.
I also added the required steps in every area so every test case above is self-complete. Please review
the test plan and let me know if you agree with it.
As for your question, yes, we can categorize the services in different ways, for example: basic
network functions (Switch Mgr, Topology Mgr, FRM, Stats Mgr), extra network functions (Host Tracker, Simple Forwarding, ARP Handler, Forward Mgr), basic node functions (Configuration, User Mgr, Connection Mgr) and extra node functions (Container Mgr, Cluster
Mgr). This is just an idea and there could be more categories; anyway, beyond the classification, the important thing is that we do not leave features/modules without tests.
Sure, Luis. This is a valuable question!
IMHO, the simple-to-complex order is good.
However, we might also keep test cases independent from each other, i.e., each test case should be self-complete, because we may sometimes want to test the function of an individual
module instead of the entire platform.
Besides, we may even categorize the tested modules based on their functions, test complexity, etc. For example: state collection modules, basic forwarding modules, QoS modules...
And each category can be tested separately.
BTW, I think I would prefer some logic in the order (like simple to complex) and independent modules,
even if this means more TCs.
I have also observed that the list of TCs in the Base Test Plan has been reordered into alphabetical
order. This is fine, but I just want to explain the reasoning behind the previous order:
1) I started with simpler (less abstraction) test case modules like Switch Mgr, Topology Mgr and
FRM, and finished with more complex (more abstraction) ones like Host Tracker, ARP Handler, Container Mgr, etc. The reason for this is that if we start testing simple things and they fail, there is no point in continuing with the complex ones. Also, if the test starts
with a complex case and fails, it is more difficult to debug.
2) I also combined some modules. For example, if I need to have some flows (FRM) in order to check
statistics (Stats Mgr), I create the flows (FRM1), then I check the statistics (Stats Mgr) and then I clear the flows (FRM2). This way I do not need to create flows twice, once for FRM and again for Stats Mgr. (A rough sketch of such a combined flow is included below.)
The thing is, the existing TC flow does not work, because the way I wrote it there are some modules (e.g.
Stats Mgr) that need certain conditions created by previous modules (FRM). Anyway, instead of fixing this I would like to open a discussion (this is the right time) on how we want to write and later execute the test plan:
1) Module order: do we want to follow some logical order (simple to complex or something else) or
just go alphabetical?
2) Module dependencies: do we want to have modules depending on the results of previous modules (more
test efficient, fewer TCs) or have totally independent modules (less efficient, but very flexible if, for example, I want to run just one module test)?
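Just to illustrate option 2) with something concrete, here is a rough, hypothetical sketch of a combined FRM/Stats sequence written with plain Python requests. The controller address, credentials, NB paths, payload fields and the loose text check are all assumptions about the controller's northbound API and would need to be verified against the real REST documentation before use:

```python
# Hypothetical sketch of a combined FRM -> Stats Mgr -> FRM cleanup sequence.
# NB paths, payload fields and response format are assumptions, not verified here.
import requests

BASE = "http://127.0.0.1:8080/controller/nb/v2"   # assumed controller NB root
AUTH = ("admin", "admin")                         # assumed default credentials
NODE = "00:00:00:00:00:00:00:01"                  # assumed test switch id
FLOW_NAME = "frm-stats-test-flow"
FLOW_BODY = {                                     # assumed static flow payload
    "name": FLOW_NAME,
    "node": {"type": "OF", "id": NODE},
    "priority": "500",
    "etherType": "0x800",
    "nwDst": "10.0.0.2/32",
    "actions": ["OUTPUT=2"],
}

def combined_frm_and_stats_check():
    flow_url = "%s/flowprogrammer/default/node/OF/%s/staticFlow/%s" % (BASE, NODE, FLOW_NAME)

    # FRM1: create the flow that the statistics check depends on
    resp = requests.put(flow_url, json=FLOW_BODY, auth=AUTH)
    assert resp.status_code in (200, 201), "flow creation failed: %s" % resp.status_code

    # Stats Mgr: the programmed match should now show up in the node's flow statistics
    # (loose text check only; a real test would parse and filter the JSON response)
    resp = requests.get("%s/statistics/default/flow/node/OF/%s" % (BASE, NODE), auth=AUTH)
    assert resp.status_code == 200, "statistics request failed: %s" % resp.status_code
    assert "10.0.0.2" in resp.text, "programmed flow not reported by Stats Mgr"

    # FRM2: clean up so the following module starts from a known state
    resp = requests.delete(flow_url, auth=AUTH)
    assert resp.status_code == 200, "flow removal failed: %s" % resp.status_code

if __name__ == "__main__":
    combined_frm_and_stats_check()
    print("combined FRM/Stats check passed")
```

The point of the sketch is only to show the dependency shape (create, verify, clean up in one module chain), not to propose a specific implementation.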
Please help verify and add more details, or suggest a better place to move the code to.
Denghui and I are working hard on the code these days, and we plan to release a workable version for the base edition this week.
Currently, simple list_nodes tests on the ARP handler, host tracker and switch manager have been provided.
Anyone is welcome to contribute to the code, documents or bug fixing.
It has been a busy day, but finally I got some time to summarize our afternoon discussion around the system test:
- Python scripts: Baohua will write a guide in the wiki on how to use the scripts. These can be very useful to debug test cases.
- Robot framework: It is already installed in the Ericsson Lab; Carol will write a quick guide on how to use it. Denghui, you also have Robot experience, so you can help Carol with this.
- TestON framework: We have a meeting next week to learn more about it (especially APIs and driver support); I will share with you some documentation I have already got from Swaraj. From the TSC call today, it looks
like we are interested in collaborating with the ON.Lab people, so this could be a good opportunity.
- System Test Plan: We need to continue working on this; everybody is invited to contribute in the wiki. I am going to meet Madhu next Monday to talk about OVSDB inclusion in the base release, and I will also work on
some test cases around the OVSDB plugin. Punal is going to take a look at the VTN project (Virtualization release) as they have everything very well documented in the wiki. For the rest of the projects we will need to ask for information as we write the test plan.
So, the plan for next week is to get familiar with the Robot framework and the Python scripts created by the China team, evaluate the TestON framework, and continue filling in the test plan.
_______________________________________________
integration-dev mailing list
integration-dev@...
https://lists.opendaylight.org/mailman/listinfo/integration-dev
--
Best wishes!
Baohua
Luis Gomez <luis.gomez@...>
OK, now you have your code reviewed and hopefully merged to master. The build that you are talking about is totally unnecessary when we push test code; we need to
work on a new Jenkins strategy for our repo.
From: Baohua Yang [mailto:yangbaohua@...]
Sent: Thursday, November 07, 2013 1:25 AM
To: Luis Gomez
Cc: Moiz Raja; huang denghui (huangdenghui@...); integration-dev@...
Subject: Re: [integration-dev] CSIT test tools
Dear team members
I just created the test directory as discussed, and added the first stable version of the code.
Besides, I notice that when pushing code into the integration repo, a full rebuild of the projects is triggered.
Maybe this is not necessary currently.
On Thu, Nov 7, 2013 at 11:32 AM, Luis Gomez <luis.gomez@...> wrote:
I like the word "test" in the integration root; now, what are the things we will have under test?
Maybe tools that can be used by any test (like your scripts) and then the tests themselves (CSIT,
E2EST, etc.) written in Robot or TestON (we will see), so we can do something like this:
Integration --
    distributions
    packaging
    test --
        tools
        csit --
            robot
            teston
What do you think?
From: Baohua Yang [mailto:yangbaohua@...]
Sent: Tuesday, November 05, 2013 9:03 PM
Sure, Luis, how about the following?
Integration --
    distributions
    packaging
    test --
        tool
        CSIT_test
        robot
        teston
On Wed, Nov 6, 2013 at 10:31 AM, Luis Gomez <luis.gomez@...> wrote:
Hi Baohua,
I have not really thought much about the name, but yes, we need a folder for all the test code (Python,
Robot, etc.), so one idea is to have the following structure under integration:
Integration --
    distributions
    packaging
    testcode --
        tool
        robot
        teston
Or maybe have tool, robot and teston directly in the root; we can discuss more tomorrow during the
meeting if you or someone else has more ideas.
BR/Luis
From: Baohua Yang [mailto:yangbaohua@...]
Sent: Tuesday, November 05, 2013 6:20 PM
To: Luis Gomez
Cc: Moiz Raja; huang denghui (huangdenghui@...);
integration-dev@...
Subject: Re: [integration-dev] CSIT test tools
Yes, Luis. Definitely.
What I am wondering is: should we put the code in now, or wait until it is more stable?
Denghui and I are still fixing one little bug.
After that, we can make a basic release of the tests for the base edition.
Should we put the code into a new directory like "tool"? Or do you have other comments?
On Wed, Nov 6, 2013 at 10:12 AM, Luis Gomez <luis.gomez@...> wrote:
Hi Moiz,
My only concern was that the existing verify job will build the release vehicles every time someone
changes something in the test code, I do not know how much effort/time is required for the build (maybe it is very less) but it is something I would like to optimize in Jenkins like the mail I sent this morning to Andy and the rest of the team.
Said that, yes, sooner or later the scripts will be in the git so please feel free to upload them
already. Also Baohua and Dengui, being the contributors for the scripts, are you familiar with pull/push process in OpenDaylight?
Thanks/Luis
Baohua Yang <yangbaohua@...>
Hi, members. Before the meeting (10:30 AM in China), there are two things to clarify here.
1) Robot is too simple in its logic processing to handle the ODP tests on its own. That is why things like robotframework-requests exist. We need to extend its logic-handling ability. One promising way is to create a new abstraction layer, or external library, that provides a high-level API to Robot.
2) Extending the ability does NOT mean we have to reduce readability. Since the API provided to Robot can be defined flexibly and customized, we can provide as much readability as needed. Imagine Robot as a simple Java interpreter and the low-level request APIs as the C lib: we provide a high-level Java API on top of the low-level C lib, and that does not hurt Java's readability.
IMHO, we just hide unnecessary complexity while guaranteeing necessary flexibility. That is also the fundamental concept of SDN. Thanks!
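To make the abstraction-layer idea a bit more concrete, below is a minimal, hypothetical sketch of such an external library written as a Robot keyword library (a Python class whose public methods become keywords). The class name, default credentials and the validation keyword are assumptions, not the actual CSIT tool code; only the topology NB path and the edgeProperties layout match what is shown elsewhere in this thread:

```python
# TopologyLibrary.py -- hypothetical sketch of a high-level keyword library for Robot.
# Class name, credentials and keyword names are assumptions, not the real CSIT tool.
import requests


class TopologyLibrary(object):
    """Each public method becomes a Robot keyword, e.g. `Topology Should Contain Edge`."""

    def __init__(self, base_url="http://127.0.0.1:8080", user="admin", password="admin"):
        self._base = base_url
        self._auth = (user, password)

    def get_topology(self, container="default"):
        """Return the parsed topology JSON for the given container."""
        url = "%s/controller/nb/v2/topology/%s" % (self._base, container)
        resp = requests.get(url, auth=self._auth)
        resp.raise_for_status()
        return resp.json()

    def topology_should_contain_edge(self, tail_node, tail_port, head_node, head_port):
        """Fail (raise) unless the topology contains the given directed edge."""
        topo = self.get_topology()
        for prop in topo.get("edgeProperties", []):
            edge = prop.get("edge", {})
            tail = edge.get("tailNodeConnector", {})
            head = edge.get("headNodeConnector", {})
            if (tail.get("node", {}).get("id") == tail_node and tail.get("id") == tail_port
                    and head.get("node", {}).get("id") == head_node and head.get("id") == head_port):
                return
        raise AssertionError("edge %s:%s -> %s:%s not found in topology"
                             % (tail_node, tail_port, head_node, head_port))
```

In a .robot file this could be imported with `Library    TopologyLibrary.py` and used as `Topology Should Contain Edge    00:00:00:00:00:00:00:01    1    00:00:00:00:00:00:00:02    3`, keeping the test case readable while the filtering logic lives in Python.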
In general I think both ways have their pros and cons: custom libraries are very powerful, but I also feel we lose the test definition concept in Robot, i.e.
all the test logic is defined in Python and we need to actually read the Python code to understand the test case. Anyway, I will check tomorrow with Carol and the other people joining the test tools discussion every Thursday what the Robot limitations are with
regard to our system test and how we can best use the CSIT_test_tools scripts to work around those limitations.
BR/Luis
Hi all,
I have tried a simple "get topology" test case with the Robot requests library and it also works pretty neatly:
*** Settings ***
Library           Collections
Library           RequestsLibrary

*** Test Cases ***
Get Request
    ${auth}=    Create List    admin    admin
    Create Session    controller    http://10.125.136.52:8080    auth=${auth}
    ${resp}=    Get    controller    /controller/nb/v2/topology/default
    Log    ${resp}
    Should Be Equal As Strings    ${resp.status_code}    200
However, I am only doing a simple match on the "200 OK" response, while in Baohua's Python scripts we are doing much more:
def get_topology(self):
    """
    The name is suggested to match the NB API.
    Show the topology
    >>> TopologyManager().get_topology()
    True
    """
    r = super(self.__class__, self).read()
    if r:
        v = r['edgeProperties']
        for i in range(0, len(r), 2):
            nc = v[i]['edge']
            if nc[u'tailNodeConnector'] == {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:03'},
                                            u'type': u'OF', u'id': u'3'}:
                if nc[u'headNodeConnector'] != {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:01'},
                                                u'type': u'OF', u'id': u'2'}:
                    print False
            elif nc[u'tailNodeConnector'] == {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:02'},
                                              u'type': u'OF', u'id': u'3'}:
                if nc[u'headNodeConnector'] != {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:01'},
                                                u'type': u'OF', u'id': u'1'}:
                    print False
            elif nc[u'tailNodeConnector'] == {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:01'},
                                              u'type': u'OF', u'id': u'1'}:
                if nc[u'headNodeConnector'] != {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:02'},
                                                u'type': u'OF', u'id': u'3'}:
                    print False
            elif nc[u'tailNodeConnector'] == {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:01'},
                                              u'type': u'OF', u'id': u'2'}:
                if nc[u'headNodeConnector'] != {u'node': {u'type': u'OF', u'id': u'00:00:00:00:00:00:00:03'},
                                                u'type': u'OF', u'id': u'3'}:
                    print False
            else:
                print False
        print True
So I believe this is what Baohua means by being more flexible using our own library, right?
BR/Luis
From: Baohua Yang [mailto:yangbaohua@...]
Sent: Wednesday, November 06, 2013 6:25 PM
To: Luis Gomez
Cc: Punal Patel; Carol Sanders (carol.sanders@...);
integration-dev@...
Subject: Re: [integration-dev] CSIT test tools
Thanks Punal and Luis!
It is a good lib that uses the Python requests library and provides an API for the Robot framework.
This lib was evaluated before we tried to find other solutions, and we think it is a nice general packaging.
Before planning to write our own tool, we also tried similar ideas, but found several difficulties in handling our scenarios that way:
1) The result validation may not be that straightforward when directly calling the NB API, because lots of responses must be filtered and analyzed. E.g., in switch manager, the list function returns a large response including timestamps,
hence we cannot pre-set the standard answer before getting them in real time. Doing this directly in Robot involves even more actions, which reduces the Robot scripts' readability. So we want to keep more flexibility and scalability here, while keeping Robot easily satisfied. (A small filtering sketch is included below.)
2) Another problem is the dynamics. Some functionality must be evaluated by combining several NB APIs, such as adding an entry and then removing it, and there will be even more complicated scenarios in the future.
3) Our further aim is to load the network config dynamically and help set up the environment automatically. As far as we know, there has to be additional code to do this, even with the help of similar libs and the Robot tool.
IMHO, an appropriate solution may be keeping Robot itself simple and readable, while guaranteeing enough scalability and flexibility for users and future developments.
Btw, one big advantage of the lib we can see is the session support, which can easily be implemented in the current CSIT tool.
However, we keep this undecided, as we are still investigating whether the caching and persistence mechanism provided by sessions is necessary for our tests (all of them can be done in seconds currently), and what the advantages and disadvantages of using
sessions are in this specific controller test.
I would very much appreciate it if anyone could drop some comments on this!
Has anyone got comments? We would welcome more evaluation feedback ASAP.
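To illustrate point 1) above, one simple pattern is to strip volatile fields (timestamps and similar) out of an NB response before comparing it with a pre-set baseline. The sketch below is only an assumption about how such a helper could look; the switch manager path, the volatile key names and the response layout are guesses, not the actual CSIT tool code:

```python
# Hypothetical helper: compare a switch manager node list against a baseline after
# removing fields that change on every call. Path, key names and response layout
# are assumptions and would need checking against the real NB API.
import requests

VOLATILE_KEYS = {"timeStamp", "timeStampName"}   # assumed names of dynamic fields

def strip_volatile(obj):
    """Recursively drop keys we never want to compare (timestamps and similar)."""
    if isinstance(obj, dict):
        return {k: strip_volatile(v) for k, v in obj.items() if k not in VOLATILE_KEYS}
    if isinstance(obj, list):
        return [strip_volatile(item) for item in obj]
    return obj

def node_ids_match_baseline(base_url, auth, expected_node_ids):
    """Fetch the node list and compare the reported node ids with the baseline set."""
    resp = requests.get("%s/controller/nb/v2/switchmanager/default/nodes" % base_url, auth=auth)
    resp.raise_for_status()
    cleaned = strip_volatile(resp.json())
    found = {p["node"]["id"] for p in cleaned.get("nodeProperties", [])}
    return found == set(expected_node_ids)
```

A helper like this could then be exposed to Robot the same way as in the earlier library sketch, so the .robot file only states the expected node ids.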
On Thu, Nov 7, 2013 at 6:29 AM, Luis Gomez <luis.gomez@...> wrote:
Can anyone try this
robotframework-requests vs the CSIT-test-tool requests and see what the differences are?
I would like to try it in the Ericsson Lab today if I have time…
Hi Team,
We can use robotframework-requests:
https://github.com/bulkan/robotframework-requests
We are using "requests" right now in the CSIT test tools, but robotframework-requests will
integrate the CSIT test tools with Robot Framework.
Any thoughts or comments?
Thank You,
Punal Patel
--
Best wishes!
Baohua