Re: CSIT test tools
Luis Gomez <luis.gomez@...>
OK, now you have your code reviewed and hopefully merged to master. The build you are talking about is totally unnecessary when we push test code; we need to work on a new Jenkins strategy for our repo.
From: Baohua Yang [mailto:yangbaohua@...]
Sent: Thursday, November 07, 2013 1:25 AM
To: Luis Gomez
Cc: Moiz Raja; huang denghui (huangdenghui@...); integration-dev@...
Subject: Re: [integration-dev] CSIT test tools
Dear team members, I just created the test directory as discussed and added the first version of stable code. Please help review it at https://git.opendaylight.org/gerrit/#/c/2479/. Besides, I noticed that pushing code into the integration repo triggers a full rebuild of the projects. Maybe this is not necessary currently. Thanks!
On Thu, Nov 7, 2013 at 11:32 AM, Luis Gomez <luis.gomez@...> wrote: I like the word “test” in the integration root. Now, what are the things we will have under test?
Maybe tools that can be used by any test (like your scripts) and then the tests themselves (CSIT, E2EST, etc.) written in Robot or TestON (we will see), so we can do something like this:
Integration
-- distributions
-- packaging
-- test
   -- tools
   -- csit
      -- robot
      -- teston
What do you think?
From: Baohua Yang [mailto:yangbaohua@...]
Sure, Luis, how about the following?
Integration
-- distributions
-- packaging
-- test
   -- tool
   -- CSIT_test
      -- robot
      -- teston
On Wed, Nov 6, 2013 at 10:31 AM, Luis Gomez <luis.gomez@...> wrote: Hi Baohua,
I have not really thought much about the name, but yes, we need a folder for all the test code (Python, Robot, etc.), so one idea is to have the following structure under integration:
Integration
-- distributions
-- packaging
-- testcode
   -- tool
   -- robot
   -- teston
Or maybe have tool, robot, and teston directly in the root; we can discuss more tomorrow during the meeting if you or someone else has more ideas.
BR/Luis
From: Baohua Yang [mailto:yangbaohua@...]
Yes, Luis, definitely. My concern is: should we put the code in now, or wait until it is more stable? Denghui and I are still fixing one little bug. After that, we can have a basic release of the tests for the base edition. Besides, I found our repo at https://git.opendaylight.org/gerrit/#/admin/projects/integration. Should we put the code into a new directory like "tool"? Any other comments? Thanks!
On Wed, Nov 6, 2013 at 10:12 AM, Luis Gomez <luis.gomez@...> wrote: Hi Moiz,
My only concern was that the existing verify job will build the release vehicles every time someone changes something in the test code. I do not know how much effort/time the build requires (maybe very little), but it is something I would like to optimize in Jenkins, as in the mail I sent this morning to Andy and the rest of the team.
That said, yes, sooner or later the scripts will be in the git repo, so please feel free to upload them already. Also, Baohua and Denghui, as the contributors of the scripts, are you familiar with the pull/push process in OpenDaylight?
Thanks/Luis
From: Moiz Raja [mailto:moraja@...]
When you push code into the repository it will trigger a verify build. However, if you do not have your integration tests integrated into the Maven build, they will not be run.
Was your concern that pushing the code will automatically trigger a test run? Because it won't.
I suggest that we simply add whatever is in the Github repo to the ODL integration repo. I can do it if you like.
-Moiz
On Nov 5, 2013, at 5:13 PM, Luis Gomez <luis.gomez@...> wrote:
Hi Moiz, just a question: shouldn't our repo, as it is configured now, trigger a verify build job whenever someone pushes something to it (i.e., with git push)? Or is there another way to do this?
From: Moiz Raja [mailto:moraja@cisco.com]
How about just adding the Python scripts to the integration repo instead of using GitHub? It doesn't hurt.
-Moiz
On Nov 5, 2013, at 4:31 PM, Luis Gomez <luis.gomez@...> wrote:
Hi guys,
Congratulations, I downloaded the Python scripts to the test tools VMs, changed the controller IP, and ran the system test with no issues. I saw you coded all the REST requests very well, so this should be good input for the Robot framework.
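For reference, the kind of REST request the scripts send looks roughly like this; a minimal sketch assuming the controller's default northbound port 8080 and admin/admin credentials, with the Switch Manager endpoint as one example:

import requests

CONTROLLER_IP = "127.0.0.1"  # change to the controller VM address
BASE_URL = "http://%s:8080/controller/nb/v2" % CONTROLLER_IP

def get_nodes():
    # Ask the Switch Manager northbound for all nodes in the default container.
    resp = requests.get(BASE_URL + "/switchmanager/default/nodes",
                        auth=("admin", "admin"))
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(get_nodes())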
BR/Luis
From: Baohua Yang [mailto:yangbaohua@gmail.com]
Hi Luis, thanks for your willingness to help test the code. Currently, we've finished tests of the main functions of all modules in the base edition. We believe there's still lots of work to do and welcome any feedback! Denghui and I have discussed the development plan a lot, but we think there will be more power community-wide. Every member, please do not hesitate to drop us a line. Thanks!
On Tue, Nov 5, 2013 at 12:04 PM, Luis Gomez <luis.gomez@...> wrote: Hi Moiz,
See my answers inline:
From: Moiz Raja [mailto:moraja@...]
Hi Guys,
A couple of questions on the System test.
a. Will the System Test be integrated with the build?
The system test will not run with the build, at least the one based on Robot/Python. The idea is to trigger a job so that the controller VM (separate from the build server) fetches the latest release vehicle from Jenkins and runs it. After this we will trigger the test case execution in Robot.
b. What framework are we going to use to deploy the built artifacts? Is it going to be something like Capistrano or custom bash scripts?
The test code (Robot or Python) does not need to be built, so I do not think we are going to have release artifacts as we have in Java. Instead, we will have the test code stored in our git and Robot will fetch the code from there.
c. Will the python/robot tests live in the integration repository? Any tests that I can look at?
Yes, that is the idea, although nothing has been uploaded to the repo yet. So far we have two things: Python scripts created by the China team and stored in an external repo, and the Robot framework installed in the Open Lab at Ericsson. Both are now described (Carol updated the Robot page today) at https://wiki.opendaylight.org/view/CrossProject:Integration_Group:Test_Tools
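For example, the fetch step in (a) could be a small Python script that pulls the last good build from Jenkins; this is just a sketch, and the job and artifact names below are placeholders:

import requests

JENKINS_URL = "https://jenkins.opendaylight.org"
JOB = "integration-distribution"   # placeholder job name
ARTIFACT = "distribution.zip"      # placeholder artifact path

def fetch_latest_distribution(dest="distribution.zip"):
    # Jenkins exposes the newest good build of a job under lastSuccessfulBuild/artifact/.
    url = "%s/job/%s/lastSuccessfulBuild/artifact/%s" % (JENKINS_URL, JOB, ARTIFACT)
    resp = requests.get(url)
    resp.raise_for_status()
    with open(dest, "wb") as f:
        f.write(resp.content)

if __name__ == "__main__":
    fetch_latest_distribution()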
I remind everybody that this week is for getting familiar with these tools and seeing how we can better use them, so yes, you and everybody else are invited to take a look.
I will personally try to get the Python scripts to work in the Open Lab at Ericsson tomorrow at the latest.
BR/Luis
-Moiz
On Nov 3, 2013, at 12:16 AM, Luis Gomez <luis.gomez@...> wrote:
OK, I changed the test case order so that we start with the most basic services and end with the most abstracted services:
- Switch Mgr
- Topology Mgr
- FRM
- Statistics Mgr
- Configuration
- Host Tracker & Simple Forwarding
- ARP Handler
- Forward Manager
- Container Mgr
Note that a basic service does not necessarily mean it is simple to test, so maybe I was not very precise when I said simple-to-complex. What I really meant was something like basic-to-extra functions.
I also added the required steps in every area so that every test case above is self-complete. Please review the test plan and let me know if you agree with it.
As for your question, yes, we can categorize the services in different ways, for example: basic network functions (Switch Mgr, Topology Mgr, FRM, Stats Mgr), extra network functions (Host Tracker, Simple Forwarding, ARP Handler, Forward Mgr), basic node functions (Configuration, User Mgr, Connection Mgr), and extra node functions (Container Mgr, Cluster Mgr). This is just one idea and there could be more; anyway, beyond the classification, the important thing is that we do not leave features/modules without a test.
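For illustration only, the categories could be captured in a simple mapping so that each one can be selected and run separately; the grouping below just restates the examples above, and the names are illustrative:

# Map each test category to its modules so a run can target one category.
TEST_CATEGORIES = {
    "basic_network": ["Switch Mgr", "Topology Mgr", "FRM", "Stats Mgr"],
    "extra_network": ["Host Tracker", "Simple Forwarding", "ARP Handler", "Forward Mgr"],
    "basic_node": ["Configuration", "User Mgr", "Connection Mgr"],
    "extra_node": ["Container Mgr", "Cluster Mgr"],
}

def modules_for(category):
    # Return the modules to test for the given category.
    return TEST_CATEGORIES[category]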
BR/Luis
From: Gmail [mailto:yangbaohua@gmail.com]
Sure, Luis. This is a valuable question! IMHO, the simple-to-complex order is good. However, we might also keep test cases independent from each other, i.e., each test case should be self-complete, because we sometimes may want to test the function of an individual module instead of the entire platform. Besides, we may even categorize the tested modules based on their functions, test complexity, etc.; for example: state collection modules, basic forwarding modules, QoS modules... and each category can be tested separately. What do you think?
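For illustration, a self-complete test case could look like this minimal unittest sketch; the flow helpers here are in-memory stand-ins for whatever REST calls the real scripts would make:

import unittest

# In-memory stand-ins for controller REST calls (hypothetical helpers).
_FLOWS = {}

def install_flow(flow):
    _FLOWS[flow["name"]] = flow

def list_flow_names():
    return list(_FLOWS)

def remove_flow(flow):
    _FLOWS.pop(flow["name"], None)

class FlowManagerTest(unittest.TestCase):
    def setUp(self):
        # Create everything this case needs so it can run on its own.
        self.flow = {"name": "test-flow", "node": "00:00:00:00:00:00:00:01"}
        install_flow(self.flow)

    def test_flow_is_listed(self):
        # The module under test should report the flow we just installed.
        self.assertIn(self.flow["name"], list_flow_names())

    def tearDown(self):
        # Clean up so later test cases start from a known state.
        remove_flow(self.flow)

if __name__ == "__main__":
    unittest.main()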