tempest results are very flaky [to me]

Flavio Fernandes <ffernand@...>

Hi folks,

I’m trying to collect results of tempest tests, so we can have a list of what works and what does not.
Trouble is, I seem to get different results on what passes and what fails. See below for the methodology
I used.

I created a python script that parses the results from multiple runs and gives us a better idea of
which results are actually consistent.

A link to processTests.py is here [1].

I would like to hear whether you see the same inconsistencies, and what the best approach is for deciding
which tests we should focus on. I apologize for not having a set of Trello cards on which tests need
attention, but the recent fixes we had in neutron, together with these inconsistent results, have made
that task pretty challenging.


— flavio



Take a sample of runs, so we can sort the tests into 3 categories:

1) Always OK
2) Sometimes OK
3) Always FAIL

Once we have collected these, run a script that takes the raw runs and sorts each test into the 3 categories mentioned above.

## start odl, stack as mentioned in wiki [2].

cd /opt/stack/tempest ; \
for x in `seq 1 10` ; do echo $x ; time ./run_tempest.sh tempest.api.network 2>&1 | tee ~/tempest_run${x}.log ; done

## process the runs to generate a tally of pass/fails
processTests.py ~/tempest_run*.log

## tests that never failed (category 1, "Always OK")
processTests.py ~/tempest_run*.log | grep 'fail:0'

## tests that never passed (category 3, "Always FAIL")
processTests.py ~/tempest_run*.log | grep 'pass:0'
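The two greps cover categories 1 and 3; anything matching neither is flaky. A minimal python sketch of that three-way bucketing, assuming per-test tallies of the form {'pass': N, 'fail': N} (the dict shape is an assumption, not the actual processTests.py internals):

```python
def classify(counts):
    """Split tallied tests into Always OK / Sometimes OK / Always FAIL."""
    always_ok, sometimes_ok, always_fail = [], [], []
    for name, c in sorted(counts.items()):
        if c['fail'] == 0:
            always_ok.append(name)       # category 1: never failed
        elif c['pass'] == 0:
            always_fail.append(name)     # category 3: never passed
        else:
            sometimes_ok.append(name)    # category 2: flaky
    return always_ok, sometimes_ok, always_fail
```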

# The tests are listed in a form you can run explicitly, for example:
./run_tempest.sh tempest.api.network.test_floating_ips_negative.FloatingIPNegativeTestJSON.test_create_floatingip_with_port_ext_net_unreachable
