Test coverage improvements - tracking issue #114

danpetry opened this issue Apr 5, 2019 · 0 comments
Description

This issue tracks all desired improvements in coverage of the release specs tests. Please update this description with desired test improvements.

Coverage improvements

Test 01-ci

  • Run tasks 1, 2, and 3 on the reference Docker platform

Test 02-tests

There is a lot of scope for including more tests and improving Test 02, which is already automated. This could be done by:

  • Specifying more boards to be tested
  • Currently, from the point of view of the compile-and-test-for-board.py script, a test fails only if the Python test script throws an exception (compilation error, timeout, etc.). In other words, it checks that the test application flashes and runs, not that it passes. Making the script also fail upon receiving [FAILED] would be beneficial, perhaps starting with a defined subset of tests and then expanding.
  • Increasing the number of tests in {RIOTBASE}/tests that are actually run during Test 02 (e.g. by allowing them to be run automatically using make test, perhaps with a hardware-in-the-loop rig)
  • Adding tests to {RIOTBASE}/tests and making sure they are run during execution of the Test 02 script.
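As a rough illustration of the second point above, here is a minimal sketch of how a runner could be made to fail on a [FAILED] marker in the test output, not just on exceptions. This is not the actual compile-and-test-for-board.py logic; the function name, marker handling, and timeout value are all illustrative assumptions.

```python
# Hypothetical sketch: fail a board test when its output contains
# "[FAILED]", in addition to failing on process errors or timeouts.
# All names here are illustrative, not taken from the real script.
import subprocess

FAILURE_MARKER = "[FAILED]"


class TestFailed(Exception):
    """Raised when a test process errors out or prints the failure marker."""


def run_board_test(command, timeout=300):
    """Run a test command; raise TestFailed on error or on [FAILED] output."""
    result = subprocess.run(command, capture_output=True, text=True,
                            timeout=timeout)
    if result.returncode != 0:
        # Existing behaviour: a crashing or non-flashing test already fails.
        raise TestFailed(f"test process exited with {result.returncode}")
    if FAILURE_MARKER in result.stdout:
        # New behaviour: a test that runs but reports failure also fails.
        raise TestFailed(f"test output contained {FAILURE_MARKER}")
    return result.stdout
```

Starting with a defined subset of tests, as suggested above, could be done by applying this check only to a whitelist of test directories and growing the list as tests are confirmed to emit the marker reliably.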