Replies: 6 comments 14 replies
-
I think it would be good if users could define which check failures are acceptable/expected - for example, matching the ABRT "reason" and "package". A really nice feature, IMHO, would be the ability to specify a list of log files/dirs that should be included in case the check fails. For example, when running iscsi tests, save the contents of
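A purely hypothetical sketch of what such a rule could look like - every key below is invented to illustrate the idea, nothing like this exists in tmt today:

```yaml
# Invented syntax - for illustration only
check:
  - name: abrt
    expected:                  # failures matching these are acceptable
      - reason: "WARNING: .* at kernel/.*"
        package: kernel
    collect-logs:              # saved only when the check fails
      - /var/log/messages
```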
-
@happz Do you know if there has been any development in this area?
-
Hi, the RHIVOS QE team is also interested in this now, for kernel panic detection. In their case, the kernel panics would not be intentional, so they would be reported as a test failure. Though, IIUC, intentional kernel panics would also need to be supported.
-
Here are a couple of thoughts from my side:
All in all, the outline looks very good to me. Thanks for summarizing the existing issues and putting it all together, @happz!
-
Hello, I have a few comments on this feature:
-
I've briefly looked into ABRT. A quick summary: the abrt and python-abrt packages are available on el-7 and el-8, while on el9 they are not included in the rhel/centos/epel repos. python3-abrt
Ubuntu container:
Will try to investigate further whether there is any hope for interoperability.
-
As I think I have all I need for the plugin aspect of the implementation of additional checks, I'd like to get a better picture WRT what checks we're thinking about, when, and what kind of effect they might have - to summarize our current understanding before I start hacking. And I'd like to find answers for some aspects I don't see clearly enough yet.
### Issues

- `tmt` to reboot unresponsive machines #1523

### Pull requests
### Checks

#### What's interesting

So far, the following checks were identified as "interesting":
### Where?

- test gains a new key, `check`
- plan: `check` - same as the above.
- `results.yaml` gets a new key, `check`, partially mirroring what's saved for a test itself:

### When?
- a `prepare`/`finish` phase, but an implicit one, injected by the check implementation rather than one the user needs to include. Suggested by "Add additional test checks" #216 (comment): an ABRT check preparing the environment by enabling `abrtd` (before running tests), then collecting data in `finish`.

### What?
- (`dmesg` output in `dmesg.txt`, or saved core dumps)

There seems to be a need for a "during the test" type of check as well, see #1523: something that is executed while the test is still running, and might take action. While there is an overlap with what we're building for checks - implementation in plugins, with a clear internal API & a clear specification of how it interacts with tests (see "points" below) - it looks like something more watchdog-like in nature, and while definitely requested and needed, it does not fit the "additional check" concept discussed here & in the original issue. I'd implement it as a standalone type of action.
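To make the "Where?" idea a bit more tangible, here is one possible shape of the data - a sketch only, the key names and structure are assumptions rather than a settled spec:

```yaml
# Hypothetical sketch - exact keys are not settled
# In test metadata:
check:
  - dmesg
  - abrt

# And a mirrored entry in results.yaml:
- name: /tests/foo
  result: pass
  check:
    - name: dmesg
      result: pass
      log:
        - dmesg.txt
```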
### Implementation

- the `execute` plugin would call into (enabled) checks - `before-all-tests`, `before-test`, `after-test`, `after-all-tests` (or `[before|after]-all-tests` could be implemented by (maybe) `Plan` calling checks to generate their `prepare` and `finish` phases).
- like `provides_method()`, but for a "point" & name combination

### Things that are unclear (to me)
- Should a failed check cap the result below the `pass` level, i.e. `warn` would be the best it can ever hope for? Or not at all, leave the evaluation to whoever runs tmt?
- How to express "test `/foo` needs this one extra", "same checks, but test `/foo` needs this one extra and `/bar` needs one of the defaults disabled"?
- Should checks be able to extend `prepare`/`finish` phases, i.e. call plugins to add their own phases? This is what `require` does, BTW - it's converted into a `prepare/install` phase. `prepare` sounds harmless, but `finish` might be complicated, as it might be way too late for checks to affect test results. Or we may support both, and have two distinct check plugin entry points: "now it's time to add your `prepare` phase if you want to do so" and "we're in `execute` now, if you want to do something before we start running tests, do it now", and the same for the "finish" actions ("we're done with tests, here are their results" & "now it's time to add your `finish` phase, if you want to do so").

Feel free to chime in, I'd like to know the answers to the questions above - the implementation itself would be fairly quick and straightforward as soon as we resolve the unresolved.
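Regarding the "same checks, but test `/foo` needs this one extra" question: fmf's merging suffixes might already cover part of it - a hypothetical sketch, assuming a list-valued `check` key:

```yaml
# Hypothetical - assumes a list-valued `check` key and fmf merging
check:
  - dmesg
  - abrt

/foo:
  check+:          # fmf append: this test runs one extra check
    - avc

/bar:
  check-:          # fmf reduction: drop one of the defaults
    - dmesg
```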
So, what did I miss? Any obvious checks, points where it must influence or inspect the test process, or artifacts it might produce? Looking forward to hearing from you.
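As a thought experiment, the hook points sketched under "Implementation" could map to a plugin interface roughly like this - class and method names here are mine, invented for illustration, not an actual tmt API:

```python
# Sketch of the "check plugin with hook points" idea - names are invented.
from typing import Dict, List


class Check:
    """Base class for a test check; the executor calls the hook points."""

    name = 'check'

    def before_test(self, test: str) -> None:
        pass

    def after_test(self, test: str, results: Dict[str, str]) -> None:
        pass


class DmesgCheck(Check):
    """Toy check: pretend to diff the kernel log around each test."""

    name = 'dmesg'

    def before_test(self, test: str) -> None:
        # A real check would snapshot `dmesg` on the guest here.
        self._snapshot = 'dmesg before'

    def after_test(self, test: str, results: Dict[str, str]) -> None:
        # A real check would compare against the snapshot and save
        # dmesg.txt as an artifact; here we just record a verdict.
        results[self.name] = 'pass'


def run_tests(tests: List[str],
              checks: List[Check]) -> Dict[str, Dict[str, str]]:
    """Sketch of the execute step driving the before/after hook points."""
    all_results: Dict[str, Dict[str, str]] = {}
    for test in tests:
        for check in checks:
            check.before_test(test)
        results: Dict[str, str] = {'result': 'pass'}  # the test runs here
        for check in checks:
            check.after_test(test, results)
        all_results[test] = results
    return all_results
```

The `[before|after]-all-tests` points would slot in the same way, as two more methods on the base class called once per plan rather than once per test.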