This description of the test infrastructure from @dkegel-fastly is worth its weight in gold -- is it already documented somewhere on the web site, and if not, where should it go?
I first looked for it in the "testing" section of the Contributing page and would expect to see it near there -- right below that would be my vote.
Build-only tests for all small targets:
The "make smoketest" at the end of .github/workflows/linux.yml make sure that at least one example compiles for each target.
Emulated tests for small targets:
The "make tinygo-baremetal" at the end of .github/workflows/linux.yml makes sure that very small set of tests (currently just encoding/hex) passes on a very small set of emulated targets (currently cortex-m-qemu).
Real hardware tests for small targets:
Look at the test results for the tips of the dev and release branches.
The "TinyHCI" test results are from https://github.com/tinygo-org/tinyhci, which runs on real hardware.
I haven't looked at that at all, but I think it only tests hardware functions.
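For illustration only (this is not code from tinygo-org/tinyhci), a hardware-function test in that style might drive an output pin and read the level back on an input pin that the test rig has physically wired to it. The D2/D3 pin names are an assumption borrowed from Arduino-style boards; each board in the rig would use its own wiring:

```go
// Hedged sketch of a TinyHCI-style digital read/write check.
package main

import (
	"machine"
	"time"
)

func main() {
	out := machine.D2 // assumed output pin, wired to D3 on the rig
	in := machine.D3  // assumed input pin
	out.Configure(machine.PinConfig{Mode: machine.PinOutput})
	in.Configure(machine.PinConfig{Mode: machine.PinInput})

	out.High()
	time.Sleep(10 * time.Millisecond) // let the level settle
	if !in.Get() {
		println("FAIL: expected high level on input pin")
		return
	}
	out.Low()
	time.Sleep(10 * time.Millisecond)
	if in.Get() {
		println("FAIL: expected low level on input pin")
		return
	}
	println("PASS: digital read/write")
}
```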
All of the above could probably stand some expansion. In particular, tinygo-baremetal could make sure a simple smoke test of fmt passes. (Once reflect improves, perhaps after https://github.com/tinygo-org/tinygo/pull/2640 lands, we could even make sure "tinygo test fmt" passes.)
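Such an fmt smoke test could be as small as the sketch below; the expected output string is plain Go fmt behavior, nothing TinyGo-specific:

```go
// Hedged sketch of a minimal fmt smoke test for the emulated targets.
package main

import "fmt"

func main() {
	s := fmt.Sprintf("%d %s %v", 42, "ok", []int{1, 2, 3})
	if s != "42 ok [1 2 3]" {
		panic("fmt smoke test failed: " + s)
	}
	fmt.Println("fmt smoke test passed")
}
```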
Originally posted by @dkegel-fastly in #254 (comment)