
Change test-execution order (and improve their organization) #1100

Closed
tcompa opened this issue Dec 19, 2023 · 1 comment · Fixed by #1166
Labels: testing

tcompa (Collaborator) commented Dec 19, 2023

As of #1090, we now have a better organization of tests, including at least the subfolders db, api, and backend.

We could also add:

  • tests/schemas, with the tests for Pydantic schemas (EDIT: this is already in place)
  • tests/unit, with tests for functions that do not belong to any other subfolder

Tests in subfolders are typically faster and lower-level than the rest, and they should run first. This will make the CI fail faster for the large fraction of PRs that only change a specific part of the schemas, models, or API.

A reasonable order could be:

  1. unit
  2. schemas
  3. db
  4. api
  5. backend (which includes SLURM)
  6. all the non-subfolder tests

The ordering feature should be available through https://github.com/pytest-dev/pytest-order (I have not tried it out yet).

EDIT: see pytest-dev/pytest-order#52 or pytest-dev/pytest-order#69
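As an alternative to the plugin, a collection hook in a `conftest.py` could enforce the same order. This is only a minimal sketch: the folder names follow the list above, but the hook body and helper are assumptions, not actual fractal-server code.

```python
# Hypothetical conftest.py sketch: reorder collected tests so that the
# fast subfolder suites run before everything else. The folder names and
# the hook body are assumptions, not the actual fractal-server code.
SUBFOLDER_ORDER = ["unit", "schemas", "db", "api", "backend"]


def _rank(path: str) -> int:
    """Lower rank = earlier execution; non-subfolder tests go last."""
    for index, subfolder in enumerate(SUBFOLDER_ORDER):
        if f"/tests/{subfolder}/" in path:
            return index
    return len(SUBFOLDER_ORDER)


def pytest_collection_modifyitems(session, config, items):
    """Standard pytest hook: sort collected items in place by subfolder rank."""
    items.sort(key=lambda item: _rank(str(item.fspath)))
```

Since `list.sort` is stable, tests within the same subfolder keep their original relative order.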

tcompa (Collaborator, Author) commented Jan 17, 2024

As of #1166, the pytest output is split into multiple blocks, which can be annoying to read. This is the downside of running multiple pytest CLI commands rather than using pytest-order or a similar tool. Note that deprecations/warnings are now somewhat less visible.

On the other hand, here is a nice fail-fast scenario: https://github.com/fractal-analytics-platform/fractal-server/actions/runs/7556028285/job/20572204667. After adding a (fake) failing test within tests/unit, the "Test with pytest" step of the GitHub action failed in 11 seconds. This seems a good enough trade-off for the moment, even though a single-pytest-command option would be even better (and it would also run locally).
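The multi-command approach can be sketched as a workflow step like the one below. This is a hypothetical fragment, not the actual fractal-server workflow; it relies on the fact that GitHub Actions runs `run` steps with `bash -e`, so the step stops at the first non-zero exit code.

```yaml
# Hypothetical GitHub Actions step: each pytest invocation runs in order,
# and the step aborts at the first failing command, giving the
# fail-fast behaviour described above.
- name: Test with pytest
  run: |
    python -m pytest tests/unit
    python -m pytest tests/schemas
    python -m pytest tests/db
    python -m pytest tests/api
    python -m pytest tests/backend
    python -m pytest tests --ignore=tests/unit --ignore=tests/schemas \
      --ignore=tests/db --ignore=tests/api --ignore=tests/backend
```

The last command uses pytest's `--ignore` option to run only the non-subfolder tests, which keeps each suite from being executed twice.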
