
Better reuse of process selection logic #37

Merged: 7 commits into main from process-listing, Jan 25, 2024
Conversation

soxofaan (Member)

WIP PR to support #10 #24

soxofaan (Member, Author) commented Jan 23, 2024

@m-mohr @clausmichele This is a pretty big PR and I don't expect you to review it in detail; I just wanted to inform you of what it is about. Some quick highlights:

  • It is mainly about unified handling of the --processes and --process-levels (and --experimental) options for the individual process tests (WP5) and the full process graph tests (WP6). In the main branch the handling is pretty inconsistent between these WPs.
  • It covers/fixes "Skip workflows if executed processes are not in the list of processes / process levels" (#10) and "Mutually exclusive --processes and --process-levels" (#24).
  • I removed the processes, process_levels and skip_experimental fixtures, and moved that functionality to a pytest plugin under lib, which allows hooking into the test collection phase (instead of doing skips at test execution phase); see the first sketch below this list.
  • for individual process tests (WP5): when using --processes and --process-levels, process tests of "unselected" processes are now not even collected anymore. E.g. with option --processes=min, your test run will only consist of 8 tests (the 8 min tests), instead of thousands of tests marked as skipped. This behavior is closer to other test selection features of pytest, and it massively improves the signal-to-noise ratio of the CLI or HTML report if you ask me, allowing you to iterate in a much more focused way on a subset of tests.
  • for full process graph/workflow tests (WP6): --processes and --experimental are now supported and followed. Also: all process_ids of the full process graph are now extracted automatically and checked to determine whether a test should be skipped (instead of depending on that brittle LEVEL setting); see the second sketch below this list.
  • side note (as you might have noticed): I also started setting up minimal CI in GitHub Actions with "internal" tests: tests about the functionality under lib that does not depend on a real backend deployment or implementation.
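To make the collection-phase approach concrete, here is a minimal, hypothetical sketch of how a pytest plugin can deselect tests instead of skipping them. The marker name `process_id` and the helper `get_selected_processes` are illustrative stand-ins, not the actual API under lib:

```python
def pytest_collection_modifyitems(config, items):
    """Drop tests of unselected processes at collection time."""
    selected = get_selected_processes(config)  # hypothetical: parsed from --processes/--process-levels
    if selected is None:
        return  # no process selection active: collect everything
    kept, deselected = [], []
    for item in items:
        marker = item.get_closest_marker("process_id")  # hypothetical marker
        process_id = marker.args[0] if marker else None
        if process_id is None or process_id in selected:
            kept.append(item)
        else:
            deselected.append(item)
    if deselected:
        # Register the deselection with pytest so the report counts stay correct.
        config.hook.pytest_deselected(items=deselected)
        items[:] = kept
```

And a rough sketch of how the automatic process_id extraction for full process graphs could work. In the openEO process graph format, child process graphs (e.g. reducers) are nested under "process_graph" keys inside arguments:

```python
def extract_process_ids(process_graph: dict) -> set:
    """Recursively collect all process_ids used in an openEO process graph."""
    process_ids = set()
    for node in process_graph.values():
        process_ids.add(node["process_id"])
        for argument in node.get("arguments", {}).values():
            if isinstance(argument, dict) and "process_graph" in argument:
                process_ids |= extract_process_ids(argument["process_graph"])
    return process_ids
```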

clausmichele (Member)

> Also: all process_ids of the full process graph are now extracted automatically and checked to determine whether a test should be skipped (instead of depending on that brittle LEVEL setting)

This is really cool! I went through the code and it seems fine. I just didn't get where the code initializes the process registry and the list of processes of each defined level, which will be used to check whether the process graph contains required processes from that level.

On the workflows side, I guess the LEVEL= parts at the top of the code can now be removed.

soxofaan (Member, Author)

> I just didn't get where the code initializes the process registry and the list of processes of each defined level, which will be used to check whether the process graph contains required processes from that level.

FYI: The bootstrapping of these things is triggered from this pytest "configure" hook:

```python
import openeo

# Helpers like get_backend_url, NoBackend, HttpBackend, set_backend_under_test
# and set_process_selection_from_config are defined in the test suite's own `lib`.

def pytest_configure(config):
    """Implementation of `pytest_configure` hook."""
    backend_url = get_backend_url(config)
    if backend_url is None:
        backend = NoBackend()
    else:
        connection = openeo.connect(url=backend_url, auto_validate=False)
        backend = HttpBackend(connection=connection)
    set_backend_under_test(backend)
    set_process_selection_from_config(config)
```
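
To illustrate how the CLI options could feed into this, here is a purely hypothetical sketch of set_process_selection_from_config (get_processes_for_level and set_selected_processes are made-up names; the actual code under lib differs in its details):

```python
import pytest

def set_process_selection_from_config(config):
    """Hypothetical sketch: build the process selection from CLI options."""
    processes = config.getoption("--processes", default=None)
    process_levels = config.getoption("--process-levels", default=None)
    if processes and process_levels:
        # Mutually exclusive, as per #24.
        raise pytest.UsageError("--processes and --process-levels are mutually exclusive.")
    if process_levels:
        # Expand each level (e.g. "L1,L2") to the processes it contains,
        # based on the process registry's level metadata.
        selected = {
            p
            for level in process_levels.split(",")
            for p in get_processes_for_level(level.strip())
        }
    elif processes:
        selected = {p.strip() for p in processes.split(",")}
    else:
        selected = None  # no filtering requested
    set_selected_processes(selected)
```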

> On the workflows side, I guess the LEVEL= parts at the top of the code can now be removed.

Indeed, that is one of the minor follow-up cleanups I might add.

soxofaan (Member, Author)

FYI: I'm going to merge this somewhere today, to allow some further fine-tuning in other tickets.

m-mohr (Member) commented Jan 25, 2024

Is there a way to figure out why something was skipped? Could be with a CLI parameter. I found that valuable to know when developing things.

Also, when testing the PR the tests ran much slower than before. What happened there?
Command: pytest src/openeo_test_suite/tests/processes/processing/test_example.py --runner=dask

Master: 128 failed, 715 passed, 166 skipped, 68 warnings in 11.85s
This branch: 128 failed, 715 passed, 102 skipped, 68 warnings in 970.68s (0:16:10)

m-mohr (Member) left a review comment:

The runtime slowdown should be investigated

soxofaan (Member, Author)

> The runtime slowdown should be investigated

I had some struggles getting the dask dependencies working, but I can now reproduce it. Investigating.

soxofaan added a commit that referenced this pull request Jan 25, 2024
soxofaan (Member, Author)

Pushed a quickfix. Now running pytest src/openeo_test_suite/tests/processes/processing/test_example.py --runner=dask:

On the main branch:

157 failed, 659 passed, 181 skipped, 133 warnings in 5.50s

On the process-listing branch:

157 failed, 659 passed, 117 skipped, 133 warnings in 5.11s

soxofaan (Member, Author)

> Is there a way to figure out why something was skipped? Could be with a CLI parameter. I found that valuable to know when developing things.

With normal pytest.skips there is always an associated reason, which is shown on the CLI depending on the verbosity level (e.g. pytest -rs prints a summary of skip reasons), and it is available in the HTML report too.

But I guess that with "skipped" you are referring to tests that are now not being "collected" anymore because of the process selection options? By default there is not really a way to communicate that. I could add some logging to expose it in some way, or indeed a CLI parameter to use runtime pytest.skips instead of test-collection-time skips. A minimal logging sketch follows below.

> I found that valuable to know when developing things.

I understand, but as a user of the test suite, using the process selection feature, seeing hundreds of tests scroll by and being listed as skipped in the report gives a bad signal-to-noise ratio.
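
For example, a minimal sketch of such logging (illustrative only) could implement the standard pytest_deselected hook in the plugin:

```python
import logging

_log = logging.getLogger(__name__)

def pytest_deselected(items):
    """Standard pytest hook, called with the items deselected at collection time."""
    if items:
        _log.info("Process selection deselected %d tests", len(items))
        for item in items:
            _log.debug("Deselected: %s", item.nodeid)
```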

m-mohr (Member) commented Jan 25, 2024

Yeah, in general I agree that skipped tests should not be shown, but I think it's important for development to understand why something is skipped. It's good for now I guess. Thanks for the explanation.

Thanks for the fix for the slowdown.

- Move logic to `lib` and plugin, with better (internal) test coverage
- Replace fixture-based approach in tests/processes/processing with filtering at test collection phase

@soxofaan merged commit 2cdaff2 into main on Jan 25, 2024 (2 checks passed).
@soxofaan deleted the process-listing branch on January 25, 2024, 15:31.
soxofaan (Member, Author)

> Yeah, in general I agree that skipped tests should not be shown, but I think it's important for development to understand why something is skipped.

Added some minimal logging already in 3984219.
