
Tests are failing with napari 0.5.0 #443

Open · adamltyson opened this issue Jul 11, 2024 · 6 comments · Label: bug
@alessandrofelder's hypothesis - "napari fixtures are now cleverer and fail your tests if you don't clean up your widgets properly"

We should look into this and get the tests passing again (e.g. by making sure all Qt widgets have parents).

adamltyson added the bug label on Jul 11, 2024
IgorTatarnikov commented Jul 18, 2024

Based on my investigation, I think this is related to the logging level that is set: if "DEBUG" is set, these errors crop up, while if everything is at "INFO" or a less verbose level it seems to work fine. Upon further investigation, this only seems to matter for logging to file, not to the console.

This was the minimal example I've been working with:

from pathlib import Path

import numpy as np
from fancylog import fancylog

import cellfinder.core as program_for_log


def test_example(make_napari_viewer):
    package_path = Path(__file__).parent

    # Start logging to file at INFO level (switching this to DEBUG
    # triggers the teardown failures)
    fancylog.start_logging(
        package_path,
        program_for_log,
        file_log_level="INFO",
        log_header="CELLFINDER TRAINING LOG",
    )

    viewer = make_napari_viewer()
    test_image = np.random.random((100, 100, 100))
    n_layers = len(viewer.layers)

    # add a single image layer
    viewer.add_image(test_image, name="test_image")

    assert len(viewer.layers) == n_layers + 1

From what I understand, it has something to do with napari's use of a WeakSet during cleanup: logging to file at DEBUG level keeps a reference to the viewer alive, and that breaks the teardown code.
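
As an illustration of the mechanism (a minimal sketch of WeakSet semantics in general, not napari's actual teardown code): a WeakSet only "forgets" an object once every strong reference to it is gone, so anything that stashes the viewer, such as a log record kept alive by a handler, keeps it in the set.

import gc
import weakref


class Viewer:
    """Stand-in for a napari viewer (hypothetical, for illustration)."""


live_viewers = weakref.WeakSet()

viewer = Viewer()
live_viewers.add(viewer)

# Simulate something (e.g. a cached log record) holding a strong reference
lingering_reference = viewer

del viewer
gc.collect()
assert len(live_viewers) == 1  # still "alive": a leak check would fail here

del lingering_reference
gc.collect()
assert len(live_viewers) == 0  # now the viewer is really gone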

IgorTatarnikov commented Jul 18, 2024

I've got a better minimal example now that doesn't use fancylog; I think this needs to be passed to the napari developers. I've verified that 3 out of 4 parametrized tests pass using napari==0.5.0, and 4 out of 4 using napari==0.4.19.post1.

import logging

import numpy as np
import pytest


@pytest.mark.parametrize("debug_level", ["INFO", "WARNING", "ERROR", "DEBUG"])
def test_example(make_napari_viewer, debug_level):
    # Set the level on the root logger; this state persists for the
    # rest of the pytest process
    logger = logging.getLogger()
    logger.setLevel(getattr(logging, debug_level))

    viewer = make_napari_viewer()
    test_image = np.random.random((100, 100, 100))
    n_layers = len(viewer.layers)

    # add a single image layer
    viewer.add_image(test_image, name="test_image")

    assert len(viewer.layers) == n_layers + 1

adamltyson commented
Thanks @IgorTatarnikov, yes I'd say it's worth raising an issue, or asking on Zulip.

My only confusion though - I don't think the cellfinder plugin (unlike brainreg) logs to file?

IgorTatarnikov commented
The cellfinder plugin doesn't directly; there's only one place logging is invoked during our test suite, which calls:

def cli():
    args = training_parse()
    ensure_directory_exists(args.output_dir)
    fancylog.start_logging(
        args.output_dir,
        program_for_log,
        variables=[args],
        log_header="CELLFINDER TRAINING LOG",
    )

Commenting out that one logging call fixes the errors.
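
One possible mitigation (a sketch of a generic pytest pattern, not necessarily the fix adopted here) is an autouse fixture that snapshots and restores root-logger state around each test, so levels and handlers set by one test cannot leak into the next:

import logging

import pytest


@pytest.fixture(autouse=True)
def restore_root_logger():
    # Snapshot the root logger's level and handlers before the test
    root = logging.getLogger()
    level = root.level
    handlers = list(root.handlers)
    yield
    # Restore afterwards, closing any handler the test added, so
    # per-test logging config (and any file handler holding
    # references) cannot leak into later tests
    root.setLevel(level)
    for handler in list(root.handlers):
        if handler not in handlers:
            handler.close()
            root.removeHandler(handler)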

alessandrofelder commented
> The cellfinder plugin doesn't directly; there's only one place logging is invoked during our test suite, which calls:

So the logger from a non-napari-related test function persists across tests??? 😱 TIL!
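
For context (a minimal sketch of standard library behaviour, with hypothetical test names): logging.getLogger() with no name returns the process-wide root logger, a module-level singleton, so any level or handler set in one test is still in place when later tests in the same pytest process run.

import logging


def test_sets_global_logging_state():
    # The root logger is a singleton shared by the whole process,
    # not scoped to this test
    logging.getLogger().setLevel(logging.DEBUG)


def test_runs_later_in_the_same_process():
    # Assuming pytest runs these in definition order: nothing here
    # touched the logger, but the DEBUG level set by the previous
    # test is still in effect
    assert logging.getLogger().level == logging.DEBUG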

adamltyson commented
Just noting that following #444 we should make sure to update the tests (removing the xfail) when this issue is addressed.
