
Incorrect descriptions for failed generated tests #777

Open
obensonne opened this issue Feb 22, 2014 · 7 comments
@obensonne

Given the test module test.py:

def test():
    def func(x):
        assert x == 2
    for val in (1,2,3,4):
        func.description = 'test_%s' % val
        yield func, val

I would expect the following output when running nosetests test.py -v:

test_1 ... FAIL
test_2 ... ok
test_3 ... FAIL
test_4 ... FAIL

======================================================================
FAIL: test_1
----------------------------------------------------------------------
Traceback (most recent call last):
    ...
AssertionError

======================================================================
FAIL: test_3
----------------------------------------------------------------------
Traceback (most recent call last):
    ...    
AssertionError

======================================================================
FAIL: test_4
----------------------------------------------------------------------
Traceback (most recent call last):
    ...
AssertionError

----------------------------------------------------------------------
Ran 4 tests in 0.002s

FAILED (failures=3)

However, the actual result is:

test_1 ... FAIL
test_2 ... ok
test_3 ... FAIL
test_4 ... FAIL

======================================================================
FAIL: test_4
----------------------------------------------------------------------
Traceback (most recent call last):
    ...
AssertionError

======================================================================
FAIL: test_4
----------------------------------------------------------------------
Traceback (most recent call last):
    ...    
AssertionError

======================================================================
FAIL: test_4
----------------------------------------------------------------------
Traceback (most recent call last):
    ...
AssertionError

----------------------------------------------------------------------
Ran 4 tests in 0.002s

FAILED (failures=3)

So while the verbose output is correct, the final failure report always shows the description of the last generated test. I know I could work around this by always yielding distinct test functions using lambda or partial, but then, why does nose allow test generators to yield functions and arguments in the first place? That design actually encourages reusing the same function for different tests.
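For reference, here is a sketch of the lambda/partial workaround mentioned above: yield a distinct callable per iteration so each one carries its own description attribute. The names `check` and `case` are my own, not anything from nose, and this relies on the fact that functools.partial objects accept attribute assignment.

```python
import functools

def check(x):
    assert x == 2

def test():
    for val in (1, 2, 3, 4):
        # A fresh partial object per iteration, so each test gets its
        # own description instead of mutating a single shared function.
        case = functools.partial(check)
        case.description = 'test_%s' % val
        yield case, val

descs = [f.description for f, _ in test()]
print(descs)  # ['test_1', 'test_2', 'test_3', 'test_4']
```

This works, but it is boilerplate that the simpler yield-the-same-function style was supposed to avoid.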

@obensonne (Author)

Issue #10 seems to address the same problem. However, it is 2 years old and its formatting is broken due to the import from Google Code.

@jszakmeister (Contributor)

Let's look at what you're doing:

def test():
    # step 1
    def func(x):
        assert x == 2

    for val in (1,2,3,4):
        # step 2
        func.description = 'test_%s' % val

        # step 3
        yield func, val

Step 1 creates a function inline, but outside the loop. After this step, func is defined, but there's only one of them. In Step 2, you modify func's description with the new description. So now the single instance of func has been updated with a new description. Then Step 3 yields the updated function and argument.

So once the generator has been iterated through, any cached instance of func will now have the description 'test_4'. And that's precisely what's happening when generating the failure report: unittest is asking for descriptions, and we're providing one, but the cached callable's description is no longer accurate. Seems like we should cache the name some other way, and use that when unittest calls shortDescription(), or you could try:

def test():
    for val in (1,2,3,4):
        def func(x):
            assert x == 2
        func.description = 'test_%s' % val
        yield func, val

This will produce the correct results. It creates more function objects than you might care for, but it also makes sense, since you aren't mutating the same object.

@jszakmeister (Contributor)

The following would fix the issue, but I'm not sure I like the solution (caching it in this way). It almost feels like we should formulate the test description and pass it into the test case. That would require pulling out some machinery from the TestBase class, and perhaps others... which feels too intrusive.

What do you think?

diff --git a/nose/case.py b/nose/case.py
index cffa4ab..2c02c11 100644
--- a/nose/case.py
+++ b/nose/case.py
@@ -197,13 +197,36 @@ class TestBase(unittest.TestCase):
         self.test(*self.arg)

     def shortDescription(self):
+        # We cache the description because of the way we advocate people
+        # set the description for their test.  The set up is typically
+        # something like:
+        #
+        #   def test_generator():
+        #       def func(arg):
+        #           assert ...
+        #       for i in range(10):
+        #           func.description = "test_generator_%d" % i
+        #           yield func, i
+        #
+        # This appears to work early on, but it fails when generating the report
+        # summary at the end of the run.  To prevent it from getting
+        # out-of-sync, we'll go ahead and cache the description the first time
+        # through.
+        if hasattr(self, '_cached_description'):
+            return self._cached_description
+
         if hasattr(self.test, 'description'):
-            return self.test.description
-        func, arg = self._descriptors()
-        doc = getattr(func, '__doc__', None)
-        if not doc:
-            doc = str(self)
-        return doc.strip().split("\n")[0].strip()
+            description = self.test.description
+        else:
+            func, arg = self._descriptors()
+            doc = getattr(func, '__doc__', None)
+            if not doc:
+                doc = str(self)
+            description = doc.strip().split("\n")[0].strip()
+
+        self._cached_description = description
+
+        return description


 class FunctionTestCase(TestBase):
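As a toy model of the patch above (hypothetical class name, not nose internals): the first call to shortDescription() records the value, so later mutation of the shared function no longer changes what the report sees.

```python
class CachingCase:
    def __init__(self, test):
        self.test = test

    def shortDescription(self):
        # Return the cached description if we already computed one.
        if hasattr(self, '_cached_description'):
            return self._cached_description
        self._cached_description = getattr(self.test, 'description', None)
        return self._cached_description

def func():
    pass

func.description = 'test_1'
case = CachingCase(func)
print(case.shortDescription())  # 'test_1', and now cached

func.description = 'test_4'     # the generator loop mutates func later
print(case.shortDescription())  # still 'test_1'
```

Note that this only helps if shortDescription() happens to be called before the mutation, which is exactly the timing concern raised in the discussion below.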

@obensonne (Author)

Thanks for the quick feedback!

My idea was to fix the test description early, when initializing the test case class:

diff --git a/nose/case.py b/nose/case.py
index cffa4ab..2c79a5d 100644
--- a/nose/case.py
+++ b/nose/case.py
@@ -190,6 +190,10 @@ class TestBase(unittest.TestCase):
     """
     __test__ = False # do not collect

+    def __init__(self):
+        self._test_description = getattr(self.test, 'description', None)
+        unittest.TestCase.__init__(self)
+    
     def id(self):
         return str(self)

@@ -197,8 +201,8 @@ class TestBase(unittest.TestCase):
         self.test(*self.arg)

     def shortDescription(self):
-        if hasattr(self.test, 'description'):
-            return self.test.description
+        if self._test_description is not None:
+            return self._test_description
         func, arg = self._descriptors()
         doc = getattr(func, '__doc__', None)
         if not doc:

It's the first time I looked into the nose source code so I'm not sure if this breaks other parts or violates some general concepts.
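A standalone sketch of that init-time capture (hypothetical class name, not nose code): the description is read once when the case object is constructed, so later mutation of the shared function cannot leak into the failure report.

```python
class InitCapturingCase:
    def __init__(self, test):
        self.test = test
        # Capture the description while it is still correct.
        self._test_description = getattr(test, 'description', None)

    def shortDescription(self):
        if self._test_description is not None:
            return self._test_description
        return str(self)

def func():
    pass

func.description = 'test_1'
case = InitCapturingCase(func)   # built while the description is correct

func.description = 'test_4'      # later mutation by the generator loop

print(case.shortDescription())   # 'test_1'
```

Because the capture happens at construction time, it does not depend on when (or whether) shortDescription() is first called.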

Concerning your patch, is it guaranteed that the shortDescription method is called early enough to cache the description while it is still correct? Even if it is, the approach feels a bit too implicit for my taste.

jszakmeister added a commit to jszakmeister/nose that referenced this issue Feb 27, 2014
The final report was looking at the original function object to
determine the name, but by that time it has been modified and no longer
represents the test that failed.  This caches the description, so the
report summary can get the correct name.

wip: should we just pass in the description directly?  Seems like we
should.
@jszakmeister jszakmeister self-assigned this Feb 28, 2014
@jszakmeister (Contributor)

I'm not sure why I never saw your response, @obensonne, though I can't say this is the first time that's happened with GitHub. :-(

I'll have to check and see whether that approach would work.

@obensonne (Author)

Thanks for the feedback, @jszakmeister

@BrenBarn

Has there been any progress on this? This is an annoying bug. The solution posted seems like the natural one: the test description should be extracted as part of the setup, before the test is actually run, rather than waiting until later.
