Human-readable context for expectations #1965
Hey! So we actually used to have this in Jasmine but found that over thousands of test files at FB, nobody used it. So for now we are printing a nice error message with approximate information and a stack trace that will lead to the expectation (just like in your screenshot). I agree we could print the line that throws but quite often the assertion is multiple lines long:
so this wouldn't actually look so good and we'd have to use a parser to parse the JS and extract the relevant info (and collapse long lines) or something similar to make it pretty. Personally I think we are showing enough information for now but happy to reconsider. If you have ideas for something that isn't very complex but adds more context which helps resolve issues faster, let me know. |
@cpojer so the pattern is to wrap each assertion in an `it`? Is it possible that this pattern has been adopted less because it's better or worse, and more for consistency with the existing tests? Or perhaps from not knowing the feature exists? I didn't know this was in Jasmine.
Refactoring to a single line could encourage more semantic information in the assertion? Perhaps?

```js
const adminUser = {
  …
}
expect(a).toEqual(adminUser);
```
The example above shows that it's difficult to discover exactly which assertion failed unless you add verbose (IMO) wrappers around everything. This is especially true in a transpiled environment where sourcemap line numbers aren't always accurate. I believe that quickly and accurately understanding what broke and where is important, as are concise tests.
I made a few suggestions above: Are you looking for something simpler, or different? |
hey @timoxley! we already thought about adding something like this. the issue with the first option is that some matchers have optional arguments, and that makes things more complicated. e.g. in the second case here we won't know if the argument is a proximity or an error message:

```js
expect(555).toBeCloseTo(111, 2, 'reason why');
expect(555).toBeCloseTo(111, 'reason why');
```

the second suggestion won't work because the matcher will throw as soon as something doesn't meet the expectation:

```js
expect(1).toBe(2)/* will throw here */.because('reason');
```

we could attach the reason before the matcher executes, like this:

```js
expect(1).because('reason').toBe(2);
// or
because('reason').expect(1).toBe(2);
```

but this API doesn't really look that good. another option would be to add a second arg to expect:

```js
expect(1, 'just because').toBe(2);
```

but it is pretty much the same as the previous option. |
The reason I think this isn't very useful is that engineers don't want to waste time writing tests. Anything we do to make it harder for them will just lead to worse tests. In the past, the best solution was actually to create custom matchers. We'll introduce
which should allow you to write more expressive failure messages. I've seen this used a lot in projects like Relay. See all the matchers: https://github.com/facebook/relay/blob/master/src/tools/__mocks__/RelayTestUtils.js#L281 |
Closing due to inactivity but happy to reopen if there are good ideas. |
@cpojer @DmitriiAbramov apologies for delay. A second arg to |
@cpojer Agreed! In addition to not wanting to waste time debugging tests, this is exactly why I believe a less verbose API with more failure context would be preferable. For some concrete numbers, using the simple example from my comment above: to get assertions + context equivalent to tape's, Jest requires the programmer to write nearly double the amount of ceremonial boilerplate. This could be improved!

```js
// tape
test('api works', t => {
  t.deepEquals(api(), [], 'api without magic provides no items')
  t.deepEquals(api(0), [], 'api with zero magic also provides no items')
  t.deepEquals(api(true), [1,2,3], 'api with magic enabled provides all items')
  t.end()
})
```
```js
// jest
describe('api works', () => {
  test('api without magic provides no items', () => {
    expect(api()).toEqual([])
  })
  test('api with zero magic also provides no items', () => {
    expect(api(0)).toEqual([])
  })
  test('api with magic enabled provides all items', () => {
    expect(api(true)).toEqual([1,2,3])
  })
})
```

Update: I suppose you could write the jest tests on a single line with arrows:

```js
// jest
describe('api works', () => {
  test('api without magic provides no items', () => expect(api()).toEqual([]))
  test('api with zero magic also provides no items', () => expect(api(0)).toEqual([]))
  test('api with magic enabled provides all items', () => expect(api(true)).toEqual([1,2,3]))
})
```

This makes for some longer lines, but does improve the stats we compared earlier somewhat:
However I think having the test description at the start of the line, without a linebreak, makes it harder to visually parse the logic because it puts the "meat" of the test, i.e. the actual assertions, at some arbitrary column position. Parsing the code is more important than reading the test description, which is basically just a glorified comment. This is why nobody writes comments at the start of the line, e.g. this would be sadomasochistic madness:

```js
/* api without magic provides no items */ expect(api()).toEqual([])
/* api with zero magic also provides no items */ expect(api(0)).toEqual([])
/* api with magic enabled provides all items */ expect(api(true)).toEqual([1,2,3])
```

Ideally all the assertion code would line up neatly in the same column so it's easily parsed by a human. Based on this thinking, I'd strongly opt for the trailing |
Thanks for keeping the conversation going. Note that you also don't need the describe block, further making things smaller. |
Use the last arg of each matcher function then? |
It would only work if we add it as a second arg of `expect`; as the last arg of a matcher it's ambiguous:

```js
expect(obj).toHaveProperty('a.b.c', 'is that a reason or a value of the property?');
```
|
@cpojer it's unclear if the jasmine (and others) way of One example is testing redux-saga/redux-observable stuff where you're testing a state machine. It's very helpful to have a descriptive message about what state it failed at. That example is contrived, so the descriptions are as well, though. |
@jayphelps we're not using the jasmine way any more since we rewrote all jasmine matchers |
@DmitriiAbramov sorry, my question wasn't clear. Has the jasmine way of doing it been ruled out from being added back? Doing the same thing they allow. |
@jayphelps as i said before, it won't work for all matchers because of the ambiguity:

```js
expect(obj).toHaveProperty('a.b.c', 'is that a reason or a value of the property?');
```

and since jest matchers can be extended with third-party packages i don't think it's a good idea to mess with the argument list |
the cleanest option is probably to have it as a second argument of `expect`.
i'm not sure if we want to overload the API though. We almost never used it in facebook test suites, and for special cases i think it's easier to just define a new test:

```js
beforeEach(someSharedSetup);
test('reason or description', () => expect(1).toBe(1));
```

it's just a few lines more :) |
Or you can even put it into another |
@DmitriiAbramov The annoying cases are when you build up state, like in a state machine for sagas, epics, etc. Each test requires the previous state changes, so isolating them requires a ton of duplication without any gain AFAIK.

```js
it('stuff', () => {
  const generator = incrementAsync();
  expect(generator.next().value).toBe(
    call(delay, 1000)
  );
  expect(generator.next().value).toBe(
    put({ type: 'INCREMENT' })
  );
  expect(generator.next()).toBe(
    { done: true, value: undefined }
  );
});
```
Can you elaborate on this? Nesting describe calls AFAIK was just for dividing section titles; tests are still run concurrently, right?
Test suites (files) run concurrently, |
i'm starting to think that something like

```js
test('111', () => {
  jest.debug('write something only if it fails');
  expect(1).toBe(2);
});
```

can be a thing |
From this discussion and this repository I think a nice and semantic one would be:

```js
it('has all the methods', () => {
  since('cookie is a method').expect(reply.cookie).toBeDefined();
  since('download is a method').expect(reply.download).toBeDefined();
  since('end is a method').expect(reply.end).toBeDefined();
  // ...
});
```

The usage is similar to the If you like this I might be able to work out a PR adding the |
Please implement an easy way to do this. I don't use it very often, but especially for more complicated tests, it's helpful to know exactly what is failing without having to go digging. Please don't say "rewrite your test suites to be simpler". The only thing engineers hate more than writing test suites is rewriting test suites.
|
I got a simple "prototype" demo working; I would still need to implement the recursion. It is a thin wrapper using Proxies around the global variables and then over each method. However, Proxies are not supported by older browsers and cannot be polyfilled, so it might not be acceptable for Jest. This is the general structure for the wrapper:

```js
const since = (text) => {
  return new Proxy(global, {
    get: (orig, key) => {
      return (...args) => {
        try {
          const stack = orig[key](...args);
          return new Proxy(stack, {
            get: (orig, key) => {
              return (...args) => {
                try {
                  const ret = orig[key](...args);
                  // ... implement recursion here
                } catch (err) {
                  console.log('2', key, text, err);
                  throw err;
                }
              }
            }
          });
        } catch (err) {
          console.log('1', key, text, err);
          throw err;
        }
      };
    }
  });
};
```

There are three realistic options:

Edit: see it in action:

```js
describe('Test', () => {
  it('works', () => {
    since('It fails!').expect('a').toEqual('b');
  });
});
```
|
You need expectation context to make test results sane when you have non-trivial tests. Real-world tests won't always be that simple. Remember custom matchers: they hide matching complexity. But when a test fails, hiding this complexity is not what you want, because you want maximum info about the failure. Expectation context allows you to provide this context manually. Not ideal I guess, some kind of automatic context would be better, but it is the only way I've seen so far. When I break something and a test fails, with Jest I have to debug it manually or add logging or other modifications, which is much less convenient than just looking at the test run results. Sorry if I'm mistaken, but I don't see any technological counter-arguments to this feature here. And things like "this shouldn't be added because it won't look good" are just ridiculous. |
Can we reopen? Even |
|
Also, jasmine-custom-message looks similar to what's requested:
|
I think another modifier quickly becomes unreadable.

> What about chaining another modifier?
>
> ```js
> expect(foo).toEqual(bar).because('reason with %s placeholders')
> ```
>
> Or maybe a function
>
> ```js
> expect(foo).toEqual(bar).explainedBy((result) => `Lorem ipsum ${result}`)
> ```
|
The way I think either |
Can this be reopened? It seems Jest still has no good solution for testing how state changes over multiple interactions, for example:
Both of these cases require performing a series of actions, and asserting how the state changed (or didn't change) after each action, so multi-assertion tests are sometimes necessary. It's not always possible to solve this kind of problem with |
I keep finding situations where this would be really useful. Specifically when I'm running multiple interactions and assertions as @callumlocke explains. If we come up with an API that folks don't hate, is this something you'd be willing to pursue? I really think this would be a valuable and much used feature. |
Here's a summary of the proposed solutions:

```js
// First group: supported today
expect(api()).toEqual([]) // api without magic provides no items
it('api without magic provides no items', () => expect(api()).toEqual([]))
test('api without magic provides no items', () => expect(api()).toEqual([]))
expect(api()).toHaveNoItems()

// Second group: proposed additions
expect(api(), 'api without magic provides no items').toEqual([])
expect(api()).because('api without magic provides no items').toEqual([])
since('api without magic provides no items').expect(api()).toEqual([])
because('api without magic provides no items').expect(api()).toEqual([])
jest.debug('api without magic provides no items'); expect(api()).toEqual([])
```

Note that a trailing All four options in the first group are supported today. Personally, I find that the first option (a code frame with a comment) works great. And even better than that is using a custom matcher (option 4). I think what we need to understand to make movement on this is: what's more appealing about the options in the second group than the options in the first? What does the second group add which can justify core maintenance for all of the matchers we provide (across async matchers, asymmetric matchers, spy matchers, throw matchers, promise matchers, and custom matchers)? |
Hi,
First group options are mainly meant for the first use case. If you encounter the need for dynamically generated assertions, as proposed, you can nest calls to Since those dynamic use cases are still rare though, imo we should stick with the solution that requires minimal work for the maintainers. I personally like the |
I'm fine with any option. I just want the feature. I'm not huge on the |
@kentcdodds what about the four existing options? @eric-burel have you seen |
Like I said, I don't really care which option we go with. I just want the feature to exist. I guess if I were to sort them by order of preference it would be:
|
The feature does exist today with four options:

```js
expect(api()).toEqual([]) // api without magic provides no items
it('api without magic provides no items', () => expect(api()).toEqual([]))
test('api without magic provides no items', () => expect(api()).toEqual([]))
expect(api()).toHaveNoItems()
```

What is wrong with these? The proposed new solutions only seem to be marginally better than these existing solutions. What benefits do they bring over what we have that justify the maintenance cost? |
@rickhanlonii Nice, I did not know about So it leaves the second option I listed: having a dynamically generated failure message, which would make debugging faster. I don't have many use cases right now; maybe when you test an object field value, you would like to print the whole object on failure. That's legitimate imo, as is anything that makes writing tests easier, even if marginal or a bit redundant. After all we can both write Edit: creating a test in another test with |
This is crazy. Just add a second, optional parameter to expect(). Those of us who want to use it will (selectively), and those who don't, won't. Mocha has been doing this forever... it's one of the reasons I abandoned Jasmine years ago (the other being much better timer mocking.) If I weren't having to join the React bandwagon, I wouldn't be using Jest or any other Jasmine derivative. |
Printing out a message on error is a convention in so many other testing frameworks and I was surprised to not see it in Jest. I've found a lot of helpful examples in this thread (thank you for those), but adding an explicit way to print a custom error on test failure would be a nice addition to Jest's usability. This would make it easier for developers who are used to other testing frameworks (including non-JS ones) to ramp up on Jest. |
@mattphillips do you think it's possible to do something similar to jest-chain here to allow a solution to exist in userland? E.g. second argument to |
Honestly, this is something very standard in most JS testing frameworks. Very disappointed not to find it in Jest as we write all of our tests with a custom error message. |
@SimenB sorry I only noticed your message this morning! Yes this is doable in userland, I've just knocked it up and released it as Feedback welcome 😄 |
Awesome, thanks for doing it! |
Two things:
I used Mocha/Chai, as well as tape, before coming to Jest, and this is really a deal breaker. What do we have to do to get custom message support into expect? Telling us to use expect.extend in order to create a custom matcher sounds exactly like what you were trying to avoid in your first argument: "engineers don't want to waste time writing tests." |
I find it easy to open the unit test and look at the line number, so the use case in the OP doesn't bother me. The use case that does bother me is when I have a loop inside a test, e.g. to test every value of an enum, like this:
If that fails then I don't know which wordType value failed. My work-around was to replace that with a message which contains the test result, and expect that the message contains the expected test result (i.e. true). If it fails then Jest prints the message which contains the additional information.
Jest prints this ...
... which tells me that the wordType was 6 when it failed. Or more readably something like ...
|
This would be incredibly useful in tests like this:

```ts
test("compare ArrayBufferCursors", () => {
  const orig: ArrayBufferCursor;
  const test: ArrayBufferCursor;
  expect(test.size).toBe(orig.size);
  while (orig.bytes_left) {
    expect(test.u8()).toBe(orig.u8());
  }
});
```

Right now I only know some byte in the ArrayBufferCursor is wrong, but I have no idea which one. Being able to add an index as context would make debugging much easier. Thankfully some people have presented workarounds here, but they're all ugly and slow. |
@rickhanlonii, I understand that you want to reduce the maintenance cost of jest, but the options that you offer in your comment increase the maintenance of the unit tests of all other projects. If I want to explain an assertion using While there are cases with repetitive tests, where This discussion and the emergence of |
I may be misunderstanding the OP's request here, but the problem that I was trying to solve, and which brought me to this issue thread, was solved just using a simple The most generic version of this might look like:
|
+1 |
I found this closed issue while trying to find out what was the syntax for exactly this in jest. I'm surprised that it's not present, already. I would also like to see this added. |
@cpojer can we reopen this? There have been some good ideas. |
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
If there are multiple expectations in a single `it`, currently it appears to be impossible to figure out which expectation actually failed without cross-referencing the failure with line numbers in your code. Which expectation failed? The first or the second?

It would be nice if there were some human-readable context that made it immediately clear which expectation failed and what the expectation output actually means in human terms, without having to find the line number at the top of the stack trace and map that back to the code.

Compare the tape equivalent below. Ignore that tape doesn't bail after the first assertion failure. tape prints out a human-readable message above each expectation failure, allowing you to know exactly which test failed without going back to the test file. Note this also pushes the human-readable noise off to the end of line in the test source, where you might write a comment anyway.

It seems the only way to attach human-readable information to errors with jest is to wrap everything in an additional `it`, which is unnecessarily verbose IMO. Ideally, one could attach some human-readable context onto the end of the `expect` somehow, e.g.:

Context message as additional optional parameter for assertion methods:

Or context message as a chained `.because` or `.why` or `.comment` or `.t` or something:

Alternatively, it'd be even better perhaps if jest could simply read the file and print the actual source code line that the expectation itself is on.