This repository has been archived by the owner on May 31, 2020. It is now read-only.

Use Hypothesis to generate example data for tests #622

Open

Zac-HD opened this issue Aug 10, 2017 · 0 comments

Comments


Zac-HD commented Aug 10, 2017

Equivalent to beeware/voc#580.

I'm a maintainer of Hypothesis, and discussed using it to test Batavia with Russell (@freakboy3742) at the PyConAU sprints. I've got more than enough to do working on Hypothesis itself, but would be delighted to consult, mentor, teach, or assist anyone who wants to use it to test BeeWare projects. Just @-mention me, and I'll answer!

The idea is that instead of checking predefined examples in tests/utils/sample.py:SAMPLE_DATA, you would pick the right Hypothesis strategy and draw examples from that, using a test decorated with @given, so examples are reproducible and shrink correctly when they fail.
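As a rough sketch of what such a test could look like (the TranspileTestCase base class, its assertCodeExecution helper, and the import path are assumptions based on Batavia's existing test utilities, so the names may need adjusting):

from hypothesis import given
from hypothesis.strategies import text

from tests.utils import TranspileTestCase  # assumed location of Batavia's test base class

class StrReprTests(TranspileTestCase):
    @given(s=text())
    def test_repr(self, s):
        # Hypothesis generates many strings, replays past failures, and
        # shrinks any failing input to a minimal counterexample.
        self.assertCodeExecution("print(repr({!r}))".format(s))

Running this under pytest or unittest works as usual; Hypothesis handles generation, replay, and shrinking behind the @given decorator.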

As a quick-and-dirty demo, we could also just (temporarily) replace SAMPLE_DATA with a dataset drawn from Hypothesis:

from hypothesis.strategies import from_type

SAMPLE_DATA = ...  # current definition
# Draw 100 examples per type from the corresponding from_type() strategy,
# skipping keys that don't name a type.
generated = {
    k: [from_type(eval(k)).example() for _ in range(100)]
    for k in SAMPLE_DATA
    if isinstance(eval(k), type)
}
# Deduplicate, and sort by length then value so the output is stable.
SAMPLE_DATA = {
    k: sorted(set(repr(x) for x in v), key=lambda x: (len(x), x))
    for k, v in generated.items()
}

This wouldn't shrink failures to minimal examples, but it would probably turn up a bunch of unicode issues, and it doesn't require any changes to existing tests.
