There are four groups of common tests in the `tests` directory:
- acceptance (`tests/acceptance/test_*.py`)
- regression (`tests/regression/test_*.py`)
- snapshot (`tests/snapshot/test_*.py`)
- internal (`tests/internal/test_*.py`)
The acceptance and regression tests check the on-chain protocol state:
- after executing the vote script `scripts/vote_*.py`, if it exists;
- just the current on-chain state otherwise.
Acceptance tests are devoted to the contracts' state, while regression tests cover various protocol scenarios.
The snapshot tests run only if a vote script exists. If there are multiple vote scripts, they are all executed sequentially in lexicographical order by script name.
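For illustration, the ordering could be reproduced like this (a minimal sketch, not the repo's actual loader):

```python
from pathlib import Path

# Hypothetical sketch: discover vote scripts and order them
# lexicographically by file name before execution.
vote_scripts = sorted(Path("scripts").glob("vote_*.py"))
for script in vote_scripts:
    # the real test setup imports each script and runs its vote
    print(f"would execute {script.name}")
```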
The internal tests are used for testing the tooling itself and run only if the environment variable `WITH_INTERNAL_TESTS=1` is set.
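The gate could look like this pytest-style guard (a hypothetical sketch, not the repo's actual code):

```python
import os

import pytest

# Hypothetical sketch: skip the whole module unless the
# WITH_INTERNAL_TESTS environment variable is set to "1".
pytestmark = pytest.mark.skipif(
    os.getenv("WITH_INTERNAL_TESTS") != "1",
    reason="internal tests run only with WITH_INTERNAL_TESTS=1",
)
```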
If there is no vote script (as the workflow defines), only the acceptance and regression tests run.
If a vote script exists (as the workflow defines):
a) the acceptance tests run after executing the vote;
b) the regression tests run after executing the vote;
c) the snapshot tests run.
Snapshot tests currently run only if `upgrade_*.py` vote scripts are present in the `scripts/` directory.
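Such gating might be expressed like this (a hypothetical sketch, not the repo's actual code):

```python
from pathlib import Path

import pytest

# Hypothetical sketch: skip snapshot tests unless an upgrade
# vote script is present in the scripts/ directory.
HAS_UPGRADE_SCRIPT = any(Path("scripts").glob("upgrade_*.py"))

pytestmark = pytest.mark.skipif(
    not HAS_UPGRADE_SCRIPT,
    reason="snapshot tests run only when scripts/upgrade_*.py exists",
)
```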
NB: by "snapshot" here we denote a subset of the storage data of a contract (or multiple contracts). The idea is to check that the voting doesn't modify contract storage beyond the expected changes.
Snapshot tests work as follows (see the sketch after the list):
1. Go over some protocol usage scenario (e.g. stake by a user + oracle report).
2. Store a snapshot along the steps.
3. Revert the chain changes.
4. Execute the vote.
5. Do (1) and (2) again.
6. Compare the snapshots taken during the first and the second scenario runs.
7. The expected outcome is that the voting doesn't change the observed storage beyond the expected changes.
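A simplified Python sketch of this flow; `scenario_steps`, `getters`, and `execute_vote` are hypothetical placeholders rather than the repo's actual API, and `chain` is brownie's chain helper:

```python
from brownie import chain  # brownie exposes chain.snapshot() / chain.revert()

def collect_snapshots(scenario_steps, getters):
    """Run the scenario, recording the observed storage after each step."""
    snapshots = []
    for step in scenario_steps:
        step()
        snapshots.append({name: getter() for name, getter in getters.items()})
    return snapshots

def test_vote_changes_nothing_unexpected(scenario_steps, getters, execute_vote):
    chain.snapshot()                                     # remember the pre-scenario state
    before = collect_snapshots(scenario_steps, getters)  # steps (1)-(2)
    chain.revert()                                       # step (3)
    execute_vote()                                       # step (4)
    after = collect_snapshots(scenario_steps, getters)   # step (5): (1)-(2) again
    assert before == after                               # steps (6)-(7)
```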
The current snapshot implementation is a kind of MVP, and a number of issues need to be addressed in the future:
- expand the number of storage variables observed;
- allow adjusting which storage variables are supposed to stay unchanged after the voting without modifying the common test files;
- extract getters from ABIs automatically (see the sketch below).
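As an illustration of the last point, zero-argument view functions could be picked out of a standard contract ABI like this (a hypothetical sketch):

```python
def view_getters(abi: list[dict]) -> list[str]:
    """Return names of the zero-argument view/pure functions in a contract ABI."""
    return [
        entry["name"]
        for entry in abi
        if entry.get("type") == "function"
        and entry.get("stateMutability") in ("view", "pure")
        and not entry.get("inputs")
    ]
```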
How to run a single test? Add the file name:

```shell
poetry run brownie test tests/<dir>/test_<name>.py -s
```
How to avoid spinning up the network every time you launch the tests? Run the network in a separate terminal window; the tests will connect to it:

```shell
poetry run brownie console --network mainnet-fork
```
How to decode unreadable error messages (like 0xb...)?
- Clone the `lido-cli` repo:

```shell
git clone https://github.com/lidofinance/lido-cli
```

- Install it following the docs: https://github.com/lidofinance/lido-cli
- Run:

```shell
./run.sh tx parse-error <error-messages>
```