Step 5: Writing tests with complex setup dependencies
This extra tutorial focuses on how to define a test with more complex setup dependencies like autogenerated ephemeral test nodes, permanent test objects and multi-test dependency cloning.
It is implemented by the tutorial_get and tutorial_finale variants with the same minimal tutorial_step_get.py test script code.
To tune the complexity up a bit, this extra tutorial makes use of three very different vms playing the roles of a temporary (regular) vm, a special type of vm called a permanent vm, and a vm with multiple setup dependencies:
- tutorial_get:
    vms = vm1 vm2 vm3
    roles = temporary multisetup permanent
    temporary = vm1
    multisetup = vm2
    permanent = vm3
    get_state_vm1 = connect
    get_opts_vm1 = switch=on
    get_vm3 = 0root
    get_state_vm3 = ready
    type = tutorial_step_get
    host_dhcp_service = yes
    variants:
        - explicit_noop:
            get_vm2 = tutorial_gui.client_noop
            get_state_vm2 = guisetup.noop
            set_state_vm2 = getsetup.noop
        - explicit_clicked:
            get_vm2 = tutorial_gui.client_clicked
            get_state_vm2 = guisetup.clicked
            set_state_vm2 = getsetup.clicked
        - implicit_both:
            get_vm2 = tutorial_gui
            set_state_vm2 = getsetup
Variant definitions are the most significant parts of this tutorial since all more elaborate dependencies are configured here using get parameters. The test script remains minimal and makes immediate use of an ephemeral state vm1, the variant-specific state vm2, and the permanent vm3 provided beforehand by reading this configuration. Let's break this down by tracing what each of these test objects represents and how it is set up.
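To make this more concrete, here is a minimal sketch of roughly what such a test script could look like; it is purely illustrative and not necessarily the exact tutorial_step_get.py from the sample test suite:

import logging


def run(test, params, env):
    """Minimal test reusing three vms that the configuration already brought to their states."""
    # the roles from the variant definition resolve to the concrete vm names
    temporary_vm = env.get_vm(params["temporary"])    # vm1 at the switched (on) connect state
    multisetup_vm = env.get_vm(params["multisetup"])  # vm2 at one of its guisetup states
    permanent_vm = env.get_vm(params["permanent"])    # vm3 at its permanent ready state
    for vm in (temporary_vm, multisetup_vm, permanent_vm):
        logging.info("Reusing %s without any setup steps inside the test itself", vm.name)
    # vm1 was switched to an on (running) state, so we can interact with it right away
    session = temporary_vm.wait_for_login()
    logging.info("vm1 reports hostname %s", session.cmd_output("hostname").strip())
    session.close()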
The easiest to consider of the three is the ephemeral state vm vm1, which is simply the virtual machine used in previous tutorials at the off connect state, with an additional switch to bring it online. Whenever a test vm dependency is defined via the usual get_state = connect, this actually translates to a default use of get = connect to look for setup tests restricted through the filter only=connect. However, get could also be specified explicitly to restrict to particular setup tests, while the usually included get_state is always used to decide what state to look for once a setup test is parsed and run and the tutorial_get test validates whether its requirements are satisfied. We still rely on the default get value for vm1 while we become explicit for vm3 by requesting its root node (i.e. creation test) but making use of its ready state. The additional parameter get_opts further refines the behavior of the get procedure, usually adding space-separated secondary options on top of get_state, get_type, and get_mode. There is also a check_opts parameter described in the README that accompanies check_state and other similar parameters for this procedure.
In the case above, the get options include switch=on which will switch the state type of the connect state from off to on. It can do this by generating an ephemeral test node (called this way because changing the off state will erase any previous on state setup paths) which will retrieve the off connect state, boot the vm, and set the derived boot state as the on connect state. All of this is done purely from the configuration and does not need any additional code implementation. Clearly though, there could be cases where the boot sequence and the overall state switch might require additional fine tuning. In such cases we could define custom ephemeral tests in the corresponding section in the groups config:
- ephemeral:
    start_vm = yes
    kill_vm = no
    get_state = customize
    get_type = off
    set_type = on
    type = shared_manage_vm
    vm_action = boot
    variants:
        ...
        # Additional customization possible only when left running
        - on_customize:
            set_state = on_customize
            type = shared_customize_on
Using an on_customize ephemeral test as an example, we replace the default boot code from the shared_manage_vm test script with shared_customize_on, where one could perform additional customization that is possible only when the vm is left running. There might be many more reasons one would like to start with an off (e.g. LVM) state and end up with an on (e.g. QCOW2) state that cannot be covered with a default autogenerated ephemeral test. It is still helpful when the simplest cases can be handled without having to add a manual config entry for each on/off state transition as a duplicated setup node. In both cases the ephemeral on state will be reused for the duration of the branch of tests before reverting to another off state.
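A later test could then retrieve the on_customize state (or the default ephemeral on state) just like any other state, for instance via a line like the following, assuming vm1 is the vm being customized:

get_state_vm1 = on_customize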
The simplified dump of the Cartesian graph from running the test tutorial_get.explicit_noop, excluding vm2, is shown here

It includes the dependency paths of vm1 and vm3 up to their root states (creation) 0r and their shared root (dependency scan) 0s. The automatically created and thus temporary vm1 then has a (pre)install node 0p which expands into preparation and original installation tests when actually run. It then has other freely configured automated setup nodes (nested a), and the generated ephemeral test is denoted as the boot test 1b1. We can now look at the way the permanent vm3 is parsed and notice that there is no automated setup and instead a direct dependence on its own root node. This root test will not truly be run and is parsed only to complete the dependency graph, as long as the vm is correctly identified as a permanent vm in its variant definition section:
- vm3:
    vms = "vm3"
    ...
    # storage
    permanent_vm = yes
Of course, we might still make it possible to create the vm within our tool set instead of truly externally, and write a tool for this purpose as we will do here. Preparing the permanent Ubuntu vm could be internalized and thus automated using a tool if
- the vm is only made permanent in order to preserve certain properties or further improve setup reuse
- the preparation could be automated and is not complex enough to require too much human input
To start, the vm can be created and customized using some regular tools that are already available for vm management, and then all setup specific to the vm could be moved to a custom tool provided in the sample test suite. Running all steps to prepare the vm should then look like this at the end of the tool development:
avocado manu setup=full,permubuntu vms=vm3
Getting there involves adjusting parts of the creation, installation, and customization stages included in the full tool and ultimately writing the code for the permubuntu tool in a file placed in the tools test suite folder. Using a more manual initial set of steps like setup=create|install|deploy would require specifying the additional parameter create_permanent_vm=yes in order to no longer prevent the vm root test from being run. The permanent vm3 here is already defined in terms of guest variant and object variant, added to the list of selectable vms in the guest base config and to the default variants in the objects overwrite config, and finally configured to only use on states in the sets config:
...
# Per-object state type selection
vm3:
    get_type = on
    set_type = on
Alternatively, for the last step one could also define off-state-only vms, for instance vms with permanent logical volumes.
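Such a vm would simply flip the per-object state type selection above, e.g. for a hypothetical vm4 backed by a permanent logical volume:

vm4:
    get_type = off
    set_type = off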
In the more elaborate case the new vm should also be handled in the creation, installation, and customization stages (if the predefined defaults are not sufficient), and if one needs specific software (e.g. OS) or hardware, the adaptation might also extend to the guest-os and guest-hw configs on the guest side. This will naturally require a greater understanding of the default and general Cartesian configuration, and the best approach would be to always extend it using the currently defined guest variants as a starting point.
The final possibly necessary step involves tool development, a slight departure from regular test development, and we use the opportunity to demonstrate one such instance by writing a tool called "permubuntu" for performing customization to the permanent vm that cannot be put into the regular customization step or would require too much deviation from it. To be accessible from the command line, the tool module must explicitly expose the manual step which is the tool entry point
__all__ = ["permubuntu"]
If the tool is making use of the Cartesian graph, the developer should also make sure to import the necessary classes and decorators:
from avocado_i2n.cartgraph import TestNode
from avocado_i2n.intertest_setup import with_cartesian_graph

...

@with_cartesian_graph
def permubuntu(config, tag=""):
    ...
As is perhaps clear by now, it is also important to follow the same standardized signature for the entry point function.
The actual implementation within the entry point function could be as easy as general command execution but if it involves Cartesian graph manipulation it would also require knowledge of the API for the cartgraph subpackage as well as possibly the standard test runner (node traverser) and loader (node parser) that come with such graphs. The best advice to give at this stage is to observe how the sample tool does this and mimic such behavior until one is comfortable and becomes more experienced.
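For orientation only, a bare-bones skeleton of such a tool entry point might look as follows; the body here is a hypothetical placeholder since the actual sample tool performs real graph manipulation:

import logging

from avocado_i2n.intertest_setup import with_cartesian_graph

__all__ = ["permubuntu"]


@with_cartesian_graph
def permubuntu(config, tag=""):
    """
    Perform the extra setup needed for the permanent Ubuntu vm.

    The body is a placeholder: a real tool would parse and run setup nodes
    from the Cartesian graph for the selected vms or execute commands on
    them before saving their final permanent states.
    """
    logging.info("Performing extra setup for the permanent vm(s), run tag %s", tag)
    # graph-based setup, command execution on the vm, and state saving go here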
We can now turn our eyes to the most elaborate dependencies, namely those of vm2. We could define multiple setup test nodes for the same test object by using any desired test restriction in the get parameter. In the example here, we have multiple test dependencies for the vm2 object using get=tutorial_gui. In particular, we could achieve the same effect both through explicit variants like
- explicit_noop:
    get_vm2 = tutorial_gui.client_noop
    get_state_vm2 = guisetup.noop
    set_state_vm2 = getsetup.noop
- explicit_clicked:
    get_vm2 = tutorial_gui.client_clicked
    get_state_vm2 = guisetup.clicked
    set_state_vm2 = getsetup.clicked
where we use get with a test restriction that is expected to resolve into a single test variant, here either a tutorial_gui.client_noop test or a tutorial_gui.client_clicked test, or through an implicit variant like
- implicit_both:
    get_vm2 = tutorial_gui
    set_state_vm2 = getsetup
where we use get with a test restriction that is expected to resolve into a test set with multiple subvariants, here both a tutorial_gui.client_noop and a tutorial_gui.client_clicked test node. The explicit method has the advantage of flexibility where we could fully decide what object state to retrieve, as we do with guisetup.noop and guisetup.clicked. It could quickly become laborious and hard to read though if the dependency test set is too large, as such configuration has to be supplied for each resolved test node. The implicit method (here included as a third variant for comparison) automates all of this by cloning the tutorial_get.implicit_both test for each parsed parent node, resulting in a tutorial_get.implicit_both..guisetup.noop and a tutorial_get.implicit_both..guisetup.clicked test retrieving the same states as the ones from the explicit configuration. It is important to notice that this method does not have get_state parameters and instead autodetects the states that the implicit_both clones will need. A drawback of the implicit method is lesser readability and lesser control over the configuration, which is why the explicit method is still preferable when using just a few dependency variants.
Running all four resulting tutorial_get tests will construct the following somewhat more elaborate Cartesian graph:

All of the vm1 and vm3 dependencies are the same, but the first observable complication from vm2 is the fact that each one of the two explicit variants 1 and 2 and the two implicit variants 3 and 3d1 (for its first and only duplicate) depends on a two-object test node. The 1a1-vm1vm2 test tutorial_gui.client_noop uses both vm1 and vm2 for the setup of vm2, saved as the state guisetup.noop and used by 1 and 3 respectively. In a similar manner, the 2a1-vm1vm2 test tutorial_gui.client_clicked does the same with a vm2 state guisetup.clicked used by 2 and 3d1 as the explicitly stated and implicitly generated versions of this test. The simplest use case for such dependencies one could think of is a case where vm2 should be brought to a state that requires multiple participating vms, e.g. imagine that there is a Windows VPN client installed on vm2 that requires a connection with a server vm1 in order to set up a VPN profile. However, let's say that the later tutorial_get tests require vm2 with an established VPN client profile but with two different configurations, one default (noop) and one with extra clicks on a checkbox (clicked). The resulting configuration and dependency graph here capture such scenarios in an explicit and an implicit fashion.
With all the above configuration included, we can now go deeper for a sort of tutorial_finale with the following final variant:
- tutorial_finale:
    vms = vm1 vm2 vm3
    roles = temporary multisetup permanent
    temporary = vm1
    multisetup = vm2
    permanent = vm3
    get_state_vm1 = connect
    get_opts_vm1 = switch=on
    get_vm3 = 0root
    get_state_vm3 = ready
    type = tutorial_step_get
    host_dhcp_service = yes
    get_vm2 = tutorial_get.implicit_both
This additional test group will use the same test script but adds another level of depth to the cloning since its vm2 dependency is the single previous variant tutorial_get.implicit_both, which will parse two grandparent setup nodes and be cloned in turn, ultimately duplicating the tutorial_finale tests too. Let's look at a picture to understand this better, i.e. at the Cartesian graph:

The tutorial_finale variants were split into 1 as the clone original tutorial_finale..getsetup.guisetup.noop and 1d1 as its clone. Neither of the two tests is the main one in any respect; the first simply takes the first available parent. The vm2 dependency of 1 is 1a1, which is now the previous test tutorial_get.implicit_both..guisetup.noop, and the vm2 dependency of 1d1 is 1a1d1, which is tutorial_get.implicit_both..guisetup.clicked. Both vm2 dependencies involve vm2 being prepared in the larger context of three participating vms (as indicated in the vm1vm2vm3 part of the 1a1 and 1a1d1 tests), covering even more elaborate anticipated scenarios. This could then go as deep as the needs of the product QA and its test suite require.
All tests here used the same permanent vm3 and a booted online vm1. Traversing the graph, and in particular a tutorial_gui test node used to prepare a given guisetup vm2 state, will bring vm1 back to the state linux_virtuser which is required for the preparation of vm2's guisetup. As linux_virtuser is an off state, this will in turn invalidate the ephemeral connect state that was prepared by the boot test 1b1. However, the graph traversing runner will take care of this by noticing that the connect state is ephemeral and no longer available when preparing to run any test requiring it, and will thus run the 1b1 setup test again with a different hash, for instance using a name like internal.stateless.manage.start..9OLZAl. In this way each ephemeral test rerun can be uniquely identified, and autogenerated or manually configured ephemeral tests remain compatible with complex setup automation like this.
If you have reached this far, you now know everything there is to know about configuring complete virtualized networks with state reuse for their different components!