After we switched to the new runner, the env file can no longer be reused, which means some resources are generated anew in each test case. This further increases the consumption of resource pools such as DHCP. Previously, the first case would generate a MAC address and save it into the env file, and later cases could reuse it; now every VM gets a different MAC. If the DHCP pool is small, VMs sometimes fail to get an IP address because the pool is exhausted.
IIUC, the nrunner creates a sub-process for each case, so the env file is deleted when the case finishes. To fix this issue, I can think of two ways:
1. Enhance avocado so that nrunner behaves the same as the legacy runner when max_parallel_tasks is set to 1.
2. Fix this on the avocado-vt side by saving the MAC address cache somewhere persistent, but this can only fix the MAC issue (a rough sketch follows below).
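As a rough sketch of what option 2 could look like: persist the generated MAC addresses in a small JSON file shared by all test sub-processes, so a VM keeps the same address across cases even though each case starts with a fresh env file. The cache path, key scheme, and helper names below are hypothetical and not existing avocado-vt APIs; the file locking is only there to show how concurrent nrunner tasks could avoid clobbering each other.

```python
import fcntl
import json
import os
import random

# Hypothetical shared location; avocado-vt would likely place this under its data dir.
CACHE_PATH = "/var/tmp/avocado_vt_mac_cache.json"


def _generate_mac():
    # Locally administered, unicast MAC using the QEMU/KVM 52:54:00 prefix.
    octets = [0x52, 0x54, 0x00] + [random.randint(0x00, 0xFF) for _ in range(3)]
    return ":".join("%02x" % o for o in octets)


def get_cached_mac(vm_name, nic_index=0):
    """Return a stable MAC for (vm_name, nic_index), creating it on first use."""
    key = "%s:%d" % (vm_name, nic_index)
    # Open for read/write, creating the cache file if it does not exist yet.
    fd = os.open(CACHE_PATH, os.O_RDWR | os.O_CREAT, 0o644)
    with os.fdopen(fd, "r+") as cache_file:
        # Serialize access across the per-test sub-processes spawned by nrunner.
        fcntl.flock(cache_file, fcntl.LOCK_EX)
        try:
            data = cache_file.read()
            cache = json.loads(data) if data else {}
            if key not in cache:
                cache[key] = _generate_mac()
                cache_file.seek(0)
                cache_file.truncate()
                json.dump(cache, cache_file)
        finally:
            fcntl.flock(cache_file, fcntl.LOCK_UN)
    return cache[key]


if __name__ == "__main__":
    # Calls from the same or different processes return the same MAC for "vm1".
    print(get_cached_mac("vm1"))
    print(get_cached_mac("vm1"))
```

As noted above, this only papers over the MAC part of the problem; other resources created per case would still pile up.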
Thanks for reporting this @PaulYuuu. @richtja mentioned that this is similar to the requirements of reusing containers in the LXC spawner effort, so maybe we can think of a common solution.
I haven't dug into the root cause of this issue yet, but I feel like the cache facility might help here. Maybe we can give it a try.
Hello @luckyh @clebergnu, do you have any opinion about this issue?