At the moment we cache the likelihoods, which are a function of edge span and number of mutations above the edge (along with distance between timepoints). However, there may be many combinations of (span, #mutations) which are unique, and hence don't need caching.
We could probably work out easily enough which combinations are duplicated (and hence worth caching):
```python
# assume we have pre-calculated the mutations above each edge
# and stored them in `mut_edges`
mut_span, counts = np.unique(
    np.vstack((mut_edges, ts.edges_right - ts.edges_left)),
    axis=1,
    return_counts=True,
)
for num_muts, span in mut_span[:, counts > 1].T:
    # this combination of num_muts and span is duplicated, so cache
    # the likelihoods for the different time intervals
    ...
```
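As a sanity check of the approach, here is a minimal self-contained sketch using made-up stand-ins for the tree-sequence quantities (`mut_edges`, `edges_left`, `edges_right` below are hypothetical example arrays, not real data): stacking `(num_mutations, span)` as columns and calling `np.unique(..., axis=1, return_counts=True)` identifies exactly the duplicated combinations.

```python
import numpy as np

# Hypothetical per-edge mutation counts and edge coordinates
# (stand-ins for mut_edges, ts.edges_left, ts.edges_right).
mut_edges = np.array([0, 2, 0, 1, 2])
edges_left = np.array([0.0, 0.0, 10.0, 0.0, 0.0])
edges_right = np.array([10.0, 5.0, 20.0, 8.0, 5.0])

# Each column is a (num_mutations, span) pair; unique columns plus
# their counts tell us which pairs occur more than once.
pairs = np.vstack((mut_edges, edges_right - edges_left))
mut_span, counts = np.unique(pairs, axis=1, return_counts=True)

# Only the duplicated combinations benefit from caching likelihoods.
duplicated = mut_span[:, counts > 1].T
for num_muts, span in duplicated:
    print(num_muts, span)
```

Here columns `(0, 10)` and `(2, 5)` each appear twice, so only those two combinations would be cached; the unique `(1, 8)` column is skipped.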