Optimize likelihood cache #247

Open
hyanwong opened this issue Jan 17, 2023 · 1 comment

Comments

@hyanwong
Member

hyanwong commented Jan 17, 2023

At the moment we cache the likelihoods, which are a function of edge span and the number of mutations above the edge (along with the distance between timepoints). However, many (span, #mutations) combinations may be unique, i.e. occur on only a single edge, and hence don't need caching.

We could probably work out easily enough which combinations are duplicated and which are unique:

# Assume we have pre-calculated the number of mutations above each edge and
# stored them in `mut_edges` (one entry per edge of the tree sequence `ts`).
import numpy as np

mut_span, counts = np.unique(
    np.vstack((mut_edges, ts.edges_right - ts.edges_left)),
    axis=1,
    return_counts=True,
)
for num_muts, span in mut_span[:, counts > 1].T:
    # This (num_muts, span) combination is shared by several edges, so cache
    # the likelihoods for the different time intervals.
    ...
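As a rough sketch of how the cache could then be consulted per edge (using a hypothetical compute_likelihood helper to stand in for whatever evaluates the likelihood of a (num_muts, span) pair over the time grid; this is not an existing tsdate function):

# Only (num_muts, span) keys that occur on more than one edge are worth storing.
duplicated = {tuple(key) for key in mut_span[:, counts > 1].T}
cache = {}

def edge_likelihood(num_muts, span):
    key = (num_muts, span)
    if key not in duplicated:
        # Unique combination: compute directly, nothing is gained by storing it.
        return compute_likelihood(num_muts, span)
    if key not in cache:
        cache[key] = compute_likelihood(num_muts, span)
    return cache[key]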
@nspope
Contributor

nspope commented Jan 24, 2023

Is it still worth precomputing these for unique cases, because then it's possible to parallelize?
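A rough sketch of that idea (again assuming a hypothetical, module-level compute_likelihood helper, not tsdate's actual code): precompute all unique (num_muts, span) combinations in parallel, then gather them into a lookup table.

from concurrent.futures import ProcessPoolExecutor

def _likelihood_for_key(key):
    # Each worker evaluates one (num_muts, span) combination independently.
    num_muts, span = key
    return compute_likelihood(num_muts, span)

unique_keys = [tuple(col) for col in mut_span.T]
with ProcessPoolExecutor() as pool:
    values = list(pool.map(_likelihood_for_key, unique_keys))
likelihood_cache = dict(zip(unique_keys, values))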
