
Greedy speedup #7

Open
ghannum opened this issue Jan 24, 2018 · 2 comments

Comments


ghannum commented Jan 24, 2018

Soft-DTW looks like the perfect solution for my deep-learning model. However, its speed is a major bottleneck in training (minibatches of 64 samples, with 2000 positions × 25 classes).

Would it be possible to add a parameter for greedy scoring which would scale better in time?

For example, I never need alignments with more than a few insertions/deletions. Perhaps this can be achieved by controlling the maximum recursion depth?
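(As a rough estimate of the cost, assuming each sample is aligned against a target of comparable length: the soft-DTW recursion fills a 2000 × 2000 grid, i.e. about 4 × 10⁶ cells per sample and roughly 2.6 × 10⁸ cell updates per minibatch of 64, so the quadratic dependence on sequence length dominates training time.)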

mblondel (Owner) commented

I think the right way to do it would be to add a band constraint, as done in Fast Global Alignment Kernels by @marcocuturi. This would let us compute distances only for pairs of observations that are not too far from the diagonal. It should be fairly straightforward, but we haven't gotten around to it yet.
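For reference, here is a minimal NumPy sketch of what such a band constraint could look like on the soft-DTW recursion. The names `soft_dtw_banded` and `softmin` and the `band_width` parameter are hypothetical illustrations, not this repository's API:

```python
import numpy as np

def softmin(a, b, c, gamma):
    """Soft minimum: -gamma * log(exp(-a/gamma) + exp(-b/gamma) + exp(-c/gamma))."""
    vals = -np.array([a, b, c]) / gamma
    m = vals.max()
    if m == -np.inf:  # all three predecessors lie outside the band
        return np.inf
    return -gamma * (m + np.log(np.exp(vals - m).sum()))

def soft_dtw_banded(D, gamma=1.0, band_width=10):
    """Soft-DTW value over a pairwise distance matrix D of shape (n, m),
    visiting only cells within `band_width` of the (rescaled) diagonal.
    Cells outside the band keep an infinite cost, so alignments that
    stray far from the diagonal are excluded."""
    n, m = D.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        # Restrict column range to a band around the diagonal,
        # rescaled to handle sequences of different lengths.
        center = int(round(i * m / n))
        lo = max(1, center - band_width)
        hi = min(m, center + band_width)
        for j in range(lo, hi + 1):
            R[i, j] = D[i - 1, j - 1] + softmin(
                R[i - 1, j], R[i, j - 1], R[i - 1, j - 1], gamma)
    return R[n, m]
```

With a band of half-width w, this visits O(n·w) cells instead of O(n·m); for the 2000-step sequences above, a band of half-width 20 would cut the per-sample work by roughly 50×. The same restriction would also have to be applied to the backward pass for gradients to benefit.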

mblondel (Owner) commented

@ghannum You will probably be interested in PR #9.
