This repository has been archived by the owner on May 28, 2024. It is now read-only.

Why compute the likelihood contributed by different scales in this way? #1

Open
williacode opened this issue Sep 29, 2021 · 0 comments

Comments


I have two questions about the likelihood computation.
Question 1:
In graph.py,

return xs[self.index], sum(prev_log_dets) / n_parts

the logdet is divided by n_parts (usually 2).
Why is the logdet divided by 2 at the SelectNode?
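To make my reading of that line concrete, here is a toy sketch (the values and variable names are made up, not from graph.py) of the only interpretation I can see: if one flow's log-determinant is shared by n_parts downstream branches, crediting logdet / n_parts to each branch makes the per-branch contributions sum back to the full logdet.

```python
# Toy sketch: splitting a shared logdet across branches (made-up values).
n_parts = 2
shared_logdet = 3.0  # pretend log|det J| of the shared flow

# What I understand SelectNode to return for each branch:
per_branch = shared_logdet / n_parts

# Summing the per-branch credits recovers the full logdet:
total = per_branch * n_parts
print(per_branch, total)  # 1.5 3.0
```

Is this the intended reasoning, or is there something else behind the division?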
Question 2:
Why is the likelihood contributed by different scales computed in the following way?
Here

lp0 = node_to_lps['m0-flow-0'] / 2 + node_to_lps['m0-dist-0']

lp0 is computed as the likelihood contributed by the first scale, and here
lp2 = node_to_lps['m0-dist-2'] - node_to_lps['m0-flow-1'] / 2

lp2 is computed as the likelihood contributed by the last scale.
But I have checked formula (3) in the paper 'Understanding Anomaly Detection with Deep Invertible Networks through Hierarchies of Distributions and Features', and the per-scale likelihood contributions given there do not match this implementation.
Can somebody explain this?
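For reference, this is how I reproduce the two quoted lines in isolation (the numeric values in node_to_lps are made up by me just to run the expressions; only the dictionary keys and the two formulas come from the repo):

```python
# Toy reproduction of the quoted per-scale expressions (made-up values).
node_to_lps = {
    'm0-flow-0': 4.0,   # logdet credited to the first flow node (toy value)
    'm0-dist-0': -1.0,  # log-prob under the first scale's prior (toy value)
    'm0-flow-1': 2.0,   # logdet credited to the second flow node (toy value)
    'm0-dist-2': -3.0,  # log-prob under the last scale's prior (toy value)
}

lp0 = node_to_lps['m0-flow-0'] / 2 + node_to_lps['m0-dist-0']  # first scale
lp2 = node_to_lps['m0-dist-2'] - node_to_lps['m0-flow-1'] / 2  # last scale
print(lp0, lp2)  # 1.0 -4.0
```

In particular, I do not understand where the halved flow terms (+ flow-0 / 2 for the first scale, - flow-1 / 2 for the last) come from relative to formula (3).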
