I am currently trying to reproduce the method in PyTorch. See here: cobnet
FYI, code (with Python bindings) for computing the hierarchical merge tree is available in the higra package.
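For instance, building a hierarchy from a boundary-strength map with higra looks roughly like this (a minimal sketch; I have not checked that this is the exact merge-tree variant used in COB, and the choice of edge weighting and hierarchy are my own):

```python
import numpy as np
import higra as hg

# boundary: 2D map of boundary strengths in [0, 1] (e.g. the fused COB output)
boundary = np.random.rand(128, 128)

# 4-adjacency pixel graph, edge-weighted by the mean boundary strength of the two endpoints
graph = hg.get_4_adjacency_graph(boundary.shape)
edge_weights = hg.weight_graph(graph, boundary, hg.WeightFunction.mean)

# one possible hierarchical merge tree: the watershed hierarchy by area
tree, altitudes = hg.watershed_hierarchy_by_area(graph, edge_weights)
```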
I would therefore appreciate a more formal description of the training procedure, as the paper is rather unclear on that matter.
From my understanding, it goes like this:
Phase 1:
Regress a set of dimensionality-reduction (1x1 convolution) layers connected to the outputs of the ResNet blocks. These outputs, which you call "side activations", are 5 in total.
Regress two sets of parameters (1x1 convolutions) for the fine and coarse scales. These take as input 4-channel tensors (the concatenation of the 4 lowest and the 4 highest side activations, respectively) to give Yfine and Ycoarse.
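To make sure we are talking about the same architecture, this is roughly how I am wiring it up in PyTorch (a sketch of my interpretation, not your code; the ResNet-50 channel counts and the bilinear upsampling are my assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CobHead(nn.Module):
    """Sketch of my reading of the paper: 1x1 dimensionality-reduction convs on the
    outputs of the 5 ResNet blocks, plus two 1x1 convs that linearly combine the
    4 finest / 4 coarsest side activations into Yfine / Ycoarse."""
    def __init__(self, block_channels=(64, 256, 512, 1024, 2048)):
        super().__init__()
        # one 1x1 conv per ResNet block -> 1-channel side activation
        self.side = nn.ModuleList(nn.Conv2d(c, 1, kernel_size=1) for c in block_channels)
        self.fine = nn.Conv2d(4, 1, kernel_size=1)    # combines side activations 1-4
        self.coarse = nn.Conv2d(4, 1, kernel_size=1)  # combines side activations 2-5

    def forward(self, block_feats, out_size):
        # block_feats: the 5 feature maps from the ResNet blocks; upsample to image size
        sides = [F.interpolate(conv(f), size=out_size, mode="bilinear", align_corners=False)
                 for conv, f in zip(self.side, block_feats)]
        y_fine = self.fine(torch.cat(sides[:4], dim=1))
        y_coarse = self.coarse(torch.cat(sides[1:], dim=1))
        return sides, y_fine, y_coarse
```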
It is unclear what you mean by "To train the two sets of weights of the linear combinations, we freeze the pre-trained weights...".
Does that also apply to the side-activation losses (eq. 1)? In that case, the parameters of the backbone would never be modified. My intuition is that phase 1 is in fact divided into two steps: (1) sum all the losses of eq. 1 and backprop through both the dimensionality-reduction layers and the ResNet; (2) freeze the ResNet and the aforementioned dimensionality-reduction layers, compute the sum of the losses on Yfine and Ycoarse, and backprop.
Can you confirm this please?
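In code, the two-step hypothesis above would look roughly like this (again just a sketch of my reading; `model`, `loader`, `side_loss` and `fusion_loss` are placeholder names, and `model` is assumed to wrap the ResNet backbone plus the head above, expose `.resnet`, `.side`, `.fine`, `.coarse` submodules, and return `(sides, y_fine, y_coarse)` for a batch of images):

```python
import torch

def train_phase1(model, loader, side_loss, fusion_loss, lr=1e-7):
    # Step (1): sum the side-activation losses of eq. 1 and backprop through
    # both the ResNet and the 1x1 dimensionality-reduction layers.
    params1 = list(model.resnet.parameters()) + list(model.side.parameters())
    opt1 = torch.optim.SGD(params1, lr=lr, momentum=0.9)
    for images, targets in loader:
        sides, _, _ = model(images)
        loss = sum(side_loss(s, targets) for s in sides)
        opt1.zero_grad()
        loss.backward()
        opt1.step()

    # Step (2): freeze the ResNet and the dimensionality-reduction layers,
    # train only the fine/coarse linear combinations on the Yfine/Ycoarse losses.
    for p in params1:
        p.requires_grad = False
    params2 = list(model.fine.parameters()) + list(model.coarse.parameters())
    opt2 = torch.optim.SGD(params2, lr=lr, momentum=0.9)
    for images, targets in loader:
        _, y_fine, y_coarse = model(images)
        loss = fusion_loss(y_fine, targets) + fusion_loss(y_coarse, targets)
        opt2.zero_grad()
        loss.backward()
        opt2.step()
```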
Phase 2:
Freeze everything except the orientation layers to regress the oriented boundaries.
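That is, something like the following (sketch only; `model.orientation` and `orientation_loss` are placeholder names for the orientation layers and their loss, and the model is assumed to return the oriented boundary maps in this phase):

```python
import torch

def train_phase2(model, loader, orientation_loss, lr=1e-7):
    # Freeze every parameter, then unfreeze only the orientation layers.
    for p in model.parameters():
        p.requires_grad = False
    for p in model.orientation.parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(model.orientation.parameters(), lr=lr, momentum=0.9)
    for images, targets in loader:
        oriented = model(images)  # assumed to return the oriented boundary predictions
        loss = orientation_loss(oriented, targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
```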
Thanks a lot in advance for your help.
Also, any help on the porting would be appreciated.