Padding Layer #15
base: master
Conversation
      rightPad = strRightPad.Atoi();
   } break;
}
++idxToken;
What is a valid string here ("1, 2, 3 4")? Add some information in the comments. Any caveats the caller should be aware of?
Sure. I don't have any checks for negative numbers in the padding string. Is the check necessary? I'll add the order of the padding values to the comments nevertheless.
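For reference, a minimal sketch of such a check, written as a plain standard-C++ helper rather than the TString/Atoi-based parsing in the diff above; the function name and the '|' delimiter are illustrative assumptions only:

#include <sstream>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical helper: parse "top|bottom|left|right" and reject negative
// or missing values. The PR itself tokenizes with ROOT's TString::Atoi().
std::vector<int> ParsePadding(const std::string &spec)
{
   std::vector<int> pads;
   std::stringstream ss(spec);
   std::string token;
   while (std::getline(ss, token, '|')) {
      int value = std::stoi(token);   // throws std::invalid_argument on non-numeric input
      if (value < 0)
         throw std::invalid_argument("padding values must be non-negative");
      pads.push_back(value);
   }
   if (pads.size() != 4)
      throw std::invalid_argument("expected exactly 4 padding values: top|bottom|left|right");
   return pads;
}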
tmva/tmva/src/MethodDL.cxx
Outdated
@@ -496,6 +496,8 @@ void MethodDL::CreateDeepNet(DNN::TDeepNet<Architecture_t, Layer_t> &deepNet,
} else if (strLayerType == "LSTM") {
   Log() << kFATAL << "LSTM Layer is not yet fully implemented" << Endl;
   //ParseLstmLayer(deepNet, nets, layerString->GetString(), subDelimiter);
} else if (strLayerType == "PADDING") {
   ParseRnnLayer(deepNet, nets, layerString->GetString(), subDelimiter);
Typo, should be ParsePaddingLayer
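The corrected branch, mirroring the surrounding diff (assuming ParsePaddingLayer is the parser this PR provides), would read:

} else if (strLayerType == "PADDING") {
   ParsePaddingLayer(deepNet, nets, layerString->GetString(), subDelimiter);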
tmva/tmva/inc/TMVA/DNN/DeepNet.h
Outdated
/*! Function for adding Padding Layer in the Deep Neural Network, with a given
 * top, bottom, left and right paddings. It will take every matrix from the
 * previous layer and pad it with zeros to a matrix with new dimensions. */
TPaddingLayer<Architecture_t> *AddPaddingLayer(size_t topPad, size_t bottomPad, size_t leftPad, size_t rightPad);
Can we also have a factory method for using "full" and "valid" paddings? (Without the user needing to specify exact dimensions.)
Should we make this a factory method in the Padding Layer itself, or have this option only for the conv layer?
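As a sketch of what such a factory could look like (the struct, function name, and formulas are assumptions, not part of the PR; "same" preserves the spatial size at stride 1, "full" pads by kernel size minus one on each side, "valid" adds no padding; kernel dimensions are assumed to be at least 1):

#include <cstddef>
#include <stdexcept>
#include <string>

// Hypothetical padding descriptor and factory; the real PR only exposes the
// explicit AddPaddingLayer(top, bottom, left, right).
struct Padding2D {
   size_t top, bottom, left, right;
};

Padding2D MakePadding(const std::string &mode, size_t kernelHeight, size_t kernelWidth)
{
   if (mode == "valid")                       // no padding at all
      return {0, 0, 0, 0};
   if (mode == "same") {                      // keep output size equal to input size (stride 1)
      size_t padH = kernelHeight - 1, padW = kernelWidth - 1;
      return {padH / 2, padH - padH / 2, padW / 2, padW - padW / 2};
   }
   if (mode == "full")                        // every kernel position that overlaps the input
      return {kernelHeight - 1, kernelHeight - 1, kernelWidth - 1, kernelWidth - 1};
   throw std::invalid_argument("unknown padding mode: " + mode);
}

Either the padding layer or the conv layer could then forward the four computed values to the existing explicit constructor.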
namespace CNN {

template <typename Architecture_t>
class TPaddingLayer : public VGeneralLayer<Architecture_t>
If I understand correctly, this is more appropriately named PaddingLayer2D, since it assumes a 2D spatial layout. (Then we can also integrate 1D and 3D layers later. This is the approach of e.g. Keras.)
Yes. I'll fix the name.
fTopPad(topPad), fBottomPad(bottomPad), fLeftPad(leftPad), fRightPad(rightPad)
{

   this->outputHeight = inputHeight + topPad + bottomPad;
Create private functions for this? The calculation can be reused in the constructor for the "valid" and "full" options.
Yes. I have made this change already. I'll push it soon.
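A minimal sketch of such helpers, with member names taken from the snippet above; the class here is a standalone stand-in for illustration, not the actual TPaddingLayer:

#include <cstddef>

// Standalone stand-in illustrating private output-dimension helpers that both
// the explicit constructor and future "valid"/"full" constructors could reuse.
class PaddingDims2D {
public:
   PaddingDims2D(size_t topPad, size_t bottomPad, size_t leftPad, size_t rightPad)
      : fTopPad(topPad), fBottomPad(bottomPad), fLeftPad(leftPad), fRightPad(rightPad) {}

   size_t OutputHeight(size_t inputHeight) const { return CalculateOutputHeight(inputHeight); }
   size_t OutputWidth(size_t inputWidth) const { return CalculateOutputWidth(inputWidth); }

private:
   // Rows of zeros are added above and below the input.
   size_t CalculateOutputHeight(size_t inputHeight) const { return inputHeight + fTopPad + fBottomPad; }
   // Columns of zeros are added to the left and right of the input.
   size_t CalculateOutputWidth(size_t inputWidth) const { return inputWidth + fLeftPad + fRightPad; }

   size_t fTopPad, fBottomPad, fLeftPad, fRightPad;
};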
This layer introduces arbitrary padding as a separate layer instead of having it in convolution. The idea is to allow only fixed padding types for the convolution layer, such as VALID, SAME, FULL, etc. Since arbitrary padding is still required in some architectures, it has been created as a separate layer. The layer takes 4 arguments: top, bottom, left and right padding. The expected format for the string is:
PADDING2D|topPad|bottomPad|leftPad|rightPad
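For illustration, a hypothetical MethodDL Layout option using this layer between a convolution and a pooling step might look like the line below; the CONV/MAXPOOL/DENSE parameters are placeholders following the usual TMVA layout syntax, and only the PADDING2D token is what this PR introduces:

Layout=CONV|8|3|3|1|1|0|0|RELU,PADDING2D|1|1|1|1,MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|64|TANH,DENSE|1|LINEAR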
This contains a naive implementation of padding. More efficient approaches, such as pre-allocation or a single data structure for the propagations, are yet to be discussed.
TO-DO:
Tests for padding
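As a starting point, a minimal self-contained sketch of such a test, checking the dimension arithmetic and zero borders of a naive 2D zero-pad against the formulas above; it deliberately avoids the TMVA architecture classes, which a real test would exercise:

#include <cassert>
#include <cstddef>
#include <vector>

// Naive reference zero-padding of a row-major height x width matrix.
std::vector<double> ZeroPad2D(const std::vector<double> &in, size_t height, size_t width,
                              size_t top, size_t bottom, size_t left, size_t right)
{
   size_t outH = height + top + bottom;
   size_t outW = width + left + right;
   std::vector<double> out(outH * outW, 0.0);
   for (size_t i = 0; i < height; ++i)
      for (size_t j = 0; j < width; ++j)
         out[(i + top) * outW + (j + left)] = in[i * width + j];
   return out;
}

int main()
{
   // 2x3 input, padded by (top=1, bottom=2, left=0, right=1) -> 5x4 output.
   std::vector<double> in = {1, 2, 3, 4, 5, 6};
   auto out = ZeroPad2D(in, 2, 3, 1, 2, 0, 1);
   assert(out.size() == 5 * 4);
   assert(out[1 * 4 + 0] == 1);              // first input element lands one row down
   assert(out[0] == 0 && out.back() == 0);   // padded border stays zero
   return 0;
}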