-
Dear developers, I am trying to use mxnet.monitor to inspect intermediate layer outputs. However, for a model that is not being trained, the monitor cannot return those output values; it can only print them. I found another method to extract features, reconstructing a Module from the first layer up to a certain layer, in an example that was last updated in 2017 (https://github.com/dmlc/mxnet-notebooks/blob/master/python/how_to/predict.ipynb). This method runs an extra forward() for each intermediate result, which does not seem efficient, especially when extracting intermediate results from multiple layers. Another method, mentioned in #4805 and #1538, is using mx.sym.Group. My question is: is there a more efficient way to obtain the outputs of several intermediate layers from a trained model?
Related previous discussion:
Thanks.
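For reference, here is a minimal sketch of the "rebuild a Module up to a certain layer" approach described above; the linked notebook appears to slice the symbol with get_internals() in a similar way. The checkpoint prefix "mynet", epoch 10, layer name "fc1", and the input shape are illustrative assumptions only, not taken from the original post.

```python
# Sketch only: extract one intermediate layer's output by truncating the symbol
# and binding a separate Module for it. Names and shapes are assumptions.
import mxnet as mx
import numpy as np

sym, arg_params, aux_params = mx.model.load_checkpoint('mynet', 10)

# get_internals() exposes every intermediate symbol; pick the layer to stop at.
internals = sym.get_internals()
feat_sym = internals['fc1_output']

# Bind a Module that ends at that layer and reuse the trained parameters.
mod = mx.mod.Module(symbol=feat_sym, context=mx.cpu(), label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])
mod.set_params(arg_params, aux_params, allow_missing=True)

# One extra forward pass per intermediate result, as noted in the question.
batch = mx.io.DataBatch(data=[mx.nd.array(np.random.rand(1, 3, 224, 224))])
mod.forward(batch, is_train=False)
features = mod.get_outputs()[0].asnumpy()
```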
-
Could you clarify what it is that you are trying to accomplish? If you want to debug an output inside the graph, then mxnet.monitor is a debug tool that lets you do that. It is typically used only once every many iterations because of its high overhead. If instead you want to get some specific activations and use them in some other network, for example, then you can group the activations you want together using mx.sym.Group and return that group as the result of the network; this will make the network produce multiple outputs.
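To illustrate the mx.sym.Group suggestion, here is a minimal sketch; the toy network, layer names, and shapes are assumptions for illustration and not the original poster's model.

```python
# Sketch only: group the activations of interest into one symbol so that a
# single forward pass returns all of them as separate outputs.
import mxnet as mx
import numpy as np

data = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data, num_hidden=64, name='fc1')
relu1 = mx.sym.Activation(fc1, act_type='relu', name='relu1')
fc2 = mx.sym.FullyConnected(relu1, num_hidden=10, name='fc2')

# Group the final output together with the intermediate activations we want.
grouped = mx.sym.Group([fc2, fc1, relu1])

mod = mx.mod.Module(symbol=grouped, context=mx.cpu(), label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1, 100))])
mod.init_params()

batch = mx.io.DataBatch(data=[mx.nd.array(np.random.rand(1, 100))])
mod.forward(batch, is_train=False)

# get_outputs() returns one NDArray per grouped symbol, in the same order.
fc2_out, fc1_out, relu1_out = mod.get_outputs()
```

In practice the trained parameters would be loaded with set_params() rather than init_params(); the point is that with a grouped symbol, all the listed activations come out of a single forward() call.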