Removed dropout layers from models that use them
After discussion with @htoyryla (jcjohnson#408 (comment)), I think that processing fragments with exactly the same sequence of operations, without random variations, could make overlapping parts conform to each other better.
Or maybe not. Either way, it doesn't seem to make results worse, and it's simply faster.
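
For background (my note, not from the commit): Torch's nn.Dropout zeroes activations at random while the module is in its default training mode, which is what introduces the fragment-to-fragment variation mentioned above. A minimal sketch, assuming the stock torch/nn package:

require 'nn'

local d = nn.Dropout(0.5)   -- modules start in training mode
local x = torch.ones(1, 4)
print(d:forward(x))         -- some entries randomly zeroed (survivors scaled by 1/0.5)
print(d:forward(x))         -- a second pass usually differs
d:evaluate()
print(d:forward(x))         -- evaluation mode is deterministic (identity here)

Calling net:evaluate() would also silence dropout, but dropping the layers from the network avoids their overhead entirely.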
VaKonS authored Jul 2, 2017
1 parent 4d09095 commit f29e9f8
Showing 1 changed file with 5 additions and 1 deletion.
6 changes: 5 additions & 1 deletion neural_style.lua
@@ -208,8 +208,12 @@ print("Processing image part #" .. fragment_counter .. " ([" .. fx .. ", " .. fy
     local layer = cnn:get(i)
     local name = layer.name
     local layer_type = torch.type(layer)
+    local is_dropout = (layer_type == 'nn.Dropout')
     local is_pooling = (layer_type == 'cudnn.SpatialMaxPooling' or layer_type == 'nn.SpatialMaxPooling')
-    if is_pooling and params.pooling == 'avg' then
+    if is_dropout then
+      local msg = 'Removing dropout at layer %d'
+      print(string.format(msg, i))
+    elseif is_pooling and params.pooling == 'avg' then
       assert(layer.padW == 0 and layer.padH == 0)
       local kW, kH = layer.kW, layer.kH
       local dW, dH = layer.dW, layer.dH
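
Note that the hunk only shows the new branch; the removal itself happens because the is_dropout case never reaches the net:add(layer) call further down the loop, so the dropout module is simply left out of the rebuilt network. A minimal sketch of that surrounding loop (simplified; the content/style-layer bookkeeping in neural_style.lua is omitted):

local net = nn.Sequential()
for i = 1, #cnn do
  local layer = cnn:get(i)
  if torch.type(layer) == 'nn.Dropout' then
    -- report and skip: the layer is never added to net, so every
    -- forward pass over a fragment is now deterministic
    print(string.format('Removing dropout at layer %d', i))
  else
    net:add(layer)  -- everything else is kept as-is
  end
end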
