
Questions about Neural Style implementation #24

Open
alexjc opened this issue Sep 29, 2015 · 6 comments

Comments

@alexjc

alexjc commented Sep 29, 2015

I have a few questions about the notebook implementing "A Neural Algorithm of Artistic Style". Hopefully this is the right place for them? It would be easier to work with and to make pull requests against if this were a script rather than a notebook, but that's up to you.

Overall I think this is by far the prettiest implementation I've seen of the algorithm, and it's been a pleasure to work with. My questions:

  • Other implementations benefit from input images whose dimensions are multiples of 32, but they don't pad the convolution layers. Here, Lasagne adds padding, so should the input dimensions be multiples of 32, or multiples of 32 plus an additional 2 in width and height?
  • The computed images have dark areas around their borders, with the style fading in toward the middle. Is this due to the way the borders are handled? I'd expect the borders to be not black but the color of the mean pixel, so I'm not sure what's going on.
  • I'm finding the L-BFGS implementation in SciPy quite unstable (compared to the one in Torch used by Justin's implementation); it often returns the error below. This seems fairly random depending on image size and parameters, and adding new features to the algorithm doesn't help. Any ideas?
Bad direction in the line search;
   refresh the lbfgs memory and restart the iteration.

           * * *

   N    Tit     Tnf  Tnint  Skip  Nact     Projg        F
*****    211    259      2     0     0   1.204D-03   2.472D+03
  F =   2472.1531027869614

ABNORMAL_TERMINATION_IN_LNSRCH

 Line search cannot locate an adequate point after 20 function
  and gradient evaluations.  Previous x, f and g restored.
 Possible causes: 1 error in function or gradient evaluation;
                  2 rounding error dominate computation.
  • I've noticed that GPU memory usage fluctuates constantly during execution, presumably because Theano allocates and deallocates buffers. Is there a way to force it to allocate the buffers it needs once and then keep them in memory throughout the process?
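The sizing constraint in the first question can be sketched as follows; `crop_to_multiple` is a hypothetical helper for illustration, not part of the notebook:

```python
import numpy as np

def crop_to_multiple(img, multiple=32):
    # Hypothetical helper: trim an H x W x C image so both spatial
    # dimensions are exact multiples of `multiple` (32 corresponds to
    # five 2x2 pooling stages in a VGG-style network).
    h, w = img.shape[:2]
    return img[:h - h % multiple, :w - w % multiple]

img = np.zeros((227, 300, 3), dtype=np.float32)
print(crop_to_multiple(img).shape)  # (224, 288, 3)
```

Whether the extra +2 for the padded convolutions matters is exactly the open question above; the sketch only covers the plain multiple-of-32 case.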

Thanks again for the code, I've been very impressed with Lasagne because of it!

@alexjc
Author

alexjc commented Sep 29, 2015

I'm wondering if the second and third issues on my list are caused by software versions... I had to upgrade Theano to the version in the Git repository, since the PyPI release did not have the pooling modes for convolution that the latest Lasagne requires (it raises an error otherwise). I believe things were more reliable before the upgrade (though I had to hack out the average pooling to make it work).

Which combination of versions/revisions were you using for Lasagne and Theano?

@f0k
Member

f0k commented Sep 29, 2015

It'd be easier to work with and make pull requests if this was a script rather than a notebook, but it's up to you.

It's mostly a notebook because they can be browsed and rendered directly on github: https://github.com/Lasagne/Recipes/blob/master/examples/styletransfer/Art%20Style%20Transfer.ipynb
We'd lose that if it was a script.

Is there a way to force it to just allocate the buffers it needs once then keep them in memory throughout the process?

Yes, just set THEANO_FLAGS=lib.cnmem=.5 to have it allocate 50% of GPU memory from the start, or THEANO_FLAGS=lib.cnmem=600 to have it allocate 600 MiB. In addition, you can tell it not to release memory in between via THEANO_FLAGS=allow_gc=0, or even combine both: THEANO_FLAGS=lib.cnmem=600,allow_gc=0 (that's faster than either of those alone). You can also make these settings permanent in your ~/.theanorc:

[global]
floatX = float32
device = gpu
allow_gc = False
[lib]
cnmem = 600

I had to upgrade Theano to the version in the Git repository since the PIP version did not have pooling modes for convolution required by the latest Lasagne

Yes, we've tried to prominently mention this in the install instructions: http://lasagne.readthedocs.org/en/latest/user/installation.html#stable-lasagne-release

Which combination of versions/revisions were you using for Lasagne and Theano?

I'm working with the bleeding-edge versions of both (that's required for lib.cnmem), but I don't know which versions @ebenolson used for the notebook. Maybe we should include that information in the notebook? Since the first Lasagne release we haven't changed anything that affects backward compatibility, though, so I'd expect it to work the same regardless of which version you're using.

I'll leave the other technical questions up to Eben!

@ebenolson
Member

Hi @alexjc, thanks for the questions.

I don't know which versions I was using, but they were likely the current master when the notebook was committed. I think it's probably fine with the current versions, but I'll try to rerun later and confirm.

I haven't seen that particular L-BFGS error, but relying on SciPy is definitely a weak point, and I'd like to find an alternative. Perhaps I'll see whether the Torch optimizer can be wrapped or ported easily.
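For what it's worth, one pattern that tends to keep SciPy's optimizer better behaved is making sure the loss/gradient wrapper hands it contiguous float64 arrays; float32 gradients coming straight from a GPU function are a common source of line-search failures. A minimal sketch, with a toy quadratic loss standing in for the compiled Theano function (all names here are illustrative, not from the notebook):

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# Toy stand-in for the compiled Theano loss/gradient: minimize ||x - target||^2.
target = np.arange(4.0)

def loss_and_grad(x_flat):
    # fmin_l_bfgs_b works in float64; a float32 GPU function would need
    # explicit casts here, in both directions.
    x = x_flat.astype(np.float64)
    diff = x - target
    return float(np.sum(diff ** 2)), (2.0 * diff).astype(np.float64)

# When the function returns a (loss, gradient) pair, no separate fprime
# argument is needed.
x_opt, f_opt, info = fmin_l_bfgs_b(loss_and_grad, np.zeros(4), maxfun=100)
```

On this toy problem x_opt converges to the target; whether the casts alone fix the instability with the real Theano graph is an open question.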

As for edge/image size effects I haven't really investigated, I'll have to get back to you on that.

@alexjc
Author

alexjc commented Sep 29, 2015

Many thanks @f0k and @ebenolson.

I've traced the major problems to optimizer=fast_compile and exception_verbosity=high, which I had enabled for testing algorithm changes. With those two flags set, L-BFGS fails randomly, and I presume any form of gradient descent would also fail if the function compiles incorrectly.

I will report back on the other two issues, which seem minor in comparison!

@christopher-beckham

I have also seen weird border effects when I have used this example for my own work. Have we figured out a reason for this? :)

@alexjc
Author

alexjc commented Apr 18, 2016

@christopher-beckham Try using an image size that's a multiple of 16 or 32, depending on which layers you use.
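The advice above amounts to snapping each dimension down to the nearest multiple before running the transfer; a hypothetical helper, not part of the example code:

```python
def snap_down(size, multiple=32):
    # Hypothetical helper: largest multiple of `multiple` not exceeding `size`.
    # Use multiple=16 if only the shallower pooling layers are involved,
    # 32 when all five pooling stages are used.
    return size - size % multiple

print(snap_down(500, 32))  # 480
print(snap_down(500, 16))  # 496
```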
