
Network does not converge, bad captions #9

Open

PavlosMelissinos opened this issue Oct 30, 2017 · 34 comments

@PavlosMelissinos

Hello,

I've followed your instructions and started training the network. The loss reaches its minimum value after about 5 epochs and then it starts to diverge again.

After 50 epochs, the generated captions of the best epoch (5th or 6th) look like this:

Predicting for image: 992
2351479551_e8820a1ff3.jpg : exercise lamb Fourth headphones facing pasta soft her soft her soft her soft her soft her dads college soft her dads college soft her her her her her soft her her her her her soft her her her her
Predicting for image: 993
3514179514_cbc3371b92.jpg : fist graffitti soft her soft her Hollywood Fourth Crowd soft her her soft her her her her her soft her her her her her her soft her her her her soft her her her her soft her her her
Predicting for image: 994
1119015538_e8e796281e.jpg : closeout security soft her soft her security fall soft her her her her her fall soft her her her her her her soft her her her her her soft her her her her soft her her her her her
Predicting for image: 995
3727752439_907795603b.jpg : roots college Fourth tree-filled o swing-set places soft her soft her her soft her her soft her her college soft her her her her her her her soft her her her her soft her her her her her her

Any idea what's wrong?

@MikhailovSergei

Hi, I have also run into this problem. Let's work together to figure it out. My mail: [email protected]. Waiting for your answer.

@anuragmishracse
Owner

It's been a while since I worked on this repo. I'll try to retrain it and reproduce this error sometime next week and see if something needs to change.

Meanwhile, @PavlosMelissinos and @MikhailovSergei if you were able to debug this, feel free to update and send a pull request.

@MikhailovSergei

OK :) I'll try as well.

@MikhailovSergei

Hello, do you have the Flickr_30k.trainimages.txt and Flickr_30k.testimages.txt files? I can't find these files anywhere, and they can no longer be downloaded from the official website. I have the images; I just need these files.

@lopezlaura

Hello,
I am also facing the exact same problem; please let me know if you find a solution.
@MikhailovSergei I have just sent you an email.

@MikhailovSergei

Hi, I'm glad to receive your comment. I changed the batch size from 32 to 1500 in caption_generator.py and train_model.py. After 43-45 epochs it works a little better. Please let me know about your results, and whether you find any better approaches :)

@anuragmishracse
Owner

@MikhailovSergei @lopezlaura It actually depends on the dataset. Different datasets will typically require us to tune the hyperparameters to get optimal captions; it's unusual for the same hyperparameters to carry over directly.

Things that you can try:

  1. Changing the batch size; try keeping it at 1024.
  2. Changing the learning rate can help you reach an optimum.
  3. Changing the optimization algorithm.

If it helps you improve your model, do post your results here for others to refer to.
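To make the three suggestions above concrete, here is a minimal sketch of where those knobs sit in a Keras script. The toy model, feature size, and random data below are placeholders, not this repo's actual architecture or training code:

# Hypothetical sketch; the model and data are stand-ins, not this repo's code.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

BATCH_SIZE = 1024        # 1. larger batch size than the default 32
LEARNING_RATE = 1e-4     # 2. explicit (smaller) learning rate

vocab_size = 1000        # placeholder vocabulary size
model = Sequential([
    Dense(256, activation='relu', input_shape=(4096,)),  # e.g. an image feature vector
    Dense(vocab_size, activation='softmax'),             # next-word probabilities
])

# 3. the optimization algorithm is swapped here (e.g. RMSprop -> Adam)
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=LEARNING_RATE),
              metrics=['accuracy'])

x = np.random.rand(4096, 4096).astype('float32')                                 # dummy features
y = np.eye(vocab_size, dtype='float32')[np.random.randint(0, vocab_size, 4096)]  # dummy one-hot targets
model.fit(x, y, batch_size=BATCH_SIZE, epochs=2)

With the real generator-based pipeline, the batch size is simply passed to wherever the batches are built (e.g. the generator consumed by fit_generator).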

@MikhailovSergei

So which batch_size works better for Flickr8k?

@aashimasingh

aashimasingh commented Dec 13, 2017

I am facing the same issue with Flickr8k: the captions don't make any sense, and particular words get repeated in every sentence. Somehow it works better on a subset of 100 images than on the entire dataset. I have tried changing the batch size, but it didn't help. Could you give any suggestions?

@EriCongMa

After I trained the model, it gave the following result:

yielding count: 599098
yielding count: 599099
yielding count: 599100
yielding count: 599101
yielding count: 599102
yielding count: 599103
yielding count: 599104
yielding count: 599105
yielding count: 599106
yielding count: 599107
yielding count: 599108
yielding count: 599109
yielding count: 599110
Epoch 00050: loss did not improve
 - 1177s - loss: 6.7838 - acc: 0.3085
Training complete...

You can see that the loss is high and the accuracy is low. Meanwhile, when I run test_model, all of the output sentences are the same. I want to know where to change the learning rate and which optimization algorithm would be better.

By the way, could you share your weight file with me? My email address is [email protected]
Thanks very much.

@kashyap32

Changing the batch size can improve accuracy; try it with 1024.
Also, could you share your model.save file with me?
mail - [email protected]
Thanks!

@zbj6633

zbj6633 commented Dec 25, 2017

I am a university student. Could you share your model.save file with me? I want to see the effect.
mail - [email protected]
Thanks!

@MikhailovSergei

But if we take a batch size of 1024, it will overfit.

@zbj6633

zbj6633 commented Dec 26, 2017

@MikhailovSergei How much GPU memory does a batch size of 1024 need?

@b10112157

Could you share your model.save file? My network also doesn't converge.
mail: [email protected]
@MikhailovSergei
@kashyap32
@army3401
@aashimasingh
@lopezlaura

thanks

@ShixiangWan

My network doesn't converge either, so maybe this is a bug. :(

@b10112157

b10112157 commented May 14, 2018 via email

@ShixiangWan

@b10112157 Sorry, I have no other image captioning projects, and no Windows 10 image captioning projects. But for this one, the tensorboard screenshot is below:

[tensorboard screenshot]

@b10112157

b10112157 commented May 14, 2018 via email

@ShixiangWan

@b10112157 Thanks for your kind help. These are my best weight and model files (epochs=50, batch_size=32): https://drive.google.com/open?id=1DlfecYfiPlViFCh1h9Op_6puaTAKwN0N

@b10112157

b10112157 commented May 14, 2018 via email

@ShixiangWan

@b10112157 As shown in the tensorboard screenshot above, the best loss is 5.502 (5th step), and the accuracy at that point is 0.3267.

@ShixiangWan

@army3401 A batch size of 1024 needs about 4.2 GB of GPU memory. This is my test on a single K80 GPU:

[screenshot]

@b10112157

b10112157 commented May 14, 2018 via email

@ShixiangWan

@b10112157 Thanks. I am trying a batch size of 1024, and the loss curve now looks clearly better than with batch size 32. So maybe the small batch size of 32 causes the oscillation.

@b10112157

b10112157 commented May 14, 2018 via email

@ShixiangWan

@b10112157 This is the whole model file for batch size 1024:
https://drive.google.com/open?id=1rK5OkeCAb_kJLKR6EKlVqd_HzlZrjrYn

Tensorboard screenshot:

[tensorboard screenshot]

But I sampled and tested some pictures just now, and the captions are bad. For example:

[example caption screenshot]

@b10112157

b10112157 commented May 14, 2018 via email

@cynthia0811

@ShixiangWan Hey, have you managed to fix the bad captioning performance despite the higher accuracy?

@zhenming33

It's not about the model. Just replace 'unique = list(set(unique))' with 'unique = sorted(set(unique), key=unique.index)' in caption_generator.py, and the results start to make some sense. Due to the batch size, my final loss is 2.23, and the result looks like:

[result screenshot]
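For context on why this one-line change can matter: set() discards the word order, and Python's set iteration order is not guaranteed to be stable across runs, so the word-to-index mapping built during training may not match the one rebuilt at prediction time. Keeping first-occurrence order makes the vocabulary deterministic. A minimal sketch, with illustrative variable names rather than the exact code in caption_generator.py:

# Toy example; the captions and variable names are illustrative only.
captions = ["a dog runs in the grass", "a cat sits on a mat"]

unique = []
for caption in captions:
    unique.extend(caption.split())

# Original: set iteration order can change between Python runs, so the
# word -> index mapping may differ between training and prediction.
# unique = list(set(unique))

# Fix: deduplicate while preserving first-occurrence order (deterministic).
unique = sorted(set(unique), key=unique.index)

word_index = {w: i for i, w in enumerate(unique)}
index_word = {i: w for i, w in enumerate(unique)}
print(word_index)  # identical mapping on every run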

@Kinghup

Kinghup commented Jan 15, 2019

@b10112157 This is the whole model file for batch size 1024:
https://drive.google.com/open?id=1rK5OkeCAb_kJLKR6EKlVqd_HzlZrjrYn

Tensorboard screenshot:

[tensorboard screenshot]

But I sampled and tested some pictures just now, and the captions are bad. For example:

[example caption screenshot]

Wow, that's great! I have the same problem, and I added a BN layer to stabilize the loss, but my best model's loss is 4.7 and its accuracy is 0.37. Did you just adjust the batch size to 1024?
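For anyone trying the same thing, a minimal sketch of where a BatchNormalization layer can be inserted in a Keras model; the layer stack and sizes below are placeholders, not this repo's actual caption architecture:

# Hypothetical sketch: layers and sizes are illustrative only.
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization, Activation

vocab_size = 8000   # placeholder vocabulary size

model = Sequential()
model.add(Dense(256, input_shape=(4096,)))   # e.g. projected image features
model.add(BatchNormalization())              # normalize activations before the nonlinearity
model.add(Activation('relu'))
model.add(Dense(vocab_size, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.summary()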

@Kinghup

Kinghup commented Jan 17, 2019

It's not about the model. Just replace 'unique = list(set(unique))' with 'unique = sorted(set(unique), key=unique.index)' in caption_generator.py, and the results start to make some sense. Due to the batch size, my final loss is 2.23, and the result looks like:

[result screenshot]

How did you solve the problem? I tried your solution, but it doesn't work: the captions are still rambling and make no sense. I don't know where it's going wrong; please help me.

@Kinghup

Kinghup commented Jan 18, 2019

It's not about the model. Just replace 'unique = list(set(unique))' with 'unique = sorted(set(unique), key=unique.index)' in caption_generator.py, and the results start to make some sense. Due to the batch size, my final loss is 2.23, and the result looks like:

[result screenshot]

How did you solve the problem? I tried it, but it doesn't work...

@a494456818

I don't think setting batch_size to 32 will let training converge. I used the following settings:

  1. batch_size = 512
  2. @zhenming33's method (the sorted-vocabulary fix above)

With these, training converged around the 45th epoch with a loss of about 2.4. When I set batch_size to 1024 instead, it converged around epoch 49 with a loss of about 1.5.

[screenshot]

If you need the weight files, please let me know your email address.
