
Batch Size #11

Open
hesegi opened this issue Jul 16, 2020 · 1 comment

Comments


hesegi commented Jul 16, 2020

Hello,

Firstly, thank you very much for your project. I have a problem with a large dataset: 10,000 images and their masks.
I can create the .npy files, but when I run unet.py with all 10,000 images I get a "Memory Usage Exceeds" error. When I decrease the dataset size (to at most 600 images) there is no error. I think the batch size in model.fit isn't working. Do you have any advice or a solution?

Best regards.

@RaghavPrabhakar66

Hi,

I think the batch size isn't helping because the whole dataset is loaded into memory at once; there is no generator streaming batches from disk. You can try ImageDataGenerator, which is built into Keras, or you can write your own custom generator.

Below is an example using ImageDataGenerator. Hope this helps and answers your question. (I have also added data augmentation; you can ignore that part.)

from keras_preprocessing.image import ImageDataGenerator

# Training generator: rescale pixels to [0, 1] and apply random augmentation
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# Validation generator: rescaling only, no augmentation
valid_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(64, 64),
    batch_size=32,
    color_mode='rgb',
    shuffle=True,
    class_mode='categorical')

valid_generator = valid_datagen.flow_from_directory(
    valid_dir,
    target_size=(64, 64),
    batch_size=32,
    shuffle=True,
    class_mode='categorical',
    color_mode='rgb')

# Number of batches per epoch
STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n // valid_generator.batch_size

# Note: in newer Keras versions, model.fit accepts generators directly
history = model.fit_generator(
    generator=train_generator,
    steps_per_epoch=STEP_SIZE_TRAIN,
    validation_data=valid_generator,
    validation_steps=STEP_SIZE_VALID,
    epochs=20)

To read more about custom generators, you can visit here, or if you want to see them in action you can visit my repo Kaggle-Data-Science-Bowl-2018, in which I trained a U-Net with custom generators.
