
It seems that batch size is controlled by chunk_size #215

@ghost

Description

In train.py:

def train_dataloader(self):
    return DataLoader(self.train_dataset,
                      shuffle=True,
                      num_workers=4,
                      batch_size=self.hparams.batch_size,
                      pin_memory=True)
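
To make sure we are talking about the same thing: as I understand it, batch_size here only controls how many items the loader hands to each training step. A minimal, self-contained sketch of that behaviour (the dummy TensorDataset standing in for the train split is my own illustration, not the repo's dataset):

import torch
from torch.utils.data import DataLoader, TensorDataset

# hypothetical stand-in for the train split: one ray (8 floats) per item
rays = torch.randn(4096, 8)
loader = DataLoader(TensorDataset(rays), shuffle=True, batch_size=1024)

(ray_batch,) = next(iter(loader))
print(ray_batch.shape)  # torch.Size([1024, 8]): batch_size items per training step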

Here, batch_size only dictates how many samples are loaded from the train split per step. Then, in rendering, chunk is used for batched inference, and batch_size does not really appear anywhere:

    # B = rays.shape[0]: total number of rays in this training batch
    for i in range(0, B, self.hparams.chunk):
        rendered_ray_chunks = \
            render_rays(self.models,
                        self.embeddings,
                        rays[i:i+self.hparams.chunk],
                        ts[i:i+self.hparams.chunk],
                        self.hparams.N_samples,
                        self.hparams.use_disp,
                        self.hparams.perturb,
                        self.hparams.noise_std,
                        self.hparams.N_importance,
                        self.hparams.chunk, # chunk size is effective in val mode
                        self.train_dataset.white_back)

I am a bit confused here, because it seems that chunk (not batch_size) is the actual batch size per training step. Please clarify.
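
For concreteness, here is how I currently read the chunking: the per-chunk outputs are collected and concatenated back into a single batch before the loss is computed, so the gradient step still covers all batch_size rays, and chunk only caps how many rays go through the model at once. A rough, self-contained sketch of that reading (render_rays_stub and the dict keys are placeholders of mine, not the repo's actual API):

import torch
from collections import defaultdict

def render_rays_stub(rays):
    # dummy stand-in for render_rays: returns per-ray outputs as a dict
    return {'rgb': torch.sigmoid(rays[:, :3]), 'depth': rays[:, 3]}

B, chunk = 4096, 1024  # batch_size rays in total, processed chunk rays at a time
rays = torch.randn(B, 8)

results = defaultdict(list)
for i in range(0, B, chunk):
    out = render_rays_stub(rays[i:i+chunk])  # forward pass on one chunk only
    for k, v in out.items():
        results[k].append(v)
results = {k: torch.cat(v, 0) for k, v in results.items()}

assert results['rgb'].shape[0] == B  # the loss still sees all batch_size rays

If that reading is right, chunk is purely a memory knob for the forward pass, while batch_size remains the effective batch size of each optimization step.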
