# Loss Based Sampling

In this work I will try to implement and test Loss Based Sampling (LBS) and compare it with 'vanilla' training on the whole dataset. The idea behind LBS is the following:

- We train on the whole dataset.
- On the first iteration we make a forward pass over the whole dataset, compute the loss, take the 80% of samples with the highest loss, and use them for the backward pass.
- On the second iteration we take those 80% of samples, do a forward pass, compute the loss again, take the 80% with the highest loss, use them for the backward pass, and so on.
- On every 5th iteration we start from the beginning: train on the whole dataset and repeat the steps described above.
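The selection and reset schedule above can be sketched in a few lines. This is a minimal illustration of the indexing logic only (no model or backward pass); the function names `lbs_select` and `lbs_schedule` are my own, not from this repo:

```python
import numpy as np

def lbs_select(losses, keep_frac=0.8):
    """Return indices of the keep_frac fraction of samples with the highest loss."""
    k = max(1, int(len(losses) * keep_frac))
    return np.argsort(losses)[::-1][:k]

def lbs_schedule(n_samples, n_iters, loss_fn, keep_frac=0.8, reset_every=5):
    """Simulate which samples survive each iteration under the LBS schedule.

    loss_fn(indices) stands in for a forward pass over the given subset.
    Returns the subset size at every iteration.
    """
    indices = np.arange(n_samples)
    sizes = []
    for it in range(n_iters):
        if it % reset_every == 0:
            indices = np.arange(n_samples)   # restart from the whole dataset
        losses = loss_fn(indices)            # "forward pass" on current subset
        indices = indices[lbs_select(losses, keep_frac)]
        # a real training loop would run the backward pass on `indices` here
        sizes.append(len(indices))
    return sizes
```

With 100 samples the subset shrinks to 80, 64, 51, 40, 32 samples over five iterations, then resets to the full dataset on the next one.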

For testing purposes I will use the MNIST dataset and the LeNet5 architecture, which was originally designed for this dataset.