Loss Based Sampling

In this work I implement and test Loss Based Sampling (LBS) and compare it with 'vanilla' training on the whole dataset. The idea behind LBS is the following:

  • We train on the whole dataset.
  • On the first iteration we make a forward pass on the whole dataset -> compute the per-sample loss -> take the 80% of samples with the highest loss and use them for the backward pass.
  • On the second iteration we take those 80% of samples, make a forward pass, compute the loss again -> take the 80% of samples with the highest loss and use them for the backward pass, and so on.
  • On every 5th iteration we start from the beginning, train on the whole dataset, and repeat the steps described above (see the training-loop sketch after this list).
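As a rough illustration, here is a minimal sketch of that selection loop, assuming PyTorch and an in-memory dataset. The function name, hyperparameters (`keep_ratio`, `reset_every`, `n_iters`) and the single-batch treatment of the data are my assumptions for readability, not the repository's actual code.

```python
import torch
import torch.nn.functional as F


def lbs_train(model, optimizer, data, targets,
              keep_ratio=0.8, reset_every=5, n_iters=20):
    """Sketch of Loss Based Sampling over an in-memory dataset (data, targets)."""
    idx = torch.arange(len(data))  # indices of the currently active subset
    for it in range(n_iters):
        if it % reset_every == 0:
            # Every 5th iteration: restart from the whole dataset
            idx = torch.arange(len(data))

        x, y = data[idx], targets[idx]

        # Forward pass on the active subset, keeping per-sample losses
        logits = model(x)
        per_sample_loss = F.cross_entropy(logits, y, reduction="none")

        # Select the 80% of samples with the highest loss
        k = max(1, int(keep_ratio * len(idx)))
        top = torch.topk(per_sample_loss, k)

        # Backward pass only through the selected high-loss samples
        optimizer.zero_grad()
        top.values.mean().backward()
        optimizer.step()

        # The selected samples become the next iteration's dataset
        idx = idx[top.indices]
    return model
```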

For testing purposes I use the MNIST dataset and the LeNet-5 architecture, which was originally developed for handwritten digit recognition on this kind of data.
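For reference, a minimal LeNet-5-style network for 28x28 MNIST inputs might look like the sketch below (again in PyTorch). The layer sizes follow the common MNIST adaptation, with the first convolution padded so the classic 5x5 kernels fit 28x28 images; the exact definition used in this repository may differ.

```python
import torch.nn as nn


class LeNet5(nn.Module):
    """LeNet-5-style network adapted to 1x28x28 MNIST images."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),   # 28x28 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),              # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                              # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```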
