Notes

Navigation

Disclaimer

  • These notes capture how I interpret the tutorials, articles, and books that I've read.
  • All sources are cited; anything without a citation is based on my personal experience.
  • Some information may be distorted, so I would be very grateful if you could let me know via Twitter or LinkedIn.

A

πŸ”™ Back

B

  • Boltzmann Machine
  • Backward Pass
    • Call optimizer.zero_grad() after each .step() to prevent gradients from accumulating across .backward() calls (see the sketch below).
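
A minimal training-loop sketch (toy model, random data, all names illustrative) showing where zero_grad() sits relative to backward() and step():

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

x = torch.randn(32, 10)  # dummy inputs
y = torch.randn(32, 1)   # dummy targets

for step in range(5):
    optimizer.zero_grad()          # clear gradients left over from the previous step
    loss = criterion(model(x), y)
    loss.backward()                # accumulates gradients into each parameter's .grad
    optimizer.step()               # update weights using the fresh gradients
```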

πŸ”™ Back

C

πŸ”™ Back

D

  • DCGAN
    • print(netD.main[5].weight.size()) gives torch.Size([256, 128, 4, 4]), meaning 256 output feature maps, 128 input feature maps, and a 4x4 kernel (see the sketch after this list).
    • On every iteration, the convolution produces a different result for each feature map.
    • If Loss D is near zero while Loss G is still high, the generator is producing garbage.
    • Loss G πŸ”Ί = G is fooling D with garbage; Loss D πŸ”» = D doesn't learn anything.
    • Loss G πŸ”» = G generates good images; Loss D πŸ”» = D can distinguish fake and real.
    • D(x) - the average output (across the batch) of the discriminator for the all-real batch. This should start close to 1 and theoretically converge to 0.5 as G gets better. Why? Initially the discriminator knows how to recognize the real samples (output mean = 1), then it starts to get confused as its weights are updated while training on the fake batches.
    • D(G(z)) - the average discriminator output for the all-fake batch. The first number is before D is updated and the second number is after D is updated. These numbers should start near 0 and converge to 0.5 as G gets better. Why? Initially the discriminator knows how to spot the fake samples (output mean = 0), then it starts to get confused, because the generator can produce images almost as good as the real ones.
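
A short sketch of the two ideas above: the Conv2d weight-shape convention, and D(x) / D(G(z)) as batch means of the discriminator output. The channel sizes match the note; netD and netG are hypothetical stand-ins for the tutorial's networks.

```python
import torch
import torch.nn as nn

# Conv2d weights are stored as [out_channels, in_channels, kernel_h, kernel_w]
conv = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=4, stride=2, padding=1)
print(conv.weight.size())  # torch.Size([256, 128, 4, 4])

# D(x) and D(G(z)) are just batch means of the discriminator's output
# (netD, netG, real_batch, and noise are hypothetical placeholders here):
# D_x   = netD(real_batch).mean().item()    # starts near 1, drifts toward 0.5
# D_G_z = netD(netG(noise)).mean().item()   # starts near 0, drifts toward 0.5
```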

πŸ”™ Back

E

πŸ”™ Back

F

πŸ”™ Back

G

  • Training on GPU
    • I found that TensorFlow can harness more GPU power than PyTorch when training DCGAN with each framework's tutorial code.

πŸ”™ Back

H

  • Hook PyTorch
    • First create the hook function, then create the model, then register the hook (see the sketch below).
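
A minimal forward-hook sketch following that order (the model and hook names are illustrative):

```python
import torch
import torch.nn as nn

# 1. Create the hook function: it receives the module, its inputs, and its output
def print_activation_shape(module, inputs, output):
    print(module.__class__.__name__, "output shape:", output.shape)

# 2. Create the model
model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 1))

# 3. Register the hook on a layer; it fires on every forward pass
handle = model[0].register_forward_hook(print_activation_shape)

model(torch.randn(4, 10))  # prints: Linear output shape: torch.Size([4, 20])
handle.remove()            # detach the hook when it is no longer needed
```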

πŸ”™ Back

I

πŸ”™ Back

J

πŸ”™ Back

K

πŸ”™ Back

L

  • Latent Space
    • A latent space is a compressed representation of a dataset (see the sketch below).
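
A toy autoencoder sketch (illustrative dimensions) where the bottleneck activations play the role of the latent space:

```python
import torch
import torch.nn as nn

# The 2-dimensional bottleneck is the latent space:
# a compressed representation of the 784-dimensional input.
encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 2))
decoder = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 784))

x = torch.randn(8, 784)   # dummy batch of flattened 28x28 images
z = encoder(x)            # latent vectors, shape [8, 2]
x_hat = decoder(z)        # reconstruction from the latent space
print(z.shape, x_hat.shape)
```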

πŸ”™ Back

M

πŸ”™ Back

N

  • Neuroscience
    • If your cells can turn into eyeballs or teeth, they can probably do backpropagation or something similar to it. [YouTube: Preserve Knowledge]

πŸ”™ Back

O

πŸ”™ Back

P

  • P Value

    • The p-value is the probability of observing a result at least as extreme as the current one, assuming the null hypothesis is true.
    • The lower the p-value, the more significant your independent variable is, i.e. the more impact it has on the dependent variable: < 5% highly significant, > 5% less significant.
  • Polynomial Linear Regression

    • Even though the relationship between x and y is non-linear, you can still use Polynomial Linear Regression, because the model remains linear in its coefficients (see the sketch below).
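
A small sketch of polynomial regression on synthetic quadratic data (data and degree are illustrative): the features are expanded to powers of x, but the fitted model is still an ordinary linear regression over those features.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Synthetic non-linear relationship: y is roughly quadratic in x
x = np.linspace(-3, 3, 50).reshape(-1, 1)
y = 0.5 * x**2 - x + np.random.normal(scale=0.2, size=x.shape)

# Expand x into [1, x, x^2]; the model stays linear in the coefficients
poly = PolynomialFeatures(degree=2)
x_poly = poly.fit_transform(x)

model = LinearRegression().fit(x_poly, y)
print(model.coef_, model.intercept_)
```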

πŸ”™ Back

Q

πŸ”™ Back

R

  • R

    • Namespaces are separated using a dot.
  • Preview .md files in vscode

  • Reactjs Concepts

    • Split components as needed, and name props from the component's own point of view rather than the context in which it is used. React Doc

πŸ”™ Back

S

  • Data Security in ML

    • Even with decentralized deep learning, a GAN can generate prototypical samples of the targeted data. [src: arXiv]
  • Sparse coding

  • Spyder

    • Some objects cannot be viewed in Spyder.

πŸ”™ Back

T

πŸ”™ Back

U

πŸ”™ Back

V

πŸ”™ Back

W

πŸ”™ Back

X

πŸ”™ Back

Y

πŸ”™ Back

Z

πŸ”™ Back

