# ML_Interview_Answers

This repository contains implementations of, and answers to, popular Computer Vision interview questions.

An excellent resource for Q&A rounds. You may also refer to this

## Some Terminology

  1. What is feature space?

  2. What is Latent space?

  3. What is embedding space?

  4. What is representation space?

  5. What are latent features?

  6. What is a feature embedding?

  7. What is feature representation?

  8. What does latent representation mean?

  9. What does embedding representation mean?

  10. What does latent embedding refer to?

  11. What does vector refer to?

  12. What does Domain Distribution mean?

## Maths

  1. What is covariance?

  2. What is correlation? Remember: Pearson (linear) vs Spearman (rank-based, captures monotonic non-linear relationships)

  3. Explain the differences between covariance and correlation.

  4. What do norms refer to?

  5. Differences between 'distances' and norms?

  6. Eigenvalues and Eigenvectors

  7. PCA (Derive this from scratch)

  8. SVD

  9. K-Means - overview and mathematical explanation.

  10. L1 vs L2?

TL;DR: L2 squares the residuals, so extreme outliers dominate it. Prefer L1 when the data contains extreme outliers; otherwise L2 is the usual default.
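The outlier sensitivity above can be seen numerically. A minimal NumPy sketch (the toy residuals are made up for illustration):

```python
import numpy as np

# Toy regression residuals: three small errors plus one extreme outlier.
residuals = np.array([0.1, -0.2, 0.05, 10.0])

l1 = np.mean(np.abs(residuals))  # MAE: the outlier contributes linearly (10.0 / 4)
l2 = np.mean(residuals ** 2)     # MSE: the outlier contributes quadratically (100.0 / 4)

print(f"L1 (MAE): {l1:.4f}")  # 2.5875 -- dominated only mildly by the outlier
print(f"L2 (MSE): {l2:.4f}")  # 25.0131 -- almost entirely driven by the outlier
```

A single point at 10 accounts for roughly 97% of the L2 loss but only about 97% less dramatically under L1, which is why L1 (or Huber) is the robust choice.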

## Training and Accuracy Metrics

  1. What is Precision?

  2. What is Recall?

  3. What is F1 score?

  4. Define Confusion Matrix.

  5. Define Bias and Variance.

  6. How does model performance vary with bias and variance?

  7. What does the ROC curve represent? Ans: here

  8. How does the bias and variance vary with precision and recall?

  9. What is the difference between test and validation sets? Prelim idea here

  10. Are validation sets always needed?

  11. What is K-fold cross validation? Ans

  12. How do you deal with class imbalance? Ans
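Questions 1-4 above can be answered with one small worked example. A minimal NumPy sketch computing precision, recall, and F1 from confusion-matrix counts (the helper name `confusion_counts` and the toy labels are illustrative, not from the repo):

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    """Return (TP, FP, FN, TN) for binary labels in {0, 1}."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    return tp, fp, fn, tn

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

tp, fp, fn, tn = confusion_counts(y_true, y_pred)
precision = tp / (tp + fp)  # of everything predicted positive, how much is truly positive
recall = tp / (tp + fn)     # of everything truly positive, how much was found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
print(precision, recall, f1)  # 0.75 0.75 0.75
```

The harmonic mean makes F1 punish a large gap between precision and recall, which the arithmetic mean would hide.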

## Layer Functions

  1. What is Normalization?

  2. What is Batch Normalization?

  3. What is Instance Normalization?
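The difference between batch and instance normalization is just which axes the statistics are computed over. A minimal NumPy sketch for NCHW tensors (inference-style, without the learnable scale/shift parameters real layers add):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # x: (N, C, H, W). Statistics are shared across the whole batch,
    # computed per channel over axes (N, H, W).
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # x: (N, C, H, W). Statistics are per sample AND per channel,
    # computed over the spatial axes (H, W) only.
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(8, 3, 4, 4)
bn, inn = batch_norm(x), instance_norm(x)
```

Because instance norm never mixes statistics across samples, it behaves identically at train and test time, which is one reason it is popular in style transfer.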

## Optimizers

Comparison of various optimizers for 11 tasks - blog

## Activations and Losses

Nice Blog

  1. List down all standard losses and activations.
  • Sigmoid
  • ReLU
  • Leaky ReLU
  • Tanh
  • Hard Tanh
  • Cross Entropy
  • Binary Cross Entropy
  • Kullback-Leibler divergence loss
  • Triplet loss - pulls the anchor toward the positive sample and pushes it away from the negative
  • Hard Triplet Mining - trains on the hardest triplets (farthest positives, closest negatives) instead of random ones
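A few of the items above in minimal NumPy form, to have concrete formulas on hand (squared-Euclidean triplet loss is one common variant; the function names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def binary_cross_entropy(y_true, p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull anchor toward the positive, push it away from the negative,
    # until the gap between the two distances exceeds the margin.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```

Hard triplet mining then means selecting, within a batch, the positive with the largest `d_pos` and the negative with the smallest `d_neg` for each anchor before applying the same loss.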


## Model Compression

  1. What is Knowledge Distillation?

  2. What is model pruning?

  3. An awesome Twitter thread on model memory consumption.

  4. How tensors are stored in memory
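For question 1: the standard distillation objective matches the student's temperature-softened output distribution to the teacher's via KL divergence. A minimal NumPy sketch of that loss term alone (real training also mixes in the ordinary cross-entropy on hard labels; the `T**2` scaling follows the common formulation):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T                # temperature-soften the logits
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # so gradients keep a similar magnitude as T changes.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T ** 2 * np.sum(p * (np.log(p) - np.log(q)))
```

A high temperature flattens the teacher's distribution, exposing the "dark knowledge" in the relative probabilities of wrong classes, which is what the student learns from.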

## Some Standard Architectures

Read here.

  1. VGG-16/19

  2. ResNet-18/50/152
  • Skip connections:
  • Identity connections:

  3. Inception-v1/v2/v3
  • Group convolution:

  4. Xception
  • Depthwise separable conv:
  • Pointwise (1x1) conv:

  5. MobileNet

  6. Capsule Networks

## Convolution

  1. What is convolution?

  2. What are kernels/filters?

  3. What is stride and padding?

  4. Derive the factor of improvement of depthwise separable conv over normal convolution.
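For question 4, the derivation reduces to counting multiply-adds. With a k x k kernel, M input channels, N output channels, and an F x F output map, a standard conv costs k^2·M·N·F^2, while depthwise separable splits it into a depthwise pass (k^2·M·F^2) plus a pointwise 1x1 pass (M·N·F^2), giving a ratio of 1/N + 1/k^2. A sketch verifying the algebra (the layer sizes are an assumed MobileNet-style setting):

```python
def conv_cost(k, m, n, f):
    """Multiply-adds for a standard k x k convolution: k^2 * M * N * F^2."""
    return k * k * m * n * f * f

def depthwise_separable_cost(k, m, n, f):
    """Depthwise pass (k^2 * M * F^2) plus pointwise 1x1 pass (M * N * F^2)."""
    return k * k * m * f * f + m * n * f * f

# Assumed setting: 3x3 kernels, 512 in/out channels, 14x14 feature maps.
k, m, n, f = 3, 512, 512, 14
ratio = depthwise_separable_cost(k, m, n, f) / conv_cost(k, m, n, f)
print(ratio)            # ~0.113, i.e. roughly 8-9x cheaper
print(1 / n + 1 / k**2) # analytic factor 1/N + 1/k^2 matches
```

With 3x3 kernels the 1/k^2 term dominates, so the speedup saturates near 9x regardless of channel count.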

## Model Deployment and Production

Beginner's Blog

  1. Data Drift vs Model Drift vs Concept Drift?

Extensive repo on this topic

## Transformers and Attention

  1. Thoughts on Transformers by Karpathy.

## Other Cool Stuff

  1. Hands on Stable Diffusion
  2. Transformers are more robust than CNNs? Discussion

## Must Read Texts

  1. Image Processing, Analysis, and Machine Vision - Sonka, Hlavac, Boyle
  2. Deep Learning - Goodfellow, Bengio, Courville
    Download links