
Hi there, I'm Vaibhav Nagar 👋

🤖 Aspiring ML Engineer | 🧠 LLM Enthusiast | 🌍 GeoAI Researcher

LinkedIn X


🚀 About Me

  • 🔭 Currently working on transformer architectures and building them from scratch
  • 🌱 Deep diving into MLOps best practices and automated ML pipelines
  • 🎓 Graduated from NIT Warangal with a background in Remote Sensing and GIS, focused on GeoAI
  • 🌍 Based in Delhi, India
  • 💬 Ask me about LLMs, CNNs, or geospatial deep learning
  • ⚡ Fun fact: I love implementing papers from scratch to truly understand the architecture
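Building transformers from scratch starts with scaled dot-product attention. The following is a minimal sketch of that core operation (shapes and the helper name `attention` are illustrative, not from any of my repos):

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # (B, T, T) similarity matrix
    weights = F.softmax(scores, dim=-1)            # each row sums to 1
    return weights @ v                             # weighted mix of values

# Toy self-attention: 1 batch, 4 tokens, 8-dim embeddings
q = k = v = torch.randn(1, 4, 8)
out = attention(q, k, v)  # same shape as v: (1, 4, 8)
```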

πŸ› οΈ Tech Stack

Languages

Python SQL

ML/DL Frameworks

PyTorch TensorFlow Hugging Face scikit-learn

MLOps & Tools

Docker Git Jupyter Weights & Biases

Cloud & Platforms

AWS Google Cloud


📊 Featured Projects

Master's thesis project comparing deep learning algorithms for Land Use Land Cover classification using 1D CNNs. Applied to real-world geospatial datasets.

Tech: Deep Learning Geospatial Analysis Remote Sensing 1D CNN
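A 1D CNN for LULC classification convolves along the spectral axis of each pixel. A minimal PyTorch sketch of the idea (band count, channel widths, and class count are illustrative, not the thesis's actual architecture):

```python
import torch
import torch.nn as nn

class Spectral1DCNN(nn.Module):
    """Per-pixel classifier: treats a pixel's spectral bands as a 1D signal."""
    def __init__(self, n_bands=10, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),  # convolve over bands
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # pool over the spectral axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                # x: (batch, n_bands)
        x = x.unsqueeze(1)               # -> (batch, 1, n_bands)
        x = self.features(x).squeeze(-1)  # -> (batch, 32)
        return self.classifier(x)        # -> (batch, n_classes)

model = Spectral1DCNN()
logits = model(torch.randn(4, 10))  # 4 pixels, 10 spectral bands
```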

Experimenting with end-to-end MLOps pipelines: CI/CD for ML models, model versioning, and deployment best practices.

Tech: MLOps Docker CI/CD Model Deployment
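One idea behind model versioning is content addressing: hash the serialized weights together with the training config so every artifact gets a reproducible version id. A toy stdlib-only sketch (real pipelines would use a tool like DVC or MLflow; the function name `version_id` is hypothetical):

```python
import hashlib
import json

def version_id(weights: bytes, config: dict) -> str:
    """Derive a short, reproducible id from weights + sorted config."""
    h = hashlib.sha256()
    h.update(weights)
    # sort_keys makes the id independent of dict insertion order
    h.update(json.dumps(config, sort_keys=True).encode())
    return h.hexdigest()[:12]

v1 = version_id(b"fake-weights", {"lr": 1e-3, "epochs": 10})
v2 = version_id(b"fake-weights", {"epochs": 10, "lr": 1e-3})
# Same content in any key order yields the same version id
```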

Built transformer and Vision Transformer (ViT) architectures from scratch in PyTorch. Deep dive into attention mechanisms, positional encodings and patch embeddings.

Tech: PyTorch Transformers Vision Transformer (ViT) NLP
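The ViT-specific piece is patch embedding: the image is split into patches, each linearly projected to a token, then a class token and positional encodings are added. A minimal sketch, assuming toy 32x32 inputs and an 8x8 patch size (hyperparameters are illustrative):

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Image -> sequence of patch tokens with [CLS] and positional encodings."""
    def __init__(self, img_size=32, patch_size=8, in_ch=3, dim=64):
        super().__init__()
        self.n_patches = (img_size // patch_size) ** 2
        # A strided conv is equivalent to cutting patches + a shared linear projection
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, self.n_patches + 1, dim))

    def forward(self, x):                                 # x: (B, 3, 32, 32)
        B = x.shape[0]
        x = self.proj(x).flatten(2).transpose(1, 2)       # -> (B, 16, 64)
        cls = self.cls_token.expand(B, -1, -1)            # one [CLS] per sample
        return torch.cat([cls, x], dim=1) + self.pos_embed  # -> (B, 17, 64)

tokens = PatchEmbed()(torch.randn(2, 3, 32, 32))
```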

Fine-tuning Llama 3 on the Dolly dataset using Low-Rank Adaptation (LoRA) for parameter-efficient training. Exploring efficient fine-tuning techniques for large language models.

Tech: Llama 3 Weights and Biases LoRA
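The core LoRA trick: freeze the pretrained weight W and learn only a low-rank update BA, scaled by alpha/r. A minimal sketch of a LoRA-wrapped linear layer (shapes and rank are illustrative, not the actual Llama 3 fine-tuning config):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank branch B @ A."""
    def __init__(self, base: nn.Linear, r=4, alpha=8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x):
        # Zero-initialized B means the LoRA branch contributes nothing at start,
        # so fine-tuning begins exactly at the pretrained model's behavior.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(16, 16))
out = layer(torch.randn(2, 16))
```

Only A and B are trained, which is why LoRA is parameter-efficient: for a 16x16 layer with r=4, that is 128 trainable values instead of 256 plus bias.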


⭐ A Few Repos I'm Currently Following

Some interesting projects and resources I've been exploring:


🎯 Current Focus

  • 🔥 Exploring GeoAI foundation models
  • 📚 Building intuition on efficient training pipelines for transformer models
  • 🧪 Experimenting with Retrieval-Augmented Generation (RAG) systems
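The retrieval step of a RAG system can be sketched in a few lines: score documents against the query, then prepend the best match to the prompt. A toy bag-of-words version (a real system would use dense embeddings and a vector store; `retrieve` and the sample docs are hypothetical):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    q = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(q, Counter(d.lower().split())))

docs = [
    "Vision transformers split images into patches.",
    "LoRA adds low-rank adapters to frozen weights.",
]
context = retrieve("how does LoRA work", docs)
prompt = f"Context: {context}\n\nQuestion: how does LoRA work"
```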

🤝 Let's Connect!

I'm always open to collaborating on interesting AI/ML projects, discussing research ideas, or just chatting about the latest in deep learning!


Pinned

  1. Masters-Thesis-GeoAI-1D-CNN — Comparative study of deep learning algorithms for Land Use Land Cover (LULC) classification
  2. mlops — Experimenting with MLOps automated pipelines and best practices (Jupyter Notebook)
  3. transformers-and-vit-from-scratch — Implementation of Transformer and Vision Transformer architectures from scratch using PyTorch (Python)
  4. Finetuning-Llama-3-on-Dolly-Dataset-with-LoRA
  5. mlops-zomato (Python)
  6. ayulockin/neurips-llm-efficiency-challenge — Starter pack for the NeurIPS LLM Efficiency Challenge 2023 (Python)