AI Infrastructure Engineer | Competitive Programmer | Full-Stack Developer
- 🎯 Currently focusing on AI hardware acceleration and cross-platform inference
- 🔭 Working on an OpenVINO XLA plugin for multi-platform model deployment
- 🌱 Learning hardware-specific optimizations for GPUs, TPUs, and FPGAs
- 💬 Ask me about ML Infrastructure, Hardware Acceleration, and Competitive Programming
Developing a new plugin for OpenVINO that enables cross-platform model inference:
- 🔄 Converting OpenVINO IR to an XLA representation (HLO/MLIR); see the sketch after this list
- ⚡ Enabling model inference on diverse hardware:
  - NVIDIA GPUs
  - Google TPUs
  - FPGA devices
- 🎯 Supporting various model architectures:
  - Convolutional Neural Networks (CNNs)
  - Transformer-based models
- 🛠️ Tech Stack:
  - OpenVINO Framework
  - XLA (Accelerated Linear Algebra)
  - MLIR (Multi-Level IR)
  - CUDA/GPU Programming
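
The plugin's own code isn't shown here, but as a rough sketch of the target representation: the snippet below uses JAX (which also lowers to XLA) to print the StableHLO/MLIR text for a toy matmul + ReLU layer. The layer, shapes, and names are illustrative assumptions, not plugin code.

```python
# A minimal sketch (toy shapes, toy matmul+ReLU "layer"); it only illustrates
# the HLO/MLIR text an XLA backend compiles, not the OpenVINO plugin itself.
import jax
import jax.numpy as jnp

def toy_layer(x, w):
    # Stand-in for a converted network op: matmul followed by ReLU.
    return jax.nn.relu(x @ w)

x = jnp.ones((1, 128))
w = jnp.ones((128, 64))

lowered = jax.jit(toy_layer).lower(x, w)  # trace and lower to StableHLO/MLIR
print(lowered.as_text())                  # the IR an XLA backend would compile
```

Running it prints the MLIR module that an XLA backend (GPU, TPU, or another device plugin) would then compile for the target hardware.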
Data Structures & Algorithms
- Arrays & Lists
- Trees & Graphs
- Dynamic Programming
Systems Programming
- OS Fundamentals
- Memory Management
- GPU Architecture
Networks
- TCP/IP
- gRPC
- Load Balancing
Databases
- SQL/NoSQL
- Redis
- Distributed Storage
- 🔥 Model Training: PyTorch, JAX, TensorFlow
- 🌐 Distributed Systems: PyTorch DDP, DeepSpeed, Megatron-LM (minimal DDP sketch below)
- ⚡ Hardware Acceleration: CUDA, cuDNN, NCCL
- 🐳 Orchestration: Kubernetes, Ray, Docker
- 📊 Monitoring: Prometheus, Grafana, OpenTelemetry
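
As a rough illustration of the distributed-training side, here is a minimal single-node PyTorch DDP sketch; the toy model, tensor sizes, and learning rate are illustrative assumptions.

```python
# Minimal single-node DDP sketch, assuming launch via
# `torchrun --nproc_per_node=<num_gpus> train.py` (torchrun sets LOCAL_RANK).
# The toy model, sizes, and learning rate are illustrative.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")            # NCCL backend for GPU collectives
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(512, 10).to(local_rank)    # toy model on this rank's GPU
    ddp_model = DDP(model, device_ids=[local_rank])    # replicates model, syncs grads

    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    x = torch.randn(32, 512, device=local_rank)
    loss = ddp_model(x).sum()
    loss.backward()                                    # gradients all-reduced across ranks
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

DDP keeps one model replica per GPU and all-reduces gradients during `backward()`, which is why the optimizer step stays identical across ranks.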
- 📐 Advanced Linear Algebra & Tensor Operations
- 🖥️ CUDA Programming & GPU Optimization
- 🌐 Distributed Systems Design
- 🛠️ High-Performance Computing
- 🧮 Competitive Programming
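
Since dynamic programming shows up both in the fundamentals above and in competitive programming, here is a tiny illustrative DP snippet (the classic coin-change count, chosen as a generic example rather than taken from any specific contest).

```python
# Classic bottom-up DP: count the ways to make `target` from given coin values.
# The coin values and target below are illustrative.
def coin_change_ways(coins, target):
    ways = [0] * (target + 1)
    ways[0] = 1                          # one way to make 0: use no coins
    for coin in coins:                   # coins outermost -> counts combinations, not orderings
        for amount in range(coin, target + 1):
            ways[amount] += ways[amount - coin]
    return ways[target]

print(coin_change_ways([1, 2, 5], 11))   # -> 11
```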