Master's Thesis for M.Sc. Business Education - Pre-Trained Denoising Autoencoder Long Short-Term Memory Networks as Probabilistic Models for Estimation of Distribution Genetic Programming
[NeurIPS 2023] Rewrite Caption Semantics: Bridging Semantic Gaps for Language-Supervised Semantic Segmentation
Source code and datasets for the paper "Zero-1-to-3: Domain-level Zero-shot Cognitive Diagnosis via One Batch of Early-bird Students towards Three Diagnostic Objectives" (AAAI 2024)
Using SqueezeNet to classify video frames coming from a webcam or a smartphone camera
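For readers who want a concrete picture, here is a minimal sketch of single-frame classification with a pre-trained SqueezeNet in PyTorch/torchvision. This is an illustration under assumptions, not this repository's implementation (which may target Core ML or another mobile stack); the `classify_frame` helper is hypothetical.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load SqueezeNet with torchvision's ImageNet reference weights.
model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.IMAGENET1K_V1)
model.eval()

# Standard ImageNet preprocessing, applied to each captured frame.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_frame(frame: Image.Image) -> int:
    """Return the predicted ImageNet class index for one frame (hypothetical helper)."""
    batch = preprocess(frame).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)               # shape: (1, 1000) class scores
    return int(logits.argmax(dim=1))
```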
The official GitHub page for the survey paper "Self-Supervised Learning for Videos: A Survey"
A curated list of multi-modal large language model papers and projects, including collections of popular training strategies, e.g., PEFT and LoRA.
Code for the ICLR 2021 Paper "In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness"
PyTorch code for the Findings of NAACL 2022 paper "Probing the Role of Positional Information in Vision-Language Models".
Pre-training and fine-tuning transformer models using PyTorch and the Hugging Face Transformers library. Whether you're pre-training on custom datasets or fine-tuning for specific classification tasks, these notebooks offer explanations and working code, as the sketch below illustrates.
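To make the workflow concrete, here is a minimal, hedged sketch of the fine-tuning half using the Hugging Face `Trainer` API; the toy texts, labels, and `ToyDataset` class are illustrative stand-ins, not code from these notebooks.

```python
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Tiny in-memory corpus standing in for a real classification dataset.
texts = ["great movie", "terrible plot", "loved it", "waste of time"]
labels = [1, 0, 1, 0]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

class ToyDataset(Dataset):
    """Hypothetical wrapper turning tokenized texts into Trainer-ready items."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=ToyDataset(texts, labels)).train()
```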
Deep reference priors (ICML 2022)
Code for "On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models"
Dataset and model checkpoints for the paper "Query of CC: Unearthing Large Scale Domain-Specific Knowledge from Public Corpora".
Maximize Efficiency, Elevate Accuracy: Slash GPU Hours by Half with Efficient Pre-training!
Pre-training of Deep Bidirectional Transformers for Language Understanding
A description of the bridging program Ad FDND -> Ba CMD. Note: private until the examination board gives its approval!
Transformer, GPT-2, and BERT pre-training and fine-tuning from scratch
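For orientation, the sketch below shows what "from scratch" typically means with the Transformers library: constructing a randomly initialized GPT-2 from a config instead of loading released weights. The tiny layer/head/embedding sizes are arbitrary illustrative choices, not this repository's settings.

```python
from transformers import GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast

# "From scratch" = random weights: build a config and model directly
# instead of calling from_pretrained on a released checkpoint.
config = GPT2Config(n_layer=4, n_head=4, n_embd=256)  # deliberately tiny sizes
model = GPT2LMHeadModel(config)                        # randomly initialized

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # reuse GPT-2's BPE vocabulary

# One causal-LM training step: labels == input_ids; the model shifts them internally.
batch = tokenizer(["pre-training starts from random weights"], return_tensors="pt")
loss = model(input_ids=batch["input_ids"], labels=batch["input_ids"]).loss
loss.backward()
print(f"initial loss: {loss.item():.2f}")  # near ln(50257) ~ 10.8 at random init
```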
Methodology to pre-train and evaluate an LLM for the Portuguese language
Is ID embedding necessary for multimodal recommender systems?
A comprehensive project on training and fine-tuning transformer models using PyTorch and the Hugging Face Transformers library. Aimed at enthusiasts and researchers, it offers an accessible, practical look at working with transformers for NLP tasks.