PyTorch implementation of the project Ranking of Clarification Questions
Checkpoint 2 involves reproducing the evaluation numbers of a state-of-the-art baseline model for the task of interest with code that you have implemented from scratch. In other words, you must obtain the same numbers as the previous paper on the same dataset.
In your report, also perform an analysis of the remaining errors this model makes (ideally with concrete examples of failure cases), and describe how you plan to create a new model for the final project that addresses these error cases. If you are interested in tackling a task that does not have a neural baseline in the final project, you may also describe how you adapted the existing model to this new task and perform your error analysis on the new task (although you must still report results on the task that the state-of-the-art model was originally designed for).
- Data Loading
- Model
- Integration
- Experiments
- Error Analysis
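At a high level, the Model and Integration components above reduce to scoring (post, candidate question) pairs and ranking the candidates by score. The sketch below illustrates only that shape; all names (QuestionRanker, the single-layer LSTM encoders, the dimensions) are illustrative placeholders and not the repository's actual architecture.

```python
import torch
import torch.nn as nn

class QuestionRanker(nn.Module):
    """Scores a (post, candidate question) pair; higher score = better question.

    Hypothetical sketch: encoder types and sizes are assumptions, not the
    actual implementation.
    """
    def __init__(self, vocab_size, embed_dim=200, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.post_enc = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.ques_enc = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.scorer = nn.Linear(2 * hidden_dim, 1)

    def forward(self, post_ids, ques_ids):
        _, (p, _) = self.post_enc(self.embed(post_ids))  # final post state
        _, (q, _) = self.ques_enc(self.embed(ques_ids))  # final question state
        pair = torch.cat([p[-1], q[-1]], dim=-1)
        return self.scorer(pair).squeeze(-1)             # one score per pair
```

At test time, each candidate question for a post is scored with a model of this kind and the candidates are ranked by score.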
- Clone the Repository and Create the Environment:
conda env create -f environment.yml
- Download Data (outside the repository):
wget https://www.dropbox.com/s/8uaqm1ymrh50yxf/clarification_questions_dataset.zip
unzip clarification_questions_dataset.zip
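The exact layout of the unzipped data is not documented here; the wrapper below is only a hedged sketch assuming tab-separated (post, candidate question, label) triples in a hypothetical train.tsv file.

```python
import csv
from torch.utils.data import Dataset

class ClarificationDataset(Dataset):
    """Hypothetical loader: the file name and the (post, question, label) TSV
    layout are assumptions about the unzipped dataset, not its actual format."""
    def __init__(self, path, tokenize):
        self.examples = []
        with open(path, newline="", encoding="utf-8") as f:
            for post, question, label in csv.reader(f, delimiter="\t"):
                self.examples.append((tokenize(post), tokenize(question), int(label)))

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]
```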
- Create a directory to store models:
mkdir trained_models
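Trained weights are written into this directory. A minimal checkpointing sketch (the file-naming scheme is illustrative, not the repository's actual convention):

```python
import torch

def save_checkpoint(model, optimizer, epoch, experiment_name):
    # Hypothetical naming scheme; adjust to whatever the training script expects.
    torch.save(
        {
            "epoch": epoch,
            "model_state": model.state_dict(),
            "optimizer_state": optimizer.state_dict(),
        },
        f"trained_models/{experiment_name}_epoch{epoch}.pt",
    )
```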
- For Each Model Directory, Run Train & Test:
bash run.sh experiment_name
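The test step is what produces the evaluation numbers that are compared against the baseline. As one example of a ranking metric such a script might report, here is a minimal precision@k sketch (the metric choice and function name are illustrative; they are not taken from run.sh):

```python
def precision_at_k(scores, labels, k):
    """scores: model scores for one post's candidate questions;
    labels: binary relevance of each candidate; returns precision@k."""
    ranked = sorted(zip(scores, labels), key=lambda pair: pair[0], reverse=True)
    return sum(label for _, label in ranked[:k]) / k
```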