# Personalized-Judge

## Install environment

```bash
conda create --name judge python=3.9.16
conda activate judge
pip install -r requirements.txt
```

## Download Dataset and Insert your API Key

- For Author Profiling (AP) on Public Reddit, please download `synthetic_dataset.jsonl` to the folder `./ap` from this link.
- For Empathetic Conversation (EC), please download the Track 3 and Track 4 data from the link, and the articles from the link. Put these three CSV files under `./ec`.
- For OpinionQA, please download the data to `./opinions_qa`.

Then, update your API keys in `utils.py`.
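For reference, a minimal sketch of what the key configuration in `utils.py` might look like; the variable name `OPENAI_API_KEY` and the use of an environment variable are assumptions, so check the actual file for the exact placeholders it expects:

```python
# Hypothetical sketch of the API key setup in utils.py.
# The variable names and providers used by the repository may differ.
import os

# Prefer reading the key from an environment variable so it is not committed to git;
# fall back to an inline placeholder you replace locally.
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "sk-REPLACE_ME")
```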

## Run Experiments

```bash
python prism.py --num_sample 1000 --prompt_type with_persona --persona_features all_features --model gpt-4o
python opinionqa.py --num_sample 200 --prompt_type with_persona --model gpt-4o
python ap.py --model gpt-4o
python ec.py --num_article 300 --num_pair_per_article 5 --prompt_type all_features --model gpt-4o
```

- `--prompt_type` controls the type of prompt used: `with_persona`, `no_persona`, or `with_persona_with_tie`.
- `--persona_features` controls which persona features are included: `all_features`, `key_features`, or `least_imp_feature`.

More sample experiments can be found in `exp/exp.sh`.
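For illustration only, here is a minimal sketch of how the `--prompt_type` and `--persona_features` flags could be mapped to a judge prompt. The function names, the choice of "key" features, and the prompt wording below are assumptions, not the repository's actual implementation:

```python
# Hypothetical sketch: how the CLI flags might select the judge prompt.
# The real scripts (prism.py, opinionqa.py, ap.py, ec.py) may structure this differently.
import argparse

def build_prompt(question: str, persona: dict, prompt_type: str, persona_features: str) -> str:
    if prompt_type == "no_persona":
        return f"Which response better answers the question?\n{question}"
    # Select which persona attributes to include in the prompt.
    if persona_features == "all_features":
        features = persona
    elif persona_features == "key_features":
        # Assumed example of "key" features; the paper/scripts define the actual set.
        features = {k: persona[k] for k in ("age", "gender") if k in persona}
    else:  # least_imp_feature
        # Assumed: keep only a single least-important feature.
        features = dict(list(persona.items())[-1:])
    persona_text = ", ".join(f"{k}: {v}" for k, v in features.items())
    tie_hint = " You may also declare a tie." if prompt_type == "with_persona_with_tie" else ""
    return f"You are judging on behalf of a user ({persona_text}).{tie_hint}\n{question}"

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--prompt_type", default="with_persona",
                        choices=["with_persona", "no_persona", "with_persona_with_tie"])
    parser.add_argument("--persona_features", default="all_features",
                        choices=["all_features", "key_features", "least_imp_feature"])
    args = parser.parse_args()
    print(build_prompt("Which reply is better?", {"age": "30", "gender": "female"},
                       args.prompt_type, args.persona_features))
```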

## Results and Visualization

All results can be found under `./outputs`. The Jupyter notebook for visualizing the results is `visualization.ipynb`.
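If you want a quick aggregate without opening the notebook, a minimal sketch along these lines may help; it assumes JSONL result files with a `correct` field, which may not match the actual output schema:

```python
# Hypothetical sketch: aggregate accuracy over JSONL result files in ./outputs.
# Adapt the glob pattern and field names to the files the scripts actually produce.
import json
from pathlib import Path

records = []
for path in Path("outputs").glob("*.jsonl"):
    with path.open() as f:
        records.extend(json.loads(line) for line in f if line.strip())

if records:
    # Assumes each record has a boolean/int "correct" field.
    acc = sum(r.get("correct", 0) for r in records) / len(records)
    print(f"{len(records)} records, accuracy = {acc:.3f}")
```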