
Why is result.txt empty after running python3 main.py --do_train --do_eval? #12

Open
yangxia605 opened this issue Dec 5, 2020 · 7 comments


@yangxia605

Hello,
Could you help me understand why result.txt is empty after running python3 main.py --do_train --do_eval? What could be causing this? Thanks!

@Pvvv

Pvvv commented Jan 7, 2021

> Could you help me understand why result.txt is empty after running python3 main.py --do_train --do_eval? Thanks!

Has your problem been solved yet?

@hialoha

hialoha commented Jan 27, 2021

I have the same problem: result.txt is empty, and I haven't been able to fix it.

@Claymore715

I have this problem too.

@Pvvv

Pvvv commented May 24, 2021 via email

@wjx-git

wjx-git commented Jul 16, 2021

Besides installing Perl, you also need to replace the labels in eval/answer_keys.txt with the labels of your own validation set, and you have to modify parts of semeval2010_task8_scorer-v1.2.pl as well. It's quite a hassle; computing the scores with sklearn is easier.
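If you do want to keep the Perl scorer, here is a minimal sketch of regenerating eval/answer_keys.txt from your own validation labels. It assumes the "<id>\t<label>" line format that the scorer (and the snippet below) reads; write_answer_keys and dev_labels are hypothetical names, not part of this repo:

def write_answer_keys(dev_labels, path="eval/answer_keys.txt", start_id=1):
    """Write one "<id>\t<label>" line per validation example.

    The ids must match those the model writes to eval/proposed_answers.txt
    (for the SemEval-2010 Task 8 test set they start at 8001).
    """
    with open(path, mode="w", encoding="utf-8") as f:
        for i, label in enumerate(dev_labels, start=start_id):
            f.write("{}\t{}\n".format(i, label))

# e.g. write_answer_keys(["Cause-Effect(e1,e2)", "Other"], start_id=8001)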

@lixianglong1205

Computing the F1-score with sklearn instead. Corrections welcome.

import os
from sklearn.metrics import f1_score

EVAL_DIR = "eval"


def txt_result_analysis(path):
    """Parse a scorer file of "<id>\t<label>" lines into an {id: label} dict."""
    with open(path, mode="r", encoding="utf-8") as answer_file:
        lines = [line.rstrip() for line in answer_file if line.strip()]
    label_dict = {}
    for line in lines:
        example_id, label = line.split("\t")
        label_dict[example_id] = label
    return label_dict


def official_f1(average="macro"):
    """Compute the F1-score with sklearn instead of the Perl scorer.

    micro: pool TP/FP/FN over all classes, then compute a single F1-score.
    macro: compute one F1-score per class, then average them.
    """
    source_label_dict = txt_result_analysis(os.path.join(EVAL_DIR, "answer_keys.txt"))
    proposed_label_dict = txt_result_analysis(os.path.join(EVAL_DIR, "proposed_answers.txt"))
    source_label, proposed_label = [], []
    for key in source_label_dict:  # align prediction to gold label by example id
        source_label.append(source_label_dict[key])
        proposed_label.append(proposed_label_dict[key])
    return f1_score(source_label, proposed_label, average=average)


if __name__ == "__main__":
    print("macro-averaged F1 = {}%".format(official_f1() * 100))

