This repository has been archived by the owner on Jan 12, 2024. It is now read-only.

how can i balance the model #73

Open
dgl547437235 opened this issue Jul 13, 2020 · 7 comments

Comments

@dgl547437235

Hi samet-akcay,
thanks for your code.
I trained on my own custom data and then wrote my own test code. Detection of NG (defective) samples works very well, but OK (good) samples are often misclassified as NG. How can I solve this problem?

@Johncheng1

Can you share your test code? Thank you! @dgl547437235

@dgl547437235
Author

@Johncheng1 Since it is business code, I can't share it, but here is my logic. My training set of normal samples is A, my test set of normal samples is B, and my test set of defective samples is C. After every fixed number of training batches, I compute the maximum encoding distance over A, then use that distance as a threshold on B and C: any sample whose encoding distance exceeds it is judged NG.
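The logic described above can be sketched roughly as follows. This is my own minimal illustration, not the poster's actual code: the function names are hypothetical, and the Euclidean metric is an assumption (a later comment notes the repository's test code uses Euclidean distance).

```python
import numpy as np

def max_encoding_distance(z_in, z_out):
    """Per-sample distance between the two latent codes, reduced to the
    maximum over the (normal) training set A. z_in / z_out are arrays of
    shape (n_samples, latent_dim)."""
    d = np.linalg.norm(z_in - z_out, axis=1)  # Euclidean distance per sample
    return d.max()

def classify_ng(z_in, z_out, threshold):
    """Label a batch: True = NG (anomalous), False = OK (normal).
    A sample is NG when its encoding distance exceeds the threshold
    computed on the normal training set."""
    d = np.linalg.norm(z_in - z_out, axis=1)
    return d > threshold
```

In this scheme the threshold is recomputed on A every fixed number of batches, then applied unchanged to the test sets B and C.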

@Johncheng1

Johncheng1 commented Jul 30, 2020 via email

@dgl547437235
Author

@Johncheng1 It works well; the true positive rate and the true negative rate can both reach 100% at the same time.

@Baitom

Baitom commented Aug 4, 2020

@Johncheng1 It works well; the true positive rate and the true negative rate can both reach 100% at the same time.

How should I handle the following situation with a model I built myself: the loss of the G model keeps decreasing, while the loss of the D model barely changes, and the model cannot separate normal from defective samples (the distributions of the distance values for normal and defective samples almost completely overlap)?

@captainfffsama

@dgl547437235 Hi, could I ask which distance you use as the maximum encoding distance?
The test code uses the Euclidean distance, but the paper uses the L1 norm for the score. I set digit 0 as the anomaly class and trained on MNIST for 15 epochs; at test time I found that NetG reconstructs 0 very well too, and neither the L1 nor the L2 norm score seems to yield a threshold that cleanly separates normal from anomalous samples.
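One way to compare the two scores mentioned above and search for a separating threshold is sketched below. This is my own illustration under the assumption that per-sample latent codes are available; the function names are hypothetical and the threshold search simply maximizes balanced accuracy on a held-out labelled split.

```python
import numpy as np

def anomaly_scores(z_in, z_out, norm="l1"):
    """Per-sample score between the two latent codes: 'l1' matches the
    L1-norm score from the paper, 'l2' the Euclidean distance used in
    the repository's test code."""
    diff = z_in - z_out
    if norm == "l1":
        return np.abs(diff).sum(axis=1)
    return np.linalg.norm(diff, axis=1)

def best_threshold(scores_normal, scores_anom):
    """Brute-force the score threshold that maximizes balanced accuracy:
    the mean of the true negative rate (normals kept as OK) and the
    true positive rate (anomalies flagged as NG)."""
    candidates = np.sort(np.concatenate([scores_normal, scores_anom]))
    best_t, best_acc = candidates[0], 0.0
    for t in candidates:
        tnr = (scores_normal <= t).mean()
        tpr = (scores_anom > t).mean()
        acc = 0.5 * (tnr + tpr)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```

If the two score distributions overlap almost completely, as described in this thread, no choice of norm or threshold will help; the fix has to come from training (e.g. reweighting the losses, as suggested below in the thread).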

@dgl547437235
Author

@hqabcxyxz Hi, my maximum encoding distance is computed by GNet over all the normal training data it has seen. You can try increasing the weight of the encoding loss; doing so effectively enlarges the encoding distance of anomalous samples.
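The suggestion above can be illustrated with a GANomaly-style weighted generator objective, where the total loss combines adversarial, reconstruction, and encoding terms. The default weight values below are my assumptions for illustration; check the repository's command-line options for the actual flags and defaults.

```python
def generator_loss(l_adv, l_con, l_enc, w_adv=1.0, w_con=50.0, w_enc=1.0):
    """Weighted generator objective:
        L_G = w_adv * L_adv + w_con * L_con + w_enc * L_enc
    Raising w_enc forces the encoder to match latent codes tightly on
    normal data, which tends to enlarge the encoding distance (and thus
    the anomaly score) of samples the model has never seen."""
    return w_adv * l_adv + w_con * l_con + w_enc * l_enc
```

For example, passing a larger `w_enc` (say 10 instead of 1) makes the encoding term dominate the gradient, which is the reweighting the comment above recommends.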
