Question: EfficientAD Pred Scores are very low. #2333

Open
rishabh-akridata opened this issue Oct 1, 2024 · 6 comments

Comments

@rishabh-akridata

Hi,
The prediction scores from the EfficientAD model are coming out very low. Any idea what the reason behind this could be?

[0.5111125114597613,
 0.5133660426906085,
 0.5584840866012671,
 0.5097309616622074,
 0.5100746147399043,
 0.5101457543058581,
 0.5078011069118742,
 0.49999999867182765,
 0.5171874406178997,
 0.5087297667708703,
 0.5009551808039379,
 0.5057654561524445,
 0.4999895069959258,
 0.539257350719084,
 0.49940360237013637,
 1.0,
 0.4994316863525398,
 0.4994300963089074,
 0.5165583225774157,
 0.5048455843669841,
 0.5154843004558268,
 0.5397329320390526,
 0.5101223970673893,
 0.5083544491795497,
 0.5032673477437677,
 0.49942594466358836,
 0.5006433367892856,
 0.5098894506945904,
 0.5126176016486443,
 0.5127730119775831,
 0.5302341765149362,
 0.49942822048686353,
 0.5073944904970427,
 0.4994255681267349,
 0.5114618296351392,
 0.5280324174412317,
 0.49941536510695117,
 0.5089816112649264,
 0.5227437273384679,
 0.5137013902640468,
 0.5014994171221683,
 0.5085513064539936,
 0.5100113379541017,
 0.5132433620694963,
 0.5113751708183177,
 0.4994075496982797,
 0.5080081078810714,
 0.5129591863193106,
 0.4994130655979363,
 0.4994456381387047,
 0.507350383665099,
 0.6742883947936092,
 0.4994323178984793,
 0.519790254606196,
 0.5131760290454198,
 0.5125366592855003,
 0.5056670881684252,
 0.5067094590581136,
 0.5135149821639905,
 1.0,
 0.4994077042089933,
 0.49940675500850656,
 0.6721566180050411,
 0.5094002476392792,
 0.5132569147398781,
 0.49941373322589055,
 0.5078058847904434,
 0.49941520218447955,
 0.49940407564220735,
 0.5204894080587092,
 0.5146554660286121,
 0.5027675370188384,
 0.5133325426428347,
 0.5122029143733313,
 0.5094226716153291,
 0.515083500550321,
 0.4994165772855578,
 0.5002911459985357,
 0.5089317321960678,
 0.5190379724898744,
 0.499425803213236,
 0.5095397871944086,
 0.6957075571253086]

Thanks.

@alexriedel1
Contributor

alexriedel1 commented Oct 3, 2024

What do you mean by low? The scores are normalized to the range [0, 1], with 0 being normal and 1 being abnormal.

@rishabh-akridata
Author

@alexriedel1 The scores are normalized between 0 and 1, but they are not high for the anomalous samples; almost all of them are close to 0.5.

@alexriedel1
Contributor

alexriedel1 commented Oct 3, 2024

Assuming you have trained your model using normal and abnormal images, maybe the difference between normal and abnormal just isn't very large. The training procedure normalizes the outputs so that 0.5 corresponds to the threshold for abnormal. You could try a different algorithm than EfficientAD and see if the problem still exists. If it does, the chance is high that your normal and abnormal images are not different enough. Maybe you can show some of your images?
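
For intuition, here is a minimal sketch of that kind of min-max normalization (an illustration with a hypothetical helper, not anomalib's exact API): the raw score is shifted by the decision threshold, scaled by the observed score range, and clipped to [0, 1], so the threshold lands at 0.5 and scores close to the threshold cluster around 0.5.

```python
import numpy as np

def normalize_scores(scores, threshold, score_min, score_max):
    """Hypothetical helper: map the decision threshold to 0.5 and clip to [0, 1]."""
    normalized = (scores - threshold) / (score_max - score_min) + 0.5
    return np.clip(normalized, 0.0, 1.0)

# Raw image-level anomaly scores that sit close to the threshold...
raw = np.array([0.90, 0.95, 1.10, 1.60])
print(normalize_scores(raw, threshold=1.0, score_min=0.0, score_max=2.0))
# [0.45  0.475 0.55  0.8 ]  ...end up clustered around 0.5 after normalization.
```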

@rishabh-akridata
Author

@alexriedel1 I am getting the following anomaly scores on the MVTec bottle class. Is this reasonable? I tried other methods such as PaDiM, and those scores are much higher, e.g. 0.70 and 0.94 for anomalous images. Is this because PaDiM is a distance-based method and EfficientAD is a reconstruction-based method?
[Screenshot attached: Screenshot 2024-10-03 at 2 49 44 PM]

@alexriedel1
Contributor

How do you obtain these results? Through validation while training or through testing after training? Can you share your code?
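
For reference, testing after training looks roughly like this with the anomalib v1.x API (a sketch under the assumption of the v1.x `Engine`/`MVTec`/`EfficientAd` interfaces, not the poster's actual code; argument names and output keys can differ between versions):

```python
from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import EfficientAd

# Assumption: EfficientAd in anomalib is trained with a batch size of 1.
datamodule = MVTec(category="bottle", train_batch_size=1)
model = EfficientAd()
engine = Engine()

engine.fit(model=model, datamodule=datamodule)    # training + validation
engine.test(model=model, datamodule=datamodule)   # testing after training

# Image-level scores after training; "pred_scores" as the output key is an
# assumption about the v1.x prediction format and may vary by version.
predictions = engine.predict(model=model, datamodule=datamodule)
scores = [float(s) for batch in predictions for s in batch["pred_scores"]]
```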

@watertianyi

@alexriedel1
I used PatchCore and EfficientAD to classify the same dataset. The F1Score and AUROC of PatchCore are much better than EfficientAD's, and PatchCore's heat map clearly highlights the defective area, while EfficientAD's can hardly show it. Both were trained for 100 epochs. I would like to know whether there is room for improvement if I change EfficientAD's epochs to 1000. I don't have mask annotation data. Is there any room for improvement, especially in the defect heat maps and their accuracy?
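
To try a longer schedule, one option (a sketch, assuming the anomalib v1.x `Engine` forwards extra keyword arguments such as `max_epochs` to the underlying PyTorch Lightning Trainer) would be:

```python
from anomalib.engine import Engine

# Assumption: extra keyword arguments are forwarded to the Lightning Trainer,
# so max_epochs controls how long EfficientAD trains.
engine = Engine(max_epochs=1000)
# engine.fit(model=model, datamodule=datamodule)  # same fit/test calls as in the snippet above
```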
