evaluate
David Wood edited this page Apr 7, 2022
Perform classifier evaluation in one of three ways using a set of labeled data:
1) evaluate a pre-trained model/classifier (-file option),
2) use k-fold evaluation on a single classifier (-model option), or
3) use k-fold evaluation on multiple classifiers and rank their performance
(-models option).
Pre-trained models are loaded from the file system or from a named server.
Classifier definitions are pre-defined or defined in a custom JavaScript file.
Test/training sounds are read from the file system.
Required options:
-sounds csv list of (dir|metadata.csv) : specifies one or more metadata.csv
files referencing sounds, or directories containing a metadata.csv.
Options for pre-trained models:
-file file : specifies the file containing the model to evaluate.
Options for local model training and K-fold evaluation:
-train (deprecated) : flag indicating k-fold evaluation of a single model.
-label name : label name to use for training and/or computing a confusion
matrix.
-model spec : specifies the type of model to train and evaluate using k-fold
cross validation. Currently supported values include 'gmm', 'lpnn',
'multilabel', 'cnn', 'dcase' and others. Use the ls-models tool to see all
supported model names. You may define a model in a local JavaScript file
and use it with the 'jsfile:' prefix. See
https://github.ibm.com/asset-development-coc/acoustic-analyzer/wiki/Define-Models-with-JavaScript
for details. Default is ensemble.
-models jsfile : specifies a JavaScript file that creates a map of IClassifier
keyed by an arbitrary string. All classifiers will be evaluated using k-fold
evaluation (as with a single model) to produce a ranked list of classifiers.
For example,
classifiers = { "gmm" : new GMMClassifierBuilder().build(),
                "dcase" : new DCASEClassifierBuilder().build() }
-cp : used with -models option to specify a 'check point' file to avoid
recomputation across interrupted/failed runs. Default is none.
-folds N : the number of folds to use in a K-fold evaluation. Must be at least 2.
Default is 4.
-singleFold : flag used to get a single accuracy value using 1 of N folds.
-firstNPerFold N : use the first N sounds for training and evaluation.
Applies to subwindows if being used. Default is 0.
-mthAfterFirstN M : after the first N clips, use every mth clip.
Default is 0 to use all clips after the first N.
-maxItemsPerFold max : defines the maximum number of clips to use for training
and evaluation. Default is no maximum.
-seed : sets the seed used when shuffling the data prior to fold creation.
The default is a fixed value for repeatability.
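The -folds, -seed and -singleFold options can be pictured with a small sketch.
This is a hypothetical illustration, not the tool's actual code: it seeds a
simple deterministic PRNG (mulberry32), shuffles the sounds, and deals them
round-robin into N folds, so the same seed always yields the same folds.

```javascript
// Hypothetical sketch of seeded shuffling and fold creation (not the tool's code).
function seededShuffle(items, seed) {
  var state = seed >>> 0;
  // mulberry32: a tiny deterministic PRNG so a fixed seed gives repeatable folds.
  function rand() {
    state = (state + 0x6D2B79F5) >>> 0;
    var t = state;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  }
  var a = items.slice();
  // Fisher-Yates shuffle driven by the seeded PRNG.
  for (var i = a.length - 1; i > 0; i--) {
    var j = Math.floor(rand() * (i + 1));
    var tmp = a[i]; a[i] = a[j]; a[j] = tmp;
  }
  return a;
}

function makeFolds(sounds, foldCount, seed) {
  var shuffled = seededShuffle(sounds, seed);
  var folds = [];
  for (var f = 0; f < foldCount; f++) folds.push([]);
  // Deal shuffled sounds round-robin so fold sizes differ by at most one.
  shuffled.forEach(function (s, idx) { folds[idx % foldCount].push(s); });
  return folds;
}
```

With -singleFold, only one of the N folds would be held out and scored instead
of iterating over all of them.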
Additional options for either mode:
-balance : a flag that causes an equal number of sounds to be used for each
label value, using down-sampling for evaluation and training if applicable.
Equivalent to '-balance-with down'.
-balance-with [up|down] : causes the sounds to be balanced using either up-
or down-sampling. Up-sampling currently makes copies of under-represented
sounds to match the label value with the most samples. Down-sampling
randomly removes sounds so that each label value has the minimum
number of sounds found for any label value.
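The balancing idea can be sketched as follows. This is a hypothetical
illustration, not the tool's actual code: real down-sampling removes sounds
randomly, whereas this sketch simply truncates for clarity.

```javascript
// Hypothetical sketch of -balance-with: soundsByLabel maps each label value
// to its list of sounds; mode is "up" or "down".
function balance(soundsByLabel, mode) {
  var sizes = Object.keys(soundsByLabel).map(function (k) {
    return soundsByLabel[k].length;
  });
  // Up-sampling grows every label to the largest count;
  // down-sampling shrinks every label to the smallest count.
  var target = mode === "up" ? Math.max.apply(null, sizes)
                             : Math.min.apply(null, sizes);
  var out = {};
  Object.keys(soundsByLabel).forEach(function (label) {
    var sounds = soundsByLabel[label];
    var picked = [];
    for (var i = 0; i < target; i++) {
      // Cycling through the list makes copies of under-represented sounds
      // when up-sampling; when down-sampling it just takes the first few.
      picked.push(sounds[i % sounds.length]);
    }
    out[label] = picked;
  });
  return out;
}
```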
-clipLen double : splits sound recordings up into clips of the given
number of milliseconds. Only valid with local files. It defaults to
5000 when the -sounds option is provided.
Set to 0 to turn off splitting of local sounds.
-pad (no|zero|duplicate) : when clips are shorter than the requested clip
length, padding can be added to make all clips the same length. Some models
may require this. Zero padding sets the added samples to zero. Duplicate
reuses the sound as many times as necessary to fill the added samples.
Default is no padding.
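How -clipLen and -pad interact can be pictured with a sketch. This is a
hypothetical illustration, not the tool's actual code; clipSamples stands in
for the -clipLen value converted from milliseconds to a sample count.

```javascript
// Hypothetical sketch of splitting a recording into fixed-length clips
// and padding the short trailing clip per the -pad option.
function splitIntoClips(samples, clipSamples, pad) {
  var clips = [];
  for (var start = 0; start < samples.length; start += clipSamples) {
    var clip = samples.slice(start, start + clipSamples);
    if (clip.length < clipSamples) {
      if (pad === "zero") {
        // Zero padding sets the added samples to zero.
        while (clip.length < clipSamples) clip.push(0);
      } else if (pad === "duplicate") {
        // Duplicate padding reuses the clip's own samples as many times
        // as necessary to reach the requested length.
        var original = clip.slice();
        var i = 0;
        while (clip.length < clipSamples) {
          clip.push(original[i % original.length]);
          i++;
        }
      }
      // With "no" padding, the short trailing clip is kept as-is.
    }
    clips.push(clip);
  }
  return clips;
}
```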
-metadata (all|some) : whether to require that all files listed in the
metadata file(s) be present. The default does not require all files.
-cm : flag requesting that the confusion matrix be printed. Requires the
-label option.
-exportCM : compute and write the confusion matrix to a CSV file.
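The confusion matrix that -cm prints and -exportCM writes is a simple tally:
rows are the true label values, columns the predicted ones. A hypothetical
sketch, not the tool's actual code:

```javascript
// Hypothetical sketch of building a confusion matrix from parallel arrays of
// true and predicted label values.
function confusionMatrix(truths, predictions) {
  var matrix = {};
  truths.forEach(function (truth, i) {
    var predicted = predictions[i];
    if (!matrix[truth]) matrix[truth] = {};
    // Increment the count for this (true, predicted) pair.
    matrix[truth][predicted] = (matrix[truth][predicted] || 0) + 1;
  });
  return matrix;
}
```

Off-diagonal counts show which label values the classifier confuses with each
other, which is usually more informative than a single accuracy number.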
Examples (local pre-trained model):
... -sounds mydir -file classifier.cfr
... -sounds m1.csv,m2.csv -model gmm -file classifier.cfr
... -sounds m1.csv -model jsfile:mymodel.js -file classifier.cfr
Examples (k-fold evaluation of single model):
... -sounds mydir -label mylabel
... -sounds m1.csv,m2.csv -label mylabel -model jsfile:model.js
... -sounds mydir1,mydir2 -label mylabel -clipLen 3000 -pad duplicate
... -sounds mydir -label mylabel -folds 2
... -sounds mydir -label mylabel -singleFold -cm
Examples (k-fold evaluation of multiple models):
... -models models.js -sounds mydir -label mylabel
... -models models.js -sounds mydir -label mylabel -folds 3 -balance
... -models models.js -sounds mydir -label mylabel -clipLen 4000 -pad duplicate
An example -models file:
var windowSizes = [40, 80, 120, 240]
var windowHopPercent = [ .5, 1 ]
var featureExtractors = [
    new FFTFeatureExtractor(),
    new MFCCFeatureExtractor(),
    new MFFBFeatureExtractor()
]
var featureProcessors = [
    null,
    new DeltaFeatureProcessor(2, [1,1,1])
]
var algorithms = [
    new GMMClassifierBuilder(),
    new LpDistanceMergeNNClassifierBuilder()
]
// The named variable that exports the models to the evaluate CLI
var classifiers = {}
for (var algIndex = 0; algIndex < algorithms.length; algIndex++) {
    var alg = algorithms[algIndex];
    for (var wsizeIndex = 0; wsizeIndex < windowSizes.length; wsizeIndex++) {
        var wsize = windowSizes[wsizeIndex];
        for (var hopIndex = 0; hopIndex < windowHopPercent.length; hopIndex++) {
            var hop = windowHopPercent[hopIndex] * wsize;
            for (var feIndex = 0; feIndex < featureExtractors.length; feIndex++) {
                var fe = featureExtractors[feIndex];
                for (var fpIndex = 0; fpIndex < featureProcessors.length; fpIndex++) {
                    var fp = featureProcessors[fpIndex];
                    var fge = new FeatureGramExtractor(wsize, hop, fe, fp);
                    alg.setFeatureGramExtractor(fge);
                    var key = "algIndex=" + algIndex + ",wsizeIndex=" + wsizeIndex
                            + ",hopIndex=" + hopIndex + ",feIndex=" + feIndex
                            + ",fpIndex=" + fpIndex;
                    classifiers[key] = alg.build();
                }
            }
        }
    }
}