Evaluation

Use this field to record the evaluation metrics the authors used, along with some properties of the evaluation.


<evaluations> -> [<evaluation>+] | null
<evaluation> -> {
  "metrics": [<metric>+] | null
  "method_evaluation": <method_evaluation> | null  # null means no evaluation of method
}
<metric> -> "error_rate"   # e.g. accuracy, precision, recall, f-1, etc
<metric> -> "classification_loss"   # e.g. log-loss, etc
<metric> -> "error_rate_variation"  # e.g. ROC, AUC, etc
<metric> -> "error_distance"  # e.g. sum of squared error, absolute error, r^2, etc
<metric> -> "clustering_metrics"  # e.g. silhouette, etc
<metric> -> "time"  # time complexity/how much time is takes
<metric> -> "space"  # space complexity/how much space is takes
<method_evaluation> -> "internal"  # e.g. silhouette, metrics that do not depend on labels
<method_evaluation> -> "external" # e.g. accuracy, metrics dependent on labels
<method_evaluation> -> "both"  # both internal and external