[MLLIB] SPARK-2329 Add multi-label evaluation metrics #1270
MultilabelMetrics.scala (new file)
@@ -0,0 +1,156 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.mllib.evaluation

import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext._
/**
 * Evaluator for multilabel classification.
 * @param predictionAndLabels an RDD of (predictions, labels) pairs,
 *                            where both are non-null sets of labels.
 */
class MultilabelMetrics(predictionAndLabels: RDD[(Set[Double], Set[Double])]) {

  private lazy val numDocs: Long = predictionAndLabels.count

  private lazy val numLabels: Long = predictionAndLabels.flatMap { case (_, labels) =>
    labels }.distinct.count

Review comment: @mengxr Could you elaborate on this?
  /**
   * Returns strict accuracy: the fraction of documents whose predicted label set
   * exactly equals the true label set.
   */
  lazy val strictAccuracy: Double = predictionAndLabels.filter { case (predictions, labels) =>
    predictions == labels }.count.toDouble / numDocs
  /**
   * Returns accuracy: the Jaccard similarity between the predicted and true
   * label sets, averaged over all documents.
   */
  lazy val accuracy: Double = predictionAndLabels.map { case (predictions, labels) =>
    labels.intersect(predictions).size.toDouble / labels.union(predictions).size }.sum / numDocs

Review thread:
- Why this fold expression instead of just calling sum?
- The fold operation is made with RDD. I didn't find sum in the RDD interface, which is why I used fold. I will be happy to use sum instead. http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.rdd.RDD
- @srowen @markhamstra Thanks, done! avulanov@79e8476. Could you also review #1155? It is my pull request for multiclass classification measures.
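For context, the two formulations this thread compares are interchangeable: `sum` is provided for numeric RDDs by the implicits imported from `SparkContext._`, and is equivalent to folding with a zero element. A minimal runnable sketch (the object and value names are illustrative, not from the PR):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._

object SumVsFold {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("sum-vs-fold"))
    val perDocScore = sc.parallelize(Seq(0.5, 1.0, 0.0))
    val viaFold = perDocScore.fold(0.0)(_ + _)  // explicit zero element and combiner
    val viaSum = perDocScore.sum                // same result, more idiomatic
    assert(math.abs(viaFold - viaSum) < 1e-9)
    sc.stop()
  }
}
```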
  /**
   * Returns Hamming loss: the size of the symmetric difference between the
   * predicted and true label sets, summed over all documents and normalized
   * by the number of documents times the number of labels.
   */
  lazy val hammingLoss: Double = (predictionAndLabels.map { case (predictions, labels) =>
    labels.diff(predictions).size + predictions.diff(labels).size }.
    sum).toDouble / (numDocs * numLabels)

Review comment: This may be faster: …
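The reviewer's suggested snippet is not preserved above. As a hedged guess at the kind of optimization meant, the symmetric-difference size can be computed from the set sizes and a single intersection, avoiding the two `diff` calls (this references the same class members as the original; it is not code from the PR):

```scala
  // Hypothetical alternative: |A \ B| + |B \ A| = |A| + |B| - 2 * |A ∩ B|,
  // so one intersection replaces two set differences.
  lazy val hammingLoss: Double = predictionAndLabels.map { case (predictions, labels) =>
    labels.size + predictions.size - 2 * labels.intersect(predictions).size
  }.sum / (numDocs * numLabels)
```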
  /**
   * Returns document-based precision averaged over all documents.
   */
  lazy val macroPrecisionDoc: Double = (predictionAndLabels.map { case (predictions, labels) =>
    if (predictions.size > 0) {
      predictions.intersect(labels).size.toDouble / predictions.size
    } else 0
  }.sum) / numDocs
  /**
   * Returns document-based recall averaged over all documents.
   */
  lazy val macroRecallDoc: Double = (predictionAndLabels.map { case (predictions, labels) =>
    labels.intersect(predictions).size.toDouble / labels.size }.sum) / numDocs
  /**
   * Returns document-based F1-measure averaged over all documents.
   */
  lazy val macroF1MeasureDoc: Double = (predictionAndLabels.map { case (predictions, labels) =>
    2.0 * predictions.intersect(labels).size / (predictions.size + labels.size) }.sum) / numDocs
  /**
   * Returns micro-averaged document-based precision
   * (equal to label-based microPrecision).
   */
  lazy val microPrecisionDoc: Double = microPrecisionClass

  /**
   * Returns micro-averaged document-based recall
   * (equal to label-based microRecall).
   */
  lazy val microRecallDoc: Double = microRecallClass

  /**
   * Returns micro-averaged document-based F1-measure
   * (equal to label-based microF1measure).
   */
  lazy val microF1MeasureDoc: Double = microF1MeasureClass
  private lazy val tpPerClass = predictionAndLabels.flatMap { case (predictions, labels) =>
    predictions.intersect(labels).map(category => (category, 1)) }.reduceByKey(_ + _).collectAsMap()

  private lazy val fpPerClass = predictionAndLabels.flatMap { case (predictions, labels) =>
    predictions.diff(labels).map(category => (category, 1)) }.reduceByKey(_ + _).collectAsMap()

  private lazy val fnPerClass = predictionAndLabels.flatMap { case (predictions, labels) =>
    labels.diff(predictions).map(category => (category, 1)) }.reduceByKey(_ + _).collectAsMap()
  /**
   * Returns precision for a given label (category).
   * @param label the label.
   */
  def precisionClass(label: Double) = {
    // getOrElse: a label is absent from the map if it has no true positives
    val tp = tpPerClass.getOrElse(label, 0)
    val fp = fpPerClass.getOrElse(label, 0)
    if (tp + fp == 0) 0 else tp.toDouble / (tp + fp)
  }
  /**
   * Returns recall for a given label (category).
   * @param label the label.
   */
  def recallClass(label: Double) = {
    val tp = tpPerClass.getOrElse(label, 0)
    val fn = fnPerClass.getOrElse(label, 0)
    if (tp + fn == 0) 0 else tp.toDouble / (tp + fn)
  }
  /**
   * Returns F1-measure for a given label (category).
   * @param label the label.
   */
  def f1MeasureClass(label: Double) = {
    val precision = precisionClass(label)
    val recall = recallClass(label)
    if ((precision + recall) == 0) 0 else 2 * precision * recall / (precision + recall)
  }
  private lazy val sumTp = tpPerClass.foldLeft(0L) { case (sum, (_, tp)) => sum + tp }

Review comments: space before … Please update other places as well.

  private lazy val sumFpClass = fpPerClass.foldLeft(0L) { case (sum, (_, fp)) => sum + fp }
  private lazy val sumFnClass = fnPerClass.foldLeft(0L) { case (sum, (_, fn)) => sum + fn }
  /**
   * Returns micro-averaged label-based precision.
   */
  lazy val microPrecisionClass = {
    val sumFp = fpPerClass.foldLeft(0L) { case (sumFp, (_, fp)) => sumFp + fp }
    sumTp.toDouble / (sumTp + sumFp)
  }
  /**
   * Returns micro-averaged label-based recall.
   */
  lazy val microRecallClass = {
    val sumFn = fnPerClass.foldLeft(0.0) { case (sumFn, (_, fn)) => sumFn + fn }
    sumTp.toDouble / (sumTp + sumFn)
  }
  /**
   * Returns micro-averaged label-based F1-measure.
   */
  lazy val microF1MeasureClass = 2.0 * sumTp / (2 * sumTp + sumFnClass + sumFpClass)
}
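For reference, a minimal usage sketch of the class above (assuming a live SparkContext named `sc`; the values are illustrative, mirroring the test suite below):

```scala
val predictionAndLabels = sc.parallelize(Seq(
  (Set(0.0, 1.0), Set(0.0, 2.0)),  // (predicted labels, true labels)
  (Set(2.0), Set(2.0))), 2)
val metrics = new MultilabelMetrics(predictionAndLabels)
println(metrics.accuracy)             // mean per-document Jaccard similarity
println(metrics.hammingLoss)          // fraction of misclassified (doc, label) pairs
println(metrics.microF1MeasureClass)  // micro-averaged label-based F1
```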
MultilabelMetricsSuite.scala (new file)
@@ -0,0 +1,102 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.mllib.evaluation

import org.scalatest.FunSuite

import org.apache.spark.mllib.util.LocalSparkContext
import org.apache.spark.rdd.RDD
class MultilabelMetricsSuite extends FunSuite with LocalSparkContext {
  test("Multilabel evaluation metrics") {
    /*
     * Documents true labels (5x class0, 3x class1, 4x class2):
     * doc 0 - predict 0, 1 - class 0, 2
     * doc 1 - predict 0, 2 - class 0, 1
     * doc 2 - predict none - class 0
     * doc 3 - predict 2 - class 2
     * doc 4 - predict 2, 0 - class 2, 0
     * doc 5 - predict 0, 1, 2 - class 0, 1
     * doc 6 - predict 1 - class 1, 2
     *
     * predicted classes
     * class 0 - doc 0, 1, 4, 5 (total 4)
     * class 1 - doc 0, 5, 6 (total 3)
     * class 2 - doc 1, 3, 4, 5 (total 4)
     *
     * true classes
     * class 0 - doc 0, 1, 2, 4, 5 (total 5)
     * class 1 - doc 1, 5, 6 (total 3)
     * class 2 - doc 0, 3, 4, 6 (total 4)
     */
    val scoreAndLabels: RDD[(Set[Double], Set[Double])] = sc.parallelize(
      Seq((Set(0.0, 1.0), Set(0.0, 2.0)),
        (Set(0.0, 2.0), Set(0.0, 1.0)),
        (Set(), Set(0.0)),
        (Set(2.0), Set(2.0)),
        (Set(2.0, 0.0), Set(2.0, 0.0)),
        (Set(0.0, 1.0, 2.0), Set(0.0, 1.0)),
        (Set(1.0), Set(1.0, 2.0))), 2)
    val metrics = new MultilabelMetrics(scoreAndLabels)
    val delta = 0.00001
    val precision0 = 4.0 / (4 + 0)
    val precision1 = 2.0 / (2 + 1)
    val precision2 = 2.0 / (2 + 2)
    val recall0 = 4.0 / (4 + 1)
    val recall1 = 2.0 / (2 + 1)
    val recall2 = 2.0 / (2 + 2)
    val f1measure0 = 2 * precision0 * recall0 / (precision0 + recall0)
    val f1measure1 = 2 * precision1 * recall1 / (precision1 + recall1)
    val f1measure2 = 2 * precision2 * recall2 / (precision2 + recall2)
    val sumTp = 4 + 2 + 2
    assert(sumTp == (1 + 1 + 0 + 1 + 2 + 2 + 1))
    val microPrecisionClass = sumTp.toDouble / (4 + 0 + 2 + 1 + 2 + 2)
    val microRecallClass = sumTp.toDouble / (4 + 1 + 2 + 1 + 2 + 2)
    val microF1MeasureClass = 2.0 * sumTp.toDouble /
      (2 * sumTp.toDouble + (1 + 1 + 2) + (0 + 1 + 2))
    val macroPrecisionDoc = 1.0 / 7 *
      (1.0 / 2 + 1.0 / 2 + 0 + 1.0 / 1 + 2.0 / 2 + 2.0 / 3 + 1.0 / 1.0)
    val macroRecallDoc = 1.0 / 7 *
      (1.0 / 2 + 1.0 / 2 + 0 / 1 + 1.0 / 1 + 2.0 / 2 + 2.0 / 2 + 1.0 / 2)
    val macroF1MeasureDoc = (1.0 / 7) *
      2 * (1.0 / (2 + 2) + 1.0 / (2 + 2) + 0 + 1.0 / (1 + 1) +
        2.0 / (2 + 2) + 2.0 / (3 + 2) + 1.0 / (1 + 2))
    val hammingLoss = (1.0 / (7 * 3)) * (2 + 2 + 1 + 0 + 0 + 1 + 1)
    val strictAccuracy = 2.0 / 7
    val accuracy = 1.0 / 7 * (1.0 / 3 + 1.0 / 3 + 0 + 1.0 / 1 + 2.0 / 2 + 2.0 / 3 + 1.0 / 2)
    assert(math.abs(metrics.precisionClass(0.0) - precision0) < delta)
    assert(math.abs(metrics.precisionClass(1.0) - precision1) < delta)
    assert(math.abs(metrics.precisionClass(2.0) - precision2) < delta)
    assert(math.abs(metrics.recallClass(0.0) - recall0) < delta)
    assert(math.abs(metrics.recallClass(1.0) - recall1) < delta)
    assert(math.abs(metrics.recallClass(2.0) - recall2) < delta)
    assert(math.abs(metrics.f1MeasureClass(0.0) - f1measure0) < delta)
    assert(math.abs(metrics.f1MeasureClass(1.0) - f1measure1) < delta)
    assert(math.abs(metrics.f1MeasureClass(2.0) - f1measure2) < delta)
    assert(math.abs(metrics.microPrecisionClass - microPrecisionClass) < delta)
    assert(math.abs(metrics.microRecallClass - microRecallClass) < delta)
    assert(math.abs(metrics.microF1MeasureClass - microF1MeasureClass) < delta)
    assert(math.abs(metrics.macroPrecisionDoc - macroPrecisionDoc) < delta)
    assert(math.abs(metrics.macroRecallDoc - macroRecallDoc) < delta)
    assert(math.abs(metrics.macroF1MeasureDoc - macroF1MeasureDoc) < delta)
    assert(math.abs(metrics.hammingLoss - hammingLoss) < delta)
    assert(math.abs(metrics.strictAccuracy - strictAccuracy) < delta)
    assert(math.abs(metrics.accuracy - accuracy) < delta)
  }
}
Review comment: Another feasible representation of predictions/labels is mllib.linalg.Vector. It's basically a Vector of +1s and -1s, either dense or sparse. So it would be great if we add another function to do the transformation. It's up to you. Transforming the data outside this evaluation module is also OK. : )
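As a hedged illustration of that suggestion (the helper name and the assumption that labels are integer indices in [0, numLabels) are mine, not part of the PR), a label set could be encoded as a dense ±1 vector like this:

```scala
import org.apache.spark.mllib.linalg.{Vector, Vectors}

// Hypothetical helper: encode a label set as a dense vector of +1.0/-1.0,
// assuming labels are integer indices in [0, numLabels).
def toPlusMinusVector(labels: Set[Double], numLabels: Int): Vector = {
  Vectors.dense(Array.tabulate(numLabels) { i =>
    if (labels.contains(i.toDouble)) 1.0 else -1.0
  })
}

// e.g. toPlusMinusVector(Set(0.0, 2.0), 3) produces [1.0, -1.0, 1.0]
```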
Review thread:
- `RDD[(Set[Double], Set[Double])]` may be hard for Java users. We can ask users to input `RDD[(Array[Double], Array[Double])]`, requiring that the labels are ordered. It is easier for Java users and faster to compute intersection and other set operations.
- @mengxr We need to ensure that they don't contain repeating elements as well. It should be an optional constructor, I think.
- Both `Set` and `Double` are Scala types. It is very hard for Java users to construct such RDDs. Also, the input labels and output predictions are usually stored as `Array[Double]`. Shall we change the input to `RDD[(Array[Double], Array[Double])]` and internally convert it to `RDD[(Set[Double], Set[Double])]` and cache? We can put a contract that both labels and predictions are unique and ordered within a single instance. We don't need it if we use `Set` internally. But later we can switch to an `Array[Double]`-based solution for speed, because those are very small arrays.
- @mengxr Can we have `RDD[(java.util.HashSet[Double], java.util.HashSet[Double])]` as an optional constructor? Internally, we will use `scala.collection.JavaConversions.asScalaSet`.
- @avulanov Let's think about what is more natural for the input data to a multi-label classifier and the output from the model it produces. They should match the input type here, so we can chain them easily. If we use either a Java or Scala `Set`, we are going to have compatibility issues on the other. Also, a set stores small objects, which increases GC pressure. These are the reasons I recommend using `Array[Double]`.
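To make the speed argument concrete, a minimal sketch (my own illustration, not code from the PR) of counting the intersection of two sorted, duplicate-free `Array[Double]`s with a two-pointer scan, avoiding `Set` allocation entirely:

```scala
// Hypothetical helper: intersection size of two sorted, duplicate-free arrays.
// A two-pointer scan runs in O(m + n) with no boxing or Set allocation.
def intersectionSize(a: Array[Double], b: Array[Double]): Int = {
  var i = 0
  var j = 0
  var count = 0
  while (i < a.length && j < b.length) {
    if (a(i) == b(j)) { count += 1; i += 1; j += 1 }
    else if (a(i) < b(j)) i += 1
    else j += 1
  }
  count
}

// e.g. intersectionSize(Array(0.0, 1.0), Array(0.0, 2.0)) == 1
```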