Denominator in Average Precision at K
Question 1:
I've seen slightly different definitions for "Average Precision @ K" in the context of recommender system measures. One definition I've seen is:
AP@K is the sum of precision@i for each position i ≤ K that holds a relevant item, divided by the total number of relevant items in the top K results.
But according to the ml_metrics implementation, the denominator is the minimum of k and the number of relevant items.
Which is correct? (Or how do you decide which one to choose?)
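To make the two variants concrete, here is a minimal Python sketch of the computation (my own paraphrase, not the exact ml_metrics code; it assumes `predicted` contains no duplicates):

```python
def apk(actual, predicted, k=10):
    """Average precision at k, assuming no duplicate predictions.

    Accumulates precision@i at every (1-based) rank i where a
    relevant item appears, then normalizes by the denominator.
    """
    predicted = predicted[:k]
    score, hits = 0.0, 0
    for i, p in enumerate(predicted, start=1):
        if p in actual:
            hits += 1
            score += hits / i          # precision@i at this hit
    if not actual:
        return 0.0                     # the edge case of Question 2
    # Definition 1 would divide by `hits` (relevant items in the top k);
    # ml_metrics divides by min(len(actual), k) instead:
    return score / min(len(actual), k)
```

For example, `apk(actual=[1, 2, 3], predicted=[1, 4, 2], k=3)` hits at ranks 1 and 3, giving `(1/1 + 2/3) / min(3, 3) ≈ 0.556`.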
Question 2:
What do you do if the total number of relevant items is 0? The denominator would then be zero. In that case, do you set AP@K to 0, to 1, or to something else? Depending on whether I choose 0 or 1, it has a drastically different effect on Mean Average Precision (the average of AP@K over all queries), as the sketch below shows.
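To illustrate why the choice matters, here is a toy MAP computation reusing the `apk()` sketch above (the `mapk` helper, its `empty_value` parameter, and the numbers are mine, for illustration only):

```python
def mapk(actuals, predicteds, k=10, empty_value=0.0):
    """Mean AP@K over queries; `empty_value` stands in for queries
    that have no relevant items at all."""
    scores = [
        empty_value if not actual else apk(actual, predicted, k)
        for actual, predicted in zip(actuals, predicteds)
    ]
    return sum(scores) / len(scores)

# Two queries; the second has no relevant items:
actuals    = [[1, 2], []]
predicteds = [[1, 3, 2], [7, 8, 9]]
print(mapk(actuals, predicteds, k=3, empty_value=0.0))  # 0.4167
print(mapk(actuals, predicteds, k=3, empty_value=1.0))  # 0.9167
```

With a single empty query out of two, MAP more than doubles depending on the convention, which is the drastic effect I mean.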
Question 3:
How is AP@K different from average precision in the sense of the area under the precision-recall curve (one implementation being sklearn's)? I get that with AP@K you limit the number of retrieved items considered, but is that the only difference?
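For concreteness, here is how the two seem to differ on a toy example (the labels and scores are made up; the sklearn function used is `sklearn.metrics.average_precision_score`):

```python
from sklearn.metrics import average_precision_score

# Binary relevance labels and model scores for five items (made-up data):
y_true  = [1, 0, 1, 0, 0]
y_score = [0.9, 0.8, 0.7, 0.4, 0.2]

# sklearn sums precision * recall-increment over the *full* ranking,
# which amounts to normalizing by all relevant items (2 here):
print(average_precision_score(y_true, y_score))   # 0.8333...

# AP@2 only sees the top two scored items (items 0 and 1); item 0 is
# the only relevant one there, so with the min(k, #relevant) denominator:
ap_at_2 = (1 / 1) / min(2, 2)
print(ap_at_2)                                    # 0.5
```

If I read it right, besides the cutoff the normalization also differs: the curve-based AP divides by the total number of relevant items, while AP@K divides by min(K, number of relevant items) or by the number of relevant items retrieved, depending on which definition from Question 1 you pick.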
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
