I want to point out some strange behavior in how the implicit package calculates evaluation metrics. In particular, precision@k should decrease as k increases. However, for the dataset we tested, precision@k calculated via `precision_at_k` increases with k. Upon checking the code, the cause is line 444 (`pr_div += fmin(K, likes.size())`) in `ranking_metrics_at_k`: if the number of items the user likes is smaller than k, the code effectively truncates the denominator to the number of liked items, while the numerator, the number of relevant/true recommended items, can still increase as k increases.
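To illustrate the effect, here is a small self-contained sketch with made-up data (not the library's actual Cython code): the standard precision@k divides by k, while the truncated variant divides by `min(k, len(liked))`, which is what `pr_div += fmin(K, likes.size())` effectively does per user.

```python
# Hypothetical data: one user with 3 liked items, spread through the ranking.
liked = {10, 11, 12}                                # items the user actually likes
ranked = [10, 20, 11, 21, 22, 12, 23, 24, 25, 26]   # model's ranked recommendations

for k in (2, 4, 6, 10):
    hits = len(liked & set(ranked[:k]))        # relevant items in the top-k
    standard = hits / k                        # non-increasing in k
    truncated = hits / min(k, len(liked))      # grows with k once k > len(liked)
    print(f"k={k:2d}: standard={standard:.2f}  truncated={truncated:.2f}")
```

Running this prints a standard precision@k that stays flat and then drops (0.50, 0.50, 0.50, 0.30), while the truncated version climbs to 1.00 as k grows, which matches the behavior we observed.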
I don't understand why precision@k is calculated this way in this package; I have not found any other reference for this formula. Other packages tested on our dataset produce precision@k that decreases with k. If there is a reason or reference for this approach, please share it.
If this is indeed an error, there are other occurrences of this truncation here and here that should be fixed too, as they introduce errors in the calculation of ndcg@k.
I have also found that the calculation of precision@k is incorrect. While trying to figure out what was wrong, I noticed that the p@k results were identical to the recall@k values I computed myself, so I think the implicit library's p@k may actually be returning recall@k instead of precision.
It's because of `fmin(K, likes.size())`. If the number of interactions per user in the test set is low enough relative to k, the denominator in line 471 will often be the same as in the recall formula, so in that case this precision implementation is identical to recall.
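A minimal sketch to make that concrete, with made-up test data rather than the library's actual implementation: when every test user has fewer liked items than k, `min(k, len(likes)) == len(likes)`, so the truncated precision denominator collapses into recall's denominator and the two metrics coincide.

```python
# Hypothetical test set: two users, each with fewer liked items than k.
k = 10
test_likes = {0: {1, 2, 3}, 1: {4, 5}}
top_k_recs = {0: [1, 7, 2, 8, 9, 10, 11, 12, 13, 14],
              1: [4, 5, 6, 7, 8, 9, 10, 11, 12, 13]}

relevant = pr_div = total_likes = 0
for user, likes in test_likes.items():
    hits = len(likes & set(top_k_recs[user][:k]))
    relevant += hits
    pr_div += min(k, len(likes))   # the truncated precision denominator
    total_likes += len(likes)      # recall's denominator

print("truncated precision@k:", relevant / pr_div)       # 4/5 = 0.8
print("recall@k:             ", relevant / total_likes)   # 4/5 = 0.8, identical
```

Both numbers come out equal (0.8 here) because every user's `len(likes)` is below k, which is exactly the situation described above.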