Design new metric for recommender system performance #70

@ProbablyFaiz

Description

Currently, we use the variant of recall described in Huang et al. (2021):

> Our initial results have been very promising. The primary metric we are currently using is recall: the percentage of documents defined as relevant that we successfully recommend. We adopt the measurement approach taken by Huang et al. (2021):
>
> 1. We select a random opinion in the federal corpus and remove it from our network (as if the opinion never existed).
> 2. We input all but one of the opinion's neighbors into the recommendation software.
> 3. We measure whether the omitted neighbor was the top recommendation, in the top 5 recommendations, or in the top 20 recommendations.
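For concreteness, the leave-one-out protocol above can be sketched as follows. This is a minimal sketch, not the project's actual evaluation code: the `recommend` callable, the graph-as-dict representation, and all parameter names are assumptions introduced here for illustration.

```python
import random

def recall_at_k(graph, recommend, ks=(1, 5, 20), n_trials=100, seed=0):
    """Leave-one-out recall@k over a citation network.

    graph:     dict mapping opinion id -> set of neighbor opinion ids
               (hypothetical representation of the federal corpus network).
    recommend: hypothetical callable (input_ids, exclude) -> ranked list
               of recommended opinion ids.
    """
    rng = random.Random(seed)
    hits = {k: 0 for k in ks}
    # Only opinions with >= 2 neighbors can both hold one out and
    # still have at least one neighbor to feed the recommender.
    eligible = [o for o, nbrs in graph.items() if len(nbrs) >= 2]
    for _ in range(n_trials):
        opinion = rng.choice(eligible)           # step 1: random opinion,
        neighbors = sorted(graph[opinion])       # removed from the network
        held_out = rng.choice(neighbors)         # the omitted neighbor
        inputs = [n for n in neighbors if n != held_out]  # step 2
        ranked = recommend(inputs, exclude={opinion})
        for k in ks:                             # step 3: top-1 / top-5 / top-20
            if held_out in ranked[:k]:
                hits[k] += 1
    return {k: hits[k] / n_trials for k in ks}
```

A perfect recommender would score 1.0 at every k; reporting the three cutoffs together shows how far down the ranking the omitted neighbor tends to appear.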

This is alright, but it leaves a lot to be desired with respect to a fuller understanding of our models' performance and their ability to surface useful cases. We've got some other ideas (to be documented at a later time) for metrics that might better serve us.

Labels: feature (New feature or request)