From abe2f6aad848c7657faacf6480c262547a9e6fc1 Mon Sep 17 00:00:00 2001
From: 0xflotus <0xflotus@gmail.com>
Date: Mon, 17 Sep 2018 18:01:12 +0200
Subject: [PATCH] updated 'usecase'

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 5797db2..d4009c3 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@ Since images and user generated content dominate the internet today, filtering n
 
 * The network takes in an image and gives output a probability (score between 0-1) which can be used to filter not suitable for work images. Scores < 0.2 indicate that the image is likely to be safe with high probability. Scores > 0.8 indicate that the image is highly probable to be NSFW. Scores in middle range may be binned for different NSFW levels.
 * Depending on the dataset, usecase and types of images, we advise developers to choose suitable thresholds. Due to difficult nature of problem, there will be errors, which depend on use-cases / definition / tolerance of NSFW. Ideally developers should create an evaluation set according to the definition of what is safe for their application, then fit a [ROC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) curve to choose a suitable threshold if they are using the model as it is.
-* ***Results can be improved by [fine-tuning](http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html)*** the model for your dataset/ uscase / definition of NSFW. We do not provide any guarantees of accuracy of results. Please read the disclaimer below.
+* ***Results can be improved by [fine-tuning](http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html)*** the model for your dataset/ usecase / definition of NSFW. We do not provide any guarantees of accuracy of results. Please read the disclaimer below.
 * Using human moderation for edge cases in combination with the machine learned solution will help improve performance.
 
 #### Description of model
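
The README text touched by this hunk recommends building a labeled evaluation set and fitting a ROC curve to pick a score threshold. Below is a minimal sketch of what that selection step might look like, assuming scikit-learn is available; the `y_true`/`y_score` arrays and the 10% false-positive budget are illustrative placeholders, not anything shipped with the patched project.

```python
# Hypothetical sketch of ROC-based threshold selection for the NSFW scores,
# as suggested in the README. All data and the FPR budget below are made up.
import numpy as np
from sklearn.metrics import roc_curve

# y_true: 1 = NSFW, 0 = safe, labeled according to *your* definition of NSFW.
# y_score: the model's output scores in [0, 1] for the same evaluation images.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.05, 0.12, 0.91, 0.78, 0.33, 0.64, 0.55, 0.88])

fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Keep only operating points whose false-positive rate stays under a budget
# chosen for the application (10% here, purely as an example), then take the
# one with the highest recall among them.
max_fpr = 0.10
ok = fpr <= max_fpr
best = np.argmax(tpr[ok])
chosen_threshold = thresholds[ok][best]
print(f"flag images with score >= {chosen_threshold:.2f}")
```

A stricter or looser `max_fpr` (or optimizing a different point on the curve) would follow the same pattern; the point of the README's advice is that the cutoff should come from an evaluation set built around the application's own definition of NSFW, not from the generic 0.2/0.8 guideposts.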