Hello Pieter, I have another inquiry that would require your expertise.
Recently, I conducted an experiment comparing the performance of two programs: the original Detectron2 (without active learning) and Boxal. For Boxal, I set the initial training set to include all of the annotated data and loops=0. In short, I tried to keep all parameters identical between the two programs.
However, despite this effort, I observed a significant discrepancy between the results from Detectron2 and Boxal: the Detectron2 results turned out to be significantly better than those from Boxal.
I would greatly appreciate your insights on the potential reasons behind this discrepancy. For clarity, I have included a summary of the parameters used and the corresponding loss outputs below.
Parameters:
Train set size: 360
Validation set size: 45
Test set size: 45
use_initial_train_dir: True # only for Boxal
network_config: "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"
pretrained_weights: "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"
classes: ['damaged']
transfer_learning_on_previous_models: True
learning_rate: 0.0005
warmup_iterations: 1000
train_iterations_base: 10000
train_iterations_step_size: 1000
step_image_number: 500
eval_period: 100
checkpoint_period: -1
weight_decay: 0.0001
learning_policy: 'steps_with_decay'
step_ratios: [0.5, 0.8]
gamma: 0.1
train_batch_size: 2
roi_heads_batch_size_per_img: 128
confidence_threshold: 0.5
nms_threshold: 0.05
strategy: 'uncertainty' # only for Boxal
mode: 'mean' # only for Boxal
initial_datasize: 360 # only for Boxal
pool_size: 1 # only for Boxal
loops: 0 # only for Boxal
dropout_probability: 0.25 # only for Boxal
mcd_iterations: 10 # only for Boxal
iou_thres: 0.95 # only for Boxal
incremental_learning: False # only for Boxal
sampling_percentage_per_subset: 10 # only for Boxal
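
For reference, this is roughly how I understand the shared parameters would map onto a plain Detectron2 config (a minimal sketch using the standard detectron2 config API; the Boxal-only entries have no Detectron2 counterpart, and whether Boxal applies these values identically internally is exactly what I am unsure about):

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(
    model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")

cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1          # classes: ['damaged']
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # confidence_threshold
cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST = 0.05   # nms_threshold

cfg.SOLVER.BASE_LR = 0.0005
cfg.SOLVER.WARMUP_ITERS = 1000
cfg.SOLVER.MAX_ITER = 10000                  # train_iterations_base
cfg.SOLVER.STEPS = (5000, 8000)              # step_ratios [0.5, 0.8] of 10000
cfg.SOLVER.GAMMA = 0.1
cfg.SOLVER.IMS_PER_BATCH = 2                 # train_batch_size
cfg.SOLVER.WEIGHT_DECAY = 0.0001
cfg.TEST.EVAL_PERIOD = 100
```

If any of these mappings differ from what Boxal sets internally (for example the dropout layers Boxal inserts for Monte-Carlo sampling), that alone might explain part of the gap.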
Detectron2:
[loss output image not shown]

Boxal:
[loss output image not shown]
