Question about the result of online adaptation with "L2AWad" #9

Description

@Tiam2Y

Hello! Thanks for the great work! @AlessioTonioni
I'm back again with some questions, this time about the results of "Learning to Adapt".

I used 12 Synthia video sequences as the dataset and meta-trained the network you provided with the following parameters (a toy sketch of how I understand the meta-learning ones follows the list):

--dataset=./meta_datasets.csv
--batchSize=4
--weights=./pretrained_weight/weights.ckpt	# download from the link you provided
--numStep=40000
--lr=0.0001
--alpha=0.00001
--adaptationSteps=3
--metaAlgorithm=L2AWad
--unSupervisedMeta
--maskedGT
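
For context, here is a toy, first-order MAML-style sketch of how I understand these parameters interacting. This is my own pseudocode, not the repository's code: I'm assuming `--alpha` is the inner/adaptation learning rate, `--lr` is the outer/meta learning rate, and `--adaptationSteps` is the number of inner gradient steps.

```python
# Toy, first-order MAML-style sketch (my understanding, NOT the repo's code).
# A 1-D "network" with a single weight w adapts to per-sequence targets.

def loss(w, target):
    return (w - target) ** 2

def grad(w, target):
    # Analytic gradient of the toy quadratic loss above.
    return 2.0 * (w - target)

def meta_step(w, sequence, alpha=1e-5, lr=1e-4, adaptation_steps=3):
    # Inner loop: adapt a copy of the weight on the first frames, taking
    # `adaptation_steps` gradient steps with learning rate `alpha`
    # (what I assume --adaptationSteps and --alpha control).
    w_adapted = w
    for target in sequence[:adaptation_steps]:
        w_adapted -= alpha * grad(w_adapted, target)
    # Outer loop: update the *initial* weight so the adapted model does well
    # on the remaining frames (first-order approximation), with the meta
    # learning rate `lr` (what I assume --lr controls).
    meta_grad = sum(grad(w_adapted, t) for t in sequence[adaptation_steps:])
    return w - lr * meta_grad

w = 0.0
for seq in ([1.0, 1.1, 0.9, 1.0, 1.05], [2.0, 2.1, 1.9, 2.0, 2.05]):
    w = meta_step(w, seq)
```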

After training, I used these weights to test online adaptation on video sequences from DrivingStereo and the KITTI raw data. I found that the predictions for the first few frames were extremely poor (the D1 error rate is close to 99%), but after 100 to 200 frames D1 quickly drops below 10%.
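
In case it matters, this is how I compute D1 per frame; it's a minimal NumPy sketch of the standard KITTI 2015 outlier definition (`preds`/`gts` are placeholders for my per-frame predicted and ground-truth disparities):

```python
import numpy as np

def d1_error(pred_disp, gt_disp):
    # KITTI 2015 D1 definition: a pixel is an outlier when its disparity
    # error is BOTH > 3 px AND > 5% of the ground-truth disparity.
    valid = gt_disp > 0                      # 0 = pixel without ground truth
    err = np.abs(pred_disp - gt_disp)
    outlier = (err > 3.0) & (err > 0.05 * gt_disp)
    return 100.0 * np.mean(outlier[valid])   # percentage over valid pixels

# Per-frame curve over an online-adaptation run:
# d1_curve = [d1_error(p, g) for p, g in zip(preds, gts)]
```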

I would like to ask:

  1. Is it normal for the initial predictions to be so poor?
  2. Is there anything wrong with my training setup?
  3. Are these results representative of your work, i.e. fair to use for comparison?

Sorry for the troublesome questions; I'd appreciate your answers!
