
Fine-tuning stage process and adaptive sample filtering #8

@dogyoonlee

Description


Hello,

I'm implementing your method in pure PyTorch, and it works up to the fine-tuning stage, including the sample importance learning.

However, I have some additional questions about the adaptive sampling and the fine-tuning stage.

Could you point me to where exactly the adaptive sampling is applied during the fine-tuning stage?

I implemented the adaptive sampling based on the learned sample importance using a top-k algorithm, after masking the importance values that exceed the adaptive threshold.

Because of the batch-wise data format, my algorithm sets the remaining importance values to zero in the cases you mention in the paper.
[image attached]
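For reference, the batch-wise selection described above could be sketched as follows. This is only a sketch under my own assumptions (an `importance` tensor of shape `(batch, n_samples)`, a scalar threshold `tau`, and at most `k` kept samples per ray); the function name and interface are hypothetical, not the authors' code:

```python
import torch

def adaptive_topk_mask(importance: torch.Tensor, k: int, tau: float) -> torch.Tensor:
    """Keep at most k samples per ray whose importance exceeds tau.

    importance: (batch, n_samples) learned per-sample importance values.
    Returns a boolean mask; unselected entries are treated as zero
    importance downstream.
    """
    # Zero out samples at or below the adaptive threshold.
    thresholded = torch.where(importance > tau, importance,
                              torch.zeros_like(importance))
    # Batch-wise top-k always returns k indices per ray, even when
    # fewer than k samples exceed tau, so padded picks must be dropped.
    _, topk_idx = thresholded.topk(k, dim=-1)
    mask = torch.zeros_like(importance, dtype=torch.bool)
    mask.scatter_(-1, topk_idx, True)
    # Discard padded picks whose thresholded importance is zero.
    mask &= thresholded > 0
    return mask
```

This is how I handle the case where a ray has fewer than `k` samples above the threshold: the rest of the importance is effectively set to zero, as described above.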

In addition, I'm confused about the actual meaning of this sentence in the paper (Section 3.2, Fine-tuning):

> Note that this phase results in separate shading networks for each maximum sample count, while all rely on the same sampling network.
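My current reading of that sentence, sketched as code so you can confirm or correct it: one sampling network is shared, and a separate shading network is fine-tuned per maximum sample count. All module names, layer sizes, and the `(2, 4, 8, 16)` counts here are my own assumptions, not taken from your implementation:

```python
import torch
import torch.nn as nn

class AdaptiveModel(nn.Module):
    """Hypothetical sketch: shared sampling network, one shading
    network per maximum sample count k."""

    def __init__(self, max_sample_counts=(2, 4, 8, 16), feat_dim=64):
        super().__init__()
        # Single sampling network, shared across all sample counts.
        self.sampling_net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))
        # Independent shading network fine-tuned per max sample count.
        self.shading_nets = nn.ModuleDict({
            str(k): nn.Sequential(
                nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 4))
            for k in max_sample_counts
        })

    def forward(self, feats: torch.Tensor, k: int):
        importance = self.sampling_net(feats)       # same weights for every k
        rgba = self.shading_nets[str(k)](feats)     # k-specific weights
        return importance, rgba
```

Is this the intended structure, i.e. only the shading weights differ between the per-count models while the sampling network's weights are frozen/shared?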

However, it does not work, and I'm still having a hard time fixing it.
Could you explain this point in detail?

(I've attached my implementation code below to clarify my approach.)
[image attached]
