This repository was archived by the owner on Oct 31, 2023. It is now read-only.

Working with Multiview Data #9

@MatthewGong

Description
Hello,

Thank you for the great repo.

I've been trying to use this on a multi-view dataset, and I'm having trouble getting the network to converge on good results.

The data I'm training on comes from roughly 20–30 synced cameras (depending on how many COLMAP registers during SfM) set up semi-evenly around a room. The cameras are static, but the scene is dynamic, albeit slow-moving. I modified the data loading to take a JSON that contains frames from each camera. When building the training set, I assumed that the order in which images are loaded is the order in which the model expects frames in time. Frames are picked sequentially from each camera, e.g. if there are 30 cameras and 150 frames, camera 1 will contribute frames 1, 31, 61, 91, etc.
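To make the ordering assumption concrete, here is a minimal sketch (the function name and signature are hypothetical, not from the repo) of which frame indices each camera contributes under this interleaved scheme:

```python
# Hypothetical sketch of the frame-ordering assumption described above:
# with num_cameras cameras and num_frames total frames, camera c
# (1-indexed) contributes frames c, c + num_cameras, c + 2*num_cameras, ...

def frames_for_camera(camera, num_cameras, num_frames):
    """Return the 1-indexed frame numbers contributed by one camera."""
    return list(range(camera, num_frames + 1, num_cameras))

# e.g. with 30 cameras and 150 frames, camera 1 contributes
# frames 1, 31, 61, 91, 121
print(frames_for_camera(1, 30, 150))
```

If the model instead expects all frames from one camera before moving to the next (or a strict global timestamp ordering), this interleaving could itself be a source of artifacts.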

I've gotten the network to run and train on the dataset, and the outputs are recognizable, but there are a lot of artifacts. Any help building intuition, or advice on how to improve the quality of the outputs, would be much appreciated.

Original image: [image: 1]

Outputs after 250k iterations: [images: 001, disp_001, disp_jet_001, error_001]
