Hi authors,
Thank you for releasing the GigaTIME dataset and code. It's a fantastic resource for our research.
I am currently exploring the sample test data provided in the repository (data/sample_test_data/data/0_556_556_556_comet.npy) and noticed a potential discrepancy in the channel dimension that I hope you can clarify.
Based on the documentation (and my understanding of the model architecture), I expected the channel dimension to be 23, corresponding to the comet kernels/templates. However, the actual data has 25 channels.
Could you please clarify:

1. Is the expected channel count actually 25?
2. If the count should be 23, what do the extra 2 channels represent? (e.g., are they background estimates, masks, or padding?)
3. Is there a specific preprocessing step required to slice these down to 23, or is this an updated version of the dataset?
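For reference, this is roughly how I am inspecting the file. The array below is a dummy stand-in with the channel count I observed (the real spatial dimensions differ), and the slicing at the end is only a guess that the first 23 channels are the comet templates, which is exactly what I am hoping you can confirm or correct:

```python
import numpy as np

# Dummy stand-in for the repository file, with the 25 channels I observed.
# (The actual spatial dimensions of the real .npy file differ.)
arr = np.random.rand(25, 64, 64).astype(np.float32)
np.save("sample_comet.npy", arr)

loaded = np.load("sample_comet.npy")
print(loaded.shape)  # (25, 64, 64) -- I expected 23 channels here

# Tentative workaround, assuming (unverified) that the first 23 channels
# along axis 0 are the comet kernels/templates:
sliced = loaded[:23]
print(sliced.shape)  # (23, 64, 64)
```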
Thanks again for your work!
Best regards,
Zhe Yin