
Downsampling of reads #213


Description

@cwuensch

We are using the Biodalliance genome browser with high-coverage BAM files (up to 10,000 reads per base pair).
By default, the limit of reads to be displayed is set to 100 (and there has to be a limit, because rendering gets terribly slow otherwise).
The problem is that the genome browser appears to simply take the first 100 reads. In a recent case, not a single read spanning the locus in question was displayed, only reads that started right at the current position. In other cases you may see only wild-type reads displayed while the mutated ones get clipped.
Could you implement some form of statistical downsampling, e.g. selecting the reads to be displayed at random? Or just taking every 100th read, or something like that?
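For illustration, here is a minimal sketch of one way to do the random selection the request describes: reservoir sampling (Algorithm R), which keeps a uniform sample of at most `limit` reads from a stream in a single pass. This is not Biodalliance code; the `Read` interface and the surrounding names are hypothetical stand-ins for whatever the browser's fetch layer actually returns.

```typescript
// Sketch of reservoir sampling for read downsampling.
// `Read` is a hypothetical stand-in, not a Biodalliance type.

interface Read {
  id: string;
  start: number;
  end: number;
}

// Keep a uniform random sample of at most `limit` reads from a stream,
// so every read has an equal chance of being displayed regardless of
// its position in the input order.
function downsampleReads(reads: Iterable<Read>, limit: number): Read[] {
  const reservoir: Read[] = [];
  let seen = 0;
  for (const read of reads) {
    seen += 1;
    if (reservoir.length < limit) {
      // Fill the reservoir with the first `limit` reads.
      reservoir.push(read);
    } else {
      // Replace a random slot with probability limit / seen,
      // which keeps the sample uniform over all reads seen so far.
      const j = Math.floor(Math.random() * seen);
      if (j < limit) {
        reservoir[j] = read;
      }
    }
  }
  return reservoir;
}

// Usage: instead of truncating to the first 100 reads,
// sample 100 uniformly from all reads overlapping the view:
// const visible = downsampleReads(allReadsInView, 100);
```

Compared with taking every 100th read, reservoir sampling needs no prior knowledge of the total read count and avoids any periodic bias, at the cost of the displayed subset changing between redraws unless the random seed is fixed per region.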
