
memory fix #24

@mjsull

Description


Sorry if I have misunderstood what the program is doing.

From what I can tell, prophex stores all the results from each query in memory and only dumps them to STDOUT once the program has finished. For very large read files, this means prophex uses far more memory than it needs to. Would it not be better to write each read's results to STDOUT as they come in?
