
Parse large existing open CML data to database as fast as possible #8

@cchwala

Description

To showcase the analytics and visualization of the archived data, we need a fast way to parse a large CML dataset into the DB.

It looks like https://github.com/timescale/timescaledb-parallel-copy could be the tool of choice if we base the import on CSV data.
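
For what it's worth, here is a minimal sketch of driving `timescaledb-parallel-copy` from Python. The flag names (`--connection`, `--db-name`, `--table`, `--file`, `--workers`, `--skip-header`) are taken from the tool's README as far as I recall and should be checked against the installed version; the connection string, database name, and table name are placeholders for our setup.

```python
import subprocess

# Placeholder connection and target settings -- adjust to our actual setup.
CONNECTION = "host=localhost user=postgres sslmode=disable"
DB_NAME = "cml"
TABLE = "cml_raw"


def bulk_load_csv(csv_path, workers=4):
    """Load one CSV file into TimescaleDB via timescaledb-parallel-copy.

    Flag names follow the tool's README; worth double-checking against
    the installed version.
    """
    cmd = [
        "timescaledb-parallel-copy",
        "--connection", CONNECTION,
        "--db-name", DB_NAME,
        "--table", TABLE,
        "--file", str(csv_path),
        "--workers", str(workers),
        "--skip-header",
    ]
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    bulk_load_csv("cml_archive.csv")
```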

But if new sources provide CSV data in odd formats that need parsing/transforming first, how do we handle that?
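
One option could be a small pandas-based transform step that rewrites whatever a new source delivers into a canonical CSV layout, which `timescaledb-parallel-copy` can then ingest as above. The sketch below assumes a canonical column layout (`time`, `cml_id`, `rsl`, `tsl`); those names and the example provider column names are just placeholders to illustrate the idea.

```python
import pandas as pd

# Placeholder canonical column layout matching the DB table.
CANONICAL_COLUMNS = ["time", "cml_id", "rsl", "tsl"]


def transform_to_canonical_csv(raw_path, out_path, column_map, timestamp_format=None):
    """Reshape a provider-specific CSV into the canonical layout.

    `column_map` maps the provider's column names to our canonical ones,
    e.g. {"DateTime": "time", "LinkID": "cml_id",
          "RxLevel": "rsl", "TxLevel": "tsl"}.
    """
    df = pd.read_csv(raw_path)
    df = df.rename(columns=column_map)[CANONICAL_COLUMNS]
    # Normalize timestamps to UTC so COPY parses them consistently.
    df["time"] = pd.to_datetime(df["time"], format=timestamp_format, utc=True)
    df.to_csv(out_path, index=False)


if __name__ == "__main__":
    transform_to_canonical_csv(
        "provider_dump.csv",
        "canonical.csv",
        column_map={"DateTime": "time", "LinkID": "cml_id",
                    "RxLevel": "rsl", "TxLevel": "tsl"},
    )
```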
