To showcase analytics and visualization of the archived data quickly, we need a fast way to load a large CML dataset into the DB.
It looks like https://github.com/timescale/timescaledb-parallel-copy could be the tool of choice, provided we work from CSV data.
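As a rough sketch, the invocation could look like the example below. The database, table, and file names are placeholders, and the exact flags should be double-checked against the tool's README:

```sh
# Hypothetical example: load cml_data.csv into a "cml_measurements" hypertable
# using 4 parallel workers. Names and flag values are placeholders.
timescaledb-parallel-copy \
    --db-name cml \
    --table cml_measurements \
    --file ./cml_data.csv \
    --workers 4 \
    --copy-options "CSV" \
    --skip-header \
    --reporting-period 30s
```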
But how do we handle cases where strange CSV data from new sources first needs to be parsed/transformed into the expected format?
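One option would be a small preprocessing step that normalizes the incoming CSV before handing it to the bulk loader. The sketch below is only an illustration under assumed conditions: the source column names, the semicolon delimiter, the timestamp format, and the target columns (`time`, `link_id`, `rx_level`, `tx_level`) are all hypothetical and would need to be adapted per source.

```python
#!/usr/bin/env python3
"""Minimal sketch: normalize an oddly formatted source CSV into a clean
column layout so timescaledb-parallel-copy can ingest it.
Column names, delimiter, and timestamp format are assumptions."""
import csv
from datetime import datetime, timezone

# Assumed target layout of the hypertable.
OUTPUT_COLUMNS = ["time", "link_id", "rx_level", "tx_level"]


def normalize_row(row: dict) -> dict:
    """Map one source row (hypothetical field names) onto the target layout."""
    # e.g. source timestamps like "31.12.2023 23:59:59" -> ISO 8601 UTC
    ts = datetime.strptime(row["Timestamp"], "%d.%m.%Y %H:%M:%S")
    return {
        "time": ts.replace(tzinfo=timezone.utc).isoformat(),
        "link_id": row["LinkID"].strip(),
        # decimal comma -> decimal point
        "rx_level": row["RxLevel"].replace(",", "."),
        "tx_level": row["TxLevel"].replace(",", "."),
    }


def convert(src_path: str, dst_path: str) -> None:
    """Stream the source file row by row and write a clean CSV."""
    with open(src_path, newline="", encoding="utf-8") as src, \
         open(dst_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src, delimiter=";")  # assumed delimiter
        writer = csv.DictWriter(dst, fieldnames=OUTPUT_COLUMNS)
        writer.writeheader()
        for row in reader:
            writer.writerow(normalize_row(row))


if __name__ == "__main__":
    convert("strange_source.csv", "clean_cml_data.csv")
```

The cleaned file could then go through the same fast-load path as above, so the per-source quirks stay isolated in one small converter per data provider.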