Hello,
I wonder if somebody has tried to build a pipeline on data retrieved by an SQL query (for example via DuckDB or similar tools).
The question starts from a relatively simple use case: I'd like to run a query with joins (for example with DuckDB, but ideally even through a data virtualizer or a query engine such as Apache Drill) over two datasets (usually composed of several large files), and consume the results as a stream inside a dedicated Reader for the datatrove pipeline.
If I'm not wrong, it should be possible to implement something like that by extending the BaseReader class and yielding a Generator[Document], one Document per result row.
Has anyone already tried something similar? Do you see any drawbacks to that approach?
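A minimal sketch of that idea: stream the rows of a SQL join lazily and wrap each one in a document. To keep the snippet self-contained it uses sqlite3 from the standard library as a stand-in for DuckDB (DuckDB's Python client exposes a very similar DB-API-style cursor, so `duckdb.connect()` should drop in), and a small `Document` dataclass as a stand-in for datatrove's `Document`; in a real reader you would subclass datatrove's `BaseReader` and yield its actual `Document` objects instead. The `text` column name and `sql/<n>` id scheme are assumptions for illustration.

```python
import sqlite3
from dataclasses import dataclass, field
from typing import Iterator

@dataclass
class Document:
    # Stand-in for datatrove.data.Document, for a self-contained example.
    text: str
    id: str
    metadata: dict = field(default_factory=dict)

def sql_document_stream(conn: sqlite3.Connection, query: str,
                        batch_size: int = 1000) -> Iterator[Document]:
    """Run `query` and lazily yield one Document per result row.

    fetchmany() keeps memory bounded even for very large result sets,
    which is the point of consuming the join as a stream rather than
    materializing it.
    """
    cur = conn.execute(query)
    cols = [d[0] for d in cur.description]
    row_id = 0
    while True:
        rows = cur.fetchmany(batch_size)
        if not rows:
            break
        for row in rows:
            rec = dict(zip(cols, row))
            yield Document(
                text=rec.pop("text"),  # assumes the query exposes a `text` column
                id=f"sql/{row_id}",    # hypothetical id scheme
                metadata=rec,          # remaining columns become metadata
            )
            row_id += 1
```

With DuckDB the same generator body should work against a `duckdb` connection, since its cursor also supports `execute`, `description`, and `fetchmany`; the generator would then sit inside the reader's `run` method.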
Thank you in advance for any suggestions, and for developing this very useful framework! :-)
Alfredo