Description
It would be nice if, for large datasets, all filtering / searching could be pushed down to the database.
This implies at least the following requirements.
Get callbacks from DT
All sorting, filtering and searching done through the various fields that DT provides need to be passed on to the server using callbacks.
https://datatables.net/manual/server-side
```r
eDT(
  ...
  callback = JS(sprintf("
    table.on('search', function() {
      console.log(table.search());
      Shiny.setInputValue(\"%1$s\", table.search(), {priority: \"event\"});
    });
  ", ns('search_value')))
  ...
)
```
Next, all these callbacks need to be translated into dplyr code and executed on the data.
Afterwards the proxy can be updated.
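A minimal sketch of that translation, assuming a module server with a dbplyr-backed lazy table `tbl_data`, a DT proxy for an output id `"table"`, a text column `name`, and the `search_value` input set by the callback above (all of these names are illustrative, not eDT internals; `grepl()` also assumes a backend for which dbplyr can translate it to SQL):

```r
library(dplyr)
library(DT)

proxy <- dataTableProxy("table")  # "table" is an assumed output id

observeEvent(input$search_value, {
  subset <- tbl_data %>%
    # translate the DT search box into a dplyr filter; dbplyr turns this
    # into SQL, so the filtering happens inside the database
    filter(grepl(input$search_value, name)) %>%  # 'name' is a placeholder column
    collect()
  # update the existing table through the proxy instead of re-rendering
  replaceData(proxy, subset, resetPaging = FALSE)
})
```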
It is unclear whether DT's own handling of these events can be disabled. If it cannot, the same callbacks could execute twice, possibly causing the table to update twice.
Updated rows need to be persisted and explicitly re-joined
For each edit we need to store:
- database keys
- modification
These edits need to be joined onto the data whenever a new subset is read in.
Afterwards, all filters need to be re-applied to these rows.
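A sketch of the re-join, assuming the cached edits live in a data.frame `edits` holding the key column `id` plus the modified values; the names and the `dplyr::rows_update()` approach are illustrative, not eDT's actual internals:

```r
library(dplyr)

# apply cached edits to a freshly read subset, then re-apply the active filter
apply_edits <- function(subset, edits, filter_expr) {
  subset %>%
    # overwrite fetched rows with their stored modifications, matched on key;
    # edits whose rows fall outside the current subset are simply ignored
    rows_update(edits, by = "id", unmatched = "ignore") %>%
    # an edit may move a row out of (or keep it in) the current filter
    filter({{ filter_expr }})
}

# e.g. apply_edits(subset, edits, status == "open")
```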
Inserted rows need to be persisted
When a filter changes, the already inserted rows need to be bound to the new data subset.
Next, the same filter needs to be applied to the inserted rows before displaying the data.
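As a sketch, assuming the pending insertions are cached in a data.frame `inserted` with the same columns as the subset (names illustrative):

```r
library(dplyr)

combine_inserted <- function(subset, inserted, filter_expr) {
  subset %>%
    bind_rows(inserted) %>%
    # the active filter also applies to inserted rows, so they disappear
    # from view when they do not match it
    filter({{ filter_expr }})
}
```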
Deleted rows need to be persisted
When a filter changes, the already deleted rows need to be removed from the new data subset.
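Analogous to the insertions, a sketch assuming the keys of deleted rows are cached in a data.frame `deleted_keys` with a key column `id`:

```r
library(dplyr)

drop_deleted <- function(subset, deleted_keys) {
  # remove every row whose key matches a recorded deletion
  anti_join(subset, deleted_keys, by = "id")
}
```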
A column to sort on needs to be provided
Unique column(s) need to be provided to sort on by default.
This can be either the 'key' alone and/or a 'version' / 'created_at' column.
Otherwise, the rows will scramble randomly whenever a new filter is applied.
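A sketch of such a default sort on a lazy table, where `id` stands in for the key and `created_at` for a tie-breaking version column (both placeholders); dbplyr translates this to ORDER BY / LIMIT, so the order stays stable across re-reads:

```r
library(dplyr)

stable_page <- tbl_data %>%
  arrange(id, created_at) %>%  # deterministic order across filter changes
  head(25) %>%                 # page size; the value is illustrative
  collect()
```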