Hi,
I made my own utility for a similar purpose.
If the csv file is very large, every .import of the csv causes a delay, so I solved the problem by backing up the imported csv database with .backup and reusing that dump on later runs.
The following is my script.
#!/bin/bash
if [[ $# -eq 0 ]]; then
    cat << HELP
Usage:
  ${0##*/} myfile.csv                                  ---> view csv fields schema and sample data.
  ${0##*/} myfile.csv 'select ... from myfile ... ;'   ---> query csv.
HELP
    exit 1
fi

csv_file=$1
shift
table=${csv_file%%.*}

# Default query: show the schema and one sample row.
query=".schema
select * from $table limit 1;"
if [[ $# -gt 0 ]]; then
    query=$*
fi

if [[ -f "$table.db" ]]; then
    # Reuse the database dumped on a previous run; no re-import needed.
    cat << EOF | sqlite3 "$table.db"
.mode csv
.headers on
$query
EOF
else
    # First run: import the csv, back the database up as $table.db, then query.
    cat << EOF | sqlite3
.mode csv
.import $csv_file $table
.headers on
.backup main $table.db
$query
EOF
fi

How about adding an option like -c (cache) or -d (use database) for making the dump db on the first execution and using the dumped db if the db file exists?
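A minimal sketch of how that -c flag could be parsed, assuming getopts and a helper function; the name decide_mode and the "cached"/"import" outputs are hypothetical, not part of the script above:

```shell
# Hypothetical sketch: parse a -c (cache) flag and decide whether to reuse
# the dumped $table.db or import the csv from scratch.
decide_mode() {
    use_cache=0
    OPTIND=1
    while getopts "c" opt "$@"; do
        case $opt in
            c) use_cache=1 ;;
        esac
    done
    shift $((OPTIND - 1))

    csv_file=$1
    table=${csv_file%%.*}

    # Only reuse the dump when -c was given AND the db file exists.
    if [ "$use_cache" -eq 1 ] && [ -f "$table.db" ]; then
        echo "cached"
    else
        echo "import"
    fi
}
```

The real script would then run the "cached" branch (query $table.db directly) or the "import" branch (.import plus .backup) based on that decision, so plain invocations keep the current always-import behavior and -c opts into caching.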