* product countries
* full categories hierarchy imported from the <code>categories.txt</code> taxonomy ([https://github.com/openfoodfacts/openfoodfacts-server/tree/master/taxonomies see])

==== Import CSV to DuckDB ====
[https://duckdb.org/ DuckDB] is very close to SQLite, but offers much better performance: the database file is about 3 times smaller, and queries run 5 to 10 times faster.
<pre>
# Discard invalid characters.
# DuckDB does not accept invalid UTF-8: it refused to read a Parquet file containing such characters, with the following error:
#   Error: near line 1: Invalid Input Error: Invalid string encoding found in Parquet file: value "........."
# (occurring notably with this product: https://world.openfoodfacts.org/product/9900109008673?rev=4 )
# The issue, and its solution below, are well known: https://til.simonwillison.net/linux/iconv
iconv -f utf-8 -t utf-8 -c en.openfoodfacts.org.products.csv -o en.openfoodfacts.org.products.converted.csv
</pre>
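To see how much data the cleanup dropped, you can compare the byte counts of the two files (a quick sanity check, not part of the original recipe):

<pre>
# The difference between the two counts is the number of invalid bytes removed by iconv
wc -c en.openfoodfacts.org.products.csv en.openfoodfacts.org.products.converted.csv
</pre>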
<pre>
# Create the DuckDB database and import the data
duckdb products.db <<EOF
CREATE TABLE products AS
  SELECT * FROM read_csv_auto('en.openfoodfacts.org.products.converted.csv', quote='', sample_size=3000000, delim='\t');
EOF
</pre>
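Once the import has finished, you can sanity-check the schema DuckDB inferred from the CSV and the number of rows loaded (a minimal check against the <code>products</code> table created above):

<pre>
duckdb products.db <<EOF
-- Show the inferred column names and types
DESCRIBE products;
-- Count the imported rows
SELECT count(*) FROM products;
EOF
</pre>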
<pre>
# Then you can try an SQL query
duckdb products.db -csv <<EOF
SELECT * FROM products
  WHERE completeness > 0.99 -- products with a good level of completeness
  ORDER BY last_modified_datetime LIMIT 10;
EOF
</pre>
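Query results can also be exported straight from DuckDB, for instance to a Parquet file for further processing (a sketch: the output file name <code>complete_products.parquet</code> is arbitrary):

<pre>
duckdb products.db <<EOF
-- Write the most complete products to a Parquet file
COPY (SELECT * FROM products WHERE completeness > 0.99)
  TO 'complete_products.parquet' (FORMAT PARQUET);
EOF
</pre>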

==== Python ====