* product countries
* full categories hierarchy imported from the <code>categories.txt</code> taxonomy ([https://github.com/openfoodfacts/openfoodfacts-server/tree/master/taxonomies see])
==== Import CSV to DuckDB ====
[https://duckdb.org/ DuckDB] is very close to SQLite, but with higher performance: the database is 3 times smaller, and queries run 5 to 10 times faster.
<pre>
# Discard invalid characters.
#
# duckdb doesn't like invalid UTF-8: it refused to read some Parquet files as such,
# with the following error:
#   Error: near line 1: Invalid Input Error: Invalid string encoding found in Parquet file: value "........."
# (occurring notably on this product: https://world.openfoodfacts.org/product/9900109008673?rev=4 )
# The issue, and its solution below, seem to be well known: https://til.simonwillison.net/linux/iconv
iconv -f utf-8 -t utf-8 -c en.openfoodfacts.org.products.csv -o en.openfoodfacts.org.products.converted.csv

# Create the DuckDB database and import the data
duckdb products.db <<EOF
CREATE TABLE products AS
    SELECT * FROM read_csv_auto('en.openfoodfacts.org.products.converted.csv', quote='', sample_size=3000000, delim='\t');
EOF

# Then you can try an SQL query
duckdb products.db -csv <<EOF
SELECT * FROM products
  WHERE completeness > 0.99 -- products with a good level of completeness
  ORDER BY last_modified_datetime LIMIT 10;
EOF
</pre>
==== Python ====
 
We are testing a new kind of tool to provide the data: every day, an SQL database is fed by the regular daily CSV export and published online thanks to the [https://datasette.io/ Datasette] tool.
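A similar pipeline can be sketched locally with [https://sqlite-utils.datasette.io/ sqlite-utils]; this is only an illustration of the approach, not the actual Mirabelle setup, and the file and table names below are assumptions:
<pre>
# Sketch only, not the actual Mirabelle setup.
pip install sqlite-utils datasette

# Load the tab-separated daily export into a "products" table of an SQLite database
sqlite-utils insert products.db products en.openfoodfacts.org.products.csv --tsv

# Serve the database locally with Datasette
datasette products.db
</pre>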
The tool, called ''[[Mirabelle]]'', can be found here: http://mirabelle.openfoodfacts.org/

The whole CSV export can be found here: http://mirabelle.openfoodfacts.org/products/all
* The tool supports simple queries with a form, and also facet navigation.
* For those who know the SQL language, it allows rich and complex queries (see the sketch below).
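As an illustration, an aggregate query such as the following can be run from the SQL view. This is a sketch only: the table name <code>all</code> is assumed from the URL above, and the column name comes from the CSV export.
<pre>
-- Sketch: number of products per countries_en value, most frequent first.
-- Assumes the Datasette table is named "all", as the /products/all URL suggests.
SELECT countries_en, COUNT(*) AS nb_products
  FROM [all]
 GROUP BY countries_en
 ORDER BY nb_products DESC
 LIMIT 10;
</pre>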
What's different with [https://world.openfoodfacts.org/cgi/search.pl Open Food Facts advanced search]?

You may have to wait several seconds. It will download a <code>product.csv</code> file.
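Alternatively, Datasette can return query results directly as CSV over HTTP, so the download can be scripted. A hedged example follows; the endpoint and parameter are assumptions based on Datasette's generic <code>.csv</code> export convention, not a documented Mirabelle API:
<pre>
# Assumption: Datasette's generic .csv export endpoint, applied to the Mirabelle table.
curl -L 'http://mirabelle.openfoodfacts.org/products/all.csv?_size=max' -o product.csv
</pre>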
==== Tips ====

* Several fields, such as <code>countries_en</code> and <code>categories_en</code>, contain multiple values. To query a particular value, use the <code>like</code> operator with percent wildcards, like this: <code>like %italy%</code> (see the sketch below).
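For example, the equivalent SQL filter looks like this. This is a sketch: the table name <code>all</code> is assumed from the URL above, and the extra columns come from the CSV export.
<pre>
-- Sketch: products sold (among other countries) in Italy.
-- SQLite's LIKE is case-insensitive for ASCII, so '%italy%' also matches "Italy".
SELECT code, product_name, countries_en
  FROM [all]
 WHERE countries_en LIKE '%italy%'
 LIMIT 20;
</pre>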
