=== Looking for a selection of products? ===
Then use '''the advanced search'''. The Open Food Facts advanced search feature allows you to download selections of the data. See: https://world.openfoodfacts.org/cgi/search.pl

When your search is done, you will be able to download the selection in '''CSV or Excel format''', so just give it a try!

Note that you can only download up to 10,000 results this way. If you need more, you will have to use one of the other methods described below.
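If you want to script such a selection instead of using the web form, the same search engine can be queried over HTTP. A minimal sketch with <code>curl</code> (the parameters shown, such as <code>search_terms</code>, <code>json</code> and <code>page_size</code>, are the ones documented for the public search API and may evolve):

<pre>
# Get the first 100 products matching "chocolate", as JSON
curl 'https://world.openfoodfacts.org/cgi/search.pl?action=process&search_terms=chocolate&search_simple=1&json=1&page_size=100' -o chocolate.json
</pre>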

=== Looking for the whole database? ===
The whole database can be downloaded at https://world.openfoodfacts.org/data

It's very big: Open Food Facts hosts more than 2,800,000 products (as of April 2023), so you will probably need some technical skills to reuse the data.

You will find different kinds of data there.

==== The JSONL daily export ====
There is a daily export of the whole database in [https://jsonlines.org/ JSONL format] (sometimes called LDJSON or NDJSON), where each line is a JSON object. It represents the same data as the MongoDB export. The file, compressed with gzip, is 4.8 GB (as of 2022-10); it takes more than 14 GB uncompressed.

You can find it at https://static.openfoodfacts.org/data/openfoodfacts-products.jsonl.gz
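For example, you can peek at the data without loading everything into memory by streaming it with <code>zcat</code> and [https://jqlang.github.io/jq/ jq] (a minimal sketch; <code>code</code> and <code>product_name</code> are standard Open Food Facts fields):

<pre>
# Download the export (several GB)
wget https://static.openfoodfacts.org/data/openfoodfacts-products.jsonl.gz

# Look at the first product, pretty-printed
zcat openfoodfacts-products.jsonl.gz | head -n 1 | jq .

# Extract barcode and product name of the first 10 products, as CSV
zcat openfoodfacts-products.jsonl.gz | head -n 10 | jq -r '[.code, .product_name] | @csv'
</pre>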
* product countries
* full categories hierarchy imported from the <code>categories.txt</code> taxonomy ([https://github.com/openfoodfacts/openfoodfacts-server/tree/master/taxonomies see])

==== Import CSV to DuckDB ====
[https://duckdb.org/ DuckDB] is very close to SQLite, but with much higher performance: the database file is about 3 times smaller, and queries run 5 to 10 times faster.
<pre>
# Discard invalid characters:
# DuckDB doesn't like invalid UTF-8. It refused to read a Parquet file built from this data, with the following error:
# Error: near line 1: Invalid Input Error: Invalid string encoding found in Parquet file: value "........."
# (occurring namely on this product: https://world.openfoodfacts.org/product/9900109008673?rev=4 )
# The issue, and its solution below, seem to be well known: https://til.simonwillison.net/linux/iconv
iconv -f utf-8 -t utf-8 -c en.openfoodfacts.org.products.csv -o en.openfoodfacts.org.products.converted.csv

# Create the DuckDB database and import the data
duckdb products.db <<EOF
CREATE TABLE products AS
  SELECT * FROM read_csv_auto('en.openfoodfacts.org.products.converted.csv', quote='', sample_size=3000000, delim='\t');
EOF

# Then you can try a SQL query
duckdb products.db -csv <<EOF
SELECT * FROM products
  WHERE completeness > 0.99 -- products with a good level of completeness
  ORDER BY last_modified_datetime LIMIT 10;
EOF
</pre>
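Once the data is imported, you can also export a selection back to CSV with DuckDB's <code>COPY</code> statement. A sketch (the <code>countries_en</code> column comes from the CSV export and may evolve):

<pre>
duckdb products.db <<EOF
COPY (
  SELECT code, product_name, countries_en
  FROM products
  WHERE countries_en LIKE '%France%'
) TO 'france.csv' (HEADER, DELIMITER ',');
EOF
</pre>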

==== Python ====

==== jq ====
[https://jqlang.github.io/jq/ jq] is a command-line JSON processor that works well with the JSONL export. Some examples:
Filtering barcodes which do not match a code of 1 to 13 digits:
 $ zcat openfoodfacts-products.jsonl.gz | jq -r '. | select(.code|test("^[0-9]{1,13}$") | not) | .code' > ean_gt_13.csv
Some parts of the data are arrays; you must aggregate them using <code>join</code> for a CSV export. For example, to export each product and its states in CSV:
 $ zcat openfoodfacts-products.jsonl.gz | jq -r '[.code,(.states_tags|join(","))] | @csv'
Selecting products with quality issues and exporting the barcode and the issues in CSV:
 $ zcat openfoodfacts-products.jsonl.gz | jq -r '. | select(.data_quality_errors_tags[]? != "")' | jq -r '[.code,(.data_quality_errors_tags|join(","))] | @csv'
These operations can take quite a long time (more than 10 minutes, depending on your computer and your selection).
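If you run several jq commands in a row, it can be worth decompressing the file once (it needs about 14 GB of free disk space) instead of gunzipping it on the fly each time:

<pre>
# Keep the .gz and write an uncompressed copy next to it
gunzip -k openfoodfacts-products.jsonl.gz

# Subsequent runs read the uncompressed file directly
jq -r '.code' openfoodfacts-products.jsonl > codes.csv
</pre>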

=== MongoDB dump ===
The [https://world.openfoodfacts.org/data MongoDB dump] needs to be reused with MongoDB. It allows you to build a full replica of the Open Food Facts database and to use MongoDB to select, filter and export the data. MongoDB allows faster manipulations than the other methods.

First, you '''need a running MongoDB installation'''. Open Food Facts uses MongoDB 4.4; it has been reported that earlier versions do not work with the Open Food Facts dump.
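A typical restore looks like this (a sketch: the dump URL is the one published on the data page, and the directory layout inside the archive may change, so check it after extracting):

<pre>
# Download and unpack the MongoDB dump
wget https://static.openfoodfacts.org/data/openfoodfacts-mongodbdump.tar.gz
tar -xzf openfoodfacts-mongodbdump.tar.gz

# Restore it into a database named "off" (--drop replaces any existing collections)
mongorestore --db off --drop dump/off
</pre>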
# Open Food Facts database contains hundreds of fields.

# An easy way to list them all is to use the "variety" Schema Analyzer:
# https://github.com/variety/variety

# 1. Install "variety"

time mongo off --eval "var collection = 'products', limit = 100000" variety.js > off_schema_100000.txt
# (75 minutes)

time mongo off --eval "var collection = 'products'" variety.js > off_schema_all.txt
# (more than two days)
</pre>
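Once the dump is restored, you can filter and export directly from the command line with <code>mongoexport</code>, which ships with MongoDB's database tools (a sketch; <code>countries_tags</code> and the other field names are standard Open Food Facts fields):

<pre>
# Export barcode and product name of French products, as CSV
mongoexport --db off --collection products \
  --query '{ "countries_tags": "en:france" }' \
  --type csv --fields code,product_name \
  --out france.csv
</pre>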

=== CSV export via SQL (beta) ===
We are testing a new kind of tool to provide the data: every day, an SQL database is fed by the regular daily CSV export and published online thanks to the [https://datasette.io/ Datasette] tool.

The tool, called ''[[Mirabelle]]'', can be found here: http://mirabelle.openfoodfacts.org/

The whole CSV export can be found here: http://mirabelle.openfoodfacts.org/products/all

* The tool supports simple queries with a form, as well as facet navigation.
* For those who know the SQL language, it allows rich and complex queries.

What's different from the [https://world.openfoodfacts.org/cgi/search.pl Open Food Facts advanced search]?

* It's possible to export selections of more than 10,000 products (e.g. big queries by country).
* It's possible to build queries by date.
* It allows richer queries, with OR, AND, NOT, REGEXP, etc.
* It's possible to restrict the number of fields displayed and exported.
* It's possible to order results by any field.

==== Example ====
'''1 -- Build your query (or ask someone to build it for you)'''

E.g. all German products that have been scanned at least once:
 -- Products from Germany that have been scanned at least one time
 select code, product_name from [all]
 where countries_en like "%germany%" and unique_scans_n is not null
 order by unique_scans_n desc
 -- the limit here displays 20 results; the "CSV without limit" link below lets you download all the data without limit
 limit 20
https://mirabelle.openfoodfacts.org/products?sql=--+Products+from+Germany+that+have+been+scanned+at+least+one+time%0D%0Aselect+code%2C+product_name+from+%5Ball%5D%0D%0Awhere+countries_en+like+%22%25germany%25%22+and+unique_scans_n+is+not+null%0D%0Aorder+by+unique_scans_n+desc%0D%0A--+the+limit+here+displays+20+results%3B+the+link+%22CSV+without+limit%22+below+allows+to+download+all+the+data+without+limit%0D%0Alimit+20

'''2 -- Click on the link "CSV without limit"'''

You may have to wait several seconds. It will download a product.csv file.
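You can also script the download: Datasette exposes the results of a SQL query as CSV when you add <code>.csv</code> to the URL. A sketch with <code>curl</code> (the instance may cap very large exports; check Mirabelle's settings):

<pre>
curl -G 'https://mirabelle.openfoodfacts.org/products.csv' \
  --data-urlencode 'sql=select code, product_name from [all] where countries_en like "%germany%" limit 100' \
  -o products.csv
</pre>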

==== Tips ====

* Several fields -- such as <code>countries_en</code>, <code>categories_en</code>, etc. -- contain multiple values. To query a particular value, use the <code>like</code> operator with percent wildcards, e.g. <code>where countries_en like "%italy%"</code>.