==== The MongoDB daily export ====
It represents the most complete data, but you have to know how to deal with MongoDB. It's very big: more than 9 GB uncompressed.
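A minimal sketch of loading it into a local MongoDB (the dump URL and the <code>dump/off</code> layout are assumptions based on past snapshots; check https://world.openfoodfacts.org/data for the current link):

  $ wget https://static.openfoodfacts.org/data/openfoodfacts-mongodbdump.tar.gz
  $ tar -xzf openfoodfacts-mongodbdump.tar.gz   # unpacks a dump/off directory containing products.bson
  $ mongorestore --db off dump/off              # restores the products collection into a local "off" database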
    
==== The JSONL daily export ====
While still undocumented, there is a daily export of the whole database in [https://jsonlines.org/ JSONL format] (sometimes called LDJSON or NDJSON), where each line is a JSON object. It represents the same data as the MongoDB export. The file is 2.7 GB (2020-09) compressed with gzip, and more than 14 GB uncompressed.
    
You can find it at https://static.openfoodfacts.org/data/openfoodfacts-products.jsonl.gz
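To get a quick feel for the format, you can pretty-print a couple of fields from the first line (jq is presented in more detail below):

  $ wget https://static.openfoodfacts.org/data/openfoodfacts-products.jsonl.gz
  $ zcat openfoodfacts-products.jsonl.gz | head -n 1 | jq '{code, product_name}'   # show two fields of the first product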
==== Import CSV in PostgreSQL ====
 
See this article: https://blog-postgresql.verite.pro/2018/12/21/import-openfoodfacts.html (in French, but it should be understandable with Google Translate).
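If you only need a few columns, here is a rough sketch using psql alone (the column positions, the table name and the <code>QUOTE E'\b'</code> trick are assumptions; the article above builds the full table):

  $ head -n 1 en.openfoodfacts.org.products.csv | tr '\t' '\n' | nl   # list the column positions (the export is tab-separated)
  $ cut -f 1,8 en.openfoodfacts.org.products.csv > code_name.tsv      # assuming code is column 1 and product_name column 8
  $ psql -c 'CREATE TABLE off_names (code text, product_name text)'
  $ psql -c "\copy off_names FROM 'code_name.tsv' WITH (FORMAT csv, DELIMITER E'\t', QUOTE E'\b', HEADER)"

The unusual QUOTE setting makes COPY treat double quotes in the data as ordinary characters, since the export is not a quoted CSV.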
Alternatively, feel free to use this project from GitHub: https://github.com/ArchiMageAlex/off_converter
    
==== Import CSV to SQLite ====
The repository [https://github.com/fairdirect/foodrescue-content foodrescue-content] contains Ruby scripts that import Open Food Facts CSV data into a [https://www.sqlite.org/index.html SQLite] database with full table normalization. Only a few fields are imported so far, but this can be extended easily. Data imported so far includes:  
    
* barcode number
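Independently of those scripts, the sqlite3 shell can load the tab-separated CSV export into a single denormalized table (a sketch; when the target table does not exist yet, .import takes the column names from the header line):

  $ sqlite3 products.db
  sqlite> .mode tabs
  sqlite> .import en.openfoodfacts.org.products.csv products
  sqlite> SELECT count(*) FROM products;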
==== R stat ====
 
For people who have R stat skills, there are [https://www.kaggle.com/openfoodfacts/world-food-facts/kernels?sortBy=hotness&group=everyone&pageSize=20&datasetId=20&language=R more than 50 notebooks from the Kaggle community].

Moreover, here is a repository showing how to transform the .bson file into a data frame: https://github.com/gnaweric/openfoodfact_database_queries

Using {mongolite}, first connect to the database, then import the .bson file, then get a sample of it to make sure it is ready. Finally, save it to an .rdata file, for example.

Beware: each line is a product, and some variables need to be unnested with tidyr::unnest_wider().
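Here is a minimal sketch of that workflow, assuming the dump has been restored into a local MongoDB (as in the MongoDB section above) rather than read from the .bson file directly; <code>nutriments</code> as the column to unnest is an assumption:

  # connect to the restored dump ("off" database, "products" collection)
  library(mongolite)
  library(tidyr)
  m <- mongo(collection = "products", db = "off", url = "mongodb://localhost")
  # pull a small sample first to make sure the data looks right
  sample_df <- m$find(limit = 100)
  str(sample_df, max.level = 1)
  # some nested variables come back as list-columns; unnest_wider() spreads
  # one of them into regular columns
  sample_flat <- unnest_wider(sample_df, nutriments, names_sep = "_")
  # save the result for later sessions, e.g. as .rdata
  save(sample_flat, file = "off_sample.rdata")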
    
=== JSONL export ===
  $ cat openfoodfacts-products.jsonl | jq -r '[.code,.product_name] | @csv' > names.csv # output a CSV file (names.csv) containing all products with code,product_name
If you don't have enough disk space to uncompress the .gz file, you can use zcat directly on the compressed file. Example:
 
  $ zcat openfoodfacts-products.jsonl.gz | jq -r '[.code,.product_name] | @csv' # output CSV data containing code,product_name
 
==== Filtering JSONL export with jq ====
Filtering a specific country:

  $ zcat openfoodfacts-products.jsonl.gz | jq '. | select(.countries_tags[]? == "en:germany")'

The previous command produces JSON output containing all the products sold in Germany. If you want JSONL output instead, add the -c parameter:

  $ zcat openfoodfacts-products.jsonl.gz | jq -c '. | select(.countries_tags[]? == "en:germany")'

You can add multiple filters and export the result to a CSV file. For example, here is a command that 1. selects products that have the Nutri-Score computed and belong to the top 90% most scanned products in 2020, and 2. exports the barcode (<code>code</code>) and the number of scans (<code>scans_n</code>) as a CSV file.

  $ zcat openfoodfacts-products.jsonl.gz | jq -r '. | select(.misc_tags[]? == "en:nutriscore-computed" and .popularity_tags[]? == "top-90-percent-scans-2020") | [.code,.scans_n] | @csv' > displayed.ns.in.top90.2020.world.csv

These operations can take quite a long time (more than 10 minutes, depending on your computer and your selection).
