[[Category:Reuse]]

Open Food Facts data is released as Open Data: it can be reused freely by anyone, under the Open Database License (ODBL). While this page is about practical reuse, you must be aware of the [[ODBL License|rights and duties provided by the Open Database License (ODBL)]].

== Where is the data? ==

Use the advanced search: the Open Food Facts advanced search feature allows you to download selections of the data. See: https://world.openfoodfacts.org/cgi/search.pl
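
The search can also be queried from the command line. A minimal sketch (the <code>action</code>, <code>tagtype_0</code>, <code>tag_0</code> and <code>json=1</code> parameters follow the public search API; the category is just an example):

 $ curl -s 'https://world.openfoodfacts.org/cgi/search.pl?action=process&tagtype_0=categories&tag_contains_0=contains&tag_0=orange-juices&json=1' | jq .count # number of matching products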

When your search is done, you will be able to download the selection in '''CSV or Excel format''', so just give it a try!

=== Looking for the whole database? ===

The whole database can be downloaded at https://world.openfoodfacts.org/data

It's very big: Open Food Facts hosts more than 2,200,000 products (as of July 2020), so you will probably need some skills to reuse the data.

You'll find different kinds of data there.

==== The MongoDB daily export ====

It contains the most complete data, but you have to know how to deal with MongoDB. It's very big: more than 30GB uncompressed.

==== The JSONL daily export ====

While still undocumented, there is a daily export of the whole database in [https://jsonlines.org/ JSONL format] (sometimes called LDJSON or NDJSON), where each line is a JSON object. It represents the same data as the MongoDB export. The file is 2.7GB (2020-09), compressed with gzip, and takes more than 14GB uncompressed.

You can find it at https://static.openfoodfacts.org/data/openfoodfacts-products.jsonl.gz

==== The CSV daily export ====

It contains all the products, but with only a subset of the database fields. [https://world.openfoodfacts.org/data/data-fields.txt This subset is very large] and includes the main characteristics (EAN, name, brand...), many tags (such as categories, origins, labels, packaging...), ingredients and nutrition facts. Thus, it generally fits the majority of usages. It's a 2.3GB file (as of April 2020), so it can't be opened by LibreOffice or Excel on a machine with 8GB of RAM.
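
To get an idea of the available columns without loading the whole file, you can list the header line from the command line (a minimal sketch; remember the export is tab-separated):

 $ head -n 1 en.openfoodfacts.org.products.csv | tr '\t' '\n' | head -n 10 # print the first 10 column names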

== How to reuse? ==

==== csvkit tips ====

[https://csvkit.readthedocs.io/en/latest/ csvkit] is a very efficient tool for manipulating huge amounts of CSV data. Here are some useful tips for manipulating the Open Food Facts CSV export.

''Converting the whole Open Food Facts "CSV" export to regular CSV.'' The Open Food Facts export uses tabs as its separator: it should be called TSV (tab-separated values) instead of CSV (comma-separated values). <code>csvkit</code> can convert a TSV file into CSV very easily:

<code>$ csvclean -t en.openfoodfacts.org.products.csv > myCSV.csv</code>
''Selecting 2 columns.'' Selecting two or three columns can be useful for some usages. Extracting two columns produces a smaller CSV file which can be opened by common software such as LibreOffice or Excel. The following command creates a CSV file (brands.csv) containing two columns from Open Food Facts (code and brands). (It generally takes more than 2 minutes, depending on your computer.)
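A sketch of such a command, using csvkit's <code>csvcut</code> (the exact invocation may differ; <code>-t</code> declares tab-separated input, <code>-c</code> selects the columns):

 $ csvcut -t -c code,brands en.openfoodfacts.org.products.csv > brands.csv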

==== Import CSV in PostgreSQL ====

See this article: https://blog-postgresql.verite.pro/2018/12/21/import-openfoodfacts.html (in French, but it should be understandable with Google Translate).

Alternatively, feel free to use this project from GitHub: https://github.com/ArchiMageAlex/off_converter
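
A minimal sketch with <code>psql</code>, assuming a local PostgreSQL and the two-column brands.csv file produced above (the table name off_brands is illustrative):

 $ psql -c "CREATE TABLE off_brands (code text, brands text);"
 $ psql -c "\copy off_brands FROM 'brands.csv' WITH (FORMAT csv, HEADER true)"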

==== Import CSV to SQLite ====

The repository [https://github.com/fairdirect/foodrescue-content foodrescue-content] contains Ruby scripts that import Open Food Facts CSV data into a [https://www.sqlite.org/index.html SQLite] database with full table normalization. Only a few fields are imported so far, but this can be extended easily. Data imported so far includes:
* barcode number
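
For a quick, unnormalized import of the raw CSV export, the plain sqlite3 shell can also be used (a minimal sketch, independent from the foodrescue-content scripts; the database and table names are illustrative, and the column names are taken from the header row):

 $ sqlite3 off.db
 sqlite> .separator "\t"
 sqlite> .import en.openfoodfacts.org.products.csv products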

For people who have R stats skills, there are [https://www.kaggle.com/openfoodfacts/world-food-facts/kernels?sortBy=hotness&group=everyone&pageSize=20&datasetId=20&language=R more than 50 notebooks from the Kaggle community].

Moreover, here is a link showing how to transform the .bson file into a dataframe: https://github.com/gnaweric/openfoodfact_database_queries

Using {mongolite}, first connect to the database, then import the .bson file, then get a sample of it to make sure it is ready. Finally, save it to an .rdata file, for example.

Beware: each line is a product, and some variables need to be unnested with tidyr::unnest_wider().

=== JSONL delta exports ===

Every day, Open Food Facts exports all the products created during the last 24 hours. The documentation of this export can be found on the /data page.

If you don't have MongoDB and just want to use these delta exports to build an up-to-date database, you can merge each export with the help of the <code>[https://stedolan.github.io/jq/manual/v1.6/ jq]</code> tool.

 $ gunzip products_1638076899_1638162314.json.gz # decompress the file
 $ wc -l products_1638076899_1638162314.json # count the number of products in this export (in JSONL, each line is a JSON object)
 $ jq -c '. + .' 2021-11-30.json products_1638162314_1638248379.json > 2021-12-01.json # merge the delta with the previous complete data
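
The list of available delta files is published alongside the exports (a sketch; the exact index URL is an assumption based on the /data page):

 $ curl -s https://static.openfoodfacts.org/data/delta/index.txt # list the available delta export files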

=== JSONL export ===

The JSONL export is a huge file! It's not possible to play with it with common editors or common tools. But there are some command line tools that allow interesting things, like [https://stedolan.github.io/jq/manual/v1.6/ jq].

==== jq ====

* start by decompressing the file (be careful: 14GB after decompression):
  $ gunzip openfoodfacts-products.jsonl.gz
* work on a small subset to test. E.g. for 100 products:
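  $ head -n 100 openfoodfacts-products.jsonl > small.jsonl # a sketch: keep the first 100 products in small.jsonl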
  $ cat small.jsonl | jq -r .code # print all products' codes
  $ cat small.jsonl | jq -r '[.code,.product_name] | @csv' # output CSV data containing code,product_name

Then you can try it on the whole database:

 $ cat openfoodfacts-products.jsonl | jq -r '[.code,.product_name] | @csv' > names.csv # output a CSV file (names.csv) containing all products with code,product_name

If you don't have enough disk space to uncompress the .gz file, you can use zcat directly on the compressed file. Example:

 $ zcat openfoodfacts-products.jsonl.gz | jq -r '[.code,.product_name] | @csv' # output CSV data containing code,product_name

==== Filtering the JSONL export with jq ====

Filtering a specific country:

 $ zcat openfoodfacts-products.jsonl.gz | jq '. | select(.countries_tags[]? == "en:germany")'

The previous command produces JSON output containing all the products sold in Germany. If you want JSONL output, add the -c parameter:

 $ zcat openfoodfacts-products.jsonl.gz | jq -c '. | select(.countries_tags[]? == "en:germany")'

You can add multiple filters and export the result to a CSV file. For example, here is a command that 1. selects products having the Nutri-Score computed and belonging to the top 90% most scanned products in 2020, and 2. exports the barcode (<code>code</code>) and the number of scans (<code>scans_n</code>) as a CSV file:

 $ zcat openfoodfacts-products.jsonl.gz | jq -r '. | select(.misc_tags[]? == "en:nutriscore-computed" and .popularity_tags[]? == "top-90-percent-scans-2020") | [.code,.scans_n] | @csv' > displayed.ns.in.top90.2020.world.csv

Filtering barcodes that do not match a code of 1 to 13 digits:

 $ zcat openfoodfacts-products.jsonl.gz | jq -r '. | select(.code|test("^[0-9]{1,13}$") | not) | .code' > ean_gt_13.csv

These operations can be quite long (more than 10 minutes, depending on your computer and your selection).

=== MongoDB dump ===

The [https://world.openfoodfacts.org/data MongoDB dump] needs to be reused with MongoDB. It allows building a full replica of the Open Food Facts database and using MongoDB for selecting, filtering and exporting data. Using MongoDB allows faster manipulations compared to the other methods.

First, you '''need a running MongoDB installation'''. Open Food Facts uses MongoDB 4.4; it has been reported that prior versions may not work with the Open Food Facts dump.

You can find [https://gist.github.com/CharlesNepote/13198c2ed336fc64cb674d63876e8d99 a quick tutorial on how to install MongoDB on Debian 10 or Debian 11 here].

==== Import Open Food Facts MongoDB dump into MongoDB ====

<pre>
# Download and decompress the dump
wget https://static.openfoodfacts.org/data/openfoodfacts-mongodbdump.tar.gz
tar -xzf openfoodfacts-mongodbdump.tar.gz

# Restore all the database. mongorestore recreates the indexes recorded by mongodump.
mongorestore --drop ./dump
# => 2254885 document(s) restored successfully. 0 document(s) failed to restore.
</pre>

==== Play with the database ====

<pre>
# Display the first 5 products in JSON format, using pagination
# https://www.codementor.io/@arpitbhayani/fast-and-efficient-pagination-in-mongodb-9095flbqr
mongo off --eval 'db.products.find().limit(5).pretty().shellPrint()' --quiet

# Combined with jq (JSON tool) to provide colors
# jq has to be installed separately. See https://stedolan.github.io/jq/
mongo off --eval 'db.products.find().limit(5).pretty().shellPrint()' --quiet | jq .

# Combined with jq to provide colors and compact output (each JSON object on a single line, aka JSONL format)
mongo off --eval 'db.products.find().limit(5).pretty().shellPrint()' --quiet | jq . -c

# Get products from Germany; return fields "code" and "countries_tags"; limit to 2 products
mongo off --eval 'db.products.find({countries_tags: "en:germany"}, {code: 1, countries_tags: 1}).limit(2).pretty().shellPrint()' --quiet

# Get the data from one field, without the _id field
mongo off --eval 'db.products.find({countries_tags: "en:germany"}, {_id: 0, countries_tags: 1}).limit(2).pretty().shellPrint()' --quiet
</pre>
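
You can also count matching products directly from the shell. A minimal sketch (<code>countDocuments()</code> is available in MongoDB 4.x shells):

<pre>
# Count the products sold in Germany
mongo off --eval 'db.products.countDocuments({countries_tags: "en:germany"})' --quiet
</pre>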

==== Export the database ====

<pre>
# Exports
# See: https://www.mongodb.com/docs/database-tools/mongoexport/

# 1. The "aggregate" way
mongo off --eval 'db.products.aggregate([{$match: {product_name: "Coke"}},{$out: "result"}])'

mongoexport --db off --collection result --fields code,product_name --type=csv --out result.csv

# 2. The -q/--query option way

# Export the first 5 German products
mongoexport -d off -c products --type=csv --fields code,countries_tags -q '{"countries_tags": "en:germany"}' --out report.csv --limit 5

# Export to STDOUT in CSV format; notice the --quiet option
mongoexport -d off -c products --type=csv --fields code,countries_tags -q '{"countries_tags": "en:germany"}' --limit 5 --quiet

# How long does it take to export all German products?
time mongoexport -d off -c products --type=csv --fields code,countries_tags -q '{"countries_tags": "en:germany"}' --out report.csv
# real 0m10.135s

# Specify the fields in a file containing a line-separated list of fields to export (--fieldFile option)
# The official CSV export fields come from the @export_fields variable in /lib/ProductOpener/Config_off.pm
mongoexport -d off -c products --type=csv --fieldFile official_csv_export_fields.txt -q '{"countries_tags": "en:germany"}' --limit 5 --quiet
</pre>
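
Since mongoexport outputs JSON by default (one document per line), the same query can also produce a JSONL extract. A sketch based on the commands above:

<pre>
# Export the first 5 German products as JSONL (the default output format)
mongoexport -d off -c products -q '{"countries_tags": "en:germany"}' --limit 5 --quiet > germany.jsonl
</pre>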

==== List all fields used in the database ====

<pre>
# The Open Food Facts database contains hundreds of fields.

# An easy way to list them all is to use the "variety" Schema Analyzer:
# https://github.com/variety/variety

# 1. Install "variety"
git clone https://github.com/variety/variety.git

# 2. Use it
cd ./variety

# Analyzing can be very long (hours). You can restrict the analysis to a small number of documents.
time mongo off --eval "var collection = 'products', limit = 1000" variety.js > off_schema_1000.txt
# (17 s)

time mongo off --eval "var collection = 'products', limit = 10000" variety.js > off_schema_10000.txt
# (3 minutes)

time mongo off --eval "var collection = 'products', limit = 100000" variety.js > off_schema_100000.txt
# (75 minutes)

time mongo off --eval "var collection = 'products'" variety.js > off_schema_all.txt
# (more than two days)
</pre>
