[[Category:Reuse]]
Open Food Facts data is released as Open Data: it can be reused freely by anyone, under the Open Database License (ODBL). While this page is about practical reuse, you must be aware of the [[ODBL License|rights and duties provided by the Open Database License]] (ODBL).

== Where is the data? ==
Then use the advanced search. The Open Food Facts advanced search feature allows you to download selections of the data. See: https://world.openfoodfacts.org/cgi/search.pl

When your search is done, you will be able to download the selection in '''CSV or Excel format''', so just give it a try!
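
The same search engine can also be queried from the command line and returns JSON. A minimal sketch with curl (the parameter names mirror the advanced search form and its <code>json=1</code> output switch; the category value is only an example):
 $ curl "https://world.openfoodfacts.org/cgi/search.pl?action=process&tagtype_0=categories&tag_contains_0=contains&tag_0=orange-juice&page_size=50&json=1" > search-result.json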
=== Looking for the whole database? ===
The whole database can be downloaded at https://world.openfoodfacts.org/data

It's very big: Open Food Facts hosts more than 1,400,000 products (as of July 2020), so you will probably need some technical skills to reuse the data.

You'll find different kinds of data there.
==== The MongoDB daily export ====
It represents the most complete data, but you have to know how to deal with MongoDB. It's very big: more than 9 GB uncompressed.
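
If you want to try it locally, here is a minimal sketch, assuming a MongoDB server is installed and that the dump URL and archive layout are the ones listed on https://world.openfoodfacts.org/data:
 $ wget https://static.openfoodfacts.org/data/openfoodfacts-mongodbdump.tar.gz
 $ tar xzf openfoodfacts-mongodbdump.tar.gz      # assumed layout: a dump/off directory containing products.bson
 $ mongorestore --db off dump/off                # restore into a local database named "off"
 $ mongo off --eval 'db.products.count()'        # quick sanity check: number of imported products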
==== The JSONL daily export ====
While still undocumented, there is a daily export of the whole database in [https://jsonlines.org/ JSONL format] (sometimes called LDJSON or NDJSON), where each line is a JSON object. It represents the same data as the MongoDB export. The file is 2.7 GB (as of September 2020), compressed with gzip, and takes more than 14 GB uncompressed.

You can find it at https://static.openfoodfacts.org/data/openfoodfacts-products.jsonl.gz
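
To get a first feeling of the structure, you can look at a single product without uncompressing the whole file. A small sketch, assuming <code>jq</code> is installed:
 $ wget https://static.openfoodfacts.org/data/openfoodfacts-products.jsonl.gz
 $ zcat openfoodfacts-products.jsonl.gz | head -n 1 | jq 'keys' | head -n 20    # field names of the first product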
==== The CSV daily export ====
It contains all the products, but with a subset of the database fields. [https://world.openfoodfacts.org/data/data-fields.txt This subset is very large] and includes the main characteristics (EAN, name, brand...), many tags (such as categories, origins, labels, packaging...), ingredients and nutrition facts. Thus, it is generally suited to the majority of usages. It's a 2.3 GB file (as of April 2020), so it can't be opened by LibreOffice or Excel on a machine with 8 GB of RAM.
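
The file is too big for a spreadsheet, but command-line tools can handle it. A minimal sketch, assuming the file name used on the data page (<code>en.openfoodfacts.org.products.csv</code>), a tab-separated layout, and [https://csvkit.readthedocs.io/ csvkit] being installed; the column names come from the field list linked above:
 $ head -n 1 en.openfoodfacts.org.products.csv | tr '\t' '\n' | head -n 20    # list the first 20 column names
 $ csvcut -t -c code,product_name,brands,categories_tags en.openfoodfacts.org.products.csv > products-small.csv    # keep a few columns so a spreadsheet can open the file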
== How to reuse? ==
==== Import CSV in PostgreSQL ====
See this article: https://blog-postgresql.verite.pro/2018/12/21/import-openfoodfacts.html (in French, but should be understandable with Google Translate).

Alternatively, feel free to use this project from GitHub: https://github.com/ArchiMageAlex/off_converter
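
As a rough idea of what such an import looks like, here is a minimal sketch limited to a handful of columns, assuming a running PostgreSQL instance you can write to, csvkit being installed, and the file and column names used in the CSV section above:
 $ csvcut -t -c code,product_name,brands en.openfoodfacts.org.products.csv > products-small.csv    # reduce the tab-separated export to three columns
 $ psql -c "CREATE TABLE off_products (code text, product_name text, brands text);"
 $ psql -c "\copy off_products FROM 'products-small.csv' WITH CSV HEADER"
 $ psql -c "SELECT count(*) FROM off_products;"    # check how many rows were imported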
==== Import CSV to SQLite ====

The repository [https://github.com/fairdirect/foodrescue-content foodrescue-content] contains Ruby scripts that import Open Food Facts CSV data into a [https://www.sqlite.org/index.html SQLite] database with full table normalization. Only a few fields are imported so far, but this can be extended easily. Data imported so far includes:

* barcode number
==== R stat ====
For people who have R skills, there are [https://www.kaggle.com/openfoodfacts/world-food-facts/kernels?sortBy=hotness&group=everyone&pageSize=20&datasetId=20&language=R more than 50 notebooks from the Kaggle community].

Moreover, here is a link showing how to transform the .bson file into a data frame: https://github.com/gnaweric/openfoodfact_database_queries

Using {mongolite}, first connect to the database, then import the .bson file, then take a sample of it to make sure it is ready. Finally, save it to an .rdata file, for example.

Beware: each line is a product, and some variables need to be unnested, e.g. with tidyr::unnest_wider().
=== JSONL export ===
 $ cat openfoodfacts-products.jsonl | jq -r '[.code,.product_name] | @csv' > names.csv    # output a CSV file (names.csv) containing code,product_name for all products

If you don't have enough disk space to uncompress the .gz file, you can use zcat directly on the compressed file. Example:
 $ zcat openfoodfacts-products.jsonl.gz | jq -r '[.code,.product_name] | @csv'    # output CSV data containing code,product_name
==== Filtering the JSONL export with jq ====
Filtering on a specific country:
 $ zcat openfoodfacts-products.jsonl.gz | jq '. | select(.countries_tags[]? == "en:germany")'

The previous command produces pretty-printed JSON output containing all the products sold in Germany. If you want JSONL output (one product per line), add the <code>-c</code> option.
 $ zcat openfoodfacts-products.jsonl.gz | jq -c '. | select(.countries_tags[]? == "en:germany")'

You can add multiple filters and export the result to a CSV file. For example, here is a command that (1) selects products having the Nutri-Score computed and belonging to the top 90% most scanned products in 2020, and (2) exports the barcode (<code>code</code>) and the number of scans (<code>scans_n</code>) as a CSV file.
 $ zcat openfoodfacts-products.jsonl.gz | jq -r '. | select(.misc_tags[]? == "en:nutriscore-computed" and .popularity_tags[]? == "top-90-percent-scans-2020") | [.code,.scans_n] | @csv' > displayed.ns.in.top90.2020.world.csv

These operations can take quite a long time (more than 10 minutes, depending on your computer and your selection).
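
If you plan to run several such queries, one way to speed things up is to extract your subset once and then iterate on the much smaller file (a sketch reusing the Germany filter above):
 $ zcat openfoodfacts-products.jsonl.gz | jq -c 'select(.countries_tags[]? == "en:germany")' | gzip > germany.jsonl.gz    # extract the subset once
 $ zcat germany.jsonl.gz | jq -r '[.code,.product_name] | @csv' > germany-names.csv    # later queries run on the smaller file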