[[Category:Reuse]]
Open Food Facts data is released as Open Data: it can be reused freely by anyone, under the Open Database License (ODBL). While this page focuses on practical reuse, you must be aware of the [[ODBL License|rights and duties provided by the Open Database License (ODBL)]].

== Where is the data? ==
There are different ways to get the data.

=== Looking for a selection of products? ===
Then use the advanced search. The Open Food Facts advanced search feature lets you download selections of the data. See: https://world.openfoodfacts.org/cgi/search.pl

When your search is done, you will be able to download the selection in '''CSV or Excel format'''. Just give it a try!
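The same search can also be scripted. This is a sketch using curl with the parameter names from the advanced search form; the category value is just an example, and <code>&csv=1</code> asks for the result as a CSV download:

<code>
$ curl "https://world.openfoodfacts.org/cgi/search.pl?action=process&tagtype_0=categories&tag_contains_0=contains&tag_0=orange-juices&csv=1" -o selection.csv
</code>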
=== Looking for the whole database? ===
The whole database can be downloaded at https://world.openfoodfacts.org/data

It's very big: Open Food Facts hosts more than 1,400,000 products (as of July 2020), so you will probably need some technical skills to reuse the data.

You'll find different kinds of data there.
==== The MongoDB daily export ====
It represents the most complete data; it's very big and you have to know how to deal with MongoDB.
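If you want to try it locally, here is a minimal sketch. It assumes a local MongoDB installation with the standard mongorestore tool, and the dump URL and <code>dump/off</code> directory layout used by the export as of 2020; check https://world.openfoodfacts.org/data if they have changed.

<code>
$ wget https://static.openfoodfacts.org/data/openfoodfacts-mongodbdump.tar.gz
$ tar -xzf openfoodfacts-mongodbdump.tar.gz
$ mongorestore --db off dump/off   # restore into a local database named "off"
$ mongo off --eval "db.products.count()"   # quick sanity check: count the restored products
</code>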
==== The JSONL daily export ====
While still undocumented, there is a daily export of the whole database in JSONL format. It contains the same data as the MongoDB export. It's very big: more than 14 GB uncompressed.

You can find it at https://static.openfoodfacts.org/data/openfoodfacts-products.jsonl.gz
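To peek at a single product without decompressing the whole file (assuming zcat and [https://stedolan.github.io/jq/manual/v1.6/ jq] are installed; see the jq section below):

<code>
$ zcat openfoodfacts-products.jsonl.gz | head -n 1 | jq .
</code>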
==== The CSV daily export ====
It contains all the products, but with a subset of the database fields. [https://world.openfoodfacts.org/data/data-fields.txt This subset is very large] and includes the main characteristics (EAN, name, brand...), many tags (such as categories, origins, labels, packaging...), ingredients and nutrition facts. Thus, it generally fits the majority of uses. It's a 2.3 GB file (as of April 2020), so it can't be opened with LibreOffice or Excel on a machine with 8 GB of RAM.
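To get a first idea of its content without loading the whole file anywhere, you can download it and list the first column names from its header line. A minimal sketch, assuming the static export URL published on the data page (the file is tab-separated, hence the <code>tr</code> call):

<code>
$ wget https://static.openfoodfacts.org/data/en.openfoodfacts.org.products.csv
$ head -n 1 en.openfoodfacts.org.products.csv | tr '\t' '\n' | head -n 20
</code>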
== How to reuse? ==

=== CSV export ===

==== csvkit ====
[https://csvkit.readthedocs.io/en/latest/ csvkit] is a very efficient tool for manipulating huge amounts of CSV data. Here are some useful tips for manipulating the Open Food Facts CSV export.
'''Selecting 2 columns.''' Selecting two or three columns can be useful for some usages. Extracting two columns produces a smaller CSV file which can be opened by common software such as LibreOffice or Excel. The following command creates a CSV file (brands.csv) containing two columns from Open Food Facts (code and brands). (It generally takes more than 2 minutes, depending on your computer.)

<code>
$ csvcut -t -c code,brands en.openfoodfacts.org.products.csv > brands.csv
</code>
'''Selecting products based on a regular expression.''' csvgrep, also part of csvkit, can search specified fields, allowing you to make powerful selections. The following command creates a CSV file (selection.csv) containing all products whose barcode (code) begins with 325798 (<code>-r "^325798(.*)"</code>).

<code>
$ csvgrep -t -c code -r "^325798(.*)" en.openfoodfacts.org.products.csv > selection.csv
</code>

The following command creates a CSV file (calisson.csv) containing all products whose category field (categories) contains "calisson".

<code>
$ csvgrep -t -c categories -r "calisson" en.openfoodfacts.org.products.csv > calisson.csv
</code>
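csvkit commands can also be chained. For instance, the following combines both tools to keep only three columns of the calisson selection (the output file name is just an example). Note that csvgrep outputs comma-separated data, so the second command doesn't need <code>-t</code>:

<code>
$ csvgrep -t -c categories -r "calisson" en.openfoodfacts.org.products.csv | csvcut -c code,product_name,categories > calisson_names.csv
</code>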
==== Import CSV in PostgreSQL ====
See this article: https://blog-postgresql.verite.pro/2018/12/21/import-openfoodfacts.html (in French, but it should be understandable with Google Translate).
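The article covers a complete import. As a minimal sketch of the general approach, assuming a local PostgreSQL installation and csvkit (see above) to reduce the export to three columns (the database and table names are arbitrary):

<code>
$ createdb off
$ psql off -c "CREATE TABLE products (code text, product_name text, brands text)"
$ csvcut -t -c code,product_name,brands en.openfoodfacts.org.products.csv | psql off -c "\copy products FROM STDIN CSV HEADER"
</code>

csvcut also converts the tab-separated export to plain CSV on the fly, which is what <code>\copy ... CSV HEADER</code> expects.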
==== Import CSV to SQLite ====
The repository [https://github.com/fairdirect/foodrescue-content foodrescue-content] contains Ruby scripts that import Open Food Facts CSV data into a [https://www.sqlite.org/index.html SQLite] database with full table normalization. Only a few fields are imported so far, but this can be extended easily. (A quick flat-table alternative with the sqlite3 command-line shell is sketched after this list.) Data imported so far includes:

* barcode number
* product name
* product categories
* product countries
* full categories hierarchy imported from the <code>categories.txt</code> taxonomy ([https://github.com/openfoodfacts/openfoodfacts-server/tree/master/taxonomies see])
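If you only need a quick flat table rather than a normalized database, the sqlite3 command-line shell can import a CSV file directly. This is a minimal sketch, not the method used by the repository above: it assumes csvkit (see the csvkit section) to reduce the export to two columns first, and it lets <code>.import</code> create the table using the CSV header as column names.

<code>
$ csvcut -t -c code,product_name en.openfoodfacts.org.products.csv > products.csv
$ printf '.mode csv\n.import products.csv products\n' | sqlite3 products.db
$ sqlite3 products.db "SELECT count(*) FROM products;"
</code>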
==== Python ====
There are some articles dealing with using the Python language to explore Open Food Facts data.

Step-by-step commands: http://www.xavierdupre.fr/app/ensae_teaching_cs/helpsphinx/notebooks/prepare_data_2017.html (also in French)

Python notebooks are a great way to learn about Open Food Facts data, as they mix code and results together:
* Find [https://www.kaggle.com/openfoodfacts/world-food-facts/kernels?sortBy=hotness&group=everyone&pageSize=20&datasetId=20&language=Python dozens of Python notebooks on Kaggle]
* https://www.datasciencesociety.net/part-1-exploring-food-data/
==== R stat ====
For people who have R skills, there are [https://www.kaggle.com/openfoodfacts/world-food-facts/kernels?sortBy=hotness&group=everyone&pageSize=20&datasetId=20&language=R more than 50 notebooks from the Kaggle community].
=== JSONL export ===
The JSONL export is a huge file! It's not possible to play with it with common editors or common tools. But there are some command-line tools that allow interesting things, like [https://stedolan.github.io/jq/manual/v1.6/ jq].

==== jq ====
* First, decompress the file (be careful: it takes more than 14 GB once decompressed):
 $ gunzip openfoodfacts-products.jsonl.gz
* Then work on a small subset to test things. E.g. for 100 products:
 $ head -n 100 openfoodfacts-products.jsonl > small.jsonl
You can start playing with jq. Here are some examples.
 $ cat small.jsonl | jq . # pretty-print the whole file in JSON format

 $ cat small.jsonl | jq -r .code # print all the products' codes

 $ cat small.jsonl | jq -r '[.code,.product_name] | @csv' # output CSV data containing code,product_name

Then you can try on the whole database:
 $ cat openfoodfacts-products.jsonl | jq -r '[.code,.product_name] | @csv' > names.csv # output a CSV file (names.csv) containing code,product_name for all products

If you don't have enough disk space to uncompress the .gz file, you can use zcat directly on the compressed file. Example:
 $ zcat openfoodfacts-products.jsonl.gz | jq -r '[.code,.product_name] | @csv' # output CSV data containing code,product_name