  $ zcat openfoodfacts-products.jsonl.gz | jq -r '. | select(.data_quality_errors_tags[]? != "")' | jq -r '[.code,(.data_quality_errors_tags|join(","))] | @csv'
 
These operations can take quite a long time (more than 10 minutes, depending on your computer and your selection).
 
=== Parquet file hosted on Hugging Face (beta) ===

This method should not be considered ready for production; it is simply another convenient way to access Open Food Facts data.

The Parquet file is built from the JSONL export (the whole database). Hugging Face then offers several ways to query the data.

==== In-browser queries ====

Go to the dataset's page -- https://huggingface.co/datasets/openfoodfacts/product-database -- and click on the yellow "SQL" button.

You'll see a SQL interface in your browser, where you can run queries.
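For example, a first query could simply count the products. This is only a sketch: the table name depends on how the console exposes the dataset (it is assumed to be "train" here), so adapt it to the name shown in the interface.

 SELECT count(*) AS nb_products from train;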
==== From the command line, thanks to DuckDB ====

Here again, DuckDB lets you query remote Parquet files directly from the command line.

 $ duckdb :memory: "SELECT * from '<nowiki>https://huggingface.co/datasets/openfoodfacts/product-database/resolve/main/products.parquet'</nowiki> LIMIT 10;"

The query can take a little while (~15 seconds).
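Selecting only the columns you need can speed things up, since DuckDB fetches just the required parts of the remote Parquet file. A sketch, assuming the file exposes a "code" column like the JSONL export:

 $ duckdb :memory: "SELECT code from '<nowiki>https://huggingface.co/datasets/openfoodfacts/product-database/resolve/main/products.parquet'</nowiki> LIMIT 10;"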
==== From the command line, through the local filesystem ====

If you want faster results, download the Parquet file from Hugging Face. You'll then be able to query the local copy with DuckDB, with much better response times.
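One way to download it before running the query below (a sketch, assuming wget is available; any HTTP client works):

 $ wget <nowiki>https://huggingface.co/datasets/openfoodfacts/product-database/resolve/main/products.parquet</nowiki>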
 $ duckdb :memory: "SELECT * from './products.parquet' LIMIT 10;"
    
=== MongoDB dump ===
 