 
of nutrition tables [https://drive.google.com/open?id=1nPxOGFgjKaKuUcnVC0nbNAijt1al9rvo pdf]
* Table extraction (V1): https://github.com/openfoodfacts/off-nutrition-table-extractor. It mainly focuses on the nutrition table detection part; the extraction process itself remains quite simple.
 
* Table extraction (v2): https://github.com/cgandon/openfoodfacts-nutriments
 
= Other approaches =
The previous attempts are not state of the art for the table extraction task. Other approaches we may consider:

* Detection of the table structure: identifying the rows, columns and cells of the table. Most of the time, rows correspond to nutrient labels (protein, carbohydrate, energy, ...) and columns to the quantity basis (100g, per portion, % of daily intake). By using pattern matching, we can label the columns and rows and extract the values of the nutritional table for 100g or per portion (see the first sketch after this list).
* Using an end-to-end model to directly extract nutritional values. This may be done by scoring candidates for each field (protein_100g, protein_serving, carbohydrate_100g, ...) and selecting the highest-scoring token as the field value (see the second sketch after this list). This is similar to the [https://azure.microsoft.com/fr-fr/services/cognitive-services/form-recognizer Microsoft Form Recognizer API] or the [https://cloud.google.com/document-ai/docs Google Document AI API].
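For the structure-based approach, here is a minimal sketch of the pattern-matching step, assuming the table cells have already been detected and OCR'd into a grid of strings. The nutrient and column regexes are illustrative only (a real implementation needs many more synonyms and languages); this is not code from the repositories above.

<syntaxhighlight lang="python">
import re

# Illustrative patterns: map an OCR'd cell to a canonical nutrient (rows)
# or to a quantity basis (columns).
NUTRIENT_PATTERNS = {
    "energy": re.compile(r"energ|calorie", re.I),
    "fat": re.compile(r"\bfat\b|lipide|mati[eè]res? grasses", re.I),
    "carbohydrates": re.compile(r"carbohydrate|glucide", re.I),
    "proteins": re.compile(r"protein|prot[eé]ine", re.I),
    "salt": re.compile(r"\bsalt\b|\bsel\b", re.I),
}
COLUMN_PATTERNS = {
    "100g": re.compile(r"100\s*(g|ml)", re.I),
    "serving": re.compile(r"portion|serving", re.I),
    "daily_value": re.compile(r"%|daily", re.I),
}
VALUE_RE = re.compile(r"\d+[.,]?\d*")


def label_header(text, patterns):
    """Return the first pattern key matching the cell text, or None."""
    for label, pattern in patterns.items():
        if pattern.search(text):
            return label
    return None


def extract(table):
    """table: list of rows, each row being a list of OCR'd cell strings.
    Returns e.g. {"proteins_100g": 4.5, "proteins_serving": 1.4, ...}"""
    columns = {i: label_header(cell, COLUMN_PATTERNS) for i, cell in enumerate(table[0])}
    nutriments = {}
    for row in table[1:]:
        nutrient = label_header(row[0], NUTRIENT_PATTERNS)
        if nutrient is None:
            continue
        for i, cell in enumerate(row[1:], start=1):
            match = VALUE_RE.search(cell)
            if columns.get(i) in ("100g", "serving") and match:
                nutriments[f"{nutrient}_{columns[i]}"] = float(match.group(0).replace(",", "."))
    return nutriments


print(extract([
    ["Nutrition facts", "per 100 g", "per portion (30 g)"],
    ["Energy", "1500 kJ", "450 kJ"],
    ["Proteins", "4,5 g", "1,4 g"],
]))
</syntaxhighlight>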
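For the end-to-end approach, here is a rough sketch of the candidate-scoring step only. In a real system the scores would come from a trained model over token text, position and image features; here they are faked with random numbers just to show how the best candidate per field would be selected. All names are hypothetical.

<syntaxhighlight lang="python">
import numpy as np

FIELDS = ["energy_100g", "proteins_100g", "carbohydrates_100g", "proteins_serving"]

# OCR tokens of the image (text + bounding box); in practice they would come
# from Google Cloud Vision or Tesseract. Values here are just an example.
tokens = [
    {"text": "Energy", "box": (10, 10, 80, 30)},
    {"text": "1500", "box": (120, 10, 160, 30)},
    {"text": "4,5", "box": (120, 40, 150, 60)},
]

# Placeholder for model output, shaped [n_fields, n_tokens].
rng = np.random.default_rng(0)
scores = rng.random((len(FIELDS), len(tokens)))

THRESHOLD = 0.5
prediction = {}
for field_index, field in enumerate(FIELDS):
    best_token = int(np.argmax(scores[field_index]))
    if scores[field_index, best_token] >= THRESHOLD:
        prediction[field] = tokens[best_token]["text"]

print(prediction)
</syntaxhighlight>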
Each approach requires different kinds of annotation, so choosing the most effective approach at the beginning of the project is crucial.

[[File:Nutrition Facts extraction approaches (1).png]]

Source: https://docs.google.com/drawings/d/1YKgnTEX1RBgsMQpO4JIt94sMwf1IC_fPWIvPnUjswTg/edit?usp=sharing
= Planning =
* Test of an end-to-end model, through an API as a starter: Form Recognizer or Google Document AI. If the results are promising, we develop an in-house end-to-end model; otherwise, a layout model.
* Thorough literature review
* Architecture choice
* Annotation campaign
* Experimentation: model training and scoring
* Integration into Robotoff/Hunger Games
Ramzi: test of end-to-end model with Form Recognizer API.
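As an illustration, here is a minimal sketch of such a test, assuming the azure-ai-formrecognizer Python SDK and its prebuilt layout model; the endpoint, key and file name are placeholders, and the exact client/method names may differ between SDK versions.

<syntaxhighlight lang="python">
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

# Placeholders: use your own Cognitive Services endpoint and key.
ENDPOINT = "https://<resource-name>.cognitiveservices.azure.com/"
KEY = "<api-key>"

client = FormRecognizerClient(ENDPOINT, AzureKeyCredential(KEY))

# Send a nutrition table photo to the prebuilt layout model and read back
# the detected tables (rows/columns/cells with their text).
with open("nutrition_label.jpg", "rb") as image:
    poller = client.begin_recognize_content(image)

for page in poller.result():
    for table in page.tables:
        for cell in table.cells:
            print(cell.row_index, cell.column_index, cell.text)
</syntaxhighlight>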
Yichen: test of image preprocessing before Form Recognizer (layout model), test "manual" approaches.
* tested a pre-processing method (ported from Mathematica to Python)
* OCR results worsen after binarization
* testing a method to rectify deformed images (by detecting horizontal and vertical lines)
* "manual" approaches: compare the positions of OCR bounding boxes; use table borders if we have some (see the sketch after this list)
** manual horizontal or vertical scans of bounding boxes, using the angles of the bounding boxes
** Robotoff already has Python classes for importing/analyzing Google Cloud Vision OCR results
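A sketch of the "manual" horizontal scan, assuming each OCR word comes with an axis-aligned bounding box (as in Google Cloud Vision output). The grouping rule below (vertical-centre proximity relative to word height) is just one possible heuristic.

<syntaxhighlight lang="python">
def group_into_rows(words, tolerance=0.6):
    """words: list of dicts {"text": str, "box": (x_min, y_min, x_max, y_max)}.
    Groups words whose vertical centres are close (relative to word height)
    into the same row, then sorts each row left to right."""
    def y_center(word):
        return (word["box"][1] + word["box"][3]) / 2

    def height(word):
        return word["box"][3] - word["box"][1]

    rows = []
    for word in sorted(words, key=y_center):
        for row in rows:
            reference = row[0]
            if abs(y_center(word) - y_center(reference)) <= tolerance * height(reference):
                row.append(word)
                break
        else:
            rows.append([word])
    return [sorted(row, key=lambda w: w["box"][0]) for row in rows]


words = [
    {"text": "4,5", "box": (200, 52, 230, 70)},
    {"text": "Proteins", "box": (10, 50, 90, 70)},
    {"text": "Energy", "box": (10, 10, 80, 30)},
    {"text": "1500", "box": (200, 12, 240, 30)},
    {"text": "kJ", "box": (245, 12, 265, 30)},
]
for row in group_into_rows(words):
    print([word["text"] for word in row])
</syntaxhighlight>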
Raphaël: Literature review (to be continued)
= Notes 24/07 =
* Ramzi: tested the end-to-end model with learning
** missing one step, results expected next week
* Yasmine + Yichen: working on pre-processing
** Testing correction of cylinder distortion -> flat rectangle (see the unwrapping sketch after this list)
*** OpenCV
*** Tesseract performance is very poor
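A simplified sketch of the cylinder unwrapping idea with OpenCV, assuming a front-facing photo of a label wrapped around a vertical cylinder whose apparent radius and centre column are known or estimated; it ignores perspective, which real photos will also need.

<syntaxhighlight lang="python">
import numpy as np
import cv2


def unwrap_cylinder(image, center_x, radius):
    """Flatten a vertically-oriented cylindrical label into a rectangle.

    Output column u corresponds to arc length s = u - width/2 on the cylinder
    surface; the matching source column is x = center_x + radius * sin(s / radius).
    """
    height = image.shape[0]
    out_width = int(np.pi * radius)  # visible half-circumference
    u = np.arange(out_width, dtype=np.float32)
    theta = (u - out_width / 2) / radius          # angle on the cylinder
    source_x = center_x + radius * np.sin(theta)  # column in the original photo

    map_x = np.tile(source_x, (height, 1)).astype(np.float32)
    map_y = np.tile(np.arange(height, dtype=np.float32).reshape(-1, 1), (1, out_width))
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)


# Placeholder file name; radius/centre estimated as half the image width here.
image = cv2.imread("nutrition_label.jpg")
flat = unwrap_cylinder(image, center_x=image.shape[1] / 2, radius=image.shape[1] / 2)
cv2.imwrite("nutrition_label_flat.jpg", flat)
</syntaxhighlight>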
Next week:

* Ramzi
** finish testing the end-to-end model
** increase the size of the data set to get significant results
* Yasmine + Yichen
** Cylinder unwrapping (classic + deep learning models)
*** review the literature
*** create a data set for Ramzi using the paid API "perfect label"
** table structure detection: associate bounding boxes with key-value pairs (see the sketch after this list)
* All
** Create a test set with cylindrical photos + unwrapped photos
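A sketch of the bounding-box to key-value association, reusing the same word format as the row-grouping sketch above: within a row, non-numeric words form the key and numeric words are kept as candidate values. The heuristic is illustrative only.

<syntaxhighlight lang="python">
import re

NUMBER_RE = re.compile(r"\d+[.,]?\d*")


def to_key_values(row):
    """row: OCR words of one table line, already sorted left to right,
    each word a dict {"text": ..., "box": ...}. Left-hand textual words form
    the key; words containing a number are kept as candidate values, with
    their boxes so they can later be matched to a column (100g, per portion)."""
    key_words = [word["text"] for word in row if not NUMBER_RE.search(word["text"])]
    value_words = [word for word in row if NUMBER_RE.search(word["text"])]
    return " ".join(key_words), value_words


row = [
    {"text": "Proteins", "box": (10, 50, 90, 70)},
    {"text": "4,5", "box": (200, 52, 230, 70)},
    {"text": "1,4", "box": (320, 52, 350, 70)},
]
key, values = to_key_values(row)
print(key, [value["text"] for value in values])  # Proteins ['4,5', '1,4']
</syntaxhighlight>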
= Resources =
* [https://aka.ms/trove Trove] - a marketplace platform where AI developers who need photos can create projects to crowdsource specific types of photos from photo takers.
* Lobe - train a custom machine learning model using a simple visual interface, with no code
** Step 1: Request an invite code from the team: lobeai@microsoft.com
** Step 2: Download Lobe - For Mac: https://aka.ms/DownloadLobeMac - For PC: https://aka.ms/DownloadLobeWindows
* [https://flow.microsoft.com/en-us/ai-builder/ AI Builder] - import your photos right away into a Power App (or a Power Automate flow) so you can use the output in any real-world application.
