Publication:
IMAGE SEGMENTATION OF CARBOHYDRATES ON PLATES OF COOKED MEALS

dc.contributor.author: YEE LI XIEN
dc.date.accessioned: 2025-02-03T04:48:42Z
dc.date.available: 2025-02-03T04:48:42Z
dc.date.issued: 2024
dc.description.abstract: Accurate assessment of dietary intake is crucial for promoting health and managing diet-related conditions such as diabetes and obesity. Traditional methods of dietary assessment are often time-consuming and prone to inaccuracy. This study evaluates the effectiveness of three deep learning models for segmenting carbohydrate regions in food images: a U-Net with 16 filters, a U-Net with 64 filters, and the Segment Anything Model (SAM). The models were assessed using metrics such as accuracy, Intersection over Union (IoU), and Dice Score. SAM outperformed the U-Net models, achieving an overall accuracy of 99.24%, an IoU of 90.59%, and a Dice Score of 94.21%. The U-Net 16-filter model performed better than the 64-filter model, with an accuracy of 97.86% and an IoU of 81.15%. These results highlight SAM's advanced capabilities in promptable segmentation and zero-shot transfer, making it the most effective model for this task. Future research should focus on expanding the dataset, integrating texture-based segmentation methods, and exploring data augmentation techniques to further enhance model robustness.
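The abstract reports IoU and Dice Score for binary segmentation masks. As an illustrative sketch (not the thesis implementation), both metrics can be computed from a predicted mask and a ground-truth mask as follows; the function name and toy masks here are assumptions for demonstration:

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Compute IoU and Dice Score for two binary segmentation masks.

    IoU  = |pred AND truth| / |pred OR truth|
    Dice = 2 * |pred AND truth| / (|pred| + |truth|)
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    iou = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return float(iou), float(dice)

# Toy 2x2 example: three pixels agree, one is missed.
pred = np.array([[1, 1], [0, 1]])
truth = np.array([[1, 1], [1, 1]])
iou, dice = iou_and_dice(pred, truth)  # IoU = 3/4, Dice = 6/7
```

Note that Dice is always at least as large as IoU for the same masks, which is consistent with the reported scores (e.g. SAM's Dice of 94.21% against its IoU of 90.59%).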
dc.identifier.uri: https://hdl.handle.net/20.500.14377/37015
dc.language.iso: en
dc.publisher: IMU University
dc.subject: Eating
dc.subject: Health Promotion
dc.subject: Diabetes Mellitus
dc.subject: Obesity
dc.subject: Carbohydrates
dc.title: IMAGE SEGMENTATION OF CARBOHYDRATES ON PLATES OF COOKED MEALS
dc.type: Thesis
dspace.entity.type: Publication
Files
Original bundle
Name: YEE LI XIEN.pdf
Size: 5.77 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed to upon submission