How Renzy works: nutrition analysis in 3 seconds
Renzy combines computer vision, a database of 500+ foods (Spanish, international, and packaged products), and multimodal language models to turn a photo into a precise nutrition breakdown. This is the step-by-step guide — from opening the camera to seeing your calories, protein, carbs, fat, vitamins, and a health score.
The 5-step Renzy scan
1. Snap a photo of your plate
Open Renzy, tap the green button and frame the plate. A perfect angle isn't required: the AI tolerates odd angles, low light, and mixed dishes. The photo is uploaded compressed (~100 KB) for speed and privacy.
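One way to hit a fixed upload budget like ~100 KB is to search for the highest JPEG quality whose encoded size still fits. This is a sketch under stated assumptions, not Renzy's actual pipeline: the encoder here is a stub (real JPEG sizes depend on the image), and only the budget figure comes from the text.

```python
TARGET_BYTES = 100 * 1024  # ~100 KB upload budget mentioned above

def fake_jpeg_size(quality: int) -> int:
    """Stand-in for a real encoder: size grows with quality.
    Purely illustrative; actual JPEG size depends on the image."""
    return 2_000 * quality

def pick_quality(encode_size, budget: int = TARGET_BYTES) -> int:
    """Binary-search the highest JPEG quality (1-95) that fits the budget."""
    lo, hi = 1, 95
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if encode_size(mid) <= budget:
            lo = mid          # mid fits; try a higher quality
        else:
            hi = mid - 1      # too big; go lower
    return lo

print(pick_quality(fake_jpeg_size))  # 51 with this stub (2_000 * 51 <= 102_400)
```

With a real encoder you would re-encode at each probed quality; the search itself stays the same.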
2. AI identifies the foods
A multimodal model (vision + language) detects each food in the photo, estimates the portion, and returns a confidence score. Common dishes — paella, Caesar salad, omelette, pizza — and ingredients in complex preparations are recognized reliably.
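The detection step above can be pictured as returning one structured record per food. A minimal sketch; the class and field names are our own invention, not Renzy's actual API:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One food found in the photo (hypothetical field names)."""
    food: str
    portion_g: float   # estimated portion in grams
    confidence: float  # model confidence, 0.0-1.0

detections = [
    Detection("paella", 300.0, 0.92),
    Detection("lemon wedge", 15.0, 0.61),
]
# Low-confidence items are the ones the app later asks you to confirm.
uncertain = [d.food for d in detections if d.confidence < 0.7]
print(uncertain)  # ['lemon wedge']
```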
3. We cross-reference the nutrition database
With the detected foods, we calculate macros (protein, carbs, fat) and micros (sodium, fiber, vitamins A, C, D, iron, calcium) using USDA + BEDCA + OpenFoodFacts data. For packaged goods, we read the barcode directly.
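Once foods and portions are known, the calculation is a scale-and-sum over per-100 g database entries. A sketch with invented numbers in the style of USDA/BEDCA records (not actual database values):

```python
# Per-100 g entries, illustrative values only.
PER_100G = {
    "chicken breast": {"kcal": 165, "protein": 31.0, "carbs": 0.0,  "fat": 3.6},
    "white rice":     {"kcal": 130, "protein": 2.7,  "carbs": 28.0, "fat": 0.3},
}

def plate_totals(portions: dict[str, float]) -> dict[str, float]:
    """Sum each nutrient, scaling the per-100 g entry by the detected grams."""
    totals = {"kcal": 0.0, "protein": 0.0, "carbs": 0.0, "fat": 0.0}
    for food, grams in portions.items():
        for nutrient, per_100g in PER_100G[food].items():
            totals[nutrient] += per_100g * grams / 100
    return totals

totals = plate_totals({"chicken breast": 150, "white rice": 200})
print(totals)
```

Micros (sodium, fiber, vitamins, iron, calcium) would be extra keys in the same per-100 g records; the math is identical.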
4. We compute the health score
An algorithm evaluates the plate against your goal (lose, gain, maintain) and returns a 0-100 score with a short explanation. Not just calories: it penalizes ultra-processed items and rewards fiber and quality protein.
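A scoring rule of this shape can be sketched in a few lines. The weights, caps, and penalty below are invented for illustration; Renzy's actual algorithm is not public:

```python
def health_score(kcal: float, kcal_goal: float, fiber_g: float,
                 protein_g: float, ultra_processed: bool) -> int:
    """Toy 0-100 score in the spirit described above: start from closeness
    to the calorie goal, reward fiber and protein, penalize ultra-processed
    items. All weights are made up for this sketch."""
    score = 100 - min(abs(kcal - kcal_goal) / kcal_goal * 100, 60)
    score += min(fiber_g * 2, 10)      # reward fiber, capped
    score += min(protein_g * 0.5, 10)  # reward quality protein, capped
    if ultra_processed:
        score -= 25                    # flat ultra-processed penalty
    return max(0, min(100, round(score)))

print(health_score(kcal=900, kcal_goal=700, fiber_g=2,
                   protein_g=12, ultra_processed=False))  # 81
```

The same plate flagged as ultra-processed drops 25 points, which is the "not just calories" behavior the text describes.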
5. Save or adjust
If something is off — larger portion, missing ingredient — edit the value with one tap. Save it to your log and it counts toward your daily goal. Syncs with HealthKit (iOS) and Google Fit.
What's behind the AI
No magic, just engineering. We use a state-of-the-art multimodal model (Claude Sonnet) fine-tuned on thousands of labeled food photos. Portions are estimated through visual references (utensils, standard plates) and context (a paella plate is typically 250-350g). When confidence is low, we ask you to confirm — better to ask than to guess wrong.
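The two heuristics above — context-based portion priors and a confirm-when-unsure gate — can be sketched together. The prior range for paella comes from the text; the threshold and everything else are our assumptions:

```python
# Typical portion priors by dish, in grams (paella range is from the text).
PORTION_PRIOR = {"paella": (250, 350)}
CONFIRM_THRESHOLD = 0.7  # hypothetical cutoff; the real value isn't public

def estimate_portion(dish: str, visual_estimate_g: float) -> float:
    """Clamp the visual estimate into the dish's typical range, if known."""
    lo, hi = PORTION_PRIOR.get(dish, (0, float("inf")))
    return min(max(visual_estimate_g, lo), hi)

def needs_confirmation(confidence: float) -> bool:
    """Ask the user instead of guessing when the model isn't sure."""
    return confidence < CONFIRM_THRESHOLD

print(estimate_portion("paella", 420))  # 350: clamped to the typical range
print(needs_confirmation(0.55))         # True: prompt the user to confirm
```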
How accurate is it?
For common dishes, calorie accuracy lands within ±10-15% — comparable to a trained person eyeballing it. For packaged goods with a barcode, it's exact (direct label read). For exotic or homemade dishes with many hidden ingredients, the margin widens. That's why every analysis ships with a confidence level and you can always edit before saving.
Privacy: what we do with your photos
Photos are processed and discarded after analysis. We don't store them, train on them, or share them. We only keep the nutrition result (the numbers) tied to your account. You can delete your account and all your data in one tap from Settings; you'll receive a confirmation email once everything has been removed.
Frequently asked questions
Do I need internet to scan?
Yes — scanning needs internet because the AI runs server-side. Everything else — log, hydration, weekly planner — works offline and syncs when you reconnect.
Does it work with homemade food or only common dishes?
Both. For multi-ingredient homemade dishes (stews, mixed bowls) the AI identifies the main ingredients. If one is missing, add it with a tap.
How long does the analysis take?
Typically between 2 and 4 seconds. The slow part is the photo upload; the model itself responds in under a second.
Can I scan nutrition labels?
Yes. Point at the barcode for exact OpenFoodFacts data. If the product isn't in the database, scan the nutrition table and the AI reads it.
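A barcode lookup against Open Food Facts is a single GET to its public read API. This sketch builds the URL and parses a truncated sample response offline (the sample values are invented; the endpoint and nutriment key names follow the public API):

```python
import json

def product_url(barcode: str) -> str:
    """Open Food Facts public read API: one GET per product."""
    return f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"

# Truncated response in the shape the API returns (values invented).
sample = json.loads("""{
  "status": 1,
  "product": {"product_name": "Oat drink",
              "nutriments": {"energy-kcal_100g": 46, "proteins_100g": 1.0}}
}""")

if sample["status"] == 1:  # status 1 means the product was found
    n = sample["product"]["nutriments"]
    print(sample["product"]["product_name"], n["energy-kcal_100g"], "kcal/100g")

print(product_url("123"))
```

A `status` of 0 is the "product isn't in the database" case from the answer above, where the app falls back to reading the nutrition table from the photo.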
Does it work on iPhone and Android?
Yes, both. It also runs as a web app — scan from your mobile browser, no install required.
Try it with no signup
The home demo asks for nothing — no email, no card. Upload a photo and see the analysis instantly.