# Research: Food Inventory & Multi-User Tracking Apps
## Overview
This research documents common pitfalls, patterns, and best practices for food inventory and household tracking applications. It forms the foundation for Sage's roadmap and architecture decisions.
**Research Scope:** Food & household inventory tracking apps with multi-user support and expiration tracking
**Researched:** January 2026
**Primary Focus:** What mistakes do these apps commonly make? What architectural patterns work? What should Sage avoid?
---
## Files in This Research
### 1. **RESEARCH_SUMMARY.md** (Start here)
Executive summary with roadmap implications. Covers:
- Why food tracking apps fail (4 critical cascades)
- Key findings organized by severity
- Technology landscape overview
- Phase-based roadmap implications
- Confidence assessment and gaps
**Read this first to understand the domain.**
### 2. **PITFALLS.md** (Implementation guidance)
Comprehensive catalog of 13 major pitfalls organized by severity:
- **Critical (v1 must fix):** Barcode mismatches, timezone bugs, sync conflicts, setup complexity
- **Moderate (v1.5 required):** Notification fatigue, offline sync, search UX, permissions
- **Minor (v2 deferred):** API costs, performance, predictions, community spam
Each pitfall includes:
- What goes wrong (with examples from real apps)
- Why it happens
- Consequences
- Prevention strategies (specific, actionable)
- Phase mapping (v1/v1.5/v2)
- Detection (warning signs to watch for)
**Read this before starting architecture work.**
### 3. **STACK.md** (Technology decisions)
Recommended technology stack with rationale:
- Free barcode & nutrition APIs (USDA FoodData Central, Open Food Facts) - see the lookup sketch below
- Multi-user sync approach (optimistic locking)
- Database tech (PostgreSQL/SQLite with UTC timestamps)
- Notification strategy (FCM/APNs with scheduling)
**Use this for technology selection and justification.**
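To make the barcode API recommendation concrete, here is a minimal lookup sketch against the Open Food Facts public product endpoint. The endpoint path and field names follow the v0 API as commonly documented, but verify them against the current Open Food Facts docs before depending on them:
```typescript
// Minimal Open Food Facts lookup sketch (endpoint and field names per the
// public v0 API; confirm against current docs before relying on this).
interface ProductSummary {
  barcode: string;
  name?: string;
  brand?: string;
}

async function lookupBarcode(barcode: string): Promise<ProductSummary | null> {
  const res = await fetch(
    `https://world.openfoodfacts.org/api/v0/product/${encodeURIComponent(barcode)}.json`,
    { headers: { "User-Agent": "Sage/0.1 (research prototype)" } },
  );
  if (!res.ok) return null;

  const body = await res.json();
  // status === 1 means the product exists in the community database.
  if (body.status !== 1 || !body.product) return null;

  return {
    barcode,
    name: body.product.product_name || undefined,
    brand: body.product.brands || undefined,
  };
}
```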
### 4. **ARCHITECTURE.md** (System design patterns)
System structure, component boundaries, and patterns:
- Component responsibilities and communication
- Data flow diagrams
- Patterns to follow (optimistic locking, CRDTs for offline) - see the sketch below
- Anti-patterns to avoid
- Scalability considerations
**Reference this during v1 architecture design.**
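As a concrete illustration of the optimistic-locking pattern listed above, here is a minimal version-column sketch. It assumes PostgreSQL accessed through the node-postgres (`pg`) client, and the table and column names (`inventory_items`, `version`) are placeholders; ARCHITECTURE.md remains the authoritative write-up.
```typescript
// Optimistic locking sketch: the UPDATE only applies when the row still has
// the version the client last read; zero affected rows means another
// household member changed the item first.
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from PG* env vars

async function updateQuantity(
  itemId: string,
  expectedVersion: number,
  newQuantity: number,
): Promise<"updated" | "conflict"> {
  const result = await pool.query(
    `UPDATE inventory_items
        SET quantity = $1, version = version + 1, updated_at = now()
      WHERE id = $2 AND version = $3`,
    [newQuantity, itemId, expectedVersion],
  );
  // rowCount === 0 means the version moved on: surface a conflict to the UI
  // (reload the item, let the user re-apply or merge their change).
  return result.rowCount === 0 ? "conflict" : "updated";
}
```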
### 5. **FEATURES.md** (What to build)
Feature landscape organized by category:
- Table stakes (users expect these)
- Differentiators (valued but not expected)
- Anti-features (explicitly don't build)
- MVP recommendations
- Feature dependencies
**Use this to scope phases and prioritize work.**
### 6. **SUMMARY.md** (Quick reference)
Condensed summary of all research with key takeaways.
**Use as a checklist before starting development.**
---
## Key Takeaways for Roadmap
### v1 MVP (6-8 weeks) - Foundation Phase
**Must address these 5 pitfalls or the app won't work for multi-user households:**
1. **Barcode Data Mismatches** → Fuzzy matching + deduplication (see the sketch after this list)
2. **Timezone Bugs** → UTC throughout (non-negotiable)
3. **Sync Conflicts** → Optimistic locking with version numbers
4. **Setup Complexity** → 3-field MVP onboarding only
5. **Notification Fatigue** → Snooze + configurable alerts
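For pitfall 1, the prevention boils down to normalizing product names and comparing them fuzzily before creating a new inventory entry. A minimal illustrative sketch in TypeScript; the normalization rules and the 0.8 similarity threshold are placeholders to tune against real barcode data:
```typescript
// Illustrative deduplication: normalize names, then compare with Levenshtein
// distance. Treat two entries as the same product above a similarity threshold.
function normalizeName(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^\p{L}\p{N} ]/gu, " ") // strip punctuation
    .replace(/\s+/g, " ")
    .trim();
}

function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i, ...Array(b.length).fill(0)]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost);
    }
  }
  return dp[a.length][b.length];
}

function isLikelyDuplicate(nameA: string, nameB: string, threshold = 0.8): boolean {
  const a = normalizeName(nameA);
  const b = normalizeName(nameB);
  if (a === b) return true;
  const distance = levenshtein(a, b);
  const similarity = 1 - distance / Math.max(a.length, b.length);
  return similarity >= threshold;
}
```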
### v1.5 (4-6 weeks) - Stabilization Phase
**After MVP feedback, smooth out rough edges:**
- Offline sync with persistent queue (see the queue sketch after this list)
- Barcode scanner error handling
- Advanced search (FTS + fuzzy matching)
- Permission model with audit logging
- Performance optimization
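For the offline sync item, the core mechanism is a persistent queue of pending mutations that survives app restarts and is replayed in order once connectivity returns. A minimal sketch, with the storage backend left abstract (SQLite, IndexedDB, a file) and idempotent replay assumed on the server side:
```typescript
// Minimal persistent mutation queue for offline edits. pushMutation persists
// before any network attempt so nothing is lost if the app dies offline.
interface Mutation {
  id: string;        // client-generated, used for idempotent replay
  table: string;
  payload: unknown;
  createdAt: string; // ISO-8601 UTC
}

interface QueueStorage {
  append(m: Mutation): Promise<void>;
  peekAll(): Promise<Mutation[]>;
  remove(id: string): Promise<void>;
}

async function pushMutation(storage: QueueStorage, m: Mutation): Promise<void> {
  await storage.append(m); // persist first, send later
}

async function flushQueue(
  storage: QueueStorage,
  send: (m: Mutation) => Promise<boolean>, // true = server accepted (or already applied)
): Promise<void> {
  for (const mutation of await storage.peekAll()) {
    const accepted = await send(mutation);
    if (!accepted) break;        // stop on first failure; retry later, in order
    await storage.remove(mutation.id);
  }
}
```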
### v2+ (3-6 months) - Growth Phase
**Only after v1 is stable and users are happy:**
- Community features (sales DB, crowdsourced corrections)
- ML predictions (with safeguards)
- Multi-location households
- Advanced insights
---
## Critical Success Factors
**For Sage to succeed where other apps failed:**
1. **Ship with UTC timezone handling from day one** - Not a refactor; foundational (see the sketch after this list)
2. **Make multi-user work in v1** - This is the core value prop; don't defer
3. **Ruthlessly simplify onboarding** - 3 fields maximum. Measure dropout rates.
4. **Don't over-predict purchases** - If you do AI, wait until v2 and start simple
5. **Expect barcode data quality issues** - Fuzzy matching and deduplication aren't optional
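For factor 1, "UTC from day one" means timestamps are stored as UTC instants and converted to a household member's IANA timezone only at display time. A minimal TypeScript sketch of that boundary; the function names and day-level comparison are illustrative, not prescribed by this research:
```typescript
// Store expiration timestamps in UTC; convert only when rendering, using the
// viewer's IANA timezone, so "expires today" means the same thing on every device.
function formatExpiry(expiresAtUtc: string, timeZone: string): string {
  return new Intl.DateTimeFormat("en-US", {
    timeZone,
    year: "numeric",
    month: "short",
    day: "numeric",
  }).format(new Date(expiresAtUtc));
}

function calendarDayInZone(d: Date, timeZone: string): string {
  // en-CA yields YYYY-MM-DD, convenient for day-level comparison.
  return new Intl.DateTimeFormat("en-CA", { timeZone }).format(d);
}

function daysUntilExpiry(expiresAtUtc: string, timeZone: string, now = new Date()): number {
  const msPerDay = 24 * 60 * 60 * 1000;
  const today = new Date(calendarDayInZone(now, timeZone) + "T00:00:00Z").getTime();
  const expiry = new Date(calendarDayInZone(new Date(expiresAtUtc), timeZone) + "T00:00:00Z").getTime();
  return Math.round((expiry - today) / msPerDay);
}
```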
---
## Confidence Levels
| Area | Level | Why |
|------|-------|-----|
| Critical pitfalls (barcode, timezone, sync) | **HIGH** | Confirmed by multiple app reviews, technical docs, real-world examples |
| Secondary pitfalls (offline, search, notifications) | **MEDIUM-HIGH** | Ecosystem research + best practices; fewer direct app examples |
| Tech stack recommendations | **MEDIUM-HIGH** | APIs verified; alternative comparisons done |
| Feature landscape | **MEDIUM** | Inferred from app reviews and competitor analysis |
| Multi-user sync patterns | **MEDIUM** | General distributed systems knowledge; fewer food-app-specific examples |
---
## What This Research Covers
### Breadth
- 13 major pitfalls organized by severity and phase
- 4 technology categories (barcode APIs, sync, notifications, database)
- 5 secondary concern areas (permissions, search, offline, prediction, community)
- 10+ real-world app examples with specific user complaints
### What It Doesn't Cover (Phase-Specific Research Needed Later)
1. **USDA FoodData API specifics** - Rate limits, query patterns, coverage gaps
2. **SQLite multi-user locking details** - If you go self-hosted, deep dive needed
3. **Mobile barcode scanning accuracy** - Need lab testing of ML Kit vs AVFoundation
4. **Notification UX specifics** - iOS/Android snooze/scheduling implementation
5. **Household permission models** - Need to research 5-10 successful apps' UX
These should be addressed in phase-specific research (not critical for roadmap, but needed before implementation).
---
## How to Use This Research
### For Roadmap Creation
1. Read **RESEARCH_SUMMARY.md** for domain overview
2. Review **PITFALLS.md** phase mapping to structure phases
3. Reference **FEATURES.md** to scope each phase
4. Use **STACK.md** for technology justification
### For Architecture Design
1. Read **ARCHITECTURE.md** for system patterns
2. Reference **PITFALLS.md** prevention strategies
3. Check **STACK.md** for tech choices
4. Review **FEATURES.md** for component boundaries
### For Phase Execution
1. Reference **PITFALLS.md** detection signs to watch for
2. Check **STACK.md** for implementation guidance
3. Use **FEATURES.md** to stay in scope
4. Review **ARCHITECTURE.md** for design patterns
### Before User Testing
1. Read **PITFALLS.md** to create test checklist
2. Check **FEATURES.md** table stakes (all should be testable in v1)
3. Verify **STACK.md** choices align with MVP scope
---
## Data Quality Notes
### What We Verified
- USDA FoodData Central API coverage and free access
- Open Food Facts barcode support and community data model
- Multiple app store reviews (My Pantry Tracker, NoWaste, FoodShiner, Pantry Check)
- Technical documentation on timezone handling, race conditions, sync patterns
- Real-world case studies (Sylius inventory, Atlassian Confluence, GitHub JWT)
### What We Didn't Lab Test
- Actual barcode scanning accuracy with real cameras
- USDA API performance with 10K+ item lookups
- SQLite locking behavior under concurrent load
- iOS/Android notification scheduling specifics
These should be tested during Phase 1 (Architecture) and Phase 2 (Implementation).
---
## Sources Summary
**Total sources reviewed:** 40+
**Source categories:**
- App store reviews (8 apps)
- Technical documentation (USDA, Open Food Facts, barcode APIs)
- Distributed systems papers and guides
- User research on feature adoption
- Case studies and post-mortems
- Official API documentation
**Confidence indicators:**
- Findings marked **HIGH** have 3+ independent verifications
- Findings marked **MEDIUM-HIGH** rest on official documentation plus 2+ examples
- Findings marked **MEDIUM** rest on 2-3 convergent sources
- Findings marked **LOW** rest on a single source and are flagged for validation
---
## Next Steps
**Before starting development:**
1. Validate UTC timestamp handling approach with your chosen database
2. Test USDA FoodData API coverage on real food products (see the probe sketch after this list)
3. Research 5-10 successful multi-user apps' permission models
4. Create test plan based on **PITFALLS.md** detection signs
5. Scope v1 features against **FEATURES.md** table stakes
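For step 2, a quick coverage probe can be scripted against the FoodData Central search endpoint. A sketch in TypeScript; the `/fdc/v1/foods/search` path, `pageSize` parameter, and response fields follow the published API, but confirm against the current USDA docs and use a real api.data.gov key rather than `DEMO_KEY`:
```typescript
// Quick coverage probe against USDA FoodData Central: run a handful of real
// product queries and print hit counts plus the top match description.
async function probeCoverage(queries: string[], apiKey = "DEMO_KEY"): Promise<void> {
  for (const query of queries) {
    const url = new URL("https://api.nal.usda.gov/fdc/v1/foods/search");
    url.searchParams.set("api_key", apiKey);
    url.searchParams.set("query", query);
    url.searchParams.set("pageSize", "3");

    const res = await fetch(url);
    if (!res.ok) {
      console.log(`${query}: HTTP ${res.status}`);
      continue;
    }
    const body = await res.json();
    const top = body.foods?.[0];
    console.log(`${query}: ${body.totalHits ?? 0} hits`, top ? `top: ${top.description}` : "");
  }
}

// Example: probeCoverage(["cheddar cheese", "oat milk", "kimchi"]);
```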
**During development:**
- Reference **PITFALLS.md** for each major feature
- Track **PITFALLS.md** detection signs as early warning system
- Run timezone tests across device timezones weekly (see the check sketch after this list)
- Monitor sync conflict rates in staging environment
- Measure onboarding dropout against target
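For the weekly timezone tests, two cheap invariants are worth automating first: stored UTC instants round-trip exactly, and day-level grouping depends only on an explicitly passed IANA zone, never on device-local time. An illustrative check, not tied to any test framework; dates and expected values are examples:
```typescript
// Tiny timezone regression check (illustrative, framework-free).
function calendarDay(d: Date, timeZone: string): string {
  // en-CA formats as YYYY-MM-DD, convenient for comparisons.
  return new Intl.DateTimeFormat("en-CA", { timeZone }).format(d);
}

function runTimezoneChecks(): void {
  // 1. UTC ISO-8601 strings must round-trip without drifting.
  const stored = "2026-03-01T02:00:00.000Z";
  console.assert(new Date(stored).toISOString() === stored, "UTC round-trip drifted");

  // 2. The same instant lands on different calendar days in different zones;
  //    the app must always pass the household member's zone explicitly.
  const instant = new Date(stored);
  console.assert(calendarDay(instant, "UTC") === "2026-03-01", "unexpected UTC day");
  console.assert(calendarDay(instant, "America/Los_Angeles") === "2026-02-28", "unexpected LA day");
}

runTimezoneChecks();
```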
---
## Questions or Gaps?
This research answers "What mistakes do food inventory apps commonly make?"
If you need different research later, consider:
- **Deeper API research:** How does USDA FoodData coverage compare to Open Food Facts?
- **Competitive analysis:** How do top apps (Out of Milk, Your Food, NoWaste) handle timezone/sync?
- **UX research:** What do users actually want from multi-user households?
- **Performance profiling:** How fast should search/sync be on average phones?
These are phase-specific questions; save them for when you have a v1 to test with.
---
## Document Structure Reference
### Each Pitfall in PITFALLS.md Follows This Format:
```
### Pitfall N: [Name]
**What goes wrong:** [Observable symptoms]
**Why it happens:** [Root causes]
**Consequences:** [Impact if not fixed]
**Real-world example:** [Specific app or case study]
**Prevention:** [Actionable strategies]
**Phase mapping:** [v1/v1.5/v2 placement]
**Detection:** [Warning signs to watch for]
```
### Each Feature in FEATURES.md Follows This Format:
```
| Feature | Why Expected/Valuable | Complexity | Notes |
|---------|-----------------------|------------|-------|
| [name]  | [reason]              | Low/Med/High | [dependencies] |
```
---
**Research completed by:** Claude Code (GSD Researcher)
**Date:** January 27, 2026
**Status:** Complete and ready for roadmap creation