Compare commits

3 commits: 5dc7b98abf ... 53fb8544fe

| SHA1 | Author | Date |
|------|--------|------|
| 53fb8544fe | | |
| 3861b86287 | | |
| 3f41adff75 | | |

**.gitignore** (vendored, 64 lines changed)
```
@@ -1,18 +1,60 @@
# Python
__pycache__/
*.py[cod]

# venv
.venv/
venv/
env/
ENV/
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# tooling
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store

# Testing
.pytest_cache/
.ruff_cache/
.coverage
htmlcov/

# Project-specific
config.yaml
logs/
*.log
models/
cache/
.planning/STATE.md
.planning/PHASE-*-PLAN.md

# Discord
.env
.discord_token

# Android
android/app/build/
android/.gradle/
android/local.properties

# OS
.DS_Store
Thumbs.db

# generated
.planning/CONTEXTPACK.md
*.tmp
*.bak
```
**.planning/MCP.md** (new file, 220 lines)

@@ -0,0 +1,220 @@

# Available Tools & MCP Integration

This document lists all available tools and MCP (Model Context Protocol) servers that Mai development can leverage.

## Hugging Face Hub Integration

**Status**: Authenticated as `mystiatech`

### Tools Available

#### Model Discovery
- `mcp__claude_ai_Hugging_Face__model_search` — Search ML models by task, author, library, trending
- `mcp__claude_ai_Hugging_Face__hub_repo_details` — Get detailed info on any model, dataset, or space

**Use Cases:**
- Phase 1: Discover quantized models for local inference (Mistral, Llama, etc.)
- Phase 12: Find audio/voice models for visualization
- Phase 13: Find avatar/animation models (VRoid-compatible options)
- Phase 14: Research Android-compatible model formats
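The MCP tool above maps onto the Hub's public search API, so outside of Claude the same lookup can be done with the `huggingface_hub` Python library. A minimal sketch, where the query, sort order, and result limit are illustrative choices rather than project settings:

```python
# Sketch: model discovery with the huggingface_hub client library
# (pip install huggingface_hub). The import is guarded so the helper
# functions stay usable even when the library is absent.
try:
    from huggingface_hub import list_models
except ImportError:
    list_models = None

def top_quantized_models(limit=5):
    """Phase 1 example: most-downloaded GGUF (quantized) models on the Hub."""
    if list_models is None:
        return []
    models = list_models(filter="gguf", sort="downloads", direction=-1, limit=limit)
    return [m.id for m in models]

def summarize(models):
    """Reduce raw model records to (id, downloads) pairs for a report."""
    return [(m["id"], m.get("downloads", 0)) for m in models]

# Offline-friendly demo on a hand-written record:
print(summarize([{"id": "TheBloke/Mistral-7B-v0.1-GGUF", "downloads": 12345}]))
```

Calling `top_quantized_models()` requires network access to the Hub; `summarize` works on any list of dict records.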

#### Dataset Discovery
- `mcp__claude_ai_Hugging_Face__dataset_search` — Find datasets by task, author, tags, trending
- Search filters: language, size, task categories

**Use Cases:**
- Phase 4: Training data research for memory compression
- Phase 5: Conversation quality datasets
- Phase 12: Audio visualization datasets

#### Research Papers
- `mcp__claude_ai_Hugging_Face__paper_search` — Search ML research papers with abstracts

**Use Cases:**
- Phase 2: Safety and sandboxing research papers
- Phase 4: Memory system and RAG papers
- Phase 5: Conversational AI and reasoning papers
- Phase 7: Self-improvement and code generation papers

#### Spaces & Interactive Models
- `mcp__claude_ai_Hugging_Face__space_search` — Discover Hugging Face Spaces (demos)
- `mcp__claude_ai_Hugging_Face__dynamic_space` — Run interactive tasks (Image Gen, OCR, TTS, etc.)

**Use Cases:**
- Phase 12: Voice/audio visualization demos
- Phase 13: Avatar generation or manipulation
- Phase 14: Android UI pattern research

#### Documentation
- `mcp__claude_ai_Hugging_Face__hf_doc_search` — Search HF docs and guides
- `mcp__claude_ai_Hugging_Face__hf_doc_fetch` — Fetch full documentation pages

**Use Cases:**
- Phase 1: LMStudio/Ollama integration documentation
- Phase 5: Transformers library best practices
- Phase 14: Mobile inference frameworks (ONNX Runtime, TensorFlow Lite)

#### Account Info
- `mcp__claude_ai_Hugging_Face__hf_whoami` — Get authenticated user info

## Web Research

### Tools Available
- `WebSearch` — Search the web for current information (2026 context)
- `WebFetch` — Fetch and analyze specific URLs

**Use Cases:**
- Research current best practices in AI safety (Phase 2)
- Find Android development patterns (Phase 14)
- Discover voice visualization libraries (Phase 12)
- Research avatar systems (Phase 13)
- Find Discord bot best practices (Phase 10)

## Code & Repository Tools

### Tools Available
- `Bash` — Execute terminal commands (git, npm, python, etc.)
- `Glob` — Fast file pattern matching
- `Grep` — Ripgrep-based content search
- `Read` — Read file contents
- `Edit` — Edit files with string replacement
- `Write` — Create new files

**Use Cases:**
- All phases: Create and manage project structure
- All phases: Execute tests and build commands
- All phases: Manage git commits and history

## Claude Code (GSD) Workflow

### Orchestrators Available
- `/gsd:new-project` — Initialize project
- `/gsd:plan-phase N` — Create detailed phase plans
- `/gsd:execute-phase N` — Execute phase with atomic commits
- `/gsd:discuss-phase N` — Gather phase context
- `/gsd:verify-work` — User acceptance testing

### Specialized Agents
- `gsd-project-researcher` — Domain research (stack, features, architecture, pitfalls)
- `gsd-phase-researcher` — Phase-specific research
- `gsd-codebase-mapper` — Analyze and document existing code
- `gsd-planner` — Create executable phase plans
- `gsd-executor` — Execute plans with state management
- `gsd-verifier` — Verify deliverables match requirements
- `gsd-debugger` — Systematic debugging with checkpoints

## How to Use MCPs in Development

### In Phase Planning
When creating `/gsd:plan-phase N`:
- Researchers can use Hugging Face tools to discover libraries and models
- Use WebSearch for current best practices
- Query papers for architectural patterns

### In Phase Execution
When running `/gsd:execute-phase N`:
- Download models from Hugging Face
- Use WebFetch for documentation
- Run Spaces for prototyping UI patterns

### Example Usage by Phase

**Phase 1: Model Interface**
```
- mcp__claude_ai_Hugging_Face__model_search
  Query: "quantized models for local inference"
  → Find Mistral, Llama, TinyLlama options

- mcp__claude_ai_Hugging_Face__hf_doc_fetch
  → Get Hugging Face Transformers documentation

- WebSearch
  → Latest LMStudio/Ollama integration patterns
```

**Phase 2: Safety System**
```
- mcp__claude_ai_Hugging_Face__paper_search
  Query: "code sandboxing, safety verification"
  → Find relevant research papers

- WebSearch
  → Docker security best practices
```

**Phase 5: Conversation Engine**
```
- mcp__claude_ai_Hugging_Face__dataset_search
  Query: "conversation quality, multi-turn dialogue"

- mcp__claude_ai_Hugging_Face__paper_search
  Query: "conversational AI, context management"
```

**Phase 12: Voice Visualization**
```
- mcp__claude_ai_Hugging_Face__space_search
  Query: "audio visualization, waveform display"
  → Find working demos

- mcp__claude_ai_Hugging_Face__model_search
  Query: "speech recognition, audio models"
```

**Phase 13: Desktop Avatar**
```
- mcp__claude_ai_Hugging_Face__space_search
  Query: "avatar generation, VRoid, character animation"

- WebSearch
  → VRoid SDK documentation
  → Avatar animation libraries
```

**Phase 14: Android App**
```
- mcp__claude_ai_Hugging_Face__model_search
  Query: "mobile inference, quantized models, ONNX"

- WebSearch
  → Kotlin ML Kit documentation
  → TensorFlow Lite best practices
```

## Configuration

Add to `.planning/config.json` to enable MCP usage:

```json
{
  "mcp": {
    "huggingface": {
      "enabled": true,
      "authenticated_user": "mystiatech",
      "default_result_limit": 10
    },
    "web_search": {
      "enabled": true,
      "domain_restrictions": []
    },
    "code_tools": {
      "enabled": true
    }
  }
}
```
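A minimal sketch of how tooling might consume these flags before invoking an MCP-backed research step. The key names follow the JSON above; the helper names themselves are hypothetical:

```python
import json
from pathlib import Path

def mcp_enabled(config: dict, server: str) -> bool:
    """True when the named MCP block exists and has "enabled": true."""
    return bool(config.get("mcp", {}).get(server, {}).get("enabled", False))

def load_planning_config(path: str = ".planning/config.json") -> dict:
    """Read the planning config from disk."""
    return json.loads(Path(path).read_text(encoding="utf-8"))

# Demo on an inline copy rather than the file on disk:
config = json.loads(
    '{"mcp": {"huggingface": {"enabled": true}, "web_search": {"enabled": false}}}'
)
print(mcp_enabled(config, "huggingface"), mcp_enabled(config, "web_search"))
```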

## Research Output Format

When researchers use MCPs, they produce:
- `.planning/research/STACK.md` — Technologies and libraries
- `.planning/research/FEATURES.md` — Capabilities and patterns
- `.planning/research/ARCHITECTURE.md` — System design patterns
- `.planning/research/PITFALLS.md` — Common mistakes and solutions

These inform phase planning and implementation.

---

**Updated: 2026-01-26**
**Next Review: When new MCP servers become available**
**.planning/PROGRESS.md** (new file, 187 lines)

@@ -0,0 +1,187 @@

# Mai Development Progress

**Last Updated**: 2026-01-26
**Status**: Fresh Slate - Roadmap Under Construction

## Project Description

Mai is an autonomous conversational AI companion that runs locally-first and can improve her own code. She's not a rigid chatbot but a genuinely intelligent collaborator with a distinct personality, long-term memory, and real agency. Mai learns from your interactions, analyzes her own performance, and proposes improvements: non-breaking changes auto-apply, while breaking changes wait for your review.

**Key differentiators:**
- **Real Collaborator**: Mai actively contributes ideas, has boundaries, and can refuse requests
- **Learns & Evolves**: Conversation patterns inform personality layers; she remembers you
- **Completely Local**: All inference, memory, and decision-making on your device—no cloud, no tracking
- **Visual Presence**: Desktop avatar (image or VRoid) with real-time voice visualization
- **Cross-Device**: Works on desktop and Android with seamless synchronization
- **Self-Improving**: Analyzes her own code, generates improvements, and gets your approval before applying

**Core Value**: Mai is a real collaborator, not a tool. She learns from you, improves herself, has boundaries and opinions, and actually becomes more *her* over time.

---

## Phase Breakdown

### Status Summary
- **Total Phases**: 15
- **Completed**: 0
- **In Progress**: 0
- **Planned**: 15
- **Requirements Mapped**: 99/99 (100%)

### Phase Details

| # | Phase | Goal | Requirements | Status |
|---|-------|------|--------------|--------|
| 1 | Model Interface | Connect to local models and intelligently switch | MODELS (7) | 🔄 Planning |
| 2 | Safety System | Sandbox code execution and implement review workflow | SAFETY (8) | 🔄 Planning |
| 3 | Resource Management | Monitor CPU/RAM/GPU and adapt model selection | RESOURCES (6) | 🔄 Planning |
| 4 | Memory System | Persistent conversation storage with vector search | MEMORY (8) | 🔄 Planning |
| 5 | Conversation Engine | Multi-turn dialogue with reasoning and context | CONVERSATION (9) | 🔄 Planning |
| 6 | CLI Interface | Terminal-based chat with history and commands | CLI (8) | 🔄 Planning |
| 7 | Self-Improvement | Code analysis, change generation, and auto-apply | SELFMOD (10) | 🔄 Planning |
| 8 | Approval Workflow | User approval via CLI and Dashboard for changes | APPROVAL (9) | 🔄 Planning |
| 9 | Personality System | Core values, behavior configuration, learned layers | PERSONALITY (8) | 🔄 Planning |
| 10 | Discord Interface | Bot integration with DM and approval reactions | DISCORD (10) | 🔄 Planning |
| 11 | Offline Operations | Full local-only functionality with graceful degradation | OFFLINE (7) | 🔄 Planning |
| 12 | Voice Visualization | Real-time audio waveform and frequency display | VISUAL (5) | 🔄 Planning |
| 13 | Desktop Avatar | Visual presence with image or VRoid model support | AVATAR (6) | 🔄 Planning |
| 14 | Android App | Native mobile app with local inference and UI | ANDROID (10) | 🔄 Planning |
| 15 | Device Sync | Synchronization of state and memory between devices | SYNC (6) | 🔄 Planning |

---

## Current Focus

**Phase**: Infrastructure & Planning
**Work**: Establishing project structure and execution approach

### What's Happening Now
- [x] Codebase mapping complete (7 architectural documents)
- [x] Project vision and core value defined
- [x] Requirements inventory (99 items across 15 phases)
- [x] README with comprehensive setup and features
- [ ] Roadmap creation (distributing requirements across phases)
- [ ] First phase planning (Model Interface)

### Next Steps
1. Create detailed ROADMAP.md with phase dependencies
2. Plan Phase 1: Model Interface & Switching
3. Begin implementation of LMStudio/Ollama integration
4. Set up development infrastructure and CI/CD

---

## Recent Milestones

### 🎯 Project Initialization (2026-01-26)
- Codebase mapping with 7 structured documents (STACK, ARCHITECTURE, STRUCTURE, CONVENTIONS, TESTING, INTEGRATIONS, CONCERNS)
- Deep questioning and context gathering completed
- PROJECT.md created with core value and vision
- REQUIREMENTS.md with 99 fully mapped requirements
- Feature additions: Android app, voice visualizer, desktop avatar included in v1
- README.md with comprehensive setup and architecture documentation
- Progress report framework for regular updates

### 📋 Planning Foundation
- All v1 requirements categorized into logical phases
- Cross-device synchronization included as core feature
- Safety and self-improvement as phase 2 priority
- Offline capability planned as phase 11 (ensures all features work locally first)

---

## Development Methodology

**All phases are executed through Claude Code** (`/gsd` workflow), which provides:
- Automated phase planning with task decomposition
- Code generation with test creation
- Atomic git commits with clear messages
- Multi-agent verification (research, plan checking, execution verification)
- Parallel task execution where applicable
- State tracking and checkpoint recovery

Each phase follows the standard GSD pattern:
1. `/gsd:plan-phase N` → Creates detailed PHASE-N-PLAN.md
2. `/gsd:execute-phase N` → Implements with automatic test coverage
3. Verification and state updates

This ensures **consistent quality**, **full test coverage**, and **clean git history** across all 15 phases.

## Technical Highlights

### Stack
- **Primary**: Python 3.10+ (core/desktop) with `.venv` virtual environment
- **Mobile**: Kotlin (Android)
- **UI**: React/TypeScript (eventual web)
- **Model Interface**: LMStudio/Ollama
- **Storage**: SQLite (local)
- **IPC/Sync**: Local network (no server)
- **Development**: Claude Code (OpenCode) for all implementation
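The SQLite storage choice above can be sketched in a few lines of stdlib Python; the table schema and function names here are illustrative, not the project's actual design:

```python
# Minimal local conversation store on sqlite3 (Python stdlib).
import sqlite3
import time

def open_store(path=":memory:"):
    """Open (or create) the message store; ':memory:' is for demos."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS messages (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,
        role    TEXT NOT NULL,   -- 'user' or 'mai'
        content TEXT NOT NULL,
        created REAL NOT NULL)""")
    return db

def remember(db, role, content):
    db.execute("INSERT INTO messages (role, content, created) VALUES (?, ?, ?)",
               (role, content, time.time()))
    db.commit()

def recall(db, limit=10):
    """Most recent messages, returned oldest-first for prompt assembly."""
    rows = db.execute("SELECT role, content FROM messages ORDER BY id DESC LIMIT ?",
                      (limit,)).fetchall()
    return list(reversed(rows))

db = open_store()
remember(db, "user", "Remember that I prefer short answers.")
remember(db, "mai", "Noted.")
print(recall(db))
```

Vector search for semantic recall (phase 4) would sit on top of a store like this, keyed by the same message ids.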

### Key Architecture Decisions

| Decision | Rationale | Status |
|----------|-----------|--------|
| Local-first, no cloud | Privacy and independence from external services | ✅ Approved |
| Second-agent review for all changes | Safety without blocking innovation | ✅ Approved |
| Personality as code + learned layers | Unshakeable core + authentic growth | ✅ Approved |
| Offline-first design (phase 11 early) | Ensure full functionality before online features | ✅ Approved |
| Android in v1 | Mobile-first future vision | ✅ Approved |
| Cross-device sync without server | Privacy-preserving multi-device support | ✅ Approved |

---

## Known Challenges & Solutions

| Challenge | Current Approach |
|-----------|------------------|
| Memory efficiency at scale | Auto-compressing conversation history with pattern distillation (phase 4) |
| Model switching without context loss | Standardized context format + token budgeting (phase 1) |
| Personality consistency across changes | Personality as code + test suite for behavior (phases 7-9) |
| Safety vs. autonomy balance | Dual review system: agent checks breaking changes, user approves (phases 2/8) |
| Android model inference | Quantized models + resource scaling (phase 14) |
| Cross-device sync without server | P2P sync on local network + conflict resolution (phase 15) |
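The token-budgeting approach in the table can be sketched as follows. The 4-characters-per-token estimate is a rough stand-in for a real tokenizer, and the function names are illustrative:

```python
# Trim the oldest messages until the conversation fits a model's context budget.
def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def fit_to_budget(messages, budget_tokens):
    """Keep the newest messages whose combined estimate fits the budget."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk newest-first
        cost = estimate_tokens(msg["content"])
        if used + cost > budget_tokens:
            break                           # everything older is dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order

history = [{"role": "user", "content": "x" * 400},
           {"role": "mai",  "content": "y" * 40},
           {"role": "user", "content": "z" * 40}]
print(fit_to_budget(history, budget_tokens=25))
```

When switching to a smaller model, the same history can be re-fit against that model's smaller budget before the prompt is rebuilt.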

---

## How to Follow Progress

### Discord Forum
Regular updates posted in the `#mai-progress` forum channel with:
- Weekly milestone summaries
- Blocker alerts if any
- Community feedback requests

### Git & Issues
- All work tracked in git with atomic commits
- Phase plans in `.planning/PHASE-N-PLAN.md`
- Progress in git commit history

### Local Development
- Run `make progress` to see current status
- Check `.planning/STATE.md` for live project state
- Review `.planning/ROADMAP.md` for phase dependencies

---

## Get Involved

### Providing Feedback
- React to forum posts with 👍 / 👎 / 🎯
- Reply with thoughts on design decisions
- Suggest priorities for upcoming phases

### Contributing
- Development contributions coming as phases execute
- Code review and testing needed starting Phase 1
- Security audit important for self-improvement system

### Questions?
- Ask in the Discord thread
- Reply to this forum post with questions
- Issues/discussions: https://github.com/yourusername/mai

---

**Mai's development is transparent and community-informed. Updates will continue as phases progress.**

Next Update: After Phase 1 Planning Complete (target: next week)
```diff
@@ -2,7 +2,7 @@

 ## What This Is

-Mai is an autonomous conversational AI agent framework that runs locally-first and can improve her own code. She's a genuinely intelligent companion — not a rigid chatbot — with a distinct personality, long-term memory, and agency. She analyzes her own performance, proposes improvements for your review, and auto-applies non-breaking changes. She can run offline, across devices (laptop to Android), and switch between available models intelligently.
+Mai is an autonomous conversational AI agent framework that runs locally-first and can improve her own code. She's a genuinely intelligent companion — not a rigid chatbot — with a distinct personality, long-term memory, and agency. She analyzes her own performance, proposes improvements for your review, and auto-applies non-breaking changes. Mai has a visual presence through a desktop avatar (image or VRoid model), real-time voice visualization for conversations, and a native Android app that syncs with desktop instances while working completely offline.

 ## Core Value
```
```diff
@@ -65,6 +65,26 @@ Mai is a real collaborator, not a tool. She learns from you, improves herself, h
 - [ ] Message queuing when offline
 - [ ] Graceful degradation (smaller models if resources tight)

+**Voice Visualization**
+- [ ] Real-time visualization of audio input during voice conversations
+- [ ] Low-latency waveform/frequency display
+- [ ] Visual feedback for speech detection and processing
+- [ ] Works on both desktop and Android
+
+**Desktop Avatar**
+- [ ] Visual representation using static image or VRoid model
+- [ ] Avatar expressions respond to conversation context (mood/state)
+- [ ] Runs efficiently on RTX3060 and mobile devices
+- [ ] Customizable appearance (multiple models or user-provided image)
+
+**Android App**
+- [ ] Native Android app with local model inference
+- [ ] Standalone operation (works without desktop instance)
+- [ ] Syncs conversation history and memory with desktop
+- [ ] Voice input/output with low-latency processing
+- [ ] Avatar and visualizer integrated in mobile UI
+- [ ] Efficient resource management for battery and CPU
+
 **Dashboard ("Brain Interface")**
 - [ ] View Mai's current state (personality, memory size, mood/health)
 - [ ] Approve/reject pending code changes with reviewer feedback
```
```diff
@@ -85,15 +105,15 @@ Mai is a real collaborator, not a tool. She learns from you, improves herself, h
 - **Task automation (v1)** — Mai can discuss tasks but won't execute arbitrary workflows yet (v2)
 - **Server monitoring** — Not included in v1 scope (v2)
 - **Finetuning** — Mai improves through code changes and learned behaviors, not model tuning
-- **Cloud sync** — Intentionally local-first; cloud sync deferred to later if needed
+- **Cloud sync** — Intentionally local-first; cloud backup deferred to later if needed
 - **Custom model training** — v1 uses available models; custom training is v2+
 - **Mobile app** — v1 is CLI/Discord; native Android is future (baremetal eventual goal)
 - **Web interface** — v1 is CLI, Discord, and native apps (web UI is v2+)

 ## Context

 **Why this matters:** Current AI systems are static, sterile, and don't actually learn. Users have to explain context every time. Mai is different — she has continuity, personality, agency, and actually improves over time. Starting with a solid local framework means she can eventually run anywhere without cloud dependency.

-**Technical environment:** Python-based, local models via LMStudio, git for version control of her own code, Discord API for chat, lightweight local storage for memory. Eventually targeting bare metal on low-end devices.
+**Technical environment:** Python-based, local models via LMStudio/Ollama, git for version control, Discord API for chat, lightweight local storage for memory. Development leverages Hugging Face Hub for model/dataset discovery and research, WebSearch for current best practices. Eventually targeting bare metal on low-end devices.

 **User feedback theme:** Traditional chatbots feel rigid and repetitive. Mai should feel like talking to an actual person who gets better at understanding you.
```
```diff
@@ -101,12 +121,16 @@ Mai is a real collaborator, not a tool. She learns from you, improves herself, h

 ## Constraints

-- **Hardware baseline**: Must run on RTX3060; eventually Android (baremetal)
-- **Offline-first**: All core functionality works without internet
-- **Local models only**: No cloud APIs for core inference (LMStudio)
-- **Python stack**: Primary language for Mai's codebase
+- **Hardware baseline**: Must run on RTX3060 (desktop) and modern Android devices (2022+)
+- **Offline-first**: All core functionality works without internet on all platforms
+- **Local models only**: No cloud APIs for core inference (LMStudio/Ollama)
+- **Mixed stack**: Python (core/desktop), Kotlin (Android), React/TypeScript (UIs)
 - **Approval required**: No unguarded code execution; second-agent review + user approval on breaking changes
 - **Git tracked**: All of Mai's code changes version-controlled locally
+- **Sync consistency**: Desktop and Android instances maintain synchronized state without server
+- **OpenCode-driven**: All development phases executed through Claude Code (GSD workflow)
+- **Python venv**: `.venv` virtual environment for all Python dependencies
+- **MCP-enabled**: Leverages Hugging Face Hub, WebSearch, and code tools for research and implementation

 ## Key Decisions

@@ -118,4 +142,4 @@ Mai is a real collaborator, not a tool. She learns from you, improves herself, h
 | v1 is core systems only | Deliver solid foundation before adding task automation/monitoring | — Pending |

 ---
-*Last updated: 2026-01-24 after deep questioning*
+*Last updated: 2026-01-26 after adding Android, visualizer, and avatar to v1*
```
```diff
@@ -92,19 +92,20 @@

 **Out of scope for v1:**
 - Web interface
 - Mobile apps
 - Multi-user support
 - Cloud hosting
 - Enterprise features
 - Third-party integrations beyond Discord
 - Plugin system
 - API for external developers
 - Cloud sync/backup

 **Phase Boundary:**
-- **v1 Focus:** Personal AI assistant for individual use
+- **v1 Focus:** Personal AI assistant for desktop and Android with visual presence
 - **Local First:** All data stored locally, no cloud dependencies
 - **Privacy:** User data never leaves local system
 - **Simplicity:** Clear separation of concerns across phases
+- **Cross-device:** Sync between desktop and Android instances
+- **Visual:** Avatar and voice visualization for richer interaction

 ---
```
```diff
@@ -244,15 +245,58 @@
 | OFFLINE-06 | Phase 11 | Pending | |
 | OFFLINE-07 | Phase 11 | Pending | |

+### Voice Visualization (VISUAL)
+| Requirement | Phase | Status | Implementation Notes |
+|-------------|-------|--------|----------------------|
+| VISUAL-01 | Phase 12 | Pending | |
+| VISUAL-02 | Phase 12 | Pending | |
+| VISUAL-03 | Phase 12 | Pending | |
+| VISUAL-04 | Phase 12 | Pending | |
+| VISUAL-05 | Phase 12 | Pending | |
+
+### Desktop Avatar (AVATAR)
+| Requirement | Phase | Status | Implementation Notes |
+|-------------|-------|--------|----------------------|
+| AVATAR-01 | Phase 13 | Pending | |
+| AVATAR-02 | Phase 13 | Pending | |
+| AVATAR-03 | Phase 13 | Pending | |
+| AVATAR-04 | Phase 13 | Pending | |
+| AVATAR-05 | Phase 13 | Pending | |
+| AVATAR-06 | Phase 13 | Pending | |
+
+### Android App (ANDROID)
+| Requirement | Phase | Status | Implementation Notes |
+|-------------|-------|--------|----------------------|
+| ANDROID-01 | Phase 14 | Pending | |
+| ANDROID-02 | Phase 14 | Pending | |
+| ANDROID-03 | Phase 14 | Pending | |
+| ANDROID-04 | Phase 14 | Pending | |
+| ANDROID-05 | Phase 14 | Pending | |
+| ANDROID-06 | Phase 14 | Pending | |
+| ANDROID-07 | Phase 14 | Pending | |
+| ANDROID-08 | Phase 14 | Pending | |
+| ANDROID-09 | Phase 14 | Pending | |
+| ANDROID-10 | Phase 14 | Pending | |
+
+### Device Synchronization (SYNC)
+| Requirement | Phase | Status | Implementation Notes |
+|-------------|-------|--------|----------------------|
+| SYNC-01 | Phase 15 | Pending | |
+| SYNC-02 | Phase 15 | Pending | |
+| SYNC-03 | Phase 15 | Pending | |
+| SYNC-04 | Phase 15 | Pending | |
+| SYNC-05 | Phase 15 | Pending | |
+| SYNC-06 | Phase 15 | Pending | |

 ---

 ## Validation

-- Total v1 requirements: **74**
-- Mapped to phases: **74**
+- Total v1 requirements: **99** (74 core + 25 new features)
+- Mapped to phases: **99**
 - Unmapped: **0** ✓
-- Coverage: **10100%**
+- Coverage: **100%**

 ---
 *Requirements defined: 2026-01-24*
-*Phase 5 conversation engine completed: 2026-01-26*
+*Last updated: 2026-01-26 - reset to fresh slate with Android, visualizer, and avatar features*
```
```diff
@@ -8,5 +8,32 @@
     "research": true,
     "plan_check": true,
     "verifier": true
   },
+  "git": {
+    "auto_push": true,
+    "push_tags": true,
+    "remote": "master"
+  },
+  "mcp": {
+    "huggingface": {
+      "enabled": true,
+      "authenticated_user": "mystiatech",
+      "default_result_limit": 10,
+      "use_for": [
+        "model_discovery",
+        "dataset_research",
+        "paper_search",
+        "documentation_lookup"
+      ]
+    },
+    "web_research": {
+      "enabled": true,
+      "use_for": [
+        "current_practices",
+        "library_research",
+        "architecture_patterns",
+        "security_best_practices"
+      ]
+    }
+  }
 }
```
**README.md** (new file, 393 lines)

@@ -0,0 +1,393 @@

# Mai



A genuinely intelligent, autonomous AI companion that runs locally-first, learns from you, and improves her own code. Mai has a distinct personality, long-term memory, agency, and a visual presence through a desktop avatar and voice visualization. She works on desktop and Android with full offline capability and seamless synchronization between devices.

## What Makes Mai Different

- **Real Collaborator**: Mai actively collaborates rather than just responds. She has boundaries, opinions, and agency.
- **Learns & Improves**: Analyzes her own performance, proposes improvements, and auto-applies non-breaking changes.
- **Persistent Personality**: Core values remain unshakeable while personality layers adapt to your relationship style.
- **Completely Local**: All inference, memory, and decision-making happens on your device. No cloud dependencies.
- **Cross-Device**: Works on desktop and Android with synchronized state and conversation history.
- **Visual Presence**: Desktop avatar (image or VRoid model) with voice visualization for richer interaction.

## Core Features

### Model Interface & Switching
- Connects to local models via LMStudio/Ollama
- Auto-detects available models and intelligently switches based on task requirements
- Efficient context management with intelligent compression
- Supports multiple model sizes for resource-constrained environments
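As a sketch of the local-model connection: Ollama exposes an HTTP endpoint at `http://localhost:11434/api/generate` (and LMStudio offers an OpenAI-compatible server). The model name and prompt below are illustrative, and the network call only succeeds when Ollama is actually running locally:

```python
import json
import urllib.request

def build_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(model, prompt, host="http://localhost:11434"):
    """POST a prompt to a locally running Ollama server and return the text."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires `ollama serve` and a pulled model, e.g. `ollama pull mistral`.
    print(generate("mistral", "Say hello in five words."))
```

Model switching then reduces to choosing a different `model` argument for the same call.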

### Memory & Learning
- Stores conversation history locally with SQLite
- Recalls past conversations and learns patterns over time
- Memory self-compresses as it grows to maintain efficiency
- Long-term patterns distilled into personality layers
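The self-compression idea can be sketched as follows: old messages collapse into a single summary entry while recent ones stay verbatim. The truncating "summarizer" here is a toy stand-in for model-based pattern distillation:

```python
def compress_history(messages, keep_recent=4, max_summary_chars=200):
    """Collapse everything but the newest messages into one summary entry."""
    if len(messages) <= keep_recent:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Toy summarizer: join and truncate. A real system would distill with a model.
    digest = " | ".join(m["content"] for m in old)[:max_summary_chars]
    summary = {"role": "summary", "content": digest}
    return [summary] + recent

history = [{"role": "user", "content": f"message {i}"} for i in range(10)]
compact = compress_history(history)
print(len(compact))  # 5 entries: one summary plus the four newest
```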

### Self-Improvement System
- Continuous code analysis identifies improvement opportunities
- Generates Python changes to optimize her own performance
- Second-agent safety review prevents breaking changes
- Non-breaking improvements auto-apply; breaking changes require approval
- Full git history of all code changes

### Safety & Approval
- Second-agent review of all proposed changes
- Risk assessment (LOW/MEDIUM/HIGH/BLOCKED) for each improvement
- Docker sandbox for code execution with resource limits
- User approval via CLI or Discord for breaking changes
- Complete audit log of all changes and decisions
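A hedged sketch of what the locked-down sandbox invocation could look like. The flags are standard `docker run` options; the image name, limits, and helper are illustrative choices rather than the project's actual settings:

```python
def sandbox_args(image, command, memory="256m", cpus="1.0"):
    """Build a locked-down `docker run` argument list for untrusted code."""
    return ["docker", "run", "--rm",
            "--network", "none",     # no outbound network access
            "--memory", memory,      # hard RAM cap
            "--cpus", cpus,          # CPU quota
            "--read-only",           # immutable container filesystem
            "--cap-drop", "ALL",     # drop all Linux capabilities
            image] + command

args = sandbox_args("python:3.11-slim", ["python", "-c", "print('hi')"])
print(" ".join(args))
```

The list would then be executed with something like `subprocess.run(args, timeout=30)` so a hung script also hits a wall-clock limit.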

### Conversational Interface
- **CLI**: Direct terminal-based chat with conversation memory
- **Discord Bot**: DM and channel support with context preservation
- **Approval Workflow**: Reaction-based approvals (thumbs up/down) for code changes
- **Offline Queueing**: Messages queue locally when offline, send when reconnected
|
||||
|
||||
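Offline queueing can be sketched as an append-only local file drained in FIFO order once a sender is available. The file name and message shape below are illustrative, not Mai's actual format:

```python
import json
from pathlib import Path

def enqueue(msg: dict, queue: Path) -> None:
    """Append one message to the local outbox (JSON Lines)."""
    with queue.open("a", encoding="utf-8") as f:
        f.write(json.dumps(msg) + "\n")

def drain(send, queue: Path) -> int:
    """Send all queued messages in order; return how many were sent.

    If `send` raises, the file is kept and drained again later,
    giving at-least-once delivery.
    """
    if not queue.exists():
        return 0
    lines = queue.read_text(encoding="utf-8").splitlines()
    msgs = [json.loads(line) for line in lines if line]
    for m in msgs:
        send(m)
    queue.unlink()
    return len(msgs)
```

At-least-once delivery is the simple choice here; deduplication on the receiving side would be needed if duplicate sends matter.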
### Voice & Avatar

- **Voice Visualization**: Real-time waveform/frequency display during voice input
- **Desktop Avatar**: Visual representation using a static image or VRoid model
- **Context-Aware**: Avatar expressions respond to conversation context and Mai's state
- **Cross-Platform**: Works efficiently on desktop and Android

### Android App

- Native Android implementation with local model inference
- Standalone operation (works without a desktop instance)
- Syncs conversation history and memory with desktop instances
- Voice input/output with low-latency processing
- Efficient battery and CPU management

## Architecture

```
┌─────────────────────────────────────────────────────┐
│                    Mai Framework                    │
├─────────────────────────────────────────────────────┤
│                                                     │
│  ┌───────────────────────────────────────────────┐  │
│  │             Conversational Engine             │  │
│  │    (Multi-turn context, reasoning, memory)    │  │
│  └───────────────────────────────────────────────┘  │
│                          ↓                          │
│  ┌───────────────────────────────────────────────┐  │
│  │            Personality & Behavior             │  │
│  │   (Core values, learned layers, guardrails)   │  │
│  └───────────────────────────────────────────────┘  │
│                          ↓                          │
│  ┌───────────────────────────────────────────────┐  │
│  │  Memory System        │  Model Interface      │  │
│  │  (SQLite, recall)     │  (LMStudio, switch)   │  │
│  └───────────────────────────────────────────────┘  │
│                          ↓                          │
│  ┌───────────────────────────────────────────────┐  │
│  │   Interfaces: CLI | Discord | Android | Web   │  │
│  └───────────────────────────────────────────────┘  │
│                                                     │
│  ┌───────────────────────────────────────────────┐  │
│  │            Self-Improvement System            │  │
│  │   (Code analysis, safety review, git track)   │  │
│  └───────────────────────────────────────────────┘  │
│                                                     │
│  ┌───────────────────────────────────────────────┐  │
│  │        Sync Engine (Desktop ↔ Android)        │  │
│  │         (State, memory, preferences)          │  │
│  └───────────────────────────────────────────────┘  │
│                                                     │
└─────────────────────────────────────────────────────┘
```

## Installation

### Requirements

**Desktop:**
- Python 3.10+
- LMStudio or Ollama for local model inference
- RTX 3060 or better (or CPU with sufficient RAM for smaller models)
- 16GB+ RAM recommended
- Discord account (optional, for the Discord bot interface)

**Android:**
- Android 10+
- 4GB+ RAM
- 1GB+ free storage for models and memory

### Desktop Setup

1. **Clone the repository:**
```bash
git clone https://github.com/yourusername/mai.git
cd mai
```

2. **Create a virtual environment:**
```bash
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```

3. **Install dependencies:**
```bash
pip install -r requirements.txt
```

4. **Configure Mai:**
```bash
cp config.example.yaml config.yaml
# Edit config.yaml with your preferences
```

5. **Start LMStudio/Ollama:**
- Download and launch LMStudio from https://lmstudio.ai
- Or install Ollama from https://ollama.ai
- Load your preferred model (e.g., Mistral, Llama)

6. **Run Mai:**
```bash
python mai.py
```

### Android Setup

1. **Install APK:** Download from releases or build from source
2. **Grant permissions:** Allow microphone, storage, and network access
3. **Configure:** Point to your desktop instance or configure a local model
4. **Start chatting:** Launch the app and begin conversations

### Discord Bot Setup (Optional)

1. **Create a Discord bot** at https://discord.com/developers/applications
2. **Add the bot token** to `config.yaml`
3. **Invite the bot** to your server
4. **Chat:** Mai responds to DMs and handles reaction-based approvals

## Usage

### CLI Chat

```bash
$ python mai.py

You: Hello Mai, how are you?
Mai: I'm doing well. I've been thinking about how our conversations have been evolving...

You: What have you noticed?
Mai: [multi-turn conversation with memory of past interactions]
```

### Discord

- **DM Mai**: `@Mai your message`
- **Approve changes**: React with 👍 to approve, 👎 to reject
- **Get status**: `@Mai status` for current resource usage

### Android App

- Tap the microphone for voice input
- Watch the visualizer animate during processing
- Avatar responds to conversation context
- Swipe up to see full conversation history
- Long-press for approval options

## Configuration

Edit `config.yaml` to customize:

```yaml
# Personality
personality:
  name: Mai
  tone: thoughtful, curious, occasionally playful
  boundaries: [explicit content, illegal activities, deception]

# Model Preferences
models:
  primary: mistral:latest
  fallback: llama2:latest
  max_tokens: 2048

# Memory
memory:
  storage: sqlite
  auto_compress_at: 100000  # tokens
  recall_depth: 10          # previous conversations

# Interfaces
discord:
  enabled: true
  token: YOUR_TOKEN_HERE

android_sync:
  enabled: true
  auto_sync_interval: 300  # seconds
```

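After parsing this file (e.g. with PyYAML's `yaml.safe_load`), missing optional keys need fallbacks. A minimal sketch of applying defaults to the parsed dict — the chosen defaults mirror the example above and are otherwise assumptions, not Mai's documented behavior:

```python
def apply_defaults(cfg: dict) -> dict:
    """Fill in fallback values for optional config keys (assumed defaults)."""
    cfg.setdefault("models", {}).setdefault("max_tokens", 2048)
    cfg.setdefault("memory", {}).setdefault("auto_compress_at", 100_000)
    cfg.setdefault("memory", {}).setdefault("recall_depth", 10)
    cfg.setdefault("discord", {}).setdefault("enabled", False)
    return cfg
```

Centralizing defaults in one place keeps the rest of the code free of scattered `cfg.get(..., fallback)` calls.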
## Project Structure

```
mai/
├── .venv/                  # Python virtual environment
├── .planning/              # Project planning and progress
│   ├── PROJECT.md          # Project vision and core requirements
│   ├── REQUIREMENTS.md     # Full requirements traceability
│   ├── ROADMAP.md          # Phase structure and dependencies
│   ├── PROGRESS.md         # Development progress and milestones
│   ├── STATE.md            # Current project state
│   ├── config.json         # GSD workflow settings
│   ├── codebase/           # Codebase architecture documentation
│   └── PHASE-N-PLAN.md     # Detailed plans for each phase
├── core/                   # Core conversational engine
│   ├── personality/        # Personality and behavior
│   ├── memory/             # Memory and context management
│   └── conversation.py     # Main conversation loop
├── models/                 # Model interface and switching
│   ├── lmstudio.py         # LMStudio integration
│   └── ollama.py           # Ollama integration
├── interfaces/             # User-facing interfaces
│   ├── cli.py              # Command-line interface
│   ├── discord_bot.py      # Discord integration
│   └── web/                # Web UI (future)
├── improvement/            # Self-improvement system
│   ├── analyzer.py         # Code analysis
│   ├── generator.py        # Change generation
│   └── reviewer.py         # Safety review
├── android/                # Android app
│   └── app/                # Kotlin implementation
├── tests/                  # Test suite
├── config.yaml             # Configuration file
└── mai.png                 # Avatar image for README
```

## Development

### Development Environment

Mai's development is managed through **Claude Code** (`/claude`), which handles:
- Phase planning and decomposition
- Code generation and implementation
- Test creation and validation
- Git commit management
- Automated problem-solving

All executable phases use `.venv` for Python dependencies.

### Running Tests

```bash
# Activate venv first
source .venv/bin/activate

# All tests
python -m pytest

# Specific module
python -m pytest tests/core/test_conversation.py

# With coverage
python -m pytest --cov=mai
```

### Making Changes to Mai

Development workflow:
1. Plans are created in `.planning/PHASE-N-PLAN.md`
2. Claude Code (`/gsd` commands) executes the plans
3. All changes are committed to git as atomic commits
4. Mai can propose self-improvements via the self-improvement system

Once Phase 7 (Self-Improvement) is complete, Mai can propose and auto-apply her own improvements.

### Contributing

Development happens through the GSD workflow:
1. Run `/gsd:plan-phase N` to create detailed phase plans
2. Run `/gsd:execute-phase N` to implement with atomic commits
3. Tests are auto-generated and executed
4. All work is tracked in git with clear commit messages
5. Code review via second-agent safety review before merge

## Roadmap

See `.planning/ROADMAP.md` for the full development roadmap across 15 phases:

1. **Model Interface** - LMStudio integration and model switching
2. **Safety System** - Sandboxing and code review
3. **Resource Management** - CPU/RAM/GPU optimization
4. **Memory System** - Persistent conversation history
5. **Conversation Engine** - Multi-turn dialogue with reasoning
6. **CLI Interface** - Terminal chat interface
7. **Self-Improvement** - Code analysis and generation
8. **Approval Workflow** - User and agent approval systems
9. **Personality System** - Core values and learned behaviors
10. **Discord Interface** - Bot integration and notifications
11. **Offline Operations** - Full offline capability
12. **Voice Visualization** - Real-time audio visualization
13. **Desktop Avatar** - Visual presence on desktop
14. **Android App** - Mobile implementation
15. **Device Sync** - Cross-device synchronization

## Safety & Ethics

Mai is designed with safety as a core principle:

- **No unguarded execution**: All code changes reviewed by a second agent
- **Transparent decisions**: Mai explains her reasoning when asked
- **User control**: Breaking changes require explicit approval
- **Audit trail**: Complete history of all changes and decisions
- **Value-based guardrails**: Core personality prevents misuse through values, not just rules

## Performance

Typical performance on an RTX 3060:

- **Response time**: 2-8 seconds for typical queries
- **Memory usage**: 4-8GB depending on model size
- **Model switching**: <1 second
- **Conversation recall**: <500ms for relevant history retrieval

## Known Limitations (v1)

- No task automation (conversations only)
- Single-device models until the Sync phase
- Voice visualization requires active audio input
- Avatar animations are context-based, not generative
- No web interface (CLI and Discord only)

## Troubleshooting

**Model not loading:**
- Ensure LMStudio/Ollama is running on the expected port
- Check `config.yaml` for correct model names
- Verify sufficient disk space for model files

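A quick way to confirm the backend is reachable before digging further. This is a sketch using only the standard library; 11434 and 1234 are the usual default ports for Ollama and LMStudio respectively, but check your own setup:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def backend_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if something answers HTTP at `url` within the timeout."""
    try:
        with urlopen(url, timeout=timeout):
            return True
    except HTTPError:
        return True   # server answered, even if with an error status
    except (URLError, OSError):
        return False

# e.g. backend_up("http://127.0.0.1:11434")  # Ollama's usual default port
```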
**High memory usage:**
- Reduce `max_tokens` in config
- Use a smaller model (e.g., Mistral instead of Llama)
- Enable auto-compression at a lower threshold

**Discord bot not responding:**
- Verify the bot token in config
- Check the Discord bot has message read permissions
- Ensure the Mai process is running

**Android sync not working:**
- Verify both devices are on the same network
- Check the firewall isn't blocking local connections
- Ensure the desktop instance is running

## License

MIT License - See LICENSE file for details

## Contact & Community

- **Discord**: Join our community server (link in Discord bot)
- **Issues**: Report bugs at https://github.com/yourusername/mai/issues
- **Discussions**: Propose features at https://github.com/yourusername/mai/discussions

---

**Mai is a work in progress.** Follow development in `.planning/PROGRESS.md` for updates on active work.