Phase 01-model-interface: Foundation systems - 3 plan(s) in 2 wave(s) - 2 parallel, 1 sequential - Ready for execution
| phase | plan | type | wave | depends_on | files_modified | autonomous | must_haves |
|---|---|---|---|---|---|---|---|
| 01-model-interface | 03 | execute | 2 | | | true | |
Purpose: Combine LM Studio client, resource monitoring, and context management into a cohesive system that can intelligently select and switch models based on resources and conversation needs.
Output: Working ModelManager with intelligent switching and basic Mai orchestration.
<execution_context>
@/.opencode/get-shit-done/workflows/execute-plan.md
@/.opencode/get-shit-done/templates/summary.md
</execution_context>
Task 1: Create ModelManager class src/models/model_manager.py
Key methods:
- __init__: Load config, initialize adapters and monitors
- select_best_model(conversation_context): Choose optimal model
- switch_model(target_model_key): Handle model transition
- generate_response(message, conversation): Generate response with auto-switching
- get_current_model_status(): Return current model and resource usage
- preload_model(model_key): Background model loading
Follow CONTEXT.md decisions:
- Silent switching with no user notifications
- Dynamic switching mid-task if model struggles
- Smart context transfer during switches
- Auto-retry on model failures
Use research patterns for resource-aware selection and implement graceful degradation when no model fits constraints (a sketch of this interface follows below).
Verify: python -c "from src.models.model_manager import ModelManager; mm = ModelManager(); print(hasattr(mm, 'select_best_model') and hasattr(mm, 'generate_response'))"
Done when: ModelManager can intelligently select and switch models based on resources.
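The plan leaves the internals open; a minimal Python sketch of the interface above might look like the following. The config shape (model key mapped to minimum free memory), model names, thresholds, and the stubbed resource monitor are illustrative assumptions, not values specified by this plan.

```python
from dataclasses import dataclass, field


@dataclass
class ModelManager:
    """Selects and switches models based on resources and conversation needs."""
    # Assumed config shape: model key -> minimum free memory in GiB.
    model_requirements: dict[str, float] = field(default_factory=lambda: {
        "large-model": 16.0, "medium-model": 8.0, "small-model": 4.0,
    })
    current_model: str | None = None

    def _free_memory_gib(self) -> float:
        """Stub for the real resource monitor."""
        return 8.0

    def select_best_model(self, conversation_context: str) -> str:
        """Pick the most capable model that fits current resources."""
        free = self._free_memory_gib()
        for key, need in sorted(self.model_requirements.items(),
                                key=lambda kv: kv[1], reverse=True):
            if free >= need:
                return key
        # Graceful degradation: no model fits, fall back to the smallest.
        return min(self.model_requirements, key=self.model_requirements.get)

    def switch_model(self, target_model_key: str) -> None:
        """Silently transition models (no user notification, per CONTEXT.md)."""
        if target_model_key != self.current_model:
            # Real implementation: transfer context, load via LM Studio client.
            self.current_model = target_model_key

    def generate_response(self, message: str, conversation: list) -> str:
        """Generate a response, auto-switching to the best-fitting model first."""
        self.switch_model(self.select_best_model(message))
        return f"[{self.current_model}] placeholder response to: {message}"
```

The sorted-descending loop encodes the "prefer the largest model that fits" policy; swapping in a different selection heuristic only touches select_best_model.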
Task 2: Create core Mai orchestration class src/mai.py
Create core Mai class following architecture patterns:
1. Initialize ModelManager, ContextManager, and other systems
2. Provide main conversation interface:
   - process_message(user_input): Process message and return response
   - get_conversation_history(): Retrieve conversation context
   - get_system_status(): Return current model and resource status
3. Implement basic conversation flow using ModelManager
4. Add error handling and graceful degradation
5. Support both synchronous and async operation (asyncio)
6. Include basic logging of model switches and resource events
Key methods:
- __init__: Initialize all subsystems
- process_message(message): Main conversation entry point
- get_status(): Return system state for monitoring
- shutdown(): Clean up resources
Follow architecture: the Mai class is the main coordinator and delegates to specialized subsystems. Keep its logic simple; most complexity belongs in ModelManager and ContextManager (a sketch follows below).
Verify: python -c "from src.mai import Mai; mai = Mai(); print(hasattr(mai, 'process_message') and hasattr(mai, 'get_status'))"
Done when: Core Mai class orchestrates conversation processing with model switching.
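A minimal sketch of that coordinator, reusing the hypothetical ModelManager above; the in-memory history list is a stand-in for the real ContextManager, whose interface is defined elsewhere.

```python
from src.models.model_manager import ModelManager  # Task 1 deliverable


class Mai:
    """Main coordinator: delegates real work to specialized subsystems."""

    def __init__(self):
        self.model_manager = ModelManager()
        self.history: list[dict] = []  # stand-in for ContextManager

    def process_message(self, message: str) -> str:
        """Main conversation entry point with basic error handling."""
        self.history.append({"role": "user", "content": message})
        try:
            reply = self.model_manager.generate_response(message, self.history)
        except Exception:
            # Graceful degradation; the auto-retry hook would also live here.
            reply = "Sorry, something went wrong."
        self.history.append({"role": "assistant", "content": reply})
        return reply

    def get_status(self) -> dict:
        """Return system state for monitoring."""
        return {"model": self.model_manager.current_model,
                "history_len": len(self.history)}

    def shutdown(self) -> None:
        """Clean up resources (unload models, flush logs)."""
        self.history.clear()
```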
Task 3: Create CLI entry point for testing src/__main__.py
Create CLI entry point following project structure:
1. Implement __main__.py with command-line interface
2. Add simple interactive chat loop for testing model switching
3. Include status commands to show current model and resources
4. Support basic configuration and model management commands
5. Add proper signal handling for graceful shutdown
6. Include help text and usage examples
Commands:
- chat: Interactive conversation mode
- status: Show current model and system resources
- models: List available models
- switch <model>: Manual model override for testing
Use argparse for command-line parsing. Follow standard Python package entry point patterns (a sketch follows below).
Verify: python -m mai --help shows usage information and commands
Done when: CLI interface provides working chat and system monitoring commands.
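A possible shape for src/__main__.py, wiring the four commands above to the hypothetical Mai class with argparse; the subcommand names come from the list, everything else is illustrative.

```python
import argparse
import signal
import sys

from src.mai import Mai  # Task 2 deliverable


def main() -> None:
    parser = argparse.ArgumentParser(prog="mai", description="Mai CLI for testing")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("chat", help="Interactive conversation mode")
    sub.add_parser("status", help="Show current model and system resources")
    sub.add_parser("models", help="List available models")
    switch = sub.add_parser("switch", help="Manual model override for testing")
    switch.add_argument("model", help="Model key to switch to")
    args = parser.parse_args()

    mai = Mai()
    # Graceful shutdown on Ctrl+C.
    signal.signal(signal.SIGINT, lambda *_: (mai.shutdown(), sys.exit(0)))

    if args.command == "chat":
        while True:  # simple interactive loop for testing model switching
            print(mai.process_message(input("you> ")))
    elif args.command == "status":
        print(mai.get_status())
    elif args.command == "models":
        print(list(mai.model_manager.model_requirements))
    elif args.command == "switch":
        mai.model_manager.switch_model(args.model)
        print(f"Switched to {args.model}")


if __name__ == "__main__":
    main()
```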
Verify integrated system (a smoke-test sketch follows the success criteria):
1. ModelManager can select appropriate models based on resources
2. Conversation processing works with automatic model switching
3. CLI interface allows testing chat and monitoring
4. Context is preserved during model switches
5. System gracefully handles model loading failures
6. Resource monitoring triggers appropriate model changes
<success_criteria>
Complete model interface system:
- Intelligent model selection based on system resources
- Seamless conversation processing with automatic switching
- Working CLI interface for testing and monitoring
- Foundation ready for integration with memory and personality systems
</success_criteria>
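A possible smoke test for the verification checklist, under the same assumptions as the sketches above; checks 3, 5, and 6 need real models and a real resource monitor, so only the statically testable items are exercised here.

```python
from src.mai import Mai


def test_integrated_system():
    mai = Mai()
    # Checks 1-2: selection plus conversation processing with auto-switching.
    reply = mai.process_message("hello")
    assert isinstance(reply, str)
    assert mai.get_status()["model"] is not None
    # Check 4: context accumulates across turns (and thus across switches).
    mai.process_message("second turn")
    assert mai.get_status()["history_len"] == 4  # two user + two assistant entries
    mai.shutdown()


test_integrated_system()
print("integration smoke test passed")
```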