docs(04): create phase plan
Phase 04: Memory & Context Management - 4 plan(s) in 3 wave(s) - 2 parallel, 2 sequential - Ready for execution
@@ -51,6 +51,12 @@ Mai's development is organized into three major milestones, each delivering dist
- Distill long-term patterns into personality layers
- Proactively surface relevant context from memory

**Plans:** 4 plans in 3 waves

- [ ] 04-01-PLAN.md — Storage foundation with SQLite and sqlite-vec
- [ ] 04-02-PLAN.md — Semantic search and context-aware retrieval
- [ ] 04-03-PLAN.md — Progressive compression and JSON archival
- [ ] 04-04-PLAN.md — Personality learning and adaptive layers

### Phase 5: Conversation Engine

- Multi-turn context preservation
- Reasoning transparency and clarifying questions
.planning/phases/04-memory-context-management/04-01-PLAN.md (140 lines, new file)
@@ -0,0 +1,140 @@
---
phase: 04-memory-context-management
plan: 01
type: execute
wave: 1
depends_on: []
files_modified: ["src/memory/__init__.py", "src/memory/storage/sqlite_manager.py", "src/memory/storage/vector_store.py", "src/memory/storage/__init__.py", "requirements.txt"]
autonomous: true

must_haves:
  truths:
    - "Conversations are stored locally in SQLite database"
    - "Vector embeddings are stored using sqlite-vec extension"
    - "Database schema supports conversations, messages, and embeddings"
    - "Memory system persists across application restarts"
  artifacts:
    - path: "src/memory/storage/sqlite_manager.py"
      provides: "SQLite database operations and schema management"
      min_lines: 80
    - path: "src/memory/storage/vector_store.py"
      provides: "Vector storage and retrieval with sqlite-vec"
      min_lines: 60
    - path: "src/memory/__init__.py"
      provides: "Memory module entry point"
      exports: ["MemoryManager"]
  key_links:
    - from: "src/memory/storage/sqlite_manager.py"
      to: "sqlite-vec extension"
      via: "extension loading and virtual table creation"
      pattern: "load_extension.*vec0"
    - from: "src/memory/storage/vector_store.py"
      to: "src/memory/storage/sqlite_manager.py"
      via: "database connection for vector operations"
      pattern: "sqlite_manager\\.db"
---

<objective>
Create the foundational storage layer for conversation memory using SQLite with the sqlite-vec extension. This establishes the hybrid storage architecture where recent conversations are kept in SQLite for fast access, with vector capabilities for semantic search.

Purpose: Provide persistent, reliable storage that serves as the foundation for all memory operations
Output: Working SQLite database with vector support and basic conversation/message storage
</objective>
<execution_context>
@~/.opencode/get-shit-done/workflows/execute-plan.md
@~/.opencode/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/phases/04-memory-context-management/04-CONTEXT.md
@.planning/phases/04-memory-context-management/04-RESEARCH.md
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md

# Reference existing models structure
@src/models/context_manager.py
@src/models/conversation.py
</context>
<tasks>

<task type="auto">
<name>Task 1: Create memory module structure and SQLite manager</name>
<files>src/memory/__init__.py, src/memory/storage/__init__.py, src/memory/storage/sqlite_manager.py</files>
<action>
Create the memory module structure following the research pattern:

1. Create src/memory/__init__.py with a MemoryManager class stub
2. Create src/memory/storage/__init__.py
3. Create src/memory/storage/sqlite_manager.py with:
   - SQLiteManager class with connection management
   - Database schema for conversations, messages, and metadata
   - Table creation with proper indexing
   - Connection pooling and thread safety
   - Database migration support

Use the schema from research, with a conversations table (id, title, created_at, updated_at, metadata) and a messages table (id, conversation_id, role, content, timestamp, embedding_id).

Include proper error handling and connection management, and follow existing code patterns from the src/models/ modules.
</action>
<verify>python -c "from src.memory.storage.sqlite_manager import SQLiteManager; db = SQLiteManager(':memory:'); print('SQLite manager created successfully')"</verify>
<done>SQLite manager can create and connect to a database with the proper schema</done>
</task>
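The two tables described above can be sketched in plain `sqlite3`. Column types, default expressions, and the index name are assumptions of this sketch, not values taken from the research:

```python
import sqlite3

# Assumed DDL for the conversations/messages schema named in Task 1.
SCHEMA = """
CREATE TABLE IF NOT EXISTS conversations (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    title TEXT,
    created_at TEXT DEFAULT (datetime('now')),
    updated_at TEXT DEFAULT (datetime('now')),
    metadata TEXT  -- JSON blob
);
CREATE TABLE IF NOT EXISTS messages (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    conversation_id INTEGER NOT NULL REFERENCES conversations(id),
    role TEXT NOT NULL,
    content TEXT NOT NULL,
    timestamp TEXT DEFAULT (datetime('now')),
    embedding_id INTEGER
);
CREATE INDEX IF NOT EXISTS idx_messages_conversation
    ON messages(conversation_id);
"""

def connect(path: str = ":memory:") -> sqlite3.Connection:
    # check_same_thread=False is one simple route toward the thread
    # safety the plan calls for; a real pool would add its own locking.
    conn = sqlite3.connect(path, check_same_thread=False)
    conn.executescript(SCHEMA)
    return conn
```

The index on `messages.conversation_id` keeps per-conversation message lookups fast as history grows.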
<task type="auto">
<name>Task 2: Implement vector store with sqlite-vec integration</name>
<files>src/memory/storage/vector_store.py, requirements.txt</files>
<action>
Create src/memory/storage/vector_store.py with a VectorStore class:

1. Add sqlite-vec to requirements.txt
2. Implement VectorStore with:
   - sqlite-vec extension loading
   - Virtual table creation for embeddings (using vec0)
   - Vector insertion and retrieval methods
   - Support for different embedding dimensions (start with 384 for all-MiniLM-L6-v2)
   - Integration with SQLiteManager for the database connection

Follow the research pattern for sqlite-vec setup:

```python
db.enable_load_extension(True)
db.load_extension("vec0")
db.execute(
    "CREATE VIRTUAL TABLE IF NOT EXISTS vec_memory "
    "USING vec0(embedding float[384], content text, message_id integer)"
)
```

Include methods to:
- Store embeddings with message references
- Search by vector similarity
- Batch operations for multiple embeddings
- Handle embedding model version tracking

Use existing error handling patterns from the src/models/ modules.
</action>
<verify>python -c "from src.memory.storage.vector_store import VectorStore; import numpy as np; vs = VectorStore(':memory:'); test_vec = np.random.rand(384).astype(np.float32); print('Vector store created successfully')"</verify>
<done>Vector store can create tables and handle basic vector operations</done>
</task>
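The "search by vector similarity" method above ultimately ranks stored embeddings by a distance metric. A dependency-free sketch of cosine ranking, useful as a reference implementation or keyword-free fallback when the vec0 extension is unavailable (function names are illustrative, not part of the plan):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # dot(a, b) / (|a| * |b|); 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def top_k(query: list[float], corpus: dict[int, list[float]], k: int = 5) -> list[int]:
    # Rank stored embeddings (keyed by message id) against the query.
    ranked = sorted(corpus, key=lambda i: cosine_similarity(query, corpus[i]), reverse=True)
    return ranked[:k]
```

In production the same ranking is delegated to the vec0 virtual table, which indexes the vectors instead of scanning them.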
</tasks>

<verification>
After completion, verify:
1. SQLite database can be created with the proper schema
2. Vector extension loads correctly
3. Basic conversation and message storage works
4. Vector embeddings can be stored and retrieved
5. Integration with the existing model system works
</verification>

<success_criteria>
- Memory module structure created following research recommendations
- SQLite manager handles database operations with proper schema
- Vector store integrates sqlite-vec for embedding storage and search
- Error handling and connection management follow existing patterns
- Database persists data correctly across restarts
</success_criteria>

<output>
After completion, create `.planning/phases/04-memory-context-management/04-01-SUMMARY.md`
</output>
.planning/phases/04-memory-context-management/04-02-PLAN.md (161 lines, new file)
@@ -0,0 +1,161 @@
---
phase: 04-memory-context-management
plan: 02
type: execute
wave: 2
depends_on: ["04-01"]
files_modified: ["src/memory/retrieval/__init__.py", "src/memory/retrieval/semantic_search.py", "src/memory/retrieval/context_aware.py", "src/memory/retrieval/timeline_search.py", "src/memory/__init__.py"]
autonomous: true

must_haves:
  truths:
    - "User can search conversations by semantic meaning"
    - "Search results are ranked by relevance to query"
    - "Context-aware search prioritizes current topic discussions"
    - "Timeline search allows filtering by date ranges"
    - "Hybrid search combines semantic and keyword matching"
  artifacts:
    - path: "src/memory/retrieval/semantic_search.py"
      provides: "Semantic search with embedding-based similarity"
      min_lines: 70
    - path: "src/memory/retrieval/context_aware.py"
      provides: "Topic-based search prioritization"
      min_lines: 50
    - path: "src/memory/retrieval/timeline_search.py"
      provides: "Date-range filtering and temporal search"
      min_lines: 40
    - path: "src/memory/__init__.py"
      provides: "Updated MemoryManager with search capabilities"
      exports: ["MemoryManager", "SemanticSearch"]
  key_links:
    - from: "src/memory/retrieval/semantic_search.py"
      to: "src/memory/storage/vector_store.py"
      via: "vector similarity search operations"
      pattern: "vector_store\\.search_similar"
    - from: "src/memory/retrieval/context_aware.py"
      to: "src/memory/storage/sqlite_manager.py"
      via: "conversation metadata for topic analysis"
      pattern: "sqlite_manager\\.get_conversation_metadata"
    - from: "src/memory/__init__.py"
      to: "src/memory/retrieval/"
      via: "search method delegation"
      pattern: "semantic_search\\.find"
---

<objective>
Implement the memory retrieval system with semantic search, context-aware prioritization, and timeline filtering. This enables intelligent recall of past conversations using multiple search strategies.

Purpose: Allow users and the system to find relevant conversations quickly using semantic meaning, context awareness, and temporal filters
Output: Working search system that can retrieve conversations by meaning, topic, and time range
</objective>
<execution_context>
@~/.opencode/get-shit-done/workflows/execute-plan.md
@~/.opencode/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/phases/04-memory-context-management/04-CONTEXT.md
@.planning/phases/04-memory-context-management/04-RESEARCH.md
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md

# Reference storage foundation
@.planning/phases/04-memory-context-management/04-01-SUMMARY.md

# Reference existing conversation handling
@src/models/conversation.py
@src/models/context_manager.py
</context>
<tasks>

<task type="auto">
<name>Task 1: Create semantic search with embedding-based retrieval</name>
<files>src/memory/retrieval/__init__.py, src/memory/retrieval/semantic_search.py</files>
<action>
Create src/memory/retrieval/semantic_search.py with a SemanticSearch class:

1. Add sentence-transformers to requirements.txt (use all-MiniLM-L6-v2 for efficiency)
2. Implement SemanticSearch with:
   - Embedding model loading (lazy loading for performance)
   - Query embedding generation
   - Vector similarity search using VectorStore from plan 04-01
   - Hybrid search combining semantic and keyword matching
   - Result ranking and relevance scoring
   - Conversation snippet generation for context

Follow the research pattern for hybrid search:
- Generate the query embedding
- Search the vector store for similar conversations
- Fall back to keyword search if there are no semantic results
- Combine and rank results with weighted scoring

Include methods:
- search(query: str, limit: int = 5) -> List[SearchResult]
- search_by_embedding(embedding: np.ndarray, limit: int = 5) -> List[SearchResult]
- keyword_search(query: str, limit: int = 5) -> List[SearchResult]

Use existing error handling patterns and type hints from the src/models/ modules.
</action>
<verify>python -c "from src.memory.retrieval.semantic_search import SemanticSearch; search = SemanticSearch(':memory:'); print('Semantic search created successfully')"</verify>
<done>Semantic search can generate embeddings and perform basic search operations</done>
</task>
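The weighted scoring step of hybrid search can be sketched as follows. The 0.7/0.3 split between semantic and keyword scores is an assumed default for illustration, not a value from the research:

```python
def hybrid_score(semantic: float, keyword: float,
                 semantic_weight: float = 0.7) -> float:
    # Weighted blend of semantic and keyword relevance; scores are
    # assumed normalized to [0, 1] before blending.
    return semantic_weight * semantic + (1 - semantic_weight) * keyword

def rank_results(candidates: list[dict], limit: int = 5) -> list[dict]:
    # Each candidate carries per-strategy scores; sort by blended score.
    for c in candidates:
        c["score"] = hybrid_score(c.get("semantic", 0.0), c.get("keyword", 0.0))
    return sorted(candidates, key=lambda c: c["score"], reverse=True)[:limit]
```

A candidate found only by keyword search still participates (its semantic score defaults to 0.0), which is what makes the keyword fallback blend cleanly into one ranked list.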
<task type="auto">
<name>Task 2: Implement context-aware and timeline search capabilities</name>
<files>src/memory/retrieval/context_aware.py, src/memory/retrieval/timeline_search.py, src/memory/__init__.py</files>
<action>
Create the context-aware and timeline search components:

1. Create src/memory/retrieval/context_aware.py with ContextAwareSearch:
   - Topic extraction from current conversation context
   - Conversation topic classification using simple heuristics
   - Topic-based result prioritization
   - Current conversation context tracking
   - Methods: prioritize_by_topic(results: List[SearchResult], current_topic: str) -> List[SearchResult]

2. Create src/memory/retrieval/timeline_search.py with TimelineSearch:
   - Date-range filtering for conversations
   - Temporal proximity search (find conversations near specific dates)
   - Recency-based result weighting
   - Conversation age calculation and compression-level awareness
   - Methods: search_by_date_range(start: datetime, end: datetime, limit: int = 5) -> List[SearchResult]

3. Update src/memory/__init__.py to integrate search capabilities:
   - Import all search classes
   - Add search methods to MemoryManager
   - Provide a unified search interface combining semantic, context-aware, and timeline search
   - Add search result dataclasses with relevance scores and conversation snippets

Follow existing patterns from src/models/ for data structures and error handling. Ensure search results include conversation metadata for context.
</action>
<verify>python -c "from src.memory import MemoryManager; mm = MemoryManager(':memory:'); print('Memory manager with search created successfully')"</verify>
<done>Memory manager provides unified search interface with all search modes</done>
</task>
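The recency weighting and date-range filtering in Task 2 might look like the sketch below; the exponential decay and the 30-day half-life are illustrative choices, not requirements from the plan:

```python
from datetime import datetime, timedelta

def in_date_range(ts: datetime, start: datetime, end: datetime) -> bool:
    # Inclusive date-range filter for search_by_date_range.
    return start <= ts <= end

def recency_weight(ts: datetime, now: datetime, half_life_days: float = 30.0) -> float:
    # Exponential decay: a conversation half_life_days old scores 0.5,
    # twice that old scores 0.25. Fresh conversations score 1.0.
    age_days = max((now - ts).total_seconds() / 86400.0, 0.0)
    return 0.5 ** (age_days / half_life_days)
```

Multiplying a result's relevance score by `recency_weight` is one simple way to bias ties toward newer conversations without hiding old ones entirely.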
</tasks>

<verification>
After completion, verify:
1. Semantic search can find conversations by meaning
2. Context-aware search prioritizes relevant topics
3. Timeline search filters by date ranges correctly
4. Hybrid search combines semantic and keyword results
5. Search results include proper relevance scoring and conversation snippets
6. Integration with the storage layer works correctly
</verification>

<success_criteria>
- Semantic search uses sentence-transformers for embedding generation
- Context-aware search prioritizes topics relevant to the current discussion
- Timeline search enables date-range filtering and temporal search
- Hybrid search combines multiple search strategies with proper ranking
- Memory manager provides a unified search interface
- Search results include conversation context and relevance scoring
</success_criteria>

<output>
After completion, create `.planning/phases/04-memory-context-management/04-02-SUMMARY.md`
</output>
.planning/phases/04-memory-context-management/04-03-PLAN.md (172 lines, new file)
@@ -0,0 +1,172 @@
---
phase: 04-memory-context-management
plan: 03
type: execute
wave: 2
depends_on: ["04-01"]
files_modified: ["src/memory/backup/__init__.py", "src/memory/backup/archival.py", "src/memory/backup/retention.py", "src/memory/storage/compression.py", "src/memory/__init__.py"]
autonomous: true

must_haves:
  truths:
    - "Old conversations are automatically compressed to save space"
    - "Compression preserves important information while reducing size"
    - "JSON archival system stores compressed conversations"
    - "Smart retention keeps important conversations longer"
    - "7/30/90 day compression tiers are implemented"
  artifacts:
    - path: "src/memory/storage/compression.py"
      provides: "Progressive conversation compression"
      min_lines: 80
    - path: "src/memory/backup/archival.py"
      provides: "JSON export/import for long-term storage"
      min_lines: 60
    - path: "src/memory/backup/retention.py"
      provides: "Smart retention policies based on conversation importance"
      min_lines: 50
    - path: "src/memory/__init__.py"
      provides: "MemoryManager with archival capabilities"
      exports: ["MemoryManager", "CompressionEngine"]
  key_links:
    - from: "src/memory/storage/compression.py"
      to: "src/memory/storage/sqlite_manager.py"
      via: "conversation data retrieval for compression"
      pattern: "sqlite_manager\\.get_conversation"
    - from: "src/memory/backup/archival.py"
      to: "src/memory/storage/compression.py"
      via: "compressed conversation data"
      pattern: "compression_engine\\.compress"
    - from: "src/memory/backup/retention.py"
      to: "src/memory/storage/sqlite_manager.py"
      via: "conversation importance analysis"
      pattern: "sqlite_manager\\.update_importance_score"
---

<objective>
Implement a progressive compression and archival system to manage memory growth efficiently. This ensures the memory system can scale without growing indefinitely, while preserving important information.

Purpose: Automatically compress and archive old conversations to maintain performance and storage efficiency
Output: Working compression engine with JSON archival and smart retention policies
</objective>
<execution_context>
@~/.opencode/get-shit-done/workflows/execute-plan.md
@~/.opencode/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/phases/04-memory-context-management/04-CONTEXT.md
@.planning/phases/04-memory-context-management/04-RESEARCH.md
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md

# Reference storage foundation
@.planning/phases/04-memory-context-management/04-01-SUMMARY.md

# Reference compression research patterns
@.planning/phases/04-memory-context-management/04-RESEARCH.md
</context>
<tasks>

<task type="auto">
<name>Task 1: Implement progressive compression engine</name>
<files>src/memory/storage/compression.py</files>
<action>
Create src/memory/storage/compression.py with a CompressionEngine class:

1. Implement progressive compression following the research pattern:
   - 7 days: Full content (no compression)
   - 30 days: Key points extraction (70% retention)
   - 90 days: Brief summary (40% retention)
   - 365+ days: Metadata only

2. Add transformers to requirements.txt for summarization
3. Implement compression methods:
   - extract_key_points(conversation: Conversation) -> str
   - generate_summary(conversation: Conversation, target_ratio: float = 0.4) -> str
   - extract_metadata_only(conversation: Conversation) -> dict

4. Use a hybrid extractive-abstractive approach:
   - Extract key sentences using NLTK or simple heuristics
   - Generate an abstractive summary using the transformers pipeline
   - Preserve important quotes, facts, and decision points

5. Include compression quality metrics:
   - Information retention scoring
   - Compression ratio calculation
   - Quality validation checks

6. Add methods:
   - compress_by_age(conversation: Conversation) -> CompressedConversation
   - get_compression_level(age_days: int) -> CompressionLevel
   - decompress(compressed: CompressedConversation) -> ConversationSummary

Follow existing error handling patterns from the src/models/ modules.
</action>
<verify>python -c "from src.memory.storage.compression import CompressionEngine; ce = CompressionEngine(); print('Compression engine created successfully')"</verify>
<done>Compression engine can compress conversations at different levels</done>
</task>
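The `get_compression_level` mapping follows directly from the 7/30/90/365+ tiers above. How the gap between 90 and 365 days is handled is not specified by the plan, so keeping those conversations at the summary tier is an assumption of this sketch:

```python
from enum import Enum

class CompressionLevel(Enum):
    FULL = "full"              # up to 7 days: full content
    KEY_POINTS = "key_points"  # up to 30 days: ~70% retention
    SUMMARY = "summary"        # 30+ days: ~40% retention
    METADATA = "metadata"      # 365+ days: metadata only

def get_compression_level(age_days: int) -> CompressionLevel:
    # Inclusive tier boundaries are an assumption; the plan only
    # names the 7/30/90/365+ schedule.
    if age_days <= 7:
        return CompressionLevel.FULL
    if age_days <= 30:
        return CompressionLevel.KEY_POINTS
    if age_days <= 365:
        return CompressionLevel.SUMMARY
    return CompressionLevel.METADATA
```

`compress_by_age` can then dispatch on the returned level to pick between `extract_key_points`, `generate_summary`, and `extract_metadata_only`.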
<task type="auto">
<name>Task 2: Create JSON archival and smart retention systems</name>
<files>src/memory/backup/__init__.py, src/memory/backup/archival.py, src/memory/backup/retention.py, src/memory/__init__.py</files>
<action>
Create the archival and retention components:

1. Create src/memory/backup/archival.py with ArchivalManager:
   - JSON export/import for compressed conversations
   - Archival directory structure by year/month
   - Batch archival operations
   - Import capabilities for restoring conversations
   - Methods: archive_conversations(), restore_conversation(), list_archived()

2. Create src/memory/backup/retention.py with RetentionPolicy:
   - Value-based retention scoring
   - User-marked important conversations
   - High engagement detection (length, back-and-forth)
   - Smart retention overrides compression rules
   - Methods: calculate_importance_score(), should_retain_full(), update_retention_policy()

3. Update src/memory/__init__.py to integrate archival:
   - Add archival methods to MemoryManager
   - Implement automatic compression triggering
   - Add archival scheduling capabilities
   - Provide manual archival controls

4. Include backup integration:
   - Integrate with existing system backup processes
   - Ensure archival data is included in regular backups
   - Provide restore verification and validation

Follow existing patterns for data management and error handling. Ensure the archival JSON structure is human-readable and versioned for future compatibility.
</action>
<verify>python -c "from src.memory import MemoryManager; mm = MemoryManager(':memory:'); print('Memory manager with archival created successfully')"</verify>
<done>Memory manager can compress and archive conversations automatically</done>
</task>
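The year/month archival layout can be sketched as below. The file naming scheme and the top-level `version` field are assumptions of this sketch (the plan only asks that archives be versioned and human-readable):

```python
import json
from datetime import datetime
from pathlib import Path

def archive_path(root: Path, created_at: datetime, conversation_id: int) -> Path:
    # Year/month directory layout, e.g. root/2025/03/conversation-7.json
    return (root / f"{created_at.year:04d}" / f"{created_at.month:02d}"
            / f"conversation-{conversation_id}.json")

def write_archive(root: Path, conversation: dict) -> Path:
    # Human-readable, versioned JSON as the plan requires.
    created = datetime.fromisoformat(conversation["created_at"])
    path = archive_path(root, created, conversation["id"])
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({"version": 1, "conversation": conversation}, indent=2))
    return path
```

Because the layout is derived purely from `created_at` and `id`, `restore_conversation` can locate an archive without any extra index.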
</tasks>

<verification>
After completion, verify:
1. Compression engine works at all 4 levels (7/30/90/365+ days)
2. JSON archival stores compressed conversations correctly
3. Smart retention keeps important conversations from over-compression
4. Archival directory structure is organized and navigable
5. Integration with the storage layer works for compression triggers
6. Restore functionality brings back conversations correctly
</verification>

<success_criteria>
- Progressive compression reduces storage usage while preserving information
- JSON archival provides human-readable long-term storage
- Smart retention policies preserve important conversations
- Compression ratios meet research recommendations (70%/40%/metadata)
- Archival system integrates with existing backup processes
- Memory manager provides a unified interface for compression and archival
</success_criteria>

<output>
After completion, create `.planning/phases/04-memory-context-management/04-03-SUMMARY.md`
</output>
.planning/phases/04-memory-context-management/04-04-PLAN.md (184 lines, new file)
@@ -0,0 +1,184 @@
---
phase: 04-memory-context-management
plan: 04
type: execute
wave: 3
depends_on: ["04-01", "04-02", "04-03"]
files_modified: ["src/memory/personality/__init__.py", "src/memory/personality/pattern_extractor.py", "src/memory/personality/layer_manager.py", "src/memory/personality/adaptation.py", "src/memory/__init__.py", "src/personality.py"]
autonomous: true

must_haves:
  truths:
    - "Personality layers learn from conversation patterns"
    - "Multi-dimensional learning covers topics, sentiment, interaction patterns"
    - "Personality overlays enhance rather than replace core values"
    - "Learning algorithms prevent overfitting to recent conversations"
    - "Personality system integrates with existing personality.py"
  artifacts:
    - path: "src/memory/personality/pattern_extractor.py"
      provides: "Pattern extraction from conversations"
      min_lines: 80
    - path: "src/memory/personality/layer_manager.py"
      provides: "Personality overlay system"
      min_lines: 60
    - path: "src/memory/personality/adaptation.py"
      provides: "Dynamic personality updates"
      min_lines: 50
    - path: "src/memory/__init__.py"
      provides: "Complete MemoryManager with personality learning"
      exports: ["MemoryManager", "PersonalityLearner"]
    - path: "src/personality.py"
      provides: "Updated personality system with memory integration"
      min_lines: 20
  key_links:
    - from: "src/memory/personality/pattern_extractor.py"
      to: "src/memory/storage/sqlite_manager.py"
      via: "conversation data for pattern analysis"
      pattern: "sqlite_manager\\.get_conversations_for_analysis"
    - from: "src/memory/personality/layer_manager.py"
      to: "src/memory/personality/pattern_extractor.py"
      via: "pattern data for layer creation"
      pattern: "pattern_extractor\\.extract_patterns"
    - from: "src/personality.py"
      to: "src/memory/personality/layer_manager.py"
      via: "personality overlay application"
      pattern: "layer_manager\\.get_active_layers"
---

<objective>
Implement a personality learning system that extracts patterns from conversations and creates adaptive personality layers. This enables Mai to learn and adapt communication patterns while maintaining core personality values.

Purpose: Enable Mai to learn from user interactions and adapt personality while preserving core values
Output: Working personality learning system with pattern extraction, layer management, and dynamic adaptation
</objective>
<execution_context>
@~/.opencode/get-shit-done/workflows/execute-plan.md
@~/.opencode/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/phases/04-memory-context-management/04-CONTEXT.md
@.planning/phases/04-memory-context-management/04-RESEARCH.md
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md

# Reference existing personality system
@src/personality.py
@src/resource/personality.py

# Reference memory components
@.planning/phases/04-memory-context-management/04-01-SUMMARY.md
@.planning/phases/04-memory-context-management/04-02-SUMMARY.md
@.planning/phases/04-memory-context-management/04-03-SUMMARY.md
</context>
<tasks>
|
||||||
|
|
||||||
|
<task type="auto">

<name>Task 1: Create pattern extraction system</name>

<files>src/memory/personality/__init__.py, src/memory/personality/pattern_extractor.py</files>

<action>

Create src/memory/personality/pattern_extractor.py with a PatternExtractor class:

1. Implement multi-dimensional pattern extraction following the research:
- Topics: Track frequently discussed subjects and user interests
- Sentiment: Analyze emotional tone and sentiment patterns
- Interaction patterns: Response times, question asking, information sharing
- Time-based preferences: Communication style by time of day/week
- Response styles: Formality level, verbosity, use of emojis/humor

2. Pattern extraction methods:
- extract_topic_patterns(conversations: List[Conversation]) -> TopicPatterns
- extract_sentiment_patterns(conversations: List[Conversation]) -> SentimentPatterns
- extract_interaction_patterns(conversations: List[Conversation]) -> InteractionPatterns
- extract_temporal_patterns(conversations: List[Conversation]) -> TemporalPatterns
- extract_response_style_patterns(conversations: List[Conversation]) -> ResponseStylePatterns

3. Analysis techniques:
- Simple frequency analysis for topics
- Basic sentiment analysis using keyword lists or simple models
- Statistical analysis for interaction patterns
- Time series analysis for temporal patterns
- Linguistic analysis for response styles

4. Pattern validation:
- Confidence scoring for extracted patterns
- Pattern stability tracking over time
- Outlier detection for unusual patterns

Follow existing error handling patterns. Keep analysis lightweight to avoid heavy computational overhead.
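The frequency-analysis-plus-confidence-scoring flow above could be sketched roughly as follows. This is a minimal illustration only, not the planned implementation: the `TopicPatterns` fields, the `min_samples` threshold, and the use of raw message strings instead of the Conversation model are all assumptions made for the example.

```python
from collections import Counter
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TopicPatterns:
    top_topics: List[Tuple[str, int]]  # (topic, frequency) pairs
    confidence: float                  # 0.0-1.0, grows with sample size

class PatternExtractor:
    """Lightweight frequency analysis; heavier NLP is deliberately avoided."""

    def __init__(self, min_samples: int = 20):
        self.min_samples = min_samples  # assumed tuning knob for confidence

    def extract_topic_patterns(self, messages: List[str]) -> TopicPatterns:
        # Naive tokenization: lowercase whitespace tokens longer than 4 chars
        # stand in for topics; a real version would use a stopword list.
        words = [w.lower() for m in messages for w in m.split() if len(w) > 4]
        counts = Counter(words)
        # Confidence scoring: scale with sample size, capped at 1.0
        confidence = min(1.0, len(words) / self.min_samples)
        return TopicPatterns(top_topics=counts.most_common(5),
                             confidence=confidence)
```

The same shape — analyze, then attach a confidence score — would repeat for the sentiment, interaction, temporal, and style extractors.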
</action>

<verify>python -c "from src.memory.personality.pattern_extractor import PatternExtractor; pe = PatternExtractor(); print('Pattern extractor created successfully')"</verify>

<done>Pattern extractor can analyze conversations and extract patterns</done>

</task>
||||||
|
<task type="auto">

<name>Task 2: Implement personality layer management and adaptation</name>

<files>src/memory/personality/layer_manager.py, src/memory/personality/adaptation.py, src/memory/__init__.py, src/personality.py</files>

<action>

Create the personality management system:

1. Create src/memory/personality/layer_manager.py with LayerManager:
- PersonalityLayer dataclass with weights and application rules
- Layer creation from extracted patterns
- Layer conflict resolution (when patterns contradict)
- Layer activation based on conversation context
- Methods: create_layer_from_patterns(), get_active_layers(), apply_layers()
2. Create src/memory/personality/adaptation.py with PersonalityAdaptation:
- Time-weighted learning (recent patterns carry more weight than older ones)
- Gradual adaptation with stability controls
- Feedback integration for user preferences
- Adaptation rate limiting to prevent rapid changes
- Methods: update_personality_layer(), calculate_adaptation_rate(), apply_stability_controls()
3. Update src/memory/__init__.py to integrate personality learning:
- Add PersonalityLearner to MemoryManager
- Implement learning triggers (after conversations, periodically)
- Add personality data persistence
- Provide learning controls and configuration

4. Update src/personality.py to integrate with memory:
- Import and use PersonalityLearner from the memory system
- Apply personality layers during conversation responses
- Maintain separation between core personality and learned layers
- Add configuration to enable/disable learning

5. Personality layer application:
- Hybrid system prompt + behavior configuration
- Context-aware layer activation
- Core value enforcement (learned layers cannot override core values)
- Layer priority and conflict resolution

Follow existing patterns from src/resource/personality.py for personality management. Ensure core personality values remain protected from learned modifications.
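The adaptation mechanics described above — time-weighted updates, rate limiting, and core-value protection — could look roughly like this. It is a hedged sketch, not the planned implementation: the `PersonalityLayer` fields, `half_life_days`, and `max_step` are assumed names and values chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class PersonalityLayer:
    name: str
    weight: float       # 0.0-1.0 influence on responses
    core: bool = False  # core layers are never modified by learning

class PersonalityAdaptation:
    """Exponential time decay plus a per-update cap keeps changes gradual."""

    def __init__(self, half_life_days: float = 30.0, max_step: float = 0.05):
        self.half_life_days = half_life_days
        self.max_step = max_step  # rate limit: max weight change per update

    def decay_factor(self, age_days: float) -> float:
        # Older evidence contributes less to each update
        return 0.5 ** (age_days / self.half_life_days)

    def update_layer(self, layer: PersonalityLayer, target: float,
                     age_days: float) -> PersonalityLayer:
        if layer.core:
            return layer  # core values are protected from learned changes
        step = (target - layer.weight) * self.decay_factor(age_days)
        step = max(-self.max_step, min(self.max_step, step))  # rate limiting
        layer.weight = min(1.0, max(0.0, layer.weight + step))
        return layer
```

The cap on `step` is what keeps learning from overfitting to any single burst of recent conversations, while the decay factor discounts stale evidence.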
</action>

<verify>python -c "from src.memory.personality.layer_manager import LayerManager; lm = LayerManager(); print('Layer manager created successfully')"</verify>

<done>Personality system can learn patterns and apply adaptive layers</done>

</task>

</tasks>
<verification>

After completion, verify:

1. Pattern extractor analyzes conversations across multiple dimensions
2. Layer manager creates personality overlays from patterns
3. Adaptation system prevents overfitting and maintains stability
4. Personality learning integrates with existing personality.py
5. Core personality values are protected from learned modifications
6. Learning system can be enabled/disabled through configuration

</verification>
<success_criteria>

- Pattern extraction covers topic, sentiment, interaction, temporal, and style patterns
- Personality layers work as adaptive overlays that enhance the core personality
- Time-weighted learning prevents overfitting to recent conversations
- Stability controls maintain personality consistency
- Integration with the existing personality system preserves core values
- The learning system is configurable and can be controlled by the user

</success_criteria>
<output>

After completion, create `.planning/phases/04-memory-context-management/04-04-SUMMARY.md`

</output>