Phase 04: Memory & Context Management - 4 plan(s) in 3 wave(s) - 2 parallel, 2 sequential - Ready for execution
| phase | plan | type | wave | depends_on | files_modified | autonomous | must_haves |
|---|---|---|---|---|---|---|---|
| 04-memory-context-management | 03 | execute | 2 |  |  | true |  |
Purpose: Automatically compress and archive old conversations to maintain performance and storage efficiency.
Output: Working compression engine with JSON archival and smart retention policies.
<execution_context>
@/.opencode/get-shit-done/workflows/execute-plan.md
@/.opencode/get-shit-done/templates/summary.md
</execution_context>
Reference storage foundation
@.planning/phases/04-memory-context-management/04-01-SUMMARY.md
Reference compression research patterns
@.planning/phases/04-memory-context-management/04-RESEARCH.md
Task 1: Implement progressive compression engine
Files: src/memory/storage/compression.py
Create src/memory/storage/compression.py with a CompressionEngine class:
- Implement progressive compression following the research pattern (see the sketch after this task):
  - 7 days: Full content (no compression)
  - 30 days: Key points extraction (70% retention)
  - 90 days: Brief summary (40% retention)
  - 365+ days: Metadata only
- Add transformers to requirements.txt for summarization
- Implement compression methods:
  - extract_key_points(conversation: Conversation) -> str
  - generate_summary(conversation: Conversation, target_ratio: float = 0.4) -> str
  - extract_metadata_only(conversation: Conversation) -> dict
- Use a hybrid extractive-abstractive approach:
  - Extract key sentences using NLTK or simple heuristics
  - Generate an abstractive summary using the transformers pipeline
  - Preserve important quotes, facts, and decision points
- Include compression quality metrics:
  - Information retention scoring
  - Compression ratio calculation
  - Quality validation checks
- Add methods:
  - compress_by_age(conversation: Conversation) -> CompressedConversation
  - get_compression_level(age_days: int) -> CompressionLevel
  - decompress(compressed: CompressedConversation) -> ConversationSummary

Follow existing error handling patterns from the src/models/ modules.
Verify: python -c "from src.memory.storage.compression import CompressionEngine; ce = CompressionEngine(); print('Compression engine created successfully')"
Done when: the compression engine can compress conversations at all four levels.
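The age-based dispatch is the core of the engine. Below is a minimal sketch, not the final implementation: the Conversation, CompressedConversation, and CompressionLevel shapes are assumed stand-ins for whatever Plan 04-01's storage layer defined, the extractive step is a plain length heuristic, and the abstractive transformers step is left as a stub.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class CompressionLevel(Enum):
    FULL = "full"              # <= 7 days: no compression
    KEY_POINTS = "key_points"  # 8-30 days: ~70% retention
    SUMMARY = "summary"        # brief summary, ~40% retention
    METADATA = "metadata"      # 365+ days: metadata only


@dataclass
class Conversation:
    # Assumed shape; the real model comes from Plan 04-01's storage layer.
    # created_at is assumed timezone-aware.
    id: str
    created_at: datetime
    messages: list = field(default_factory=list)


@dataclass
class CompressedConversation:
    conversation_id: str
    level: CompressionLevel
    content: str


class CompressionEngine:
    def get_compression_level(self, age_days: int) -> CompressionLevel:
        if age_days <= 7:
            return CompressionLevel.FULL
        if age_days <= 30:
            return CompressionLevel.KEY_POINTS
        if age_days < 365:
            # The 90-day tier falls inside this band; conversations stay at
            # summary level until the 365-day metadata cutoff.
            return CompressionLevel.SUMMARY
        return CompressionLevel.METADATA

    def compress_by_age(self, conversation: Conversation) -> CompressedConversation:
        age_days = (datetime.now(timezone.utc) - conversation.created_at).days
        level = self.get_compression_level(age_days)
        if level is CompressionLevel.FULL:
            content = "\n".join(conversation.messages)
        elif level is CompressionLevel.KEY_POINTS:
            content = self.extract_key_points(conversation)
        elif level is CompressionLevel.SUMMARY:
            content = self.generate_summary(conversation)
        else:
            content = ""  # metadata-only: fields live on the record itself
        return CompressedConversation(conversation.id, level, content)

    def extract_key_points(self, conversation: Conversation) -> str:
        # Length heuristic standing in for the extractive step: keep the
        # longest ~70% of messages, preserving original order.
        ranked = sorted(conversation.messages, key=len, reverse=True)
        keep = set(ranked[: max(1, int(len(ranked) * 0.7))])
        return "\n".join(m for m in conversation.messages if m in keep)

    def generate_summary(self, conversation: Conversation, target_ratio: float = 0.4) -> str:
        # Stub for the abstractive step; the real version would run a
        # transformers summarization pipeline over the extracted points.
        text = self.extract_key_points(conversation)
        return text[: max(1, int(len(text) * target_ratio))]
```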
Task 2: Create JSON archival and smart retention systems
Files: src/memory/backup/__init__.py, src/memory/backup/archival.py, src/memory/backup/retention.py, src/memory/__init__.py
Create archival and retention components:
- Create src/memory/backup/archival.py with ArchivalManager (see the sketch after this task):
  - JSON export/import for compressed conversations
  - Archival directory structure organized by year/month
  - Batch archival operations
  - Import capabilities for restoring conversations
  - Methods: archive_conversations(), restore_conversation(), list_archived()
- Create src/memory/backup/retention.py with RetentionPolicy:
  - Value-based retention scoring
  - User-marked important conversations
  - High-engagement detection (length, back-and-forth)
  - Smart retention overrides compression rules
  - Methods: calculate_importance_score(), should_retain_full(), update_retention_policy()
- Update src/memory/__init__.py to integrate archival:
  - Add archival methods to MemoryManager
  - Implement automatic compression triggering
  - Add archival scheduling capabilities
  - Provide manual archival controls
- Include backup integration:
  - Integrate with existing system backup processes
  - Ensure archival data is included in regular backups
  - Provide restore verification and validation

Follow existing patterns for data management and error handling. Ensure the archival JSON structure is human-readable and versioned for future compatibility.
Verify: python -c "from src.memory import MemoryManager; mm = MemoryManager(':memory:'); print('Memory manager with archival created successfully')"
Done when: the memory manager can compress and archive conversations automatically.
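A minimal sketch of the year/month archive layout and the versioned, human-readable record. The flat JSON shape, the SCHEMA_VERSION field, and the per-conversation archive_conversation() helper are illustrative assumptions; the plan's actual API is the batch archive_conversations() named above.

```python
import json
from datetime import datetime
from pathlib import Path


class ArchivalManager:
    """Writes compressed conversations to human-readable JSON under an
    archive/<year>/<month>/ directory tree."""

    SCHEMA_VERSION = 1  # bumped on format changes, for future compatibility

    def __init__(self, root: str = "archive") -> None:
        self.root = Path(root)

    def _path_for(self, conversation_id: str, created_at: datetime) -> Path:
        return (self.root / f"{created_at.year:04d}"
                / f"{created_at.month:02d}" / f"{conversation_id}.json")

    def archive_conversation(self, conversation_id: str, created_at: datetime,
                             level: str, content: str) -> Path:
        path = self._path_for(conversation_id, created_at)
        path.parent.mkdir(parents=True, exist_ok=True)
        record = {
            "schema_version": self.SCHEMA_VERSION,
            "conversation_id": conversation_id,
            "created_at": created_at.isoformat(),
            "compression_level": level,
            "content": content,
        }
        # indent=2 keeps the archive diff-able and human-readable.
        path.write_text(json.dumps(record, indent=2), encoding="utf-8")
        return path

    def restore_conversation(self, conversation_id: str, created_at: datetime) -> dict:
        path = self._path_for(conversation_id, created_at)
        return json.loads(path.read_text(encoding="utf-8"))

    def list_archived(self) -> list:
        # <year>/<month>/<id>.json, sorted for stable iteration.
        return sorted(self.root.glob("*/*/*.json"))
```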
After completion, verify:
1. Compression engine works at all 4 levels (7/30/90/365+ days)
2. JSON archival stores compressed conversations correctly
3. Smart retention keeps important conversations from over-compression
4. Archival directory structure is organized and navigable
5. Integration with storage layer works for compression triggers
6. Restore functionality brings back conversations correctly

<success_criteria>
- Progressive compression reduces storage usage while preserving information
- JSON archival provides human-readable long-term storage
- Smart retention policies preserve important conversations
- Compression ratios meet research recommendations (70%/40%/metadata)
- Archival system integrates with existing backup processes
- Memory manager provides unified interface for compression and archival
</success_criteria>
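For the smart-retention criterion, a hedged sketch of value-based scoring. The signals (user_marked_important, message_count, back_and_forth_turns), the 50/20 normalization caps, and the 0.7 threshold are illustrative assumptions, not values from 04-RESEARCH.md.

```python
from dataclasses import dataclass


@dataclass
class ConversationStats:
    # Illustrative signals; real ones would come from the stored Conversation.
    user_marked_important: bool
    message_count: int
    back_and_forth_turns: int  # speaker alternations, a proxy for engagement


class RetentionPolicy:
    """Value-based retention: high-scoring conversations keep full content
    even when the age-based compression rules would summarize them."""

    RETAIN_THRESHOLD = 0.7  # assumed cutoff, to be tuned

    def calculate_importance_score(self, stats: ConversationStats) -> float:
        if stats.user_marked_important:
            return 1.0  # explicit user intent always wins
        # Normalize engagement signals into [0, 1] with soft caps.
        length_score = min(stats.message_count / 50, 1.0)
        engagement_score = min(stats.back_and_forth_turns / 20, 1.0)
        return 0.5 * length_score + 0.5 * engagement_score

    def should_retain_full(self, stats: ConversationStats) -> bool:
        # Smart retention overrides the age-based compression level.
        return self.calculate_importance_score(stats) >= self.RETAIN_THRESHOLD
```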