---
phase: 04-memory-context-management
plan: 03
type: execute
wave: 2
depends_on: ["04-01"]
files_modified: ["src/memory/backup/__init__.py", "src/memory/backup/archival.py", "src/memory/backup/retention.py", "src/memory/storage/compression.py", "src/memory/__init__.py"]
autonomous: true

must_haves:
  truths:
    - "Old conversations are automatically compressed to save space"
    - "Compression preserves important information while reducing size"
    - "JSON archival system stores compressed conversations"
    - "Smart retention keeps important conversations longer"
    - "7/30/90 day compression tiers are implemented"
  artifacts:
    - path: "src/memory/storage/compression.py"
      provides: "Progressive conversation compression"
      min_lines: 80
    - path: "src/memory/backup/archival.py"
      provides: "JSON export/import for long-term storage"
      min_lines: 60
    - path: "src/memory/backup/retention.py"
      provides: "Smart retention policies based on conversation importance"
      min_lines: 50
    - path: "src/memory/__init__.py"
      provides: "MemoryManager with archival capabilities"
      exports: ["MemoryManager", "CompressionEngine"]
  key_links:
    - from: "src/memory/storage/compression.py"
      to: "src/memory/storage/sqlite_manager.py"
      via: "conversation data retrieval for compression"
      pattern: "sqlite_manager\\.get_conversation"
    - from: "src/memory/backup/archival.py"
      to: "src/memory/storage/compression.py"
      via: "compressed conversation data"
      pattern: "compression_engine\\.compress"
    - from: "src/memory/backup/retention.py"
      to: "src/memory/storage/sqlite_manager.py"
      via: "conversation importance analysis"
      pattern: "sqlite_manager\\.update_importance_score"
---

<objective>
Implement a progressive compression and archival system to manage memory growth efficiently. This ensures the memory system can scale without unbounded storage growth while preserving important information.

Purpose: Automatically compress and archive old conversations to maintain performance and storage efficiency
Output: Working compression engine with JSON archival and smart retention policies
</objective>

<execution_context>
@~/.opencode/get-shit-done/workflows/execute-plan.md
@~/.opencode/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/phases/04-memory-context-management/04-CONTEXT.md
@.planning/phases/04-memory-context-management/04-RESEARCH.md
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md

# Reference storage foundation
@.planning/phases/04-memory-context-management/04-01-SUMMARY.md

# Reference compression research patterns
@.planning/phases/04-memory-context-management/04-RESEARCH.md
</context>

<tasks>

<task type="auto">
<name>Task 1: Implement progressive compression engine</name>
<files>src/memory/storage/compression.py</files>
<action>
Create src/memory/storage/compression.py with CompressionEngine class:

1. Implement progressive compression following the research pattern (see the sketch after this list):
   - 7 days: Full content (no compression)
   - 30 days: Key points extraction (70% retention)
   - 90 days: Brief summary (40% retention)
   - 365+ days: Metadata only
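
A minimal sketch of the tier mapping, assuming each marker means "this age and older" (matching the "365+ days" wording, so nothing is compressed before day 30 and the first week is always untouched). The `CompressionLevel` name comes from the methods in item 6; everything else here is illustrative, not the final implementation:

```python
from enum import Enum


class CompressionLevel(Enum):
    FULL = "full"              # conversation kept verbatim
    KEY_POINTS = "key_points"  # ~70% retention
    SUMMARY = "summary"        # ~40% retention
    METADATA = "metadata"      # metadata only


def get_compression_level(age_days: int) -> CompressionLevel:
    """Map conversation age to a compression tier."""
    if age_days < 30:
        return CompressionLevel.FULL
    if age_days < 90:
        return CompressionLevel.KEY_POINTS
    if age_days < 365:
        return CompressionLevel.SUMMARY
    return CompressionLevel.METADATA
```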

2. Add transformers to requirements.txt for summarization
3. Implement compression methods:
   - extract_key_points(conversation: Conversation) -> str
   - generate_summary(conversation: Conversation, target_ratio: float = 0.4) -> str
   - extract_metadata_only(conversation: Conversation) -> dict

4. Use a hybrid extractive-abstractive approach (sketched below):
   - Extract key sentences using NLTK or simple heuristics
   - Generate an abstractive summary using a transformers pipeline
   - Preserve important quotes, facts, and decision points
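
A rough sketch of the two passes, assuming plain strings for conversation text (the real methods take `Conversation` objects); the length-based extractive heuristic and the pipeline's default checkpoint are placeholders:

```python
from transformers import pipeline  # added to requirements.txt in step 2


def extract_key_points(text: str, keep_ratio: float = 0.7) -> str:
    """Extractive pass: keep the longest sentences as a cheap proxy for
    information density (a real version might score with NLTK/TF-IDF)."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    keep = max(1, int(len(sentences) * keep_ratio))
    ranked = set(sorted(sentences, key=len, reverse=True)[:keep])
    # Re-emit in original order so the result still reads chronologically.
    return ". ".join(s for s in sentences if s in ranked) + "."


def generate_summary(text: str, target_ratio: float = 0.4) -> str:
    """Abstractive pass via the transformers summarization pipeline."""
    summarizer = pipeline("summarization")  # default distilbart checkpoint
    target_tokens = max(32, int(len(text.split()) * target_ratio))
    result = summarizer(text, max_length=target_tokens,
                        min_length=target_tokens // 2, do_sample=False)
    return result[0]["summary_text"]
```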

5. Include compression quality metrics (a scoring sketch follows):
   - Information retention scoring
   - Compression ratio calculation
   - Quality validation checks
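
Two candidate metrics, again only a sketch; a production retention score would likely compare embeddings rather than raw word overlap:

```python
def compression_ratio(original: str, compressed: str) -> float:
    """Fraction of the original size removed (0.0 = no savings)."""
    return 1.0 - (len(compressed) / max(1, len(original)))


def retention_score(original: str, compressed: str) -> float:
    """Crude information-retention proxy: share of the original's unique
    words that survive compression."""
    orig_words = set(original.lower().split())
    kept_words = set(compressed.lower().split())
    return len(orig_words & kept_words) / max(1, len(orig_words))
```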

6. Add methods:
   - compress_by_age(conversation: Conversation) -> CompressedConversation
   - get_compression_level(age_days: int) -> CompressionLevel
   - decompress(compressed: CompressedConversation) -> ConversationSummary

Follow existing error handling patterns from src/models/ modules.
</action>
<verify>python -c "from src.memory.storage.compression import CompressionEngine; ce = CompressionEngine(); print('Compression engine created successfully')"</verify>
<done>Compression engine can compress conversations at different levels</done>
</task>

<task type="auto">
<name>Task 2: Create JSON archival and smart retention systems</name>
<files>src/memory/backup/__init__.py, src/memory/backup/archival.py, src/memory/backup/retention.py, src/memory/__init__.py</files>
<action>
Create archival and retention components:

1. Create src/memory/backup/archival.py with ArchivalManager (see the sketch after this list):
   - JSON export/import for compressed conversations
   - Archival directory structure organized by year/month
   - Batch archival operations
   - Import capabilities for restoring conversations
   - Methods: archive_conversations(), restore_conversation(), list_archived()
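
A sketch of the export path and the versioned, human-readable JSON document. Only the year/month layout and the versioning requirement come from this plan; `ARCHIVE_FORMAT_VERSION` and the function shape are hypothetical:

```python
import json
from datetime import datetime
from pathlib import Path

ARCHIVE_FORMAT_VERSION = 1  # hypothetical; lets future imports detect old layouts


def archive_conversation(root: Path, conversation_id: str,
                         payload: dict, archived_at: datetime) -> Path:
    """Write one compressed conversation under <root>/<year>/<month>/."""
    target_dir = root / f"{archived_at.year:04d}" / f"{archived_at.month:02d}"
    target_dir.mkdir(parents=True, exist_ok=True)
    document = {
        "version": ARCHIVE_FORMAT_VERSION,
        "conversation_id": conversation_id,
        "archived_at": archived_at.isoformat(),
        "data": payload,
    }
    path = target_dir / f"{conversation_id}.json"
    # indent=2 keeps the archive human-readable, per the requirement below
    path.write_text(json.dumps(document, indent=2, ensure_ascii=False))
    return path
```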

2. Create src/memory/backup/retention.py with RetentionPolicy (scoring sketch below):
   - Value-based retention scoring
   - User-marked important conversations
   - High-engagement detection (length, back-and-forth)
   - Smart retention overrides compression rules
   - Methods: calculate_importance_score(), should_retain_full(), update_retention_policy()
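
One possible scoring shape; the 50-message and 20,000-character normalizers, the blend weights, and the 0.8 threshold are made-up placeholders to be tuned against real conversations:

```python
def calculate_importance_score(message_count: int, total_chars: int,
                               user_marked: bool) -> float:
    """Blend the plan's signals into a 0-1 importance score."""
    engagement = min(1.0, message_count / 50)  # back-and-forth depth
    length = min(1.0, total_chars / 20_000)    # overall substance
    score = 0.5 * engagement + 0.3 * length
    if user_marked:
        score = max(score, 0.9)  # explicit user intent overrides heuristics
    return min(1.0, score)


def should_retain_full(score: float, threshold: float = 0.8) -> bool:
    """High-value conversations skip age-based compression entirely."""
    return score >= threshold
```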

3. Update src/memory/__init__.py to integrate archival (triggering sketch below):
   - Add archival methods to MemoryManager
   - Implement automatic compression triggering
   - Add archival scheduling capabilities
   - Provide manual archival controls
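
A sketch of the triggering loop with collaborators injected as callables so the logic stays testable; on the real MemoryManager these would be the retention, compression, and archival components built in the tasks above:

```python
from typing import Callable, Iterable


def run_archival_cycle(
    due_conversations: Iterable[dict],
    should_retain_full: Callable[[dict], bool],
    compress: Callable[[dict], dict],
    archive: Callable[[dict], None],
) -> int:
    """One automatic compression + archival pass; returns conversations touched."""
    touched = 0
    for conversation in due_conversations:
        if should_retain_full(conversation):
            continue  # smart retention overrides the age-based rules
        archive(compress(conversation))
        touched += 1
    return touched
```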

4. Include backup integration:
   - Integrate with existing system backup processes
   - Ensure archival data is included in regular backups
   - Provide restore verification and validation

Follow existing patterns for data management and error handling. Ensure the archival JSON structure is human-readable and versioned for future compatibility.
</action>
<verify>python -c "from src.memory import MemoryManager; mm = MemoryManager(':memory:'); print('Memory manager with archival created successfully')"</verify>
<done>Memory manager can compress and archive conversations automatically</done>
</task>

</tasks>

<verification>
After completion, verify:
1. Compression engine works at all 4 levels (7/30/90/365+ days)
2. JSON archival stores compressed conversations correctly
3. Smart retention keeps important conversations from being over-compressed
4. Archival directory structure is organized and navigable
5. Integration with the storage layer works for compression triggers
6. Restore functionality brings back conversations correctly
</verification>

<success_criteria>
- Progressive compression reduces storage usage while preserving information
- JSON archival provides human-readable long-term storage
- Smart retention policies preserve important conversations
- Compression ratios meet research recommendations (70%/40%/metadata-only)
- Archival system integrates with existing backup processes
- Memory manager provides a unified interface for compression and archival
</success_criteria>

<output>
After completion, create `.planning/phases/04-memory-context-management/04-03-SUMMARY.md`
|
|
</output> |