Phase 04: Memory & Context Management - 4 plan(s) in 3 wave(s) - 2 parallel, 2 sequential - Ready for execution
| phase | plan | type | wave | depends_on | files_modified | autonomous | must_haves |
|---|---|---|---|---|---|---|---|
| 04-memory-context-management | 02 | execute | 2 | | | true | |
Purpose: Allow users and the system to find relevant conversations quickly using semantic meaning, context awareness, and temporal filters.
Output: A working search system that retrieves conversations by meaning, topic, and time range.
<execution_context>
@/.opencode/get-shit-done/workflows/execute-plan.md
@/.opencode/get-shit-done/templates/summary.md
</execution_context>
Reference storage foundation
@.planning/phases/04-memory-context-management/04-01-SUMMARY.md
Reference existing conversation handling
@src/models/conversation.py @src/models/context_manager.py
Task 1: Create semantic search with embedding-based retrieval
Files: src/memory/retrieval/__init__.py, src/memory/retrieval/semantic_search.py
Create src/memory/retrieval/semantic_search.py with a SemanticSearch class:
- Add sentence-transformers to requirements.txt (use all-MiniLM-L6-v2 for efficiency)
- Implement SemanticSearch with:
- Embedding model loading (lazy loading for performance)
- Query embedding generation
- Vector similarity search using VectorStore from plan 04-01
- Hybrid search combining semantic and keyword matching
- Result ranking and relevance scoring
- Conversation snippet generation for context
Follow research pattern for hybrid search:
- Generate query embedding
- Search vector store for similar conversations
- Fallback to keyword search if no semantic results
- Combine and rank results with weighted scoring
Include methods to:
- search(query: str, limit: int = 5) -> List[SearchResult]
- search_by_embedding(embedding: np.ndarray, limit: int = 5) -> List[SearchResult]
- keyword_search(query: str, limit: int = 5) -> List[SearchResult]
Use existing error handling patterns and type hints from the src/models/ modules.
Verify: python -c "from src.memory.retrieval.semantic_search import SemanticSearch; search = SemanticSearch(':memory:'); print('Semantic search created successfully')"
Done when: semantic search can generate embeddings and perform basic search operations.
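The hybrid search pattern above (semantic score plus keyword overlap, combined with weighted ranking) can be sketched as follows. This is a minimal illustration, not the real implementation: it uses plain Python lists in place of model embeddings, a toy dict in place of the VectorStore from plan 04-01, and an assumed 0.7/0.3 semantic-to-keyword weighting.

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    conversation_id: str
    snippet: str
    score: float

def cosine_similarity(a, b):
    # Dot product over the product of magnitudes; 0.0 for zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def hybrid_search(query, query_embedding, store, semantic_weight=0.7, limit=5):
    """Combine semantic similarity with keyword overlap and rank by the
    weighted score. `store` maps conversation_id -> (embedding, text)."""
    query_terms = set(query.lower().split())
    results = []
    for conv_id, (embedding, text) in store.items():
        semantic = cosine_similarity(query_embedding, embedding)
        terms = set(text.lower().split())
        keyword = len(query_terms & terms) / len(query_terms) if query_terms else 0.0
        score = semantic_weight * semantic + (1 - semantic_weight) * keyword
        if score > 0:  # drop conversations with no semantic or keyword signal
            results.append(SearchResult(conv_id, text[:80], score))
    results.sort(key=lambda r: r.score, reverse=True)
    return results[:limit]
```

In the real class the query embedding would come from the sentence-transformers model and the candidate set from the vector store; the keyword path doubles as the fallback when no semantic results clear a relevance threshold.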
Task 2: Implement context-aware and timeline search capabilities
Files: src/memory/retrieval/context_aware.py, src/memory/retrieval/timeline_search.py, src/memory/__init__.py
Create context-aware and timeline search components:
Create src/memory/retrieval/context_aware.py with ContextAwareSearch:
- Topic extraction from current conversation context
- Conversation topic classification using simple heuristics
- Topic-based result prioritization
- Current conversation context tracking
- Methods: prioritize_by_topic(results: List[SearchResult], current_topic: str) -> List[SearchResult]
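The topic-based prioritization could look like this sketch. The `topic` field on SearchResult and the flat additive boost are illustrative assumptions; the actual heuristic and dataclass shape are decided during implementation.

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    conversation_id: str
    snippet: str
    score: float
    topic: str = ""  # assumed field; filled by the topic classifier

def prioritize_by_topic(results, current_topic, boost=0.25):
    """Return a new list with results matching the current conversation
    topic boosted by a fixed increment, re-sorted by score."""
    boosted = [
        SearchResult(
            r.conversation_id,
            r.snippet,
            r.score + boost if r.topic == current_topic else r.score,
            r.topic,
        )
        for r in results
    ]
    return sorted(boosted, key=lambda r: r.score, reverse=True)
```

Returning a new list keeps the original ranking intact for callers that want both views.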
Create src/memory/retrieval/timeline_search.py with TimelineSearch:
- Date range filtering for conversations
- Temporal proximity search (find conversations near specific dates)
- Recency-based result weighting
- Conversation age calculation and compression level awareness
- Methods: search_by_date_range(start: datetime, end: datetime, limit: int = 5) -> List[SearchResult]
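Date-range filtering with recency weighting might be sketched like this. Conversations are modeled as simple (id, timestamp) pairs, and the exponential half-life decay is an assumed weighting choice, not a prescribed one.

```python
from datetime import datetime, timedelta

def search_by_date_range(conversations, start, end, limit=5, half_life_days=30.0):
    """Keep conversations whose timestamp falls in [start, end] and rank
    them by a recency weight that halves every `half_life_days`."""
    now = datetime.now()
    hits = []
    for conv_id, timestamp in conversations:
        if start <= timestamp <= end:
            age_days = (now - timestamp).total_seconds() / 86400
            recency = 0.5 ** (age_days / half_life_days)  # exponential decay
            hits.append((conv_id, recency))
    hits.sort(key=lambda h: h[1], reverse=True)
    return hits[:limit]
```

The same age calculation can feed the compression-level awareness noted above, since older conversations will already sit at higher compression tiers.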
Update src/memory/__init__.py to integrate search capabilities:
- Import all search classes
- Add search methods to MemoryManager
- Provide unified search interface combining semantic, context-aware, and timeline search
- Add search result dataclasses with relevance scores and conversation snippets
Follow existing patterns from src/models/ for data structures and error handling. Ensure search results include conversation metadata for context.
Verify: python -c "from src.memory import MemoryManager; mm = MemoryManager(':memory:'); print('Memory manager with search created successfully')"
Done when: the memory manager provides a unified search interface with all search modes.
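One way the unified interface might dispatch across search modes is sketched below. The constructor signature, mode names, and backend objects are illustrative assumptions (the real MemoryManager takes a storage path, per the verification command), but the dispatch-and-chain shape is the point.

```python
class MemoryManager:
    """Unified search entry point. Backends stand in for the classes
    built in Tasks 1 and 2; any may be omitted."""

    def __init__(self, semantic=None, context_aware=None, timeline=None):
        self.semantic = semantic
        self.context_aware = context_aware
        self.timeline = timeline

    def search(self, query, mode="semantic", current_topic=None, **kwargs):
        # Semantic mode, optionally re-ranked by the current topic.
        if mode == "semantic":
            results = self.semantic.search(query, **kwargs)
            if current_topic and self.context_aware:
                results = self.context_aware.prioritize_by_topic(results, current_topic)
            return results
        # Timeline mode delegates date-range arguments straight through.
        if mode == "timeline":
            return self.timeline.search_by_date_range(**kwargs)
        raise ValueError(f"unknown search mode: {mode}")
```

Chaining context-aware re-ranking after semantic retrieval keeps each backend single-purpose while still giving callers one entry point.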
After completion, verify:
1. Semantic search can find conversations by meaning
2. Context-aware search prioritizes relevant topics
3. Timeline search filters by date ranges correctly
4. Hybrid search combines semantic and keyword results
5. Search results include proper relevance scoring and conversation snippets
6. Integration with the storage layer works correctly

<success_criteria>
- Semantic search uses sentence-transformers for embedding generation
- Context-aware search prioritizes topics relevant to current discussion
- Timeline search enables date-range filtering and temporal search
- Hybrid search combines multiple search strategies with proper ranking
- Memory manager provides unified search interface
- Search results include conversation context and relevance scoring
</success_criteria>