---
phase: 04-memory-context-management
plan: 02
type: execute
wave: 2
depends_on: ["04-01"]
files_modified: ["src/memory/retrieval/__init__.py", "src/memory/retrieval/semantic_search.py", "src/memory/retrieval/context_aware.py", "src/memory/retrieval/timeline_search.py", "src/memory/__init__.py"]
autonomous: true

must_haves:
  truths:
    - "User can search conversations by semantic meaning"
    - "Search results are ranked by relevance to query"
    - "Context-aware search prioritizes current topic discussions"
    - "Timeline search allows filtering by date ranges"
    - "Hybrid search combines semantic and keyword matching"
  artifacts:
    - path: "src/memory/retrieval/semantic_search.py"
      provides: "Semantic search with embedding-based similarity"
      min_lines: 70
    - path: "src/memory/retrieval/context_aware.py"
      provides: "Topic-based search prioritization"
      min_lines: 50
    - path: "src/memory/retrieval/timeline_search.py"
      provides: "Date-range filtering and temporal search"
      min_lines: 40
    - path: "src/memory/__init__.py"
      provides: "Updated MemoryManager with search capabilities"
      exports: ["MemoryManager", "SemanticSearch"]
  key_links:
    - from: "src/memory/retrieval/semantic_search.py"
      to: "src/memory/storage/vector_store.py"
      via: "vector similarity search operations"
      pattern: "vector_store\\.search_similar"
    - from: "src/memory/retrieval/context_aware.py"
      to: "src/memory/storage/sqlite_manager.py"
      via: "conversation metadata for topic analysis"
      pattern: "sqlite_manager\\.get_conversation_metadata"
    - from: "src/memory/__init__.py"
      to: "src/memory/retrieval/"
      via: "search method delegation"
      pattern: "semantic_search\\.find"
---

<objective>
Implement the memory retrieval system with semantic search, context-aware prioritization, and timeline filtering. This enables intelligent recall of past conversations using multiple search strategies.

Purpose: Allow users and the system to find relevant conversations quickly using semantic meaning, context awareness, and temporal filters
Output: Working search system that can retrieve conversations by meaning, topic, and time range
</objective>

<execution_context>
@~/.opencode/get-shit-done/workflows/execute-plan.md
@~/.opencode/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/phases/04-memory-context-management/04-CONTEXT.md
@.planning/phases/04-memory-context-management/04-RESEARCH.md
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md

# Reference storage foundation
@.planning/phases/04-memory-context-management/04-01-SUMMARY.md

# Reference existing conversation handling
@src/models/conversation.py
@src/models/context_manager.py
</context>

<tasks>

<task type="auto">
<name>Task 1: Create semantic search with embedding-based retrieval</name>
<files>src/memory/retrieval/__init__.py, src/memory/retrieval/semantic_search.py</files>
<action>
Create src/memory/retrieval/semantic_search.py with a SemanticSearch class:

1. Add sentence-transformers to requirements.txt (use all-MiniLM-L6-v2 for efficiency)
2. Implement SemanticSearch with:
   - Embedding model loading (lazy loading for performance; see the sketch after this list)
   - Query embedding generation
   - Vector similarity search using the VectorStore from plan 04-01
   - Hybrid search combining semantic and keyword matching
   - Result ranking and relevance scoring
   - Conversation snippet generation for context

Follow the research pattern for hybrid search (sketched after the method list below):
- Generate the query embedding
- Search the vector store for similar conversations
- Fall back to keyword search if there are no semantic results
- Combine and rank results with weighted scoring

Include methods to:
- search(query: str, limit: int = 5) -> List[SearchResult]
- search_by_embedding(embedding: np.ndarray, limit: int = 5) -> List[SearchResult]
- keyword_search(query: str, limit: int = 5) -> List[SearchResult]

Use existing error handling patterns and type hints from the src/models/ modules.
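
For orientation, a hybrid-search sketch under these assumptions: the VectorStore from plan 04-01 exposes search_similar() (per key_links), the SQLite manager's text lookup is named search_text purely for illustration, the 0.7/0.3 weights and SearchResult fields are placeholders, and dependencies are injected only to keep the sketch self-contained (the verify command below constructs SemanticSearch from a single storage path, so the real constructor should wire up its stores itself):

```python
# Hybrid search sketch: semantic first, keyword fallback, weighted re-ranking.
from __future__ import annotations

from dataclasses import dataclass, replace
from typing import List

import numpy as np


@dataclass
class SearchResult:
    """Placeholder shape; the real dataclass is defined in Task 2."""

    conversation_id: str
    snippet: str
    score: float


class SemanticSearch:
    # Placeholder weights for combining semantic and keyword scores; tune later.
    SEMANTIC_WEIGHT = 0.7
    KEYWORD_WEIGHT = 0.3

    def __init__(self, vector_store, sqlite_manager, embedder) -> None:
        self._vector_store = vector_store    # VectorStore from plan 04-01
        self._sqlite_manager = sqlite_manager
        self._embedder = embedder            # anything exposing encode(text) -> np.ndarray

    def search(self, query: str, limit: int = 5) -> List[SearchResult]:
        """Hybrid search following the research pattern."""
        semantic = self.search_by_embedding(self._embedder.encode(query), limit)
        # Fall back to keyword matching when the vector store returns nothing.
        keyword = [] if semantic else self.keyword_search(query, limit)
        scored = [replace(r, score=r.score * self.SEMANTIC_WEIGHT) for r in semantic]
        scored += [replace(r, score=r.score * self.KEYWORD_WEIGHT) for r in keyword]
        return sorted(scored, key=lambda r: r.score, reverse=True)[:limit]

    def search_by_embedding(self, embedding: np.ndarray, limit: int = 5) -> List[SearchResult]:
        # search_similar matches the key_links pattern; the hit field names are assumed.
        hits = self._vector_store.search_similar(embedding, limit=limit)
        return [SearchResult(h["conversation_id"], h["snippet"], h["score"]) for h in hits]

    def keyword_search(self, query: str, limit: int = 5) -> List[SearchResult]:
        # Illustrative call only: substitute the real SQLite manager lookup here.
        rows = self._sqlite_manager.search_text(query, limit=limit)
        return [SearchResult(r["conversation_id"], r["snippet"], r["score"]) for r in rows]
```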
</action>
<verify>python -c "from src.memory.retrieval.semantic_search import SemanticSearch; search = SemanticSearch(':memory:'); print('Semantic search created successfully')"</verify>
<done>Semantic search can generate embeddings and perform basic search operations</done>
</task>

<task type="auto">
<name>Task 2: Implement context-aware and timeline search capabilities</name>
<files>src/memory/retrieval/context_aware.py, src/memory/retrieval/timeline_search.py, src/memory/__init__.py</files>
<action>
Create the context-aware and timeline search components:

1. Create src/memory/retrieval/context_aware.py with ContextAwareSearch:
   - Topic extraction from the current conversation context
   - Conversation topic classification using simple heuristics
   - Topic-based result prioritization
   - Current conversation context tracking
   - Methods: prioritize_by_topic(results: List[SearchResult], current_topic: str) -> List[SearchResult]

2. Create src/memory/retrieval/timeline_search.py with TimelineSearch:
   - Date-range filtering for conversations
   - Temporal proximity search (find conversations near specific dates)
   - Recency-based result weighting
   - Conversation age calculation and compression-level awareness
   - Methods: search_by_date_range(start: datetime, end: datetime, limit: int = 5) -> List[SearchResult]

3. Update src/memory/__init__.py to integrate search capabilities:
   - Import all search classes
   - Add search methods to MemoryManager
   - Provide a unified search interface combining semantic, context-aware, and timeline search
   - Add search result dataclasses with relevance scores and conversation snippets

Follow existing patterns from src/models/ for data structures and error handling, and ensure search results include conversation metadata for context. A combined sketch of these three pieces follows below.
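
For orientation only, a combined sketch under these assumptions: the SQLite manager exposes get_conversation_metadata() (per key_links) returning rows with conversation_id, snippet, topic, and created_at keys (the keys are illustrative), the 1.5 topic boost is a placeholder, and dependencies are injected purely to keep the sketch self-contained (the real MemoryManager should accept a storage path, as the verify command below shows):

```python
# Combined sketch of the Task 2 pieces; row fields and manager method names are assumed.
from __future__ import annotations

from dataclasses import dataclass, replace
from datetime import datetime
from typing import List, Optional


@dataclass
class SearchResult:
    """Mirrors the result dataclass to be added in src/memory/__init__.py."""

    conversation_id: str
    snippet: str
    score: float
    topic: str = ""
    created_at: Optional[datetime] = None


class ContextAwareSearch:
    """Boosts results whose topic matches the current conversation topic."""

    TOPIC_BOOST = 1.5  # placeholder multiplier; tune against real conversations

    def prioritize_by_topic(
        self, results: List[SearchResult], current_topic: str
    ) -> List[SearchResult]:
        boosted = [
            replace(r, score=r.score * self.TOPIC_BOOST)
            if r.topic and r.topic.lower() == current_topic.lower()
            else r
            for r in results
        ]
        return sorted(boosted, key=lambda r: r.score, reverse=True)


class TimelineSearch:
    """Date-range filtering with recency-based ordering."""

    def __init__(self, sqlite_manager) -> None:
        self._sqlite_manager = sqlite_manager

    def search_by_date_range(
        self, start: datetime, end: datetime, limit: int = 5
    ) -> List[SearchResult]:
        # get_conversation_metadata matches the key_links pattern; row keys are assumed.
        rows = self._sqlite_manager.get_conversation_metadata()
        in_range = [
            SearchResult(
                conversation_id=row["conversation_id"],
                snippet=row.get("snippet", ""),
                score=1.0,
                topic=row.get("topic", ""),
                created_at=row["created_at"],
            )
            for row in rows
            if start <= row["created_at"] <= end
        ]
        # Newest conversations first, then truncate to the requested limit.
        in_range.sort(key=lambda r: r.created_at, reverse=True)
        return in_range[:limit]


class MemoryManager:
    """Unified facade delegating to the retrieval classes above."""

    def __init__(self, semantic_search, context_aware, timeline_search) -> None:
        self._semantic_search = semantic_search
        self._context_aware = context_aware
        self._timeline_search = timeline_search

    def search(self, query: str, current_topic: str = "", limit: int = 5) -> List[SearchResult]:
        # NOTE: the key_links pattern expects "semantic_search.find"; keep a find()
        # alias on SemanticSearch or adjust the pattern so the link check passes.
        results = self._semantic_search.search(query, limit=limit)
        if current_topic:
            results = self._context_aware.prioritize_by_topic(results, current_topic)
        return results[:limit]
```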
</action>
<verify>python -c "from src.memory import MemoryManager; mm = MemoryManager(':memory:'); print('Memory manager with search created successfully')"</verify>
<done>Memory manager provides unified search interface with all search modes</done>
</task>

</tasks>

<verification>
After completion, verify:
1. Semantic search can find conversations by meaning
2. Context-aware search prioritizes relevant topics
3. Timeline search filters by date ranges correctly
4. Hybrid search combines semantic and keyword results
5. Search results include proper relevance scoring and conversation snippets
6. Integration with the storage layer works correctly
</verification>

<success_criteria>
- Semantic search uses sentence-transformers for embedding generation
- Context-aware search prioritizes topics relevant to the current discussion
- Timeline search enables date-range filtering and temporal search
- Hybrid search combines multiple search strategies with proper ranking
- Memory manager provides a unified search interface
- Search results include conversation context and relevance scoring
</success_criteria>

<output>
After completion, create `.planning/phases/04-memory-context-management/04-02-SUMMARY.md`
</output>
</output> |