Architecture:
- Custom GPT-style decoder-only transformer (500M params)
- 768 hidden size, 12 layers, 12 attention heads (summarized in the configuration sketch below)
- 32k vocabulary with BPE tokenizer
- Built-in emotion classification head
- 2048 token context window

Components:
- Multi-head self-attention mechanism
- Feed-forward networks with GELU
- Layer normalization and residual connections
- Custom tokenizer with special tokens for emotions/actions
- Generation with temperature, top-k, and nucleus sampling (see the sampling sketch below)

Training Infrastructure:
- Full training script with data loading
- Gradient clipping and mixed precision support (see the training-step sketch after the requirements list)
- Checkpoint management
- Training guide with 3-phase approach:
  * Phase 1: Base language (10-50B tokens, 3-7 days)
  * Phase 2: Personality fine-tuning (100k-500k examples, 1-2 days)
  * Phase 3: Emotion training (50k-100k examples, 6-12 hours)

Integration:
- Inference engine for real-time generation
- Emotion detection from responses
- Conversation history management
- Ready for desktop app and Discord bot integration

No external model dependencies - 100% custom and unbiased

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
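For reference, the architecture bullets above translate into a small set of hyperparameters. The following is a minimal configuration sketch with illustrative class and field names, not the repository's actual code; the emotion-class count in particular is a placeholder, since it is not specified above.

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    # Sizes taken from the architecture summary; names are illustrative.
    vocab_size: int = 32_000    # BPE vocabulary
    hidden_size: int = 768      # width of embeddings / residual stream
    num_layers: int = 12        # decoder blocks
    num_heads: int = 12         # attention heads per block (head dim = 64)
    max_seq_len: int = 2048     # context window
    num_emotions: int = 8       # emotion head classes (placeholder count)

config = ModelConfig()
```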
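Generation combines temperature scaling with top-k and nucleus (top-p) filtering. The sketch below shows one common way those three controls compose over a single logits vector; the function name and default values are illustrative, not the project's actual API.

```python
import torch
import torch.nn.functional as F

def sample_next_token(logits: torch.Tensor,
                      temperature: float = 0.8,
                      top_k: int = 50,
                      top_p: float = 0.9) -> int:
    """Pick the next token id from a 1-D [vocab_size] logits vector."""
    # Temperature: <1.0 sharpens the distribution, >1.0 flattens it.
    logits = logits / max(temperature, 1e-5)

    # Top-k: keep only the k highest-scoring tokens.
    if top_k > 0:
        kth_value = torch.topk(logits, top_k).values[-1]
        logits = logits.masked_fill(logits < kth_value, float("-inf"))

    # Nucleus (top-p): keep the smallest set of tokens whose cumulative
    # probability exceeds top_p, always retaining the single best token.
    if top_p < 1.0:
        sorted_logits, sorted_idx = torch.sort(logits, descending=True)
        cum_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
        cutoff = cum_probs > top_p
        cutoff[1:] = cutoff[:-1].clone()
        cutoff[0] = False
        logits[sorted_idx[cutoff]] = float("-inf")

    probs = F.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()
```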
# Additional requirements for model training
# Install with: pip install -r requirements-training.txt

# Deep Learning
torch>=2.0.0
torchvision>=0.15.0
torchaudio>=2.0.0

# Training utilities
wandb>=0.15.0          # Experiment tracking
tensorboard>=2.13.0    # TensorBoard logging
tqdm>=4.65.0           # Progress bars

# Data processing
datasets>=2.13.0       # HuggingFace datasets
transformers>=4.30.0   # For comparison/reference only
sentencepiece>=0.1.99  # Alternative tokenizer
tokenizers>=0.13.3     # Fast tokenizers

# Optimization
apex                   # NVIDIA Apex for mixed precision (optional, requires CUDA)
accelerate>=0.20.0     # Multi-GPU training

# Data collection
requests>=2.31.0
beautifulsoup4>=4.12.0
lxml>=4.9.0
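The optimization entries above back the gradient clipping and mixed precision mentioned in the training infrastructure. As a reference point, here is a minimal training-step sketch using PyTorch's built-in AMP (an alternative to apex); the model, optimizer, and tensor names are assumed for illustration and are not taken from the repository.

```python
import torch

# Assumes `model`, `optimizer`, and (input_ids, labels) batches already exist.
scaler = torch.cuda.amp.GradScaler()

def train_step(model, optimizer, input_ids, labels, max_grad_norm=1.0):
    optimizer.zero_grad(set_to_none=True)

    # Mixed precision: run the forward pass in float16 where it is safe.
    with torch.cuda.amp.autocast():
        logits = model(input_ids)
        loss = torch.nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), labels.view(-1))

    # Scale the loss to avoid float16 gradient underflow, then unscale
    # before clipping so the clip threshold applies to true gradient values.
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```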