DeepRolePlay is a deep role-playing system built on LangGraph workflows that tackles the character-forgetting problem of traditional large language models through automated memory management. Its pipeline — Memory Flashback Processing + Scenario Update Management + Main Conversation Model — lets AI leave character forgetting behind and achieve truly coherent role-playing.
- 🤖 AI Suddenly Forgets Character Settings: A mage suddenly picks up a sword
- 📖 Inconsistent Plot: Important plots from yesterday are completely forgotten today
- 💸 Huge Token Consumption: Long conversation costs skyrocket, experience interrupted
- 📚 Insufficient LLM Background Knowledge: Lacks understanding of specific novels, movies, or custom worldviews
- 🧠 Never Forget: Automated memory management system, character settings permanently preserved
- 🔄 Plot Coherence: Intelligent scenario updates, logical clarity even after millions of conversation rounds
- 💰 Cost Control: Scenario compression technology, long conversation costs reduced by 80%
- 📚 Intelligent Internet Access: Integrated Wikipedia encyclopedia, automatic completion of character backgrounds and story settings
- 📄 External Knowledge Mounting: Support for txt document mounting, solving LLM's insufficient knowledge of specific works or custom worldviews
- 🗂️ Structured Management: JSON table system manages worldview, characters, items, etc., supports dynamic CRUD operations
- ⚡ Plug and Play: 5-minute integration, direct use with SillyTavern and other platforms
- 🚀 Ultra-Fast Response: Works with any OpenAI-style model; apart from the initial scenario construction, normal dialogue adds only about 10 seconds of latency
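The cost-control idea above — the proxy receives the full history but only forwards a bounded window — can be sketched as follows. This is an illustrative sketch, not the project's actual implementation; the function name and the `max_history_length` behavior shown here are assumptions.

```python
def trim_history(messages, max_history_length=20):
    """Keep the system prompt (if any) plus the most recent messages.

    Illustrative sketch of history-window compression; the real
    project's trimming logic may differ.
    """
    if not messages:
        return []
    system = [m for m in messages if m.get("role") == "system"]
    rest = [m for m in messages if m.get("role") != "system"]
    return system + rest[-max_history_length:]

# Build a long fake conversation, then trim it to a bounded window.
msgs = [{"role": "system", "content": "You are Gandalf."}]
msgs += [{"role": "user", "content": f"turn {i}"} for i in range(50)]
trimmed = trim_history(msgs, max_history_length=10)
```

Because the system prompt is preserved separately, character settings survive trimming even when old dialogue turns are dropped.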
- **📦 Extract the Software Package**
  - Download the released software package and extract it to a non-Chinese path
  - The extracted folder contains the `config.yaml` configuration file and the `DeepRolePlay.exe` main program
- **⚙️ Modify the Configuration File**
  - Edit the `config.yaml` file. The configuration file includes a detailed beginner's guide; you mainly need to modify the following settings:

  ```yaml
  # API Proxy Configuration - Forwarding Target (Your main chat LLM)
  proxy:
    target_url: "https://api.your-provider.com/v1"  # Change to your API address
    api_key: "Your-Main-LLM-API-key"                # Change to your API key

  # Agent Configuration - Background processing model
  agent:
    model: "deepseek-chat"                   # Any OpenAI-format model
    base_url: "https://api.deepseek.com/v1"  # API address
    api_key: "Your-Agent-API-Key"            # Change to your API key
    workflow_mode: "fast"                    # Workflow mode: fast = economical, drp = flexible but expensive
    external_knowledge_path: "knowledge/custom.txt"  # External knowledge document path (optional)
  ```
- **🚀 Start the Program**
  - Double-click `DeepRolePlay.exe` to start; you should see the project running normally in the terminal
  - 🌟 The program checks whether the port is occupied and, if so, automatically increments it by 1 — check the terminal output for the actual port
- **🔗 Configure the Role-Playing Frontend**
  - In platforms such as SillyTavern or OpenWebUI, change the `base_url` to: `http://localhost:6666/v1`
  - Important: Disable history-record limits — the full history must be sent to the proxy! (Don't worry about token explosion; `max_history_length` will control it)
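For frontends configured in code rather than a GUI, pointing an OpenAI-style client at the proxy amounts to building a standard chat-completions request against the local base URL. The model name and API key below are placeholders, and the port may differ if 6666 was already occupied:

```python
import json

# Placeholder values: the actual port must be read from the terminal
# output, and the key/model are forwarded to your configured target LLM.
BASE_URL = "http://localhost:6666/v1"

payload = {
    "model": "your-model-name",  # forwarded to the target LLM
    "messages": [                # send the FULL history; the proxy
        {"role": "user", "content": "Hello, Gandalf!"},
    ],                           # bounds the cost itself
}

request = {
    "url": f"{BASE_URL}/chat/completions",
    "headers": {"Authorization": "Bearer placeholder-key",
                "Content-Type": "application/json"},
    "body": json.dumps(payload),
}
print(request["url"])
```

Any HTTP client or OpenAI-compatible SDK can then send this request; the proxy intercepts it, runs the workflow, and forwards the enriched request to your configured provider.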
- **🎭 Start Role-Playing**
  - Enjoy a forgetting-free role-playing experience immediately!
  - Smart Scene Management: when you switch presets or character cards, the system automatically clears old scenarios — no manual operation needed
DeepRolePlay supports direct command input in the chat interface to manage system state and data, without entering special modes.
| Command | Function | Description |
|---|---|---|
| `$drp` or `$help` | Show help information | View all available commands and usage instructions |
| `$show` | Display data tables | View all current memory tables (worldview, characters, items, etc.) |
| `$rm` | Clear data | Reset all memory tables and scenario files |
| `$reset` | Smart reset | Intelligently analyze the conversation history and set the appropriate AI message index for preset adaptation |
- **View Help**: Send `$drp` or `$help` in any chat interface

  ```
  User: $drp
  System: 📚 DeepRolePlay Command Help
  Current version supports direct command input in conversation, no special mode required.
  🔧 Available Commands:
  • $help or $drp - Show this help information
  • $reset - Smart AI message index adaptation, automatically determine real role-playing responses
  • $rm - Clear all table data and scenario files
  • $show - Display all current table data
  ```

- **View Data Tables**: Send `$show` to view all currently stored character information

  ```
  User: $show
  System: Current Memory Tables:
  [Worldview Table] (2 rows)
  [Character Table] (1 rows)
  [Item Table] (0 rows)
  ...
  ```

- **Clear Character Data**: Send `$rm` to completely reset the role-playing state

  ```
  User: $rm
  System: Memory tables and scenarios directory have been reset successfully.
  ```

- **Smart Reset**: Send `$reset` for intelligent preset adaptation

  ```
  User: $reset
  System: ✅ last_ai_messages_index in memory has been successfully updated to: 2
  🔧 Adaptation complete! The system has intelligently determined and set the appropriate AI message index based on current conversation history.
  ```
DeepRolePlay integrates ComfyUI backend for automatic image generation during role-playing:
- 🖼️ Smart Image Generation: Automatically generates relevant images based on dialogue content and scene descriptions
- 🔧 Custom Workflows: Supports importing custom ComfyUI workflow JSON files
- ⚡ Asynchronous Processing: Image generation runs parallel to dialogue without affecting response speed
- 📱 Frontend Optimization: Automatically adjusts image sizes for optimal transmission efficiency
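The asynchronous design described above — image generation running alongside the dialogue turn — can be sketched with `asyncio`. The task bodies below are stand-ins, not the project's actual ComfyUI client:

```python
import asyncio

async def generate_reply(prompt: str) -> str:
    # Stand-in for forwarding the chat request to the target LLM.
    await asyncio.sleep(0.01)
    return f"reply to: {prompt}"

async def generate_image(scene: str) -> str:
    # Stand-in for submitting a workflow JSON to the ComfyUI server.
    await asyncio.sleep(0.05)
    return f"image for: {scene}"

async def handle_turn(prompt: str) -> list[str]:
    # Run both tasks concurrently so image generation
    # never delays the text response.
    return await asyncio.gather(generate_reply(prompt),
                                generate_image(prompt))

reply, image = asyncio.run(handle_turn("a mage raises her staff"))
print(reply)
```

Because the two coroutines are awaited together, total latency is roughly the slower of the two rather than their sum.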
Configuration example:

```yaml
comfyui:
  enabled: true                          # Enable image generation
  ip: "127.0.0.1"                        # ComfyUI server address
  port: 8188                             # ComfyUI port
  workflow_path: "3rd/comfyui/wai.json"  # Workflow file path
```

Traditional single-model problems: Character Forgetting → Plot Breakdown → Experience Collapse
DeepRolePlay's workflow solution:
- 🔍 Memory Flashback Processing: Intelligently retrieves historical conversations and external knowledge, automated execution based on LangGraph
- 📝 Scenario Update Management: Real-time maintenance of character state and plot coherence, supports tabular data management
- 🗂️ Table Management System: Structured storage of worldview, characters, items, etc., supports dynamic CRUD operations
- 🎭 Main Conversation Model: Generates character responses based on complete context
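The table-management idea above — structured JSON tables with dynamic CRUD — can be sketched as follows. The table names match the README's examples, but the schema and helper functions are illustrative, not the project's actual API:

```python
import json

# Illustrative in-memory tables; the real system persists them to files.
tables = {"worldview": [], "characters": [], "items": []}

def create(table: str, row: dict) -> None:
    tables[table].append(row)

def read(table: str, **filters) -> list[dict]:
    return [r for r in tables[table]
            if all(r.get(k) == v for k, v in filters.items())]

def update(table: str, match: dict, changes: dict) -> int:
    hits = read(table, **match)
    for row in hits:
        row.update(changes)
    return len(hits)

def delete(table: str, **filters) -> int:
    hits = read(table, **filters)
    for row in hits:
        tables[table].remove(row)
    return len(hits)

# A character is created once, then updated as the plot evolves.
create("characters", {"name": "Elira", "class": "mage", "weapon": "staff"})
update("characters", {"name": "Elira"}, {"location": "tower"})
print(json.dumps(tables["characters"]))
```

Storing state as rows rather than free text is what makes targeted updates (a character moves, an item changes hands) cheap and reliable.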
```
User Request -> HTTP Proxy Service
                       |
                       v
          [Check if Console Command]
                     /   \
               Yes  /     \  No
                   v       v
   Backend Console        Trigger Workflow Execution
          |                        |
   Command Parsing          +------+------+
   ($drp/$show/             |             |
    $rm/$exit)              v             v
          |        Memory Flashback   Scenario Update
   Execute Commands    Processing       Processing
   - Display tables      Node             Node
   - Reset data           |                |
   - Mode switching       |         Table Management
          |               |             (CRUD)
          v               |                |
  Return Command Result   +------+---------+
                                 |
                                 v
                     Inject Updated Scenario
                                 |
                                 v
                      Forward to Target LLM
                                 |
                                 v
                     Return Enhanced Response
```
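The "Check if Console Command" branch at the top of the flow can be sketched as a simple dispatch on the latest user message. The command set is taken from this README; the handler routing is an illustrative stand-in:

```python
# Console commands recognized by the proxy (per this README).
COMMANDS = {"$drp", "$help", "$show", "$rm", "$reset", "$exit"}

def is_console_command(messages: list[dict]) -> bool:
    """True if the latest user message is a $-prefixed console command."""
    for msg in reversed(messages):
        if msg.get("role") == "user":
            return msg.get("content", "").strip() in COMMANDS
    return False

def route(messages: list[dict]) -> str:
    # Commands short-circuit to the backend console; everything else
    # triggers the memory-flashback / scenario-update workflow.
    return "console" if is_console_command(messages) else "workflow"

print(route([{"role": "user", "content": "$show"}]))  # -> console
```

Short-circuiting commands before the workflow is what lets `$rm` or `$show` respond instantly without touching the agent model.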
- Python 3.12
- UV Virtual Environment Manager (Recommended)
```bash
git clone https://github.com/yourusername/deepRolePlay.git
cd deepRolePlay
uv venv --python 3.12
uv pip install -r requirements.txt
uv run python main.py
```

Change your AI application's (SillyTavern, OpenWebUI, etc.) API endpoint to: `http://localhost:6666/v1`
🌟 This project will check if the port is occupied. If it is, it will automatically increment by 1. Therefore, the actual port needs to be checked from the terminal output.
The system will automatically:
- Intercept conversation requests
- Execute workflow
- Update scenario state
- Inject enhanced context into requests
- Return more accurate role-playing responses
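The injection step above can be sketched as prepending the current scenario as a system message before forwarding. The message shape is the standard OpenAI chat format; the scenario wording and function name are illustrative:

```python
import copy

def inject_scenario(payload: dict, scenario: str) -> dict:
    """Return a copy of the chat request with the scenario injected
    as the first system message; the original request is untouched."""
    enriched = copy.deepcopy(payload)
    enriched["messages"].insert(
        0, {"role": "system", "content": f"[Current Scenario]\n{scenario}"})
    return enriched

request = {"model": "any-model",
           "messages": [{"role": "user", "content": "What do I see?"}]}
enriched = inject_scenario(request, "Elira stands at the tower gate.")
print(enriched["messages"][0]["role"])  # -> system
```

Because the scenario is rebuilt on every turn from the memory tables, the target LLM always sees an up-to-date summary regardless of how much raw history was trimmed.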
Use PyInstaller to package as an executable:

```bash
pyinstaller --name DeepRolePlay --onefile --clean --console \
  --add-data "src;src" --add-data "utils;utils" --add-data "config;config" \
  --add-data "3rd;3rd" \
  --hidden-import=locale --hidden-import=codecs \
  main.py
```

After packaging, `DeepRolePlay.exe` will be generated in the `dist/` directory; distribute it to users together with the configuration file.
This project uses the standard OpenAI API format; both the background processing model (Agent) and the forwarding target model (Proxy) support any OpenAI-style model:
- 🌟 OpenAI Style: All APIs supporting OpenAI Style format
- 🔥 OpenRouter: Aggregates multiple service providers, rich model selection
- 💻 Local Ollama: Fully private deployment, data security
- 🚀 DeepSeek: High-quality dialogue, low cost
- ⚡ Claude: Through OpenRouter or other compatible services
- 🧠 Gemini: Through compatible interfaces
- 🔧 Self-deployed Models: Any self-hosted model following OpenAI API format
- Agent Model: Used for background memory processing and scenario updates, recommend cost-effective models
- Proxy Model: Target model for actual user dialogue, can choose high-quality conversation models
- Dual Configuration: Both can use different service providers for flexible cost and effect optimization
The design philosophy of this project is inspired by the following research:
- Building effective agents - Anthropic
- LangGraph Documentation - LangChain
- st-memory-enhancement - muyoou
MIT License
