This example demonstrates how to build an AI Customer Service Bot using PowerMem for intelligent memory management, LangGraph for stateful conversation workflows, and OceanBase as the database backend.
- 🔄 Stateful Workflows: Multi-step conversation management with LangGraph state graphs
- 🧠 Intelligent Memory: Automatic extraction of customer information, orders, and preferences
- 💬 Context-Aware Responses: Personalized responses based on customer history
- 📊 Multi-Step Processing: Handles order inquiries, issue resolution, and general questions
- 🔒 Privacy Protection: Customer data isolation through user_id
- 🚀 Scalable Storage: OceanBase database backend for enterprise-scale deployments
```
┌─────────────────┐
│    LangGraph    │  Stateful workflow management
│  (State Graph)  │  - Intent classification
│                 │  - Multi-step routing
│                 │  - Conversation flow
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│    PowerMem     │  Intelligent memory management
│  (Memory Layer) │  - Fact extraction
│                 │  - Semantic search
│                 │  - Context retrieval
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│    OceanBase    │  Vector database for scalable storage
│   (Database)    │  - Customer memories
│                 │  - Order history
│                 │  - Preferences
└─────────────────┘
```
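The flow through the three layers can be pictured with a minimal stand-in sketch (every name below is illustrative; the real example wires LangGraph, PowerMem, and OceanBase rather than plain functions and a dict):

```python
# Stand-in sketch of one conversation turn through the three layers.

def classify_intent(message: str) -> str:
    """LangGraph layer: route by keyword (the actual bot can use an LLM)."""
    text = message.lower()
    if "order" in text:
        return "order_inquiry"
    if any(word in text for word in ("broken", "refund", "complaint")):
        return "issue_resolution"
    return "general"

def retrieve_context(store: dict, customer_id: str) -> list:
    """PowerMem layer: fetch prior memories for this customer only."""
    return store.get(customer_id, [])

def save_memory(store: dict, customer_id: str, fact: str) -> None:
    """Storage layer: persist the extracted fact (a dict stands in for OceanBase)."""
    store.setdefault(customer_id, []).append(fact)

def handle_turn(store: dict, customer_id: str, message: str) -> str:
    intent = classify_intent(message)
    _context = retrieve_context(store, customer_id)  # would feed the LLM prompt
    save_memory(store, customer_id, f"{intent}: {message}")
    return intent

store: dict = {}
intent = handle_turn(store, "customer_alice_001", "Where is my order #ORD-12345?")
```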
- Python 3.11+
- OceanBase Database (configured and running)
- API Keys:
- LLM API key (OpenAI, Qwen, etc.)
- Embedding API key (if different from LLM)
Option 1: Install from requirements.txt (Recommended)

```bash
cd examples/langgraph
pip install -r requirements.txt
```

Option 2: Install manually

```bash
# Core dependencies
pip install powermem python-dotenv

# LangGraph dependencies (quote the specifiers so the shell does not treat > as a redirect)
pip install "langgraph>=1.0.0" "langchain>=1.1.0" "langchain-core>=1.1.0" "langchain-openai>=1.1.0" "langchain-community>=0.4.1"

# OceanBase dependencies (if not already installed)
pip install pyobvector sqlalchemy
```

Option 3: Install all at once

```bash
pip install powermem python-dotenv "langgraph>=1.0.0" "langchain>=1.1.0" "langchain-core>=1.1.0" "langchain-openai>=1.1.0" "langchain-community>=0.4.1" pyobvector sqlalchemy
```

Copy the configuration template and edit it:
```bash
# From project root
cp .env.example .env
```

Edit .env and configure:
```bash
# Database Configuration
DATABASE_PROVIDER=oceanbase
OCEANBASE_HOST=127.0.0.1
OCEANBASE_PORT=2881
OCEANBASE_USER=root@sys
OCEANBASE_PASSWORD=your_password
OCEANBASE_DATABASE=powermem
OCEANBASE_COLLECTION=customer_memories

# LLM Configuration
LLM_PROVIDER=qwen  # or openai
LLM_API_KEY=your_llm_api_key
LLM_MODEL=qwen-plus  # or gpt-3.5-turbo

# Embedding Configuration
EMBEDDING_PROVIDER=qwen  # or openai
EMBEDDING_API_KEY=your_embedding_api_key
EMBEDDING_MODEL=text-embedding-v4
EMBEDDING_DIMS=1536
```

Ensure your OceanBase instance is running and accessible:
```bash
# Test connection (adjust host/port as needed)
mysql -h 127.0.0.1 -P 2881 -u root -p
```

Run a predefined conversation demonstration:
```bash
cd examples/langgraph
python customer_service_bot.py --mode demo
```

This will:
- Initialize the bot with OceanBase
- Run through a sample customer conversation
- Demonstrate stateful workflow management
- Show memory storage and retrieval
- Display customer information summary
Run the bot in interactive mode for real-time conversations:
```bash
cd examples/langgraph
python customer_service_bot.py --mode interactive
```

Interactive Commands:
- Type your message to chat with the bot
- Type `summary` to see a customer information summary
- Type `quit` or `exit` to end the conversation
Specify a customer ID for the conversation:
```bash
python customer_service_bot.py --mode interactive --customer-id customer_john_001
```

The bot uses LangGraph's StateGraph to manage conversation flow:
```python
from typing import Any, Dict, List, TypedDict

from langchain_core.messages import BaseMessage

# State schema
class CustomerServiceState(TypedDict):
    messages: List[BaseMessage]
    customer_id: str
    intent: str  # "order_inquiry", "issue_resolution", "general"
    order_number: str
    issue_type: str
    context: Dict[str, Any]
    resolved: bool
```

The graph consists of several nodes:
- load_context: Loads customer context from PowerMem
- classify_intent: Classifies customer intent (order inquiry, issue, general)
- handle_order_inquiry: Processes order-related questions
- handle_issue_resolution: Handles customer issues and complaints
- handle_general: Handles general inquiries
- save_conversation: Saves conversation to PowerMem
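The conditional routing shown next relies on a router function; a minimal `route_intent` sketch (the real implementation may differ) simply reads the intent set by `classify_intent` and returns the matching edge key:

```python
# Minimal route_intent sketch: returns the edge key that
# add_conditional_edges maps to a handler node.

def route_intent(state: dict) -> str:
    intent = state.get("intent", "general")
    if intent in ("order_inquiry", "issue_resolution"):
        return intent
    # Unknown or missing intents fall back to the general handler.
    return "general"
```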
The graph uses conditional edges to route based on intent:
```python
workflow.add_conditional_edges(
    "classify_intent",
    route_intent,
    {
        "order_inquiry": "handle_order_inquiry",
        "issue_resolution": "handle_issue_resolution",
        "general": "handle_general",
    },
)
```

PowerMem is used to:
- Store conversations with intelligent fact extraction
- Retrieve context based on current query
- Track customer preferences and order history
- Maintain privacy by isolating data by customer_id
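The per-customer isolation can be pictured with a small in-memory stand-in (the `add`/`search` names below are illustrative, not PowerMem's actual API, and real search is semantic rather than substring matching):

```python
# In-memory stand-in for the memory layer; shows user_id-based isolation.

class InMemoryStore:
    def __init__(self):
        self._records = []

    def add(self, text, user_id, metadata=None):
        self._records.append(
            {"text": text, "user_id": user_id, "metadata": metadata or {}}
        )

    def search(self, query, user_id):
        # Substring match stands in for vector similarity search.
        return [
            r for r in self._records
            if r["user_id"] == user_id and query.lower() in r["text"].lower()
        ]

store = InMemoryStore()
store.add("Prefers email updates about orders", user_id="customer_alice_001")
store.add("Asked about order #ORD-99999", user_id="customer_bob_002")

# Alice's search never returns Bob's records.
alice_hits = store.search("order", user_id="customer_alice_001")
```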
All customer memories are stored in OceanBase with:
- Vector Embeddings: For semantic search
- Metadata: Intent, order numbers, issue types, timestamps
- Scalability: Handles large-scale customer data
```
Customer: Hello, I'd like to check the status of my order #ORD-12345

[Node: load_context] Loading context for customer customer_alice_001
[Node: classify_intent] Classifying intent...
  Classified intent: order_inquiry
[Node: handle_order_inquiry] Processing order inquiry...
[Node: save_conversation] Saving conversation to PowerMem...
  ✓ Conversation saved to PowerMem

Bot: I can help you with your order inquiry. I found some previous order
information in your history. Your order #ORD-12345 is currently being
processed and will be shipped within 2-3 business days.
```
The bot can provide a summary of stored customer information:
```python
summary = bot.get_customer_summary()
# Returns:
# {
#     "total_memories": 15,
#     "order_mentions": 8,
#     "issue_mentions": 3,
#     "preference_mentions": 4,
#     "recent_memories": [...]
# }
```

- `DATABASE_PROVIDER`: Set to `oceanbase`
- `OCEANBASE_HOST`: OceanBase server hostname
- `OCEANBASE_PORT`: OceanBase port (default: 2881)
- `OCEANBASE_DATABASE`: Database name
- `OCEANBASE_COLLECTION`: Collection/table name for memories
- `LLM_PROVIDER`: `qwen`, `openai`, or other supported providers
- `LLM_MODEL`: Model name (e.g., `qwen-plus`, `gpt-3.5-turbo`)
- `LLM_TEMPERATURE`: Response creativity (0.0-1.0)
- `EMBEDDING_PROVIDER`: Embedding model provider
- `EMBEDDING_MODEL`: Embedding model name
- `EMBEDDING_DIMS`: Vector dimensions (must match the model)
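The settings above can be collected in one place; a stdlib-only sketch (the defaults here are illustrative, and the example itself loads the values via python-dotenv):

```python
import os

def load_settings() -> dict:
    """Read the configuration described above from the environment."""
    return {
        "database_provider": os.getenv("DATABASE_PROVIDER", "oceanbase"),
        "oceanbase_host": os.getenv("OCEANBASE_HOST", "127.0.0.1"),
        "oceanbase_port": int(os.getenv("OCEANBASE_PORT", "2881")),
        "oceanbase_collection": os.getenv("OCEANBASE_COLLECTION", "customer_memories"),
        "llm_provider": os.getenv("LLM_PROVIDER", "qwen"),
        "llm_temperature": float(os.getenv("LLM_TEMPERATURE", "0.7")),
        "embedding_dims": int(os.getenv("EMBEDDING_DIMS", "1536")),
    }

settings = load_settings()
```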
You can enhance the intent classification by using an LLM:
```python
def _classify_intent(self, state: CustomerServiceState) -> CustomerServiceState:
    """Classify intent using an LLM for better accuracy."""
    user_input = state["messages"][-1].content
    prompt = f"""Classify the customer's intent. Options: order_inquiry, issue_resolution, general.
Customer message: {user_input}
Intent:"""
    response = self.llm.invoke(prompt)
    intent = response.content.strip().lower()
    state["intent"] = intent
    return state
```

You can extend the workflow by adding new nodes:
```python
def _handle_product_inquiry(self, state: CustomerServiceState) -> CustomerServiceState:
    """Handle product information requests."""
    # Your custom logic here
    return state

# Add the node to the graph. Register the new intent in the route_intent
# mapping (rather than an unconditional edge from classify_intent, which
# would run for every message) and connect the handler onward.
workflow.add_node("handle_product_inquiry", self._handle_product_inquiry)
workflow.add_edge("handle_product_inquiry", "save_conversation")
```

Problem: Cannot connect to OceanBase
Solution:
- Verify OceanBase is running: `mysql -h 127.0.0.1 -P 2881 -u root -p`
- Check the configuration in `.env`
- Verify network connectivity and firewall settings
Problem: `ModuleNotFoundError: No module named 'langgraph'`

Solution:

```bash
pip install "langgraph>=1.0.0" "langchain>=1.1.0" "langchain-core>=1.1.0" "langchain-openai>=1.1.0" "langchain-community>=0.4.1"
```

Problem: LLM or embedding API errors
Solution:
- Verify API keys in `.env`
- Check API key validity and quotas
- Ensure the correct provider is configured
Problem: Conversations not being stored
Solution:
- Check the OceanBase connection
- Verify `infer=True` is set in `save_conversation`
- Check database permissions
- Review error messages in the console
- Customer Privacy: Always use a unique `customer_id` for each customer
- Data Security: Encrypt sensitive customer information
- Regular Backups: Back up the OceanBase database regularly
- Monitoring: Monitor memory usage and database performance
- State Management: Keep state objects lightweight and focused
- Error Handling: Implement robust error handling in workflow nodes
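For the last two bullets, one lightweight pattern is to wrap each node so a failure is recorded in state rather than crashing the graph (the `safe_node` wrapper and the `errors` field are illustrative, not part of the example code):

```python
from functools import wraps

def safe_node(node_fn):
    """Wrap a workflow node so exceptions land in state['errors']."""
    @wraps(node_fn)
    def wrapper(state):
        try:
            return node_fn(state)
        except Exception as exc:  # keep the graph running on node failure
            state.setdefault("errors", []).append(f"{node_fn.__name__}: {exc}")
            state["resolved"] = False
            return state
    return wrapper

@safe_node
def handle_order_inquiry(state):
    if not state.get("order_number"):
        raise ValueError("missing order number")
    state["resolved"] = True
    return state

failed = handle_order_inquiry({"messages": []})
succeeded = handle_order_inquiry({"messages": [], "order_number": "ORD-12345"})
```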
| Feature | LangChain Example | LangGraph Example |
|---|---|---|
| Framework | LangChain Chains | LangGraph StateGraph |
| State Management | Memory-based | Explicit state objects |
| Workflow | Linear chain | Multi-step graph with routing |
| Intent Handling | Single handler | Conditional routing by intent |
| Use Case | Simple conversations | Complex multi-step workflows |
- LangChain Integration - Simple conversation chains
- Basic Usage - Simple memory operations
- Agent Memory - Multi-agent memory management
- Intelligent Memory - Advanced memory features
For issues or questions:
- Check the main README
- Review PowerMem documentation
- Open an issue on GitHub