[AI] - Integrate OpenAI (or similar AI service) for real-time conversation practice functionality #47
📘 Issue Description
Integrate OpenAI (or similar AI service) for real-time conversation practice functionality. This feature will allow authenticated users to engage in AI-powered conversations to practice their language skills. The implementation should include a secure /chat/ai POST endpoint that handles chat requests and responses while maintaining user authentication and proper error handling.
🔍 Steps
Backend Implementation
- Install OpenAI SDK
  - Add the `openai` package to dependencies
  - Add OpenAI API key configuration to environment variables and settings
- Create Chat Controller
  - Create `src/controllers/chat.controller.ts`
  - Implement a `sendMessage` method for handling chat requests
  - Handle OpenAI API integration with proper error handling
  - Support conversation context/history management
- Create Chat Service
  - Create `src/services/chat.service.ts`
  - Implement OpenAI API calls with proper configuration
  - Handle rate limiting and token management
  - Support different conversation modes (practice levels: A1, A2, B1, B2, C1, C2)
- Create Chat Routes
  - Create `src/routes/api/modules/chat.routes.ts`
  - Implement the `POST /api/v1/chat/ai` endpoint
  - Apply authentication middleware (`isAuthorized()`)
  - Add request validation middleware
- Create Validation Schema
  - Create `src/models/validations/chat.validators.ts`
  - Validate message content, conversation context, and practice level
  - Ensure proper sanitization of user input
- Environment Configuration
  - Add `OPENAI_API_KEY` to environment variables
  - Add `OPENAI_MODEL` configuration (default: `gpt-3.5-turbo`)
  - Update the settings schema in `src/core/config/settings.ts`
- Error Handling & Security
  - Implement proper error responses for API failures
  - Add rate limiting specific to chat endpoints
  - Sanitize user input and AI responses
  - Handle OpenAI API quota limits gracefully
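The chat service step above could start from a sketch like the following: a pure helper that assembles the message array the OpenAI chat completions API expects, with a system prompt tuned to the requested CEFR level. The function name, the level prompts, and the prompt wording are all illustrative assumptions, not part of the existing codebase.

```typescript
// Hypothetical helper for src/services/chat.service.ts. Builds the
// messages array for OpenAI's chat completions API; the per-level
// system prompts below are illustrative placeholders.
type Role = "system" | "user" | "assistant";

interface ChatMessage {
  role: Role;
  content: string;
}

const LEVEL_PROMPTS: Record<string, string> = {
  A1: "Use very simple vocabulary and short sentences.",
  A2: "Use simple, everyday vocabulary.",
  B1: "Use intermediate vocabulary and gently correct mistakes.",
  B2: "Use upper-intermediate vocabulary and common idioms.",
  C1: "Use advanced vocabulary and discuss abstract topics.",
  C2: "Converse as you would with a proficient speaker.",
};

export function buildChatMessages(
  practiceLevel: string,
  conversationContext: ChatMessage[],
  userMessage: string,
): ChatMessage[] {
  const levelHint = LEVEL_PROMPTS[practiceLevel] ?? LEVEL_PROMPTS.B1;
  return [
    {
      role: "system",
      content: `You are a language conversation partner. Level ${practiceLevel}: ${levelHint}`,
    },
    ...conversationContext, // prior turns keep the conversation coherent
    { role: "user", content: userMessage },
  ];
}
```

The resulting array could then be passed to the OpenAI Node SDK, e.g. `client.chat.completions.create({ model, messages })`, with the model taken from the `OPENAI_MODEL` setting.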
Database Considerations (Optional Enhancement)
- Chat History Storage
  - Consider adding chat history tables to the Prisma schema
  - Store conversation sessions for user progress tracking
  - Implement data retention policies
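If the optional history storage is pursued, the Prisma schema could gain tables along these lines. All model and field names here are hypothetical suggestions, not part of the existing schema.

```prisma
// Hypothetical additions for the optional chat-history enhancement.
model ChatSession {
  id            String        @id @default(uuid())
  userId        String
  practiceLevel String
  createdAt     DateTime      @default(now())
  messages      ChatHistoryMessage[]
}

model ChatHistoryMessage {
  id        String      @id @default(uuid())
  sessionId String
  role      String      // "user" | "assistant"
  content   String
  createdAt DateTime    @default(now())
  session   ChatSession @relation(fields: [sessionId], references: [id])
}
```

A retention policy could then be a scheduled job deleting sessions older than a configured age.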
✅ Acceptance Criteria
- Authentication Required: Only authenticated users can access the chat endpoint
- Secure API Integration: OpenAI API key is properly configured and secured
- Request Validation: All incoming chat requests are validated (message content, length limits)
- Error Handling: Proper error responses for API failures, rate limits, and invalid requests
- Rate Limiting: Chat endpoint has appropriate rate limiting to prevent abuse
- Response Format: Consistent API response format following existing patterns (`SuccessResponse`, `BadRequestResponse`)
- Environment Configuration: OpenAI configuration is managed through environment variables
- TypeScript Support: Full TypeScript implementation with proper type definitions
- Logging: Proper logging for chat interactions and API calls
- Testing: Unit tests for chat service and controller methods
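The request-validation criterion could be satisfied along these lines. This is a dependency-free illustration of the checks `chat.validators.ts` might perform; the real implementation should use whatever validation library the project's existing validators use, and the length limit here is an assumed value.

```typescript
// Illustrative, dependency-free version of the chat request checks.
// MAX_MESSAGE_LENGTH is an assumed limit; tune it to the token budget.
const PRACTICE_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"];
const MAX_MESSAGE_LENGTH = 2000;

export function validateChatRequest(body: any): string[] {
  const errors: string[] = [];
  if (typeof body?.message !== "string" || body.message.trim() === "") {
    errors.push("message must be a non-empty string");
  } else if (body.message.length > MAX_MESSAGE_LENGTH) {
    errors.push(`message exceeds ${MAX_MESSAGE_LENGTH} characters`);
  }
  if (!Array.isArray(body?.conversationContext)) {
    errors.push("conversationContext must be an array");
  }
  if (!PRACTICE_LEVELS.includes(body?.practiceLevel)) {
    errors.push("practiceLevel must be one of " + PRACTICE_LEVELS.join(", "));
  }
  return errors;
}
```

Returning a list of errors (rather than throwing on the first one) lets the endpoint report every problem in a single `BadRequestResponse`.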
API Specification
Endpoint: POST /api/v1/chat/ai
Headers:
- `Authorization: Bearer <jwt_token>`
- `Content-Type: application/json`
Request Body:
```json
{
  "message": "Hello, I want to practice English conversation",
  "conversationContext": [],
  "practiceLevel": "B1",
  "conversationType": "general"
}
```
Success Response (200):
```json
{
  "success": true,
  "message": "Chat response generated successfully",
  "data": {
    "response": "Hello! I'd be happy to help you practice English...",
    "conversationId": "uuid-string",
    "timestamp": "2024-01-15T10:30:00Z"
  }
}
```
🌎 References
- OpenAI API Documentation
- OpenAI Node.js SDK
- Existing codebase patterns in `/src/controllers/` and `/src/services/`
- Authentication patterns in `/src/middlewares/authentication.ts`
- Validation patterns in `/src/models/validations/`
📜 Additional Notes
Technical Considerations
- Cost Management: Implement token counting and usage tracking to manage OpenAI API costs
- Performance: Consider implementing response caching for common practice scenarios
- Scalability: Design with future multi-language support in mind
- Privacy: Ensure user messages are handled according to privacy policies
- Fallback: Consider fallback responses if OpenAI API is unavailable
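The caching consideration above could start from something as small as this: an in-memory TTL cache keyed by practice level and normalized message. A production deployment would more likely use Redis or similar; the class and its defaults are illustrative only.

```typescript
// Hedged sketch of response caching for common practice prompts.
// In-memory only; a shared store (e.g. Redis) would be needed at scale.
interface CacheEntry {
  response: string;
  expiresAt: number;
}

export class ResponseCache {
  private entries = new Map<string, CacheEntry>();

  constructor(private ttlMs: number = 5 * 60 * 1000) {}

  // Normalize so trivially different phrasings share an entry.
  private key(level: string, message: string): string {
    return `${level}:${message.trim().toLowerCase()}`;
  }

  get(level: string, message: string, now: number = Date.now()): string | undefined {
    const entry = this.entries.get(this.key(level, message));
    if (!entry || entry.expiresAt <= now) return undefined;
    return entry.response;
  }

  set(level: string, message: string, response: string, now: number = Date.now()): void {
    this.entries.set(this.key(level, message), {
      response,
      expiresAt: now + this.ttlMs,
    });
  }
}
```

The `now` parameters exist only to make expiry testable without real clocks; callers would normally omit them.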
Integration Points
- Follow existing controller patterns (`auth.controller.ts`, `question.controller.ts`)
- Use existing authentication middleware (`isAuthorized()`)
- Follow validation patterns from `question.validators.ts`
- Maintain consistency with existing API response formats
- Use existing logging configuration from `core/config/logger.ts`
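Tying the integration points together, the controller could take the shape below. The minimal local `Req`/`Res` interfaces stand in for the framework's request/response types, and the factory-with-injected-service pattern is an assumption chosen to make `sendMessage` unit-testable; the real controller should mirror `auth.controller.ts` and use the project's `SuccessResponse`/`BadRequestResponse` helpers.

```typescript
// Illustrative shape for sendMessage in src/controllers/chat.controller.ts.
// Req/Res are minimal stand-ins for the framework types.
interface Req {
  body: { message?: string; practiceLevel?: string };
  user?: { id: string };
}

interface Res {
  statusCode?: number;
  payload?: unknown;
  status(code: number): Res;
  json(payload: unknown): void;
}

type ChatService = (message: string, level: string) => Promise<string>;

export function makeSendMessage(chatService: ChatService) {
  return async function sendMessage(req: Req, res: Res): Promise<void> {
    const { message, practiceLevel } = req.body;
    if (!message) {
      res.status(400).json({ success: false, message: "message is required" });
      return;
    }
    try {
      const response = await chatService(message, practiceLevel ?? "B1");
      res.status(200).json({
        success: true,
        message: "Chat response generated successfully",
        data: { response, timestamp: new Date().toISOString() },
      });
    } catch {
      // Upstream failures (e.g. OpenAI quota) become a controlled error.
      res.status(502).json({ success: false, message: "AI service unavailable" });
    }
  };
}
```

Injecting the service keeps the OpenAI call mockable, which directly supports the unit-testing acceptance criterion.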