
[AI] - Integrate OpenAI (or similar AI service) for real-time conversation practice functionality #47

@Gerson2102

📘 Issue Description

Integrate OpenAI (or a similar AI service) to power real-time conversation practice. This feature will allow authenticated users to engage in AI-driven conversations to practice their language skills. The implementation should expose a secure POST /api/v1/chat/ai endpoint that handles chat requests and responses while enforcing user authentication and proper error handling.

🔍 Steps

Backend Implementation

  1. Install OpenAI SDK

    • Add openai package to dependencies
    • Add OpenAI API key configuration to environment variables and settings
  2. Create Chat Controller

    • Create src/controllers/chat.controller.ts
    • Implement sendMessage method for handling chat requests
    • Handle OpenAI API integration with proper error handling
    • Support conversation context/history management
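The controller's error handling could map failures from the AI provider onto HTTP statuses. A minimal sketch, assuming hypothetical error categories and names (`ChatErrorKind`, `toHttpStatus` are illustrative, not part of the existing codebase):

```typescript
// Sketch: translate AI-provider failures into HTTP status codes for the
// controller. The error categories here are assumptions for illustration.
type ChatErrorKind =
  | "invalid_request"
  | "auth"
  | "rate_limited"
  | "quota"
  | "upstream_unavailable";

interface ChatError {
  kind: ChatErrorKind;
  detail: string;
}

// Decide which status code the controller should respond with.
function toHttpStatus(err: ChatError): number {
  switch (err.kind) {
    case "invalid_request":
      return 400; // bad message/level in the request body
    case "auth":
      return 401; // missing or invalid JWT
    case "rate_limited":
      return 429; // per-user rate limit exceeded
    case "quota":
      return 503; // OpenAI quota exhausted; client should retry later
    case "upstream_unavailable":
      return 502; // OpenAI API unreachable
  }
}
```

The actual response bodies would then be wrapped in the existing SuccessResponse/BadRequestResponse patterns.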
  3. Create Chat Service

    • Create src/services/chat.service.ts
    • Implement OpenAI API calls with proper configuration
    • Handle rate limiting and token management
    • Support different conversation modes (practice levels: A1, A2, B1, B2, C1, C2)
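One way the service could support the CEFR practice levels is by prepending a level-specific system prompt to the conversation before calling OpenAI. A sketch, where the prompt wording and the `buildMessages` helper are assumptions:

```typescript
// Sketch: build the message array sent to the chat model for a given
// CEFR practice level. Prompt wording is illustrative.
type PracticeLevel = "A1" | "A2" | "B1" | "B2" | "C1" | "C2";

interface ChatTurn {
  role: "system" | "user" | "assistant";
  content: string;
}

const LEVEL_HINTS: Record<PracticeLevel, string> = {
  A1: "Use very short sentences and basic everyday vocabulary.",
  A2: "Use simple sentences about familiar topics.",
  B1: "Use clear standard language; introduce some new vocabulary.",
  B2: "Use natural language; explain idioms when you use them.",
  C1: "Use fluent, nuanced language and challenge the learner.",
  C2: "Converse as with a native speaker; correct subtle errors.",
};

// Prepend a level-appropriate system prompt, keep the prior context,
// and append the new user message last.
function buildMessages(
  message: string,
  context: ChatTurn[],
  level: PracticeLevel,
): ChatTurn[] {
  const system: ChatTurn = {
    role: "system",
    content: `You are a language practice partner. ${LEVEL_HINTS[level]}`,
  };
  return [system, ...context, { role: "user", content: message }];
}
```

The resulting array maps directly onto the `messages` parameter of the OpenAI chat completions call.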
  4. Create Chat Routes

    • Create src/routes/api/modules/chat.routes.ts
    • Implement POST /api/v1/chat/ai endpoint
    • Apply authentication middleware (isAuthorized())
    • Add request validation middleware
  5. Create Validation Schema

    • Create src/models/validations/chat.validators.ts
    • Validate message content, conversation context, and practice level
    • Ensure proper sanitization of user input
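The validation step could be sketched as a plain function; the real schema should follow the patterns in question.validators.ts, and the 2000-character cap below is a placeholder, not an established limit:

```typescript
// Sketch: validate the chat request body. Limits are placeholders.
const LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"] as const;

interface ValidationResult {
  ok: boolean;
  errors: string[];
}

function validateChatRequest(body: unknown): ValidationResult {
  const errors: string[] = [];
  const b = body as { message?: unknown; practiceLevel?: unknown };
  if (typeof b?.message !== "string" || b.message.trim().length === 0) {
    errors.push("message must be a non-empty string");
  } else if (b.message.length > 2000) {
    errors.push("message exceeds 2000 characters");
  }
  if (
    typeof b?.practiceLevel !== "string" ||
    !(LEVELS as readonly string[]).includes(b.practiceLevel)
  ) {
    errors.push("practiceLevel must be one of A1-C2");
  }
  return { ok: errors.length === 0, errors };
}
```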
  6. Environment Configuration

    • Add OPENAI_API_KEY to environment variables
    • Add OPENAI_MODEL configuration (default: gpt-3.5-turbo)
    • Update settings schema in src/core/config/settings.ts
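The environment additions might look like the following; the field names track the issue, but how settings.ts actually composes its schema may differ:

```typescript
// Sketch: read OpenAI settings from an env-like record, applying the
// gpt-3.5-turbo default and failing fast when the key is missing.
interface OpenAISettings {
  apiKey: string;
  model: string;
}

function loadOpenAISettings(
  env: Record<string, string | undefined>,
): OpenAISettings {
  const apiKey = env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error("OPENAI_API_KEY is required");
  }
  return { apiKey, model: env.OPENAI_MODEL ?? "gpt-3.5-turbo" };
}
```

Taking the env record as a parameter (rather than reading `process.env` directly) keeps the loader easy to unit-test.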
  7. Error Handling & Security

    • Implement proper error responses for API failures
    • Add rate limiting specific to chat endpoints
    • Sanitize user input and AI responses
    • Handle OpenAI API quota limits gracefully
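For the chat-specific rate limiting, one minimal in-process approach is a per-user sliding window; this is a sketch only, and a shared store (e.g. Redis) would be needed once the API runs on more than one instance:

```typescript
// Sketch: per-user sliding-window rate limiter for the chat endpoint.
// In-memory only; suitable for a single process.
class ChatRateLimiter {
  private hits = new Map<string, number[]>();

  constructor(
    private readonly limit: number, // max requests per window
    private readonly windowMs: number, // window length in milliseconds
  ) {}

  // Returns true if the request is allowed, false if the user is over limit.
  allow(userId: string, now: number = Date.now()): boolean {
    const recent = (this.hits.get(userId) ?? []).filter(
      (t) => now - t < this.windowMs,
    );
    if (recent.length >= this.limit) {
      this.hits.set(userId, recent);
      return false;
    }
    recent.push(now);
    this.hits.set(userId, recent);
    return true;
  }
}
```

A denied request would map to a 429 response in the controller.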

Database Considerations (Optional Enhancement)

  1. Chat History Storage
    • Consider adding chat history tables to Prisma schema
    • Store conversation sessions for user progress tracking
    • Implement data retention policies

✅ Acceptance Criteria

  • Authentication Required: Only authenticated users can access the chat endpoint
  • Secure API Integration: OpenAI API key is properly configured and secured
  • Request Validation: All incoming chat requests are validated (message content, length limits)
  • Error Handling: Proper error responses for API failures, rate limits, and invalid requests
  • Rate Limiting: Chat endpoint has appropriate rate limiting to prevent abuse
  • Response Format: Consistent API response format following existing patterns (SuccessResponse, BadRequestResponse)
  • Environment Configuration: OpenAI configuration is managed through environment variables
  • TypeScript Support: Full TypeScript implementation with proper type definitions
  • Logging: Proper logging for chat interactions and API calls
  • Testing: Unit tests for chat service and controller methods

API Specification

Endpoint: POST /api/v1/chat/ai

Headers:

  • Authorization: Bearer <jwt_token>

  • Content-Type: application/json

Request Body:

```json
{
  "message": "Hello, I want to practice English conversation",
  "conversationContext": [],
  "practiceLevel": "B1",
  "conversationType": "general"
}
```

Success Response (200):

```json
{
  "success": true,
  "message": "Chat response generated successfully",
  "data": {
    "response": "Hello! I'd be happy to help you practice English...",
    "conversationId": "uuid-string",
    "timestamp": "2024-01-15T10:30:00Z"
  }
}
```
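A hypothetical client-side call matching this specification (base URL and token are placeholders; `buildChatRequest` and `sendChat` are illustrative names, not existing code):

```typescript
// Sketch: build and send a request matching the spec above.
interface ChatRequestBody {
  message: string;
  conversationContext: { role: string; content: string }[];
  practiceLevel: string;
  conversationType: string;
}

function buildChatRequest(
  message: string,
  practiceLevel: string,
): ChatRequestBody {
  return {
    message,
    conversationContext: [], // empty for a fresh conversation
    practiceLevel,
    conversationType: "general",
  };
}

// Requires Node 18+ (global fetch).
async function sendChat(baseUrl: string, jwt: string, body: ChatRequestBody) {
  const res = await fetch(`${baseUrl}/api/v1/chat/ai`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${jwt}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`chat request failed: ${res.status}`);
  return res.json();
}
```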

🌎 References

  • OpenAI API Documentation
  • OpenAI Node.js SDK
  • Existing codebase patterns in /src/controllers/ and /src/services/
  • Authentication patterns in /src/middlewares/authentication.ts
  • Validation patterns in /src/models/validations/

📜 Additional Notes

Technical Considerations

  1. Cost Management: Implement token counting and usage tracking to manage OpenAI API costs
  2. Performance: Consider implementing response caching for common practice scenarios
  3. Scalability: Design with future multi-language support in mind
  4. Privacy: Ensure user messages are handled according to privacy policies
  5. Fallback: Consider fallback responses if OpenAI API is unavailable

Integration Points

  • Follow existing controller patterns (auth.controller.ts, question.controller.ts)
  • Use existing authentication middleware (isAuthorized())
  • Follow validation patterns from question.validators.ts
  • Maintain consistency with existing API response formats
  • Use existing logging configuration from core/config/logger.ts
