A containerized AI-powered CI/CD pipeline service that brings intelligent automation to your GitHub Actions workflows. This service provides automated code analysis, test generation, smart deployment strategies, and intelligent reporting using Large Language Models.
- Intelligent Code Analysis: AI-powered security vulnerability detection and code quality assessment
- Automated Test Generation: Generate comprehensive unit tests for files with low coverage
- Smart Deployment Strategies: AI-recommended deployment strategies based on risk assessment
- Intelligent Reporting: Generate actionable insights and summaries of CI/CD pipeline results
- Comprehensive Analytics: Track security posture, quality metrics, deployment patterns, and MTTR
- AI-Powered Log Analysis: Analyze GitHub Actions, application, and deployment logs for insights
- Real-Time Dashboards: Monitor KPIs and trends with interactive dashboards
- Advanced Reporting: Generate detailed reports on security, quality, deployment frequency, and more
- Containerized Service: Easy-to-use Docker container that integrates with any GitHub Actions workflow
- Batch Processing: Execute multiple operations in a single API call
- Security-First: Built with security best practices and non-root container execution
Add this to your `.github/workflows/ci.yml`:
```yaml
name: LLM-Powered CI/CD Pipeline

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  llm-analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: LLM Code Analysis
        # An explicit id is required so later steps can reference this step's outputs
        id: llm-analysis
        uses: your-org/llm-cicd-action@v1
        with:
          openai-api-key: ${{ secrets.OPENAI_API_KEY }}
          analysis-type: 'analyze'
          security-scan-results: '[]'
          quality-metrics: '[]'

      - name: Generate Tests (if low coverage)
        if: steps.llm-analysis.outputs.deployment-approved == 'true'
        uses: your-org/llm-cicd-action@v1
        with:
          openai-api-key: ${{ secrets.OPENAI_API_KEY }}
          analysis-type: 'generate-tests'
          source-files: '[{"path": "src/components/Button.tsx", "content": "..."}]'
```
To run the containerized service directly:

```bash
# Pull and run the container
docker run -d \
  -p 3000:3000 \
  -e OPENAI_API_KEY="your-openai-api-key" \
  -e GITHUB_TOKEN="your-github-token" \
  ghcr.io/your-org/llm-cicd-service:latest

# Test the service
curl http://localhost:3000/health
```
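The container may take a few seconds to become ready. A small wait loop keeps scripted setups from racing it; this sketch assumes only `curl` and the `/health` endpoint above:

```bash
# Poll /health for up to ~30 seconds before giving up
for i in $(seq 1 30); do
  curl -fsS http://localhost:3000/health >/dev/null && break
  sleep 1
done
```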
The service exposes the following HTTP endpoints.

### `GET /health`

Health check endpoint.

### `GET /api/status`

Service status endpoint.

### `POST /api/analyze`

```http
POST /api/analyze
Content-Type: application/json
Authorization: Bearer YOUR_OPENAI_API_KEY

{
  "diff": "git diff content...",
  "githubContext": {
    "ref": "refs/heads/main",
    "event_name": "push",
    "actor": "username"
  },
  "securityFindings": [],
  "codeQualityIssues": []
}
```
### `POST /api/generate-report`
```http
POST /api/generate-report
Content-Type: application/json
Authorization: Bearer YOUR_OPENAI_API_KEY
X-GitHub-Token: YOUR_GITHUB_TOKEN

{
  "workflowResults": {},
  "securityData": {},
  "qualityData": {},
  "testingData": {},
  "deploymentData": {},
  "performanceData": {}
}
```
### `POST /api/deploy`
```http
POST /api/deploy
Content-Type: application/json
Authorization: Bearer YOUR_OPENAI_API_KEY

{
  "environment": "production",
  "analysisResults": {
    "security_risk": "low",
    "code_quality_score": 8,
    "deployment_risk": "medium"
  },
  "deploymentConfig": {}
}
```
### `POST /api/generate-tests`
```http
POST /api/generate-tests
Content-Type: application/json
Authorization: Bearer YOUR_OPENAI_API_KEY

{
  "sourceFiles": [
    {
      "path": "src/components/Button.tsx",
      "content": "export const Button = () => { ... }"
    }
  ],
  "coverageThreshold": 70,
  "testFramework": "jest"
}
```
### `POST /api/batch`
```http
POST /api/batch
Content-Type: application/json
Authorization: Bearer YOUR_OPENAI_API_KEY

{
  "operations": [
    {
      "type": "analyze",
      "data": { "diff": "...", "githubContext": {...} }
    },
    {
      "type": "generate-tests",
      "data": { "sourceFiles": [...] }
    }
  ]
}
```
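Batching keeps related operations in a single request, which matters when each one may involve an LLM round trip. A trimmed-down smoke test, with the operation payloads reduced to placeholders:

```bash
# Illustrative /api/batch call with two minimal operations
curl -X POST http://localhost:3000/api/batch \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{"operations": [{"type": "analyze", "data": {"diff": "..."}}, {"type": "generate-tests", "data": {"sourceFiles": []}}]}'
```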
### Analytics Endpoints

The analytics and reporting APIs are plain GET endpoints that take a `days` query parameter:

- `GET /api/analytics/metrics/security?days=30`
- `GET /api/analytics/metrics/quality?days=30`
- `GET /api/analytics/metrics/deployment?days=30`
- `GET /api/analytics/metrics/mttr?days=30`
- `GET /api/dashboard/overview?days=30`
- `GET /api/reports/comprehensive?days=30`
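For example, pulling each 30-day metrics series in turn (this sketch assumes the same bearer-token header as the POST endpoints and the local quick-start container):

```bash
# Fetch security, quality, deployment, and MTTR metrics for the last 30 days
for metric in security quality deployment mttr; do
  curl -fsS -H "Authorization: Bearer $OPENAI_API_KEY" \
    "http://localhost:3000/api/analytics/metrics/$metric?days=30"
done
```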
### `POST /api/analytics/logs/analyze/github-actions`

```http
POST /api/analytics/logs/analyze/github-actions
Content-Type: application/json
Authorization: Bearer YOUR_OPENAI_API_KEY

{
  "logData": {
    "repository": "my-org/my-repo",
    "workflow": "CI/CD Pipeline",
    "logs": [
      {
        "timestamp": "2024-01-15T10:30:00Z",
        "level": "error",
        "message": "Build failed: timeout"
      }
    ]
  }
}
```
### Environment Variables

| Variable | Description | Required | Default |
|---|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key for LLM functionality | Yes | - |
| `GITHUB_TOKEN` | GitHub token for repository access | No | - |
| `PORT` | Service port | No | `3000` |
| `NODE_ENV` | Node environment | No | `production` |
| `ALLOWED_ORIGINS` | Comma-separated list of allowed CORS origins | No | `*` |
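For instance, a run with non-default configuration (the port and origins below are arbitrary illustrative values):

```bash
# Run the service on port 8080 with restricted CORS origins
docker run -d \
  -p 8080:8080 \
  -e OPENAI_API_KEY="your-openai-api-key" \
  -e PORT=8080 \
  -e NODE_ENV=production \
  -e ALLOWED_ORIGINS="https://ci.example.com,https://dashboard.example.com" \
  ghcr.io/your-org/llm-cicd-service:latest
```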
### GitHub Action Inputs

| Input | Description | Required | Default |
|---|---|---|---|
| `openai-api-key` | OpenAI API key | Yes | - |
| `github-token` | GitHub token | No | `${{ github.token }}` |
| `analysis-type` | Type of analysis to perform | No | `analyze` |
| `environment` | Target environment | No | `staging` |
| `coverage-threshold` | Test coverage threshold | No | `70` |
| `container-image` | Docker image to use | No | `latest` |
| `service-port` | Service port | No | `3000` |
| `timeout` | Request timeout in seconds | No | `300` |
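A step exercising the optional inputs might look like this (the values are arbitrary; `analyze` and `generate-tests` are the only `analysis-type` values shown elsewhere in this README):

```yaml
- name: LLM Code Analysis (tuned)
  uses: your-org/llm-cicd-action@v1
  with:
    openai-api-key: ${{ secrets.OPENAI_API_KEY }}
    analysis-type: 'analyze'
    environment: 'production'   # defaults to staging
    coverage-threshold: 80      # defaults to 70
    container-image: 'ghcr.io/your-org/llm-cicd-service:latest'
    timeout: 600                # seconds; defaults to 300
```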
The GitHub Action provides the following outputs:
- `analysis-result`: Complete analysis results
- `deployment-decision`: Deployment decision and strategy
- `generated-report`: Generated intelligent report
- `generated-tests`: Generated test files
- `security-risk`: Security risk level (low/medium/high/critical)
- `code-quality-score`: Code quality score (1-10)
- `deployment-approved`: Whether deployment is approved
- `deployment-strategy`: Recommended deployment strategy
- `metrics`: Collected analytics metrics (security, quality, deployment)
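These outputs can gate later steps directly. A sketch that reuses the `llm-analysis` step id from the quick-start example:

```yaml
- name: Deploy
  # Proceed only when the analysis approved deployment and risk is below critical
  if: steps.llm-analysis.outputs.deployment-approved == 'true' && steps.llm-analysis.outputs.security-risk != 'critical'
  run: echo "Deploying with strategy ${{ steps.llm-analysis.outputs.deployment-strategy }}"
```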
- Security Posture: Vulnerability counts, risk levels, security gate triggers
- Quality Metrics: Code quality scores, test coverage, improvement trends
- Deployment Frequency: Success rates, strategy distribution, timing patterns
- MTTR: Mean time to recovery, incident response times
- Rollback Metrics: Rollback frequency, reasons, prevention strategies
- Artifact Patterns: Build sizes, types, optimization opportunities
- Overview Dashboard: High-level KPIs and trends
- Security Dashboard: Security posture and vulnerability analysis
- Quality Dashboard: Code quality metrics and improvement areas
- Deployment Dashboard: Deployment patterns and success rates
- Performance Dashboard: System performance and resource usage
- GitHub Actions Logs: Workflow performance, errors, bottlenecks
- Application Logs: Application health, performance, user impact
- Deployment Logs: Deployment success, failures, rollback patterns
- Intelligent Insights: AI-generated recommendations and trend analysis
```bash
# Clone the repository
git clone <repository-url>
cd action-intellicode-suite

# Install dependencies
npm install

# Start development server
npm run start:dev

# Run tests
npm test

# Run linting
npm run lint
```

```bash
# Build the Docker image
docker build -t llm-cicd-service .

# Run locally
docker run -p 3000:3000 \
  -e OPENAI_API_KEY="your-key" \
  llm-cicd-service
```

```bash
# Build and push to GitHub Container Registry
docker build -t ghcr.io/your-org/llm-cicd-service:latest .
docker push ghcr.io/your-org/llm-cicd-service:latest
```

- Non-root execution: Container runs as non-root user
- Security headers: Helmet.js sets standard HTTP security headers
- Input validation: All inputs are validated and sanitized
- Rate limiting: Built-in protection against abuse
- Secrets management: Secure handling of API keys and tokens
- Multi-stage Docker build: Optimized container size
- Health checks: Built-in health monitoring
- Graceful shutdown: Proper signal handling
- Request timeouts: Configurable timeout limits
- Resource limits: Memory and CPU constraints (see the sketch below)
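Where these limits are enforced is not spelled out here; at the container level they can be applied with standard Docker flags:

```bash
# Illustrative: cap the service at 512 MB of memory and one CPU
docker run -d \
  --memory=512m \
  --cpus=1.0 \
  -p 3000:3000 \
  -e OPENAI_API_KEY="your-openai-api-key" \
  ghcr.io/your-org/llm-cicd-service:latest
```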
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- Documentation: Check this README and inline code comments
- Issues: Report bugs and request features via GitHub Issues
- Discussions: Join community discussions in GitHub Discussions
- v1.0.0: Initial release with core LLM functionality
- v1.1.0: Added batch processing and improved error handling
- v1.2.0: Enhanced security features and performance optimizations
Built for running GitHub Actions with super intelligence.