GitHub Analytics Dashboard for Open Source Project Metrics.
```bash
# Start all services (Frontend, Backend, Database)
./start.sh

# Or manually:
docker compose up --build
```
This will start:
- Frontend (Next.js): http://localhost:3000
- Backend (FastAPI): http://localhost:8000
- Backend interactive API docs (OpenAPI): http://localhost:8000/docs
- ClickHouse Database: localhost:9001
- Database initialization: automatically creates the table and populates it with 1000 dummy events
```bash
# View logs for a specific service
docker compose logs -f frontend
docker compose logs -f backend

# Stop all services
docker compose down
```
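For orientation, the compose layout implied by the service list above might look roughly like the following sketch. This is an assumption, not the project's actual file; service names, build contexts, and the 9001-to-9000 ClickHouse port mapping are guesses, so check the repository's real docker-compose.yml.

```yaml
# Illustrative sketch only; the real docker-compose.yml may differ.
services:
  frontend:             # Next.js app on http://localhost:3000
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend
  backend:              # FastAPI app on http://localhost:8000
    build: ./backend
    ports:
      - "8000:8000"
    depends_on:
      - clickhouse
  clickhouse:           # ClickHouse reachable at localhost:9001
    image: clickhouse/clickhouse-server
    ports:
      - "9001:9000"     # assumed mapping of host port 9001 to the native port
```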
The Open Source (OS) Score is a composite metric designed to provide an assessment of a project's health and responsiveness. It ranges from 0 to 100, where a higher score indicates better performance.
The score is computed from the project's performance data using a predefined configuration of metrics, weights, and normalization ranges. Here's a step-by-step breakdown of the calculation:
The OS Score is derived from several key performance indicators (KPIs). Each KPI is assigned a specific weight, determining its contribution to the final score. The current metrics and their configurations are:
- First Response Time (firstResponseTime):
- Weight: 0.40 (40% of the total score). Measures how quickly maintainers or contributors initially respond to new issues.
- Normalization Range:
- Best Case (min): 0 days/hours (instant response)
- Worst Case (max): 7 days (responses taking 7 days or longer will receive the minimum score for this metric component)
- Average Issue Resolution Time (avgIssueResolution):
- Weight: 0.20 (20% of the total score). Indicates the average time taken to close issues once they are opened.
- Normalization Range:
- Best Case (min): 0 days/hours
- Worst Case (max): 30 days (issues taking 30 days or longer to resolve will receive the minimum score for this metric component)
- Pull Request (PR) Review Time (prReviewTime):
- Weight: 0.40 (40% of the total score). Reflects how quickly the project team reviews pull requests.
- Normalization Range:
- Best Case (min): 0 days/hours
- Worst Case (max): 5 days (PRs taking 5 days or longer for review will receive the minimum score for this metric component)
Note: The sum of the weights for all included metrics is 1.0, representing 100% of the score. (A sketch of this configuration follows below.)
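For concreteness, this configuration might be expressed as the following TypeScript sketch. The interface and constant names (MetricConfig, METRIC_CONFIGS) are illustrative assumptions, not the project's actual identifiers; only the weights and ranges come from the description above.

```typescript
// Illustrative sketch of the metric configuration described above.
// Names are assumptions; the weights and ranges mirror the list above.
interface MetricConfig {
  weight: number;   // fraction of the total score; all weights sum to 1.0
  minValue: number; // best case (days)
  maxValue: number; // worst reasonable case (days)
}

const METRIC_CONFIGS: Record<string, MetricConfig> = {
  firstResponseTime:  { weight: 0.4, minValue: 0, maxValue: 7 },
  avgIssueResolution: { weight: 0.2, minValue: 0, maxValue: 30 },
  prReviewTime:       { weight: 0.4, minValue: 0, maxValue: 5 },
};
```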
Before combining, each raw metric value is normalized to a common scale of 0 to 100. This step ensures that different metrics (e.g., time in days) can be fairly compared and aggregated. The normalization process for each metric works as follows:
- Input: The raw currentValue of the metric (e.g., average issue resolution time in days), along with its configured minValue (best case) and maxValue (worst reasonable case).
- Capping: The currentValue is first "capped" or "clamped" to ensure it falls within the [minValue, maxValue] range.
- If currentValue < minValue, it's treated as minValue.
- If currentValue > maxValue, it's treated as maxValue. This means performance worse than the defined maxValue does not further penalize the score for that metric component; it simply receives the score associated with maxValue.
- Normalization Formula: Since all currently configured metrics are "lower is better" (e.g., a shorter response time is preferable), the following formula is used:
normalized_value = (maxValue - cappedValue) / (maxValue - minValue)
- If cappedValue is minValue (best performance), normalized_value becomes 1.
- If cappedValue is maxValue (worst performance boundary), normalized_value becomes 0.
- Values in between are scaled linearly.
- Scaling to 0-100: The normalized_value (which is between 0 and 1) is then multiplied by 100, producing the final normalized score for that metric in the range 0 to 100 inclusive.
- Edge Case: If minValue and maxValue are identical, the function returns 100 if the currentValue matches this point, and 0 otherwise, to prevent division by zero.
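A minimal TypeScript sketch of this normalization step, assuming a standalone helper (the function name normalizeMetric is illustrative; the capping, scaling, and edge-case rules are exactly those described above):

```typescript
// Normalizes a "lower is better" raw value to a 0-100 score, following the
// capping, linear-scaling, and edge-case rules described above.
function normalizeMetric(currentValue: number, minValue: number, maxValue: number): number {
  // Edge case: identical bounds would make the formula divide by zero.
  if (minValue === maxValue) {
    return currentValue === minValue ? 100 : 0;
  }
  // Cap (clamp) the raw value into [minValue, maxValue].
  const cappedValue = Math.max(minValue, Math.min(currentValue, maxValue));
  // Lower is better: minValue maps to 1, maxValue maps to 0, linear in between.
  const normalizedValue = (maxValue - cappedValue) / (maxValue - minValue);
  // Scale to the 0-100 range.
  return normalizedValue * 100;
}
```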
Once each relevant metric has a normalized score (0-100), the calculateOsScore function computes the final OS Score. For each configured metric that has a value provided:
- The normalizedScore (from step 2) is multiplied by its config.weight (from step 1).
- This individual weightedMetricScore is added to the totalWeightedScore.
- Example Contribution:
- If firstResponseTime has a normalized score of 90, its contribution to the total is 90 * 0.40 = 36.
- If avgIssueResolution has a normalized score of 80, its contribution is 80 * 0.20 = 16.
After processing all configured metrics, the totalWeightedScore represents the sum of all individual weighted normalized scores. This sum is then rounded to two decimal places and returned as the project's OS Score.
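Putting the pieces together, calculateOsScore might look like the following sketch. It reuses the MetricConfig and normalizeMetric sketches from above; the exact signature of the project's real function is an assumption.

```typescript
// Computes the final OS Score from raw metric values, following the
// weighting and rounding steps described above.
function calculateOsScore(
  rawValues: Record<string, number | undefined>,
  configs: Record<string, MetricConfig> = METRIC_CONFIGS,
): number {
  let totalWeightedScore = 0;
  for (const [metricName, config] of Object.entries(configs)) {
    const currentValue = rawValues[metricName];
    if (currentValue === undefined) continue; // skip metrics with no value provided
    const normalizedScore = normalizeMetric(currentValue, config.minValue, config.maxValue);
    totalWeightedScore += normalizedScore * config.weight; // weighted contribution
  }
  // Round to two decimal places.
  return Math.round(totalWeightedScore * 100) / 100;
}
```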
Let's walk through an example with the following raw metric values:
- First Response Time: 2.1 days
- Average Issue Resolution Time: 3 days
- Pull Request (PR) Review Time: 1.5 days
1. First Response Time (firstResponseTime)
   - Raw Value: 2.1 days
   - Config: weight: 0.40, min: 0, max: 7
   - Capping: cappedValue = Math.max(0, Math.min(2.1, 7)) = 2.1
   - Normalization (0-1): (7 - 2.1) / (7 - 0) = 4.9 / 7 = 0.7
   - Normalized Score (0-100): 0.7 * 100 = 70
   - Weighted Contribution: 70 * 0.40 = 28
2. Average Issue Resolution Time (avgIssueResolution)
   - Raw Value: 3 days
   - Config: weight: 0.20, min: 0, max: 30
   - Capping: cappedValue = Math.max(0, Math.min(3, 30)) = 3
   - Normalization (0-1): (30 - 3) / (30 - 0) = 27 / 30 = 0.9
   - Normalized Score (0-100): 0.9 * 100 = 90
   - Weighted Contribution: 90 * 0.20 = 18
3. Pull Request (PR) Review Time (prReviewTime)
   - Raw Value: 1.5 days
   - Config: weight: 0.40, min: 0, max: 5
   - Capping: cappedValue = Math.max(0, Math.min(1.5, 5)) = 1.5
   - Normalization (0-1): (5 - 1.5) / (5 - 0) = 3.5 / 5 = 0.7
   - Normalized Score (0-100): 0.7 * 100 = 70
   - Weighted Contribution: 70 * 0.40 = 28
4. Final OS Score Calculation
   - OS Score = Weighted Contribution (First Response Time) + Weighted Contribution (Avg Issue Resolution) + Weighted Contribution (PR Review Time)
   - OS Score = 28 + 18 + 28 = 74.00
So, for a project with these input metrics, the calculated OS Score would be 74.00.
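Running the calculateOsScore sketch from above with these inputs reproduces the result:

```typescript
const score = calculateOsScore({
  firstResponseTime: 2.1,  // days
  avgIssueResolution: 3,   // days
  prReviewTime: 1.5,       // days
});
console.log(score.toFixed(2)); // "74.00"
```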