
Add feat ai risk summary #272

Open
S3DFX-CYBER wants to merge 11 commits into OWASP-BLT:main from S3DFX-CYBER:add-feat-ai-summary

Conversation

@S3DFX-CYBER
Contributor

@S3DFX-CYBER S3DFX-CYBER commented Mar 6, 2026

Closes #185
Introduce an optional AI-based summary generator that:

  • Takes existing deterministic rule outputs as input
  • Produces a short, professional risk narrative
  • Does not replace or alter core rule logic
  • Includes a deterministic fallback if the AI call fails
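The pattern described above, trying the AI call and falling back deterministically on any failure, can be sketched as follows (hypothetical `call_model` stands in for the real AI client, which this PR does not yet wire up):

```python
def generate_summary(readiness: dict) -> str:
    """Return an AI risk narrative, or a deterministic fallback on any failure."""

    def fallback(data: dict) -> str:
        # Deterministic summary derived only from the rule outputs
        blockers = data.get("blockers", [])
        if blockers:
            return f"Not merge-ready: {len(blockers)} blocker(s), e.g. {blockers[0]}."
        return "No blockers detected; PR appears merge-ready."

    try:
        # call_model is a hypothetical placeholder; since no AI client exists
        # here, this raises NameError and the fallback branch is taken
        return call_model(f"Summarize PR risk: {readiness}")
    except Exception:
        return fallback(readiness)
```

Because the core rule outputs feed both paths, the AI layer can fail (or be absent) without changing what the rules decided.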

Summary by CodeRabbit

  • New Features
    • Added a sortable "Risk Summary" column to the pull request table with contextual, HTML-escaped tooltips.
    • Each PR now shows a one-paragraph risk summary (deterministic fallback content for now); cells display "-" until a summary is available and use color-coded badges to indicate readiness (green/amber/red).
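The HTML-escaping the summary mentions can be illustrated with the standard library (an illustration only; the PR's actual tooltip code lives in the frontend):

```python
from html import escape

# Any blocker text shown in a tooltip must be escaped so markup in it is inert
tooltip = 'Blockers: <script>alert(1)</script> & "quotes"'
print(escape(tooltip, quote=True))
# → Blockers: &lt;script&gt;alert(1)&lt;/script&gt; &amp; &quot;quotes&quot;
```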

Added AI-powered risk summary generation for PR readiness.
@owasp-blt

owasp-blt bot commented Mar 6, 2026

👋 Thanks for opening this pull request, @S3DFX-CYBER!

Before your PR is reviewed, please ensure:

  • Your code follows the project's coding style and guidelines.
  • You have written or updated tests for your changes.
  • The commit messages are clear and descriptive.
  • You have linked any relevant issues (e.g., Closes #123).

🔍 Our team will review your PR shortly. If you have questions, feel free to ask in the comments.

🚀 Keep up the great work! — OWASP BLT

@owasp-blt

owasp-blt bot commented Mar 6, 2026

📊 Monthly Leaderboard

Hi @S3DFX-CYBER! Here's how you rank for March 2026:

Rank | User         | Open PRs | PRs (merged) | PRs (closed) | Reviews | Comments | Total
#17  | @YasogaN     | 0        | 12           | 0            | 0       | 0        | 120
#18  | @S3DFX-CYBER | 4        | 13           | 12           | 0       | 0        | 110
#19  | @DonnieBLT   | 5        | 5            | 13           | 2       | 29       | 97

Scoring this month (across OWASP-BLT org): Open PRs (+1 each), Merged PRs (+10), Closed (not merged) (−2), Reviews (+5; first two per PR in-month), Comments (+2, excludes CodeRabbit). Run /leaderboard on any issue or PR to see your rank!
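The stated weights reproduce every total in the table; a minimal sketch (ignoring the "first two reviews per PR" cap, which none of the rows above exercise):

```python
def monthly_score(open_prs: int, merged: int, closed: int,
                  reviews: int, comments: int) -> int:
    """Leaderboard total: +1 per open PR, +10 merged, -2 closed-unmerged,
    +5 per review, +2 per comment."""
    return open_prs + 10 * merged - 2 * closed + 5 * reviews + 2 * comments

print(monthly_score(4, 13, 12, 0, 0))  # → 110, matching @S3DFX-CYBER's row
```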

@github-actions

github-actions bot commented Mar 6, 2026

🍃 PR Readiness Check

Check the readiness of this PR on Leaf:
👉 Open on Leaf

Leaf reviews pull requests for operational readiness, security risks, and production-impacting changes before they ship.

@github-actions github-actions bot added the files-changed: 2 PR changes 2 files label Mar 6, 2026
@coderabbitai

coderabbitai bot commented Mar 6, 2026

Walkthrough

Adds a "Risk Summary" column to the PR table UI and augments PR readiness computation with a new risk_summary string produced by generate_ai_risk_summary() (which currently builds a prompt but returns a deterministic generate_fallback_summary(); no external AI call implemented).

Changes

UI Layer (public/index.html):
Inserted a sortable "Risk Summary" table header (th[data-sort-column="risk_summary"], min-width styling, tooltip) and a per-PR cell (td id="readiness-risk-summary-${pr.id}") initialized to "-". Updated the inline readiness cell rendering logic to display the summary with conditional color (emerald, red, amber) based on merge readiness and blockers.

Risk Summary Generation (src/utils.py):
Added generate_ai_risk_summary(pr_readiness_data) and generate_fallback_summary(pr_readiness_data) functions for generating deterministic risk summary text. Updated calculate_pr_readiness(...) to include the new risk_summary field in the returned dictionary. Minor comment text adjustment in parse_pr_url; minor whitespace cleanup.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Linked Issues check: ⚠️ Warning. The PR partially addresses #185's requirements but contains unresolved syntax errors and an incomplete implementation that prevent the code from running. Resolution: fix the remaining issues reported by coderabbitai (remove the backtick syntax-error line, replace the undefined call_gemini_api with the fallback, and add frontend logic to populate the Risk Summary cells).
  • Title check: ❓ Inconclusive. The title "Add feat ai risk summary" is vague and grammatically awkward, using non-standard phrasing that doesn't clearly convey the main change. Resolution: revise the title to be clearer and more grammatically correct, such as "Add AI-based risk summary feature" or "Introduce optional AI risk summarizer".
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Out of Scope Changes check: ✅ Passed. All changes directly align with issue #185's scope: adding the optional AI risk summary, fallback logic, and frontend display.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@S3DFX-CYBER
Contributor Author

@coderabbitai why is one test failing?

@coderabbitai

coderabbitai bot commented Mar 6, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Get the PR details and check runs
gh pr view 272 --repo OWASP-BLT/BLT-Leaf --json statusCheckRollup,headRefOid

# Get check runs for the PR
gh pr checks 272 --repo OWASP-BLT/BLT-Leaf

Length of output: 1868


🏁 Script executed:

#!/bin/bash
# Get the logs for the failing workflow run
gh run view 22777989399 --repo OWASP-BLT/BLT-Leaf --log-failed

Length of output: 1147


🏁 Script executed:

#!/bin/bash
# Find and inspect utils.py around line 580
fd utils.py --exec sed -n '565,600p' {}

Length of output: 1048


@S3DFX-CYBER: The failing test is the "Check Python syntax" step in the "Test Data Display" workflow. It's caused by an IndentationError in src/utils.py at line 580.

The return statement (and its closing brace) uses 2-space indentation, while the rest of the function body uses 4-space indentation. Python sees the return keyword as an unrecognized indentation level and raises the error.

Here's the problematic block:

    # ✅ 4-space indented (correct)
    if merge_ready:
        classification = 'READY_TO_MERGE'
    ...

  return {          # ❌ Only 2 spaces — should be 4 spaces
    'overall_score': overall_score,
    ...
    'risk_summary': generate_ai_risk_summary({...})
}                   # ❌ Closing brace also misaligned

The fix is to change the return block to use consistent 4-space indentation:

    return {
        'overall_score': overall_score,
        'ci_score': ci_score,
        'review_score': review_score,
        'classification': classification,
        'merge_ready': merge_ready,
        'blockers': blockers,
        'warnings': warnings,
        'recommendations': recommendations,
        'risk_summary': generate_ai_risk_summary({
            'overall_score': overall_score,
            'ci_score': ci_score,
            'review_score': review_score,
            'classification': classification,
            'merge_ready': merge_ready,
            'blockers': blockers,
            'warnings': warnings,
            'recommendations': recommendations
        })
    }

This was likely introduced when the risk_summary key was added to the return dict as part of this PR — the return statement's indentation wasn't corrected at the same time.
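The failure mode is easy to reproduce locally with the built-in compile() (a standalone illustration, not part of the PR):

```python
# A function body that mixes 4-space and 2-space indentation,
# like the broken return block described above
bad_src = (
    "def f():\n"
    "    x = 1\n"
    "  return x\n"  # 2-space indent where 4 spaces are expected
)

try:
    compile(bad_src, "utils.py", "exec")
except IndentationError as err:
    # Reports the offending line and the reason Python rejected it
    print(f"line {err.lineno}: {err.msg}")
```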

@S3DFX-CYBER
Contributor Author

@coderabbitai can you apply these fixes to the appropriate files and provide me with the updated files so I can apply them?

@coderabbitai

coderabbitai bot commented Mar 6, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Get the full context of the problematic function in utils.py
fd utils.py --exec sed -n '520,620p' {}

Length of output: 3586


@S3DFX-CYBER: Here is the corrected return block for src/utils.py. Replace the misindented section (starting from return { to the closing }) with the following:

    return {
        'overall_score': overall_score,
        'ci_score': ci_score,
        'review_score': review_score,
        'classification': classification,
        'merge_ready': merge_ready,
        'blockers': blockers,
        'warnings': warnings,
        'recommendations': recommendations,
        'risk_summary': generate_ai_risk_summary({
            'overall_score': overall_score,
            'ci_score': ci_score,
            'review_score': review_score,
            'classification': classification,
            'merge_ready': merge_ready,
            'blockers': blockers,
            'warnings': warnings,
            'recommendations': recommendations
        })
    }

What changed:

  • return { — corrected from 2 spaces → 4 spaces
  • All dict keys inside — corrected from 2 spaces → 8 spaces (a 4-space indent relative to return)
  • The generate_ai_risk_summary({...}) inner dict — corrected from 4 spaces → 12 spaces
  • The closing }) and } — corrected from 0/2 spaces → 8/4 spaces

This is the only change needed in src/utils.py. Once applied, the Python syntax check should pass.
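To confirm before pushing, the CI syntax step can be approximated locally with the stdlib py_compile module (a sketch; the actual workflow command may differ):

```python
import os
import py_compile
import tempfile

def syntax_ok(source: str) -> bool:
    """Byte-compile source the way a 'Check Python syntax' step would."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as fh:
        fh.write(source)
        path = fh.name
    try:
        py_compile.compile(path, doraise=True)
        return True
    except py_compile.PyCompileError:
        return False
    finally:
        os.unlink(path)
```

Running syntax_ok() on the fixed file contents should return True once the return block uses consistent 4-space indentation.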

@S3DFX-CYBER
Contributor Author

@coderabbitai can you provide the updated utils.py with the fix, so I can apply it?

@coderabbitai

coderabbitai bot commented Mar 6, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Get the full content of utils.py
fd utils.py --exec cat {}

Length of output: 23965


@S3DFX-CYBER: Here is the complete updated src/utils.py with the indentation fix applied. Only the return block at the end of calculate_pr_readiness() has been changed.

📄 Updated src/utils.py (click to expand)
"""Utility functions for PR parsing and analysis"""

import re
from datetime import datetime, timezone

# Score multiplier when changes are requested
# Reduces overall readiness score by 50% when reviewers request changes
_CHANGES_REQUESTED_SCORE_MULTIPLIER = 0.5

# Score multiplier when PR has merge conflicts
# Reduces overall readiness score by 33% when mergeable state is 'dirty' (conflicts)
_MERGE_CONFLICTS_SCORE_MULTIPLIER = 0.67


def parse_pr_url(pr_url):
    """
    Parse GitHub PR URL to extract owner, repo, and PR number.
    
    Security Hardening (Issue `#45`):
    - Type validation to prevent type confusion attacks
    - Anchored regex pattern to block malformed URLs with trailing junk
    - Raises ValueError instead of returning None for better error handling
    """
    # FIX Issue `#45`: Type validation
    if not isinstance(pr_url, str):
        raise ValueError("PR URL must be a string")
    
    if not pr_url:
        raise ValueError("PR URL is required")
    
    pr_url = pr_url.strip().rstrip('/')
    
    # FIX Issue `#45`: Anchored regex - must match EXACTLY, no trailing junk allowed
    pattern = r'^https?://github\.com/([^/]+)/([^/]+)/pull/(\d+)$'
    match = re.match(pattern, pr_url)
    
    if not match:
        # FIX Issue `#45`: Raise error instead of returning None
        raise ValueError("Invalid GitHub PR URL. Format: https://github.com/OWNER/REPO/pull/NUMBER")
    
    return {
        'owner': match.group(1),
        'repo': match.group(2),
        'pr_number': int(match.group(3))
    }


def parse_repo_url(url):
    """Parse GitHub Repo URL to extract owner and repo name"""
    if not url: return None
    url = url.strip().rstrip('/')
    pattern = r'https?://github\.com/([^/]+)/([^/]+)(?:/.*)?$'
    match = re.match(pattern, url)
    if match:
        return {
            'owner': match.group(1),
            'repo': match.group(2)
        }
    return None


def parse_org_url(url):
    """Parse GitHub Organization/User URL to extract the org/user name.  
    Returns dict with 'owner' key, or None if not a valid org URL.
    """
    if not url:
        return None
    url = url.strip().rstrip('/')
    # Match org/user URL: github.com/<owner> with no further path segments
    pattern = r'^https?://github\.com/([A-Za-z0-9_.-]+)$'
    match = re.match(pattern, url)
    if match:
        owner = match.group(1)
        # Exclude GitHub reserved paths that aren't orgs/users
        reserved = {'settings', 'organizations', 'explore', 'marketplace',
                    'notifications', 'new', 'login', 'signup', 'features',
                    'enterprise', 'pricing', 'topics', 'collections',
                    'trending', 'sponsors', 'about', 'security', 'pulls',
                    'issues', 'codespaces', 'discussions'}
        if owner.lower() in reserved:
            return None
        return {'owner': owner}
    return None


def calculate_review_status(reviews_data):
    """
    Calculate overall review status from reviews data.
    
    Args:
        reviews_data: List of review objects from GitHub API
        
    Returns:
        str: 'pending', 'approved', or 'changes_requested'
    """
    review_status = 'pending'
    if reviews_data:
        # Filter out reviews without submitted_at and sort by timestamp
        valid_reviews = [r for r in reviews_data if r.get('submitted_at')]
        sorted_reviews = sorted(valid_reviews, key=lambda x: x.get('submitted_at', ''))
        latest_reviews = {}
        for review in sorted_reviews:
            # Safely access user field - can be null for deleted accounts
            user = review.get('user')
            if user and user.get('login'):
                latest_reviews[user['login']] = review['state']

        # Determine overall status: changes_requested takes precedence over approved
        if 'CHANGES_REQUESTED' in latest_reviews.values():
            review_status = 'changes_requested'
        elif 'APPROVED' in latest_reviews.values():
            review_status = 'approved'
    
    return review_status


def parse_github_timestamp(timestamp_str):
    """Parse GitHub ISO 8601 timestamp to datetime object"""
    try:
        # GitHub timestamps are in format: 2024-01-15T10:30:45Z
        return datetime.strptime(timestamp_str.replace('Z', '+00:00'), '%Y-%m-%dT%H:%M:%S%z')
    except Exception as exc:
        # Raise error instead of silently using current time to avoid incorrect event ordering
        raise ValueError(f"Invalid GitHub timestamp: {timestamp_str!r}") from exc


def build_pr_timeline(timeline_data):
    """
    Build unified chronological timeline from PR events
    
    Args:
        timeline_data: Dict with commits, reviews, review_comments, issue_comments
    
    Returns:
        List of event dicts sorted by timestamp:
        {
            'type': 'commit' | 'review' | 'review_comment' | 'issue_comment',
            'timestamp': datetime object,
            'author': str,
            'data': dict with event-specific data
        }
    """
    events = []
    
    # Process commits
    for commit in timeline_data.get('commits', []):
        try:
            commit_data = commit.get('commit', {})
            author_data = commit_data.get('author', {})
            
            events.append({
                'type': 'commit',
                'timestamp': parse_github_timestamp(author_data.get('date', '')),
                'author': commit.get('author', {}).get('login', author_data.get('name', 'Unknown')),
                'data': {
                    'sha': commit.get('sha', '')[:7],
                    'message': commit_data.get('message', '').split('\n')[0]  # First line only
                }
            })
        except Exception:
            continue  # Skip malformed commits
    
    # Process reviews
    for review in timeline_data.get('reviews', []):
        try:
            # Skip pending reviews
            if review.get('state') == 'PENDING':
                continue
            
            events.append({
                'type': 'review',
                'timestamp': parse_github_timestamp(review.get('submitted_at', '')),
                'author': review.get('user', {}).get('login', 'Unknown'),
                'data': {
                    'state': review.get('state', ''),  # APPROVED, CHANGES_REQUESTED, COMMENTED
                    'body': review.get('body', '')
                }
            })
        except Exception:
            continue
    
    # Process review comments (inline code comments)
    for comment in timeline_data.get('review_comments', []):
        try:
            events.append({
                'type': 'review_comment',
                'timestamp': parse_github_timestamp(comment.get('created_at', '')),
                'author': comment.get('user', {}).get('login', 'Unknown'),
                'data': {
                    'body': comment.get('body', ''),
                    'path': comment.get('path', ''),
                    'in_reply_to': comment.get('in_reply_to_id')
                }
            })
        except Exception:
            continue
    
    # Process issue comments (general PR comments)
    for comment in timeline_data.get('issue_comments', []):
        try:
            events.append({
                'type': 'issue_comment',
                'timestamp': parse_github_timestamp(comment.get('created_at', '')),
                'author': comment.get('user', {}).get('login', 'Unknown'),
                'data': {
                    'body': comment.get('body', '')
                }
            })
        except Exception:
            continue
    
    # Sort all events by timestamp
    events.sort(key=lambda x: x['timestamp'])
    
    return events


def analyze_review_progress(timeline, pr_author):
    """
    Analyze review feedback loops and author responsiveness
    
    Args:
        timeline: List of timeline events from build_pr_timeline()
        pr_author: GitHub login of PR author
    
    Returns:
        Dict with:
        {
            'feedback_loops': List of feedback/response pairs,
            'total_feedback_count': int,
            'responded_count': int,
            'response_rate': float (0-1),
            'awaiting_author': bool,
            'awaiting_reviewer': bool,
            'stale_feedback': List of unaddressed feedback,
            'latest_review_state': str or None,
            'last_reviewer_action': datetime or None,
            'last_author_action': datetime or None
        }
    """
    feedback_loops = []
    latest_review_state = None
    last_reviewer_action = None
    last_author_action = None
    
    # Iterate through timeline to detect feedback patterns
    for event in timeline:
        author = event['author']
        timestamp = event['timestamp']
        event_type = event['type']
        
        # Track reviewer actions (reviews and comments from non-authors)
        if event_type in ['review', 'review_comment'] and author != pr_author:
            last_reviewer_action = timestamp
            
            # Update latest review state
            if event_type == 'review':
                latest_review_state = event['data'].get('state', '')
            
            # Create feedback loop entry
            feedback_loops.append({
                'reviewer': author,
                'feedback_time': timestamp,
                'feedback_type': event_type,
                'author_responded': False,
                'response_time': None,
                'response_type': None,
                'response_delay_hours': None
            })
        
        # Track author actions (commits and comments from author)
        elif author == pr_author and event_type in ['commit', 'issue_comment', 'review_comment']:
            last_author_action = timestamp
            
            # Check if this responds to pending feedback
            # Match to the most recent unresponded feedback
            for loop in reversed(feedback_loops):
                if not loop['author_responded'] and loop['feedback_time'] < timestamp:
                    loop['author_responded'] = True
                    loop['response_time'] = timestamp
                    loop['response_type'] = event_type
                    
                    # Calculate delay in hours
                    delay = (timestamp - loop['feedback_time']).total_seconds() / 3600
                    loop['response_delay_hours'] = round(delay, 1)
                    break
    
    # Calculate response metrics
    total_feedback = len(feedback_loops)
    responded_count = sum(1 for loop in feedback_loops if loop['author_responded'])
    response_rate = responded_count / total_feedback if total_feedback > 0 else 1.0
    
    # Determine current state
    awaiting_author = (
        latest_review_state == 'CHANGES_REQUESTED' or
        (last_reviewer_action and 
         (not last_author_action or last_reviewer_action > last_author_action))
    )
    
    awaiting_reviewer = (
        not awaiting_author and
        last_author_action and
        (not last_reviewer_action or last_author_action > last_reviewer_action)
    )
    
    # Find stale feedback (older than 3 days without response)
    now = datetime.now(timezone.utc)
    stale_threshold_hours = 72  # 3 days
    
    stale_feedback = []
    for loop in feedback_loops:
        if not loop['author_responded']:
            hours_old = (now - loop['feedback_time']).total_seconds() / 3600
            if hours_old > stale_threshold_hours:
                stale_feedback.append({
                    'reviewer': loop['reviewer'],
                    'feedback_type': loop['feedback_type'],
                    'days_old': round(hours_old / 24, 1)
                })
    
    return {
        'feedback_loops': feedback_loops,
        'total_feedback_count': total_feedback,
        'responded_count': responded_count,
        'response_rate': response_rate,
        'awaiting_author': awaiting_author,
        'awaiting_reviewer': awaiting_reviewer,
        'stale_feedback': stale_feedback,
        'latest_review_state': latest_review_state,
        'last_reviewer_action': last_reviewer_action.isoformat() if last_reviewer_action else None,
        'last_author_action': last_author_action.isoformat() if last_author_action else None
    }


def classify_review_health(review_data):
    """
    Classify review health and assign score (0-100)
    
    Args:
        review_data: Output from analyze_review_progress()
    
    Returns:
        Tuple of (classification: str, score: int)
        
        Classifications:
        - APPROVED: 90-100 - Reviews approved
        - ACTIVE: 70-85 - Good progress, responsive
        - AWAITING_REVIEWER: 60-80 - Waiting on reviewers
        - AWAITING_AUTHOR: 35-55 - Needs author response
        - STALLED: 10-30 - No activity or unaddressed feedback
        - NO_ACTIVITY: 50 - No reviews or feedback yet
    """
    response_rate = review_data['response_rate']
    stale_count = len(review_data['stale_feedback'])
    awaiting_author = review_data['awaiting_author']
    awaiting_reviewer = review_data['awaiting_reviewer']
    latest_state = review_data['latest_review_state']
    total_feedback = review_data['total_feedback_count']
    
    # No feedback yet
    if total_feedback == 0:
        return ('NO_ACTIVITY', 50)
    
    # Approved state
    if latest_state == 'APPROVED':
        return ('APPROVED', 95)
    
    # Stalled (has stale feedback)
    if stale_count > 0:
        # More stale feedback = lower score
        score = max(10, 50 - (stale_count * 15))
        return ('STALLED', score)
    
    # Awaiting author with poor response rate
    if awaiting_author and response_rate < 0.5:
        classification = 'AWAITING_AUTHOR'
        score = 35
    # Awaiting author with good response rate
    elif awaiting_author:
        classification = 'AWAITING_AUTHOR'
        score = 55
    # Awaiting reviewer
    elif awaiting_reviewer:
        # Higher score if author has been responsive
        classification = 'AWAITING_REVIEWER'
        score = 70 + int(response_rate * 10)
        score = min(score, 80)
    # Active (good back and forth)
    elif response_rate > 0.7:
        classification = 'ACTIVE'
        score = 85
    # Default active state
    else:
        classification = 'ACTIVE'
        score = 70
    
    # Apply penalty if changes were requested
    if latest_state == 'CHANGES_REQUESTED':
        score = max(0, score - 10)
    
    return (classification, score)


def calculate_ci_confidence(checks_passed, checks_failed, checks_skipped):
    """
    Calculate CI confidence score from check results
    
    Args:
        checks_passed: Number of passing checks
        checks_failed: Number of failing checks
        checks_skipped: Number of skipped checks
    
    Returns:
        int: Confidence score 0-100
    """
    total_checks = checks_passed + checks_failed + checks_skipped
    
    # No checks = neutral score
    if total_checks == 0:
        return 50
    
    # All failed = 0
    if checks_passed == 0 and checks_failed > 0:
        return 0
    
    # All passed = 100
    if checks_failed == 0 and checks_passed > 0:
        return 100
    
    # Calculate based on pass rate, penalize failures more than skipped
    pass_rate = checks_passed / total_checks
    fail_rate = checks_failed / total_checks
    skip_rate = checks_skipped / total_checks
    
    # Weighted score: passes add, failures subtract (reduced for flaky test tolerance), skips slightly reduce
    score = (pass_rate * 100) - (fail_rate * 50) - (skip_rate * 20)
    
    return max(0, min(100, int(score)))


def calculate_pr_readiness(pr_data, review_classification, review_score):
    """
    Calculate overall PR readiness combining CI and review health
    
    Args:
        pr_data: Dict with PR info including CI checks
        review_classification: str from classify_review_health
        review_score: int from classify_review_health
    
    Returns:
        Dict with:
        {
            'overall_score': int 0-100,
            'ci_score': int 0-100,
            'review_score': int 0-100,
            'classification': str,
            'merge_ready': bool,
            'blockers': List[str],
            'warnings': List[str],
            'recommendations': List[str]
        }
    """
    # Calculate CI score
    ci_score = calculate_ci_confidence(
        pr_data.get('checks_passed', 0),
        pr_data.get('checks_failed', 0),
        pr_data.get('checks_skipped', 0)
    )
    
    # Weighted combination: 45% CI, 55% Review (reduced CI weight due to flaky tests)
    overall_score_raw = (ci_score * 0.45) + (review_score * 0.55)
    
    # Reduce readiness by 50% when changes are requested
    if review_classification == 'AWAITING_AUTHOR':
        overall_score_raw *= _CHANGES_REQUESTED_SCORE_MULTIPLIER
    
    # Reduce readiness by 33% when PR has merge conflicts.
    # Note: this multiplier compounds with other score multipliers (e.g. changes
    # requested), so a PR with both conditions would be scaled by
    # 0.5 * 0.67 = 0.335 (~66.5% total reduction).
    mergeable_state = pr_data.get('mergeable_state', '')
    if mergeable_state == 'dirty':
        overall_score_raw *= _MERGE_CONFLICTS_SCORE_MULTIPLIER
    
    overall_score = int(overall_score_raw)
    
    # Force score to 0% for Draft PRs
    is_draft = pr_data.get('is_draft') == 1 or pr_data.get('is_draft') == True
    if is_draft:
        overall_score = 0
    
    # Deduct 3 points for each open conversation
    open_conversations_count = pr_data.get('open_conversations_count', 0)
    if open_conversations_count > 0:
        overall_score = max(0, overall_score - (open_conversations_count * 3))
    
    # Identify blockers, warnings, recommendations
    blockers = []
    warnings = []
    recommendations = []
    
    # Draft blocker
    if is_draft:
        blockers.append("PR is in draft mode")
        recommendations.append("Convert to 'Ready for review' when finished")
    
    # CI blockers (with tolerance for 1-2 flaky test failures)
    checks_failed = pr_data.get('checks_failed', 0)
    checks_skipped = pr_data.get('checks_skipped', 0)
    
    if checks_failed > 2:
        blockers.append(f"{checks_failed} CI check(s) failing")
        recommendations.append("Fix failing CI checks before merging")
    elif checks_failed > 0:
        warnings.append(f"{checks_failed} CI check(s) failing (possibly flaky tests)")
        recommendations.append("Verify if failures are from known flaky tests (Selenium, Docker)")
    
    if checks_skipped > 0:
        warnings.append(f"{checks_skipped} CI check(s) skipped")
    
    # Review blockers
    if review_classification == 'AWAITING_AUTHOR':
        blockers.append("Awaiting author response to feedback")
        recommendations.append("Address reviewer comments and push updates")
    
    if review_classification == 'STALLED':
        blockers.append("PR has stale unaddressed feedback")
        recommendations.append("Review and respond to old comments")
    
    if review_classification == 'NO_ACTIVITY':
        warnings.append("No review activity yet")
        recommendations.append("Request reviews from maintainers")
    
    if review_classification == 'AWAITING_REVIEWER':
        warnings.append("Awaiting reviewer approval")
        recommendations.append("Ping reviewers or request re-review")
    
    # PR state warnings
    if pr_data.get('state') == 'closed':
        blockers.append("PR is closed")
    
    if pr_data.get('is_merged') == 1:
        blockers.append("PR is already merged")
    
    mergeable_state = pr_data.get('mergeable_state', '')
    if mergeable_state == 'dirty':
        blockers.append("PR has merge conflicts")
        recommendations.append("Resolve merge conflicts with base branch")
    elif mergeable_state == 'blocked':
        warnings.append("PR is blocked by required status checks or reviews")
    
    # File change warnings
    files_changed = pr_data.get('files_changed', 0)
    if files_changed > 30:
        warnings.append(f"Large PR ({files_changed} files changed)")
        recommendations.append("Consider splitting into smaller PRs for easier review")
    
    # Open conversations warning
    if open_conversations_count > 0:
        warnings.append(f"{open_conversations_count} open conversation(s) unresolved")
        recommendations.append("Resolve open review conversations before merging")
    
    # Determine if merge ready
    merge_ready = (
        overall_score >= 70 and
        len(blockers) == 0 and
        review_classification in ['APPROVED', 'AWAITING_REVIEWER', 'ACTIVE']
    )
    
    # Overall classification
    if merge_ready:
        classification = 'READY_TO_MERGE'
    elif overall_score >= 60:
        classification = 'NEARLY_READY'
    elif overall_score >= 40:
        classification = 'NEEDS_WORK'
    else:
        classification = 'NOT_READY'

    return {
        'overall_score': overall_score,
        'ci_score': ci_score,
        'review_score': review_score,
        'classification': classification,
        'merge_ready': merge_ready,
        'blockers': blockers,
        'warnings': warnings,
        'recommendations': recommendations,
        'risk_summary': generate_ai_risk_summary({
            'overall_score': overall_score,
            'ci_score': ci_score,
            'review_score': review_score,
            'classification': classification,
            'merge_ready': merge_ready,
            'blockers': blockers,
            'warnings': warnings,
            'recommendations': recommendations
        })
    }


# Add AI summary feature
def generate_ai_risk_summary(pr_readiness_data):
    """
    Generate a concise AI-powered risk summary for a PR.

    Args:
        pr_readiness_data: Dict from calculate_pr_readiness()

    Returns:
        str: AI-generated summary or fallback summary
    """
    # Extract key data for the prompt
    blockers = pr_readiness_data.get('blockers', [])
    warnings = pr_readiness_data.get('warnings', [])
    recommendations = pr_readiness_data.get('recommendations', [])
    overall_score = pr_readiness_data.get('overall_score', 0)
    classification = pr_readiness_data.get('classification', 'NOT_READY')

    # Build the prompt for Gemini
    prompt = (
        f"Generate a concise, professional risk summary for a PR with the following details:\n"
        f"- Classification: {classification}\n"
        f"- Overall score: {overall_score}\n"
        f"- Blockers: {blockers}\n"
        f"- Warnings: {warnings}\n"
        f"- Recommendations: {recommendations}\n\n"
        f"Focus on why the PR may be risky and what should be addressed first. "
        f"Keep the summary to 1-2 sentences. If the PR is ready, say so clearly."
    )

    # Call Gemini (pseudo-code; replace with actual API call)
    try:
        ai_summary = call_gemini_api(prompt)  # Replace with your Gemini integration
        return ai_summary
    except Exception as e:
        # Fallback to deterministic summary
        return generate_fallback_summary(pr_readiness_data)


def generate_fallback_summary(pr_readiness_data):
    """
    Generate a deterministic fallback summary if AI fails.
    """
    blockers = pr_readiness_data.get('blockers', [])
    warnings = pr_readiness_data.get('warnings', [])
    classification = pr_readiness_data.get('classification', 'NOT_READY')

    if blockers:
        return (
            f"This PR is not merge-ready due to {len(blockers)} blocker(s), "
            f"including: {', '.join(blockers[:2])}. "
            f"Address these issues before proceeding."
        )
    elif warnings:
        return (
            f"This PR is nearly ready but has {len(warnings)} warning(s), "
            f"such as: {', '.join(warnings[:2])}. "
            f"Review these before merging."
        )
    else:
        return f"This PR is {classification.lower().replace('_', ' ')} and ready for review/merge."
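The deterministic fallback can be exercised on its own. A minimal sketch, with the function body copied from above; the sample input dict is illustrative and only mirrors the keys produced by calculate_pr_readiness():

```python
def generate_fallback_summary(pr_readiness_data):
    """Deterministic fallback summary (body copied from the PR)."""
    blockers = pr_readiness_data.get('blockers', [])
    warnings = pr_readiness_data.get('warnings', [])
    classification = pr_readiness_data.get('classification', 'NOT_READY')

    if blockers:
        return (
            f"This PR is not merge-ready due to {len(blockers)} blocker(s), "
            f"including: {', '.join(blockers[:2])}. "
            f"Address these issues before proceeding."
        )
    elif warnings:
        return (
            f"This PR is nearly ready but has {len(warnings)} warning(s), "
            f"such as: {', '.join(warnings[:2])}. "
            f"Review these before merging."
        )
    return f"This PR is {classification.lower().replace('_', ' ')} and ready for review/merge."

# Illustrative input only, not taken from a real PR.
sample = {
    'blockers': ["PR has merge conflicts"],
    'warnings': [],
    'classification': 'NOT_READY',
}
print(generate_fallback_summary(sample))
# -> This PR is not merge-ready due to 1 blocker(s), including: PR has merge conflicts. Address these issues before proceeding.
```

Note the blocker branch wins even when warnings are also present, which matches the severity ordering used elsewhere in calculate_pr_readiness().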

Summary of the only change made — the return block at the end of calculate_pr_readiness():

Before → After:

  • ··return { (2-space) → ····return { (4-space)
  • ····'overall_score': ... (4-space) → ········'overall_score': ... (8-space)
  • Inner dict keys at 8-space → inner dict keys at 12-space
  • Closing }) at 4-space, } at 0-space → closing }) at 8-space, } at 4-space

Once you replace src/utils.py with this content and push, the Python syntax check should pass. 🐇✨

@S3DFX-CYBER
Contributor Author

@coderabbitai why are tests still failing

@coderabbitai

coderabbitai bot commented Mar 6, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check current status of PR checks
gh pr checks 272 --repo OWASP-BLT/BLT-Leaf

echo "---"

# Check the latest workflow run details
gh run list --repo OWASP-BLT/BLT-Leaf --limit 5

echo "---"

# Get the current content of utils.py around the problematic area
fd utils.py --exec sed -n '560,625p' {}

echo "---"

# Also check if call_gemini_api is defined anywhere - it was referenced but may be missing
fd utils.py --exec grep -n "call_gemini_api\|import\|generate_ai_risk_summary\|generate_fallback_summary" {}

Length of output: 3677


🏁 Script executed:

#!/bin/bash
# Get the failed logs from the latest failing run
gh run view 22778345599 --repo OWASP-BLT/BLT-Leaf --log-failed

echo "---"

# Check the exact line with the backtick issue
fd utils.py --exec grep -n "\`#Add\`\|SyntaxError\|call_gemini_api" {}

echo "---"

# Show lines 598-610 to confirm the exact problematic line in context
fd utils.py --exec sed -n '598,610p' {}

Length of output: 1764


@S3DFX-CYBER: Found two issues still in src/utils.py:


🔴 Issue 1 — Still causing the SyntaxError (line 602)

The line:

`#Add` ai summary feat

is not valid Python — it's markdown-style backtick syntax accidentally left in the file between the two functions. It needs to be removed or replaced with a proper Python comment.

Fix: Change line 602 to:

# Add AI summary feat

or simply delete the line entirely.


🟡 Issue 2 — call_gemini_api is never defined (line 634)

Inside generate_ai_risk_summary, the code calls call_gemini_api(prompt) which doesn't exist anywhere in utils.py. This will cause a NameError at runtime whenever the function is called.

Since the Gemini integration is a placeholder, replace this block:

    try:
        ai_summary = call_gemini_api(prompt)  # Replace with your Gemini integration
        return ai_summary
    except Exception as e:
        # Fallback to deterministic summary
        return generate_fallback_summary(pr_readiness_data)

with just the fallback for now (until the real Gemini API is integrated):

    # TODO: Replace with actual Gemini API call when integration is ready
    return generate_fallback_summary(pr_readiness_data)
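Why the undefined function never crashed anything visibly can be shown in a few lines: a broad except Exception also catches the NameError raised when call_gemini_api is looked up, so the fallback silently runs. A sketch, with a stand-in fallback:

```python
# Sketch of the masking effect: the undefined function raises NameError,
# which the broad `except Exception` swallows, so callers only ever see
# the fallback and the missing integration goes unnoticed.

def generate_fallback_summary(data):  # stand-in for the real fallback
    return "fallback summary"

def generate_ai_risk_summary(data):
    try:
        return call_gemini_api("prompt")  # NameError: name is never defined
    except Exception:
        return generate_fallback_summary(data)

print(generate_ai_risk_summary({}))  # -> fallback summary
```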

Summary of what to do in src/utils.py

Line → fix:

  • Line 602: `#Add` ai summary feat → # Add AI summary feat (or delete the line)
  • Lines 634–637: call_gemini_api(prompt) in try/except → return generate_fallback_summary(pr_readiness_data)

Fix #1 will unblock the CI syntax check immediately. Fix #2 prevents a runtime crash once the feature is exercised. 🐇
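The syntax failure itself is easy to reproduce in isolation; compiling just the offending line (copied verbatim from the file) fails before any code runs:

```python
# The backtick line from utils.py, in isolation.
snippet = "`#Add` ai summary feat\n"

try:
    compile(snippet, "utils.py", "exec")
    raised = False
except SyntaxError as exc:
    raised = True
    print(f"SyntaxError: {exc.msg}")

assert raised  # module import (and wrangler dev) dies at this line
```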

@owasp-blt owasp-blt bot added the needs-peer-review PR needs peer review label Mar 9, 2026
@owasp-blt
Copy link
Copy Markdown

owasp-blt bot commented Mar 9, 2026

👋 Hi @S3DFX-CYBER!

This pull request needs a peer review before it can be merged. Please request a review from a team member who is not:

  • The PR author
  • coderabbitai
  • copilot

Once a valid peer review is submitted, this check will pass automatically. Thank you!

⚠️ Peer review enforcement is active.

@S3DFX-CYBER
Contributor Author

@coderabbitai why are the tests failing

@coderabbitai

coderabbitai bot commented Mar 20, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check current PR checks status
gh pr checks 272 --repo OWASP-BLT/BLT-Leaf

echo "=== Latest workflow runs ==="
gh run list --repo OWASP-BLT/BLT-Leaf --limit 5

echo "=== Current utils.py around the problem areas ==="
fd utils.py --exec sed -n '595,645p' {}

echo "=== Check for remaining syntax issues ==="
fd utils.py --exec grep -n "backtick\|\`#\|call_gemini_api\|SyntaxError" {}

Length of output: 3558


🏁 Script executed:

#!/bin/bash
# Get the failed logs from the latest failing run
gh run view 23293325124 --repo OWASP-BLT/BLT-Leaf --log-failed

echo "=== Validate job logs ==="
gh run view 23293325107 --repo OWASP-BLT/BLT-Leaf --log-failed

Length of output: 93633


@S3DFX-CYBER: The fix from our previous conversation has not been pushed to the branch yet — the file on GitHub still contains the broken line. The CI is pulling the latest commit (752faa9) which still has the invalid syntax.

Both jobs fail with the exact same error:

File "/session/metadata/utils.py", line 602
  `#Add` ai summary feat
  ^
SyntaxError: invalid syntax

This crashes wrangler dev before it can even start, causing the "Authentication runtime behavior test setup" test to fail (82/83 pass otherwise).


✅ What you need to do — two edits in src/utils.py:

Fix 1 — Line 602 (causes the SyntaxError crash):

# Before (broken):
`#Add` ai summary feat

# After (valid Python comment):
# Add AI summary feat

Fix 2 — Lines 634–637 (prevents a NameError at runtime):

# Before:
    try:
        ai_summary = call_gemini_api(prompt)  # Replace with your Gemini integration
        return ai_summary
    except Exception as e:
        # Fallback to deterministic summary
        return generate_fallback_summary(pr_readiness_data)

# After (use fallback until real Gemini integration is added):
    # TODO: Replace with actual Gemini API call when integration is ready
    return generate_fallback_summary(pr_readiness_data)

Once you make these two edits and push the commit to your branch, both CI jobs should go green. 🐇

@S3DFX-CYBER S3DFX-CYBER marked this pull request as ready for review March 20, 2026 13:48

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
src/utils.py (1)

19-19: Backticks in docstrings/comments are harmless but inconsistent.

The backticks around issue numbers (e.g., Issue `#45`) in comments are syntactically valid Python, but they introduce inconsistency with typical Python comment conventions. Consider removing the backticks for consistency:

-    Security Hardening (Issue `#45`):
+    Security Hardening (Issue #45):

Also applies to: 24-24, 33-33, 38-38

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils.py` at line 19, replace inconsistent backtick-wrapped issue
references in comments/docstrings (e.g., the string "Security Hardening (Issue
`#45`)" and the other occurrences showing backticks) by removing the backticks
so they read "Issue #45" (and similarly for the other instances) to match normal
Python comment/docstring conventions; update all occurrences in src/utils.py
where issue numbers are enclosed in backticks so formatting is consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@public/index.html`:
- Around line 2712-2714: Add logic in the updateInlineCells function to populate
the cell with id readiness-risk-summary-${pr.id} using the API's
readiness.risk_summary: locate updateInlineCells (the block that updates the
Issues cell) and after that block get the element by id using the prId variable,
check readiness.risk_summary exists, then set innerHTML to a truncated, escaped
version (use the existing escapeHtml helper) and include a title attribute with
the full escaped text so long summaries show on hover.
- Around line 2381-2385: The TH element that defines the "Risk Summary" column
has a mismatched quote in the class attribute and a typo in the title; locate
the <th> element that includes attributes class, data-sort-column="risk_summary"
and title="AI-generated risk summmary for the PR" and fix the class attribute so
its opening and closing quotes match (use either both single or both double
quotes) and correct the title text from "summmary" to "summary" so the HTML
parses correctly.

In `@src/utils.py`:
- Around line 633-638: The try/except is calling an undefined function
call_gemini_api which will raise NameError and is masked by the exception
handler; replace the whole try/except so the function returns the deterministic
summary directly by invoking generate_fallback_summary(pr_readiness_data) (e.g.,
return generate_fallback_summary(pr_readiness_data)) and remove references to
ai_summary/call_gemini_api until a proper Gemini integration (and import) is
added.

---

Nitpick comments:
In `@src/utils.py`:
- Line 19: Replace inconsistent backtick-wrapped issue references in
comments/docstrings (e.g., the string "Security Hardening (Issue `#45`)" and the
other occurrences showing backticks) by removing the backticks so they read
"Issue #45" (and similarly for the other instances) to match normal Python
comment/docstring conventions; update all occurrences in src/utils.py where
issue numbers are enclosed in backticks so formatting is consistent.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: OWASP-BLT/coderabbit/.coderabbit.yml

Review profile: CHILL

Plan: Pro

Run ID: 1761bb2e-8de1-4aa0-b2d0-e89340983cc0

📥 Commits

Reviewing files that changed from the base of the PR and between 4b3d830 and f8fc818.

📒 Files selected for processing (2)
  • public/index.html
  • src/utils.py

S3DFX-CYBER and others added 2 commits March 20, 2026 19:27
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@S3DFX-CYBER
Contributor Author

@coderabbitai Missing JavaScript logic to populate the Risk Summary cell.

The cell is created with id readiness-risk-summary-${pr.id} but the updateInlineCells function (around lines 3055-3157) does not update this cell with the risk_summary value from the API response. The cell will always show -

@coderabbitai

coderabbitai bot commented Mar 20, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Get the updateInlineCells function and surrounding context
fd index.html --exec sed -n '3040,3180p' {}

echo "=== Also check how risk_summary cell is defined ==="
fd index.html --exec grep -n "risk.summary\|risk_summary" {}

Length of output: 7588


@S3DFX-CYBER: You're correct. The updateInlineCells function handles every readiness cell except readiness-risk-summary-${prId}. The fix is to add one new block at the end of updateInlineCells, just before its closing }.

Add this snippet after the Issues/Blockers block (after the issuesCell.innerHTML = issuesHTML; closing section):

            // Update Risk Summary
            const riskSummaryCell = document.getElementById(`readiness-risk-summary-${prId}`);
            if (riskSummaryCell) {
                const summary = readiness.risk_summary;
                if (summary && summary.trim()) {
                    const colorClass = readiness.merge_ready
                        ? 'text-emerald-700 dark:text-emerald-400'
                        : readiness.blockers && readiness.blockers.length > 0
                            ? 'text-red-700 dark:text-red-400'
                            : 'text-amber-700 dark:text-amber-400';
                    riskSummaryCell.innerHTML = `
                        <span class="text-xs ${colorClass}" title="${escapeHtml(summary)}">
                            ${escapeHtml(summary)}
                        </span>`;
                } else {
                    riskSummaryCell.innerHTML = '<span class="text-xs text-slate-400">-</span>';
                }
            }

What this does:

  • Reads readiness.risk_summary (the string returned by the Python backend).
  • Colors the text green if merge_ready, red if blockers are present, amber otherwise — matching the risk level conveyed by the summary.
  • Uses escapeHtml() (already used by the Issues cell) on both the visible text and the title tooltip, so long summaries are still readable on hover.
  • Falls back to a muted - if the value is absent or empty.

Where to insert it — in public/index.html, find the closing } of updateInlineCells (after the issuesCell.innerHTML = issuesHTML; block that ends around line 3155) and paste the block immediately before that closing brace.
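The escapeHtml() step above is what keeps a summary containing markup or quotes from breaking the innerHTML and the title="..." attribute. In Python terms, using the standard library's html.escape as a stand-in for the page's escapeHtml helper (an assumption about that helper's behavior), the transformation looks like:

```python
from html import escape

# Hypothetical summary text containing characters that would otherwise be
# interpreted as HTML or would terminate the title="..." attribute early.
summary = 'Blocked: <script> tag found & "quoted" text'
print(escape(summary, quote=True))
# -> Blocked: &lt;script&gt; tag found &amp; &quot;quoted&quot; text
```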

@S3DFX-CYBER
Contributor Author

@coderabbitai can you show me exactly where I have to input the changes

@coderabbitai

coderabbitai bot commented Mar 20, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find the updateInlineCells function and its closing brace
fd index.html --exec grep -n "updateInlineCells\|issuesCell\|issuesHTML\|risk.summary\|risk_summary\|readiness-risk-summary" {}

echo "=== End of updateInlineCells (lines 3140-3175) ==="
fd index.html --exec sed -n '3140,3175p' {}

Length of output: 4476


@S3DFX-CYBER: Here is the exact location in public/index.html. The new block goes at Line 3157, between the two closing braces at the end of updateInlineCells:

                issuesHTML += '</div>';
                issuesCell.innerHTML = issuesHTML;
            }
        }   // ← this closes updateInlineCells (line 3158)

👉 Insert the new code between the } on line 3157 and the } on line 3158, so it looks like this:

                issuesHTML += '</div>';
                issuesCell.innerHTML = issuesHTML;
            }   // ← line 3157: closes `if (issuesCell)` block

            // ✅ INSERT NEW CODE HERE ↓
            // Update Risk Summary
            const riskSummaryCell = document.getElementById(`readiness-risk-summary-${prId}`);
            if (riskSummaryCell) {
                const summary = readiness.risk_summary;
                if (summary && summary.trim()) {
                    const colorClass = readiness.merge_ready
                        ? 'text-emerald-700 dark:text-emerald-400'
                        : readiness.blockers && readiness.blockers.length > 0
                            ? 'text-red-700 dark:text-red-400'
                            : 'text-amber-700 dark:text-amber-400';
                    riskSummaryCell.innerHTML = `
                        <span class="text-xs ${colorClass}" title="${escapeHtml(summary)}">
                            ${escapeHtml(summary)}
                        </span>`;
                } else {
                    riskSummaryCell.innerHTML = '<span class="text-xs text-slate-400">-</span>';
                }
            }
            // ✅ END OF NEW CODE ↑

        }   // ← line 3158: closes `updateInlineCells` function

        function invalidateApiCache(path) {   // ← next function starts here

Visual summary of the change:

Line-by-line:

  • Line 3155: issuesHTML += '</div>'; (unchanged)
  • Line 3156: issuesCell.innerHTML = issuesHTML; (unchanged)
  • Line 3157: } closing if (issuesCell) (unchanged)
  • Between lines 3157 and 3158: insert the Risk Summary block here
  • Line 3158: } closing updateInlineCells (unchanged)
  • Lines 3159+: function invalidateApiCache... (unchanged)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@public/index.html`:
- Around line 3160-3178: The risk summary DOM update block currently runs at
module load and references an undefined pr (pr.id), causing a runtime error;
move that entire block into the updateInlineCells function and change references
from pr.id to prId (the function's parameter), locating the logic around
readiness, riskSummaryCell and escapeHtml so it uses the passed-in prId and the
existing readiness object inside updateInlineCells; ensure the fallback branch
remains the same but uses prId to build the element id
(`readiness-risk-summary-${prId}`) and keep colorClass computation and innerHTML
assignments unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: OWASP-BLT/coderabbit/.coderabbit.yml

Review profile: CHILL

Plan: Pro

Run ID: 1e24b2d1-3b17-4a5a-ae8d-677cdc4e6bb8

📥 Commits

Reviewing files that changed from the base of the PR and between 89aa72e and d25eaa6.

📒 Files selected for processing (1)
  • public/index.html

Comment on lines +3160 to +3178
// Update Risk Summary
const riskSummaryCell = document.getElementById(`readiness-risk-summary-${pr.id}`);
if (riskSummaryCell) {
const summary = readiness.risk_summary;
if (summary && summary.trim()) {
const colorClass = readiness.merge_ready
? 'text-emerald-700 dark:text-emerald-400'
: readiness.blockers && readiness.blockers.length > 0
? 'text-red-700 dark:text-red-400'
: 'text-amber-700 dark:text-amber-400';
riskSummaryCell.innerHTML = `
<span class="text-xs ${colorClass}" title="${escapeHtml(summary)}">
${escapeHtml(summary)}
</span>`;
} else {
riskSummaryCell.innerHTML = '<span class="text-xs text-slate-400">-</span>';
}
}



⚠️ Potential issue | 🔴 Critical

Critical: Risk summary update block is outside updateInlineCells and uses undefined pr.

Line 3161 executes at script load and references pr.id, which triggers a runtime error. Move this block inside updateInlineCells and use prId.

🐛 Suggested fix
 function updateInlineCells(prId, readiness, reviewHealth) {
@@
             if (issuesCell) {
@@
                 issuesCell.innerHTML = issuesHTML;
             }
-        }
- 
-         // Update Risk Summary
-            const riskSummaryCell = document.getElementById(`readiness-risk-summary-${pr.id}`);
+
+            // Update Risk Summary
+            const riskSummaryCell = document.getElementById(`readiness-risk-summary-${prId}`);
             if (riskSummaryCell) {
                 const summary = readiness.risk_summary;
                 if (summary && summary.trim()) {
                     const colorClass = readiness.merge_ready
                         ? 'text-emerald-700 dark:text-emerald-400'
                         : readiness.blockers && readiness.blockers.length > 0
                             ? 'text-red-700 dark:text-red-400'
                             : 'text-amber-700 dark:text-amber-400';
                     riskSummaryCell.innerHTML = `
                         <span class="text-xs ${colorClass}" title="${escapeHtml(summary)}">
                             ${escapeHtml(summary)}
                         </span>`;
                 } else {
                     riskSummaryCell.innerHTML = '<span class="text-xs text-slate-400">-</span>';
                 }
             }
-        
+        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@public/index.html` around lines 3160 - 3178, The risk summary DOM update
block currently runs at module load and references an undefined pr (pr.id),
causing a runtime error; move that entire block into the updateInlineCells
function and change references from pr.id to prId (the function's parameter),
locating the logic around readiness, riskSummaryCell and escapeHtml so it uses
the passed-in prId and the existing readiness object inside updateInlineCells;
ensure the fallback branch remains the same but uses prId to build the element
id (`readiness-risk-summary-${prId}`) and keep colorClass computation and
innerHTML assignments unchanged.

@S3DFX-CYBER
Contributor Author

@coderabbitai check for syntax issues; prId is not defined and I'm getting some issues testing locally

@coderabbitai

coderabbitai bot commented Mar 20, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Get the full updateInlineCells function to check variable names
fd index.html --exec grep -n "updateInlineCells\|prId\|pr\.id\|prNumber\|const pr\|let pr\|var pr\|function update" {}

echo "=== Full updateInlineCells function ==="
fd index.html --exec grep -n "updateInlineCells" {}

echo "=== Show function signature and first 30 lines of updateInlineCells ==="
fd index.html --exec python3 -c "
import re, sys
content = open('{}').read()
lines = content.split('\n')
start = None
for i, line in enumerate(lines):
    if 'function updateInlineCells' in line or 'updateInlineCells' in line and 'function' in line:
        start = i
        break
if start is not None:
    for j in range(start, min(start+50, len(lines))):
        print(f'{j+1}: {lines[j]}')
" 2>/dev/null || true

echo "=== Lines 3040-3175 of index.html ==="
fd index.html --exec sed -n '3040,3175p' {}

echo "=== Check for risk_summary references ==="
fd index.html --exec grep -n "risk.summary\|risk_summary\|riskSummary\|prId\|prNumber\|readiness-risk-summary" {}

Length of output: 28117


@S3DFX-CYBER: Found both issues. The Risk Summary block was placed outside the closing } of updateInlineCells (after Line 3159), which means it's floating in global scope — that's why prId and pr are both undefined. Additionally, it uses pr.id instead of the function parameter prId.

Here's exactly what your file looks like now vs. what it should be:

Current (broken) — Lines 3155–3175:

                issuesHTML += '</div>';
                issuesCell.innerHTML = issuesHTML;
            }
        }   // ← updateInlineCells closes HERE (line ~3159)

         // Update Risk Summary                           ← ❌ OUTSIDE the function!
            const riskSummaryCell = document.getElementById(`readiness-risk-summary-${pr.id}`);  // ← ❌ pr.id not in scope
            if (riskSummaryCell) {
                ...
            }

Fixed — move the block inside updateInlineCells and change pr.idprId:

                issuesHTML += '</div>';
                issuesCell.innerHTML = issuesHTML;
            }   // ← closes `if (issuesCell)`

            // ✅ Risk Summary block — INSIDE the function, BEFORE closing brace
            // Update Risk Summary
            const riskSummaryCell = document.getElementById(`readiness-risk-summary-${prId}`);  // ← ✅ prId (not pr.id)
            if (riskSummaryCell) {
                const summary = readiness.risk_summary;
                if (summary && summary.trim()) {
                    const colorClass = readiness.merge_ready
                        ? 'text-emerald-700 dark:text-emerald-400'
                        : readiness.blockers && readiness.blockers.length > 0
                            ? 'text-red-700 dark:text-red-400'
                            : 'text-amber-700 dark:text-amber-400';
                    riskSummaryCell.innerHTML = `
                        <span class="text-xs ${colorClass}" title="${escapeHtml(summary)}">
                            ${escapeHtml(summary)}
                        </span>`;
                } else {
                    riskSummaryCell.innerHTML = '<span class="text-xs text-slate-400">-</span>';
                }
            }

        }   // ← updateInlineCells closes HERE

Summary of both fixes:

Fixes:

  1. Block placed after the } that closes updateInlineCells → move it before that }
  2. pr.id used (undefined in this scope) → change to prId (the function parameter)


Labels

files-changed: 2 (PR changes 2 files) · needs-peer-review (PR needs peer review) · quality: high

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Feat: Add Minimal AI-Based PR Risk Summary

1 participant