Code Search Relevancy Scoring Agent #283
Conversation
- Add an agent that uses the criteria generated by the dynamic criteria generation agent and outputs a relevancy score
- Add initial system prompt for the relevancy scoring agent
❌ Tests failed (exit code: 2)
📊 Test Results
Branch:
📋 Full coverage report and logs are available in the workflow run.
cc: @iamsims, please check this. I think it's better to collaborate and agree on common ground for the score criteria here. We don't want various disconnected components doing the same things.
Also, @pranath-reddy, I'd recommend putting the new relevancy agents (if we finalize and merge them later) into the already existing module.
- Update base relevancy criterion to match the criteria generation agent
❌ Tests failed (exit code: 2)
📊 Test Results
Branch:
📋 Full coverage report and logs are available in the workflow run.
Summary 📝
This PR introduces a Dynamic Relevancy Scoring Agent that evaluates repository content against dynamically generated relevance criteria using a three-level scoring rubric. The agent assesses combined repository content (README, description, title, topics) against both required and nice-to-have criteria, producing quantitative relevance scores with detailed qualitative reasoning. This enables precise ranking of search results while providing transparency into scoring decisions through specific content evidence.
Details
Dynamic Relevancy Scoring
Core Functionality
Consumes criteria from `DynamicRelevancyCriteriaAgent`
Evaluation Outputs
Code Changes
- Added system prompt at `akd/configs/code_prompts.py:RELEVANCY_SCORING_PROMPT` for evidence-based relevancy evaluation logic
- New agent implementation: `RelevancyScoringAgent` extends `LiteLLMInstructorBaseAgent` with specialized input/output schemas
- Extended `RelevancyCriterion` schema: added an `is_required: bool` field to distinguish required vs. nice-to-have criteria; consumes `RelevanceCriterion` from `DynamicRelevancyCriteriaAgent`
Usage
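The PR lists the schema changes but not the schemas themselves. A minimal sketch of what the extended criterion and the agent's input/output shapes might look like (field names other than `is_required` are assumptions; the real implementation would presumably use Pydantic models, as LiteLLM/Instructor agents typically do):

```python
from dataclasses import dataclass, field


@dataclass
class RelevancyCriterion:
    # is_required distinguishes required from nice-to-have criteria (per the PR);
    # the other field names are hypothetical.
    name: str
    description: str
    is_required: bool = True


@dataclass
class RelevancyScoringInput:
    content: str  # combined README, description, title, and topics
    criteria: list[RelevancyCriterion] = field(default_factory=list)


@dataclass
class RelevancyScoringOutput:
    score: float   # overall relevance in [0, 1]
    reasoning: str  # qualitative evidence backing the score
```

A hypothetical call would then pass a `RelevancyScoringInput` built from the repository content plus the criteria emitted by `DynamicRelevancyCriteriaAgent`, and receive a `RelevancyScoringOutput` back.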