Add haiku explanation type with flexible prompt composition #6
Conversation
…mpt composition

- Add haiku-specific test cases covering simple to complex algorithms
- Implement structure-aware methods in Prompt class for better code reuse
- Update prompt_advisor.py to handle nested audience_levels in explanation types
- Add haiku-specific evaluation criteria in claude_reviewer.py
- Update documentation to explain flexible prompt composition system
- Support explanation-type overrides for audience guidance (enables haiku to bypass assembly-specific instructions)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
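For illustration only, here is a minimal sketch of the kind of prompt structure this composition scheme implies, written as a Python dict standing in for app/prompt.yaml. The key names (audience_levels, explanation_types, user_prompt) and all guidance strings are assumptions, not the project's actual configuration.

```python
# Hypothetical prompt configuration: explanation types may carry their own
# audience guidance, which takes precedence over the shared top-level guidance.
PROMPT_CONFIG = {
    # Default audience guidance used by assembly-focused explanation types.
    "audience_levels": {
        "beginner": "Explain registers, flags, and calling conventions from first principles.",
        "expert": "Focus on micro-architectural effects and skip basic definitions.",
    },
    "explanation_types": {
        "assembly": {
            "user_prompt": "Explain what this assembly code does.",
        },
        "haiku": {
            "user_prompt": "Summarize this code as a haiku.",
            # Override: haiku supplies its own audience guidance, so the
            # assembly-specific instructions above are bypassed entirely.
            "audience_levels": {
                "beginner": "Keep the imagery gentle and concrete.",
                "expert": "Allow denser, more technical imagery.",
            },
        },
    },
}
```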
- Remove duplication between get_audience_metadata methods - implement instance method in terms of class method
- Remove overly complex get_all_audience_locations method, add comment about future needs
- Remove hardcoded haiku-specific evaluation criteria - use same criteria for all explanation types
- Remove magic string detection for haiku targeting - explanation types should be explicitly specified
- Simplify prompt_advisor.py to directly check both audience locations without complex abstractions
- Fix test to check audience information in user prompt where it actually appears
- Use .values() instead of .items() when we don't need the keys

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
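A minimal sketch of the deduplication pattern named in the first bullet, with the instance method delegating to the class method. The method names come from the commit; the constructor and the "audience_levels" key are assumptions.

```python
class Prompt:
    def __init__(self, data: dict):
        self.data = data

    @classmethod
    def get_audience_metadata_from_dict(cls, data: dict) -> dict:
        # Single source of truth: read audience guidance out of a raw prompt dict.
        return data.get("audience_levels", {})

    def get_audience_metadata(self) -> dict:
        # The instance method is a thin wrapper over the class method,
        # so the lookup logic lives in exactly one place.
        return self.get_audience_metadata_from_dict(self.data)
```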
Pull Request Overview
Adds a haiku explanation type to the Claude assembly explanation service with a flexible prompt composition system that allows explanation types to override audience-specific guidance. This enables poetic explanations that bypass technical assembly guidance while maintaining audience differentiation for assembly explanations.
- Added ExplanationType.HAIKU enum value and haiku-specific prompt configuration
- Implemented explanation-type audience overrides in prompt system with structure-aware class methods
- Enhanced testing framework with haiku test cases and Claude evaluation criteria
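For illustration, a minimal sketch of how an explanation-type audience override could take precedence over the shared assembly guidance, assuming a configuration dict shaped like the one sketched in the commit message above (the key names are assumptions, not the project's actual prompt.yaml layout).

```python
def resolve_audience_guidance(config: dict, explanation_type: str, audience: str) -> str:
    """Pick the audience guidance for one request (config layout is assumed)."""
    exp = config.get("explanation_types", {}).get(explanation_type, {})
    # An explanation type that ships its own audience guidance (e.g. haiku)
    # bypasses the shared, assembly-oriented guidance entirely.
    audience_levels = exp.get("audience_levels") or config.get("audience_levels", {})
    return audience_levels.get(audience, "")
```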
Reviewed Changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 2 comments.
| File | Description |
|------|-------------|
| prompt_testing/test_cases/haiku_tests.yaml | Comprehensive haiku test cases covering various algorithm complexities |
| prompt_testing/evaluation/prompt_advisor.py | Enhanced to handle nested explanation-specific audience configurations |
| prompt_testing/evaluation/claude_reviewer.py | Added haiku-specific evaluation criteria |
| claude_explain.md | Updated documentation with flexible prompt system details |
| app/test_explain.py | Fixed test to check audience guidance in user message instead of system prompt |
| app/prompt.yaml | Added haiku configuration with audience overrides and restructured prompts |
| app/prompt.py | Added structure-aware class methods for prompt composition |
| app/main.py | Added configurable logging via LOG_LEVEL environment variable |
| app/explanation_types.py | Added HAIKU enum value |
| app/explain.py | Added debug logging for actual prompts sent to Claude |
| app/config.py | Added log_level configuration setting |
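A minimal sketch of how a LOG_LEVEL environment variable can drive logging setup for the debug output described above. The function name configure_logging() appears in a later commit in this PR, but its body, the default level, and the "app.explain" logger name are assumptions.

```python
import logging
import os


def configure_logging() -> None:
    # Map the LOG_LEVEL environment variable onto a logging level,
    # falling back to INFO when it is unset or unrecognised.
    level_name = os.getenv("LOG_LEVEL", "INFO").upper()
    logging.basicConfig(level=getattr(logging, level_name, logging.INFO))


configure_logging()
# Example of the kind of debug logging the PR adds for prompts sent to Claude.
logging.getLogger("app.explain").debug("prompt sent to Claude: %s", "...")
```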
Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.
- Replace module-level resource initialization with lifespan context manager
- Create configure_logging() function for proper logging setup
- Move shared resources (anthropic_client, prompt, settings) to app.state
- Update endpoints to access resources from app.state via Request object
- Eliminate multiple get_settings() calls at module import time
- Use proper FastAPI 2024 best practices for resource management
- Maintain backward compatibility; all tests pass

This fixes the issue where settings were locked at import time and makes testing much easier, since settings can now be mocked before app creation.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
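A minimal sketch of the lifespan pattern described above, using standard FastAPI. The get_settings() stub, its return shape, and the /explain path are placeholders, not the project's actual code.

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI, Request


def get_settings() -> dict:
    # Placeholder for the project's real settings loader; shape is an assumption.
    return {"model": "claude-placeholder"}


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Resources are created once at startup and stored on app.state,
    # instead of being built at module import time.
    app.state.settings = get_settings()
    yield
    # Anything stored on app.state could be cleaned up here on shutdown.


app = FastAPI(lifespan=lifespan)


@app.post("/explain")
async def explain(request: Request) -> dict:
    # Endpoints read shared resources from app.state via the Request object,
    # so settings can be mocked before the app is created in tests.
    settings = request.app.state.settings
    return {"model": settings["model"]}
```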
Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.
Summary
Adds a haiku explanation type and a flexible prompt composition system that allows explanation types to override audience-specific guidance.
Changes Made
Core Implementation
- Added ExplanationType.HAIKU enum value
Testing & Evaluation
- Added prompt_testing/test_cases/haiku_tests.yaml with test scenarios
- Updated claude_reviewer.py with haiku-specific evaluation criteria
- Updated prompt_advisor.py to handle nested audience configurations
Code Quality & Reuse
- Structure-aware methods in the Prompt class (a sketch follows this list):
  - get_audience_metadata_from_dict() - Works with raw prompt dictionaries
  - has_audience_override() - Checks for explanation-specific overrides
  - get_all_audience_locations() - Finds all audience guidance locations
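A sketch of what two of these helpers might look like, based only on the names and one-line descriptions above; the signatures, the dict keys, and the class layout are assumptions.

```python
class Prompt:
    def __init__(self, data: dict):
        self.data = data

    @classmethod
    def has_audience_override(cls, data: dict, explanation_type: str) -> bool:
        # True when the given explanation type carries its own audience guidance.
        exp = data.get("explanation_types", {}).get(explanation_type, {})
        return "audience_levels" in exp

    @classmethod
    def get_all_audience_locations(cls, data: dict) -> list:
        # Every place audience guidance can live: the top level plus any
        # per-explanation-type override (iterating .values(); keys not needed).
        locations = []
        if "audience_levels" in data:
            locations.append(data["audience_levels"])
        for exp in data.get("explanation_types", {}).values():
            if "audience_levels" in exp:
                locations.append(exp["audience_levels"])
        return locations
```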
Debug Support
- Configurable logging via the LOG_LEVEL environment variable
Documentation
- Updated claude_explain.md with flexible prompt system documentation
Test Plan