
fix(qq): add LLM-based self-healing for send errors#1560

Open
leoleils wants to merge 2 commits into agentscope-ai:main from leoleils:fix/qq-channel-self-healing

Conversation


@leoleils leoleils commented Mar 16, 2026

  • Simplify LLM prompt: just send error info and let LLM decide how to fix
  • Apply LLM fixing to all QQ send errors (not just content violation)
  • Automatically refresh the token and retry when it has expired
  • Automatically fall back to plain text when markdown causes problems
  • Remove unused helper functions
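The "simplify LLM prompt" idea above could be sketched roughly as follows. The helper name and the prompt wording here are illustrative assumptions, not the PR's actual implementation:

```python
def build_fix_prompt(original_text: str, error_info: str) -> str:
    """Build a minimal repair prompt: pass the failed message and the raw
    error to the LLM and let it decide how to fix the content.

    Both the function name and the prompt text are assumptions for
    illustration; the PR's real prompt is not shown on this page.
    """
    return (
        "The following QQ message failed to send.\n"
        f"Error: {error_info}\n"
        f"Message:\n{original_text}\n"
        "Rewrite the message so it can be sent successfully. "
        "Return only the rewritten message."
    )
```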

Description

[Describe what this PR does and why]

Related Issue: Fixes #(issue_number) or Relates to #(issue_number)

Security Considerations: [If applicable, e.g. channel auth, env/config handling]

Type of Change

  • Bug fix
  • New feature
  • Breaking change
  • Documentation
  • Refactoring

Component(s) Affected

  • Core / Backend (app, agents, config, providers, utils, local_models)
  • Console (frontend web UI)
  • Channels (DingTalk, Feishu, QQ, Discord, iMessage, etc.)
  • Skills
  • CLI
  • Documentation (website)
  • Tests
  • CI/CD
  • Scripts / Deploy

Checklist

  • I ran pre-commit run --all-files locally and it passes
  • If pre-commit auto-fixed files, I committed those changes and reran checks
  • I ran tests locally (pytest or as relevant) and they pass
  • Documentation updated (if needed)
  • Ready for review

Testing

[How to test these changes]

Local Verification Evidence

pre-commit run --all-files
# paste summary result

pytest
# paste summary result

Additional Notes

[Optional: any other context]

@github-actions github-actions bot added the first-time-contributor PR created by a first time contributor label Mar 16, 2026
@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the robustness of the QQ channel's message sending capabilities by introducing self-healing mechanisms. It leverages an LLM to intelligently correct messages that fail to send, automatically refreshes expired API tokens to prevent service interruptions, and improves markdown handling, thereby reducing manual intervention and improving message delivery reliability.

Highlights

  • LLM-based Self-Healing: Implemented a new LLM-based mechanism to automatically fix QQ messages that fail to send due to various errors, simplifying the prompt to focus on error information.
  • Token Refresh and Retry: Added logic to detect QQ API token expiration (401 errors) and automatically refresh the token, then retry sending the message once.
  • Expanded Error Handling: Extended the message dispatch error handling to apply LLM fixing to all QQ send errors, not just specific content violations.
  • Markdown Fallback Refinement: Refined the fallback mechanism for markdown issues, ensuring messages with payload validation problems are automatically converted to plain text.


Changelog
  • src/copaw/app/channels/qq/channel.py
    • Added _fix_content_with_llm function to use an LLM for repairing messages that failed to send, based on the original text and error information.
    • Added _is_token_expired_error function to detect token expiration or authentication failures from exceptions.
    • Modified the _dispatch method to incorporate new error handling logic:
      • If a token expiration error occurs, the token cache is cleared, and a retry is attempted after refreshing the token.
      • If a markdown payload validation error occurs, a fallback to plain text is attempted.
      • For all other send errors, the _fix_content_with_llm function is invoked to attempt an LLM-based repair, and if successful, the fixed message is dispatched.
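Taken together, the changelog items describe a layered recovery strategy in _dispatch. A simplified synchronous sketch of that flow, with every helper injected as a hypothetical callable (none of these names are the PR's real helpers):

```python
def dispatch_with_healing(send, text, *, refresh_token, fix_with_llm, to_plain_text):
    """Sketch of the layered recovery described in the changelog.

    `send`, `refresh_token`, `fix_with_llm`, and `to_plain_text` are
    injected stand-ins for the channel's real helpers; the string checks
    below are illustrative assumptions.
    """
    try:
        return send(text)
    except Exception as exc:
        message = str(exc).lower()
        if "401" in message or "token" in message:
            refresh_token()                        # clear cache, refresh token
            return send(text)                      # retry once
        if "markdown" in message or "payload" in message:
            return send(to_plain_text(text))       # fall back to plain text
        fixed = fix_with_llm(text, str(exc))       # last resort: LLM repair
        if fixed:
            return send(fixed)
        raise
```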
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Generative AI Prohibited Use Policy, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

gemini-code-assist bot (Contributor) left a comment:


Code Review

This pull request introduces a robust self-healing mechanism for QQ message sending. It adds logic to automatically handle token expiration by refreshing the token and retrying, and it leverages an LLM to fix message content for other types of send errors. The implementation is logical and enhances the reliability of the QQ channel. My review includes one suggestion to improve code readability in an error logging statement.

Comment on lines +806 to +811
    logger.warning(
        "QQ send failed (%s), trying LLM to fix",
        error_info.get("err_code") or error_info.get("code", "unknown")
        if isinstance(error_info, dict)
        else str(exc)[:50],
    )

Severity: medium

The conditional expression to determine the error code for logging is quite complex and embedded within the logger.warning call, which makes it difficult to read and understand at a glance. For better readability and maintainability, consider extracting this logic into a separate variable before the logging call.

Suggested change

Before:

    logger.warning(
        "QQ send failed (%s), trying LLM to fix",
        error_info.get("err_code") or error_info.get("code", "unknown")
        if isinstance(error_info, dict)
        else str(exc)[:50],
    )

After:

    if isinstance(error_info, dict):
        err_code = error_info.get("err_code") or error_info.get("code", "unknown")
    else:
        err_code = str(exc)[:50]
    logger.warning(
        "QQ send failed (%s), trying LLM to fix",
        err_code,
    )

Extract complex conditional expression into separate variable for better readability.

Addresses review feedback from PR agentscope-ai#1560.

Labels

first-time-contributor PR created by a first time contributor

1 participant