Hi team, thanks for publishing and maintaining SWE-Bench Pro and its leaderboard.
While reviewing the SWE-Bench Pro Public leaderboard, I noticed it currently appears to focus on per-model LLM results. I could not find a systematic comparison across agent frameworks (for example, scaffold- or orchestration-level evaluations).
Could you clarify:

- Are there plans to add agent framework results to this leaderboard (or to publish a separate board)?
- If such evaluations already exist, where can we find the latest public results?
- If not yet available, is there an estimated timeline or roadmap?
This would be very helpful for real-world adoption, since both model capability and agent framework design can materially affect outcomes.
Thanks in advance!