One of the major issues with AI programming is the context window. Agents perform better with 128k+ context models than with 4k or 8k ones. Does ACR have the same issue? Gemma-2-27b-it is great and performs comparably to the GPT-4 variants, but it only has an 8k context window. The 30% score on SWE-bench Lite was achieved with gpt-4o, a 128k-context model. Is it possible to reach roughly 25% with Gemma, an 8k model?
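To make the question concrete: with an 8k-context model, an agent has to budget how much retrieved code it stuffs into the prompt. Below is a minimal sketch of such a budgeting step. It is not ACR's actual implementation; the token count is a crude whitespace heuristic (roughly 4/3 tokens per word), where a real setup would use the model's own tokenizer, and all names (`fit_snippets`, `RESERVED_FOR_OUTPUT`, etc.) are hypothetical.

```python
# Sketch: keep an agent's prompt within a fixed context budget.
# Assumes a crude whitespace-based token estimate, NOT a real tokenizer.

CONTEXT_WINDOW = 8192       # e.g. Gemma-2-27b-it's window
RESERVED_FOR_OUTPUT = 1024  # leave room for the model's reply

def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4/3 tokens per whitespace-separated word."""
    return int(len(text.split()) * 4 / 3)

def fit_snippets(snippets: list[str], budget: int) -> list[str]:
    """Greedily keep retrieved snippets (in priority order) until the
    token budget is exhausted; drop everything after that point."""
    kept, used = [], 0
    for snippet in snippets:
        cost = estimate_tokens(snippet)
        if used + cost > budget:
            break
        kept.append(snippet)
        used += cost
    return kept

# Example: budget after reserving output space.
budget = CONTEXT_WINDOW - RESERVED_FOR_OUTPUT
context = fit_snippets(["def foo(): ...", "class Bar: ...", "x = 1"], budget)
print(len(context))  # all three tiny snippets fit: 3
```

With a 128k model this truncation step rarely fires; at 8k it dominates, which is exactly why the benchmark score may not transfer.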