Conversation

@akaashrp
Contributor

Certain q0f32 models are running into correctness issues after the TVM FFI refactor:

  1. Qwen3-0.6B-q0f32-MLC
  2. Qwen2.5-0.5B-Instruct-q0f32-MLC
  3. Qwen2.5-Coder-0.5B-Instruct-q0f32-MLC
  4. Qwen2-0.5B-Instruct-q0f32-MLC
  5. Llama-3.2-1B-Instruct-q0f32-MLC

These models have temporarily been commented out in config.ts while the issues are being debugged. If you need to use these specific models, please use WebLLM v0.2.79 (https://www.npmjs.com/package/@mlc-ai/web-llm/v/0.2.79).
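
For context, here is a minimal sketch of the kind of change described above, assuming WebLLM's `prebuiltAppConfig.model_list` structure in `config.ts` (an array of `ModelRecord`-style objects with `model`, `model_id`, and `model_lib` fields); the entries and URLs below are illustrative, not copied from the actual diff:

```ts
// config.ts (sketch) -- affected q0f32 entries disabled pending debugging.
// Field names follow WebLLM's ModelRecord shape; values are illustrative.
export const prebuiltAppConfig = {
  model_list: [
    {
      model: "https://huggingface.co/mlc-ai/Qwen3-0.6B-q4f16_1-MLC",
      model_id: "Qwen3-0.6B-q4f16_1-MLC",
      model_lib: "<model-wasm-url>", // placeholder
    },
    // Temporarily disabled after the TVM FFI refactor (correctness issues):
    // {
    //   model: "https://huggingface.co/mlc-ai/Qwen3-0.6B-q0f32-MLC",
    //   model_id: "Qwen3-0.6B-q0f32-MLC",
    //   model_lib: "<model-wasm-url>",
    // },
    // ...the other q0f32 entries listed above are commented out the same way.
  ],
};
```

In the meantime, the affected models can still be used by pinning the previous release, e.g. `npm install @mlc-ai/web-llm@0.2.79`.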

akaashrp merged commit ed368d7 into mlc-ai:main on Nov 23, 2025
1 check passed