
model test request #20

@dnhkng

Description


Now that the llama.cpp server is running correctly, would it be possible to have this model tested?

https://huggingface.co/Infinimol/miiqu-gguf
Please use the ChatML format and a context length >= 1024. :)
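For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers. A minimal sketch of building such a prompt (the function name and roles shown are illustrative, not from this issue):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-formatted prompt string.

    Each turn is delimited by <|im_start|>{role} ... <|im_end|>,
    and the prompt ends with an open assistant turn for generation.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Hello!"))
```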

It is a model I have been working on for some time, and I think it's interesting. It is not a fine-tune but a merge, and I find it consistently scores higher than the base model (miqu), which I believe is a first for a pure merge. EQ-Bench runs in about 15 minutes on an A100.

The model is GGUF, but split into parts to fit under the 50 GB per-file limit on Hugging Face; the model card gives the one-liner to reassemble the file.
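Reassembly is typically just concatenating the parts in order. A sketch with dummy files (the split filenames here are hypothetical; use the exact one-liner from the model card):

```shell
# Simulate two split parts, then join them with cat, as split-GGUF
# model cards commonly instruct. Filenames below are placeholders.
mkdir -p /tmp/miiqu_demo && cd /tmp/miiqu_demo
printf 'part1' > model.gguf-split-a
printf 'part2' > model.gguf-split-b

# Shell glob expansion sorts the parts lexically, so -a precedes -b.
cat model.gguf-split-* > model.gguf
```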
