
Decide what to do with original 'ollama' benchmark functionality #38

@geerlingguy

I recently moved my Pyinfra ai-benchmark script into this repository, and I've been using it a lot more, since Ollama often lags behind llama.cpp for model support, Vulkan support, and the like.

I still have obench.sh in this repo, and it still works fine with Ollama... but as I'm leaning more on llama-bench for reproducible benchmark scenarios, I don't think I'll use Ollama much anymore.

So, should I:

  1. Drop obench.sh and the associated automation in the Pyinfra script?
  2. Leave it in place and do nothing with it?
  3. Move the functionality (running multiple benchmark runs and compiling the results into a markdown table) into ai-benchmarks.py? (Rough sketch below.)
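
For reference, a minimal sketch of what option 3 could look like inside ai-benchmarks.py, assuming llama-bench's JSON output mode (`-o json`) and its `avg_ts` (average tokens/sec) field; the model path and run count here are placeholders, not anything from the repo:

```python
#!/usr/bin/env python3
"""Sketch: repeat llama-bench runs and compile a markdown table (like obench.sh did)."""
import json
import statistics
import subprocess

RUNS = 3  # benchmark repetitions per model (placeholder)
MODELS = ["llama-3.2-3b-q4_0.gguf"]  # hypothetical model paths

def bench(model: str) -> float:
    """Run llama-bench once and return the mean tokens/sec across its tests."""
    out = subprocess.run(
        ["llama-bench", "-m", model, "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    results = json.loads(out)  # llama-bench emits a JSON array of test results
    # "avg_ts" field name assumed from llama-bench's JSON output
    return statistics.mean(r["avg_ts"] for r in results)

rows = []
for model in MODELS:
    samples = [bench(model) for _ in range(RUNS)]
    rows.append((model, statistics.mean(samples), statistics.stdev(samples)))

# Compile results into a markdown table for pasting into issue comments/PRs.
print("| Model | Avg tokens/s | Std dev |")
print("| --- | --- | --- |")
for model, avg, sd in rows:
    print(f"| {model} | {avg:.2f} | {sd:.2f} |")
```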
