chore: Consolidate Converters #58
Walkthrough

Adds a BaseConverter and ConversionHelper for alias/endpoint resolution, metadata extraction, and type/state/disk-size determination; refactors multiple provider converters to embed and use the base helper; adds unit tests and updates vLLM tests to use a public constructor.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Caller as Converter Caller
    participant Conv as Converter
    participant Base as BaseConverter
    participant Help as ConversionHelper
    Caller->>Conv: convert(model)
    Conv->>Base: NewConversionHelper(model)
    Base-->>Conv: helper (alias, endpoint, metadata)
    Conv->>Help: ShouldSkip()
    alt skip
        Help-->>Conv: true
        Conv-->>Caller: nil
    else proceed
        Help-->>Conv: false
        Conv->>Help: GetModelType(default)
        Conv->>Help: GetState(default)
        Conv->>Help: GetDiskSize()
        Conv->>Help: GetMetadataString/Int/Bool(...)
        Conv-->>Caller: provider-specific model object
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/adapter/converter/unified_converter.go (1)
164-171: Availability filter only treats “loaded” as available; include “available”/“ready” states

Current logic will exclude models that are “available” (used in vLLM tests) or “ready”. This can unintentionally hide usable models.

Apply this diff (note that in Go a bare `break` inside a `switch` only exits the switch, so the loop exit is checked explicitly):

```diff
-	isAvailable := false
-	for _, ep := range model.SourceEndpoints {
-		if ep.State == "loaded" {
-			isAvailable = true
-			break
-		}
-	}
+	isAvailable := false
+	for _, ep := range model.SourceEndpoints {
+		switch strings.ToLower(ep.State) {
+		case "loaded", "available", "ready":
+			isAvailable = true
+		}
+		if isAvailable {
+			break
+		}
+	}
 	return *available == isAvailable
```

Additionally, add the import:
```go
// at the top of the file
import (
	"strings"
	"time"
	...
)
```
🧹 Nitpick comments (18)
internal/adapter/converter/openai_converter.go (2)
55-61: Fix typo and clarify the rationale in the comment (alias vs alas)

The current comment has a typo and could better explain the “why”.

Apply this diff:

```diff
-	// OLLA-85: [Unification] Models with different digests fail to unify correctly.
-	// we need to use first alas as ID for routing compatibility
-	// to make sure the returned model ID can be used for requests
+	// OLLA-85: [Unification] Models with different digests fail to unify correctly.
+	// Use the first alias as the ID for routing compatibility so the returned ID
+	// matches what callers can actually use for requests.
```
63-69: Prefer stable timestamps derived from model metadata rather than time.Now()

Using the model’s LastSeen when available makes outputs deterministic across runs and aligns the “created” time with when Olla observed the model.

Apply this diff:

```diff
-func (c *OpenAIConverter) convertModel(model *domain.UnifiedModel) OpenAIModelData {
+func (c *OpenAIConverter) convertModel(model *domain.UnifiedModel) OpenAIModelData {
@@
-	return OpenAIModelData{
+	created := time.Now().Unix()
+	if !model.LastSeen.IsZero() {
+		created = model.LastSeen.Unix()
+	}
+	return OpenAIModelData{
 		ID:      modelID,
 		Object:  "model",
-		Created: time.Now().Unix(),
+		Created: created,
 		OwnedBy: "olla",
 	}
 }
```

internal/adapter/converter/vllm_converter_test.go (2)
210-211: Avoid potential panics by asserting the converter type in tests

Type assertions on an interface can panic if the constructor changes. Guard with require.

Apply this diff:

```diff
-	converter := NewVLLMConverter().(*VLLMConverter)
+	raw := NewVLLMConverter()
+	converter, ok := raw.(*VLLMConverter)
+	require.True(t, ok)
```

Do the same change for both occurrences.
Also applies to: 248-249
250-265: Add a test case where only SourceEndpoints carry the vLLM-native name

Given the converter prioritises provider-native names, cover the scenario where aliases lack a vLLM source but SourceEndpoints.NativeName has it. This helps prevent regressions if alias-based resolution changes.
Would you like me to add a focused test ensuring findVLLMNativeName falls back to vLLM endpoint native names when no vLLM alias exists?
Also applies to: 266-277, 279-292, 294-309
internal/adapter/converter/vllm_converter.go (1)
63-69: Capture time once to avoid micro-drift between fields and improve consistency

Minor, but using a single timestamp makes the object internally consistent and slightly improves test stability.

Apply this diff:

```diff
-	vllmModel := &profile.VLLMModel{
+	now := time.Now().Unix()
+	vllmModel := &profile.VLLMModel{
 		ID:      modelID,
 		Object:  "model",
-		Created: time.Now().Unix(),
+		Created: now,
 		OwnedBy: c.determineOwner(modelID),
 		Root:    modelID, // vLLM typically sets root to the model ID
 	}
@@
-	ID:      "modelperm-olla-" + strings.ReplaceAll(modelID, "/", "-"),
+	ID:      "modelperm-olla-" + strings.NewReplacer("/", "-", ":", "-", " ", "-").Replace(modelID),
 	Object:  "model_permission",
-	Created: time.Now().Unix(),
+	Created: now,
```

Also applies to: 79-83
internal/adapter/converter/unified_converter.go (3)
82-88: Fix typo and clarify the rationale in the comment (alias vs alas)

Clarify why the alias is used to prevent future confusion.

Apply this diff:

```diff
-	// OLLA-85: [Unification] Models with different digests fail to unify correctly.
-	// we need to use first alas as ID for routing compatibility
-	// to make sure the returned model ID can be used for requests
+	// OLLA-85: [Unification] Models with different digests fail to unify correctly.
+	// Use the first alias as the ID for routing compatibility so the returned ID
+	// matches what callers can actually use for requests.
```
90-106: Prefer stable timestamps derived from model metadata rather than time.Now()

Using LastSeen when present better reflects when Olla observed the model.

Apply this diff:

```diff
-	return UnifiedModelData{
+	created := time.Now().Unix()
+	if !model.LastSeen.IsZero() {
+		created = model.LastSeen.Unix()
+	}
+	return UnifiedModelData{
 		ID:      modelID,
 		Object:  "model",
-		Created: time.Now().Unix(),
+		Created: created,
 		OwnedBy: "olla",
```
183-194: Avoid constructing a new BaseConverter with an empty provider for type determination

Passing an empty provider may disable provider-specific heuristics. Given this is the unified converter, use the unified provider or reuse an existing instance.

Apply this minimal diff:

```diff
-	// Use base converter to determine model type
-	base := NewBaseConverter("")
-	modelType := base.DetermineModelType(model, "")
+	// Use base converter to determine model type (explicit provider for clarity)
+	base := NewBaseConverter("unified")
+	modelType := base.DetermineModelType(model, "")
```

Longer-term, consider making matchesTypeFilter a method on UnifiedConverter and using c.BaseConverter directly to avoid per-call allocations.
Happy to submit a small follow-up refactor PR if you prefer.
internal/adapter/converter/ollama_converter.go (2)
67-72: Confirm helper.ShouldSkip() semantics to avoid dropping legitimate models

Please double-check that ShouldSkip() only filters out models that truly lack an Ollama-resolvable alias/endpoint and won’t exclude models that are available but only via digest or endpoint-native names.
If helpful, I can add unit tests that prove we don’t skip digest-only models.
111-113: Comment says “uppercase” but code returns the input; implement the intended behaviour

Make the default branch actually uppercase the quantisation string.

Apply this diff:

```diff
-	// Default: uppercase the quantization
-	return quant
+	// Default: uppercase the quantisation
+	return strings.ToUpper(quant)
```

And add the import:

```go
import (
	"strings"
	"time"
	...
)
```

internal/adapter/converter/base_converter_test.go (3)
149-182
: Metadata extraction tests read well; consider adding a couple of edge casesTwo worthwhile additions to harden behaviour:
- json.Number and int64 support for integers (common with bespoke JSON unmarshalling).
- Confirm behaviour when metadata contains an empty string, zero, or false (explicitly stored defaults).
Happy to draft these if you want them in this PR.
184-264: Helper behaviour is covered; add a case for empty endpoint state

You verify defaulting when the endpoint is nil. Add a case where an endpoint exists but State == "" to pin the “use default” path.

```diff
@@ func TestConversionHelper(t *testing.T) {
 	t.Run("with valid ollama model", func(t *testing.T) { ... })
+	t.Run("with endpoint but empty state uses default", func(t *testing.T) {
+		model := &domain.UnifiedModel{
+			Aliases: []domain.AliasEntry{{Name: "llama3:latest", Source: "ollama"}},
+			SourceEndpoints: []domain.SourceEndpoint{{
+				EndpointURL: "http://localhost:11434",
+				NativeName:  "llama3:latest",
+				State:       "",
+			}},
+		}
+		helper := base.NewConversionHelper(model)
+		assert.Equal(t, "unknown", helper.GetState("unknown"))
+	})
```
266-334: Type resolution tests are clear; add cross-provider alias guard

Given the provider coupling, add one small test where BaseConverter is for “lmstudio” and the model has an “lmstudio” alias (mirroring your Ollama case) to ensure ShouldSkip() returns false there as well.

```diff
@@ func TestBaseConverter_DetermineModelType(t *testing.T) {
 }
+
+func TestConversionHelper_LMStudioAliasAllowsConversion(t *testing.T) {
+	base := NewBaseConverter("lmstudio")
+	model := &domain.UnifiedModel{
+		Aliases: []domain.AliasEntry{{Name: "microsoft/phi-4", Source: "lmstudio"}},
+		SourceEndpoints: []domain.SourceEndpoint{{
+			EndpointURL: "http://localhost:1234",
+			NativeName:  "microsoft/phi-4",
+			State:       "loaded",
+		}},
+	}
+	helper := base.NewConversionHelper(model)
+	assert.False(t, helper.ShouldSkip())
+	assert.Equal(t, "microsoft/phi-4", helper.Alias)
+}
```

internal/adapter/converter/lmstudio_converter.go (1)
61-85: Lean helper-driven conversion; minor clarity improvement

The flow is sound: gate on alias, derive type, pick publisher, state from endpoint, and forward the max context length. Consider adding a short comment explaining why alias-gating is used (prevents leaking non-LM Studio models into the LM Studio response). This aligns with our “explain why” guideline.

```diff
 func (c *LMStudioConverter) convertModel(model *domain.UnifiedModel) *LMStudioModelData {
 	helper := c.BaseConverter.NewConversionHelper(model)
-	if helper.ShouldSkip() {
+	// Skip models that don’t have an LM Studio alias to avoid surfacing
+	// models native to other providers in an LM Studio response.
+	if helper.ShouldSkip() {
 		return nil
 	}
```

internal/adapter/converter/base_converter.go (4)
19-27
: Add nil-guards to prevent accidental panics on unexpected inputsWhile upstream callers generally provide non-nil models/metadata, lightweight checks make these helpers safer and document intent.
func (b *BaseConverter) FindProviderAlias(model *domain.UnifiedModel) (string, bool) { + if model == nil || model.Aliases == nil { + return "", false + } for _, alias := range model.Aliases { @@ func (b *BaseConverter) FindProviderEndpoint(model *domain.UnifiedModel, providerName string) *domain.SourceEndpoint { - for i := range model.SourceEndpoints { + if model == nil || providerName == "" || model.SourceEndpoints == nil { + return nil + } + for i := range model.SourceEndpoints { @@ func (b *BaseConverter) DetermineModelType(model *domain.UnifiedModel, defaultType string) string { + if model == nil { + return defaultType + } // Check metadata first @@ func (b *BaseConverter) NewConversionHelper(model *domain.UnifiedModel) *ConversionHelper { + if model == nil { + return &ConversionHelper{BaseConverter: b} + } alias, _ := b.FindProviderAlias(model) @@ func (h *ConversionHelper) GetMetadataString(key string) string { - return h.ExtractMetadataString(h.Model.Metadata, key) + if h.Model == nil { + return "" + } + return h.ExtractMetadataString(h.Model.Metadata, key) } @@ func (h *ConversionHelper) GetMetadataInt(key string) int { - return h.ExtractMetadataInt(h.Model.Metadata, key) + if h.Model == nil { + return 0 + } + return h.ExtractMetadataInt(h.Model.Metadata, key) } @@ func (h *ConversionHelper) GetMetadataBool(key string) bool { - return h.ExtractMetadataBool(h.Model.Metadata, key) + if h.Model == nil { + return false + } + return h.ExtractMetadataBool(h.Model.Metadata, key) }Also applies to: 29-38, 88-105, 114-125, 147-160
48-57: Broaden numeric handling in ExtractMetadataInt

Support int64/uint64/json.Number to match varied decoding paths; optional rounding vs truncation can be documented if needed.

```diff
-import (
-	"github.com/thushan/olla/internal/core/domain"
-)
+import (
+	"encoding/json"
+	"math"
+	"strconv"
+
+	"github.com/thushan/olla/internal/core/domain"
+)
@@
 func (b *BaseConverter) ExtractMetadataInt(metadata map[string]interface{}, key string) int {
-	if val, ok := metadata[key].(int); ok {
+	if metadata == nil {
+		return 0
+	}
+	switch v := metadata[key].(type) {
+	case int:
+		return v
+	case int64:
+		return int(v)
+	case uint64:
+		return int(v)
+	case float64:
+		return int(v)
+	case json.Number:
+		if i64, err := v.Int64(); err == nil {
+			return int(i64)
+		}
+		if f64, err := v.Float64(); err == nil {
+			return int(f64)
+		}
+	case string:
+		if i64, err := strconv.ParseInt(v, 10, 64); err == nil {
+			return int(i64)
+		}
+		if f64, err := strconv.ParseFloat(v, 64); err == nil {
+			return int(math.Trunc(f64))
+		}
 	}
-	if val, ok := metadata[key].(float64); ok {
-		return int(val)
-	}
 	return 0
 }
```

Note: reflect could cover every integer width, but narrowing the cases to the handful you actually expect (int, int64, float64, json.Number, string) keeps the code readable and avoids the extra import.
29-38: Prefer a loaded endpoint when multiple match

If multiple endpoints share the same native name, choose a “loaded” endpoint first; otherwise fall back to the first match.

```diff
 func (b *BaseConverter) FindProviderEndpoint(model *domain.UnifiedModel, providerName string) *domain.SourceEndpoint {
-	for i := range model.SourceEndpoints {
-		ep := &model.SourceEndpoints[i]
-		if providerName != "" && ep.NativeName == providerName {
-			return ep
-		}
-	}
-	return nil
+	if model == nil || providerName == "" || len(model.SourceEndpoints) == 0 {
+		return nil
+	}
+	var firstMatch *domain.SourceEndpoint
+	for i := range model.SourceEndpoints {
+		ep := &model.SourceEndpoints[i]
+		if ep.NativeName == providerName {
+			if ep.State == "loaded" {
+				return ep
+			}
+			if firstMatch == nil {
+				firstMatch = ep
+			}
+		}
+	}
+	return firstMatch
 }
```
75-81: State: consider honouring typed ModelState when present

domain.SourceEndpoint exposes ModelState; if populated, prefer mapping the typed value to a string before falling back to the legacy State field. This will reduce divergence when providers move to typed states.
If ModelState has a String() method or constants, we can wire it in. Want me to draft the mapping once I see the type?
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (8)

- internal/adapter/converter/base_converter.go (1 hunks)
- internal/adapter/converter/base_converter_test.go (1 hunks)
- internal/adapter/converter/lmstudio_converter.go (2 hunks)
- internal/adapter/converter/ollama_converter.go (2 hunks)
- internal/adapter/converter/openai_converter.go (1 hunks)
- internal/adapter/converter/unified_converter.go (2 hunks)
- internal/adapter/converter/vllm_converter.go (2 hunks)
- internal/adapter/converter/vllm_converter_test.go (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{go,md}
📄 CodeRabbit inference engine (CLAUDE.md)
Use Australian English in comments and documentation; write comments explaining why rather than what
Files:
internal/adapter/converter/base_converter.go
internal/adapter/converter/base_converter_test.go
internal/adapter/converter/vllm_converter.go
internal/adapter/converter/unified_converter.go
internal/adapter/converter/lmstudio_converter.go
internal/adapter/converter/vllm_converter_test.go
internal/adapter/converter/ollama_converter.go
internal/adapter/converter/openai_converter.go
🧬 Code graph analysis (8)

internal/adapter/converter/base_converter.go (1)
- internal/core/domain/unified_model.go (2): UnifiedModel (15-31), SourceEndpoint (34-44)

internal/adapter/converter/base_converter_test.go (2)
- internal/adapter/converter/base_converter.go (1): NewBaseConverter (13-17)
- internal/core/domain/unified_model.go (3): UnifiedModel (15-31), AliasEntry (9-12), SourceEndpoint (34-44)

internal/adapter/converter/vllm_converter.go (3)
- internal/adapter/converter/base_converter.go (2): BaseConverter (8-10), NewBaseConverter (13-17)
- internal/core/ports/model_converter.go (1): ModelResponseConverter (10-15)
- internal/core/constants/providers.go (1): ProviderTypeVLLM (8-8)

internal/adapter/converter/unified_converter.go (2)
- internal/adapter/converter/base_converter.go (2): BaseConverter (8-10), NewBaseConverter (13-17)
- internal/core/ports/model_converter.go (1): ModelResponseConverter (10-15)

internal/adapter/converter/lmstudio_converter.go (3)
- internal/adapter/converter/base_converter.go (2): BaseConverter (8-10), NewBaseConverter (13-17)
- internal/core/ports/model_converter.go (1): ModelResponseConverter (10-15)
- internal/core/constants/providers.go (1): ProviderPrefixLMStudio1 (17-17)

internal/adapter/converter/vllm_converter_test.go (1)
- internal/adapter/converter/vllm_converter.go (2): NewVLLMConverter (24-28), VLLMConverter (19-21)

internal/adapter/converter/ollama_converter.go (3)
- internal/adapter/converter/base_converter.go (2): BaseConverter (8-10), NewBaseConverter (13-17)
- internal/core/ports/model_converter.go (1): ModelResponseConverter (10-15)
- internal/core/constants/providers.go (1): ProviderTypeOllama (4-4)

internal/adapter/converter/openai_converter.go (2)
- internal/adapter/converter/base_converter.go (2): BaseConverter (8-10), NewBaseConverter (13-17)
- internal/core/ports/model_converter.go (1): ModelResponseConverter (10-15)
🔇 Additional comments (10)
internal/adapter/converter/openai_converter.go (1)
25-33: Embedding BaseConverter is a sound step towards consolidation

Good alignment with the new shared base; no functional risk introduced here.
internal/adapter/converter/unified_converter.go (1)
146-157: Confirm whether endpoint filter should match on name vs URL

Elsewhere we avoid surfacing URLs for security. If filters.Endpoint is intended to be an endpoint name, update the comparison to EndpointName to keep behaviour consistent.

If names are intended, the inner comparison should be:

```go
if ep.EndpointName == endpoint {
	return true
}
```

Let me know and I can patch it accordingly with tests.
internal/adapter/converter/ollama_converter.go (2)
34-42: Embedding BaseConverter aligns Ollama with the new shared path

Good consolidation — this simplifies future enhancements and consistency across providers.

73-84: Validate that both Name and Model should be set to the alias

Some Ollama clients expect “model” to reflect the underlying digest, while “name” is the human-friendly tag. If your API consumers rely on digest, consider mapping Model to the digest and Name to the alias.

If required, this minimal change would do it:

```diff
-	Name:  helper.Alias,
-	Model: helper.Alias,
+	Name:  helper.Alias,
+	Model: helper.GetMetadataString("digest"),
```

internal/adapter/converter/base_converter_test.go (2)
11-66: Solid table-driven coverage for alias resolution

Good breadth across populated, empty, and nil alias slices. This locks in provider-specific behaviour for “ollama”.

68-147: Endpoint matching tests are effective and realistic

The positive/negative cases, plus the empty-provider-name branch, exercise the pointer-to-slice-element pattern and avoid the classic range-variable-address bug. Nice.
internal/adapter/converter/lmstudio_converter.go (2)
28-36
: Good move embedding BaseConverterEmbedding keeps converters consistent and trims duplicated alias/endpoint logic. Constructor wiring to ProviderPrefixLMStudio1 looks right.
60-85
: No action required:denormalizeQuantization
is defined in the same packageThe
rg
search confirms thatdenormalizeQuantization
is implemented ininternal/adapter/converter/ollama_converter.go
and, since both files share theconverter
package, it will be visible tolmstudio_converter.go
. The code will compile successfully.internal/adapter/converter/base_converter.go (2)
31-36: Nice avoidance of range-variable address pitfall

Indexing the slice and taking &model.SourceEndpoints[i] is the correct pattern in Go; it avoids the common pointer-to-iteration-variable bug.
83-86: Resolved: hasCapability helper is defined in the same package

The unexported function hasCapability is implemented at internal/adapter/converter/unified_converter.go:209, so the call in BaseConverter.HasCapability is valid and the package will build as-is. No changes required.
This PR consolidates and refactors the converters so they can be reused and maintained now that we've got more than the original Ollama and OpenAI ones.