Conversation
@thushan thushan commented Aug 23, 2025

This PR consolidates and refactors the converters so they can be reused and maintained now that we've got more than the original Ollama and OpenAI ones.

Summary by CodeRabbit

  • New Features

    • Consistent model metadata handling across providers (alias/endpoint resolution, type/state, disk size) and improved capability recognition (vision → vlm, embeddings).
  • Refactor

    • Centralised conversion utilities into a shared BaseConverter and ConversionHelper; applied to LM Studio, Ollama, OpenAI, Unified and vLLM converters for consistent behaviour.
  • Tests

    • Added unit tests for BaseConverter/ConversionHelper and updated vLLM converter tests/constructor.


coderabbitai bot commented Aug 23, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

Adds a BaseConverter and ConversionHelper for alias/endpoint resolution, metadata extraction, and type/state/disk-size determination; refactors multiple provider converters to embed and use the base helper; adds unit tests and updates vLLM tests to use a public constructor.

Changes

Cohort / File(s) Summary
Base utilities added
internal/adapter/converter/base_converter.go, internal/adapter/converter/base_converter_test.go
New BaseConverter and ConversionHelper types with constructors and helpers: FindProviderAlias, FindProviderEndpoint, metadata extractors (string/int/bool), GetEndpointDiskSize, DetermineModelState, HasCapability, DetermineModelType, and wrapper methods on ConversionHelper; comprehensive unit tests added.
LMStudio converter
internal/adapter/converter/lmstudio_converter.go
LMStudioConverter now embeds *BaseConverter; uses ConversionHelper for skip logic, alias, endpoint, model type, state, publisher and disk-size resolution; ID/fields sourced from helper.
Ollama converter
internal/adapter/converter/ollama_converter.go
OllamaConverter now embeds *BaseConverter; convertModel uses ConversionHelper for alias, endpoint, disk size, digest and skip logic; retains quantization denormalization.
OpenAI converter
internal/adapter/converter/openai_converter.go
OpenAIConverter now embeds *BaseConverter and is initialised with provider type; conversion logic otherwise unchanged.
Unified converter
internal/adapter/converter/unified_converter.go
UnifiedConverter embeds *BaseConverter and uses DetermineModelType for type filtering (fallback to capability checks for compatibility).
vLLM converter & tests
internal/adapter/converter/vllm_converter.go, internal/adapter/converter/vllm_converter_test.go
VLLMConverter now embeds *BaseConverter, NewVLLMConverter constructor added; alias resolution delegated to BaseConverter.FindProviderAlias; tests updated to use constructor and assert concrete type.
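The embedding pattern used throughout the table above can be sketched as follows. This is a simplified stand-in, not the repository's code: the real types live in `internal/core/domain` and `internal/adapter/converter`, and only alias resolution is shown.

```go
package main

import "fmt"

// Simplified stand-ins for the real domain types (domain.AliasEntry,
// domain.UnifiedModel); only alias resolution is sketched here.
type AliasEntry struct{ Name, Source string }
type UnifiedModel struct{ Aliases []AliasEntry }

// BaseConverter carries the provider context shared by every converter.
type BaseConverter struct{ provider string }

func NewBaseConverter(provider string) *BaseConverter {
	return &BaseConverter{provider: provider}
}

// FindProviderAlias returns the first alias sourced from this provider.
func (b *BaseConverter) FindProviderAlias(m *UnifiedModel) (string, bool) {
	for _, a := range m.Aliases {
		if a.Source == b.provider {
			return a.Name, true
		}
	}
	return "", false
}

// OllamaConverter embeds *BaseConverter, so the shared helpers are
// promoted onto it without duplicating alias/endpoint logic per provider.
type OllamaConverter struct{ *BaseConverter }

func main() {
	c := OllamaConverter{NewBaseConverter("ollama")}
	m := &UnifiedModel{Aliases: []AliasEntry{{Name: "llama3:latest", Source: "ollama"}}}
	alias, ok := c.FindProviderAlias(m) // call promoted from the embedded BaseConverter
	fmt.Println(alias, ok)              // llama3:latest true
}
```

Each provider converter then only adds its own response shaping on top of the promoted helpers.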

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Caller as Converter Caller
  participant Conv as Converter
  participant Base as BaseConverter
  participant Help as ConversionHelper

  Caller->>Conv: convert(model)
  Conv->>Base: NewConversionHelper(model)
  Base-->>Conv: helper (alias, endpoint, metadata)
  Conv->>Help: ShouldSkip()
  alt skip
    Help-->>Conv: true
    Conv-->>Caller: nil
  else proceed
    Help-->>Conv: false
    Conv->>Help: GetModelType(default)
    Conv->>Help: GetState(default)
    Conv->>Help: GetDiskSize()
    Conv->>Help: GetMetadataString/Int/Bool(...)
    Conv-->>Caller: provider-specific model object
  end
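The skip/proceed flow in the diagram reduces to a guard-then-extract shape. This sketch uses hypothetical field names rather than the actual ConversionHelper API:

```go
package main

import "fmt"

// Hypothetical helper mirroring the diagram's calls; the real
// ConversionHelper lives in internal/adapter/converter.
type ConversionHelper struct {
	Alias    string
	State    string
	DiskSize int64
}

// ShouldSkip gates conversion: no provider alias means the model
// doesn't belong in this provider's response.
func (h *ConversionHelper) ShouldSkip() bool { return h.Alias == "" }

// GetState falls back to a caller-supplied default when unresolved.
func (h *ConversionHelper) GetState(def string) string {
	if h.State == "" {
		return def
	}
	return h.State
}

type ModelData struct {
	ID    string
	State string
	Size  int64
}

// convert follows the diagram: gate on ShouldSkip, then pull fields.
func convert(h *ConversionHelper) *ModelData {
	if h.ShouldSkip() {
		return nil // skip branch: caller receives nil
	}
	return &ModelData{ID: h.Alias, State: h.GetState("unknown"), Size: h.DiskSize}
}

func main() {
	fmt.Println(convert(&ConversionHelper{}))                       // <nil>
	fmt.Println(convert(&ConversionHelper{Alias: "llama3:latest"})) // &{llama3:latest unknown 0}
}
```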

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • chore: Consolidate Converters #58 — Similar refactor adding BaseConverter/ConversionHelper and updating converters to embed and use it.
  • feat: API Redesign #33 — Related provider/constants and routing/profile changes referenced by the refactor (provider prefixes and constructor/init patterns).


📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between c4bbb98 and 9c03ec9.

📒 Files selected for processing (1)
  • internal/adapter/converter/vllm_converter.go (5 hunks)


@thushan thushan added the refactor Refactoring of code label Aug 23, 2025
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/adapter/converter/unified_converter.go (1)

164-171: Availability filter only treats “loaded” as available; include “available”/“ready” states

Current logic will exclude models that are “available” (used in vLLM tests) or “ready”. This can unintentionally hide usable models.

Apply this diff:

-	isAvailable := false
-	for _, ep := range model.SourceEndpoints {
-		if ep.State == "loaded" {
-			isAvailable = true
-			break
-		}
-	}
+	isAvailable := false
+	for _, ep := range model.SourceEndpoints {
+		switch strings.ToLower(ep.State) {
+		case "loaded", "available", "ready":
+			isAvailable = true
+		}
+		if isAvailable {
+			break
+		}
+	}
 	return *available == isAvailable

Additionally, add the import:

// at the top of the file
import (
    "time"
    "strings"
    ...
)
🧹 Nitpick comments (18)
internal/adapter/converter/openai_converter.go (2)

55-61: Fix typo and clarify the rationale in the comment (alias vs alas)

The current comment has a typo and could better explain the “why”.

Apply this diff:

-	// OLLA-85: [Unification] Models with different digests fail to unify correctly.
-	// we need to use first alas as ID for routing compatibility
-	// to make sure the returned model ID can be used for requests
+	// OLLA-85: [Unification] Models with different digests fail to unify correctly.
+	// Use the first alias as the ID for routing compatibility so the returned ID
+	// matches what callers can actually use for requests.

63-69: Prefer stable timestamps derived from model metadata rather than time.Now()

Using the model’s LastSeen when available makes outputs deterministic across runs and aligns the “created” time with when Olla observed the model.

Apply this diff:

-func (c *OpenAIConverter) convertModel(model *domain.UnifiedModel) OpenAIModelData {
+func (c *OpenAIConverter) convertModel(model *domain.UnifiedModel) OpenAIModelData {
@@
-	return OpenAIModelData{
+	created := time.Now().Unix()
+	if !model.LastSeen.IsZero() {
+		created = model.LastSeen.Unix()
+	}
+	return OpenAIModelData{
 		ID:      modelID,
 		Object:  "model",
-		Created: time.Now().Unix(),
+		Created: created,
 		OwnedBy: "olla",
 	}
 }
internal/adapter/converter/vllm_converter_test.go (2)

210-211: Avoid potential panics by asserting the converter type in tests

Type assertions on an interface can panic if the constructor changes. Guard with require.

Apply this diff:

-	converter := NewVLLMConverter().(*VLLMConverter)
+	raw := NewVLLMConverter()
+	converter, ok := raw.(*VLLMConverter)
+	require.True(t, ok)

Do the same change for both occurrences.

Also applies to: 248-249


250-265: Add a test case where only SourceEndpoints carry the vLLM-native name

Given the converter prioritises provider-native names, cover the scenario where aliases lack a vLLM source but SourceEndpoints.NativeName has it. This helps prevent regressions if alias-based resolution changes.

Would you like me to add a focused test ensuring findVLLMNativeName falls back to vLLM endpoint native names when no vLLM alias exists?

Also applies to: 266-277, 279-292, 294-309

internal/adapter/converter/vllm_converter.go (1)

63-69: Capture time once to avoid micro-drift between fields and improve consistency

Minor, but using a single timestamp makes the object internally consistent and slightly improves test stability.

Apply this diff:

-	vllmModel := &profile.VLLMModel{
+	now := time.Now().Unix()
+	vllmModel := &profile.VLLMModel{
 		ID:      modelID,
 		Object:  "model",
-		Created: time.Now().Unix(),
+		Created: now,
 		OwnedBy: c.determineOwner(modelID),
 		Root:    modelID, // vLLM typically sets root to the model ID
 	}
@@
-			ID:                 "modelperm-olla-" + strings.ReplaceAll(modelID, "/", "-"),
+			ID:                 "modelperm-olla-" + strings.NewReplacer("/", "-", ":", "-", " ", "-").Replace(modelID),
 			Object:             "model_permission",
-			Created:            time.Now().Unix(),
+			Created:            now,

Also applies to: 79-83

internal/adapter/converter/unified_converter.go (3)

82-88: Fix typo and clarify the rationale in the comment (alias vs alas)

Clarify why the alias is used to prevent future confusion.

Apply this diff:

-	// OLLA-85: [Unification] Models with different digests fail to unify correctly.
-	// we need to use first alas as ID for routing compatibility
-	// to make sure the returned model ID can be used for requests
+	// OLLA-85: [Unification] Models with different digests fail to unify correctly.
+	// Use the first alias as the ID for routing compatibility so the returned ID
+	// matches what callers can actually use for requests.

90-106: Prefer stable timestamps derived from model metadata rather than time.Now()

Using LastSeen when present better reflects when Olla observed the model.

Apply this diff:

-	return UnifiedModelData{
+	created := time.Now().Unix()
+	if !model.LastSeen.IsZero() {
+		created = model.LastSeen.Unix()
+	}
+	return UnifiedModelData{
 		ID:      modelID,
 		Object:  "model",
-		Created: time.Now().Unix(),
+		Created: created,
 		OwnedBy: "olla",

183-194: Avoid constructing a new BaseConverter with an empty provider for type determination

Passing an empty provider may disable provider-specific heuristics. Given this is the unified converter, use the unified provider or reuse an existing instance.

Apply this minimal diff:

-	// Use base converter to determine model type
-	base := NewBaseConverter("")
-	modelType := base.DetermineModelType(model, "")
+	// Use base converter to determine model type (explicit provider for clarity)
+	base := NewBaseConverter("unified")
+	modelType := base.DetermineModelType(model, "")

Longer-term, consider making matchesTypeFilter a method on UnifiedConverter and using c.BaseConverter directly to avoid per-call allocations.

Happy to submit a small follow-up refactor PR if you prefer.

internal/adapter/converter/ollama_converter.go (2)

67-72: Confirm helper.ShouldSkip() semantics to avoid dropping legitimate models

Please double-check that ShouldSkip() only filters out models that truly lack an Ollama-resolvable alias/endpoint and won’t exclude models that are available but only via digest or endpoint-native names.

If helpful, I can add unit tests that prove we don’t skip digest-only models.


111-113: Comment says “uppercase” but code returns the input; implement the intended behaviour

Make the default branch actually uppercase the quantisation string.

Apply this diff:

-	// Default: uppercase the quantization
-	return quant
+	// Default: uppercase the quantisation
+	return strings.ToUpper(quant)

And add the import:

import (
    "time"
    "strings"
    ...
)
internal/adapter/converter/base_converter_test.go (3)

149-182: Metadata extraction tests read well; consider adding a couple of edge cases

Two worthwhile additions to harden behaviour:

  • json.Number and int64 support for integers (common with bespoke JSON unmarshalling).
  • Confirm behaviour when metadata contains an empty string, zero, or false (explicitly stored defaults).

Happy to draft these if you want them in this PR.


184-264: Helper behaviour is covered; add a case for empty endpoint state

You verify defaulting when the endpoint is nil. Add a case where an endpoint exists but State == "" to pin the “use default” path.

@@ func TestConversionHelper(t *testing.T) {
   t.Run("with valid ollama model", func(t *testing.T) { ... })
 
+  t.Run("with endpoint but empty state uses default", func(t *testing.T) {
+    model := &domain.UnifiedModel{
+      Aliases: []domain.AliasEntry{{Name: "llama3:latest", Source: "ollama"}},
+      SourceEndpoints: []domain.SourceEndpoint{{
+        EndpointURL: "http://localhost:11434",
+        NativeName:  "llama3:latest",
+        State:       "",
+      }},
+    }
+    helper := base.NewConversionHelper(model)
+    assert.Equal(t, "unknown", helper.GetState("unknown"))
+  })

266-334: Type resolution tests are clear; add cross-provider alias guard

Given the provider coupling, add one small test where BaseConverter is for “lmstudio” and the model has an “lmstudio” alias (mirroring your Ollama case) to ensure ShouldSkip() returns false there as well.

@@ func TestBaseConverter_DetermineModelType(t *testing.T) {
 }
 
+func TestConversionHelper_LMStudioAliasAllowsConversion(t *testing.T) {
+  base := NewBaseConverter("lmstudio")
+  model := &domain.UnifiedModel{
+    Aliases: []domain.AliasEntry{{Name: "microsoft/phi-4", Source: "lmstudio"}},
+    SourceEndpoints: []domain.SourceEndpoint{{
+      EndpointURL: "http://localhost:1234",
+      NativeName:  "microsoft/phi-4",
+      State:       "loaded",
+    }},
+  }
+  helper := base.NewConversionHelper(model)
+  assert.False(t, helper.ShouldSkip())
+  assert.Equal(t, "microsoft/phi-4", helper.Alias)
+}
internal/adapter/converter/lmstudio_converter.go (1)

61-85: Lean helper-driven conversion; minor clarity improvement

The flow is sound: gate on alias, derive type, pick publisher, state from endpoint, and forward the max context length. Consider adding a short comment explaining why alias-gating is used (prevents leaking non-LM Studio models into the LM Studio response). This aligns with our “explain why” guideline.

 func (c *LMStudioConverter) convertModel(model *domain.UnifiedModel) *LMStudioModelData {
   helper := c.BaseConverter.NewConversionHelper(model)
 
-  if helper.ShouldSkip() {
+  // Skip models that don’t have an LM Studio alias to avoid surfacing
+  // models native to other providers in an LM Studio response.
+  if helper.ShouldSkip() {
     return nil
   }
internal/adapter/converter/base_converter.go (4)

19-27: Add nil-guards to prevent accidental panics on unexpected inputs

While upstream callers generally provide non-nil models/metadata, lightweight checks make these helpers safer and document intent.

 func (b *BaseConverter) FindProviderAlias(model *domain.UnifiedModel) (string, bool) {
+  if model == nil || model.Aliases == nil {
+    return "", false
+  }
   for _, alias := range model.Aliases {
@@
 func (b *BaseConverter) FindProviderEndpoint(model *domain.UnifiedModel, providerName string) *domain.SourceEndpoint {
-  for i := range model.SourceEndpoints {
+  if model == nil || providerName == "" || model.SourceEndpoints == nil {
+    return nil
+  }
+  for i := range model.SourceEndpoints {
@@
 func (b *BaseConverter) DetermineModelType(model *domain.UnifiedModel, defaultType string) string {
+  if model == nil {
+    return defaultType
+  }
   // Check metadata first
@@
 func (b *BaseConverter) NewConversionHelper(model *domain.UnifiedModel) *ConversionHelper {
+  if model == nil {
+    return &ConversionHelper{BaseConverter: b}
+  }
   alias, _ := b.FindProviderAlias(model)
@@
 func (h *ConversionHelper) GetMetadataString(key string) string {
-  return h.ExtractMetadataString(h.Model.Metadata, key)
+  if h.Model == nil {
+    return ""
+  }
+  return h.ExtractMetadataString(h.Model.Metadata, key)
 }
@@
 func (h *ConversionHelper) GetMetadataInt(key string) int {
-  return h.ExtractMetadataInt(h.Model.Metadata, key)
+  if h.Model == nil {
+    return 0
+  }
+  return h.ExtractMetadataInt(h.Model.Metadata, key)
 }
@@
 func (h *ConversionHelper) GetMetadataBool(key string) bool {
-  return h.ExtractMetadataBool(h.Model.Metadata, key)
+  if h.Model == nil {
+    return false
+  }
+  return h.ExtractMetadataBool(h.Model.Metadata, key)
 }

Also applies to: 29-38, 88-105, 114-125, 147-160


48-57: Broaden numeric handling in ExtractMetadataInt

Support int64/uint64/json.Number to match varied decoding paths; optional rounding vs truncation can be documented if needed.

-import (
-  "github.com/thushan/olla/internal/core/domain"
-)
+import (
+  "encoding/json"
+  "math"
+  "reflect"
+  "strconv"
+
+  "github.com/thushan/olla/internal/core/domain"
+)
@@
 func (b *BaseConverter) ExtractMetadataInt(metadata map[string]interface{}, key string) int {
-  if val, ok := metadata[key].(int); ok {
-    return val
-  }
-  if val, ok := metadata[key].(float64); ok {
-    return int(val)
-  }
+  if metadata == nil {
+    return 0
+  }
+  switch v := metadata[key].(type) {
+  case int:
+    return v
+  case int8, int16, int32:
+    return int(reflect.ValueOf(v).Int())
+  case int64:
+    return int(v)
+  case uint, uint8, uint16, uint32, uint64:
+    return int(reflect.ValueOf(v).Uint())
+  case float32:
+    return int(v)
+  case float64:
+    return int(v)
+  case json.Number:
+    if i64, err := v.Int64(); err == nil {
+      return int(i64)
+    }
+    if f64, err := v.Float64(); err == nil {
+      return int(f64)
+    }
+  case string:
+    if i64, err := strconv.ParseInt(v, 10, 64); err == nil {
+      return int(i64)
+    }
+    if f64, err := strconv.ParseFloat(v, 64); err == nil {
+      return int(math.Trunc(f64))
+    }
+  }
   return 0
 }

Note: If you prefer to avoid reflect for performance/readability, narrow the cases to the handful you expect (int, int64, float64, json.Number, string).


29-38: Prefer a loaded endpoint when multiple match

If multiple endpoints share the same native name, choose a “loaded” endpoint first; otherwise fall back to the first match.

 func (b *BaseConverter) FindProviderEndpoint(model *domain.UnifiedModel, providerName string) *domain.SourceEndpoint {
-  for i := range model.SourceEndpoints {
-    ep := &model.SourceEndpoints[i]
-    if providerName != "" && ep.NativeName == providerName {
-      return ep
-    }
-  }
-  return nil
+  if model == nil || providerName == "" || len(model.SourceEndpoints) == 0 {
+    return nil
+  }
+  var firstMatch *domain.SourceEndpoint
+  for i := range model.SourceEndpoints {
+    ep := &model.SourceEndpoints[i]
+    if ep.NativeName == providerName {
+      if ep.State == "loaded" {
+        return ep
+      }
+      if firstMatch == nil {
+        firstMatch = ep
+      }
+    }
+  }
+  return firstMatch
 }

75-81: State: consider honouring typed ModelState when present

domain.SourceEndpoint exposes ModelState; if populated, prefer mapping the typed value to a string before falling back to the legacy State field. This will reduce divergence when providers move to typed states.

If ModelState has a String() method or constants, we can wire it in. Want me to draft the mapping once I see the type?

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 33c2a04 and c4bbb98.

📒 Files selected for processing (8)
  • internal/adapter/converter/base_converter.go (1 hunks)
  • internal/adapter/converter/base_converter_test.go (1 hunks)
  • internal/adapter/converter/lmstudio_converter.go (2 hunks)
  • internal/adapter/converter/ollama_converter.go (2 hunks)
  • internal/adapter/converter/openai_converter.go (1 hunks)
  • internal/adapter/converter/unified_converter.go (2 hunks)
  • internal/adapter/converter/vllm_converter.go (2 hunks)
  • internal/adapter/converter/vllm_converter_test.go (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{go,md}

📄 CodeRabbit inference engine (CLAUDE.md)

Use Australian English in comments and documentation; write comments explaining why rather than what

Files:

  • internal/adapter/converter/base_converter.go
  • internal/adapter/converter/base_converter_test.go
  • internal/adapter/converter/vllm_converter.go
  • internal/adapter/converter/unified_converter.go
  • internal/adapter/converter/lmstudio_converter.go
  • internal/adapter/converter/vllm_converter_test.go
  • internal/adapter/converter/ollama_converter.go
  • internal/adapter/converter/openai_converter.go
🧬 Code graph analysis (8)
internal/adapter/converter/base_converter.go (1)
internal/core/domain/unified_model.go (2)
  • UnifiedModel (15-31)
  • SourceEndpoint (34-44)
internal/adapter/converter/base_converter_test.go (2)
internal/adapter/converter/base_converter.go (1)
  • NewBaseConverter (13-17)
internal/core/domain/unified_model.go (3)
  • UnifiedModel (15-31)
  • AliasEntry (9-12)
  • SourceEndpoint (34-44)
internal/adapter/converter/vllm_converter.go (3)
internal/adapter/converter/base_converter.go (2)
  • BaseConverter (8-10)
  • NewBaseConverter (13-17)
internal/core/ports/model_converter.go (1)
  • ModelResponseConverter (10-15)
internal/core/constants/providers.go (1)
  • ProviderTypeVLLM (8-8)
internal/adapter/converter/unified_converter.go (2)
internal/adapter/converter/base_converter.go (2)
  • BaseConverter (8-10)
  • NewBaseConverter (13-17)
internal/core/ports/model_converter.go (1)
  • ModelResponseConverter (10-15)
internal/adapter/converter/lmstudio_converter.go (3)
internal/adapter/converter/base_converter.go (2)
  • BaseConverter (8-10)
  • NewBaseConverter (13-17)
internal/core/ports/model_converter.go (1)
  • ModelResponseConverter (10-15)
internal/core/constants/providers.go (1)
  • ProviderPrefixLMStudio1 (17-17)
internal/adapter/converter/vllm_converter_test.go (1)
internal/adapter/converter/vllm_converter.go (2)
  • NewVLLMConverter (24-28)
  • VLLMConverter (19-21)
internal/adapter/converter/ollama_converter.go (3)
internal/adapter/converter/base_converter.go (2)
  • BaseConverter (8-10)
  • NewBaseConverter (13-17)
internal/core/ports/model_converter.go (1)
  • ModelResponseConverter (10-15)
internal/core/constants/providers.go (1)
  • ProviderTypeOllama (4-4)
internal/adapter/converter/openai_converter.go (2)
internal/adapter/converter/base_converter.go (2)
  • BaseConverter (8-10)
  • NewBaseConverter (13-17)
internal/core/ports/model_converter.go (1)
  • ModelResponseConverter (10-15)
🔇 Additional comments (10)
internal/adapter/converter/openai_converter.go (1)

25-33: Embedding BaseConverter is a sound step towards consolidation

Good alignment with the new shared base; no functional risk introduced here.

internal/adapter/converter/unified_converter.go (1)

146-157: Confirm whether endpoint filter should match on name vs URL

Elsewhere we avoid surfacing URLs for security. If filters.Endpoint is intended to be an endpoint name, update comparison to EndpointName to keep behaviour consistent.

If names are intended, the inner comparison should be:

if ep.EndpointName == endpoint {
    return true
}

Let me know and I can patch it accordingly with tests.

internal/adapter/converter/ollama_converter.go (2)

34-42: Embedding BaseConverter aligns Ollama with the new shared path

Good consolidation—this simplifies future enhancements and consistency across providers.


73-84: Validate that both Name and Model should be set to the alias

Some Ollama clients expect “model” to reflect the underlying digest, while “name” is the human-friendly tag. If your API consumers rely on digest, consider mapping Model to the digest and Name to the alias.

If required, this minimal change would do it:

-	Name:       helper.Alias,
-	Model:      helper.Alias,
+	Name:       helper.Alias,
+	Model:      helper.GetMetadataString("digest"),
internal/adapter/converter/base_converter_test.go (2)

11-66: Solid table-driven coverage for alias resolution

Good breadth across populated, empty, and nil alias slices. This locks in provider-specific behaviour for “ollama”.


68-147: Endpoint matching tests are effective and realistic

The positive/negative cases, plus empty-provider-name branch, exercise the pointer-to-slice-element pattern and avoid the classic range-variable-address bug. Nice.

internal/adapter/converter/lmstudio_converter.go (2)

28-36: Good move embedding BaseConverter

Embedding keeps converters consistent and trims duplicated alias/endpoint logic. Constructor wiring to ProviderPrefixLMStudio1 looks right.


60-85: No action required: denormalizeQuantization is defined in the same package

The rg search confirms that denormalizeQuantization is implemented in internal/adapter/converter/ollama_converter.go and, since both files share the converter package, it will be visible to lmstudio_converter.go. The code will compile successfully.

internal/adapter/converter/base_converter.go (2)

31-36: Nice avoidance of range-variable address pitfall

Indexing the slice and taking &model.SourceEndpoints[i] is the correct pattern in Go; avoids the common pointer-to-iteration-variable bug.


83-86: Resolved: hasCapability helper is defined in the same package

The unexported function hasCapability is implemented at internal/adapter/converter/unified_converter.go:209, so the call in BaseConverter.HasCapability is valid and the package will build as-is. No changes required.

@thushan thushan merged commit f5b50f0 into main Aug 23, 2025
5 of 6 checks passed
@thushan thushan deleted the feature/consolidate-converters branch August 23, 2025 10:47