Conversation

@Dentling commented Jul 1, 2025

To access custom OpenAI-compatible endpoints, I made the baseURL for openai customizable.
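For readers skimming the thread, the gist of the change can be sketched like this (a Python illustration of the concept only, not the project's actual TypeScript code; `OpenAIConfig` and `image_endpoint` are invented names):

```python
from dataclasses import dataclass

# Hypothetical sketch: the OpenAI base URL becomes a configurable setting
# instead of a hardcoded constant, so OpenAI-compatible proxies can be used.

DEFAULT_BASE_URL = "https://api.openai.com/v1"

@dataclass
class OpenAIConfig:
    api_key: str
    base_url: str = DEFAULT_BASE_URL  # overridable for OpenAI-compatible endpoints

def image_endpoint(cfg: OpenAIConfig) -> str:
    """Build the image-generation URL from the configurable base."""
    return cfg.base_url.rstrip("/") + "/images/generations"

# Default points at OpenAI; a proxy such as LiteLLM just swaps the base.
print(image_endpoint(OpenAIConfig(api_key="sk-...")))
# -> https://api.openai.com/v1/images/generations
print(image_endpoint(OpenAIConfig(api_key="sk-...", base_url="http://localhost:4000")))
# -> http://localhost:4000/images/generations
```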

@betomoedano (Owner)

Thanks a bunch for opening this PR!

Could you share a video of how these would look? and some use cases?

Thanks again! 😃

@Dentling (Author) commented Jul 5, 2025

I'm not sure what you mean by "a video of how these would look", but essentially some tools provide an OpenAI-compatible API for companies that run their own AI in their own datacenter for privacy/compliance reasons.
To use these, the baseURL has to be editable.

An example would be LiteLLM: https://github.com/BerriAI/litellm

@qazsero commented Aug 7, 2025

But that wouldn't work, given that the tool is hardcoded to use either DALL·E or gpt-image-1. While DeepSeek and Grok can be used via the OpenAI library, that wouldn't help here.

The only use case I can see is for users wanting to use this with gpt-image-1 or DALL·E through Azure OpenAI, which requires a different baseURL.

Edit: sorry, didn't notice this is already a month old. BTW @betomoedano, I've been playing with this all morning, and the prompt engineering that returns such beautiful logos/icons is amazing. Thank you!

@betomoedano (Owner)

Thanks for the comments!

At the moment I don't have the time to explore adding those features, but please feel free to fork the project and make your own version.

Thank you!

@mikehardy

Saw this project linked from "The React Native Rewind" https://thereactnativerewind.com/issues-blog-post/micro-bundles-we-barely-understand-overdosing-on-liquid-glass-and-a-flutter-hit-piece

It looked pretty neat. I've been experimenting with running LLMs locally, quite effectively, with OpenAI-compatible tool integrations via:

https://lmstudio.ai/docs/app/api/endpoints/openai

So this PR would be pretty sweet in combination with selecting different models: it could generate icon packs without ever leaving my local machine or local network, with no API key or money needed.

So that's the use case anyway: local / private / free LLM usage via local OpenAI-compatible endpoints.

Yes, I know I can fork it and add that, but this PR already seems to do most of the trick.

@qazsero commented Aug 18, 2025

I think there is a conceptual misunderstanding.

LLMs don't generate images, only text.
When ChatGPT creates an image, it does not use GPT-4o (or now 5); it calls a separate model named gpt-image-1.

This library is built around the model gpt-image-1. Replacing the endpoint alone would not make it work with local models. You could run Stable Diffusion, Flux dev, or WAN locally, but even if you expose those models through the OpenAI connector, you would still need to change the code where it makes a call specifically to the gpt-image-1 model.
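To illustrate that point (a hypothetical Python sketch, not the project's actual code): even with a custom base URL, the request body still pins the model name, so whatever backend sits at that URL must serve a model called gpt-image-1 unless the model field is also made configurable.

```python
import json

DEFAULT_BASE_URL = "https://api.openai.com/v1"

def build_image_request(prompt: str, base_url: str = DEFAULT_BASE_URL) -> tuple[str, str]:
    """Return (url, body). The model name is hardcoded, mirroring the issue above."""
    url = base_url.rstrip("/") + "/images/generations"
    body = json.dumps({"model": "gpt-image-1", "prompt": prompt, "size": "1024x1024"})
    return url, body

# Pointing at a local OpenAI-compatible server changes the URL...
url, body = build_image_request("minimalist app icon", base_url="http://localhost:1234/v1")
print(url)                        # -> http://localhost:1234/v1/images/generations
# ...but the payload still requests gpt-image-1, regardless of the endpoint.
print(json.loads(body)["model"])  # -> gpt-image-1
```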

I hope this clears up some of the confusion about why this PR isn't as simple as it looks.

@mikehardy
Sorry, I was using the wrong term (LLM). Yes, it would require fiddling with another part to select a different model, but I've been playing with image generation locally as well, trying to see how far I can go locally with private data and known energy consumption versus cloud and subscription services.
