make openai baseUrl configurable #4
Conversation
Thanks a bunch for opening this PR! Could you share a video of how these would look? And some use cases? Thanks again! 😃
I don't know what you mean by "a video of how these would look", but essentially some tools provide an OpenAI-compatible API for companies that run their own AI in their own datacenter for privacy/compliance reasons. An example would be LiteLLM: https://github.com/BerriAI/litellm
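As a sketch of that setup, assuming the openai npm client: the proxy address below is LiteLLM's default local port, and the key handling is a placeholder, not project code.

```ts
import OpenAI from "openai";

// Point the standard OpenAI client at a self-hosted LiteLLM proxy instead of
// api.openai.com. The URL and key are placeholders; LiteLLM validates keys
// it issues itself.
const client = new OpenAI({
  baseURL: "http://localhost:4000",
  apiKey: process.env.LITELLM_API_KEY ?? "sk-placeholder",
});
```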
But that wouldn't work, given that the tool is hardcoded to use either DALL·E or gpt-image-1. While DeepSeek and Grok can be used via the OpenAI library, that wouldn't apply here. The only use case I can see is for users wanting to use this with gpt-image-1 or DALL·E through Azure OpenAI, which requires a different baseUrl.

Edit: sorry, didn't notice this is already a month old. btw @betomoedano I've been playing with this all morning, amazing prompt engineering to return such beautiful logos/icons. Thank you!
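For the Azure case specifically, a sketch of how the different base URL comes in, assuming the AzureOpenAI client from the openai npm package; the resource name, deployment, and API version below are placeholders:

```ts
import { AzureOpenAI } from "openai";

// Azure OpenAI serves requests from https://<resource>.openai.azure.com and
// addresses models by deployment name, so the default api.openai.com base
// URL cannot be used. All identifiers here are placeholders.
const client = new AzureOpenAI({
  endpoint: "https://my-resource.openai.azure.com",
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: "2024-06-01",
  deployment: "gpt-image-1",
});
```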
Thanks for the comments! At the moment I don't have the time to explore adding those features, but please feel free to fork the project and make your own version. Thank you!
Saw this project linked from "The React Native Rewind": https://thereactnativerewind.com/issues-blog-post/micro-bundles-we-barely-understand-overdosing-on-liquid-glass-and-a-flutter-hit-piece. Looked pretty neat. I've been experimenting with running LLMs locally, and doing so effectively, with OpenAI-compatible tool integrations via https://lmstudio.ai/docs/app/api/endpoints/openai. So this PR would be pretty sweet in combination with selecting different models: it could generate icon packs without ever leaving my local machine or local network, with no API key or money needed. So that's the use case anyway: local/private/free LLM usage via local OpenAI-compatible endpoints. Yes, I know I can fork it and add that, but this PR seems to already do the trick, mostly.
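A sketch of that local setup, assuming the openai npm client; http://localhost:1234/v1 is LM Studio's documented default server address:

```ts
import OpenAI from "openai";

// LM Studio exposes an OpenAI-compatible server on localhost:1234 by
// default. A local server ignores the key, so any non-empty string works.
const client = new OpenAI({
  baseURL: "http://localhost:1234/v1",
  apiKey: "lm-studio",
});
```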
I think there is a conceptual misunderstanding. LLMs don't generate images, only text. This library is built around the model gpt-image-1. Replacing the endpoint would not make it work with local models. You could run Stable Diffusion, Flux dev, or WAN locally, but even if you can expose these models through the OpenAI connector, you would need to look at the code where it makes a call specifically to the gpt-image-1 model. I hope this clears up some of the confusion about why this PR isn't as feasible as it looks.
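To illustrate the point, a sketch of the kind of call in question (not the project's actual code): even with a configurable base URL, the model is still requested by name, so a local backend would have to answer for gpt-image-1 specifically unless the model id also became configurable.

```ts
import OpenAI from "openai";

// OPENAI_BASE_URL is an illustrative env var here, not this project's config.
const client = new OpenAI({ baseURL: process.env.OPENAI_BASE_URL });

// The model name is pinned: swapping the endpoint alone does not change
// which model the request asks for.
const result = await client.images.generate({
  model: "gpt-image-1",
  prompt: "a minimal app icon of a paper plane",
  size: "1024x1024",
});
```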
Sorry, I was using the wrong term (LLM). Yes, it would require fiddling with another part to select a different model, but I've been playing with image generation locally as well, trying to see how far I can go locally with private data and known energy consumption vs. cloud, subscriptions, and such.
To access custom OpenAI-compatible endpoints, I made the baseURL for the OpenAI client configurable.
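A minimal sketch of what that change amounts to, assuming the openai npm client; the env var name is illustrative, not necessarily what the PR's actual diff uses:

```ts
import OpenAI from "openai";

// Fall back to the public API when no custom endpoint is configured.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: process.env.OPENAI_BASE_URL ?? "https://api.openai.com/v1",
});
```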