About fallback models and environment files #135
Replies: 9 comments 3 replies
-
I support your point; having to input my API key every time I run it is annoying.
-
imo we should use `.env` for transparency, convenience, and security.
-
I think having the desired model as an environment variable is a great idea and would not add much complexity or many lines of code. Additionally, support for loading a local `.env` file would add value and could be implemented while respecting the goal of minimal lines of code. Reliable fallback logic has already been added to support users without access to gpt-4, which is great because it should work "out of the box" for anyone with an OpenAI account, but I think it's reasonable that users shouldn't have to read and edit code to change something that could easily be treated as a setting.

Using `python-dotenv`, adding just two lines of code to `main.py`:

```python
from dotenv import load_dotenv  # note: the import is load_dotenv, not load_env

load_dotenv()
```

and changing the default argument in `main()` to

```python
model: str = os.getenv("GPT_ENGINEER_MODEL", "gpt-4")
```

would keep current behavior while adding support for a configurable model via a `.env` file or the host env var `GPT_ENGINEER_MODEL` (I think `MODEL` is too vague an env var name; it might be safer to namespace it a bit). If no env var is set, it would still try gpt-4, falling back on gpt-3.5-turbo. The README.md would need to be updated to reflect the new env var and the support for a `.env` file. For one dependency and two more total lines of code, this seems like a low-cost, high-value feature to add.
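For concreteness, here is a stdlib-only sketch of that lookup (the `load_dotenv()` call is left out since all it does is populate `os.environ` from a `.env` file; `GPT_ENGINEER_MODEL` is the name proposed here, not an existing variable):

```python
import os

def default_model() -> str:
    # Proposed behavior: honor GPT_ENGINEER_MODEL if set,
    # otherwise keep the current default of gpt-4.
    return os.getenv("GPT_ENGINEER_MODEL", "gpt-4")

# With the variable unset, current behavior is preserved.
os.environ.pop("GPT_ENGINEER_MODEL", None)
print(default_model())  # gpt-4
```

Setting the variable, either in the host environment or via `.env`, overrides the default without touching the code.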
-
+1 for me.
-
Just leaving a trail to the previous PR on `.env` stuff: #29. Here is my fork of this project with `.env` support, just to keep it around as a reference in case anyone is interested: https://github.com/yhyu13/gpt-engineer/tree/local_dev
-
Big agree. This is how my fork works, as it is more cost-efficient to switch models depending on the task. Also, it's just more accessible in general, in my opinion. +1
-
Surprised by so many preferring env variables! Why not just pass the model in as an argument? Is it because you run it from an IDE / VS Code?
This comment was marked as off-topic.
This comment was marked as off-topic.
-
Given that gpt-engineer also accepts a model as an argument, there could be a discrepancy between the two, e.g. supplying one model via the env var and a different one on the command line; it would have to be clear which one takes precedence.

Additionally, regarding model fallback logic: my first contribution to this project was implementing a hard-coded fallback to gpt-3.5-turbo, since I don't have access to gpt-4 and, at the time, I believe there was no way to provide an alternative model to gpt-engineer without changing the code yourself. My intention was to make it work "out of the box" for anyone, regardless of their API key. Now that we can provide the model as a CLI param, I think we should rethink that pattern. If a user pip-installs gpt-engineer, sets up their OpenAI key env var, and runs it, it should still just work.
-
Hi everyone,
There have been numerous questions, issues and PRs (#14, #22, #29 , #41, #42, #83, #92, #106, #144) regarding these matters, meaning there is definitely some interest and/or use cases.
I think we should make a final decision, and in order to do that, a discussion would be valuable.
My two cents: if we adopted a `.env` file, each of us could set our preferred model there, which, for example, solves the fallback issue where some people want to use `gpt-4` and others can't and have to use `gpt-3.5-turbo` (or `-16k` or `-16k-0613`).

I personally like the idea of each of us tweaking things like this instead of handling them in the code while trying to fulfil everyone's needs; each user can be responsible for their own settings, if you will. So if I want to use a model that will soon be deprecated, like the new `-0613` ones, it is on me to properly modify my `.env` or `settings.py`.

Also, many users have asked about the issue where they forgot to `export` their key, and honestly, from my point of view, it is just annoying having to do the `export` every time I open a terminal to run GPT Engineer. This solution would also solve that.

I am aware that @AntonOsika (repo owner) is not convinced about this. Just to be clear, maybe `.env` is not the way; maybe a proper `settings.py` file would do the trick. But I think we need something to cover these cases.

I just thought we should discuss openly here and get some feedback from the community. Please keep your comments on the matter and don't get too sidetracked if possible.

Let's discuss!
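For illustration, the kind of `.env` I have in mind would look something like this (a sketch; the variable names are hypothetical, not an agreed convention):

```shell
# Hypothetical .env example — variable names are illustrative only.
# No more `export` in every new terminal:
OPENAI_API_KEY=sk-...
# Pick whichever model your account has access to:
GPT_ENGINEER_MODEL=gpt-3.5-turbo-16k
```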