
Add Support for Multiple Models #6

Open
JustinHsu1019 opened this issue Jan 8, 2025 · 2 comments

@JustinHsu1019
Contributor

Description

The project currently supports only a single model for generating Selenium scripts. Users may want to choose among different models (e.g., GPT-4, GPT-4-turbo, Gemini, Llama, or other API-backed models) for flexibility and potentially faster responses.

Proposed Solution

  • Modularize the model invocation logic by moving it into a separate file or module.
  • Add support for multiple models by creating a configuration file or menu where users can select the desired model.
  • Implement a simple extension menu for model selection with options to configure model-specific parameters (see the sketch after this list).
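
A minimal sketch of what the modularized invocation layer could look like, assuming a TypeScript extension codebase. The names here (`ModelConfig`, `generateSeleniumScript`, the provider values) are hypothetical, not the project's actual API; only the OpenAI branch is filled in, using the public chat-completions endpoint.

```ts
// model_provider.ts -- hypothetical sketch, not the project's real module layout.

export interface ModelConfig {
  provider: "openai" | "gemini" | "llama"; // assumed provider ids
  model: string;                           // e.g. "gpt-4" or "gpt-4-turbo"
  apiKey: string;
  temperature?: number;                    // example of a model-specific parameter
}

// Single entry point the rest of the extension would call instead of
// hard-coding one provider.
export async function generateSeleniumScript(
  prompt: string,
  config: ModelConfig
): Promise<string> {
  switch (config.provider) {
    case "openai": {
      // Standard OpenAI chat-completions request.
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${config.apiKey}`,
        },
        body: JSON.stringify({
          model: config.model,
          temperature: config.temperature ?? 0.2,
          messages: [{ role: "user", content: prompt }],
        }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    }
    default:
      // Other providers (Gemini, Llama, ...) would add their own cases here.
      throw new Error(`Provider not yet supported: ${config.provider}`);
  }
}
```

The extension menu would then only need to write a `ModelConfig` object to storage and pass it to this function, keeping provider-specific details out of the UI code.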
@guoriyue
Owner

We support GPT-4, GPT-3.5, and DeepSeek now.

@JustinHsu1019
Contributor Author

Great! Thank you~
