This serves as the run executor for OpenGPTs-platform. View the full Figma spec.
To elaborate: when you create an assistant with the `assistants-api`, then set up a thread and create a run with a command like

```python
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
```

a `run` object is created and sent to the RabbitMQ queue. This repository, which is already listening on that queue, consumes the request and executes the run by generating text and using tools in a loop until the user's message has been adequately responded to.
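The "generate text and use tools in a loop" step can be pictured as a simple generate-and-act loop. The sketch below is illustrative only, assuming hypothetical `generate_step` and `call_tool` helpers; it is not this repository's actual code.

```python
# Illustrative sketch of a run-executor loop.
# `generate_step` and `call_tool` are hypothetical stand-ins,
# not functions from this repository.

def execute_run(messages, generate_step, call_tool, max_steps=10):
    """Alternate between model generation and tool calls until done."""
    for _ in range(max_steps):
        step = generate_step(messages)  # model decides the next action
        if step["type"] == "tool_call":
            # Run the requested tool and feed its output back to the model
            result = call_tool(step["name"], step["arguments"])
            messages.append({"role": "tool", "content": result})
        else:
            # Final text answer: the run is complete
            messages.append({"role": "assistant", "content": step["content"]})
            return messages
    raise RuntimeError("run did not complete within max_steps")
```

In a real worker, `generate_step` would call the model and `call_tool` would invoke the configured tools, with results persisted back to the thread.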
- If you intend to use a custom version of `assistants-api`, we recommend OpenGPTs-platform/assistants-api; follow its quickstart guide to get the API running. Alternatively, you can use the official OpenAI Assistants API through `openai==1.13.4`, but you will have limited tool functionality.
- Copy the `.env.example` file to `.env` and fill in the required fields.
- Run `pre-commit install` for linting and formatting.
- (Recommended: create a virtual environment.) Install the dependencies using `pip install -r requirements.txt`.
- Run the run executor worker using `python run_executor_worker.py`. This will consume the RabbitMQ queue and execute the tasks.
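For intuition, a queued run request can be thought of as a small JSON payload carrying the thread and assistant IDs. The sketch below shows one way such a message might be decoded and dispatched; the payload shape and the `handle_run` callback are assumptions for illustration, not this repository's actual message format.

```python
import json

# Hypothetical message shape: {"thread_id": ..., "assistant_id": ...}.
# The worker's real payload format may differ.
def on_message(body: bytes, handle_run):
    """Decode a queued run request and dispatch it to the executor."""
    payload = json.loads(body)
    return handle_run(payload["thread_id"], payload["assistant_id"])

# Example: a fake handler that just echoes the IDs it received.
msg = json.dumps({"thread_id": "thread_abc", "assistant_id": "asst_xyz"}).encode()
result = on_message(msg, lambda thread_id, assistant_id: (thread_id, assistant_id))
```

In the actual worker, the callback registered with the RabbitMQ consumer would hand these IDs to the run-execution loop.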