[Improvement] Support load_dotenv #120

Merged: 14 commits, Mar 19, 2024
22 changes: 21 additions & 1 deletion Quickstart.md
@@ -4,14 +4,34 @@ Before running the evaluation script, you need to **configure** the VLMs and set

After that, you can use a single script, `run.py`, to run inference and evaluation for multiple VLMs and benchmarks at the same time.

## Step0. Installation
## Step0. Installation & Set Up Essential Keys

**Installation.**

```bash
git clone https://github.com/open-compass/VLMEvalKit.git
cd VLMEvalKit
pip install -e .
```

**Set Up Keys.**

- To infer with API models (GPT-4v, Gemini-Pro-V, etc.) or use LLM APIs as the **judge or choice extractor**, you first need to set up API keys. You can place the required keys in `$VLMEvalKit/.env` or set them directly as environment variables. If you choose to create a `.env` file, its content will look like this:

```bash
# The .env file, place it under $VLMEvalKit
# Alles-apin-token, for intra-org use only
ALLES=
# API Keys of Proprietary VLMs
DASHSCOPE_API_KEY=
GOOGLE_API_KEY=
OPENAI_API_KEY=
OPENAI_API_BASE=
STEPAI_API_KEY=
```

- Fill in the blanks with your API keys (if necessary). These keys are loaded automatically during inference and evaluation. If you prefer plain environment variables, see the shell sketch below.
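
If you would rather not create a `.env` file, the same variables can be exported directly in your shell before running the scripts. A minimal sketch (all values are placeholders):

```bash
# Export the keys in your shell instead of using a .env file
export OPENAI_API_KEY=sk-...      # placeholder, substitute your real key
export OPENAI_API_BASE=...        # placeholder, only needed for a custom endpoint
export GOOGLE_API_KEY=...         # placeholder
export DASHSCOPE_API_KEY=...      # placeholder
```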

## Step1. Configuration

**VLM Configuration**: All VLMs are configured in `vlmeval/config.py`. For some VLMs, you need to set the code root (MiniGPT-4, PandaGPT, etc.) or the model_weight root (LLaVA-v1-7B, etc.) before running the evaluation. During evaluation, use the model name specified in `supported_VLM` in `vlmeval/config.py` to select the VLM. For MiniGPT-4 and InstructBLIP, you also need to modify the config files in `vlmeval/vlm/misc` to set the LLM path and checkpoint path.
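
As a rough illustration of the name-based selection (a minimal sketch; `supported_VLM` maps registered names to model constructors, and `'<model_name>'` is a placeholder for one of its keys):

```python
# Minimal sketch: build a VLM by the name registered in supported_VLM.
from vlmeval.config import supported_VLM

model_name = '<model_name>'  # placeholder: use a real key from supported_VLM
assert model_name in supported_VLM, f'{model_name} is not a supported VLM'
model = supported_VLM[model_name]()  # entries are callables that build the model
```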
1 change: 1 addition & 0 deletions requirements.txt
@@ -11,6 +11,7 @@ pandas>=1.5.3
pillow
portalocker
pycocoevalcap
python-dotenv
requests
rich
seaborn
1 change: 1 addition & 0 deletions run.py
@@ -144,4 +144,5 @@ def main():


if __name__ == '__main__':
    load_env()
    main()
2 changes: 2 additions & 0 deletions vlmeval/__init__.py
@@ -9,3 +9,5 @@
from .utils import *
from .vlm import *
from .config import *

load_env()
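
With this line, merely importing the package triggers `load_env()`, so keys from `$VLMEvalKit/.env` become available process-wide. A small sketch of the effect (assuming `OPENAI_API_KEY` is set in your `.env`):

```python
import os

import vlmeval  # load_env() runs as a side effect of the import

# The key is now in the process environment if it was set in $VLMEvalKit/.env
print(os.environ.get('OPENAI_API_KEY'))
```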
2 changes: 2 additions & 0 deletions vlmeval/evaluate/misc.py
@@ -1,10 +1,12 @@
import os
from vlmeval.api import OpenAIWrapper, OpenAIWrapperInternal
from vlmeval.smp import load_env

INTERNAL = os.environ.get('INTERNAL', 0)


def build_judge(version, **kwargs):
    load_env()
    model_map = {
        'gpt-4-turbo': 'gpt-4-1106-preview',
        'gpt-4-0613': 'gpt-4-0613',
21 changes: 21 additions & 0 deletions vlmeval/smp/misc.py
@@ -147,3 +147,24 @@ def run_command(cmd):
    if isinstance(cmd, str):
        cmd = cmd.split()
    return subprocess.check_output(cmd)

def load_env():
    # Locate the installed vlmeval package; without it we cannot find the .env file.
    try:
        import vlmeval
    except ImportError:
        warnings.warn('VLMEval is not installed. Failed to import environment variables from .env file.')
        return
    # The .env file is expected one level above the package directory ($VLMEvalKit/.env).
    pth = osp.realpath(vlmeval.__path__[0])
    pth = osp.join(pth, '../.env')
    pth = osp.realpath(pth)
    if not osp.exists(pth):
        warnings.warn(f'Did not detect the .env file at {pth}, failed to load.')
        return

    # Read key-value pairs from the .env file and export the non-empty ones.
    from dotenv import dotenv_values
    values = dotenv_values(pth)
    for k, v in values.items():
        if v is not None and len(v):
            os.environ[k] = v
    print(f'API Keys successfully loaded from {pth}')
    return
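
A quick usage sketch (assuming the package is installed and a populated `.env` file sits at the repository root):

```python
import os

from vlmeval.smp import load_env

load_env()  # prints 'API Keys successfully loaded from ...' on success
print(os.environ.get('OPENAI_API_KEY'))  # populated if present in $VLMEvalKit/.env
```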