diff --git a/evi/evi-flutter/README.md b/evi/evi-flutter/README.md index cf0b863b..2c61703b 100644 --- a/evi/evi-flutter/README.md +++ b/evi/evi-flutter/README.md @@ -5,52 +5,54 @@ This project features a sample implementation of Hume's [Empathic Voice Interface](https://dev.hume.ai/docs/empathic-voice-interface-evi/overview) using Flutter. This is lightly adapted from the starter project provided by `flutter create`. -**Targets:** The example supports iOS, Android, and Web. +**Targets:** The example supports iOS, Android, and Web. -**Dependencies:** It uses the [record](https://pub.dev/packages/record) Flutter package for audio recording, and [audioplayers](https://pub.dev/packages/audioplayers) package for playback. +**Dependencies:** It uses the [record](https://pub.dev/packages/record) Flutter package for audio recording, and [audioplayers](https://pub.dev/packages/audioplayers) package for playback. ## Instructions 1. Clone this examples repository: - ```shell - git clone https://github.com/humeai/hume-api-examples - cd hume-api-examples/evi/flutter/evi-flutter - ``` + ```shell + git clone https://github.com/humeai/hume-api-examples + cd hume-api-examples/evi/evi-flutter + ``` 2. Install Flutter (if needed) following the [official guide](https://docs.flutter.dev/get-started/install). 3. Install dependencies: - ```shell - flutter pub get - ``` + + ```shell + flutter pub get + ``` 4. Set up your API key: - You must authenticate to use the EVI API. Your API key can be retrieved from the [Hume AI platform](https://platform.hume.ai/settings/keys). For detailed instructions, see our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). - - This example uses [flutter_dotenv](https://pub.dev/packages/flutter_dotenv). Place your API key in a `.env` file at the root of your project. + You must authenticate to use the EVI API. Your API key can be retrieved from the [Hume AI platform](https://platform.hume.ai/settings/keys).
For detailed instructions, see our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). + + This example uses [flutter_dotenv](https://pub.dev/packages/flutter_dotenv). Place your API key in a `.env` file at the root of your project. - ```shell - echo "HUME_API_KEY=your_api_key_here" > .env - ``` - - You can copy the `.env.example` file to use as a template. + ```shell + echo "HUME_API_KEY=your_api_key_here" > .env + ``` - **Note:** the `HUME_API_KEY` environment variable is for development only. In a production flutter app you should avoid building your api key into the app -- the client should fetch an access token from an endpoint on your server. You should supply the `MY_SERVER_AUTH_URL` environment variable and uncomment the call to `fetchAccessToken` in `lib/main.dart`. + You can copy the `.env.example` file to use as a template. + + **Note:** the `HUME_API_KEY` environment variable is for development only. In a production Flutter app you should avoid building your API key into the app -- the client should fetch an access token from an endpoint on your server. You should supply the `MY_SERVER_AUTH_URL` environment variable and uncomment the call to `fetchAccessToken` in `lib/main.dart`. 5. Specify an EVI configuration (Optional): - EVI is pre-configured with a set of default values, which are automatically applied if you do not specify a configuration. The default configuration includes a preset voice and language model, but does not include a system prompt or tools. To customize these options, you will need to create and specify your own EVI configuration. To learn more, see our [configuration guide](https://dev.hume.ai/docs/empathic-voice-interface-evi/configuration/build-a-configuration). + EVI is pre-configured with a set of default values, which are automatically applied if you do not specify a configuration.
The default configuration includes a preset voice and language model, but does not include a system prompt or tools. To customize these options, you will need to create and specify your own EVI configuration. To learn more, see our [configuration guide](https://dev.hume.ai/docs/empathic-voice-interface-evi/configuration/build-a-configuration). - ```shell - echo "HUME_CONFIG_ID=your_config_id_here" >> .env - ``` + ```shell + echo "HUME_CONFIG_ID=your_config_id_here" >> .env + ``` 6. Run the app: - ```shell - flutter run - ``` + + ```shell + flutter run + ``` 7. If you are using the Android emulator, make sure to send audio to the emulator from the host. @@ -58,8 +60,8 @@ This project features a sample implementation of Hume's [Empathic Voice Interfac ## Notes -* **Echo cancellation**. Echo cancellation is important for a good user experience using EVI. Without echo cancellation, EVI will detect its own speech as user interruptions, and will cut itself off and become incoherent. This flutter example *requests* echo cancellation from the browser or the device's operating system, but echo cancellation is hardware-dependent and may not be provided in all environments. - * Echo cancellation works consistently on physical iOS devices and on the web. - * Echo cancellation works on some physical Android devices. - * Echo cancellation doesn't seem to work using the iOS simulator or Android Emulator when forwarding audio from the host. - * If you need to test using a simulator or emulator, or in an environment where echo cancellation is not provided, use headphones, or enable the mute button while EVI is speaking. +- **Echo cancellation**. Echo cancellation is important for a good user experience using EVI. Without echo cancellation, EVI will detect its own speech as user interruptions, and will cut itself off and become incoherent. 
This Flutter example _requests_ echo cancellation from the browser or the device's operating system, but echo cancellation is hardware-dependent and may not be provided in all environments. + - Echo cancellation works consistently on physical iOS devices and on the web. + - Echo cancellation works on some physical Android devices. + - Echo cancellation doesn't seem to work using the iOS simulator or Android Emulator when forwarding audio from the host. + - If you need to test using a simulator or emulator, or in an environment where echo cancellation is not provided, use headphones, or enable the mute button while EVI is speaking. diff --git a/evi/evi-next-js-app-router-quickstart/README.md b/evi/evi-next-js-app-router-quickstart/README.md index 450a7f2e..3be700e8 100644 --- a/evi/evi-next-js-app-router-quickstart/README.md +++ b/evi/evi-next-js-app-router-quickstart/README.md @@ -22,49 +22,50 @@ Below are the steps to completing deployment: 1. Create a Git Repository for your project. 2. Provide the required environment variables. To get your API key and Secret key, log into the Hume AI Platform and visit the [API keys page](https://platform.hume.ai/settings/keys). -## Modify the project +## Modify the project 1. Clone this examples repository: - ```shell - git clone https://github.com/humeai/hume-api-examples - cd hume-api-examples/evi/next-js/evi-next-js-app-router-quickstart - ``` + ```shell + git clone https://github.com/humeai/hume-api-examples + cd hume-api-examples/evi/evi-next-js-app-router-quickstart + ``` 2. Install dependencies: - ```shell - npm install - ``` + + ```shell + npm install + ``` 3. Set up your API key and Secret key: - In order to make an authenticated connection we will first need to generate an access token. Doing so will require your API key and Secret key. These keys can be obtained by logging into the Hume AI Platform and visiting the [API keys page](https://platform.hume.ai/settings/keys).
For detailed instructions, see our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). - - Place your `HUME_API_KEY` and `HUME_SECRET_KEY` in a `.env` file at the root of your project. + In order to make an authenticated connection we will first need to generate an access token. Doing so will require your API key and Secret key. These keys can be obtained by logging into the Hume AI Platform and visiting the [API keys page](https://platform.hume.ai/settings/keys). For detailed instructions, see our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). + + Place your `HUME_API_KEY` and `HUME_SECRET_KEY` in a `.env` file at the root of your project. - ```shell - echo "HUME_API_KEY=your_api_key_here" > .env - echo "HUME_SECRET_KEY=your_secret_key_here" >> .env - ``` + ```shell + echo "HUME_API_KEY=your_api_key_here" > .env + echo "HUME_SECRET_KEY=your_secret_key_here" >> .env + ``` - You can copy the `.env.example` file to use as a template. + You can copy the `.env.example` file to use as a template. 4. Specify an EVI configuration (Optional): EVI is pre-configured with a set of default values, which are automatically applied if you do not specify a configuration. The default configuration includes a preset voice and language model, but does not include a system prompt or tools. To customize these options, you will need to create and specify your own EVI configuration. To learn more, see our [configuration guide](https://dev.hume.ai/docs/empathic-voice-interface-evi/configuration/build-a-configuration). - - You may pass in a configuration ID to the `VoiceProvider` object inside the [components/Chat.tsx file](https://github.com/HumeAI/hume-api-examples/blob/main/evi/next-js/evi-next-js-app-router-quickstart/components/Chat.tsx). 
- Here's an example: - ```tsx - - ``` + You may pass in a configuration ID to the `VoiceProvider` object inside the [components/Chat.tsx file](https://github.com/HumeAI/hume-api-examples/blob/main/evi/next-js/evi-next-js-app-router-quickstart/components/Chat.tsx). -5. Run the project: - ```shell - npm run dev - ``` + Here's an example: + ```tsx + + ``` + +5. Run the project: + ```shell + npm run dev + ``` diff --git a/evi/evi-next-js-function-calling/README.md b/evi/evi-next-js-function-calling/README.md index 0e66bbff..b43abe42 100644 --- a/evi/evi-next-js-function-calling/README.md +++ b/evi/evi-next-js-function-calling/README.md @@ -13,85 +13,87 @@ See the [Tool Use guide](https://dev.hume.ai/docs/empathic-voice-interface-evi/f 1. [Create a tool](https://dev.hume.ai/docs/empathic-voice-interface-evi/tool-use#create-a-tool) with the following payload: - ```json - { - "name": "get_current_weather", - "description": "This tool is for getting the current weather.", - "parameters": "{ \"type\": \"object\", \"properties\": { \"location\": { \"type\": \"string\", \"description\": \"The city and state, e.g. San Francisco, CA\" }, \"format\": { \"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"], \"description\": \"The temperature unit to use. Infer this from the users location.\" } }, \"required\": [\"location\", \"format\"] }" - } - ``` + ```json + { + "name": "get_current_weather", + "description": "This tool is for getting the current weather.", + "parameters": "{ \"type\": \"object\", \"properties\": { \"location\": { \"type\": \"string\", \"description\": \"The city and state, e.g. San Francisco, CA\" }, \"format\": { \"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"], \"description\": \"The temperature unit to use. Infer this from the users location.\" } }, \"required\": [\"location\", \"format\"] }" + } + ``` 2. 
[Create a configuration](https://dev.hume.ai/docs/empathic-voice-interface-evi/tool-use#create-a-configuration) equipped with that tool: - ```json - { - "name": "Weather Assistant Config", - "language_model": { - "model_provider": "ANTHROPIC", - "model_resource": "claude-3-5-sonnet-20240620", - }, - "tools": [ - { - "id": "", - "version": 0 - } - ] - } - ``` + ```json + { + "name": "Weather Assistant Config", + "language_model": { + "model_provider": "ANTHROPIC", + "model_resource": "claude-3-5-sonnet-20240620" + }, + "tools": [ + { + "id": "", + "version": 0 + } + ] + } + ``` ## Instructions 1. Clone this examples repository: - ```shell - git clone https://github.com/humeai/hume-api-examples - cd hume-api-examples/evi/next-js/evi-next-js-function-calling - ``` + ```shell + git clone https://github.com/humeai/hume-api-examples + cd hume-api-examples/evi/evi-next-js-function-calling + ``` 2. Install dependencies: - ```shell - pnpm install - ``` + + ```shell + pnpm install + ``` 3. Set up your API key and Secret key: - In order to make an authenticated connection we will first need to generate an access token. Doing so will require your API key and Secret key. These keys can be obtained by logging into the Hume AI Platform and visiting the [API keys page](https://platform.hume.ai/settings/keys). For detailed instructions, see our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). - - Place your `HUME_API_KEY` and `HUME_SECRET_KEY` in a `.env` file at the root of your project. + In order to make an authenticated connection we will first need to generate an access token. Doing so will require your API key and Secret key. These keys can be obtained by logging into the Hume AI Platform and visiting the [API keys page](https://platform.hume.ai/settings/keys). For detailed instructions, see our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). 
+ + Place your `HUME_API_KEY` and `HUME_SECRET_KEY` in a `.env` file at the root of your project. - ```shell - echo "HUME_API_KEY=your_api_key_here" > .env - echo "HUME_SECRET_KEY=your_secret_key_here" >> .env - ``` + ```shell + echo "HUME_API_KEY=your_api_key_here" > .env + echo "HUME_SECRET_KEY=your_secret_key_here" >> .env + ``` - You can copy the `.env.example` file to use as a template. + You can copy the `.env.example` file to use as a template. 4. Add your Config ID to the `.env` file. This ID is from the EVI configuration you created earlier that includes your weather tool. - ```shell - echo "NEXT_PUBLIC_HUME_CONFIG_ID=your_config_id_here" >> .env - ``` + ```shell + echo "NEXT_PUBLIC_HUME_CONFIG_ID=your_config_id_here" >> .env + ``` 5. Add the Geocoding API key to the `.env` file. You can obtain it for free from [geocode.maps.co](https://geocode.maps.co/). - ```shell - echo "GEOCODING_API_KEY=your_geocoding_api_key_here" >> .env - ``` + ```shell + echo "GEOCODING_API_KEY=your_geocoding_api_key_here" >> .env + ``` 6. Run the project: - ```shell - pnpm run dev - ``` - This will start the Next.js development server, and you can access the application at `http://localhost:3000`. + ```shell + pnpm run dev + ``` + + This will start the Next.js development server, and you can access the application at `http://localhost:3000`. ## Example Conversation Here's an example of how you might interact with the EVI to get weather information: -*User: "What's the weather like in New York City?"* +_User: "What's the weather like in New York City?"_ -*EVI: (Uses the get_current_weather tool to fetch data) "Currently in New York City, it's 72°F (22°C) and partly cloudy. The forecast calls for a high of 78°F (26°C) and a low of 65°F (18°C) today."* +_EVI: (Uses the get_current_weather tool to fetch data) "Currently in New York City, it's 72°F (22°C) and partly cloudy. 
The forecast calls for a high of 78°F (26°C) and a low of 65°F (18°C) today."_ ## License diff --git a/evi/evi-next-js-pages-router-quickstart/README.md b/evi/evi-next-js-pages-router-quickstart/README.md index cb123d30..26a452f1 100644 --- a/evi/evi-next-js-pages-router-quickstart/README.md +++ b/evi/evi-next-js-pages-router-quickstart/README.md @@ -22,38 +22,39 @@ Below are the steps to completing deployment: 1. Create a Git Repository for your project. 2. Provide the required environment variables. To get your API key and Secret key, log into the Hume AI Platform and visit the [API keys page](https://platform.hume.ai/settings/keys). -## Modify the project +## Modify the project 1. Clone this examples repository: - ```shell - git clone https://github.com/humeai/hume-api-examples - cd hume-api-examples/evi/next-js/evi-next-js-pages-router-quickstart - ``` + ```shell + git clone https://github.com/humeai/hume-api-examples + cd hume-api-examples/evi/evi-next-js-pages-router-quickstart + ``` 2. Install dependencies: - ```shell - pnpm install - ``` + + ```shell + pnpm install + ``` 3. Set up your API key and Secret key: - In order to make an authenticated connection we will first need to generate an access token. Doing so will require your API key and Secret key. These keys can be obtained by logging into the Hume AI Platform and visiting the [API keys page](https://platform.hume.ai/settings/keys). For detailed instructions, see our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). - - Place your `HUME_API_KEY` and `HUME_SECRET_KEY` in a `.env` file at the root of your project. + In order to make an authenticated connection we will first need to generate an access token. Doing so will require your API key and Secret key. These keys can be obtained by logging into the Hume AI Platform and visiting the [API keys page](https://platform.hume.ai/settings/keys). 
For detailed instructions, see our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). + + Place your `HUME_API_KEY` and `HUME_SECRET_KEY` in a `.env` file at the root of your project. - ```shell - echo "HUME_API_KEY=your_api_key_here" > .env - echo "HUME_SECRET_KEY=your_secret_key_here" >> .env - ``` + ```shell + echo "HUME_API_KEY=your_api_key_here" > .env + echo "HUME_SECRET_KEY=your_secret_key_here" >> .env + ``` - You can copy the `.env.example` file to use as a template. + You can copy the `.env.example` file to use as a template. 4. Specify an EVI configuration (Optional): - EVI is pre-configured with a set of default values, which are automatically applied if you do not specify a configuration. The default configuration includes a preset voice and language model, but does not include a system prompt or tools. To customize these options, you will need to create and specify your own EVI configuration. To learn more, see our [configuration guide](https://dev.hume.ai/docs/empathic-voice-interface-evi/configuration/build-a-configuration). - - You may pass in a configuration ID to the `VoiceProvider` object inside the [components/Chat.tsx file](https://github.com/HumeAI/hume-api-examples/blob/main/evi/next-js/evi-next-js-pages-router-quickstart/components/Chat.tsx). + EVI is pre-configured with a set of default values, which are automatically applied if you do not specify a configuration. The default configuration includes a preset voice and language model, but does not include a system prompt or tools. To customize these options, you will need to create and specify your own EVI configuration. To learn more, see our [configuration guide](https://dev.hume.ai/docs/empathic-voice-interface-evi/configuration/build-a-configuration). 
+ + You may pass in a configuration ID to the `VoiceProvider` object inside the [components/Chat.tsx file](https://github.com/HumeAI/hume-api-examples/blob/main/evi/next-js/evi-next-js-pages-router-quickstart/components/Chat.tsx). Here's an example: ```tsx @@ -64,8 +65,6 @@ Below are the steps to completing deployment: ``` 5. Run the project: - ```shell - pnpm run dev - ``` - - + ```shell + pnpm run dev + ``` diff --git a/evi/evi-python-chat-history/README.md b/evi/evi-python-chat-history/README.md index d2a3abb0..44750112 100644 --- a/evi/evi-python-chat-history/README.md +++ b/evi/evi-python-chat-history/README.md @@ -23,63 +23,66 @@ 1. Clone this examples repository: - ```shell - git clone https://github.com/humeai/hume-api-examples - cd hume-api-examples/evi/python/evi-python-chat-history - ``` + ```shell + git clone https://github.com/humeai/hume-api-examples + cd hume-api-examples/evi/evi-python-chat-history + ``` 2. Verify Poetry is installed (version 1.7.1 or higher): - Check your version: - ```sh - poetry --version - ``` + Check your version: - If you need to update or install Poetry, follow the instructions on the [official Poetry website](https://python-poetry.org/). + ```sh + poetry --version + ``` + + If you need to update or install Poetry, follow the instructions on the [official Poetry website](https://python-poetry.org/). 3. Set up your API key: - You must authenticate to use the EVI API. Your API key can be retrieved from the [Hume AI platform](https://platform.hume.ai/settings/keys). For detailed instructions, see our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). - - Place your API key in a `.env` file at the root of your project. + You must authenticate to use the EVI API. Your API key can be retrieved from the [Hume AI platform](https://platform.hume.ai/settings/keys). For detailed instructions, see our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). 
+ + Place your API key in a `.env` file at the root of your project. - ```shell - echo "HUME_API_KEY=your_api_key_here" > .env - ``` + ```shell + echo "HUME_API_KEY=your_api_key_here" > .env + ``` - You can copy the `.env.example` file to use as a template. + You can copy the `.env.example` file to use as a template. 4. Specify the Chat ID: - In the main function within `main.py`, set the `CHAT_ID` variable to the target conversation ID: + In the main function within `main.py`, set the `CHAT_ID` variable to the target conversation ID: - ```python - async def main(): - # Replace with your actual Chat ID - CHAT_ID = "" - # ... - ``` + ```python + async def main(): + # Replace with your actual Chat ID + CHAT_ID = "" + # ... + ``` - This determines which Chat's events to fetch and process. + This determines which Chat's events to fetch and process. 5. Install dependencies: - ```sh - poetry install - ``` + + ```sh + poetry install + ``` 6. Run the project: - ```sh - poetry run python main.py - ``` - #### What happens when run: + ```sh + poetry run python main.py + ``` + + #### What happens when run: - - The script fetches all events for the specified `CHAT_ID`. - - It generates a `transcript_.txt` file containing the user and assistant messages with timestamps. - - It logs the top 3 average emotions to the console: + - The script fetches all events for the specified `CHAT_ID`. + - It generates a `transcript_.txt` file containing the user and assistant messages with timestamps. + - It logs the top 3 average emotions to the console: - ```sh - Top 3 Emotions: {'Joy': 0.7419108072916666, 'Interest': 0.63111979166666666, 'Amusement': 0.63061116536458334} - ``` + ```sh + Top 3 Emotions: {'Joy': 0.7419108072916666, 'Interest': 0.63111979166666666, 'Amusement': 0.63061116536458334} + ``` - (These keys and scores are just examples; the actual output depends on the Chat's content.) 
\ No newline at end of file + (These keys and scores are just examples; the actual output depends on the Chat's content.) diff --git a/evi/evi-python-clm-sse/README.md b/evi/evi-python-clm-sse/README.md index 13c1f311..39270945 100644 --- a/evi/evi-python-clm-sse/README.md +++ b/evi/evi-python-clm-sse/README.md @@ -21,28 +21,30 @@ A Python client library for handling Server-Sent Events (SSE) with Hume Custom L 1. Clone this examples repository: - ```shell - git clone https://github.com/humeai/hume-api-examples - cd hume-api-examples/evi/python/evi-python-clm-sse - ``` + ```shell + git clone https://github.com/humeai/hume-api-examples + cd hume-api-examples/evi/evi-python-clm-sse + ``` 2. Verify Poetry is installed (version 1.7.1 or higher): - Check your version: - ```sh - poetry --version - ``` + Check your version: - If you need to update or install Poetry, follow the instructions on the [official Poetry website](https://python-poetry.org/). + ```sh + poetry --version + ``` + + If you need to update or install Poetry, follow the instructions on the [official Poetry website](https://python-poetry.org/). 3. Install dependencies: - ```sh - poetry install - ``` + + ```sh + poetry install + ``` 4. Run the server: - ```sh - poetry run python openai_sse.py - ``` + ```sh + poetry run python openai_sse.py + ``` Spin it up behind ngrok and use the ngrok URL in your config. diff --git a/evi/evi-python-function-calling/README.md b/evi/evi-python-function-calling/README.md index 6506b7ea..fdc5c384 100644 --- a/evi/evi-python-function-calling/README.md +++ b/evi/evi-python-function-calling/README.md @@ -22,163 +22,163 @@ It does not currently support Windows. 1. Clone this examples repository: - ```shell - git clone https://github.com/humeai/hume-api-examples - cd hume-api-examples/evi/python/evi-python-function-calling - ``` + ```shell + git clone https://github.com/humeai/hume-api-examples + cd hume-api-examples/evi/evi-python-function-calling + ``` 2. 
Set up a virtual environment (Optional): - - It's recommended to isolate dependencies in a virtual environment. Choose one of the following methods: - - - **Using `conda`** (requires [Miniconda](https://docs.anaconda.com/miniconda/) or [Anaconda](https://www.anaconda.com/)): - ```bash - conda create --name evi-env python=3.11 - conda activate evi-env - ``` + It's recommended to isolate dependencies in a virtual environment. Choose one of the following methods: - - **Using built-in `venv`** (available with Python 3.3+): + - **Using `conda`** (requires [Miniconda](https://docs.anaconda.com/miniconda/) or [Anaconda](https://www.anaconda.com/)): - ```bash - python -m venv evi-env - source evi-env/bin/activate - ``` + ```bash + conda create --name evi-env python=3.11 + conda activate evi-env + ``` + + - **Using built-in `venv`** (available with Python 3.3+): + + ```bash + python -m venv evi-env + source evi-env/bin/activate + ``` After activating the environment, proceed with installing dependencies. 3. Set up environment variables: - This project uses `python-dotenv` to load your API credentials securely from a `.env` file. + This project uses `python-dotenv` to load your API credentials securely from a `.env` file. - 1. Install the package: + 1. Install the package: - ```bash - pip install python-dotenv - ``` + ```bash + pip install python-dotenv + ``` - 2. Copy the `.env.example` file to use as a template: + 2. Copy the `.env.example` file to use as a template: - ```shell - cp .env.example .env - ``` + ```shell + cp .env.example .env + ``` - 2. Place your API keys inside: + 3. Place your API keys inside: - - Visit the [API keys page](https://platform.hume.ai/settings/keys) on the Hume Platform to retrieve your API keys. See our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). - - Upon doing so, the `.env` file becomes a persistent local store of your API key, Secret key, and EVI config ID. 
The `.gitignore` file contains local env file paths so that they are not committed to GitHub. + - Visit the [API keys page](https://platform.hume.ai/settings/keys) on the Hume Platform to retrieve your API keys. See our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). + - Upon doing so, the `.env` file becomes a persistent local store of your API key, Secret key, and EVI config ID. The `.gitignore` file contains local env file paths so that they are not committed to GitHub. 4. Install dependencies: - Install the Hume Python SDK with microphone support: + Install the Hume Python SDK with microphone support: - ```bash - pip install "hume[microphone]" - ``` + ```bash + pip install "hume[microphone]" + ``` - For audio playback and processing, additional system-level dependencies are required. Below are download instructions for each supported operating system: + For audio playback and processing, additional system-level dependencies are required. Below are download instructions for each supported operating system: - #### macOS + #### macOS - To ensure audio playback functionality, you will need to install `ffmpeg`, a powerful multimedia framework that handles audio and video processing. + To ensure audio playback functionality, you will need to install `ffmpeg`, a powerful multimedia framework that handles audio and video processing. - One of the most common ways to install `ffmpeg` on macOS is by using [Homebrew](https://brew.sh/). Homebrew is a popular package manager for macOS that simplifies the installation of software by automating the process of downloading, compiling, and setting up packages. + One of the most common ways to install `ffmpeg` on macOS is by using [Homebrew](https://brew.sh/). Homebrew is a popular package manager for macOS that simplifies the installation of software by automating the process of downloading, compiling, and setting up packages. 
- To install `ffmpeg` using Homebrew, follow these steps: + To install `ffmpeg` using Homebrew, follow these steps: - 1. Install Homebrew onto your system according to the instructions on the [Homebrew website](https://brew.sh/). + 1. Install Homebrew onto your system according to the instructions on the [Homebrew website](https://brew.sh/). - 2. Once Homebrew is installed, you can install `ffmpeg` with: - ```bash - brew install ffmpeg - ``` + 2. Once Homebrew is installed, you can install `ffmpeg` with: + ```bash + brew install ffmpeg + ``` - If you prefer not to use Homebrew, you can download a pre-built `ffmpeg` binary directly from the [FFmpeg website](https://ffmpeg.org/download.html) or use other package managers like [MacPorts](https://www.macports.org/). + If you prefer not to use Homebrew, you can download a pre-built `ffmpeg` binary directly from the [FFmpeg website](https://ffmpeg.org/download.html) or use other package managers like [MacPorts](https://www.macports.org/). - #### Linux + #### Linux - On Linux systems, you will need to install a few additional packages to support audio input/output and playback: + On Linux systems, you will need to install a few additional packages to support audio input/output and playback: - - `libasound2-dev`: This package contains development files for the ALSA (Advanced Linux Sound Architecture) sound system. - - `libportaudio2`: PortAudio is a cross-platform audio I/O library that is essential for handling audio streams. - - `ffmpeg`: Required for processing audio and video files. + - `libasound2-dev`: This package contains development files for the ALSA (Advanced Linux Sound Architecture) sound system. + - `libportaudio2`: PortAudio is a cross-platform audio I/O library that is essential for handling audio streams. + - `ffmpeg`: Required for processing audio and video files. 
- To install these dependencies, use the following commands: + To install these dependencies, use the following commands: - ```bash - sudo apt-get --yes update - sudo apt-get --yes install libasound2-dev libportaudio2 ffmpeg - ``` + ```bash + sudo apt-get --yes update + sudo apt-get --yes install libasound2-dev libportaudio2 ffmpeg + ``` - #### Windows + #### Windows - Not yet supported. + Not yet supported. 5. **Set up EVI configuration** - Before running this project, you'll need to set up EVI with the ability to leverage tools or call functions. Follow these steps for authentication, creating a Tool, and adding it to a configuration. - - > See our documentation on [Setup for Tool Use](https://dev.hume.ai/docs/empathic-voice-interface-evi/tool-use#setup) for no-code and full-code guides on creating a tool and adding it to a configuration. - - - [Create a tool](https://dev.hume.ai/reference/empathic-voice-interface-evi/tools/create-tool) with the following payload: - - ```bash - curl -X POST https://api.hume.ai/v0/evi/tools \ - -H "X-Hume-Api-Key: " \ - -H "Content-Type: application/json" \ - -d '{ - "name": "get_current_weather", - "parameters": "{ \"type\": \"object\", \"properties\": { \"location\": { \"type\": \"string\", \"description\": \"The city and state, e.g. San Francisco, CA\" }, \"format\": { \"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"], \"description\": \"The temperature unit to use. Infer this from the users location.\" } }, \"required\": [\"location\", \"format\"] }", - "version_description": "Fetches current weather and uses celsius or fahrenheit based on location of user.", - "description": "This tool is for getting the current weather.", - "fallback_content": "Unable to fetch current weather." - }' - ``` - - This will yield a Tool ID, which you can assign to a new EVI configuration. 
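At runtime, EVI sends the client a tool-call message whose parameters arrive as a JSON-encoded string matching the schema in the payload above. A hypothetical local handler, with the actual weather lookup stubbed out behind a callback, might look like this (the message shape and helper are illustrative, not the SDK's API):

```python
# Hypothetical handler for a get_current_weather tool call; the field names loosely
# mirror EVI's tool_call payloads, and the weather lookup itself is stubbed out.
import json


def handle_weather_tool_call(parameters_json: str, lookup) -> str:
    """Parse the tool call's JSON-encoded parameters and return a spoken reply."""
    try:
        params = json.loads(parameters_json)
        location = params["location"]
        unit = params.get("format", "fahrenheit")
    except (json.JSONDecodeError, KeyError):
        return "Unable to fetch current weather."
    degrees = lookup(location, unit)  # e.g., a call to a real weather API
    symbol = "C" if unit == "celsius" else "F"
    return f"It is currently {degrees}°{symbol} in {location}."
```

Returning the tool's `fallback_content` string on a malformed call mirrors the fallback configured in the curl payload above.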
-   - [Create a configuration](https://dev.hume.ai/reference/empathic-voice-interface-evi/configs/create-config) equipped with that tool:
-
-   ```bash
-   curl -X POST https://api.hume.ai/v0/evi/configs \
-        -H "X-Hume-Api-Key: " \
-        -H "Content-Type: application/json" \
-        -d '{
-     "evi_version": "2",
-     "name": "Weather Assistant Config",
-     "voice": {
-       "provider": "HUME_AI",
-       "name": "ITO"
-     },
-     "language_model": {
-       "model_provider": "ANTHROPIC",
-       "model_resource": "claude-3-5-sonnet-20240620",
-       "temperature": 1
-     },
-     "tools": [
-       {
-         "id": ""
-       }
-     ]
-   }'
-   ```
-
-   - Add the Config ID to your environmental variables in your `.env` file:
-
-   ```bash
-   HUME_CONFIG_ID=
-   ```
+   Before running this project, you'll need to set up EVI with the ability to leverage tools or call functions. Follow these steps for authentication, creating a Tool, and adding it to a configuration.
+
+   > See our documentation on [Setup for Tool Use](https://dev.hume.ai/docs/empathic-voice-interface-evi/tool-use#setup) for no-code and full-code guides on creating a tool and adding it to a configuration.
+
+   - [Create a tool](https://dev.hume.ai/reference/empathic-voice-interface-evi/tools/create-tool) with the following payload:
+
+   ```bash
+   curl -X POST https://api.hume.ai/v0/evi/tools \
+        -H "X-Hume-Api-Key: " \
+        -H "Content-Type: application/json" \
+        -d '{
+     "name": "get_current_weather",
+     "parameters": "{ \"type\": \"object\", \"properties\": { \"location\": { \"type\": \"string\", \"description\": \"The city and state, e.g. San Francisco, CA\" }, \"format\": { \"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"], \"description\": \"The temperature unit to use. Infer this from the user's location.\" } }, \"required\": [\"location\", \"format\"] }",
+     "version_description": "Fetches current weather and uses celsius or fahrenheit based on the user's location.",
+     "description": "This tool is for getting the current weather.",
+     "fallback_content": "Unable to fetch current weather."
+   }'
+   ```
+
+   This will yield a Tool ID, which you can assign to a new EVI configuration.
+
+   - [Create a configuration](https://dev.hume.ai/reference/empathic-voice-interface-evi/configs/create-config) equipped with that tool:
+
+   ```bash
+   curl -X POST https://api.hume.ai/v0/evi/configs \
+        -H "X-Hume-Api-Key: " \
+        -H "Content-Type: application/json" \
+        -d '{
+     "evi_version": "2",
+     "name": "Weather Assistant Config",
+     "voice": {
+       "provider": "HUME_AI",
+       "name": "ITO"
+     },
+     "language_model": {
+       "model_provider": "ANTHROPIC",
+       "model_resource": "claude-3-5-sonnet-20240620",
+       "temperature": 1
+     },
+     "tools": [
+       {
+         "id": ""
+       }
+     ]
+   }'
+   ```
+
+   - Add the Config ID to your environment variables in your `.env` file:
+
+   ```bash
+   HUME_CONFIG_ID=
+   ```

6. Add the Geocoding API key to the `.env` file. You can obtain it for free from [geocode.maps.co](https://geocode.maps.co/).

-   ```bash
-   GEOCODING_API_KEY=
-   ```
+   ```bash
+   GEOCODING_API_KEY=
+   ```

7. Run the project:

-   ```shell
-   python main.py
-   ```
+   ```shell
+   python main.py
+   ```

#### What happens when run:

@@ -190,6 +190,6 @@ It does not currently support Windows.

Here's an example of how you might interact with the EVI to get weather information:

-*User: "What's the weather like in New York City?"*
+_User: "What's the weather like in New York City?"_

-*EVI: (Uses the get_current_weather tool to fetch data) "Currently in New York City, it's 72°F (22°C) and partly cloudy. The forecast calls for a high of 78°F (26°C) and a low of 65°F (18°C) today."*
\ No newline at end of file
+_EVI: (Uses the get_current_weather tool to fetch data) "Currently in New York City, it's 72°F (22°C) and partly cloudy. 
The forecast calls for a high of 78°F (26°C) and a low of 65°F (18°C) today."_ diff --git a/evi/evi-python-quickstart/README.md b/evi/evi-python-quickstart/README.md index 49917b20..b86b23ca 100644 --- a/evi/evi-python-quickstart/README.md +++ b/evi/evi-python-quickstart/README.md @@ -7,6 +7,7 @@ ## Overview + This project features a minimal implementation of Hume's [Empathic Voice Interface (EVI)](https://dev.hume.ai/docs/empathic-voice-interface-evi/overview) using Hume's [Python SDK](https://github.com/HumeAI/hume-python-sdk). It demonstrates how to authenticate, connect to, and display output from EVI in a terminal application. See the [Quickstart guide](https://dev.hume.ai/docs/empathic-voice-interface-evi/quickstart/python) for a detailed explanation of the code in this project. @@ -21,110 +22,110 @@ It does not currently support Windows. Windows developers can use our [Python Ra 1. Clone this examples repository: - ```shell - git clone https://github.com/humeai/hume-api-examples - cd hume-api-examples/evi/python/evi-python-quickstart - ``` + ```shell + git clone https://github.com/humeai/hume-api-examples + cd hume-api-examples/evi/evi-python-quickstart + ``` 2. Set up a virtual environment (Optional): - - It's recommended to isolate dependencies in a virtual environment. Choose one of the following methods: - - - **Using `conda`** (requires [Miniconda](https://docs.anaconda.com/miniconda/) or [Anaconda](https://www.anaconda.com/)): - ```bash - conda create --name evi-env python=3.11 - conda activate evi-env - ``` + It's recommended to isolate dependencies in a virtual environment. 
Choose one of the following methods: + + - **Using `conda`** (requires [Miniconda](https://docs.anaconda.com/miniconda/) or [Anaconda](https://www.anaconda.com/)): - - **Using built-in `venv`** (available with Python 3.3+): + ```bash + conda create --name evi-env python=3.11 + conda activate evi-env + ``` - ```bash - python -m venv evi-env - source evi-env/bin/activate - ``` + - **Using built-in `venv`** (available with Python 3.3+): + + ```bash + python -m venv evi-env + source evi-env/bin/activate + ``` After activating the environment, proceed with installing dependencies. 3. Set up environment variables: - This project uses `python-dotenv` to load your API credentials securely from a `.env` file. + This project uses `python-dotenv` to load your API credentials securely from a `.env` file. - 1. Install the package: + 1. Install the package: - ```bash - pip install python-dotenv - ``` + ```bash + pip install python-dotenv + ``` - 2. Copy the `.env.example` file to use as a template: + 2. Copy the `.env.example` file to use as a template: - ```shell - cp .env.example .env - ``` + ```shell + cp .env.example .env + ``` - 2. Place your API keys inside: + 3. Place your API keys inside: - - Visit the [API keys page](https://platform.hume.ai/settings/keys) on the Hume Platform to retrieve your API keys. See our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). - - Upon doing so, the `.env` file becomes a persistent local store of your API key, Secret key, and EVI config ID. The `.gitignore` file contains local env file paths so that they are not committed to GitHub. + - Visit the [API keys page](https://platform.hume.ai/settings/keys) on the Hume Platform to retrieve your API keys. See our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). + - Upon doing so, the `.env` file becomes a persistent local store of your API key, Secret key, and EVI config ID. 
The `.gitignore` file contains local env file paths so that they are not committed to GitHub.
+   4. Create an EVI configuration and place its config ID inside:

-   3. Create an EVI configuration and place its config ID inside:
-
-      - See our [configuration guide](https://dev.hume.ai/docs/empathic-voice-interface-evi/configuration/build-a-configuration).
+      - See our [configuration guide](https://dev.hume.ai/docs/empathic-voice-interface-evi/configuration/build-a-configuration).

-   (Note: `.env` is a hidden file so on Mac you would need to hit `COMMAND-SHIFT .` to make it viewable in the finder).
+   (Note: `.env` is a hidden file, so on macOS you need to press `COMMAND-SHIFT .` to make it visible in Finder.)

4. Install the required packages and system dependencies:

-   The `hume` package contains Hume's Python SDK, including the asynchronous WebSocket infrastructure for using EVI. To install it, run:
+   The `hume` package contains Hume's Python SDK, including the asynchronous WebSocket infrastructure for using EVI. To install it, run:

-   ```bash
-   pip install "hume[microphone]"
-   ```
+   ```bash
+   pip install "hume[microphone]"
+   ```

-   For audio playback and processing, additional system-level dependencies are required. Below are download instructions for each supported operating system:
+   For audio playback and processing, additional system-level dependencies are required. Below are download instructions for each supported operating system:

-   #### macOS
+   #### macOS

-   To ensure audio playback functionality, you will need to install `ffmpeg`, a powerful multimedia framework that handles audio and video processing.
+   To ensure audio playback functionality, you will need to install `ffmpeg`, a powerful multimedia framework that handles audio and video processing.

-   One of the most common ways to install `ffmpeg` on macOS is by using [Homebrew](https://brew.sh/). 
Homebrew is a popular package manager for macOS that simplifies the installation of software by automating the process of downloading, compiling, and setting up packages. + One of the most common ways to install `ffmpeg` on macOS is by using [Homebrew](https://brew.sh/). Homebrew is a popular package manager for macOS that simplifies the installation of software by automating the process of downloading, compiling, and setting up packages. - To install `ffmpeg` using Homebrew, follow these steps: + To install `ffmpeg` using Homebrew, follow these steps: - 1. Install Homebrew onto your system according to the instructions on the [Homebrew website](https://brew.sh/). + 1. Install Homebrew onto your system according to the instructions on the [Homebrew website](https://brew.sh/). - 2. Once Homebrew is installed, you can install `ffmpeg` with: - ```bash - brew install ffmpeg - ``` + 2. Once Homebrew is installed, you can install `ffmpeg` with: + ```bash + brew install ffmpeg + ``` - If you prefer not to use Homebrew, you can download a pre-built `ffmpeg` binary directly from the [FFmpeg website](https://ffmpeg.org/download.html) or use other package managers like [MacPorts](https://www.macports.org/). + If you prefer not to use Homebrew, you can download a pre-built `ffmpeg` binary directly from the [FFmpeg website](https://ffmpeg.org/download.html) or use other package managers like [MacPorts](https://www.macports.org/). - #### Linux + #### Linux - On Linux systems, you will need to install a few additional packages to support audio input/output and playback: + On Linux systems, you will need to install a few additional packages to support audio input/output and playback: - - `libasound2-dev`: This package contains development files for the ALSA (Advanced Linux Sound Architecture) sound system. - - `libportaudio2`: PortAudio is a cross-platform audio I/O library that is essential for handling audio streams. - - `ffmpeg`: Required for processing audio and video files. 
+ - `libasound2-dev`: This package contains development files for the ALSA (Advanced Linux Sound Architecture) sound system. + - `libportaudio2`: PortAudio is a cross-platform audio I/O library that is essential for handling audio streams. + - `ffmpeg`: Required for processing audio and video files. - To install these dependencies, use the following commands: + To install these dependencies, use the following commands: - ```bash - sudo apt-get --yes update - sudo apt-get --yes install libasound2-dev libportaudio2 ffmpeg - ``` + ```bash + sudo apt-get --yes update + sudo apt-get --yes install libasound2-dev libportaudio2 ffmpeg + ``` - #### Windows + #### Windows - Not yet supported. + Not yet supported. ## Run the project Below are the steps to run the project: + 1. Create a virtual environment using venv, conda or other method. 2. Activate the virtual environment. 3. Install the required packages and system dependencies. 4. Execute the script by running `python quickstart.py`. -5. Terminate the script by pressing `Ctrl+C`. \ No newline at end of file +5. Terminate the script by pressing `Ctrl+C`. diff --git a/evi/evi-python-raw-api/README.md b/evi/evi-python-raw-api/README.md index 06cb52dc..3e28d4b2 100644 --- a/evi/evi-python-raw-api/README.md +++ b/evi/evi-python-raw-api/README.md @@ -14,59 +14,59 @@ This project features a minimal implementation of Hume's [Empathic Voice Interfa 1. Clone this examples repository: - ```shell - git clone https://github.com/humeai/hume-api-examples - cd hume-api-examples/evi/python/evi-python-raw-api - ``` + ```shell + git clone https://github.com/humeai/hume-api-examples + cd hume-api-examples/evi/evi-python-raw-api + ``` 2. Set up a virtual environment (Optional): - - It's recommended to isolate dependencies in a virtual environment. 
Choose one of the following methods: - - - **Using `conda`** (requires [Miniconda](https://docs.anaconda.com/miniconda/) or [Anaconda](https://www.anaconda.com/)): - ```bash - conda create --name evi-env python=3.11 - conda activate evi-env - ``` + It's recommended to isolate dependencies in a virtual environment. Choose one of the following methods: - - **Using built-in `venv`** (available with Python 3.3+): + - **Using `conda`** (requires [Miniconda](https://docs.anaconda.com/miniconda/) or [Anaconda](https://www.anaconda.com/)): - ```bash - python -m venv evi-env - source evi-env/bin/activate - ``` + ```bash + conda create --name evi-env python=3.11 + conda activate evi-env + ``` + + - **Using built-in `venv`** (available with Python 3.3+): + + ```bash + python -m venv evi-env + source evi-env/bin/activate + ``` After activating the environment, proceed with installing dependencies. - + 3. Install the required dependencies: - #### Mac + #### Mac - ```bash - pip install -r requirements_mac.txt - ``` + ```bash + pip install -r requirements_mac.txt + ``` - #### Linux + #### Linux - ```bash - pip install -r requirements_linux.txt - ``` + ```bash + pip install -r requirements_linux.txt + ``` 4. Set up environment variables: - 1. Copy the `.env.example` file to use as a template: + 1. Copy the `.env.example` file to use as a template: - ```shell - cp .env.example .env - ``` + ```shell + cp .env.example .env + ``` - 2. Place your API keys inside: + 2. Place your API keys inside: - - Visit the [API keys page](https://platform.hume.ai/settings/keys) on the Hume Platform to retrieve your API keys. See our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key). - - Upon doing so, the `.env` file becomes a persistent local store of your API key, Secret key, and EVI config ID. The `.gitignore` file contains local env file paths so that they are not committed to GitHub. 
+   - Visit the [API keys page](https://platform.hume.ai/settings/keys) on the Hume Platform to retrieve your API keys. See our documentation on [getting your api keys](https://dev.hume.ai/docs/introduction/api-key).
+   - Upon doing so, the `.env` file becomes a persistent local store of your API key, Secret key, and EVI config ID. The `.gitignore` file contains local env file paths so that they are not committed to GitHub.

-   (Note: `.env` is a hidden file so on Mac you would need to hit `COMMAND-SHIFT .` to make it viewable in the finder).
+   (Note: `.env` is a hidden file, so on macOS you need to press `COMMAND-SHIFT .` to make it visible in Finder.)

## Run the project

diff --git a/evi/evi-python-webhooks/README.md b/evi/evi-python-webhooks/README.md
index 6497ad46..03ed1259 100644
--- a/evi/evi-python-webhooks/README.md
+++ b/evi/evi-python-webhooks/README.md
@@ -37,50 +37,50 @@ If you need to update or install Poetry, visit the [official Poetry website](htt

1. Clone this examples repository:

-   ```shell
-   git clone https://github.com/humeai/hume-api-examples
-   cd hume-api-examples/evi/python/evi-python-webhooks
-   ```
+   ```shell
+   git clone https://github.com/humeai/hume-api-examples
+   cd hume-api-examples/evi/evi-python-webhooks
+   ```

2. Set up API credentials:

-   - **Obtain Your API Key**: Follow the instructions in the [Hume documentation](https://dev.hume.ai/docs/introduction/api-key) to acquire your API key.
-   - **Create a `.env` File**: In the project's root directory, create a `.env` file if it doesn't exist. Add your API key:
+   - **Obtain Your API Key**: Follow the instructions in the [Hume documentation](https://dev.hume.ai/docs/introduction/api-key) to acquire your API key.
+   - **Create a `.env` File**: In the project's root directory, create a `.env` file if it doesn't exist. Add your API key:

-     ```sh
-     HUME_API_KEY=""
-     ```
+     ```sh
+     HUME_API_KEY=""
+     ```

-   Refer to `.env.example` as a template.
+   Refer to `.env.example` as a template.

3. 
Install the required dependencies with Poetry: - ```sh - poetry install - ``` + ```sh + poetry install + ``` ## Usage 1. Running the server: - Start the FastAPI server by running the `app.py` file: + Start the FastAPI server by running the `app.py` file: - ```sh - python app.py - ``` + ```sh + python app.py + ``` 2. Testing the webhook: - Use [ngrok](https://ngrok.com/) or a similar tool to expose your local server to the internet: + Use [ngrok](https://ngrok.com/) or a similar tool to expose your local server to the internet: - ```sh - ngrok http 5000 - ``` + ```sh + ngrok http 5000 + ``` 3. Copy the public URL generated by ngrok and update your webhook configuration in the Hume Config: - - **Webhook URL**: `/hume-webhook` - - **Events**: Subscribe to `chat_started` and `chat_ended`. + - **Webhook URL**: `/hume-webhook` + - **Events**: Subscribe to `chat_started` and `chat_ended`. ## How It Works diff --git a/evi/evi-react-native/README.md b/evi/evi-react-native/README.md index 2ce0c358..33604fa4 100644 --- a/evi/evi-react-native/README.md +++ b/evi/evi-react-native/README.md @@ -15,7 +15,7 @@ This project features a sample implementation of Hume's [Empathic Voice Interfac ```shell git clone https://github.com/humeai/hume-api-examples - cd hume-api-examples/evi/react-native/evi-react-native + cd hume-api-examples/evi/evi-react-native ``` 2. Set up API credentials: diff --git a/evi/evi-vue-widget/README.md b/evi/evi-vue-widget/README.md index fbc70feb..24892e10 100644 --- a/evi/evi-vue-widget/README.md +++ b/evi/evi-vue-widget/README.md @@ -13,7 +13,7 @@ This project features a sample implementation of Hume's [Empathic Voice Interfac ```shell git clone https://github.com/humeai/hume-api-examples - cd hume-api-examples/evi/vue/evi-vue-widget + cd hume-api-examples/evi/evi-vue-widget ``` 2. 
**Set up API credentials** diff --git a/tts/tts-python-quickstart/README.md b/tts/tts-python-quickstart/README.md index 2c0dd4c3..1ff94788 100644 --- a/tts/tts-python-quickstart/README.md +++ b/tts/tts-python-quickstart/README.md @@ -20,7 +20,7 @@ See the [Quickstart guide](https://dev.hume.ai/docs/text-to-speech-tts/quickstar ```shell git clone https://github.com/humeai/hume-api-examples - cd hume-api-examples/tts/python/tts-python-quickstart + cd hume-api-examples/tts/tts-python-quickstart ``` 2. Set up the environment: diff --git a/tts/tts-typescript-quickstart/README.md b/tts/tts-typescript-quickstart/README.md index d59402d6..9bc86208 100644 --- a/tts/tts-typescript-quickstart/README.md +++ b/tts/tts-typescript-quickstart/README.md @@ -20,7 +20,7 @@ See the [Quickstart guide](https://dev.hume.ai/docs/text-to-speech-tts/quickstar ```shell git clone https://github.com/humeai/hume-api-examples - cd hume-api-examples/tts/typescript/tts-typescript-quickstart + cd hume-api-examples/tts/tts-typescript-quickstart ``` 2. Install dependencies: