CrisperWhisper: Accurate Timestamps on Verbatim Speech Transcriptions with improved hallucination robustness #2341
LaurinmyReha started this conversation in Show and tell
CrisperWhisper
Check out the original repo here:
https://github.com/nyrahealth/CrisperWhisper
and the paper:
https://www.arxiv.org/abs/2408.16589
CrisperWhisper is an advanced variant of OpenAI's Whisper, designed for fast, precise, and verbatim speech recognition with accurate (crisp) word-level timestamps. Unlike the original Whisper, which tends to omit disfluencies and follows more of an intended transcription style, CrisperWhisper aims to transcribe every spoken word exactly as it is, including fillers, pauses, stutters, and false starts.
Key Features
Table of Contents
Highlights
1. Performance Overview
1.1 Qualitative Performance Overview
1.2 Quantitative Performance Overview
Transcription Performance
CrisperWhisper significantly outperforms Whisper Large v3, especially on datasets that have a more verbatim transcription style in the ground truth, such as AMI and TED-LIUM.
Segmentation Performance
CrisperWhisper demonstrates superior segmentation performance. This performance gap is especially pronounced around disfluencies and pauses.
The following table uses the metrics defined in the paper, with a collar of 50 ms. Alignment heads for each model were selected using the method described in the How section, and for each model the configuration attaining the highest F1 score across varying numbers of heads was chosen.
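For intuition, here is a minimal sketch of a collar-based F1 computation for word-level timestamps. It is an illustrative approximation, not necessarily the exact metric definition from the paper: a predicted word counts as a true positive if both its start and end lie within the collar of a matching ground-truth word.

```python
def f1_with_collar(predicted, reference, collar=0.05):
    """predicted/reference: lists of (word, start, end) tuples, times in seconds."""
    matched = set()          # indices of reference words already matched
    true_positives = 0
    for word, start, end in predicted:
        for idx, (ref_word, ref_start, ref_end) in enumerate(reference):
            if idx in matched:
                continue
            if (word == ref_word
                    and abs(start - ref_start) <= collar
                    and abs(end - ref_end) <= collar):
                matched.add(idx)
                true_positives += 1
                break
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```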
More plots and ablations can be found in the run_experiments/plots folder.
2. Setup ⚙️
2.1 Prerequisites
2.2 Environment Setup
Clone the Repository:
git clone https://github.com/nyrahealth/CrisperWhisper.git
cd CrisperWhisper
Create Python Environment:
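(For example, using conda. The environment name crisperWhisper matches the one referenced later in this guide; the Python version is an assumption.)

```bash
conda create --name crisperWhisper python=3.10
conda activate crisperWhisper
```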
Install Dependencies:
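(Assuming the repository ships a requirements.txt; check the repo for the actual dependency file.)

```bash
pip install -r requirements.txt
```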
Additional Installations:
Follow OpenAI's instructions to install additional dependencies like ffmpeg and rust; see the Whisper Setup guide.
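(For example, to install ffmpeg on common platforms; per OpenAI's Whisper instructions, rust is only needed if tiktoken does not provide a prebuilt wheel for your platform.)

```bash
# Debian/Ubuntu
sudo apt update && sudo apt install ffmpeg
# macOS with Homebrew
brew install ffmpeg
```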
3. Usage
Here's how to use CrisperWhisper in your Python scripts:
3.1 Usage with 🤗 transformers
First, make sure that you have a Hugging Face account and have accepted the model's license. Grab your Hugging Face access token and log in so you are able to download the model.
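Below is a minimal sketch using the 🤗 transformers pipeline, assuming the model is published on the Hugging Face Hub as nyrahealth/CrisperWhisper and that you are already logged in (e.g. via huggingface-cli login):

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "nyrahealth/CrisperWhisper"  # assumed Hub id; adjust if needed

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
).to(device)
processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    chunk_length_s=30,
    return_timestamps="word",   # request word-level timestamps
    torch_dtype=torch_dtype,
    device=device,
)

result = pipe("audio.wav")
print(result["text"])
print(result["chunks"])  # per-word timestamps
```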
3.2 Usage with faster whisper
We also provide a converted model compatible with faster-whisper. However, due to the different implementation of timestamp calculation in faster-whisper, or more precisely in CTranslate2, the timestamp accuracy cannot be guaranteed.
As in section 3.1, make sure you have a Hugging Face account, have accepted the model's license, and are logged in with your access token so you can download the model.
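A minimal sketch with the faster-whisper API; the converted model id nyrahealth/faster_CrisperWhisper is an assumption, so check the Hub or the repository for the actual name:

```python
from faster_whisper import WhisperModel

# Assumed Hub id of the CTranslate2-converted model; adjust if needed.
model = WhisperModel("nyrahealth/faster_CrisperWhisper", device="cuda", compute_type="float16")

segments, info = model.transcribe("audio.wav", word_timestamps=True)
for segment in segments:
    for word in segment.words:
        print(f"[{word.start:.2f}s -> {word.end:.2f}s] {word.word}")
```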
3.3 Commandline usage
As above, make sure you have a Hugging Face account, have accepted the model's license, and are logged in with your access token so you can download the model.
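(One way to log in from a terminal is the official Hugging Face CLI:)

```bash
pip install huggingface_hub
huggingface-cli login
```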
Afterwards, to transcribe an audio file, use the following command:
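(The script name and flag below are assumptions for illustration; check the repository's README for the exact invocation.)

```bash
python transcribe.py --f audio.wav
```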
4. Running the Streamlit App
To use the CrisperWhisper model with a user-friendly interface, you can run the provided Streamlit app. This app allows you to record or upload audio files for transcription and view the results with accurate word-level timestamps.
4.1 Prerequisites
Make sure you have followed the Setup ⚙️ instructions above and have the crisperWhisper environment activated.
4.2 Steps to Run the Streamlit App
Activate the Conda Environment
Ensure you are in the crisperWhisper environment:
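(Using conda, matching the environment created during setup:)

```bash
conda activate crisperWhisper
```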
Navigate to the App Directory
Change directory to where the app.py script is located:
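(Assuming app.py sits in the root of your CrisperWhisper checkout; adjust the path if the repository layout differs.)

```bash
cd CrisperWhisper
```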
Run the Streamlit App
Use the following command to run the app. Make sure to replace /path/to/your/model with the actual path to your CrisperWhisper model directory.
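(The --model_id argument is an assumption about app.py's interface; check the script for the actual flag name. Arguments after -- are passed through to the script by Streamlit.)

```bash
streamlit run app.py -- --model_id /path/to/your/model
```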
Access the App
After running the command, the Streamlit server will start, and you can access the app in your web browser, by default at http://localhost:8501.
4.3 Features of the App
5. How?
We employ the popular Dynamic Time Warping (DTW) algorithm on the Whisper cross-attention scores, as detailed in our paper, to derive word-level timestamps. By leveraging our retokenization process, this method allows us to consistently detect pauses. Given that the accuracy of the timestamps heavily depends on the DTW cost matrix and, consequently, on the quality of the cross-attentions, we developed a specialized loss function for the selected alignment heads to enhance precision.
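For intuition, here is a generic sketch of the DTW step just described, over a token-by-frame cost matrix. It illustrates the general technique rather than the repository's implementation; in practice the cost would be derived from the cross-attention scores of the selected alignment heads (e.g. 1 minus the normalized attention), and frame indices map to time at roughly 20 ms per encoder frame for Whisper.

```python
import numpy as np

def dtw_token_alignment(cost):
    """
    cost: (num_tokens, num_frames) matrix where a low value means the token
    attends strongly to that audio frame.
    Returns a dict mapping each token index to its (first_frame, last_frame) span.
    """
    T, F = cost.shape
    acc = np.full((T + 1, F + 1), np.inf)
    acc[0, 0] = 0.0
    # Forward pass: accumulate the cheapest monotonic path cost.
    for t in range(1, T + 1):
        for f in range(1, F + 1):
            acc[t, f] = cost[t - 1, f - 1] + min(
                acc[t - 1, f - 1],  # advance token and frame
                acc[t - 1, f],      # advance token only
                acc[t, f - 1],      # advance frame only
            )
    # Backtrack the cheapest path from (T, F) to (0, 0).
    path, t, f = [], T, F
    while t > 0 and f > 0:
        path.append((t - 1, f - 1))
        step = int(np.argmin([acc[t - 1, f - 1], acc[t - 1, f], acc[t, f - 1]]))
        if step == 0:
            t, f = t - 1, f - 1
        elif step == 1:
            t -= 1
        else:
            f -= 1
    path.reverse()
    # Collect the first and last frame aligned to each token.
    spans = {}
    for tok, frame in path:
        lo, hi = spans.get(tok, (frame, frame))
        spans[tok] = (min(lo, frame), max(hi, frame))
    return spans
```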
Although this loss function was not included in the original paper due to time constraints preventing the completion of experiments and training before the submission deadline, it has been used to train our publicly available models.
Key features of this loss are as follows:
1. Data Preparation
2. Token-Word Alignment
3. Ground Truth Cross-Attention
4. Loss Calculation: 1 - cosine similarity between the predicted cross-attention vector (when predicting a token) and the ground truth cross-attention vector (see the sketch after this list).
5. Alignment Head Selection
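A minimal sketch of such a loss term, assuming a hypothetical ground-truth attention matrix built from the token-word alignment (the function and tensor names are illustrative, not the repository's API):

```python
import torch
import torch.nn.functional as F

def alignment_head_loss(predicted_attn: torch.Tensor, target_attn: torch.Tensor) -> torch.Tensor:
    """
    predicted_attn: (num_tokens, num_frames) cross-attention of one alignment head.
    target_attn:    (num_tokens, num_frames) ground-truth attention constructed from
                    the token-word alignment (e.g. mass placed on the frames of the
                    word currently being predicted).
    Returns the mean of (1 - cosine similarity) over the predicted tokens.
    """
    cos = F.cosine_similarity(predicted_attn, target_attn, dim=-1)
    return (1.0 - cos).mean()

# Example usage with random tensors of matching shape:
pred = torch.rand(12, 1500)
target = torch.rand(12, 1500)
print(alignment_head_loss(pred, target))
```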