Troubleshooting of whisperX
Revision as of 19:03, 4 December 2024
whisperX is an enhanced version of OpenAI's Whisper, offering fast automatic speech recognition with word-level timestamps and speaker diarization. It uses the faster-whisper backend and can run the large-v2 model on less than 8GB of GPU memory. whisperX also includes voice activity detection (VAD) preprocessing, reducing hallucinations and supporting batch processing.
whisperX Troubleshooting Guide
Error: HF_TOKEN environment variable is not set
Problematic command:
whisperx input.mp3 --model large-v3 --language zh --diarize --batch_size 24 --no_align --chunk_size 10 --hf_token <token>
When running this command, you encounter the error:
Error: HF_TOKEN environment variable is not set
Please run: export HF_TOKEN='your Hugging Face token'
Solution:
- Log in to Hugging Face and open https://huggingface.co/settings/tokens to obtain an access token
- Return to the terminal and activate the virtual environment:
conda activate whisperx
- Export the token:
export HF_TOKEN='your Hugging Face token'
- Re-run the whisperx command
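The steps above can be sketched as a small pre-flight check that fails early when the token is missing; a minimal illustration (the file name and token value are placeholders, and the command is only built, not executed):

```python
import os
import shlex

def build_whisperx_command(audio_path, hf_token=None):
    """Return the whisperx argv, failing early if no Hugging Face token is set.

    hf_token falls back to the HF_TOKEN environment variable, mirroring the
    error message whisperX prints when the variable is unset.
    """
    token = hf_token or os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "HF_TOKEN environment variable is not set. "
            "Run: export HF_TOKEN='your Hugging Face token'"
        )
    return [
        "whisperx", audio_path,
        "--model", "large-v3",
        "--language", "zh",
        "--diarize",
        "--hf_token", token,
    ]

# Example: print the command that would be run (placeholder token)
os.environ.setdefault("HF_TOKEN", "hf_example_token")
print(shlex.join(build_whisperx_command("input.mp3")))
```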
Repeated Same Dialog Issue
Problematic command:
whisperx input.wav --model large-v2 --diarize --highlight_words True
Fixed command:
whisperx input.mp3 --model large-v3 --language zh --diarize --batch_size 24 --no_align --chunk_size 10
Disabling word alignment (--no_align) and using a smaller --chunk_size reduces the repeated, hallucinated lines in the output.
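If a transcript with repeated lines has already been produced, back-to-back duplicates can also be dropped in post-processing. This is an illustrative sketch, not part of whisperX; it assumes whisperX-style segments, i.e. a list of dicts with a "text" key:

```python
def drop_repeated_segments(segments):
    """Collapse runs of consecutive segments whose text is identical.

    `segments` is assumed to be whisperX-style output: a list of dicts
    with at least a "text" key. Only back-to-back duplicates are removed,
    so legitimately repeated lines elsewhere in the dialog survive.
    """
    cleaned = []
    for seg in segments:
        if cleaned and cleaned[-1]["text"].strip() == seg["text"].strip():
            continue  # skip the hallucinated repeat
        cleaned.append(seg)
    return cleaned

segments = [
    {"text": "你好"},
    {"text": "你好"},   # repeated dialog
    {"text": "再见"},
]
print([s["text"] for s in drop_repeated_segments(segments)])  # ['你好', '再见']
```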
ImportError: libcudnn.so.9: cannot open shared object file: No such file or directory
When running:
$ whisperx input.mp3 --model large-v2 --diarize --highlight_words True
you encounter a long error trace ending with:
ImportError: libcudnn.so.9: cannot open shared object file: No such file or directory
This error indicates that the libcudnn.so.9 library is missing or not accessible in your system’s library path. This library is part of NVIDIA’s cuDNN (CUDA Deep Neural Network) package, which is essential for GPU-accelerated deep learning applications.
Possible Causes:
- cuDNN Not Installed: The cuDNN library might not be installed on your system.
- Version Mismatch: The installed cuDNN version may not match the version required by your application.
- Library Path Issues: The system might not be able to locate the cuDNN library due to incorrect environment variables.
Steps to Resolve:
- Verify cuDNN Installation:
Check if cuDNN is installed by listing the contents of the CUDA library directory:
ls /usr/local/cuda/lib64 | grep libcudnn
If the libcudnn.so.9 file is not present, proceed to install or update cuDNN.
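The lookup above can be extended to a few common install locations plus LD_LIBRARY_PATH; a hedged sketch (the default directory list is an assumption about typical Ubuntu/CUDA layouts, and find_libcudnn is an illustrative helper):

```python
import glob
import os

def find_libcudnn(search_dirs=None):
    """Search candidate library directories for libcudnn shared objects.

    The default directories are common Linux install locations; adjust
    them for your system. Returns a sorted list of matching paths.
    """
    if search_dirs is None:
        search_dirs = [
            "/usr/local/cuda/lib64",
            "/usr/lib/x86_64-linux-gnu",
        ] + os.environ.get("LD_LIBRARY_PATH", "").split(":")
    hits = []
    for d in search_dirs:
        if d:
            hits.extend(glob.glob(os.path.join(d, "libcudnn.so*")))
    return sorted(set(hits))

print(find_libcudnn() or "libcudnn not found - install or update cuDNN")
```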
- Install or Update cuDNN:
  - Download the appropriate cuDNN version compatible with your CUDA installation from NVIDIA’s cuDNN download page, selecting:
    - Operating System: Linux
    - Architecture: x86_64 (check with uname -a)
    - Distribution: Ubuntu
    - Version: 22.04 (check with cat /etc/lsb-release)
    - Installer Type: deb (network)
    - CUDA version: 12 (check with nvcc --version)
  - Then install: sudo apt-get -y install cudnn-cuda-12
- Follow the installation instructions provided by NVIDIA to install or update cuDNN.
Installation Instructions:
# Download and install CUDA keyring
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
# Install cuDNN (choose one of the following based on your CUDA version)
# For CUDA 11:
sudo apt-get -y install cudnn-cuda-11
# For CUDA 12:
sudo apt-get -y install cudnn-cuda-12
- Set Environment Variables:
Ensure that the CUDA and cuDNN libraries are included in your system’s library path.
Add the following lines to your ~/.bashrc or ~/.zshrc file:
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda/bin:$PATH
Apply the changes by sourcing the file:
source ~/.bashrc
This step ensures that the system can locate the CUDA and cuDNN libraries during execution.
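To confirm the change took effect in the current shell, the LD_LIBRARY_PATH entries can be inspected; a minimal sketch (on_ld_library_path is an illustrative helper, shown here against a fake environment dict):

```python
import os

def on_ld_library_path(directory, env=None):
    """Return True if `directory` appears on LD_LIBRARY_PATH.

    `env` defaults to os.environ; paths are normalized so a trailing
    slash does not cause a false negative.
    """
    env = os.environ if env is None else env
    entries = env.get("LD_LIBRARY_PATH", "").split(":")
    target = os.path.normpath(directory)
    return any(os.path.normpath(e) == target for e in entries if e)

# Example against a fake environment (a real check would pass os.environ)
fake_env = {"LD_LIBRARY_PATH": "/usr/local/cuda/lib64:/opt/lib"}
print(on_ld_library_path("/usr/local/cuda/lib64/", fake_env))  # True
```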
- Verify Installation:
After installation, verify that the system recognizes the cuDNN version:
python -c "import torch; print(torch.backends.cudnn.version())"
This command should display the installed cuDNN version, confirming that PyTorch can access it.
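The returned value is an integer that packs major, minor, and patch numbers. A small helper to decode it for readability (the encodings below match cuDNN's cudnn_version.h, where the scheme changed between cuDNN 8 and 9; treat this as an assumption to verify against your installed headers):

```python
def decode_cudnn_version(v):
    """Split the integer reported by torch.backends.cudnn.version().

    cuDNN 8 and earlier encode MAJOR*1000 + MINOR*100 + PATCH
    (e.g. 8902 -> 8.9.2); cuDNN 9+ encodes MAJOR*10000 + MINOR*100 + PATCH
    (e.g. 90100 -> 9.1.0).
    """
    if v >= 90000:
        return (v // 10000, v % 10000 // 100, v % 100)
    return (v // 1000, v % 1000 // 100, v % 100)

print(decode_cudnn_version(90100))  # (9, 1, 0)
print(decode_cudnn_version(8902))   # (8, 9, 2)
```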
Additional Considerations:
- Compatibility: Ensure that the versions of CUDA, cuDNN, and PyTorch are compatible with each other. Refer to the PyTorch documentation for version compatibility details.
- Virtual Environments: If you’re using a virtual environment, make sure it has access to the system’s CUDA and cuDNN installations. You might need to install CUDA and cuDNN within the virtual environment or ensure that the environment variables are correctly set.