Troubleshooting of whisperX
== whisperX Troubleshooting Guide ==

=== Error: HF_TOKEN environment variable is not set ===

Problematic command:

<pre>
whisperx input.mp3 --model large-v3 --language zh --diarize --batch_size 24 --no_align --chunk_size 10 --hf_token <token>
</pre>

When running this command, you encounter the error:

<pre>
Error: HF_TOKEN environment variable is not set
Please run: export HF_TOKEN='your Hugging Face token'
</pre>

Solution:

# Log in to Hugging Face and create an access token at https://huggingface.co/settings/tokens
# Return to the terminal and activate the virtual environment: <code>conda activate whisperx</code>
# Run <code>export HF_TOKEN='your Hugging Face token'</code>, substituting your actual token
# Re-run the whisperx command

=== Repeated Same Dialog Issue ===

Symptom: the output transcript repeats the same line of dialog over and over.

Problematic command:

<pre>whisperx input.wav --model large-v2 --diarize --highlight_words True</pre>

Fixed command (specifying the language explicitly and using a smaller <code>--chunk_size</code> typically breaks this kind of repetition loop):

<pre>whisperx input.mp3 --model large-v3 --language zh --diarize --batch_size 24 --no_align --chunk_size 10</pre>

=== ImportError: libcudnn.so.9: cannot open shared object file: No such file or directory ===

When running:

<pre lang="bash">$ whisperx input.mp3 --model large-v2 --diarize --highlight_words True</pre>

You encounter a long error trace ending with:

<pre>ImportError: libcudnn.so.9: cannot open shared object file: No such file or directory</pre>

This error indicates that the <code>libcudnn.so.9</code> library is missing or not accessible in your system's library path. This library is part of NVIDIA's cuDNN (CUDA Deep Neural Network) package, which is essential for GPU-accelerated deep learning applications.

'''Possible Causes:'''
# '''cuDNN Not Installed:''' The cuDNN library might not be installed on your system.
# '''Version Mismatch:''' The installed cuDNN version may not match the version required by your application.
# '''Library Path Issues:''' The system might not be able to locate the cuDNN library due to incorrect environment variables.
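A quick way to narrow down which of these causes applies is to ask the dynamic linker whether it knows about any <code>libcudnn</code> at all. This is a sketch that assumes a glibc-based Linux distribution (where <code>ldconfig</code> is available); it prints a single status line either way:

```shell
# Ask the dynamic linker's cache whether any libcudnn is registered.
# Prints one "cuDNN status:" line in either case, so it is easy to script around.
if ldconfig -p 2>/dev/null | grep -q libcudnn; then
  echo "cuDNN status: found in linker cache"
  ldconfig -p | grep libcudnn   # show which versions/paths are visible
else
  echo "cuDNN status: not found in linker cache"
fi
```

If the library is installed but not listed here, the problem is usually a library-path issue rather than a missing package.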
'''Steps to Resolve:'''

<ol style="list-style-type: decimal;">
<li>'''Verify cuDNN Installation:'''
<ul>
<li><p>Check whether cuDNN is installed by listing the contents of the CUDA library directory:</p>
<pre lang="bash">ls /usr/local/cuda/lib64 | grep libcudnn</pre></li>
<li><p>If the <code>libcudnn.so.9</code> file is not present, proceed to install or update cuDNN.</p></li>
</ul>
</li>
<li>'''Install or Update cuDNN:'''
<ul>
<li>Download the cuDNN version compatible with your CUDA installation from NVIDIA's [https://developer.nvidia.com/cudnn cuDNN download page]. Example selections:
<ul>
<li>Operating System: Linux</li>
<li>Architecture: x86_64 (check with <code>uname -a</code>)</li>
<li>Distribution: Ubuntu</li>
<li>Version: 22.04 (check with <code>cat /etc/lsb-release</code>)</li>
<li>Installer Type: deb (network)</li>
<li>CUDA Version: 12 (check with <code>nvcc --version</code>)</li>
<li>Install command: <code>sudo apt-get -y install cudnn-cuda-12</code></li>
</ul>
</li>
<li>Follow the installation instructions provided by NVIDIA to install or update cuDNN.</li>
</ul>
</li>
</ol>

Installation instructions:

<pre lang="bash"># Download and install the CUDA keyring
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update

# Install cuDNN (choose one of the following based on your CUDA version)
# For CUDA 11:
sudo apt-get -y install cudnn-cuda-11
# For CUDA 12:
sudo apt-get -y install cudnn-cuda-12</pre>

<ol start="3" style="list-style-type: decimal;">
<li>'''Set Environment Variables:'''
<ul>
<li><p>Ensure that the CUDA and cuDNN libraries are included in your system's library path.</p></li>
<li><p>Add the following lines to your <code>~/.bashrc</code> or <code>~/.zshrc</code> file:</p>
<pre lang="bash">export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda/bin:$PATH</pre></li>
<li><p>Apply the changes by sourcing the file:</p>
<pre lang="bash">source ~/.bashrc</pre></li>
<li><p>This step ensures that the system can locate the CUDA and cuDNN libraries at run time.</p></li>
</ul>
</li>
<li>'''Verify Installation:'''
<ul>
<li><p>After installation, verify that PyTorch recognizes the cuDNN version:</p>
<pre lang="bash">python -c "import torch; print(torch.backends.cudnn.version())"</pre></li>
<li><p>This command should print the installed cuDNN version, confirming that PyTorch can access it.</p></li>
</ul>
</li>
</ol>

'''Additional Considerations:'''
* '''Compatibility:''' Ensure that your CUDA, cuDNN, and PyTorch versions are mutually compatible. Refer to the [https://pytorch.org/get-started/previous-versions/ PyTorch documentation] for version compatibility details.
* '''Virtual Environments:''' If you are using a virtual environment, make sure it has access to the system's CUDA and cuDNN installations. You may need to install CUDA and cuDNN within the virtual environment or ensure that the environment variables are set correctly.

=== WhisperX Audio Transcription Commands for Multiple Languages ===

Standard command (English transcription output):

<pre>
whisperx /path/to/audio/file.wav \
  --model large-v3 \
  --language en \
  --diarize \
  --batch_size 24 \
  --no_align \
  --chunk_size 10 \
  --hf_token your_huggingface_token \
  --output_dir /path/to/output/directory \
  --output_format all
</pre>

To produce Thai output instead, change the following parameter:

Method 1: Set the language to Thai with {{kbd | key=<nowiki>--language th</nowiki>}}

<pre>
whisperx /path/to/audio/file.wav \
  --model large-v3 \
  --language th \
  --diarize \
  --batch_size 24 \
  --no_align \
  --chunk_size 10 \
  --hf_token your_huggingface_token \
  --output_dir /path/to/output/directory \
  --output_format all
</pre>

Method 2: Auto-detect the language (remove the {{kbd | key=<nowiki>--language</nowiki>}} parameter)

<pre>
whisperx /path/to/audio/file.wav \
  --model large-v3 \
  --diarize \
  --batch_size 24 \
  --no_align \
  --chunk_size 10 \
  --hf_token your_huggingface_token \
  --output_dir /path/to/output/directory \
  --output_format all
</pre>

Common language code reference:
* th = Thai
* zh = Chinese
* en = English
* ja = Japanese
* ko = Korean
* es = Spanish
* fr = French

The full list of supported language codes can be found in the [https://github.com/openai/whisper/blob/main/whisper/tokenizer.py Whisper tokenizer source] and in [https://github.com/m-bain/whisperX/blob/main/EXAMPLES.md whisperX/EXAMPLES.md].
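When many audio files need the same treatment, a small wrapper loop keeps the parameter list in one place. The sketch below is a dry run: it only prints the command it would execute for each file, so it is safe to inspect first. The <code>audio/</code> and <code>out/</code> directory names and the placeholder input files are assumptions for illustration, not part of whisperX; remove the <code>echo</code> to actually transcribe.

```shell
# Dry-run sketch: print one whisperx command per input file.
# Assumed layout (hypothetical): ./audio/*.mp3 as inputs, ./out as output dir.
mkdir -p audio out
touch audio/sample1.mp3 audio/sample2.mp3   # placeholder inputs for the dry run

for f in audio/*.mp3; do
  # Drop "echo" to run for real; parameters follow Method 1 above.
  echo whisperx "$f" \
    --model large-v3 --language th --diarize \
    --batch_size 24 --no_align --chunk_size 10 \
    --hf_token "$HF_TOKEN" --output_dir out --output_format all
done
```

Because the loop echoes rather than executes, it also works as a quick sanity check that the glob matches the files you expect before committing to a long transcription run.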