[https://github.com/m-bain/whisperX whisperX] is an enhanced version of OpenAI's Whisper, offering fast automatic speech recognition with word-level timestamps and speaker diarization. It uses the faster-whisper backend and can run the large-v2 model in less than 8 GB of GPU memory. whisperX also includes voice activity detection (VAD) preprocessing, which reduces hallucinations and enables batch processing.

== whisperX Troubleshooting Guide ==
<pre>whisperx input.mp3 --model large-v3 --language zh --diarize --batch_size 24 --no_align --chunk_size 10</pre>
=== ImportError: libcudnn.so.9: cannot open shared object file: No such file or directory ===
When running:
<pre lang="bash">$ whisperx input.mp3 --model large-v2 --diarize --highlight_words True</pre>
You encounter a long error trace ending with:
<pre>ImportError: libcudnn.so.9: cannot open shared object file: No such file or directory</pre>
<ul>
<li><p>Check if cuDNN is installed by listing the contents of the CUDA library directory:</p>
<pre lang="bash">ls /usr/local/cuda/lib64 | grep libcudnn</pre></li>
<li><p>If the <code>libcudnn.so.9</code> file is not present, proceed to install or update cuDNN.</p></li></ul>
</li>
Installation Instructions:
<pre lang="bash"># Download and install CUDA keyring
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update

# Install cuDNN for the CUDA version on your system.
# For CUDA 11:
sudo apt-get -y install cudnn-cuda-11

# For CUDA 12:
sudo apt-get -y install cudnn-cuda-12</pre>
<ol start="3" style="list-style-type: decimal;">
<li>'''Set Environment Variables:'''
<li><p>Ensure that the CUDA and cuDNN libraries are included in your system’s library path.</p></li> | <li><p>Ensure that the CUDA and cuDNN libraries are included in your system’s library path.</p></li> | ||
<li><p>Add the following lines to your <code>~/.bashrc</code> or <code>~/.zshrc</code> file:</p> | <li><p>Add the following lines to your <code>~/.bashrc</code> or <code>~/.zshrc</code> file:</p> | ||
< | <pre lang="bash">export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH | ||
export PATH=/usr/local/cuda/bin:$PATH</ | export PATH=/usr/local/cuda/bin:$PATH</pre></li> | ||
<li><p>Apply the changes by sourcing the file:</p> | <li><p>Apply the changes by sourcing the file:</p> | ||
< | <pre lang="bash">source ~/.bashrc</pre></li> | ||
<li><p>This step ensures that the system can locate the CUDA and cuDNN libraries during execution.</p></li></ul> | <li><p>This step ensures that the system can locate the CUDA and cuDNN libraries during execution.</p></li></ul> | ||
</li> | </li> | ||
<li>'''Verify the Installation:'''
<ul>
<li><p>After installation, verify that the system recognizes the cuDNN version:</p>
<pre lang="bash">python -c "import torch; print(torch.backends.cudnn.version())"</pre></li>
<li><p>This command should display the installed cuDNN version, confirming that PyTorch can access it.</p></li></ul>
</li></ol>
* '''Virtual Environments:''' If you’re using a virtual environment, make sure it has access to the system’s CUDA and cuDNN installations. You might need to install CUDA and cuDNN within the virtual environment or ensure that the environment variables are correctly set.
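Whether the exported variables actually reached the current environment (including inside a virtual environment) can be checked with a short Python sketch; <code>cuda_paths_configured</code> is a hypothetical helper for illustration only.

```python
import os

def cuda_paths_configured(env=None):
    """Return (lib_ok, bin_ok): whether a CUDA directory appears on
    LD_LIBRARY_PATH and PATH in the given environment mapping."""
    env = os.environ if env is None else env
    lib_ok = any("cuda" in p.lower()
                 for p in env.get("LD_LIBRARY_PATH", "").split(":") if p)
    bin_ok = any("cuda" in p.lower()
                 for p in env.get("PATH", "").split(":") if p)
    return lib_ok, bin_ok

if __name__ == "__main__":
    print(cuda_paths_configured())
```

A result of <code>(False, …)</code> usually means the <code>~/.bashrc</code> changes above were not sourced in the shell that launched Python.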
=== WhisperX Audio Transcription Commands for Multiple Languages ===
Standard command (English transcription output):
<pre>
whisperx /path/to/audio/file.wav \
--model large-v3 \
--language en \
--diarize \
--batch_size 24 \
--no_align \
--chunk_size 10 \
--hf_token your_huggingface_token \
--output_dir /path/to/output/directory \
--output_format all
</pre>
To change the output language to Thai, modify the command in one of two ways:

Method 1: Set the language to Thai with {{kbd | key=<nowiki>--language th</nowiki>}}
<pre>
whisperx /path/to/audio/file.wav \
--model large-v3 \
--language th \
--diarize \
--batch_size 24 \
--no_align \
--chunk_size 10 \
--hf_token your_huggingface_token \
--output_dir /path/to/output/directory \
--output_format all
</pre>
Method 2: Auto-detect the language (remove the {{kbd | key=<nowiki>--language</nowiki>}} parameter)
<pre>
whisperx /path/to/audio/file.wav \
--model large-v3 \
--diarize \
--batch_size 24 \
--no_align \
--chunk_size 10 \
--hf_token your_huggingface_token \
--output_dir /path/to/output/directory \
--output_format all
</pre>
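When driving whisperX from a script, the two methods differ only in whether the <code>--language</code> flag is present. The sketch below assembles the argument list for <code>subprocess.run</code>; the function name and defaults are illustrative choices for this page, not part of whisperX.

```python
# Assemble the whisperx CLI call shown above as an argument list for
# subprocess.run. Omitting `language` reproduces Method 2 (auto-detect).
def build_whisperx_cmd(audio_path, language=None, model="large-v3",
                       batch_size=24, chunk_size=10, hf_token=None,
                       output_dir=".", output_format="all"):
    cmd = ["whisperx", audio_path, "--model", model]
    if language is not None:      # Method 1: force a specific language
        cmd += ["--language", language]
    cmd += ["--diarize",
            "--batch_size", str(batch_size),
            "--no_align",
            "--chunk_size", str(chunk_size)]
    if hf_token:
        cmd += ["--hf_token", hf_token]
    cmd += ["--output_dir", output_dir, "--output_format", output_format]
    return cmd
```

For example, <code>subprocess.run(build_whisperx_cmd("file.wav", language="th"), check=True)</code> runs the Thai transcription from Method 1.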
Common language code reference:
* th = Thai
* zh = Chinese
* en = English
* ja = Japanese
* ko = Korean
* es = Spanish
* fr = French
The full list of supported language codes can be found in the [https://github.com/openai/whisper/blob/main/whisper/tokenizer.py Whisper tokenizer source] and in [https://github.com/m-bain/whisperX/blob/main/EXAMPLES.md whisperX/EXAMPLES.md].
== whisperX Transcript File Format Guide ==
whisperX writes transcripts in several file formats:
* txt - all speech content combined into a single plain-text file
* srt, vtt - subtitle formats, organized by time, speaker, and speech content
* tsv - organized by time and speech content
* json - organized by time, speaker, and speech content; suitable for programmatic processing
If you are using whisperX for the first time, the srt format is the easiest one to open and read.
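The json output can be reshaped programmatically. The sketch below renders diarized segments as SRT cues, assuming the typical whisperX JSON layout: a top-level <code>segments</code> list whose entries carry <code>start</code>, <code>end</code>, <code>text</code>, and (when <code>--diarize</code> is used) <code>speaker</code> fields; verify against your own output files.

```python
def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(transcript):
    """Render whisperX-style JSON segments as numbered SRT cue blocks."""
    blocks = []
    for i, seg in enumerate(transcript["segments"], start=1):
        text = seg["text"].strip()
        if seg.get("speaker"):            # present when --diarize is used
            text = f"[{seg['speaker']}]: {text}"
        blocks.append(f"{i}\n{srt_timestamp(seg['start'])} --> "
                      f"{srt_timestamp(seg['end'])}\n{text}")
    return "\n\n".join(blocks)
```

Load a transcript with <code>json.load</code> and pass the resulting dict to <code>segments_to_srt</code> to get SRT text.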
== Further reading ==
* [https://medium.com/@planetoid/how-to-add-punctuation-to-whisper-transcripts-using-ai-619362c9160c How to Add Punctuation to Whisper Transcripts Using AI | Medium]
[[Category: Generative AI]] [[Category: Software]] [[Category: Revised with LLMs]]