Command Line Arguments
You can run the project with command line arguments by running the `user-start-webui.bat` file: set the values in that file and run it.
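As a rough illustration, the batch file collects the values as variables before launching the app. The variable names below are assumptions for the sake of the example and may differ between versions, so check the file itself for the exact names it defines:

```bat
:: Sketch of the editable section of user-start-webui.bat (variable names assumed).
set SERVER_NAME=0.0.0.0
set SERVER_PORT=7861
set USERNAME=myusername
set PASSWORD=mypassword
set SHARE=true
```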
Alternatively, you can run the project directly from the command line with lowercase arguments, for example:

`python app.py --whisper_type insanely_fast_whisper --share --server_name 0.0.0.0 --theme author/example_theme`

Most arguments are related to Gradio's launch options. See the Gradio documentation for a detailed explanation.
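For instance, a launch that binds to a fixed port and enables authentication might look like the following sketch (the username, password, and port are placeholder values; see the table below for each argument):

```
python app.py --server_name 0.0.0.0 --server_port 7861 --username myusername --password mypassword
```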
| Variable | Description | Example Value |
|---|---|---|
| `--server_name` | Server name to set in Gradio. | `0.0.0.0` |
| `--server_port` | Server port number to use when launching Gradio. | `7861` |
| `--username` | Server username for Gradio. Both `--username` and `--password` must be set to enable authentication. | `myusername` |
| `--password` | Server password for Gradio. Both `--username` and `--password` must be set to enable authentication. | `mypassword` |
| `--share` | Enables the creation of a public link for the project. | `True` |
| `--root_path` | Gradio root path of the URL. If the application is served at `https://example.com/myapp`, the root path should be set to `/myapp`. | `/myapp` |
| `--theme` | UI theme to apply in the browser. View the available themes in the Theme Gallery. | `author/example_theme` |
| `--whisper_type` | Whisper implementation to use. • `whisper`: the original implementation by OpenAI. • `faster-whisper`: optimized for transcription speed & VRAM usage, by SYSTRAN. • `insanely-fast-whisper`: optimized for transcription speed using some transformers options; currently the fastest implementation. This project uses `faster-whisper` by default. Enter the repository name as the value. | `whisper`, `faster-whisper`, or `insanely-fast-whisper` |
| `--faster_whisper_model_dir` | Use this if you want a custom model directory path for faster-whisper. Make sure you enter the path "inside quotes". | `"C:\Whisper-WebUI\my_faster_whisper_model_path"` |
| `--insanely_fast_whisper_model_dir` | Use this if you want a custom model directory path for insanely-fast-whisper. Make sure you enter the path "inside quotes". | `"C:\Whisper-WebUI\my_insanely_fast_whisper_model_path"` |
| `--whisper_model_dir` | Use this if you want a custom model directory path for whisper. Make sure you enter the path "inside quotes". | `"C:\Whisper-WebUI\my_whisper_model_path"` |
| `--diarization_model_dir` | Use this if you want a custom model directory path for speaker diarization models. Make sure you enter the path "inside quotes". To download the model, you must manually visit https://huggingface.co/pyannote/speaker-diarization-3.1 and agree to its requirements. | `"C:\Whisper-WebUI\model_dir"` |
| `--output_dir` | Use this if you want a custom outputs directory. | `"C:\Whisper-WebUI\outputs"` |
| `--colab` | Whether the environment is Google Colab or not. If it is true, an "Input Folder Path (Optional)" tab will appear and you can enter a folder path instead of uploading files via `gradio.Files()`. | `True` |
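These arguments can be combined freely. As one more hedged example, the command below assumes a Windows setup with hypothetical local folders for the model and output directories; quote paths that contain backslashes or spaces, as noted in the table above:

```
python app.py --whisper_type faster-whisper --faster_whisper_model_dir "C:\Whisper-WebUI\my_faster_whisper_model_path" --output_dir "C:\Whisper-WebUI\outputs" --theme author/example_theme
```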